Can't make OpenCV detect people on iOS


Problem description

I took a sample from the OpenCV sources and tried to put it to use on iOS. I did the following:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {

  // get cv::Mat from CMSampleBufferRef

  UIImage * img = [self imageFromSampleBuffer: sampleBuffer];  
  cv::Mat cvImg = [img CVGrayscaleMat];

  cv::HOGDescriptor hog;
  hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());
  cv::vector<cv::Rect> found;     

  hog.detectMultiScale(cvImg, found, 0.2, cv::Size(8,8), cv::Size(16,16), 1.05, 2);


  for( int i = 0; i < (int)found.size(); i++ )
  {

    cv::Rect r = found[i];

    dispatch_async(dispatch_get_main_queue(), ^{
      self.label.text = [NSString stringWithFormat:@"Found at %d, %d, %d, %d", r.x, r.y, r.width, r.height];
    });

    NSLog(@"Found at %d, %d, %d, %d", r.x, r.y, r.width, r.height);         

  }
}

where CVGrayscaleMat was

-(cv::Mat)CVGrayscaleMat
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGFloat cols = self.size.width;
    CGFloat rows = self.size.height;

    cv::Mat cvMat = cv::Mat(rows, cols, CV_8UC1); // 8 bits per component, 1 channel

    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,                 // Pointer to backing data
                                                    cols,                      // Width of bitmap
                                                    rows,                     // Height of bitmap
                                                    8,                          // Bits per component
                                                    cvMat.step[0],              // Bytes per row
                                                    colorSpace,                 // Colorspace
                                                    kCGImageAlphaNone |
                                                    kCGBitmapByteOrderDefault); // Bitmap info flags

    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), self.CGImage);
    CGContextRelease(contextRef);
    CGColorSpaceRelease(colorSpace);

    return cvMat;
}

and imageFromSampleBuffer was the sample from Apple's docs. The thing is, the app cannot detect people; I tried different sizes and poses, and nothing works for me. What am I missing?

Accepted answer

I've managed to make it work. It turns out that a CV_8UC1 matrix is not the right input, although OpenCV doesn't report that anything is wrong when I pass it to the detectMultiScale method. When I convert CV_8UC4 to CV_8UC3 with

-(cv::Mat) CVMat3Channels
{
  cv::Mat rgbaMat = [self CVMat];

  cv::Mat rgbMat(self.size.height, self.size.width, CV_8UC3); // 8 bits per component, 3 channels

  cv::cvtColor(rgbaMat, rgbMat, CV_RGBA2RGB, 3); // drop the alpha channel

  return rgbMat;
}

the detection starts working.
