iOS face detector orientation and setting of CIImage orientation


Problem description

EDIT: found this code that helps with front-camera images: http://blog.logichigh.com/2008/06/05/uiimage-fix/


Hope others have had a similar issue and can help me out. Haven't found a solution yet. (It may seem a bit long but just a bunch of helper code)


I'm using the iOS face detector on images acquired from the camera (front and back) as well as images from the gallery (I'm using the UIImagePicker for both image capture by camera and image selection from the gallery, not using AVFoundation for taking pictures like in the SquareCam demo).


I am getting really messed-up coordinates for the detection (if any), so I wrote a short debug method to get the bounds of the faces, as well as a utility that draws a square over them, and I wanted to check which orientation the detector was working for:

#define RECTBOX(R)   [NSValue valueWithCGRect:R]
- (NSArray *)detectFaces:(UIImage *)inputimage
{
    NSMutableArray *returnArray = [NSMutableArray array];
    _detector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyLow forKey:CIDetectorAccuracy]];
    NSNumber *orientation = [NSNumber numberWithInt:[inputimage imageOrientation]]; // i also saw code where they add +1 to the orientation
    NSDictionary *imageOptions = [NSDictionary dictionaryWithObject:orientation forKey:CIDetectorImageOrientation];

    CIImage *ciimage = [CIImage imageWithCGImage:inputimage.CGImage options:imageOptions];

    // try like this first
    //    NSArray *features = [self.detector featuresInImage:ciimage options:imageOptions];
    // if not working go on to this (trying all orientations)
    NSArray *features = nil;

    int exif;
    // ios face detector. trying all of the orientations
    for (exif = 1; exif <= 8; exif++)
    {
        NSNumber *orientation = [NSNumber numberWithInt:exif];
        NSDictionary *imageOptions = [NSDictionary dictionaryWithObject:orientation forKey:CIDetectorImageOrientation];

        NSTimeInterval start = [NSDate timeIntervalSinceReferenceDate];

        features = [self.detector featuresInImage:ciimage options:imageOptions];

        NSTimeInterval duration = [NSDate timeIntervalSinceReferenceDate] - start;
        NSLog(@"faceDetection: facedetection total runtime is %f s", duration);

        if (features.count > 0)
        {
            NSString *str = [NSString stringWithFormat:@"found faces using exif %d", exif];
            [faceDetection log:str];
            break;
        }
    }
    if (features.count > 0)
    {
        [faceDetection log:@"-I- Found faces with ios face detector"];
        for (CIFaceFeature *feature in features)
        {
            CGRect rect = feature.bounds;
            // flip y: CoreImage reports a bottom-left origin, UIKit uses top-left
            CGRect r = CGRectMake(rect.origin.x, inputimage.size.height - rect.origin.y - rect.size.height, rect.size.width, rect.size.height);
            [returnArray addObject:RECTBOX(r)];
        }
        return returnArray;
    } else {
        // no faces from iOS face detector. try OpenCV detector
        return nil;
    }
}

![face bounds debug output](http://i.stack.imgur.com/D7bkZ.jpg)


After trying tons of different pictures, I noticed that the face detector orientation is not consistent with the camera image property. I took a bunch of photos from the front-facing camera where the UIImage orientation was 3 (querying imageOrientation), but the face detector wasn't finding faces for that setting. When running through all of the EXIF possibilities, the face detector was finally picking up faces, but for a different orientation altogether.
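The likely root of this mismatch is worth spelling out: `UIImageOrientation` is a UIKit enum numbered 0-7 starting at `Up = 0`, while `CIDetectorImageOrientation` expects an EXIF orientation tag numbered 1-8, and the two numberings do not line up (adding 1 is also wrong for most values). A minimal C sketch of the mapping, using the same table as the answer below:

```c
#include <assert.h>

/* UIImageOrientation values as UIKit defines them (Up = 0 ... RightMirrored = 7). */
enum {
    OrientUp = 0, OrientDown, OrientLeft, OrientRight,
    OrientUpMirrored, OrientDownMirrored, OrientLeftMirrored, OrientRightMirrored
};

/* Map a UIImageOrientation value to the EXIF orientation tag (1-8)
 * expected by CIDetectorImageOrientation. Note the numbering is not
 * a simple +1 shift: e.g. UIImageOrientationRight (3) maps to EXIF 6. */
static int exifFromUIImageOrientation(int uiOrientation)
{
    switch (uiOrientation) {
        case OrientUp:            return 1;
        case OrientDown:          return 3;
        case OrientLeft:          return 8;
        case OrientRight:         return 6;
        case OrientUpMirrored:    return 2;
        case OrientDownMirrored:  return 4;
        case OrientLeftMirrored:  return 5;
        case OrientRightMirrored: return 7;
        default:                  return 1; /* fall back to Up */
    }
}
```

This fits the observation above: a photo reporting `imageOrientation` 3 (`UIImageOrientationRight`) needs EXIF value 6, so passing 3 (or 4) to the detector finds nothing, while the brute-force loop eventually succeeds at a different value.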



How can I solve this? Is there a mistake in my code?


Another problem I was having (closely connected with the face detector): when the face detector picks up faces, but for the "wrong" orientation (this happens mostly with the front-facing camera), the UIImage initially used displays correctly in a UIImageView, but when I draw a square overlay (I am using OpenCV in my app, so I decided to convert the UIImage to a cv::Mat to draw the overlay with OpenCV) the whole image is rotated 90 degrees (only the cv::Mat image, not the UIImage I initially displayed).


The reasoning I can think of here is that the face detector is messing with some buffer (context?) that the UIImage-to-OpenCV-Mat conversion is using. How can I separate these buffers?


The code for converting a UIImage to a cv::Mat is (from the "famous" UIImage category someone made):

-(cv::Mat)CVMat
{

    CGColorSpaceRef colorSpace = CGImageGetColorSpace(self.CGImage);
    CGFloat cols = self.size.width;
    CGFloat rows = self.size.height;

    cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels

    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data, // Pointer to backing data
                                                    cols, // Width of bitmap
                                                    rows, // Height of bitmap
                                                    8, // Bits per component
                                                    cvMat.step[0], // Bytes per row
                                                    colorSpace, // Colorspace
                                                    kCGImageAlphaNoneSkipLast |
                                                    kCGBitmapByteOrderDefault); // Bitmap info flags

    // note: this draws self.CGImage directly, so the UIImage's
    // imageOrientation is NOT applied the way UIImageView applies it
    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), self.CGImage);
    CGContextRelease(contextRef);

    return cvMat;
}

- (id)initWithCVMat:(const cv::Mat&)cvMat
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];

    CGColorSpaceRef colorSpace;

    if (cvMat.elemSize() == 1)
    {
        colorSpace = CGColorSpaceCreateDeviceGray();
    }
    else
    {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }

    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);

    CGImageRef imageRef = CGImageCreate(cvMat.cols,                                     // Width
                                            cvMat.rows,                                     // Height
                                            8,                                              // Bits per component
                                            8 * cvMat.elemSize(),                           // Bits per pixel
                                            cvMat.step[0],                                  // Bytes per row
                                            colorSpace,                                     // Colorspace
                                            kCGImageAlphaNone | kCGBitmapByteOrderDefault,  // Bitmap info flags
                                            provider,                                       // CGDataProviderRef
                                            NULL,                                           // Decode
                                            false,                                          // Should interpolate
                                            kCGRenderingIntentDefault);                     // Intent   

    self = [self initWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);

    return self;
}

- (cv::Mat)CVRgbMat
{
    cv::Mat tmpimage = self.CVMat;
    cv::Mat image;
    cvtColor(tmpimage, image, cv::COLOR_BGRA2BGR);
    return image;
}

- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingImage:(UIImage *)img editingInfo:(NSDictionary *)editInfo {
    self.prevImage = img;
//  self.previewView.image = img;
    NSArray *arr = [[faceDetection sharedFaceDetector] detectFaces:img];
    for (id r in arr)
    {
        CGRect rect = RECTUNBOX(r);
        //self.previewView.image = img;
        self.previewView.image = [utils drawSquareOnImage:img square:rect];
    }
    [self.imgPicker dismissModalViewControllerAnimated:YES];
    return;
}


Recommended answer


I don't think it's a good idea to rotate the whole bunch of image pixels to match the CIFaceFeature; you can imagine that redrawing at the rotated orientation is very heavy. I had the same problem, and I solved it by converting the coordinate system of the CIFaceFeature with respect to the UIImageOrientation. I extended the CIFaceFeature class with some conversion methods to get the correct point locations and bounds with respect to the UIImage and its UIImageView (or the CALayer of a UIView). The complete implementation is posted here: https://gist.github.com/laoyang/5747004. You can use it directly.


Here is the most basic conversion for a point from CIFaceFeature; the returned CGPoint is converted based on the image's orientation:

- (CGPoint) pointForImage:(UIImage*) image fromPoint:(CGPoint) originalPoint {

    CGFloat imageWidth = image.size.width;
    CGFloat imageHeight = image.size.height;

    CGPoint convertedPoint;

    switch (image.imageOrientation) {
        case UIImageOrientationUp:
            convertedPoint.x = originalPoint.x;
            convertedPoint.y = imageHeight - originalPoint.y;
            break;
        case UIImageOrientationDown:
            convertedPoint.x = imageWidth - originalPoint.x;
            convertedPoint.y = originalPoint.y;
            break;
        case UIImageOrientationLeft:
            convertedPoint.x = imageWidth - originalPoint.y;
            convertedPoint.y = imageHeight - originalPoint.x;
            break;
        case UIImageOrientationRight:
            convertedPoint.x = originalPoint.y;
            convertedPoint.y = originalPoint.x;
            break;
        case UIImageOrientationUpMirrored:
            convertedPoint.x = imageWidth - originalPoint.x;
            convertedPoint.y = imageHeight - originalPoint.y;
            break;
        case UIImageOrientationDownMirrored:
            convertedPoint.x = originalPoint.x;
            convertedPoint.y = originalPoint.y;
            break;
        case UIImageOrientationLeftMirrored:
            convertedPoint.x = imageWidth - originalPoint.y;
            convertedPoint.y = originalPoint.x;
            break;
        case UIImageOrientationRightMirrored:
            convertedPoint.x = originalPoint.y;
            convertedPoint.y = imageHeight - originalPoint.x;
            break;
        default:
            break;
    }
    return convertedPoint;
}


And here are the category methods based on the above conversion:

// Get converted features with respect to the imageOrientation property
- (CGPoint) leftEyePositionForImage:(UIImage *)image;
- (CGPoint) rightEyePositionForImage:(UIImage *)image;
- (CGPoint) mouthPositionForImage:(UIImage *)image;
- (CGRect) boundsForImage:(UIImage *)image;

// Get normalized features (0-1) with respect to the imageOrientation property
- (CGPoint) normalizedLeftEyePositionForImage:(UIImage *)image;
- (CGPoint) normalizedRightEyePositionForImage:(UIImage *)image;
- (CGPoint) normalizedMouthPositionForImage:(UIImage *)image;
- (CGRect) normalizedBoundsForImage:(UIImage *)image;

// Get feature location inside of a given UIView size with respect to the imageOrientation property
- (CGPoint) leftEyePositionForImage:(UIImage *)image inView:(CGSize)viewSize;
- (CGPoint) rightEyePositionForImage:(UIImage *)image inView:(CGSize)viewSize;
- (CGPoint) mouthPositionForImage:(UIImage *)image inView:(CGSize)viewSize;
- (CGRect) boundsForImage:(UIImage *)image inView:(CGSize)viewSize;
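For the bounds rect the same idea applies, with the extra step of flipping the rect's origin: CoreImage reports coordinates with a bottom-left origin while UIKit draws from the top-left. A minimal C sketch for the simplest (`UIImageOrientationUp`) case; the struct is a stand-in for `CGRect`, not the gist's actual API:

```c
#include <assert.h>

/* Stand-in for CGRect, just enough to show the math. */
typedef struct { double x, y, width, height; } Rect;

/* Convert a face-bounds rect from CoreImage's bottom-left-origin
 * coordinates to UIKit's top-left-origin coordinates for an image
 * with UIImageOrientationUp. The rotated/mirrored orientations
 * additionally swap or mirror x and y, exactly as in the
 * pointForImage:fromPoint: switch above. */
static Rect boundsForUpOrientation(Rect ciBounds, double imageHeight)
{
    Rect r = ciBounds;
    r.y = imageHeight - ciBounds.y - ciBounds.height; /* flip the y axis */
    return r;
}
```

This is the same `height - origin.y - height-of-rect` flip the question's debug code applies when boxing the detected faces.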


Another thing to notice: you need to specify the correct EXIF orientation when extracting the face features from a UIImage. Quite confusing... here is what I did:

int exifOrientation;
switch (self.image.imageOrientation) {
    case UIImageOrientationUp:
        exifOrientation = 1;
        break;
    case UIImageOrientationDown:
        exifOrientation = 3;
        break;
    case UIImageOrientationLeft:
        exifOrientation = 8;
        break;
    case UIImageOrientationRight:
        exifOrientation = 6;
        break;
    case UIImageOrientationUpMirrored:
        exifOrientation = 2;
        break;
    case UIImageOrientationDownMirrored:
        exifOrientation = 4;
        break;
    case UIImageOrientationLeftMirrored:
        exifOrientation = 5;
        break;
    case UIImageOrientationRightMirrored:
        exifOrientation = 7;
        break;
    default:
        exifOrientation = 1; // treat unknown orientations as Up
        break;
}

NSDictionary *detectorOptions = @{ CIDetectorAccuracy : CIDetectorAccuracyHigh };
CIDetector *faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:detectorOptions];

NSArray *features = [faceDetector featuresInImage:[CIImage imageWithCGImage:self.image.CGImage]
                                          options:@{CIDetectorImageOrientation:[NSNumber numberWithInt:exifOrientation]}];
