CIDetector gives wrong position on facial features


Question

Now I know that the coordinate system is messed up. I have tried flipping the view and the imageView, with no luck. I then tried flipping the coordinates on the features and still get the same problem. I know it detects the faces, eyes and mouth, but when I try to place the overlay boxes from the sample code, they are out of position (to be exact, they end up off-screen to the right). I'm stumped as to why this is happening.

I'll post some code, since I know some of you like specifics:

-(void)faceDetector
{
    // Load the picture for face detection
//    UIImageView* image = [[UIImageView alloc] initWithImage:mainImage];
    [self.imageView setImage:mainImage];
    [self.imageView setUserInteractionEnabled:YES];

    // Draw the face detection image
//    [self.view addSubview:self.imageView];

    // Execute the method used to markFaces in background
//    [self performSelectorInBackground:@selector(markFaces:) withObject:self.imageView];

    // flip image on y-axis to match coordinate system used by core image
//    [self.imageView setTransform:CGAffineTransformMakeScale(1, -1)];

    // flip the entire window to make everything right side up
//    [self.view setTransform:CGAffineTransformMakeScale(1, -1)];

//    [toolbar setTransform:CGAffineTransformMakeScale(1, -1)];
    [toolbar setFrame:CGRectMake(0, 0, 320, 44)];

    // Execute the method used to markFaces in background
    [self performSelectorInBackground:@selector(markFaces:) withObject:_imageView];
//    [self markFaces:self.imageView];
}

-(void)markFaces:(UIImageView *)facePicture
{
    // draw a CI image with the previously loaded face detection picture
    CIImage* image = [CIImage imageWithCGImage:facePicture.image.CGImage];

    // create a face detector - since speed is not an issue we'll use a high accuracy
    // detector
    CIDetector* detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh forKey:CIDetectorAccuracy]];

//    CGAffineTransform transform = CGAffineTransformMakeScale(1, -1);
    CGAffineTransform transform = CGAffineTransformMakeScale(self.view.frame.size.width/mainImage.size.width, -self.view.frame.size.height/mainImage.size.height);
    transform = CGAffineTransformTranslate(transform, 0, -self.imageView.bounds.size.height);

    // create an array containing all the detected faces from the detector
    NSDictionary* imageOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:6] forKey:CIDetectorImageOrientation];
    NSArray* features = [detector featuresInImage:image options:imageOptions];
//    NSArray* features = [detector featuresInImage:image];

    NSLog(@"Marking Faces: Count: %lu", (unsigned long)[features count]);

    // we'll iterate through every detected face.  CIFaceFeature provides us
    // with the width for the entire face, and the coordinates of each eye
    // and the mouth if detected.  Also provided are BOOL's for the eye's and
    // mouth so we can check if they already exist.
    for(CIFaceFeature* faceFeature in features)
    {


        // create a UIView using the bounds of the face
//        UIView* faceView = [[UIView alloc] initWithFrame:faceFeature.bounds];
        CGRect faceRect = CGRectApplyAffineTransform(faceFeature.bounds, transform);

        // get the width of the face
//        CGFloat faceWidth = faceFeature.bounds.size.width;
        CGFloat faceWidth = faceRect.size.width;

        // create a UIView using the bounds of the face
        UIView *faceView = [[UIView alloc] initWithFrame:faceRect];

        // add a border around the newly created UIView
        faceView.layer.borderWidth = 1;
        faceView.layer.borderColor = [[UIColor redColor] CGColor];

        // add the new view to create a box around the face
        [self.imageView addSubview:faceView];
        NSLog(@"Face -> X: %f, Y: %f, W: %f, H: %f",faceRect.origin.x, faceRect.origin.y, faceRect.size.width, faceRect.size.height);

        if(faceFeature.hasLeftEyePosition)
        {

            // create a UIView with a size based on the width of the face
            CGPoint leftEye = CGPointApplyAffineTransform(faceFeature.leftEyePosition, transform);
            UIView* leftEyeView = [[UIView alloc] initWithFrame:CGRectMake(leftEye.x-faceWidth*0.15, leftEye.y-faceWidth*0.15, faceWidth*0.3, faceWidth*0.3)];
            // change the background color of the eye view
            [leftEyeView setBackgroundColor:[[UIColor blueColor] colorWithAlphaComponent:0.3]];
            // set the position of the leftEyeView based on the face
            [leftEyeView setCenter:leftEye];
            // round the corners
            leftEyeView.layer.cornerRadius = faceWidth*0.15;
            // add the view to the window
            [self.imageView addSubview:leftEyeView];
            NSLog(@"Has Left Eye -> X: %f, Y: %f",leftEye.x, leftEye.y);
        }

        if(faceFeature.hasRightEyePosition)
        {

            // create a UIView with a size based on the width of the face
            CGPoint rightEye = CGPointApplyAffineTransform(faceFeature.rightEyePosition, transform);
            UIView* rightEyeView = [[UIView alloc] initWithFrame:CGRectMake(rightEye.x-faceWidth*0.15, rightEye.y-faceWidth*0.15, faceWidth*0.3, faceWidth*0.3)];
            // change the background color of the eye view
            [rightEyeView setBackgroundColor:[[UIColor yellowColor] colorWithAlphaComponent:0.3]];
            // set the position of the rightEyeView based on the face
            [rightEyeView setCenter:rightEye];
            // round the corners
            rightEyeView.layer.cornerRadius = faceWidth*0.15;
            // add the new view to the window
            [self.imageView addSubview:rightEyeView];
            NSLog(@"Has Right Eye -> X: %f, Y: %f", rightEye.x, rightEye.y);
        }

//        if(faceFeature.hasMouthPosition)
//        {
//            // create a UIView with a size based on the width of the face
//            UIView* mouth = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.mouthPosition.x-faceWidth*0.2, faceFeature.mouthPosition.y-faceWidth*0.2, faceWidth*0.4, faceWidth*0.4)];
//            // change the background color for the mouth to green
//            [mouth setBackgroundColor:[[UIColor greenColor] colorWithAlphaComponent:0.3]];
//            // set the position of the mouthView based on the face
//            [mouth setCenter:faceFeature.mouthPosition];
//            // round the corners
//            mouth.layer.cornerRadius = faceWidth*0.2;
//            // add the new view to the window
//            [self.imageView addSubview:mouth];
//        }
    }
}

I know the code segment is a little long, but that's the main gist of it. The only other relevant detail is that I have a UIImagePickerController that lets the user pick an existing image or take a new one. The image is then set into the screen's UIImageView to be displayed along with the various boxes and circles, but no luck getting them to line up :/

Any help would be appreciated. Thanks~

Update:

I've added a photo of what it does now so you can get an idea. I've applied the new scaling, which works a little better, but it's nowhere near what I want it to do.

Answer

Just use the code from Apple's SquareCam sample app. It aligns the square correctly in any orientation for both the front and rear cameras. Interpolate along the faceRect for the correct eye and mouth positions. Note: you do have to swap the x position with the y position from the face feature. I'm not sure exactly why the swap is needed, but it gives you the correct positions.
