Objective-C : No Matter What I Do, CIDetector Is Always nil
Question
Trying to get a simple proof of concept going with Apple's face-detection API. I've looked at a couple of other examples, including Apple's SquareCam sample and this one: https://github.com/jeroentrappers/FaceDetectionPOC
Based on these, it seems like I am following the correct pattern to get the API going, but I am stuck. No matter what I do, the CIDetector for my face detector is always nil!
I would seriously appreciate any help: clues, hints, suggestions!
-(void)initCamera {
    session = [[AVCaptureSession alloc] init];

    AVCaptureDevice *device;
    /*
    if ([self frontCameraAvailable]) {
        device = [self frontCamera];
    } else {
        device = [self backCamera];
    }
    */
    device = [self frontCamera];
    isUsingFrontFacingCamera = YES;

    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (input && [session canAddInput:input]) {
        [session addInput:input];
    } else {
        NSLog(@"Error %@", error);
        // make this DLog...
    }

    videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
    NSDictionary *rgbOutputSettings = [NSDictionary dictionaryWithObject:
        [NSNumber numberWithInt:kCMPixelFormat_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey];
    [videoDataOutput setVideoSettings:rgbOutputSettings];
    [videoDataOutput setAlwaysDiscardsLateVideoFrames:YES];

    videoDataOutputQueue = dispatch_queue_create("VideoDataOutputQueue", DISPATCH_QUEUE_SERIAL);
    [videoDataOutput setSampleBufferDelegate:self queue:videoDataOutputQueue];
    [[videoDataOutput connectionWithMediaType:AVMediaTypeVideo] setEnabled:YES];
    if ([session canAddOutput:videoDataOutput]) {
        [session addOutput:videoDataOutput];
    }

    [self embedPreviewInView:self.theImageView];
    [session startRunning];
}
-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, sampleBuffer, kCMAttachmentMode_ShouldPropagate);
    CIImage *ciImage = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:(__bridge NSDictionary *)attachments];
    if (attachments) {
        CFRelease(attachments);
    }

    UIDeviceOrientation curDeviceOrientation = [[UIDevice currentDevice] orientation];
    NSDictionary *imageOptions = @{ CIDetectorImageOrientation : [self exifOrientation:curDeviceOrientation] };
    NSDictionary *detectorOptions = @{ CIDetectorAccuracy : CIDetectorAccuracyLow };
    CIDetector *faceDetector = [CIDetector detectorOfType:CIFeatureTypeFace context:nil options:detectorOptions];
    NSArray *faceFeatures = [faceDetector featuresInImage:ciImage options:imageOptions];
    if ([faceFeatures count] > 0) {
        NSLog(@"GOT a face!");
        NSLog(@"%@", faceFeatures);
    }

    dispatch_async(dispatch_get_main_queue(), ^(void) {
        //NSLog(@"updating main thread");
    });
}
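A side note on the code above, separate from the nil problem: creating a CIDetector is relatively expensive, and this delegate method runs for every captured frame, so it is usually better to build the detector once and reuse it. A minimal sketch, assuming a hypothetical `_faceDetector` instance variable backing a lazy accessor:

    // Lazily create the detector once and reuse it for every frame,
    // instead of constructing a new CIDetector inside captureOutput:.
    - (CIDetector *)faceDetector {
        if (!_faceDetector) {
            _faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace
                                               context:nil
                                               options:@{CIDetectorAccuracy: CIDetectorAccuracyLow}];
        }
        return _faceDetector;
    }

The per-frame code would then call `[self.faceDetector featuresInImage:ciImage options:imageOptions]` rather than rebuilding the detector each time.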
Answer
I'm assuming you're using this article, because I was too and had the same problem. There's actually a bug in his code. The CIDetector instantiation should look like:
CIDetector *smileDetector = [CIDetector detectorOfType:CIDetectorTypeFace
                                               context:context
                                               options:@{CIDetectorTracking: @YES,
                                                         CIDetectorAccuracy: CIDetectorAccuracyLow}];
Note that the detector type is CIDetectorTypeFace, rather than CIDetectorSmile. CIDetectorSmile is a feature option, not a detector type (passing an invalid type to detectorOfType:context:options: is exactly what gives you a nil detector), so to extract just the smiles (and not all the faces), use this code:
NSArray *features = [smileDetector featuresInImage:image options:@{CIDetectorSmile: @YES}];
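Putting the two corrections together, a rough sketch of the whole flow might look like the following. This assumes `image` is a valid CIImage you already have; the per-face smile check uses CIFaceFeature's hasSmile property, which is only populated when CIDetectorSmile is requested:

    // Create a face detector (CIDetectorTypeFace, not CIDetectorSmile).
    CIDetector *smileDetector = [CIDetector detectorOfType:CIDetectorTypeFace
                                                   context:nil
                                                   options:@{CIDetectorAccuracy: CIDetectorAccuracyLow}];

    // Ask for smile classification when extracting features.
    NSArray *features = [smileDetector featuresInImage:image
                                               options:@{CIDetectorSmile: @YES}];

    // Each result is a CIFaceFeature; hasSmile tells you if a smile was found.
    for (CIFaceFeature *face in features) {
        if (face.hasSmile) {
            NSLog(@"Smiling face at %@", NSStringFromCGRect(face.bounds));
        }
    }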