Confused on what object actually contains the captured image when using AVFoundation


Question

I have a photo taking app that is using AVFoundation. So far everything works perfectly.

However, the one thing that is really confusing me is, what object is the captured image actually contained in?

I have been NSLogging all of the objects and some of their properties and I still can't figure out where the captured image is contained.

Here is my code for setting up the capture session:

self.session = [[AVCaptureSession alloc] init];
[self.session setSessionPreset:AVCaptureSessionPresetPhoto];

self.inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

NSError *error;
self.deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:self.inputDevice error:&error];

if ([self.session canAddInput:self.deviceInput])
    [self.session addInput:self.deviceInput];

self.previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.session];

self.rootLayer = [[self view] layer];
[self.rootLayer setMasksToBounds:YES];

[self.previewLayer setFrame:CGRectMake(0, 0, self.rootLayer.bounds.size.width, self.rootLayer.bounds.size.height)];
[self.previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
[self.rootLayer insertSublayer:self.previewLayer atIndex:0];

self.stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
[self.session addOutput:self.stillImageOutput];

[self.session startRunning];

And then here is my code for capturing a still image when the user presses the capture button:

- (IBAction)stillImageCapture {
    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in self.stillImageOutput.connections) {
        for (AVCaptureInputPort *port in [connection inputPorts]) {
            if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
                videoConnection = connection;
                break;
            }
        }
        if (videoConnection) {
            break;
        }
    }

    // Set the orientation after the connection has been found;
    // setting it on a still-nil connection is a silent no-op.
    videoConnection.videoOrientation = AVCaptureVideoOrientationPortrait;

    [self.stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection
                                                       completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
        [self.session stopRunning];
    }];
}

When the user presses the capture button, and the above code executes, the captured image is successfully displayed on the iPhone screen, but I can't figure out which object is actually holding the captured image.

Thanks for the help.

Answer

The CMSampleBuffer is what actually contains the image.

In your captureStillImageAsynchronouslyFromConnection completion handler, you'll want something like:

NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
UIImage* capturedImage = [[UIImage alloc] initWithData:imageData];
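For robustness it is worth guarding against a failed capture before converting. A minimal sketch of a fuller completion handler (the guard and the variable names are illustrative, not part of the original answer):

```objc
// Sketch only: a completion handler that checks for failure before
// pulling the image out of the CMSampleBuffer.
^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
    if (error != nil || imageSampleBuffer == NULL) {
        NSLog(@"Still image capture failed: %@", error);
        return;
    }
    NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
    UIImage *capturedImage = [[UIImage alloc] initWithData:imageData];
    // capturedImage now holds the photo; hand it to your UI or save it here.
}
```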

My working implementation:

- (void)captureStillImage
{
    @try {
        AVCaptureConnection *videoConnection = nil;
        for (AVCaptureConnection *connection in _stillImageOutput.connections){
            for (AVCaptureInputPort *port in [connection inputPorts]){

                if ([[port mediaType] isEqual:AVMediaTypeVideo]){

                    videoConnection = connection;
                    break;
                }
            }
            if (videoConnection) {
                break;
            }
        }
        NSLog(@"About to request a capture from: %@", [self stillImageOutput]);
        [[self stillImageOutput] captureStillImageAsynchronouslyFromConnection:videoConnection
                                                             completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error) {

                                                                 // This is here for when we need to implement Exif stuff. 
                                                                 //CFDictionaryRef exifAttachments = CMGetAttachment(imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);

                                                                 NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];

                                                                 // Create a UIImage from the sample buffer data
                                                                 _capturedImage = [[UIImage alloc] initWithData:imageData];


                                                                 BOOL autoSave = YES;
                                                                 if (autoSave)
                                                                 {
                                                                     UIImageWriteToSavedPhotosAlbum(_capturedImage, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
                                                                 }

                                                             }];
    }
    @catch (NSException *exception) {
        NSLog(@"ERROR: Unable to capture still image from AVFoundation camera: %@", exception);
    }
}
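The UIImageWriteToSavedPhotosAlbum call above names a selector that must exist on self, or the app will crash when the save finishes. A minimal sketch of that callback (the signature is fixed by UIKit; the logging body is an assumption):

```objc
// Called by UIKit when UIImageWriteToSavedPhotosAlbum finishes.
- (void)image:(UIImage *)image didFinishSavingWithError:(NSError *)error contextInfo:(void *)contextInfo
{
    if (error) {
        NSLog(@"Failed to save captured image: %@", error);
    } else {
        NSLog(@"Captured image saved to the photo album.");
    }
}
```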

