iPhone SDK 4 AVFoundation - How to use captureStillImageAsynchronouslyFromConnection correctly?


Question


I am trying to use the new AVFoundation framework for taking still pictures with the iPhone.

This method is called when a button is pressed. I can hear the shutter sound, but I can't see the log output. If I call this method several times, the camera preview freezes.

Is there any tutorial out there on how to use captureStillImageAsynchronouslyFromConnection?

[[self stillImageOutput] captureStillImageAsynchronouslyFromConnection:[[[self stillImageOutput] connections] objectAtIndex:0]
                                                     completionHandler:^(CMSampleBufferRef imageDataSampleBuffer,
                                                                         NSError *error) {
                                                         NSLog(@"inside");
                                                     }];
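
(For reference, and not part of the original post: a common pattern is to look up the connection that actually carries video instead of assuming it sits at index 0, and to unpack the returned sample buffer with +jpegStillImageNSDataRepresentation:. A minimal sketch, assuming the stillImageOutput property from the code below:)

    // Hypothetical sketch: find the video connection rather than hard-coding index 0.
    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in [[self stillImageOutput] connections]) {
        for (AVCaptureInputPort *port in [connection inputPorts]) {
            if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
                videoConnection = connection;
                break;
            }
        }
        if (videoConnection) break;
    }

    [[self stillImageOutput] captureStillImageAsynchronouslyFromConnection:videoConnection
                                                         completionHandler:^(CMSampleBufferRef imageDataSampleBuffer,
                                                                             NSError *error) {
        if (imageDataSampleBuffer == NULL) {
            NSLog(@"still image capture failed: %@", error);
            return;
        }
        // Converts the buffer produced by an AVCaptureStillImageOutput
        // configured with AVVideoCodecJPEG into JPEG data.
        NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
        UIImage *image = [[[UIImage alloc] initWithData:jpegData] autorelease];
        NSLog(@"captured %.0fx%.0f image", image.size.width, image.size.height);
    }];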

- (void)initCapture {
    AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput
                                          deviceInputWithDevice:[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo]
                                          error:nil];

    AVCaptureVideoDataOutput *captureOutput = [[[AVCaptureVideoDataOutput alloc] init] autorelease];
    captureOutput.alwaysDiscardsLateVideoFrames = YES;

    // Deliver video frames to the delegate on a dedicated serial queue.
    dispatch_queue_t queue = dispatch_queue_create("cameraQueue", NULL);
    [captureOutput setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);

    // Ask for 32-bit BGRA frames.
    NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
    NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
    NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
    [captureOutput setVideoSettings:videoSettings];

    self.captureSession = [[[AVCaptureSession alloc] init] autorelease];
    self.captureSession.sessionPreset = AVCaptureSessionPresetLow;

    // commitConfiguration below must be paired with beginConfiguration.
    [self.captureSession beginConfiguration];

    [self.captureSession addInput:captureInput];
    [self.captureSession addOutput:captureOutput];

    self.prevLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
    [self.prevLayer setOrientation:AVCaptureVideoOrientationLandscapeLeft];
    self.prevLayer.frame = CGRectMake(0.0, 0.0, 480.0, 320.0);
    self.prevLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [self.view.layer addSublayer:self.prevLayer];

    // Setup the default file outputs
    AVCaptureStillImageOutput *stillImageOutput = [[[AVCaptureStillImageOutput alloc] init] autorelease];
    NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:
                                    AVVideoCodecJPEG, AVVideoCodecKey,
                                    nil];
    [stillImageOutput setOutputSettings:outputSettings];
    [outputSettings release];
    [self setStillImageOutput:stillImageOutput];

    if ([self.captureSession canAddOutput:stillImageOutput]) {
        [self.captureSession addOutput:stillImageOutput];
    }

    [self.captureSession commitConfiguration];
    [self.captureSession startRunning];
}

Solution

We had this problem when 4.0 was still in beta. I tried a fair bunch of things. Here goes:

  • AVCaptureStillImageOutput and AVCaptureVideoDataOutput do not appear to play nicely with each other. If the video output is running, the image output never seems to complete (until you pause the session by putting the phone to sleep; then you seem to get a single image out).
  • AVCaptureStillImageOutput only seems to work sensibly with AVCaptureSessionPresetPhoto; otherwise you effectively get JPEG-encoded video frames. Might as well use higher-quality BGRA frames (incidentally, the camera's native output appears to be BGRA; it doesn't appear to have the colour subsampling of 2vuy/420v).
  • The video (everything that isn't Photo) and Photo presets seem fundamentally different; you never get any video frames if the session is in photo mode (you don't get an error either). Maybe they changed this...
  • You can't seem to have two capture sessions (one with a video preset and a video output, one with Photo preset and an image output). They might have fixed this.
  • You can stop the session, change the preset to Photo, start the session, take the photo, and when the photo completes, stop, change the preset back, and start again (a sketch of this cycle follows the list). This takes a while and the video preview layer stalls and looks terrible (it re-adjusts exposure levels). This also occasionally deadlocked in the beta (after calling -stopRunning, session.running was still YES).
  • You might be able to disable the AVCaptureConnection (it's supposed to work). I remember this deadlocking; they may have fixed this.
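
A minimal sketch of that stop / switch-to-Photo / shoot / restore cycle, assuming self.captureSession and self.stillImageOutput are already configured (the method name and the dispatch to the main queue are mine, not from the original answer):

    // Hypothetical helper illustrating the preset-swap workaround described above.
    - (void)captureStillThenRestorePreset {
        NSString *previousPreset = self.captureSession.sessionPreset;

        [self.captureSession stopRunning];
        if ([self.captureSession canSetSessionPreset:AVCaptureSessionPresetPhoto]) {
            self.captureSession.sessionPreset = AVCaptureSessionPresetPhoto;
        }
        [self.captureSession startRunning];

        AVCaptureConnection *connection = [[self.stillImageOutput connections] objectAtIndex:0];
        [self.stillImageOutput captureStillImageAsynchronouslyFromConnection:connection
                                                           completionHandler:^(CMSampleBufferRef buffer, NSError *error) {
            // The handler may run on an arbitrary queue; reconfigure on main.
            dispatch_async(dispatch_get_main_queue(), ^{
                [self.captureSession stopRunning];
                self.captureSession.sessionPreset = previousPreset;   // restore the video preset
                [self.captureSession startRunning];
            });
        }];
    }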

I ended up just capturing video frames. The "take picture" button simply sets a flag; in the video frame callback, if the flag is set, it returns the video frame instead of a UIImage*. This was sufficient for our image-processing needs — "take picture" exists largely so the user can get a negative response (and an option to submit a bug report); we don't actually want 2/3/5 megapixel images, since they take ages to process.
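
A sketch of that flag approach, built on the 32BGRA delegate callback from the question's initCapture (the _wantsSnapshot ivar and didSnapImage: method are hypothetical names):

    // AVCaptureVideoDataOutput delegate callback. _wantsSnapshot is a BOOL
    // ivar that the "take picture" button sets.
    - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection {
        if (!_wantsSnapshot) return;
        _wantsSnapshot = NO;

        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CVPixelBufferLockBaseAddress(imageBuffer, 0);

        // These calls assume the kCVPixelFormatType_32BGRA setting from initCapture.
        void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
        size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
        size_t width = CVPixelBufferGetWidth(imageBuffer);
        size_t height = CVPixelBufferGetHeight(imageBuffer);

        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow,
                                                     colorSpace,
                                                     kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
        CGImageRef cgImage = CGBitmapContextCreateImage(context);
        UIImage *image = [UIImage imageWithCGImage:cgImage];

        CGImageRelease(cgImage);
        CGContextRelease(context);
        CGColorSpaceRelease(colorSpace);
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

        // Hand the frame to the main thread; UIKit is not thread-safe.
        [self performSelectorOnMainThread:@selector(didSnapImage:) withObject:image waitUntilDone:NO];
    }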

If video frames are not good enough (i.e. you want to capture viewfinder frames between high-res image captures), I'd first see whether they've fixed using multiple AVCapture sessions, since that's the only way you can set both presets.

It's probably worth filing a bug. I filed a bug around the launch of 4.0 GM; Apple asked me for some sample code, but by then I'd decided to use the video-frame workaround and had a release to ship.

Additionally, the "low" preset is very low-res (and results in a low-res, low-framerate video preview). I'd go for 640x480 if available, falling back to Medium if not.
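
For instance, a sketch of that fallback using -canSetSessionPreset: (assuming the session from the question):

    // Prefer 640x480; fall back to Medium if the device can't do it.
    if ([self.captureSession canSetSessionPreset:AVCaptureSessionPreset640x480]) {
        self.captureSession.sessionPreset = AVCaptureSessionPreset640x480;
    } else {
        self.captureSession.sessionPreset = AVCaptureSessionPresetMedium;
    }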
