iPhone SDK 4 AVFoundation - How to use captureStillImageAsynchronouslyFromConnection correctly?


Problem description


I am trying to use the new AVFoundation framework for taking still pictures with the iPhone.


This method is called on a button press. I can hear the shutter sound, but I can't see the log output. If I call this method several times, the camera preview freezes.


Is there any tutorial out there on how to use captureStillImageAsynchronouslyFromConnection?

[[self stillImageOutput]
    captureStillImageAsynchronouslyFromConnection:[[self stillImageOutput].connections objectAtIndex:0]
    completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
        NSLog(@"inside");
    }];
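
A handler that actually consumes the returned buffer would typically check the error and convert the sample buffer to JPEG data. A minimal sketch (the error check and UIImage hand-off are illustrative additions; jpegStillImageNSDataRepresentation: is the real AVFoundation helper, matching the JPEG output settings configured below):

[[self stillImageOutput]
    captureStillImageAsynchronouslyFromConnection:[[self stillImageOutput].connections objectAtIndex:0]
    completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
        if (error != nil) {
            NSLog(@"Still image capture failed: %@", error);
            return;
        }
        // Convert the JPEG sample buffer to NSData, then to a UIImage.
        NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
        UIImage *image = [[[UIImage alloc] initWithData:jpegData] autorelease];
        dispatch_async(dispatch_get_main_queue(), ^{
            // Hand the image to the UI on the main thread.
            NSLog(@"captured image of size %@", NSStringFromCGSize(image.size));
        });
    }];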




- (void)initCapture {
    // Use the default video capture device (the back camera).
    AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput
                                          deviceInputWithDevice:[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo]
                                          error:nil];

    // Video data output delivers live frames to the sample buffer delegate.
    AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];
    captureOutput.alwaysDiscardsLateVideoFrames = YES;

    // Deliver frames on a dedicated serial queue.
    dispatch_queue_t queue = dispatch_queue_create("cameraQueue", NULL);
    [captureOutput setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);

    // Ask the video output for BGRA frames.
    NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
    NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
    NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
    [captureOutput setVideoSettings:videoSettings];

    self.captureSession = [[[AVCaptureSession alloc] init] autorelease];
    self.captureSession.sessionPreset = AVCaptureSessionPresetLow;

    // Batch the configuration changes; they take effect at -commitConfiguration below.
    [self.captureSession beginConfiguration];
    [self.captureSession addInput:captureInput];
    [self.captureSession addOutput:captureOutput];
    [captureOutput release];

    self.prevLayer = [AVCaptureVideoPreviewLayer layerWithSession: self.captureSession];

    [self.prevLayer setOrientation:AVCaptureVideoOrientationLandscapeLeft];

    self.prevLayer.frame = CGRectMake(0.0, 0.0, 480.0, 320.0);
    self.prevLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;

    [self.view.layer addSublayer: self.prevLayer];


    // Set up the still image output (JPEG-encoded).
    AVCaptureStillImageOutput *_stillImageOutput = [[[AVCaptureStillImageOutput alloc] init] autorelease];
    NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:
                                    AVVideoCodecJPEG, AVVideoCodecKey,
                                    nil];
    [_stillImageOutput setOutputSettings:outputSettings];
    [outputSettings release];
    [self setStillImageOutput:_stillImageOutput];

    if ([self.captureSession canAddOutput:self.stillImageOutput]) {
        [self.captureSession addOutput:self.stillImageOutput];
    }

    [self.captureSession commitConfiguration];
    [self.captureSession startRunning];

}


Recommended answer


We had this problem when 4.0 was still in beta. I tried a fair bunch of things. Here goes:


  • AVCaptureStillImageOutput and AVCaptureVideoDataOutput do not appear to play nicely with each other. If the video output is running, the image output never seems to complete (until you pause the session by putting the phone to sleep; then you seem to get a single image out).
  • AVCaptureStillImageOutput only seems to work sensibly with AVCaptureSessionPresetPhoto; otherwise you effectively get JPEG-encoded video frames. Might as well use higher-quality BGRA frames (incidentally, the camera's native output appears to be BGRA; it doesn't appear to have the colour subsampling of 2vuy/420v).
  • The video (everything that isn't Photo) and Photo presets seem fundamentally different; you never get any video frames if the session is in photo mode (you don't get an error either). Maybe they changed this...
  • You can't seem to have two capture sessions (one with a video preset and a video output, one with Photo preset and an image output). They might have fixed this.
  • You can stop the session, change the preset to Photo, start the session, take the photo, and when the photo completes, stop, change the preset back, and start again (see the sketch after this list). This takes a while, and the video preview layer stalls and looks terrible (it re-adjusts exposure levels). This also occasionally deadlocked in the beta (after calling -stopRunning, session.running was still YES).
  • You might be able to disable the AVCaptureConnection (it's supposed to work). I remember this deadlocking; they may have fixed this.
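
A rough sketch of the preset-switching dance from the fifth bullet (captureSession and stillImageOutput are assumed properties, and the method name is hypothetical; the AVCaptureSession and AVCaptureStillImageOutput calls themselves are the real API):

// Hypothetical helper illustrating the stop / re-preset / start workaround.
- (void)takePhotoBySwitchingPresets {
    [self.captureSession stopRunning];
    self.captureSession.sessionPreset = AVCaptureSessionPresetPhoto;
    [self.captureSession startRunning];
    // (In practice you may need to wait until the session is actually running.)

    AVCaptureConnection *connection = [[self.stillImageOutput connections] objectAtIndex:0];
    [self.stillImageOutput captureStillImageAsynchronouslyFromConnection:connection
        completionHandler:^(CMSampleBufferRef buffer, NSError *error) {
            // Once the photo completes, switch back to a video preset.
            [self.captureSession stopRunning];
            self.captureSession.sessionPreset = AVCaptureSessionPreset640x480;
            [self.captureSession startRunning];
        }];
}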


I ended up just capturing video frames. The "take picture" button simply sets a flag; in the video frame callback, if the flag is set, it returns that video frame instead of a UIImage*. This was sufficient for our image-processing needs; "take picture" exists largely so the user can get a negative response (and an option to submit a bug report), and we don't actually want 2/3/5-megapixel images, since they take ages to process.
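
A minimal sketch of that flag-based approach, assuming the BGRA video settings from the question (wantsStillFrame and the two helper methods are hypothetical names, not from the original answer):

// "Take picture" just raises a flag; the next video frame delivered to
// the delegate is treated as the still photo.
- (IBAction)takePicture:(id)sender {
    self.wantsStillFrame = YES; // hypothetical BOOL property
}

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    if (!self.wantsStillFrame) return;
    self.wantsStillFrame = NO;

    // Grab the BGRA pixel buffer; converting it to a UIImage is left to
    // any standard CVPixelBuffer-to-UIImage routine.
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    UIImage *still = [self imageFromPixelBuffer:pixelBuffer]; // hypothetical helper

    dispatch_async(dispatch_get_main_queue(), ^{
        [self didCaptureStillImage:still]; // hypothetical hand-off
    });
}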


If video frames are not good enough (i.e. you want to capture viewfinder frames between high-res image captures), I'd first check whether they've fixed the use of multiple AVCapture sessions, since that's the only way you can set both presets.


It's probably worth filing a bug. I filed one around the launch of the 4.0 GM; Apple asked me for some sample code, but by then I'd decided to use the video-frame workaround and had a release to ship.


Additionally, the "low" preset is very low-res (and results in a low-res, low-framerate video preview). I'd go for 640x480 if available, falling back to Medium if not.
