Is it possible to render AVCaptureVideoPreviewLayer in a graphics context?

Question

This seems like a simple task, yet it is driving me nuts. Is it possible to convert a UIView that contains an AVCaptureVideoPreviewLayer as a sublayer into an image to be saved? I want to create an augmented reality overlay and have a button save the picture to the camera roll. Holding the power button + home key captures a screenshot to the camera roll, which means all of my capture logic is working and the task is possible. But I can't seem to make it work programmatically.

I'm capturing a live preview of the camera's image using AVCaptureVideoPreviewLayer. All of my attempts to render the image fail:

  previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
//start the session, etc...


//this saves a white screen
- (IBAction)saveOverlay:(id)sender {
    NSLog(@"saveOverlay");

    UIGraphicsBeginImageContext(appDelegate.window.bounds.size);
    UIGraphicsBeginImageContext(scrollView.frame.size);

    [previewLayer.presentationLayer renderInContext:UIGraphicsGetCurrentContext()];


//    [appDelegate.window.layer renderInContext:UIGraphicsGetCurrentContext()];

    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    UIImageWriteToSavedPhotosAlbum(screenshot, self, 
                                   @selector(image:didFinishSavingWithError:contextInfo:), nil);
}

//this renders everything, EXCEPT for the preview layer, which is blank.

[appDelegate.window.layer renderInContext:UIGraphicsGetCurrentContext()];

I've read somewhere that this may be due to security issues of the iPhone. Is this true?

Just to be clear: I don't want to save only the camera's image. I want to save the transparent preview layer superimposed over another image, preserving that transparency in the result. Yet for some reason I cannot make it work.

Answer

I like @Roma's suggestion of using GPU Image - great idea. However, if you want a pure Cocoa Touch approach, here's what to do:

Implement AVCaptureVideoDataOutputSampleBufferDelegate:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Create a UIImage+Orientation from the sample buffer data
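    // NOTE: _captureFrame is assumed to be an ivar flag set elsewhere
    // (e.g. when the user taps the shutter button) so that only one frame is grabbed.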
    if (_captureFrame)
    {
        [captureSession stopRunning];

        _captureFrame = NO;
        UIImage *image = [ImageTools imageFromSampleBuffer:sampleBuffer];
        image = [image rotate:UIImageOrientationRight];

        _frameCaptured = YES;

        if (delegate != nil)
        {
            [delegate cameraPictureTaken:image];
        }
    }
}
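
This callback only fires if an AVCaptureVideoDataOutput has been added to the session with this class as its sample-buffer delegate, which the snippet above doesn't show. A minimal setup sketch, assuming ARC, an existing captureSession, and that self adopts AVCaptureVideoDataOutputSampleBufferDelegate (the queue label is arbitrary):

AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];

// 32BGRA output matches the CGBitmapContextCreate flags used in imageFromSampleBuffer: below.
videoOutput.videoSettings = @{ (__bridge NSString *)kCVPixelBufferPixelFormatTypeKey :
                                   @(kCVPixelFormatType_32BGRA) };

// Deliver sample buffers on a dedicated serial queue.
dispatch_queue_t videoQueue = dispatch_queue_create("videoQueue", DISPATCH_QUEUE_SERIAL);
[videoOutput setSampleBufferDelegate:self queue:videoQueue];

if ([captureSession canAddOutput:videoOutput]) {
    [captureSession addOutput:videoOutput];
}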

Capture as follows:

+ (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer 
{
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 
    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0); 

    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer); 

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); 
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer); 
    size_t height = CVPixelBufferGetHeight(imageBuffer); 

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); 

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, 
                                             bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst); 
    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context); 
    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer,0);

    // Free up the context and color space
    CGContextRelease(context); 
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    return (image);
}

Blend the UIImage with the overlay

  • Now that you have the UIImage, add it to a new UIView.
  • Add the overlay on top as a subview (see the sketch after this list).
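
A rough sketch of that composition step, assuming capturedImage is the UIImage delivered to cameraPictureTaken: and overlayView is your augmented-reality overlay view (both names are placeholders, not part of the original answer):

// Build an off-screen container sized to the captured frame.
UIView *container = [[UIView alloc] initWithFrame:
                     CGRectMake(0, 0, capturedImage.size.width, capturedImage.size.height)];

// The camera frame goes at the bottom of the view hierarchy...
UIImageView *cameraView = [[UIImageView alloc] initWithImage:capturedImage];
cameraView.frame = container.bounds;
[container addSubview:cameraView];

// ...and the transparent overlay sits on top of it.
overlayView.frame = container.bounds;
[container addSubview:overlayView];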

Capture the new UIView

+ (UIImage*)imageWithView:(UIView*)view
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, [UIScreen mainScreen].scale);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage* img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
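
Putting the pieces together in the delegate callback might look roughly like this. cameraPictureTaken: comes from the snippets above; buildContainerWithCameraImage: is a hypothetical helper standing in for the blend step, and imageWithView: is assumed to live on the same ImageTools class as imageFromSampleBuffer:.

- (void)cameraPictureTaken:(UIImage *)image
{
    // Compose the captured frame with the overlay (hypothetical helper, see the blend step above).
    UIView *container = [self buildContainerWithCameraImage:image];

    // Flatten the container into a single UIImage...
    UIImage *composite = [ImageTools imageWithView:container];

    // ...and save the composite to the camera roll.
    UIImageWriteToSavedPhotosAlbum(composite, self,
                                   @selector(image:didFinishSavingWithError:contextInfo:), nil);
}

Keep in mind that captureOutput:didOutputSampleBuffer:fromConnection: is delivered on the video queue, so in practice you would dispatch this work back to the main thread before touching any UIKit views.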
