How to turn a CVPixelBuffer into a UIImage?

Question

I'm having some problems getting a UIImage from a CVPixelBuffer. This is what I am trying:

CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(imageDataSampleBuffer);
CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, imageDataSampleBuffer, kCMAttachmentMode_ShouldPropagate);
CIImage *ciImage = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:(NSDictionary *)attachments];
if (attachments)
    CFRelease(attachments);
size_t width = CVPixelBufferGetWidth(pixelBuffer);
size_t height = CVPixelBufferGetHeight(pixelBuffer);
if (width && height) { // test to make sure we have valid dimensions
    UIImage *image = [[UIImage alloc] initWithCIImage:ciImage];

    UIImageView *lv = [[UIImageView alloc] initWithFrame:self.view.frame];
    lv.contentMode = UIViewContentModeScaleAspectFill;
    self.lockedView = lv;
    [lv release];
    self.lockedView.image = image;
    [image release];
}
[ciImage release];

height and width are both correctly set to the resolution of the camera. image is created, but it seems to be black (or maybe transparent?). I can't quite understand where the problem is. Any ideas would be appreciated.

Solution

First of all, the obvious point that doesn't relate directly to your question: AVCaptureVideoPreviewLayer is the cheapest way to pipe video from either of the cameras into an independent view, if that's where the data is coming from and you have no immediate plans to modify it. You don't have to do any pushing yourself; the preview layer is connected directly to the AVCaptureSession and updates itself.
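
For reference, a minimal sketch of that approach, assuming an already-configured AVCaptureSession held in a variable named session (the name is illustrative):

AVCaptureVideoPreviewLayer *previewLayer =
    [AVCaptureVideoPreviewLayer layerWithSession:session];
previewLayer.frame = self.view.bounds;
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill; // fill the view, cropping as needed
[self.view.layer addSublayer:previewLayer];
// No per-frame work on your side: the layer pulls frames from the session itself.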

I have to admit to lacking confidence about the central question. There's a semantic difference between a CIImage and the other two types of image: a CIImage is a recipe for an image and is not necessarily backed by pixels. It can be something like "take the pixels from here, transform like this, apply this filter, transform like this, merge with this other image, apply this filter". The system doesn't know what a CIImage looks like until you choose to render it. It also doesn't inherently know the appropriate bounds in which to rasterise it.
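
A hedged illustration of that point: every line below merely extends the recipe, and no pixels are computed until something renders the result (the transform and filter choices are just examples):

CIImage *recipe = [CIImage imageWithCVPixelBuffer:pixelBuffer];
recipe = [recipe imageByApplyingTransform:CGAffineTransformMakeScale(0.5, 0.5)]; // still a recipe
CIFilter *sepia = [CIFilter filterWithName:@"CISepiaTone"];
[sepia setValue:recipe forKey:kCIInputImageKey];
[sepia setValue:[NSNumber numberWithFloat:0.8f] forKey:kCIInputIntensityKey];
CIImage *result = sepia.outputImage; // still nothing has been rasterised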

UIImage purports merely to wrap a CIImage. It doesn't convert it to pixels. Presumably UIImageView should achieve that, but if so then I can't seem to find where you'd supply the appropriate output rectangle.
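
You can see that no conversion happens with a quick check (not from the original question) of the wrapped image's CGImage property:

UIImage *wrapped = [[UIImage alloc] initWithCIImage:ciImage];
NSLog(@"backing CGImage: %p", wrapped.CGImage); // logs 0x0 — no pixel backing exists
[wrapped release];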

I've had success just dodging around the issue with:

CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];

// Render the CIImage into a CGImage over an explicitly specified rectangle.
CIContext *temporaryContext = [CIContext contextWithOptions:nil];
CGImageRef videoImage = [temporaryContext
                         createCGImage:ciImage
                         fromRect:CGRectMake(0, 0,
                                             CVPixelBufferGetWidth(pixelBuffer),
                                             CVPixelBufferGetHeight(pixelBuffer))];

UIImage *uiImage = [UIImage imageWithCGImage:videoImage];
CGImageRelease(videoImage); // uiImage keeps its own reference to the CGImage

That gives an obvious opportunity to specify the output rectangle. I'm sure there's a route through without using a CGImage as an intermediary, so please don't assume this solution is best practice.
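
For completeness, one commonly seen alternative skips Core Image and reads the buffer's bytes directly. This is a hedged sketch that assumes the capture output is configured for kCVPixelFormatType_32BGRA; note it still produces a CGImage at the end, so it is not the CGImage-free route alluded to above:

CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
void *base = CVPixelBufferGetBaseAddress(pixelBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
size_t width = CVPixelBufferGetWidth(pixelBuffer);
size_t height = CVPixelBufferGetHeight(pixelBuffer);

// Wrap the BGRA bytes in a bitmap context, then snapshot it as a CGImage.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmapContext = CGBitmapContextCreate(base, width, height, 8, bytesPerRow,
                                                   colorSpace,
                                                   kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef cgImage = CGBitmapContextCreateImage(bitmapContext);
UIImage *image = [UIImage imageWithCGImage:cgImage];

CGImageRelease(cgImage);
CGContextRelease(bitmapContext);
CGColorSpaceRelease(colorSpace);
CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);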
