How to convert a CVImageBufferRef to UIImage
Problem Description
I am trying to capture video from a camera. I have gotten the captureOutput:didOutputSampleBuffer: callback to trigger, and it gives me a sample buffer that I then convert to a CVImageBufferRef. I then attempt to convert that image to a UIImage that I can view in my app.
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    /* Lock the image buffer */
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    /* Get information about the image */
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    /* Create a CGImageRef from the CVImageBufferRef */
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);

    /* Unlock the image buffer only after we are done reading from baseAddress */
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    /* Release the context and color space */
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);

    /* Display the result on the custom layer */
    /* self.customLayer.contents = (id)newImage; */

    /* Display the result on the image view (rotate the image so the video is displayed correctly) */
    UIImage *image = [UIImage imageWithCGImage:newImage scale:1.0 orientation:UIImageOrientationRight];
    self.capturedView.image = image;

    /* Release the CGImageRef */
    CGImageRelease(newImage);
}
The code seems to work fine up until the call to CGBitmapContextCreate, which always returns a NULL pointer, so none of the rest of the function works. No matter what I seem to pass it, the function returns NULL, and I have no idea why.
Recommended Answer
The way that you are passing in the baseAddress presumes that the image data is in the form ACCC (where A is the alpha byte and each C is a color component: R, G, or B).
If you've set up your AVCaptureSession to capture the video frames in native format, more than likely you're getting the video data back in planar YUV420 format (see: link text). To do what you're attempting here, the easiest fix would probably be to specify that you want the video frames captured as kCVPixelFormatType_32RGBA. Apple recommends capturing frames as kCVPixelFormatType_32BGRA if you capture in a non-planar format at all; the reasoning is not stated, but I can reasonably assume it is for performance.
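Requesting BGRA frames might look like the sketch below. This is not from the original post (the `videoOutput` variable name is an assumption), but kCVPixelBufferPixelFormatTypeKey is the documented key for AVCaptureVideoDataOutput's videoSettings dictionary:

```objc
#import <AVFoundation/AVFoundation.h>

/* Sketch: ask AVFoundation for BGRA frames so the bitmap-context
   setup in the question's callback receives data in the layout it expects. */
AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
videoOutput.videoSettings = @{
    (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)
};
```

With 32BGRA frames, the kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst combination passed to CGBitmapContextCreate in the question matches the buffer's byte layout.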
Caveat: I've not done this, and am assuming that accessing the CVPixelBufferRef contents like this is a reasonable way to build the image. I can't vouch for it actually working, but I /can/ tell you that the way you are doing things right now reliably will not work, because of the pixel format in which you are (probably) capturing the video frames.