AVFoundation - Get grayscale image from Y plane (kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)
Question
I'm using AVFoundation to capture video and I'm recording in the kCVPixelFormatType_420YpCbCr8BiPlanarFullRange format. I want to make a grayscale image directly from the Y plane of the YpCbCr buffer.
I've tried to create a CGContextRef by calling CGBitmapContextCreate, but the problem is that I don't know which color space and pixel format to choose.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    /* Get information about the Y plane */
    uint8_t *YPlaneAddress = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);
    size_t width = CVPixelBufferGetWidthOfPlane(imageBuffer, 0);
    size_t height = CVPixelBufferGetHeightOfPlane(imageBuffer, 0);

    /* the problematic part of the code */
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGContextRef newContext = CGBitmapContextCreate(YPlaneAddress,
        width, height, 8, bytesPerRow, colorSpace, kCVPixelFormatType_1Monochrome);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    UIImage *grayscaleImage = [[UIImage alloc] initWithCGImage:newImage];
    // process the grayscale image ...
}
When I run the code above, I get these errors:
<Error>: CGBitmapContextCreateImage: invalid context 0x0
<Error>: CGBitmapContextCreate: unsupported parameter combination: 8 integer bits/component; 16 bits/pixel; 1-component color space; kCGImageAlphaPremultipliedLast; 192 bytes/row.
PS: Sorry for my English.
Answer
If I'm not mistaken, you shouldn't go via a CGContext. Instead, you should create a data provider and then create the image directly.
Another mistake in your code is the use of the kCVPixelFormatType_1Monochrome constant. It's a constant used in video processing (the AV libraries), not in Core Graphics (the CG libraries). Just use kCGImageAlphaNone instead. That a single (gray) component per pixel is needed (instead of three as for RGB) is derived from the color space. (Incidentally, kCVPixelFormatType_1Monochrome has the numeric value 1, the same as kCGImageAlphaPremultipliedLast, which is why the error message complains about a premultiplied-alpha, 16 bits/pixel gray layout.)
It could look like this:
CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, YPlaneAddress,
    height * bytesPerRow, NULL);
CGImageRef newImage = CGImageCreate(width, height, 8, 8, bytesPerRow,
    colorSpace, kCGImageAlphaNone, dataProvider, NULL, NO, kCGRenderingIntentDefault);
CGDataProviderRelease(dataProvider);