How to color manage AVAssetWriter output


Problem description


I'm having trouble getting a rendered video's colors to match the source content's colors. I'm rendering images into a CGContext, converting the backing data into a CVPixelBuffer and appending that as a frame to an AVAssetWriterInputPixelBufferAdaptor. This causes slight color differences between the images that I'm drawing into the CGContext and the resulting video file.


It seems like there are 3 things that need to be addressed:

  1. Telling AVFoundation what colorspace the video is in.
  2. Making the AVAssetWriterInputPixelBufferAdaptor and the CVPixelBuffers I append match that colorspace.
  3. Using the same colorspace for the CGContext.
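For point 1, AVFoundation's video output settings can carry explicit color metadata via AVVideoColorPropertiesKey. The following is a minimal sketch, assuming a BT.709 target; the constants are real AVFoundation keys, but whether this alone resolves the mismatch described here is not confirmed by this question:

```objc
// Output settings with explicit color metadata (BT.709 assumed).
// outputWidth/outputHeight are the same variables used in the code below.
NSDictionary *outputSettings = @{
    AVVideoCodecKey  : AVVideoCodecH264,
    AVVideoWidthKey  : @(outputWidth),
    AVVideoHeightKey : @(outputHeight),
    AVVideoColorPropertiesKey : @{
        AVVideoColorPrimariesKey   : AVVideoColorPrimaries_ITU_R_709_2,
        AVVideoTransferFunctionKey : AVVideoTransferFunction_ITU_R_709_2,
        AVVideoYCbCrMatrixKey      : AVVideoYCbCrMatrix_ITU_R_709_2,
    },
};
```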


The documentation is terrible, so I'd appreciate any guidance on how to do these things or if there is something else I need to do to make the colors be preserved throughout this entire process.

Full code:

AVAssetWriter                        *_assetWriter;
AVAssetWriterInput                   *_assetInput;
AVAssetWriterInputPixelBufferAdaptor *_assetInputAdaptor;

NSDictionary *outputSettings = @{ AVVideoCodecKey :AVVideoCodecH264,
                                  AVVideoWidthKey :@(outputWidth),
                                  AVVideoHeightKey:@(outputHeight)};

_assetInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                 outputSettings:outputSettings];


NSDictionary *bufferAttributes = @{(NSString*)kCVPixelBufferPixelFormatTypeKey:@(kCVPixelFormatType_32ARGB)};
_assetInputAdaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:_assetInput
                                                                                      sourcePixelBufferAttributes:bufferAttributes];


_assetWriter = [AVAssetWriter assetWriterWithURL:aURL fileType:AVFileTypeMPEG4 error:nil];
[_assetWriter addInput:_assetInput];
[_assetWriter startWriting];
[_assetWriter startSessionAtSourceTime:kCMTimeZero];

NSInteger bytesPerRow = outputWidth * 4;
long size = bytesPerRow * outputHeight;
CGColorSpaceRef srgbSpace = CGColorSpaceCreateWithName(kCGColorSpaceSRGB);

UInt8 *data = (UInt8 *)calloc(size, 1);
CGContextRef ctx = CGBitmapContextCreateWithData(data, outputWidth, outputHeight, 8, bytesPerRow, srgbSpace, kCGImageAlphaPremultipliedFirst, NULL, NULL);

// draw everything into ctx

CVPixelBufferRef pixelBuffer;
CVPixelBufferCreateWithBytes(kCFAllocatorSystemDefault,
                                 outputWidth, outputHeight,
                                 kCVPixelFormatType_32ARGB,
                                 data,
                                 bytesPerRow,
                                 ReleaseCVPixelBufferForCVPixelBufferCreateWithBytes,
                                 NULL,
                                 NULL,
                             &pixelBuffer);

NSDictionary *pbAttachements = @{(id)kCVImageBufferCGColorSpaceKey : (__bridge id)srgbSpace};
CVBufferSetAttachments(pixelBuffer, (__bridge CFDictionaryRef)pbAttachements, kCVAttachmentMode_ShouldPropagate);
[_assetInputAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:CMTimeMake(0, 60)];

CGColorSpaceRelease(srgbSpace);

[_assetInput markAsFinished];
[_assetWriter finishWritingWithCompletionHandler:^{}];

Recommended answer


This is quite a confusing subject and the Apple docs really do not help all that much. I am going to describe the solution I have settled on, based on using the BT.709 colorspace. I am sure someone will object based on colorimetric correctness and the weirdness of various video standards, but this is a complex topic.

First off, don't use kCVPixelFormatType_32ARGB as the pixel type. Always pass kCVPixelFormatType_32BGRA instead, since BGRA is the native pixel layout on both MacOSX and iPhone hardware, and BGRA is simply faster.

Next, when you create a CGBitmapContext to render into, use the BT.709 colorspace (kCGColorSpaceITUR_709). Also, don't render into a malloc() buffer; render directly into the CoreVideo pixel buffer by creating a bitmap context over the same memory. CoreGraphics will handle the colorspace and gamma conversion from whatever your input image is to BT.709 and its associated gamma.

Then you need to tell AVFoundation the colorspace of the pixel buffer. Do that by making an ICC profile copy and setting the kCVImageBufferICCProfileKey on the CoreVideo pixel buffer. That takes care of your issues 1 and 2; with this approach you do not need to have the input images in that same colorspace.

Now, this is of course complex, and actual working source code (yes, actually working) is hard to come by. Here is a github link to a small project that does these exact steps. The code is BSD licensed, so feel free to use it. Note specifically the H264Encoder class, which wraps all this horror up into a reusable module. You can find calling code in encode_h264.m, a little MacOSX command line util that encodes PNG to M4V. Also attached are 3 key Apple docs related to this subject: 1, 2, 3.

MetalBT709Decoder
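The steps in the answer above can be sketched as follows. This is a minimal illustration, not the linked project's code: it assumes macOS 10.12+/iOS 10+ (for CGColorSpaceCopyICCData), the function name createBT709PixelBuffer is made up for the example, and CoreGraphics support for kCGColorSpaceITUR_709 in bitmap contexts can vary by OS version:

```objc
#import <AVFoundation/AVFoundation.h>
#import <CoreVideo/CoreVideo.h>

static CVPixelBufferRef createBT709PixelBuffer(size_t width, size_t height) {
    // 1. Ask CoreVideo for a BGRA buffer (the native layout on macOS/iOS).
    NSDictionary *attrs = @{
        (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA),
        (id)kCVPixelBufferCGBitmapContextCompatibilityKey : @YES,
    };
    CVPixelBufferRef pixelBuffer = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                        kCVPixelFormatType_32BGRA,
                        (__bridge CFDictionaryRef)attrs, &pixelBuffer);

    // 2. Render directly into the buffer's memory with a BT.709 context;
    //    CoreGraphics converts whatever you draw into BT.709.
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    CGColorSpaceRef bt709 = CGColorSpaceCreateWithName(kCGColorSpaceITUR_709);
    CGContextRef ctx = CGBitmapContextCreate(
        CVPixelBufferGetBaseAddress(pixelBuffer),
        width, height, 8,
        CVPixelBufferGetBytesPerRow(pixelBuffer),
        bt709,
        kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little); // BGRA
    // ... draw everything into ctx here ...
    CGContextRelease(ctx);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

    // 3. Attach the ICC profile so AVFoundation knows the buffer's colorspace.
    CFDataRef icc = CGColorSpaceCopyICCData(bt709);
    if (icc) {
        CVBufferSetAttachment(pixelBuffer, kCVImageBufferICCProfileKey,
                              icc, kCVAttachmentMode_ShouldPropagate);
        CFRelease(icc);
    }
    CGColorSpaceRelease(bt709);
    return pixelBuffer; // caller releases with CVPixelBufferRelease()
}
```

The returned buffer can then be appended via the AVAssetWriterInputPixelBufferAdaptor exactly as in the question's code, with the adaptor's sourcePixelBufferAttributes changed to kCVPixelFormatType_32BGRA to match.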

