Why doesn't the decoder of MediaCodec output a unified YUV format (like YUV420P)?

Problem Description

"MediaCodec解码器可以使用上述格式之一或专有格式来生成ByteBuffers中的数据.例如,基于Qualcomm SoC的设备通常使用OMX_QCOM_COLOR_FormatYUV420PackedSemiPlanar32m(#2141391876/0x7FA30C04)."

This makes it difficult, sometimes even impossible, to deal with the output buffers. Why not use a unified YUV format? And why are there so many YUV color formats?

@fadden, I found that it is possible to decode to a Surface and read back an RGB buffer (as in http://bigflake.com/mediacodec/ExtractMpegFramesTest.java.txt). Can I convert that RGB buffer to a YUV format and then encode it?
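
To be concrete, this is the kind of CPU conversion I have in mind; just a minimal sketch assuming a tightly packed RGBA buffer (as glReadPixels returns, ignoring its vertical flip) and BT.601 full-range coefficients. The method name and layout are my own, not from ExtractMpegFramesTest:

    // Convert a tightly packed RGBA buffer to I420 (planar YUV 4:2:0).
    // Assumes width and height are even; BT.601 full-range coefficients.
    static byte[] rgbaToI420(byte[] rgba, int width, int height) {
        byte[] yuv = new byte[width * height * 3 / 2];
        int yIndex = 0;
        int uIndex = width * height;
        int vIndex = uIndex + width * height / 4;
        for (int row = 0; row < height; row++) {
            for (int col = 0; col < width; col++) {
                int i = (row * width + col) * 4;
                int r = rgba[i] & 0xFF;
                int g = rgba[i + 1] & 0xFF;
                int b = rgba[i + 2] & 0xFF;
                // Luma for every pixel.
                int y = (77 * r + 150 * g + 29 * b) >> 8;
                yuv[yIndex++] = (byte) Math.max(0, Math.min(255, y));
                // One chroma pair per 2x2 block (4:2:0 subsampling).
                if ((row & 1) == 0 && (col & 1) == 0) {
                    int u = ((-43 * r - 84 * g + 127 * b) >> 8) + 128;
                    int v = ((127 * r - 106 * g - 21 * b) >> 8) + 128;
                    yuv[uIndex++] = (byte) Math.max(0, Math.min(255, u));
                    yuv[vIndex++] = (byte) Math.max(0, Math.min(255, v));
                }
            }
        }
        return yuv;
    }

Even if that works, it adds a full-frame copy and conversion per frame, which is why I would rather keep everything on a Surface as in the API 18+ approach below.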

And, fadden, I tried the API 18+ approach and ran into some problems. I referred to the ContinuousCaptureActivity and ExtractMpegFramesTest code. In ContinuousCaptureActivity:

    mEglCore = new EglCore(null, EglCore.FLAG_RECORDABLE);
    mDisplaySurface = new WindowSurface(mEglCore, holder.getSurface(), false);
    mDisplaySurface.makeCurrent();

    mFullFrameBlit = new FullFrameRect(
            new Texture2dProgram(Texture2dProgram.ProgramType.TEXTURE_EXT));
    mTextureId = mFullFrameBlit.createTextureObject();
    mCameraTexture = new SurfaceTexture(mTextureId);
    mCameraTexture.setOnFrameAvailableListener(this);
    mCamera.setPreviewTexture(mCameraTexture);

FullFrameRect creates the texture object, a SurfaceTexture is created from it, and that SurfaceTexture is set as the camera's preview texture.

But in ExtractMpegFramesTest, a CodecOutputSurface is used and it also creates a texture. How can I use CodecOutputSurface and FullFrameRect together? (One supplies the Surface that receives the decoder output; the other rescales and renders onto the encoder's input surface.)
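
What I have tried to piece together, mostly guessed from ContinuousCaptureActivity, looks roughly like the sketch below. Everything beyond the Grafika classes quoted above (the MediaFormats, the frame-available listener, the H.264 MIME type, the sizes and timestamps) is my assumption, and error/EOS handling is omitted:

    // Decoder -> SurfaceTexture -> FullFrameRect blit -> encoder input Surface.

    // 1. Create the encoder first; createInputSurface() must be called between
    //    configure() and start().
    MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
    encoder.configure(encoderFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    Surface encoderInputSurface = encoder.createInputSurface();
    encoder.start();

    // 2. EGL setup: the render target is the encoder's input surface.
    EglCore eglCore = new EglCore(null, EglCore.FLAG_RECORDABLE);
    WindowSurface encoderSurface = new WindowSurface(eglCore, encoderInputSurface, true);
    encoderSurface.makeCurrent();

    // 3. External texture + SurfaceTexture that receives decoded frames
    //    (playing the role CodecOutputSurface plays in ExtractMpegFramesTest).
    FullFrameRect fullFrameBlit = new FullFrameRect(
            new Texture2dProgram(Texture2dProgram.ProgramType.TEXTURE_EXT));
    int textureId = fullFrameBlit.createTextureObject();
    SurfaceTexture decoderTexture = new SurfaceTexture(textureId);
    decoderTexture.setOnFrameAvailableListener(frameAvailableListener);
    Surface decoderOutputSurface = new Surface(decoderTexture);

    // 4. The decoder renders into that Surface instead of filling ByteBuffers.
    MediaCodec decoder = MediaCodec.createDecoderByType("video/avc");
    decoder.configure(decoderFormat, decoderOutputSurface, null, 0);
    decoder.start();

    // 5. Per decoded frame, after releaseOutputBuffer(index, true) and the
    //    onFrameAvailable callback:
    float[] texMatrix = new float[16];
    decoderTexture.updateTexImage();
    decoderTexture.getTransformMatrix(texMatrix);
    GLES20.glViewport(0, 0, encoderWidth, encoderHeight);   // rescaling happens here
    fullFrameBlit.drawFrame(textureId, texMatrix);
    encoderSurface.setPresentationTime(presentationTimeNs);
    encoderSurface.swapBuffers();                            // submits the frame to the encoder

(If I read ContinuousCaptureActivity correctly, this is essentially its camera path with the camera replaced by a decoder; is that the right way to combine the two?)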

Solution

Let me try to answer the 'why' part. With your kind permission, I will change the order of the questions.

And why are there so many YUV color formats?

YUV is only one of many color spaces used to represent visual information. For many technical and historical reasons, it is the most popular color space for photographic data, including video. Wikipedia claims that YUV was invented when television began its transition from black-and-white to color; back then these were analogue signals. Later, different engineers in different corporations and countries independently invented ways to store this YUV data in digital form. No wonder they did not come up with a single format.

Furthermore, YUV formats differ in the amount of chroma information they store. It is quite natural that YUV 4:2:0, 4:2:2, and 4:4:4 all have a right to exist, offering different compromises between precision and size.
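
To make the size trade-off concrete, a quick back-of-the-envelope calculation, assuming 8-bit samples and a 1080p frame (the numbers are mine, not from any standard):

    // Bytes per uncompressed 8-bit frame for common chroma subsamplings.
    int w = 1920, h = 1080;
    int yuv444 = w * h * 3;      // full-resolution U and V  -> 6,220,800 bytes
    int yuv422 = w * h * 2;      // U,V halved horizontally  -> 4,147,200 bytes
    int yuv420 = w * h * 3 / 2;  // U,V halved both ways     -> 3,110,400 bytes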

Finally, some of the differences between YUV formats are related to the physical layout of the pixels and are optimized for different optical sensors.

Which brings us to the first part of your question:

Why not use a unified YUV format?

Transferring image information from the optical sensor to computer (smartphone) memory is a technical challenge. When we are talking about a many-megapixel, high-speed live video stream, bandwidth limitations and electronic noise become important. Some YUV formats, like uyvy or OMX_QCOM_COLOR_FormatYUV420PackedSemiPlanar32m, are optimized to reduce congestion on the way from the optical sensor to the byte buffer.

These formats may have a significant advantage when used with the appropriate integrated circuit, or no advantage at all on a different type of hardware.

The same is true for hardware codecs: different implementations of an H.264 decoder may take advantage of cache locality with different interleaved YUV formats.
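
The practical consequence is that you should not hard-code a buffer layout; ask the device which formats its codecs advertise. A minimal sketch using the standard MediaCodecList / MediaCodecInfo API (the MIME type and log tag are just placeholders):

    // List the color formats every AVC decoder on this device advertises.
    MediaCodecList codecList = new MediaCodecList(MediaCodecList.REGULAR_CODECS); // API 21+
    for (MediaCodecInfo info : codecList.getCodecInfos()) {
        if (info.isEncoder()) {
            continue;
        }
        for (String type : info.getSupportedTypes()) {
            if (!type.equalsIgnoreCase("video/avc")) {
                continue;
            }
            MediaCodecInfo.CodecCapabilities caps = info.getCapabilitiesForType(type);
            for (int colorFormat : caps.colorFormats) {
                // Values of 0x7F000000 and above are vendor-specific,
                // e.g. 0x7FA30C04 on Qualcomm SoCs.
                Log.d("ColorFormats", info.getName()
                        + " supports 0x" + Integer.toHexString(colorFormat));
            }
        }
    }

The format a running decoder actually chose shows up in the output MediaFormat (MediaFormat.KEY_COLOR_FORMAT) after the first INFO_OUTPUT_FORMAT_CHANGED, and on API 21+ you can often sidestep the layout question by requesting COLOR_FormatYUV420Flexible and reading the planes through getOutputImage().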
