Using getInputImage with MediaCodec for encoding


Problem description



Background: I do video file demuxing, decode the video track, apply some changes to the received frames, then encode and mux them again.

The known issue when doing this on Android is the number of vendor-specific encoder/decoder color formats. Android 4.3 introduced surfaces to achieve device independence, but I found them hard to work with, as my frame-changing routines require a Canvas to write to.

Since Android 5.0, the use of the flexible YUV420 color format looks promising. Together with getOutputImage for decoding and getInputImage for encoding, Image objects can be used to handle frames in the format retrieved from a decoding MediaCodec. I got decoding working using getOutputImage and could visualize the result after RGB conversion. For encoding a YUV image and queuing it into a MediaCodec (encoder), however, there still seems to be a missing link.
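(For reference: "flexible YUV420" here means an encoder configured with COLOR_FormatYUV420Flexible, roughly as in the sketch below. The MIME type, resolution, bitrate and frame rate are illustrative placeholders, not values from the original post.)

MediaFormat format = MediaFormat.createVideoFormat("video/avc", 1280, 720);   // placeholder size
format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Flexible);
format.setInteger(MediaFormat.KEY_BIT_RATE, 2000000);     // placeholder bitrate
format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);        // placeholder frame rate
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
encoder.start();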

After dequeuing an input buffer from MediaCodec

int inputBufferId = encoder.dequeueInputBuffer(5000);

I can get access to a proper image returned by

encoder.getInputImage(inputBufferId);

I fill in the image buffers - which is working too, but I do not see a way to queue the input buffer back into the codec for encoding... There is only a

encoder.queueInputBuffer(inputBufferId, position, size, presentationUs, 0);

method available, but nothing that matches an image. The size required for the call can be retrieved using

ByteBuffer byteBuffer = encoder.getInputBuffer(inputBufferId);

and

byteBuffer.remaining();

But this seems to screw up the encoder when called in addition to getInputImage().

Another missing piece of documentation, or just something I'm getting wrong?

Solution

This is indeed a bit problematic - the most foolproof way is probably to calculate the maximum distance from the start pointer of any plane in the Image to the last byte of any plane, but you need native code to do this (in order to get the actual pointer values of the direct byte buffers).

A second alternative is to use getInputBuffer as you show, but with one caveat. First call getInputBuffer to get the ByteBuffer and call remaining() on it (or perhaps capacity() works better?). Only after this, call getInputImage. The detail is that when calling getInputImage, the ByteBuffer returned by getInputBuffer gets invalidated, and vice versa. (The docs for MediaCodec.getInputBuffer(int) say: "After calling this method any ByteBuffer or Image object previously returned for the same input index MUST no longer be used.")
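Putting that order of calls together, a minimal sketch could look like this (assuming an encoder already configured with COLOR_FormatYUV420Flexible; fillYuvImage is a hypothetical stand-in for the frame-writing code, not part of the MediaCodec API):

// Uses android.media.MediaCodec, android.media.Image and java.nio.ByteBuffer.

static void encodeFrame(MediaCodec encoder, long presentationUs) {
    int inputBufferId = encoder.dequeueInputBuffer(5000);
    if (inputBufferId < 0) {
        return;  // no input buffer available right now
    }

    // 1. Get the ByteBuffer first, only to note its size.
    ByteBuffer byteBuffer = encoder.getInputBuffer(inputBufferId);
    int size = byteBuffer.remaining();  // or capacity(), as discussed above

    // 2. Only then get the Image; the ByteBuffer must no longer be used from here on.
    Image image = encoder.getInputImage(inputBufferId);
    fillYuvImage(image);

    // 3. Queue the buffer using the size noted in step 1.
    encoder.queueInputBuffer(inputBufferId, 0, size, presentationUs, 0);
}

// Hypothetical stand-in for the actual frame-changing routine: zero-fills each plane.
// Real code would write Y/U/V data honoring plane.getRowStride() and plane.getPixelStride().
static void fillYuvImage(Image image) {
    for (Image.Plane plane : image.getPlanes()) {
        ByteBuffer buf = plane.getBuffer();
        while (buf.hasRemaining()) {
            buf.put((byte) 0);
        }
    }
}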
