Android MediaCodec Encode and Decode in Asynchronous Mode

Problem Description

I am trying to decode a video from a file and encode it into a different format with MediaCodec in the new Asynchronous Mode supported in API Level 21 and up (Android OS 5.0 Lollipop).

There are many examples for doing this in Synchronous Mode on sites such as Big Flake, Google's Grafika, and dozens of answers on StackOverflow, but none of them support Asynchronous mode.

I do not need to display the video during the process.

I believe that the general procedure is to read the file with a MediaExtractor as the input to a MediaCodec (decoder), allow the output of the decoder to render into a Surface that is also the shared input into a MediaCodec (encoder), and then finally to write the encoder's output to a file via a MediaMuxer. The Surface is created during setup of the encoder and shared with the decoder.

I can decode the video into a TextureView, but sharing the Surface with the encoder instead of the screen has not been successful.

I set up MediaCodec.Callback()s for both of my codecs. I believe that one issue is that I do not know what to do in the encoder's onInputBufferAvailable() callback. I do not know how (or whether) to copy data from the Surface into the encoder - that should happen automatically (as is done on the decoder output with codec.releaseOutputBuffer(outputBufferId, true);). Yet, I believe that onInputBufferAvailable requires a call to codec.queueInputBuffer in order to function. I just don't know how to set the parameters without getting data from something like the MediaExtractor used on the decode side.

If you have an example that opens a video file, decodes it, encodes it to a different resolution or format using the asynchronous MediaCodec callbacks, and then saves it as a file, please share your sample code.

=== EDIT ===

Here is a working example in synchronous mode of what I am trying to do in asynchronous mode: ExtractDecodeEditEncodeMuxTest.java (https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/ExtractDecodeEditEncodeMuxTest.java). This example works in my application.

Answer

I believe you shouldn't need to do anything in the encoder's onInputBufferAvailable() callback - you should not call encoder.queueInputBuffer(). Just as you never call encoder.dequeueInputBuffer() and encoder.queueInputBuffer() manually when doing Surface input encoding in synchronous mode, you shouldn't do it in asynchronous mode either.

When you call decoder.releaseOutputBuffer(outputBufferId, true); (in both synchronous and asynchronous mode), this internally (using the Surface you provided) dequeues an input buffer from the surface, renders the output into it, and enqueues it back to the surface (to the encoder). The only difference between synchronous and asynchronous mode is in how the buffer events are exposed in the public API; when using Surface input, a different (internal) API is used to access the same surface, so synchronous vs. asynchronous mode shouldn't matter for this at all.
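
So the decoder's output callback only needs to release each buffer with the render flag set; roughly like this (a sketch of the decoder side, inside its MediaCodec.Callback; the encoder field is an assumption):

    // Decoder callback: releasing with render == true draws the frame into
    // the shared Surface, which internally queues it as encoder input.
    @Override
    public void onOutputBufferAvailable(MediaCodec codec, int index,
            MediaCodec.BufferInfo info) {
        codec.releaseOutputBuffer(index, info.size > 0 /* render */);
        if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
            // Propagate end-of-stream to the Surface-input encoder.
            encoder.signalEndOfInputStream();
        }
    }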

So as far as I know (although I haven't tried it myself), you should just leave the onInputBufferAvailable() callback empty for the encoder.
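
For instance, the encoder's callback could look roughly like this (a sketch; the muxer handling is simplified, and mMuxer and mTrackIndex are assumed fields):

    encoder.setCallback(new MediaCodec.Callback() {
        @Override
        public void onInputBufferAvailable(MediaCodec codec, int index) {
            // Intentionally empty: with Surface input, buffers are queued
            // internally when the decoder renders into the shared Surface.
        }

        @Override
        public void onOutputBufferAvailable(MediaCodec codec, int index,
                MediaCodec.BufferInfo info) {
            ByteBuffer encoded = codec.getOutputBuffer(index);
            // Skip codec config data; the muxer gets it via the format instead.
            if ((info.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) == 0
                    && info.size > 0) {
                mMuxer.writeSampleData(mTrackIndex, encoded, info);
            }
            codec.releaseOutputBuffer(index, false);
        }

        @Override
        public void onOutputFormatChanged(MediaCodec codec, MediaFormat format) {
            mTrackIndex = mMuxer.addTrack(format);
            mMuxer.start();
        }

        @Override
        public void onError(MediaCodec codec, MediaCodec.CodecException e) {
            // Log or surface the error in a real implementation.
        }
    });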

So, I tried doing this myself, and it's (almost) as simple as described above.

If the encoder input surface is configured directly as the output of the decoder (with no SurfaceTexture in between), things just work, with a synchronous decode-encode loop converted into an asynchronous one.
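
The wiring for that direct configuration might look like this (a sketch; the formats, MIME type and callback objects are assumed to be set up elsewhere):

    // In asynchronous mode, setCallback() must be called before configure().
    MediaCodec encoder = MediaCodec.createEncoderByType("video/avc"); // assumed output codec
    encoder.setCallback(encoderCallback);
    encoder.configure(encoderFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    // createInputSurface() must be called after configure() and before start().
    Surface inputSurface = encoder.createInputSurface();

    MediaCodec decoder = MediaCodec.createDecoderByType(mime);
    decoder.setCallback(decoderCallback);
    // The encoder's input Surface becomes the decoder's output target.
    decoder.configure(decoderFormat, inputSurface, null, 0);

    encoder.start();
    decoder.start();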

If you use SurfaceTexture, however, you may run into a small gotcha. There is an issue with how one waits for frames to arrive at the SurfaceTexture in relation to the calling thread; see https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/DecodeEditEncodeTest.java#106 and https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/EncodeDecodeTest.java#104 and https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/OutputSurface.java#113 for references to this.

The issue, as far as I can see, is in awaitNewImage, as in https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/OutputSurface.java#240. If the onFrameAvailable callback is supposed to be called on the main thread, we have a problem whenever the awaitNewImage call is also run on the main thread. In particular, if the onOutputBufferAvailable callbacks are delivered on the main thread and you call awaitNewImage from there, you'll end up waiting (with a wait() that blocks the whole thread) for a callback that can't run until the current method returns.

So we need to make sure that the onFrameAvailable callbacks come on a different thread than the one that calls awaitNewImage. One pretty simple way of doing this is to create a new separate thread, that does nothing but service the onFrameAvailable callbacks. To do that, you can do e.g. this:

    // A dedicated thread whose Looper does nothing but service the
    // onFrameAvailable callbacks, keeping them off the thread that waits.
    private HandlerThread mHandlerThread = new HandlerThread("CallbackThread");
    private Handler mHandler;
...
        mHandlerThread.start();
        mHandler = new Handler(mHandlerThread.getLooper());
...
        // Deliver frame notifications on the handler thread.
        mSurfaceTexture.setOnFrameAvailableListener(this, mHandler);

I hope this is enough for you to be able to solve your issue, let me know if you need me to edit one of the public examples to implement asynchronous callbacks there.

Also, since the GL rendering might be done from within the onOutputBufferAvailable callback, this might be a different thread than the one that set up the EGL context. So in that case, one needs to release the EGL context in the thread that set it up, like this:

    mEGL.eglMakeCurrent(mEGLDisplay, EGL10.EGL_NO_SURFACE, EGL10.EGL_NO_SURFACE, EGL10.EGL_NO_CONTEXT);

And reattach it in the other thread before rendering:

    mEGL.eglMakeCurrent(mEGLDisplay, mEGLSurface, mEGLSurface, mEGLContext);
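
Put together, the handoff might look like this (a sketch reusing the field names from the lines above):

    // On the thread that set up the EGL context: detach it so the
    // callback thread can take it over.
    mEGL.eglMakeCurrent(mEGLDisplay, EGL10.EGL_NO_SURFACE,
            EGL10.EGL_NO_SURFACE, EGL10.EGL_NO_CONTEXT);

    // Later, on the thread running onOutputBufferAvailable:
    mEGL.eglMakeCurrent(mEGLDisplay, mEGLSurface, mEGLSurface, mEGLContext);
    // ... draw the frame with GLES here ...
    mEGL.eglSwapBuffers(mEGLDisplay, mEGLSurface); // submits the frame to the encoder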

Additionally, if the encoder and decoder callbacks are received on the same thread, the decoder's onOutputBufferAvailable, which does the rendering, can block the encoder callbacks from being delivered. If they aren't delivered, the rendering can block indefinitely, since the encoder never gets its output buffers returned. This can be fixed by making sure the video decoder callbacks are received on a different thread, which also avoids the issue with the onFrameAvailable callback.
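
One way to do that is to give each codec its own callback thread (a sketch; note that setCallback(callback, handler) requires API 23, while on API 21-22 you would instead create each codec from the thread whose Looper should deliver its callbacks):

    // One callback thread per codec, so the decoder's rendering can't
    // block delivery of the encoder's callbacks.
    HandlerThread decoderThread = new HandlerThread("DecoderCallbacks");
    HandlerThread encoderThread = new HandlerThread("EncoderCallbacks");
    decoderThread.start();
    encoderThread.start();

    decoder.setCallback(decoderCallback, new Handler(decoderThread.getLooper()));
    encoder.setCallback(encoderCallback, new Handler(encoderThread.getLooper()));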

I tried implementing all this on top of ExtractDecodeEditEncodeMuxTest, and got it working seemingly fine, have a look at https://github.com/mstorsjo/android-decodeencodetest. I initially imported the unchanged test, and did the conversion to asynchronous mode and fixes for the tricky details separately, to make it easy to look at the individual fixes in the commit log.
