Android MediaCodec Encode and Decode In Asynchronous Mode

Problem Description

I am trying to decode a video from a file and encode it into a different format with MediaCodec in the new Asynchronous Mode supported in API Level 21 and up (Android OS 5.0 Lollipop).

There are many examples for doing this in Synchronous Mode on sites such as Big Flake, Google's Grafika, and dozens of answers on StackOverflow, but none of them support Asynchronous mode.

I do not need to display the video during the process.

I believe that the general procedure is to read the file with a MediaExtractor as the input to a MediaCodec (decoder), allow the output of the Decoder to render into a Surface that is also the shared input to a MediaCodec (encoder), and then finally to write the Encoder output to a file via a MediaMuxer. The Surface is created during setup of the Encoder and shared with the Decoder.
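Concretely, my setup looks roughly like this (a simplified sketch; the codec type, resolution, bitrate and file paths are placeholders, and error handling is omitted):

    import java.io.IOException;

    import android.media.MediaCodec;
    import android.media.MediaCodecInfo;
    import android.media.MediaExtractor;
    import android.media.MediaFormat;
    import android.media.MediaMuxer;
    import android.view.Surface;

    class TranscodePipeline {
        MediaExtractor extractor;
        MediaCodec decoder, encoder;
        MediaMuxer muxer;

        void setUp(String inputPath, String outputPath) throws IOException {
            extractor = new MediaExtractor();
            extractor.setDataSource(inputPath);

            // Find and select the video track.
            MediaFormat inputFormat = null;
            for (int i = 0; i < extractor.getTrackCount(); i++) {
                MediaFormat format = extractor.getTrackFormat(i);
                if (format.getString(MediaFormat.KEY_MIME).startsWith("video/")) {
                    extractor.selectTrack(i);
                    inputFormat = format;
                    break;
                }
            }

            // Configure the encoder first: its input Surface is created here
            // and handed to the decoder below. (In asynchronous mode,
            // setCallback() must be called before configure().)
            MediaFormat outputFormat = MediaFormat.createVideoFormat("video/avc", 1280, 720);
            outputFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                    MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
            outputFormat.setInteger(MediaFormat.KEY_BIT_RATE, 2000000);
            outputFormat.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
            outputFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

            encoder = MediaCodec.createEncoderByType("video/avc");
            encoder.configure(outputFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
            Surface encoderInputSurface = encoder.createInputSurface();

            // The decoder renders straight into the encoder's input Surface.
            decoder = MediaCodec.createDecoderByType(
                    inputFormat.getString(MediaFormat.KEY_MIME));
            decoder.configure(inputFormat, encoderInputSurface, null, 0);

            muxer = new MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
            // ... set callbacks, start both codecs, and start the muxer once
            // the encoder reports its output format ...
        }
    }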

I can decode the video into a TextureView, but sharing the Surface with the Encoder instead of the screen has not been successful.

I set up MediaCodec.Callback()s for both of my codecs. I believe that one issue is that I do not know what to do in the Encoder callback's onInputBufferAvailable() function. I do not know how (or whether) to copy data from the Surface into the Encoder - that should happen automatically (as is done on the Decoder output with codec.releaseOutputBuffer(outputBufferId, true);). Yet, I believe that onInputBufferAvailable requires a call to codec.queueInputBuffer in order to function. I just don't know how to set the parameters without getting data from something like a MediaExtractor as used on the Decode side.

If you have an Example that opens up a video file, decodes it, encodes it to a different resolution or format using the asynchronous MediaCodec callbacks, and then saves it as a file, please share your sample code.

=== EDIT ===

Here is a working example in synchronous mode of what I am trying to do in asynchronous mode: ExtractDecodeEditEncodeMuxTest.java (https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/ExtractDecodeEditEncodeMuxTest.java). This example is working in my application.

Solution

I believe you shouldn't need to do anything in the encoder's onInputBufferAvailable() callback - you should not call encoder.queueInputBuffer(). Just as you never call encoder.dequeueInputBuffer() and encoder.queueInputBuffer() manually when doing Surface input encoding in synchronous mode, you shouldn't do it in asynchronous mode either.

When you call decoder.releaseOutputBuffer(outputBufferId, true); (in both synchronous and asynchronous mode), this internally (using the Surface you provided) dequeues an input buffer from the surface, renders the output into it, and enqueues it back to the surface (to the encoder). The only difference between synchronous and asynchronous mode is in how the buffer events are exposed in the public API, but when using Surface input, it uses a different (internal) API to access the same functionality, so synchronous vs asynchronous mode shouldn't matter for this at all.

So as far as I know (although I haven't tried it myself), you should just leave the onInputBufferAvailable() callback empty for the encoder.
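As an illustration, here is a minimal sketch of what the two callbacks could look like under the setup described in the question (assuming extractor, encoder, muxer and an int field mMuxerTrackIndex are reachable as class members, and java.nio.ByteBuffer is imported; setCallback() must be called before configure()):

    // Decoder: feed input from the MediaExtractor; rendering an output
    // buffer to the Surface pushes the frame into the encoder.
    decoder.setCallback(new MediaCodec.Callback() {
        @Override
        public void onInputBufferAvailable(MediaCodec codec, int index) {
            ByteBuffer buffer = codec.getInputBuffer(index);
            int size = extractor.readSampleData(buffer, 0);
            if (size >= 0) {
                codec.queueInputBuffer(index, 0, size, extractor.getSampleTime(), 0);
                extractor.advance();
            } else {
                codec.queueInputBuffer(index, 0, 0, 0,
                        MediaCodec.BUFFER_FLAG_END_OF_STREAM);
            }
        }

        @Override
        public void onOutputBufferAvailable(MediaCodec codec, int index,
                MediaCodec.BufferInfo info) {
            // Rendering (second argument true) forwards the frame to the
            // encoder's input Surface.
            codec.releaseOutputBuffer(index, info.size > 0);
            if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                encoder.signalEndOfInputStream();
            }
        }

        @Override
        public void onOutputFormatChanged(MediaCodec codec, MediaFormat format) { }

        @Override
        public void onError(MediaCodec codec, MediaCodec.CodecException e) { }
    });

    // Encoder: input arrives via the Surface, so onInputBufferAvailable
    // stays empty; output goes to the MediaMuxer.
    encoder.setCallback(new MediaCodec.Callback() {
        @Override
        public void onInputBufferAvailable(MediaCodec codec, int index) {
            // Intentionally left empty for Surface input.
        }

        @Override
        public void onOutputBufferAvailable(MediaCodec codec, int index,
                MediaCodec.BufferInfo info) {
            if ((info.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) == 0
                    && info.size > 0) {
                ByteBuffer encoded = codec.getOutputBuffer(index);
                muxer.writeSampleData(mMuxerTrackIndex, encoded, info);
            }
            codec.releaseOutputBuffer(index, false);
            // On BUFFER_FLAG_END_OF_STREAM, stop and release the muxer.
        }

        @Override
        public void onOutputFormatChanged(MediaCodec codec, MediaFormat format) {
            mMuxerTrackIndex = muxer.addTrack(format);
            muxer.start();
        }

        @Override
        public void onError(MediaCodec codec, MediaCodec.CodecException e) { }
    });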

EDIT: So, I tried doing this myself, and it's (almost) as simple as described above.

If the encoder input surface is configured directly as output to the decoder (with no SurfaceTexture in between), things just work, with a synchronous decode-encode loop converted into an asynchronous one.

If you use SurfaceTexture, however, you may run into a small gotcha. There is an issue with how one waits for frames to arrive at the SurfaceTexture in relation to the calling thread; see https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/DecodeEditEncodeTest.java#106 and https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/EncodeDecodeTest.java#104 and https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/OutputSurface.java#113 for references to this.

The issue, as far as I see it, is in awaitNewImage as in https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/OutputSurface.java#240. If the onFrameAvailable callback is supposed to be called on the main thread, we have an issue if the awaitNewImage call also is run on the main thread. If the onOutputBufferAvailable callbacks also are called on the main thread and you call awaitNewImage from there, we have an issue, since you'll end up waiting for a callback (with a wait() that blocks the whole thread) that can't be run until the current method returns.

So we need to make sure that the onFrameAvailable callbacks come on a different thread than the one that calls awaitNewImage. One pretty simple way of doing this is to create a new separate thread, that does nothing but service the onFrameAvailable callbacks. To do that, you can do e.g. this:

    // A thread whose only job is to service the onFrameAvailable callbacks:
    private HandlerThread mHandlerThread = new HandlerThread("CallbackThread");
    private Handler mHandler;
...
        // Start it and bind a Handler to its looper:
        mHandlerThread.start();
        mHandler = new Handler(mHandlerThread.getLooper());
...
        // Deliver the SurfaceTexture callbacks on that thread (this
        // two-argument overload is available since API 21):
        mSurfaceTexture.setOnFrameAvailableListener(this, mHandler);
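
(When tearing everything down, this helper thread presumably also needs to be stopped, e.g. with mHandlerThread.quitSafely(), after the SurfaceTexture has been released.)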

I hope this is enough for you to be able to solve your issue, let me know if you need me to edit one of the public examples to implement asynchronous callbacks there.

EDIT2: Also, since the GL rendering might be done from within the onOutputBufferAvailable callback, this might be a different thread than the one that set up the EGL context. So in that case, one needs to release the EGL context in the thread that set it up, like this:

mEGL.eglMakeCurrent(mEGLDisplay, EGL10.EGL_NO_SURFACE, EGL10.EGL_NO_SURFACE, EGL10.EGL_NO_CONTEXT);

And reattach it in the other thread before rendering:

mEGL.eglMakeCurrent(mEGLDisplay, mEGLSurface, mEGLSurface, mEGLContext);
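
Put together, one way to handle this (a sketch; the mContextAttached flag is a hypothetical field, not something from the CTS examples) is to detach the context on the setup thread right after creating it, and lazily attach it on the thread that actually renders:

    // On the thread that set up EGL, after initialization:
    mEGL.eglMakeCurrent(mEGLDisplay, EGL10.EGL_NO_SURFACE,
            EGL10.EGL_NO_SURFACE, EGL10.EGL_NO_CONTEXT);

    // On the rendering thread (e.g. inside the decoder's
    // onOutputBufferAvailable), before the first GL call:
    if (!mContextAttached) {
        mEGL.eglMakeCurrent(mEGLDisplay, mEGLSurface, mEGLSurface, mEGLContext);
        mContextAttached = true;
    }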

EDIT3: Additionally, if the encoder and decoder callbacks are received on the same thread, the decoder's onOutputBufferAvailable, which does the rendering, can block the encoder's callbacks from being delivered. If they aren't delivered, the rendering can block indefinitely, since the encoder never gets its output buffers returned. This can be fixed by making sure the video decoder callbacks are received on a different thread, which also avoids the issue with the onFrameAvailable callback.
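
A sketch of one way to do that is to give the decoder its own callback thread. (The two-argument setCallback() overload shown here was added in API 23; on API 21/22, creating the codec from a thread that has a Looper has a similar effect, since callbacks are delivered to the creating thread's looper by default. decoderCallback stands for the MediaCodec.Callback object from earlier.)

    // Dedicated thread for the decoder's callbacks, so rendering in its
    // onOutputBufferAvailable can't block the encoder's callbacks.
    HandlerThread decoderCallbackThread = new HandlerThread("DecoderCallbacks");
    decoderCallbackThread.start();
    Handler decoderCallbackHandler = new Handler(decoderCallbackThread.getLooper());

    // This must still be called before decoder.configure().
    decoder.setCallback(decoderCallback, decoderCallbackHandler); // API 23+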

I tried implementing all this on top of ExtractDecodeEditEncodeMuxTest, and got it working seemingly fine, have a look at https://github.com/mstorsjo/android-decodeencodetest. I initially imported the unchanged test, and did the conversion to asynchronous mode and fixes for the tricky details separately, to make it easy to look at the individual fixes in the commit log.
