H264 HW accelerated decoding in Android using stagefright library


Question


I'm trying to decode H.264 video using HW with the Stagefright library.


I have used an example from here. I'm getting the decoded data in a MediaBuffer. For rendering MediaBuffer->data() I tried AwesomeLocalRenderer in AwesomePlayer.cpp.

But the picture on the screen is distorted.


Here is the link to the original and the corrupted picture.


And I also tried this example:

sp<MetaData> metaData = mVideoBuffer->meta_data();
int64_t timeUs = 0;
metaData->findInt64(kKeyTime, &timeUs);
// Propagate the frame timestamp (us -> ns) to the native window.
native_window_set_buffers_timestamp(mNativeWindow.get(), timeUs * 1000);
// Hand the gralloc-backed buffer back to the native window for display.
err = mNativeWindow->queueBuffer(mNativeWindow.get(),
                                 mVideoBuffer->graphicBuffer().get(), -1);


But my native code crashes. I can't get the real picture; it is either corrupted or a black screen.


Please, I need your help. What am I doing wrong?

Thanks in advance.

Answer


If you are using a HW accelerated decoder, then the allocation on the output port of your component would have been based on a Native Window. In other words, the output buffer is basically a gralloc handle which has been passed by the Stagefright framework. (Ref: OMXCodec::allocateOutputBuffersFromNativeWindow). Hence, the MediaBuffer being returned shouldn't be interpreted as a plain YUV buffer.

In case of AwesomeLocalRenderer, the framework performs a software color conversion when mTarget->render is invoked, as shown in SoftwareRenderer.cpp (http://androidxref.com/4.4.2_r1/xref/frameworks/av/media/libstagefright/colorconversion/SoftwareRenderer.cpp#140). If you trace the code flow, you will find that the MediaBuffer content is directly interpreted as a YUV buffer.


For HW accelerated codecs, you should be employing AwesomeNativeWindowRenderer. If you have any special conditions for employing AwesomeLocalRenderer, please do highlight the same. I can refine this response appropriately.

P.S: For debug purposes, you could also refer to this question (http://stackoverflow.com/questions/21717728/how-to-dump-yuv-from-omxcodec-decoding-output), which captures methods to dump the YUV data and analyze it.

