Hardware accelerated video decode for H.264 in Android prior to Jelly Bean

Problem description

I am working on a video conferencing project. We were using a software codec for encoding and decoding video frames, which does fine for lower resolutions (up to 320p). We have planned to support our application at higher resolutions as well, up to 720p. I came to know that hardware acceleration will do this job fairly well.

As the hardware codec API MediaCodec is available from Jelly Bean onward, I have used it for encoding and decoding, and it works fine. But my application has to be supported from 2.3 onward. So I need a hardware accelerated video decoder for H.264 frames at 720p and 30fps.

While researching, I came across the idea of using OMXCodec by modifying the Stagefright framework. I had read that the hardware decoder for H.264 is available from 2.1 and the encoder from 3.0. I have gone through many articles and questions on this site and confirmed that I can go ahead.

I had read about the Stagefright architecture here (architecture) and here (stagefright: how it works).

And I read about OMXCodec here: use-android-hardware-decoder-with-omxcodec-in-ndk.

I am having trouble getting started and have some confusion about the implementation. I would like some info about the following:

  1. To use OMXCodec in my code, do I have to build my project within the entire Android source tree, or can I do it by adding some files from the AOSP source (and if so, which ones)?
  2. What are the steps I should follow, from scratch, to achieve this?

Can someone give me a guideline on this?

Thanks...

Answer

The best example describing the integration of OMXCodec in the native layer is the command-line utility stagefright, as can be observed here in Gingerbread itself. This example shows how an OMXCodec is created.
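A minimal sketch of that creation path, modeled on the Gingerbread stagefright.cpp utility, follows. The helper name createHardwareDecoder is mine, and the exact headers and the OMXCodec::Create signature vary between releases (later ones add an ANativeWindow parameter), so verify everything against your own source tree.

#include <binder/ProcessState.h>
#include <media/stagefright/MediaSource.h>
#include <media/stagefright/MetaData.h>
#include <media/stagefright/OMXClient.h>
#include <media/stagefright/OMXCodec.h>

using namespace android;

// videoSource must deliver compressed H.264 access units and report
// "video/avc" plus width/height in its format metadata.
sp<MediaSource> createHardwareDecoder(const sp<MediaSource> &videoSource) {
    // Stagefright talks to the media server over binder, so the caller's
    // process needs a running binder thread pool.
    ProcessState::self()->startThreadPool();

    // Connect to the OMX node hosted in the media server.
    OMXClient client;
    if (client.connect() != OK) {
        return NULL;
    }

    // OMXCodec picks a matching (hardware, if available) component
    // based on the MIME type in the source's format metadata.
    return OMXCodec::Create(
            client.interface(),
            videoSource->getFormat(),
            false,          // createEncoder = false: we want a decoder
            videoSource);   // the MediaSource feeding compressed frames
}

Once created, the decoder is itself a MediaSource: call start() on it, pull decoded frames with read() in a loop, release each MediaBuffer after use, and finish with stop().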

A few points to note:

  1. The input to OMXCodec should be modeled as a MediaSource, and hence you should ensure that your application handles this requirement. An example of creating a MediaSource-based source can be found in the record utility file as DummySource (see the sketch after this list).

  2. The input to the decoder, i.e. the MediaSource, should provide the data through the read method, and hence your application should provide an individual frame for every read call.

  3. The decoder could be created with a NativeWindow for output buffer allocation. In that case, if you wish to access the buffers from the CPU, you should probably refer to this query for more details.
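To make points 1 and 2 concrete, here is a hedged sketch of a DummySource-style wrapper that hands the decoder one compressed frame per read call. NetworkH264Source and fetchNextAccessUnit are hypothetical names standing in for your conferencing stack's frame delivery path, not Stagefright APIs, and interface details differ slightly across releases.

#include <media/stagefright/MediaBuffer.h>
#include <media/stagefright/MediaBufferGroup.h>
#include <media/stagefright/MediaDefs.h>
#include <media/stagefright/MediaErrors.h>
#include <media/stagefright/MediaSource.h>
#include <media/stagefright/MetaData.h>

using namespace android;

// Hypothetical wrapper that feeds network H.264 access units to OMXCodec.
struct NetworkH264Source : public MediaSource {
    NetworkH264Source(int width, int height) {
        mFormat = new MetaData;
        mFormat->setCString(kKeyMIMEType, MEDIA_MIMETYPE_VIDEO_AVC);
        mFormat->setInt32(kKeyWidth, width);
        mFormat->setInt32(kKeyHeight, height);
    }

    virtual sp<MetaData> getFormat() { return mFormat; }

    virtual status_t start(MetaData *params) {
        // Pre-allocate a buffer large enough for one compressed frame.
        mGroup.add_buffer(new MediaBuffer(256 * 1024));
        return OK;
    }

    virtual status_t stop() { return OK; }

    // OMXCodec pulls input from here: one access unit per read() call.
    virtual status_t read(MediaBuffer **out, const ReadOptions *options) {
        MediaBuffer *buffer;
        status_t err = mGroup.acquire_buffer(&buffer);
        if (err != OK) {
            return err;
        }

        size_t size;
        int64_t timeUs;
        if (!fetchNextAccessUnit(buffer->data(), &size, &timeUs)) {
            buffer->release();
            return ERROR_END_OF_STREAM;
        }

        buffer->set_range(0, size);
        buffer->meta_data()->setInt64(kKeyTime, timeUs);
        *out = buffer;
        return OK;
    }

private:
    sp<MetaData> mFormat;
    MediaBufferGroup mGroup;

    // Placeholder for your own network code: copy the next H.264 frame
    // into 'data', report its size and timestamp, and return false when
    // the stream ends.
    bool fetchNextAccessUnit(void *data, size_t *size, int64_t *timeUs);
};

An instance of such a source is what you would pass to OMXCodec::Create in the earlier sketch.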
