Use MediaCodec for H264 streaming


Problem Description

I'm currently trying to use Android as a Skype endpoint. At this stage, I need to encode video into H.264 (since it's the only format supported by Skype) and encapsulate it with RTP in order to make the streaming work.

Apparently MediaRecorder is not well suited for this, for various reasons. One is that it adds the MP4 or 3GP headers only after recording has finished. Another is that, in order to reduce latency to a minimum, hardware acceleration may come in handy. That's why I would like to make use of the recent low-level additions to the framework, such as MediaCodec and MediaExtractor.

At the moment, I plan on working as follows. The camera writes its video into a buffer. MediaCodec encodes the video with H264 and writes the result to another buffer. This buffer is read by an RTP encapsulator, which sends the stream data to the server. Here's my first question: does this plan sound feasible to you?

Now I'm already stuck on step one. Since all the documentation on the internet about using the camera relies on MediaRecorder, I cannot find a way to store the camera's raw data in a buffer before encoding. Is addCallbackBuffer suited for this? Does anyone have a link to an example?

Next, I cannot find much documentation about MediaCodec (since it's fairly new). Does anyone have a solid tutorial?

Lastly: any recommendations on RTP libraries?

Thanks a lot!

Solution

UPDATE
I was finally able to create proper RTP packets from the H.264 frames. Here's what you have to keep in mind (it's actually quite simple):

The encoder does create NAL headers for each frame, but it returns each frame as an H.264 Annex B bytestream. This means each frame begins with a start code: three zero bytes followed by a one byte (some encoders emit the shorter two-zeroes-and-a-one form). All you have to do is strip that start prefix and put the frame into an RTP packet (or split it across several packets using FU-As).
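Stripping the start code can be sketched as follows; this handles both the 4-byte and 3-byte Annex B prefixes (the class and method names here are my own, not from any library):

```java
import java.util.Arrays;

public class NalUtil {
    // Remove the Annex B start code (0x00 0x00 0x00 0x01 or 0x00 0x00 0x01)
    // from an encoded frame so the bare NAL unit can go into an RTP payload.
    public static byte[] stripStartCode(byte[] frame) {
        if (frame.length >= 4 && frame[0] == 0 && frame[1] == 0
                && frame[2] == 0 && frame[3] == 1) {
            return Arrays.copyOfRange(frame, 4, frame.length);
        }
        if (frame.length >= 3 && frame[0] == 0 && frame[1] == 0 && frame[2] == 1) {
            return Arrays.copyOfRange(frame, 3, frame.length);
        }
        return frame; // no start code found; return the frame unchanged
    }
}
```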

Now, for your questions:

I cannot find a way to store its raw data into a buffer before encoding. Is addCallbackBuffer suited for this?

You should use camera.setPreviewCallback(...) and feed each frame to the encoder.
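A minimal sketch of that setup, using the android.hardware.Camera API that was current at the time. The FrameSink interface, the 640x480 size, and the buffer-recycling variant setPreviewCallbackWithBuffer (which is what addCallbackBuffer pairs with) are my own choices, not from the original answer:

```java
import android.hardware.Camera;

public class PreviewCapture {
    // Hypothetical sink; stands in for whatever feeds the MediaCodec.
    public interface FrameSink { void onFrame(byte[] nv21); }

    public static Camera start(final FrameSink sink) {
        Camera camera = Camera.open();
        Camera.Parameters params = camera.getParameters();
        params.setPreviewSize(640, 480);
        camera.setParameters(params);

        // addCallbackBuffer + setPreviewCallbackWithBuffer reuses one buffer
        // instead of allocating per frame; NV21 (the default preview
        // format) takes 12 bits per pixel.
        camera.addCallbackBuffer(new byte[640 * 480 * 3 / 2]);
        camera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
            @Override
            public void onPreviewFrame(byte[] data, Camera cam) {
                sink.onFrame(data);          // hand the raw frame to the encoder
                cam.addCallbackBuffer(data); // recycle the buffer for the next frame
            }
        });
        camera.startPreview();
        return camera;
    }
}
```

Note that with plain setPreviewCallback the framework allocates a new byte array for every frame, which causes GC pressure at video rates; that is the problem addCallbackBuffer solves.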

I cannot find much documentation about MediaCodec (since it's fairly new). Does anyone have a solid tutorial?

This should be a good introduction to how MediaCodec works: http://dpsm.wordpress.com/2012/07/28/android-mediacodec-decoded/

Lastly: any recommendations on RTP libraries?

I'm using jlibrtp, which gets the job done.
