Live streaming through MP4

Question

I am working on an online TV service. One of the goals is for the video to be played without any additional browser plug-ins (except for Flash).

I decided to use MP4, because it is supported by the majority of HTML5 browsers and by Flash (for fallback). The videos are transcoded from ASF on a server by FFMpeg.

However, I found that MP4 cannot be live-streamed as-is, because its moov atom holds the metadata and has to specify the duration. FFmpeg cannot stream MP4 directly to stdout, because it writes the moov at the end of the file. (Live transcoding and streaming of MP4 works on Android, but fails in Flash Player with a NetStream.Play.FileStructureInvalid error.)
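
To make the moov problem concrete, here is a minimal sketch of the top-level MP4 box layout, using toy byte strings rather than real encoded media: each box is a 32-bit big-endian size followed by a four-character type, and in FFmpeg's default output the moov index comes after the mdat media data, which is why the file cannot be played until it is complete.

```python
import struct

def parse_top_level_boxes(data: bytes):
    """Walk the top-level MP4 boxes and return their (type, size) in file order."""
    boxes = []
    offset = 0
    while offset + 8 <= len(data):
        size, = struct.unpack(">I", data[offset:offset + 4])
        box_type = data[offset + 4:offset + 8].decode("ascii")
        boxes.append((box_type, size))
        offset += size
    return boxes

def make_box(box_type: bytes, payload: bytes) -> bytes:
    """Assemble one box: 4-byte big-endian size, 4-byte type, payload."""
    return struct.pack(">I", 8 + len(payload)) + box_type + payload

# A toy file laid out the way FFmpeg writes a regular (non-fragmented) MP4:
# ftyp first, then the media data (mdat), and the moov index last.
toy_mp4 = (
    make_box(b"ftyp", b"isom\x00\x00\x02\x00isomiso2")
    + make_box(b"mdat", b"\x00" * 64)   # stand-in for encoded frames
    + make_box(b"moov", b"\x00" * 32)   # stand-in for the index/metadata
)

print([t for t, _ in parse_top_level_boxes(toy_mp4)])  # ['ftyp', 'mdat', 'moov']
```

A player that needs the moov before it can decode therefore has to wait for the whole file, which is exactly the live-streaming problem described above.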

Of course, MPEG-TS exists, but it is not supported by HTML5 <video>.

What I had in mind is a method to transcode the stream to MP4 in real time and, on each new HTTP request for it, first send a moov that declares a very large number for the video's duration, and then start sending the rest of the MP4 file.

Is it possible to use MP4 for streaming that way?

After some research and av501's answer, I understand that the sizes of the frames must be known in advance for this to work.

Can the mp4 file be segmented into smaller parts so that it can be streamed?

Of course, switching to another container/format is an option, but the only format compatible with both Flash and HTML5 is mp4/h264, so if I have to support both, I'd have to transcode twice.
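
As a side note on the segmentation question: FFmpeg does have a mode that avoids the trailing moov entirely. With `-movflags frag_keyframe+empty_moov` it writes an empty moov up front and then self-contained moof/mdat fragments, so the output needs no seeking and can go straight to stdout. Whether Flash's NetStream accepts such a stream is a separate question. Below is only a sketch of the invocation (the input name `input.asf` and codec choices are placeholders), built in Python so the command list can be handed to `subprocess`:

```python
import subprocess  # used when actually launching the transcoder

def fragmented_mp4_command(input_url):
    """Build an FFmpeg command that writes fragmented MP4 to stdout.

    'frag_keyframe+empty_moov' makes FFmpeg emit an empty moov first and
    then moof/mdat fragments, so no seekable output file is needed.
    'input_url' is a placeholder for the live ASF source.
    """
    return [
        "ffmpeg",
        "-i", input_url,
        "-c:v", "libx264",
        "-c:a", "aac",
        "-movflags", "frag_keyframe+empty_moov",
        "-f", "mp4",
        "pipe:1",
    ]

cmd = fragmented_mp4_command("input.asf")
print(" ".join(cmd))
# To stream: proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
# and relay proc.stdout to each HTTP client in chunks.
```
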

Answer

Here are my thoughts; some of this might be right on, other parts way off. I plead ignorance, because no one has really documented this process fully; it's all an educated guess.

AVAssetWriter only encodes to a file; there seems to be no way to get the encoded video into memory. Reading the file from a background thread while it is being written, and sending it to, say, a socket, results in an elementary stream. This is essentially an m4v: a container with h264/aac media data but no moov atom (in other words, no header). No Apple-supplied player can play this stream, but a modified player based on ffplay should be able to decode and play it. This should work because ffplay uses libavformat, which can decode elementary streams. One caveat: since there is no file-length information, some things, such as the DTS and PTS, have to be determined by the player, and the player cannot seek within the file.
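
The "read the file from a background thread while it is being written" step can be sketched as a polling loop. This is Python rather than Objective-C, purely to illustrate the technique; the chunk size and sleep interval are arbitrary choices:

```python
import os
import tempfile
import time

def tail_file(path, chunk_size=4096, poll_interval=0.05):
    """Yield chunks of a file as another process/thread appends to it."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if chunk:
                yield chunk                # forward to a socket, HTTP response, ...
            else:
                time.sleep(poll_interval)  # writer hasn't produced more data yet

# Toy demonstration with a file that already contains some data:
fd, path = tempfile.mkstemp()
os.write(fd, b"elementary-stream-bytes")
os.close(fd)

reader = tail_file(path)
print(next(reader))  # b'elementary-stream-bytes'
reader.close()
os.unlink(path)
```

In a real setup the writer (AVAssetWriter, or FFmpeg writing a file) and this reader would run concurrently, and some termination condition would replace the infinite loop.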

Alternatively, the raw NALUs from the m4v stream can be used to construct an RTMP stream.

If you want to discuss further you can contact me directly.

How to get the data.

Since you're going to have to rebuild the file on the receiving side anyway, I guess you could just segment it. Steve Mcfarin wrote a little appleSegmentedEcorder that you can find on his GitHub page; it solves some of the issues with the moov atom, since you have all the file info.
