HTTP live streaming server on iPhone


Problem Description

I am trying to run an HTTP Live Streaming server on iPhone, which captures the video stream from the camera and feeds it to an HTML5 client (which supports HTTP Live Streaming).

So far, I have the following working:


  1. An HTTP Live Streaming server on iOS (written in Node.js), which dynamically
     updates the index file from the list of Transport Stream (video/MP2T) files
     generated by the video capture module (a minimal sample of such an index file
     is sketched after this list).

  2. A video capture module, which uses AVCaptureMovieFileOutput to continuously
     generate a series of 10-second QuickTime files (there is a small gap between
     them, but it is small enough for my application).
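
For reference, the index file mentioned in item 1 is just a plain-text m3u8 playlist. A minimal sketch of a live (sliding-window) playlist, with hypothetical segment names, might look like this:

    #EXTM3U
    #EXT-X-VERSION:3
    #EXT-X-TARGETDURATION:10
    #EXT-X-MEDIA-SEQUENCE:17

    #EXTINF:10.0,
    segment-17.ts
    #EXTINF:10.0,
    segment-18.ts
    #EXTINF:10.0,
    segment-19.ts

The server rewrites this file each time the capture module finishes a new .ts segment, bumping #EXT-X-MEDIA-SEQUENCE as old segments are dropped from the window.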

What I need is an on-the-fly converter that converts each QuickTime file into a Transport Stream file (no need to change the encoding; I just need a different container), which bridges the two modules above.

I am taking this approach because, as far as I know, it is the only way to take advantage of the iPhone's hardware video encoder (I've done quite a bit of research on this topic, and I'm 99% sure. Please let me know if I am wrong).

A few people suggested ffmpeg, but I'd rather use much smaller code with an MIT license (if any exists) or write something from scratch (and open-source it under the MIT license).

I'm quite new to this media-container business, and I'd really appreciate it if somebody could point me in the right direction (sample code, open source, documents, ...).

Recommended Answer

I posted this on the Apple developer forum, where we carried on a lively (excuse the pun) discussion. This was in answer to someone who brought up a similar notion.

Correct me if I am wrong (and give us an example if you disagree): creating an MPEG-TS from the raw H.264 you get from AVCaptureVideoDataOutput is not an easy task unless you transcode using x264 or something similar. Let's assume for a minute that you could easily get MPEG-TS files; then it would be a simple matter of listing them in an m3u8 playlist, launching a little web server, and serving them. As far as I know (and there are many, many apps that do it), using localhost tunnels from the device is not a rejection issue. So maybe you could somehow generate HLS from the device, but I question the performance you would get.

So on to technique number 2. Still using AVCaptureVideoDataOutput, you capture the frames, wrap them in some neat little protocol (JSON, or perhaps something more esoteric like bencode), open a socket, and send them to your server. Ahh... good luck; you had better have a nice, robust network, because sending uncompressed frames even over Wi-Fi is going to require bandwidth.
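
To make that concrete, here is a minimal Swift sketch of technique 2. The host, port, and the 4-byte length-prefixed framing are assumptions made up for illustration; the point is to show why raw frames eat bandwidth, since each captured pixel buffer leaves the device uncompressed.

    import AVFoundation
    import Network

    // Sketch: grab uncompressed frames from AVCaptureVideoDataOutput and push
    // them to a server over a plain TCP socket. Host/port and wire framing are
    // illustrative assumptions.
    final class RawFrameStreamer: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
        let session = AVCaptureSession()
        private let videoOutput = AVCaptureVideoDataOutput()
        private let captureQueue = DispatchQueue(label: "capture.frames")
        private let upstream = NWConnection(host: "192.168.0.10", port: 9000, using: .tcp)

        func start() throws {
            guard let camera = AVCaptureDevice.default(for: .video) else { return }
            session.addInput(try AVCaptureDeviceInput(device: camera))
            videoOutput.setSampleBufferDelegate(self, queue: captureQueue)
            session.addOutput(videoOutput)
            upstream.start(queue: captureQueue)
            session.startRunning()
        }

        // Called once per captured frame; each one is an uncompressed pixel buffer.
        func captureOutput(_ output: AVCaptureOutput,
                           didOutput sampleBuffer: CMSampleBuffer,
                           from connection: AVCaptureConnection) {
            guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
            CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
            defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }

            guard let base = CVPixelBufferGetBaseAddress(pixelBuffer) else { return }
            let length = CVPixelBufferGetDataSize(pixelBuffer)
            let frame = Data(bytes: base, count: length)

            // Assumed wire format: 4-byte big-endian length, then the raw frame bytes.
            var prefix = UInt32(length).bigEndian
            var packet = Data(bytes: &prefix, count: MemoryLayout<UInt32>.size)
            packet.append(frame)
            upstream.send(content: packet, completion: .idempotent)
        }
    }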

So on to technique 3.

You write a new movie using AVAssetWriter and read back from the temp file using standard C functions. This is fine, but what you have is raw H.264: the MP4 is not complete, so it does not have any moov atom, and now comes the fun part of regenerating this header. Good luck.
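
An abbreviated Swift sketch of technique 3 follows, with hypothetical paths, dimensions, and bitrate choices. It shows the shape of the approach and where it breaks down: until finishWriting() runs, the file on disk has no moov atom, so the bytes read back are not a playable movie.

    import AVFoundation

    // Sketch of technique 3: compress with AVAssetWriter, then read the growing
    // temp file back with plain file I/O. Paths and settings are assumptions.
    let movieURL = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("live.mp4")
    let writer = try AVAssetWriter(outputURL: movieURL, fileType: .mp4)

    let videoSettings: [String: Any] = [
        AVVideoCodecKey: AVVideoCodecType.h264,   // hardware H.264 encoder
        AVVideoWidthKey: 1280,
        AVVideoHeightKey: 720
    ]
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: videoSettings)
    input.expectsMediaDataInRealTime = true
    writer.add(input)
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)
    // ... append CMSampleBuffers from the capture callback via input.append(_:) ...

    // Meanwhile, another thread tails the file with ordinary reads:
    let handle = try FileHandle(forReadingFrom: movieURL)
    let chunk = handle.readData(ofLength: 64 * 1024)   // raw mdat bytes, no moov header yet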

So on to technique 4, which seems to actually have some merit.

We create not one but two AVAssetWriters, and we manage them using a GCD dispatch_queue. Since, after instantiation, an AVAssetWriter can only be used one time, we start the first one on a timer; after a predetermined period, say 10 seconds, we start the second while tearing the first one down. Now we have a series of .mov files with complete moov atoms, each of them containing compressed H.264 video. Now we can send these to the server and assemble them into one complete video stream. Alternatively, we could use a simple streamer that takes the .mov files, wraps them in the RTMP protocol using librtmp, and sends them to a media server.
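
A hedged Swift sketch of this double-writer rotation follows. The segment naming, dimensions, 10-second period, and the uploadSegment() hand-off are assumptions for illustration; the essential idea is that one writer is always recording while the other is finished off, so each segment ends up as a self-contained .mov with its moov atom.

    import AVFoundation

    // Sketch of technique 4: alternate two AVAssetWriters so that a complete
    // .mov is finished every ~10 seconds while capture keeps feeding the other.
    final class SegmentedMovieWriter {
        private let rotationQueue = DispatchQueue(label: "writer.rotation")
        private var rotationTimer: DispatchSourceTimer?
        private var activeWriter: AVAssetWriter?
        private var activeInput: AVAssetWriterInput?
        private var sessionStarted = false
        private var segmentIndex = 0

        func start() {
            rotationQueue.async { self.rotateWriter() }
            // Every 10 seconds, close the current segment and open the next one.
            let timer = DispatchSource.makeTimerSource(queue: rotationQueue)
            timer.schedule(deadline: .now() + 10, repeating: 10)
            timer.setEventHandler { [weak self] in self?.rotateWriter() }
            timer.resume()
            rotationTimer = timer
        }

        // Called with each video sample buffer from the capture callback.
        func append(_ sampleBuffer: CMSampleBuffer) {
            rotationQueue.async {
                guard let writer = self.activeWriter, let input = self.activeInput,
                      writer.status == .writing else { return }
                if !self.sessionStarted {
                    // Start the segment's timeline at the first frame's timestamp.
                    writer.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
                    self.sessionStarted = true
                }
                if input.isReadyForMoreMediaData { input.append(sampleBuffer) }
            }
        }

        private func rotateWriter() {
            // finishWriting() writes the moov atom, leaving a self-contained
            // movie file that can be shipped to the server.
            if let finished = activeWriter {
                activeInput?.markAsFinished()
                finished.finishWriting {
                    self.uploadSegment(finished.outputURL)
                }
            }

            // AVAssetWriter is single-use, so every segment needs a fresh instance.
            segmentIndex += 1
            let url = URL(fileURLWithPath: NSTemporaryDirectory())
                .appendingPathComponent("segment-\(segmentIndex).mov")
            try? FileManager.default.removeItem(at: url)
            guard let writer = try? AVAssetWriter(outputURL: url, fileType: .mov) else { return }

            let settings: [String: Any] = [
                AVVideoCodecKey: AVVideoCodecType.h264,  // hardware encoder
                AVVideoWidthKey: 1280,
                AVVideoHeightKey: 720
            ]
            let input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
            input.expectsMediaDataInRealTime = true
            writer.add(input)
            writer.startWriting()

            activeWriter = writer
            activeInput = input
            sessionStarted = false
        }

        private func uploadSegment(_ url: URL) {
            // Placeholder: POST the finished .mov to the server, or wrap it in
            // RTMP with librtmp, as described above.
        }
    }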

Could we just send each individual .mov file to another Apple device, thus getting device-to-device communication? That question has been misinterpreted many, many times. Locating another iPhone on the same subnet over Wi-Fi is pretty easy and can be done (a sketch using Bonjour follows below). Locating another device over TCP on a cellular connection is almost magical; if it can be done at all, it is only possible on cell networks that hand out addressable IPs, and not all common carriers do.
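
For the easy Wi-Fi case, a minimal sketch of same-subnet discovery using Bonjour follows. The service type, name, and port are made up for illustration; the cellular case has no equivalent precisely because most carriers do not give devices addressable IPs.

    import Foundation

    // Sketch of same-subnet peer discovery over Wi-Fi using Bonjour.
    // The "_movsegments._tcp." service type and port 9000 are assumptions.

    // On the sending device: advertise a service.
    let service = NetService(domain: "local.", type: "_movsegments._tcp.", name: "camera-phone", port: 9000)
    service.publish()

    // On the receiving device: browse for it.
    final class PeerFinder: NSObject, NetServiceBrowserDelegate {
        let browser = NetServiceBrowser()

        func start() {
            browser.delegate = self
            browser.searchForServices(ofType: "_movsegments._tcp.", inDomain: "local.")
        }

        func netServiceBrowser(_ browser: NetServiceBrowser,
                               didFind service: NetService,
                               moreComing: Bool) {
            // Resolve the service to get the peer's address, then open a socket
            // to it and start pushing the .mov segments.
            print("Found peer: \(service.name)")
        }
    }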

Say you could; then you have an additional issue, because none of the AVFoundation video players will be able to handle the transition between that many separate movie files. You would probably have to write your own streaming player, likely based on ffmpeg decoding. (That does work rather well.)

