How to get real time video stream from iPhone camera and send it to server?

Question

I am using AVCaptureSession to capture video and get real-time frames from the iPhone camera, but how can I send them to a server, multiplexing the frames with the sound, and how can I use ffmpeg to complete this task? If anyone has a tutorial about ffmpeg or any example, please share it here.

Answer

The way I'm doing it is to implement an AVCaptureSession, which has a delegate with a callback that's run on every frame. That callback sends each frame over the network to the server, which has a custom setup to receive it.

The flow is described here:

http://developer.apple.com/library/ios/#documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/03_MediaCapture.html#//apple_ref/doc/uid/TP40010188-CH5-SW2

Here's some code:

// make input device
NSError *deviceError;
AVCaptureDevice *cameraDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *inputDevice = [AVCaptureDeviceInput deviceInputWithDevice:cameraDevice error:&deviceError];

// make output device
AVCaptureVideoDataOutput *outputDevice = [[AVCaptureVideoDataOutput alloc] init];
[outputDevice setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

// initialize capture session
AVCaptureSession *captureSession = [[[AVCaptureSession alloc] init] autorelease];
[captureSession addInput:inputDevice];
[captureSession addOutput:outputDevice];

// make preview layer and add so that camera's view is displayed on screen
AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
previewLayer.frame = view.bounds;
[view.layer addSublayer:previewLayer];

// go!
[captureSession startRunning];

Then the output device's delegate (here, self) has to implement the callback:

-(void)captureOutput:(AVCaptureOutput*)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection*)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer( sampleBuffer );
    CGSize imageSize = CVImageBufferGetEncodedSize( imageBuffer );
    // also in the 'mediaSpecific' dict of the sampleBuffer
    NSLog( @"frame captured at %.fx%.f", imageSize.width, imageSize.height );
}
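
If you do need the raw pixel data inside that callback (for example, to hand it to an encoder or push it over a socket), a minimal sketch could look like the following. It assumes the AVCaptureVideoDataOutput's videoSettings requested a packed format such as kCVPixelFormatType_32BGRA, and self.outputStream is a hypothetical NSOutputStream already connected to your server; neither is part of the original answer.

// Inside captureOutput:didOutputSampleBuffer:fromConnection:, using the
// imageBuffer obtained above. Assumes a packed pixel format (e.g.
// kCVPixelFormatType_32BGRA); planar formats need per-plane access.
CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);

uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
size_t   bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t   height      = CVPixelBufferGetHeight(imageBuffer);

// self.outputStream is a hypothetical NSOutputStream opened to your server;
// it is not part of the original answer.
[self.outputStream write:baseAddress maxLength:bytesPerRow * height];

CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);

As the rest of the answer points out, pushing raw frames like this is only a starting point; in practice you will want to encode before sending.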

Sending raw frames or individual images will never work well enough for you (because of the amount of data and number of frames). Nor can you reasonably serve anything from the phone (WWAN networks have all sorts of firewalls). You'll need to encode the video, and stream it to a server, most likely over a standard streaming format (RTSP, RTMP). There is an H.264 encoder chip on the iPhone >= 3GS. The problem is that it is not stream oriented. That is, it outputs the metadata required to parse the video last. This leaves you with a few options.

1) Get the raw data and use FFmpeg to encode on the phone (will use a ton of CPU and battery).

2) Write your own parser for the H.264/AAC output (very hard).

3) Record and process in chunks (will add latency equal to the length of the chunks, and drop around 1/4 second of video between each chunk as you start and stop the sessions). A rough sketch of this chunked approach is shown below.
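
As an illustration of option 3 only (not the original answer's code), one way to get encoded chunks is to drive an AVAssetWriter from the same capture callback and roll over to a new file every few seconds. In this sketch, chunkURL and the upload step are placeholders you would supply yourself, and the width/height values are arbitrary assumptions.

// Sketch of the chunked approach: write short H.264-encoded movie files
// with AVAssetWriter and upload each finished chunk.
// 'chunkURL' is a hypothetical file URL for the current chunk.
NSError *writerError = nil;
AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:chunkURL
                                                 fileType:AVFileTypeQuickTimeMovie
                                                    error:&writerError];

NSDictionary *videoSettings = @{ AVVideoCodecKey  : AVVideoCodecH264,
                                 AVVideoWidthKey  : @640,
                                 AVVideoHeightKey : @480 };
AVAssetWriterInput *videoInput =
    [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                       outputSettings:videoSettings];
videoInput.expectsMediaDataInRealTime = YES;
[writer addInput:videoInput];

// Before the first frame of a chunk:
//   [writer startWriting];
//   [writer startSessionAtSourceTime:CMSampleBufferGetPresentationTimeStamp(sampleBuffer)];
//
// In the capture callback, for every frame of the current chunk:
//   if (videoInput.readyForMoreMediaData)
//       [videoInput appendSampleBuffer:sampleBuffer];
//
// Every few seconds: [videoInput markAsFinished], [writer finishWriting],
// upload the finished file to the server, then create a new writer for the
// next chunk. This start/stop is where the latency and the roughly 1/4
// second gap between chunks described above come from.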
