How to get real time video stream from iPhone camera and send it to server?


Problem Description

I am using AVCaptureSession to capture video and get real-time frames from the iPhone camera, but how can I send them to a server, multiplexing the frames with the audio, and how should ffmpeg be used to accomplish this? If anyone has a tutorial about ffmpeg or an example, please share it here.

Solution

The way I'm doing it is to implement an AVCaptureSession, which has a delegate with a callback that's run on every frame. That callback sends each frame over the network to the server, which has a custom setup to receive it.

Here's the flow:

http://developer.apple.com/library/ios/#documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/03_MediaCapture.html#//apple_ref/doc/uid/TP40010188-CH5-SW2

And here's some code:

// make input device
NSError *deviceError;
AVCaptureDevice *cameraDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *inputDevice = [AVCaptureDeviceInput deviceInputWithDevice:cameraDevice error:&deviceError];

// make output device
AVCaptureVideoDataOutput *outputDevice = [[AVCaptureVideoDataOutput alloc] init];
[outputDevice setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

// initialize capture session
AVCaptureSession *captureSession = [[[AVCaptureSession alloc] init] autorelease];
[captureSession addInput:inputDevice];
[captureSession addOutput:outputDevice];

// make preview layer and add so that camera's view is displayed on screen
AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
previewLayer.frame = view.bounds;
[view.layer addSublayer:previewLayer];

// go!
[captureSession startRunning];
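One practical note that is not in the original answer: if the delegate callback is going to do per-frame network work, an AVCaptureVideoDataOutput is usually given an explicit pixel format and a dedicated serial dispatch queue rather than the main queue. A minimal sketch, assuming the same outputDevice as above (the queue name is arbitrary):

// Assumption, not from the answer: request BGRA frames and deliver them on a
// background queue so per-frame handling does not block the UI.
NSDictionary *pixelSettings = [NSDictionary dictionaryWithObject:
                                   [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]
                                                           forKey:(id)kCVPixelBufferPixelFormatTypeKey];
outputDevice.videoSettings = pixelSettings;

dispatch_queue_t frameQueue = dispatch_queue_create("com.example.frameQueue", NULL);
[outputDevice setSampleBufferDelegate:self queue:frameQueue];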

Then the output device's delegate (here, self) has to implement the callback:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CGSize imageSize = CVImageBufferGetEncodedSize(imageBuffer);
    // also in the 'mediaSpecific' dict of the sampleBuffer
    NSLog(@"frame captured at %.fx%.f", imageSize.width, imageSize.height);
}
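The answer stops at logging the frame size. Purely as an illustration of the "send each frame over the network" idea described above (and note the caveat below about raw frames not scaling), the callback might pull the bytes out of the pixel buffer and hand them to your own network code. Here sendFrameData: is a hypothetical hook, not an API:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Lock the pixel buffer before reading its memory.
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Copy the raw (e.g. BGRA) bytes; a real implementation would compress first.
    NSData *frameData = [NSData dataWithBytes:baseAddress length:bytesPerRow * height];
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // Hypothetical network hook: replace with your own socket or HTTP upload code.
    [self sendFrameData:frameData];
}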

Sending raw frames or individual images will never work well enough for you (because of the amount of data and number of frames). Nor can you reasonably serve anything from the phone (WWAN networks have all sorts of firewalls). You'll need to encode the video, and stream it to a server, most likely over a standard streaming format (RTSP, RTMP). There is an H.264 encoder chip on the iPhone >= 3GS. The problem is that it is not stream oriented. That is, it outputs the metadata required to parse the video last. This leaves you with a few options.

1) Get the raw data and use FFmpeg to encode on the phone (will use a ton of CPU and battery).

2) Write your own parser for the H.264/AAC output (very hard).

3) Record and process in chunks (will add latency equal to the length of the chunks, and drop around 1/4 second of video between each chunk as you start and stop the sessions); a sketch of this chunked approach follows below.
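That chunked approach might look roughly like the following, using AVAssetWriter. The properties used here (writer, videoInput, chunkIndex, sessionStarted), the fixed 640x480 settings, and the uploadChunkAtURL: hook are illustrative assumptions rather than anything the answer specifies; audio is omitted.

// Start a new chunk: one AVAssetWriter per chunk file (video only, H.264).
- (void)startNewChunk
{
    NSError *error = nil;
    NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:
                         [NSString stringWithFormat:@"chunk-%d.mov", self.chunkIndex++]];
    self.writer = [[[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:path]
                                             fileType:AVFileTypeQuickTimeMovie
                                                error:&error] autorelease];

    NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                      AVVideoCodecH264, AVVideoCodecKey,
                                      [NSNumber numberWithInt:640], AVVideoWidthKey,
                                      [NSNumber numberWithInt:480], AVVideoHeightKey,
                                      nil];
    self.videoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                         outputSettings:videoSettings];
    self.videoInput.expectsMediaDataInRealTime = YES;
    [self.writer addInput:self.videoInput];
    [self.writer startWriting];
    self.sessionStarted = NO;
}

// Called from the capture callback for every video sample buffer.
- (void)appendFrame:(CMSampleBufferRef)sampleBuffer
{
    if (!self.sessionStarted) {
        [self.writer startSessionAtSourceTime:CMSampleBufferGetPresentationTimeStamp(sampleBuffer)];
        self.sessionStarted = YES;
    }
    if (self.videoInput.readyForMoreMediaData) {
        [self.videoInput appendSampleBuffer:sampleBuffer];
    }
}

// Called every few seconds (e.g. from a timer): close the chunk, ship it, start the next.
- (void)rotateChunk
{
    [self.videoInput markAsFinished];
    [self.writer finishWriting];                    // synchronous variant of the era
    [self uploadChunkAtURL:self.writer.outputURL];  // hypothetical upload hook
    [self startNewChunk];
}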
