Android: Recording and Streaming at the same time


Problem Description



This is not really a question so much as a presentation of all my attempts to implement one of the most challenging features I have been faced with.

I use the libstreaming library to stream realtime video to a Wowza server, and I need to record it to the SD card at the same time. I am presenting all my attempts below in order to collect new ideas from the community.

Copy bytes from the libstreaming stream to an mp4 file

Development

We created an interception in the libstreaming library to copy all the sent bytes to an mp4 file. Libstreaming sends the bytes to the Wowza server through a LocalSocket. It uses MediaRecorder to access the camera and the mic of the device and sets its output file to the LocalSocket's input stream. What we did was create a wrapper around this input stream, extending InputStream, with a File output stream inside it. So, every time libstreaming reads from the LocalSocket's input stream, we copy all the data to the output stream, trying to create a valid MP4 file.
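
A minimal sketch of that wrapper (the class and field names are ours, not libstreaming's): every read from the socket's stream is mirrored into a FileOutputStream pointed at the SD card.

    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.InputStream;

    // Sketch of the wrapper described above: every read from the
    // LocalSocket's InputStream is also written to a file.  The file
    // receives only the raw streamed bytes, which is why it ends up
    // without the moov atom discussed in the next section.
    public class TeeInputStream extends InputStream {
        private final InputStream source;    // the LocalSocket input stream
        private final FileOutputStream copy; // e.g. /sdcard/capture.mp4

        public TeeInputStream(InputStream source, FileOutputStream copy) {
            this.source = source;
            this.copy = copy;
        }

        @Override
        public int read() throws IOException {
            int b = source.read();
            if (b != -1) copy.write(b);
            return b;
        }

        @Override
        public int read(byte[] buffer, int offset, int length) throws IOException {
            int n = source.read(buffer, offset, length);
            if (n > 0) copy.write(buffer, offset, n);
            return n;
        }

        @Override
        public void close() throws IOException {
            copy.close();
            source.close();
        }
    }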

Impediment

When we tried to read the file back, it was corrupted. We realized that metadata was missing from the MP4 file, specifically the moov atom. We tried to delay the closing of the stream in order to give it time to send this header (this was still a guess), but it didn't work. To test the coherence of this data, we used a paid piece of software to try to recover the video, including the header. It became playable, but it was mostly a green screen, so this was not a trustworthy solution. We also tried "untrunc", a free open-source command-line program, and it couldn't even start the recovery, since there was no moov atom.
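
For anyone wanting to verify this kind of corruption: the top-level MP4 boxes can be walked with a few lines of Java. A playable file normally lists ftyp, moov and mdat at this level; our captures had no moov. This is only a diagnostic sketch (the path comes from the command line), not part of the app:

    import java.io.DataInputStream;
    import java.io.EOFException;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;

    // Walks the top-level boxes ("atoms") of an MP4 file and prints their
    // types and sizes.  Sketch only: ignores the rare size == 0 case
    // (box running to end of file).
    public class Mp4BoxLister {
        public static void main(String[] args) throws IOException {
            try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
                while (true) {
                    long size = in.readInt() & 0xFFFFFFFFL; // 32-bit box size
                    byte[] type = new byte[4];
                    in.readFully(type);                     // 4-character box type
                    System.out.println(new String(type, StandardCharsets.US_ASCII) + "  " + size);
                    long toSkip = size - 8;                 // rest of this box
                    if (size == 1) {                        // 64-bit extended size
                        toSkip = in.readLong() - 16;
                    }
                    while (toSkip > 0) {
                        long skipped = in.skip(toSkip);
                        if (skipped <= 0) return;           // cannot skip further
                        toSkip -= skipped;
                    }
                }
            } catch (EOFException e) {
                // end of file reached: all top-level boxes listed
            }
        }
    }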

Use ffmpeg compiled for Android to access the camera

Development

FFMPEG has a gradle plugin with a Java interface for using it inside Android apps. We thought we could access the camera via the command line (it is probably at "/dev/video0") and send the stream to the media server.
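
The attempt looked roughly like the sketch below; the ffmpeg binary location, the device node and the RTMP URL are illustrative guesses, not a tested setup:

    import java.io.IOException;

    // Sketch of the attempt: run a bundled ffmpeg binary against the
    // camera device and push the result to the media server.  On an
    // unrooted phone, opening /dev/video0 is what fails.
    public class FfmpegCameraAttempt {
        public static Process start() throws IOException {
            return new ProcessBuilder(
                    "/data/data/com.example.app/files/ffmpeg",
                    "-f", "v4l2", "-i", "/dev/video0",  // read the camera device
                    "-c:v", "libx264",                  // encode to H.264
                    "-f", "flv", "rtmp://wowza.example.com/live/stream")
                    .redirectErrorStream(true)
                    .start();
        }
    }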

Impediment

We got a "Permission denied" error when trying to access the camera. The workaround would be to root the device in order to gain access to it, but that would void the phones' warranty and could brick them.

Use ffmpeg compiled for Android combined with MediaRecorder

Development

We tried to make FFMPEG stream an mp4 file while it was still being recorded on the phone via MediaRecorder.
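
The recording side was standard MediaRecorder usage, roughly as follows (a sketch: camera preview setup is omitted and the output path is illustrative). Note that MediaRecorder writes the moov atom only when stop() is called, which is why the file cannot be consumed while recording is still in progress:

    import android.media.MediaRecorder;
    import java.io.IOException;

    // Minimal recording setup; the moov atom is only written on stop(),
    // so the file is not a valid MP4 while recording runs.
    public class RecorderSketch {
        public static MediaRecorder start(String path) throws IOException {
            MediaRecorder recorder = new MediaRecorder();
            recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
            recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
            recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
            recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
            recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
            recorder.setOutputFile(path); // e.g. /sdcard/recording.mp4
            recorder.prepare();
            recorder.start();
            return recorder;
        }
    }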

Impediment

FFMPEG cannot stream MP4 files whose recording has not yet finished.

Use ffmpeg compiled for Android with libstreaming

Development

Libstreaming uses a LocalServerSocket as the connection between the app and the server, so we thought we could use ffmpeg, connected to the LocalServerSocket's local address, to copy the stream directly to a local file on the SD card. Right after the streaming started, we also ran an ffmpeg command to start recording the data to a file. Using ffmpeg, we believed it would create the MP4 file in the proper way, that is, with the moov atom header included.

Impediment

The "address" created is not readable via command line, as a local address inside the phone. So the copy is not possible.

Use OpenCV

Development

OpenCV is an open-source, cross-platform library that provides building blocks for computer vision experiments and applications. It offers high-level interfaces for capturing, processing, and presenting image data. It has its own API for connecting to the device camera, so we started studying it to see if it had the necessary functionality to stream and record at the same time.
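
Its Android camera interface is callback-based: every preview frame arrives as a Mat. A minimal sketch, with the streaming and recording hooks left hypothetical:

    import org.opencv.android.CameraBridgeViewBase;
    import org.opencv.android.CameraBridgeViewBase.CvCameraViewFrame;
    import org.opencv.core.Mat;

    // OpenCV4Android delivers each preview frame as a Mat through this
    // callback.  In principle one copy could go to an encoder for
    // streaming and another to a recorder, but OpenCV (pre-3.0) offered
    // no video encoder of its own.
    public class FrameListener implements CameraBridgeViewBase.CvCameraViewListener2 {
        @Override public void onCameraViewStarted(int width, int height) { }
        @Override public void onCameraViewStopped() { }

        @Override
        public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
            Mat rgba = inputFrame.rgba(); // current camera frame
            // hypothetical hooks: streamer.push(rgba); recorder.write(rgba);
            return rgba;                  // the returned Mat is rendered on screen
        }
    }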

Impediment

We found out that the library is not really designed for this; it is aimed more at mathematical image manipulation. We even got the recommendation to use libstreaming (which we already do).

Use Kickflip SDK

Development

Kickflip is a media streaming service that provides its own SDK for development on Android and iOS. It also uses HLS, a newer protocol, instead of RTMP.

Impediment

Their SDK requires us to create an Activity with a camera view that occupies the entire screen of the device, which breaks the usability of our app.

Use Adobe Air

Development

We started consulting other developers of apps already available in the Play Store that stream to servers.

Impediment

Getting in touch with those developers, they assured us that it would not be possible to record and stream at the same time using this technology. What's more, we would have to redo the entire app from scratch using Adobe Air.

UPDATE

WebRTC

Development

We started using WebRTC, following this great project. We included the signaling server in our Node.js server and started doing the standard handshake via sockets. We were still toggling between local recording and streaming via WebRTC.

Impediment

WebRTC does not work in every network configuration. Other than that, the camera acquisition is all native code, which makes it a lot harder to copy the bytes or intercept the stream.

Solution

As soon as OpenCV 3.0 is available (the RC1 can be downloaded here), we could add another option to this list:

Using OpenCV's built-in Motion-JPEG encoder
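
A minimal sketch of how that could look through OpenCV's Java bindings (the wrapper class and output path are illustrative): frames from a camera callback like the one above would be written to the SD card as Motion-JPEG while another copy goes to the streaming pipeline.

    import org.opencv.core.Mat;
    import org.opencv.core.Size;
    import org.opencv.videoio.VideoWriter;

    // Sketch of recording through OpenCV 3.0's built-in Motion-JPEG
    // encoder, exposed via VideoWriter.
    public class MjpegRecorder {
        private final VideoWriter writer;

        public MjpegRecorder(String path, double fps, Size frameSize) {
            writer = new VideoWriter(path, VideoWriter.fourcc('M', 'J', 'P', 'G'),
                                     fps, frameSize);
        }

        public void write(Mat frame) {
            if (writer.isOpened()) writer.write(frame); // append one frame
        }

        public void release() {
            writer.release(); // finalize the file
        }
    }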
