Sending a MediaStream to a host server with WebRTC after it is captured by getUserMedia

Question

I am capturing audio data using getUserMedia() and I want to send it to my server so I can save it as a Blob in a MySQL field.

This is all I am trying to do. I have made several attempts to do this using WebRTC, but I don't even know at this point if this is right or even the best way to do this.

Can someone help me?

Here is the code I am using to capture audio from the microphone:

// create the AudioContext used to route the captured audio
var audioContext = new (window.AudioContext || window.webkitAudioContext)();

navigator.getUserMedia({
    video: false,
    audio: true,
}, function(mediaStream) {

    // output mediaStream to speakers:
    var mediaStreamSource = audioContext.createMediaStreamSource(mediaStream);
    mediaStreamSource.connect(audioContext.destination);

    // send mediaStream to server:

    // WebRTC code? not sure about this...
    var RTCconfig = {};
    var conn = new RTCPeerConnection(RTCconfig);

    // ???

}, function(error) {
    console.log('getUserMedia() fail.');
    console.log(error);
});

How can I send this mediaStream up to the server?

After Googling around I've been looking into WebRTC, but this seems to be for just peer-to-peer communication - actually, now that I'm looking into this more, I think this is the way to go. It seems to be the way to communicate from the client's browser up to the host webserver, but nothing I try even comes close to working.

I've been going through the W3C documentation (which I am finding way too abstract), and I've been going through an article on HTML5 Rocks (which is bringing up more questions than answers). Apparently I need a signalling method; can anyone advise which signalling method is best for sending mediaStreams: XHR, XMPP, SIP, Socket.io, or something else?

What will I need on the server to support receiving WebRTC? My web server is running a basic LAMP stack.

Also, is it best to wait until the mediaStream is finished recording before I send it up to the server, or is it better to send the mediaStream as it's being recorded? I want to know if I am going about this the right way. I have written file uploaders in JavaScript and HTML5, but uploading one of these mediaStreams seems hellishly more complicated and I'm not sure if I am approaching it right.

Any help on this would be greatly appreciated.

Answer

You cannot upload the live stream itself while it is running. This is because it is a LIVE stream.

So, this leaves you with a handful of options.

  1. Record the audio stream using one of the many recorders out there; RecordRTC works fairly well. Wait until the stream is completed and then upload the file.
  2. Send smaller chunks of recorded audio with a timer and merge them again server side. There are examples of this approach.
  3. Send the audio packets as they occur over websockets to your server so that you can manipulate and merge them there. My version of RecordRTC does this.
  4. Make an actual peer connection with your server so it can grab the raw RTP stream, and record the stream using some lower-level code. This can easily be done with Janus-Gateway.
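Option 2 can be sketched with the MediaRecorder API, which has since been standardized across browsers. Everything here is an assumption for illustration: the `/upload-chunk` endpoint, the 5-second timeslice, and the helper names are all hypothetical, not part of the original answer.

```javascript
// Pure helper: merge recorded chunks into a single Blob. The same logic
// can run server side in Node (Blob is global since Node 18).
function mergeChunks(chunks, mimeType) {
  return new Blob(chunks, { type: mimeType });
}

// Browser-side sketch of option 2: ask MediaRecorder for a chunk of
// encoded audio every 5 seconds and POST each one to a hypothetical
// /upload-chunk endpoint, which appends it to the same recording.
function startChunkedUpload(mediaStream) {
  var recorder = new MediaRecorder(mediaStream);

  // Each dataavailable event carries one chunk of encoded audio.
  recorder.ondataavailable = function (event) {
    if (event.data.size === 0) return;
    fetch('/upload-chunk', { method: 'POST', body: event.data });
  };

  // The timeslice argument makes the recorder emit periodic chunks
  // instead of one large blob when stop() is called.
  recorder.start(5000);
  return recorder;
}
```

On the server, the chunks arrive in order and can simply be concatenated (or merged with `mergeChunks`) before being stored as the MySQL blob.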

As for waiting to send the stream vs. sending it in chunks, it all depends on how long you are recording. For a longer period of time, I would say sending the recording in chunks or actively sending audio packets over websockets is the better solution, as uploading and storing larger audio files from the client side can be arduous for the client.

Firefox actually has its own solution for recording, but it is not supported in Chrome, so it may not work in your situation.
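The Firefox-only solution referred to here is presumably the MediaRecorder API, which has since been standardized and now works in Chrome as well, making option 1 possible without a third-party recorder. A minimal record-then-upload sketch, assuming a hypothetical `/upload` endpoint (the helper name is also an assumption):

```javascript
// Pure helper: total size in bytes of a list of Blob chunks.
function totalBytes(chunks) {
  return chunks.reduce(function (sum, c) { return sum + c.size; }, 0);
}

// Browser-side sketch: record the whole stream, then upload one file
// when recording stops. The '/upload' endpoint is hypothetical; the
// MIME type depends on the browser's encoder.
function recordThenUpload(mediaStream, onDone) {
  var chunks = [];
  var recorder = new MediaRecorder(mediaStream);

  recorder.ondataavailable = function (event) {
    chunks.push(event.data);
  };

  recorder.onstop = function () {
    var file = new Blob(chunks, { type: recorder.mimeType });
    fetch('/upload', { method: 'POST', body: file }).then(onDone);
  };

  recorder.start();  // no timeslice: one big blob delivered at stop()
  return recorder;   // call recorder.stop() when finished
}
```

The trade-off versus the chunked approach above is that the whole recording sits in client memory (`totalBytes(chunks)` of it) until the upload starts.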

As an aside, the signalling method mentioned is for session build/destroy and really has nothing to do with the media itself. You would only really need to worry about it if you were using solution number 4 shown above.
