Sending a MediaStream to host Server with WebRTC after it is captured by getUserMedia

Problem Description

I am capturing audio data using getUserMedia() and I want to send it to my server so I can save it as a Blob in a MySQL field.

This is all I am trying to do. I have made several attempts to do this using WebRTC, but I don't even know at this point if this is the right way, or even the best way, to do this.

Can anyone help me?

Here is the code I am using to capture audio from the microphone:

var audioContext = new AudioContext(); // Web Audio context used to route the stream to the speakers

navigator.getUserMedia({
    video: false,
    audio: true
}, function(mediaStream){

    // output mediaStream to speakers:
    var mediaStreamSource = audioContext.createMediaStreamSource(mediaStream);
    mediaStreamSource.connect(audioContext.destination);

    // send mediaStream to server:

    // WebRTC code? not sure about this...
    var RTCconfig = {};
    var conn = new RTCPeerConnection(RTCconfig);

    // ???

}, function(error){
    console.log('getUserMedia() fail.');
    console.log(error);
});

How can I send this mediaStream to the server?

After Googling around I've been looking into WebRTC, but this seems to be for just peer-to-peer communication - actually, now that I'm looking into this more, I think this is the way to go. It seems to be the way to communicate from the client's browser up to the host webserver, but nothing I try even comes close to working.

I've been going through the W3C documentation (which I am finding way too abstract), and I've been going through this article on HTML5 Rocks (which is bringing up more questions than answers). Apparently I need a signalling method; can anyone advise which signalling method is best for sending mediaStreams - XHR, XMPP, SIP, Socket.io, or something else?

What will I need on the server to support receiving WebRTC? My web server is running a basic LAMP stack.

Also, is it best to wait until the mediaStream is finished recording before I send it up to the server, or is it better to send the mediaStream as it's being recorded? I want to know if I am going about doing this the right way. I have written file uploaders in JavaScript and HTML5, but uploading one of these mediaStreams seems hellishly more complicated and I'm not sure if I am approaching it right.

Any help on this would be greatly appreciated.

Recommended Answer

You cannot upload the live stream itself while it is running. This is because it is a LIVE stream.

So, this leaves you with a handful of options:

  1. Record the audio stream using one of the many recorders out there; RecordRTC works fairly well. Wait until the stream is completed and then upload the file (see the sketch after this list).
  2. Send smaller chunks of recorded audio with a timer and merge them again on the server side. This is an example of this: http://www.smartjava.org/content/record-audio-using-webrtc-chrome-and-speech-recognition-websockets
  3. Send the audio packets as they occur over WebSockets to your server so that you can manipulate and merge them there. My version of RecordRTC does this.
  4. Make an actual peer connection with your server so it can grab the raw RTP stream, and you can record the stream using some lower-level code. This can easily be done with the Janus-Gateway.
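
For option 1, a rough sketch of recording with RecordRTC and uploading the finished file might look like this. It assumes RecordRTC.js is loaded on the page; the upload URL /upload-audio, the form field name, and the 5 second cutoff are placeholders for whatever your own LAMP endpoint expects:

navigator.getUserMedia({
    video: false,
    audio: true
}, function(mediaStream){

    // hand the stream to RecordRTC and start recording
    var recorder = RecordRTC(mediaStream, { type: 'audio' });
    recorder.startRecording();

    // stop after 5 seconds (placeholder trigger; a real app would use a button)
    setTimeout(function(){
        recorder.stopRecording(function(){

            // get the recorded audio as a Blob and upload it
            var blob = recorder.getBlob();
            var form = new FormData();
            form.append('audio', blob, 'recording.wav');

            var xhr = new XMLHttpRequest();
            xhr.open('POST', '/upload-audio', true);
            xhr.send(form); // the server can then store the upload in MySQL
        });
    }, 5000);

}, function(error){
    console.log(error);
});

On the PHP side you would read the uploaded file from $_FILES and insert its contents into your BLOB column.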

As for waiting to send the stream vs. sending it in chunks, it all depends on how long you are recording. If it is for a longer period of time, I would say sending the recording in chunks or actively sending audio packets over WebSockets is the better solution, as uploading and storing larger audio files from the client side can be arduous for the client.
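
To make the "actively sending audio packets over WebSockets" route concrete, here is a minimal sketch using a ScriptProcessorNode from the Web Audio API to push raw PCM blocks over a WebSocket as they are captured. The endpoint URL wss://example.com/audio is an assumption, and the server would need to buffer and merge the incoming Float32 frames itself:

var ws = new WebSocket('wss://example.com/audio');
ws.binaryType = 'arraybuffer';

navigator.getUserMedia({
    video: false,
    audio: true
}, function(mediaStream){

    var audioContext = new AudioContext();
    var source = audioContext.createMediaStreamSource(mediaStream);

    // a ScriptProcessorNode hands us raw PCM in fixed-size blocks
    // (here 4096 samples, mono in, mono out)
    var processor = audioContext.createScriptProcessor(4096, 1, 1);

    processor.onaudioprocess = function(e){
        if (ws.readyState === WebSocket.OPEN) {
            // copy the block before sending; the underlying buffer is reused
            var pcm = new Float32Array(e.inputBuffer.getChannelData(0));
            ws.send(pcm.buffer);
        }
    };

    source.connect(processor);
    // the node must be connected for onaudioprocess to fire;
    // note this also echoes the microphone to the speakers
    processor.connect(audioContext.destination);

}, function(error){
    console.log(error);
});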

Firefox actually has its own solution for recording, but it is not supported in Chrome, so it may not work in your situation.
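
That Firefox solution is presumably the MediaRecorder API; under that assumption, a minimal sketch looks like this (the 5 second cutoff is again just a placeholder trigger):

navigator.getUserMedia({
    video: false,
    audio: true
}, function(mediaStream){

    var recorder = new MediaRecorder(mediaStream);
    var chunks = [];

    // each dataavailable event delivers a piece of the recording
    recorder.ondataavailable = function(e){
        chunks.push(e.data);
    };

    recorder.onstop = function(){
        // Firefox records audio as audio/ogg by default
        var blob = new Blob(chunks, { type: 'audio/ogg' });
        // the blob can now be uploaded the same way as in option 1
    };

    recorder.start();
    setTimeout(function(){ recorder.stop(); }, 5000);

}, function(error){
    console.log(error);
});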

As an aside, the signalling methods mentioned are for session setup and teardown and really have nothing to do with the media itself. You would only really worry about this if you were using solution number 4 shown above.
