How to convert a getUserMedia audio stream into a blob or buffer?


Question

I am getting an audio stream from getUserMedia and want to convert it into a blob or buffer and send it to the server as the audio comes in. I am using socket.io to emit it to the server. How can I convert the audio MediaStream into a buffer?

Following is the code that I have written:

navigator.getUserMedia({audio: true, video: false},
  function(stream) {
    webcamstream = stream;
    var media = stream.getAudioTracks();
    socket.emit("sendaudio", media);
  },
  function(e) {
    console.log(e);
  }
);

How can I convert the stream into a buffer and emit it to the node.js server as the stream comes from the getUserMedia callback?

Answer

Per @MuazKhan's comment, use MediaRecorder (in Firefox; it will eventually be in Chrome) or RecordRTC etc. to capture the data into Blobs. You can then export them to the server for distribution via one of several methods: WebSockets, WebRTC DataChannels, etc. Note that these are NOT guaranteed to transfer the data in real time, and MediaRecorder does not yet have bitrate controls. If transmission is delayed, data may build up locally.
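The MediaRecorder route can be sketched as follows. This is a minimal, hypothetical example: it assumes a socket.io client is passed in as `socket`, and the `'audio-chunk'` event name and 250 ms timeslice are illustrative choices, not part of any fixed API.

```javascript
// Sketch: capture microphone audio and emit each encoded chunk (a Blob)
// to the server over socket.io. Assumes MediaRecorder support.
function streamMicToServer(socket, timesliceMs = 250) {
  return navigator.mediaDevices.getUserMedia({ audio: true, video: false })
    .then((stream) => {
      const recorder = new MediaRecorder(stream);
      recorder.ondataavailable = (event) => {
        // event.data is a Blob holding one encoded audio chunk
        if (event.data.size > 0) {
          socket.emit('audio-chunk', event.data);
        }
      };
      recorder.start(timesliceMs); // fire dataavailable every timesliceMs
      return recorder;
    });
}
```

Calling `streamMicToServer(socket)` starts the capture; call `recorder.stop()` on the returned recorder to end it.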

If realtime (re)transmission is important, strongly consider instead using a PeerConnection to the server (per @Robert's comment) and then transforming it into a stream there. (How that is done will depend on the server, but you would have encoded Opus data to either repackage, or decode and re-encode.) While re-encoding is generally undesirable, in this case you would do best to decode through NetEq (the webrtc.org stack's jitter-buffer and packet-loss-concealment code) and obtain a clean realtime audio stream, with loss and jitter dealt with, to re-encode for streaming.
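For the simpler WebSocket/socket.io route mentioned above, the chunks arriving at the node.js server can be assembled into a single Buffer. A minimal sketch, with the socket.io wiring shown commented out so the collection logic stands alone; the `'audio-chunk'` event name is an assumption carried over from the client side:

```javascript
// Collect incoming binary audio chunks into one growing Buffer.
// On the node side, socket.io delivers a browser Blob as a Buffer or
// ArrayBuffer, so Buffer.from() normalizes either case.
const chunks = [];

function handleChunk(data) {
  chunks.push(Buffer.from(data));
}

function assembledAudio() {
  // One contiguous Buffer of everything received so far
  return Buffer.concat(chunks);
}

// Hypothetical wiring, assuming an existing socket.io server `io`:
// io.on('connection', (socket) => {
//   socket.on('audio-chunk', handleChunk);
// });
```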

