Get live streaming audio from NodeJS server to clients

Question

I need to have a real-time live audio stream from one client to a server and on to multiple listener clients.

Currently I have the recording on the client working, and I stream the audio through socket.io to the server. The server receives this data and must stream the audio (also through socket.io?) to the clients that want to listen to this stream. It must be as real time as possible (minimize delay).

I'm using getUserMedia to record the microphone (browser compatibility is not important here). I want the clients to use the HTML5 audio tag to listen to the stream. The data received on the server are chunks (currently packed 700 at a time) in a blob with type audio/wav.
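For context, the recorder setup itself is not shown in the question. A minimal sketch of what it might look like, assuming a browser with MediaRecorder support (the 100 ms timeslice is an illustrative value, not taken from the original):

navigator.mediaDevices.getUserMedia({ audio: true })
    .then(function(stream) {
        var mediaRecorder = new MediaRecorder(stream);
        mediaRecorder.chunks = [];
        // ondataavailable (attached below) fires once per timeslice with a chunk of encoded audio
        mediaRecorder.start(100);
    })
    .catch(function(err) {
        console.error('getUserMedia failed:', err);
    });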

This is my code for sending it to the server:

// Collect encoded chunks and ship them to the server once 700 have accumulated
mediaRecorder.ondataavailable = function(e) {
    this.chunks.push(e.data);
    if (this.chunks.length >= 700) {
        this.sendData(this.chunks);
        this.chunks = [];
    }
};

mediaRecorder.sendData = function(buffer) {
    // Pack the batch into a single Blob and emit it over socket.io
    var blob = new Blob(buffer, { 'type' : 'audio/wav' });
    socket.emit('voice', blob);
};
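A note on the snippet above: when a Blob is emitted over socket.io, the Node server generally receives it as a Buffer, and when it is re-broadcast to browser clients it arrives as an ArrayBuffer, which is why the listener code further down wraps it in a new Blob.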

On the server I'm able to broadcast the chunks to the other clients in the same way:

socket.on('voice', function(blob) {
    // Relay the received chunk to every connected client except the sender
    socket.broadcast.emit('voice', blob);
});

On the listening client I can play it like this:

var audio = document.createElement('audio');
socket.on('voice', function(arrayBuffer) {
    // Wrap the received chunk in a Blob and point the audio element at it
    var blob = new Blob([arrayBuffer], { 'type' : 'audio/wav' });
    audio.src = window.URL.createObjectURL(blob);
    audio.play();
});

This works for the first blob of chunks I send, but you're not allowed to keep changing audio.src to a new URL source, so this is not a working solution.

I think I have to create some kind of stream on the server that I can feed into the HTML5 audio tag on the listening clients, but I don't know how. The received blobs with chunks should then be appended to this stream in real time.
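One way to get that appendable-stream behaviour on the listening client, without touching audio.src again, is the Media Source Extensions API. A rough sketch follows, with the caveat that a SourceBuffer will not accept audio/wav; it assumes the recorder is switched to a supported container such as audio/webm with the Opus codec:

var audio = document.createElement('audio');
var mediaSource = new MediaSource();
audio.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', function() {
    // The MIME type here must match what MediaRecorder actually produces
    var sourceBuffer = mediaSource.addSourceBuffer('audio/webm; codecs=opus');
    var queue = [];

    socket.on('voice', function(arrayBuffer) {
        // Append incoming chunks; queue them while the SourceBuffer is busy
        if (sourceBuffer.updating || queue.length > 0) {
            queue.push(arrayBuffer);
        } else {
            sourceBuffer.appendBuffer(arrayBuffer);
        }
    });

    sourceBuffer.addEventListener('updateend', function() {
        if (queue.length > 0 && !sourceBuffer.updating) {
            sourceBuffer.appendBuffer(queue.shift());
        }
    });

    // Autoplay policies may require a user gesture before this succeeds
    audio.play();
});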

What is the best approach to do this? Am I doing it right from the client microphone to the server?

Answer

I'm a bit late to the party here, but it looks like the Web Audio API will be your friend, if you haven't already finished this. It allows you to play an audio stream directly to the output device without messing around with attaching it to an audio element.

I'm looking at doing the same thing, and your question has answered mine: how to get data from the client to the server. The benefit of the Web Audio API is the ability to add streams together and apply audio effects to them on the server.

MDN Web Audio API

The io events should replace the data in an audio buffer object in the audio context. Audio processing can happen in a Node.js web audio context before being emitted as a single stream to each client.
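On a listening client, the Web Audio side of this could look roughly like the sketch below. It assumes each socket.io message carries a chunk that decodeAudioData can decode on its own (for example a small standalone WAV/PCM payload); chunks cut from a continuous MediaRecorder stream will not decode individually, so this is an illustration of the scheduling idea rather than a drop-in solution:

var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
var playTime = 0; // AudioContext time at which the next chunk should start

socket.on('voice', function(arrayBuffer) {
    // Each chunk must be independently decodable for this to work
    audioCtx.decodeAudioData(arrayBuffer, function(audioBuffer) {
        var source = audioCtx.createBufferSource();
        source.buffer = audioBuffer;
        source.connect(audioCtx.destination);
        // Schedule chunks back to back to keep playback gap-free
        playTime = Math.max(playTime, audioCtx.currentTime);
        source.start(playTime);
        playTime += audioBuffer.duration;
    });
});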
