Can I stream microphone audio from client to client using nodejs?


Question

I'm trying to create a realtime voice chat. While a client holds a button and talks, I want the sound to be sent over the socket to the nodejs backend, and then streamed to another client.

Here is the sender client code:

socket.on('connect', function() {
    var session = {
        audio: true,
        video: false
    };

    // Note: navigator.getUserMedia is the legacy API; modern browsers
    // expose navigator.mediaDevices.getUserMedia instead.
    navigator.getUserMedia(session, function(stream) {
        var audioInput = context.createMediaStreamSource(stream);
        var bufferSize = 2048;

        recorder = context.createScriptProcessor(bufferSize, 1, 1);
        recorder.onaudioprocess = onAudio;

        audioInput.connect(recorder);
        recorder.connect(context.destination);
    }, function(e) {
    });

    function onAudio(e) {
        if (!broadcast) return;

        // Raw Float32 samples from the microphone
        var mic = e.inputBuffer.getChannelData(0);
        var converted = convertFloat32ToInt16(mic);

        socket.emit('broadcast', converted);
    }
});
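The `onAudio` handler above calls `convertFloat32ToInt16`, which isn't shown in the question. A common minimal implementation (an assumption about that helper, not the asker's exact code) clamps each Float32 sample to [-1, 1] and scales it to the Int16 range:

```javascript
// Hypothetical helper assumed by the sender code above:
// converts Web Audio's Float32 samples ([-1, 1]) to Int16 PCM.
function convertFloat32ToInt16(buffer) {
  var out = new Int16Array(buffer.length);
  for (var i = 0; i < buffer.length; i++) {
    var s = Math.max(-1, Math.min(1, buffer[i])); // clamp to avoid overflow
    out[i] = s < 0 ? s * 0x8000 : s * 0x7FFF;     // scale to Int16 range
  }
  return out;
}
```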

The server then gets this buffer and streams it to another client (in this example, the same client).

Server code:

socket.on('broadcast', function(buffer) {
    socket.emit('broadcast', new Int16Array(buffer));
});
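As written, the server simply echoes the buffer back to the sending socket. To reach the other clients, socket.io's broadcast flag sends to everyone except the sender; a minimal sketch (the function name here is illustrative, not from the question):

```javascript
// Sketch: attach a relay handler so incoming audio buffers are
// forwarded to every connected client except the one that sent them.
function attachAudioRelay(socket) {
  socket.on('broadcast', function (buffer) {
    // socket.broadcast.emit targets all other connected sockets,
    // not the sender itself
    socket.broadcast.emit('broadcast', buffer);
  });
}
```

With socket.io this would be wired up as `io.on('connection', attachAudioRelay)`.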

And then, in order to play the sound at the other side (the receiver), the client code is:

socket.on('broadcast', function(raw) {
    var buffer = convertInt16ToFloat32(raw);

    var src = context.createBufferSource();
    var audioBuffer = context.createBuffer(1, buffer.byteLength, context.sampleRate);
    audioBuffer.getChannelData(0).set(buffer);

    src.buffer = audioBuffer;
    src.connect(context.destination);
    src.start(0);
});
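The receiver calls `convertInt16ToFloat32`, which is also not shown in the question; a minimal sketch of the inverse conversion (an assumption about that helper, not the asker's exact code):

```javascript
// Hypothetical inverse of convertFloat32ToInt16: converts Int16 PCM
// back to Float32 samples in [-1, 1] for the Web Audio API.
function convertInt16ToFloat32(buffer) {
  // Socket data may arrive as a raw ArrayBuffer; this view handles
  // both that case and an Int16Array input (which gets copied).
  var view = new Int16Array(buffer);
  var out = new Float32Array(view.length);
  for (var i = 0; i < view.length; i++) {
    out[i] = view[i] / (view[i] < 0 ? 0x8000 : 0x7FFF);
  }
  return out;
}
```

One thing worth double-checking in the receiver above: `createBuffer`'s second argument is a frame count, so passing `buffer.byteLength` (bytes, i.e. 4x the sample count for a Float32Array) allocates a buffer four times longer than the audio, leaving trailing silence; `buffer.length` is likely what was intended.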

My expected result is that the sound from client A will be heard in client B. I can see the buffer on the server, and I can see the buffer back in the client, but I hear nothing.

I know socket.io 1.x supports binary data, but I can't find any example of building a voice chat. I also tried BinaryJS, but the results are the same. I know that with WebRTC this is a simple task, but I don't want to use WebRTC. Can anyone point me to a good resource or tell me what I am missing?

Answer

I built something like this myself a few weeks ago. Problems I ran into (you will at some point):

  • Too much data without reducing the bitrate and sample rate (over the internet)
  • Bad audio quality without interpolation or better audio compression
  • Even if it's not shown to you, you will get different sample rates from different computers' sound cards (my PC = 48 kHz, my laptop = 32 kHz), which means you have to write a resampler
  • In WebRTC they reduce the audio quality if a bad internet connection is detected. You cannot do this, because this is low-level stuff!
  • You have to implement this in a fast way, because otherwise JS will block your frontend > use web workers
  • Audio codecs translated to JS are very slow and you will get unexpected results (see one audio codec question from me: here). I have tried Opus as well, but no good results yet.
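The sample-rate mismatch the answer mentions (48 kHz vs 32 kHz sound cards) can be bridged with a linear-interpolation resampler. A naive sketch, with no anti-aliasing filter (a real implementation would low-pass filter before downsampling):

```javascript
// Naive linear-interpolation resampler for Float32 audio.
// No low-pass filtering, so downsampling will alias somewhat;
// this only illustrates the basic technique.
function resample(input, fromRate, toRate) {
  var ratio = fromRate / toRate;
  var outLength = Math.round(input.length / ratio);
  var out = new Float32Array(outLength);
  for (var i = 0; i < outLength; i++) {
    var pos = i * ratio;               // fractional position in the input
    var idx = Math.floor(pos);
    var frac = pos - idx;
    // Clamp at the end of the buffer instead of reading past it
    var next = idx + 1 < input.length ? input[idx + 1] : input[idx];
    out[i] = input[idx] * (1 - frac) + next * frac; // interpolate neighbours
  }
  return out;
}
```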

I don't work on this project at the moment, but you can get the code at: https://github.com/cracker0dks/nodeJsVoip

and the working example: (link removed) for multi-user voip audio. (Not working anymore! The websocket server is down!) If you go into Settings > Audio (on the page), you can choose a higher bit depth and sample rate for better audio quality.

Can you tell me why you don't want to use WebRTC?
