How to create a live media stream with Javascript


Problem description


I am wanting to create a live audio stream from one device to a node server which can then broadcast that live feed to several front ends.

I have searched extensively for this and have really hit a wall so hoping somebody out there can help.

I am able to get my audio input from the window.navigator.getUserMedia API.

getAudioInput(){
  const constraints = { 
    video: false, 
    audio: {deviceId: this.state.deviceId ? {exact: this.state.deviceId} : undefined},
  };

  window.navigator.getUserMedia(
    constraints, 
    this.initializeRecorder, 
    this.handleError
  );
}
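As an aside, navigator.getUserMedia is the older, callback-based API and is deprecated; the promise-based navigator.mediaDevices.getUserMedia is the current equivalent. A minimal sketch of the same call in that style, assuming the same constraints, initializeRecorder and handleError as above:

getAudioInput() {
  const constraints = {
    video: false,
    audio: { deviceId: this.state.deviceId ? { exact: this.state.deviceId } : undefined },
  };

  // promise-based API: resolves with a MediaStream or rejects with an error
  navigator.mediaDevices.getUserMedia(constraints)
    .then(this.initializeRecorder)
    .catch(this.handleError);
}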

This then passes the stream to the initializeRecorder function, which utilises the AudioContext API's createMediaStreamSource to create a source node from the stream:

initializeRecorder = (stream) => {
  const audioContext = window.AudioContext;
  const context = new audioContext();
  const audioInput = context.createMediaStreamSource(stream);
  const bufferSize = 2048;
  // create a javascript node
  const recorder = context.createScriptProcessor(bufferSize, 1, 1);
  // specify the processing function
  recorder.onaudioprocess = this.recorderProcess;
  // connect stream to our recorder
  audioInput.connect(recorder);
  // connect our recorder to the previous destination
  recorder.connect(context.destination);
}
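Worth noting: createScriptProcessor is deprecated in the Web Audio spec in favour of AudioWorklet. A rough sketch of the same capture path using an AudioWorklet, assuming a hypothetical recorder-processor.js module and the same socket and convertFloat32ToInt16 helper as in the question, could look like this:

// recorder-processor.js (hypothetical module name) -- runs on the audio thread
class RecorderProcessor extends AudioWorkletProcessor {
  process(inputs) {
    const samples = inputs[0][0]; // Float32Array for channel 0 of the first input
    if (samples) {
      this.port.postMessage(samples.slice()); // copy, the underlying buffer is reused
    }
    return true; // keep the processor alive
  }
}
registerProcessor('recorder-processor', RecorderProcessor);

// main thread -- replaces the createScriptProcessor wiring above
initializeRecorder = async (stream) => {
  const context = new AudioContext();
  await context.audioWorklet.addModule('recorder-processor.js');
  const audioInput = context.createMediaStreamSource(stream);
  const recorder = new AudioWorkletNode(context, 'recorder-processor');
  recorder.port.onmessage = (event) => {
    this.socket.emit('stream', this.convertFloat32ToInt16(event.data));
  };
  audioInput.connect(recorder);
  recorder.connect(context.destination); // keep the node in the rendered graph
}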

In my recorderProcess function, I now have an AudioProcessingEvent object which I can stream.

Currently I am emitting the audio data as a stream via a socket connection, like so:

recorderProcess = (e) => {
  const left = e.inputBuffer.getChannelData(0);
  this.socket.emit('stream', this.convertFloat32ToInt16(left))
}
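The convertFloat32ToInt16 helper is not shown in the question; a minimal sketch of what such a conversion typically looks like (an assumption, not the asker's actual code) is:

// Hypothetical version of the helper referenced above: clamps each Float32
// sample to [-1, 1] and scales it to a signed 16-bit integer.
convertFloat32ToInt16 = (float32Array) => {
  const int16Array = new Int16Array(float32Array.length);
  for (let i = 0; i < float32Array.length; i++) {
    const s = Math.max(-1, Math.min(1, float32Array[i]));
    int16Array[i] = s < 0 ? s * 0x8000 : s * 0x7FFF;
  }
  return int16Array.buffer; // ArrayBuffer that can be emitted over the socket
}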

Is this the best or only way to do this? Is there a better way, such as using fs.createReadStream and then posting to an endpoint via Axios? As far as I can tell that would only work with a file, as opposed to a continuous live stream.

Server

I have a very simple socket server running on top of express. Currently I listen for the stream event and then emit that same input back out:

io.on('connection', (client) => {

  client.on('stream', (stream) => {
    client.emit('stream', stream)
  });

});

Not sure how scalable this is but if you have a better suggestion, I'm very open to it.
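One thing worth noting about the snippet above: client.emit sends the chunk back only to the socket that produced it. To fan the feed out to several front ends you would typically use Socket.IO's broadcast flag (or io.emit to include the sender as well); a minimal sketch:

io.on('connection', (client) => {
  client.on('stream', (chunk) => {
    // forward the chunk to every connected client except the sender
    client.broadcast.emit('stream', chunk);
  });
});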

Client

Now this is where I am really stuck:

On my client I am listening for the stream event and want to listen to the stream as audio output in my browser. I have a function that receives the event but am stuck as to how I can use the arrayBuffer object that is being returned.

retrieveAudioStream = () => {
  this.socket.on('stream', (buffer) => {
     // ... how can I listen to the buffer as audio
  })
}

  1. Is the way I am streaming audio the best / only way I can upload to the node server?
  2. How can I listen to the arrayBuffer object that is being returned on my client side?

Solution

  1. Is the way I am streaming audio the best / only way I can upload to the node server?

Not really the best, but I have seen worse. It is not the only way either; that said, using websockets is considered OK from a latency point of view, since you want things to be "live" rather than sending an HTTP POST request every 5 seconds.

  2. How can I listen to the arrayBuffer object that is being returned on my client side?

You can try BaseAudioContext.decodeAudioData to listen to the streamed data; the example in the docs is pretty simple.
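For reference, decodeAudioData expects an ArrayBuffer containing encoded audio (for example a complete WAV or WebM chunk), not the raw Int16 samples produced by the ScriptProcessor above, so it pairs best with something like the MediaRecorder approach mentioned below. A minimal client sketch under that assumption:

// Assumes each 'stream' payload is an ArrayBuffer holding an independently
// decodable encoded audio chunk; raw PCM from the ScriptProcessor will not decode.
retrieveAudioStream = () => {
  const context = new AudioContext();
  this.socket.on('stream', (buffer) => {
    context.decodeAudioData(buffer)
      .then((audioBuffer) => {
        const source = context.createBufferSource();
        source.buffer = audioBuffer;
        source.connect(context.destination);
        source.start(); // play the chunk as soon as it is decoded
      })
      .catch((err) => console.error('decode failed', err));
  });
}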


From the code snippets you provide, I assume you want to build something from scratch to learn how things work.

In that case, you can try the MediaStream Recording API along with a websocket server that sends the chunks to X clients so they can reproduce the audio, etc.
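A minimal sketch of that recording side, assuming stream comes from getUserMedia and socket is the same Socket.IO connection used above:

// MediaRecorder produces encoded chunks (e.g. audio/webm) at a fixed interval.
const startRecording = (stream, socket) => {
  const recorder = new MediaRecorder(stream, { mimeType: 'audio/webm' });

  recorder.ondataavailable = (event) => {
    if (event.data.size > 0) {
      // each chunk is a Blob; convert it to an ArrayBuffer before emitting
      event.data.arrayBuffer().then((buffer) => socket.emit('stream', buffer));
    }
  };

  recorder.start(250); // emit a chunk roughly every 250 ms
}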

It would also make sense to invest time into the WebRTC API, to learn how to stream from one client to another.

Also take a look at the links below for some useful information.
