Web Audio API for live streaming?

Question

We need to stream live audio (from a medical device) to web browsers with no more than 3-5 s of end-to-end delay (assume 200 ms or less of network latency). Today we use a browser plugin (NPAPI) for decoding, filtering (high, low, band), and playback of the audio stream (delivered via Web Sockets).

We would like to replace the plugin.

I have been looking at various Web Audio API demos, and most of our required functionality (playback, gain control, filtering) appears to be available in the Web Audio API. However, it is not clear to me whether the Web Audio API can be used for streamed sources, as most Web Audio API examples make use of short sounds and/or audio clips.
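For context, the playback, gain-control, and filtering pieces map onto Web Audio API nodes roughly as in this minimal sketch (the filter type, frequency, and gain values below are placeholders, not our actual settings):

```javascript
// Minimal sketch of a Web Audio processing chain: source -> filter -> gain -> speakers.
// The cutoff frequency, Q, and gain values are placeholders, not real settings.
const audioCtx = new (window.AudioContext || window.webkitAudioContext)();

const bandpass = audioCtx.createBiquadFilter();
bandpass.type = 'bandpass';
bandpass.frequency.value = 1000;   // center frequency in Hz (placeholder)
bandpass.Q.value = 1.0;

const gainNode = audioCtx.createGain();
gainNode.gain.value = 0.8;         // volume control (placeholder)

// Any source node (buffer source, media element, script processor) can feed the chain:
// sourceNode.connect(bandpass);
bandpass.connect(gainNode);
gainNode.connect(audioCtx.destination);
```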

Can Web Audio API be used to play live streamed audio?

Update (Feb 11, 2015):

After a bit more research and local prototyping, I am not sure live audio streaming with the Web Audio API is possible, as the Web Audio API's decodeAudioData isn't really designed to handle random chunks of audio data (in our case delivered via WebSockets). It appears to need the whole 'file' in order to process it correctly.
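For illustration, the naive per-chunk approach I prototyped looked roughly like this (the WebSocket endpoint is hypothetical); mid-stream chunks usually lack the headers and frame alignment decodeAudioData expects, so the error callback fires:

```javascript
// Naive attempt (sketch): decode each WebSocket message independently.
// This tends to fail because most chunks don't start at a valid file/frame
// boundary, so decodeAudioData rejects them.
const audioCtx = new AudioContext();
const ws = new WebSocket('wss://example.invalid/audio'); // hypothetical endpoint
ws.binaryType = 'arraybuffer';

ws.onmessage = (event) => {
  audioCtx.decodeAudioData(
    event.data,
    (buffer) => {
      const src = audioCtx.createBufferSource();
      src.buffer = buffer;
      src.connect(audioCtx.destination);
      src.start();   // no scheduling either, so chunks can gap or overlap
    },
    (err) => console.warn('decode failed for this chunk', err)
  );
};
```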

See stackoverflow:

  • How to stream MP3 data via WebSockets with node.js and socket.io?
  • Define 'valid mp3 chunk' for decodeAudioData (WebAudio API)

Now it is possible with createMediaElementSource to connect an <audio> element to the Web Audio API, but in my experience the <audio> element introduces a huge amount of end-to-end delay (15-30 s), and there doesn't appear to be any way to reduce the delay to below 3-5 seconds.
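For reference, the wiring is only a few lines (sketch, assuming an <audio> element already exists on the page); the latency comes from the element's own internal buffering, not from the Web Audio graph:

```javascript
// Sketch: route an existing <audio> element through the Web Audio graph.
// Processing (gain, filters) works on this path, but the element's internal
// buffering still dominates end-to-end latency.
const audioCtx = new AudioContext();
const audioEl = document.querySelector('audio');          // assumes an <audio> tag on the page
const elementSource = audioCtx.createMediaElementSource(audioEl);
elementSource.connect(audioCtx.destination);              // or into a filter/gain chain
audioEl.play();
```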

I think the only solution is to use WebRTC with the Web Audio API. I was hoping to avoid WebRTC, as it will require significant changes to our server-side implementation.

Update (Feb 12, 2015) Part I

I haven't completely eliminated the <audio> tag (I need to finish my prototype). Once I have ruled it out, I suspect createScriptProcessor (deprecated but still supported) will be a good choice for our environment, as I could 'stream' (via WebSockets) our ADPCM data to the browser and then (in JavaScript) convert it to PCM, similar to what Scott's library (see below) does with createScriptProcessor. This method doesn't require the data to arrive in properly sized 'chunks' with critical timing, as the decodeAudioData approach does.
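A rough sketch of that approach, assuming a hypothetical decodeAdpcmToPcm() decoder and an illustrative buffer size:

```javascript
// Sketch: feed PCM decoded from WebSocket messages into a ScriptProcessorNode.
// decodeAdpcmToPcm() is a hypothetical placeholder for an ADPCM -> Float32Array decoder.
const audioCtx = new AudioContext();
const pcmQueue = [];                                       // FIFO of Float32Array chunks
const ws = new WebSocket('wss://example.invalid/adpcm');   // hypothetical endpoint
ws.binaryType = 'arraybuffer';

ws.onmessage = (event) => {
  pcmQueue.push(decodeAdpcmToPcm(event.data));             // placeholder decoder
};

const processor = audioCtx.createScriptProcessor(4096, 1, 1); // illustrative buffer size, mono
processor.onaudioprocess = (e) => {
  const out = e.outputBuffer.getChannelData(0);
  let written = 0;
  while (written < out.length && pcmQueue.length > 0) {
    const chunk = pcmQueue[0];
    const n = Math.min(out.length - written, chunk.length);
    out.set(chunk.subarray(0, n), written);
    written += n;
    if (n === chunk.length) pcmQueue.shift();
    else pcmQueue[0] = chunk.subarray(n);                  // keep the unread remainder
  }
  // Any samples left unwritten stay silent if the network falls behind.
};
processor.connect(audioCtx.destination);
```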

Update (Feb 12, 2015) Part II

After more testing, I eliminated the <audio>-element-to-Web-Audio-API interface because, depending on source type, compression, and browser, the end-to-end delay can be 3-30 s. That leaves the createScriptProcessor method (see Scott's post below) or WebRTC. After discussing with our decision makers, it has been decided we will take the WebRTC approach. I assume it will work, but it will require changes to our server-side code.

I'm going to mark the first answer, just so the 'question' is closed.

Thanks for listening. Feel free to add comments as needed.

Answer

Yes, the Web Audio API (along with AJAX or Websockets) can be used for streaming.

Basically, you pull down (or send, in the case of Websockets) some chunks of n length. Then you decode them with the Web Audio API and queue them up to be played, one after the other.

Because the Web Audio API has high-precision timing, you won't hear any "seams" between the playback of each buffer if you do the scheduling correctly.
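A minimal sketch of that decode-and-schedule pattern, assuming each received chunk is independently decodable (e.g. aligned on valid frame boundaries):

```javascript
// Sketch: decode incoming chunks and schedule each decoded buffer back-to-back
// on the AudioContext clock so playback has no audible seams.
const audioCtx = new AudioContext();
let nextStartTime = 0;

function playChunk(arrayBuffer) {
  audioCtx.decodeAudioData(arrayBuffer, (decoded) => {
    const src = audioCtx.createBufferSource();
    src.buffer = decoded;
    src.connect(audioCtx.destination);

    // Start either now (first chunk / after an underrun) or exactly where the
    // previous buffer ends, using the context's high-precision clock.
    const startAt = Math.max(audioCtx.currentTime, nextStartTime);
    src.start(startAt);
    nextStartTime = startAt + decoded.duration;
  });
}

// Usage: call playChunk(msg.data) for each ArrayBuffer received via
// AJAX/fetch or a WebSocket message handler.
```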
