Encode AudioBuffer with Opus (or other codec) in Browser


Question

I am trying to stream audio via WebSocket.

I can get an AudioBuffer from the microphone (or another source) via the Web Audio API and stream the raw audio buffer, but I don't think that would be very efficient, so I looked around for a way to encode the AudioBuffer. If the Opus codec is not practicable, I am open to alternatives and thankful for any hints in the right direction.

I have tried the MediaRecorder (from the MediaStream Recording API), but that API only seems to support plain recording, not streaming.

Here is the part where I get the raw AudioBuffer:

const handleSuccess = function(stream) {
    const context = new AudioContext();
    const source = context.createMediaStreamSource(stream);
    const processor = context.createScriptProcessor(16384, 1, 1);

    source.connect(processor);
    processor.connect(context.destination);

    processor.onaudioprocess = function(e) {
        const bufferLen = e.inputBuffer.length;
        const inputBuffer = new Float32Array(bufferLen);
        e.inputBuffer.copyFromChannel(inputBuffer, 0);

        const dataToSend = inputBuffer;
        // And send the Float32Array ...
    };
};

navigator.mediaDevices.getUserMedia({ audio: true, video: false })
    .then(handleSuccess);
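As a stopgap before real encoding, the 32-bit float samples from the ScriptProcessor can at least be quantised to 16-bit PCM, which halves the bytes per message. A minimal sketch (the function name `floatTo16BitPCM` is my own, not from any API):

```javascript
// Quantise Float32 samples in [-1, 1] to 16-bit signed PCM.
// Halves the payload compared to sending the raw Float32Array.
function floatTo16BitPCM(float32Samples) {
  const pcm = new Int16Array(float32Samples.length);
  for (let i = 0; i < float32Samples.length; i++) {
    // Clamp to [-1, 1] to avoid integer overflow on hot signals.
    const s = Math.max(-1, Math.min(1, float32Samples[i]));
    // Scale asymmetrically: the 16-bit range is -32768 .. 32767.
    pcm[i] = s < 0 ? s * 0x8000 : s * 0x7FFF;
  }
  return pcm;
}
```

Inside `onaudioprocess` one would then send `floatTo16BitPCM(inputBuffer).buffer` over the WebSocket instead of the raw floats.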

So the main question is: how can I encode the AudioBuffer (and decode it at the receiver)? Is there an API or library for this? Can I get an encoded buffer from another API in the browser?

Answer

The Web Audio API has a MediaStreamAudioDestinationNode (created with createMediaStreamDestination()) that exposes a .stream MediaStream, which you can then pass through the WebRTC API.

But if you are only dealing with a microphone input, pass that MediaStream directly to WebRTC; there is no need for the Web Audio step.
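Putting the two steps together, a hedged sketch of the Web Audio → WebRTC route (the function name is mine, and the caller is assumed to have already created the AudioContext, a source node, and an RTCPeerConnection with its signalling):

```javascript
// Route a Web Audio graph into a WebRTC connection, which encodes
// to Opus natively. `sourceNode` can be any AudioNode, e.g. the
// MediaStreamAudioSourceNode created from the getUserMedia stream.
function connectAudioToPeer(audioContext, sourceNode, peerConnection) {
  // MediaStreamAudioDestinationNode exposes a .stream MediaStream.
  const destination = audioContext.createMediaStreamDestination();
  sourceNode.connect(destination);
  // WebRTC takes over Opus encoding, packetisation and transport.
  for (const track of destination.stream.getAudioTracks()) {
    peerConnection.addTrack(track, destination.stream);
  }
  return destination.stream;
}
```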

PS: for those who only want to encode to Opus, MediaRecorder is currently the only native way. It incurs a delay, generates a WebM file rather than just the raw data, and processes the data no faster than real time.
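For completeness, a sketch of how MediaRecorder can at least approximate streaming by emitting WebM chunks on a timeslice (the helper name and the `sendChunk` callback are assumptions; note that chunks after the first are not independently decodable, so the receiver must concatenate them in order):

```javascript
// Ask MediaRecorder for a Blob roughly every `timesliceMs`
// milliseconds and forward each non-empty chunk to `sendChunk`,
// e.g. a WebSocket's send(). The recorder would be created with a
// mime type like 'audio/webm;codecs=opus' where supported.
function streamRecorderChunks(recorder, sendChunk, timesliceMs = 250) {
  recorder.ondataavailable = (event) => {
    if (event.data && event.data.size > 0) {
      sendChunk(event.data);
    }
  };
  recorder.start(timesliceMs);
}
```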

The only other option right now is to write your own encoder and run it in WebAssembly.

Hopefully, in the near future, we'll have access to the WebCodecs API, which should solve this use case among others.
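When WebCodecs does land, the shape should be roughly as below. This is a speculative sketch based on the proposed API, so names and availability must be checked against the browser you target; the `createOpusEncoder` wrapper is my own:

```javascript
// Speculative sketch of a WebCodecs-based Opus encoder. AudioEncoder
// is the proposed global; `onPacket` would receive encoded chunks
// that can be sent over the WebSocket.
function createOpusEncoder(onPacket) {
  const encoder = new AudioEncoder({
    output: (chunk) => onPacket(chunk),
    error: (err) => console.error('encode error:', err),
  });
  encoder.configure({
    codec: 'opus',
    sampleRate: 48000,   // Opus operates internally at 48 kHz
    numberOfChannels: 1,
  });
  return encoder;
}
```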
