Offline / Non-Realtime Rendering with the Web Audio API



      The Problem

      I'm working on a web application where users can sequence audio samples and optionally apply effects to the musical patterns they create using the Web Audio API. The patterns are stored as JSON data, and I'd like to do some analysis of the rendered audio of each pattern server-side. This leaves me with two options, as far as I can see:

      1. Run my own rendering code server-side, trying to make it as faithful as possible to the in-browser rendering. Maybe I could even pull out the Web Audio code from the Chromium project and modify that, but this seems like potentially a lot of work.

      2. Do the rendering client-side, hopefully faster-than-realtime, and then send the rendered audio to the server. This is ideal (and DRY), because there's only one engine being used for pattern rendering.

      The Possible Solution

      This question led me to this code sample in the Chromium repository, which seems to indicate that offline processing is a possibility. The trick seems to be constructing a webkitAudioContext with some arguments (usually, a zero-argument constructor is used). The following are my guesses at what the parameters mean:

      new webkitAudioContext(2,          // channels
                             10 * 44100, // length in samples
                             44100);     // sample rate
      

      I adapted the sample slightly and tested it in Chrome 23.0.1271.91 on Windows, Mac, and Linux. Here's the live example, and the results (open up the Dev Tools JavaScript console to see what's happening):

      • Mac - It Works!!
      • Windows - FAIL - SYNTAX_ERR: DOM Exception 12
      • Linux - FAIL - SYNTAX_ERR: DOM Exception 12

      The webkitAudioContext constructor I described above causes the exception on Windows and Linux.

      My Question

      Offline rendering would be perfect for what I'm trying to do, but I can't find documentation anywhere, and support is less-than-ideal. Does anyone have more information about this? Should I be expecting support for this in Windows and/or Linux soon, or should I be expecting support to disappear soon on Mac?

      The Solution

      I did some research on this a few months back. There is a startRendering function on the audioContext, but I was told by Google people that the implementation was, at that time, due to change. I don't think this has happened yet, and it's still not part of the official documentation, so I'd be careful about building an app that depends on it.

      The current implementation doesn't render any faster than realtime either (maybe slightly faster in very light applications), and sometimes it's even slower than realtime.

      If you need non-realtime rendering, your best bet is hitting the trenches and implementing Web Audio server-side. If you can live with realtime rendering, there's a project at https://github.com/mattdiamond/Recorderjs which might be of interest.

      Please note that I'm not a Googler myself, and what I was told was not a promise in any way.
