OfflineAudioContext and FFT in Safari


Problem description

I am using OfflineAudioContext to do waveform analysis in the background.

All works fine in Chrome, Firefox and Opera, but in Safari I get very dodgy behaviour: the waveform should be composed of many samples (329), but in Safari there are only ~38.

window.AudioContext = window.AudioContext || window.webkitAudioContext;
window.OfflineAudioContext = window.OfflineAudioContext || 
window.webkitOfflineAudioContext;

const sharedAudioContext = new AudioContext();

const audioURL = 'https://s3-us-west-2.amazonaws.com/s.cdpn.io/1141585/song.mp3';

const audioDidLoad = ( buffer ) =>
{
  console.log("audio decoded");
  var samplesCount = 0;
  const context = new OfflineAudioContext(1, buffer.length, 44100);
  const source = context.createBufferSource();
  const processor = context.createScriptProcessor(2048, 1, 1);

  const analyser = context.createAnalyser();
  analyser.fftSize = 2048;
  analyser.smoothingTimeConstant = 0.25;

  source.buffer = buffer;

  source.connect(analyser);
  analyser.connect(processor);
  processor.connect(context.destination);

  var freqData = new Uint8Array(analyser.frequencyBinCount);
  processor.onaudioprocess = () =>
  {
    analyser.getByteFrequencyData(freqData);
    samplesCount++;
  };

  source.start(0);
  context.startRendering();

  context.oncomplete = (e) => {
    document.getElementById('result').innerHTML = 'Read ' + samplesCount + ' samples';

    source.disconnect( analyser );
    processor.disconnect( context.destination );
  };
};

var request = new XMLHttpRequest();
request.open('GET', audioURL, true);
request.responseType = 'arraybuffer';
request.onload = () => {
  var audioData = request.response;
  sharedAudioContext.decodeAudioData(
    audioData,
    audioDidLoad,
    e => { console.log("Error with decoding audio data: " + e.message); }
  );
};
request.send();

See the Codepen.

Answer

I think Safari has the correct behavior here, not the others. The way onaudioprocess works is this: you give a buffer size (the first parameter when you create your scriptProcessor, here 2048 samples), and each time that buffer has been processed, the event is triggered. So you take your sample rate (which by default is 44.1 kHz, i.e. 44100 samples per second), divide by the buffer size (the number of samples processed each time), and you get the number of times per second an audioprocess event will be triggered. See https://webaudio.github.io/web-audio-api/#OfflineAudioContext-methods

This value controls how frequently the onaudioprocess event is dispatched and how many sample-frames need to be processed each call.
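For a realtime context, the dispatch rate the quote describes can be computed directly. A quick sketch, using the default sample rate and the 2048-sample buffer size from the question:

```javascript
// How often onaudioprocess fires on a *realtime* AudioContext:
// events per second = sample rate / buffer size.
const sampleRate = 44100;  // default, 44100 samples per second
const bufferSize = 2048;   // first argument to createScriptProcessor

const eventsPerSecond = sampleRate / bufferSize;
console.log(eventsPerSecond.toFixed(2)); // ~21.53 events per second
```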

That's true when you're actually playing the sound: you need to process the proper amount at the proper time so that the sound is played back correctly. But an OfflineAudioContext processes the audio without caring about real playback time.

It does not render to the audio hardware, but instead renders as quickly as possible, fulfilling the returned promise with the rendered result as an AudioBuffer

So with OfflineAudioContext, there's no need for any timing calculation. Chrome and the others seem to trigger onaudioprocess each time a buffer is processed, but with an offline audio context that isn't really required.

That being said, there's also normally no need to use onaudioprocess with an OfflineAudioContext, except maybe to get a sense of the performance; all the data is available from the context. Also, the figure of 329 doesn't mean much on its own: it's basically just the number of samples divided by the buffer size. In your example you have a source of 673830 samples at 44100 samples per second, so your audio is about 15.28 seconds long. If you process 2048 samples at a time, you process the audio about 329 times, which is the 329 you get with Chrome. There's no need to use onaudioprocess to get this number.
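That arithmetic can be checked directly. A small sketch, using the sample count from the question's decoded buffer:

```javascript
const totalSamples = 673830; // buffer.length from the decoded MP3 in the question
const sampleRate = 44100;
const bufferSize = 2048;

// Duration of the audio in seconds.
const durationSeconds = totalSamples / sampleRate;

// How many complete 2048-sample blocks the source contains,
// i.e. how many times onaudioprocess fires in Chrome.
const fullBuffers = Math.floor(totalSamples / bufferSize);

console.log(durationSeconds.toFixed(2), fullBuffers); // 15.28 329
```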

And since you use an offline audio context, there's no need to process these samples in real time, or even to fire onaudioprocess for every 2048 samples.
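If the goal is just the rendered PCM data, a script processor isn't needed at all: the rendered buffer is available once rendering completes. A minimal browser-only sketch (reusing `buffer` from the question; the promise form of startRendering is assumed, which current browsers support alongside the oncomplete event used above):

```javascript
// Browser-only sketch: read the rendered samples without a ScriptProcessorNode.
const context = new OfflineAudioContext(1, buffer.length, 44100);
const source = context.createBufferSource();
source.buffer = buffer;
source.connect(context.destination);
source.start(0);

context.startRendering().then((renderedBuffer) => {
  // renderedBuffer is an AudioBuffer holding the complete result.
  const channelData = renderedBuffer.getChannelData(0);
  console.log('Rendered ' + channelData.length + ' samples');
});
```

From channelData you can then slice out 2048-sample windows yourself and analyse each one, without depending on how often a given browser fires onaudioprocess.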
