Is there a way to use the Web Audio API to sample audio faster than real-time?
I'm playing around with the Web Audio API & trying to find a way to import an mp3 (so therefore this is only in Chrome), and generate a waveform of it on a canvas. I can do this in real-time, but my goal is to do this faster than real-time.
All the examples I've been able to find involve reading the frequency data from an analyser object, in a function attached to the onaudioprocess event:
processor = context.createJavaScriptNode(2048, 1, 1);
processor.onaudioprocess = processAudio;
...
function processAudio(e) {
    var freqByteData = new Uint8Array(analyser.frequencyBinCount);
    analyser.getByteFrequencyData(freqByteData);
    // calculate magnitude & render to canvas
}
It appears, though, that the frequency data behind analyser.frequencyBinCount is only populated while the sound is actually playing (something about the buffer being filled).
What I want is to be able to manually/programmatically step through the file as fast as possible, to generate the canvas image.
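Whatever feeds the decoded samples (real-time graph or otherwise), the canvas side reduces to downsampling a channel's Float32Array (as returned by AudioBuffer.getChannelData()) into one min/max pair per pixel column. A minimal sketch of that step, independent of any audio graph (the helper name computePeaks and the column count here are my own, not from the API):

```javascript
// Reduce a Float32Array of PCM samples (range -1..1) to one
// { min, max } pair per pixel column, for drawing a waveform.
function computePeaks(samples, columns) {
  var peaks = [];
  var samplesPerColumn = Math.floor(samples.length / columns);
  for (var c = 0; c < columns; c++) {
    var start = c * samplesPerColumn;
    var min = 1.0, max = -1.0;
    for (var i = start; i < start + samplesPerColumn; i++) {
      if (samples[i] < min) min = samples[i];
      if (samples[i] > max) max = samples[i];
    }
    peaks.push({ min: min, max: max });
  }
  return peaks;
}

// Example: a synthetic full-scale sine reduced to 4 columns.
var samples = new Float32Array(4096);
for (var i = 0; i < samples.length; i++) {
  samples[i] = Math.sin(2 * Math.PI * i / 64);
}
var peaks = computePeaks(samples, 4);
// Each column of a full-scale sine spans roughly -1..1.
```

Drawing is then one vertical line per column from min to max on the canvas; the loop runs as fast as the CPU allows, with no real-time constraint.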
What I've got so far is this:
$("#files").on('change', function(e) {
    var FileList = e.target.files,
        Reader = new FileReader();
    var File = FileList[0];
    Reader.onload = (function(theFile) {
        return function(e) {
            context.decodeAudioData(e.target.result, function(buffer) {
                source.buffer = buffer;
                source.connect(analyser);
                analyser.connect(jsNode);
                var freqData = new Uint8Array(buffer.getChannelData(0));
                console.dir(analyser);
                console.dir(jsNode);
                jsNode.connect(context.destination);
                //source.noteOn(0);
            });
        };
    })(File);
    Reader.readAsArrayBuffer(File);
});
But getChannelData() always returns an empty typed array.
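Incidentally, that result may partly be a conversion artifact rather than a truly empty buffer: getChannelData() returns a Float32Array of samples in the range -1 to 1, and constructing a Uint8Array from it truncates nearly every value toward zero. A small sketch of the effect:

```javascript
// getChannelData() yields floats in the range -1..1; building a
// Uint8Array from them truncates each value toward zero, so
// almost every sample collapses to 0.
var floatSamples = new Float32Array([0.5, -0.3, 0.99, 1.0]);
var asBytes = new Uint8Array(floatSamples);
// asBytes is [0, 0, 0, 1]
```

So even a fully decoded buffer looks like a wall of zeros after that cast; the byte-oriented view is what analyser.getByteFrequencyData() fills in, not something obtained by casting the raw samples.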
Any insight is appreciated - even if it turns out it can't be done. I think I'm the only one on the Internet not wanting to do stuff in real-time.
Thanks.
There is a really amazing 'offline' mode of the Web Audio API that allows you to pre-process an entire file through an audio context and then do something with the result:
var context = new webkitOfflineAudioContext();
var source = context.createBufferSource();
source.buffer = buffer;
source.connect(context.destination);
source.noteOn(0);
context.oncomplete = function(e) {
    var audioBuffer = e.renderedBuffer;
};
context.startRendering();
So the setup looks exactly the same as the real-time processing mode, except that you also set the oncomplete callback and make the call to startRendering(). What you get back in e.renderedBuffer is an AudioBuffer.