Web Audio API Analyser Node Not Working With Microphone Input


Question

The Chrome Canary bug that prevented getting microphone input (http://code.google.com/p/chromium/issues/detail?id=112367) is now fixed. That part does seem to be working: I can assign the mic input to an audio element and hear the results through the speakers.

But I'd like to connect an analyser node in order to do an FFT. The analyser node works fine if I set the audio source to a local file. The problem is that when it's connected to the mic audio stream, the analyser node just returns the base value, as if it had no audio stream at all. (It's -100 over and over again, if you're curious.)

Anyone know what's up? Is it not implemented yet? Is this a Chrome bug? I'm running 26.0.1377.0 on Windows 7 with the getUserMedia flag enabled, and I'm serving over localhost via Python's SimpleHTTPServer so the page can request permissions.

Code:

var aCtx = new webkitAudioContext();
var analyser = aCtx.createAnalyser();
if (navigator.getUserMedia) {
    navigator.getUserMedia({audio: true}, function(stream) {
        // point the page's <audio> element at the mic stream
        // audio.src = "stupid.wav"
        audio.src = window.URL.createObjectURL(stream);
    }, onFailure);
}
$('#audio').on("loadeddata", function(){
    source = aCtx.createMediaElementSource(audio);
    source.connect(analyser);
    analyser.connect(aCtx.destination);
    process();
});

Again, if I set audio.src to the commented-out version it works, but with the microphone it doesn't. process() contains:

var FFTData = new Float32Array(analyser.frequencyBinCount);
analyser.getFloatFrequencyData(FFTData);
console.log(FFTData[0]); // always -100 with the mic stream

I've also tried using createMediaStreamSource and bypassing the audio element entirely (example 4 at https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/webrtc-integration.html). Also unsuccessful. :(

if (navigator.getUserMedia) {
    navigator.getUserMedia({audio: true}, function(stream) {
        var microphone = context.createMediaStreamSource(stream);
        microphone.connect(analyser);
        analyser.connect(aCtx.destination);
        process();
    });
}

I imagine it might be possible to write the media stream to a buffer and then use dsp.js or something to do the FFT, but I wanted to check here first before I go down that road.
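
For what it's worth, a rough sketch of that fallback might look something like this, reusing aCtx and the getUserMedia stream from above. The node is createScriptProcessor in newer builds (createJavaScriptNode in older ones), and the dsp.js call is just an assumption on my part, not tested code:

// Hypothetical fallback: tap raw mic samples and FFT them externally.
var micSource = aCtx.createMediaStreamSource(stream);
var tap = aCtx.createScriptProcessor(2048, 1, 1); // bufferSize, inputs, outputs
micSource.connect(tap);
tap.connect(aCtx.destination); // a ScriptProcessorNode only fires while connected
tap.onaudioprocess = function(e) {
    var samples = e.inputBuffer.getChannelData(0); // Float32Array of raw PCM
    // hand samples to dsp.js here, e.g. new FFT(2048, aCtx.sampleRate).forward(samples)
};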

Answer

It was a variable scoping issue. In the second example, I was defining the microphone source locally and then trying to access its stream from the analyser in another function. I just made all the Web Audio API nodes globals for peace of mind. Also, it takes a few seconds for the analyser node to start reporting non -100 values. Working code for those interested:

// Globals
var aCtx;
var analyser;
var microphone;
if (navigator.getUserMedia) {
    navigator.getUserMedia({audio: true}, function(stream) {
        aCtx = new webkitAudioContext();
        analyser = aCtx.createAnalyser();
        microphone = aCtx.createMediaStreamSource(stream);
        microphone.connect(analyser);
        // analyser.connect(aCtx.destination);
        process();
    });
}
function process(){
    setInterval(function(){
        var FFTData = new Float32Array(analyser.frequencyBinCount);
        analyser.getFloatFrequencyData(FFTData);
        console.log(FFTData[0]);
    }, 10);
}
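
In case it helps with reading the console output: bin i of the FFTData array covers roughly i * sampleRate / fftSize Hz. The binToHz helper below is my own hypothetical addition, not part of the original answer:

// Hypothetical helper: frequency (in Hz) represented by FFT bin i.
function binToHz(i) {
    return i * aCtx.sampleRate / analyser.fftSize;
}
console.log(binToHz(1)); // about 21.5 Hz at 44100 Hz with the default fftSize of 2048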

If you would like to hear the live audio, you can connect the analyser to the destination (the speakers), as commented out above. Watch out for some lovely feedback though!
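
If the feedback is a problem, one common pattern (not from the original answer) is to put a GainNode between the analyser and the speakers so the monitor level can be turned down. Note that createGain was called createGainNode in older prefixed builds:

// Hypothetical monitoring chain with a level control.
var monitorGain = aCtx.createGain(); // createGainNode() on older builds
analyser.connect(monitorGain);
monitorGain.connect(aCtx.destination);
monitorGain.gain.value = 0.3; // drop this if speaker-to-mic feedback builds up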
