Using web audio api for analyzing input from microphone (convert MediaStreamSource to BufferSource)

Problem description

I am trying to get the beats per minute (BPM) using the Web Audio API, like it is done in the following links (http://joesul.li/van/beat-detection-using-web-audio/ or https://github.com/JMPerez/beats-audio-api/blob/gh-pages/script.js), but from an audio stream (microphone). Unfortunately, I can't get it running. Does somebody know how I can convert the microphone MediaStreamSource to a BufferSource and continue like on the first linked website? Here's the code I have so far:

navigator.mediaDevices.getUserMedia({ audio: true, video: false })
.then(function(stream) {
    /* use the stream */

    var OfflineContext = window.OfflineAudioContext || window.webkitOfflineAudioContext;
    var source = OfflineContext.createMediaStreamSource(stream);
    source.connect(OfflineContext);
    var offlineContext = new OfflineContext(2, 30 * 44100, 44100);

    offlineContext.decodeAudioData(stream, function(buffer) {
      // Create buffer source
      var source = offlineContext.createBufferSource();
      source.buffer = buffer;
      // Beats, or kicks, generally occur around the 100 to 150 hz range.
      // Below this is often the bassline.  So let's focus just on that.
      // First a lowpass to remove most of the song.
      var lowpass = offlineContext.createBiquadFilter();
      lowpass.type = "lowpass";
      lowpass.frequency.value = 150;
      lowpass.Q.value = 1;
      // Run the output of the source through the low pass.
      source.connect(lowpass);
      // Now a highpass to remove the bassline.
      var highpass = offlineContext.createBiquadFilter();
      highpass.type = "highpass";
      highpass.frequency.value = 100;
      highpass.Q.value = 1;
      // Run the output of the lowpass through the highpass.
      lowpass.connect(highpass);
      // Run the output of the highpass through our offline context.
      highpass.connect(offlineContext.destination);
      // Start the source, and render the output into the offline context.
      source.start(0);
      offlineContext.startRendering();
    });
})
.catch(function(err) {
    /* handle the error */
    alert("Error");
});

Thanks!

Recommended answer

Those articles are great. There are a few things wrong with your current approach:

  1. You don't need to decode the stream - you need to connect it to the web audio context with a MediaStreamAudioSourceNode, and then use a ScriptProcessor (deprecated) or an AudioWorker (not implemented everywhere yet) to grab chunks of the audio and run the detection on them. decodeAudioData takes an encoded buffer (i.e. the contents of an MP3 file), not a stream object.
  2. Keep in mind that this is a stream, not a single file - you can't really hand the beat detector an entire song audio file. Well, you can - but if you're streaming, you'd have to wait until the whole file has come in, which would be bad. You'll have to work in chunks, and the BPM may change over the course of the song. So collect one chunk at a time - maybe a second or more of audio at a time - and pass it to the beat detection code.
  3. While low-pass filtering the data is probably a good idea, high-pass filtering it is probably not worth it. Remember that these filters are not brick-wall filters - they don't cut off everything above or below their frequency, they just attenuate it.
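The steps above can be sketched as follows. This is a minimal illustration, not the articles' exact code: it assumes a ScriptProcessorNode (deprecated, as noted above, but still widely supported; AudioWorklet is the modern replacement), and `findPeaks` stands in for whatever peak detector you use (e.g. the one from the first linked article). `estimateBPM` is a simple illustrative helper, not a library function.

```javascript
// Pure helper: turn a list of peak times (in seconds) into a BPM
// estimate, using the median interval between successive peaks.
function estimateBPM(peakTimes) {
  if (peakTimes.length < 2) return null;
  const intervals = [];
  for (let i = 1; i < peakTimes.length; i++) {
    intervals.push(peakTimes[i] - peakTimes[i - 1]);
  }
  intervals.sort(function (a, b) { return a - b; });
  const median = intervals[Math.floor(intervals.length / 2)];
  return 60 / median;
}

// Browser wiring: connect the microphone stream to a live AudioContext
// (not an OfflineAudioContext), collect ~1 s chunks, and run detection
// on each chunk as it fills up.
function startBeatDetection(onBPM) {
  navigator.mediaDevices.getUserMedia({ audio: true, video: false })
    .then(function (stream) {
      const ctx = new (window.AudioContext || window.webkitAudioContext)();
      const source = ctx.createMediaStreamSource(stream);

      // Low-pass to focus on the kick range, per point 3 above.
      const lowpass = ctx.createBiquadFilter();
      lowpass.type = "lowpass";
      lowpass.frequency.value = 150;
      source.connect(lowpass);

      const processor = ctx.createScriptProcessor(4096, 1, 1);
      let chunk = [];
      const samplesPerChunk = ctx.sampleRate; // ~1 second of audio
      processor.onaudioprocess = function (e) {
        chunk.push(...e.inputBuffer.getChannelData(0));
        if (chunk.length >= samplesPerChunk) {
          const peaks = findPeaks(chunk, ctx.sampleRate); // your detector
          onBPM(estimateBPM(peaks));
          chunk = [];
        }
      };
      lowpass.connect(processor);

      // ScriptProcessor must be connected to keep firing; route it
      // through a muted gain so the mic isn't echoed to the speakers.
      const mute = ctx.createGain();
      mute.gain.value = 0;
      processor.connect(mute);
      mute.connect(ctx.destination);
    });
}
```

Note that no OfflineAudioContext or decodeAudioData appears here at all: the live context pulls samples from the stream continuously, and the chunking replaces the "render the whole buffer offline" step from the file-based articles.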
