Overlay two audio buffers into one buffer source


Question


Trying to merge two buffers into one; I have been able to create the two buffers from the audio files and load and play them. Now I need to merge the two buffers into one buffer. How can they get merged?

var context = new webkitAudioContext();
var bufferLoader = new BufferLoader(
  context,
  [
    'audio1.mp3',
    'audio2.mp3',
  ],
  finishedLoading
);

bufferLoader.load();

function finishedLoading(bufferList) {
  // Create the two buffer sources and play them both together.
  var source1 = context.createBufferSource();
  var source2 = context.createBufferSource();
  source1.buffer = bufferList[0];
  source2.buffer = bufferList[1];

  source1.connect(context.destination);
  source2.connect(context.destination);
  source1.start(0);
  source2.start(0);  
}


Now these sources are loaded separately and are played at the same time; but how do I merge these two sources into one buffer source? I do NOT want to append them, I want to overlay/merge them.

Explanations and/or snippets would be great.

Answer

In audio, to mix two audio streams (here, buffers) into one, you can simply add each sample value together. Practically, here is how we can do this, building on your snippet:

/* `buffers` is a JavaScript array containing all the AudioBuffers you want
 * to mix. */
function mix(buffers) {
  /* Get the maximum duration and maximum number of channels across all
   * buffers, so we can allocate an AudioBuffer of the right size. */
  var maxChannels = 0;
  var maxDuration = 0;
  for (var i = 0; i < buffers.length; i++) {
    if (buffers[i].numberOfChannels > maxChannels) {
      maxChannels = buffers[i].numberOfChannels;
    }
    if (buffers[i].duration > maxDuration) {
      maxDuration = buffers[i].duration;
    }
  }
  var mixed = context.createBuffer(maxChannels,
                                   Math.ceil(context.sampleRate * maxDuration),
                                   context.sampleRate);

  for (var j = 0; j < buffers.length; j++) {
    for (var srcChannel = 0; srcChannel < buffers[j].numberOfChannels; srcChannel++) {
      /* Get the channel we will mix into */
      var out = mixed.getChannelData(srcChannel);
      /* Get the channel we want to mix in */
      var toMix = buffers[j].getChannelData(srcChannel);
      for (var i = 0; i < toMix.length; i++) {
        out[i] += toMix[i];
      }
    }
  }
  return mixed;
}

Then, simply assign the return value of this function to the buffer property of a new AudioBufferSourceNode, and play it as usual.
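For example, the mixed buffer could be wrapped in a small helper like this; `playMixed` is an illustrative name, not part of the Web Audio API:

```javascript
// Minimal sketch: wrap the mixed AudioBuffer in a source node and play it.
// `playMixed` is an illustrative name, not a Web Audio API function.
function playMixed(context, mixedBuffer) {
  var source = context.createBufferSource();
  source.buffer = mixedBuffer;          // the buffer returned by mix()
  source.connect(context.destination);  // route to the speakers
  source.start(0);                      // play immediately
  return source;                        // keep a handle so you can stop() it later
}
```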

A couple of notes on this simplified snippet:



  • If you have a mono buffer and a stereo buffer, you will only hear the mono buffer in the left channel of the mixed buffer. If you want it copied to both the left and right channels, you will have to do what is called up-mixing;
  • If you want one buffer to be quieter or louder than another (as if you had moved a volume fader on a mixing console), simply multiply the toMix[i] value by a number smaller than 1.0 to make it quieter, or greater than 1.0 to make it louder.
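The volume note above can be sketched as a pure function on plain Float32Arrays (so it also runs outside a browser); `mixWithGains` and its parameter names are illustrative, not part of the Web Audio API:

```javascript
// Sum several channels of samples, scaling each by its own gain.
// `channels` is an array of Float32Array, `gains` an array of numbers
// (one gain per channel). Both names are illustrative.
function mixWithGains(channels, gains) {
  var maxLength = 0;
  for (var i = 0; i < channels.length; i++) {
    if (channels[i].length > maxLength) {
      maxLength = channels[i].length;
    }
  }
  var out = new Float32Array(maxLength); // zero-filled by default
  for (var i = 0; i < channels.length; i++) {
    for (var j = 0; j < channels[i].length; j++) {
      out[j] += channels[i][j] * gains[i]; // scale while summing
    }
  }
  return out;
}
```

A gain of 0.5 halves a source's contribution, 2.0 doubles it; values well above 1.0 risk clipping once the sum leaves the [-1, 1] range.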

Then again, the Web Audio API does all of that for you, so I wonder why you need to do it yourself, but at least now you know how :-).
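For completeness, here is a sketch of letting the API do the work: every node connected to the same destination is summed automatically, and a GainNode per source replaces the manual multiplication. `playTogether` is an illustrative name, not part of the Web Audio API:

```javascript
// Play several buffers at once, each with its own volume.
// The graph destination sums all incoming signals for us.
// `playTogether` is an illustrative name.
function playTogether(context, buffers, gains) {
  for (var i = 0; i < buffers.length; i++) {
    var source = context.createBufferSource();
    source.buffer = buffers[i];
    var gain = context.createGain();
    gain.gain.value = gains[i];        // per-source volume
    source.connect(gain);
    gain.connect(context.destination); // summing happens at the destination
    source.start(0);
  }
}
```

The trade-off is that this mixes in real time on playback rather than producing a single AudioBuffer you can reuse or export.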
