When I run AudioBufferSourceNode.start() when I have multiple tracks, I sometimes get a delay


Problem Description

I am making an application that reads and plays two audio files.
CodeSandBox
The above CodeSandBox has the following specifications.

  • Press the "play" button to play the audio.
  • The volume of each of the two tracks can be changed.

When playing audio, there is sometimes a delay.
However, there is not always an audio delay, and at times the two tracks play back at exactly the same time.

Although not implemented in the CodeSandBox above, the application I am currently working on implements a seek bar to indicate the current playback position. Moving the seek bar reloads the audio, which can resolve the resulting delay. On the other hand, moving the seek bar may introduce a delay even when the audio had been playing at exactly the same timing.

Anyway, is there a way to play multiple audio tracks at the same time in a stable and consistent manner?

import { useEffect, useState } from "react";

let ctx,
  tr1,
  tr2,
  tr1gain,
  tr2gain,
  start = false;

const trackList = ["track1", "track2"];

const App = () => {
  useEffect(() => {
    ctx = new AudioContext();
    tr1 = ctx.createBufferSource();
    tr2 = ctx.createBufferSource();
    tr1gain = ctx.createGain();
    tr2gain = ctx.createGain();
    trackList.forEach(async (item) => {
      const res = await fetch("/" + item + ".mp3");
      const arrayBuffer = await res.arrayBuffer();
      const audioBuffer = await ctx.decodeAudioData(arrayBuffer);
      item === "track1"
        ? (tr1.buffer = audioBuffer)
        : (tr2.buffer = audioBuffer);
    });
    tr1.connect(tr1gain);
    tr1gain.connect(ctx.destination);
    tr2.connect(tr2gain);
    tr2gain.connect(ctx.destination);
    return () => ctx.close();
  }, []);

  const [playing, setPlaying] = useState(false);
  const playAudio = () => {
    if (!start) {
      tr1.start();
      tr2.start();
      start = true;
    }
    ctx.resume();
    setPlaying(true);
  };
  const pauseAudio = () => {
    ctx.suspend();
    setPlaying(false);
  };

  const changeVolume = (e) => {
    const target = e.target.ariaLabel;
    target === "track1"
      ? (tr1gain.gain.value = e.target.value)
      : (tr2gain.gain.value = e.target.value);
  };
  const Inputs = trackList.map((item, index) => (
    <div key={index}>
      <span>{item}</span>
      <input
        type="range"
        onChange={changeVolume}
        step="any"
        max="1"
        aria-label={item}
      />
    </div>
  ));

  return (
    <>
      <button
        onClick={playing ? pauseAudio : playAudio}
        style={{ display: "block" }}
      >
        {playing ? "pause" : "play"}
      </button>
      {Inputs}
    </>
  );
};

Recommended Answer

When calling start() without a parameter, it is the same as calling start() with the currentTime of the AudioContext as the first argument. In your example that would look like this:

tr1.start(tr1.context.currentTime);
tr2.start(tr2.context.currentTime);

By definition, the currentTime of an AudioContext increases over time. It is entirely possible that it advances between the two calls. Therefore, a first attempt to fix the problem is to make sure both function calls use the same value.

const currentTime = tr1.context.currentTime;

tr1.start(currentTime);
tr2.start(currentTime);

Since currentTime usually advances one render quantum (128 frames) at a time, you can add an extra safety net by scheduling the start slightly in the future.

const currentTime = tr1.context.currentTime + 128 / tr1.context.sampleRate;

tr1.start(currentTime);
tr2.start(currentTime);

If this doesn't help, you could also use an OfflineAudioContext to render your mix upfront into a single AudioBuffer.
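The pre-rendering approach could be sketched roughly as follows. This is a minimal sketch, not the answerer's exact code: the `renderMix`/`playMix` helper names, the track URLs, and the stereo channel count are illustrative assumptions based on the question's two-track setup.

```javascript
// Sketch: pre-render all tracks into one AudioBuffer with an
// OfflineAudioContext, then play that single buffer so the tracks
// can never drift apart. Browser-only (Web Audio API).
async function renderMix(ctx, urls, gains) {
  // Decode every track; decodeAudioData resamples to ctx.sampleRate.
  const buffers = await Promise.all(
    urls.map(async (url) => {
      const res = await fetch(url);
      return ctx.decodeAudioData(await res.arrayBuffer());
    })
  );
  // The offline context is as long (in frames) as the longest track.
  const length = Math.max(...buffers.map((b) => b.length));
  const offline = new OfflineAudioContext(2, length, ctx.sampleRate);
  buffers.forEach((buffer, i) => {
    const source = offline.createBufferSource();
    source.buffer = buffer;
    const gain = offline.createGain();
    gain.gain.value = gains[i];
    source.connect(gain).connect(offline.destination);
    source.start(0); // every source shares t = 0 inside the offline graph
  });
  return offline.startRendering(); // resolves with the mixed AudioBuffer
}

// Usage: play the pre-rendered mix as one always-in-sync source.
async function playMix(ctx) {
  const mixed = await renderMix(ctx, ["/track1.mp3", "/track2.mp3"], [1, 1]);
  const source = ctx.createBufferSource();
  source.buffer = mixed;
  source.connect(ctx.destination);
  source.start();
}
```

The trade-off is that the per-track volume sliders no longer affect the mix after rendering; changing a track's gain would require re-rendering the buffer.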

