<audio> tag to audioBuffer - is it possible?

Problem Description

My JavaScript web app first reads a short MP3 file and finds silence gaps in it (for navigational purposes), then plays the same MP3 file, cueing it to start where one silence or another finishes. This differs from the usual Web Audio scenario, which is designed to grant access to the audio data currently being played in the stream (not to the whole track).

To get my web app to work, I have to read/access the MP3 file twice:

  1. via XMLHttpRequest, to read the entire MP3 file and put it into an audio buffer that I can subsequently decode using audioContext.decodeAudioData() - as explained here: Extracting audio data every t seconds
  2. by specifying the <audio> tag, to allow me to play the file on demand, specifying the cue/start point in milliseconds - Playing audio with Javascript? (A minimal sketch of both steps follows this list.)
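
For reference, here is a minimal sketch of that two-step approach as it stands (the file name track.mp3 and the cue value are placeholders, and the silence-detection logic itself is omitted):

// Step 1: fetch the whole file and decode it into an AudioBuffer
var audioContext = new (window.AudioContext || window.webkitAudioContext)();
var xhr = new XMLHttpRequest();
xhr.open('GET', 'track.mp3', true);
xhr.responseType = 'arraybuffer';
xhr.onload = function() {
    audioContext.decodeAudioData(xhr.response, function(audioBuffer) {
        // scan audioBuffer.getChannelData(0) here to find the silence gaps
    });
};
xhr.send();

// Step 2: play the same file on demand from a cue point found in step 1
var audio = document.querySelector('audio');  // the <audio> tag
audio.currentTime = 1.234;                    // note: currentTime is in seconds
audio.play();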

Q: Is there currently any way I might declare the <audio> tag first and then somehow derive the audioBuffer directly from it, without resorting to XMLHttpRequest?

I've read about createMediaElementSource, but I can't see how to get an AudioBuffer by using it.
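
For context, a rough sketch of what createMediaElementSource actually gives you (assuming an AudioContext named audioContext and the <audio> element named audio): it routes the element's output into the Web Audio graph as a stream, but never exposes a decoded AudioBuffer, which is why it doesn't seem to cover this case.

var source = audioContext.createMediaElementSource(audio); // audio: the <audio> element
source.connect(audioContext.destination);                  // the audio now plays through the graph,
                                                           // but no AudioBuffer is ever exposed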

Recommended Answer

When doing your first XHR, ask for a blob:

xhr.responseType = 'blob';
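
In context, that single line sits in an otherwise ordinary request (the URL track.mp3 is a placeholder):

var xhr = new XMLHttpRequest();
xhr.open('GET', 'track.mp3', true);
xhr.responseType = 'blob';   // the response arrives as a Blob
xhr.onload = function() {
    var blob = xhr.response; // this is the blob used in the next step
    // ... hand it to the FileReader shown below
};
xhr.send();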

Then get an ArrayBuffer out of it:

var arrayBuffer;
var fileReader = new FileReader();
fileReader.onload = function() {
    arrayBuffer = this.result;       // the blob's bytes as an ArrayBuffer
};
fileReader.readAsArrayBuffer(blob);  // blob is the XHR response from above

and give that to decodeAudioData to get the AudioBuffer as usual. You can now do your processing.
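
For completeness, a minimal sketch of that step, assuming the decode is triggered from inside the FileReader's onload handler and that an AudioContext (audioContext below) already exists:

fileReader.onload = function() {
    // this.result is the ArrayBuffer read from the blob
    audioContext.decodeAudioData(this.result, function(audioBuffer) {
        // audioBuffer.getChannelData(0) holds the decoded PCM samples;
        // run the silence-gap detection here
    });
};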

Then, when your processing is done and you want to play the file, give the blob to the <audio> tag as a source; it will work as usual:

 audio.src = window.URL.createObjectURL(blob);
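
For example, to start playback from one of the cue points found during the analysis (cueStart is a placeholder value; note that currentTime is expressed in seconds, not milliseconds):

audio.currentTime = cueStart; // e.g. where a silence gap ends
audio.play();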

You might need to prefix URL with the webkit vendor prefix; I can't remember if they implement the unprefixed version. Anyways, blobs are the way to go!
