Sound analysis without getUserMedia
Problem description
I am trying to analyse the audio output from the browser, but I don't want the getUserMedia prompt to appear (which asks for microphone permission). The sound sources are SpeechSynthesis and an MP3 file. Here's my code:
return navigator.mediaDevices.getUserMedia({
    audio: true
})
.then(stream => new Promise(resolve => {
    const track = stream.getAudioTracks()[0];
    this.mediaStream_.addTrack(track);
    this._source = this.audioContext.createMediaStreamSource(this.mediaStream_);
    this._source.connect(this.analyser);
    this.draw(this);
    resolve(stream); // without this the returned promise never settles
}));
This code is working fine, but it's asking for permission to use the microphone! I am not interested at all in the microphone; I only need to gauge the audio output. If I check all available devices:
navigator.mediaDevices.enumerateDevices()
.then(function(devices) {
    devices.forEach(function(device) {
        console.log(device.kind + ": " + device.label +
                    " id = " + device.deviceId);
    });
})
I get a list of available devices in the browser, including 'audiooutput'. So, is there a way to route the audio output into a MediaStream that can then be used with the 'createMediaStreamSource' function? I have checked all the documentation for the audio API but could not find it. Thanks to anyone who can help!
There are various ways to get a MediaStream which is not originating from gUM, but you won't be able to capture all possible audio outputs...
But, for your mp3 file, if you read it through a MediaElement (<audio> or <video>), and if this file is served without breaking CORS, then you can use MediaElement.captureStream.
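For instance, a minimal sketch (the element selector and file name are placeholders; note Firefox still exposes this as the prefixed mozCaptureStream):

```javascript
// Analyse an <audio> element's output without gUM, assuming the mp3
// is same-origin or served with permissive CORS headers.
const audioEl = document.querySelector('audio'); // e.g. <audio src="sound.mp3" controls>

// Grab a MediaStream of whatever the element plays.
const stream = audioEl.captureStream
    ? audioEl.captureStream()      // standard name (Chrome, etc.)
    : audioEl.mozCaptureStream();  // Firefox's prefixed variant

const ctx = new AudioContext();
const source = ctx.createMediaStreamSource(stream);
const analyser = ctx.createAnalyser();
source.connect(analyser);

// Poll frequency data every frame, e.g. to drive a visualiser.
const data = new Uint8Array(analyser.frequencyBinCount);
function draw() {
    analyser.getByteFrequencyData(data);
    // ...render `data`...
    requestAnimationFrame(draw);
}
audioEl.play().then(draw);
```

Since the analyser taps the captured stream rather than a microphone track, no permission prompt is shown.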
If you read it from the WebAudioAPI, or if you target browsers that don't support captureStream, then you can use AudioContext.createMediaStreamDestination.
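Something along these lines (a sketch only; 'sound.mp3' is a placeholder and must be fetchable without CORS issues):

```javascript
// Play the mp3 entirely through the Web Audio API and tap it via
// createMediaStreamDestination() — no gUM prompt involved.
const ctx = new AudioContext();
const analyser = ctx.createAnalyser();
const dest = ctx.createMediaStreamDestination();

fetch('sound.mp3')
    .then(r => r.arrayBuffer())
    .then(buf => ctx.decodeAudioData(buf))
    .then(decoded => {
        const src = ctx.createBufferSource();
        src.buffer = decoded;
        src.connect(analyser);             // feed the analyser
        analyser.connect(dest);            // dest.stream is a plain MediaStream
        analyser.connect(ctx.destination); // keep the audio audible too
        src.start();
    });

// dest.stream can now be passed wherever a MediaStream is expected,
// e.g. this.audioContext.createMediaStreamSource(dest.stream).
```

Note that if all you need is analysis, connecting the AnalyserNode directly (as above) already suffices; the MediaStreamDestination is only needed when you specifically require a MediaStream object.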
For SpeechSynthesis, unfortunately you will need gUM... and a Virtual Audio Device: first you would have to set your default output to VAB_out, then route VAB_out to VAB_in, and finally grab VAB_in from gUM...
Not an easy nor universally doable task, especially since, IIRC, SpeechSynthesis doesn't have any setSinkId method.