Azure Cognitive Services Speech REST API


Problem description

I am building a browser page with a record-audio function, and after recording I want to send the audio to the Speech REST API and get the text back. According to the documentation it only accepts .wav or .ogg files, but from the example below it seems to accept raw bytes as well. I tried doing this in Node.js, but the API keeps returning error 400 Unsupported audio format, so I'm curious what format it actually requires.

Below is my code. This is how I call the API:

function getText(audio, callback){
    console.log("in function audio " + audio);
    const sendTime = Date.now();
    fetch('https://westus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US', {
        method: "POST",
        headers: {
            'Accept': 'application/json',
            'Ocp-Apim-Subscription-Key': YOUR_API_KEY,
            // Note: browsers silently drop 'Transfer-Encoding' and 'Expect';
            // they are forbidden request headers for fetch().
            'Transfer-Encoding': 'chunked',
            'Expect': '100-continue',
            'Content-type': 'audio/wav; codec=audio/pcm; samplerate=16000'
        },
        body: audio
    })
    .then(function (r){
        return r.json();
    })
    .then(function (response){
        // 'time' is a global used to discard responses that arrive out of order.
        if (sendTime < time){
            return;
        }
        time = sendTime;
        //callback(response)
    })
    .catch(e => {
        console.log("Error", e);
    });
}
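For comparison, the same request can be made from Node.js with a file that is already in the required format. This is a minimal sketch, assuming Node 18+ (built-in fetch), a local test.wav that is 16 kHz, 16-bit, mono PCM, and a placeholder key; if this call succeeds, the 400 comes from the browser-recorded audio rather than from the request itself.

// Minimal Node.js sketch (Node 18+ has a global fetch).
// Assumes test.wav is already a 16 kHz, 16-bit, mono PCM WAV file.
const fs = require('fs');
const YOUR_API_KEY = '<your-subscription-key>';   // placeholder

async function recognizeWav() {
    const audio = fs.readFileSync('test.wav');    // raw bytes of the WAV file
    const res = await fetch(
        'https://westus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US',
        {
            method: 'POST',
            headers: {
                'Ocp-Apim-Subscription-Key': YOUR_API_KEY,
                'Content-Type': 'audio/wav; codec=audio/pcm; samplerate=16000',
                'Accept': 'application/json'
            },
            body: audio
        }
    );
    console.log(await res.json());
}

recognizeWav().catch(console.error);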

Here's how I handle the audio files:

navigator.mediaDevices.getUserMedia({audio: true})
    .then(stream => { handlerFunction(stream); });

function handlerFunction(stream) {
    rec = new MediaRecorder(stream);
    rec.ondataavailable = e => {
        audioChunks.push(e.data);
        if (rec.state == "inactive"){
            // Note: setting the Blob's MIME type does not convert the audio;
            // MediaRecorder still produces its native container/codec
            // (typically WebM/Ogg with Opus), not PCM WAV.
            let blob = new Blob(audioChunks, {type: 'audio/wav; codec=audio/pcm; samplerate=16000'});
            recordedAudio.src = URL.createObjectURL(blob);
            recordedAudio.controls = true;
            recordedAudio.autoplay = true;
            console.log(blob);
            // Read the blob into a byte array and send it to the Speech API.
            var reader = new FileReader();
            reader.readAsArrayBuffer(blob);
            reader.onloadend = function() {
                var byteArray = new Uint8Array(reader.result);
                console.log("reader result " + reader.result);
                setTimeout(() => getText(byteArray), 1000);
            };
        }
    };
}

Here's the HTML file:

<!DOCTYPE html>
<h2>Record</h2>
<p>
<button id=record>start</button>
<button id=stopRecord disabled>Stop</button>
</p>
<p>
<audio id=recordedAudio></audio>
</p>
<script src = "speech.js"></script>
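The start/stop button handlers are not shown in the snippets above. One plausible wiring, a sketch that assumes rec and audioChunks are the same globals used in handlerFunction and that the buttons are reachable through their ids, would be:

// Sketch of the missing button wiring; assumes the globals above.
record.onclick = () => {
    record.disabled = true;
    stopRecord.disabled = false;
    audioChunks = [];      // reset the chunk buffer used in ondataavailable
    rec.start();
};
stopRecord.onclick = () => {
    record.disabled = false;
    stopRecord.disabled = true;
    rec.stop();            // ondataavailable then fires with rec.state == "inactive"
};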


Recommended answer

Please refer to the following format:

16 kHz, mono
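The underlying problem is that MediaRecorder does not record WAV: giving the Blob an audio/wav MIME type only relabels the data, while the bytes are still whatever container and codec the browser recorded (typically WebM or Ogg with Opus), which is why the service answers 400 Unsupported audio format. One way to produce a compliant file in the browser is to decode the recording with the Web Audio API, resample it to 16 kHz mono with an OfflineAudioContext, and write a 16-bit PCM WAV header by hand. The sketch below illustrates that approach; the function name blobToWav16kMono is just illustrative and not part of any SDK.

// Minimal sketch: convert a MediaRecorder blob (WebM/Ogg Opus) into a
// 16 kHz, 16-bit, mono PCM WAV ArrayBuffer that the REST endpoint accepts.
async function blobToWav16kMono(blob) {
    const raw = await blob.arrayBuffer();
    // Decode whatever the browser recorded (usually Opus) into raw samples.
    const decoded = await new AudioContext().decodeAudioData(raw);
    // Resample/downmix to 16 kHz mono with an OfflineAudioContext.
    const offline = new OfflineAudioContext(1, Math.ceil(decoded.duration * 16000), 16000);
    const src = offline.createBufferSource();
    src.buffer = decoded;
    src.connect(offline.destination);
    src.start();
    const rendered = await offline.startRendering();
    const samples = rendered.getChannelData(0);

    // Build a 44-byte canonical WAV header followed by 16-bit PCM samples.
    const buffer = new ArrayBuffer(44 + samples.length * 2);
    const view = new DataView(buffer);
    const writeStr = (off, s) => { for (let i = 0; i < s.length; i++) view.setUint8(off + i, s.charCodeAt(i)); };
    writeStr(0, 'RIFF');
    view.setUint32(4, 36 + samples.length * 2, true);
    writeStr(8, 'WAVE');
    writeStr(12, 'fmt ');
    view.setUint32(16, 16, true);          // fmt chunk size
    view.setUint16(20, 1, true);           // audio format: PCM
    view.setUint16(22, 1, true);           // channels: mono
    view.setUint32(24, 16000, true);       // sample rate
    view.setUint32(28, 16000 * 2, true);   // byte rate
    view.setUint16(32, 2, true);           // block align
    view.setUint16(34, 16, true);          // bits per sample
    writeStr(36, 'data');
    view.setUint32(40, samples.length * 2, true);
    // Clamp floats to [-1, 1] and write them as signed 16-bit integers.
    for (let i = 0; i < samples.length; i++) {
        const s = Math.max(-1, Math.min(1, samples[i]));
        view.setInt16(44 + i * 2, s < 0 ? s * 0x8000 : s * 0x7FFF, true);
    }
    return buffer;   // pass this to getText() instead of the raw blob bytes
}

The returned ArrayBuffer can then be posted to the endpoint with the same Content-Type header that getText already uses.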

For more details, please refer to https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/rest-apis

Best regards,

Yutong

