How to pass real-time audio stream to the Direct Line Speech endpoint?


Problem Description

I am trying to use Direct Line Speech (DLS) in my custom voice app. The voice app has access to real-time audio streams (PCM encoded) that I want to send directly to Direct Line Speech, allowing back-and-forth communication in real time.

From the DLS client sample code (https://github.com/Azure-Samples/Cognitive-Services-Direct-Line-Speech-Client), I see the ListenOnceAsync() method on Microsoft.CognitiveServices.Speech.Dialog.DialogServiceConnector, but it looks like it captures media from the local microphone.

But looking at the reply here (Is new ms botbuilder directline speech good fit for call center scenario?), it seems I can send the audio stream to DLS directly. I can't seem to find any documentation around this. Can someone shed some light on how to achieve this?

Recommended Answer

I believe your answer lies in the Microsoft.CognitiveServices.Speech.Audio.AudioConfig class. Have a look at this line in the Direct Line Speech client:

this.connector = new DialogServiceConnector(config, AudioConfig.FromDefaultMicrophoneInput());

AudioConfig provides many options besides FromDefaultMicrophoneInput. I suspect you'll want to use one of the three FromStreamInput overloads. If you do that, ListenOnceAsync will use your stream instead of the microphone.
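For illustration, here is a minimal C# sketch of that approach, assuming the BotFrameworkConfig-based setup from recent Speech SDK versions and a 16 kHz, 16-bit, mono PCM source; the subscription key, region, and the GetNextAudioChunk() helper are placeholders for your own credentials and audio pipeline, not part of the original answer:

using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech.Audio;
using Microsoft.CognitiveServices.Speech.Dialog;

class StreamingDlsExample
{
    static async Task Main()
    {
        // Direct Line Speech configuration; key and region are placeholders.
        var config = BotFrameworkConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");

        // Describe the PCM format your real-time source produces (here: 16 kHz, 16-bit, mono).
        var format = AudioStreamFormat.GetWaveFormatPCM(16000, 16, 1);
        var pushStream = AudioInputStream.CreatePushStream(format);

        // FromStreamInput instead of FromDefaultMicrophoneInput: the connector
        // now reads audio from the push stream rather than the local microphone.
        var audioConfig = AudioConfig.FromStreamInput(pushStream);
        var connector = new DialogServiceConnector(config, audioConfig);

        connector.Recognized += (s, e) =>
            Console.WriteLine($"Recognized: {e.Result.Text}");
        connector.ActivityReceived += (s, e) =>
            Console.WriteLine($"Bot activity received (has audio: {e.HasAudio})");

        await connector.ConnectAsync();

        // Pump real-time PCM bytes into the push stream on a background task
        // while the connector listens. GetNextAudioChunk() is a hypothetical
        // stand-in for reads from your own audio pipeline.
        var pumpTask = Task.Run(() =>
        {
            byte[] chunk;
            while ((chunk = GetNextAudioChunk()) != null)
            {
                pushStream.Write(chunk);
            }
            pushStream.Close();
        });

        // A single listen turn, now fed from your stream instead of a microphone.
        await connector.ListenOnceAsync();
        await pumpTask;
    }

    // Hypothetical helper: replace with reads from your actual audio source.
    static byte[] GetNextAudioChunk() => null;
}

The key point is that FromStreamInput replaces FromDefaultMicrophoneInput, so the connector consumes whatever PCM bytes you push rather than opening the local microphone.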
