What is the difference between System.Speech.Recognition and Microsoft.Speech.Recognition?


Question

There are two similar namespaces and assemblies for speech recognition in .NET. I’m trying to understand the differences and when it is appropriate to use one or the other.

There is System.Speech.Recognition from the assembly System.Speech (in System.Speech.dll). System.Speech.dll is a core DLL in the .NET Framework class library 3.0 and later

There is also Microsoft.Speech.Recognition from the assembly Microsoft.Speech (in microsoft.speech.dll). Microsoft.Speech.dll is part of the UCMA 2.0 SDK

I find the docs confusing and I have the following questions:

System.Speech.Recognition says it is for "The Windows Desktop Speech Technology", does this mean it cannot be used on a server OS or cannot be used for high scale applications?

The UCMA 2.0 Speech SDK ( http://msdn.microsoft.com/en-us/library/dd266409%28v=office.13%29.aspx ) says that it requires Microsoft Office Communications Server 2007 R2 as a prerequisite. However, I’ve been told at conferences and meetings that if I do not require OCS features like presence and workflow I can use the UCMA 2.0 Speech API without OCS. Is this true?

If I’m building a simple recognition app for a server application (say I wanted to automatically transcribe voice mails) and I don’t need features of OCS, what are the differences between the two APIs?

Answer

The short answer is that Microsoft.Speech.Recognition uses the Server version of SAPI, while System.Speech.Recognition uses the Desktop version of SAPI.

The APIs are mostly the same, but the underlying engines are different. Typically, the Server engine is designed to accept telephone-quality audio for command & control applications; the Desktop engine is designed to accept higher-quality audio for both command & control and dictation applications.
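To illustrate how similar the two APIs are, here is a minimal command-and-control sketch against the Desktop API. The class names (`SpeechRecognitionEngine`, `Choices`, `GrammarBuilder`, `Grammar`) are mirrored in `Microsoft.Speech.Recognition`, so in most cases only the `using` directive and the assembly reference change:

```csharp
using System;
using System.Speech.Recognition;   // Desktop engine; Microsoft.Speech.Recognition mirrors these types

class CommandDemo
{
    static void Main()
    {
        using (var recognizer = new SpeechRecognitionEngine())
        {
            // A simple command & control grammar: the engine only
            // has to pick between two fixed phrases.
            var commands = new Choices("play", "stop");
            recognizer.LoadGrammar(new Grammar(new GrammarBuilder(commands)));

            recognizer.SetInputToDefaultAudioDevice();
            RecognitionResult result = recognizer.Recognize();
            if (result != null)
                Console.WriteLine(result.Text);
        }
    }
}
```

Constrained grammars like this are what the Server engine is tuned for; audio input setup may differ on the server side (e.g. feeding a stream or wave file rather than the default microphone).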

You can use System.Speech.Recognition on a server OS, but it's not designed to scale nearly as well as Microsoft.Speech.Recognition.

The differences are that the Server engine won't need training, and will work with lower-quality audio, but will have a lower recognition quality than the Desktop engine.
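For the voicemail-transcription scenario from the question, free-form dictation is a Desktop-engine feature: `System.Speech.Recognition` exposes `DictationGrammar`, while the Server engine is limited to constrained grammars. A sketch transcribing a recorded file with the Desktop engine (the file name is hypothetical):

```csharp
using System;
using System.Speech.Recognition;   // Desktop engine only: DictationGrammar is not in Microsoft.Speech

class TranscribeDemo
{
    static void Main()
    {
        using (var recognizer = new SpeechRecognitionEngine())
        {
            // Free-form dictation grammar (Desktop/SAPI only).
            recognizer.LoadGrammar(new DictationGrammar());

            // Feed a recorded voicemail instead of the microphone.
            recognizer.SetInputToWaveFile("voicemail.wav"); // hypothetical path
            RecognitionResult result = recognizer.Recognize();
            Console.WriteLine(result != null ? result.Text : "(no recognition)");
        }
    }
}
```

Note the trade-off stated above: the Desktop engine gives better dictation quality but expects cleaner audio than telephone-grade recordings, and it is speaker-trained by default.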

