How to register a custom speech recognition service?

Problem description

I created a simple speech recognition service: for this purpose I created a subclass of android.speech.RecognitionService and I created an activity to start and stop this service.

My custom speech recognition service trivially uses the default speech recognizer, because my goal is simply to understand how the RecognitionService and RecognitionService.Callback classes work.

import android.content.Intent;
import android.os.Bundle;
import android.os.RemoteException;
import android.speech.RecognitionListener;
import android.speech.RecognitionService;
import android.speech.SpeechRecognizer;
import android.util.Log;

public class SimpleVoiceService extends RecognitionService {

    private SpeechRecognizer m_EngineSR;

    @Override
    public void onCreate() {
        super.onCreate();
        // Create the underlying engine: the platform's default speech recognizer.
        m_EngineSR = SpeechRecognizer.createSpeechRecognizer(this);
        Log.i("SimpleVoiceService", "Service started");
    }

    @Override
    public void onDestroy() {
        // Release the underlying recognizer before the service goes away.
        m_EngineSR.destroy();
        super.onDestroy();
        Log.i("SimpleVoiceService", "Service stopped");
    }

    @Override
    protected void onCancel(Callback listener) {
        m_EngineSR.cancel();
    }

    @Override
    protected void onStartListening(Intent recognizerIntent, Callback listener) {
        m_EngineSR.setRecognitionListener(new VoiceResultsListener(listener));
        m_EngineSR.startListening(recognizerIntent);
    }

    @Override
    protected void onStopListening(Callback listener) {
        m_EngineSR.stopListening();
    }


    /**
     * Forwards RecognitionListener callbacks from the wrapped SpeechRecognizer
     * to the Callback supplied by the client of this RecognitionService.
     */
    private class VoiceResultsListener implements RecognitionListener {

        private Callback m_UserSpecifiedListener;

        /**
         * @param userSpecifiedListener the callback of the client that
         *                              requested the recognition
         */
        public VoiceResultsListener(Callback userSpecifiedListener) {
            m_UserSpecifiedListener = userSpecifiedListener;
        }

        @Override
        public void onBeginningOfSpeech() {
            try {
                m_UserSpecifiedListener.beginningOfSpeech();
            } catch (RemoteException e) {
                e.printStackTrace();
            }
        }

        @Override
        public void onBufferReceived(byte[] buffer) {
            try {
                m_UserSpecifiedListener.bufferReceived(buffer);
            } catch (RemoteException e) {
                e.printStackTrace();
            }
        }

        @Override
        public void onEndOfSpeech() {
            try {
                m_UserSpecifiedListener.endOfSpeech();
            } catch (RemoteException e) {
                e.printStackTrace();
            }
        }

        @Override
        public void onError(int error) {
            try {
                m_UserSpecifiedListener.error(error);
            } catch (RemoteException e) {
                e.printStackTrace();
            }
        }

        @Override
        public void onEvent(int eventType, Bundle params) { /* no reserved events to forward */ }

        @Override
        public void onPartialResults(Bundle partialResults) {
            try {
                m_UserSpecifiedListener.partialResults(partialResults);
            } catch (RemoteException e) {
                e.printStackTrace();
            }
        }

        @Override
        public void onReadyForSpeech(Bundle params) {
            try {
                m_UserSpecifiedListener.readyForSpeech(params);
            } catch (RemoteException e) {
                e.printStackTrace();
            }
        }

        @Override
        public void onResults(Bundle results) {
            try {
                m_UserSpecifiedListener.results(results);
            } catch (RemoteException e) {
                e.printStackTrace();
            }
        }

        @Override
        public void onRmsChanged(float rmsdB) {
            try {
                m_UserSpecifiedListener.rmsChanged(rmsdB);
            } catch (RemoteException e) {
                e.printStackTrace();
            }
        }
    }

}

I start and stop the service using the following activity.

import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.view.View;
import android.widget.Button;
import android.widget.LinearLayout;

public class VoiceServiceStarterActivity extends Activity {
    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        Button startButton = new Button(this);
        startButton.setText("Start the service");
        startButton.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) { startVoiceService(); }
        });
        Button stopButton = new Button(this);
        stopButton.setText("Stop the service");
        stopButton.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) { stopVoiceService(); }
        });
        LinearLayout layout = new LinearLayout(this);
        layout.setOrientation(LinearLayout.VERTICAL);
        layout.addView(startButton);
        layout.addView(stopButton);
        setContentView(layout);
    }

    private void startVoiceService() {
        startService(new Intent(this, SimpleVoiceService.class));
    }

    private void stopVoiceService() {
        stopService(new Intent(this, SimpleVoiceService.class));
    }
}

Finally, I declared my service in AndroidManifest.xml (see the VoiceRecognition sample in the Android SDK folder).

<service android:name="SimpleVoiceService"
         android:label="@string/service_name" >

    <intent-filter>
        <action android:name="android.speech.RecognitionService" />
        <category android:name="android.intent.category.DEFAULT" />
    </intent-filter>
</service>

Then I installed this application on an Android device and started it:

  • when I start the service, it starts properly;
  • when I stop it, it stops properly.

But if I run the following code in another activity, the activities list contains only one element, which is the default speech recognizer.

PackageManager pm = getPackageManager();
List<ResolveInfo> activities = pm.queryIntentActivities(
            new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH), 0);

Why is my speech recognizer not returned among those present in the system?

Solution

If you want queryIntentActivities(new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH), 0) to pick up your activity (VoiceServiceStarterActivity), then you have to declare in your app's manifest that this activity handles RecognizerIntent.ACTION_RECOGNIZE_SPEECH, like this:

<activity android:name="VoiceServiceStarterActivity">
  <intent-filter>
    <action android:name="android.speech.action.RECOGNIZE_SPEECH" />
    <category android:name="android.intent.category.DEFAULT" />
  </intent-filter>
  ...
</activity>
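
As an illustration, here is a minimal sketch of what an activity registered for ACTION_RECOGNIZE_SPEECH is expected to do: read the incoming RecognizerIntent, perform the recognition somehow, and hand the hypotheses back through setResult(). The class name RecognizeSpeechActivity and the helper doRecognition() are hypothetical placeholders, not something from the question.

import java.util.ArrayList;

import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.speech.RecognizerIntent;

public class RecognizeSpeechActivity extends Activity {

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // The incoming intent carries the standard RecognizerIntent extras,
        // e.g. EXTRA_LANGUAGE_MODEL and EXTRA_MAX_RESULTS.
        Intent recognizerIntent = getIntent();

        // Hypothetical helper: run the recognition (for example by showing a
        // microphone UI and talking to an engine) and collect the hypotheses.
        ArrayList<String> hypotheses = doRecognition(recognizerIntent);

        // Return the hypotheses in the format callers of
        // ACTION_RECOGNIZE_SPEECH expect.
        Intent resultIntent = new Intent();
        resultIntent.putStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS, hypotheses);
        setResult(RESULT_OK, resultIntent);
        finish();
    }

    private ArrayList<String> doRecognition(Intent recognizerIntent) {
        // Placeholder only: a real implementation would drive an actual engine.
        return new ArrayList<String>();
    }
}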

For more concrete code, have a look at the project Kõnele (source code), which is essentially an open-source implementation of the interfaces through which speech recognition is provided on Android, i.e. it covers:

  • ACTION_RECOGNIZE_SPEECH
  • ACTION_WEB_SEARCH
  • RecognitionService

and uses open source speech recognition servers.
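
Note also that queryIntentActivities() only finds activities that handle the recognition action; a RecognitionService registered as above is discovered through the service interface action instead. A minimal sketch, assuming it runs inside an Activity or another Context:

PackageManager pm = getPackageManager();
// Look up RecognitionService implementations rather than recognition activities.
List<ResolveInfo> services = pm.queryIntentServices(
        new Intent(RecognitionService.SERVICE_INTERFACE), 0);
for (ResolveInfo info : services) {
    Log.i("VoiceServiceStarter", "Recognition service: " + info.serviceInfo.name);
}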

