Activity has leaked ServiceConnection android.speech.SpeechRecognizer$Connection
I'm trying to make a function on Google Glass that allows me to navigate between the cards without having to say the hotword "ok glass".
I tried creating a SpeechRecognizer that constantly listens for speech, and if the correct "command" is mentioned the app will act accordingly.
However the onError method tells me
Error occured: RecognitionService busy.
and it's throwing an error that says
Activity com.example.sw_stage.topfinder.MainActivity has leaked ServiceConnection android.speech.SpeechRecognizer$Connection@41d79530 that was originally bound here
android.app.ServiceConnectionLeaked: Activity com.example.sw_stage.topfinder.MainActivity has leaked ServiceConnection android.speech.SpeechRecognizer$Connection@41d79530 that was originally bound here
at android.app.LoadedApk$ServiceDispatcher.<init>(LoadedApk.java:970)
at android.app.LoadedApk.getServiceDispatcher(LoadedApk.java:864)
at android.app.ContextImpl.bindServiceCommon(ContextImpl.java:1575)
at android.app.ContextImpl.bindService(ContextImpl.java:1558)
at android.content.ContextWrapper.bindService(ContextWrapper.java:517)
at android.speech.SpeechRecognizer.startListening(SpeechRecognizer.java:287)
at com.example.sw_stage.topfinder.SpeechDetector.<init>(SpeechDetector.java:35)
at com.example.sw_stage.topfinder.MainActivity.onCreate(MainActivity.java:58)
at android.app.Activity.performCreate(Activity.java:5235)
at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1089)
at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2188)
at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2273)
at android.app.ActivityThread.access$800(ActivityThread.java:138)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1236)
at android.os.Handler.dispatchMessage(Handler.java:102)
at android.os.Looper.loop(Looper.java:149)
at android.app.ActivityThread.main(ActivityThread.java:5045)
at java.lang.reflect.Method.invokeNative(Native Method)
at java.lang.reflect.Method.invoke(Method.java:515)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:786)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:602)
at dalvik.system.NativeStart.main(Native Method)
I'm currently using this project to test some of the functions I want to have in my final application. Right now I'm using:
- Head Gestures to navigate
- A combination of a live card and immersion
- A class to use shell commands in order to simulate touch gestures
Here is the class I wrote for the SpeechRecognizer
package com.example.sw_stage.topfinder;

import android.content.Context;
import android.content.Intent;
import android.media.AudioManager;
import android.os.Bundle;
import android.speech.RecognitionListener;
import android.speech.RecognizerIntent;
import android.speech.SpeechRecognizer;
import android.util.Log;

import java.util.ArrayList;

public class SpeechDetector {
    AudioManager mAudioManager;
    SpeechRecognizer mSpeechRecognizer;
    Intent intent;
    issueKey mIssueKey = new issueKey();

    public SpeechDetector(Context context)
    {
        Log.i("speechdetector", "calling speech detector");
        mAudioManager = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
        mSpeechRecognizer = SpeechRecognizer.createSpeechRecognizer(context);
        mSpeechRecognizer.setRecognitionListener(new listener(context));
        intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        Log.i("package name", context.getPackageName());
        intent.putExtra(RecognizerIntent.EXTRA_CALLING_PACKAGE, context.getPackageName());
        mSpeechRecognizer.startListening(intent);
        Log.i("111111", "11111111" + "in");
    }

    class listener implements RecognitionListener
    {
        Context context1;

        public listener(Context context)
        {
            context1 = context;
        }

        @Override
        public void onReadyForSpeech(Bundle bundle) {
        }

        @Override
        public void onBeginningOfSpeech() {
        }

        @Override
        public void onRmsChanged(float v) {
        }

        @Override
        public void onBufferReceived(byte[] bytes) {
        }

        @Override
        public void onEndOfSpeech() {
            mSpeechRecognizer.startListening(intent);
        }

        @Override
        public void onError(int i) {
            // 3 - Audio recording error
            // 5 - Other client side errors
            // 6 - No speech input
            // 7 - No recognition result matched
            // 8 - RecognitionService busy
            // 9 - Insufficient permissions
            if (i == 1 || i == 2 || i == 3 || i == 4 || i == 5 || i == 6 || i == 7 || i == 8 || i == 9)
            {
                Log.wtf("Error occured", "Error " + i);
                mSpeechRecognizer.startListening(intent);
            }
        }

        @Override
        public void onResults(Bundle bundle) {
            ArrayList<String> data = bundle.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
            String result = "enter";
            String reslt = "";
            for (int i = 0; i < data.size(); i++)
            {
                reslt = reslt + " " + data.get(i);
            }
            switch (result)
            {
                case "next": mIssueKey.issueKey(22);
                    break;
                case "previous": mIssueKey.issueKey(21);
                    break;
                case "enter": mIssueKey.issueKey(23);
                    break;
                case "leave": mIssueKey.issueKey(4);
            }
        }

        @Override
        public void onPartialResults(Bundle bundle) {
        }

        @Override
        public void onEvent(int i, Bundle bundle) {
        }
    }
}
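As a side note, onResults() above builds the recognized text in reslt but then switches on the hardcoded result variable ("enter"), so only the enter branch can ever fire. A minimal plain-Java sketch of the presumably intended mapping (a hypothetical helper, reusing the same key codes) might look like:

```java
import java.util.List;

public class CommandMapper {
    // 22/21/23/4 are the same key codes passed to issueKey() above;
    // -1 stands for "no matching command".
    static int commandToKeyCode(String phrase) {
        switch (phrase.trim().toLowerCase()) {
            case "next":     return 22;
            case "previous": return 21;
            case "enter":    return 23;
            case "leave":    return 4;
            default:         return -1;
        }
    }

    public static void main(String[] args) {
        // First entry of RESULTS_RECOGNITION is the best hypothesis.
        List<String> results = List.of(" Next ");
        System.out.println(commandToKeyCode(results.get(0))); // prints 22
    }
}
```

Switching on the actual recognized text (trimmed and lower-cased) instead of a fixed string would make all four branches reachable.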
I'm calling this class in my MainActivity onCreate() method.
private SpeechDetector mSpeechDetector;

@Override
protected void onCreate(Bundle bundle) {
    super.onCreate(bundle);
    mIssueKey = new issueKey();
    mSpeechDetector = new SpeechDetector(getApplicationContext());
I changed

new SpeechDetector(MainActivity.this);

to

new SpeechDetector(getApplicationContext());

and I'm not getting the leaked error anymore.

HOWEVER, the leaked error and the RecognitionService busy error apparently aren't related. So I'm still stuck with the fact that my speech recognition doesn't work, because it keeps throwing a "RecognitionService busy" error.
EDIT: I just noticed it writes two additional log lines:
03-03 13:11:13.252 20018-20018/? A/Error occured﹕ RecognitionService busy
03-03 13:11:13.260 825-825/? I/RecognitionService﹕ concurrent startListening received - ignoring this call
It looks like the RecognitionService is already in use at the point that you call startListening(). I believe this is because you already called startListening() in your constructor and then call it again in onEndOfSpeech().

Make sure you make the necessary cancel() calls before calling startListening() again.
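The advice above can be sketched in plain Java (a stub Recognizer interface, not the Android SpeechRecognizer API) as a guard that cancels any in-flight session before starting a new one, so the service never sees two concurrent startListening() calls:

```java
import java.util.ArrayList;
import java.util.List;

// Stub that records the order of calls, standing in for the real recognizer.
interface Recognizer {
    void startListening();
    void cancel();
}

class RecordingRecognizer implements Recognizer {
    final List<String> calls = new ArrayList<>();
    public void startListening() { calls.add("start"); }
    public void cancel()         { calls.add("cancel"); }
}

// Tracks whether a session is active; cancels it before restarting.
class RestartGuard {
    private final Recognizer recognizer;
    private boolean listening = false;

    RestartGuard(Recognizer recognizer) { this.recognizer = recognizer; }

    void restartListening() {
        if (listening) {
            recognizer.cancel();   // end the active session first
            listening = false;
        }
        recognizer.startListening();
        listening = true;
    }

    // Call from onEndOfSpeech()/onError(): the session is over, so the
    // next restart does not need a cancel.
    void sessionEnded() { listening = false; }
}

public class Main {
    public static void main(String[] args) {
        RecordingRecognizer r = new RecordingRecognizer();
        RestartGuard guard = new RestartGuard(r);
        guard.restartListening(); // first start: nothing to cancel
        guard.restartListening(); // still listening: cancel, then start
        System.out.println(String.join(",", r.calls)); // prints start,cancel,start
    }
}
```

In the real class this would mean calling mSpeechRecognizer.cancel() (or clearing the listening flag from the callbacks) before every mSpeechRecognizer.startListening(intent) that restarts recognition.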