Permission Denial Error - SpeechRecognizer as a continuous service? (android.permission.INTERACT_ACROSS_USERS_FULL)


Question


EDIT: I have changed my service code to a started Service instead of an IntentService (see the updated StreamService.java below). Now I am getting a permission-denial error, as described in the logcat messages after StreamService.java.

EDIT:

As mentioned on the Android Developer site, the SpeechRecognizer API can only be used with the application context. Is there any workaround with which I can get it working?

I have implemented a MainActivity class that holds all the UI components. The class is as follows:

Code - MainActivity.java

package com.example.speechsensorservice;

import android.app.Activity;
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;
import android.os.Bundle;
import android.util.Log;
import android.view.Menu;
import android.view.View;
import android.widget.ImageButton;
import android.widget.TextView;
import android.widget.Toast;

public class MainActivity extends Activity {


    private static final String TAG = "SpeechSensor";

    private boolean headsetConnected = false;

    public TextView txtText;

    private BroadcastReceiver mReceiver;
    private ImageButton btnSpeak;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        txtText = (TextView) findViewById(R.id.txtText);
        btnSpeak = (ImageButton) findViewById(R.id.btnSpeak);

        btnSpeak.setOnClickListener(new View.OnClickListener() {

            @Override
            public void onClick(View v) {
                Intent intent = new Intent(getApplicationContext(),StreamService.class);
                startService(intent);
            }
        });
    }

    @Override
    protected void onResume() {
        super.onResume();

        IntentFilter sIF = new IntentFilter();
        sIF.addAction(Intent.ACTION_HEADSET_PLUG);
        sIF.addAction("com.example.speechsensorservice.TEXT");
        mReceiver = new BroadcastReceiver() {

                @Override
            public void onReceive(Context arg0, Intent arg1) {
                // TODO Auto-generated method stub
                String act = arg1.getAction();
                Log.d(TAG, "Received Action = " + act);
                if ( Intent.ACTION_HEADSET_PLUG.equals(act) ) {
                    if ( arg1.hasExtra("state")) {
                        if ( !headsetConnected && arg1.getIntExtra("state", 0) == 1 ) {
                            headsetConnected = true;
                            txtText.setText("Headset Plugged in");
                            startNoiseProcessService();
                        }
                    }
                }
                else if ( act.equals("com.example.speechsensorservice.TEXT") ){
                    if ( arg1.hasExtra("Identity")) {
                        String s = arg1.getStringExtra("Identity");
                        if ( s.equals("NA") ) {
                            Toast t = Toast.makeText(getApplicationContext(), 
                                    "Your Device does not support Speech to Text", 
                                    Toast.LENGTH_SHORT);
                            t.show();
                        }
                        else txtText.setText(s);
                    }
                }
            }

        };  

        this.registerReceiver(mReceiver, sIF);      
    }

    @Override
    public void onPause() {
        super.onPause();
        this.unregisterReceiver(this.mReceiver);
    }

    @Override
    public boolean onCreateOptionsMenu(Menu menu) {
       getMenuInflater().inflate(R.menu.main, menu);
        return true;
    }

    public void startNoiseProcessService() {
        Intent intent = new Intent(this,StreamService.class);
        startService(intent);
    }


}

Below is the other class I implemented to run the speech-recognition service as a background task; after the edit above it extends the Service class (it originally inherited IntentService). The implementation is as follows:

Code - StreamService.java

package com.example.speechsensorservice;

import java.util.ArrayList;

import android.app.Service;
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;
import android.os.Bundle;
import android.os.IBinder;
import android.speech.RecognitionListener;
import android.speech.RecognizerIntent;
import android.speech.SpeechRecognizer;
import android.util.Log;

public class StreamService extends Service {
     private static final String TAG = "SpeechSensor";
     private static final String ACTION = "com.example.speechsensorservice.TEXT";
    private SpeechRecognizer sr;

    private BroadcastReceiver sReceiver;

    private boolean headsetConnected = true;

    String text;


    @Override
    public IBinder onBind(Intent arg0) {
        // TODO Auto-generated method stub
        return null;
    }

    @Override
    public void onCreate() {
        Log.d(TAG, "onCreate() StreamService Method");
        super.onCreate();
        sReceiver = new BroadcastReceiver() {
            public void onReceive(Context arg0, Intent arg1) {
                // TODO Auto-generated method stub
                if ( Intent.ACTION_HEADSET_PLUG.equals(arg1.getAction()) ) {
                    if ( arg1.hasExtra("state")) {
                            if ( headsetConnected && arg1.getIntExtra("state", 0) == 0 ) {
                                headsetConnected = false;
                                stopStreaming(); 
                            } 
                    }
                }
            }

        };  
        this.registerReceiver(sReceiver, new IntentFilter(Intent.ACTION_HEADSET_PLUG)); 
    }

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        Log.d(TAG,"Inside onStartCommand()");
    //  Runnable r = new Runnable() {
    //      public void run() {
                startStreaming();
    //      }
    //  };

    //  Thread t = new Thread(r);
    //  t.start();

        return Service.START_STICKY;

    }

    @Override
    public  void onDestroy() {
        Log.d(TAG, "onDestroy() StreamService Method");
        super.onDestroy();
        this.unregisterReceiver(this.sReceiver);
    }


     public void startStreaming() {
         Log.d(TAG, "Inside startStreaming()");
            Intent intent;
            text = "";
            if ( !SpeechRecognizer.isRecognitionAvailable(this) ) {
                Log.d(TAG, "Not Applicable with your device");
                text = "NA";
                intent = new Intent(ACTION);
                intent.putExtra("Identity", text);
                sendBroadcast(intent);
            }
            else {
                Log.d(TAG, "started taking input");
                sr = SpeechRecognizer.createSpeechRecognizer(this.getApplicationContext());

                intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);

                //intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, "hi-IN");
                intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, "en-US");//RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);//RecognizerIntent.LANGUAGE_MODEL_WEB_SEARCH);
             //   intent.putExtra(RecognizerIntent.EXTRA_MAX_RESULTS, 3);

                sr.setRecognitionListener( new mylistener());
                sr.startListening(intent);
            }

     }

     public void stopStreaming() {
            if ( sr == null ) return;
            Log.d(TAG, "stopped taking input");
            sr.cancel();
            sr.destroy();
            sr = null;
            this.stopSelf();
     }

     public boolean isStreaming() {
            // TODO Auto-generated method stub
            Log.d(TAG,"isStreaming : YES");
            if ( sr != null ) return true;
            return false;
     }

     class mylistener implements RecognitionListener {

            @Override
            public void onBeginningOfSpeech() {
                // TODO Auto-generated method stub
                Log.d(TAG, "onBeginningOfSpeech");
            }

            @Override
            public void onBufferReceived(byte[] arg0) {
                // TODO Auto-generated method stub

            }

            @Override
            public void onEndOfSpeech() {
                // TODO Auto-generated method stub
                Log.d(TAG, "onEndOfSpeech");
            }

            @Override
            public void onError(int arg0) {
                // TODO Auto-generated method stub

            }

            @Override
            public void onEvent(int arg0, Bundle arg1) {
                // TODO Auto-generated method stub

            }

            @Override
            public void onPartialResults(Bundle arg0) {
                // TODO Auto-generated method stub

            }

            @Override
            public void onReadyForSpeech(Bundle arg0) {
                // TODO Auto-generated method stub
                Log.d(TAG, "onReadyForSpeech");
            }

            @Override
            public void onResults(Bundle arg0) {
                // TODO Auto-generated method stub


                Log.d(TAG, "Got Results");
                ArrayList<String> al = arg0.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
                text = al.get(0);
                for ( int i =0 ; i < al.size(); i++ ) {
                    Log.d(TAG,"result=" + al.get(i));
                }
                Intent intent = new Intent(ACTION);
                intent.putExtra("Identity", text); // key must match the "Identity" extra that MainActivity's receiver checks for
                sendBroadcast(intent);
              //  startStreaming();

            }

            @Override
            public void onRmsChanged(float arg0) {
                // TODO Auto-generated method stub

            }

        }

}

Here I am getting the error: java.lang.RuntimeException: SpeechRecognizer should be used only from the application's main thread

Code flow is like this:

ImageButton -> onClick() -> fire the service Intent for StreamService.class -> onCreate() -> onStartCommand() -> call startStreaming() -> error

LogCat Message:

12-13 17:03:24.822   794  7381 E DatabaseUtils: Writing exception to parcel
12-13 17:03:24.822   794  7381 E DatabaseUtils: java.lang.SecurityException: Permission Denial: get/set setting for user asks to run as user -2 but is calling from user 0; this requires android.permission.INTERACT_ACROSS_USERS_FULL
12-13 17:03:24.822   794  7381 E DatabaseUtils:     at com.android.server.am.ActivityManagerService.handleIncomingUser(ActivityManagerService.java:12754)
12-13 17:03:24.822   794  7381 E DatabaseUtils:     at android.app.ActivityManager.handleIncomingUser(ActivityManager.java:1998)
12-13 17:03:24.822   794  7381 E DatabaseUtils:     at com.android.providers.settings.SettingsProvider.call(SettingsProvider.java:574)
12-13 17:03:24.822   794  7381 E DatabaseUtils:     at android.content.ContentProvider$Transport.call(ContentProvider.java:256)
12-13 17:03:24.822   794  7381 E DatabaseUtils:     at android.content.ContentProviderNative.onTransact(ContentProviderNative.java:256)
12-13 17:03:24.822   794  7381 E DatabaseUtils:     at android.os.Binder.execTransact(Binder.java:351)
12-13 17:03:24.822   794  7381 E DatabaseUtils:     at dalvik.system.NativeStart.run(Native Method)
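[Editor's note: for the main-thread RuntimeException quoted above, a commonly used approach is to post the recognizer calls from the Service onto the main looper. This is only a sketch under that assumption, not part of the accepted answer; the helper class and method names are illustrative, and it has not been tested on a device.]

```java
// Sketch: drive SpeechRecognizer from the main (UI) thread while the
// work is triggered from a Service. Class/method names are hypothetical.
import android.content.Context;
import android.content.Intent;
import android.os.Handler;
import android.os.Looper;
import android.speech.RecognizerIntent;
import android.speech.SpeechRecognizer;

public class MainThreadRecognizer {
    // A Handler bound to the main looper runs its Runnables on the UI thread,
    // which is where SpeechRecognizer insists on being created and driven.
    private final Handler mainHandler = new Handler(Looper.getMainLooper());
    private SpeechRecognizer sr;

    public void startListeningOnMainThread(final Context ctx) {
        mainHandler.post(new Runnable() {
            @Override
            public void run() {
                sr = SpeechRecognizer.createSpeechRecognizer(ctx.getApplicationContext());
                Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
                intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                        RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
                sr.startListening(intent); // now invoked on the main thread
            }
        });
    }
}
```

From StreamService.onStartCommand() you would call startListeningOnMainThread(this) instead of invoking startListening() directly on the binder thread.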

Solution

There are times when this particular error is actually misleading, and is caused by other runtime problems.

I documented one such example here (http://www.silverbaytech.com/2014/04/11/android-error-android-permission-interact_across_users_full/) - a NullPointerException thrown deep down ended up being reported as this same error, even though it had nothing to do with cross-user permissions.

In my particular case, ProGuard was stripping out a method that I needed, which caused a NullPointerException to be thrown. The stack trace looked like this:

Permission Denial: get/set setting for user asks to run as user -2 but is calling from user 0; this requires android.permission.INTERACT_ACROSS_USERS_FULL
java.lang.NullPointerException
 at java.lang.Enum$1.create(Enum.java:43)
 at java.lang.Enum$1.create(Enum.java:35)
 at libcore.util.BasicLruCache.get(BasicLruCache.java:54)
 at java.lang.Enum.getSharedConstants(Enum.java:209)
 at java.lang.Enum.valueOf(Enum.java:189)
 at com.my.app.package.b.c.a(Unknown Source)
 at com.my.app.package.b.a.onCreate(Unknown Source)
 at android.support.v4.app.FragmentManagerImpl.moveToState(Unknown Source)
 at android.support.v4.app.FragmentManagerImpl.moveToState(Unknown Source)
 at android.support.v4.app.BackStackRecord.run(Unknown Source)
 at android.support.v4.app.FragmentManagerImpl.execPendingActions(Unknown Source)
 at android.support.v4.app.FragmentManagerImpl$1.run(Unknown Source)
 at android.os.Handler.handleCallback(Handler.java:730)
 at android.os.Handler.dispatchMessage(Handler.java:92)
 at android.os.Looper.loop(Looper.java:137)
 at android.app.ActivityThread.main(ActivityThread.java:5455)
 at java.lang.reflect.Method.invokeNative(Native Method)
 at java.lang.reflect.Method.invoke(Method.java:525)
 at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:1187)
 at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1003)
 at dalvik.system.NativeStart.main(Native Method)

I haven't a clue in the world why Android turned the NullPointerException into the android.permission.INTERACT_ACROSS_USERS_FULL error, but the obvious solution was to tweak the ProGuard configuration so that the method wasn't being stripped.

The method I was calling that wasn't there was the "valueOf" method on an enum. It turns out that there's some interesting reflection involved under the hood (which I go into at the link above), but the solution for me was to add the following to my ProGuard configuration.

-keepclassmembers enum * {
    public static **[] values();
    public static ** valueOf(java.lang.String);
}
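[Editor's note: to illustrate why these ProGuard rules matter, the snippet below shows the reflective path Enum.valueOf relies on. Enum.valueOf resolves constants through the compiler-generated values() and valueOf(String) members, so if obfuscation strips or renames them the lookup fails at runtime. This is a plain-Java sketch; the Color enum is made up for the demo.]

```java
// Demonstrates that enum constants are reachable through the
// compiler-generated values()/valueOf members the ProGuard rule keeps.
import java.lang.reflect.Method;

public class EnumReflectionDemo {
    public enum Color { RED, GREEN, BLUE }   // hypothetical enum for the demo

    public static void main(String[] args) throws Exception {
        // Direct lookup by name, as application code typically does:
        System.out.println(Enum.valueOf(Color.class, "RED"));   // RED

        // The same members are reachable reflectively, which is why
        // obfuscation must not strip or rename them:
        Method values = Color.class.getMethod("values");
        Color[] all = (Color[]) values.invoke(null);
        System.out.println(all.length);                          // 3

        Method valueOf = Color.class.getMethod("valueOf", String.class);
        System.out.println(valueOf.invoke(null, "BLUE"));        // BLUE
    }
}
```

If ProGuard removed values(), the reflective getMethod call (and Enum.valueOf internally) would fail, which is exactly the failure mode the -keepclassmembers rule above prevents.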
