How do I overcome "Permission Denial....obtain access using ACTION_OPEN_DOCUMENT or related APIs"?


Problem description

I'm using react-native-firebase and react-native-document-picker, and I'm trying to follow the face detection tutorial.

I'm currently getting the following error despite having read access through PermissionsAndroid:

Permission Denial: reading com.android.providers.media.MediaDocumentsProvider uri [uri] from pid=4746, uid=10135 requires that you obtain access using ACTION_OPEN_DOCUMENT or related APIs

I am able to display the user's selected image on screen, but the react-native-firebase functions do not seem to have permission. The error happens at this call: const faces = await vision().faceDetectorProcessImage(localPath);.

Any suggestions on how to give the face detection function access, or on what I'm doing wrong?

My AndroidManifest.xml file contains the following:

<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
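A note on why these manifest permissions may not be enough (my reading of the error above, not something stated in the original question): documents served by MediaDocumentsProvider are guarded by the Storage Access Framework, which grants access per URI at pick time rather than through manifest-level storage permissions. A small illustrative helper (hypothetical, not part of the original code) that extracts the provider authority from a content:// URI, e.g. to log which provider is denying access:

```javascript
// Hypothetical helper: pull the provider authority out of a content:// URI.
// Returns null for non-content URIs (e.g. file:// paths).
function getContentAuthority(uri) {
  const match = /^content:\/\/([^/]+)/.exec(uri);
  return match ? match[1] : null;
}
```

For the URI in the error message this would return com.android.providers.media.documents, the authority backed by MediaDocumentsProvider.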

Here is all the code in that component for reference:

import React, {useState} from 'react';
import { Button, Text, Image, PermissionsAndroid } from 'react-native';
import vision, { VisionFaceContourType } from '@react-native-firebase/ml-vision';
import DocumentPicker from 'react-native-document-picker';



async function processFaces(localPath) {

  console.log(localPath)
  const faces = await vision().faceDetectorProcessImage(localPath);
  console.log("Got faces")

  faces.forEach(face => {
    console.log('Head rotation on Y axis: ', face.headEulerAngleY);
    console.log('Head rotation on Z axis: ', face.headEulerAngleZ);

    console.log('Left eye open probability: ', face.leftEyeOpenProbability);
    console.log('Right eye open probability: ', face.rightEyeOpenProbability);
    console.log('Smiling probability: ', face.smilingProbability);

    face.faceContours.forEach(contour => {
      if (contour.type === VisionFaceContourType.FACE) {
        console.log('Face outline points: ', contour.points);
      }
    });
  });
}

async function pickFile () {
    // Pick a single file
    try {
        const res = await DocumentPicker.pick({
            type: [DocumentPicker.types.images],
        });
        console.log(
            res.uri,
            res.type, // mime type
            res.name,
            res.size
        );
        return res
    } catch (err) {
        if (DocumentPicker.isCancel(err)) {
        // User cancelled the picker, exit any dialogs or menus and move on
            console.log("User cancelled")
        } else {
            console.log("Error picking file or processing faces")
            throw err;
        }
    }
}

const requestPermission = async () => {
    try {
      const granted = await PermissionsAndroid.request(
        PermissionsAndroid.PERMISSIONS.READ_EXTERNAL_STORAGE,
        {
          title: "Files Permission",
          message:
            "App needs access to your files " +
            "so you can run face detection.",
          buttonNeutral: "Ask Me Later",
          buttonNegative: "Cancel",
          buttonPositive: "OK"
        }
      );
      if (granted === PermissionsAndroid.RESULTS.GRANTED) {
        console.log("We can now read files");
      } else {
        console.log("File read permission denied");
      }
      return granted
    } catch (err) {
      console.warn(err);
    }
  };

function FaceDetectionScreen ({navigation}) {
    const [image, setImage] = useState("");
    return (
        <>
            <Text>This is the Face detection screen.</Text>
            <Button title="Select Image to detect faces" onPress={async () => {
                const permission = await requestPermission();
                if (permission === PermissionsAndroid.RESULTS.GRANTED) {
                    const pickedImage = await pickFile();
                    const pickedImageUri = pickedImage.uri
                    setImage(pickedImageUri);
                    processFaces(pickedImageUri).then(() => console.log('Finished processing file.'));
                }
                }}/>
            <Image style={{flex: 1}} source={{ uri: image}}/>
        </>
    ); 
}

export default FaceDetectionScreen;

Answer

Thanks to this comment on a GitHub issue, I was able to get my code working by updating the first three lines of processFaces to:

async function processFaces(contentUri) {
  const stat = await RNFetchBlob.fs.stat(contentUri)
  const faces = await vision().faceDetectorProcessImage(stat.path);

after adding the import import RNFetchBlob from 'rn-fetch-blob' from the rn-fetch-blob package.
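For context, a minimal sketch of why this fix works, under my assumption about the failure mode: the document picker returns a content:// URI, which the native ML Kit module cannot read directly, while RNFetchBlob.fs.stat resolves it to a real filesystem path. The needsPathResolution helper below is my own illustration, not part of the answer:

```javascript
// Illustrative helper (an assumption, not from the answer): detect when a
// picked URI must be resolved to a filesystem path before being handed to
// faceDetectorProcessImage. SAF pickers hand back content:// URIs.
function needsPathResolution(uri) {
  return typeof uri === 'string' && uri.startsWith('content://');
}

// Inside processFaces, roughly:
// const stat = needsPathResolution(contentUri)
//   ? await RNFetchBlob.fs.stat(contentUri)   // resolves to stat.path
//   : null;
// const faces = await vision().faceDetectorProcessImage(stat ? stat.path : contentUri);
```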

