How to run a Tensorflow-Lite inference in (Android Studio) NDK (C/C++ API)?


Problem Description

  • I built a Tensorflow (TF) model from Keras and converted it to Tensorflow-Lite (TFL)
  • I built an Android app in Android Studio and used the Java API to run the TFL model
  • In the Java app, I used the TFL Support Library (see here), and the TensorFlow Lite AAR from JCenter by including implementation 'org.tensorflow:tensorflow-lite:+' under my build.gradle dependencies

Inference times are not so great, so now I want to use TFL in Android's NDK.

So I built an exact copy of the Java app in Android Studio's NDK, and now I'm trying to include the TFL libs in the project. I followed TensorFlow-Lite's Android guide and built the TFL library locally (and got an AAR file), and included the library in my NDK project in Android Studio.

Now I'm trying to use the TFL library in my C++ file, by trying to #include it in code, but I get an error message: cannot find tensorflow (or any other name I'm trying to use, according to the name I give it in my CMakeLists.txt file).

App build.gradle:

apply plugin: 'com.android.application'

android {
    compileSdkVersion 29
    buildToolsVersion "29.0.3"

    defaultConfig {
        applicationId "com.ndk.tflite"
        minSdkVersion 28
        targetSdkVersion 29
        versionCode 1
        versionName "1.0"

        testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"

        externalNativeBuild {
            cmake {
                cppFlags ""
            }
        }

        ndk {
            abiFilters 'arm64-v8a'
        }

    }

    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
        }
    }

    // tf lite
    aaptOptions {
        noCompress "tflite"
    }

    externalNativeBuild {
        cmake {
            path "src/main/cpp/CMakeLists.txt"
            version "3.10.2"
        }
    }
}

dependencies {
    implementation fileTree(dir: 'libs', include: ['*.jar'])

    implementation 'androidx.appcompat:appcompat:1.1.0'
    implementation 'androidx.constraintlayout:constraintlayout:1.1.3'
    testImplementation 'junit:junit:4.12'
    androidTestImplementation 'androidx.test.ext:junit:1.1.1'
    androidTestImplementation 'androidx.test.espresso:espresso-core:3.2.0'

    // tflite build
    compile(name:'tensorflow-lite', ext:'aar')

}

Project build.gradle:

buildscript {

    repositories {
        google()
        jcenter()

    }
    dependencies {
        classpath 'com.android.tools.build:gradle:3.6.2'

    }
}

allprojects {
    repositories {
        google()
        jcenter()

        // native tflite
        flatDir {
            dirs 'libs'
        }

    }

}


task clean(type: Delete) {
    delete rootProject.buildDir
}

CMakeLists.txt:

cmake_minimum_required(VERSION 3.4.1)

add_library( # Sets the name of the library.
             native-lib

             # Sets the library as a shared library.
             SHARED

             # Provides a relative path to your source file(s).
             native-lib.cpp )

add_library( # Sets the name of the library.
        tensorflow-lite

        # Sets the library as a shared library.
        SHARED

        # Provides a relative path to your source file(s).
        native-lib.cpp )

find_library( # Sets the name of the path variable.
              log-lib

              # Specifies the name of the NDK library that
              # you want CMake to locate.
              log )


target_link_libraries( # Specifies the target library.
                       native-lib tensorflow-lite

                       # Links the target library to the log library
                       # included in the NDK.
                       ${log-lib} )

native-lib.cpp:

#include <jni.h>
#include <string>

#include "tensorflow"

extern "C" JNIEXPORT jstring JNICALL
Java_com_xvu_f32c_1jni_MainActivity_stringFromJNI(
        JNIEnv* env,
        jobject /* this */) {
    std::string hello = "Hello from C++";
    return env->NewStringUTF(hello.c_str());
}

class FlatBufferModel {
    // Build a model based on a file. Return a nullptr in case of failure.
    static std::unique_ptr<FlatBufferModel> BuildFromFile(
            const char* filename,
            ErrorReporter* error_reporter);

    // Build a model based on a pre-loaded flatbuffer. The caller retains
    // ownership of the buffer and should keep it alive until the returned object
    // is destroyed. Return a nullptr in case of failure.
    static std::unique_ptr<FlatBufferModel> BuildFromBuffer(
            const char* buffer,
            size_t buffer_size,
            ErrorReporter* error_reporter);
};

Progress

I also tried to follow these:

  • Problems with using tensorflow lite C++ API in Android Studio Project
  • Android C++ NDK : some shared libraries refuses to link in runtime
  • How to build TensorFlow Lite as a static library and link to it from a separate (CMake) project?
  • how to set input of Tensorflow Lite C++
  • How can I build only TensorFlow lite and not all TensorFlow from source?

but in my case I used Bazel to build the TFL libs.

Trying to build the classification demo (label_image), I managed to build it and adb push it to my device, but when I tried to run it I got the following error:

ERROR: Could not open './mobilenet_quant_v1_224.tflite'.
Failed to mmap model ./mobilenet_quant_v1_224.tflite

  • I followed zimenglyu's post: trying to set android_sdk_repository / android_ndk_repository in WORKSPACE got me an error: WORKSPACE:149:1: Cannot redefine repository after any load statement in the WORKSPACE file (for repository 'androidsdk'), and locating these statements at different places resulted in the same error.
  • I deleted these changes to WORKSPACE and continued with zimenglyu's post: I compiled libtensorflowLite.so and edited CMakeLists.txt so that the libtensorflowLite.so file was referenced, but left the FlatBuffer part out. The Android project compiled successfully, but there was no evident change; I still can't include any TFLite libraries.
  • Trying to compile TFL, I added a cc_binary to tensorflow/tensorflow/lite/BUILD (following the label_image example):

cc_binary(
    name = "native-lib",
    srcs = [
        "native-lib.cpp",
    ],
    linkopts = tflite_experimental_runtime_linkopts() + select({
        "//tensorflow:android": [
            "-pie",
            "-lm",
        ],
        "//conditions:default": [],
    }),
    deps = [
        "//tensorflow/lite/c:common",
        "//tensorflow/lite:framework",
        "//tensorflow/lite:string_util",
        "//tensorflow/lite/delegates/nnapi:nnapi_delegate",
        "//tensorflow/lite/kernels:builtin_ops",
        "//tensorflow/lite/profiling:profiler",
        "//tensorflow/lite/tools/evaluation:utils",
    ] + select({
        "//tensorflow:android": [
            "//tensorflow/lite/delegates/gpu:delegate",
        ],
        "//tensorflow:android_arm64": [
            "//tensorflow/lite/delegates/gpu:delegate",
        ],
        "//conditions:default": [],
    }),
)

and trying to build it for x86_64 and arm64-v8a, I get an error: cc_toolchain_suite rule @local_config_cc//:toolchain: cc_toolchain_suite '@local_config_cc//:toolchain' does not contain a toolchain for cpu 'x86_64'.

Checking external/local_config_cc/BUILD (which produced the error) at line 47:

cc_toolchain_suite(
    name = "toolchain",
    toolchains = {
        "k8|compiler": ":cc-compiler-k8",
        "k8": ":cc-compiler-k8",
        "armeabi-v7a|compiler": ":cc-compiler-armeabi-v7a",
        "armeabi-v7a": ":cc-compiler-armeabi-v7a",
    },
)

and these are the only two cc_toolchains found. Searching the repository for "cc-compiler-", I only found "aarch64", which I assume is for 64-bit ARM, but nothing for "x86_64". There is "x64_windows", though, and I'm on Linux.

Trying to build with aarch64 like so:

bazel build -c opt --fat_apk_cpu=aarch64 --cpu=aarch64 --host_crosstool_top=@bazel_tools//tools/cpp:toolchain //tensorflow/lite/java:tensorflow-lite

results in the error:

ERROR: /.../external/local_config_cc/BUILD:47:1: in cc_toolchain_suite rule @local_config_cc//:toolchain: cc_toolchain_suite '@local_config_cc//:toolchain' does not contain a toolchain for cpu 'aarch64'

Using the libraries in Android Studio:

I was able to build the library for the x86_64 architecture by changing the soname in the build config and using full paths in CMakeLists.txt. This resulted in a .so shared library. I was also able to build the library for arm64-v8a using the TFLite Docker container, by adjusting the aarch64_makefile.inc file; I did not change any build options and let build_aarch64_lib.sh build whatever it builds by default. This resulted in a .a static library.

So now I have two TFLite libs, but I'm still unable to use them (for example, I can't #include anything).

When trying to build the project, using only x86_64 works fine, but trying to include the arm64-v8a library results in a ninja error: '.../libtensorflow-lite.a', needed by '.../app/build/intermediates/cmake/debug/obj/armeabi-v7a/libnative-lib.so', missing and no known rule to make it.

In another attempt, I tried including the TFLite source files directly in the project:

1. I created a Native C++ project in Android Studio
2. I took the basic C/C++ source files and headers from Tensorflow's lite directory, and created a similar structure in app/src/main/cpp, in which I included the (A) tensorflow, (B) absl and (C) flatbuffers files
3. I changed the #include "tensorflow/... lines in all of tensorflow's header files to relative paths so the compiler can find them
4. In the app's build.gradle I added a no-compression line for the .tflite file: aaptOptions { noCompress "tflite" }
5. I added an assets directory to the app
6. In native-lib.cpp I added some example code from the TFLite website
7. I tried to build the project with the source files included (build target is arm64-v8a)

I get an error:

/path/to/Android/Sdk/ndk/20.0.5594570/toolchains/llvm/prebuilt/linux-x86_64/sysroot/usr/include/c++/v1/memory:2339: error: undefined reference to 'tflite::impl::Interpreter::~Interpreter()'

      <memory>中,第2339行是"delete __ptr;"行:

In <memory>, line 2339 is the "delete __ptr;" line:

_LIBCPP_INLINE_VISIBILITY void operator()(_Tp* __ptr) const _NOEXCEPT {
    static_assert(sizeof(_Tp) > 0,
                  "default_delete can not delete incomplete type");
    static_assert(!is_void<_Tp>::value,
                  "default_delete can not delete incomplete type");
    delete __ptr;
}

Question

How can I include the TFLite libraries in Android Studio, so I can run a TFL inference from the NDK?

Alternatively, how can I use gradle (currently with cmake) to build and compile the source files?

Recommended Answer

I use native TFL with the C API in the following way:

1. Download the latest version of the TensorFlow Lite AAR file
2. Change the extension of the downloaded .aar file to .zip and unzip it to get the shared library (.so file)
3. Download all header files from the c directory in the TFL repository
4. Create an Android C++ app in Android Studio
5. Create a jni directory (New -> Folder -> JNI Folder) in app/src/main and create architecture sub-directories in it (arm64-v8a or x86_64, for example)
6. Put all header files in the jni directory (next to the architecture directories), and put the shared library inside the architecture directory/ies
7. Open the CMakeLists.txt file and add an add_library stanza for the TFL library, the path to the shared library in a set_target_properties stanza, and the headers in an include_directories stanza (see the NOTES section below)
8. Sync Gradle

USAGE:

In native-lib.cpp include the headers, for example:

#include "../jni/c_api.h"
#include "../jni/common.h"
#include "../jni/builtin_ops.h"

TFL functions can be called directly, for example:

// Load the model and create an interpreter; the second argument of
// TfLiteInterpreterCreate takes optional TfLiteInterpreterOptions
// (nullptr uses the defaults).
TfLiteModel* model = TfLiteModelCreateFromFile(full_path);
TfLiteInterpreter* interpreter = TfLiteInterpreterCreate(model, nullptr);
TfLiteInterpreterAllocateTensors(interpreter);

// Copy input in, run inference, copy output out.
TfLiteTensor* input_tensor =
        TfLiteInterpreterGetInputTensor(interpreter, 0);
const TfLiteTensor* output_tensor =
        TfLiteInterpreterGetOutputTensor(interpreter, 0);
TfLiteStatus from_status = TfLiteTensorCopyFromBuffer(
        input_tensor,
        input_data,
        TfLiteTensorByteSize(input_tensor));
TfLiteStatus interpreter_invoke_status = TfLiteInterpreterInvoke(interpreter);
TfLiteStatus to_status = TfLiteTensorCopyToBuffer(
        output_tensor,
        output_data,
        TfLiteTensorByteSize(output_tensor));

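The snippet above leaves out options, status checks, and teardown. Below is a minimal self-contained sketch of the same flow; the function name, the thread count, and the assumption that the caller supplies correctly sized float buffers are illustrative, not part of the original code:

// A minimal, self-contained sketch (assumptions: full_path points to a
// readable .tflite file, and the caller passes float buffers whose sizes
// match the model's input/output tensors).
#include <cstddef>
#include "../jni/c_api.h"

bool RunTflInference(const char* full_path,
                     const float* input_data, size_t input_floats,
                     float* output_data, size_t output_floats) {
    TfLiteModel* model = TfLiteModelCreateFromFile(full_path);
    if (model == nullptr) return false;

    // Options are optional (nullptr works); here they set the thread count.
    TfLiteInterpreterOptions* options = TfLiteInterpreterOptionsCreate();
    TfLiteInterpreterOptionsSetNumThreads(options, 2);
    TfLiteInterpreter* interpreter = TfLiteInterpreterCreate(model, options);

    bool ok = interpreter != nullptr &&
              TfLiteInterpreterAllocateTensors(interpreter) == kTfLiteOk;
    if (ok) {
        TfLiteTensor* input = TfLiteInterpreterGetInputTensor(interpreter, 0);
        ok = TfLiteTensorCopyFromBuffer(input, input_data,
                                        input_floats * sizeof(float)) == kTfLiteOk
             && TfLiteInterpreterInvoke(interpreter) == kTfLiteOk;
    }
    if (ok) {
        const TfLiteTensor* output =
                TfLiteInterpreterGetOutputTensor(interpreter, 0);
        ok = TfLiteTensorCopyToBuffer(output, output_data,
                                      output_floats * sizeof(float)) == kTfLiteOk;
    }

    // Unlike the C++ API, the C API has no RAII wrappers: every Create
    // must be paired with an explicit Delete.
    TfLiteInterpreterDelete(interpreter);
    TfLiteInterpreterOptionsDelete(options);
    TfLiteModelDelete(model);
    return ok;
}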
NOTES:

  • In this setup, SDK version 29 was used
  • The cmake environment also included cppFlags "-frtti -fexceptions"
  • CMakeLists.txt example:

set(JNI_DIR ${CMAKE_CURRENT_SOURCE_DIR}/../jni)
add_library(tflite-lib SHARED IMPORTED)
set_target_properties(tflite-lib
        PROPERTIES IMPORTED_LOCATION
        ${JNI_DIR}/${ANDROID_ABI}/libtfl.so)
include_directories( ${JNI_DIR} )
target_link_libraries(
        native-lib
        tflite-lib
        ...)

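Since the app's build.gradle in the question already sets aaptOptions { noCompress "tflite" }, the model can alternatively be read straight from the APK's assets instead of from a file path. The sketch below assumes hypothetical JNI names and an asset called model.tflite, and additionally requires linking the NDK's android library (find_library + target_link_libraries in CMakeLists.txt):

// Sketch: load a .tflite model from APK assets via the NDK asset manager.
// Assumes the headers are laid out under app/src/main/jni as above.
#include <jni.h>
#include <android/asset_manager.h>
#include <android/asset_manager_jni.h>
#include "../jni/c_api.h"

extern "C" JNIEXPORT jboolean JNICALL
Java_com_example_tflitendk_MainActivity_loadModelFromAssets(
        JNIEnv* env, jobject /* this */, jobject java_asset_manager) {
    AAssetManager* mgr = AAssetManager_fromJava(env, java_asset_manager);
    AAsset* asset = AAssetManager_open(mgr, "model.tflite", AASSET_MODE_BUFFER);
    if (asset == nullptr) return JNI_FALSE;

    // TfLiteModelCreate does not copy the buffer, so the asset must stay
    // open for as long as the model (and any interpreter built on it) lives.
    const void* buffer = AAsset_getBuffer(asset);
    TfLiteModel* model = TfLiteModelCreate(
            buffer, static_cast<size_t>(AAsset_getLength(asset)));
    const jboolean ok = (model != nullptr) ? JNI_TRUE : JNI_FALSE;

    // ... create an interpreter and run inference as shown in USAGE ...

    TfLiteModelDelete(model);
    AAsset_close(asset);
    return ok;
}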