How to run a Tensorflow-Lite inference in (Android Studio) NDK (C / C++ API)?
Problem description
- I built a Tensorflow (TF) model from Keras and converted it to Tensorflow-Lite (TFL)
- I built an Android app in Android Studio and used the Java API to run the TFL model
- In the Java app, I used the TFL Support Library (see here) and the TensorFlow Lite AAR from JCenter, by including implementation 'org.tensorflow:tensorflow-lite:+' under my build.gradle dependencies
Inference times are not so great, so now I want to use TFL in Android's NDK.
So I built an exact copy of the Java app in Android Studio's NDK, and now I'm trying to include the TFL libs in the project. I followed TensorFlow-Lite's Android guide and built the TFL library locally (and got an AAR file), and included the library in my NDK project in Android Studio.
Now I'm trying to use the TFL library in my C++ file by trying to #include it in code, but I get an error message: cannot find tensorflow (or any other name I try, according to the name I give it in my CMakeLists.txt file).
App build.gradle:
apply plugin: 'com.android.application'
android {
compileSdkVersion 29
buildToolsVersion "29.0.3"
defaultConfig {
applicationId "com.ndk.tflite"
minSdkVersion 28
targetSdkVersion 29
versionCode 1
versionName "1.0"
testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
externalNativeBuild {
cmake {
cppFlags ""
}
}
ndk {
abiFilters 'arm64-v8a'
}
}
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
}
}
// tf lite
aaptOptions {
noCompress "tflite"
}
externalNativeBuild {
cmake {
path "src/main/cpp/CMakeLists.txt"
version "3.10.2"
}
}
}
dependencies {
implementation fileTree(dir: 'libs', include: ['*.jar'])
implementation 'androidx.appcompat:appcompat:1.1.0'
implementation 'androidx.constraintlayout:constraintlayout:1.1.3'
testImplementation 'junit:junit:4.12'
androidTestImplementation 'androidx.test.ext:junit:1.1.1'
androidTestImplementation 'androidx.test.espresso:espresso-core:3.2.0'
// tflite build
compile(name:'tensorflow-lite', ext:'aar')
}
Project build.gradle:
buildscript {
repositories {
google()
jcenter()
}
dependencies {
classpath 'com.android.tools.build:gradle:3.6.2'
}
}
allprojects {
repositories {
google()
jcenter()
// native tflite
flatDir {
dirs 'libs'
}
}
}
task clean(type: Delete) {
delete rootProject.buildDir
}
CMakeLists.txt:
cmake_minimum_required(VERSION 3.4.1)
add_library( # Sets the name of the library.
native-lib
# Sets the library as a shared library.
SHARED
# Provides a relative path to your source file(s).
native-lib.cpp )
add_library( # Sets the name of the library.
tensorflow-lite
# Sets the library as a shared library.
SHARED
# Provides a relative path to your source file(s).
native-lib.cpp )
find_library( # Sets the name of the path variable.
log-lib
# Specifies the name of the NDK library that
# you want CMake to locate.
log )
target_link_libraries( # Specifies the target library.
native-lib tensorflow-lite
# Links the target library to the log library
# included in the NDK.
${log-lib} )
native-lib.cpp:
#include <jni.h>
#include <string>
#include "tensorflow"
extern "C" JNIEXPORT jstring JNICALL
Java_com_xvu_f32c_1jni_MainActivity_stringFromJNI(
JNIEnv* env,
jobject /* this */) {
std::string hello = "Hello from C++";
return env->NewStringUTF(hello.c_str());
}
class FlatBufferModel {
// Build a model based on a file. Return a nullptr in case of failure.
static std::unique_ptr<FlatBufferModel> BuildFromFile(
const char* filename,
ErrorReporter* error_reporter);
// Build a model based on a pre-loaded flatbuffer. The caller retains
// ownership of the buffer and should keep it alive until the returned object
// is destroyed. Return a nullptr in case of failure.
static std::unique_ptr<FlatBufferModel> BuildFromBuffer(
const char* buffer,
size_t buffer_size,
ErrorReporter* error_reporter);
};
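For context, the FlatBufferModel excerpt above is normally combined with InterpreterBuilder to run the C++ API. A minimal sketch of that flow follows; it only compiles once the TFL headers and library are correctly linked, which is exactly the problem described in this question:

```cpp
// Sketch: the typical TFLite C++ API flow, assuming the TFL headers
// and libraries are on the include/link path.
#include <memory>
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

std::unique_ptr<tflite::Interpreter> BuildInterpreter(const char* model_path) {
  // Load the .tflite flatbuffer from disk; returns nullptr on failure.
  auto model = tflite::FlatBufferModel::BuildFromFile(model_path);
  if (!model) return nullptr;

  // Wire up the built-in op kernels and build an interpreter.
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  if (tflite::InterpreterBuilder(*model, resolver)(&interpreter) != kTfLiteOk)
    return nullptr;

  // Allocate input/output tensor buffers before running inference.
  if (interpreter->AllocateTensors() != kTfLiteOk) return nullptr;
  return interpreter;
}
```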
Progress
I also tried to follow these:
- Problems with using tensorflow lite C++ API in Android Studio Project
- Android C++ NDK : some shared libraries refuses to link in runtime
- How to build TensorFlow Lite as a static library and link to it from a separate (CMake) project?
- how to set input of Tensorflow Lite C++
- How can I build only TensorFlow lite and not all TensorFlow from source?
but in my case I used Bazel to build the TFL libs.
Trying to build the classification demo (label_image), I managed to build it and adb push it to my device, but when trying to run it I got the following error:
ERROR: Could not open './mobilenet_quant_v1_224.tflite'.
Failed to mmap model ./mobilenet_quant_v1_224.tflite
- I followed zimenglyu's post: trying to set android_sdk_repository / android_ndk_repository in WORKSPACE got me an error: WORKSPACE:149:1: Cannot redefine repository after any load statement in the WORKSPACE file (for repository 'androidsdk'), and locating these statements at different places resulted in the same error.
- I deleted these changes to WORKSPACE and continued with zimenglyu's post: I've compiled libtensorflowLite.so, and edited CMakeLists.txt so that the libtensorflowLite.so file was referenced, but left the FlatBuffer part out. The Android project compiled successfully, but there was no evident change; I still can't include any TFLite libraries.
Trying to compile TFL, I added a cc_binary to tensorflow/tensorflow/lite/BUILD (following the label_image example):
cc_binary(
name = "native-lib",
srcs = [
"native-lib.cpp",
],
linkopts = tflite_experimental_runtime_linkopts() + select({
"//tensorflow:android": [
"-pie",
"-lm",
],
"//conditions:default": [],
}),
deps = [
"//tensorflow/lite/c:common",
"//tensorflow/lite:framework",
"//tensorflow/lite:string_util",
"//tensorflow/lite/delegates/nnapi:nnapi_delegate",
"//tensorflow/lite/kernels:builtin_ops",
"//tensorflow/lite/profiling:profiler",
"//tensorflow/lite/tools/evaluation:utils",
] + select({
"//tensorflow:android": [
"//tensorflow/lite/delegates/gpu:delegate",
],
"//tensorflow:android_arm64": [
"//tensorflow/lite/delegates/gpu:delegate",
],
"//conditions:default": [],
}),
)
and trying to build it for x86_64 and arm64-v8a I get an error: cc_toolchain_suite rule @local_config_cc//:toolchain: cc_toolchain_suite '@local_config_cc//:toolchain' does not contain a toolchain for cpu 'x86_64'.
Checking external/local_config_cc/BUILD (which produced the error) at line 47:
cc_toolchain_suite(
name = "toolchain",
toolchains = {
"k8|compiler": ":cc-compiler-k8",
"k8": ":cc-compiler-k8",
"armeabi-v7a|compiler": ":cc-compiler-armeabi-v7a",
"armeabi-v7a": ":cc-compiler-armeabi-v7a",
},
)
and these are the only 2 cc_toolchains found. Searching the repository for "cc-compiler-" I only found "aarch64", which I assumed is for 64-bit ARM, but nothing with "x86_64". There is "x64_windows", though - and I'm on Linux.
Trying to build with aarch64 like so:
bazel build -c opt --fat_apk_cpu=aarch64 --cpu=aarch64 --host_crosstool_top=@bazel_tools//tools/cpp:toolchain //tensorflow/lite/java:tensorflow-lite
resulted in the error:
ERROR: /.../external/local_config_cc/BUILD:47:1: in cc_toolchain_suite rule @local_config_cc//:toolchain: cc_toolchain_suite '@local_config_cc//:toolchain' does not contain a toolchain for cpu 'aarch64'
Using the libraries in Android Studio:
I was able to build the library for the x86_64 architecture by changing the soname in the build config and using full paths in CMakeLists.txt. This resulted in a .so shared library. Also, I was able to build the library for arm64-v8a using the TFLite Docker container by adjusting the aarch64_makefile.inc file, but I did not change any build options and let build_aarch64_lib.sh build whatever it builds. This resulted in a .a static library.
So now I have two TFLite libs, but I'm still unable to use them (I can't #include "..." anything, for example).
When trying to build the project, using only x86_64 works fine, but trying to include the arm64-v8a library results in a ninja error: '.../libtensorflow-lite.a', needed by '.../app/build/intermediates/cmake/debug/obj/armeabi-v7a/libnative-lib.so', missing and no known rule to make it.
- I created a Native C++ project in Android Studio
- I took the basic C/C++ source files and headers from Tensorflow's lite directory, and created a similar structure in app/src/main/cpp, in which I include the (A) tensorflow, (B) absl and (C) flatbuffers files
- I changed the #include "tensorflow/... lines in all of tensorflow's header files to relative paths so the compiler can find them
- In the app's build.gradle I added a no-compression line for the .tflite file: aaptOptions { noCompress "tflite" }
- I added an assets directory to the app
- In native-lib.cpp I added some example code from the TFLite website
- Tried to build the project with the source files included (build target is arm64-v8a).
I get an error:
/path/to/Android/Sdk/ndk/20.0.5594570/toolchains/llvm/prebuilt/linux-x86_64/sysroot/usr/include/c++/v1/memory:2339: error: undefined reference to 'tflite::impl::Interpreter::~Interpreter()'
In <memory>, line 2339 is the "delete __ptr;" line:
_LIBCPP_INLINE_VISIBILITY void operator()(_Tp* __ptr) const _NOEXCEPT {
static_assert(sizeof(_Tp) > 0,
"default_delete can not delete incomplete type");
static_assert(!is_void<_Tp>::value,
"default_delete can not delete incomplete type");
delete __ptr;
}
Question
How can I include the TFLite libraries in Android Studio, so I can run a TFL inference from the NDK?
Alternatively - how can I use gradle (currently with cmake) to build and compile the source files?
Recommended answer
I use native TFL with the C API in the following way:
- Download the latest version of the TensorFlow Lite AAR file
- Change the file type of the downloaded .aar file to .zip and unzip the file to get the shared library (.so file)
- Download all header files from the c directory in the TFL repository
- Create an Android C++ app in Android Studio
- Create a jni directory (New -> Folder -> JNI Folder) in app/src/main and also create architecture sub-directories in it (arm64-v8a or x86_64, for example)
- Put all header files in the jni directory (next to the architecture directories), and put the shared library inside the architecture directory/ies
- Open the CMakeLists.txt file and include an add_library stanza for the TFL library, the path to the shared library in a set_target_properties stanza, and the headers in an include_directories stanza (see the NOTES section below)
- Sync Gradle
USAGE:
In native-lib.cpp include the headers, for example:
#include "../jni/c_api.h"
#include "../jni/common.h"
#include "../jni/builtin_ops.h"
TFL functions can be called directly, for example:
TfLiteModel * model = TfLiteModelCreateFromFile(full_path);
TfLiteInterpreter * interpreter = TfLiteInterpreterCreate(model);
TfLiteInterpreterAllocateTensors(interpreter);
TfLiteTensor * input_tensor =
TfLiteInterpreterGetInputTensor(interpreter, 0);
const TfLiteTensor * output_tensor =
TfLiteInterpreterGetOutputTensor(interpreter, 0);
TfLiteStatus from_status = TfLiteTensorCopyFromBuffer(
input_tensor,
input_data,
TfLiteTensorByteSize(input_tensor));
TfLiteStatus interpreter_invoke_status = TfLiteInterpreterInvoke(interpreter);
TfLiteStatus to_status = TfLiteTensorCopyToBuffer(
output_tensor,
output_data,
TfLiteTensorByteSize(output_tensor));
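Putting the calls above together, they can be wrapped in a single JNI function callable from Java. This is a sketch under assumptions: the Java class/package name, the use of a model file path, and the nullptr interpreter options are placeholders I chose, not from the original answer:

```cpp
#include <jni.h>
#include <vector>
#include "../jni/c_api.h"

// Hypothetical JNI entry point: runs one inference on a float input buffer.
// The class name is a placeholder - match it to your own Java class.
extern "C" JNIEXPORT jfloatArray JNICALL
Java_com_example_tflite_MainActivity_runInference(
    JNIEnv* env, jobject /* this */, jstring model_path, jfloatArray input) {
  const char* path = env->GetStringUTFChars(model_path, nullptr);
  TfLiteModel* model = TfLiteModelCreateFromFile(path);
  env->ReleaseStringUTFChars(model_path, path);
  if (!model) return nullptr;

  // Second argument is optional TfLiteInterpreterOptions; nullptr = defaults.
  TfLiteInterpreter* interpreter = TfLiteInterpreterCreate(model, nullptr);
  TfLiteInterpreterAllocateTensors(interpreter);

  // Copy the Java float[] into the input tensor.
  TfLiteTensor* in = TfLiteInterpreterGetInputTensor(interpreter, 0);
  jfloat* in_data = env->GetFloatArrayElements(input, nullptr);
  TfLiteTensorCopyFromBuffer(in, in_data, TfLiteTensorByteSize(in));
  env->ReleaseFloatArrayElements(input, in_data, JNI_ABORT);

  TfLiteInterpreterInvoke(interpreter);

  // Copy the output tensor back into a new Java float[].
  const TfLiteTensor* out = TfLiteInterpreterGetOutputTensor(interpreter, 0);
  jsize out_len = (jsize)(TfLiteTensorByteSize(out) / sizeof(float));
  std::vector<float> out_data(out_len);
  TfLiteTensorCopyToBuffer(out, out_data.data(), TfLiteTensorByteSize(out));
  jfloatArray result = env->NewFloatArray(out_len);
  env->SetFloatArrayRegion(result, 0, out_len, out_data.data());

  TfLiteInterpreterDelete(interpreter);
  TfLiteModelDelete(model);
  return result;
}
```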
NOTES:
- In this setup, SDK version 29 was used
- The cmake environment also included cppFlags "-frtti -fexceptions"
CMakeLists.txt example:
set(JNI_DIR ${CMAKE_CURRENT_SOURCE_DIR}/../jni)
add_library(tflite-lib SHARED IMPORTED)
set_target_properties(tflite-lib
PROPERTIES IMPORTED_LOCATION
${JNI_DIR}/${ANDROID_ABI}/libtfl.so)
include_directories( ${JNI_DIR} )
target_link_libraries(
native-lib
tflite-lib
...)