TensorFlow Lite C++ API example for inference

Problem description

I am trying to get a TensorFlow Lite example to run on a machine with an ARM Cortex-A72 processor. Unfortunately, I wasn't able to deploy a test model due to the lack of examples on how to use the C++ API. I will try to explain what I have achieved so far.

Creating a tflite model

I created a simple linear regression model, which should approximate the function f(x) = 2x - 1, and converted it to TFLite. I took this code snippet from some tutorial, but I cannot find it anymore.

import tensorflow as tf
import numpy as np
from tensorflow import keras
from tensorflow.contrib import lite  # TF 1.x path; newer releases use tf.lite

# A single dense unit is enough to learn the linear mapping y = 2x - 1.
model = keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer='sgd', loss='mean_squared_error')

# Training data sampled from f(x) = 2x - 1.
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)

model.fit(xs, ys, epochs=500)

print(model.predict([10.0]))  # should print a value close to 19

# Save the Keras model, then convert it to the TFLite flatbuffer format.
keras_file = 'linear.h5'
keras.models.save_model(model, keras_file)

converter = lite.TocoConverter.from_keras_model_file(keras_file)
tflite_model = converter.convert()
open('linear.tflite', 'wb').write(tflite_model)

This creates a binary called linear.tflite, which I should be able to load.

Compiling TensorFlow Lite for my machine

TensorFlow Lite comes with a script for compiling on machines with the aarch64 architecture. I followed the guide here to do this, even though I had to modify the Makefile slightly. Note that I compiled natively on my target system. This created a static library called libtensorflow-lite.a.

The problem: inference

I tried to follow the tutorial on the site here, and simply pasted the code snippets for loading and running the model together, e.g.

class FlatBufferModel {
  // Build a model based on a file. Return a nullptr in case of failure.
  static std::unique_ptr<FlatBufferModel> BuildFromFile(
      const char* filename,
      ErrorReporter* error_reporter);

  // Build a model based on a pre-loaded flatbuffer. The caller retains
  // ownership of the buffer and should keep it alive until the returned object
  // is destroyed. Return a nullptr in case of failure.
  static std::unique_ptr<FlatBufferModel> BuildFromBuffer(
      const char* buffer,
      size_t buffer_size,
      ErrorReporter* error_reporter);
};

tflite::FlatBufferModel model("./linear.tflite");

tflite::ops::builtin::BuiltinOpResolver resolver;
std::unique_ptr<tflite::Interpreter> interpreter;
tflite::InterpreterBuilder(*model, resolver)(&interpreter);

// Resize input tensors, if desired.
interpreter->AllocateTensors();

float* input = interpreter->typed_input_tensor<float>(0);
// Fill `input`.

interpreter->Invoke();

float* output = interpreter->typed_output_tensor<float>(0);

When trying to compile this via

g++ demo.cpp libtensorflow-lite.a

I get a load of errors. Log:

root@localhost:/inference# g++ demo.cpp libtensorflow-lite.a 
demo.cpp:3:15: error: ‘unique_ptr’ in namespace ‘std’ does not name a template type
   static std::unique_ptr<FlatBufferModel> BuildFromFile(
               ^~~~~~~~~~
demo.cpp:10:15: error: ‘unique_ptr’ in namespace ‘std’ does not name a template type
   static std::unique_ptr<FlatBufferModel> BuildFromBuffer(
               ^~~~~~~~~~
demo.cpp:16:1: error: ‘tflite’ does not name a type
 tflite::FlatBufferModel model("./linear.tflite");
 ^~~~~~
demo.cpp:18:1: error: ‘tflite’ does not name a type
 tflite::ops::builtin::BuiltinOpResolver resolver;
 ^~~~~~
demo.cpp:19:6: error: ‘unique_ptr’ in namespace ‘std’ does not name a template type
 std::unique_ptr<tflite::Interpreter> interpreter;
      ^~~~~~~~~~
demo.cpp:20:1: error: ‘tflite’ does not name a type
 tflite::InterpreterBuilder(*model, resolver)(&interpreter);
 ^~~~~~
demo.cpp:23:1: error: ‘interpreter’ does not name a type
 interpreter->AllocateTensors();
 ^~~~~~~~~~~
demo.cpp:25:16: error: ‘interpreter’ was not declared in this scope
 float* input = interpreter->typed_input_tensor<float>(0);
                ^~~~~~~~~~~
demo.cpp:25:48: error: expected primary-expression before ‘float’
 float* input = interpreter->typed_input_tensor<float>(0);
                                                ^~~~~
demo.cpp:28:1: error: ‘interpreter’ does not name a type
 interpreter->Invoke();
 ^~~~~~~~~~~
demo.cpp:30:17: error: ‘interpreter’ was not declared in this scope
 float* output = interpreter->typed_output_tensor<float>(0);
                 ^~~~~~~~~~~
demo.cpp:30:50: error: expected primary-expression before ‘float’
 float* output = interpreter->typed_output_tensor<float>(0);

I am relatively new to C++, so I may be missing something obvious here. It seems, however, that other people have trouble with the C++ API as well (see this GitHub issue). Has anybody else stumbled across this and got it to run?

The most important aspects for me to cover would be:

1.) Where and how do I define the signature, so that the model knows what to treat as inputs and outputs?

2.) Which headers do I have to include?

Thanks!

EDIT

Thanks to @Alex Cohn, the compiler was able to find the correct headers. I also realized that I probably do not need to redeclare the FlatBufferModel class, so I ended up with this code (the minor change is marked):

#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"
#include "tensorflow/lite/tools/gen_op_registration.h"

auto model = tflite::FlatBufferModel::BuildFromFile("linear.tflite");   //CHANGED

tflite::ops::builtin::BuiltinOpResolver resolver;
std::unique_ptr<tflite::Interpreter> interpreter;
tflite::InterpreterBuilder(*model, resolver)(&interpreter);

// Resize input tensors, if desired.
interpreter->AllocateTensors();

float* input = interpreter->typed_input_tensor<float>(0);
// Fill `input`.

interpreter->Invoke();

float* output = interpreter->typed_output_tensor<float>(0);

This reduces the number of errors greatly, but I am not sure how to resolve the rest:

root@localhost:/inference# g++ demo.cpp -I/tensorflow
demo.cpp:10:34: error: expected ‘)’ before ‘,’ token
 tflite::InterpreterBuilder(*model, resolver)(&interpreter);
                                  ^
demo.cpp:10:44: error: expected initializer before ‘)’ token
 tflite::InterpreterBuilder(*model, resolver)(&interpreter);
                                            ^
demo.cpp:13:1: error: ‘interpreter’ does not name a type
 interpreter->AllocateTensors();
 ^~~~~~~~~~~
demo.cpp:18:1: error: ‘interpreter’ does not name a type
 interpreter->Invoke();
 ^~~~~~~~~~~

How do I tackle these? It seems that I have to define my own resolver, but I have no clue how to do that.

Answer

Here is the minimal set of includes:

#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"
#include "tensorflow/lite/tools/gen_op_registration.h"

These will include other headers, e.g. <memory>, which defines std::unique_ptr.
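
Beyond the includes, the remaining errors in the edit above ("'interpreter' does not name a type", etc.) are not about the resolver: they appear because the statements sit at file scope, and C++ only allows statements inside a function. Below is a minimal sketch of a complete demo.cpp assembled from the snippets in the question; the input value 10.0 and the printed tensor names are only illustrative, and error handling is kept to a bare minimum.

#include <cstdio>
#include <memory>

#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
  // Load the flatbuffer model from disk.
  auto model = tflite::FlatBufferModel::BuildFromFile("linear.tflite");
  if (!model) {
    std::fprintf(stderr, "failed to load model\n");
    return 1;
  }

  // The built-in operators cover a plain Dense layer; no custom
  // resolver is needed here.
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);
  if (!interpreter || interpreter->AllocateTensors() != kTfLiteOk) {
    std::fprintf(stderr, "failed to build interpreter\n");
    return 1;
  }

  // Regarding question 1.): the "signature" is baked into the .tflite file
  // by the converter; inputs and outputs are simply addressed by index.
  for (int i : interpreter->inputs())
    std::printf("input tensor: %s\n", interpreter->tensor(i)->name);
  for (int i : interpreter->outputs())
    std::printf("output tensor: %s\n", interpreter->tensor(i)->name);

  // Fill the single input, run the model, read the single output.
  *interpreter->typed_input_tensor<float>(0) = 10.0f;
  if (interpreter->Invoke() != kTfLiteOk) {
    std::fprintf(stderr, "failed to invoke\n");
    return 1;
  }
  // For f(x) = 2x - 1 the result should be close to 19.
  std::printf("f(10) = %f\n", *interpreter->typed_output_tensor<float>(0));
  return 0;
}

Compiling is then something along the lines of g++ -std=c++11 demo.cpp libtensorflow-lite.a -I/tensorflow -lpthread -ldl; the exact include paths and extra libraries depend on how the static library was built (the flatbuffers headers downloaded by the Makefile may also need to be on the include path).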
