TF-lite model test fails with run-time Error


Problem description

I have created a TF-lite model for MNIST classification (I am using TF 1.12.0 and running this on Google Colab) and I want to test it using the TensorFlow Lite Python interpreter, as given in

https://github.com/freedomtan/tensorflow/blob/deeplab_tflite_python/tensorflow/contrib/lite/examples/python/label_image.py

But I am getting this error when I try to invoke the interpreter -

RuntimeError                              Traceback (most recent call last)
<ipython-input-138-7d35ed1dfe14> in <module>()
----> 1 interpreter.invoke()

/usr/local/lib/python3.6/dist-packages/tensorflow/contrib/lite/python/interpreter.py in invoke(self)
    251       ValueError: When the underlying interpreter fails raise ValueError.
    252     """
--> 253     self._ensure_safe()
    254     self._interpreter.Invoke()
    255

/usr/local/lib/python3.6/dist-packages/tensorflow/contrib/lite/python/interpreter.py in _ensure_safe(self)
     97       in the interpreter in the form of a numpy array or slice. Be sure to
     98       only hold the function returned from tensor() if you are using raw
---> 99       data access.""")

    101   def _get_tensor_details(self, tensor_index):

RuntimeError: There is at least 1 reference to internal data
  in the interpreter in the form of a numpy array or slice. Be sure to
  only hold the function returned from tensor() if you are using raw
  data access.

Here is the code -

import numpy as np
import tensorflow as tf

# Load TFLite model and allocate tensors.
interpreter = tf.contrib.lite.Interpreter(model_path="mnist/mnist_custom.tflite")
interpreter.allocate_tensors()

# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
input_details

[{'dtype': numpy.float32, 'index': 3, 'name': 'conv2d_1_input', 'quantization': (0.0, 0), 'shape': array([ 1, 28, 28, 1], dtype=int32)}]

test_images[0].shape

(28, 28, 1)

input_data = np.expand_dims(test_images[0], axis=0)
input_data.shape

(1, 28, 28, 1)

interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()

The problem is I do not understand what this message means and what to do about it.

Recommended answer

tf.convert_to_tensor and interpreter.set_tensor did the job for me:

# z holds the input data (e.g. the (1, 28, 28, 1) batch built above).
tensor_index = interpreter.get_input_details()[0]['index']
input_tensor_z = tf.convert_to_tensor(z, np.float32)
interpreter.set_tensor(tensor_index, input_tensor_z)
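For completeness, a minimal sketch of the whole inference flow with this fix applied might look like the following. It assumes the same model path as in the question and that test_images is the questioner's preprocessed MNIST array; the final get_tensor/argmax read-out is illustrative and not part of the original answer.

import numpy as np
import tensorflow as tf

# Load the TFLite model and allocate tensors (same as in the question).
interpreter = tf.contrib.lite.Interpreter(model_path="mnist/mnist_custom.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Build the (1, 28, 28, 1) input batch and wrap it with convert_to_tensor
# before handing it to set_tensor.
z = np.expand_dims(test_images[0], axis=0).astype(np.float32)
input_tensor_z = tf.convert_to_tensor(z, np.float32)
interpreter.set_tensor(input_details[0]['index'], input_tensor_z)

# Run inference and read the prediction back out after invoke() has finished.
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
print(np.argmax(output_data))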

I've created an end2end example starting from training a Keras model to serve it on TensorFlow Lite here
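The linked example is not reproduced here, but under the TF 1.12 contrib API the Keras-to-TFLite conversion step could be sketched roughly like this; the HDF5 file name is a placeholder, and later TensorFlow releases expose the same converter as tf.lite.TFLiteConverter.

import tensorflow as tf

# Convert a saved Keras HDF5 model to a .tflite flatbuffer.
# "mnist_model.h5" is a hypothetical file name for the trained Keras model.
converter = tf.contrib.lite.TFLiteConverter.from_keras_model_file("mnist_model.h5")
tflite_model = converter.convert()

# Write the flatbuffer to the path used by the interpreter above.
with open("mnist/mnist_custom.tflite", "wb") as f:
    f.write(tflite_model)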

I've also answered this in this thread
