Convert Frozen graph for tfLite for Coral using tflite_convert


Question

I'm using MobileNetV2 and trying to get it working for Google Coral. Everything seems to work except the Coral Web Compiler, which throws a random error: Uncaught application failure. So I think the problem is the intermediary steps required. For example, I'm converting the model with tflite_convert:

tflite_convert \
  --graph_def_file=optimized_graph.pb \
  --output_format=TFLITE \
  --output_file=mobilenet_v2_new.tflite \
  --inference_type=FLOAT \
  --inference_input_type=FLOAT \
  --input_arrays=input \
  --output_arrays=final_result \
  --input_shapes=1,224,224,3

What am I doing wrong?

Answer

This is most likely because your model is not quantized. Edge TPU devices do not currently support float-based model inference. For the best results, you should enable quantization during training (described in the link). However, you can also apply quantization during TensorFlow Lite conversion.

With post-training quantization, you sacrifice some accuracy but can test something out more quickly. When you convert your graph to TensorFlow Lite format, set inference_type to QUANTIZED_UINT8. You'll also need to apply the quantization parameters (mean/range/std_dev) on the command line.

tflite_convert \
  --graph_def_file=optimized_graph.pb \
  --output_format=TFLITE \
  --output_file=mobilenet_v2_new.tflite \
  --inference_type=QUANTIZED_UINT8 \
  --input_arrays=input \
  --output_arrays=final_result \
  --input_shapes=1,224,224,3 \
  --mean_values=128 --std_dev_values=127 \
  --default_ranges_min=0 --default_ranges_max=255
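As a sanity check on mean_values=128 / std_dev_values=127: TensorFlow Lite maps a uint8 input back to a real value as real = (quantized - mean) / std_dev, so the uint8 range 0..255 covers roughly [-1, 1], which matches MobileNet's expected input scaling. A minimal sketch of that arithmetic (illustrative only, not part of the conversion):

```shell
# Dequantize a uint8 value using the mean/std_dev from the command above:
#   real = (quantized - mean) / std_dev
dequantize() { awk -v q="$1" 'BEGIN { printf "%.4f\n", (q - 128) / 127 }'; }

dequantize 128   # 0.0000  (the mean maps to zero)
dequantize 255   # 1.0000
dequantize 0     # -1.0079
```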

You can then pass the quantized .tflite file to the model compiler.
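If the web compiler keeps failing, Coral also ships an offline edgetpu_compiler CLI as an alternative (install it per Coral's documentation; the input file name below matches the commands above):

```shell
# Offline alternative to the web compiler: compile the quantized
# model for the Edge TPU with Coral's edgetpu_compiler tool.
edgetpu_compiler mobilenet_v2_new.tflite
# Writes mobilenet_v2_new_edgetpu.tflite plus a compilation log
# in the current directory.
```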

For more details on the Edge TPU model requirements, check out TensorFlow models on the Edge TPU.

