How to convert tflite_graph.pb to detect.tflite properly

Problem description

I am using the TensorFlow Object Detection API to train a custom model based on ssdlite_mobilenet_v2_coco_2018_05_09 from the TensorFlow model zoo.

I successfully trained the model and tested it using a script provided in this tutorial.

Here is the problem: I need a detect.tflite to use on my target machine (an embedded system). But when I actually make a tflite out of my model, it outputs almost nothing, and when it does, the detections are wrong. To make the .tflite file, I first ran export_tflite_ssd_graph.py and then ran toco on its output with this command, following the docs and some Google searches:

toco --graph_def_file=$OUTPUT_DIR/tflite_graph.pb --output_file=$OUTPUT_DIR/detect.tflite --input_shapes=1,300,300,3 --input_arrays=normalized_input_image_tensor --output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' --allow_custom_ops

Also, the code I'm using for the detection task from the .tflite is working properly; I tested it with the ssd_mobilenet_v3_small_coco detect.tflite file.
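For context, detection code against a detect.tflite file generally follows the shape below. This is a minimal sketch, not the exact script from the question: the model path, the 300x300 uint8 input, and the four TFLite_Detection_PostProcess outputs (boxes, classes, scores, count) follow the commands in this post, while the helper names are my own illustration.

```python
# Sketch: running an SSD detect.tflite with the TFLite interpreter.
# Assumes a quantized model with a (1, 300, 300, 3) uint8 input and the
# standard TFLite_Detection_PostProcess outputs.

def to_pixel_box(box, height, width):
    """TFLite_Detection_PostProcess emits boxes as normalized
    [ymin, xmin, ymax, xmax]; scale them to pixel coordinates."""
    ymin, xmin, ymax, xmax = box
    return (int(ymin * height), int(xmin * width),
            int(ymax * height), int(xmax * width))

def run_detection(model_path, image):
    """image: uint8 numpy array of shape (1, 300, 300, 3)."""
    import tensorflow as tf  # imported lazily so the helper above stands alone
    interpreter = tf.lite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    interpreter.set_tensor(inp["index"], image)
    interpreter.invoke()
    out = interpreter.get_output_details()
    # Post-process op output order: boxes, classes, scores, num_detections.
    boxes = interpreter.get_tensor(out[0]["index"])[0]
    classes = interpreter.get_tensor(out[1]["index"])[0]
    scores = interpreter.get_tensor(out[2]["index"])[0]
    count = int(interpreter.get_tensor(out[3]["index"])[0])
    return boxes[:count], classes[:count], scores[:count]
```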

Answer

The problem was with the toco command. Some documents I used were outdated and misled me. toco is deprecated, and I should have used the tflite_convert tool instead.

Here is the full command I used (run from your training directory):

tflite_convert --graph_def_file tflite_inference_graph/tflite_graph.pb --output_file=./detect.tflite --output_format=TFLITE --input_shapes=1,300,300,3 --input_arrays=normalized_input_image_tensor --output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' --inference_type=QUANTIZED_UINT8 --mean_values=128 --std_dev_values=127 --change_concat_input_ranges=false --allow_custom_ops

I did the training on the ssdlite_mobilenet_v2_coco_2018_05_09 model and added this at the end of my .config file:

graph_rewriter {
  quantization {
    delay: 400
    weight_bits: 8
    activation_bits: 8
  }
}

Also, I used this command to generate tflite_graph.pb in the tflite_inference_graph directory:

python export_tflite_ssd_graph.py --pipeline_config_path 2020-05-17_train_ssdlite_v2/ssd_mobilenet_v2_coco.config --trained_checkpoint_prefix 2020-05-17_train_ssdlite_v2/train/model.ckpt-1146 --output_directory 2020-05-17_train_ssdlite_v2/tflite_inference_graph --add_postprocessing_op=true

Note: I wanted to use a quantized model on my embedded system. That is the reason I added graph_rewriter in the config file and --inference_type=QUANTIZED_UINT8 in my tflite_convert command.
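The mean/std pair tells the converter how the uint8 input maps back to the real-valued range the float model was trained on, via real = (quantized - mean) / std; with 128/127, the 0..255 pixel range lands approximately on [-1, 1]. A small arithmetic check (my own illustration, not taken from the converter docs):

```python
# How --mean_values=128 --std_dev_values=127 map uint8 pixels to real
# values: real_value = (quantized_value - mean) / std
MEAN, STD = 128, 127

def dequantize(q, mean=MEAN, std=STD):
    """Real value the model sees for a uint8 input pixel q."""
    return (q - mean) / std

# dequantize(128) ->  0.0       (midpoint of the pixel range)
# dequantize(255) ->  1.0       (127 / 127)
# dequantize(0)   -> -1.0078... (-128 / 127), so 0..255 ~ [-1, 1]
```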
