Understanding export_tflite_ssd_graph.py


Problem description


Here is a tutorial about converting Mobilenet+SSD to tflite. At some point they use export_tflite_ssd_graph.py; as I understand it, this custom script is used to support the tf.image.non_max_suppression operation.

export CONFIG_FILE=gs://${YOUR_GCS_BUCKET}/data/pipeline.config
export CHECKPOINT_PATH=gs://${YOUR_GCS_BUCKET}/train/model.ckpt-2000
export OUTPUT_DIR=/tmp/tflite

python object_detection/export_tflite_ssd_graph.py \
--pipeline_config_path=$CONFIG_FILE \
--trained_checkpoint_prefix=$CHECKPOINT_PATH \
--output_directory=$OUTPUT_DIR \
--add_postprocessing_op=true

But I wonder what pipeline.config is and how to create it if I use a custom model (for example FaceBoxes) that uses the tf.image.non_max_suppression operation?

Solution

The main objective of export_tflite_ssd_graph.py is to export the training checkpoint files into a frozen graph that you can later use for transfer learning or for straight inference (because it contains the model structure as well as the trained weights). In fact, all the models listed in the model zoo are frozen graphs generated this way.
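As a concrete illustration, the exporter writes its frozen graph into --output_directory. The file names below (tflite_graph.pb in binary form and tflite_graph.pbtxt in text form) are the ones the script conventionally produces; treat them as an assumption and verify against your own run:

```python
import os

# Hypothetical helper: lists the files export_tflite_ssd_graph.py is
# expected to write into --output_directory. The names tflite_graph.pb
# and tflite_graph.pbtxt are assumptions based on the script's
# conventional output; check your own output directory.
def expected_outputs(output_dir):
    return [os.path.join(output_dir, name)
            for name in ("tflite_graph.pb", "tflite_graph.pbtxt")]

for path in expected_outputs("/tmp/tflite"):
    print(path)
```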

As for tf.image.non_max_suppression: export_tflite_ssd_graph.py is not used to 'support' it, but if --add_postprocessing_op is set to true, an extra custom op node is added to the frozen graph, and this custom node has functionality similar to the tf.image.non_max_suppression op. See the reference here.
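To get from that frozen graph to an actual .tflite file, the graph containing the custom postprocessing node is typically passed to tflite_convert with --allow_custom_ops, since the NMS-like node is not a builtin op. The tensor names used below (normalized_input_image_tensor, TFLite_Detection_PostProcess and its :1..:3 outputs) and the 300x300 input size are the conventional ones for SSD models; treat them as assumptions and check them against your own graph. A minimal sketch that assembles the command:

```python
# Sketch: build the tflite_convert invocation for a graph produced by
# export_tflite_ssd_graph.py. Tensor names and input size are
# assumptions typical for SSD models; verify against your graph.
def tflite_convert_cmd(graph_pb, out_file, input_size=300):
    # The postprocessing node has 4 outputs: boxes, classes, scores,
    # and the number of detections.
    outputs = ["TFLite_Detection_PostProcess"] + [
        "TFLite_Detection_PostProcess:%d" % i for i in range(1, 4)]
    return [
        "tflite_convert",
        "--graph_def_file=" + graph_pb,
        "--output_file=" + out_file,
        "--input_arrays=normalized_input_image_tensor",
        "--input_shapes=1,%d,%d,3" % (input_size, input_size),
        "--output_arrays=" + ",".join(outputs),
        "--allow_custom_ops",  # required: the NMS node is a custom op
    ]

print(" ".join(
    tflite_convert_cmd("/tmp/tflite/tflite_graph.pb",
                       "/tmp/tflite/detect.tflite")))
```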

Finally, the pipeline.config file corresponds directly to the config file you use for training (--pipeline_config_path); it is a copy of it, but often with a modified score threshold (see the description here about pipeline.config), so if you use a custom model you will have to create it before training. To create a custom config file, here is the official tutorial.
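For reference, the score-threshold tweak mentioned above lives in the post_processing block of pipeline.config. A fragment might look like the following; the field names follow the Object Detection API's pipeline configuration protos, and the values are purely illustrative:

```protobuf
post_processing {
  batch_non_max_suppression {
    score_threshold: 1e-8      # often lowered in the copy used for TFLite export
    iou_threshold: 0.6
    max_detections_per_class: 100
    max_total_detections: 100
  }
  score_converter: SIGMOID
}
```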
