What is the difference between frozen_inference_graph.pb and saved_model.pb?


Problem description

I have a trained model (Faster R-CNN) which I exported using export_inference_graph.py to use for inference. I'm trying to understand the difference between the created frozen_inference_graph.pb, saved_model.pb, and model.ckpt* files. I've also seen .pbtxt representations.

I tried reading through this but couldn't really find the answers: https://www.tensorflow.org/extend/tool_developers/

What does each of these files contain? Which ones can be converted into which others? What is the intended purpose of each?

Answer

frozen_inference_graph.pb is a frozen graph that cannot be trained anymore. It defines the GraphDef, is in fact a serialized graph, and can be loaded with this code:

import tensorflow as tf

def load_graph(frozen_graph_filename):
    # Read the serialized GraphDef from the frozen .pb file
    with tf.gfile.GFile(frozen_graph_filename, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    return graph_def

tf.import_graph_def(load_graph("frozen_inference_graph.pb"))

The saved model is a model generated by tf.saved_model.builder and has to be imported into a session. This file contains the full graph with all training weights (just like the frozen graph), but here it can still be trained upon; it is not serialized in the same way and needs to be loaded with the snippet below. The [] is a list of tag constants, which can be read by the saved_model_cli. This model is also often served to predict on, for example on Google ML Engine:

with tf.Session() as sess:
    # Pass the folder that contains saved_model.pb, not the file itself
    tf.saved_model.loader.load(sess, [], "path/to/saved_model_dir")

model.ckpt files are checkpoints generated during training. They are used to resume training, or as a backup when something goes wrong after a long training run. If you have a saved model and a frozen graph, you can ignore these.
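
As a minimal sketch of how such checkpoints are typically written and restored in TF1-style code (the variable and the checkpoints/ path here are just placeholders):

import tensorflow as tf

w = tf.Variable(tf.zeros([10]), name="w")  # stand-in for the model's variables
saver = tf.train.Saver()  # saves all variables in the graph by default

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... run training steps here ...
    saver.save(sess, "checkpoints/model.ckpt", global_step=1000)

# Later, to resume training or recover after a crash:
with tf.Session() as sess:
    saver.restore(sess, "checkpoints/model.ckpt-1000")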

.pbtxt files are basically the same as the previously discussed models, but human-readable instead of binary. These can be ignored as well.
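
For illustration, the binary and text forms can be converted into one another with tf.train.write_graph; this small sketch reuses the load_graph helper from above, and the output path is just an example:

graph_def = load_graph("frozen_inference_graph.pb")

# as_text=True writes a human-readable .pbtxt; as_text=False writes binary .pb
tf.train.write_graph(graph_def, "output_dir", "graph.pbtxt", as_text=True)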

To answer your conversion question: saved models can be transformed into a frozen graph and vice versa, although a saved_model extracted from a frozen graph is also not trainable; it is just stored in the saved model format. Checkpoints can be read in and loaded into a session, and from there you can build a saved model.
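
A minimal sketch of that last step, going from a checkpoint to a saved model; the checkpoint path and the tensor names are hypothetical and need to be adapted to your own graph:

import tensorflow as tf

with tf.Session(graph=tf.Graph()) as sess:
    # Rebuild the graph from the checkpoint's .meta file and load the weights
    saver = tf.train.import_meta_graph("checkpoints/model.ckpt-1000.meta")
    saver.restore(sess, "checkpoints/model.ckpt-1000")

    # Hypothetical tensor names; look up the real ones in your own graph
    input_tensor = sess.graph.get_tensor_by_name("image_tensor:0")
    output_tensor = sess.graph.get_tensor_by_name("detection_scores:0")

    tf.saved_model.simple_save(
        sess, "exported_saved_model",
        inputs={"input": input_tensor},
        outputs={"output": output_tensor})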

Hope I helped; any questions, ask away!

Addition:

How to freeze a graph, starting from a saved model folder structure. This post is old, so the method I used before might not work anymore; it will most likely still work with TensorFlow 1.x.

Start off by downloading the freeze_graph.py file from the tensorflow library, and then this code snippet should do the trick:

    import os
    import freeze_graph # the freeze_graph.py file you just downloaded
    from tensorflow.python.saved_model import tag_constants # might be unnecessary

    path = "path/to/saved_model_dir" # root folder of the saved model

    freeze_graph.freeze_graph(
        input_graph=None,
        input_saver=None,
        input_binary=None,
        input_checkpoint=None,
        output_node_names="dense_output/BiasAdd",
        restore_op_name=None,
        filename_tensor_name=None,
        output_graph=os.path.join(path, "frozen_graph.pb"),
        clear_devices=None,
        initializer_nodes=None,
        input_saved_model_dir=path,
        saved_model_tags=tag_constants.SERVING
    )

output_node_names = node name of the final operation; if you end on a dense layer, it will be dense_layer_name/BiasAdd

output_graph = output graph name

input_saved_model_dir = root folder of the saved model

saved_model_tags = saved model tags; in your case this can be None, I did however use a tag.
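
If you are unsure what to pass as output_node_names, one way to find candidates is to load the saved model and list the operations in its graph; this is only a sketch, and the folder path is a placeholder:

import tensorflow as tf
from tensorflow.python.saved_model import tag_constants

with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, [tag_constants.SERVING], "path/to/saved_model_dir")
    # The output node is usually among the last operations in the graph
    for op in sess.graph.get_operations()[-10:]:
        print(op.name, op.type)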

Another addition:

The code to load the models has already been provided above. To actually predict, you need a session; for a saved model this session is already created, for a frozen model it is not.

Saved model:

with tf.Session() as sess:
    # Pass the folder that contains saved_model.pb, not the file itself
    tf.saved_model.loader.load(sess, [], "path/to/saved_model_dir")
    prediction = sess.run(output_tensor, feed_dict={input_tensor: test_images})

Frozen model:

# Import the frozen GraphDef into the default graph first
tf.import_graph_def(load_graph("frozen_inference_graph.pb"))
with tf.Session() as sess:
    # Look up input_tensor/output_tensor by name, e.g. with
    # sess.graph.get_tensor_by_name("image_tensor:0")
    prediction = sess.run(output_tensor, feed_dict={input_tensor: test_images})

To further understand what your input and output layers are, you need to check them out with TensorBoard; simply add the following line of code into your session:

tf.summary.FileWriter("path/to/folder/to/save/logs", sess.graph)

This line will create a log file that you can open with the CLI/PowerShell. To see how to run TensorBoard, check out this previously posted question.
