Tensorflow frozen inference graph from .meta .info .data and combining frozen inference graphs


Question

I am new to TensorFlow and currently struggling with a few issues:

  1. How to get a frozen inference graph from .meta, .data and .info files without a pipeline config

I wanted to try out pre-trained traffic sign detection models in real time. The model consists of three files - .meta, .data and .info - but I can't find any information on how to convert them into a frozen inference graph without a pipeline config. Everything I find is either outdated or requires a pipeline config.

I also tried to train a model myself, but I think the problem is the .ppa files (GTSDB dataset), because with .png or .jpg everything worked just fine.

  2. How to combine two or more frozen inference graphs

I have successfully trained a model on my own dataset (it detects a specific object), but I want it to work alongside pre-trained models such as Faster R-CNN Inception or SSD MobileNet. I understand that I have to load both models, but I have no idea how to make them run at the same time - is that even possible?
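For the second question, one common pattern is to import each frozen GraphDef into a single graph under its own name scope, then run both output tensors in one session. Below is a minimal sketch with tiny stand-in graphs; all node names here are invented for illustration, and real frozen models would instead be parsed from their .pb files. The `tf.compat.v1` alias keeps the graph-mode API working on TF 2.x (on TF 1.x, plain `tf` works the same way):

```python
import numpy as np
import tensorflow as tf

tf1 = tf.compat.v1  # graph-mode API; on TF 1.x you can use `tf` directly

def tiny_frozen_graph(scale, in_name="image_input", out_name="output"):
    """Stand-in for a real frozen .pb: a graph that multiplies its input."""
    g = tf1.Graph()
    with g.as_default():
        x = tf1.placeholder(tf.float32, shape=[None], name=in_name)
        tf1.multiply(x, tf1.constant(scale), name=out_name)
    return g.as_graph_def()

# In practice these GraphDefs would be parsed from frozen_model.pb files
gd_a = tiny_frozen_graph(2.0)
gd_b = tiny_frozen_graph(3.0)

combined = tf1.Graph()
with combined.as_default():
    # Each model gets its own name scope, so node names cannot collide
    tf1.import_graph_def(gd_a, name="model_a")
    tf1.import_graph_def(gd_b, name="model_b")

with tf1.Session(graph=combined) as sess:
    image = np.array([1.0, 2.0], dtype=np.float32)
    # Feed the same input to both sub-graphs and fetch both outputs at once
    a, b = sess.run(["model_a/output:0", "model_b/output:0"],
                    feed_dict={"model_a/image_input:0": image,
                               "model_b/image_input:0": image})
    print(a, b)
```

Because the two models never share node names, they can be evaluated independently or together from the same session.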

Update

I'm halfway there on the first problem - I now have a frozen_model.pb. The problem was the output node names: I was confused and didn't know what to put there, so after hours of "investigating" I got this working code:

import os, argparse

import tensorflow as tf

# The original freeze_graph function
# from tensorflow.python.tools.freeze_graph import freeze_graph

dir = os.path.dirname(os.path.realpath(__file__))

def freeze_graph(model_dir):
    """Extract the sub-graph defined by the output nodes and convert
    all its variables into constants.
    Args:
        model_dir: the root folder containing the checkpoint state file
    """
    if not tf.gfile.Exists(model_dir):
        raise AssertionError(
            "Export directory doesn't exist. Please specify an export "
            "directory: %s" % model_dir)

    # if not output_node_names:
    #     print("You need to supply the name of a node to --output_node_names.")
    #     return -1

    # We retrieve our checkpoint fullpath
    checkpoint = tf.train.get_checkpoint_state(model_dir)
    input_checkpoint = checkpoint.model_checkpoint_path

    # We determine the full filename of our frozen graph
    absolute_model_dir = "/".join(input_checkpoint.split('/')[:-1])
    output_graph = absolute_model_dir + "/frozen_model.pb"
    # We clear devices to allow TensorFlow to control on which device it will load operations
    clear_devices = True

    # We start a session using a temporary fresh Graph
    with tf.Session(graph=tf.Graph()) as sess:

        # We import the meta graph in the current default Graph
        saver = tf.train.import_meta_graph(input_checkpoint + '.meta', clear_devices=clear_devices)

        # We restore the weights
        saver.restore(sess, input_checkpoint)

        # We use a built-in TF helper to export variables to constants
        output_graph_def = tf.graph_util.convert_variables_to_constants(
            sess, # The session is used to retrieve the weights
            tf.get_default_graph().as_graph_def(), # The graph_def is used to retrieve the nodes
            [n.name for n in tf.get_default_graph().as_graph_def().node] # The output node names are used to select the useful nodes
        )

        # Finally we serialize and dump the output graph to the filesystem
        with tf.gfile.GFile(output_graph, "wb") as f:
            f.write(output_graph_def.SerializeToString())
        print("%d ops in the final graph." % len(output_graph_def.node))

    return output_graph_def

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument("--model_dir", type=str, default="", help="Model folder to export")
    # parser.add_argument("--output_node_names", type=str, default="", help="The name of the output nodes, comma separated.")
    args = parser.parse_args()

    freeze_graph(args.model_dir)

I had to change a few lines - remove --output_node_names and change the output node names passed to convert_variables_to_constants to [n.name for n in tf.get_default_graph().as_graph_def().node]. Now I have a new problem - I can't convert the .pb to .pbtxt, and the error is:

ValueError: Input 0 of node prefix/Variable/Assign was passed float from prefix/Variable:0 incompatible with expected float_ref.

And once again, the information on this problem is outdated - everything I found is at least a year old. I'm starting to think my fix for frozen_graph is not correct, and that is the reason for the new error.
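For reference, the .pb-to-.pbtxt conversion itself is a one-liner via tf.train.write_graph once the GraphDef parses cleanly; the ValueError above comes from the freezing step, not from this conversion. A self-contained sketch using a tiny stand-in graph (on TF 2.x these calls live under `tf.compat.v1`; on TF 1.x plain `tf` works):

```python
import os
import tensorflow as tf

tf1 = tf.compat.v1  # graph-mode API; on TF 1.x you can use `tf` directly

# Build a stand-in frozen graph (in practice you'd already have frozen_model.pb)
g = tf1.Graph()
with g.as_default():
    tf1.constant([1.0, 2.0], name="weights")
tf1.train.write_graph(g.as_graph_def(), ".", "frozen_model.pb", as_text=False)

# Parse the binary .pb back into a GraphDef ...
graph_def = tf1.GraphDef()
with tf1.gfile.GFile("frozen_model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# ... and write it out again in the human-readable .pbtxt form
tf1.train.write_graph(graph_def, ".", "frozen_model.pbtxt", as_text=True)
print(os.path.exists("frozen_model.pbtxt"))
```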

I would really appreciate some advice on this matter.

Answer

If you write

[n.name for n in tf.get_default_graph().as_graph_def().node]

as the output node names in your convert_variables_to_constants call, you define every node in the graph as an output node, which of course will not work. (This is probably the cause of your ValueError.)

You need to find the name of the real output node. The best way is usually to look at the trained model in TensorBoard and analyze the graph there, or to print out every node of the graph. The last node printed is often your output node (ignore everything that has 'gradients' in the name, or 'Adam' if you used that as an optimizer).
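That filtering heuristic can be sketched in plain Python; the node names below are invented for illustration, roughly as a trained detector's graph might print them:

```python
# Hypothetical node names as printed from a trained graph
node_names = [
    "image_input",
    "conv1/weights",
    "gradients/conv1/weights_grad",
    "Adam/beta1_power",
    "detection/final_output",
]

# Drop optimizer bookkeeping ('gradients', 'Adam'); the last remaining
# node is often the real output node
candidates = [n for n in node_names
              if "gradients" not in n and "Adam" not in n]
print(candidates[-1])  # detection/final_output
```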

An easy way to do this (insert it after you restore the session):

gd = sess.graph.as_graph_def()
for node in gd.node:
    print(node.name)
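Once the real output node is identified, the fix to the freezing script is to pass just that name to convert_variables_to_constants instead of every node. A self-contained toy sketch follows; the graph and the node name "output" are made up, so substitute your model's actual output node. (On TF 2.x the calls live under `tf.compat.v1`; on TF 1.x plain `tf` works.)

```python
import tensorflow as tf

tf1 = tf.compat.v1  # graph-mode API; on TF 1.x you can use `tf` directly

g = tf1.Graph()
with g.as_default():
    x = tf1.placeholder(tf.float32, shape=[None], name="image_input")
    # use_resource=False forces the classic VariableV2/Assign nodes,
    # the kind involved in the float vs float_ref error above
    w = tf1.Variable([2.0], name="weights", use_resource=False)
    tf1.multiply(x, w, name="output")  # stand-in for the real output node

    with tf1.Session(graph=g) as sess:
        sess.run(tf1.global_variables_initializer())
        # Pass only the real output node name, not every node in the graph
        frozen = tf1.graph_util.convert_variables_to_constants(
            sess, g.as_graph_def(), ["output"])

ops = {n.op for n in frozen.node}
print(ops)  # no VariableV2/Assign nodes survive the freeze
```

Because the Variable and Assign ops are replaced by constants, importing the resulting GraphDef no longer trips over the float vs float_ref mismatch.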
