Cannot freeze TensorFlow models into a frozen (.pb) file


Problem Description


I am referring to (here) for freezing models into a .pb file. My model is a CNN for text classification; I am using the (Github) link to train the CNN and export it in the form of checkpoints. I have trained the model for 4 epochs and saved the resulting checkpoints.

I want to freeze this model into a .pb file. For that, I am using the following script:

import os, argparse

import tensorflow as tf

# The original freeze_graph function
# from tensorflow.python.tools.freeze_graph import freeze_graph 

dir = os.path.dirname(os.path.realpath(__file__))

def freeze_graph(model_dir, output_node_names):
    """Extract the sub graph defined by the output nodes and convert 
    all its variables into constant 
    Args:
        model_dir: the root folder containing the checkpoint state file
        output_node_names: a string, containing all the output node's names, 
                            comma separated
    """
    if not tf.gfile.Exists(model_dir):
        raise AssertionError(
            "Export directory doesn't exist. Please specify an export "
            "directory: %s" % model_dir)

    if not output_node_names:
        print("You need to supply the name of a node to --output_node_names.")
        return -1

    # We retrieve our checkpoint fullpath
    checkpoint = tf.train.get_checkpoint_state(model_dir)
    input_checkpoint = checkpoint.model_checkpoint_path

    # We build the full filename of our frozen graph
    absolute_model_dir = "/".join(input_checkpoint.split('/')[:-1])
    output_graph = absolute_model_dir + "/frozen_model.pb"

    # We clear devices to allow TensorFlow to control on which device it will load operations
    clear_devices = True

    # We start a session using a temporary fresh Graph
    with tf.Session(graph=tf.Graph()) as sess:
        # We import the meta graph in the current default Graph
        saver = tf.train.import_meta_graph(input_checkpoint + '.meta', clear_devices=clear_devices)

        # We restore the weights
        saver.restore(sess, input_checkpoint)

        # We use a built-in TF helper to export variables to constants
        output_graph_def = tf.graph_util.convert_variables_to_constants(
            sess, # The session is used to retrieve the weights
            tf.get_default_graph().as_graph_def(), # The graph_def is used to retrieve the nodes 
            output_node_names.split(",") # The output node names are used to select the useful nodes
        ) 

        # Finally we serialize and dump the output graph to the filesystem
        with tf.gfile.GFile(output_graph, "wb") as f:
            f.write(output_graph_def.SerializeToString())
        print("%d ops in the final graph." % len(output_graph_def.node))

    return output_graph_def

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument("--model_dir", type=str, default="", help="Model folder to export")
    parser.add_argument("--output_node_names", type=str, default="", help="The name of the output nodes, comma separated.")
    args = parser.parse_args()

    freeze_graph(args.model_dir, args.output_node_names)

I am running the above code with the following arguments:

python3 freeze_graph.py --model_dir /Users/path_to_checkpoints/ --output_node_names softmax

It gives the following error:

    assert d in name_to_node_map, "%s is not in graph" % d
AssertionError: softmax is not in graph

My model is a CNN for text classification. What should I pass as output_node_names to produce a valid .pb file?

Solution

Use the script below to print the tensors in the graph; the last tensor listed should be the output tensor. Original author: https://blog.metaflow.fr/tensorflow-how-to-freeze-a-model-and-serve-it-with-a-python-api-d4f3596b3adc

import argparse
import tensorflow as tf


def print_tensors(pb_file):
    print('Model File: {}\n'.format(pb_file))
    # read pb into graph_def
    with tf.gfile.GFile(pb_file, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    # import graph_def
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def)

    # print operations
    for op in graph.get_operations():
        print(op.name + '\t' + str(op.values()))


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument("--pb_file", type=str, required=True, help="Pb file")
    args = parser.parse_args()
    print_tensors(args.pb_file)
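
Since a .pb file cannot be produced until the output node name is known, the same kind of listing can also be obtained directly from the checkpoint's .meta file. Below is a minimal sketch using the same TF 1.x API as the scripts above; the script and its --model_dir flag are illustrative, not part of the original answer:

import argparse
import tensorflow as tf


def print_checkpoint_ops(model_dir):
    """List every operation in a checkpoint's meta graph (TF 1.x)."""
    # Locate the latest checkpoint in the folder
    checkpoint = tf.train.get_checkpoint_state(model_dir)
    input_checkpoint = checkpoint.model_checkpoint_path

    # Import the meta graph into a fresh graph and print its operations
    graph = tf.Graph()
    with graph.as_default():
        tf.train.import_meta_graph(input_checkpoint + '.meta', clear_devices=True)
    for op in graph.get_operations():
        print(op.name + '\t' + str(op.values()))


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument("--model_dir", type=str, required=True,
                        help="Folder containing the checkpoint files")
    args = parser.parse_args()
    print_checkpoint_ops(args.model_dir)

If the listing shows, for example, a node such as output/predictions (a hypothetical name; use whatever the listing actually prints), pass that name to --output_node_names instead of softmax when running freeze_graph.py.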
