Graph optimizations on a TensorFlow servable created using tf.Estimator


Problem Description

Context:

I have a simple classifier based on tf.estimator.DNNClassifier that takes text and outputs probabilities over a set of intent tags. I am able to train and export the model to a servable, and to serve that servable using tensorflow serving. The problem is that this servable is too big (around 1GB), so I wanted to try some tensorflow graph transforms to reduce the size of the files being served.

Problem:

I understand how to take the saved_model.pb and use freeze_graph.py to create a new .pb file that transforms can be called on. The result of these transforms (also a .pb file) is not a servable and cannot be used with tensorflow serving.

How does one go from:

saved model -> graph transforms -> back to a servable

There's documentation that suggests this is certainly possible, but it's not at all intuitive from the docs how to do it.

My attempt:

import tensorflow as tf

from tensorflow.saved_model import simple_save
from tensorflow.saved_model import signature_constants
from tensorflow.saved_model import tag_constants
from tensorflow.tools.graph_transforms import TransformGraph


with tf.Session(graph=tf.Graph()) as sess_meta:
    meta_graph_def = tf.saved_model.loader.load(
        sess_meta,
        [tag_constants.SERVING],
        "/model/path")

    graph_def = meta_graph_def.graph_def

    other_graph_def = TransformGraph(
        graph_def,
        ["Placeholder"],
        ["dnn/head/predictions/probabilities"],
        ["quantize_weights"])


    with tf.Graph().as_default():
        graph = tf.get_default_graph()
        tf.import_graph_def(other_graph_def)
        in_tensor = graph.get_tensor_by_name(
            "import/Placeholder:0")
        out_tensor = graph.get_tensor_by_name(
            "import/dnn/head/predictions/probabilities:0")

        inputs = {"inputs": in_tensor}
        outputs = {"outputs": out_tensor}
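        # NOTE: simple_save below is handed sess_meta, whose graph is the
        # originally loaded one, not this new default graph that
        # other_graph_def was imported into; that session/graph mismatch is
        # likely why this attempt fails.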

        simple_save(sess_meta, "./new", inputs, outputs)

My idea was to load the servable, extract the graph_def from the meta_graph_def, transform the graph_def, and then try to recreate the servable. This seems to be the incorrect approach.

Is there a way to successfully perform transforms (to reduce file size at inference) on a graph from an exported servable, and then recreate a servable with the transformed graph?

Thanks.

Update (2018-08-28):

Found contrib.meta_graph_transform() which looks promising.
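
For reference, a minimal sketch of how it might be invoked; the exact call is an assumption based on the TF 1.x contrib API, and the input/output names are taken from the attempt above:

from tensorflow.contrib import meta_graph_transform

# Assumed contrib API: rewrite the whole MetaGraphDef (signatures included)
# rather than just its GraphDef, using the meta_graph_def loaded earlier.
transformed_meta_graph_def = meta_graph_transform.meta_graph_transform(
    base_meta_graph_def=meta_graph_def,
    input_names=["Placeholder"],
    output_names=["dnn/head/predictions/probabilities"],
    transforms=["quantize_weights"],
    tags=[tag_constants.SERVING])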

Update (2018-12-03):

A related github issue I opened seems to have been resolved in a detailed blog post, which is linked at the end of the ticket.

Recommended Answer

We can optimize or reduce the size of a TensorFlow model using the methods mentioned below:

  1. Freezing: Convert the variables stored in the checkpoint files of the SavedModel into constants stored directly in the model graph. This reduces the overall size of the model.

  2. Pruning: Strip unused nodes in the prediction path and the outputs of the graph, merge duplicate nodes, and clean up other node ops like summary, identity, etc.

  3. Constant folding: Look for any sub-graphs within the model that always evaluate to constant expressions, and replace them with those constants.

  4. Folding batch norms: Fold the multiplications introduced in batch normalization into the weight multiplications of the previous layer.

  5. Quantization: Convert weights from floating point to a lower precision, such as 16 or 8 bits.

Code for freezing the graph is mentioned below:

import os

from tensorflow.python.tools import freeze_graph
from tensorflow.saved_model import tag_constants

# saved_model_dir is the directory of the SavedModel exported by the Estimator.
output_node_names = 'head/predictions/class_ids'
output_graph_filename = os.path.join(saved_model_dir, 'frozen_model.pb')

freeze_graph.freeze_graph(
    input_saved_model_dir=saved_model_dir,
    output_graph=output_graph_filename,
    saved_model_tags=tag_constants.SERVING,
    output_node_names=output_node_names,
    initializer_nodes='',
    input_graph=None, input_saver=False, input_binary=False,
    input_checkpoint=None, restore_op_name=None, filename_tensor_name=None,
    clear_devices=False, input_meta_graph=False)

Code for pruning and constant folding is mentioned below:

import os

import tensorflow as tf
from tensorflow.tools.graph_transforms import TransformGraph

def get_graph_def_from_file(graph_filepath):
  # Read a frozen GraphDef from a binary .pb file.
  with tf.Graph().as_default():
    with tf.gfile.GFile(graph_filepath, 'rb') as f:
      graph_def = tf.GraphDef()
      graph_def.ParseFromString(f.read())
      return graph_def

def optimize_graph(model_dir, graph_filename, transforms, output_node):
  input_names = []
  output_names = [output_node]
  if graph_filename is None:
    # Pull the GraphDef straight out of the SavedModel (see helper below).
    graph_def = get_graph_def_from_saved_model(model_dir)
  else:
    graph_def = get_graph_def_from_file(
        os.path.join(model_dir, graph_filename))
  optimized_graph_def = TransformGraph(
      graph_def, input_names, output_names, transforms)
  tf.train.write_graph(optimized_graph_def, logdir=model_dir, as_text=False,
                       name='optimized_model.pb')
  print('Graph optimized!')
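
optimize_graph also calls a get_graph_def_from_saved_model helper for the case where no frozen file is passed in; that helper isn't shown in the answer. A minimal sketch, assuming the standard TF 1.x SavedModel loader, would be:

def get_graph_def_from_saved_model(saved_model_dir):
  # Load the MetaGraphDef tagged for serving and return its GraphDef.
  with tf.Session(graph=tf.Graph()) as session:
    meta_graph_def = tf.saved_model.loader.load(
        session,
        tags=[tag_constants.SERVING],
        export_dir=saved_model_dir)
  return meta_graph_def.graph_def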

We call the code on our model by passing a list of the desired optimizations, like so:

transforms = [
    'remove_nodes(op=Identity)',
    'merge_duplicate_nodes',
    'strip_unused_nodes',
    'fold_constants(ignore_errors=true)',
    'fold_batch_norms',
]

optimize_graph(saved_model_dir, 'frozen_model.pb', transforms,
               'head/predictions/class_ids')

Code for quantization is mentioned below:

transforms = ['quantize_nodes', 'quantize_weights']

optimize_graph(saved_model_dir, None, transforms,
               'head/predictions/class_ids')

Once the optimizations are applied, we need to convert the optimized GraphDef back into a SavedModel. Code for that is shown below:

def convert_graph_def_to_saved_model(export_dir, graph_filepath):
  # Start from a clean export directory.
  if tf.gfile.Exists(export_dir):
    tf.gfile.DeleteRecursively(export_dir)
  graph_def = get_graph_def_from_file(graph_filepath)
  with tf.Session(graph=tf.Graph()) as session:
    tf.import_graph_def(graph_def, name='')
    # Expose every Placeholder as an input and class_ids as the output.
    tf.saved_model.simple_save(
        session,
        export_dir,
        inputs={
            node.name: session.graph.get_tensor_by_name(
                '{}:0'.format(node.name))
            for node in graph_def.node if node.op == 'Placeholder'},
        outputs={'class_ids': session.graph.get_tensor_by_name(
            'head/predictions/class_ids:0')}
    )
    print('Optimized graph converted to SavedModel!')

Sample usage is shown below:

optimized_export_dir = os.path.join(export_dir, 'optimized')
optimized_filepath = os.path.join(saved_model_dir, 'optimized_model.pb')
convert_graph_def_to_saved_model(optimized_export_dir, optimized_filepath)
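
As a quick sanity check (a sketch reusing the names from the example above), the optimized SavedModel can be loaded back and its serving signature printed before handing it to tensorflow serving:

with tf.Session(graph=tf.Graph()) as session:
  meta_graph_def = tf.saved_model.loader.load(
      session, [tag_constants.SERVING], optimized_export_dir)
  # simple_save registers its inputs/outputs under the default
  # 'serving_default' signature key.
  print(meta_graph_def.signature_def['serving_default'])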

For more information, refer to the below link, which was mentioned by @gobrewers14:

https://medium.com/google-cloud/optimizing-tensorflow-models-for-serving-959080e9ddbf

