How to serve a retrained Inception model using TensorFlow Serving?

Problem description

So I have trained an Inception model to recognize flowers according to this guide: https://www.tensorflow.org/versions/r0.8/how_tos/image_retraining/index.html

bazel build tensorflow/examples/image_retraining:retrain
bazel-bin/tensorflow/examples/image_retraining/retrain --image_dir ~/flower_photos
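
If retraining succeeds, the script writes the retrained graph and the label file to /tmp by default (these are the guide's default paths, which the commands below also use):

$ ls /tmp/output_*
/tmp/output_graph.pb  /tmp/output_labels.txt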

To classify an image via the command line, I can do this:

bazel build tensorflow/examples/label_image:label_image && \
bazel-bin/tensorflow/examples/label_image/label_image \
--graph=/tmp/output_graph.pb --labels=/tmp/output_labels.txt \
--output_layer=final_result \
--image=$HOME/flower_photos/daisy/21652746_cc379e0eea_m.jpg

But how do I serve this graph via TensorFlow Serving?

The guide on setting up TensorFlow Serving (https://tensorflow.github.io/serving/serving_basic) does not explain how to incorporate the graph (output_graph.pb). The server expects files in a different format:

$>ls /tmp/mnist_model/00000001
checkpoint export-00000-of-00001 export.meta

Answer

You have to export the model. I have a PR that exports the model during retraining. The gist of it is below:

import tensorflow as tf

def export_model(sess, architecture, saved_model_dir):
  # Pick the graph's input tensor by architecture: inception_v3 takes raw
  # JPEG bytes, while the mobilenet variants take a preprocessed image tensor.
  if architecture == 'inception_v3':
    input_tensor = 'DecodeJpeg/contents:0'
  elif architecture.startswith('mobilenet_'):
    input_tensor = 'input:0'
  else:
    raise ValueError('Unknown architecture', architecture)
  in_image = sess.graph.get_tensor_by_name(input_tensor)
  inputs = {'image': tf.saved_model.utils.build_tensor_info(in_image)}

  out_classes = sess.graph.get_tensor_by_name('final_result:0')
  outputs = {'prediction': tf.saved_model.utils.build_tensor_info(out_classes)}

  # Build a PREDICT signature mapping the named input/output tensors above.
  signature = tf.saved_model.signature_def_utils.build_signature_def(
    inputs=inputs,
    outputs=outputs,
    method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME
  )

  # Initialize any lookup tables when the SavedModel is loaded.
  legacy_init_op = tf.group(tf.tables_initializer(), name='legacy_init_op')

  # Save out the SavedModel.
  builder = tf.saved_model.builder.SavedModelBuilder(saved_model_dir)
  builder.add_meta_graph_and_variables(
    sess, [tf.saved_model.tag_constants.SERVING],
    signature_def_map={
      tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature
    },
    legacy_init_op=legacy_init_op)
  builder.save()
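
As a rough usage sketch (not part of the PR itself; the session variable and the export path here are assumptions for illustration), this would be called once retraining has finished:

# Assuming sess is the session that holds the retrained graph:
export_model(sess, 'inception_v3', '/tmp/saved_models/1')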

The above will create a variables directory and a saved_model.pb file. If you put them under a parent directory whose name is the version number (e.g. 1/), TensorFlow Serving can load that version. The resulting layout should look roughly like this (base path assumed):
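$ ls /path/to/saved_models/1
saved_model.pb  variables

With that in place, you can start the server via: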

tensorflow_model_server --port=9000 --model_name=inception --model_base_path=/path/to/saved_models/
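
To query the running server, a minimal gRPC client sketch could look like the following, assuming the tensorflow-serving-api and grpcio packages are installed; the host, port, and image path are placeholders:

import grpc
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc

# Connect to the model server started above.
channel = grpc.insecure_channel('localhost:9000')
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

# Read raw JPEG bytes; the inception_v3 input tensor expects a scalar string.
with open('/path/to/flower.jpg', 'rb') as f:
  image_bytes = f.read()

request = predict_pb2.PredictRequest()
request.model_spec.name = 'inception'  # matches --model_name above
request.model_spec.signature_name = 'serving_default'
# 'image' is the input key defined in export_model above.
request.inputs['image'].CopyFrom(tf.make_tensor_proto(image_bytes, shape=[]))

result = stub.Predict(request, 10.0)  # 10-second timeout
print(result.outputs['prediction'])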
