Serving Keras Models With Tensorflow Serving

Question

Right now we are able to serve models successfully using Tensorflow Serving. We have used the following method to export the model and host it with Tensorflow Serving.

For exporting
-------------

import tensorflow as tf
import keras.backend as K
from tensorflow.contrib.session_bundle import exporter

K.set_learning_phase(0)
export_path = ... # where to save the exported graph
export_version = ... # version number (integer)

# 'model' is the trained Keras model; grab the session Keras is using
sess = K.get_session()

saver = tf.train.Saver(sharded=True)
model_exporter = exporter.Exporter(saver)
signature = exporter.classification_signature(input_tensor=model.input,
                                              scores_tensor=model.output)
model_exporter.init(sess.graph.as_graph_def(),
                    default_graph_signature=signature)
model_exporter.export(export_path, tf.constant(export_version), sess)

For hosting
-----------

bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9000 --model_name=default --model_base_path=/serving/models

However, our issue is that we want Keras to be integrated with Tensorflow Serving: we would like to serve models through Tensorflow Serving using Keras. The reason is that in our architecture we train models in a couple of different ways, such as deeplearning4j + Keras and Tensorflow + Keras, but for serving we would like to use only one servable engine, namely Tensorflow Serving. We don't see any straightforward way to achieve that. Any comments?

Thanks.

Answer

Very recently TensorFlow changed the way it exports models, so the majority of the tutorials available on the web are outdated. I honestly don't know how deeplearning4j works, but I use Keras quite often. I managed to create a simple example that I already posted in this issue on the TensorFlow Serving GitHub.

I'm not sure whether this will help you, but I'd like to share how I did it; maybe it will give you some insights. My first trial, prior to creating my custom model, was to use a trained model available in Keras, such as VGG19. I did this as follows.

Model creation

import keras.backend as K
from keras.applications import VGG19
from keras.models import Model

# very important to do this as a first thing
K.set_learning_phase(0)

model = VGG19(include_top=True, weights='imagenet')

# The creation of a new model might be optional depending on the goal
config = model.get_config()
weights = model.get_weights()
new_model = Model.from_config(config)
new_model.set_weights(weights)
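
The same export recipe should also work for a custom model trained elsewhere. A minimal sketch, assuming the model was saved with Keras' model.save() to a hypothetical my_model.h5:

import keras.backend as K
from keras.models import load_model

# still important to set this before the model is loaded or created
K.set_learning_phase(0)

# 'my_model.h5' is a hypothetical path to a model saved with model.save()
new_model = load_model('my_model.h5')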

Exporting the model

from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import tag_constants
from tensorflow.python.saved_model.signature_def_utils_impl import predict_signature_def

export_path = 'folder_to_export'
builder = saved_model_builder.SavedModelBuilder(export_path)

signature = predict_signature_def(inputs={'images': new_model.input},
                                  outputs={'scores': new_model.output})

with K.get_session() as sess:
    builder.add_meta_graph_and_variables(sess=sess,
                                         tags=[tag_constants.SERVING],
                                         signature_def_map={'predict': signature})
    builder.save()
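
As a quick sanity check, TensorFlow's saved_model_cli tool (assuming your TensorFlow version is recent enough to ship it) can print the tags and signatures of the exported model:

saved_model_cli show --dir folder_to_export --all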

A few remarks

  • It can vary depending on the Keras, TensorFlow, and TensorFlow Serving versions; I used the latest ones.
  • Beware of the signature names, since they also have to be used in the client (see the sketch after this list).
  • When creating the client, all preprocessing steps the model needs (preprocess_input(), for example) must be executed there. I didn't try to add such steps to the graph itself, as the Inception client example does.
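
For reference, here is a minimal gRPC client sketch under the assumptions above: the server runs on localhost:9000, the model was started with --model_name=default, the signature key is 'predict' as in the export above, and the input is a 224x224 RGB image for VGG19. The host and the image path are illustrative only:

import numpy as np
from grpc.beta import implementations
from keras.applications.vgg19 import preprocess_input
from keras.preprocessing import image
from tensorflow.contrib.util import make_tensor_proto
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2

# preprocessing happens in the client, as noted above
img = image.img_to_array(image.load_img('cat.jpg', target_size=(224, 224)))
img = preprocess_input(np.expand_dims(img, axis=0))

channel = implementations.insecure_channel('localhost', 9000)
stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = 'default'            # --model_name at server startup
request.model_spec.signature_name = 'predict'  # key in signature_def_map
request.inputs['images'].CopyFrom(make_tensor_proto(img, shape=list(img.shape)))

result = stub.Predict(request, 10.0)  # 10-second timeout
print(result.outputs['scores'])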

With respect to serving different models within the same server, I think that something similar to creating a model_config_file might help you. To do so, you can create a config file similar to this:

model_config_list: {
  config: {
    name: "my_model_1",
    base_path: "/tmp/model_1",
    model_platform: "tensorflow"
  },
  config: {
     name: "my_model_2",
     base_path: "/tmp/model_2",
     model_platform: "tensorflow"
  }
}

Finally, you can run the server like this:

bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9000 --model_config_file=model_config.conf
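
With this config file in place, a client chooses between the models by setting request.model_spec.name to 'my_model_1' or 'my_model_2' in the PredictRequest, as in the client sketch above.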
