Serving a Keras model with Tensorflow Serving

Question

The Tensorflow 1.12 release notes state: "Keras models can now be directly exported to the SavedModel format (tf.contrib.saved_model.save_keras_model()) and used with Tensorflow Serving". So I gave it a shot -

I have exported a simple model with this op using a single line. However, Tensorflow Serving doesn't recognize the model. I guess the problem is with the docker call, and maybe a missing 'signature_defs' in the model definition. I would be thankful for info regarding the missing steps.

1. Train the model and export it for TF Serving:

Here is the code, based on Jason Brownlee's first NN (chosen for its simplicity)

(the training data, as a short CSV file, is here):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.contrib.saved_model import save_keras_model
import numpy

# fix random seed for reproducibility
numpy.random.seed(7)

# load pima indians dataset
dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]

# create model
model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# Fit the model
model.fit(X, Y, epochs=150, batch_size=10)

# evaluate the model
scores = model.evaluate(X, Y)
print("\n%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))

# calculate predictions
predictions = model.predict(X)
# round predictions
rounded = [round(x[0]) for x in predictions]
print(rounded)

# Save the model for serving
path = '/TensorFlow_Models/Keras_serving/saved_model' # full path of where to save the model
save_keras_model(model, path)
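
If the export succeeds, save_keras_model should create a timestamped version subdirectory under path; this versioned layout is what TF Serving scans for. A quick sanity-check sketch, assuming that layout:

import os

# Expect one timestamped version folder under 'path',
# holding saved_model.pb and a variables/ directory.
versions = os.listdir(path)
print(versions)  # e.g. ['1543392000']
print(os.listdir(os.path.join(path, versions[0])))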

2. Set up a Tensorflow server:

The server can be set up via docker or from its own build. TF recommends docker (TF ref). Following this, and based on the TF blog and the TF Serving tutorial:

  1. Install Docker (from here)
  2. Get the latest TF Serving version:

docker pull tensorflow/serving

  3. Activate TF Serving with this model (TF ref):

docker run -p 8501:8501 --name NNN --mount type=bind,source=SSS,target=TTT -e MODEL_NAME=MMM -t tensorflow/serving &

I would be happy if someone could confirm the following:

  • NNN - the docker container name, used, for instance, to kill the process. It can be set arbitrarily (e.g. to mydocker).
  • MMM - the name of the model, which also seems to be set arbitrarily.
  • SSS - the folder where the model is located, as a full path.
  • TTT - what should this be set to?

3. Client

The server can take requests either over gRPC or over the RESTful API. Assuming we go with the RESTful API, the model can be accessed using curl (here is a TF example). But how do we set the input/output of the model? Are SignatureDefs needed (ref)?

All in all, while "Keras models can now be directly exported to the SavedModel format (tf.contrib.saved_model.save_keras_model()) and used with Tensorflow Serving", as stated in the TF 1.12 release notes, there is still a way to go in order to actually serve the model. I would be happy for ideas on completing this.

Answer

You are correct about both NNN and SSS. NNN can be arbitrary; if it is not specified, docker will give it a random name.

For MMM, it is better to give it a meaningful name.

TTT relates to the docker run command in general; you can refer to the docker docs. It is where you map (bind) SSS inside the container, and it is usually set to /models/$MODEL_NAME (so with the export path above, for example: source=/TensorFlow_Models/Keras_serving/saved_model, target=/models/MMM). If you get into the container and open /models/$MODEL_NAME, you will see the version folder(s) just as in SSS.

The input of the RESTful API is the same as the input to the model in the TensorFlow code, which in your example is X = dataset[:,0:8].
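
For instance, a minimal Python client over the RESTful API could look like the sketch below. It assumes the container from step 2 is running on port 8501 with MODEL_NAME=MMM; the sample row is the first line of the training CSV (8 features).

import json
import requests

# One sample with 8 features, matching X = dataset[:,0:8]
sample = [6, 148, 72, 35, 0, 33.6, 0.627, 50]

# TF Serving's REST predict endpoint: /v1/models/<MODEL_NAME>:predict
resp = requests.post('http://localhost:8501/v1/models/MMM:predict',
                     data=json.dumps({"instances": [sample]}))
print(resp.json())  # e.g. {"predictions": [[0.7]]}, the sigmoid output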

If you didn't define a signature when saving the model (as in the example in the doc), then defining one isn't necessary for serving.
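
If you want to double-check which signatures the exported model actually exposes, TF Serving's REST metadata endpoint lists the loaded SignatureDefs; a short sketch, again assuming the container from step 2:

import requests

# GET the model metadata; the response lists the default serving
# signature's input and output tensors.
r = requests.get('http://localhost:8501/v1/models/MMM/metadata')
print(r.json())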
