Creating a serving graph separately from training in tensorflow for Google CloudML deployment?
Question
I am trying to deploy a tf.keras image classification model to Google CloudML Engine. Do I have to include code to create a serving graph separately from training to get it to serve my model in a web app? I already have my model in SavedModel format (saved_model.pb and variables files), so I'm not sure whether I need this extra step to get it to work.
For example, this is code directly from the GCP TensorFlow Deploying models documentation:
def json_serving_input_fn():
  """Build the serving inputs."""
  inputs = {}
  for feat in INPUT_COLUMNS:
    inputs[feat.name] = tf.placeholder(shape=[None], dtype=feat.dtype)
  return tf.estimator.export.ServingInputReceiver(inputs, inputs)
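To make it concrete what that serving input function implies for callers: each feature becomes a batched placeholder of shape [None], and the CloudML Engine online prediction service feeds it from a JSON body with one object per instance, keyed by feature name. A minimal sketch of that request shape, using made-up feature names ('age', 'income') since INPUT_COLUMNS isn't shown here:

```python
import json

# Hypothetical instances matching a model with two input features.
# The feature names 'age' and 'income' are assumptions for illustration,
# not taken from the documentation snippet above.
instances = [
    {"age": 34, "income": 52000.0},
    {"age": 51, "income": 61000.0},
]
request_body = json.dumps({"instances": instances})

# Each placeholder has shape [None], so server-side the values of one
# feature are gathered across all instances into a single batch.
parsed = json.loads(request_body)
ages = [inst["age"] for inst in parsed["instances"]]
print(ages)  # [34, 51]
```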
Answer
You are probably training your model with actual image files, while it is best to send images as encoded byte strings to a model hosted on CloudML. Therefore, as you mention, you'll need to specify a ServingInputReceiver function when exporting the model. Some boilerplate code to do this for a Keras model:
# Convert the Keras model to a TF Estimator
tf_files_path = './tf'
estimator = tf.keras.estimator.model_to_estimator(keras_model=model,
                                                  model_dir=tf_files_path)

# Your serving input function will accept a string
# and decode it into an image
def serving_input_receiver_fn():
    def prepare_image(image_str_tensor):
        image = tf.image.decode_png(image_str_tensor, channels=3)
        # Cast to float32 so the output matches the dtype declared in
        # tf.map_fn below; apply additional processing if necessary
        return tf.image.convert_image_dtype(image, dtype=tf.float32)

    # Ensure the model is batchable
    # https://stackoverflow.com/questions/52303403/
    input_ph = tf.placeholder(tf.string, shape=[None])
    images_tensor = tf.map_fn(
        prepare_image, input_ph, back_prop=False, dtype=tf.float32)
    return tf.estimator.export.ServingInputReceiver(
        {model.input_names[0]: images_tensor},
        {'image_bytes': input_ph})

# Export the estimator - deploy it to CloudML afterwards
export_path = './export'
estimator.export_savedmodel(
    export_path,
    serving_input_receiver_fn=serving_input_receiver_fn)
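Once deployed, a prediction request has to deliver the raw image bytes to the 'image_bytes' input defined above. In the CloudML Engine JSON format, binary data is base64-encoded and wrapped in an object under the special "b64" key. A minimal client-side sketch, with placeholder bytes standing in for a real PNG file:

```python
import base64
import json

# 'fake_png' stands in for the contents of a real PNG file,
# e.g. open('image.png', 'rb').read()
fake_png = b"\x89PNG\r\n\x1a\n...image data..."

# The 'image_bytes' key matches the receiver tensor name exported above;
# the nested {"b64": ...} wrapper is how the CloudML JSON format marks
# base64-encoded binary payloads.
encoded = base64.b64encode(fake_png).decode("utf-8")
request_body = json.dumps(
    {"instances": [{"image_bytes": {"b64": encoded}}]})

# Server-side, the payload round-trips back to the original bytes,
# which the serving input function then decodes as a PNG.
instance = json.loads(request_body)["instances"][0]
decoded = base64.b64decode(instance["image_bytes"]["b64"])
print(decoded == fake_png)  # True
```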
You can refer to this very helpful answer for a more complete reference and other options for exporting your model.
If this approach throws a ValueError: Couldn't find trained model at ./tf error, you can try the workaround that I documented in this answer.