How to use multiple inputs for custom Tensorflow model hosted by AWS Sagemaker

Question

I have a trained Tensorflow model that uses two inputs to make predictions. I have successfully set up and deployed the model on AWS Sagemaker.

from sagemaker.tensorflow.model import TensorFlowModel

sagemaker_model = TensorFlowModel(
    model_data='s3://' + sagemaker_session.default_bucket() + '/R2-model/R2-model.tar.gz',
    role=role,
    framework_version='1.12',
    py_version='py2',
    entry_point='train.py')

predictor = sagemaker_model.deploy(initial_instance_count=1,
                                   instance_type='ml.m4.xlarge')

predictor.predict([data_scaled_1.to_csv(),
                   data_scaled_2.to_csv()])

I always receive an error. I could use an AWS Lambda function, but I don't see any documentation on specifying multiple inputs to a deployed model. Does anyone know how to do this?

Answer

First, you need to build a correct serving signature when exporting the model, and you need to deploy it with TensorFlow Serving.

At inference time, you also need to send a properly formatted request: the SageMaker serving container takes the request body and passes it on to TensorFlow Serving, so the input needs to match the TF Serving signature inputs.
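
For reference, the TF Serving REST Predict API accepts two JSON request shapes: a columnar "inputs" form (used later in this answer) and a row-oriented "instances" form. The sketch below is only illustrative; the input names "input_type_1"/"input_type_2" come from the signature defined further down, and the numeric values are made-up placeholders.

# Columnar form: one list per named input, first dimension is the batch.
columnar_request = {
    "inputs": {
        "input_type_1": [[0.1, 0.2], [0.3, 0.4]],  # shape (batch_size, ...)
        "input_type_2": [[1.0], [2.0]],
    }
}

# Row form: one dict per example, keyed by the same signature input names.
row_request = {
    "instances": [
        {"input_type_1": [0.1, 0.2], "input_type_2": [1.0]},
        {"input_type_1": [0.3, 0.4], "input_type_2": [2.0]},
    ]
}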

Here is a simple example of deploying a Keras multi-input multi-output model with TensorFlow Serving using SageMaker, and how to make inference requests afterwards:

import tarfile

from tensorflow.python.saved_model import builder
from tensorflow.python.saved_model.signature_def_utils import predict_signature_def
from tensorflow.python.saved_model import tag_constants
from keras import backend as K
import sagemaker
#nano ~/.aws/config
#get_ipython().system('nano ~/.aws/config')
from sagemaker import get_execution_role
from sagemaker.tensorflow.serving import Model


def serialize_to_tf_and_dump(model, export_path):
    """
    serialize a Keras model to TF model
    :param model: compiled Keras model
    :param export_path: str, The export path contains the name and the version of the model
    :return:
    """
    # Build the Protocol Buffer SavedModel at 'export_path'
    save_model_builder = builder.SavedModelBuilder(export_path)
    # Create prediction signature to be used by TensorFlow Serving Predict API
    signature = predict_signature_def(
        inputs={
            "input_type_1": model.input[0],
            "input_type_2": model.input[1],
        },
        outputs={
            "decision_output_1": model.output[0],
            "decision_output_2": model.output[1],
            "decision_output_3": model.output[2]
        }
    )
    with K.get_session() as sess:
        # Save the meta graph and variables
        save_model_builder.add_meta_graph_and_variables(
            sess=sess, tags=[tag_constants.SERVING], signature_def_map={"serving_default": signature})
        save_model_builder.save()

# instantiate model
model = .... 

# convert to tf model
serialize_to_tf_and_dump(model, 'model_folder/1')

# tar tf model
with tarfile.open('model.tar.gz', mode='w:gz') as archive:
    archive.add('model_folder', recursive=True)

# upload it to s3
sagemaker_session = sagemaker.Session()
inputs = sagemaker_session.upload_data(path='model.tar.gz')

# convert to sagemaker model
role = get_execution_role()
sagemaker_model = Model(model_data = inputs,
    name='DummyModel',
    role = role,
    framework_version = '1.12')

predictor = sagemaker_model.deploy(initial_instance_count=1,
    instance_type='ml.t2.medium', endpoint_name='MultiInputMultiOutputModel')
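
Before deploying, it can help to confirm that the exported SavedModel really exposes the input and output names you expect. A minimal sketch using the TF 1.x SavedModel loader, assuming the 'model_folder/1' export path used above:

import tensorflow as tf
from tensorflow.python.saved_model import tag_constants

# Load the SavedModel exported above and print its serving signature,
# which lists the exact input/output names TF Serving will expect.
with tf.Session(graph=tf.Graph()) as sess:
    meta_graph_def = tf.saved_model.loader.load(
        sess, [tag_constants.SERVING], 'model_folder/1')
    print(meta_graph_def.signature_def['serving_default'])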

At inference time, here is how to request predictions:

import json
import boto3

x_inputs = ...  # list with 2 np arrays of size (batch_size, ...)
data = {
    'inputs': {
        "input_type_1": x_inputs[0].tolist(),
        "input_type_2": x_inputs[1].tolist()
    }
}

endpoint_name = 'MultiInputMultiOutputModel'
client = boto3.client('runtime.sagemaker')
response = client.invoke_endpoint(EndpointName=endpoint_name, Body=json.dumps(data), ContentType='application/json')
predictions = json.loads(response['Body'].read())
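
Alternatively, a sketch using the predictor object returned by sagemaker_model.deploy() above (not part of the original answer): the TensorFlow Serving predictor in the SageMaker Python SDK serializes Python dicts to JSON by default, so the same payload can be sent without boto3.

# Send the same columnar payload through the predictor returned by deploy().
predictions = predictor.predict({
    'inputs': {
        "input_type_1": x_inputs[0].tolist(),
        "input_type_2": x_inputs[1].tolist()
    }
})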
