How to deploy a Pre-Trained model using AWS SageMaker Notebook Instance?


Question

I have a pre-trained model which I load into an AWS SageMaker Notebook Instance from an S3 bucket, and when I provide a test image from the S3 bucket for prediction it gives me the accurate results I need. I want to deploy it so that I have an endpoint that I can further integrate with AWS Lambda and AWS API Gateway, so that I can use the model in a real-time application. Any idea how I can deploy the model from the AWS SageMaker Notebook Instance and get its endpoint? The code inside the .ipynb file is given below for reference.

import boto3
import pandas as pd
import sagemaker
#from sagemaker import get_execution_role
from skimage.io import imread
from skimage.transform import resize
import numpy as np
from keras.models import load_model
import os
import time
import json
#role = get_execution_role()
role = sagemaker.get_execution_role()

bucketname = 'bucket' # bucket where the model is hosted
filename = 'test_model.h5' # name of the model
s3 = boto3.resource('s3')
# download_file writes the object to the given local path and returns None
s3.Bucket(bucketname).download_file(filename, 'test_model_new.h5')

model = load_model('test_model_new.h5')

bucketname = 'bucket' # name of the bucket where the test image is hosted
filename = 'folder/image.png' # key (prefix) of the test image
s3 = boto3.resource('s3')
s3.Bucket(bucketname).download_file(filename, 'image.png')
file_name = 'image.png'

test = np.array([resize(imread(file_name), (137, 310, 3))])  # load, resize and add a batch dimension

test_predict = model.predict(test)

print((test_predict > 0.5).astype(int))  # np.int is removed in recent NumPy versions; plain int behaves the same here

Answer

Here is the solution that worked for me. Simply follow the steps below.

1 - Load your model in the SageMaker Jupyter environment with the help of:

from keras.models import load_model

model = load_model(<Your Model name goes here>)  # In my case it's model.h5

2 - Now that the model is loaded, convert it into the protobuf (SavedModel) format required by AWS with the help of:

def convert_h5_to_aws(loaded_model):

    from tensorflow.python.saved_model import builder
    from tensorflow.python.saved_model.signature_def_utils import predict_signature_def
    from tensorflow.python.saved_model import tag_constants

    model_version = '1'
    export_dir = 'export/Servo/' + model_version
    # Build the Protocol Buffer SavedModel at 'export_dir'
    builder = builder.SavedModelBuilder(export_dir)
    # Create prediction signature to be used by TensorFlow Serving Predict API
    signature = predict_signature_def(
        inputs={"inputs": loaded_model.input}, outputs={"score": loaded_model.output})

    from keras import backend as K

    with K.get_session() as sess:
        # Save the meta graph and variables
        builder.add_meta_graph_and_variables(
            sess=sess, tags=[tag_constants.SERVING], signature_def_map={"serving_default": signature})
        builder.save()

convert_h5_to_aws(model)

# Package the exported SavedModel as the tarball that SageMaker expects
import tarfile
with tarfile.open('model.tar.gz', mode='w:gz') as archive:
    archive.add('export', recursive=True)

# Upload the tarball to the default SageMaker S3 bucket
import sagemaker

sagemaker_session = sagemaker.Session()
inputs = sagemaker_session.upload_data(path='model.tar.gz', key_prefix='model')

3 - Now you can deploy the model with the help of:

!touch train.py  # the TensorFlowModel constructor needs an entry_point script; an empty file is enough here
from sagemaker.tensorflow.model import TensorFlowModel
sagemaker_model = TensorFlowModel(model_data = 's3://' + sagemaker_session.default_bucket() + '/model/model.tar.gz',
                                  role = role,
                                  framework_version = '1.15.2',
                                  entry_point = 'train.py')

%%time
predictor = sagemaker_model.deploy(initial_instance_count=1,
                                   instance_type='ml.m4.xlarge')
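
Once deploy() returns, you can sanity-check the endpoint directly from the notebook with the returned predictor object. This is only a minimal sketch: the exact payload format the container accepts depends on the serving stack, so the test array from the question may need reshaping or conversion to a plain list.

# Quick check from the notebook (assumes the 'test' array from the question is still in memory)
result = predictor.predict(test)
print(result)

# Endpoint name to reference from Lambda or other AWS services
# (the attribute is called endpoint_name in newer versions of the SageMaker SDK)
print(predictor.endpoint)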

This will generate an endpoint which can be seen in the Inference section of Amazon SageMaker, and with the help of that endpoint you can now make predictions from the Jupyter notebook as well as from web and mobile applications. This YouTube tutorial by Liam and the AWS blog by Priya helped me a lot.
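
For the Lambda / API Gateway integration asked about in the question, the endpoint can be invoked from any boto3 client through the SageMaker runtime API. Below is a minimal sketch of a hypothetical Lambda handler: the endpoint name test-model-endpoint is made up (copy the real one from the SageMaker console), and the {"instances": [...]} JSON body assumes a TensorFlow Serving style container, so adjust the payload to whatever your endpoint actually accepts.

import json
import boto3

runtime = boto3.client('sagemaker-runtime')

ENDPOINT_NAME = 'test-model-endpoint'  # hypothetical; use the name shown in the SageMaker console

def lambda_handler(event, context):
    # The caller (e.g. API Gateway) is assumed to send the preprocessed image
    # as a nested list under event['instances'].
    payload = json.dumps({"instances": event["instances"]})

    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType='application/json',
        Body=payload,
    )
    result = json.loads(response['Body'].read().decode('utf-8'))
    return {"statusCode": 200, "body": json.dumps(result)}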

