How to run a pre-trained model in AWS SageMaker?


Problem description

I have a pre-trained model.pkl file along with all the other files related to the ML model, and I want to deploy it on AWS SageMaker. But how do I deploy it without training? In SageMaker, the fit() method runs the training job and pushes model.tar.gz to an S3 location, and when the deploy() method is used it deploys the model from that same S3 location. We don't create that location in S3 manually; it is created by SageMaker and named using a timestamp. How can I put my own model.tar.gz file in an S3 location and call deploy() using that location?

Answer

You just need:

  1. to have your model in an arbitrary S3 location, packaged as a model.tar.gz archive
  2. to have an inference script in a SageMaker-compatible Docker image that is able to read your model.pkl, serve it and handle inference requests
  3. to create an endpoint associating your artifact with your inference code

When you ask for an endpoint deployment, SageMaker will take care of downloading your model.tar.gz and uncompressing it to the appropriate location in the Docker image of the server, which is /opt/ml/model.
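As a minimal sketch (not part of the original answer) of the first step, here is one way to package a model.pkl into model.tar.gz and upload it to an arbitrary S3 location; the bucket name, key prefix and file names are placeholders for illustration only.

import tarfile

import boto3

# Bundle the pickled model (and any other files it needs) into model.tar.gz.
with tarfile.open("model.tar.gz", "w:gz") as archive:
    archive.add("model.pkl", arcname="model.pkl")

# Upload the archive to an S3 location of your choosing (placeholder bucket/key).
s3 = boto3.client("s3")
s3.upload_file("model.tar.gz", "my-bucket", "pretrained/model.tar.gz")
# SageMaker will later download s3://my-bucket/pretrained/model.tar.gz and
# unpack it into /opt/ml/model inside the serving container.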

Depending on the framework you use, you may be able to use a pre-existing Docker image (available for Scikit-learn, TensorFlow, PyTorch and MXNet), or you may need to create your own.

  • Regarding custom image creation, see here for the specification and here for two examples of custom containers for R and sklearn (the sklearn one is less relevant now that there is a pre-built Docker image along with a SageMaker sklearn SDK).
  • Regarding leveraging the existing containers for Sklearn, PyTorch, MXNet and TF, check this example: Random Forest in SageMaker Sklearn container. In that example, nothing prevents you from deploying a model that was trained elsewhere, as in the sketch below. Note that with a train/deploy environment mismatch you may run into errors due to software version differences, though.
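A hedged sketch of that pre-built-container route, assuming a scikit-learn model: the S3 path, IAM role and entry_point script below are placeholders, and inference.py is assumed to implement model_fn (and optionally input_fn/predict_fn/output_fn) to load and serve model.pkl.

from sagemaker.sklearn.model import SKLearnModel

# Point the pre-built sklearn serving container at your own artifact in S3.
sklearn_model = SKLearnModel(
    model_data="s3://my-bucket/pretrained/model.tar.gz",     # your own archive
    role="arn:aws:iam::123456789012:role/MySageMakerRole",   # placeholder role
    entry_point="inference.py",       # script that loads and serves model.pkl
    framework_version="0.23-1",       # pick a version matching your model
)

# Create the endpoint; no fit() call is needed.
predictor = sklearn_model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
)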

Regarding the following part of your question:

"when the deploy method is used it uses the same S3 location to deploy the model; we don't manually create that location in S3, as it is created by the AWS model and named using some timestamp"

I agree that sometimes the demos that use the SageMaker Python SDK (one of the many SDKs available for SageMaker) may be misleading, in the sense that they often leverage the fact that an Estimator that has just been trained can be deployed (Estimator.deploy(..)) in the same session, without having to instantiate the intermediary model concept that maps inference code to a model artifact. This design was presumably chosen for the sake of code compactness, but in real life the training and deployment of a given model may well be done from different scripts running on different systems. It is perfectly possible to deploy a model without having trained it previously in the same session; you need to instantiate a sagemaker.model.Model object and then deploy it.
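A minimal sketch of that generic approach, assuming SageMaker Python SDK v2 argument names (image_uri rather than the older image): map your own model.tar.gz in S3 to a serving image and deploy it, with no Estimator involved. The image URI, S3 path and role are placeholders.

import sagemaker
from sagemaker.model import Model

session = sagemaker.Session()

# Associate your artifact in S3 with the inference image that will serve it.
model = Model(
    image_uri="<account>.dkr.ecr.<region>.amazonaws.com/my-inference-image:latest",
    model_data="s3://my-bucket/pretrained/model.tar.gz",
    role="arn:aws:iam::123456789012:role/MySageMakerRole",
    sagemaker_session=session,
)

# Deploy straight from the artifact; SageMaker unpacks it into /opt/ml/model.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
)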

