How to deploy TensorFlow Serving using Docker and DigitalOcean Spaces


Question

How do you configure TensorFlow Serving to use files stored in DigitalOcean Spaces?

It's important that the solution:

  • provides access to both the configuration and model files
  • provides non-public access to the data

I have configured a bucket named your_bucket_name in DigitalOcean Spaces with the following structure:

- your_bucket_name
  - config
    - batching_parameters.txt
    - monitoring_config.txt
    - models.config
  - models
    - model_1
      - version_1.1
        - variables
          - variables.data-00000-of-00001
          - variables.index
        - saved_model.pb
    - model_2
      - ...
    - model_3
      - ...

Answer

TensorFlow Serving supports integration with Amazon S3 buckets. Since DigitalOcean Spaces provides a similar interface, it's possible to run TensorFlow Serving with DigitalOcean Spaces via Docker by piggybacking off the S3 interface.

To make it easier for others, I've detailed everything you need to know about running the server below:

Define the following variables in your environment:

AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...

(This is not strictly necessary, but defining these variables makes your deployment more secure than hard-coding the values into your docker-compose file, for example.)

You receive the values for these variables from DigitalOcean Spaces as part of configuring your cloud storage bucket.
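For example, in a POSIX shell the key pair can be exported before launching the container. The values below are placeholders, not real keys; use the access key and secret generated in the DigitalOcean control panel:

```shell
# Export the Spaces key pair for the current shell session so that
# docker / docker-compose can pass them into the container.
# Placeholder values -- substitute your own keys.
export AWS_ACCESS_KEY_ID="YOUR_SPACES_ACCESS_KEY"
export AWS_SECRET_ACCESS_KEY="YOUR_SPACES_SECRET_KEY"
```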

You can start the server using Docker or docker-compose:

Here's a minimal docker command for starting the server from a command prompt:

docker run \
    -p 8500:8500 \
    -p 8501:8501 \
    -e AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID} \
    -e AWS_REGION=nyc3 \
    -e AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY} \
    -e S3_ENDPOINT=nyc3.digitaloceanspaces.com \
    tensorflow/serving \
    --model_config_file=s3://your_bucket_name/config/models.config

(To run this on Windows, you may need to remove the backslash-newlines to make it a single-line command.)
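Once the container is up, a quick way to confirm that the server loaded the model from the bucket is TensorFlow Serving's REST status endpoint. This sketch assumes the default REST port (8501) and the model name model_1 from the bucket structure above:

```shell
# TensorFlow Serving exposes a per-model status endpoint on the REST port.
# A healthy reply lists the loaded version with state AVAILABLE.
STATUS_URL="http://localhost:8501/v1/models/model_1"
echo "$STATUS_URL"
# curl "$STATUS_URL"   # uncomment once the server is running
```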

This docker-compose configuration is a bit more detailed about how the server is configured, but you can use these options with the plain docker command as well.

version: "3"
services:
  tensorflow-servings:
    image: tensorflow/serving:latest
    ports:
      - 8500:8500
      - 8501:8501
    command:
      - --batching_parameters_file=s3://your_bucket_name/config/batching_parameters.txt
      - --enable_batching=true
      - --model_config_file=s3://your_bucket_name/config/models.config
      - --model_config_file_poll_wait_seconds=300
      - --monitoring_config_file=s3://your_bucket_name/config/monitoring_config.txt
      - --rest_api_timeout_in_ms=30000
    environment:
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_LOG_LEVEL=3
      - AWS_REGION=nyc3
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
      - S3_ENDPOINT=nyc3.digitaloceanspaces.com
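docker-compose substitutes ${AWS_ACCESS_KEY_ID} and ${AWS_SECRET_ACCESS_KEY} from the shell environment; it can also read them from a .env file placed next to docker-compose.yml, which keeps the keys out of the compose file itself. A sketch with placeholder values:

```
# .env -- docker-compose reads this file automatically.
# Placeholder values, not real keys.
AWS_ACCESS_KEY_ID=YOUR_SPACES_ACCESS_KEY
AWS_SECRET_ACCESS_KEY=YOUR_SPACES_SECRET_KEY
```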

The log level is reduced here because there are a lot of "Connection released" and "No response body" messages that are not actual errors. (See the GitHub issue "AWS lib is verbose when using S3" for more details.)

3.1. models.config

The configuration file looks like this:

model_config_list {
  config {
    name: 'model_1'
    base_path: 's3://your_bucket_name/models/model_1/'
    model_platform: "tensorflow"
  },
  config {
    ...
  },
  config {
    ...
  }
}

3.2. batching_parameters.txt (Optional)

This file defines guidelines for TensorFlow Serving, shaping the way it handles batching on the server.

    max_batch_size { value: 1024 }
    batch_timeout_micros { value: 100 }
    num_batch_threads { value: 4 }
    pad_variable_length_inputs: true

3.3. monitoring_config.txt (Optional)

This file makes various statistics available via the endpoint defined below.

prometheus_config {
  enable: true,
  path: "/monitoring/metrics"
}
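If you scrape these metrics with Prometheus, the job needs the non-default metrics path configured above. A minimal sketch of a prometheus.yml scrape job, assuming Prometheus can reach the serving container as tensorflow-servings on the REST port 8501 (adjust the target to your network):

```
# Minimal Prometheus scrape job for the endpoint defined above.
scrape_configs:
  - job_name: "tensorflow-serving"
    metrics_path: "/monitoring/metrics"
    static_configs:
      - targets: ["tensorflow-servings:8501"]
```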

