TensorFlow Serving: Update model_config (add additional models) at runtime


Problem description

I'm busy configuring a TensorFlow Serving client that asks a TensorFlow Serving server to produce predictions on a given input image, for a given model.

If the requested model is not yet being served, the client downloads it from a remote URL into the folder where the server's models are located. At this point I need to update the model_config and trigger the server to reload it.
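The download step described above might be sketched as follows. The helper name `download_model_version` and the single-file download are my own simplifications (a real SavedModel is a directory, often shipped as an archive); what is not an assumption is that TensorFlow Serving expects the versioned layout `<base_path>/<version>/` on disk:

```python
import os
import shutil
import urllib.request

def download_model_version(url, base_path, version):
    """Fetch a model file into the versioned directory layout that
    TensorFlow Serving watches: <base_path>/<version>/saved_model.pb."""
    version_dir = os.path.join(base_path, str(version))
    os.makedirs(version_dir, exist_ok=True)
    # Fetch to a temporary location first, then move into place, so the
    # server never observes a half-written file inside the version dir.
    tmp_path, _ = urllib.request.urlretrieve(url)
    shutil.move(tmp_path, os.path.join(version_dir, "saved_model.pb"))
    return version_dir
```

After this the client still has to tell the server about the new model, which is what the accepted answer below addresses.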

This functionality appears to exist (based on https://github.com/tensorflow/serving/pull/885 and https://github.com/tensorflow/serving/blob/master/tensorflow_serving/apis/model_service.proto#L22), but I can't find any documentation on how to actually use it.

I am essentially looking for a Python script with which I can trigger the reload from the client side (or otherwise a way to configure the server to listen for changes and trigger the reload itself).
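For the "listen for changes" alternative: the model server can poll a model config file on disk, so the client only has to rewrite that file. This is a config sketch, not from the question itself; the paths are examples, and `--model_config_file_poll_wait_seconds` is only available in newer TensorFlow Serving releases:

```shell
# Write the model config file (a text-format ModelServerConfig proto),
# then start the server so it re-reads the file every 60 seconds.
cat > /models/models.config <<'EOF'
model_config_list {
  config {
    name: "my_model"
    base_path: "/models/my_model"
    model_platform: "tensorflow"
  }
}
EOF

tensorflow_model_server \
  --port=8500 \
  --model_config_file=/models/models.config \
  --model_config_file_poll_wait_seconds=60
```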

Accepted answer

So it took me ages of trawling through pull requests to finally find a code example for this. For the next person who has the same question as me, here is an example of how to do it. (You'll need the tensorflow_serving package for this: pip install tensorflow-serving-api.)

Based on this pull request (which at the time of writing hadn't been accepted and was closed since it needed review): https://github.com/tensorflow/serving/pull/1065

from tensorflow_serving.apis import model_service_pb2_grpc
from tensorflow_serving.apis import model_management_pb2
from tensorflow_serving.config import model_server_config_pb2

import grpc

def add_model_config(host, name, base_path, model_platform):
    channel = grpc.insecure_channel(host)
    stub = model_service_pb2_grpc.ModelServiceStub(channel)
    request = model_management_pb2.ReloadConfigRequest()
    model_server_config = model_server_config_pb2.ModelServerConfig()

    # Create a config to add to the list of served models.
    # Note: HandleReloadConfigRequest replaces the server's entire config,
    # so every model you want to keep serving must appear in this list.
    config_list = model_server_config_pb2.ModelConfigList()
    one_config = config_list.config.add()
    one_config.name = name
    one_config.base_path = base_path
    one_config.model_platform = model_platform

    model_server_config.model_config_list.CopyFrom(config_list)
    request.config.CopyFrom(model_server_config)

    print(request.IsInitialized())
    print(request.ListFields())

    response = stub.HandleReloadConfigRequest(request, timeout=10)
    if response.status.error_code == 0:
        print("Reload successful")
    else:
        print("Reload failed!")
        print(response.status.error_code)
        print(response.status.error_message)


add_model_config(host="localhost:8500",
                 name="my_model",
                 base_path="/models/my_model",
                 model_platform="tensorflow")
