Tensorflow predict grpc not working but RESTful API working fine


Problem description

When I try to execute the client code below, I get an error, but the same prediction succeeds when called via the RESTful API endpoint:

curl -d '{"signature_name":"predict_output","instances":[2.0,9.27]}' -X POST http://10.110.110.13:8501/v1/models/firstmodel:predict

Could you please point out what is wrong in the code below?

import tensorflow as tf
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc
import grpc

# Open an insecure gRPC channel to the serving host
server = '10.110.110.13:8501'
channel = grpc.insecure_channel(server)
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

# Build the PredictRequest for the deployed model/signature
request = predict_pb2.PredictRequest()
request.model_spec.name = 'firstmodel'
request.model_spec.signature_name = 'predict_output'
request.inputs['input_x'].CopyFrom(tf.contrib.util.make_tensor_proto([12.0], shape=[1]))

# Call Predict with a 40-second timeout
result_future = stub.Predict(request, 40.)
print(result_future.outputs['output_y'])

Getting the following error message:

_Rendezvous: <_Rendezvous of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "Trying to connect an http1.x server"
debug_error_string = "{"created":"@1545248014.367000000","description":"Error received from peer",
    "file":"src/core/lib/surface/call.cc","file_line":1083,"grpc_message":"Trying to connect an http1.x server","grpc_status":14}"

Below is the composed request, for your reference:

model_spec {
  name: "firstmodel"
  signature_name: "predict_output"
}
inputs {
  key: "input_x"
  value {
    dtype: DT_FLOAT
    tensor_shape {
      dim {
        size: 1
      }
    }
    float_val: 12.0
  }
}

Answer

The gRPC port and the HTTP port are different. Since your HTTP service is listening on 8501, your gRPC service must use another port. The default is 8500, but you can change it with the --port= argument when you start your TensorFlow Serving server.

docker run -p 8500:8500 --mount type=bind,source=/root/serving/Ser_Model,target=/models/firstmodel -e MODEL_NAME=firstmodel -t tensorflow/serving
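Concretely, the only change the question's client needs is the target port; the REST URL keeps using 8501. A minimal sketch of the two endpoints (host, model name, and payload taken from the question; the gRPC default port 8500 is assumed unchanged):

```python
import json
import urllib.request

# gRPC and REST are served on different ports:
#   8500 -> gRPC (HTTP/2), the port the PredictionServiceStub must connect to
#   8501 -> REST (HTTP/1.x), which is why the curl call succeeds
GRPC_TARGET = '10.110.110.13:8500'   # was '10.110.110.13:8501' in the failing client
REST_URL = 'http://10.110.110.13:8501/v1/models/firstmodel:predict'

# The working REST request from the question, rebuilt with the standard library:
payload = json.dumps({
    "signature_name": "predict_output",
    "instances": [2.0, 9.27],
}).encode('utf-8')
request = urllib.request.Request(
    REST_URL, data=payload, headers={'Content-Type': 'application/json'})
# urllib.request.urlopen(request) would send it (requires the server to be reachable).
```

Pointing grpc.insecure_channel at GRPC_TARGET instead of the 8501 address removes the "Trying to connect an http1.x server" error, since the gRPC client no longer handshakes with the HTTP/1.x REST listener.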
