tensorflow-serving signature for an XOR

This article looks at a tensorflow-serving signature for an XOR model; the question and the recommended answer below may be a useful reference for anyone facing the same problem.

Problem description

I am trying to export my first XOR NN using TensorFlow Serving, but I am not getting any result when I call the gRPC endpoint. Here is the code I use to predict the XOR:

import tensorflow as tf
sess = tf.Session()
from keras import backend as K
K.set_session(sess)
K.set_learning_phase(0)  # all new operations will be in test mode from now on

from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import tag_constants, signature_constants, signature_def_utils_impl

from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD
import numpy as np

model_version = "2" #Change this to export different model versions, i.e. 2, ..., 7
epoch = 100  # the higher this number, the more accurate the prediction; 10000 is a good value but takes a while to train

#Exhaustion of Different Possibilities
X = np.array([
    [0,0],
    [0,1],
    [1,0],
    [1,1]
])

#Return values of the different inputs
Y = np.array([[0],[1],[1],[0]])

#Create Model
model = Sequential()
model.add(Dense(8, input_dim=2))
model.add(Activation('tanh'))
model.add(Dense(1))
model.add(Activation('sigmoid'))
sgd = SGD(lr=0.1)

model.compile(loss='binary_crossentropy', optimizer=sgd)
model.fit(X, Y, batch_size=1, epochs=epoch)  # 'epochs' replaces the deprecated 'nb_epoch' argument

test = np.array([[0.0,0.0]])

#setting values for the sake of saving the model in the proper format
x = model.input
y = model.output

print('Results of Model', model.predict_proba(X))

prediction_signature = tf.saved_model.signature_def_utils.predict_signature_def({"inputs": x}, {"prediction": y})

valid_prediction_signature = tf.saved_model.signature_def_utils.is_valid_signature(prediction_signature)
if not valid_prediction_signature:
    raise ValueError("Error: Prediction signature not valid!")

builder = saved_model_builder.SavedModelBuilder('./'+model_version)
legacy_init_op = tf.group(tf.tables_initializer(), name='legacy_init_op')

# Add the meta_graph and the variables to the builder
builder.add_meta_graph_and_variables(
      sess, [tag_constants.SERVING],
      signature_def_map={
           signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:prediction_signature,
      },
      legacy_init_op=legacy_init_op)

# save the graph
builder.save()
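
Once builder.save() has finished, it is worth verifying what was actually exported before starting the server. A minimal check, assuming the model was exported to ./2 as in the script above, is the saved_model_cli tool that ships with TensorFlow:

saved_model_cli show --dir ./2 --tag_set serve --signature_def serving_default

This prints the input and output tensor names and shapes of the serving signature, which should match the "inputs" and "prediction" keys defined in prediction_signature.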

Then I serve the model with Docker:

docker run -p 8501:8501 --mount type=bind,source=/root/tensorflow3/projects/example/xor_keras_tensorflow_serving,target=/models/xor -e MODEL_NAME=xor -t tensorflow/serving &
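
As a side note, the stock tensorflow/serving image serves gRPC on port 8500 and the REST API on port 8501, so the command above only publishes the REST port. If the gRPC interface mentioned in the question is also needed, a variant along these lines (same paths as above) publishes both:

docker run -p 8500:8500 -p 8501:8501 --mount type=bind,source=/root/tensorflow3/projects/example/xor_keras_tensorflow_serving,target=/models/xor -e MODEL_NAME=xor -t tensorflow/serving &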

Then I request the prediction as follows:

curl -d '{"inputs": [1,1]}' -X POST http://localhost:8501/v2/models/xor

and the result is always:

<HTML><HEAD>
<TITLE>404 Not Found</TITLE>
</HEAD><BODY>
<H1>Not Found</H1>
</BODY></HTML>

Can you help me find where I went wrong? I have tried changing "inputs" in the curl request to "instances", but nothing changed. Thanks, Manuel

Recommended answer

Can you first try

curl http://localhost:8501/v1/models/xor

to check if the model is running? This should return the status of your model.

From the REST API documentation, the format is GET http://host:port/v1/models/${MODEL_NAME}[/versions/${MODEL_VERSION}].
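
For the prediction itself, TensorFlow Serving's REST API uses POST http://host:port/v1/models/${MODEL_NAME}:predict, where v1 is the API version rather than the model version, and the JSON payload goes under an "instances" key. Assuming the model is served under the name xor as above, a request would look something like:

curl -d '{"instances": [[1.0, 1.0]]}' -X POST http://localhost:8501/v1/models/xor:predict

If the export worked, the response should contain a "predictions" list with one sigmoid output per instance.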

This concludes the article on a tensorflow-serving signature for an XOR; hopefully the recommended answer above is helpful.
