ML Engine Online Prediction - Unexpected tensor name: values

Problem Description

I get the following error when trying to make an online prediction on my ML Engine model: the key "values" is not correct. (The error in the screenshot reads "Unexpected tensor name: values".)

I already tested with raw image data: {"image_bytes":{"b64": base64.b64encode(jpeg_data)}}, and with the data converted to a numpy array.

Currently, I have the following code:

from googleapiclient import discovery
import base64
import os
from PIL import Image
import json
import numpy as np

os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/Users/jacob/Desktop/******"

def predict_json(project, model, instances, version=None):
    """Send json data to a deployed model for prediction.

    Args:
        project (str): project where the Cloud ML Engine Model is deployed.
        model (str): model name.
        instances ([Mapping[str: Any]]): Keys should be the names of Tensors
            your deployed model expects as inputs. Values should be datatypes
            convertible to Tensors, or (potentially nested) lists of datatypes
            convertible to tensors.
        version: str, version of the model to target.
    Returns:
        Mapping[str: any]: dictionary of prediction results defined by the
            model.
    """
    # Create the ML Engine service object.
    # To authenticate set the environment variable
    # GOOGLE_APPLICATION_CREDENTIALS=<path_to_service_account_file>
    service = discovery.build('ml', 'v1')
    name = 'projects/{}/models/{}'.format(project, model)

    if version is not None:
        name += '/versions/{}'.format(version)

    response = service.projects().predict(
        name=name,
        body={'instances': instances}
    ).execute()

    if 'error' in response:
        raise RuntimeError(response['error'])

    return response['predictions']


savepath = 'upload/11277229_F.jpg'

img = Image.open('test/01011000/11277229_F.jpg')
test = img.resize((299, 299))
test.save(savepath)

img1 = open(savepath, "rb").read()

def load_image(filename):
    # Read the image file and return its pixel data as a numpy array.
    with Image.open(filename) as img:
        return np.array(img)

predict_json('image-recognition-25***08', 'm500_200_waug', [{"values": str(base64.b64encode(img1).decode("utf-8")), "key": '87'}], 'v1')

Answer

The error message itself indicates (as you point out in the question) that the key "values" is not one of the inputs specified in the model. To inspect the model's inputs, use saved_model_cli show --all --dir=/path/to/model. That will show you a list of the input names. You'll need to use the correct name.
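If you prefer to check this from Python rather than the command line, the same information can be read from the SavedModel's serving signature. The following is only a minimal sketch, assuming a TensorFlow 1.x environment and a local copy of the exported model directory (export_dir is a placeholder path):

import tensorflow as tf

export_dir = '/path/to/model'  # placeholder: local copy of the exported SavedModel

with tf.Session(graph=tf.Graph()) as sess:
    # Load the MetaGraphDef tagged for serving and list each signature's inputs.
    meta_graph = tf.saved_model.loader.load(sess, ['serve'], export_dir)
    for sig_name, sig in meta_graph.signature_def.items():
        print('signature:', sig_name)
        for input_key, tensor_info in sig.inputs.items():
            shape = [dim.size for dim in tensor_info.tensor_shape.dim]
            print('  input key:', input_key,
                  '| dtype:', tf.as_dtype(tensor_info.dtype).name,
                  '| shape:', shape)

The input keys printed here are the names each prediction instance must use.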

That said, it appears there is another issue. It's not clear from the question what type of input your model is expecting, though it's likely one of two things:

  1. A matrix of integers or floats
  2. A byte string with the raw image file contents.

The exact solution depends on which of the above your exported model uses. saved_model_cli will help here, based on the type and shape of the input: it will be either DT_FLOAT (or some other int/float type) with shape [None, 299, 299, CHANNELS], or DT_STRING with shape [None], respectively.

If your model is type (1), then you will need to send a matrix of ints/floats (which does not use base64 encoding):

predict_json('image-recognition-25***08', 'm500_200_waug', [{CORRECT_INPUT_NAME: load_image(savepath).tolist(), "key": '87'}], 'v1')

Note the use of tolist to convert the numpy array to a list of lists.
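Purely as an illustration of that nesting (the input name pixels and the tiny 2x2 image below are made up; substitute your model's real input name and the real 299x299 array):

# Hypothetical example: a 2x2 single-channel "image" sent as nested lists.
instance = {
    "pixels": [[0.0, 0.5],
               [0.25, 1.0]],  # what np.array(...).tolist() produces for a 2x2 array
    "key": "87",
}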

In the case of type (2), you need to tell the service you have some base64 data by adding a {"b64": ...} wrapper:

predict_json('image-recognition-25***08', 'm500_200_waug', [{CORRECT_INPUT_NAME: {"b64": str(base64.b64encode(img1).decode("utf-8"))}, "key": '87'}], 'v1')

All of this, of course, depends on using the correct name for CORRECT_INPUT_NAME.
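For reference, if the correct input name turned out to be image_bytes (an assumption here; use whatever name saved_model_cli reports), the request body that predict_json builds for case (2) would look like this:

# Hypothetical request body for case (2); "image_bytes" is a placeholder name.
body = {
    "instances": [
        {
            "image_bytes": {"b64": base64.b64encode(img1).decode("utf-8")},
            "key": "87",
        }
    ]
}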

One final note: I'm assuming your model actually does have key as an additional input, since you included it in your request; again, that can be verified against the output of saved_model_cli show.
