GCP ML Engine Prediction failed: Error processing input: Expected float32 got base64
Question
I am trying to call a prediction on a custom-trained TensorFlow model deployed to GCP ML Engine. When I call a prediction on the model, it returns the following error message: "Expected float32 got base64".
- I've used transfer learning and TensorFlow's retrain.py script to train my model on my images, following the official documentation:
python retrain.py --image_dir ~/training_images --saved_model_dir /saved_model_directory
- I've tested the prediction locally using TensorFlow's label_image.py script; the prediction worked locally for my images:
python label_image.py --graph=/tmp/output_graph.pb --labels=/tmp/output_labels.txt --input_layer=Placeholder --output_layer=final_result
- I've exported my model for TensorFlow Serving as instructed in the retrain.py script's documentation:
python retrain.py --image_dir ~/training_images --saved_model_dir /saved_model_directory
- I've uploaded the model to Firebase; GCP validated and accepted my model, and I was able to trigger it.
When trying to call an online prediction, I receive the "Expected float32" error.
test.json = {"image_bytes": {"b64": "/9j/4AAQSkZJ.......=="}}
gcloud ml-engine predict \
--model my_model \
--version v1 \
--json-instances ./test.json
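A request file in that shape can be produced with a short script. This is a sketch using only the standard library; the `image_bytes` field name mirrors the test.json above, and the file path is hypothetical:

```python
import base64
import json

def make_instance(image_path):
    """Build one JSON instance in the {"image_bytes": {"b64": ...}} shape
    used by the test.json example above."""
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("utf-8")
    return {"image_bytes": {"b64": encoded}}

# Hypothetical usage: gcloud's --json-instances expects one instance per line.
# with open("test.json", "w") as out:
#     out.write(json.dumps(make_instance("test.jpg")) + "\n")
```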
Do I need to modify retrain.py to make my saved model accept base64, or is there another solution to the problem?
I've already checked the following answer, but unfortunately it did not solve my problem: How to pass base64 encoded image to Tensorflow prediction?
Answer
The problem is that retrain.py exports a model whose input expects an already decoded and resized image in the form of floats (see this line), but you are passing it raw, undecoded image data.
There are two solutions:
- Create the JSON request in the expected format (floats). This is an easy fix, but it can have performance implications (sending float32 data as JSON is inefficient).
- Change the model to accept raw image data as input. This requires some modification of the model.
For (1), you would send a JSON file similar to:
{"images": [[[0.0, 0.0, 0.0], [0,0,0], [...]], [...], ...]}
Of course, you'd probably construct that using some client library.
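As a sketch of how such a payload might be built: the `images` key follows the example above, but the actual input name depends on your exported signature, and real code would first decode and resize the JPEG (e.g. with PIL) to the model's expected input size. Here a tiny synthetic pixel grid stands in for the decoded image:

```python
import json

def pixels_to_floats(pixels):
    """Scale a height x width x 3 nested list of 0-255 ints to [0, 1]
    floats, the decoded-and-resized form the retrained model expects."""
    return [[[c / 255.0 for c in px] for px in row] for row in pixels]

# Tiny 2x2 stand-in for a decoded, resized image.
tiny = [[[0, 0, 0], [255, 255, 255]],
        [[51, 102, 153], [255, 0, 0]]]
payload = json.dumps({"images": [pixels_to_floats(tiny)]})
```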
(2) is a little more involved. This sample can guide you on how to do that.
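For a rough idea of what (2) involves: the exported graph needs a string input that decodes and resizes the image itself. This is only a sketch under assumed tensor names (`image_bytes`, `images`) and the TF 1.x-style graph API that retrain.py uses; the linked sample is the authoritative version:

```python
import tensorflow as tf

def build_serving_inputs(height=299, width=299):
    """Graph fragment that turns base64-decoded JPEG bytes into float
    images. The JSON key must end in `_bytes` so ML Engine decodes
    {"b64": ...} values before they reach the graph."""
    tf1 = tf.compat.v1
    image_bytes = tf1.placeholder(tf.string, shape=[None], name="image_bytes")

    def decode_and_resize(one_image):
        img = tf.image.decode_jpeg(one_image, channels=3)
        img = tf.image.convert_image_dtype(img, tf.float32)  # [0, 1] floats
        return tf.image.resize(tf.expand_dims(img, 0), [height, width])[0]

    images = tf.map_fn(decode_and_resize, image_bytes, dtype=tf.float32)
    # These tensors would be wired into the SavedModel's serving signature.
    return {"image_bytes": image_bytes}, {"images": images}
```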