TensorFlow: Optimize for Inference a SavedModel exported by Estimator


Problem Description

I'm trying to optimize a saved graph for inference, so I can use it in Android.

My first attempt at using the optimize_for_inference script failed with

google.protobuf.message.DecodeError: Truncated message

So my question is whether the input/output nodes are wrong, or whether the script cannot handle SavedModels (although it has the same .pb extension as a frozen graph).

Regarding the first: since with Estimators we provide an input_fn instead of the data itself, which operation should be considered the input? The first TF operation on it? Like:

x = x_dict['gestures']

# Data input is a 1-D vector of x_dim * y_dim features ("pixels")
# Reshape to match format [Height x Width x Channel]
# Tensor input becomes 4-D: [Batch Size, Height, Width, Channel]
x = tf.reshape(x, shape=[-1, x_dim, y_dim, 1], name='input')

(...)

pred_probs = tf.nn.softmax(logits, name='output')
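
For context, the SavedModel was exported with Estimator.export_savedmodel and a serving_input_receiver_fn, roughly along these lines (a sketch; the placeholder name and shape here are assumptions, with x_dim and y_dim as defined in the model above):

import tensorflow as tf

def serving_input_receiver_fn():
    # Hypothetical placeholder standing in for the 'gestures' feature key.
    gestures = tf.placeholder(
        tf.float32, shape=[None, x_dim * y_dim], name='gestures')
    return tf.estimator.export.ServingInputReceiver(
        features={'gestures': gestures},
        receiver_tensors={'gestures': gestures})

# estimator.export_savedmodel('export', serving_input_receiver_fn)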

BTW: if there is something different about loading a SavedModel in Android, I'd like to know that too.

Thanks in advance!

Recommended Answer

Update: There are good instructions at https://www.tensorflow.org/mobile/prepare_models which include an explanation of what to do with SavedModels. You can freeze your SavedModel by passing --input_saved_model_dir to freeze_graph.py.
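
A minimal sketch of that call through the tool's Python API (assuming TF 1.x; the export directory and output path are placeholders, and 'output' is the node name from the snippet in the question):

from tensorflow.python.tools import freeze_graph

freeze_graph.freeze_graph(
    input_graph=None, input_saver=None, input_binary=False,
    input_checkpoint=None,          # variables come from the SavedModel
    output_node_names='output',     # from name='output' in the question
    restore_op_name=None, filename_tensor_name=None,  # unused here
    output_graph='/tmp/frozen_graph.pb',
    clear_devices=True, initializer_nodes='',
    input_saved_model_dir='export/1531234567',  # hypothetical export dir
    saved_model_tags='serve')

The command-line form is equivalent: pass --input_saved_model_dir, --output_node_names, and --output_graph to freeze_graph.py.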

They're both protocol buffers (.pb), but unfortunately they're different messages (i.e. different file formats). Theoretically you could first extract a MetaGraph from the SavedModel, then "freeze" the MetaGraph's GraphDef (move variables into constants), then run this script on the frozen GraphDef. In that case you'd want your input_fn to be just placeholders.
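
A sketch of that manual route (assuming TF 1.x and the default 'serve' tag; the export directory, output path, and the 'input'/'output' node names from the question are placeholders):

import tensorflow as tf
from tensorflow.python.tools import optimize_for_inference_lib

export_dir = 'export/1531234567'  # hypothetical SavedModel directory

with tf.Session(graph=tf.Graph()) as sess:
    # Extract the MetaGraph tagged 'serve' from the SavedModel.
    tf.saved_model.loader.load(sess, ['serve'], export_dir)
    # "Freeze": fold the variables into the GraphDef as constants.
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), ['output'])

# Only now, on the frozen GraphDef, can the optimize_for_inference pass run.
optimized = optimize_for_inference_lib.optimize_for_inference(
    frozen, ['input'], ['output'], tf.float32.as_datatype_enum)

with tf.gfile.GFile('/tmp/optimized_graph.pb', 'wb') as f:
    f.write(optimized.SerializeToString())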

You could also add a +1 emoji on one of the "SavedModel support for Android" GitHub issues. Medium-term we'd like to standardize on SavedModel; sorry you've run into this!

