In TensorFlow, what exactly is the serving input function supposed to do when serving a model?


Question

So, I've been struggling to understand what the main task of a serving_input_fn() is when a trained model is exported in TensorFlow for serving purposes. There are some examples online that explain it, but I'm having problems defining it for myself.

The problem I'm trying to solve is a regression problem where I have 29 inputs and one output. Is there a template for creating a corresponding serving input function for that? What if I use a one-class classification problem? Would my serving input function need to change or can I use the same function?

And finally, do I always need serving input functions or is it only when I use tf.estimator to export my model?

Answer

You need a serving input function if you want your model to be able to make predictions. The serving_input_fn specifies what the caller of the predict() method will have to provide. You are essentially telling the model what data it has to get from the user.

If you have 29 inputs, your serving input function might look like:

import tensorflow as tf  # TF 1.x API; in TF 2.x use tf.compat.v1.placeholder

def serving_input_fn():
    # One placeholder per input feature (29 in total; two shown here)
    feature_placeholders = {
      'var1' : tf.placeholder(tf.float32, [None]),
      'var2' : tf.placeholder(tf.float32, [None]),
      # ... one entry per remaining feature ...
    }
    # Reshape each [batch] tensor to [batch, 1] so it matches the
    # feature shapes the model was trained with
    features = {
        key: tf.expand_dims(tensor, -1)
        for key, tensor in feature_placeholders.items()
    }
    return tf.estimator.export.ServingInputReceiver(features, 
                                                    feature_placeholders)

This would typically come in as JSON:

{"instances": [{"var1": [23, 34], "var2": [...], ...}]}
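As a hedged illustration (the feature values below are made up), a client could build such a request body in Python before sending it to the deployed model's predict() endpoint:

```python
import json

# Hypothetical client-side payload: one dict per instance, with one
# entry per named feature placeholder from serving_input_fn()
request = {
    "instances": [
        {"var1": [23, 34], "var2": [1.5, 2.7]},
    ]
}
body = json.dumps(request)
print(body)
```

The keys of each instance dict must match the placeholder names in the serving input function, since that is the contract the exported model advertises.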

P.S. The output is not part of the serving input function because this is about the input to predict. If you are using a pre-made estimator, the output is already predetermined. If you are writing a custom estimator, you'd write an export signature.
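For the custom-estimator case, here is a minimal sketch (assuming TF 1.x; the model_fn name and output key are hypothetical, and the actual regression head is elided) of attaching an export signature via export_outputs:

```python
import tensorflow as tf  # TF 1.x API

def my_model_fn(features, labels, mode):
    # Toy stand-in for the real network: concatenate the [batch, 1]
    # feature columns and apply a single dense layer for regression
    net = tf.concat([features[k] for k in sorted(features)], axis=1)
    predictions = tf.layers.dense(net, 1)

    if mode == tf.estimator.ModeKeys.PREDICT:
        # The export signature: what serving returns to the caller
        export_outputs = {
            'predict': tf.estimator.export.PredictOutput(
                {'output': predictions})
        }
        return tf.estimator.EstimatorSpec(
            mode, predictions=predictions, export_outputs=export_outputs)
    # ... TRAIN / EVAL branches omitted ...
```

With a pre-made estimator none of this is needed, because the export signature is defined for you.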
