In TensorFlow, when serving a model, what exactly is the serving input function supposed to do?

Question

So, I've been struggling to understand what the main task of a serving_input_fn() is when a trained model is exported in TensorFlow for serving purposes. There are some examples online that explain it, but I'm having trouble defining one for myself.

The problem I'm trying to solve is a regression problem where I have 29 inputs and one output. Is there a template for creating a corresponding serving input function for that? What if I were working on a one-class classification problem instead? Would my serving input function need to change, or could I use the same function?

And finally, do I always need a serving input function, or is it only needed when I use tf.estimator to export my model?

Answer

You need a serving input function if you want your model to be able to make predictions. The serving_input_fn specifies what the caller of the predict() method will have to provide. You are essentially telling the model what data it has to get from the user.

If you have 29 inputs, your serving input function might look like:

import tensorflow as tf

def serving_input_fn():
    # One placeholder per input feature; each accepts a batch of scalar values.
    feature_placeholders = {
        'var1' : tf.placeholder(tf.float32, [None]),
        'var2' : tf.placeholder(tf.float32, [None]),
        # ... and so on for the rest of your inputs ...
    }
    # Add a trailing dimension so each tensor becomes [batch_size, 1],
    # which is the shape the feature columns expect.
    features = {
        key: tf.expand_dims(tensor, -1)
        for key, tensor in feature_placeholders.items()
    }
    return tf.estimator.export.ServingInputReceiver(features,
                                                    feature_placeholders)

This would typically come in as JSON:

{"instances": [{"var1": [23, 34], "var2": [...], ...}]}

P.S. The output is not part of the serving input function because this is about the input to predict. If you are using a pre-made estimator, the output is already predetermined. If you are writing a custom estimator, you'd write an export signature.
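
Either way, the serving input function only comes into play when you export the model. As a rough sketch, assuming a TF 1.x estimator that has already been trained (here called estimator) and the serving_input_fn above; the export directory name is arbitrary:

# 'estimator' is assumed to be an already-trained tf.estimator.Estimator,
# e.g. a DNNRegressor; 'exported_model' is just an example base directory.
export_path = estimator.export_savedmodel('exported_model', serving_input_fn)
# In newer TensorFlow 1.x releases the method is spelled export_saved_model.
print('SavedModel written to', export_path)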
