How to import a saved TensorFlow model trained using tf.estimator and predict on input data


Problem description

I have saved the model using the tf.estimator method export_savedmodel as follows:

export_dir="exportModel/"

feature_spec = tf.feature_column.make_parse_example_spec(feature_columns)

input_receiver_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)

classifier.export_savedmodel(export_dir, input_receiver_fn, as_text=False, checkpoint_path="Model/model.ckpt-400") 

How can I import this saved model and use it for predictions?

Recommended answer

I tried to search for a good base example, but it appears the documentation and samples are a bit scattered for this topic. So let's start with a base example: the tf.estimator quickstart.

That particular example doesn't actually export a model, so let's do that (not needed for use case 1):

def serving_input_receiver_fn():
  """Build the serving inputs."""
  # The outer dimension (None) allows us to batch up inputs for
  # efficiency. However, it also means that if we want a prediction
  # for a single instance, we'll need to wrap it in an outer list.
  inputs = {"x": tf.placeholder(shape=[None, 4], dtype=tf.float32)}
  return tf.estimator.export.ServingInputReceiver(inputs, inputs)

export_dir = classifier.export_savedmodel(
    export_dir_base="/path/to/model",
    serving_input_receiver_fn=serving_input_receiver_fn)

Huge asterisk on this code: there appears to be a bug in TensorFlow 1.3 that doesn't allow you to do the above export on a "canned" estimator (such as DNNClassifier). For a workaround, see the "Appendix: Workaround" section.

The code below references export_dir (return value from the export step) to emphasize that it is not "/path/to/model", but rather, a subdirectory of that directory whose name is a timestamp.
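
If you only kept track of export_dir_base, one way to recover the latest export is to pick the most recent timestamped subdirectory. A minimal sketch (assuming the exports live directly under "/path/to/model"):

import os

export_dir_base = "/path/to/model"
# Each export is written to a subdirectory named by its (integer) timestamp,
# so the numerically largest one is the most recent export.
timestamped = [d for d in os.listdir(export_dir_base)
               if os.path.isdir(os.path.join(export_dir_base, d)) and d.isdigit()]
export_dir = os.path.join(export_dir_base, max(timestamped, key=int))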

Use Case 1: Perform prediction in the same process as training

This is a scikit-learn type of experience, and it is already exemplified by the sample. For completeness' sake, you simply call predict on the trained model:

classifier.train(input_fn=train_input_fn, steps=2000)
# [...snip...]
predictions = list(classifier.predict(input_fn=predict_input_fn))
predicted_classes = [p["classes"] for p in predictions]
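
The quickstart defines predict_input_fn for you; if you are adapting this to your own data, a minimal sketch using tf.estimator.inputs.numpy_input_fn (assuming the same 4-feature "x" input as the Iris example) might look like this:

import numpy as np
import tensorflow as tf

# Two hypothetical Iris-style samples; replace with your own data.
new_samples = np.array([[6.4, 3.2, 4.5, 1.5],
                        [5.8, 3.1, 5.0, 1.7]], dtype=np.float32)
predict_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={"x": new_samples},
    num_epochs=1,
    shuffle=False)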

Use Case 2: Load a SavedModel into Python/Java/C++ and perform predictions

Python Client

Perhaps the easiest thing to use if you want to do prediction in Python is SavedModelPredictor. In the Python program that will use the SavedModel, we need code like this:

from tensorflow.contrib import predictor

predict_fn = predictor.from_saved_model(export_dir)
predictions = predict_fn(
    {"x": [[6.4, 3.2, 4.5, 1.5],
           [5.8, 3.1, 5.0, 1.7]]})
print(predictions['scores'])
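
If you are unsure which input/output keys the signature expects, the predictor exposes them; this relies on the feed_tensors/fetch_tensors attributes of tf.contrib.predictor, so verify against your TensorFlow version:

# Inspect the signature the predictor was built from.
print(predict_fn.feed_tensors)   # e.g. {'x': <placeholder tensor>}
print(predict_fn.fetch_tensors)  # e.g. {'scores': ..., 'classes': ...}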

Java Client

package dummy;

import java.nio.FloatBuffer;
import java.util.Arrays;
import java.util.List;

import org.tensorflow.SavedModelBundle;
import org.tensorflow.Session;
import org.tensorflow.Tensor;

public class Client {

  public static void main(String[] args) {
    Session session = SavedModelBundle.load(args[0], "serve").session();

    Tensor x =
        Tensor.create(
            new long[] {2, 4},
            FloatBuffer.wrap(
                new float[] {
                  6.4f, 3.2f, 4.5f, 1.5f,
                  5.8f, 3.1f, 5.0f, 1.7f
                }));

    // Doesn't look like Java has a good way to convert the
    // input/output name ("x", "scores") to their underlying tensor,
    // so we hard code them ("Placeholder:0", ...).
    // You can inspect them on the command-line with saved_model_cli:
    //
    // $ saved_model_cli show --dir $EXPORT_DIR --tag_set serve --signature_def serving_default
    final String xName = "Placeholder:0";
    final String scoresName = "dnn/head/predictions/probabilities:0";

    List<Tensor> outputs = session.runner()
        .feed(xName, x)
        .fetch(scoresName)
        .run();

    // Outer dimension is batch size; inner dimension is number of classes
    float[][] scores = new float[2][3];
    outputs.get(0).copyTo(scores);
    System.out.println(Arrays.deepToString(scores));
  }
}

C++ Client

You'll likely want to use tensorflow::LoadSavedModel with Session.

#include <iostream>
#include <string>
#include <unordered_set>
#include <utility>
#include <vector>

#include "tensorflow/cc/saved_model/loader.h"
#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/public/session.h"

namespace tf = tensorflow;

int main(int argc, char** argv) {
  const std::string export_dir = argv[1];

  tf::SavedModelBundle bundle;
  tf::Status load_status = tf::LoadSavedModel(
      tf::SessionOptions(), tf::RunOptions(), export_dir, {"serve"}, &bundle);
  if (!load_status.ok()) {
    std::cout << "Error loading model: " << load_status << std::endl;
    return -1;
  }

  // We should get the signature out of MetaGraphDef, but that's a bit
  // involved. We'll take a shortcut like we did in the Java example.
  const std::string x_name = "Placeholder:0";
  const std::string scores_name = "dnn/head/predictions/probabilities:0";

  auto x = tf::Tensor(tf::DT_FLOAT, tf::TensorShape({2, 4}));
  auto matrix = x.matrix<float>();
  // First example.
  matrix(0, 0) = 6.4;
  matrix(0, 1) = 3.2;
  matrix(0, 2) = 4.5;
  matrix(0, 3) = 1.5;
  // Second example.
  matrix(1, 0) = 5.8;
  matrix(1, 1) = 3.1;
  matrix(1, 2) = 5.0;
  matrix(1, 3) = 1.7;

  std::vector<std::pair<std::string, tf::Tensor>> inputs = {{x_name, x}};
  std::vector<tf::Tensor> outputs;

  tf::Status run_status =
      bundle.session->Run(inputs, {scores_name}, {}, &outputs);
  if (!run_status.ok()) {
    cout << "Error running session: " << run_status << std::endl;
    return -1;
  }

  for (const auto& tensor : outputs) {
    std::cout << tensor.matrix<float>() << std::endl;
  }
}

Use Case 3: Serve the model with TensorFlow Serving

Exporting the model in a manner amenable to serving a classification model requires that the input be tf.Example objects. Here's how we might export a model for TensorFlow Serving:

def serving_input_receiver_fn():
  """Build the serving inputs."""
  # The outer dimension (None) allows us to batch up inputs for
  # efficiency. However, it also means that if we want a prediction
  # for a single instance, we'll need to wrap it in an outer list.
  example_bytestring = tf.placeholder(
      shape=[None],
      dtype=tf.string,
  )
  features = tf.parse_example(
      example_bytestring,
      tf.feature_column.make_parse_example_spec(feature_columns)
  )
  return tf.estimator.export.ServingInputReceiver(
      features, {'examples': example_bytestring})

export_dir = classifier.export_savedmodel(
    export_dir_base="/path/to/model",
    serving_input_receiver_fn=serving_input_receiver_fn)

The reader is referred to TensorFlow Serving's documentation for more instructions on how to set up TensorFlow Serving, so I'll only provide the client code here:

  # Omitting a bunch of connection/initialization code...
  # But at some point we end up with a stub whose lifecycle
  # is generally longer than that of a single request.
  stub = create_stub(...)

  # The actual values for prediction. We have two examples in this
  # case, each consisting of a single, multi-dimensional feature `x`.
  # This data here is the equivalent of the map passed to the 
  # `predict_fn` in use case #2.
  examples = [
    tf.train.Example(
      features=tf.train.Features(
        feature={"x": tf.train.Feature(
          float_list=tf.train.FloatList(value=[6.4, 3.2, 4.5, 1.5]))})),
    tf.train.Example(
      features=tf.train.Features(
        feature={"x": tf.train.Feature(
          float_list=tf.train.FloatList(value=[5.8, 3.1, 5.0, 1.7]))})),
  ]

  # Build the RPC request.
  predict_request = predict_pb2.PredictRequest()
  predict_request.model_spec.name = "default"
  predict_request.inputs["examples"].CopyFrom(
      tensor_util.make_tensor_proto(examples, tf.float32))

  # Perform the actual prediction.
  stub.Predict(predict_request, PREDICT_DEADLINE_SECS)
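
The create_stub call above is deliberately left abstract. With the gRPC beta API that TensorFlow Serving shipped around this time, a hypothetical implementation (host, port, and the helper name itself are assumptions) might look roughly like this, along with the imports the client snippet relies on:

from grpc.beta import implementations
from tensorflow.python.framework import tensor_util
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2

def create_stub(host="localhost", port=9000):
  # Open an insecure channel to the model server and build the
  # PredictionService stub used for the Predict() call.
  channel = implementations.insecure_channel(host, port)
  return prediction_service_pb2.beta_create_PredictionService_stub(channel)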

Note that the key, examples, that is referenced in the predict_request.inputs needs to match the key used in the serving_input_receiver_fn at export time (cf. the constructor to ServingInputReceiver in that code).

Appendix: Working around exports from canned models in TF 1.3

There appears to be a bug in TensorFlow 1.3 in which canned models do not export properly for use case 2 (the problem does not exist for "custom" estimators). Here is a workaround that wraps a DNNClassifier to make things work, specifically for the Iris example:

# Build 3 layer DNN with 10, 20, 10 units respectively.
class Wrapper(tf.estimator.Estimator):
  def __init__(self, **kwargs):
    dnn = tf.estimator.DNNClassifier(**kwargs)

    def model_fn(mode, features, labels):
      spec = dnn._call_model_fn(features, labels, mode)
      export_outputs = None
      if spec.export_outputs:
        export_outputs = {
           "serving_default": tf.estimator.export.PredictOutput(
                  {"scores": spec.export_outputs["serving_default"].scores,
                   "classes": spec.export_outputs["serving_default"].classes})}

      # EstimatorSpec is a namedtuple, so swap in the new export_outputs
      # while keeping every other field as-is.
      return spec._replace(export_outputs=export_outputs)

    super(Wrapper, self).__init__(model_fn, kwargs["model_dir"], dnn.config)

classifier = Wrapper(feature_columns=feature_columns,
                     hidden_units=[10, 20, 10],
                     n_classes=3,
                     model_dir="/tmp/iris_model")
