How to save estimator in Tensorflow for later use?


Problem description

I followed the tutorial "A Guide to TF Layers: Building a Convolutional Neural Network" (here is the code: https://github.com/tensorflow/tensorflow/blob/r1.1/tensorflow/examples/tutorials/layers/cnn_mnist.py).

I adapted the tutorial for my needs, which is hand detection.

As far as I understand, this tutorial creates the estimator (which is a CNN), then does the fitting, and finally evaluates the estimator's performance. Now, my problem is that I want to use the estimator in another file, which is going to be my main program. How do I access the estimator from another file? Do I have to fit the estimator every time I want to use it? (I hope not.)

I was wondering if someone could help me understand how to save the estimator so it can be used later. (As far as I understand, I can't create a saver with tf.train.Saver, because I don't have a session running.)

Here is the code from my train.py file:

import tensorflow as tf
from tensorflow.contrib import learn

def main(unused_argv):
    # Load training and eval data (part missing)

    # Create the estimator
    # (cnn_model_fn is defined as in the tutorial; forward slashes avoid
    # the unescaped "\c" in the original model_dir="\cnn_model_fn")
    hand_detector = learn.Estimator(model_fn=cnn_model_fn, model_dir="/cnn_model_fn")

    # Set up logging for predictions
    # Log the values in the "Softmax" tensor with label "probabilities"
    tensors_to_log = {"probabilities": "softmax_tensor"}
    logging_hook = tf.train.LoggingTensorHook(
        tensors=tensors_to_log, every_n_iter=50)

    # Train the model
    hand_detector.fit(
        x=train_data,
        y=train_labels,
        batch_size=100,
        steps=20000,
        monitors=[logging_hook])

    # Configure the accuracy metric for evaluation
    metrics = {
        "accuracy":
            learn.MetricSpec(
                metric_fn=tf.metrics.accuracy, prediction_key="classes"),
    }

    # Evaluate the model and print results
    eval_results = hand_detector.evaluate(
        x=eval_data, y=eval_labels, metrics=metrics)
    print(eval_results)

    # Save the model for later use (part missing!)
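
One consequence of the code above, independent of the accepted answer below: learn.Estimator already writes checkpoints to model_dir during fit, so the trained weights can be reused without refitting by re-creating the estimator with the same model_fn and model_dir. A minimal sketch, assuming cnn_model_fn is importable from the training script and new_data is a hypothetical array of input images:

# Re-creating the estimator with the same model_dir restores the latest
# checkpoint automatically; no new fit() call is needed.
hand_detector = learn.Estimator(model_fn=cnn_model_fn, model_dir="/cnn_model_fn")
predictions = hand_detector.predict(x=new_data)  # new_data is hypothetical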

Answer

Almost all real applications of machine learning seek to train a model once and then save it for future use with new data. Most classifiers spend hours in the training stage and just a few seconds in the testing stage, so it is fundamental to learn how to save a trained model successfully.

I'm going to explain how to export "high level" Tensorflow models (using export_savedmodel). The function export_savedmodel requires the argument serving_input_receiver_fn, a function without arguments that defines the input of the model and of the predictor. Therefore, you must create your own serving_input_receiver_fn, in which the model input type matches the model input in the training script, and the predictor input type matches the predictor input in the testing script. On the other hand, if you create a custom model, you must define the export_outputs, built with the function tf.estimator.export.PredictOutput; its input is a dictionary whose keys must match the names of the predictor outputs used in the testing script.

For example:

import tensorflow as tf

def serving_input_receiver_fn():
    # Receives a serialized tf.Example and parses it into the "words"
    # feature expected by the model.
    serialized_tf_example = tf.placeholder(dtype=tf.string, shape=[None], name='input_tensors')
    receiver_tensors      = {"predictor_inputs": serialized_tf_example}
    feature_spec          = {"words": tf.FixedLenFeature([25], tf.int64)}
    features              = tf.parse_example(serialized_tf_example, feature_spec)
    return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)

def estimator_spec_for_softmax_classification(logits, labels, mode):
    predicted_classes = tf.argmax(logits, 1)
    if (mode == tf.estimator.ModeKeys.PREDICT):
        # export_outputs declares the names the predictor will expose;
        # they must match the keys used in the testing script.
        export_outputs = {'predict_output': tf.estimator.export.PredictOutput({
            "pred_output_classes": predicted_classes,
            'probabilities': tf.nn.softmax(logits)})}
        return tf.estimator.EstimatorSpec(
            mode=mode,
            predictions={'class': predicted_classes, 'prob': tf.nn.softmax(logits)},
            export_outputs=export_outputs)  # IMPORTANT!!!
    onehot_labels = tf.one_hot(labels, 31, 1, 0)
    loss          = tf.losses.softmax_cross_entropy(onehot_labels=onehot_labels, logits=logits)
    if (mode == tf.estimator.ModeKeys.TRAIN):
        optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
        train_op  = optimizer.minimize(loss, global_step=tf.train.get_global_step())
        return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)
    eval_metric_ops = {'accuracy': tf.metrics.accuracy(labels=labels, predictions=predicted_classes)}
    return tf.estimator.EstimatorSpec(mode=mode, loss=loss, eval_metric_ops=eval_metric_ops)

def model_custom(features, labels, mode):
    # Bag-of-words embedding followed by a single dense layer over 31 classes.
    bow_column           = tf.feature_column.categorical_column_with_identity("words", num_buckets=1000)
    bow_embedding_column = tf.feature_column.embedding_column(bow_column, dimension=50)
    bow                  = tf.feature_column.input_layer(features, feature_columns=[bow_embedding_column])
    logits               = tf.layers.dense(bow, 31, activation=None)
    return estimator_spec_for_softmax_classification(logits=logits, labels=labels, mode=mode)

def main():
    # ...
    # preprocess-> features_train_set and labels_train_set
    # ...
    classifier     = tf.estimator.Estimator(model_fn=model_custom)
    train_input_fn = tf.estimator.inputs.numpy_input_fn(
        x={"words": features_train_set}, y=labels_train_set,
        batch_size=batch_size_param, num_epochs=None, shuffle=True)
    classifier.train(input_fn=train_input_fn, steps=100)
    full_model_dir = classifier.export_savedmodel(
        export_dir_base="C:/models/directory_base",
        serving_input_receiver_fn=serving_input_receiver_fn)
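
A detail that connects the two scripts: export_savedmodel writes the model into a new timestamped subdirectory of export_dir_base and returns that path (as a bytes object under Python 3), and that return value is the full_model_dir the testing script below relies on. A minimal sketch of the hand-off, assuming the path is printed or persisted between runs:

# The returned path looks like b'C:/models/directory_base/1513701267'
# (the trailing number is a timestamp chosen at export time).
full_model_dir = full_model_dir.decode("utf-8")
print("SavedModel exported to:", full_model_dir)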

Testing script

import tensorflow as tf

def main():
    # ...
    # preprocess-> features_test_set
    # ...
    # full_model_dir is the path returned by export_savedmodel in the
    # training script.
    with tf.Session() as sess:
        tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING], full_model_dir)
        predictor   = tf.contrib.predictor.from_saved_model(full_model_dir)
        # Serialize the test features into a tf.Example, matching the
        # "words" feature spec declared in serving_input_receiver_fn.
        model_input = tf.train.Example(features=tf.train.Features(
            feature={"words": tf.train.Feature(int64_list=tf.train.Int64List(value=features_test_set))}))
        model_input = model_input.SerializeToString()
        # "predictor_inputs" matches receiver_tensors in serving_input_receiver_fn.
        output_dict = predictor({"predictor_inputs": [model_input]})
        # "pred_output_classes" matches the key passed to PredictOutput.
        y_predicted = output_dict["pred_output_classes"][0]
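
The dictionary keys are what tie the two scripts together: "predictor_inputs" must match receiver_tensors in serving_input_receiver_fn, and "pred_output_classes" and "probabilities" must match the dictionary passed to tf.estimator.export.PredictOutput. As a small usage sketch of the second output, under the same assumptions as the script above:

# "probabilities" matches the second key declared in PredictOutput.
class_probs = output_dict["probabilities"][0]
print("predicted class:", y_predicted, "with probability:", class_probs[y_predicted])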

(Code tested in Python 3.6.3, Tensorflow 1.4.0)

