TensorFlow 1.2 How to Setup Time Series Prediction at Inference Time Using Seq2Seq


Problem Description

I am trying to study the tf.contrib.seq2seq section of the TensorFlow library using a toy model. Currently, my graph is as follows:

import tensorflow as tf

# Hyperparameters (values here are illustrative)
n_steps = 10               # sequence length
n_input = 1                # input feature dimension
n_output = 1               # output feature dimension
n_hidden = 64              # LSTM units per layer
layers_stacked_count = 2   # number of stacked LSTM layers

tf.reset_default_graph()

# Placeholders
enc_inp = tf.placeholder(tf.float32, [None, n_steps, n_input])
expect = tf.placeholder(tf.float32, [None, n_steps, n_output])
expect_length = tf.placeholder(tf.int32, [None])
keep_prob = tf.placeholder(tf.float32, [])

# Encoder
cells = [tf.contrib.rnn.DropoutWrapper(tf.contrib.rnn.BasicLSTMCell(n_hidden), output_keep_prob=keep_prob) for i in range(layers_stacked_count)]
cell = tf.contrib.rnn.MultiRNNCell(cells)
encoded_outputs, encoded_states = tf.nn.dynamic_rnn(cell, enc_inp, dtype=tf.float32)

# Decoder
de_cells = [tf.contrib.rnn.DropoutWrapper(tf.contrib.rnn.BasicLSTMCell(n_hidden), output_keep_prob=keep_prob) for i in range(layers_stacked_count)]
de_cell = tf.contrib.rnn.MultiRNNCell(de_cells)

training_helper = tf.contrib.seq2seq.TrainingHelper(expect, expect_length)

decoder = tf.contrib.seq2seq.BasicDecoder(cell=de_cell, helper=training_helper, initial_state=encoded_states)
final_outputs, final_state, final_sequence_lengths = tf.contrib.seq2seq.dynamic_decode(decoder)

decoder_logits = final_outputs.rnn_output

h = tf.contrib.layers.fully_connected(decoder_logits, n_output)

diff = tf.squared_difference(h, expect)
batch_loss = tf.reduce_sum(diff, axis=1)
loss = tf.reduce_mean(batch_loss)

optimiser = tf.train.AdamOptimizer(1e-3)
training_op = optimiser.minimize(loss)

The graph trains very well and executes fine. However, I am not sure what to do at inference time, since this graph always requires the expect variable (the value which I am trying to predict).
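To make the contrast concrete, here is a minimal pure-Python sketch (no TensorFlow; `toy_step` is a hypothetical stand-in for one decoder step) of what inference has to do instead: feed each prediction back in as the next input, since the ground-truth `expect` values are not available.

```python
def toy_step(prev):
    # Hypothetical one-step "model": simply halves its input. A real
    # seq2seq decoder step would run the LSTM cell plus output projection.
    return 0.5 * prev

def autoregressive_decode(start, n_steps):
    # At inference time each prediction becomes the next decoder input,
    # replacing the ground-truth sequence that TrainingHelper feeds in.
    predictions, prev = [], start
    for _ in range(n_steps):
        prev = toy_step(prev)
        predictions.append(prev)
    return predictions

print(autoregressive_decode(8.0, 3))  # [4.0, 2.0, 1.0]
```

This feedback loop is exactly what an inference-time helper has to implement inside the decoding graph.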

As I understand it, TrainingHelper uses the ground truth as input, so what I need is a different helper function at inference time.

Most seq2seq model implementations I've seen appear to be outdated (tf.contrib.legacy_seq2seq). Some of the most up-to-date models use GreedyEmbeddingHelper, which I'm not sure is appropriate for continuous time series prediction.

Another possible solution I've found is to use the CustomHelper function. However, there is little material out there for me to learn from, and I've just kept banging my head against the wall.

If I want to implement a seq2seq model for time series prediction, what should I do at inference time?

Any help or advice would be greatly appreciated. Thanks in advance!

Recommended Answer

You are right that you need to use another helper function for inference, but you need to share weights between training and inference.

You can do this with tf.variable_scope():

with tf.variable_scope("decode"):
    training_helper = ...

with tf.variable_scope("decode", reuse=True):
    inference_helper = ...
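Since GreedyEmbeddingHelper targets discrete token outputs, one way to handle continuous values is a CustomHelper that feeds each prediction back as the next input. The sketch below is an untested outline, not a drop-in implementation: it assumes the graph from the question (batch_size, n_steps, n_output, de_cell, encoded_states), and it wraps the decoder cell in tf.contrib.rnn.OutputProjectionWrapper so that the fed-back outputs have n_output dimensions.

```python
# Sketch only: batch_size, n_steps, n_output, de_cell and encoded_states
# are assumed to come from the training graph above.
projected_cell = tf.contrib.rnn.OutputProjectionWrapper(de_cell, n_output)

def initialize_fn():
    finished = tf.tile([False], [batch_size])
    # A zero vector as the "go" input; a real model might use the last
    # encoder observation instead.
    start_inputs = tf.zeros([batch_size, n_output], dtype=tf.float32)
    return finished, start_inputs

def sample_fn(time, outputs, state):
    # Continuous outputs: no sampling, pass the predictions straight through.
    return outputs

def next_inputs_fn(time, outputs, state, sample_ids):
    # Stop after n_steps and feed the previous prediction back in.
    finished = tf.tile([tf.greater_equal(time, n_steps)], [batch_size])
    return finished, sample_ids, state

with tf.variable_scope("decode", reuse=True):
    inference_helper = tf.contrib.seq2seq.CustomHelper(
        initialize_fn, sample_fn, next_inputs_fn)
    inference_decoder = tf.contrib.seq2seq.BasicDecoder(
        cell=projected_cell, helper=inference_helper,
        initial_state=encoded_states)
    predictions, _, _ = tf.contrib.seq2seq.dynamic_decode(
        inference_decoder, maximum_iterations=n_steps)
```

Note that for the projection weights to actually be shared, the same OutputProjectionWrapper (rather than the external fully_connected layer) would also have to be used in the training decoder, inside the same variable scope.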

For a more complete example, see one of these two examples:

