How to use tensorflow seq2seq without embeddings?

Question

I have been working on LSTM for time-series forecasting using TensorFlow. Now I want to try sequence-to-sequence (seq2seq). The official site has a tutorial that shows NMT with embeddings. So, how can I use this new seq2seq module without embeddings (directly using time-series sequences)?

# 1. Encoder
encoder_cell = tf.contrib.rnn.BasicLSTMCell(LSTM_SIZE)
encoder_outputs, encoder_state = tf.nn.static_rnn(
  encoder_cell,
  x,
  dtype=tf.float32)

# Decoder
decoder_cell = tf.nn.rnn_cell.BasicLSTMCell(LSTM_SIZE)


helper = tf.contrib.seq2seq.TrainingHelper(
    decoder_emb_inp, decoder_lengths, time_major=True)


decoder = tf.contrib.seq2seq.BasicDecoder(
  decoder_cell, helper, encoder_state)

# Dynamic decoding
outputs, _ = tf.contrib.seq2seq.dynamic_decode(decoder)
outputs = outputs[-1]

# output is result of linear activation of last layer of RNN
weight = tf.Variable(tf.random_normal([LSTM_SIZE, N_OUTPUTS]))
bias = tf.Variable(tf.random_normal([N_OUTPUTS]))
predictions = tf.matmul(outputs, weight) + bias

What should the args for TrainingHelper() be if I use input_seq=x and output_seq=label?

decoder_emb_inp ??? decoder_lengths ???

Here input_seq is the first 8 points of the sequence, and output_seq is the last 2 points. Thanks in advance!
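For reference, TrainingHelper itself does not require embeddings: its inputs argument is simply a float tensor of decoder input sequences, and sequence_length is a per-example length vector. A minimal sketch of what the two arguments could look like for this 8-in/2-out setup (names such as decoder_inputs and BATCH_SIZE are hypothetical; the tensors here are batch-major, so time_major=False):

BATCH_SIZE = 32                       # hypothetical fixed batch size
# The decoder is fed the real-valued target sequence (the last 2 points), not embedded token IDs.
decoder_inputs = tf.placeholder(tf.float32, [BATCH_SIZE, 2, 1])
decoder_lengths = [2] * BATCH_SIZE    # every example decodes exactly 2 steps

helper = tf.contrib.seq2seq.TrainingHelper(
    inputs=decoder_inputs,            # real-valued sequences instead of embeddings
    sequence_length=decoder_lengths,
    time_major=False)                 # inputs above are batch-major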

Answer

I got it to work without embeddings using a very rudimentary InferenceHelper:

inference_helper = tf.contrib.seq2seq.InferenceHelper(
        sample_fn=lambda outputs: outputs,
        sample_shape=[dim],
        sample_dtype=tf.float32,
        start_inputs=start_tokens,
        end_fn=lambda sample_ids: False)

My inputs are floats with the shape [batch_size, time, dim]. For the example below, dim would be 1, but this can easily be extended to more dimensions. Here's the relevant part of the code:

projection_layer = tf.layers.Dense(
    units=1,  # = dim
    kernel_initializer=tf.truncated_normal_initializer(
        mean=0.0, stddev=0.1))

# Training Decoder
training_decoder_output = None
with tf.variable_scope("decode"):
    # output_data doesn't exist during prediction phase.
    if output_data is not None:
        # Prepend the "go" token
        go_tokens = tf.constant(go_token, shape=[batch_size, 1, 1])
        dec_input = tf.concat([go_tokens, target_data], axis=1)

        # Helper for the training process.
        training_helper = tf.contrib.seq2seq.TrainingHelper(
            inputs=dec_input,
            sequence_length=[output_size] * batch_size)

        # Basic decoder
        training_decoder = tf.contrib.seq2seq.BasicDecoder(
            dec_cell, training_helper, enc_state, projection_layer)

        # Perform dynamic decoding using the decoder
        training_decoder_output = tf.contrib.seq2seq.dynamic_decode(
            training_decoder, impute_finished=True,
            maximum_iterations=output_size)[0]

# Inference Decoder
# Reuses the same parameters trained by the training process.
with tf.variable_scope("decode", reuse=tf.AUTO_REUSE):
    start_tokens = tf.constant(
        go_token, shape=[batch_size, 1])

    # The sample_ids are the actual output in this case (not dealing with any logits here).
    # My end_fn is always False because I'm working with a generator that will stop giving 
    # more data. You may extend the end_fn as you wish. E.g. you can append end_tokens 
    # and make end_fn be true when the sample_id is the end token.
    inference_helper = tf.contrib.seq2seq.InferenceHelper(
        sample_fn=lambda outputs: outputs,
        sample_shape=[1],  # again because dim=1
        sample_dtype=tf.float32,
        start_inputs=start_tokens,
        end_fn=lambda sample_ids: False)

    # Basic decoder
    inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,
                                                        inference_helper,
                                                        enc_state,
                                                        projection_layer)

    # Perform dynamic decoding using the decoder
    inference_decoder_output = tf.contrib.seq2seq.dynamic_decode(
        inference_decoder, impute_finished=True,
        maximum_iterations=output_size)[0]
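
Note that the snippet above references enc_state and dec_cell, which the answer does not define. A minimal sketch of how they might be built (assuming an LSTM encoder over float inputs input_data of shape [batch_size, 8, 1]; the names input_data and LSTM_SIZE are assumptions, not part of the original answer):

# Hypothetical encoder, not shown in the original answer.
enc_cell = tf.nn.rnn_cell.BasicLSTMCell(num_units=LSTM_SIZE)
_, enc_state = tf.nn.dynamic_rnn(enc_cell, input_data, dtype=tf.float32)

# Decoder cell with the same number of units, so enc_state can serve as its initial state.
dec_cell = tf.nn.rnn_cell.BasicLSTMCell(num_units=LSTM_SIZE)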

Have a look at this question. I also found this tutorial very helpful for understanding seq2seq models, although it does use embeddings; just replace their GreedyEmbeddingHelper with an InferenceHelper like the one I posted above.
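
To actually train on the time series, the training decoder's outputs can be compared against the targets with a regression loss. A minimal sketch (assuming target_data has shape [batch_size, output_size, 1], matching training_decoder_output.rnn_output from the code above; the optimizer choice is arbitrary):

# Hypothetical training objective, not part of the original answer.
# rnn_output has already passed through projection_layer, so it has shape
# [batch_size, output_size, 1] and can be compared directly with target_data.
predictions = training_decoder_output.rnn_output
loss = tf.losses.mean_squared_error(labels=target_data, predictions=predictions)
train_op = tf.train.AdamOptimizer(learning_rate=1e-3).minimize(loss)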

P.S. I posted the full code at https://github.com/Andreea-G/tensorflow_examples
