How to use tensorflow seq2seq without embeddings?


Problem description

I have been working on LSTM-based time series forecasting with TensorFlow. Now I want to try sequence-to-sequence (seq2seq). The official site has a tutorial that shows NMT with embeddings. So, how can I use this new seq2seq module without embeddings? (directly using time series "sequences").

# 1. Encoder
encoder_cell = tf.contrib.rnn.BasicLSTMCell(LSTM_SIZE)
encoder_outputs, encoder_state = tf.nn.static_rnn(
  encoder_cell,
  x,
  dtype=tf.float32)

# 2. Decoder
decoder_cell = tf.nn.rnn_cell.BasicLSTMCell(LSTM_SIZE)

helper = tf.contrib.seq2seq.TrainingHelper(
    decoder_emb_inp, decoder_lengths, time_major=True)

decoder = tf.contrib.seq2seq.BasicDecoder(
  decoder_cell, helper, encoder_state)

# 3. Dynamic decoding
outputs, _ = tf.contrib.seq2seq.dynamic_decode(decoder)
outputs = outputs[-1]

# 4. Output is the result of a linear activation on the last layer of the RNN
weight = tf.Variable(tf.random_normal([LSTM_SIZE, N_OUTPUTS]))
bias = tf.Variable(tf.random_normal([N_OUTPUTS]))
predictions = tf.matmul(outputs, weight) + bias

What should the arguments to TrainingHelper() be if I use input_seq=x and output_seq=label?

decoder_emb_inp ??? decoder_lengths ???

Here input_seq is the first 8 points of the sequence, and output_seq is the last 2 points of the sequence. Thanks in advance!

Solution

I got it to work without embeddings, using a very rudimentary InferenceHelper:

# `dtypes` here is `from tensorflow.python.framework import dtypes`;
# plain tf.float32 works just as well.
inference_helper = tf.contrib.seq2seq.InferenceHelper(
        sample_fn=lambda outputs: outputs,
        sample_shape=[dim],
        sample_dtype=dtypes.float32,
        start_inputs=start_tokens,
        end_fn=lambda sample_ids: False)
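
With sample_fn being the identity and no next_inputs_fn given, the helper takes whatever the decoder emits at step t (the cell output run through the projection layer, see below) and feeds it straight back in as the input at step t + 1. That is exactly the autoregressive behaviour you want for real-valued sequences, and no embedding lookup happens anywhere.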

My inputs are floats with shape [batch_size, time, dim]. For the example below, dim is 1, but this can easily be extended to more dimensions. Here's the relevant part of the code:

import tensorflow as tf
from tensorflow.python.framework import dtypes

# The projection layer maps the decoder cell's output down to dim features.
projection_layer = tf.layers.Dense(
    units=1,  # = dim
    kernel_initializer=tf.truncated_normal_initializer(
        mean=0.0, stddev=0.1))

# Training Decoder
training_decoder_output = None
with tf.variable_scope("decode"):
    # target_data doesn't exist during the prediction phase.
    if target_data is not None:
        # Prepend the "go" token
        go_tokens = tf.constant(go_token, shape=[batch_size, 1, 1])
        dec_input = tf.concat([go_tokens, target_data], axis=1)

        # Helper for the training process.
        training_helper = tf.contrib.seq2seq.TrainingHelper(
            inputs=dec_input,
            sequence_length=[output_size] * batch_size)

        # Basic decoder
        training_decoder = tf.contrib.seq2seq.BasicDecoder(
            dec_cell, training_helper, enc_state, projection_layer)

        # Perform dynamic decoding using the decoder
        training_decoder_output = tf.contrib.seq2seq.dynamic_decode(
            training_decoder, impute_finished=True,
            maximum_iterations=output_size)[0]
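
One subtlety: dec_input has output_size + 1 time steps after prepending the go token, but sequence_length is only output_size. That is intentional: the helper feeds the go token plus the first output_size - 1 targets, so the decoder emits exactly output_size predictions and the final target is never used as an input (plain teacher forcing).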

# Inference Decoder
# Reuses the same parameters trained by the training process.
with tf.variable_scope("decode", reuse=tf.AUTO_REUSE):
    start_tokens = tf.constant(
        go_token, shape=[batch_size, 1])

    # The sample_ids are the actual outputs in this case (we're not dealing with logits here).
    # My end_fn always returns False because I'm working with a generator that simply stops
    # supplying data. You can extend end_fn as you wish, e.g. append end_tokens and make
    # end_fn return True when the sample_id equals the end token.
    inference_helper = tf.contrib.seq2seq.InferenceHelper(
        sample_fn=lambda outputs: outputs,
        sample_shape=[1],  # again because dim=1
        sample_dtype=dtypes.float32,
        start_inputs=start_tokens,
        end_fn=lambda sample_ids: False)

    # Basic decoder
    inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,
                                                        inference_helper,
                                                        enc_state,
                                                        projection_layer)

    # Perform dynamic decoding using the decoder
    inference_decoder_output = tf.contrib.seq2seq.dynamic_decode(
        inference_decoder, impute_finished=True,
        maximum_iterations=output_size)[0]
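
For completeness, here is a minimal sketch of the pieces the snippet above takes for granted: the encoder that produces enc_state, the shared dec_cell, and a training loss. LSTM_SIZE, batch_size, input_data and target_data are my own placeholder names (sized for the question's 8 input points and 2 output points), not part of the original code; the cells belong before the two decoders and the loss after them:

import tensorflow as tf

LSTM_SIZE = 32    # hypothetical hidden size
batch_size = 16   # hypothetical batch size
output_size = 2   # predict the last 2 points
go_token = 0.0    # hypothetical "go" value

# Placeholders: 8 input steps and 2 target steps, dim = 1.
input_data = tf.placeholder(tf.float32, [batch_size, 8, 1])
target_data = tf.placeholder(tf.float32, [batch_size, output_size, 1])

# Encoder: a plain LSTM whose final state seeds the decoder.
enc_cell = tf.nn.rnn_cell.BasicLSTMCell(LSTM_SIZE)
_, enc_state = tf.nn.dynamic_rnn(enc_cell, input_data, dtype=tf.float32)

# Decoder cell shared by the training and inference decoders
# (same size as the encoder, so enc_state can initialize it).
dec_cell = tf.nn.rnn_cell.BasicLSTMCell(LSTM_SIZE)

# Regression loss on the training decoder's projected outputs,
# which have shape [batch_size, output_size, 1].
loss = tf.losses.mean_squared_error(
    labels=target_data,
    predictions=training_decoder_output.rnn_output)
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)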

Have a look at this question. I also found this tutorial very useful for understanding seq2seq models, although it does use embeddings; just replace its GreedyEmbeddingHelper with an InferenceHelper like the one I posted above.
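
Concretely, the swap is just the helper. A sketch, where embedding_matrix, start_token_ids and end_token_id stand in for whatever the tutorial defines (note the start tokens differ too: GreedyEmbeddingHelper expects int32 ids of shape [batch_size], InferenceHelper expects floats of shape [batch_size, dim]):

# Tutorial version, for discrete tokens: embeds the previous sample id
# at every step. start_token_ids: int32, shape [batch_size].
helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(
    embedding_matrix, start_token_ids, end_token_id)

# Real-valued version: the previous output *is* the next input.
# start_inputs: float32, shape [batch_size, dim].
helper = tf.contrib.seq2seq.InferenceHelper(
    sample_fn=lambda outputs: outputs,
    sample_shape=[dim],
    sample_dtype=tf.float32,
    start_inputs=start_inputs,
    end_fn=lambda sample_ids: False)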

P.S. I posted the full code at https://github.com/Andreea-G/tensorflow_examples
