Sequence to Sequence - for time series prediction


Problem description

I've tried to build a sequence to sequence model to predict a sensor signal over time based on its first few inputs (see figure below).

The model works OK, but I want to 'spice things up' and try to add an attention layer between the two LSTM layers.

Model code:

import os
import datetime

import keras
from keras.callbacks import TensorBoard  # assuming standalone Keras 2.x, as in the original code

def train_model(x_train, y_train, n_units=32, n_steps=20, epochs=200,
                n_steps_out=1):

    filters = 250       # note: unused in this model
    kernel_size = 3     # note: unused in this model

    logdir = os.path.join(logs_base_dir, datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
    tensorboard_callback = TensorBoard(log_dir=logdir, update_freq=1)

    # get number of features from input data
    n_features = x_train.shape[2]
    # setup network
    # (feel free to use other combination of layers and parameters here)
    model = keras.models.Sequential()
    model.add(keras.layers.LSTM(n_units, activation='relu',
                                return_sequences=True,
                                input_shape=(n_steps, n_features)))
    model.add(keras.layers.LSTM(n_units, activation='relu'))
    model.add(keras.layers.Dense(64, activation='relu'))
    model.add(keras.layers.Dropout(0.5))
    model.add(keras.layers.Dense(n_steps_out))
    model.compile(optimizer='adam', loss='mse', metrics=['mse'])
    # train network
    history = model.fit(x_train, y_train, epochs=epochs,
                        validation_split=0.1, verbose=1, callbacks=[tensorboard_callback])
    return model, history

I've looked at the documentation but I'm a bit lost. Any help adding the attention layer or comments on the current model would be appreciated.
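Something like the following is the kind of arrangement I'm imagining (only a rough sketch, assuming tf.keras and its built-in keras.layers.Attention layer for Luong-style dot-product attention; build_attention_model and the layer arrangement are illustrative and untested):

from tensorflow import keras

def build_attention_model(n_steps, n_features, n_units=32, n_steps_out=1):
    inputs = keras.layers.Input(shape=(n_steps, n_features))
    # First LSTM returns the full sequence so attention has something to attend over.
    seq = keras.layers.LSTM(n_units, activation='relu', return_sequences=True)(inputs)
    # Built-in dot-product attention; query and value are both the LSTM output
    # sequence (i.e. self-attention over the time steps).
    attended = keras.layers.Attention()([seq, seq])
    # Second LSTM collapses the attended sequence into a single vector.
    vec = keras.layers.LSTM(n_units, activation='relu')(attended)
    x = keras.layers.Dense(64, activation='relu')(vec)
    x = keras.layers.Dropout(0.5)(x)
    outputs = keras.layers.Dense(n_steps_out)(x)
    model = keras.models.Model(inputs, outputs)
    model.compile(optimizer='adam', loss='mse', metrics=['mse'])
    return model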

Update: After Googling around, I'm starting to think I got it all wrong, so I rewrote my code.

I'm trying to migrate a seq2seq model that I've found in this GitHub repository. In the repository code, the problem demonstrated is predicting a randomly generated sine wave based on some early samples.

I have a similar problem, and I'm trying to change the code to fit my needs.

Differences:

  • My training data shape is (439, 5, 20): 439 different signals, 5 time steps each, with 20 features
  • I don't use fit_generator when fitting my data

Hyperparameters:

layers = [35, 35] # Number of hidden neurons in each layer of the encoder and decoder

learning_rate = 0.01
decay = 0 # Learning rate decay
optimiser = keras.optimizers.Adam(lr=learning_rate, decay=decay) # Other possible optimiser "sgd" (Stochastic Gradient Descent)

num_input_features = train_x.shape[2] # The dimensionality of the input at each time step (20 here).
num_output_features = 1 # The dimensionality of the output at each time step. In this case a 1D signal.
# There is no reason for the input sequence to be of the same dimension as the output sequence.
# For instance, using 3 input signals: consumer confidence, inflation and house prices to predict future house prices.

loss = "mse" # Other loss functions are possible, see Keras documentation.

# Regularisation isn't really needed for this application
lambda_regulariser = 0.000001 # Will not be used if regulariser is None
regulariser = None # Possible regulariser: keras.regularizers.l2(lambda_regulariser)

batch_size = 128
steps_per_epoch = 200 # batch_size * steps_per_epoch = total number of training examples
epochs = 100

input_sequence_length = n_steps # Length of the sequence used by the encoder
target_sequence_length = 31 - n_steps # Length of the sequence predicted by the decoder
num_steps_to_predict = 20 # Length to use when testing the model


Encoder code:

# Define an input sequence.

encoder_inputs = keras.layers.Input(shape=(None, num_input_features), name='encoder_input')

# Create a list of RNN Cells, these are then concatenated into a single layer
# with the RNN layer.
encoder_cells = []
for hidden_neurons in layers:
    encoder_cells.append(keras.layers.GRUCell(hidden_neurons,
                                              kernel_regularizer=regulariser,
                                              recurrent_regularizer=regulariser,
                                              bias_regularizer=regulariser))

encoder = keras.layers.RNN(encoder_cells, return_state=True, name='encoder_layer')

encoder_outputs_and_states = encoder(encoder_inputs)

# Discard encoder outputs and only keep the states.
# The outputs are of no interest to us, the encoder's
# job is to create a state describing the input sequence.
encoder_states = encoder_outputs_and_states[1:]


Decoder code:

# The decoder input will be set to zero (see random_sine function of the utils module).
# Do not worry about the input size being 1, I will explain that in the next cell.
decoder_inputs = keras.layers.Input(shape=(None, 20), name='decoder_input')

decoder_cells = []
for hidden_neurons in layers:
    decoder_cells.append(keras.layers.GRUCell(hidden_neurons,
                                              kernel_regularizer=regulariser,
                                              recurrent_regularizer=regulariser,
                                              bias_regularizer=regulariser))

decoder = keras.layers.RNN(decoder_cells, return_sequences=True, return_state=True, name='decoder_layer')

# Set the initial state of the decoder to be the output state of the encoder.
# This is the fundamental part of the encoder-decoder.
decoder_outputs_and_states = decoder(decoder_inputs, initial_state=encoder_states)

# Only select the output of the decoder (not the states)
decoder_outputs = decoder_outputs_and_states[0]

# Apply a dense layer with linear activation to set the output to the correct dimension
# and scale (tanh is the default activation for GRU in Keras; our output sine function can be larger than 1)
decoder_dense = keras.layers.Dense(num_output_features,
                                   activation='linear',
                                   kernel_regularizer=regulariser,
                                   bias_regularizer=regulariser)

decoder_outputs = decoder_dense(decoder_outputs)


Model summary:

model = keras.models.Model(inputs=[encoder_inputs, decoder_inputs],
                           outputs=decoder_outputs)
model.compile(optimizer=optimiser, loss=loss)
model.summary()


Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
encoder_input (InputLayer)      (None, None, 20)     0                                            
__________________________________________________________________________________________________
decoder_input (InputLayer)      (None, None, 20)     0                                            
__________________________________________________________________________________________________
encoder_layer (RNN)             [(None, 35), (None,  13335       encoder_input[0][0]              
__________________________________________________________________________________________________
decoder_layer (RNN)             [(None, None, 35), ( 13335       decoder_input[0][0]              
                                                                 encoder_layer[0][1]              
                                                                 encoder_layer[0][2]              
__________________________________________________________________________________________________
dense_5 (Dense)                 (None, None, 1)      36          decoder_layer[0][0]              
==================================================================================================
Total params: 26,706
Trainable params: 26,706
Non-trainable params: 0
__________________________________________________________________________________________________

When trying to fit the model:

history = model.fit([train_x, decoder_inputs],train_y, epochs=epochs,
                        validation_split=0.3, verbose=1)

I get the following error:

When feeding symbolic tensors to a model, we expect the tensors to have a static batch size. Got tensor with shape: (None, None, 20)

What am I doing wrong?

Solution

THIS IS THE ANSWER TO THE EDITED QUESTION

First of all, when you call fit, decoder_inputs is a tensor and you can't use it to fit your model. The author of the code you cited uses an array of zeros, so you have to do the same (I do it in the dummy example below).

Secondly, look at your output layer in the model summary... it is 3D, so you have to manage your target as a 3D array.
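For example, if your train_y is currently 2D (samples, timesteps), something like this adds the missing axis (just a sketch, assuming that shape; the original answer doesn't show this step):

import numpy as np

# assumed shapes: (439, 56) -> (439, 56, 1), matching the (None, None, 1) output layer
train_y = np.expand_dims(train_y, axis=-1)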

Thirdly, the decoder input must have 1 feature dimension, not the 20 you reported.

Set the initial parameters:

import numpy as np
import keras  # assuming standalone Keras 2.x, as in the original code

layers = [35, 35]
learning_rate = 0.01
decay = 0 
optimiser = keras.optimizers.Adam(lr=learning_rate, decay=decay)

num_input_features = 20
num_output_features = 1
loss = "mse"

lambda_regulariser = 0.000001
regulariser = None

batch_size = 128
steps_per_epoch = 200
epochs = 100

Define the encoder:

encoder_inputs = keras.layers.Input(shape=(None, num_input_features), name='encoder_input')

encoder_cells = []
for hidden_neurons in layers:
    encoder_cells.append(keras.layers.GRUCell(hidden_neurons,
                                              kernel_regularizer=regulariser,
                                              recurrent_regularizer=regulariser,
                                              bias_regularizer=regulariser))

encoder = keras.layers.RNN(encoder_cells, return_state=True, name='encoder_layer')
encoder_outputs_and_states = encoder(encoder_inputs)
encoder_states = encoder_outputs_and_states[1:] # only keep the states

Define the decoder (1 feature dimension input!):

decoder_inputs = keras.layers.Input(shape=(None, 1), name='decoder_input') #### <=== must be 1

decoder_cells = []
for hidden_neurons in layers:
    decoder_cells.append(keras.layers.GRUCell(hidden_neurons,
                                              kernel_regularizer=regulariser,
                                              recurrent_regularizer=regulariser,
                                              bias_regularizer=regulariser))

decoder = keras.layers.RNN(decoder_cells, return_sequences=True, return_state=True, name='decoder_layer')
decoder_outputs_and_states = decoder(decoder_inputs, initial_state=encoder_states)

decoder_outputs = decoder_outputs_and_states[0] # only keep the output sequence
decoder_dense = keras.layers.Dense(num_output_features,
                                   activation='linear',
                                   kernel_regularizer=regulariser,
                                   bias_regularizer=regulariser)

decoder_outputs = decoder_dense(decoder_outputs)

Define the model:

model = keras.models.Model(inputs=[encoder_inputs, decoder_inputs], outputs=decoder_outputs)
model.compile(optimizer=optimiser, loss=loss)
model.summary()

Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
encoder_input (InputLayer)      (None, None, 20)     0                                            
__________________________________________________________________________________________________
decoder_input (InputLayer)      (None, None, 1)      0                                            
__________________________________________________________________________________________________
encoder_layer (RNN)             [(None, 35), (None,  13335       encoder_input[0][0]              
__________________________________________________________________________________________________
decoder_layer (RNN)             [(None, None, 35), ( 11340       decoder_input[0][0]              
                                                                 encoder_layer[0][1]              
                                                                 encoder_layer[0][2]              
__________________________________________________________________________________________________
dense_4 (Dense)                 (None, None, 1)      36          decoder_layer[0][0]              
==================================================================================================

This is my dummy data, with the same shapes as yours. Pay attention to decoder_zero_inputs: it has the same dimensions as your y, but it is an array of zeros.

train_x = np.random.uniform(0,1, (439, 5, 20))
train_y = np.random.uniform(0,1, (439, 56, 1))
validation_x = np.random.uniform(0,1, (10, 5, 20))
validation_y = np.random.uniform(0,1, (10, 56, 1))
decoder_zero_inputs = np.zeros((439, 56, 1)) ### <=== note: zeros, shaped like train_y

Fit:

history = model.fit([train_x, decoder_zero_inputs],train_y, epochs=epochs,
                     validation_split=0.3, verbose=1)

Epoch 1/100
307/307 [==============================] - 2s 8ms/step - loss: 0.1038 - val_loss: 0.0845
Epoch 2/100
307/307 [==============================] - 1s 2ms/step - loss: 0.0851 - val_loss: 0.0832
Epoch 3/100
307/307 [==============================] - 1s 2ms/step - loss: 0.0842 - val_loss: 0.0828

Prediction on the validation set:

pred_validation = model.predict([validation_x, np.zeros((10,56,1))])
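pred_validation comes back with shape (10, 56, 1), one predicted sequence per validation sample, matching validation_y.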
