Effect of setting sequence_length on the returned state in dynamic_rnn
Question
Suppose I have an LSTM network to classify timeseries of length 10. The standard way to feed a timeseries to the LSTM is to form a [batch size X 10 X vector size] array and feed it to the LSTM:
self.rnn_t, self.new_state = tf.nn.dynamic_rnn(
    inputs=self.X, cell=self.lstm_cell, dtype=tf.float32, initial_state=self.state_in)
When using the sequence_length parameter I can specify the length of the timeseries.
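To make the sequence_length semantics concrete: for steps past a sequence's own length, dynamic_rnn emits zero outputs and copies the state through unchanged, so the returned state is the state at the last valid step. A minimal NumPy sketch of that behavior (a hypothetical tanh-RNN re-implementation for illustration, not TensorFlow's actual code):

```python
import numpy as np

def masked_rnn(x, seq_len, W, U, b):
    """Sketch of dynamic_rnn's sequence_length semantics with a tanh RNN:
    past each sequence's own length, the output is zeroed and the state
    stops updating."""
    batch, time_steps, _ = x.shape
    units = W.shape[1]
    state = np.zeros((batch, units))
    outputs = np.zeros((batch, time_steps, units))
    for t in range(time_steps):
        new_state = np.tanh(x[:, t] @ W + state @ U + b)
        active = (t < seq_len)[:, None]                   # still within length?
        state = np.where(active, new_state, state)        # freeze state afterwards
        outputs[:, t] = np.where(active, new_state, 0.0)  # zero outputs afterwards
    return outputs, state

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 10, 5))
W = rng.normal(size=(5, 4)) * 0.1
U = rng.normal(size=(4, 4)) * 0.1
b = np.zeros(4)
out, st = masked_rnn(x, np.array([4, 10]), W, U, b)
```

Here the first sequence has length 4, so out[0, 4:] is all zeros and st[0] equals out[0, 3], the output at its last valid step.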
My question, for the scenario defined above: if I call dynamic_rnn 10 times with a vector of size [batch size X 1 X vector size], taking the matching index in the timeseries and passing the returned state as the initial_state of the next call, would I end up having the same results, outputs and state, or not?
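The question can be checked framework-agnostically: as long as both runs share the same weights and the state is threaded through, step-by-step and all-at-once evaluation perform the identical sequence of operations. A small NumPy sketch, assuming a plain tanh-RNN cell as a stand-in for the real LSTM:

```python
import numpy as np

def rnn_step(x_t, state, W, U, b):
    """One tanh-RNN step (stand-in for what an RNN cell computes)."""
    return np.tanh(x_t @ W + state @ U + b)

def run_series(x, init_state, W, U, b):
    """Feed a [batch, time, features] slice in one call, starting from init_state."""
    state = init_state
    outs = []
    for t in range(x.shape[1]):
        state = rnn_step(x[:, t], state, W, U, b)
        outs.append(state)
    return np.stack(outs, axis=1), state

# Same weights for both runs
rng = np.random.default_rng(42)
x = rng.normal(size=(2, 10, 5))
W = rng.normal(size=(5, 8)) * 0.1
U = rng.normal(size=(8, 8)) * 0.1
b = np.zeros(8)
zero = np.zeros((2, 8))

# (a) one call over the full series
out_full, state_full = run_series(x, zero, W, U, b)

# (b) ten calls over [batch, 1, features] slices, chaining the state
state = zero
outs = []
for t in range(10):
    out, state = run_series(x[:, t:t + 1], state, W, U, b)
    outs.append(out)
out_steps = np.concatenate(outs, axis=1)

print(np.allclose(out_full, out_steps), np.allclose(state_full, state))  # True True
```

Both variants execute the same per-step arithmetic in the same order, so the outputs and final state agree, which matches the conclusion of the accepted answer below.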
Answer
You should be getting the same output in both cases. I'll illustrate this with a toy example below:
> 1. Setting up the inputs and the parameters of the network:
import tensorflow as tf        # TensorFlow 1.x
import numpy.testing as npt

# Set RNN params
batch_size = 2
time_steps = 10
vector_size = 5

# Create a random input
dataset = tf.random_normal((batch_size, time_steps, vector_size), dtype=tf.float32, seed=42)

# Input tensor to the RNN
X = tf.Variable(dataset, dtype=tf.float32)
> 2. LSTM fed the full timeseries as a single input: [batch_size, time_steps, vector_size]
# Initializers cannot be set to a random value, so use a fixed value
# (otherwise the two networks would start from different weights).
with tf.variable_scope('rnn_full', initializer=tf.initializers.ones()):
    basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=10)
    output_f, state_f = tf.nn.dynamic_rnn(basic_cell, X, dtype=tf.float32)
> 3. LSTM called in a loop of time_steps iterations, where each call is fed a single timestep as input: [batch_size, 1, vector_size] and the returned state is passed as the initial state of the next call
# Unstack the inputs across time_steps
unstack_X = tf.unstack(X, axis=1)
outputs = []
with tf.variable_scope('rnn_unstacked', initializer=tf.initializers.ones()):
    basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=10)
    # init_state has to be set to zero
    init_state = basic_cell.zero_state(batch_size, dtype=tf.float32)
    # Create a loop of N RNN calls, N = time_steps.
    for i in range(len(unstack_X)):
        output, state = tf.nn.dynamic_rnn(basic_cell, tf.expand_dims(unstack_X[i], 1),
                                          dtype=tf.float32, initial_state=init_state)
        # Overwrite init_state with the newly returned state
        init_state = state
        outputs.append(output)

# Transform the output to [batch_size, time_steps, num_units]
output_r = tf.transpose(tf.squeeze(tf.stack(outputs)), [1, 0, 2])
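The final stack/squeeze/transpose only rearranges shapes so that the looped run can be compared element-wise with the full run. The same shape bookkeeping in NumPy, with hypothetical zero-valued stand-ins for the per-call outputs and the toy example's dimensions (num_units = 10):

```python
import numpy as np

batch_size, time_steps, num_units = 2, 10, 10

# Stand-ins for the per-call outputs collected in the loop,
# each of shape [batch_size, 1, num_units]
outputs = [np.zeros((batch_size, 1, num_units)) for _ in range(time_steps)]

stacked = np.stack(outputs)             # [time_steps, batch_size, 1, num_units]
squeezed = np.squeeze(stacked, axis=2)  # [time_steps, batch_size, num_units]
output_r = np.transpose(squeezed, (1, 0, 2))  # [batch_size, time_steps, num_units]

print(output_r.shape)  # (2, 10, 10)
```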
> 4. Checking the outputs
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out_f, st_f = sess.run([output_f, state_f])
    out_r, st_r = sess.run([output_r, state])
    npt.assert_almost_equal(out_f, out_r)
    npt.assert_almost_equal(st_f, st_r)
Both the states and the outputs match.