For tf.nn.rnn_cell.BasicRNN, what's the difference between the state and the output?
Question
As I know,

state = tanh(W * input + U * pre_state + b)
output = state * W_out

But for tf.nn.rnn_cell.BasicRNN, I only pass num_units (which I think is the dimension of the state), and the API page says for the most basic RNN: output = new_state = activation(W * input + U * state + B). So can I assume that in this cell state = output, and that the cell only has W, U, and b, but no W_out?
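The two-step formulation in the question can be sketched in plain NumPy. All sizes here are hypothetical (input dim 3, state dim 4, output dim 2), chosen only to make the shapes visible:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))       # input-to-hidden weights
U = rng.standard_normal((4, 4))       # hidden-to-hidden weights
b = np.zeros(4)                       # hidden bias
W_out = rng.standard_normal((2, 4))   # output projection (the W_out in the question)

x = rng.standard_normal(3)            # current input
pre_state = np.zeros(4)               # previous hidden state

state = np.tanh(W @ x + U @ pre_state + b)  # new hidden state
output = W_out @ state                      # projected output

print(state.shape, output.shape)  # state is 4-dim, output is 2-dim
```

Note that the state and the output can have different dimensions precisely because of the extra W_out matrix.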
Answer
The "vanilla" RNN you describe computes the new hidden state and then uses an output projection to compute the output. TensorFlow separates the "compute new hidden state" and "compute output projection" parts. BasicRNN just returns the hidden state as its output; another class called OutputProjectionWrapper can then apply a projection to it (and multiplying by w_out is exactly such a projection). To get the behavior you want, you need to do:
tf.nn.rnn_cell.OutputProjectionWrapper(tf.nn.rnn_cell.BasicRNNCell(...), num_output_units)
This also allows you to have a different number of neurons in your hidden state than in your output projection.
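The split described in the answer can be sketched in NumPy (assuming tanh activation; basic_rnn_step and projected_step are illustrative helpers, not TensorFlow API):

```python
import numpy as np

def basic_rnn_step(x, state, W, U, b):
    """Mimics BasicRNNCell: the output and the new state are the same tensor."""
    new_state = np.tanh(W @ x + U @ state + b)
    return new_state, new_state   # (output, new_state) -- identical

def projected_step(x, state, W, U, b, W_proj):
    """Mimics OutputProjectionWrapper around the basic cell."""
    out, new_state = basic_rnn_step(x, state, W, U, b)
    return W_proj @ out, new_state  # output dim can now differ from state dim

rng = np.random.default_rng(1)
W = rng.standard_normal((4, 3))
U = rng.standard_normal((4, 4))
b = np.zeros(4)
W_proj = rng.standard_normal((2, 4))  # plays the role of w_out
x, s0 = rng.standard_normal(3), np.zeros(4)

out, s1 = basic_rnn_step(x, s0, W, U, b)
assert out is s1                      # for the basic cell, output == state
proj_out, _ = projected_step(x, s0, W, U, b, W_proj)
print(proj_out.shape)                 # the projection changes the output size
```

So the answer to the question is yes: in BasicRNN alone, state = output and there is no w_out; the wrapper is what adds the w_out multiplication back.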