ValueError: Attempt to reuse RNNCell with a different variable scope than its first use
Problem description
The following code snippet:
import tensorflow as tf
from tensorflow.contrib import rnn

hidden_size = 100
batch_size = 100
num_steps = 100
num_layers = 100
is_training = True
keep_prob = 0.4

input_data = tf.placeholder(tf.float32, [batch_size, num_steps])
lstm_cell = rnn.BasicLSTMCell(hidden_size, forget_bias=0.0, state_is_tuple=True)
if is_training and keep_prob < 1:
    lstm_cell = rnn.DropoutWrapper(lstm_cell)
cell = rnn.MultiRNNCell([lstm_cell for _ in range(num_layers)], state_is_tuple=True)

_initial_state = cell.zero_state(batch_size, tf.float32)

iw = tf.get_variable("input_w", [1, hidden_size])
ib = tf.get_variable("input_b", [hidden_size])
inputs = [tf.nn.xw_plus_b(i_, iw, ib) for i_ in tf.split(input_data, num_steps, 1)]
if is_training and keep_prob < 1:
    inputs = [tf.nn.dropout(input_, keep_prob) for input_ in inputs]

outputs, states = rnn.static_rnn(cell, inputs, initial_state=_initial_state)
produces the following error:
ValueError: Attempt to reuse RNNCell <tensorflow.contrib.rnn.python.ops.core_rnn_cell_impl.BasicLSTMCell object at 0x10210d5c0> with a different variable scope than its first use. First use of cell was with scope 'rnn/multi_rnn_cell/cell_0/basic_lstm_cell', this attempt is with scope 'rnn/multi_rnn_cell/cell_1/basic_lstm_cell'.
Please create a new instance of the cell if you would like it to use a different set of weights.
If before you were using: MultiRNNCell([BasicLSTMCell(...)] * num_layers), change to: MultiRNNCell([BasicLSTMCell(...) for _ in range(num_layers)]).
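The distinction the error message draws here comes down to a plain Python aliasing detail: `[cell] * num_layers` repeats references to one object, while a comprehension that calls a constructor builds a fresh object per layer. A minimal, TensorFlow-free sketch (the `Cell` class is just an illustrative stand-in for the real cell constructor):

```python
# [cell] * n fills the list with the SAME instance; a comprehension that
# calls the constructor each iteration yields one distinct instance per slot.
class Cell:
    """Stand-in for an RNN cell; only object identity matters here."""
    pass

num_layers = 3

shared = Cell()
aliased = [shared] * num_layers             # three references, one object
distinct = [Cell() for _ in range(num_layers)]  # three separate objects

print(len({id(c) for c in aliased}))    # 1: every slot is the same instance
print(len({id(c) for c in distinct}))   # 3: one instance per layer
```

This is why the stacked RNN complains: every layer of the MultiRNNCell ends up asking the same cell instance to create its variables, each time under a different layer scope.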
If before you were using the same cell instance as both the forward and reverse cell of a bidirectional RNN, simply create two instances (one for forward, one for reverse).
In May 2017, we will start transitioning this cell's behavior to use existing stored weights, if any, when it is called with scope=None (which can lead to silent model degradation, so this error will remain until then.)
How can I fix this?
My TensorFlow version is 1.0.
Accepted answer
As suggested in the comments, my solution is changing this:
cell = tf.contrib.rnn.LSTMCell(state_size, state_is_tuple=True)
cell = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=0.8)
rnn_cells = tf.contrib.rnn.MultiRNNCell([cell for _ in range(num_layers)], state_is_tuple=True)
outputs, current_state = tf.nn.dynamic_rnn(rnn_cells, x, initial_state=rnn_tuple_state, scope="layer")
into this:
def lstm_cell():
    cell = tf.contrib.rnn.LSTMCell(state_size, reuse=tf.get_variable_scope().reuse)
    return tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=0.8)

rnn_cells = tf.contrib.rnn.MultiRNNCell([lstm_cell() for _ in range(num_layers)], state_is_tuple=True)
outputs, current_state = tf.nn.dynamic_rnn(rnn_cells, x, initial_state=rnn_tuple_state)
This seems to solve the reusability problem. I don't fully understand the underlying cause, but it fixed the issue for me on TF 1.1rc2.
Cheers!
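The behavior behind the error can be modeled in a few lines of plain Python, without TensorFlow: in TF 1.x each cell instance remembers the variable scope it was first built in and refuses to be rebuilt under a different one. The MockCell class and its scope check below are illustrative stand-ins for that mechanism, not real TF API:

```python
# Toy model of the TF 1.x check: a cell records the scope of its first use
# and raises if it is later built under a different scope.
class MockCell:
    def __init__(self):
        self._scope = None  # scope recorded on first use

    def build(self, scope):
        if self._scope is None:
            self._scope = scope  # first use: remember where we were built
        elif self._scope != scope:
            raise ValueError(
                "Attempt to reuse cell with a different variable scope: "
                "first use %r, this attempt %r" % (self._scope, scope))

def run_stack(cells):
    """Build each layer's cell under its own per-layer scope name."""
    for i, cell in enumerate(cells):
        cell.build("multi_rnn_cell/cell_%d" % i)

shared = MockCell()
try:
    run_stack([shared] * 3)      # same instance under cell_0 then cell_1 -> error
except ValueError as e:
    print("shared instance fails:", e)

run_stack([MockCell() for _ in range(3)])  # one instance per layer -> fine
print("fresh instances succeed")
```

A factory function like the answer's `lstm_cell()` guarantees the "fresh instances" branch: every call returns a new object, so each layer records its own scope on first use.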