tensorflow embeddings don't exist after first RNN example


Problem description

I've set up a print statement, and I've noticed that for the first batch fed into the RNN the embeddings exist, but after the second batch they don't, and I get the following error:

ValueError: Variable RNNLM/RNNLM/Embedding/Adam_2/ does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=None in VarScope?

Here is my code for generating the embeddings:

def add_embedding(self):
    with tf.device('/gpu:0'):
      # Shared embedding matrix, created (or later reused) via tf.get_variable.
      embedding = tf.get_variable("Embedding", [len(self.vocab), self.config.embed_size])
      e_x = tf.nn.embedding_lookup(embedding, self.input_placeholder)
      # Split along the time dimension (old tf.split(dim, num, value) argument order)
      # into num_steps tensors of shape [batch_size, embed_size].
      inputs = [tf.squeeze(s, [1]) for s in tf.split(1, self.config.num_steps, e_x)]
      return inputs

Here is how the model is set up; this is where I suspect the problem lies:

def add_model(self, inputs):
    with tf.variable_scope("input_drop"):
      inputs_drop = [tf.nn.dropout(i, self.dropout_placeholder) for i in inputs]

    with tf.variable_scope("RNN") as scope:
      self.initial_state = tf.zeros([self.config.batch_size, self.config.hidden_size], tf.float32)
      state = self.initial_state
      states = []
      for t, e in enumerate(inputs_drop):
        print "t is {0}".format(t)
        if t > 0:
          # Share Hidden, I and b_1 across all time steps.
          scope.reuse_variables()
        H = tf.get_variable("Hidden", [self.config.hidden_size, self.config.hidden_size])
        I = tf.get_variable("I", [self.config.embed_size, self.config.hidden_size])
        b_1 = tf.get_variable("b_1", (self.config.hidden_size,))

        state = tf.sigmoid(tf.matmul(state, H) + tf.matmul(e, I) + b_1)
        states.append(state)

    with tf.variable_scope("output_dropout"):
      rnn_outputs = [tf.nn.dropout(o, self.dropout_placeholder) for o in states]
    return rnn_outputs
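
The reuse inside the time-step loop is the standard weight-sharing pattern: the variables are created on the first call to tf.get_variable (t == 0), and only after that is the scope switched to reuse. A stripped-down sketch of that contract (the names here are illustrative, not from the post):

import tensorflow as tf

with tf.variable_scope("demo") as scope:
  a = tf.get_variable("w", [2, 2])   # created on the first call
  scope.reuse_variables()
  b = tf.get_variable("w", [2, 2])   # returns the existing variable
  assert a is b
  # Asking for a variable that was never created, e.g.
  # tf.get_variable("w_new", [2, 2]), would raise the same
  # "does not exist, or was not created with tf.get_variable()" ValueError.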

The issue arises when I get to the training op that minimizes the loss, defined as follows:

def add_training_op(self, loss):
    opt = tf.train.AdamOptimizer(self.config.lr)
    train_op = opt.minimize(loss)
    return train_op
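
For context, AdamOptimizer.minimize() creates extra slot variables next to each trainable variable (names like Embedding/Adam and Embedding/Adam_1 for the first and second moments). If minimize() is called while the enclosing variable scope is already in reuse mode, those slots cannot be created and tf.get_variable fails with exactly the kind of "does not exist, or was not created with tf.get_variable()" error shown above. A minimal sketch of that failure pattern under the old graph-mode API (this is not code from the original post, and exact slot names may differ by TF version):

import tensorflow as tf

with tf.variable_scope("RNNLM") as scope:
  v = tf.get_variable("Embedding", [10, 5])
  loss = tf.reduce_sum(v)
  # First minimize(): creates the Adam slot variables for v.
  tf.train.AdamOptimizer(0.001).minimize(loss)
  scope.reuse_variables()
  # Second minimize() inside the now-reusing scope: tries to create new slot
  # variables (e.g. .../Adam_2) and raises the ValueError from the question.
  tf.train.AdamOptimizer(0.001).minimize(loss)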

EDIT: Here is some updated code to help everyone out

def __init__(self, config):
    self.config = config
    self.load_data(debug=False)
    self.add_placeholders()
    self.inputs = self.add_embedding()
    self.rnn_outputs = self.add_model(self.inputs)
    self.outputs = self.add_projection(self.rnn_outputs)
    self.predictions = [tf.nn.softmax(tf.cast(o, 'float64')) for o in self.outputs]
    output = tf.reshape(tf.concat(1, self.outputs), [-1, len(self.vocab)])
    self.calculate_loss = self.add_loss_op(output)
    self.train_step = self.add_training_op(self.calculate_loss)

Here are the other methods, pertaining to add_projection and add_loss_op, so we can rule them out.

def add_loss_op(self, output):
    weights = tf.ones([self.config.batch_size * self.config.num_steps], tf.int32)
    seq_loss = tf.python.seq2seq.sequence_loss(
      [output],
      tf.reshape(self.labels_placeholder, [-1]),
      weights
      )
    tf.add_to_collection('total_loss', seq_loss)
    loss = tf.add_n(tf.get_collection('total_loss'))
    return loss

def add_projection(self, rnn_outputs):
    with tf.variable_scope("Projection", initializer=tf.contrib.layers.xavier_initializer()) as scope:
      U = tf.get_variable("U", [self.config.hidden_size, len(self.vocab)])
      b_2 = tf.get_variable("b_2", [len(self.vocab)])

      outputs = [tf.matmul(x, U) + b_2 for x in rnn_outputs]
      return outputs


def train_RNNLM():
  config = Config()
  gen_config = deepcopy(config)
  gen_config.batch_size = gen_config.num_steps = 1

  with tf.variable_scope('RNNLM') as scope:
    model = RNNLM_Model(config)
    # This instructs gen_model to reuse the same variables as the model above
    scope.reuse_variables()
    gen_model = RNNLM_Model(gen_config)

  init = tf.initialize_all_variables()
  saver = tf.train.Saver()

  with tf.Session() as session:
    best_val_pp = float('inf')
    best_val_epoch = 0

    session.run(init)
    for epoch in xrange(config.max_epochs):
      print 'Epoch {}'.format(epoch)
      start = time.time()
      ###
      train_pp = model.run_epoch(
          session, model.encoded_train,
          train_op=model.train_step)
      valid_pp = model.run_epoch(session, model.encoded_valid)
      print 'Training perplexity: {}'.format(train_pp)
      print 'Validation perplexity: {}'.format(valid_pp)
      if valid_pp < best_val_pp:
        best_val_pp = valid_pp
        best_val_epoch = epoch
        saver.save(session, './ptb_rnnlm.weights')
      if epoch - best_val_epoch > config.early_stopping:
        break
      print 'Total time: {}'.format(time.time() - start)

Recommended answer

The problem turned out to be the following lines of code:

model = RNNLM_Model(config)
# This instructs gen_model to reuse the same variables as the model above
scope.reuse_variables()
gen_model = RNNLM_Model(gen_config)

It turns out that building the second model with reuse_variables() was the issue. By removing this line, my issues went away.
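
For reference, what actually goes wrong is that the reused gen_model also calls add_training_op inside the reusing scope, and Adam then cannot create its slot variables for the shared weights. If the generator model is still wanted, one alternative sketch is to skip the training op for the reused copy; the build_train_op flag below is hypothetical and not part of the original code:

# Hypothetical variant of __init__ that skips the Adam op for a reused model.
def __init__(self, config, build_train_op=True):
  self.config = config
  self.load_data(debug=False)
  self.add_placeholders()
  self.inputs = self.add_embedding()
  self.rnn_outputs = self.add_model(self.inputs)
  self.outputs = self.add_projection(self.rnn_outputs)
  self.predictions = [tf.nn.softmax(tf.cast(o, 'float64')) for o in self.outputs]
  output = tf.reshape(tf.concat(1, self.outputs), [-1, len(self.vocab)])
  self.calculate_loss = self.add_loss_op(output)
  if build_train_op:
    # Only the non-reused model creates the Adam slot variables.
    self.train_step = self.add_training_op(self.calculate_loss)

with tf.variable_scope('RNNLM') as scope:
  model = RNNLM_Model(config)                                 # creates variables and Adam slots
  scope.reuse_variables()
  gen_model = RNNLM_Model(gen_config, build_train_op=False)   # reuses variables, no second Adam

Either way, the point is the same: nothing that creates new variables, in particular AdamOptimizer.minimize(), should run after scope.reuse_variables().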
