ValueError: Tensor must be from the same graph as Tensor with Bidirectional RNN in Tensorflow


Problem Description

I'm building a text tagger using a bidirectional dynamic RNN in TensorFlow. After matching the input's dimensions, I tried to run a Session. This is the BLSTM setup part:

import tensorflow as tf
from tensorflow.contrib.rnn import BasicLSTMCell          # TF 1.x
from tensorflow.python.ops.rnn import bidirectional_dynamic_rnn

fw_lstm_cell = BasicLSTMCell(LSTM_DIMS)
bw_lstm_cell = BasicLSTMCell(LSTM_DIMS)

(fw_outputs, bw_outputs), _ = bidirectional_dynamic_rnn(fw_lstm_cell,
                                                        bw_lstm_cell,
                                                        x_place,
                                                        sequence_length=SEQLEN,
                                                        dtype='float32')

And this is the running part:

  with tf.Graph().as_default():
    # Placehoder Settings
    x_place, y_place = set_placeholder(BATCH_SIZE, EM_DIMS, MAXLEN)

    # BLSTM Model Building
    hlogits = tf_kcpt.build_blstm(x_place)

    # Compute loss
    loss = tf_kcpt.get_loss(log_likelihood)

    # Training
    train_op = tf_kcpt.training(loss)

    # load Eval method
    eval_correct = tf_kcpt.evaluation(hlogits, y_place)

    # Session Setting & Init
    init = tf.global_variables_initializer()
    sess = tf.Session()
    sess.run(init)

    # tensor summary setting
    summary = tf.summary.merge_all()
    summary_writer = tf.summary.FileWriter(LOG_DIR, sess.graph)

    # Save
    saver = tf.train.Saver()

    # Run epoch
    for step in range(EPOCH):
        start_time = time.time()

        feed_dict = fill_feed_dict(KCPT_SET['train'], x_place, y_place)
        _, loss_value = sess.run([train_op, loss], feed_dict=feed_dict)

But it gives me the error:



ValueError: Tensor("Shape:0", shape=(1,), dtype=int32) must be from the same graph as Tensor("bidirectional_rnn/fw/fw/stack_2:0", shape=(1,), dtype=int32).

Please help me.

Recommended Answer


TensorFlow stores all operations on an operational graph. This graph defines what functions output to where, and it links it all together so that it can follow the steps you have set up in the graph to produce your final output. If you try to input a Tensor or operation on one graph into a Tensor or operation on another graph it will fail. Everything must be on the same execution graph.
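Not part of the original answer: a minimal sketch of how this error arises, assuming TensorFlow is installed (tf.Graph behaves the same way under the TF 1.x and 2.x APIs). Two tensors are built on two separate graphs and then combined:

```python
import tensorflow as tf

# Build one tensor on each of two separate graphs.
g1 = tf.Graph()
with g1.as_default():
    a = tf.constant([1.0], name="a")

g2 = tf.Graph()
with g2.as_default():
    b = tf.constant([2.0], name="b")
    # Combining tensors that live on different graphs fails:
    try:
        c = a + b
    except ValueError as e:
        print("ValueError:", e)  # "... must be from the same graph as ..."
```

This is exactly the situation in the question: x_place was created on one graph, while the BLSTM ops ended up on another.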


Try removing the with tf.Graph().as_default(): line.


TensorFlow provides you a default graph which is referred to if you do not specify a graph. You are probably using the default graph in one spot and a different graph in your training block.
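A sketch of the fix (hypothetical stand-in ops, using tf.compat.v1 so the TF 1.x-style code from the question also runs under TF 2.x): build the placeholder, the model ops, and the Session all on the same default graph:

```python
import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()  # TF 1.x-style graph/session semantics

# Everything below lands on the single default graph, so nothing mismatches.
x_place = tf1.placeholder(tf.float32, shape=(None,), name="x")
doubled = x_place * 2.0  # stands in for the model ops (BLSTM, loss, ...)

with tf1.Session() as sess:
    out = sess.run(doubled, feed_dict={x_place: [1.0, 2.0]})
    print(out)  # [2. 4.]
```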


There does not seem to be a reason to specify a graph as the default here, and most likely you are using separate graphs by accident. If you really want to specify a graph, you probably want to pass it in as a variable rather than setting it like this.
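For example (again a sketch using tf.compat.v1): the graph object is created once and handed to the Session explicitly, so both are guaranteed to agree:

```python
import tensorflow as tf

tf1 = tf.compat.v1

g = tf.Graph()  # created once and passed around as a variable
with g.as_default():
    x = tf1.placeholder(tf.float32, shape=(), name="x")
    y = x + 1.0

# The Session is bound to the very same graph object:
sess = tf1.Session(graph=g)
print(sess.run(y, feed_dict={x: 41.0}))  # 42.0
```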
