ValueError: Tensor must be from the same graph as Tensor with Bidirectional RNN in TensorFlow


Problem description

I'm building a text tagger using a bidirectional dynamic RNN in TensorFlow. After matching the input's dimensions, I tried to run a session. This is the BLSTM setup part:

fw_lstm_cell = BasicLSTMCell(LSTM_DIMS)
bw_lstm_cell = BasicLSTMCell(LSTM_DIMS)

(fw_outputs, bw_outputs), _ = bidirectional_dynamic_rnn(fw_lstm_cell,
                                                        bw_lstm_cell,
                                                        x_place,
                                                        sequence_length=SEQLEN,
                                                        dtype='float32')

And this is the run part:

  with tf.Graph().as_default():
    # Placeholder Settings
    x_place, y_place = set_placeholder(BATCH_SIZE, EM_DIMS, MAXLEN)

    # BLSTM Model Building
    hlogits = tf_kcpt.build_blstm(x_place)

    # Compute loss
    loss = tf_kcpt.get_loss(log_likelihood)

    # Training
    train_op = tf_kcpt.training(loss)

    # load Eval method
    eval_correct = tf_kcpt.evaluation(logits, y_place)

    # Session Setting & Init
    init = tf.global_variables_initializer()
    sess = tf.Session()
    sess.run(init)

    # tensor summary setting
    summary = tf.summary.merge_all()
    summary_writer = tf.summary.FileWriter(LOG_DIR, sess.graph)

    # Save
    saver = tf.train.Saver()

    # Run epoch
    for step in range(EPOCH):
        start_time = time.time()

        feed_dict = fill_feed_dict(KCPT_SET['train'], x_place, y_place)
        _, loss_value = sess.run([train_op, loss], feed_dict=feed_dict)

However, it gives me the error:


ValueError: Tensor("Shape:0", shape=(1,), dtype=int32) must be from the same graph as Tensor("bidirectional_rnn/fw/fw/stack_2:0", shape=(1,), dtype=int32).

Please help me.

Answer


TensorFlow stores all operations on an operation graph. This graph defines which ops feed into which, linking everything together so that TensorFlow can follow the steps you set up to produce the final output. If you try to feed a tensor or operation from one graph into a tensor or operation on another graph, it will fail: everything must live on the same execution graph.
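The failure can be reproduced with a minimal sketch (graph mode via `tf.compat.v1` on newer installs; the constants here are illustrative, not the asker's model):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # graph mode, as in TF 1.x

g1 = tf.Graph()
with g1.as_default():
    a = tf.constant(1)      # `a` lives on g1

g2 = tf.Graph()
with g2.as_default():
    b = tf.constant(2)      # `b` lives on g2
    try:
        tf.add(a, b)        # mixes tensors from two different graphs
    except ValueError as err:
        print("ValueError:", err)
```

This is exactly the shape of the error above: the op being built sits on one graph while one of its input tensors sits on another.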


Try removing `with tf.Graph().as_default():`


TensorFlow provides a default graph, which is used whenever you do not specify one. You are probably using the default graph in one spot (where the BLSTM cells are built) and a different graph in your training block.


There does not seem to be a reason to specify a graph as default here, and most likely you are using separate graphs by accident. If you really want to specify a graph, you probably want to pass it as a variable rather than setting it like this.
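If you do want an explicit graph, passing it around might look like the following sketch (graph mode via `tf.compat.v1`; `build_model` and its ops are illustrative placeholders, not the asker's actual `tf_kcpt` code):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # graph mode, as in TF 1.x

def build_model(graph):
    """Build every op -- placeholders, variables, outputs -- on the SAME graph."""
    with graph.as_default():
        x = tf.compat.v1.placeholder(tf.float32, shape=[None, 3], name="x")
        w = tf.Variable(tf.ones([3, 1]), name="w")
        y = tf.matmul(x, w)
        init = tf.compat.v1.global_variables_initializer()
    return x, y, init

g = tf.Graph()
x, y, init = build_model(g)

# Bind the session to the same graph the ops were built on.
with tf.compat.v1.Session(graph=g) as sess:
    sess.run(init)
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))  # -> [[6.]]
```

The key point is that the graph is created once and threaded through both model construction and the `Session`, so nothing can accidentally land on a second graph.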

