ValueError: Tensor A must be from the same graph as Tensor B


Problem description

I'm doing text matching using TensorFlow. Before I call tf.nn.embedding_lookup(word_embedding_matrix, combine_result), I have to combine some words from two sentences (take m words from sentence S1 and m words from sentence S2, then combine them together as "combine_result"), but when the code gets to tf.nn.embedding_lookup(word_embedding_matrix, combine_result) it gives me the error:

ValueError: Tensor("Reshape_7:0", shape=(1, 6), dtype=int32) 必须是来自与 Tensor("word_embedding_matrix:0", shape=(26320,50), dtype=float32_ref).

ValueError: Tensor("Reshape_7:0", shape=(1, 6), dtype=int32) must be from the same graph as Tensor("word_embedding_matrix:0", shape=(26320, 50), dtype=float32_ref).

The code is as follows:

import tensorflow as tf
import numpy as np
import os
import time
import datetime
import data_helpers

NUM_CLASS = 2
SEQUENCE_LENGTH = 47


# Placeholders for input, output and dropout
input_x = tf.placeholder(tf.int32, [None, 2, SEQUENCE_LENGTH], name="input_x")
input_y = tf.placeholder(tf.float32, [None, NUM_CLASS], name="input_y")
dropout_keep_prob = tf.placeholder(tf.float32, name="dropout_keep_prob")


def n_grams(text, window_size):
    text_left_window = []
    # text_left_window = tf.convert_to_tensor(text_left_window, dtype=tf.int32)
    for z in range(SEQUENCE_LENGTH-2):
        text_left = tf.slice(text, [z], [window_size])
        text_left_window = tf.concat(0, [text_left_window, text_left])
    text_left_window = tf.reshape(text_left_window, [-1, window_size])
    return text_left_window


def inference(vocab_size, embedding_size, batch_size, slide_window_size, conv_window_size):
    # # Embedding layer
    word_embedding_matrix = tf.Variable(tf.random_uniform([vocab_size, embedding_size], -1.0, 1.0),
                                        name="word_embedding_matrix")
    # convo_unit = tf.Variable(tf.random_uniform([slide_window_size*2, ], -1.0, 1.0), name="convo_unit")

    text_comp_result = []
    for x in range(batch_size):
        # input_x_slice_reshape = [[1 1 1...]
        #                          [2 2 2...]]
        input_x_slice = tf.slice(input_x, [x, 0, 0], [1, 2, SEQUENCE_LENGTH])
        input_x_slice_reshape = tf.reshape(input_x_slice, [2, SEQUENCE_LENGTH])

        # text_left_flat: [294, 6, 2, 6, 2, 57, 2, 57, 147, 57, 147, 5, 147, 5, 2,...], length = SEQUENCE_LENGTH
        # text_right_flat: [17, 2, 2325, 2, 2325, 5366, 2325, 5366, 81, 5366, 81, 1238,...]
        text_left = tf.slice(input_x_slice_reshape, [0, 0], [1, SEQUENCE_LENGTH])
        text_left_flat = tf.reshape(text_left, [-1])
        text_right = tf.slice(input_x_slice_reshape, [1, 0], [1, SEQUENCE_LENGTH])
        text_right_flat = tf.reshape(text_right, [-1])

        # extract both text.
        # text_left_window: [[294, 6, 2], [6, 2, 57], [2, 57, 147], [57, 147, 5], [147, 5, 2],...]
        # text_right_window: [[17, 2, 2325], [2, 2325, 5366], [2325, 5366, 81], [5366, 81, 1238],...]
        text_left_window = n_grams(text_left_flat, slide_window_size)
        text_right_window = n_grams(text_right_flat, slide_window_size)
        text_left_window_sha = text_left_window.get_shape()
        print 'text_left_window_sha:', text_left_window_sha

        # composite the slice
        text_comp_list = []
        # text_comp_list = tf.convert_to_tensor(text_comp_list, dtype=tf.float32)
        for l in range(SEQUENCE_LENGTH-slide_window_size+1):
            text_left_slice = tf.slice(text_left_window, [l, 0], [1, slide_window_size])
            text_left_slice_flat = tf.reshape(text_left_slice, [-1])
            for r in range(SEQUENCE_LENGTH-slide_window_size+1):
                text_right_slice = tf.slice(text_right_window, [r, 0], [1, slide_window_size])
                text_right_slice_flat = tf.reshape(text_right_slice, [-1])

                # convo_unit = [294, 6, 2, 17, 2, 2325]
                convo_unit = tf.concat(0, [text_left_slice_flat, text_right_slice_flat])
                convo_unit_reshape = tf.reshape(convo_unit, [-1, slide_window_size*2])
                # convo_unit_shape_val = convo_unit_reshape.get_shape()
                # print 'convo_unit_shape_val:', convo_unit_shape_val

                embedded_chars = tf.nn.embedding_lookup(word_embedding_matrix, convo_unit_reshape)
                embedded_chars_expanded = tf.expand_dims(embedded_chars, -1)
                ...

Could someone please help me? Thank you very much!

Recommended answer

Yaroslav answered in a comment above - moving to an answer:

This error happens when you create a new default graph. Try to do tf.reset_default_graph() before the computation and do not create any more graphs (i.e., no further calls to tf.Graph()).
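For reference, here is a minimal sketch of how this error arises and how the suggested fix applies. It uses TensorFlow 1.x-era graph APIs and small illustrative shapes and ids that are not taken from the question:

import tensorflow as tf

# The error appears when two ops live in different graphs, e.g. the embedding
# variable is built in one tf.Graph() and the lookup ids in another:
g1 = tf.Graph()
with g1.as_default():
    word_embedding_matrix = tf.Variable(
        tf.random_uniform([100, 50], -1.0, 1.0), name="word_embedding_matrix")

g2 = tf.Graph()
with g2.as_default():
    ids = tf.constant([[1, 2, 3]], dtype=tf.int32)
    # tf.nn.embedding_lookup(word_embedding_matrix, ids)  # would raise the ValueError above

# Fix: reset the default graph once, then build every op in that single graph
# (and avoid any further tf.Graph() calls).
tf.reset_default_graph()
word_embedding_matrix = tf.Variable(
    tf.random_uniform([100, 50], -1.0, 1.0), name="word_embedding_matrix")
ids = tf.constant([[1, 2, 3]], dtype=tf.int32)
embedded_chars = tf.nn.embedding_lookup(word_embedding_matrix, ids)  # same graph, no error

An alternative to resetting is to wrap all graph construction in a single "with graph.as_default():" block and pass that same graph to the session.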
