Variables of TensorFlow generate error in a loop


Problem description

I have a problem similar to TensorFlow: varscope.reuse_variables().

I am doing cross-validation on a dataset.

Each time, I call a function, e.g. myFunctionInFile1(), with new data (due to limited space, I am omitting the data-assignment details). This function is not in the same Python file, so I import it from that file into my main Python file (file2). The function builds a complete CNN and trains and tests the model on the given training and testing data with newly initialized parameters.
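For context, a minimal sketch of this calling pattern; the module name file1 (inferred from the function name), num_folds, and the result handling are hypothetical stand-ins for the omitted details:

from file1 import myFunctionInFile1  # hypothetical module name

results = []
for fold in range(num_folds):  # num_folds: hypothetical fold count
    # Every call builds the CNN again in the same default TensorFlow graph,
    # so variables and scopes from earlier folds are still present.
    results.append(myFunctionInFile1())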

From the main file (file2), myFunctionInFile1 is called for the first validation fold; the CNN model is trained and tested, and the results are returned to the main file (file2). However, in the second iteration with new data, the following code:

import tensorflow as tf  # TF 1.x; tf.contrib is used below

# D (input dimension) and num_filt_1 (number of filters for Conv1) are
# defined elsewhere in the original code and assumed to be in scope here.

def myFunctionInFile1():
    # Nodes for the input variables
    x = tf.placeholder("float", shape=[None, D], name='Input_data')
    y_ = tf.placeholder(tf.int64, shape=[None], name='Ground_truth')
    keep_prob = tf.placeholder("float")
    bn_train = tf.placeholder(tf.bool)  # Boolean value to guide batchnorm

    def bias_variable(shape, name):
        initial = tf.constant(0.1, shape=shape)
        return tf.Variable(initial, name=name)

    def conv2d(x, W):
        return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

    def max_pool_2x2(x):
        return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
                              strides=[1, 2, 2, 1], padding='SAME')

    with tf.name_scope("Reshaping_data") as scope:
        x_image = tf.reshape(x, [-1, D, 1, 1])

    initializer = tf.contrib.layers.xavier_initializer()
    """Build the graph"""
    # ewma is the decay for which we update the moving average of the
    # mean and variance in the batch-norm layers

    with tf.name_scope("Conv1") as scope:
        # reuse = tf.AUTO_REUSE
        W_conv1 = tf.get_variable("Conv_Layer_1", shape=[5, 1, 1, num_filt_1],
                                  initializer=initializer)
        b_conv1 = bias_variable([num_filt_1], 'bias_for_Conv_Layer_1')
        a_conv1 = conv2d(x_image, W_conv1) + b_conv1
    with tf.name_scope('Batch_norm_conv1') as scope:
        a_conv1 = tf.contrib.layers.batch_norm(a_conv1, is_training=bn_train,
                                               updates_collections=None)

gives me the following error:

ValueError: Variable BatchNorm_2/beta does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=tf.AUTO_REUSE in VarScope?

What is the problem? Generally, when programming in C/C++/Java, if you exit a function, the local variables of that called function are deleted automatically on return, and each new call should create a fresh set of parameters. Why, then, does this error occur, and how can I fix it?

Recommended answer

TensorFlow layers like batch_norm are implemented using tf.get_variable. tf.get_variable has a reuse argument (which it can also inherit from the enclosing variable_scope), defaulting to False; when called with reuse=False it always creates variables, and fails if a variable of that name already exists in the graph. You can call it with reuse=True, which means it will reuse existing variables, or fail if the variables do not exist.
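A minimal TF 1.x illustration of these reuse semantics; the scope name demo and the variable names are made up for the example:

import tensorflow as tf  # TF 1.x

with tf.variable_scope("demo"):               # reuse defaults to False
    v = tf.get_variable("v", shape=[1])       # creates demo/v

with tf.variable_scope("demo", reuse=True):
    v2 = tf.get_variable("v", shape=[1])      # reuses the existing demo/v
    # tf.get_variable("w", shape=[1])         # would raise ValueError:
    #                                         # Variable demo/w does not exist

with tf.variable_scope("demo", reuse=tf.AUTO_REUSE):
    v3 = tf.get_variable("v", shape=[1])      # reuses if present, else creates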

In your case, you are calling batch norm with reuse=True the first time, so it fails to create the variables. Try setting reuse=False in your variable scope or, as the error message suggests, using tf.AUTO_REUSE.
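A sketch of one way to apply that suggestion inside the function; the scope name cnn is hypothetical, and the tf.reset_default_graph() call is an extra, commonly used step for per-fold rebuilds that is not part of the answer above:

import tensorflow as tf  # TF 1.x

def myFunctionInFile1():
    # Assumption (not from the answer): start each fold from a clean graph so
    # earlier folds' variables and scope names cannot collide with this one.
    tf.reset_default_graph()

    # Per the error message's hint: with tf.AUTO_REUSE, get_variable creates
    # variables on the first call and silently reuses them afterwards.
    with tf.variable_scope("cnn", reuse=tf.AUTO_REUSE):
        x = tf.placeholder("float", shape=[None, D], name='Input_data')
        # ... build the rest of the CNN exactly as in the question ...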

