Error while running tensorflow a second time


Question

I am trying to run the following tensorflow code, and it works fine the first time. If I try running it again, it keeps throwing an error saying:

ValueError: Variable layer1/weights1 already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at:

      File "C:\Users\owner\Anaconda3\envs\DeepLearning_NoGPU\lib\site-packages\tensorflow\python\framework\ops.py", line 1228, in __init__
        self._traceback = _extract_stack()
      File "C:\Users\owner\Anaconda3\envs\DeepLearning_NoGPU\lib\site-packages\tensorflow\python\framework\ops.py", line 2336, in create_op
        original_op=self._default_original_op, op_def=op_def)
      File "C:\Users\owner\Anaconda3\envs\DeepLearning_NoGPU\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 768, in apply_op
        op_def=op_def)

If I restart the console and then run it, once again it runs just fine.

Given below is my implementation of the neural network.

import pandas as pd
import numpy as np
from sklearn.preprocessing import StandardScaler
import tensorflow as tf

learning_rate = 0.001
training_epochs = 100

n_input = 9
n_output = 1

n_layer1_node = 100
n_layer2_node = 100

X_train = np.random.rand(100, 9)
y_train = np.random.rand(100, 1)

with tf.variable_scope('input'):
    X = tf.placeholder(tf.float32, shape=(None, n_input))

with tf.variable_scope('output'):
    y = tf.placeholder(tf.float32, shape=(None, 1))

#layer 1
with tf.variable_scope('layer1'):
    weight_matrix1 = {'weights': tf.get_variable(name='weights1', 
                                                shape=[n_input, n_layer1_node], 
                                                initializer=tf.contrib.layers.xavier_initializer()),
                      'biases': tf.get_variable(name='biases1',
                                shape=[n_layer1_node],
                                initializer=tf.zeros_initializer())}
    layer1_output = tf.nn.relu(tf.add(tf.matmul(X, weight_matrix1['weights']), weight_matrix1['biases']))

#Layer 2
with tf.variable_scope('layer2'):
    weight_matrix2 = {'weights': tf.get_variable(name='weights2', 
                                                shape=[n_layer1_node, n_layer2_node], 
                                                initializer=tf.contrib.layers.xavier_initializer()),
                      'biases': tf.get_variable(name='biases2',
                                shape=[n_layer2_node],
                                initializer=tf.zeros_initializer())}
    layer2_output = tf.nn.relu(tf.add(tf.matmul(layer1_output, weight_matrix2['weights']), weight_matrix2['biases']))

#Output layer
with tf.variable_scope('layer3'):
    weight_matrix3 = {'weights': tf.get_variable(name='weights3', 
                                                shape=[n_layer2_node, n_output], 
                                                initializer=tf.contrib.layers.xavier_initializer()),
                      'biases': tf.get_variable(name='biases3',
                                shape=[n_output],
                                initializer=tf.zeros_initializer())}
    prediction = tf.nn.relu(tf.add(tf.matmul(layer2_output, weight_matrix3['weights']), weight_matrix3['biases']))

cost = tf.reduce_mean(tf.squared_difference(prediction, y))
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)

with tf.Session() as session:

    session.run(tf.global_variables_initializer())


    for epoch in range(training_epochs):

        session.run(optimizer, feed_dict={X: X_train, y: y_train})
        train_cost = session.run(cost, feed_dict={X: X_train, y:y_train})

        print(epoch, " epoch(s) done")

    print("training complete")

As the error suggests, I tried adding reuse=True as a parameter to with tf.variable_scope(), but that is not working either.
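
For reference, the attempt looked roughly like this on the first layer (a sketch of the change, not the full code):

with tf.variable_scope('layer1', reuse=True):
    # reuse=True makes get_variable look up an existing 'layer1/weights1'
    # instead of creating a new one
    weights = tf.get_variable(name='weights1',
                              shape=[n_input, n_layer1_node],
                              initializer=tf.contrib.layers.xavier_initializer())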

I am running this inside a conda environment, on Windows 10, with Python 3.5 and CUDA 8 (though that shouldn't matter, since this is not configured to run on the GPU).

Answer

This is a matter of how TF works. One needs to understand that TF has a "hidden" state - a graph being built. Most tf functions create ops in this graph (every tf.Variable call, every arithmetic operation, and so on). The actual "execution", on the other hand, happens in a tf.Session(). Consequently, your code will usually look like this:

build_graph()

with tf.Session() as sess:
  process_something()
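
To make that split concrete, here is a minimal sketch using the same TF 1.x API as the question: creating ops only records them in the default graph, and nothing is computed until a session runs them.

import tensorflow as tf

a = tf.constant(2.0)    # adds a node to the default graph; nothing is computed yet
b = a * 3.0             # another graph node, still no computation

with tf.Session() as sess:
    print(sess.run(b))  # execution happens here and prints 6.0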

Since all actual variables, results, etc. live only in the session, if you want to "run it twice" you would do

build_graph()

with tf.Session() as sess:
  process_something()

with tf.Session() as sess:
  process_something()
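
Note that each session also holds its own variable values, so each new session has to run its own initializer; a sketch, reusing the placeholder names from above:

build_graph()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # values live in this session
    process_something()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # a new session starts with fresh values
    process_something()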

Notice that I build the graph once. The graph is an abstract representation of how things look; it does not hold any computation state. When you try to do

build_graph()

with tf.Session() as sess:
  process_something()

build_graph()

with tf.Session() as sess:
  process_something()

you might get errors during the second build_graph() because it tries to create variables with the same names (which is what happens in your case), because the graph is finalised, etc. If you really need to run things this way, you simply have to reset the graph in between:

build_graph()

with tf.Session() as sess:
  process_something()

tf.reset_default_graph()

build_graph()

with tf.Session() as sess:
  process_something()

This will work fine.
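
Applied to the code in the question, this means putting the graph construction into a function and resetting the default graph before rebuilding it. A minimal sketch (build_graph stands for the placeholder, layer, cost and optimizer definitions from the question):

import tensorflow as tf

def build_graph():
    # all graph construction from the question goes here:
    # placeholders, the three variable_scope blocks, cost and optimizer
    with tf.variable_scope('layer1'):
        tf.get_variable(name='weights1', shape=[9, 100],
                        initializer=tf.contrib.layers.xavier_initializer())
    # ...

build_graph()             # first run: variables are created

tf.reset_default_graph()  # wipe the default graph and its variable names

build_graph()             # second run: no "already exists" error

This also explains why reuse=True did not help: it only switches get_variable from creating a variable to looking up an existing one within the current graph, so it fails on a fresh run where the variables do not exist yet, and it does nothing about rerunning the construction code itself.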
