TensorFlow: optimizer gives nan as output

Problem description

I am running a very simple TensorFlow program:

import tensorflow as tf

# Model parameters and input placeholder
W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)
x = tf.placeholder(tf.float32)

linear_model = W * x + b

# Target placeholder and squared-error loss, summed over all data points
y = tf.placeholder(tf.float32)

squared_error = tf.square(linear_model - y)

loss = tf.reduce_sum(squared_error)

# Plain gradient descent with learning rate 0.1
optimizer = tf.train.GradientDescentOptimizer(0.1)

train = optimizer.minimize(loss)

init = tf.global_variables_initializer()

with tf.Session() as s:
    file_writer = tf.summary.FileWriter('../../tfLogs/graph', s.graph)
    s.run(init)
    for i in range(1000):
        s.run(train, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]})
    print(s.run([W, b]))

This gives me:

[array([ nan], dtype=float32), array([ nan], dtype=float32)]

What am I doing wrong?

Solution

You're using loss = tf.reduce_sum(squared_error) instead of reduce_mean. With reduce_sum your loss gets bigger when you have more data, and even with this small example it means your gradient is big enough to cause your model to diverge.
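For example, the fix is a one-line change to the loss in the question's code (a minimal sketch; squared_error is the tensor defined above):

loss = tf.reduce_mean(squared_error)  # average over the data points, so the gradient no longer scales with the dataset size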

Another thing that can cause this type of problem is a learning rate that is too large. In this case you can also fix it by changing the learning rate from 0.1 to 0.01, but if you're still using reduce_sum it will break again when you add more data points.
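For reference, that alternative workaround is also a one-line change against the question's code (again a sketch; the recommendation above is still to switch to reduce_mean):

optimizer = tf.train.GradientDescentOptimizer(0.01)  # smaller step avoids divergence here, but with reduce_sum it breaks again on a larger dataset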
