TensorFlow cost value returns NaN


Problem Description

I am building a simple logistic regression model using TensorFlow, but the cost value always returns NaN.

My data set is divided into x_data and y_data. x_data is a coordinate in an image and y_data is 1 or 0, since my image is black and white. I am trying to find the dividing line between the white and black regions.

def train(input, iterations):
    import tensorflow as tf
    tf.set_random_seed(777)  # for reproducibility

    x_data = []
    y_data = []

    i_dim = input.shape[0]
    j_dim = input.shape[1]

    for i in range(i_dim):
        for j in range(j_dim):
            x_data.append([j, i_dim - i - 1])
            y_data.append([int(input[i, j])])

    # placeholders for tensors that will always be fed
    X = tf.placeholder(tf.float32, shape=[None, 2])
    Y = tf.placeholder(tf.float32, shape=[None, 1])

    W = tf.Variable(tf.random_normal([2, 1]), name='weight')
    b = tf.Variable(tf.random_normal([1]), name='bias')

    # Hypothesis using sigmoid: tf.div(1., 1. + tf.exp(tf.matmul(X, W)))
    hypothesis = tf.sigmoid(tf.matmul(X, W) + b)

    # cost/loss function
    cost = -tf.reduce_mean(Y * tf.log(hypothesis) + (1 - Y) *
                           tf.log(1 - hypothesis))

    train = tf.train.AdamOptimizer(1e-4).minimize(cost)

    # Launch graph
    with tf.Session() as sess:
        # Initialize TensorFlow variables
        sess.run(tf.global_variables_initializer())

        for step in range(iterations):
            cost_val, _ = sess.run([cost, train], feed_dict={X: x_data, Y: y_data})
            print(step, cost_val)

This is my log:

(0, nan) (1, nan) (2, nan) (3, nan) (4, nan) (5, nan) (6, nan) (7, nan) (8, nan) (9, nan) (10, nan) (11, nan) (12, nan) (13, nan) (14, nan) (15, nan) (16, nan) (17, nan) (18, nan) (19, nan) (20, nan)

and so on.

Recommended Answer

When your hypothesis is exactly equal to 1, the second term of the loss, (1 - Y) * tf.log(1 - hypothesis), becomes log(0), hence the NaN output. I suggest adding a small constant inside the logarithm; it should then work. Try this:

cost = -tf.reduce_mean(Y * tf.log(hypothesis + 1e-4) + (1 - Y) * tf.log(1 - hypothesis + 1e-4))
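
As an aside, a more numerically stable option (not part of the original answer, and sketched here assuming the same TF 1.x graph as in the question, with the X and Y placeholders and the W and b variables defined above) is to compute the loss from the raw logits with tf.nn.sigmoid_cross_entropy_with_logits, which handles the log(0) case internally:

# Sketch of a numerically stable alternative, assuming the X, Y, W, b
# defined in the question's train() function.
logits = tf.matmul(X, W) + b        # raw scores, before the sigmoid
hypothesis = tf.sigmoid(logits)     # still available for predictions

# Built-in cross-entropy on logits avoids evaluating log(0) explicitly.
cost = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=Y, logits=logits))

train = tf.train.AdamOptimizer(1e-4).minimize(cost)

With this form you no longer need the epsilon constant, since the loss is computed directly from the logits rather than from a sigmoid output that can saturate at exactly 0 or 1.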

