cost function outputs 'nan' in tensorflow


Question

While studying TensorFlow, I ran into a problem: the cost function outputs 'nan'.

Also, if you find anything else wrong in the source code, please let me know.

I am trying to get the cost function value out of my trained model, but it is not working.

tf.reset_default_graph()

tf.set_random_seed(777)

X = tf.placeholder(tf.float32, [None, 20, 20, 3])
Y = tf.placeholder(tf.float32, [None, 1])

with tf.variable_scope('conv1') as scope:
    W1 = tf.Variable(tf.random_normal([4, 4, 3, 32], stddev=0.01), name='weight1')      
    L1 = tf.nn.conv2d(X, W1, strides=[1, 1, 1, 1], padding='SAME')
    L1 = tf.nn.relu(L1)
    L1 = tf.nn.max_pool(L1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
    L1 = tf.reshape(L1, [-1, 10 * 10 * 32])

    W1_hist = tf.summary.histogram('conv_weight1', W1)
    L1_hist = tf.summary.histogram('conv_layer1', L1)

with tf.name_scope('fully_connected_layer1') as scope:
    W2 = tf.get_variable('W2', shape=[10 * 10 * 32, 1], initializer=tf.contrib.layers.xavier_initializer())        
    b = tf.Variable(tf.random_normal([1]))
    hypothesis = tf.matmul(L1, W2) + b

    W2_hist = tf.summary.histogram('fully_connected_weight1', W2)
    b_hist = tf.summary.histogram('fully_connected_bias', b)
    hypothesis_hist = tf.summary.histogram('hypothesis', hypothesis)

with tf.name_scope('cost') as scope:
    cost = -tf.reduce_mean(Y * tf.log(hypothesis) + (1 - Y) * tf.log(1 - hypothesis))
    cost_summary = tf.summary.scalar('cost', cost)

with tf.name_scope('train_optimizer') as scope:
    optimizer = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(cost)  

predicted = tf.cast(hypothesis > 0.5, dtype=tf.float32)
accuracy = tf.reduce_mean(tf.cast(tf.equal(predicted, Y), dtype=tf.float32))
accuracy_summary = tf.summary.scalar('accuracy', accuracy)

train_data_batch, train_labels_batch = tf.train.batch([train_data, train_labels], enqueue_many=True , batch_size=100, allow_smaller_final_batch=True)

with tf.Session() as sess:
    # tensorboard --logdir=./logs/planesnet2_log
    merged_summary = tf.summary.merge_all()
    writer = tf.summary.FileWriter('./logs/planesnet2_log')   
    writer.add_graph(sess.graph)

    sess.run(tf.global_variables_initializer())
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)
    total_cost = 0

    for step in range(20):
        x_batch, y_batch = sess.run([train_data_batch, train_labels_batch])
        feed_dict = {X: x_batch, Y: y_batch}
        _, cost_val = sess.run([optimizer, cost], feed_dict = feed_dict)
        total_cost += cost_val
        print('total_cost: ', total_cost, 'cost_val: ', cost_val)
    coord.request_stop()
    coord.join(threads)

Answer

You use a cross-entropy loss without applying a sigmoid activation to hypothesis, so its values are not bounded in (0, 1]. The log function is undefined for non-positive values, and hypothesis most likely produces some. Add a sigmoid, plus an epsilon term to avoid values of exactly 0, and you should be fine.
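To illustrate the failure mode, here is a minimal pure-Python sketch (not the asker's TensorFlow graph): a raw linear output fed straight into log can be negative, which makes the cross-entropy undefined, while squashing it through a sigmoid and adding a small epsilon keeps the loss finite.

```python
import math

def sigmoid(z):
    # Maps any real logit into the open interval (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def cross_entropy(y, p, eps=0.0):
    # Binary cross-entropy; eps guards log against an argument of 0.
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

logit = -2.3  # a raw linear output like `tf.matmul(L1, W2) + b`

# Feeding the raw logit into log fails: log(-2.3) is undefined.
# cross_entropy(1.0, logit)  # raises ValueError (nan in TensorFlow)

# With sigmoid + epsilon the loss is well defined.
p = sigmoid(logit)
loss = cross_entropy(1.0, p, eps=1e-7)
print(math.isfinite(loss))  # True
```

In the TF1 code above, the equivalent fix is either to wrap the logits as `hypothesis = tf.sigmoid(tf.matmul(L1, W2) + b)` and add an epsilon inside each `tf.log`, or, more robustly, to compute the loss directly from the logits with `tf.nn.sigmoid_cross_entropy_with_logits(labels=Y, logits=...)`, which handles the numerical stability internally.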

