How do I log or view the cost used in training a TensorFlow neural network with dropout?

Problem description

How do I see the accuracy and cost for the dropouts actually used in training a TensorFlow neural network with dropout?

As expected, each time I run a summary, e.g. with

train_writer.add_summary(sess.run(merged, feed_dict=foo), step)

or an individual evaluation, e.g. with

print(sess.run(accuracy, feed_dict=foo))

if the network includes dropout, and foo feeds a "keep probability" other than 1.0, I will get a different value each time; for example, three immediately successive computations of accuracy with

print(sess.run(accuracy, feed_dict=foo))
print(sess.run(accuracy, feed_dict=foo))
print(sess.run(accuracy, feed_dict=foo))

might give something like

75.808
75.646
75.770 

Though these are roughly the same, they are not exactly the same, presumably because each time I evaluate, the network drops out different nodes. A consequence of this must be that I don’t ever see the cost actually encountered in training.

How do I log or view the cost (or other summary values computed using the network) actually used in training a TensorFlow neural network with dropout?

Recommended answer

And where is the problem? You should get three different values if you call a stochastic network three times. When you log the losses from the network during training, you are logging the ones that were actually used. Basically, you can just read the value out of the computed graph, like:

# Build the loss op once, outside the training loop.
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
for i in range(100):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    # Fetch the loss in the same run() call as the training step, so it is
    # computed from the same forward pass (and the same dropout mask).
    _, loss_val = sess.run([train_step, cross_entropy],
                           feed_dict={x: batch_xs, y_: batch_ys})
    print('loss = %f' % loss_val)

which will print the loss that was computed during the training step (it is not computed a second time, so the dropout output mask is not resampled).
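
The same applies to the summaries mentioned in the question: fetch the merged summary op in the same sess.run() call as the training op, so the logged values come from the very forward pass that performed the update. A minimal sketch, reusing the question's merged, train_writer, foo and step together with train_step from the snippet above:

# Run the training step and the merged summaries in one call, so the summary
# reflects the same dropout mask that the parameter update actually used.
_, summary = sess.run([train_step, merged], feed_dict=foo)
train_writer.add_summary(summary, step)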

If instead you want to see "what would the accuracy on the training set be if I stopped learning now?", you need an eval graph (https://www.tensorflow.org/versions/r0.8/tutorials/mnist/tf/index.html#evaluate-the-model), which tells the network that it is time to switch the dropout units from stochastic behaviour to scaling/averaging the results.
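
One common way to get that evaluation behaviour (not shown in the original answer) is to drive dropout with a keep_prob placeholder, as in the TensorFlow MNIST tutorials, and feed keep_prob = 1.0 whenever you evaluate. The names below (keep_prob, x, y_, accuracy, mnist) are assumptions for illustration, not taken from the post:

# Assumed graph: tf.nn.dropout(..., keep_prob) with `keep_prob` a placeholder,
# plus `x`, `y_`, `train_step` and `accuracy` as in the MNIST tutorial.
train_feed = {x: batch_xs, y_: batch_ys, keep_prob: 0.5}                   # stochastic dropout for training
eval_feed = {x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}  # deterministic pass for evaluation

sess.run(train_step, feed_dict=train_feed)
print('eval accuracy = %f' % sess.run(accuracy, feed_dict=eval_feed))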
