Printing the loss during TensorFlow training
Question
I am looking at the TensorFlow "MNIST For ML Beginners" tutorial, and I want to print out the training loss after every training step.
My training loop currently looks like this:
for i in range(100):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
Now, train_step is defined as:
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
Where cross_entropy is the loss which I want to print out:
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
One way to print this would be to explicitly compute cross_entropy in the training loop:
for i in range(100):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
    print('loss = ' + str(cross_entropy))
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
I now have two questions regarding this:
1. Given that cross_entropy is already computed during sess.run(train_step, ...), it seems inefficient to compute it twice, requiring twice the number of forward passes over the training data. Is there a way to access the value of cross_entropy when it was computed during sess.run(train_step, ...)?
2. How do I even print a tf.Variable? Using str(cross_entropy) gives me an error...
Thanks!
Answer
You can fetch the value of cross_entropy by adding it to the list of arguments to sess.run(...). For example, your for loop could be rewritten as follows:
for i in range(100):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    # Fetch train_step and the cross_entropy tensor defined earlier
    # in the same call; the graph is executed only once per step.
    _, loss_val = sess.run([train_step, cross_entropy],
                           feed_dict={x: batch_xs, y_: batch_ys})
    print('loss = %s' % loss_val)
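Note that loss_val comes back as an ordinary numpy value, so normal string formatting works on it. Because train_step and cross_entropy are fetched in a single sess.run call, TensorFlow runs the shared graph once per step, so there is no second forward pass.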
The same approach can be used to print the current value of a variable. Say that, in addition to the value of cross_entropy, you wanted to print the value of a tf.Variable called W; you could do the following:
for i in range(100):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    _, loss_val, W_val = sess.run([train_step, cross_entropy, W],
                                  feed_dict={x: batch_xs, y_: batch_ys})
    print('loss = %s' % loss_val)
    print('W = %s' % W_val)
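Putting it together, here is a minimal end-to-end sketch in the tutorial's TF 1.x-era API (tf.placeholder, tf.Session, and the tensorflow.examples.tutorials.mnist input pipeline), with variable names following the tutorial. Treat it as an illustration under those assumptions rather than the tutorial's exact code:

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets('MNIST_data/', one_hot=True)

# Model: a single softmax layer, as in the tutorial.
x = tf.placeholder(tf.float32, [None, 784])
y_ = tf.placeholder(tf.float32, [None, 10])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)

# Loss and training op are defined once, outside the loop.
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(100):
        batch_xs, batch_ys = mnist.train.next_batch(100)
        # One sess.run call executes the graph once and returns
        # the loss from that same forward pass.
        _, loss_val = sess.run([train_step, cross_entropy],
                               feed_dict={x: batch_xs, y_: batch_ys})
        print('step %d, loss = %s' % (i, loss_val))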