TensorFlow: how is dataset.train.next_batch defined?
Question
I am trying to learn TensorFlow and studying the example at: https://github.com/aymericdamien/TensorFlow-Examples/blob/master/notebooks/3_NeuralNetworks/autoencoder.ipynb
I then have some questions in the code below:
for epoch in range(training_epochs):
    # Loop over all batches
    for i in range(total_batch):
        batch_xs, batch_ys = mnist.train.next_batch(batch_size)
        # Run optimization op (backprop) and cost op (to get loss value)
        _, c = sess.run([optimizer, cost], feed_dict={X: batch_xs})
    # Display logs per epoch step
    if epoch % display_step == 0:
        print("Epoch:", '%04d' % (epoch+1),
              "cost=", "{:.9f}".format(c))
Since mnist is just a dataset, what exactly does mnist.train.next_batch mean? How is dataset.train.next_batch defined?
Thanks!
Answer
The mnist object is returned from the read_data_sets() function defined in the tf.contrib.learn module. The mnist.train.next_batch(batch_size) method is implemented here, and it returns a tuple of two arrays: the first represents a batch of batch_size MNIST images, and the second represents a batch of batch_size labels corresponding to those images.
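To make the behavior concrete, here is a minimal NumPy sketch of what a next_batch-style method does internally: keep a cursor into the data, return the next slice, and reshuffle once an epoch's worth of examples has been consumed. The class and attribute names below are illustrative, not the library's own; the real DataSet class in tf.contrib.learn has additional options, but the core idea is the same.

```python
import numpy as np

class MiniDataSet:
    """Hypothetical sketch of a next_batch-style iterator over (images, labels)."""

    def __init__(self, images, labels):
        assert images.shape[0] == labels.shape[0]
        self.images = images
        self.labels = labels
        self._num_examples = images.shape[0]
        self._index = 0  # cursor into the current epoch

    def next_batch(self, batch_size):
        if self._index + batch_size > self._num_examples:
            # Epoch exhausted: shuffle images and labels together, restart.
            perm = np.random.permutation(self._num_examples)
            self.images = self.images[perm]
            self.labels = self.labels[perm]
            self._index = 0
        start = self._index
        self._index += batch_size
        return self.images[start:self._index], self.labels[start:self._index]

# Usage with fake MNIST-shaped data: 100 examples of 784 pixels each.
ds = MiniDataSet(np.zeros((100, 784)), np.zeros(100, dtype=np.int64))
batch_xs, batch_ys = ds.next_batch(32)
print(batch_xs.shape, batch_ys.shape)  # (32, 784) (32,)
```

Each call advances the cursor, so successive calls in the training loop above walk through the whole dataset once per epoch.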
The images are returned as a 2-D NumPy array of size [batch_size, 784] (since there are 784 pixels in an MNIST image), and the labels are returned either as a 1-D NumPy array of size [batch_size] (if read_data_sets() was called with one_hot=False) or as a 2-D NumPy array of size [batch_size, 10] (if read_data_sets() was called with one_hot=True).
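The two label formats can be illustrated with plain NumPy. With one_hot=False each label is an integer class index; with one_hot=True each label becomes a length-10 indicator vector. The np.eye-based conversion below is a common idiom for building the one-hot form, not the library's own code.

```python
import numpy as np

batch_size = 4
labels = np.array([3, 0, 7, 9])   # one_hot=False style: shape (batch_size,)
one_hot = np.eye(10)[labels]      # one_hot=True style: shape (batch_size, 10)

print(labels.shape)    # (4,)
print(one_hot.shape)   # (4, 10)
print(one_hot[0])      # row with a single 1.0 at index 3
```

Each row of the one-hot array has exactly one 1.0, at the position of the corresponding class index.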