Why val_loss and val_acc are not displaying?


Problem description

When training starts, only loss and acc are displayed in the run window; val_loss and val_acc are missing. These values only show up at the very end.

# (earlier layers of the Sequential model are omitted in the question)
model.add(Flatten())
model.add(Dense(512, activation="relu"))
model.add(Dropout(0.5))
model.add(Dense(10, activation="softmax"))

model.compile(
    loss='categorical_crossentropy',
    optimizer="adam",
    metrics=['accuracy']
)

model.fit(
    x_train,
    y_train,
    batch_size=32, 
    epochs=1, 
    validation_data=(x_test, y_test),
    shuffle=True
)

This is how the training starts:

Train on 50000 samples, validate on 10000 samples
Epoch 1/1

   32/50000 [..............................] - ETA: 34:53 - loss: 2.3528 - acc: 0.0938
   64/50000 [..............................] - ETA: 18:56 - loss: 2.3131 - acc: 0.0938
   96/50000 [..............................] - ETA: 13:45 - loss: 2.3398 - acc: 0.1146

And this is when it ends:

49984/50000 [============================>.] - ETA: 0s - loss: 1.5317 - acc: 0.4377
50000/50000 [==============================] - 231s 5ms/step - loss: 1.5317 - acc: 0.4378 - val_loss: 1.1503 - val_acc: 0.5951

I want to see val_acc and val_loss on each line.

Recommended answer

It doesn't make much sense to compute the validation metrics at each iteration, because that would make your training process much slower, and your model doesn't change that much from iteration to iteration. It makes much more sense to compute these metrics at the end of each epoch.
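
To make concrete what "computing the validation metrics at each iteration" would involve, here is a minimal sketch (not part of the original answer; it assumes tf.keras 2.x and the same x_test/y_test arrays) of a custom Callback that runs a full model.evaluate after every training batch:

import tensorflow as tf

class PerBatchValidation(tf.keras.callbacks.Callback):
    """Evaluates the whole validation set after every training batch (very slow)."""

    def __init__(self, x_val, y_val):
        super().__init__()
        self.x_val = x_val
        self.y_val = y_val

    def on_train_batch_end(self, batch, logs=None):
        # One extra pass over all validation samples per training batch.
        val_loss, val_acc = self.model.evaluate(self.x_val, self.y_val, verbose=0)
        print(f" - batch {batch}: val_loss: {val_loss:.4f} - val_acc: {val_acc:.4f}")

# Hypothetical usage with the model from the question:
# model.fit(x_train, y_train, batch_size=32, epochs=1,
#           callbacks=[PerBatchValidation(x_test, y_test)])

This is exactly the per-iteration evaluation discussed above, which is why it slows training down so dramatically.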

In your case you have 50000 samples in the training set, 10000 samples in the validation set, and a batch size of 32. If you computed val_loss and val_acc after each iteration, then for every batch of 32 training samples used to update your weights you would also run 313 (i.e. 10000/32) validation batches of 32 samples each. Since each epoch consists of 1563 iterations (i.e. 50000/32), you would have to perform 489219 (i.e. 313*1563) batch predictions just to evaluate the model. This would make your training several orders of magnitude slower!
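
A quick back-of-the-envelope check of those numbers (a small helper snippet, not part of the original answer):

import math

n_train, n_val, batch_size = 50000, 10000, 32

val_batches = math.ceil(n_val / batch_size)    # 313 validation batches per training step
train_steps = math.ceil(n_train / batch_size)  # 1563 training steps per epoch
print(val_batches * train_steps)               # 489219 extra batch predictions per epoch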

If you still want to compute the validation metrics at the end of each iteration (not recommended for the reasons stated above), you can simply shorten your "epoch" so that the model sees just one batch per epoch:

batch_size = 32  # not defined in the original snippet; matches the value used above

model.fit(
    x_train,
    y_train,
    batch_size=batch_size,
    epochs=len(x_train) // batch_size + 1,  # 1563 in your case
    steps_per_epoch=1,
    validation_data=(x_test, y_test),
    shuffle=True
)

This isn't exactly equivalent, because the samples will be drawn at random, with replacement, from the data, but it is the easiest workaround you can get...

