Train accuracy drops in some epochs

Problem Description

I'm training a ResNet (CIFAR-10 dataset) and training accuracy is mostly (in about 95% of epochs) increasing, but sometimes it drops 5-10% and then starts increasing again.

Here is an example:

Epoch 45/100
40000/40000 [==============================] - 50s 1ms/step - loss: 0.0323 - acc: 0.9948 - val_loss: 1.6562 - val_acc: 0.7404
Epoch 46/100
40000/40000 [==============================] - 52s 1ms/step - loss: 0.0371 - acc: 0.9932 - val_loss: 1.6526 - val_acc: 0.7448
Epoch 47/100
40000/40000 [==============================] - 50s 1ms/step - loss: 0.0266 - acc: 0.9955 - val_loss: 1.6925 - val_acc: 0.7426
Epoch 48/100
40000/40000 [==============================] - 50s 1ms/step - loss: 0.0353 - acc: 0.9940 - val_loss: 2.2682 - val_acc: 0.6496
Epoch 49/100
40000/40000 [==============================] - 50s 1ms/step - loss: 1.6391 - acc: 0.4862 - val_loss: 1.2524 - val_acc: 0.5659
Epoch 50/100
40000/40000 [==============================] - 52s 1ms/step - loss: 0.9220 - acc: 0.6830 - val_loss: 0.9726 - val_acc: 0.6738
Epoch 51/100
40000/40000 [==============================] - 51s 1ms/step - loss: 0.5453 - acc: 0.8165 - val_loss: 1.0232 - val_acc: 0.6963

I quit execution after this, but this was my second run; in the first run the same thing happened, and after some time it got back to 99%.

The batch size is 128, so I guess that's not the problem. I haven't changed the learning rate or any other Adam parameters, but I guess that's also not the issue, since accuracy is increasing most of the time.

So, why are those sudden drops happening?

Solution

Since both the training and validation losses jump up (and accuracy drops accordingly), it looks like your optimization algorithm has temporarily overshot the downhill part of the loss function that it was trying to follow.

Remember that gradient descent and related methods calculate the gradient at a point and then use that (and sometimes some additional data) to guess the direction and distance to move. This is not always perfect, and sometimes the step goes too far and ends up further uphill again.
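As a toy illustration of that overshoot (this example is not from the original post; it uses a made-up quadratic loss), a single gradient-descent step with too large a step size lands at a higher loss instead of a lower one:

# Toy loss L(w) = w^2 with minimum at w = 0; gradient is dL/dw = 2w.
def loss(w):
    return w ** 2

def grad(w):
    return 2.0 * w

w = 1.0
for lr in (0.1, 1.2):          # modest vs. overly aggressive step size
    w_new = w - lr * grad(w)   # one gradient-descent update
    print(f"lr={lr}: loss {loss(w):.2f} -> {loss(w_new):.2f}")

# lr=0.1 moves downhill (loss 1.00 -> 0.64), while lr=1.2 overshoots
# the minimum and climbs back uphill (loss 1.00 -> 1.96).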

If your learning rate is aggressive you will see this every now and then, but you might still converge faster than with a smaller learning rate. You can experiment with different learning rates, but I would not be concerned unless your loss starts to diverge.
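If you want to experiment with that, one possible sketch in Keras is below. It assumes your model and data already exist under the placeholder names model, x_train and y_train (none of these appear in the original post); it simply starts Adam with a smaller learning rate than the default and lets ReduceLROnPlateau shrink it further when validation loss stops improving:

from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import ReduceLROnPlateau

# model, x_train, y_train are placeholders for your existing ResNet and CIFAR-10 data.

# Start with a smaller learning rate than Adam's default of 0.001.
model.compile(optimizer=Adam(learning_rate=5e-4),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Halve the learning rate whenever val_loss has not improved for 3 epochs.
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.5,
                              patience=3, min_lr=1e-5)

model.fit(x_train, y_train,
          batch_size=128,          # same batch size as in the question
          epochs=100,
          validation_split=0.2,    # 40000 train / 10000 validation, matching the logs
          callbacks=[reduce_lr])

With a gentler learning rate the occasional drops should become rarer, at the cost of somewhat slower convergence, so it is a trade-off rather than a required fix.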
