Keras: model accuracy drops after reaching 99 percent accuracy and loss 0.01


Problem description

I am using an adapted LeNet model in Keras to do binary classification. I have about 250,000 training samples with a 60/40 class ratio. My model is training very well: in the first epoch the accuracy reaches 97 percent with a loss of 0.07, and after 10 epochs the accuracy is over 99 percent with a loss of 0.01. I am using a CheckPointer to save my models when they improve.
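
(As a minimal sketch of the checkpointing setup mentioned above, assuming Keras' standard ModelCheckpoint callback; the filepath is illustrative:)

from keras.callbacks import ModelCheckpoint

# Save weights only when the monitored validation loss improves;
# 'lenet_weights.hdf5' is an illustrative filepath.
checkpointer = ModelCheckpoint(filepath='lenet_weights.hdf5', monitor='val_loss',
                               save_best_only=True, verbose=1)
# Passed to model.fit(..., callbacks=[checkpointer]).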

Around the 11th epoch the accuracy drops to around 55 percent, with a loss of around 6. How could this be possible? Is it because the model cannot get any more accurate, and in trying to find better weights it fails completely?

My model is an adaptation of the LeNet model:

# Imports needed to run this snippet (Keras 2-style API; assumed from context).
from keras import models
from keras.layers import (Convolution2D, Activation, BatchNormalization,
                          MaxPooling2D, Flatten, Dense, Dropout)
from keras.optimizers import Adam

# filt_size, kern_size, maxpool_size, input_shape and n_classes are
# hyperparameters defined elsewhere.

lenet_model = models.Sequential()

# Convolutional block 1
lenet_model.add(Convolution2D(filters=filt_size, kernel_size=(kern_size, kern_size),
                              padding='valid', input_shape=input_shape))
lenet_model.add(Activation('relu'))
lenet_model.add(BatchNormalization())
lenet_model.add(MaxPooling2D(pool_size=(maxpool_size, maxpool_size)))

# Convolutional block 2
lenet_model.add(Convolution2D(filters=64, kernel_size=(kern_size, kern_size), padding='valid'))
lenet_model.add(Activation('relu'))
lenet_model.add(BatchNormalization())
lenet_model.add(MaxPooling2D(pool_size=(maxpool_size, maxpool_size)))

# Convolutional block 3
lenet_model.add(Convolution2D(filters=128, kernel_size=(kern_size, kern_size), padding='valid'))
lenet_model.add(Activation('relu'))
lenet_model.add(BatchNormalization())
lenet_model.add(MaxPooling2D(pool_size=(maxpool_size, maxpool_size)))

# Fully connected classifier head
lenet_model.add(Flatten())
lenet_model.add(Dense(1024, kernel_initializer='uniform'))
lenet_model.add(Activation('relu'))
lenet_model.add(Dense(512, kernel_initializer='uniform'))
lenet_model.add(Activation('relu'))
lenet_model.add(Dropout(0.2))
lenet_model.add(Dense(n_classes, kernel_initializer='uniform'))
lenet_model.add(Activation('softmax'))

lenet_model.compile(loss='binary_crossentropy', optimizer=Adam(), metrics=['accuracy'])

Recommended answer

The problem lies in applying a binary_crossentropy loss where, in this case, categorical_crossentropy should be applied. An alternative is to keep the binary_crossentropy loss but change the output layer to have dim=1 and its activation to sigmoid. The weird behaviour comes from the fact that binary_crossentropy applied to a two-unit softmax output effectively treats each output unit as an independent binary classification (a multilabel setup with two classes), whereas your task is a single binary classification.
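
A minimal sketch of the two fixes described above, reusing the model definition from the question (the two options are mutually exclusive):

# Option 1: keep the two-unit softmax output, switch to categorical_crossentropy.
# Labels must be one-hot encoded, e.g. via keras.utils.to_categorical(y, 2).
lenet_model.add(Dense(2, kernel_initializer='uniform'))
lenet_model.add(Activation('softmax'))
lenet_model.compile(loss='categorical_crossentropy', optimizer=Adam(), metrics=['accuracy'])

# Option 2: keep binary_crossentropy, switch to a single sigmoid output.
# Labels stay as a flat vector of 0s and 1s.
lenet_model.add(Dense(1, kernel_initializer='uniform'))
lenet_model.add(Activation('sigmoid'))
lenet_model.compile(loss='binary_crossentropy', optimizer=Adam(), metrics=['accuracy'])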
