Why my acc always higher but my val_acc is very small?


Problem Description

I train on 14,000 training images and 3,500 validation images, but every time I train I get high training accuracy while the validation accuracy stays very low.

So what should I do if I want the validation accuracy to be close to the training accuracy, with meaningful improvement in each epoch?

Is there something that has to be added or removed? [sorry for bad english]

from keras.models import Sequential
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense

classifier = Sequential()


classifier.add(Conv2D(16, (3, 3), input_shape = (64, 64, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))


classifier.add(Conv2D(32, (3, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))


classifier.add(Conv2D(64, (3, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))

classifier.add(Flatten())

classifier.add(Dense(units = 128, activation = 'relu'))
classifier.add(Dense(units = 1, activation = 'sigmoid'))

classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])

from keras.callbacks import TensorBoard
# Use TensorBoard
callbacks = TensorBoard(log_dir='./Graph')

from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(rescale = 1./255,
                                   shear_range = 0.2,
                                   zoom_range = 0.2,
                                   horizontal_flip = True)

test_datagen = ImageDataGenerator(rescale = 1./255)

training_set = train_datagen.flow_from_directory('dataset/training_set',
                                                 target_size = (64, 64),
                                                 batch_size = 32,
                                                 class_mode = 'binary')

test_set = test_datagen.flow_from_directory('dataset/test_set',
                                            target_size = (64, 64),
                                            batch_size = 32,
                                            class_mode = 'binary')

classifier.fit_generator(training_set,
                         steps_per_epoch = 100,
                         epochs = 200,
                         validation_data = test_set,
                         validation_steps = 200)

classifier.save('model.h5')

I got this result (sorry, I don't know how to put an image here):

Epoch 198/200 100/100 [==============================] - 114s 1s/step - loss: 0.1032 - acc: 0.9619 - val_loss: 1.1953 - val_acc: 0.7160

Epoch 199/200 100/100 [==============================] - 115s 1s/step - loss: 0.1107 - acc: 0.9591 - val_loss: 1.4148 - val_acc: 0.6702

Epoch 200/200 100/100 [==============================] - 112s 1s/step - loss: 0.1229 - acc: 0.9528 - val_loss: 1.2995 - val_acc: 0.6928

Answer

When your training accuracy is high but your validation accuracy is low, you have overfitted your model. Simply put, the model has learned the structure of the training data but is unable to generalize to unseen data. To reduce overfitting, you can try to


  • simplify your model,
  • introduce dropout to some layers,
  • use bigger training batches.
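The dropout suggestion can be sketched on the model from the question. A minimal example that inserts a `Dropout` layer before the final classifier (the 0.5 rate is an assumption, not a tuned value; adjust it for your data):

```python
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

classifier = Sequential()
classifier.add(Conv2D(16, (3, 3), input_shape=(64, 64, 3), activation='relu'))
classifier.add(MaxPooling2D(pool_size=(2, 2)))
classifier.add(Conv2D(32, (3, 3), activation='relu'))
classifier.add(MaxPooling2D(pool_size=(2, 2)))
classifier.add(Conv2D(64, (3, 3), activation='relu'))
classifier.add(MaxPooling2D(pool_size=(2, 2)))
classifier.add(Flatten())
classifier.add(Dense(units=128, activation='relu'))
# Randomly zero half of the 128 activations during training only;
# at inference time dropout is disabled automatically.
classifier.add(Dropout(0.5))
classifier.add(Dense(units=1, activation='sigmoid'))
classifier.compile(optimizer='adam', loss='binary_crossentropy',
                   metrics=['accuracy'])
```

Because dropout prevents the dense layer from relying on any single feature, the gap between acc and val_acc usually narrows, at the cost of somewhat slower convergence.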
