How to reduce overfitting in neural networks?

Problem description

I'm working on a sound recognition project.

I have 1500 labeled sound samples of 5 classes (300 sound samples, each 2 seconds long, per class).

I'm using an online tool (Edge Impulse) to calculate the MFCC coefficients, so I cannot provide that code. I'm then training a neural network on those features.

The dataset is split as follows:

  • 80% -> a training set, itself split 80/20 into training/validation

  • 20% -> a test set
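
For reference, a minimal sketch of such a split; scikit-learn is an assumption here (Edge Impulse handles the split internally), and X/Y stand for the feature matrix and one-hot labels:

from sklearn.model_selection import train_test_split

# hold out 20% as the test set (stratified so all 5 classes stay balanced)
X_trainval, X_test, Y_trainval, Y_test = train_test_split(
    X, Y, test_size=0.20, stratify=Y.argmax(axis=1), random_state=42)

# split the remaining 80% again, 80/20, into training and validation
X_train, X_val, Y_train, Y_val = train_test_split(
    X_trainval, Y_trainval, test_size=0.20,
    stratify=Y_trainval.argmax(axis=1), random_state=42)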

After 200 training cycles, the first release of my network had the following (very bad) performance:

training accuracy = 100% / validation accuracy = 30%

By searching on the net and on this forum, I found methods to reduce overfitting (dropout and L2 regularization, as visible in the code below).

The final performance of the latest release of my neural network is the following:

training accuracy = 80% / validation accuracy = 60% (after 200 training cycles)

As you can see, there is still a significant difference between training accuracy and validation accuracy.

My question is: how can I continue to increase my validation accuracy?

The code of my neural network:

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, InputLayer, Dropout, Conv1D, Flatten, Reshape, MaxPooling1D, BatchNormalization
from tensorflow.keras import regularizers
from tensorflow.keras.optimizers import Adam, Adadelta

# model architecture
# X_train holds the flattened MFCC features (13 coefficients per frame);
# classes is the number of labels (5 here)
model = Sequential()
model.add(InputLayer(input_shape=(X_train.shape[1], ), name='x_input'))
model.add(Reshape((int(X_train.shape[1] / 13), 13)))  # back to (frames, 13)
model.add(Conv1D(30, kernel_size=1, activation='relu', kernel_regularizer=regularizers.l2(0.001)))
model.add(Dropout(0.5))
model.add(MaxPooling1D(pool_size=1, padding='same'))
model.add(Conv1D(10, kernel_size=1, activation='relu', kernel_regularizer=regularizers.l2(0.001)))
model.add(Dropout(0.5))
model.add(MaxPooling1D(pool_size=1, padding='same'))
model.add(Flatten())
model.add(Dense(classes, activation='softmax', name='y_pred'))

# this controls the learning rate ('lr' is deprecated in recent Keras, use 'learning_rate')
opt = Adam(learning_rate=0.005, beta_1=0.9, beta_2=0.999)
#opt = Adadelta(learning_rate=1.0, rho=0.95)

# train the neural network
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
model.fit(X_train, Y_train, batch_size=50, epochs=200, validation_data=(X_test, Y_test), verbose=2)

Thanks,

Best regards,

Lionel

Answer

In general, to reduce overfitting, you can do the following (a sketch applying some of these ideas follows the list):

  1. Add more regularization (e.g., multiple dropout layers with higher dropout rates)
  2. Reduce the number of features
  3. Reduce the capacity of the network (e.g., fewer layers or fewer hidden units)
  4. Reduce the batch size
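
As a hedged illustration of items 1, 3 and 4 applied to the model from the question: the filter count, kernel size, pooling size, dropout rate and batch size below are assumptions chosen to show the idea, not tuned values.

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, InputLayer, Dropout, Conv1D, Flatten, Reshape, MaxPooling1D
from tensorflow.keras import regularizers
from tensorflow.keras.optimizers import Adam

model = Sequential()
model.add(InputLayer(input_shape=(X_train.shape[1], ), name='x_input'))
model.add(Reshape((int(X_train.shape[1] / 13), 13)))
# (3) reduced capacity: a single, smaller Conv1D layer
model.add(Conv1D(16, kernel_size=3, padding='same', activation='relu',
                 kernel_regularizer=regularizers.l2(0.001)))
model.add(MaxPooling1D(pool_size=2, padding='same'))
# (1) more aggressive dropout
model.add(Dropout(0.6))
model.add(Flatten())
model.add(Dense(classes, activation='softmax', name='y_pred'))

model.compile(loss='categorical_crossentropy',
              optimizer=Adam(learning_rate=0.005),
              metrics=['accuracy'])
# (4) smaller batch size
model.fit(X_train, Y_train, batch_size=16, epochs=200,
          validation_data=(X_test, Y_test), verbose=2)

Item 2 (fewer features) would happen upstream, e.g. by computing fewer MFCC coefficients per frame in Edge Impulse.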
