Keras Callback EarlyStopping comparing training and validation loss


Problem description

I'm fitting a neural network in Python Keras.

To avoid overfitting, I would like to monitor the training and validation losses and create a proper callback that stops training when the training loss becomes much smaller than the validation loss.

An example of a callback is:

callback = [EarlyStopping(monitor='val_loss', value=45, verbose=0, mode='auto')]
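Note that value is not an argument of the standard Keras EarlyStopping callback, which can only watch a single logged metric, so it cannot compare the training loss against the validation loss. For contrast, a typical built-in usage looks like this minimal sketch (the patience value is just an example):

from keras.callbacks import EarlyStopping

# The built-in callback stops when the single monitored metric stops
# improving for `patience` epochs; it cannot relate two metrics.
callback = [EarlyStopping(monitor='val_loss', patience=5, verbose=0, mode='auto')]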

Is there a way to stop training when the training loss becomes too small compared to the validation loss?

Thanks in advance.

Recommended answer

You can create a custom callback class for your purpose.

I have created one that should correspond to your need:

import warnings

import numpy as np
from keras.callbacks import Callback


class CustomEarlyStopping(Callback):
    def __init__(self, ratio=0.0,
                 patience=0, verbose=0):
        super(CustomEarlyStopping, self).__init__()

        self.ratio = ratio          # threshold for train_loss / val_loss
        self.patience = patience    # epochs to wait once the threshold is broken
        self.verbose = verbose
        self.wait = 0
        self.stopped_epoch = 0
        self.monitor_op = np.greater

    def on_train_begin(self, logs=None):
        self.wait = 0  # Allow instances to be re-used

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        current_val = logs.get('val_loss')
        current_train = logs.get('loss')
        if current_val is None:
            warnings.warn('Early stopping requires val_loss available!',
                          RuntimeWarning)
            return

        # While train_loss / val_loss stays above the ratio, all is well and
        # the counter is reset; otherwise the training loss has dropped too
        # far below the validation loss, so count epochs towards `patience`.
        if self.monitor_op(np.divide(current_train, current_val), self.ratio):
            self.wait = 0
        else:
            if self.wait >= self.patience:
                self.stopped_epoch = epoch
                self.model.stop_training = True
            self.wait += 1

    def on_train_end(self, logs=None):
        if self.stopped_epoch > 0 and self.verbose > 0:
            print('Epoch %05d: early stopping' % (self.stopped_epoch))

I took the liberty of interpreting your request as: stop if the ratio between the train_loss and the validation_loss goes under a certain threshold. This ratio argument should be between 0.0 and 1.0. However, 1.0 is dangerous, as the validation loss and the training loss can fluctuate erratically at the beginning of training.

You can add a patience argument, which waits to see whether the breaking of your threshold persists for a certain number of epochs before stopping.
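To make the interaction of ratio and patience concrete, below is a standalone simulation of the counter logic from the class above (the loss values are made up):

# Standalone trace of the wait/patience counter with ratio=0.5, patience=2.
# Each tuple is a made-up (train_loss, val_loss) pair for one epoch.
history = [(0.9, 1.0), (0.4, 1.0), (0.3, 1.0), (0.2, 1.0)]

ratio, patience, wait = 0.5, 2, 0
for epoch, (train_loss, val_loss) in enumerate(history):
    if train_loss / val_loss > ratio:
        wait = 0  # ratio is healthy again: reset the counter
    else:
        if wait >= patience:
            print('Epoch %d: early stopping' % epoch)  # fires at epoch 3
            break
        wait += 1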

It can be used, for example, like this:

callbacks = [CustomEarlyStopping(ratio=0.5, patience=2, verbose=1),
             ... Other callbacks ...]
...
model.fit(..., callbacks=callbacks)
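Put together, a minimal end-to-end sketch could look like the following (the model and the random data are purely illustrative; note that val_loss is only logged when validation data is provided to fit):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Toy data: 1000 samples with 20 features, binary targets (illustrative only).
x = np.random.rand(1000, 20)
y = np.random.randint(0, 2, size=(1000, 1))

model = Sequential([Dense(16, activation='relu', input_shape=(20,)),
                    Dense(1, activation='sigmoid')])
model.compile(optimizer='adam', loss='binary_crossentropy')

# validation_split makes Keras log val_loss, which the callback needs.
model.fit(x, y,
          epochs=50,
          validation_split=0.2,
          callbacks=[CustomEarlyStopping(ratio=0.5, patience=2, verbose=1)])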

In this case, it will stop if the training loss stays lower than 0.5 * val_loss for more than 2 epochs.

Does this help you?
