Keras Callback EarlyStopping comparing training and validation loss


Problem Description

I'm fitting a neural network in Python Keras.

To avoid overfitting I would like to monitor the training/validation loss and create a proper callback which stops computations when training loss is too much less than validation loss.

An example of such a callback would be:

callback = [EarlyStopping(monitor='val_loss', value=45, verbose=0, mode='auto')]

Is there a way to stop training when the training loss becomes too small compared to the validation loss?

Thanks in advance.

Recommended Answer

You can create a custom callback class for your purpose.

I have created one that should correspond to your need:

import warnings

import numpy as np
from keras.callbacks import Callback


class CustomEarlyStopping(Callback):
    def __init__(self, ratio=0.0,
                 patience=0, verbose=0):
        super(CustomEarlyStopping, self).__init__()

        self.ratio = ratio
        self.patience = patience
        self.verbose = verbose
        self.wait = 0
        self.stopped_epoch = 0
        self.monitor_op = np.greater

    def on_train_begin(self, logs=None):
        self.wait = 0  # Allow instances to be re-used

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        current_val = logs.get('val_loss')
        current_train = logs.get('loss')
        if current_val is None or current_train is None:
            warnings.warn('Early stopping requires loss and val_loss available!',
                          RuntimeWarning)
            return

        # While loss / val_loss stays above self.ratio, reset the patience counter
        if self.monitor_op(np.divide(current_train, current_val), self.ratio):
            self.wait = 0
        else:
            if self.wait >= self.patience:
                self.stopped_epoch = epoch
                self.model.stop_training = True
            self.wait += 1

    def on_train_end(self, logs=None):
        if self.stopped_epoch > 0 and self.verbose > 0:
            print('Epoch %05d: early stopping' % (self.stopped_epoch))

I took the liberty to interpret that you wanted to stop if the ratio between the train_loss and the validation_loss goes under a certain ratio threshold. This ratio argument should be between 0.0 and 1.0. However, 1.0 is dangerous as the validation loss and the training loss might fluctuate a lot in an erratic way at the beginning of the training.
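To make that threshold concrete, here is a minimal sketch (plain Python, function name and values are illustrative, not from the original answer) of the per-epoch test the callback performs:

```python
import numpy as np

def ratio_ok(train_loss, val_loss, ratio):
    # Mirrors the callback's check: the patience counter resets
    # while train_loss / val_loss stays above the ratio threshold.
    return np.divide(train_loss, val_loss) > ratio

print(ratio_ok(0.9, 1.0, 0.5))  # True  -> training continues, wait resets
print(ratio_ok(0.2, 1.0, 0.5))  # False -> counts toward patience
```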

You can add a patience argument which will wait to see if the breaking of your threshold is staying for a certain number of epochs.

Example usage:

callbacks = [CustomEarlyStopping(ratio=0.5, patience=2, verbose=1), 
            ... Other callbacks ...]
...
model.fit(..., callbacks=callbacks)

In this case it will stop if the training loss stays lower than 0.5*val_loss for more than 2 epochs.
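As a sanity check of that behaviour, the patience logic can be simulated without Keras on fabricated loss values (the function and the numbers below are made up for illustration):

```python
def find_stop_epoch(history, ratio=0.5, patience=2):
    # Standalone re-implementation of the callback's counter, for illustration.
    wait = 0
    for epoch, (train, val) in enumerate(history):
        if train / val > ratio:
            wait = 0
        else:
            if wait >= patience:
                return epoch  # here the callback would set model.stop_training
            wait += 1
    return None  # threshold never held long enough

# (train_loss, val_loss) per epoch; train dips below 0.5 * val from epoch 2 on
history = [(1.0, 1.1), (0.8, 1.0), (0.4, 1.0), (0.3, 1.0), (0.2, 1.0)]
print(find_stop_epoch(history))  # stops at epoch 4, after 2 epochs of patience
```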

Does that help you?

