Changing optimizer in Keras during training

Question
I am developing a model using the nadam optimizer. I was wondering if there is a way to switch to sgd during training if the validation loss does not decrease for two epochs.
Answer
You can create an EarlyStopping callback that stops the training, and in this callback you call a function that changes the optimizer and fits again.
The following callback will monitor the validation loss (val_loss) and stop training after two epochs (patience) without an improvement greater than min_delta.
from keras.callbacks import EarlyStopping

min_delta = 0.000000000001
stopper = EarlyStopping(monitor='val_loss', min_delta=min_delta, patience=2)
But to add an extra action after training finishes, we can extend this callback and override the on_train_end method:
class OptimizerChanger(EarlyStopping):

    def __init__(self, on_train_end, **kwargs):
        self.do_on_train_end = on_train_end
        super(OptimizerChanger, self).__init__(**kwargs)

    def on_train_end(self, logs=None):
        # Fix: call the parent hook on this instance, without passing
        # self twice, then run the user-supplied action.
        super(OptimizerChanger, self).on_train_end(logs)
        self.do_on_train_end()
The custom function to call when the model finishes training:
def do_after_training():
    # warning: this creates a new optimizer and, at the beginning,
    # it might give you worse training performance than before
    model.compile(optimizer='sgd', loss=..., metrics=...)
    model.fit(...)
Now let's use the callback:
changer = OptimizerChanger(on_train_end=do_after_training,
                           monitor='val_loss',
                           min_delta=min_delta,
                           patience=2)
model.fit(..., ..., callbacks=[changer])
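To make the hand-off concrete without running Keras, here is a minimal, self-contained sketch of the same control flow. `SimpleEarlyStopping` and the fake loss values are illustrative stand-ins (only mimicking the patience/min_delta bookkeeping), not the real Keras API:

```python
# Pure-Python sketch: mimic EarlyStopping's patience logic so the
# on_train_end hand-off can be demonstrated without a training loop.

class SimpleEarlyStopping:
    def __init__(self, min_delta=0.0, patience=0):
        self.min_delta = min_delta
        self.patience = patience
        self.best = float('inf')
        self.wait = 0
        self.stop_training = False

    def on_epoch_end(self, epoch, val_loss):
        # Reset the wait counter on a real improvement, otherwise count
        # epochs without one and stop after `patience` of them.
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.wait = 0
        else:
            self.wait += 1
            if self.wait >= self.patience:
                self.stop_training = True

    def on_train_end(self, logs=None):
        pass

class OptimizerChanger(SimpleEarlyStopping):
    def __init__(self, on_train_end, **kwargs):
        self.do_on_train_end = on_train_end
        super(OptimizerChanger, self).__init__(**kwargs)

    def on_train_end(self, logs=None):
        super(OptimizerChanger, self).on_train_end(logs)
        self.do_on_train_end()

events = []

def do_after_training():
    # In real code this would recompile the model with 'sgd' and fit again.
    events.append('switched to SGD')

changer = OptimizerChanger(on_train_end=do_after_training,
                           min_delta=1e-12, patience=2)

# Fake "training": val_loss improves, then stalls for two epochs.
for epoch, val_loss in enumerate([0.9, 0.5, 0.5, 0.5]):
    changer.on_epoch_end(epoch, val_loss)
    if changer.stop_training:
        break
changer.on_train_end()

print(events)  # ['switched to SGD']
```

Training "stops" at the second stalled epoch, and only then does on_train_end fire the optimizer switch, which is exactly the ordering the Keras callback above relies on.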