Stop Keras training when the network has fully converged


Question

How do I configure Keras to stop training when the model converges or the loss reaches 0? I intentionally want to overfit it. I don't want to set a number of epochs; I just want training to stop when it converges.

Answer

Use an EarlyStopping callback. You can freely choose which loss/metric to observe and when to stop.

Usually you would watch the validation loss (val_loss), since that is the most important signal that your model is still learning to generalize.

But since you said you want to overfit, you can watch the training loss (loss) instead.

The callback works with "deltas", not absolute values, which is good because the loss doesn't necessarily have zero as its goal. You can, however, use the baseline argument to set an absolute threshold.
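For example (the threshold and patience values here are illustrative, not from the original answer), a callback that gives up unless the training loss improves past an absolute baseline early on might look like:

```python
from keras.callbacks import EarlyStopping

# Training stops if the monitored loss fails to improve over the
# absolute baseline value within `patience` epochs.
absoluteCallback = EarlyStopping(monitor='loss', baseline=0.5, patience=10)
```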

So, the usual callback, which watches the validation loss:

from keras.callbacks import EarlyStopping
usualCallback = EarlyStopping()

This is equivalent to EarlyStopping(monitor='val_loss', min_delta=0, patience=0).

And one that will let the model overfit:

overfitCallback = EarlyStopping(monitor='loss', min_delta=0, patience = 20)

Watch out for the patience argument; it matters because the loss value doesn't decrease at every epoch. Give the model a few extra epochs to keep trying before ending.
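The min_delta/patience logic can be sketched in plain Python (a simplified model of what the callback does, not Keras's actual implementation):

```python
def early_stop_epoch(losses, min_delta=0.0, patience=0):
    """Return the 1-based epoch at which training would stop, or None.

    An epoch counts as an improvement only if the loss drops by more
    than min_delta below the best value seen so far; training stops
    once more than `patience` epochs pass without improvement.
    """
    best = float("inf")
    wait = 0  # epochs since the last improvement
    for epoch, loss in enumerate(losses, start=1):
        if loss < best - min_delta:
            best = loss
            wait = 0
        else:
            wait += 1
            if wait >= max(patience, 1):
                return epoch
    return None  # training ran to the end without stopping
```

With patience=0, the first non-improving epoch stops training; with a larger patience, the loss is allowed to plateau or rise for that many epochs before stopping.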

Finally, just pass the callback to fit along with a huge number of epochs:

model.fit(X, Y, epochs=100000000, callbacks=[overfitCallback])

