Saving best model in Keras

Problem Description

I use the following code when training a model in Keras:

from keras.callbacks import EarlyStopping
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(100, activation='relu', input_shape=input_shape))
model.add(Dense(1))

model.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])

# early stopping on val_loss (the patience value here is arbitrary)
early_stopping_monitor = EarlyStopping(monitor='val_loss', patience=2)

model.fit(X, y, epochs=15, validation_split=0.4, callbacks=[early_stopping_monitor], verbose=False)

model.predict(X_test)

But recently I wanted to save the best trained model, because the data I am training on produces a lot of spikes in the val_loss vs. epochs graph, and I want to use the best model obtained so far.

Is there any method or function to help with that?

Recommended Answer

EarlyStopping and ModelCheckpoint from the Keras documentation are what you need.

You should set save_best_only=True in ModelCheckpoint. Any other adjustments you may need are trivial.
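
Just to illustrate the idea, here is a minimal sketch (the file name best_model.hdf5 and the patience value are placeholders, not from the original post): ModelCheckpoint writes the model with the lowest val_loss to disk during training, and load_model restores it afterwards so predictions come from the best epoch rather than the last one.

from keras.callbacks import EarlyStopping, ModelCheckpoint
from keras.models import load_model

# save only the model with the lowest val_loss seen so far
checkpoint = ModelCheckpoint('best_model.hdf5', monitor='val_loss', save_best_only=True, mode='min')
early_stopping = EarlyStopping(monitor='val_loss', patience=10, mode='min')

model.fit(X, y, epochs=50, validation_split=0.4, callbacks=[early_stopping, checkpoint], verbose=0)

# reload the best checkpoint and use it for prediction
best_model = load_model('best_model.hdf5')
best_model.predict(X_test)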

Just to help you more, you can see a usage example on Kaggle.

Adding the code here in case the above Kaggle example link is not available:

from keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau

model = getModel()
model.summary()

batch_size = 32

# stop once val_loss has not improved for 10 epochs
earlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')
# keep only the weights of the epoch with the lowest val_loss
mcp_save = ModelCheckpoint('.mdl_wts.hdf5', save_best_only=True, monitor='val_loss', mode='min')
# reduce the learning rate when val_loss plateaus (min_delta replaced the older epsilon argument)
reduce_lr_loss = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=7, verbose=1, min_delta=1e-4, mode='min')

model.fit(Xtr_more, Ytr_more, batch_size=batch_size, epochs=50, verbose=0, callbacks=[earlyStopping, mcp_save, reduce_lr_loss], validation_split=0.25)
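
After training finishes, you would typically reload the checkpoint written by mcp_save before predicting, so that you use the weights of the best epoch rather than the last one (a short sketch; X_test here is a placeholder for your held-out data):

model.load_weights('.mdl_wts.hdf5')
model.predict(X_test)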
