Saving the best model in Keras


Problem Description

I use the following code when training a model in Keras:

from keras.callbacks import EarlyStopping
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(100, activation='relu', input_shape=input_shape))
model.add(Dense(1))

model.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])

# Stop training once val_loss has not improved for 2 consecutive epochs
early_stopping_monitor = EarlyStopping(monitor='val_loss', patience=2)

model.fit(X, y, epochs=15, validation_split=0.4, callbacks=[early_stopping_monitor], verbose=False)

model.predict(X_test)

But recently I wanted to save the best model obtained during training, because the data I am training on produces a lot of spikes in the val_loss-vs-epochs graph, and I want to use the best model found so far.

Is there any method or function to help with that?

Recommended Answer

EarlyStopping and ModelCheckpoint from the Keras documentation are what you need.
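To make the behaviour of the first callback concrete, here is a small pure-Python sketch of the patience logic that EarlyStopping applies each epoch; the function name and the sample loss values are made up for illustration, not part of the Keras API:

```python
def early_stopping_epoch(val_losses, patience):
    """Return the 0-based epoch at which patience-based early stopping
    would halt training, or the last epoch if it never triggers."""
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:       # improvement: remember it and reset the counter
            best = loss
            wait = 0
        else:                 # no improvement this epoch
            wait += 1
            if wait >= patience:
                return epoch
    return len(val_losses) - 1

# With patience=2, training stops at epoch 3: epochs 2 and 3 both fail to improve on 0.4
print(early_stopping_epoch([0.5, 0.4, 0.45, 0.44, 0.43], patience=2))  # → 3
```

This is why a noisy val_loss curve alone does not stop training immediately: a single bad epoch only increments the counter, and any new best resets it.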

You should set save_best_only=True in ModelCheckpoint. Any other adjustments you might need are trivial.
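As a rough illustration of what save_best_only=True does: the checkpoint file is rewritten only on epochs where the monitored metric improves, so it always holds the best model so far. The helper below is a hypothetical sketch of that logic, not Keras code:

```python
def checkpoint_epochs(val_losses):
    """Return the 0-based epochs on which ModelCheckpoint(save_best_only=True,
    monitor='val_loss', mode='min') would overwrite the checkpoint file."""
    best = float("inf")
    saved = []
    for epoch, loss in enumerate(val_losses):
        if loss < best:          # new best val_loss: the checkpoint is written
            best = loss
            saved.append(epoch)
    return saved

# Only the improving epochs trigger a save; the spike at epoch 2 is ignored
print(checkpoint_epochs([0.5, 0.4, 0.45, 0.3]))  # → [0, 1, 3]
```

After training finishes, the best weights can be restored with model.load_weights() on the checkpoint path (or load_model() if the full model was saved) before calling predict.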

To help you further, you can see an example of this usage here on Kaggle.

Adding the code here in case the above Kaggle example link becomes unavailable:

from keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau

model = getModel()
model.summary()

batch_size = 32

earlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')
# save_best_only=True overwrites the checkpoint file only when val_loss improves
mcp_save = ModelCheckpoint('.mdl_wts.hdf5', save_best_only=True, monitor='val_loss', mode='min')
# Note: in newer Keras versions the 'epsilon' argument is named 'min_delta'
reduce_lr_loss = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=7, verbose=1, epsilon=1e-4, mode='min')

model.fit(Xtr_more, Ytr_more, batch_size=batch_size, epochs=50, verbose=0, callbacks=[earlyStopping, mcp_save, reduce_lr_loss], validation_split=0.25)
