Setting Tol for XGBoost Early Stopping


Question

I am using XGBoost with early stopping. After about 1000 epochs, the model is still improving, but the magnitude of improvement is very low. I.e.:

 # Stop training if the watchlist metric has not improved for 10 rounds.
 clf = xgb.train(params, dtrain, num_boost_round=num_rounds, evals=watchlist, early_stopping_rounds=10)

Is it possible to set a "tol" for early stopping? I.e.: the minimum level of improvement that is required to not trigger early stopping.

Tol is a common parameter in SKLearn models, such as MLPClassifier and QuadraticDiscriminantAnalysis. Thank you.
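For context, a minimal sketch of what tol means in scikit-learn (tol=1e-4 and n_iter_no_change=10 are the library defaults for MLPClassifier; max_iter here is an illustrative choice, not from the question):

 from sklearn.neural_network import MLPClassifier

 # In scikit-learn, training stops once the loss fails to improve by at
 # least `tol` for `n_iter_no_change` consecutive iterations.
 clf = MLPClassifier(max_iter=1000, tol=1e-4, n_iter_no_change=10)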

Answer

I do not think there is a tol parameter in xgboost, but you can set early_stopping_rounds higher. This parameter means that if the performance on the evaluation set does not improve for early_stopping_rounds rounds, training stops. If you know that after 1000 epochs your model is still improving, but very slowly, set early_stopping_rounds to 50, for example, so early stopping will be more "tolerant" of small changes in performance.
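A minimal sketch of that suggestion, assuming a binary-classification setup (the toy data and parameter values below are illustrative placeholders standing in for the asker's own params, dtrain, and watchlist):

 import numpy as np
 import xgboost as xgb

 # Toy data in place of the asker's real training set (hypothetical).
 X, y = np.random.rand(500, 10), np.random.randint(2, size=500)
 dtrain = xgb.DMatrix(X[:400], label=y[:400])
 dtest = xgb.DMatrix(X[400:], label=y[400:])

 params = {"objective": "binary:logistic", "eval_metric": "logloss"}
 watchlist = [(dtrain, "train"), (dtest, "eval")]

 # With early_stopping_rounds=50, training only stops after 50 consecutive
 # rounds with no improvement on the last entry of `evals`, so tiny gains
 # are tolerated far longer than with the original value of 10.
 clf = xgb.train(params, dtrain, num_boost_round=5000,
                 evals=watchlist, early_stopping_rounds=50)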
