What is the difference between xgb.train and xgb.XGBRegressor (or xgb.XGBClassifier)?
Question
I already know that "xgboost.XGBRegressor is a Scikit-Learn Wrapper interface for XGBoost."

But do they have any other differences?
Recommended answer
xgboost.train is the low-level API to train a model via the gradient boosting method.
xgboost.XGBRegressor and xgboost.XGBClassifier are the wrappers (Scikit-Learn-like wrappers, as they call it) that prepare the DMatrix and pass in the corresponding objective function and parameters. In the end, the fit call simply boils down to:
self._Booster = train(params, dmatrix,
                      self.n_estimators, evals=evals,
                      early_stopping_rounds=early_stopping_rounds,
                      evals_result=evals_result, obj=obj, feval=feval,
                      verbose_eval=verbose)
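To make the relationship concrete, here is a minimal sketch (not from the original answer) of a wrapper-based fit next to the roughly equivalent low-level call. The synthetic data and parameter values are placeholders, and the objective name assumes a recent xgboost release:

import numpy as np
import xgboost as xgb

# Placeholder data for illustration only.
X = np.random.rand(100, 5)
y = np.random.rand(100)

# High-level wrapper: builds the DMatrix and parameter dict internally.
reg = xgb.XGBRegressor(n_estimators=10, max_depth=3, learning_rate=0.1)
reg.fit(X, y)

# Roughly equivalent low-level call: DMatrix and params built by hand.
dtrain = xgb.DMatrix(X, label=y)
params = {"objective": "reg:squarederror", "max_depth": 3, "eta": 0.1}
booster = xgb.train(params, dtrain, num_boost_round=10)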
This means that everything that can be done with XGBRegressor and XGBClassifier is doable via the underlying xgboost.train function. The other way around is obviously not true; for instance, some useful parameters of xgboost.train are not supported in the XGBModel API. The list of notable differences includes (each is illustrated in the sketch after the list):
- xgboost.train allows setting callbacks applied at the end of each iteration.
- xgboost.train allows training continuation via the xgb_model parameter.
- xgboost.train allows not only minimization of the eval function, but maximization as well.
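For illustration, a minimal sketch of those three extras, assuming an xgboost 1.x release (>= 1.3 for the callback class shown; feval was later deprecated in favor of custom_metric), with placeholder data and parameters:

import numpy as np
import xgboost as xgb

X = np.random.rand(100, 5)
y = np.random.rand(100)
dtrain = xgb.DMatrix(X, label=y)
params = {"objective": "reg:squarederror", "max_depth": 3}

# 1. Callbacks applied at the end of each iteration.
booster = xgb.train(params, dtrain, num_boost_round=10,
                    evals=[(dtrain, "train")],
                    callbacks=[xgb.callback.EvaluationMonitor()])

# 2. Training continuation: resume boosting from an existing model.
booster = xgb.train(params, dtrain, num_boost_round=5, xgb_model=booster)

# 3. Maximize a custom eval metric instead of minimizing it.
def neg_rmse(preds, dmat):
    labels = dmat.get_label()
    return "neg_rmse", -float(np.sqrt(np.mean((preds - labels) ** 2)))

booster = xgb.train(params, dtrain, num_boost_round=10,
                    evals=[(dtrain, "train")],
                    feval=neg_rmse, maximize=True)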