Difference between cross_val_score and cross_val_predict


Question

I want to evaluate a regression model built with scikit-learn using cross-validation, and I am confused about which of the two functions, cross_val_score and cross_val_predict, I should use. One option would be:

from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

cvs = DecisionTreeRegressor(max_depth=depth)
scores = cross_val_score(cvs, predictors, target, cv=cvfolds, scoring='r2')
print("R2-Score: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))

Another option is to use the cv predictions with the standard r2_score:

from sklearn.metrics import r2_score
from sklearn.model_selection import cross_val_predict

cvp = DecisionTreeRegressor(max_depth=depth)
predictions = cross_val_predict(cvp, predictors, target, cv=cvfolds)
print("CV R^2-Score: {}".format(r2_score(target, predictions)))

I would assume that both methods are valid and give similar results. But that is only the case with a small number of folds. While the R² is roughly the same for 10-fold CV, it gets increasingly lower for higher values of k in the first version using cross_val_score. The second version is mostly unaffected by the number of folds.
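A minimal, self-contained sketch of this comparison (using a synthetic dataset from make_regression and an assumed max_depth of 4 in place of the original predictors, target, and depth):

from sklearn.datasets import make_regression
from sklearn.metrics import r2_score
from sklearn.model_selection import cross_val_predict, cross_val_score
from sklearn.tree import DecisionTreeRegressor

# synthetic stand-in for the original predictors/target (assumption)
X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)

for k in (5, 10, 20, 50):
    model = DecisionTreeRegressor(max_depth=4, random_state=0)
    # mean of the per-fold R^2 scores vs. R^2 over the pooled out-of-fold predictions
    fold_scores = cross_val_score(model, X, y, cv=k, scoring='r2')
    preds = cross_val_predict(model, X, y, cv=k)
    print("k=%2d  mean fold R2: %.3f  pooled R2: %.3f"
          % (k, fold_scores.mean(), r2_score(y, preds)))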

Is this behavior to be expected? Do I lack some understanding of CV in scikit-learn?

Answer

cross_val_score returns a score for each test fold, whereas cross_val_predict returns the predicted y values for the test folds.
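A small sketch of that difference in the returned objects (the synthetic data, estimator, and parameter values here are illustrative assumptions, not taken from the question):

from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_predict, cross_val_score
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=100, n_features=4, noise=5.0, random_state=1)
model = DecisionTreeRegressor(max_depth=3, random_state=1)

scores = cross_val_score(model, X, y, cv=5, scoring='r2')  # one R^2 per test fold
preds = cross_val_predict(model, X, y, cv=5)               # one prediction per sample

print(scores.shape)  # (5,)
print(preds.shape)   # (100,)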

For cross_val_score(), you are using the average of the per-fold scores, and that average is affected by the number of folds: with many folds, some test folds may have a high error (the model does not fit them well), which drags the mean down.

Whereas cross_val_predict() returns, for each element in the input, the prediction that was obtained for that element when it was in the test set. (Note that only cross-validation strategies that assign all elements to a test set exactly once can be used.) So increasing the number of folds only increases the training data available for each test element, and hence its result may not be affected much.
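To illustrate the "exactly once" point, the out-of-fold predictions can be rebuilt by hand with a plain KFold loop. This is only a sketch on the same kind of assumed synthetic data as above, not the library's internal code:

import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import KFold, cross_val_predict
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=100, n_features=4, noise=5.0, random_state=1)
model = DecisionTreeRegressor(max_depth=3, random_state=1)

# every sample falls in exactly one test fold, so it receives exactly one prediction
manual_preds = np.empty_like(y)
for train_idx, test_idx in KFold(n_splits=5).split(X):
    model.fit(X[train_idx], y[train_idx])
    manual_preds[test_idx] = model.predict(X[test_idx])

print(np.allclose(manual_preds, cross_val_predict(model, X, y, cv=5)))  # True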

Hope this helps. Feel free to ask if anything is unclear.

Answer to the question in the comments

Please have a look at the following answer on how cross_val_predict works:

I think that cross_val_predict can overfit: as the number of folds increases, more data is used for training and less for testing, so the resulting predictions depend more on the training data. Also, as noted above, the prediction for each sample is made only once, so it may be more susceptible to how the data happens to be split. That is why most places and tutorials recommend using cross_val_score for analysis.

