How to correctly perform cross validation in scikit-learn?


Problem Description

I am trying to do cross validation on a k-NN classifier, and I am confused about which of the two methods below conducts cross validation correctly.

from collections import defaultdict

import numpy as np
from sklearn import model_selection
from sklearn.model_selection import StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier

training_scores = defaultdict(list)
validation_f1_scores = defaultdict(list)
validation_precision_scores = defaultdict(list)
validation_recall_scores = defaultdict(list)
validation_scores = defaultdict(list)

def model_1(seed, X, Y):
    np.random.seed(seed)
    scoring = ['accuracy', 'f1_macro', 'precision_macro', 'recall_macro']
    model = KNeighborsClassifier(n_neighbors=13)

    kfold = StratifiedKFold(n_splits=2, shuffle=True, random_state=seed)
    scores = model_selection.cross_validate(model, X, Y, cv=kfold, scoring=scoring, return_train_score=True)
    print(scores['train_accuracy'])
    training_scores['KNeighbour'].append(scores['train_accuracy'])
    print(scores['test_f1_macro'])
    validation_f1_scores['KNeighbour'].append(scores['test_f1_macro'])
    print(scores['test_precision_macro'])
    validation_precision_scores['KNeighbour'].append(scores['test_precision_macro'])
    print(scores['test_recall_macro'])
    validation_recall_scores['KNeighbour'].append(scores['test_recall_macro'])
    print(scores['test_accuracy'])
    validation_scores['KNeighbour'].append(scores['test_accuracy'])

    print(np.mean(training_scores['KNeighbour']))
    print(np.std(training_scores['KNeighbour']))
    # rest of print statements

It seems that the for loop in the second model is redundant.

def model_2(seed, X, Y):
    np.random.seed(seed)
    scoring = ['accuracy', 'f1_macro', 'precision_macro', 'recall_macro']
    model = KNeighborsClassifier(n_neighbors=13)

    kfold = StratifiedKFold(n_splits=2, shuffle=True, random_state=seed)
    for train, test in kfold.split(X, Y):
        scores = model_selection.cross_validate(model, X[train], Y[train], cv=kfold, scoring=scoring, return_train_score=True)
        print(scores['train_accuracy'])
        training_scores['KNeighbour'].append(scores['train_accuracy'])
        print(scores['test_f1_macro'])
        validation_f1_scores['KNeighbour'].append(scores['test_f1_macro'])
        print(scores['test_precision_macro'])
        validation_precision_scores['KNeighbour'].append(scores['test_precision_macro'])
        print(scores['test_recall_macro'])
        validation_recall_scores['KNeighbour'].append(scores['test_recall_macro'])
        print(scores['test_accuracy'])
        validation_scores['KNeighbour'].append(scores['test_accuracy'])

    print(np.mean(training_scores['KNeighbour']))
    print(np.std(training_scores['KNeighbour']))
    # rest of print statements

I am using StratifiedKFold, and I am not sure whether I need the for loop as in the model_2 function, or whether the cross_validate function already uses the splits, since we are passing cv=kfold as an argument.

I am not calling the fit method; is this OK? Does cross_validate call it automatically, or do I need to call fit before calling cross_validate?

Finally, how can I create a confusion matrix? Do I need to create one for each fold, and if so, how can the final/average confusion matrix be calculated?

Recommended Answer

The documentation is arguably your best friend in such questions; from the simple example there, it should be apparent that you should use neither a for loop nor a call to fit. Adapting the example to use KFold as you do:

from sklearn.model_selection import KFold, cross_validate
from sklearn.datasets import load_boston
from sklearn.tree import DecisionTreeRegressor

# Note: load_boston is deprecated and was removed in scikit-learn 1.2;
# any regression dataset works equally well for this example.
X, y = load_boston(return_X_y=True)
n_splits = 5
kf = KFold(n_splits=n_splits, shuffle=True)

model = DecisionTreeRegressor()
scoring = ('r2', 'neg_mean_squared_error')

# cross_validate fits and scores the model on each fold internally
cv_results = cross_validate(model, X, y, cv=kf, scoring=scoring, return_train_score=False)
cv_results

Result:

{'fit_time': array([0.00901461, 0.00563478, 0.00539804, 0.00529385, 0.00638533]),
 'score_time': array([0.00132656, 0.00214362, 0.00134897, 0.00134444, 0.00176597]),
 'test_neg_mean_squared_error': array([-11.15872549, -30.1549505 , -25.51841584, -16.39346535,
        -15.63425743]),
 'test_r2': array([0.7765484 , 0.68106786, 0.73327311, 0.83008371, 0.79572363])}
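This also answers the fit question: cross_validate clones the estimator and fits the clone on each training fold internally, so you must not call fit yourself beforehand. As a simplified sketch of what it does under the hood for a single metric (ignoring timing, multiple scorers, and parallelism):

from sklearn.base import clone
from sklearn.metrics import r2_score

# Rough equivalent of the r2 part of the cross_validate call above:
# a fresh, unfitted clone is fitted on each training fold and scored
# on the corresponding test fold.
manual_r2 = []
for train_idx, test_idx in kf.split(X):
    fold_model = clone(model)
    fold_model.fit(X[train_idx], y[train_idx])
    manual_r2.append(r2_score(y[test_idx], fold_model.predict(X[test_idx])))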

how can I create a confusion matrix? Do I need to create one for each fold

No one can tell you if you need to create a confusion matrix for each fold - it is your choice. If you choose to do so, it may be better to skip cross_validate and do the procedure "manually" - see my answer in How to display confusion matrix and report (recall, precision, fmeasure) for each cross validation fold. A sketch of that manual loop follows below.
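As a minimal sketch of that manual procedure (assuming X and Y are numpy arrays as in the question; the seed of 42 is arbitrary):

from sklearn.metrics import confusion_matrix
from sklearn.model_selection import StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier

# Fit and predict per fold "manually", collecting one confusion matrix
# per fold. X and Y are assumed to be numpy arrays, as in the question.
model = KNeighborsClassifier(n_neighbors=13)
kfold = StratifiedKFold(n_splits=2, shuffle=True, random_state=42)

fold_confusion_matrices = []
for train_idx, test_idx in kfold.split(X, Y):
    model.fit(X[train_idx], Y[train_idx])
    Y_pred = model.predict(X[test_idx])
    fold_confusion_matrices.append(confusion_matrix(Y[test_idx], Y_pred))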

if yes, how can the final/average confusion matrix be calculated?

There is no "final/average" confusion matrix; if you want to calculate anything beyond the k matrices (one per fold) described in the linked answer, you need to have a separate validation set available...
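For illustration, a hedged sketch of that separate-validation-set approach (the 80/20 ratio and seed are arbitrary choices, not part of the original answer):

from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Hold out a separate validation set, cross-validate on the remainder,
# then report a single confusion matrix on the untouched hold-out data.
X_dev, X_val, Y_dev, Y_val = train_test_split(
    X, Y, test_size=0.2, stratify=Y, random_state=42)

model = KNeighborsClassifier(n_neighbors=13)
# ... run cross_validate on (X_dev, Y_dev) as in model_1 ...
model.fit(X_dev, Y_dev)  # final fit on all development data
final_cm = confusion_matrix(Y_val, model.predict(X_val))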

