using confusion matrix as scoring metric in cross validation in scikit learn


Problem description

I am creating a pipeline in scikit-learn,

from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import BernoulliNB

pipeline = Pipeline([
    ('bow', CountVectorizer()),      # convert raw messages into token counts
    ('classifier', BernoulliNB()),   # Bernoulli Naive Bayes classifier
])

and computing the accuracy using cross-validation,

from sklearn.model_selection import cross_val_score

scores = cross_val_score(pipeline,     # steps to convert raw messages into models
                         train_set,    # training data
                         label_train,  # training labels
                         cv=5,         # split data randomly into 5 parts: 4 for training, 1 for scoring
                         scoring='accuracy',  # which scoring metric?
                         n_jobs=-1,    # -1 = use all cores = faster
                         )

How can I report a confusion matrix instead of 'accuracy'?

Recommended answer

You could use cross_val_predict (see the scikit-learn docs) instead of cross_val_score.

Instead of:

from sklearn.model_selection import cross_val_score
scores = cross_val_score(clf, x, y, cv=10)

you can do:

from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix
y_pred = cross_val_predict(clf, x, y, cv=10)
conf_mat = confusion_matrix(y, y_pred)
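
cross_val_predict returns one out-of-fold prediction for every sample, so the resulting confusion matrix aggregates counts across all folds. Applied to the pipeline from the question, a minimal sketch (assuming train_set and label_train are defined as above) could look like this:

from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import BernoulliNB

pipeline = Pipeline([
    ('bow', CountVectorizer()),      # bag-of-words token counts
    ('classifier', BernoulliNB()),   # Bernoulli Naive Bayes classifier
])

# one out-of-fold prediction per training sample, using 5-fold CV
label_pred = cross_val_predict(pipeline, train_set, label_train, cv=5, n_jobs=-1)

# rows are true labels, columns are predicted labels
conf_mat = confusion_matrix(label_train, label_pred)
print(conf_mat)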

