Using cross-validation for calculating specificity


Problem Description

I want to use cross-validation for calculating specificity. I found code for calculating accuracy, recall, f1-score, and precision, but I couldn't find any for specificity. For example, the code for f1-score looks like:

cross_val_score(SVC(), X, y, scoring="f1", cv=7)

Or, for precision:

cross_val_score(SVC(), X, y, scoring="precision", cv=7)
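For context, these string scorers run as-is once the estimator is instantiated; a minimal sketch, assuming a toy dataset from make_classification standing in for the asker's X and y:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Hypothetical toy data in place of the asker's X and y.
X, y = make_classification(n_samples=140, random_state=0)

# Built-in scoring strings such as "f1" and "precision" work directly.
f1_scores = cross_val_score(SVC(), X, y, scoring="f1", cv=7)
precision_scores = cross_val_score(SVC(), X, y, scoring="precision", cv=7)
print(f1_scores.mean(), precision_scores.mean())
```

Note that scikit-learn ships no `"specificity"` scoring string, which is why a custom scorer is needed.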

Recommended Answer

Specificity is basically the True Negative Rate, which is the same as the True Positive Rate (recall) but for the negative class.
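To make that equivalence concrete, here is a small sketch with hand-made labels (the arrays are invented for illustration):

```python
from sklearn.metrics import recall_score

# Invented predictions for a binary problem (0 = negative, 1 = positive).
y_true = [0, 0, 0, 0, 1, 1, 1, 1]
y_pred = [0, 0, 1, 1, 1, 1, 1, 0]

# Specificity = TN / (TN + FP): just recall computed on the negative class.
specificity = recall_score(y_true, y_pred, pos_label=0)  # TN=2, FP=2 -> 0.5

# Sensitivity (ordinary recall) = TP / (TP + FN) on the positive class.
sensitivity = recall_score(y_true, y_pred, pos_label=1)  # TP=3, FN=1 -> 0.75
```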

If you have a binary classification problem, you should do the following:

  • Import the metric recall_score from metrics (details here), and the make_scorer function:

from sklearn.metrics import recall_score
from sklearn.metrics import make_scorer

  • Then you generate your new scorer, defining for which class you are calculating recall (by default, recall is calculated on label=1):

    specificity = make_scorer(recall_score, pos_label=0)
    

  • The label 0 is usually the negative class in a binary problem.

    print(cross_val_score(classifier, X_train, y_train, cv=10, scoring=specificity))
    

    If you want recall (the True Positive Rate), you can do the same, changing the class:

    sensitivity = make_scorer(recall_score, pos_label=1)
    print(cross_val_score(classifier, X_train, y_train, cv=10, scoring=sensitivity))
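Putting the two scorers together, a minimal end-to-end sketch (make_classification and LogisticRegression are stand-ins for the asker's data and classifier):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import make_scorer, recall_score
from sklearn.model_selection import cross_val_score

# Hypothetical toy data; any binary classifier could take this role.
X, y = make_classification(n_samples=200, random_state=0)
classifier = LogisticRegression(max_iter=1000)

# Recall on label 0 is specificity; recall on label 1 is sensitivity.
specificity = make_scorer(recall_score, pos_label=0)
sensitivity = make_scorer(recall_score, pos_label=1)

# The scorer object must be passed via the `scoring` keyword.
spec_scores = cross_val_score(classifier, X, y, cv=10, scoring=specificity)
sens_scores = cross_val_score(classifier, X, y, cv=10, scoring=sensitivity)
print(spec_scores.mean(), sens_scores.mean())
```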
    

    Anyway, you can make your own custom scorer if you need something more complex:

    make_scorer
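As a sketch of that route, a specificity scorer can also be built from the confusion matrix directly (specificity_score is a hypothetical helper name, not a scikit-learn function):

```python
from sklearn.metrics import confusion_matrix, make_scorer

def specificity_score(y_true, y_pred):
    # Specificity = TN / (TN + FP), read off the 2x2 confusion matrix.
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return tn / (tn + fp)

# Wrap it so cross_val_score can accept it via scoring=custom_specificity.
custom_specificity = make_scorer(specificity_score)
```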

