Calculate precision and recall in a confusion matrix


Problem description

Suppose I have a confusion matrix like the one below. How can I calculate precision and recall?

Answer

First, your matrix is arranged upside down. You want to arrange your labels so that the true positives sit on the diagonal [(0,0), (1,1), (2,2)]; this is the arrangement you will find in confusion matrices generated by sklearn and other packages.
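As a quick illustration of that convention (a minimal sketch with made-up labels), sklearn's confusion_matrix puts true labels on the rows and predicted labels on the columns, so correct predictions land on the diagonal:

from sklearn.metrics import confusion_matrix

# Rows are true labels, columns are predicted labels,
# so the true positives of each class land on the diagonal.
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
print(confusion_matrix(y_true, y_pred))
# [[1 1 0]
#  [0 2 0]
#  [1 0 1]]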

Once we have things sorted in the right direction, we can take a page from this answer and say that:

  1. True positives are on the diagonal.
  2. False positives are the column sums, minus the diagonal.
  3. False negatives are the row sums, minus the diagonal.

Then we take the formulas for precision and recall from the sklearn docs and put it all into code:

import numpy as np

cm = np.array([[2, 1, 0], [3, 4, 5], [6, 7, 8]])

true_pos = np.diag(cm)                     # correct predictions per class
false_pos = np.sum(cm, axis=0) - true_pos  # column sums minus the diagonal
false_neg = np.sum(cm, axis=1) - true_pos  # row sums minus the diagonal

# Macro-averaged scores: the mean of the per-class precision and recall
precision = np.mean(true_pos / (true_pos + false_pos))
recall = np.mean(true_pos / (true_pos + false_neg))
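As a sanity check (my own sketch, not part of the original answer), you can rebuild label vectors that reproduce this matrix and compare against sklearn's macro-averaged scores:

from sklearn.metrics import confusion_matrix, precision_score, recall_score

# Rebuild label vectors consistent with cm, where cm[i, j] counts
# samples whose true class is i and predicted class is j.
y_true, y_pred = [], []
for i in range(cm.shape[0]):
    for j in range(cm.shape[1]):
        y_true += [i] * int(cm[i, j])
        y_pred += [j] * int(cm[i, j])

assert (confusion_matrix(y_true, y_pred) == cm).all()

# These macro-averaged scores should match precision and recall above.
print(precision_score(y_true, y_pred, average='macro'))  # ≈ 0.3768
print(recall_score(y_true, y_pred, average='macro'))     # ≈ 0.4603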

Since we subtract the true positives to define false_positives/false_negatives only to add them back, we can simplify further by skipping a couple of steps:

true_pos = np.diag(cm)
precision = np.mean(true_pos / np.sum(cm, axis=0))  # macro-averaged precision
recall = np.mean(true_pos / np.sum(cm, axis=1))     # macro-averaged recall
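If you want the per-class values rather than a single macro average, just skip the averaging step (a small sketch using the same example matrix):

per_class_precision = true_pos / np.sum(cm, axis=0)  # [2/11, 4/12, 8/13]
per_class_recall = true_pos / np.sum(cm, axis=1)     # [2/3, 4/12, 8/21]
print(per_class_precision)  # ≈ [0.1818 0.3333 0.6154]
print(per_class_recall)     # ≈ [0.6667 0.3333 0.3810]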
