How does SelectKBest (chi2) calculate the score?


Question

I am trying to find the most valuable features by applying feature selection methods to my dataset. I'm using the SelectKBest function for now. I can generate the score values and sort them as I want, but I don't understand exactly how these score values are calculated. I know that, in theory, a higher score means a more valuable feature, but I need a mathematical formula or a worked example to understand the calculation in depth.

import pandas as pd
from sklearn.feature_selection import SelectKBest, chi2

# Score every feature against the target with the chi-squared test
bestfeatures = SelectKBest(score_func=chi2, k=10)
fit = bestfeatures.fit(dataValues, dataTargetEncoded)

# scores_ holds a score for every feature, regardless of k
feat_importances = pd.Series(fit.scores_, index=dataValues.columns)
topFeatures = feat_importances.nlargest(50).index.values

print("TOP 50 Features (Best to worst):\n")
print(topFeatures)
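(For reference, the fitted selector also exposes the matching p-values via its pvalues_ attribute; a minimal sketch, reusing the fit and the dataValues columns from above:)

# smallest p-values = strongest evidence of association with the target
pvals = pd.Series(fit.pvalues_, index=dataValues.columns)
print(pvals.nsmallest(10))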

Thanks in advance.

Answer

Say you have one feature and a target with 3 possible values:

import numpy as np

X = np.array([3.4, 3.4, 3. , 2.8, 2.7, 2.9, 3.3, 3. , 3.8, 2.5])
y = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2, 2])

     X  y
0  3.4  0
1  3.4  0
2  3.0  0
3  2.8  1
4  2.7  1
5  2.9  1
6  3.3  2
7  3.0  2
8  3.8  2
9  2.5  2

First, we binarize the target:

from sklearn.preprocessing import LabelBinarizer

y = LabelBinarizer().fit_transform(y)

     X  y1  y2  y3
0  3.4   1   0   0
1  3.4   1   0   0
2  3.0   1   0   0
3  2.8   0   1   0
4  2.7   0   1   0
5  2.9   0   1   0
6  3.3   0   0   1
7  3.0   0   0   1
8  3.8   0   0   1
9  2.5   0   0   1

Then we take the dot product of the binarized target and the feature, i.e. we sum the feature values within each class:

observed = y.T.dot(X) 
>>> observed 
array([ 9.8,  8.4, 12.6])
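As a quick sanity check (not part of the algorithm), the same per-class sums fall out of a pandas groupby on the original, pre-binarization labels:

import pandas as pd

# sum X within each class of the original labels [0,0,0,1,1,1,2,2,2,2]
print(pd.Series(X).groupby(np.array([0, 0, 0, 1, 1, 1, 2, 2, 2, 2])).sum())
# 0     9.8
# 1     8.4
# 2    12.6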

Next, take the sum of the feature values and compute the class frequencies:

feature_count = X.sum(axis=0).reshape(1, -1)
class_prob = y.mean(axis=0).reshape(1, -1)

>>> class_prob, feature_count
(array([[0.3, 0.3, 0.4]]), array([[30.8]]))

Now, as in the first step, we take a dot product and get the expected matrix to compare with the observed one. Each expected count is the class probability times the total feature sum, e.g. 0.3 × 30.8 = 9.24:

expected = np.dot(class_prob.T, feature_count)
>>> expected 
array([[ 9.24],[ 9.24],[12.32]])

Finally, we calculate the chi^2 value:

chi2 = ((observed.reshape(-1,1) - expected) ** 2 / expected).sum(axis=0)
>>> chi2 
array([0.11666667])
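Written out as the standard chi-square statistic over the observed counts O_c and expected counts E_c per class c, with the numbers from above plugged in:

\[
\chi^2 = \sum_{c} \frac{(O_c - E_c)^2}{E_c}
       = \frac{(9.8 - 9.24)^2}{9.24} + \frac{(8.4 - 9.24)^2}{9.24} + \frac{(12.6 - 12.32)^2}{12.32}
       \approx 0.0339 + 0.0764 + 0.0064 = 0.1167
\]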

We have a chi^2 value; now we need to judge how extreme it is. For that we use a chi^2 distribution with number of classes - 1 degrees of freedom and compute the area under the curve from our chi^2 value to infinity, i.e. the probability of getting a chi^2 value at least as extreme as the one we got. This is the p-value (computed with scipy's chi-square survival function):

import scipy.special

p = scipy.special.chdtrc(3 - 1, chi2)
>>> p
array([0.94333545])
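Equivalently, as a cross-check, the survival function of the chi-square distribution in scipy.stats gives the same number:

from scipy import stats

print(stats.chi2.sf(0.11666667, df=2))  # ~0.9433, matching chdtrc above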

Compare with SelectKBest:

from sklearn.feature_selection import SelectKBest, chi2

s = SelectKBest(chi2, k=1)
s.fit(X.reshape(-1, 1), y)   # y is still the binarized target from above
>>> s.scores_, s.pvalues_
(array([0.11666667]), [0.943335449873492])
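Putting the whole recipe together, here is a small self-contained sketch (my own consolidation of the steps above, not sklearn's actual source) that reproduces the chi2 scores and p-values for a feature matrix with any number of columns:

import numpy as np
from scipy.special import chdtrc
from sklearn.preprocessing import LabelBinarizer

def manual_chi2(X, y):
    # X: (n_samples, n_features) with non-negative values, y: (n_samples,) labels
    Y = LabelBinarizer().fit_transform(y)
    if Y.shape[1] == 1:
        Y = np.hstack([1 - Y, Y])          # binary target: add the complement column
    observed = Y.T.dot(X)                  # per-class sums, (n_classes, n_features)
    feature_count = X.sum(axis=0).reshape(1, -1)
    class_prob = Y.mean(axis=0).reshape(1, -1)
    expected = class_prob.T.dot(feature_count)
    chi2_vals = ((observed - expected) ** 2 / expected).sum(axis=0)
    return chi2_vals, chdtrc(Y.shape[1] - 1, chi2_vals)

X2 = np.array([3.4, 3.4, 3., 2.8, 2.7, 2.9, 3.3, 3., 3.8, 2.5]).reshape(-1, 1)
y2 = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2, 2])
print(manual_chi2(X2, y2))  # (array([0.11666667]), array([0.94333545]))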
