How to interpret base_value of a multi-class classification problem when using SHAP?


Problem description


I am using the shap library for ML interpretability to better understand the clusters produced by a k-means segmentation algorithm. In a nutshell, I make some blobs, use k-means to cluster them, then take the clusters as labels and use xgboost to try to predict them. I have 5 clusters, so it is a single-label multi-class classification problem.

import numpy as np
from sklearn.datasets import make_blobs
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans 
import xgboost as xgb
import shap

X, y = make_blobs(n_samples=500, centers=5, n_features=5, random_state=0)
data = pd.DataFrame(np.concatenate((X, y.reshape(500,1)), axis=1), columns=['var_1', 'var_2', 'var_3', 'var_4', 'var_5', 'cluster_id'])
data['cluster_id'] = data['cluster_id'].astype(int).astype(str)
scaler = StandardScaler()
scaled_features = scaler.fit_transform(data.iloc[:,:-1])
kmeans = KMeans(n_clusters=5, random_state=0)  # **kmeans_kwargs was never defined; a fixed random_state is assumed
kmeans.fit(scaled_features)
data['predicted_cluster_id'] = kmeans.labels_.astype(int).astype(str)
# scaled_data was referenced below but never defined; rebuild it from the scaled features (assumed intent)
scaled_data = pd.DataFrame(scaled_features, columns=data.columns[:5])
scaled_data['predicted_cluster_id'] = data['predicted_cluster_id']
clf = xgb.XGBClassifier()
clf.fit(scaled_data.iloc[:,:-1], scaled_data['predicted_cluster_id'].astype(int))  # newer xgboost requires integer class labels
shap.initjs()
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(scaled_data.iloc[0,:-1].values.reshape(1,-1))
shap.force_plot(explainer.expected_value[0], shap_values[0], link='logit')  # repeat changing 0 for i in range(0, 5)

The resulting force plots (images not reproduced here) make sense, as the class is '3'. But why this base_value? Shouldn't it be 1/5? I asked myself a similar question a while ago, but this time I have already set link='logit'.

Solution

link="logit" does not seem right for multiclass, as it's only suitable for binary output. This is why you do not see probabilities summing up to 1.

Let's streamline your code:

import numpy as np
from sklearn.datasets import make_blobs
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans 
import xgboost as xgb
import shap
from scipy.special import softmax, logit, expit
np.random.seed(42)

X, y_true = make_blobs(n_samples=500, centers=5, n_features=3, random_state=0)
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)
kmeans = KMeans(n_clusters=5)
y_predicted = kmeans.fit_predict(X_scaled)

clf = xgb.XGBClassifier()
clf.fit(X_scaled, y_predicted)
shap.initjs()

Then, what you see as expected values in:

explainer = shap.TreeExplainer(clf)
explainer.expected_value
array([0.67111245, 0.60223354, 0.53357694, 0.50821152, 0.50145331])

are base scores in raw space.
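As a quick sanity check (my own addition, not from the original answer), these base scores should be close to the mean raw margin the booster produces over the training data; output_margin=True is the xgboost flag for raw, pre-softmax scores:

raw_margins = clf.predict(X_scaled, output_margin=True)  # shape (500, 5), raw space
print(raw_margins.mean(axis=0))  # should roughly match explainer.expected_value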

The multi-class raw scores can be converted to probabilities with softmax:

softmax(explainer.expected_value)
array([0.22229282, 0.20749694, 0.19372895, 0.18887673, 0.18760457])

shap.force_plot(..., link="logit") doesn't make sense for multiclass, and it seems impossible to switch from raw to probability and still maintain additivity (because softmax(x+y) ≠ softmax(x) + softmax(y)).

Should you wish to analyze your data in probability space, try KernelExplainer:

from shap import KernelExplainer
masker = shap.maskers.Independent(X_scaled, 100)
ke = KernelExplainer(clf.predict_proba, data=masker.data)
ke.expected_value
# array([0.18976762, 0.1900516 , 0.20042894, 0.19995041, 0.21980143])
shap_values = ke.shap_values(masker.data)
shap.force_plot(ke.expected_value[0], shap_values[0][0])
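As a side note (my own check of standard KernelExplainer behavior, not part of the original answer), these expected values are simply the mean of clf.predict_proba over the background data, which is why they sit near 1/5 for five roughly balanced clusters:

# KernelExplainer's expected_value is the mean model output over the
# background data it was given, so this should match ke.expected_value.
print(clf.predict_proba(masker.data).mean(axis=0))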

You can also draw a waterfall plot:

from shap import Explanation
shap.waterfall_plot(Explanation(shap_values[0][0], ke.expected_value[0]))

The shap values in probability space are now additive and align well with both the base probabilities (see above) and the predicted probabilities for the 0th data point:

clf.predict_proba(masker.data[0].reshape(1,-1))
array([[2.2844513e-04, 8.1287889e-04, 6.5225776e-04, 9.9737883e-01,
        9.2762709e-04]], dtype=float32)
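To verify the additivity claim numerically (a sketch reusing the objects above, with the list-per-class shap_values layout shown earlier): for each class, the base probability plus the sum of that class's shap values for the 0th data point should reconstruct the predicted probability.

probs = clf.predict_proba(masker.data[0].reshape(1, -1))[0]
for k in range(5):
    # base probability for class k + total shap contribution for data point 0
    reconstructed = ke.expected_value[k] + shap_values[k][0].sum()
    print(k, float(reconstructed), float(probs[k]))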

