predict_proba or decision_function as estimator "confidence"

Problem description

I'm using LogisticRegression as a model to train an estimator in scikit-learn. The features I use are (mostly) categorical; and so are the labels. Therefore, I use a DictVectorizer and a LabelEncoder, respectively, to encode the values properly.
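
(As a brief, hedged illustration of that encoding step, using made-up feature dicts rather than the data below; method names follow the scikit-learn releases of that era, and newer releases rename get_feature_names to get_feature_names_out:)

from sklearn.feature_extraction import DictVectorizer
from sklearn.preprocessing import LabelEncoder

# DictVectorizer expands each categorical value into its own binary column.
vec = DictVectorizer()
X = vec.fit_transform([{'head': u'empresa', 'dep_rel': u'SUBJ'},
                       {'head': u'era', 'dep_rel': u'ACC'}])
print(vec.get_feature_names())  # e.g. ['dep_rel=ACC', 'dep_rel=SUBJ', 'head=empresa', 'head=era']
print(X.toarray())              # one row per instance, a 1.0 in each active column

# LabelEncoder maps label strings to integer class ids (and back).
le = LabelEncoder()
y = le.fit_transform([u'A0', u'A1'])
print(y)                        # e.g. [0 1]
print(le.inverse_transform(y))  # recovers the original label strings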

The training part is fairly straightforward, but I'm having problems with the test part. The simple thing to do is to use the "predict" method of the trained model and get the predicted label. However, for the processing I need to do afterwards, I need the probability of each possible label (class) for each particular instance, so I decided to use the "predict_proba" method. However, I get different results for the same test instance depending on whether it appears by itself or together with other instances.
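
(As an aside, a minimal, hedged sketch of how the columns of predict_proba can be mapped back to label strings: they line up with model.classes_, which hold the encoded class ids. Here X_test_encoded is a placeholder for any matrix produced by the fitted DictVectorizer, and model/label_encoder are the fitted objects from the code below:)

probs = model.predict_proba(X_test_encoded)
# Column order of predict_proba follows model.classes_ (the encoded class ids);
# inverse_transform turns those ids back into the original label strings.
class_names = label_encoder.inverse_transform(model.classes_)
for name, p in zip(class_names, probs[0]):
    print("%s: %.4f" % (name, p))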

The following code reproduces the problem.

from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction import DictVectorizer
from sklearn.preprocessing import LabelEncoder


X_real = [{'head': u'n\xe3o', 'dep_rel': u'ADVL'}, 
          {'head': u'v\xe3o', 'dep_rel': u'ACC'}, 
          {'head': u'empresa', 'dep_rel': u'SUBJ'}, 
          {'head': u'era', 'dep_rel': u'ACC'}, 
          {'head': u't\xeam', 'dep_rel': u'ACC'}, 
          {'head': u'import\xe2ncia', 'dep_rel': u'PIV'}, 
          {'head': u'balan\xe7o', 'dep_rel': u'SUBJ'}, 
          {'head': u'ocupam', 'dep_rel': u'ACC'}, 
          {'head': u'acesso', 'dep_rel': u'PRED'}, 
          {'head': u'elas', 'dep_rel': u'SUBJ'}, 
          {'head': u'assinaram', 'dep_rel': u'ACC'}, 
          {'head': u'agredido', 'dep_rel': u'SUBJ'}, 
          {'head': u'pol\xedcia', 'dep_rel': u'ADVL'}, 
          {'head': u'se', 'dep_rel': u'ACC'}] 
y_real = [u'AM-NEG', u'A1', u'A0', u'A1', u'A1', u'A1', u'A0', u'A1', u'AM-ADV', u'A0', u'A1', u'A0', u'A2', u'A1']

# One-hot encode the categorical features and integer-encode the labels.
feat_encoder = DictVectorizer()
feat_encoder.fit(X_real)

label_encoder = LabelEncoder()
label_encoder.fit(y_real)

# Train the logistic regression model on the encoded data.
model = LogisticRegression()
model.fit(feat_encoder.transform(X_real), label_encoder.transform(y_real))

print "Test 1..."
X_test1 = [{'head': u'governo', 'dep_rel': u'SUBJ'}]
X_test1_encoded = feat_encoder.transform(X_test1)
print "Features Encoded"
print X_test1_encoded
print "Shape"
print X_test1_encoded.shape
print "decision_function:"
print model.decision_function(X_test1_encoded)
print "predict_proba:"
print model.predict_proba(X_test1_encoded)

print "Test 2..."
X_test2 = [{'head': u'governo', 'dep_rel': u'SUBJ'}, 
           {'head': u'atrav\xe9s', 'dep_rel': u'ADVL'}, 
           {'head': u'configuram', 'dep_rel': u'ACC'}]

X_test2_encoded = feat_encoder.transform(X_test2)
print "Features Encoded"
print X_test2_encoded
print "Shape"
print X_test2_encoded.shape
print "decision_function:"
print model.decision_function(X_test2_encoded)
print "predict_proba:"
print model.predict_proba(X_test2_encoded)


print "Test 3..."
X_test3 = [{'head': u'governo', 'dep_rel': u'SUBJ'}, 
           {'head': u'atrav\xe9s', 'dep_rel': u'ADVL'}, 
           {'head': u'configuram', 'dep_rel': u'ACC'},
           {'head': u'configuram', 'dep_rel': u'ACC'},]

X_test3_encoded = feat_encoder.transform(X_test3)
print "Features Encoded"
print X_test3_encoded
print "Shape"
print X_test3_encoded.shape
print "decision_function:"
print model.decision_function(X_test3_encoded)
print "predict_proba:"
print model.predict_proba(X_test3_encoded)

This is the output obtained:

Test 1...
Features Encoded
  (0, 4)    1.0
Shape
(1, 19)
decision_function:
[[ 0.55372615 -1.02949707 -1.75474347 -1.73324726 -1.75474347]]
predict_proba:
[[ 1.  1.  1.  1.  1.]]
Test 2...
Features Encoded
  (0, 4)    1.0
  (1, 1)    1.0
  (2, 0)    1.0
Shape
(3, 19)
decision_function:
[[ 0.55372615 -1.02949707 -1.75474347 -1.73324726 -1.75474347]
 [-1.07370197 -0.69103629 -0.89306092 -1.51402163 -0.89306092]
 [-1.55921001  1.11775556 -1.92080112 -1.90133404 -1.92080112]]
predict_proba:
[[ 0.59710757  0.19486904  0.26065002  0.32612646  0.26065002]
 [ 0.23950111  0.24715931  0.51348452  0.3916478   0.51348452]
 [ 0.16339132  0.55797165  0.22586546  0.28222574  0.22586546]]
Test 3...
Features Encoded
  (0, 4)    1.0
  (1, 1)    1.0
  (2, 0)    1.0
  (3, 0)    1.0
Shape
(4, 19)
decision_function:
[[ 0.55372615 -1.02949707 -1.75474347 -1.73324726 -1.75474347]
 [-1.07370197 -0.69103629 -0.89306092 -1.51402163 -0.89306092]
 [-1.55921001  1.11775556 -1.92080112 -1.90133404 -1.92080112]
 [-1.55921001  1.11775556 -1.92080112 -1.90133404 -1.92080112]]
predict_proba:
[[ 0.5132474   0.12507868  0.21262531  0.25434403  0.21262531]
 [ 0.20586462  0.15864173  0.4188751   0.30544372  0.4188751 ]
 [ 0.14044399  0.3581398   0.1842498   0.22010613  0.1842498 ]
 [ 0.14044399  0.3581398   0.1842498   0.22010613  0.1842498 ]]

As can be seen, the values obtained with "predict_proba" for the instance in "X_test1" change when that same instance appears together with others in "X_test2". Also, "X_test3" just reproduces "X_test2" and adds one more instance (equal to the last one in "X_test2"), yet the probability values for all of them change. Why does this happen? Also, I find it really strange that ALL the probabilities for "X_test1" are 1; shouldn't they sum to 1?
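
(A quick, hedged way to check that expectation once the model above is fitted: with a correctly working predict_proba, every row should sum to 1.)

import numpy as np

probs = model.predict_proba(X_test2_encoded)
print(probs.sum(axis=1))                    # each entry should be numerically close to 1.0
print(np.allclose(probs.sum(axis=1), 1.0))  # expected: True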

Now, if instead of "predict_proba" I use "decision_function", the values I obtain are consistent, which is what I need. The problem is that I get negative values, and even some of the positive ones are greater than 1.
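
(That part is expected: for a linear model, decision_function returns the raw per-class scores, i.e. signed distances to the separating hyperplanes, not probabilities, so negative values and values above 1 are normal. Roughly, for a one-vs-rest logistic regression, predict_proba applies the logistic sigmoid to those scores and renormalizes each row; the exact mapping depends on the scikit-learn version and settings, so the following is only a sketch:)

import numpy as np
from scipy.special import expit  # logistic sigmoid

scores = model.decision_function(X_test2_encoded)
# One-vs-rest style mapping: sigmoid of each class score, then renormalize
# each row so that the class probabilities sum to 1.
per_class = expit(scores)
approx_probs = per_class / per_class.sum(axis=1)[:, np.newaxis]
print(approx_probs)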

So, what should I use? Why do the values of "predict_proba" change that way? Am I not understanding correctly what those values mean?

Thanks in advance for any help you could give me.

Update

As suggested, I changed the code so as to also print the encoded "X_test1", "X_test2" and "X_test3", as well as their shapes. This doesn't appear to be the problem, as the encoding is consistent for the same instances across the test sets.
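
(One hedged way to confirm that programmatically: the first rows of X_test1_encoded and X_test2_encoded encode the same instance, so they should be identical.)

import numpy as np

# The same instance should encode to the same feature vector in both batches.
print(np.array_equal(X_test1_encoded.toarray()[0],
                     X_test2_encoded.toarray()[0]))  # expected: True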

Answer

As indicated in the question's comments, the error was caused by a bug in the implementation of the scikit-learn version I was using. The problem was solved by updating to the most recent stable version, 0.12.1.
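
(For readers who hit the same symptom today, a quick hedged check of the installed version; the package is named scikit-learn on PyPI, while the importable module is sklearn.)

import sklearn
print(sklearn.__version__)  # the bug discussed here affected releases before 0.12.1

# Upgrading from the command line, e.g.:
#   pip install --upgrade scikit-learn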
