How to get feature importance in naive Bayes?

Question

I have a dataset of reviews with a positive/negative class label, and I am applying Naive Bayes to it. First, I convert the reviews into a bag-of-words representation. Here sorted_data['Text'] contains the review text and final_counts is the resulting sparse matrix:

from sklearn.feature_extraction.text import CountVectorizer

count_vect = CountVectorizer()
final_counts = count_vect.fit_transform(sorted_data['Text'].values)

I split the data into train and test sets:

from sklearn.model_selection import train_test_split

X_1, X_test, y_1, y_test = train_test_split(final_counts, labels, test_size=0.3, random_state=0)

I then apply the Naive Bayes algorithm as follows:

from sklearn.naive_bayes import BernoulliNB
from sklearn.metrics import accuracy_score

optimal_alpha = 1
NB_optimal = BernoulliNB(alpha=optimal_alpha)

# fitting the model on the training split
NB_optimal.fit(X_1, y_1)

# predict the response for the test split
pred = NB_optimal.predict(X_test)

# evaluate accuracy
acc = accuracy_score(y_test, pred) * 100
print('\nThe accuracy of the NB classifier for alpha = %d is %f%%' % (optimal_alpha, acc))

Here X_test is the test dataset, and pred tells us whether each vector in X_test belongs to the positive or the negative class.

X_test has shape (54626, 82343).

pred has length 54626.

My question is: I want to get the words with the highest probability in each vector, so that I can see from those words why it was predicted as the positive or negative class. So, how do I get the words with the highest probability in each vector?

Answer

You can get the importance of each word out of the fitted model by using the coef_ or feature_log_prob_ attributes. For example:

import numpy as np

# sort word indices by log probability within each class, most probable first
neg_class_prob_sorted = NB_optimal.feature_log_prob_[0, :].argsort()[::-1]
pos_class_prob_sorted = NB_optimal.feature_log_prob_[1, :].argsort()[::-1]

print(np.take(count_vect.get_feature_names(), neg_class_prob_sorted[:10]))
print(np.take(count_vect.get_feature_names(), pos_class_prob_sorted[:10]))

This prints the ten most predictive words for each of your classes.
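Since the question asks about individual vectors, here is a minimal sketch of how you might extend this to a single review: it ranks only the words present in one test vector by the difference of their per-class log probabilities. It assumes the fitted NB_optimal and count_vect objects from above, that classes_[0] is the negative label and classes_[1] the positive one (as in the snippet above); the index i = 0 is just a hypothetical example.

import numpy as np

i = 0  # hypothetical index of the test review to inspect
feature_names = np.array(count_vect.get_feature_names())

# column indices of the words that actually occur in this review
present = X_test[i].nonzero()[1]

# log P(word | positive) - log P(word | negative) for those words
log_odds = (NB_optimal.feature_log_prob_[1, present]
            - NB_optimal.feature_log_prob_[0, present])

order = log_odds.argsort()
print('words pushing towards negative:', feature_names[present][order][:10])
print('words pushing towards positive:', feature_names[present][order][::-1][:10])

Note that for BernoulliNB the words absent from a review also contribute to its prediction, but the words that are present are usually what you want to show as an explanation.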
