Implementing Bag-of-Words Naive-Bayes classifier in NLTK


Problem description

I basically have the same question as this guy: the example in the NLTK book for the Naive Bayes classifier considers only whether a word occurs in a document as a feature; it doesn't consider the frequency of each word as the feature to look at ("bag of words").

One of the answers seems to suggest this can't be done with the built-in NLTK classifiers. Is that the case? How can I do frequency/bag-of-words NB classification with NLTK?
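To make the distinction concrete, here is a minimal sketch (plain Python; the function names are mine, not NLTK's) of the two feature styles the question contrasts. The NLTK-book example maps each word to a boolean `contains(word)` feature, while a bag-of-words extractor would map each word to its count; note that NLTK's own `NaiveBayesClassifier` treats feature values as discrete labels, so simply feeding it counts does not give multinomial behavior.

```python
from collections import Counter

# NLTK-book style: each word becomes a boolean presence feature,
# so frequency information is discarded.
def presence_features(words):
    return {'contains(%s)' % w: True for w in set(words)}

# Bag-of-words style: each word becomes a count feature.
def count_features(words):
    return dict(Counter(words))

doc = ['great', 'great', 'film']
print(presence_features(doc))  # {'contains(great)': True, 'contains(film)': True}
print(count_features(doc))     # {'great': 2, 'film': 1}
```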

Answer

scikit-learn has an implementation of multinomial naive Bayes, which is the right variant of naive Bayes in this situation. A support vector machine (SVM) would probably work better, though.
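For reference, here is a minimal sketch of multinomial naive Bayes on raw word counts using scikit-learn directly, without NLTK; the tiny corpus and labels are made up for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

texts = ["great great film", "terrible boring film", "great acting", "boring plot"]
labels = ["pos", "neg", "pos", "neg"]

vec = CountVectorizer()          # bag-of-words: word -> count per document
X = vec.fit_transform(texts)     # sparse document-term count matrix
clf = MultinomialNB().fit(X, labels)

# 'great' dominates the positive class counts, so this predicts 'pos'.
print(clf.predict(vec.transform(["great plot"])))
```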

As Ken pointed out in the comments, NLTK has a nice wrapper for scikit-learn classifiers. Modified from the docs, here's a somewhat complicated one that does TF-IDF weighting, chooses the 1000 best features based on a chi2 statistic, and then passes that into a multinomial naive Bayes classifier. (I bet this is somewhat clumsy, as I'm not super familiar with either NLTK or scikit-learn.)

import numpy as np
from nltk.probability import FreqDist
from nltk.classify import SklearnClassifier
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

# TF-IDF weighting -> top 1000 features by chi2 -> multinomial naive Bayes
pipeline = Pipeline([('tfidf', TfidfTransformer()),
                     ('chi2', SelectKBest(chi2, k=1000)),
                     ('nb', MultinomialNB())])
classif = SklearnClassifier(pipeline)

from nltk.corpus import movie_reviews
# One FreqDist (word -> count) per review; SklearnClassifier accepts
# these dicts of feature counts directly.
pos = [FreqDist(movie_reviews.words(i)) for i in movie_reviews.fileids('pos')]
neg = [FreqDist(movie_reviews.words(i)) for i in movie_reviews.fileids('neg')]
add_label = lambda lst, lab: [(x, lab) for x in lst]
# Train on the first 100 reviews of each class.
classif.train(add_label(pos[:100], 'pos') + add_label(neg[:100], 'neg'))

# Classify the held-out reviews and tabulate a confusion matrix.
l_pos = np.array(classif.classify_many(pos[100:]))
l_neg = np.array(classif.classify_many(neg[100:]))
print("Confusion matrix:\n%d\t%d\n%d\t%d" % (
    (l_pos == 'pos').sum(), (l_pos == 'neg').sum(),
    (l_neg == 'pos').sum(), (l_neg == 'neg').sum()))

This printed for me:

Confusion matrix:
524     376
202     698

Not perfect, but decent, considering it's not a super easy problem and it's only trained on 100/100.
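For a single-number summary, the accuracy implied by that confusion matrix can be read off directly: correct predictions on the diagonal over the 1800 held-out reviews.

```python
# Accuracy from the confusion matrix reported above.
correct = 524 + 698            # correctly classified pos + neg reviews
total = 524 + 376 + 202 + 698  # all 1800 held-out reviews
print("accuracy: %.3f" % (correct / total))  # → accuracy: 0.679
```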

