Using my own corpus for category classification in Python NLTK


Question

I'm an NLTK/Python beginner and managed to load my own corpus using CategorizedPlaintextCorpusReader, but how do I actually train and use the data for classification of text?

>>> from nltk.corpus.reader import CategorizedPlaintextCorpusReader
>>> reader = CategorizedPlaintextCorpusReader('/ebs/category', r'.*\.txt', cat_pattern=r'(.*)\.txt')
>>> len(reader.categories())
234

Answer

Assuming you want a naive Bayes classifier with bag-of-words features:

from nltk import FreqDist
from nltk.classify.naivebayes import NaiveBayesClassifier

def make_training_data(rdr):
    # Yield one (features, label) pair per document in the corpus:
    # the features are the document's word counts (a bag of words).
    for c in rdr.categories():
        for f in rdr.fileids(c):
            yield FreqDist(rdr.words(fileids=[f])), c

clf = NaiveBayesClassifier.train(list(make_training_data(reader)))

The resulting clf's classify method can be used on any FreqDist of words.
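
For example, to label a new piece of text, build the same kind of bag-of-words feature set and pass it to classify (a minimal sketch; new_text and the choice of word_tokenize are assumptions, not part of the original answer):

from nltk import FreqDist, word_tokenize

# Hypothetical new document to classify; any string will do.
new_text = "Some text that should be assigned to one of the categories."

# Build the same bag-of-words representation used during training.
features = FreqDist(word_tokenize(new_text))

print(clf.classify(features))        # most likely category
dist = clf.prob_classify(features)   # full probability distribution
print(dist.max(), dist.prob(dist.max()))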

(But note: from your cat_pattern, it seems you have one sample and a single category per file in your corpus. Please check whether that's really what you want.)
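
If the intent is instead for several files to share a category, one option is to derive the category from a subdirectory name rather than the file name. This is only a sketch, assuming a hypothetical layout such as /ebs/category/sports/doc1.txt:

from nltk.corpus.reader import CategorizedPlaintextCorpusReader

# Assumed layout: /ebs/category/<category_name>/<document>.txt
reader = CategorizedPlaintextCorpusReader(
    '/ebs/category',
    r'.*/.*\.txt',                  # match the files inside per-category folders
    cat_pattern=r'([^/]+)/.*\.txt'  # first path component becomes the category
)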
