Testing the NLTK classifier on a specific file


Problem description

The following code runs a Naive Bayes movie review classifier and generates a list of the most informative features.

Note: the movie_reviews corpus ships with NLTK.
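If the corpora aren't installed yet, a one-off nltk.download() fetches them (a setup step, not part of the original question):

import nltk

# One-time downloads; NLTK skips data that is already present.
nltk.download('movie_reviews')
nltk.download('stopwords')
nltk.download('punkt')  # tokenizer models, used later by word_tokenize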

import string
from itertools import chain

import nltk
from nltk.corpus import stopwords
from nltk.probability import FreqDist
from nltk.classify import NaiveBayesClassifier
from nltk.corpus import movie_reviews

stop = stopwords.words('english')

documents = [([w for w in movie_reviews.words(i) if w.lower() not in stop and w.lower() not in string.punctuation], i.split('/')[0]) for i in movie_reviews.fileids()]

word_features = FreqDist(chain(*[i for i,j in documents]))
# Python 3: .keys() can't be sliced; most_common() keeps the top 100 by count.
word_features = [word for word, count in word_features.most_common(100)]

numtrain = int(len(documents) * 90 / 100)
train_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in documents[:numtrain]]
test_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in documents[numtrain:]]

classifier = NaiveBayesClassifier.train(train_set)
print(nltk.classify.accuracy(classifier, test_set))
classifier.show_most_informative_features(5)

Code from alvas (https://stackoverflow.com/users/610569/alvas).

How do I test the classifier on a specific file?

Please let me know if my question is ambiguous or wrong.

Answer

First, read these answers carefully; they contain parts of the answer you need and also briefly explain what the classifier does and how it works in NLTK:

  • nltk NaiveBayesClassifier training for sentiment analysis
  • Using my own corpus instead of movie_reviews corpus for Classification in NLTK
  • http://www.nltk.org/book/ch06.html

Testing the classifier on annotated data

Now to answer your question. We assume that your question is a follow-up to this question: Using my own corpus instead of movie_reviews corpus for Classification in NLTK

If your test text is structured the same way as the movie_review corpus, then you can simply read the test data as you would the training data.

In case the explanation of the code is unclear, here's a walkthrough:

from nltk.corpus.reader import CategorizedPlaintextCorpusReader

traindir = '/home/alvas/my_movie_reviews'
mr = CategorizedPlaintextCorpusReader(traindir, r'(?!\.).*\.txt', cat_pattern=r'(neg|pos)/.*', encoding='ascii')

These lines read a directory my_movie_reviews with the following structure:

\my_movie_reviews
    \pos
        123.txt
        234.txt
    \neg
        456.txt
        789.txt
    README
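
As a quick sanity check (a sketch; it assumes the directory above exists with those files), you can confirm the reader sees both categories:

# Hypothetical check -- the paths and file names are illustrative.
print(mr.categories())        # expected: ['neg', 'pos']
print(mr.fileids('pos')[:3])  # a few positive file ids, e.g. ['pos/123.txt', ...]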

Then the next line extracts the documents together with their pos/neg tags, which come from the directory structure.

documents = [([w for w in mr.words(i) if w.lower() not in stop and w not in string.punctuation], i.split('/')[0]) for i in mr.fileids()]

Here's an explanation of that line:

# This extracts the pos/neg tag
labels = [i.split('/')[0] for i in mr.fileids()]
# Reads the words from the corpus through the CategorizedPlaintextCorpusReader object
words = [w for w in mr.words(i)]
# Removes the stopwords
words = [w for w in mr.words(i) if w.lower() not in stop]
# Removes the punctuation
words = [w for w in mr.words(i) if w not in string.punctuation]
# Removes the stopwords and punctuation
words = [w for w in mr.words(i) if w.lower() not in stop and w not in string.punctuation]
# Removes the stopwords and punctuation, then pairs the words with the pos/neg label
documents = [([w for w in mr.words(i) if w.lower() not in stop and w not in string.punctuation], i.split('/')[0]) for i in mr.fileids()]
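
To see what one entry of documents looks like (illustrative; the exact tokens depend on your files):

# Each document is a (words, label) tuple.
words, label = documents[0]
print(label)      # 'neg' or 'pos', taken from the directory name
print(words[:5])  # the first few non-stopword, non-punctuation tokens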

You should apply the SAME process when you read the test data!!!

Now on to feature processing.

The following lines extract the top 100 features for the classifier:

# Extract the word features and put them into a FreqDist
# object, which records the no. of times each unique word occurs
word_features = FreqDist(chain(*[i for i,j in documents]))
# Cut the FreqDist to the top 100 words in terms of their counts.
# (Python 3: .keys() can't be sliced, so use most_common() instead.)
word_features = [word for word, count in word_features.most_common(100)]

Next, process the documents into a classifiable format:

# Computes the split point: 90% of the documents go to training.
numtrain = int(len(documents) * 90 / 100)
# Process the documents into training feature sets
train_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in documents[:numtrain]]
# Process the documents into testing feature sets
test_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in documents[numtrain:]]

Now to explain the long list comprehensions for train_set and test_set:

# Take the first `numtrain` no. of documents
# as training documents
train_docs = documents[:numtrain]
# Takes the rest of the documents as test documents.
test_docs = documents[numtrain:]
# These extract the feature sets for the classifier
# please look at the full explanation on https://stackoverflow.com/questions/20827741/nltk-naivebayesclassifier-training-for-sentiment-analysis/
train_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag  in train_docs]
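
For intuition, each entry of train_set is a (featureset, label) pair, where the featureset maps each of the 100 feature words to True/False according to whether it appears in that document. A quick look (the actual words depend on your corpus):

featureset, label = train_set[0]
print(label)                         # 'neg' or 'pos'
print(list(featureset.items())[:3])  # e.g. [('movie', True), ('plot', False), ...]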

For feature extraction on the test documents, you need to process them in exactly the same way!!!

So here's how you can read the test data:

import string

from nltk.corpus import stopwords
from nltk.corpus.reader import CategorizedPlaintextCorpusReader

stop = stopwords.words('english')

# Reads the training data.
traindir = '/home/alvas/my_movie_reviews'
mr = CategorizedPlaintextCorpusReader(traindir, r'(?!\.).*\.txt', cat_pattern=r'(neg|pos)/.*', encoding='ascii')

# Converts training data into tuples of [(words, label), ...]
documents = [([w for w in mr.words(i) if w.lower() not in stop and w not in string.punctuation], i.split('/')[0]) for i in mr.fileids()]

# Now do the same for the testing data.
testdir = '/home/alvas/test_reviews'
mr_test = CategorizedPlaintextCorpusReader(testdir, r'(?!\.).*\.txt', cat_pattern=r'(neg|pos)/.*', encoding='ascii')
# Converts testing data into tuples of [(words, label), ...]
test_documents = [([w for w in mr_test.words(i) if w.lower() not in stop and w not in string.punctuation], i.split('/')[0]) for i in mr_test.fileids()]

Then continue with the processing steps described above, and simply do this to get the labels for the test documents, as @yvespeirsman answered:

import string
from itertools import chain

from nltk.corpus import stopwords
from nltk.corpus.reader import CategorizedPlaintextCorpusReader
from nltk.probability import FreqDist
from nltk.classify import NaiveBayesClassifier

#### FOR TRAINING DATA ####
stop = stopwords.words('english')

# Reads the training data.
traindir = '/home/alvas/my_movie_reviews'
mr = CategorizedPlaintextCorpusReader(traindir, r'(?!\.).*\.txt', cat_pattern=r'(neg|pos)/.*', encoding='ascii')

# Converts training data into tuples of [(words, label), ...]
documents = [([w for w in mr.words(i) if w.lower() not in stop and w not in string.punctuation], i.split('/')[0]) for i in mr.fileids()]
# Extract training features (top 100 words by count).
word_features = FreqDist(chain(*[i for i,j in documents]))
word_features = [word for word, count in word_features.most_common(100)]
# Assuming that you're using the full data set
# since your test set is different.
train_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in documents]

#### TRAINS THE CLASSIFIER ####
classifier = NaiveBayesClassifier.train(train_set)

#### FOR TESTING DATA ####
# Now do the same reading and processing for the testing data.
testdir = '/home/alvas/test_reviews'
mr_test = CategorizedPlaintextCorpusReader(testdir, r'(?!\.).*\.txt', cat_pattern=r'(neg|pos)/.*', encoding='ascii')
# Converts testing data into tuples of [(words, label), ...]
test_documents = [([w for w in mr_test.words(i) if w.lower() not in stop and w not in string.punctuation], i.split('/')[0]) for i in mr_test.fileids()]
# Converts test documents into feature sets:
test_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in test_documents]

#### EVALUATE THE CLASSIFIER ####
for doc, gold_label in test_set:
    tagged_label = classifier.classify(doc)
    if tagged_label == gold_label:
        print("Woohoo, correct")
    else:
        print("Boohoo, wrong")

If the above code and explanation make no sense to you, then you MUST read this tutorial before proceeding: http://www.nltk.org/howto/classify.html

Now let's say you have no annotations in your test data, i.e. your test.txt is not in a movie_review-style directory structure but is just a plain text file:

\test_movie_reviews
    \1.txt
    \2.txt

Then there's no point in reading it into a categorized corpus; you can simply read and tag the documents, i.e.:

import os

for infile in os.listdir('test_movie_reviews'):
    for line in open(os.path.join('test_movie_reviews', infile), 'r'):
        doc = word_tokenize(line.lower())
        featurized_doc = {i:(i in doc) for i in word_features}
        tagged_label = classifier.classify(featurized_doc)

BUT you CANNOT evaluate the results without annotations, so you can't check the predicted tag in an if-else; also, you need to tokenize the text yourself if you're not using the CategorizedPlaintextCorpusReader.

If you just want to tag a plaintext file test.txt:

import string
from itertools import chain
from nltk.corpus import stopwords
from nltk.probability import FreqDist
from nltk.classify import NaiveBayesClassifier
from nltk.corpus import movie_reviews
from nltk import word_tokenize

stop = stopwords.words('english')

# Extracts the documents.
documents = [([w for w in movie_reviews.words(i) if w.lower() not in stop and w.lower() not in string.punctuation], i.split('/')[0]) for i in movie_reviews.fileids()]
# Extract the features.
word_features = FreqDist(chain(*[i for i,j in documents]))
# Keep the 100 most frequent words (Python 3: .keys() can't be sliced).
word_features = [word for word, count in word_features.most_common(100)]
# Converts documents to features.
train_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in documents]
# Train the classifier.
classifier = NaiveBayesClassifier.train(train_set)

# Tag the test file.
with open('test.txt', 'r') as fin:
    for test_sentence in fin:
        # Tokenize the line.
        doc = word_tokenize(test_sentence.lower())
        featurized_doc = {i:(i in doc) for i in word_features}
        tagged_label = classifier.classify(featurized_doc)
        print(tagged_label)
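
If you also want a confidence score rather than just the predicted label (an extra illustration, not part of the original answer), NaiveBayesClassifier exposes prob_classify, which returns a probability distribution over the labels:

# Inside the same loop, after featurized_doc is built:
dist = classifier.prob_classify(featurized_doc)
print(dist.max())                          # same label as classifier.classify()
print(dist.prob('pos'), dist.prob('neg'))  # per-label probabilities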

Once again, please don't just copy and paste the solution; try to understand why and how it works.
