Python nltk counting word and phrase frequency
Problem description
I am using NLTK and trying to count word phrases up to a certain length for a particular document, as well as the frequency of each phrase. I tokenized the string to get the data list.
from nltk.util import ngrams
from nltk.tokenize import sent_tokenize, word_tokenize
from nltk.collocations import *

data = ["this", "is", "not", "a", "test", "this", "is", "real", "not", "a", "test",
        "this", "is", "this", "is", "real", "not", "a", "test"]

bigrams = ngrams(data, 2)
bigrams_c = {}
for b in bigrams:
    if b not in bigrams_c:
        bigrams_c[b] = 1
    else:
        bigrams_c[b] += 1
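As a side note, the same bigram tally can be written more compactly with the standard library's collections.Counter, which removes the explicit if/else; a minimal sketch using the same data list:

```python
from collections import Counter

data = ["this", "is", "not", "a", "test", "this", "is", "real", "not", "a", "test",
        "this", "is", "this", "is", "real", "not", "a", "test"]

# zip the list against itself shifted by one position to produce
# bigram tuples, then let Counter tally them in a single pass
bigrams_c = Counter(zip(data, data[1:]))

print(bigrams_c[("this", "is")])  # 4
print(bigrams_c[("not", "a")])    # 3
```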
The above code gives an output like this:
(('is', 'this'), 1)
(('test', 'this'), 2)
(('a', 'test'), 3)
(('this', 'is'), 4)
(('is', 'not'), 1)
(('real', 'not'), 2)
(('is', 'real'), 2)
(('not', 'a'), 3)
which is partially what I am looking for.
My question is: is there a more convenient way to do this for phrases up to, say, 4 or 5 words in length, without duplicating this code just to change the count variable?
Since you tagged this nltk, here's how to do it using nltk's methods, which have some more features than the ones in the standard Python collections.
from nltk import ngrams, FreqDist

all_counts = dict()
for size in 2, 3, 4, 5:
    all_counts[size] = FreqDist(ngrams(data, size))
Each element of the dictionary all_counts is a dictionary of ngram frequencies. For example, you can get the five most common trigrams like this:
all_counts[3].most_common(5)
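Putting the answer together, here is a self-contained sketch using the same data list from the question. FreqDist is a subclass of collections.Counter, so it supports both Counter-style lookups and most_common:

```python
from nltk import ngrams, FreqDist

data = ["this", "is", "not", "a", "test", "this", "is", "real", "not", "a", "test",
        "this", "is", "this", "is", "real", "not", "a", "test"]

# build one frequency distribution per n-gram size in a single loop,
# instead of duplicating the counting code for each length
all_counts = dict()
for size in 2, 3, 4, 5:
    all_counts[size] = FreqDist(ngrams(data, size))

print(all_counts[2][("this", "is")])  # 4
print(all_counts[3].most_common(1))   # [(('not', 'a', 'test'), 3)]
```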