How to find ngram frequency of a column in a pandas dataframe?
Question
Below is the input pandas dataframe I have.
I want to find the frequency of unigrams and bigrams. A sample of what I am expecting is shown below.
How can I do this using nltk or scikit-learn?
I wrote the code below, which takes a string as input. How can I extend it to a Series/DataFrame?
import nltk
from nltk.collocations import BigramCollocationFinder

desc = 'john is a guy person you him guy person you him'
tokens = nltk.word_tokenize(desc)
bigram_measures = nltk.collocations.BigramAssocMeasures()
finder = BigramCollocationFinder.from_words(tokens)
finder.ngram_fd.items()  # viewitems() is Python 2 only; use items() on Python 3
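One way to extend per-string counting to a whole column is to count ngrams row by row and merge the counts, so no bigram spans two rows. A minimal sketch using the standard library's Counter with a whitespace tokenizer (the example column and its contents are made up; nltk.word_tokenize could replace the split if the punkt data is installed):

```python
from collections import Counter
import pandas as pd

# hypothetical example column (column name and rows are assumptions)
df = pd.DataFrame({'description': ['john is a guy person',
                                   'you him guy person']})

def bigram_counts(text):
    # simple whitespace tokenizer; nltk.word_tokenize is a drop-in alternative
    tokens = text.lower().split()
    return Counter(zip(tokens, tokens[1:]))

# count per row, then merge the per-row counts into one Counter
total = Counter()
for counts in df['description'].map(bigram_counts):
    total += counts

print(total[('guy', 'person')])  # appears once in each row -> 2
```

Counting per row and merging keeps the logic identical to the single-string version while avoiding spurious bigrams across row boundaries.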
Answer
If your data is like this:
import pandas as pd
df = pd.DataFrame([
'must watch. Good acting',
'average movie. Bad acting',
'good movie. Good acting',
'pathetic. Avoid',
'avoid'], columns=['description'])
you can use the CountVectorizer from the sklearn package:
from sklearn.feature_extraction.text import CountVectorizer
word_vectorizer = CountVectorizer(ngram_range=(1, 2), analyzer='word')
sparse_matrix = word_vectorizer.fit_transform(df['description'])
frequencies = sum(sparse_matrix).toarray()[0]
pd.DataFrame(frequencies,
             index=word_vectorizer.get_feature_names_out(),  # get_feature_names() before sklearn 1.0
             columns=['frequency'])
This gives you:
frequency
acting 3
average 1
average movie 1
avoid 2
bad 1
bad acting 1
good 3
good acting 2
good movie 1
movie 2
movie bad 1
movie good 1
must 1
must watch 1
pathetic 1
pathetic avoid 1
watch 1
watch good 1
EDIT
fit will just "train" your vectorizer: it will split the words of your corpus and create a vocabulary from it. Then transform can take a new document and create a frequency vector based on the vectorizer's vocabulary.

Here your training set is your output set, so you can do both at the same time (fit_transform). Because you have 5 documents, it will create 5 vectors as a matrix. You want a global vector, so you have to make a sum.
EDIT 2
For big dataframes, you can speed up the frequencies computation with:
frequencies = sum(sparse_matrix).data
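The .data attribute returns only the nonzero entries of the summed sparse row, skipping the dense conversion. Because CountVectorizer builds its vocabulary from the corpus itself, every column sum is at least 1, so nothing is dropped and the values still line up with the feature names. A toy sketch with a hand-made document-term matrix (the matrix values are an assumption for illustration):

```python
import numpy as np
from scipy.sparse import csr_matrix

# toy document-term matrix: 3 documents x 4 features, every feature used at least once
m = csr_matrix(np.array([[1, 0, 2, 0],
                         [0, 1, 1, 0],
                         [1, 0, 0, 1]]))

total = sum(m)             # 1 x 4 sparse row of column sums
print(total.toarray()[0])  # dense column sums
print(total.data)          # same values, but without materializing a dense array
```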