How to use word_tokenize in a data frame
Question
I have recently started using the nltk module for text analysis. I am stuck at a point: I want to use word_tokenize on a dataframe, so as to obtain all the words used in a particular row of the dataframe.
data example:
text
1. This is a very good site. I will recommend it to others.
2. Can you please give me a call at 9983938428. have issues with the listings.
3. good work! keep it up
4. not a very helpful site in finding home decor.
expected output:
1. 'This','is','a','very','good','site','.','I','will','recommend','it','to','others','.'
2. 'Can','you','please','give','me','a','call','at','9983938428','.','have','issues','with','the','listings'
3. 'good','work','!','keep','it','up'
4. 'not','a','very','helpful','site','in','finding','home','decor'
Basically, I want to separate all the words and find the length of each text in the dataframe.
I know word_tokenize works on a single string, but how do I apply it to the entire dataframe?
Please help! Thanks in advance...
Answer
You can use the apply method of the DataFrame API:
import pandas as pd
import nltk

# word_tokenize relies on NLTK's 'punkt' tokenizer models; download them once if missing
nltk.download('punkt')

df = pd.DataFrame({'sentences': ['This is a very good site. I will recommend it to others.', 'Can you please give me a call at 9983938428. have issues with the listings.', 'good work! keep it up']})

# Tokenize each row's text into a list of word and punctuation tokens
df['tokenized_sents'] = df.apply(lambda row: nltk.word_tokenize(row['sentences']), axis=1)
Output:
>>> df
sentences \
0 This is a very good site. I will recommend it ...
1 Can you please give me a call at 9983938428. h...
2 good work! keep it up
tokenized_sents
0 [This, is, a, very, good, site, ., I, will, re...
1 [Can, you, please, give, me, a, call, at, 9983...
2 [good, work, !, keep, it, up]
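As a side note (an equivalent form, not part of the original answer), you can apply the tokenizer to the column as a Series and skip the row-wise lambda:

# Equivalent: map word_tokenize over the 'sentences' column directly
df['tokenized_sents'] = df['sentences'].apply(nltk.word_tokenize)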
To find the length of each text, use apply with a lambda function again:
# Count the tokens produced for each row
df['sents_length'] = df.apply(lambda row: len(row['tokenized_sents']), axis=1)
>>> df
sentences \
0 This is a very good site. I will recommend it ...
1 Can you please give me a call at 9983938428. h...
2 good work! keep it up
tokenized_sents sents_length
0 [This, is, a, very, good, site, ., I, will, re... 14
1 [Can, you, please, give, me, a, call, at, 9983... 15
2 [good, work, !, keep, it, up] 6
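Equivalently (another variant not in the original answer), the lengths can be taken straight from the tokenized column:

# Same result without a row-wise lambda: apply len() to each token list
df['sents_length'] = df['tokenized_sents'].apply(len)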