How to apply NLTK word_tokenize library on a Pandas dataframe for Twitter data?


Problem description

This is the code I am using for semantic analysis of Twitter:

import pandas as pd
import datetime
import numpy as np
import re
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem.wordnet import WordNetLemmatizer
from nltk.stem.porter import PorterStemmer

df = pd.read_csv('twitDB.csv', header=None,
                 sep=',', error_bad_lines=False, encoding='utf-8')

# concatenate the first four columns into a single lower-cased 'tweet' column
hula = df[[0, 1, 2, 3]]
hula = hula.fillna(0)
hula['tweet'] = (hula[0].astype(str) + hula[1].astype(str)
                 + hula[2].astype(str) + hula[3].astype(str))
hula["tweet"] = hula.tweet.str.lower()

# collapse whitespace and repeated dots, strip some special characters
ho = hula["tweet"]
ho = ho.replace(r'\s+', ' ', regex=True)
ho = ho.replace(r'\.+', '.', regex=True)
special_char_list = [':', ';', '?', '}', ')', '{', '(']
for special_char in special_char_list:
    ho = ho.replace(special_char, '')
print(ho)

# replace URLs with 'URL', strip the '#' from hashtags, drop quotes
ho = ho.replace(r'((www\.[\s]+)|(https?://[^\s]+))', 'URL', regex=True)
ho = ho.replace(r'#([^\s]+)', r'\1', regex=True)
ho = ho.replace('\'"', '', regex=True)  # replacement value was missing here

lem = WordNetLemmatizer()
stem = PorterStemmer()

eng_stopwords = stopwords.words('english')
ho = ho.to_frame(name=None)
# render the whole frame as one big string, stem it, then tokenize
a = ho.to_string(buf=None, columns=None, col_space=None, header=True,
                 index=True, na_rep='NaN', formatters=None, float_format=None,
                 sparsify=False, index_names=True, justify=None, line_width=None,
                 max_rows=None, max_cols=None, show_dimensions=False)
fg = stem.stem(a)  # moved here: 'a' must exist before it is stemmed
wordList = word_tokenize(fg)
wordList = [word for word in wordList if word not in eng_stopwords]
print(wordList)

The input (a) looks like this:

                                              tweet
0     1495596971.6034188::automotive auto ebc greens...
1     1495596972.330948::new free stock photo of cit...

Getting output (wordList) in this format:

tweet
 0
1495596971.6034188
:
:automotive
auto

I want each row's output to stay in its own row. How can I do that? If you have better code for semantic analysis of Twitter, please share it.

Answer

In short:

df['Text'].apply(word_tokenize)

Or, if you want to add another column to store the tokenized list of strings:

df['tokenized_text'] = df['Text'].apply(word_tokenize) 
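
Putting it together for the question's data: a minimal sketch, assuming the text sits in a 'tweet' column as in the question, that the two sample rows are made up, and that the NLTK 'punkt' and 'stopwords' data have been downloaded. Each row is tokenized on its own, so the tokens stay with their row instead of being flattened into one long list:

import pandas as pd
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

# hypothetical stand-in for the question's 'tweet' column
df = pd.DataFrame({'tweet': ['automotive auto ebc greenstuff',
                             'new free stock photo of city']})

eng_stopwords = set(stopwords.words('english'))

# tokenize row by row, then filter stopwords within each row
df['tokens'] = df['tweet'].apply(word_tokenize)
df['tokens'] = df['tokens'].apply(
    lambda toks: [w for w in toks if w not in eng_stopwords])

print(df['tokens'])
# 0    [automotive, auto, ebc, greenstuff]
# 1       [new, free, stock, photo, city]

This avoids the to_string() step in the question, which collapses the whole frame (indices included) into one string before tokenizing, and is what produces the jumbled single-column output shown above.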

There are tokenizers written specifically for Twitter text; see http://www.nltk.org/api/nltk.tokenize.html#module-nltk.tokenize.casual

To use nltk.tokenize.TweetTokenizer:

from nltk.tokenize import TweetTokenizer
tt = TweetTokenizer()
df['Text'].apply(tt.tokenize)
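
For illustration, a quick comparison on a made-up tweet (the handle, hashtag, and URL are invented; strip_handles and reduce_len are optional TweetTokenizer flags):

from nltk.tokenize import TweetTokenizer, word_tokenize

tweet = "@user loving the new #NLTK release :-) http://nltk.org"

# word_tokenize splits the handle, hashtag, emoticon and URL into pieces
print(word_tokenize(tweet))

# TweetTokenizer keeps hashtags, emoticons and URLs as single tokens;
# strip_handles=True drops '@user', reduce_len=True shortens 'soooo' -> 'sooo'
tt = TweetTokenizer(strip_handles=True, reduce_len=True)
print(tt.tokenize(tweet))
# ['loving', 'the', 'new', '#NLTK', 'release', ':-)', 'http://nltk.org']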

Related:

How to use word_tokenize in a dataframe

How to apply pos_tag_sents() to a pandas dataframe efficiently

Tokenizing words into a new column in a pandas dataframe

Run nltk sent_tokenize through a Pandas dataframe

Python text processing: NLTK and pandas

