How to do text pre-processing using spaCy?
Problem Description
How can I perform preprocessing steps such as stopword removal, punctuation removal, stemming, and lemmatization in spaCy using Python?
I have text data in a CSV file, as paragraphs and sentences, and I want to clean it.
Kindly give an example that loads the CSV into a pandas DataFrame.
Recommended Answer
This might help:
import spacy
from nltk.corpus import stopwords  # requires the NLTK stopwords corpus: nltk.download("stopwords")

# Load the small English model; the "en" shortcut used in older spaCy
# versions is deprecated. Parser, tagger, and NER are disabled for speed,
# since only lemmatization is needed here.
nlp = spacy.load("en_core_web_sm", disable=['parser', 'tagger', 'ner'])

stops = stopwords.words("english")

def normalize(comment, lowercase, remove_stopwords):
    if lowercase:
        comment = comment.lower()
    comment = nlp(comment)
    lemmatized = list()
    for word in comment:
        lemma = word.lemma_.strip()
        if lemma:
            if not remove_stopwords or (remove_stopwords and lemma not in stops):
                lemmatized.append(lemma)
    return " ".join(lemmatized)

Data['Text_After_Clean'] = Data['Text'].apply(normalize, lowercase=True, remove_stopwords=True)