Why is my NLTK function slow when processing the DataFrame?


Problem Description


I am trying to run a function over a dataset with a million rows.

  1. I read the data from a CSV into a DataFrame
  2. I use a drop list to drop the columns I don't need
  3. I pass it through an NLTK function in a for loop.

code:

import string
from nltk.corpus import stopwords

def nlkt(val):
    val=repr(val)
    clean_txt = [word for word in val.split() if word.lower() not in stopwords.words('english')]
    nopunc = [char for char in str(clean_txt) if char not in string.punctuation]
    nonum = [char for char in nopunc if not char.isdigit()]
    words_string = ''.join(nonum)
    return words_string

Now I am calling the above function in a for loop to run through the million records. Even though I am on a heavyweight server with a 24-core CPU and 88 GB of RAM, I see the loop taking too much time and not using the computational power that is there.

I am calling the above function like this:

import pandas as pd

data = pd.read_excel(scrPath + "UserData_Full.xlsx", encoding='utf-8')
droplist = ['Submitter', 'Environment']
data.drop(droplist,axis=1,inplace=True)

#Merging the columns company and detailed description

data['Anylize_Text']= data['Company'].astype(str) + ' ' + data['Detailed_Description'].astype(str)

finallist =[]

for eachlist in data['Anylize_Text']:
    z = nlkt(eachlist)
    finallist.append(z)

The above code works perfectly fine, it is just too slow when we have a few million records. This is just a sample in Excel, but the actual data will be in a DB that will run to a few hundred million rows. Is there any way I can speed up the operation to pass the data through the function faster, and use more of the available computational power?

Solution

Your original nlkt() loops through each row 3 times.

def nlkt(val):
    val=repr(val)
    # pass 1: loops over every word to drop stopwords
    clean_txt = [word for word in val.split() if word.lower() not in stopwords.words('english')]
    # pass 2: loops over every character to drop punctuation
    nopunc = [char for char in str(clean_txt) if char not in string.punctuation]
    # pass 3: loops over every character again to drop digits
    nonum = [char for char in nopunc if not char.isdigit()]
    words_string = ''.join(nonum)
    return words_string

Also, each time you call nlkt(), you re-initialize these again and again:

  • stopwords.words('english')
  • string.punctuation

These should be global.

stoplist = stopwords.words('english') + list(string.punctuation)
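
To get a feel for how much this matters, here is a minimal timing sketch (illustrative only; the exact numbers depend on your machine and NLTK version). It compares re-reading the stopword list on every membership check, which is what the original code does for every word, against a lookup in a set that is built once:

import timeit
import string
from nltk.corpus import stopwords

# Build the stoplist once, up front; a set gives O(1) membership tests.
stoplist = set(stopwords.words('english') + list(string.punctuation))

# Re-creating the stopword list on every check
slow = timeit.timeit("'the' in stopwords.words('english')",
                     setup="from nltk.corpus import stopwords", number=1000)

# Checking against the precomputed set
fast = timeit.timeit("'the' in stoplist",
                     globals={'stoplist': stoplist}, number=1000)

print(slow, fast)  # the precomputed set is typically orders of magnitude faster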

Going through things line by line:

val=repr(val)

I'm not sure why you need to do this. But you could easily cast a column to a str type. This should be done outside of your preprocessing function.

Hopefully this is self-explanatory:

>>> import pandas as pd
>>> df = pd.DataFrame([[0, 1, 2], [2, 'xyz', 4], [5, 'abc', 'def']])
>>> df
   0    1    2
0  0    1    2
1  2  xyz    4
2  5  abc  def
>>> df[1]
0      1
1    xyz
2    abc
Name: 1, dtype: object
>>> df[1].astype(str)
0      1
1    xyz
2    abc
Name: 1, dtype: object
>>> list(df[1])
[1, 'xyz', 'abc']
>>> list(df[1].astype(str))
['1', 'xyz', 'abc']

Now, moving on to the next line:

clean_txt = [word for word in val.split() if word.lower() not in stopwords.words('english')]

Using str.split() is awkward; you should use a proper tokenizer. Otherwise, your punctuation might be stuck to the preceding word, e.g.

>>> from nltk.corpus import stopwords
>>> from nltk import word_tokenize
>>> import string
>>> stoplist = stopwords.words('english') + list(string.punctuation)
>>> stoplist = set(stoplist)

>>> text = 'This is foo, bar and doh.'

>>> [word for word in text.split() if word.lower() not in stoplist]
['foo,', 'bar', 'doh.']

>>> [word for word in word_tokenize(text) if word.lower() not in stoplist]
['foo', 'bar', 'doh']
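
(Side note: word_tokenize and stopwords rely on NLTK data packages that are installed separately from the library itself. If you get a LookupError, download them once; the exact resource names may vary between NLTK versions, so follow whatever the error message suggests.)

>>> import nltk
>>> nltk.download('punkt')
>>> nltk.download('stopwords')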

Also, the .isdigit() check should be done in the same pass:

>>> text = 'This is foo, bar, 234, 567 and doh.'
>>> [word for word in word_tokenize(text) if word.lower() not in stoplist and not word.isdigit()]
['foo', 'bar', 'doh']

Putting it all together, your nlkt() should look like this:

def preprocess(text):
    return [word for word in word_tokenize(text) if word.lower() not in stoplist and not word.isdigit()]
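
Note that preprocess() returns a list of tokens, while your original nlkt() returned a single string. If the rest of your pipeline expects a string, a joined variant is a one-line change (a sketch, assuming space-separated tokens are what you want; preprocess_to_string is just an illustrative name):

def preprocess_to_string(text):
    # same filtering as above, glued back into a single string
    return ' '.join(word for word in word_tokenize(text)
                    if word.lower() not in stoplist and not word.isdigit())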

And you can use DataFrame.apply:

data['Anylize_Text'].apply(preprocess)
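
If you want to keep the result rather than just compute it, you can assign it to a new column (Clean_Text below is just an illustrative name), casting to str first as discussed above:

data['Anylize_Text'] = data['Anylize_Text'].astype(str)
data['Clean_Text'] = data['Anylize_Text'].apply(preprocess)

This replaces the explicit for loop and finallist entirely; if you still need a plain Python list, list(data['Clean_Text']) gives you one.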
