Faster alternative to iterrows


Problem description


I know that this topic has been addressed a thousand times. But I can't figure out a solution.

I'm trying to count how often a list (each row of df1.list1) occurs in a column of lists (df2.list2). All lists consist of unique values only. list1 has about 300,000 rows and list2 about 30,000 rows.

I've got working code, but it's terribly slow (because I'm using iterrows). I also tried itertuples(), but it gave me an error ("too many values to unpack (expected 2)"). I found a similar question online: Pandas counting occurrence of list contained in column of lists. In that question, however, the person only counts the occurrences of a single list within a column of lists; I can't work out how to extend it so that each row in df1.list1 is compared against df2.list2.

This is what my lists look like (simplified):

df1.list1

0   ["a", "b"]
1   ["a", "c"]
2   ["a", "d"]
3   ["b", "c"]
4   ["b", "d"]
5   ["c", "d"]


df2.list2

0    ["a", "b" ,"c", "d"]
1    ["a", "b"] 
2    ["b", "c"]
3    ["c", "d"]
4    ["b", "c"]

This is what I'd like to end up with:

df1

    list1         occurence   
0   ["a", "b"]    2
1   ["a", "c"]    1
2   ["a", "d"]    1
3   ["b", "c"]    3
4   ["b", "d"]    1
5   ["c", "d"]    2
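Stripped of pandas, the desired occurrence column is just a subset count: for each row of list1, count how many entries of list2 contain all of its elements. A minimal plain-Python sketch on the simplified data above (illustrative only):

```python
# The simplified data from the question, as plain Python lists.
list1 = [["a", "b"], ["a", "c"], ["a", "d"], ["b", "c"], ["b", "d"], ["c", "d"]]
list2 = [["a", "b", "c", "d"], ["a", "b"], ["b", "c"], ["c", "d"], ["b", "c"]]

# Convert list2 entries to sets once, then count subset matches per row.
sets2 = [set(l) for l in list2]
occurrence = [sum(set(row) <= s for s in sets2) for row in list1]
print(occurrence)  # [2, 1, 1, 3, 1, 2]
```

This reproduces the expected occurrence column above; the pandas versions below do the same thing at scale.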

This is what I've got so far:

for index, row in df1.iterrows():
    df1.at[index, "occurrence"] = df2["list2"].apply(lambda x: all(i in x for i in row["list1"])).sum()

Any suggestions on how I can speed things up? Thanks in advance!

Answer

This should be considerably faster:

df = pd.DataFrame({'list1': [["a","b"],
                             ["a","c"],
                             ["a","d"],
                             ["b","c"],
                             ["b","d"],
                             ["c","d"]]*100})
df2 = pd.DataFrame({'list2': [["a","b","c","d"],
                              ["a","b"], 
                              ["b","c"],
                              ["c","d"],
                              ["b","c"]]*100})

# convert each list in df2 to a set once, up front
list2 = df2['list2'].map(set).tolist()

df['occurrence'] = df['list1'].apply(set).apply(lambda x: len([i for i in list2 if x.issubset(i)]))
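Most of the gain in the snippet above comes from calling map(set) once, outside the per-row work, so every issubset check is a hash lookup per element instead of a scan over a list. A quick plain-Python sanity check (tiny hypothetical data, not from the question) that the set-based count matches the naive membership test used in the iterrows version:

```python
rows = [["a", "b"], ["b", "d"]]                        # plays the role of df.list1
col = [["a", "b", "c", "d"], ["a", "b"], ["c", "d"]]   # plays the role of df2.list2

# naive: re-scan each list in col for every element (what iterrows does)
naive = [sum(all(i in x for i in row) for x in col) for row in rows]

# set-based: convert col once, then use issubset
sets_col = [set(x) for x in col]
fast = [sum(set(row).issubset(s) for s in sets_col) for row in rows]

print(naive, fast)  # both [2, 1]
```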

Using your approach:

%timeit for index, row in df.iterrows(): df.at[index, "occurrence"] = df2["list2"].apply(lambda x: all(i in x for i in row['list1'])).sum()

1 loop, best of 3: 3.98 s per loop

Using mine:

%timeit list2 = df2['list2'].map(set).tolist();df['occurrence'] = df['list1'].apply(set).apply(lambda x: len([i for i in list2 if x.issubset(i)]))


10 loops, best of 3: 29.7 ms per loop

Note that I've increased the size of both frames by a factor of 100.
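One further tweak, not part of the original answer: since df1 has around 300,000 rows but far fewer distinct two-element lists, the per-row result can be cached under a frozenset key so each distinct combination is counted only once. A hedged plain-Python sketch (data and names are hypothetical):

```python
list1 = [["a", "b"], ["a", "c"], ["a", "b"], ["c", "d"]]  # repeats on purpose
list2 = [["a", "b", "c", "d"], ["a", "b"], ["b", "c"], ["c", "d"], ["b", "c"]]

sets2 = [set(l) for l in list2]
cache = {}  # frozenset(row) -> subset count, computed once per distinct row

def occurrence(row):
    key = frozenset(row)
    if key not in cache:
        cache[key] = sum(key <= s for s in sets2)  # key <= s is the subset test
    return cache[key]

print([occurrence(r) for r in list1])  # [2, 1, 2, 2]
```

In pandas the same idea could be expressed as df['list1'].map(frozenset).map(occurrence-style lookup); whether it pays off depends on how many duplicates list1 actually contains.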

EDIT

This seems to be even faster:

list2 = df2['list2'].sort_values().tolist()
df['occurrence'] = df['list1'].apply(lambda x: len(list(next(iter(())) if not all(i in list2 for i in x) else i for i in x)))

Timing:

%timeit list2 = df2['list2'].sort_values().tolist();df['occurrence'] = df['list1'].apply(lambda x: len(list(next(iter(())) if not all(i in list2 for i in x) else i for i in x)))


100 loops, best of 3: 14.8 ms per loop

