Faster alternative to iterrows


Problem description


I know that this topic has been addressed a thousand times. But I can't figure out a solution.

I'm trying to count how often a list (each row of df1.list1) occurs in a column of lists (df2.list2). All lists contain unique values only. list1 has about 300,000 rows and list2 about 30,000 rows.

I've got working code, but it's terribly slow (because I'm using iterrows). I also tried itertuples(), but it gave me an error ("too many values to unpack (expected 2)"). I found a similar question online: Pandas counting occurrence of list contained in column of lists. In that question, the person considers only the occurrence of a single list within a column of lists; I can't work out how to compare each row of df1.list1 against df2.list2.

This is what my lists look like (simplified):

df1.list1

0   ["a", "b"]
1   ["a", "c"]
2   ["a", "d"]
3   ["b", "c"]
4   ["b", "d"]
5   ["c", "d"]


df2.list2

0    ["a", "b", "c", "d"]
1    ["a", "b"] 
2    ["b", "c"]
3    ["c", "d"]
4    ["b", "c"]

What I would like to come up with:

df1

    list1         occurrence
0   ["a", "b"]    2
1   ["a", "c"]    1
2   ["a", "d"]    1
3   ["b", "c"]    3
4   ["b", "d"]    1
5   ["c", "d"]    2
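For reference, the example above can be reproduced end-to-end with plain pandas. This is a minimal sketch of the desired result, not the asker's actual data; the set/issubset test expresses "every element of the list1 row appears in the list2 row":

```python
import pandas as pd

# Reproduce the simplified example from the question.
df1 = pd.DataFrame({"list1": [["a", "b"], ["a", "c"], ["a", "d"],
                              ["b", "c"], ["b", "d"], ["c", "d"]]})
df2 = pd.DataFrame({"list2": [["a", "b", "c", "d"], ["a", "b"],
                              ["b", "c"], ["c", "d"], ["b", "c"]]})

# For each row of df1.list1, count the rows of df2.list2 that
# contain all of its elements (i.e. of which it is a subset).
sets2 = [set(l) for l in df2["list2"]]
df1["occurrence"] = [sum(set(l).issubset(s) for s in sets2)
                     for l in df1["list1"]]
print(df1["occurrence"].tolist())  # → [2, 1, 1, 3, 1, 2]
```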

That's what I've got so far:

for index, row in df1.iterrows():
    df1.at[index, "occurrence"] = df2["list2"].apply(lambda x: all(i in x for i in row['list1'])).sum()
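As an aside, the itertuples() error mentioned above ("too many values to unpack") arises because itertuples() yields a single namedtuple per row rather than an (index, row) pair. A small sketch of the corrected loop, using toy data rather than the asker's frames:

```python
import pandas as pd

df1 = pd.DataFrame({"list1": [["a", "b"], ["a", "c"]]})
df2 = pd.DataFrame({"list2": [["a", "b", "c"], ["a", "b"]]})

# itertuples() yields ONE namedtuple per row (Index plus the columns),
# so it is unpacked into a single variable, unlike iterrows().
df1["occurrence"] = 0
for row in df1.itertuples():
    df1.at[row.Index, "occurrence"] = (
        df2["list2"].apply(lambda x: all(i in x for i in row.list1)).sum()
    )
print(df1["occurrence"].tolist())  # → [2, 1]
```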

Any suggestions on how I can speed things up? Thanks in advance!

Solution

This should be much faster:

df = pd.DataFrame({'list1': [["a","b"],
                             ["a","c"],
                             ["a","d"],
                             ["b","c"],
                             ["b","d"],
                             ["c","d"]]*100})
df2 = pd.DataFrame({'list2': [["a","b","c","d"],
                              ["a","b"], 
                              ["b","c"],
                              ["c","d"],
                              ["b","c"]]*100})

list2 = df2['list2'].map(set).tolist()

df['occurrence'] = df['list1'].apply(set).apply(lambda x: len([i for i in list2 if x.issubset(i)]))

Using your approach:

%timeit for index, row in df.iterrows(): df.at[index, "occurrence"] = df2["list2"].apply(lambda x: all(i in x for i in row['list1'])).sum()

1 loop, best of 3: 3.98 s per loop

Using mine:

%timeit list2 = df2['list2'].map(set).tolist();df['occurrence'] = df['list1'].apply(set).apply(lambda x: len([i for i in list2 if x.issubset(i)]))

10 loops, best of 3: 29.7 ms per loop

Notice that I've increased the size of the lists by a factor of 100.
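Since the benchmark repeats the same six lists 100 times, a further (hypothetical) speed-up is to compute the count once per distinct list1 value and reuse it. A sketch under that assumption, caching by frozenset:

```python
import pandas as pd

# Same benchmark frames as above: a few distinct lists repeated 100x.
df = pd.DataFrame({"list1": [["a", "b"], ["a", "c"], ["a", "d"],
                             ["b", "c"], ["b", "d"], ["c", "d"]] * 100})
df2 = pd.DataFrame({"list2": [["a", "b", "c", "d"], ["a", "b"],
                              ["b", "c"], ["c", "d"], ["b", "c"]] * 100})

sets2 = [set(l) for l in df2["list2"]]

# Count each DISTINCT list1 value once (keyed by frozenset), then map
# the cached result back, instead of re-scanning df2 for every row.
cache = {}
def count_occurrences(lst):
    key = frozenset(lst)
    if key not in cache:
        cache[key] = sum(key.issubset(s) for s in sets2)
    return cache[key]

df["occurrence"] = df["list1"].map(count_occurrences)
print(df["occurrence"].head(6).tolist())  # → [200, 100, 100, 300, 100, 200]
```

With only six distinct lists, df2 is scanned six times instead of 600; the gain shrinks as the share of distinct lists grows.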

EDIT

This one seems even faster, though two caveats apply: it short-circuits by raising StopIteration inside a generator expression, which PEP 479 turned into a RuntimeError from Python 3.7 on, and the membership test (i in list2) compares single elements against whole lists, so it does not reproduce the set-based counts. Treat it as a historical curiosity:

list2 = df2['list2'].sort_values().tolist()
df['occurrence'] = df['list1'].apply(lambda x: len(list(next(iter(())) if not all(i in list2 for i in x) else i for i in x)))

And timing:

%timeit list2 = df2['list2'].sort_values().tolist();df['occurrence'] = df['list1'].apply(lambda x: len(list(next(iter(())) if not all(i in list2 for i in x) else i for i in x)))

100 loops, best of 3: 14.8 ms per loop
