How to speed up pandas with cython (or numpy)


Problem description

I'm trying to use Cython to speed up a Pandas DataFrame computation that is relatively simple: iterate over each row in the DataFrame, add that row to itself and to all remaining rows in the DataFrame, sum each of those combined rows, and yield a list of the resulting sums. These series get shorter as the rows in the DataFrame are used up. The series are stored as a dictionary keyed on the index row number.

def foo(df):
    vals = {i: (df.iloc[i, :] + df.iloc[i:, :]).sum(axis=1).values.tolist()
            for i in range(df.shape[0])}   
    return vals

Other than adding %%cython at the top of this function, does anyone have a suggestion for how I'd go about using cdefs to convert the DataFrame values to doubles and then cythonize this code?

Here is some dummy data:

>>> df

          A         B         C         D         E
0 -0.326403  1.173797  1.667856 -1.087655  0.427145
1 -0.797344  0.004362  1.499460  0.427453 -0.184672
2 -1.764609  1.949906 -0.968558  0.407954  0.533869
3  0.944205  0.158495 -1.049090 -0.897253  1.236081
4 -2.086274  0.112697  0.934638 -1.337545  0.248608
5 -0.356551 -1.275442  0.701503  1.073797 -0.008074
6 -1.300254  1.474991  0.206862 -0.859361  0.115754
7 -1.078605  0.157739  0.810672  0.468333 -0.851664
8  0.900971  0.021618  0.173563 -0.562580 -2.087487
9  2.155471 -0.605067  0.091478  0.242371  0.290887

And the expected output:

>>> foo(df)

{0: [3.7094795101205236,
  2.8039983729106,
  2.013301815968468,
  2.24717712931852,
  -0.27313665495940964,
  1.9899718844711711,
  1.4927321304935717,
  1.3612155622947018,
  0.3008239883773878,
  4.029880107986906],

. . .

 6: [-0.72401524913338,
  -0.8555318173322499,
  -1.9159233912495635,
  1.813132728359954],
 7: [-0.9870483855311194, -2.047439959448434, 1.6816161601610844],
 8: [-3.107831533365748, 0.6212245862437702],
 9: [4.350280705853288]}

Solution

If you're just trying to do it faster and not specifically using cython, I'd just do it in plain numpy (about 50x faster).

def numpy_foo(arr):
    vals = {i: (arr[i, :] + arr[i:, :]).sum(axis=1).tolist()
            for i in range(arr.shape[0])}   
    return vals

%timeit foo(df)
100 loops, best of 3: 7.2 ms per loop

%timeit numpy_foo(df.values)
10000 loops, best of 3: 144 µs per loop

foo(df) == numpy_foo(df.values)
Out[586]: True

Generally speaking, pandas gives you a lot of conveniences relative to numpy, but there are overhead costs. So in situations where pandas isn't really adding anything, you can generally speed things up by doing it in numpy. For another example, see this question I asked which showed a roughly comparable speed difference (about 23x).
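Going one step further than the answer above: since addition distributes over the row sum, each entry of the result is just the sum of two row totals, i.e. (arr[i] + arr[j]).sum() equals arr[i].sum() + arr[j].sum(). So the per-row sums can be precomputed once, avoiding the repeated 2-D additions entirely. A minimal sketch of that extra optimization (the function name rowsum_foo is mine, not from the original answer):

```python
import numpy as np

def rowsum_foo(arr):
    # Precompute each row's total once; then
    # (arr[i] + arr[j]).sum() == rowsums[i] + rowsums[j]
    rowsums = arr.sum(axis=1)
    # rowsums[i] is a scalar, rowsums[i:] is a shrinking 1-D slice,
    # so broadcasting produces the same shrinking lists as numpy_foo
    return {i: (rowsums[i] + rowsums[i:]).tolist()
            for i in range(arr.shape[0])}
```

This does O(n*m) work up front and O(n^2) additions after, instead of O(n^2 * m) element-wise additions, so it should pull further ahead as the number of columns grows. The results match numpy_foo(df.values) up to floating-point rounding, since the summation order differs.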
