fast python numpy where functionality?


Problem description

I am using numpy's where function many times inside several for loops, but it has become way too slow. Is there any way to perform this functionality faster? I read that you should try to write in-line for loops, and to make local variables for functions before the for loops, but nothing seems to improve speed by much (< 1%). len(UNIQ_IDS) is ~800. emiss_data and obj_data are numpy ndarrays with shape = (2600, 5200). I've used import profile to get a handle on where the bottlenecks are, and the where calls inside the for loops are a big one.

import numpy as np
max = np.max
where = np.where
MAX_EMISS = [max(emiss_data[where(obj_data == i)]) for i in UNIQ_IDS]
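
For reference, here is roughly how such a profile can be collected with the standard library's cProfile (the profile module mentioned above is its slower pure-Python counterpart). compute is just a hypothetical wrapper around the comprehension above:

import cProfile

def compute():  # hypothetical wrapper so the comprehension shows up as one call
    return [np.max(emiss_data[np.where(obj_data == i)]) for i in UNIQ_IDS]

cProfile.run('compute()', sort='cumtime')  # print calls sorted by cumulative time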

Solution

It turns out that a pure Python loop can be much much faster than NumPy indexing (or calls to np.where) in this case.

Consider the following alternatives:

import numpy as np
import collections
import itertools as IT

shape = (2600,5200)
# shape = (26,52)
emiss_data = np.random.random(shape)
obj_data = np.random.random_integers(1, 800, size=shape)  # deprecated in modern NumPy; np.random.randint(1, 801, size=shape) is the equivalent
UNIQ_IDS = np.unique(obj_data)

def using_where():
    max = np.max
    where = np.where
    MAX_EMISS = [max(emiss_data[where(obj_data == i)]) for i in UNIQ_IDS]
    return MAX_EMISS

def using_index():
    max = np.max
    MAX_EMISS = [max(emiss_data[obj_data == i]) for i in UNIQ_IDS]
    return MAX_EMISS

def using_max():
    MAX_EMISS = [(emiss_data[obj_data == i]).max() for i in UNIQ_IDS]
    return MAX_EMISS

def using_loop():
    result = collections.defaultdict(list)
    for val, idx in IT.izip(emiss_data.ravel(), obj_data.ravel()):  # Python 2; on Python 3 use the builtin zip
        result[idx].append(val)
    return [max(result[idx]) for idx in UNIQ_IDS]

def using_sort():
    # UNIQ_IDS is sorted, so digitize maps each label to its group index
    uind = np.digitize(obj_data.ravel(), UNIQ_IDS) - 1
    # argsort gathers the flat positions of each group into contiguous runs
    vals = uind.argsort()
    # bincount gives the size of each run
    count = np.bincount(uind)
    start = 0
    end = 0
    out = np.empty(count.shape[0])
    for ind, x in np.ndenumerate(count):
        end += x
        # take the group's values via flat indexing and reduce with max
        out[ind] = np.max(np.take(emiss_data, vals[start:end]))
        start += x
    return out

def using_split():
    uind = np.digitize(obj_data.ravel(), UNIQ_IDS) - 1
    vals = uind.argsort()
    count = np.bincount(uind)
    # split the sorted positions at the group boundaries, then reduce each chunk
    return [np.take(emiss_data, item).max()
            for item in np.split(vals, count.cumsum())[:-1]]

for func in (using_index, using_max, using_loop, using_sort, using_split):
    assert np.allclose(using_where(), func())  # element-wise check; a bare list == array comparison would raise

Here are the benchmarks, with shape = (2600,5200):

In [57]: %timeit using_loop()
1 loops, best of 3: 9.15 s per loop

In [90]: %timeit using_sort()
1 loops, best of 3: 9.33 s per loop

In [91]: %timeit using_split()
1 loops, best of 3: 9.33 s per loop

In [61]: %timeit using_index()
1 loops, best of 3: 63.2 s per loop

In [62]: %timeit using_max()
1 loops, best of 3: 64.4 s per loop

In [58]: %timeit using_where()
1 loops, best of 3: 112 s per loop

Thus using_loop (pure Python) turns out to be more than 11x faster than using_where.

I'm not entirely sure why pure Python is faster than NumPy here. My guess is that the pure Python version zips (yes, pun intended) through both arrays exactly once. It leverages the fact that, despite all the fancy indexing, we really just want to visit each value once, so it side-steps having to determine exactly which group each value in emiss_data falls in. But this is just vague speculation; I didn't know it would be faster until I benchmarked.
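
As a further alternative, a fully vectorized single-pass grouped maximum is possible with np.maximum.at (a ufunc method available since NumPy 1.8). This is only a sketch under the setup above, and the function name is mine; ufunc.at has a reputation for being slow, so benchmark it against using_loop before relying on it:

def using_maximum_at():
    # np.unique returns sorted labels, so searchsorted maps each label
    # in obj_data to its group index 0 .. len(UNIQ_IDS)-1
    idx = np.searchsorted(UNIQ_IDS, obj_data.ravel())
    # accumulate a running per-group maximum in one vectorized pass
    out = np.full(len(UNIQ_IDS), -np.inf)
    np.maximum.at(out, idx, emiss_data.ravel())
    return out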
