alternative to numpy.argwhere to speed up for loop in python
Question
I have two datasets as follows:
ds1: a DEM (digital elevation model) file as a 2d numpy array, and
ds2: an array showing areas (pixels) with some excess water in them.
I have a while loop that is responsible for spreading (and changing) the excess volume in each pixel according to the elevation of its 8 neighbours and itself, until the excess volume in each pixel is less than a certain value d = 0.05. Therefore, in each iteration I need to find the indices of the pixels in ds2 where the excess volume is greater than 0.05, and exit the while loop if no such pixels remain:
exit_code = "No"
while exit_code == "No":
    # find the locations of pixels where the excess volume is greater than 0.05
    index_of_pixels_with_excess_volume = numpy.argwhere(ds2 > 0.05)
    if not index_of_pixels_with_excess_volume.size:
        exit_code = "Yes"
    else:
        for pixel in index_of_pixels_with_excess_volume:
            # spread those excess volumes to the neighbours and
            # change the values of ds2
            ...
The problem is that numpy.argwhere(ds2 > 0.05) is very slow. I am looking for a faster alternative.
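For reference, the same control flow can be written without the string flag, using while True and break (a sketch; the random ds2 and the capping placeholder below are made up so the loop is runnable and terminates — the real spreading step is the part the question omits):

```python
import numpy as np

d = 0.05
ds2 = np.random.rand(100, 100) * 0.1  # stand-in for the real excess-water array

while True:
    index_of_pixels_with_excess_volume = np.argwhere(ds2 > d)
    if not index_of_pixels_with_excess_volume.size:
        break
    for pixel in index_of_pixels_with_excess_volume:
        # placeholder: the real code would spread the excess to the
        # 8 neighbours; here we just cap the pixel so the loop ends
        ds2[tuple(pixel)] = d
```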
Answer
Make a sample 2d array:
In [584]: arr = np.random.rand(1000,1000)
Find a small fraction of its elements:
In [587]: np.where(arr>.999)
Out[587]:
(array([ 1, 1, 1, ..., 997, 999, 999], dtype=int32),
array([273, 471, 584, ..., 745, 310, 679], dtype=int32))
In [588]: _[0].shape
Out[588]: (1034,)
Time the individual pieces of argwhere:
In [589]: timeit arr>.999
2.65 ms ± 116 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [590]: timeit np.count_nonzero(arr>.999)
2.79 ms ± 26 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [591]: timeit np.nonzero(arr>.999)
6 ms ± 10 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [592]: timeit np.argwhere(arr>.999)
6.06 ms ± 58.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
So about 1/3 of the time is spent doing the > test, and the rest in finding the True elements. Turning the where tuple into a 2-column array is fast.
Now, if the goal were just to find the first > value, argmax is fast.
In [593]: np.argmax(arr>.999)
Out[593]: 1273 # can unravel this to (1,273)
In [594]: timeit np.argmax(arr>.999)
2.76 ms ± 143 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
argmax short-circuits, so the actual run time will vary depending on where it finds the first value.
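A small demonstration of that short-circuit behaviour (the masks and positions below are made up for illustration): argmax scans the boolean array and stops at the first True, returning a flat index that can then be unravelled into 2d coordinates.

```python
import numpy as np

# two boolean masks with a single True, early vs late in the array;
# argmax stops scanning at the first True, so the early case does less work
mask_early = np.zeros(1_000_000, dtype=bool)
mask_early[10] = True
mask_late = np.zeros(1_000_000, dtype=bool)
mask_late[999_990] = True

print(np.argmax(mask_early))   # 10
print(np.argmax(mask_late))    # 999990

# a flat index unravels to (row, col) for a (1000, 1000) array
r, c = np.unravel_index(1273, (1000, 1000))
print(r, c)                    # 1 273
```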
flatnonzero is faster than where:
In [595]: np.flatnonzero(arr>.999)
Out[595]: array([ 1273, 1471, 1584, ..., 997745, 999310, 999679], dtype=int32)
In [596]: timeit np.flatnonzero(arr>.999)
3.05 ms ± 26.6 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [599]: np.unravel_index(np.flatnonzero(arr>.999),arr.shape)
Out[599]:
(array([ 1, 1, 1, ..., 997, 999, 999], dtype=int32),
array([273, 471, 584, ..., 745, 310, 679], dtype=int32))
In [600]: timeit np.unravel_index(np.flatnonzero(arr>.999),arr.shape)
3.05 ms ± 3.58 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [601]: timeit np.transpose(np.unravel_index(np.flatnonzero(arr>.999),arr.shape))
3.1 ms ± 5.86 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
This is the same as np.argwhere(arr>.999).
Interestingly, the flatnonzero approach cuts the time in half! I didn't expect such a big improvement.
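As a sanity check (not in the original answer), the two routes produce identical index arrays, since both argwhere and flatnonzero walk the array in C order:

```python
import numpy as np

rng = np.random.default_rng(0)
arr = rng.random((1000, 1000))

via_argwhere = np.argwhere(arr > 0.999)
via_flat = np.transpose(np.unravel_index(np.flatnonzero(arr > 0.999), arr.shape))

# same (row, col) pairs, in the same order
assert np.array_equal(via_argwhere, via_flat)
```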
Compare iteration speeds:
Iterating on the 2d array from argwhere:
In [607]: pixels = np.argwhere(arr>.999)
In [608]: timeit [pixel for pixel in pixels]
347 µs ± 5.29 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Iterating on the tuple from where with the zip(*) transpose:
In [609]: idx = np.where(arr>.999)
In [610]: timeit [pixel for pixel in zip(*idx)]
256 µs ± 147 ns per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Iterating on an array is often a little slower than iterating on a list, or in this case zipped arrays.
In [611]: [pixel for pixel in pixels][:5]
Out[611]:
[array([ 1, 273], dtype=int32),
array([ 1, 471], dtype=int32),
array([ 1, 584], dtype=int32),
array([ 1, 826], dtype=int32),
array([ 2, 169], dtype=int32)]
In [612]: [pixel for pixel in zip(*idx)][:5]
Out[612]: [(1, 273), (1, 471), (1, 584), (1, 826), (2, 169)]
One is a list of arrays, the other a list of tuples. But turning those tuples into arrays (individually) is slow:
In [614]: timeit [np.array(pixel) for pixel in zip(*idx)]
2.26 ms ± 4.94 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Iterating on the flatnonzero array is faster:
In [617]: fdx = np.flatnonzero(arr>.999)
In [618]: fdx[:5]
Out[618]: array([1273, 1471, 1584, 1826, 2169], dtype=int32)
In [619]: timeit [i for i in fdx]
112 µs ± 23.5 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
but applying unravel to those values individually takes time:
def foo(idx):  # a simplified unravel for a (1000, 1000) array
    return idx // 1000, idx % 1000
In [628]: timeit [foo(i) for i in fdx]
1.12 ms ± 1.02 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Adding this 1 ms to the 3 ms it takes to generate fdx, the flatnonzero approach might still come out ahead, but at best we are talking about a 2x speed improvement.
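Putting this together for the original question, a drop-in helper (a sketch; the function name and the tiny ds2 below are made up for illustration) that finds the excess-volume pixels via flatnonzero instead of argwhere could look like:

```python
import numpy as np

def pixels_over_threshold(ds2, d=0.05):
    """Return (row, col) pairs where ds2 exceeds d, via the faster
    flatnonzero + unravel_index route instead of argwhere."""
    flat = np.flatnonzero(ds2 > d)
    rows, cols = np.unravel_index(flat, ds2.shape)
    return [(int(r), int(c)) for r, c in zip(rows, cols)]

# tiny example array standing in for the real ds2
ds2 = np.array([[0.00, 0.10],
                [0.20, 0.04]])
print(pixels_over_threshold(ds2))   # [(0, 1), (1, 0)]
```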