Numpy performance differences depending on numerical values
Problem description
I found a strange performance difference while evaluating an expression in Numpy.
I ran the following code:
import numpy as np

myarr = np.random.uniform(-1, 1, [1100, 1100])
Then
%timeit np.exp( - 0.5 * (myarr / 0.001)**2 )
>> 184 ms ± 301 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
and
%timeit np.exp( - 0.5 * (myarr / 0.1)**2 )
>> 12.3 ms ± 34.3 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
That's an almost 15x faster computation in the second case! Note that the only difference is the factor being 0.1 or 0.001.
What's the reason for this behaviour? Can I change something to make the first calculation as fast as the second?
Recommended answer
Dividing by 0.001 drives the argument of `exp` far into the negative range, so many results underflow into denormalised (subnormal) numbers, which most CPUs handle on a much slower path than normal floats.
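To check that subnormals really appear here, one can count the outputs that fall below the smallest positive normal `float64`. A minimal sketch (the random seed and generator are arbitrary, not from the question):

```python
import numpy as np

# Smallest positive *normal* float64; any nonzero value below it is subnormal.
tiny = np.finfo(np.float64).tiny  # about 2.2e-308

rng = np.random.default_rng(0)
myarr = rng.uniform(-1, 1, [1100, 1100])

out = np.exp(-0.5 * (myarr / 0.001) ** 2)

# Elements whose exp() result underflowed into the subnormal range.
n_subnormal = np.count_nonzero((out > 0) & (out < tiny))
print(n_subnormal)
```

With the 0.001 factor the count is nonzero; with 0.1 the arguments never get negative enough to underflow.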
You may like to disable denormalized numbers using the daz library:
import daz

daz.set_daz()  # treat denormal inputs as zero in subsequent FP operations
               # (daz.set_ftz() additionally flushes denormal results to zero)
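If installing daz is not an option, a portable alternative (a sketch of my own, not from the original answer) is to skip `exp` wherever the argument is so negative that the result would underflow, and write an exact zero there instead, which is what flush-to-zero hardware mode would effectively do anyway:

```python
import numpy as np

rng = np.random.default_rng(0)
myarr = rng.uniform(-1, 1, [1100, 1100])

arg = -0.5 * (myarr / 0.001) ** 2

# exp() stays in the normal float64 range for arguments above roughly -708;
# -700 leaves a safety margin. Below that, leave the pre-filled 0.0 in place
# instead of letting exp() produce a slow subnormal result.
out = np.zeros_like(arg)
mask = arg > -700.0
np.exp(arg, where=mask, out=out)
```

The skipped results would all have been below about 1e-304, so replacing them with exact zeros changes the answer only negligibly.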