Faster implementation for ReLU derivative in Python?
Question
I have implemented the ReLU derivative as:
def relu_derivative(x):
    return (x > 0) * np.ones(x.shape)
I have also tried:
def relu_derivative(x):
    x[x >= 0] = 1
    x[x < 0] = 0
    return x
Size of X = (3072, 10000). But it's taking a long time to compute. Is there another, more optimized solution?
Answer
Approach #1: Using numexpr

When working with large data, we can use the numexpr module, which supports multi-core processing when the intended operations can be expressed as arithmetic ones. Here, one way would be -
(X>=0)+0
Thus, to solve our case, it would be -
import numexpr as ne
ne.evaluate('(X>=0)+0')
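As a minimal, self-contained sketch of this approach (the small array `X` and its random contents here are illustrative stand-ins for the (3072, 10000) input, not the original data):

```python
import numpy as np
import numexpr as ne

# Small stand-in for the (3072, 10000) array in the question.
X = np.random.randn(4, 5)

# numexpr compiles the expression and evaluates it with multiple threads;
# adding 0 promotes the boolean comparison to an integer result.
out = ne.evaluate('(X>=0)+0')

# The values match the plain NumPy equivalent.
assert np.array_equal(out, (X >= 0) + 0)
```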
Approach #2: Using NumPy views

Another trick would be to use views, by viewing the mask of comparisons as an int array, like so -
(X>=0).view('i1')
On performance, it should be identical to creating X>=0.
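A small sketch of what the view trick does (the sample values are illustrative):

```python
import numpy as np

X = np.array([[-1.5, 0.0], [2.0, -0.3]])

mask = X >= 0        # boolean array, stored one byte per element
d = mask.view('i1')  # reinterpret those same bytes as int8: no copy is made

assert d.dtype == np.int8
assert np.array_equal(d, [[0, 1], [1, 0]])
assert d.base is mask  # d shares memory with mask, hence the near-zero cost
```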
Timings
Comparing all posted solutions on a random array -
In [14]: np.random.seed(0)
    ...: X = np.random.randn(3072,10000)

In [15]: # OP's soln-1
    ...: def relu_derivative_v1(x):
    ...:     return (x>0)*np.ones(x.shape)
    ...:
    ...: # OP's soln-2
    ...: def relu_derivative_v2(x):
    ...:     x[x>=0]=1
    ...:     x[x<0]=0
    ...:     return x

In [16]: %timeit ne.evaluate('(X>=0)+0')
10 loops, best of 3: 27.8 ms per loop

In [17]: %timeit (X>=0).view('i1')
100 loops, best of 3: 19.3 ms per loop

In [18]: %timeit relu_derivative_v1(X)
1 loop, best of 3: 269 ms per loop

In [19]: %timeit relu_derivative_v2(X)
1 loop, best of 3: 89.5 ms per loop
The numexpr based one was run with 8 threads. Thus, with more threads available for compute, it should improve further. See this related post on how to control multi-core functionality.
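For instance, numexpr exposes `set_num_threads` to pin the worker-thread count (a sketch; the count of 4 is an arbitrary example):

```python
import numexpr as ne

# set_num_threads returns the previous setting, so it can be restored later.
prev = ne.set_num_threads(4)

# ... run ne.evaluate(...) calls with 4 worker threads here ...

ne.set_num_threads(prev)  # restore the original thread count
```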
Mix both of these for the most optimal solution for large arrays -
In [27]: np.random.seed(0)
    ...: X = np.random.randn(3072,10000)

In [28]: %timeit ne.evaluate('X>=0').view('i1')
100 loops, best of 3: 14.7 ms per loop
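Wrapped up as a function, the combined version might look like this (a sketch; the name `relu_derivative_fast` is illustrative and not from the original answer):

```python
import numpy as np
import numexpr as ne

def relu_derivative_fast(x):
    # numexpr runs the comparison with multiple threads; the boolean
    # result is then reinterpreted as int8 without a copy.
    return ne.evaluate('x >= 0').view('i1')

X = np.random.randn(8, 8)
out = relu_derivative_fast(X)
assert np.array_equal(out, (X >= 0).astype('i1'))
```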