Altering numpy function output array in place
Question
I'm trying to write a function that performs a mathematical operation on an array and returns the result. A simplified example could be:
def original_func(A):
    return A[1:] + A[:-1]
For speed-up and to avoid allocating a new output array for each function call, I would like to have the output array as an argument, and alter it in place:
def inplace_func(A, out):
    out[:] = A[1:] + A[:-1]
However, when calling these two functions in the following manner,
import numpy

A = numpy.random.rand(1000, 1000)
out = numpy.empty((999, 1000))
C = original_func(A)
inplace_func(A, out)
the original function seems to be twice as fast as the in-place function. How can this be explained? Shouldn't the in-place function be quicker since it doesn't have to allocate memory?
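The reported difference can be reproduced with a small timing sketch (the function bodies are taken from the snippets above; absolute times are machine-dependent, so only the ratio is meaningful):

```python
import timeit

import numpy

def original_func(A):
    # Allocates a fresh result array on every call.
    return A[1:] + A[:-1]

def inplace_func(A, out):
    # Writes the result into a preallocated array.
    out[:] = A[1:] + A[:-1]

A = numpy.random.rand(1000, 1000)
out = numpy.empty((999, 1000))

t_orig = timeit.timeit(lambda: original_func(A), number=100)
t_inplace = timeit.timeit(lambda: inplace_func(A, out), number=100)
print(f"original: {t_orig:.3f}s  in-place: {t_inplace:.3f}s")
```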
Answer
I think that the answer is the following:
In both cases, you compute A[1:] + A[:-1], and in both cases, you actually create an intermediate matrix.
What happens in the second case, though, is that you then explicitly copy the whole big, newly allocated array into the preallocated memory. Copying such an array takes about the same time as the original operation, so you in fact double the time.
To sum up, in the first case, you do:
compute A[1:] + A[:-1] (~10ms)
In the second case, you do:
compute A[1:] + A[:-1] (~10ms)
copy the result into out (~10ms)
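If the goal is to genuinely skip both the intermediate allocation and the copy, one option (a sketch, not part of the original answer) is to pass the preallocated array directly to the ufunc via its out= argument, so the addition writes its result straight into place:

```python
import numpy as np

def original_func(A):
    # Allocates a fresh result array on every call.
    return A[1:] + A[:-1]

def ufunc_out_func(A, out):
    # np.add writes directly into out: no intermediate array, no extra copy.
    np.add(A[1:], A[:-1], out=out)

A = np.random.rand(1000, 1000)
out = np.empty((999, 1000))
ufunc_out_func(A, out)
# out now holds the same values as original_func(A)
```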