Numpy Performance - Outer Product of a vector with its transpose


Problem Description

It is my understanding that the outer product of a vector with its transpose is symmetric in value.

Does NumPy take this into account and perform the multiplications only for the upper-triangular part of the output, or does it compute the whole output matrix (even though it is symmetric, so time and memory would go to waste)?
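A quick sanity check of that premise, as a minimal sketch:

import numpy as np

x = np.arange(100)
M = np.outer(x, x)

# M[i, j] = x[i] * x[j] = x[j] * x[i] = M[j, i], so M equals its transpose
assert np.array_equal(M, M.T)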

Solution

Exploring some alternatives:

In [162]: x=np.arange(100)
In [163]: np.outer(x,x)
Out[163]: 
array([[   0,    0,    0, ...,    0,    0,    0],
       [   0,    1,    2, ...,   97,   98,   99],
       [   0,    2,    4, ...,  194,  196,  198],
       ...,
       [   0,   97,  194, ..., 9409, 9506, 9603],
       [   0,   98,  196, ..., 9506, 9604, 9702],
       [   0,   99,  198, ..., 9603, 9702, 9801]])
In [164]: x1=x[:,None]
In [165]: x1*x1.T
Out[165]: 
array([[   0,    0,    0, ...,    0,    0,    0],
       [   0,    1,    2, ...,   97,   98,   99],
       [   0,    2,    4, ...,  194,  196,  198],
       ...,
       [   0,   97,  194, ..., 9409, 9506, 9603],
       [   0,   98,  196, ..., 9506, 9604, 9702],
       [   0,   99,  198, ..., 9603, 9702, 9801]])
In [166]: np.dot(x1,x1.T)
Out[166]: 
array([[   0,    0,    0, ...,    0,    0,    0],
       [   0,    1,    2, ...,   97,   98,   99],
       [   0,    2,    4, ...,  194,  196,  198],
       ...,
       [   0,   97,  194, ..., 9409, 9506, 9603],
       [   0,   98,  196, ..., 9506, 9604, 9702],
       [   0,   99,  198, ..., 9603, 9702, 9801]])
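All three spellings build the same array; a quick equivalence check, as a minimal sketch:

import numpy as np

x = np.arange(100)
x1 = x[:, None]                     # column vector, shape (100, 1)

a = np.outer(x, x)                  # dedicated outer-product helper
b = x1 * x1.T                       # broadcast a column against a row
c = np.dot(x1, x1.T)                # (100,1) @ (1,100) matrix product

assert np.array_equal(a, b) and np.array_equal(a, c)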

Comparing their times:

In [167]: timeit np.outer(x,x)
40.8 µs ± 63.1 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [168]: timeit x1*x1.T
36.3 µs ± 22 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [169]: timeit np.dot(x1,x1.T)
60.7 µs ± 6.86 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)

Is dot using a transpose shortcut? I don't think so, or if it does, it doesn't help in this case. I'm a little surprised that dot is slower.

In [170]: x2=x1.T
In [171]: timeit np.dot(x1,x2)
61.1 µs ± 30 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
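The raw ingredients for such a transpose check are cheap to inspect; a minimal sketch of what dot could look at (no claim that it actually does this here):

import numpy as np

x1 = np.arange(100)[:, None]
x2 = x1.T

# x2 is a view of x1: same data buffer, with shape (and strides) exchanged
print(np.shares_memory(x1, x2))            # True
print(x1.ctypes.data == x2.ctypes.data)    # True: identical buffer pointer
print(x1.shape, x2.shape)                  # (100, 1) (1, 100)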

Another method:

In [172]: timeit np.einsum('i,j',x,x)
28.3 µs ± 19.4 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
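The 'i,j' subscripts contain no repeated index, so nothing is summed; the result is the full 2-D outer product out[i, j] = x[i] * x[j]. A minimal sketch:

import numpy as np

x = np.arange(100)
out = np.einsum('i,j', x, x)    # no repeated index, so a pure outer product
assert np.array_equal(out, np.outer(x, x))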

einsum with x1 and x2 gives the same timing.

It is interesting that matmul does as well as einsum in this case (maybe einsum delegates to matmul?).

In [178]: timeit x1@x2
27.3 µs ± 1.09 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [179]: timeit x1@x1.T
27.2 µs ± 14.2 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
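Note that matmul applied to two 1-D arrays contracts them to a scalar (an inner product), so the outer product needs the explicit 2-D shapes, while einsum can take the raw vectors. A minimal sketch:

import numpy as np

x = np.arange(100)

assert (x @ x) == np.dot(x, x)        # 1-D @ 1-D contracts to a scalar
outer_mm = x[:, None] @ x[None, :]    # 2-D shapes give the outer product
assert np.array_equal(outer_mm, np.einsum('i,j', x, x))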

Numpy efficient matrix self-multiplication (gram matrix) demonstrates how dot can save time by being clever (for a 1000x1000 array).

As discussed in the linked question, dot can detect when one argument is the transpose of the other (probably by checking the data buffer pointer, shape, and strides), and can then use a BLAS function optimized for symmetric calculations. But I don't see evidence of outer doing that, and it's unlikely that broadcasted multiplication would take such a step.
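For reference, a minimal sketch of such a symmetric BLAS call, using SciPy's wrapper for the Level-3 routine syrk (this assumes SciPy is available; nothing in outer does this for you). syrk fills only one triangle, so the result must be mirrored by hand:

import numpy as np
from scipy.linalg import blas

x = np.arange(100, dtype=float)
a = x[:, None]                           # column vector, shape (100, 1)

# dsyrk computes alpha * a @ a.T, writing only one triangle of the result
c = blas.dsyrk(alpha=1.0, a=a, lower=0)  # upper triangle referenced
full = np.triu(c) + np.triu(c, 1).T      # mirror into the full symmetric matrix

assert np.array_equal(full, np.outer(x, x))

For a length-100 vector the call overhead likely swamps any savings; the gains in the linked question show up at much larger sizes.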
