Element-wise dot product of matrices and vectors


Problem description

There are very similar questions here, here, and here, but I don't quite understand how to apply them to my case.

I have an array of matrices and an array of vectors, and I need their element-wise dot product. Illustration:

In [1]: matrix1 = np.eye(5)

In [2]: matrix2 = np.eye(5) * 5

In [3]: matrices = np.array((matrix1,matrix2))

In [4]: matrices
Out[4]: 
array([[[ 1.,  0.,  0.,  0.,  0.],
        [ 0.,  1.,  0.,  0.,  0.],
        [ 0.,  0.,  1.,  0.,  0.],
        [ 0.,  0.,  0.,  1.,  0.],
        [ 0.,  0.,  0.,  0.,  1.]],

       [[ 5.,  0.,  0.,  0.,  0.],
        [ 0.,  5.,  0.,  0.,  0.],
        [ 0.,  0.,  5.,  0.,  0.],
        [ 0.,  0.,  0.,  5.,  0.],
        [ 0.,  0.,  0.,  0.,  5.]]])

In [5]: vectors = np.ones((5,2))

In [6]: vectors
Out[6]: 
array([[ 1.,  1.],
       [ 1.,  1.],
       [ 1.,  1.],
       [ 1.,  1.],
       [ 1.,  1.]])

In [9]: np.array([m @ v for m,v in zip(matrices, vectors.T)]).T
Out[9]: 
array([[ 1.,  5.],
       [ 1.,  5.],
       [ 1.,  5.],
       [ 1.,  5.],
       [ 1.,  5.]])

This last line is my desired output. Unfortunately, it is very inefficient: for instance, matrices @ vectors, which computes unwanted dot products because of broadcasting (if I understand correctly, it returns the first matrix dotted with both vectors and the second matrix dotted with both vectors), is actually faster.
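To make the broadcasting point concrete, here is a minimal sketch (reusing the matrices and vectors defined above, as an illustration rather than a recommendation) of what matrices @ vectors actually returns and where the wanted entries sit inside it:

import numpy as np

matrices = np.array((np.eye(5), np.eye(5) * 5))  # shape (2, 5, 5)
vectors = np.ones((5, 2))                        # shape (5, 2)

# Broadcast matmul pairs every matrix with *both* vectors,
# so it computes twice as many products as needed.
full = matrices @ vectors                        # shape (2, 5, 2)

# The wanted result is matrices[i] @ vectors[:, i], i.e. the
# "diagonal" slice full[i, :, i]; transposing gives the (5, 2) layout.
wanted = full[np.arange(2), :, np.arange(2)].T

#array([[ 1.,  5.],
#       [ 1.,  5.],
#       [ 1.,  5.],
#       [ 1.,  5.],
#       [ 1.,  5.]])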

I guess np.einsum or np.tensordot might be helpful here, but all my attempts have failed:

In [30]: np.einsum("i,j", matrices, vectors)
ValueError: operand has more dimensions than subscripts given in einstein sum, but no '...' ellipsis provided to broadcast the extra dimensions.

In [34]: np.tensordot(matrices, vectors, axes=(0,1))
Out[34]: 
array([[[ 6.,  6.,  6.,  6.,  6.],
        [ 0.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  0.]],

       [[ 0.,  0.,  0.,  0.,  0.],
        [ 6.,  6.,  6.,  6.,  6.],
        [ 0.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  0.]],

       [[ 0.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  0.],
        [ 6.,  6.,  6.,  6.,  6.],
        [ 0.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  0.]],

       [[ 0.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  0.],
        [ 6.,  6.,  6.,  6.,  6.],
        [ 0.,  0.,  0.,  0.,  0.]],

       [[ 0.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  0.],
        [ 6.,  6.,  6.,  6.,  6.]]])

NB: my real-case scenario uses more complicated matrices than matrix1 and matrix2.

Solution

With np.einsum, you might use:

np.einsum("ijk,ki->ji", matrices, vectors)

#array([[ 1.,  5.],
#       [ 1.,  5.],
#       [ 1.,  5.],
#       [ 1.,  5.],
#       [ 1.,  5.]])
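Reading the subscripts: matrices is indexed as ijk (matrix i, row j, column k) and vectors as ki (component k of vector i); the shared k is summed over, which pairs matrix i with vector i, and the output layout ji puts the result for vector i in column i, exactly the desired (5, 2) shape. If you would rather avoid einsum, a sketch of an equivalent batched matmul (using the same arrays as in the question) should give the same result while also skipping the unwanted cross products:

# vectors.T[:, :, None] has shape (2, 5, 1): one column vector per matrix,
# so matmul batches over the first axis and pairs matrices[i] with vectors[:, i].
out = (matrices @ vectors.T[:, :, None])[:, :, 0].T

#array([[ 1.,  5.],
#       [ 1.,  5.],
#       [ 1.,  5.],
#       [ 1.,  5.],
#       [ 1.,  5.]])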
