numpy large matrix multiplication optimization


Question

I have to do an iterative calculation with a large matrix: R(t) = M @ R(t-1), where M is n x n and R is n x 1.

If I write it like this:

for _ in range(iter_num):
    R = M @ R

I suppose this will be very slow, because it has to copy and create a new array each time. Is there any way to optimize this? (Maybe do it in place?)
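For reference, one way to avoid allocating a fresh vector on every iteration is to preallocate a second buffer, write each product into it through matmul's out= argument, and swap the two buffers between steps. A minimal sketch (the helper name iterate_inplace and the double-buffering scheme are illustrative, not taken from the answer below):

import numpy as np

def iterate_inplace(M, R, iter_num):
    src = R.copy()            # work on a copy so the caller's R is untouched
    dst = np.empty_like(R)    # preallocated output buffer, reused every step
    for _ in range(iter_num):
        np.matmul(M, src, out=dst)   # write M @ src into the spare buffer
        src, dst = dst, src          # swap roles instead of copying data
    return src

For n in the hundreds the savings are usually small, since the O(n^2) matrix-vector product dominates the cost of allocating an n-element array.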

Answer

A few timings to show that the OP's approach is actually quite competitive:

>>> import numpy as np
>>> import functools as ft
>>> from timeit import repeat
>>> kwds = dict(globals=globals(), number=1000)
>>> R = np.random.random((200,))
>>> M = np.random.random((200, 200))
>>> def f_op(M, R):
...     for i in range(k):
...         R = M@R
...     return R
... 
>>> def f_pp(M, R):
...     return ft.reduce(np.matmul, (R,) + k * (M.T,))
... 
>>> def f_ag(M, R):
...     return np.linalg.matrix_power(M, k)@R
... 
>>> def f_tai(M, R):
...     return np.linalg.multi_dot([M]*k+[R])
... 
>>> k = 20
>>> repeat('f_op(M, R)', **kwds)
[0.14156094897771254, 0.1264056910004001, 0.12611976702464744]
>>> repeat('f_pp(M, R)', **kwds)
[0.12594187198556028, 0.1227772050187923, 0.12045996301458217]
>>> repeat('f_ag(M, R)', **kwds)
[2.065609384997515, 2.041590739012463, 2.038702343008481]
>>> repeat('f_tai(M, R)', **kwds)
[3.426795684004901, 3.4321794749994297, 3.4208814119920135]
>>>
>>> k = 500
>>> repeat('f_op(M, R)', **kwds)
[3.066054102004273, 3.0294102499901783, 3.020273027010262]
>>> repeat('f_pp(M, R)', **kwds)
[2.891954762977548, 2.8680382019956596, 2.8558325179910753]
>>> repeat('f_ag(M, R)', **kwds)
[5.216210452985251, 5.1636185249954, 5.157578871003352]
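A rough accounting of the gap: at n = 200 a matrix-vector product M @ R costs about 2*n**2 = 80,000 flops, whereas a matrix-matrix product M @ M costs about 2*n**3 = 16,000,000 flops. matrix_power works with full n x n products (via repeated squaring), so it does far more arithmetic per step than the plain loop, which only ever multiplies M by a vector.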
