Subset of a matrix multiplication, fast, and sparse


Problem description

Converting a collaborative filtering code to use sparse matrices, I'm puzzling over the following problem: given two full matrices X (m by l) and Theta (n by l), and a sparse matrix R (m by n), is there a fast way to calculate the sparse inner product, i.e. the product X*Theta.T evaluated only at the nonzero entries of R? The large dimensions are m and n (order 100000), while l is small (order 10). This is probably a fairly common operation for big data, since it shows up in the cost function of most linear regression problems, so I'd expect a solution built into scipy.sparse, but I haven't found anything obvious yet.

The naive way to do this in Python is R.multiply(X*Theta.T), but this results in evaluation of the full matrix X*Theta.T (m by n, order 100000**2), which occupies far too much memory, only to dump most of the entries afterwards since R is sparse.
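For concreteness, here is a minimal sketch of the operation in question (tiny sizes, with sp.random used only to fabricate a small sparse mask R; none of these names or values are part of the original question):

import numpy as np
import scipy.sparse as sp

m, n, l = 6, 5, 3                                # tiny stand-ins for ~100000, ~100000, ~10
X = np.random.rand(m, l)                         # full matrix, m by l
Theta = np.random.rand(n, l)                     # full matrix, n by l
R = sp.random(m, n, density=0.3, format='coo')   # sparse mask, m by n

# naive computation: materializes the full m-by-n dense product first
naive = R.multiply(X.dot(Theta.T))
# the goal is this same sparse result, computed only at R's nonzero positions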

There is a pseudo-solution already here on Stack Overflow, but it is non-sparse in one step:

import numpy as np
import scipy.sparse as sp

def sparse_mult_notreally(a, b, coords):
    rows, cols = coords
    rows, r_idx = np.unique(rows, return_inverse=True)
    cols, c_idx = np.unique(cols, return_inverse=True)
    C = np.array(np.dot(a[rows, :], b[:, cols])) # this operation is dense
    return sp.coo_matrix( (C[r_idx,c_idx],coords), (a.shape[0],b.shape[1]) )
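For reference, a hypothetical call (reusing the small X, Theta and R from the sketch above) would take the coordinates from the nonzero pattern of R in COO format:

# hypothetical usage: the coords are the nonzero positions of the sparse mask R
vals = sparse_mult_notreally(X, Theta.T, (R.row, R.col))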

This works fine, and fast, for me on small enough arrays, but it fails on my big datasets with the following error:

... in sparse_mult(a, b, coords)
      132     rows, r_idx = np.unique(rows, return_inverse=True)
      133     cols, c_idx = np.unique(cols, return_inverse=True)
  --> 134     C = np.array(np.dot(a[rows, :], b[:, cols])) # this operation is not sparse
      135     return sp.coo_matrix( (C[r_idx,c_idx],coords), (a.shape[0],b.shape[1]) )

ValueError: array is too big.

A solution which IS actually sparse, but very slow, is:

def sparse_mult(a, b, coords):
    rows, cols = coords
    n = len(rows)
    C = np.array([ float(a[rows[i],:]*b[:,cols[i]]) for i in range(n) ]) # this is sparse, but VERY slow
    return sp.coo_matrix( (C,coords), (a.shape[0],b.shape[1]) )

Does anyone know a fast, fully sparse way to do this?

Recommended answer

I profiled 4 different solutions to your problem, and it looks like, for any size of the array, the numba jit solution is the best. A close second is @Alexander's cython solution.

Here are the results (M is the number of rows in the x array):

M = 1000
function sparse_dense    took 0.03 sec.
function sparse_loop     took 0.07 sec.
function sparse_numba    took 0.00 sec.
function sparse_cython   took 0.09 sec.
M = 10000
function sparse_dense    took 2.88 sec.
function sparse_loop     took 0.68 sec.
function sparse_numba    took 0.00 sec.
function sparse_cython   took 0.01 sec.
M = 100000
function sparse_dense    ran out of memory
function sparse_loop     took 6.84 sec.
function sparse_numba    took 0.09 sec.
function sparse_cython   took 0.12 sec.

The script I used to profile these methods is:

import numpy as np
from scipy.sparse import coo_matrix
from numba import autojit, jit, float64, int32
import pyximport
pyximport.install(setup_args={"script_args":["--compiler=mingw32"],
                              "include_dirs":np.get_include()},
                  reload_support=True)

def sparse_dense(a,b,c):
    return coo_matrix(c.multiply(np.dot(a,b)))

def sparse_loop(a,b,c):
    """Multiply sparse matrix `c` by np.dot(a,b) by looping over non-zero
    entries in `c` and using `np.dot()` for each entry."""
    N = c.size
    data = np.empty(N,dtype=float)
    for i in range(N):
        data[i] = c.data[i]*np.dot(a[c.row[i],:],b[:,c.col[i]])
    return coo_matrix((data,(c.row,c.col)),shape=(a.shape[0],b.shape[1]))

#@autojit
def _sparse_mult4(a,b,cd,cr,cc):
    N = cd.size
    data = np.empty_like(cd)
    for i in range(N):
        num = 0.0
        for j in range(a.shape[1]):
            num += a[cr[i],j]*b[j,cc[i]]
        data[i] = cd[i]*num
    return data

# the jitted function returns the 1-D data array, so the return type is float64[:]
_fast_sparse_mult4 = \
    jit(float64[:](float64[:,:],float64[:,:],float64[:],int32[:],int32[:]))(_sparse_mult4)

def sparse_numba(a,b,c):
    """Multiply sparse matrix `c` by np.dot(a,b) using Numba's jit."""
    assert c.shape == (a.shape[0],b.shape[1])
    data = _fast_sparse_mult4(a,b,c.data,c.row,c.col)
    return coo_matrix((data,(c.row,c.col)),shape=(a.shape[0],b.shape[1]))

def sparse_cython(a, b, c):
    """Computes c.multiply(np.dot(a,b)) using cython."""
    from sparse_mult_c import sparse_mult_c

    data = np.empty_like(c.data)
    sparse_mult_c(a,b,c.data,c.row,c.col,data)
    return coo_matrix((data,(c.row,c.col)),shape=(a.shape[0],b.shape[1]))

def unique_rows(a):
    a = np.ascontiguousarray(a)
    unique_a = np.unique(a.view([('', a.dtype)]*a.shape[1]))
    return unique_a.view(a.dtype).reshape((unique_a.shape[0], a.shape[1]))

if __name__ == '__main__':
    import time

    for M in [1000,10000,100000]:
        print 'M = %i' % M
        N = M + 2
        L = 10

        x = np.random.rand(M,L)
        t = np.random.rand(N,L).T

        # number of non-zero entries in sparse r matrix
        S = M*10

        row = np.random.randint(M,size=S)
        col = np.random.randint(N,size=S)

        # remove duplicate (row, col) coordinate pairs
        row, col = unique_rows(np.dstack((row,col)).squeeze()).T

        data = np.random.rand(row.size)

        r = coo_matrix((data,(row,col)),shape=(M,N))

        a2 = sparse_loop(x,t,r)

        for f in [sparse_dense,sparse_loop,sparse_numba,sparse_cython]:
            t0 = time.time()
            try:
                a = f(x,t,r)
            except MemoryError:
                print 'function %s ran out of memory' % f.__name__
                continue
            elapsed = time.time()-t0
            try:
                diff = abs(a-a2)
                if diff.nnz > 0:
                    assert np.max(abs(a-a2).data) < 1e-5
            except AssertionError:
                print f.__name__
                raise
            print 'function %s took %.2f sec.' % (f.__name__,elapsed)
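Not part of the profiled script above, but for comparison: the per-entry dot products in sparse_loop can also be vectorized in plain NumPy by gathering only the rows and columns that are actually needed, so it never forms the full m-by-n product. This is an untimed sketch:

def sparse_vectorized(a, b, c):
    """Same result as sparse_loop, but with the per-entry dot products
    computed by a single vectorized einsum over the gathered rows/columns."""
    # a[c.row] has shape (nnz, L); b[:, c.col].T has shape (nnz, L)
    data = c.data * np.einsum('ij,ij->i', a[c.row], b[:, c.col].T)
    return coo_matrix((data, (c.row, c.col)), shape=(a.shape[0], b.shape[1]))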

The cython function is a slightly modified version of @Alexander's code:

# working from tutorial at: http://docs.cython.org/src/tutorial/numpy.html
cimport numpy as np

# Turn bounds checking back on if there are ANY problems!
cimport cython
@cython.boundscheck(False) # turn off bounds-checking for entire function
def sparse_mult_c(np.ndarray[np.float64_t, ndim=2] a,
                  np.ndarray[np.float64_t, ndim=2] b,
                  np.ndarray[np.float64_t, ndim=1] data,
                  np.ndarray[np.int32_t, ndim=1] rows,
                  np.ndarray[np.int32_t, ndim=1] cols,
                  np.ndarray[np.float64_t, ndim=1] out):

    cdef int n = rows.shape[0]
    cdef int k = a.shape[1]
    cdef int i,j

    cdef double num

    for i in range(n):
        num = 0.0
        for j in range(k):
            num += a[rows[i],j] * b[j,cols[i]]
        out[i] = data[i]*num
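Assuming the code above is saved as sparse_mult_c.pyx next to the profiling script, the pyximport.install(...) call at the top of that script compiles it automatically the first time sparse_cython() imports it. If building ahead of time is preferred instead, a conventional setup.py along these lines (not part of the original answer) should also work:

# hypothetical setup.py for building sparse_mult_c.pyx ahead of time;
# run with: python setup.py build_ext --inplace
from distutils.core import setup
from Cython.Build import cythonize
import numpy as np

setup(ext_modules=cythonize("sparse_mult_c.pyx"),
      include_dirs=[np.get_include()])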
