Speed up SVD in PyTorch


Problem description

I am doing a classification task for CIFAR10 with PyTorch, and on each iteration I have to do some preprocessing on each batch before I can feed it forward to the model. Below is the code for the preprocessing part on each batch:

import torch

# batch: tensor of shape (batch_size, C, H, W)
S = torch.zeros((batch_size, C, H, W))
for i in range(batch_size):
    img = batch[i, :, :, :]
    for c in range(C):
        # SVD of each channel, then the rank-1 outer product of the
        # leading left and right singular vectors
        U, _, V = torch.svd(img[c])
        S[i, c] = U[:, 0].view(-1, 1).matmul(V[:, 0].view(1, -1))

However, this calculation is very slow. Is there any way that I could speed up this code?

Recommended answer

PyTorch now has speed-optimised linear algebra operations analogous to NumPy's linalg module, including torch.linalg.svd. From the documentation:

The implementation of SVD on CPU uses the LAPACK routine ?gesdd (a divide-and-conquer algorithm) instead of ?gesvd for speed. Analogously, the SVD on GPU uses the cuSOLVER routines gesvdj and gesvdjBatched on CUDA 10.1.243 and later, and uses the MAGMA routine gesdd on earlier versions of CUDA.
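Beyond the faster routines, torch.linalg.svd also operates on batched inputs (it decomposes the last two dimensions), so the two Python loops above can be replaced by a single call. A minimal sketch, using a random tensor in place of a real CIFAR10 batch and assuming the same (batch_size, C, H, W) shapes as in the question:

```python
import torch

# Stand-in for a CIFAR10 batch: (batch_size, C, H, W) = (8, 3, 32, 32)
batch = torch.randn(8, 3, 32, 32)

# One batched call decomposes every channel of every image at once;
# torch.linalg.svd works over the last two dimensions.
U, _, Vh = torch.linalg.svd(batch, full_matrices=False)

# Rank-1 outer product of the leading left/right singular vectors,
# matching the per-channel computation in the original loop. Note that
# Vh holds the transposed right singular vectors, so its first ROW
# corresponds to the first COLUMN of V returned by the old torch.svd.
S = U[..., :, :1] @ Vh[..., :1, :]  # shape (8, 3, 32, 32)
```

Because the sign ambiguity of the singular vectors cancels in the outer product, this reproduces the loop's result while letting the backend (LAPACK, cuSOLVER, or MAGMA, per the quoted docs) handle the whole batch in one shot.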
