How to shift columns (or rows) in a tensor with different offsets in PyTorch?


Problem description

In PyTorch, the built-in torch.roll function can only shift all columns (or rows) by the same offset. But I want to shift each column by a different offset. Suppose the input tensor is

[[1,2,3],
 [4,5,6],
 [7,8,9]]

Let's say I want to shift the i-th column by an offset of i. Thus, the expected output is

[[1,8,6],
 [4,2,9],
 [7,5,3]]

One option is to shift every column separately using torch.roll and then concatenate the results. But for efficiency and code compactness, I don't want to introduce a loop. Is there a better way?
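
For reference, the loop-based baseline described above might look like the following sketch (the helper name roll_columns_loop is illustrative, not from the original question):

import torch

# Naive baseline: roll every column by its own offset, then re-assemble.
def roll_columns_loop(mat: torch.Tensor, shifts) -> torch.Tensor:
    cols = [torch.roll(mat[:, j], shifts=int(s)) for j, s in enumerate(shifts)]
    return torch.stack(cols, dim=1)

x = torch.tensor([[1, 2, 3],
                  [4, 5, 6],
                  [7, 8, 9]])
print(roll_columns_loop(x, [0, 1, 2]))
# tensor([[1, 8, 6],
#         [4, 2, 9],
#         [7, 5, 3]])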

Recommended answer

I was sceptical about the performance of torch.gather, so I searched for similar questions for numpy and found this post.

I took the solution from @Andy L and translated it into PyTorch. However, take it with a grain of salt, because I don't know how the strides work:

import numpy as np
import torch
from numpy.lib.stride_tricks import as_strided

# NumPy solution: roll each row of `arr` (along axis 1) by its own offset from `r_tup`.
def custom_roll(arr, r_tup):
    m = np.asarray(r_tup)
    # Duplicate all but the last column so every rolled row is a contiguous window;
    # `copy` is needed before building the strided view below.
    arr_roll = arr[:, [*range(arr.shape[1]), *range(arr.shape[1] - 1)]].copy()
    strd_0, strd_1 = arr_roll.strides
    n = arr.shape[1]
    # result[i, k] is row i of `arr` read starting at column k (wrapping around).
    result = as_strided(arr_roll, (*arr.shape, n), (strd_0, strd_1, strd_1))
    # For each row, pick the window that corresponds to its shift.
    return result[np.arange(arr.shape[0]), (n - m) % n]

# Translated to PyTorch
def pcustom_roll(arr, r_tup):
    m = torch.tensor(r_tup)
    arr_roll = arr[:, [*range(arr.shape[1]), *range(arr.shape[1] - 1)]].clone()  # `clone` plays the role of numpy's `copy`
    strd_0, strd_1 = arr_roll.stride()
    n = arr.shape[1]
    result = torch.as_strided(arr_roll, (*arr.shape, n), (strd_0, strd_1, strd_1))
    return result[torch.arange(arr.shape[0]), (n - m) % n]
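
To illustrate what these helpers compute (an added example, not from the original answer): both roll each row along dim 1 by its own offset, so to get the column-wise shift from the question you would apply them to the transpose and transpose back.

x = torch.tensor([[1, 2, 3],
                  [4, 5, 6],
                  [7, 8, 9]])
print(pcustom_roll(x, [0, 1, 2]))  # row i is rolled right by i
# tensor([[1, 2, 3],
#         [6, 4, 5],
#         [8, 9, 7]])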

Here is also the plug-and-play solution from @Daniel M.

def roll_by_gather(mat, dim, shifts: torch.LongTensor):
    # assumes a 2D tensor; `shifts` broadcasts against the index grid,
    # so it can hold one offset per row or one offset per column
    n_rows, n_cols = mat.shape

    if dim == 0:
        # gather along dim 0: out[i, j] = mat[(i - shift) % n_rows, j]
        arange1 = torch.arange(n_rows).view((n_rows, 1)).repeat((1, n_cols))
        arange2 = (arange1 - shifts) % n_rows
        return torch.gather(mat, 0, arange2)
    elif dim == 1:
        # gather along dim 1: out[i, j] = mat[i, (j - shift) % n_cols]
        arange1 = torch.arange(n_cols).view((1, n_cols)).repeat((n_rows, 1))
        arange2 = (arange1 - shifts) % n_cols
        return torch.gather(mat, 1, arange2)
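
As a quick sanity check (an added example, not part of @Daniel M's code), rolling along dim=0 with one shift per column reproduces the expected output from the question; the shifts need shape (1, n_cols) so they broadcast against the index grid:

x = torch.tensor([[1, 2, 3],
                  [4, 5, 6],
                  [7, 8, 9]])
col_shifts = torch.tensor([[0, 1, 2]])  # shape (1, n_cols): one offset per column
print(roll_by_gather(x, 0, col_shifts))
# tensor([[1, 8, 6],
#         [4, 2, 9],
#         [7, 5, 3]])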
    

Benchmark

First, I ran the methods on the CPU. Surprisingly, the gather solution above is the fastest:

n_cols = 10000
n_rows = 100
shifts = torch.randint(-100,100,size=[n_rows,1])
data = torch.arange(n_rows*n_cols).reshape(n_rows,n_cols)
npdata = np.arange(n_rows*n_cols).reshape(n_rows,n_cols)
npshifts = shifts.numpy()
%timeit roll_by_gather(data,1,shifts)
%timeit pcustom_roll(data,shifts)
%timeit custom_roll(npdata,npshifts)
>> 2.41 ms ± 68.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
>> 90.4 ms ± 882 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
>> 247 ms ± 6.08 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

Running the code on GPU shows similar results:

# (data and shifts are on the GPU for these timings)
%timeit roll_by_gather(data, 1, shifts)
%timeit pcustom_roll(data, shifts)
131 µs ± 6.79 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
3.29 ms ± 46.8 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

(Note: You need torch.arange(...,device='cuda:0') within the roll_by_gather method)
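
A minimal sketch of that adjustment (the name roll_by_gather_dev is illustrative): build the index grid on the input's device instead of hard-coding 'cuda:0', and keep shifts on that same device.

def roll_by_gather_dev(mat, dim, shifts: torch.LongTensor):
    # Same logic as roll_by_gather, but the index tensors are created on
    # mat's device, so the function works unchanged on CPU or GPU inputs.
    n_rows, n_cols = mat.shape
    if dim == 0:
        arange = torch.arange(n_rows, device=mat.device).view((n_rows, 1)).repeat((1, n_cols))
        return torch.gather(mat, 0, (arange - shifts) % n_rows)
    elif dim == 1:
        arange = torch.arange(n_cols, device=mat.device).view((1, n_cols)).repeat((n_rows, 1))
        return torch.gather(mat, 1, (arange - shifts) % n_cols)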
