PyTorch: How to implement attention for graph attention layer


Problem description

I have implemented the attention mechanism (Eq. 1) of https://arxiv.org/pdf/1710.10903.pdf, but it is clearly not memory efficient and can only run a single model on my GPU (it takes 7-10 GB).
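For reference, Eq. 1 of the paper defines the raw attention coefficient between nodes i and j as (paraphrasing the paper's notation):

e_{ij} = a\left(\mathbf{W}\vec{h}_i,\; \mathbf{W}\vec{h}_j\right)

where W is a shared linear transformation of the node features and a is a single-layer feed-forward network applied to the concatenated pair [W h_i || W h_j]; the code below evaluates this for every pair (i, j), which is where the N x N blow-up comes from.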

Currently I have:

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F


class MyModule(nn.Module):

    def __init__(self, in_features, out_features):
        super(MyModule, self).__init__()
        self.in_features = in_features
        self.out_features = out_features

        # Learnable weight matrix W (in_features x out_features) and
        # attention vector a (2*out_features x 1), Xavier-initialised.
        self.W = nn.Parameter(nn.init.xavier_uniform(torch.Tensor(in_features, out_features).type(torch.cuda.FloatTensor if torch.cuda.is_available() else torch.FloatTensor), gain=np.sqrt(2.0)), requires_grad=True)
        self.a = nn.Parameter(nn.init.xavier_uniform(torch.Tensor(2*out_features, 1).type(torch.cuda.FloatTensor if torch.cuda.is_available() else torch.FloatTensor), gain=np.sqrt(2.0)), requires_grad=True)

    def forward(self, input):
        h = torch.mm(input, self.W)      # (N, out_features)
        N = h.size()[0]

        # Build all N*N pairs [h_i || h_j]: shape (N, N, 2*out_features).
        a_input = torch.cat([h.repeat(1, N).view(N * N, -1), h.repeat(N, 1)], dim=1).view(N, -1, 2 * self.out_features)
        # Raw attention scores e_ij, shape (N, N).
        e = F.elu(torch.matmul(a_input, self.a).squeeze(2))
        return e
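For illustration, a minimal usage sketch of the class above (node count and feature sizes here are hypothetical placeholders):

import torch

layer = MyModule(in_features=16, out_features=8)
x = torch.randn(100, 16)        # 100 nodes with 16 input features each
if torch.cuda.is_available():
    x = x.cuda()                # the parameters above live on the GPU when one is available
e = layer(x)                    # (100, 100) matrix of raw attention scores e_ij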

My insight for computing all the e_ij terms is:

In [8]: import torch

In [9]: import numpy as np

In [10]: h = torch.LongTensor(np.array([[1,1], [2,2], [3,3]]))

In [11]: N = 3

In [12]: h.repeat(1, N).view(N * N, -1)
Out[12]:

1     1
1     1
1     1
2     2
2     2
2     2
3     3
3     3
3     3

[torch.LongTensor of size 9x2]

In [13]: h.repeat(N, 1)
Out[13]:

1     1
2     2
3     3
1     1
2     2
3     3
1     1
2     2
3     3

[torch.LongTensor of size 9x2]

Finally, I concatenate the two repeated h tensors and feed the result through the matrix a.
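For clarity, a minimal sketch of this last step, continuing the small example above (the attention vector a here is just a random placeholder):

import torch

h = torch.tensor([[1., 1.], [2., 2.], [3., 3.]])  # (N, F) with N = 3, F = 2
N, F = h.size()
a = torch.randn(2 * F, 1)                          # placeholder attention vector

# All N*N pairs [h_i || h_j]: shape (N, N, 2F), i.e. O(N^2) memory, which is the bottleneck.
a_input = torch.cat([h.repeat(1, N).view(N * N, -1), h.repeat(N, 1)], dim=1).view(N, N, 2 * F)
e = torch.matmul(a_input, a).squeeze(2)            # (N, N) matrix of e_ij scores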

Is there a way to do it in a more memory-friendly way?

Recommended answer

Maybe you can use a sparse tensor to store adj_mat:

import numpy as np
import torch

def sparse_mx_to_torch_sparse_tensor(sparse_mx):
    """Convert a scipy sparse matrix to a torch sparse tensor."""
    sparse_mx = sparse_mx.tocoo().astype(np.float32)
    # Stack the COO row/col indices into a 2 x nnz index tensor.
    indices = torch.from_numpy(np.vstack((sparse_mx.row,
                                          sparse_mx.col))).long()
    values = torch.from_numpy(sparse_mx.data)
    shape = torch.Size(sparse_mx.shape)
    return torch.sparse.FloatTensor(indices, values, shape)
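A minimal usage sketch (the graph and sizes here are hypothetical; it assumes the adjacency matrix is available as a scipy sparse matrix):

import numpy as np
import scipy.sparse as sp
import torch

# Hypothetical 3-node graph with edges 0-1 and 1-2, stored in scipy COO format.
adj_mat = sp.coo_matrix(np.array([[0, 1, 0],
                                  [1, 0, 1],
                                  [0, 1, 0]], dtype=np.float32))

adj_t = sparse_mx_to_torch_sparse_tensor(adj_mat)  # torch sparse tensor, O(|E|) storage
h = torch.randn(3, 2)                              # dense node features
out = torch.sparse.mm(adj_t, h)                    # sparse-dense matmul, no dense N x N matrix needed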
