Neural network layer without all connections


Problem Description

The weights in a dense layer of a neural network form an (n, d) matrix, and I want to force some of these weights to always be zero. I have another (n, d) matrix which is the mask of which entries can be non-zero. The idea is that the layer should not be truly dense, but should have some connections missing (i.e. equal to 0).

How can I achieve this while training with PyTorch (or TensorFlow)? I don't want these weights to become non-zero during training.

One method, if this isn't supported directly, would be to zero out the desired entries after each training iteration.
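The zero-out idea described above can be sketched as follows. This is a minimal sketch, not from the original question: the mask, shapes, and training loop are illustrative. Instead of re-zeroing after every step, it masks the gradient with a hook so the optimizer never updates the forbidden entries, and zeroes them once up front:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

n, d = 4, 3
layer = nn.Linear(d, n)                      # weight has shape (n, d)
mask = (torch.rand(n, d) > 0.5).float()      # 1 = trainable, 0 = forced to zero

# Kill the masked gradients so the optimizer never touches those entries.
layer.weight.register_hook(lambda grad: grad * mask)

# Hooks only stop updates, so zero the masked weights once at the start.
with torch.no_grad():
    layer.weight.mul_(mask)

# Plain SGD: with the gradient masked, the zeroed entries stay zero.
# (Weight decay or momentum would require re-masking after each step.)
opt = torch.optim.SGD(layer.parameters(), lr=0.1)
for _ in range(5):
    loss = layer(torch.randn(8, d)).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

assert torch.all(layer.weight[mask == 0] == 0)
```

With an optimizer that mixes state into the update (momentum, Adam), the simplest safe variant is to repeat the `weight.mul_(mask)` step after every `opt.step()`, exactly as the question suggests.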

Answer

You can take advantage of PyTorch's sparse data type: entries not listed in the index tensor simply do not exist, so they can never become non-zero during training:

class SparseLinear(nn.Module):
  def __init__(self, in_features, out_features, sparse_indices):
    super(SparseLinear, self).__init__()
    # sparse_indices is a (2, nnz) tensor of the (row, col) positions that are
    # allowed to be non-zero; all other weight entries do not exist at all.
    self.weight = nn.Parameter(data=torch.sparse_coo_tensor(sparse_indices, torch.randn(sparse_indices.shape[1]), [out_features, in_features]), requires_grad=True)
    self.bias = nn.Parameter(data=torch.randn(out_features, 1), requires_grad=True)

  def forward(self, x):
    # x: (in_features, batch) -> (out_features, batch)
    return torch.sparse.addmm(self.bias, self.weight, x, 1., 1.)

