Fixing a subset of weights in Neural network during training


Question


I am considering creating a customized neural network. The basic structure is the same as usual, but I want to truncate the connections between layers. For example, if I construct a network with two hidden layers, I would like to delete some weights and keep the others, as illustrated by a diagram in the original question (not reproduced here).


This is not conventional dropout (to avoid overfitting), since the remaining weights (connections) should be specified and fixed.


Are there any ways in Python to do it? TensorFlow, PyTorch, Theano, or any other module?

Answer


Yes, you can do this in TensorFlow.


You would have a layer in your TensorFlow code that looks something like this:

# weight matrix of shape [width, height] and bias of shape [height]
m = tf.Variable(tf.random.normal([width, height]), dtype=tf.float32)
b = tf.Variable(tf.zeros([height]), dtype=tf.float32)
h = tf.sigmoid(tf.matmul(x, m) + b)


What you want is a new matrix, call it k for "kill". It is going to kill specific neural connections; the connections are defined in m. This would be your new configuration:

# k is a binary mask: 1 keeps a connection, 0 kills it
k = tf.constant(kill_matrix, dtype=tf.float32)
m = tf.Variable(tf.random.normal([width, height]), dtype=tf.float32)
b = tf.Variable(tf.zeros([height]), dtype=tf.float32)
# mask the weights element-wise before the matmul
h = tf.sigmoid(tf.matmul(x, tf.multiply(m, k)) + b)


Your kill_matrix is a matrix of 1's and 0's. Insert a 1 for every neural connection you want to keep and a 0 for every one you want to kill.
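
For completeness, here is a minimal runnable sketch of this masking approach written against the TensorFlow 2 eager API (the answer above is written in TensorFlow 1 style); the sizes, the kill_matrix values, and the toy training loop are illustrative assumptions, not part of the original answer. Because the mask enters the forward pass through tf.multiply, the gradient of the loss with respect to m is exactly zero wherever k is zero, so the killed connections never affect the output and are never updated.

import numpy as np
import tensorflow as tf

# Illustrative sizes and mask (assumptions for this sketch).
width, height = 4, 3
kill_matrix = np.array([[1, 0, 1],
                        [0, 1, 1],
                        [1, 1, 0],
                        [1, 0, 0]], dtype=np.float32)

k = tf.constant(kill_matrix, dtype=tf.float32)
m = tf.Variable(tf.random.normal([width, height]), dtype=tf.float32)
b = tf.Variable(tf.zeros([height]), dtype=tf.float32)

def layer(x):
    # Masked dense layer: killed weights contribute nothing to the output.
    return tf.sigmoid(tf.matmul(x, tf.multiply(m, k)) + b)

# Toy data and a few SGD steps.
x = tf.random.normal([8, width])
y = tf.random.uniform([8, height])
opt = tf.keras.optimizers.SGD(learning_rate=0.1)

for _ in range(5):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(layer(x) - y))
    grads = tape.gradient(loss, [m, b])
    # d(m * k)/dm = k, so the gradient w.r.t. m is zero wherever k == 0.
    opt.apply_gradients(zip(grads, [m, b]))

# The gradient entries at killed positions are exactly zero.
print(grads[0].numpy() * (1 - kill_matrix))

If you also want the stored values of m at the killed positions to read as zeros (they are never used either way), you can run m.assign(m * k) once before training.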
