Specify connections in NN (in keras)


Problem description

I am using Keras and TensorFlow 1.4.

I want to explicitly specify which neurons are connected between two layers. To that end I have a matrix A with a one wherever neuron i in the first layer is connected to neuron j in the second layer, and zeros elsewhere.
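
For concreteness, a minimal sketch (assuming numpy, with toy dimensions) of such a connectivity matrix for 3 input neurons and 2 output neurons, where A[i, j] = 1 means input neuron i feeds output neuron j:

import numpy as np

# 3 input neurons, 2 output neurons
A = np.zeros((3, 2), dtype='float32')
A[0, 0] = 1.  # input neuron 0 -> output neuron 0
A[1, 0] = 1.  # input neuron 1 -> output neuron 0
A[2, 1] = 1.  # input neuron 2 -> output neuron 1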

My first attempt was to create a custom layer with a kernel of the same size as A, containing non-trainable zeros where A has zeros and trainable weights where A has ones. The desired output would then be a simple dot product. Unfortunately, I did not manage to figure out how to implement a kernel that is partly trainable and partly non-trainable.

Any suggestions?

(Building a functional model with many neurons connected by hand could be a workaround, but a somewhat 'ugly' one.)
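
For the record, a minimal sketch (with assumed toy dimensions) of what such hand-wiring might look like, slicing the input so each output neuron only sees its own inputs:

from keras.layers import Input, Dense, Lambda, concatenate
from keras.models import Model

inp = Input(shape=(3,))
# output neuron 1 sees only inputs 0 and 1, output neuron 2 only input 2
x1 = Lambda(lambda t: t[:, 0:2])(inp)
x2 = Lambda(lambda t: t[:, 2:3])(inp)
n1 = Dense(1)(x1)
n2 = Dense(1)(x2)
out = concatenate([n1, n2])
model = Model(inp, out)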

Answer

The simplest way I can think of, if you have this matrix correctly shaped, is to derive from the Dense layer and simply multiply the original weights by the matrix in the code:

from keras.layers import Dense

class CustomConnected(Dense):

    def __init__(self, units, connections, **kwargs):

        # this is matrix A, shaped (input_dim, units)
        self.connections = connections

        # initialize the original Dense with all the usual arguments
        super(CustomConnected, self).__init__(units, **kwargs)

    def call(self, inputs):

        # mask the kernel before calling the original call:
        self.kernel = self.kernel * self.connections

        # call the original calculations and return the result:
        return super(CustomConnected, self).call(inputs)

Usage:

model.add(CustomConnected(units, matrixA))
model.add(CustomConnected(hidden_dim2, matrixB, activation='tanh'))  # all the other named parameters still work
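
A fuller end-to-end sketch using the class above (the dimensions, the random connectivity matrix, and the dummy data here are illustrative assumptions):

import numpy as np
from keras.models import Sequential

input_dim, units = 5, 3
# an assumed random 0/1 connectivity matrix, shaped (input_dim, units)
matrixA = (np.random.rand(input_dim, units) > 0.5).astype('float32')

model = Sequential()
model.add(CustomConnected(units, matrixA, input_dim=input_dim, activation='relu'))
model.compile(optimizer='adam', loss='mse')

# fit on dummy data just to check that everything runs
x = np.random.rand(10, input_dim)
y = np.random.rand(10, units)
model.fit(x, y, epochs=1)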

Notice that all the neurons/units still have a bias added at the end. The argument use_bias=False will still work if you don't want biases. You can also do exactly the same thing with a vector B, for instance, and mask the original biases with self.bias = self.bias * vectorB.
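
A minimal sketch of that bias-masking idea, assuming vectorB is a 0/1 vector of shape (units,) stored in __init__ the same way as connections:

    def call(self, inputs):
        # mask both the kernel and the bias before the original computation
        self.kernel = self.kernel * self.connections
        if self.use_bias:
            self.bias = self.bias * self.vectorB
        return super(CustomConnected, self).call(inputs)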

Hint for testing: use different input and output dimensions, so you can be sure that your matrix A has the correct shape.
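
For instance, with input_dim=5 and units=3, an accidentally transposed A fails this check immediately, whereas with a square layer (input_dim == units) the mistake would go unnoticed (a hypothetical quick check):

import numpy as np

input_dim, units = 5, 3
A_wrong = np.ones((units, input_dim), dtype='float32')  # transposed by mistake

# Dense kernels are shaped (input_dim, units):
assert A_wrong.shape == (input_dim, units)  # raises AssertionError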

I just realized that my code is potentially buggy, because I'm changing a property that the original Dense layer uses. If weird behaviors or messages appear, you can try another call method:

def call(self, inputs):
    # apply the mask in the computation without reassigning self.kernel
    output = K.dot(inputs, self.kernel * self.connections)
    if self.use_bias:
        output = K.bias_add(output, self.bias)
    if self.activation is not None:
        output = self.activation(output)
    return output

Here, K comes from import keras.backend as K.

You may also go further and set a custom get_weights() method if you want to see the weights masked by your matrix. (This would not be necessary in the first approach above.)
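
A hedged sketch of what such an override could look like (assuming the second call variant above, where self.kernel itself is left unmasked):

    def get_weights(self):
        # report the effective (masked) kernel instead of the raw one
        weights = super(CustomConnected, self).get_weights()
        weights[0] = weights[0] * self.connections  # the kernel is weights[0]
        return weights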
