Specify connections in NN (in keras)


Problem description

I am using keras and tensorflow 1.4.

I want to explicitly specify which neurons are connected between two layers. For that I have a matrix A containing a one wherever neuron i in the first layer is connected to neuron j in the second layer, and zeros elsewhere.

My first attempt was to create a custom layer whose kernel has the same size as A, with non-trainable zeros where A has zeros and trainable weights where A has ones. The desired output would then be a simple dot product. Unfortunately, I did not manage to figure out how to implement a kernel that is partly trainable and partly non-trainable.

Any suggestions?

(Building a functional model with lots of hand-connected neurons could be a workaround, but a somewhat 'ugly' one.)

Recommended answer

The simplest way I can think of, if you have this matrix correctly shaped, is to subclass the Dense layer and simply multiply the original weights by the matrix in the code:

from keras.layers import Dense

class CustomConnected(Dense):

    def __init__(self, units, connections, **kwargs):

        # this is matrix A: ones where neuron i of the previous layer
        # connects to neuron j of this layer, zeros elsewhere
        self.connections = connections

        # initialize the original Dense with all the usual arguments
        super(CustomConnected, self).__init__(units, **kwargs)

    def call(self, inputs):

        # mask the kernel before calling the original computation:
        self.kernel = self.kernel * self.connections

        # call the original calculations and return the result:
        return super(CustomConnected, self).call(inputs)

Usage:

model.add(CustomConnected(units, matrixA))
model.add(CustomConnected(hidden_dim2, matrixB, activation='tanh'))  # can use all the other named parameters...
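
For concreteness, here is a minimal sketch of how a connection matrix could be built and fed into a model; the shapes (3 inputs, 2 units) and the connectivity pattern are illustrative assumptions, not part of the original answer:

import numpy as np
from keras.models import Sequential

# hypothetical connectivity: input 0 feeds only unit 0,
# inputs 1 and 2 feed only unit 1; A must have shape (input_dim, units)
matrixA = np.array([[1., 0.],
                    [0., 1.],
                    [0., 1.]], dtype='float32')

model = Sequential()
model.add(CustomConnected(2, matrixA, input_shape=(3,), activation='tanh'))
model.compile(optimizer='adam', loss='mse')

Since matrixA is a plain numpy array of zeros and ones with the same shape as the kernel, the elementwise multiplication simply zeroes out the disabled weights.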

Notice that all the neurons/units still have a bias added at the end. The argument use_bias=False will still work if you don't want biases. You can also do exactly the same thing with a vector B, for instance, and mask the original biases with self.bias = self.bias * vectorB, as sketched below.
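
A minimal sketch of that bias masking inside the same kind of layer; the bias_mask name and constructor argument are assumptions for illustration, and reassigning self.bias shares the same caveat as reassigning self.kernel:

from keras.layers import Dense

class CustomConnectedBias(Dense):

    def __init__(self, units, connections, bias_mask, **kwargs):
        self.connections = connections  # matrix A, shape (input_dim, units)
        self.bias_mask = bias_mask      # vector B, shape (units,)
        super(CustomConnectedBias, self).__init__(units, **kwargs)

    def call(self, inputs):
        # mask kernel and bias before the original Dense computation
        self.kernel = self.kernel * self.connections
        if self.use_bias:
            self.bias = self.bias * self.bias_mask
        return super(CustomConnectedBias, self).call(inputs)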

Hint for testing: use different input and output dimensions, so you can be sure that your matrix A has the correct shape.
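
As a quick sanity check along those lines, one might build the layer with a non-square mask and verify the output shape (all dimensions here are arbitrary assumptions):

import numpy as np
from keras.models import Sequential

# 5 inputs -> 3 units with a random 0/1 mask
rng = np.random.RandomState(0)
A = (rng.rand(5, 3) > 0.5).astype('float32')

model = Sequential()
model.add(CustomConnected(3, A, input_shape=(5,)))
model.compile(optimizer='sgd', loss='mse')

x = np.ones((1, 5), dtype='float32')
print(model.predict(x).shape)    # expected: (1, 3)

If A were accidentally transposed, the elementwise multiplication with the (5, 3) kernel would fail immediately, which is exactly what this test is meant to catch.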

I just realized that my code is potentially buggy, because I'm changing a property that is used by the original Dense layer. If weird behaviors or messages appear, you can try another call method:

def call(self, inputs):
    # apply the mask at computation time instead of reassigning self.kernel
    output = K.dot(inputs, self.kernel * self.connections)
    if self.use_bias:
        output = K.bias_add(output, self.bias)
    if self.activation is not None:
        output = self.activation(output)
    return output

Here K comes from import keras.backend as K.

You may also go further and define a custom get_weights() method if you want to see the weights masked by your matrix. (This is not necessary in the first approach above.)
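
A minimal sketch of such an override, assuming self.connections is a plain numpy array so it can be multiplied with the numpy arrays returned by the base class:

def get_weights(self):
    # return the effective (masked) kernel rather than the raw variable
    weights = super(CustomConnected, self).get_weights()
    weights[0] = weights[0] * self.connections
    return weights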
