How can I make a trainable parameter in Keras?

Problem description

Thank you for looking at my question.

For example:

The final output is the sum of two matrices A and B, like this:

output = keras.layers.add([A, B])

Now I want to build a new parameter x to change the output.

I want to make newoutput = Ax + B(1-x),

where x is a trainable parameter in my network.

What should I do? Please help me, thanks very much!

Edit (part of the code):

from keras.layers import Conv2D, Dropout, MaxPooling2D, UpSampling2D, add

conv1 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(input)
drop1 = Dropout(0.5)(conv1)
pool1 = MaxPooling2D(pool_size=(2, 2))(drop1)

conv2 = Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool1)
conv2 = Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv2)
drop2 = Dropout(0.5)(conv2)

up1 = Conv2D(512, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(drop2))

# the line I want to change:
merge = add([drop2,up1])
# this layer simply adds the drop2 and up1 layers. Now I want to add a trainable parameter x to adjust the weight of those two layers.

I tried to use the code, but some questions still came up:

1. How can I use my own layer?

merge = Mylayer()(drop2,up1)

or in another way?

2. What is the meaning of out_dim? Those parameters are all 3-dim matrices, so what is the meaning of out_dim?

Thanks... T.T

Edit 2 (solved):

from keras import backend as K
from keras.engine.topology import Layer
import numpy as np

from keras.layers import add

class MyLayer(Layer):

    def __init__(self, **kwargs):
        super(MyLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        # the trainable mixing coefficient x, initialized to 0.5
        self._x = K.variable(0.5)
        self.trainable_weights = [self._x]

        super(MyLayer, self).build(input_shape)  # Be sure to call this at the end

    def call(self, x):
        A, B = x
        result = add([self._x * A, (1 - self._x) * B])
        return result

    def compute_output_shape(self, input_shape):
        return input_shape[0]
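
As a quick sanity check, the variable registered in build should show up among the model's trainable weights, so the optimizer will update it. A minimal sketch (the input shape here is made up for illustration, not from the original network):

from keras.models import Model
from keras.layers import Input

# two dummy inputs with identical shapes so they can be blended
a_in = Input(shape=(8, 8, 4))
b_in = Input(shape=(8, 8, 4))

merged = MyLayer()([a_in, b_in])
model = Model(inputs=[a_in, b_in], outputs=merged)

# the mixing coefficient x should be listed here alongside any other weights
print(model.trainable_weights)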

Recommended answer

You have to create a custom class which inherits from Layer and create the trainable parameter using self.add_weight(...). You can find an example of this here and there.

For your example, the layer would look somewhat like this:

from keras import backend as K
from keras.engine.topology import Layer
import numpy as np

class MyLayer(Layer):

    def __init__(self, output_dim, **kwargs):
        self.output_dim = output_dim
        super(MyLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        # Create a trainable weight variable for this layer.
        self._A = self.add_weight(name='A', 
                                    shape=(input_shape[1], self.output_dim),
                                    initializer='uniform',
                                    trainable=True)
        self._B = self.add_weight(name='B', 
                                    shape=(input_shape[1], self.output_dim),
                                    initializer='uniform',
                                    trainable=True)
        super(MyLayer, self).build(input_shape)  # Be sure to call this at the end

    def call(self, x):
        return K.dot(x, self._A) + K.dot(1-x, self._B)

    def compute_output_shape(self, input_shape):
        return (input_shape[0], self.output_dim)

Edit: Just based on the names I (wrongly) assumed that x is the layer's input and that you want to optimize A and B. But, as you stated, you want to optimize x. For this, you can do something like this:

from keras import backend as K
from keras.engine.topology import Layer
import numpy as np

class MyLayer(Layer):

    def __init__(self, **kwargs):
        super(MyLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        # Create a trainable weight variable for this layer.
        self._x = self.add_weight(name='x', 
                                    shape=(1,),
                                    initializer='uniform',
                                    trainable=True)
        super(MyLayer, self).build(input_shape)  # Be sure to call this at the end

    def call(self, x):
        A, B = x
        # scale the two inputs element-wise by the learned scalar x
        return self._x * A + (1 - self._x) * B

    def compute_output_shape(self, input_shape):
        return input_shape[0]

Edit 2: You can call the layer with

merge = MyLayer()([drop2, up1])
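
Putting it together, a minimal end-to-end sketch (toy shapes and random data are assumptions for illustration, not the original U-Net) that wires the layer into a small functional model, trains briefly, and reads the learned x back out:

import numpy as np
from keras.models import Model
from keras.layers import Input, Conv2D

inp = Input(shape=(32, 32, 3))
a = Conv2D(8, 3, padding='same', activation='relu')(inp)
b = Conv2D(8, 3, padding='same', activation='relu')(inp)

blend_layer = MyLayer()          # the add_weight-based layer from above
blend = blend_layer([a, b])
out = Conv2D(1, 1, activation='sigmoid')(blend)

model = Model(inputs=inp, outputs=out)
model.compile(optimizer='adam', loss='binary_crossentropy')

# random data, only to show that x is updated by backprop like any other weight
x_train = np.random.rand(4, 32, 32, 3)
y_train = np.random.randint(0, 2, size=(4, 32, 32, 1))
model.fit(x_train, y_train, epochs=1, verbose=0)

print(blend_layer.get_weights())  # [array([learned value of x])]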
