Keras - Fusion of a Dense Layer with a Convolution2D Layer



I want to make a custom layer which is supposed to fuse the output of a Dense Layer with a Convolution2D Layer.

The idea came from this paper, which shows the network architecture (the figure is not reproduced in this copy).

The fusion layer tries to fuse the Convolution2D tensor (256x28x28) with the Dense tensor (256). Per spatial position (u, v), the fusion equation is:

y_fusion(u, v) = σ(b + W · [y_global; y_mid(u, v)])

where:

y_global => Dense layer output with shape (256,)
y_mid => Convolution2D layer output with shape (256, 28, 28)

The paper also describes the fusion process in prose (the relevant figure is not reproduced here).

I ended up making a new custom layer like below:

import numpy as np
from keras import backend as K
from keras.engine.topology import Layer


class FusionLayer(Layer):

    def __init__(self, output_dim, **kwargs):
        self.output_dim = output_dim
        super(FusionLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        # input_shape is a list: [shape of y_global, shape of y_mid]
        input_dim = input_shape[1][1]
        initial_weight_value = np.random.random((input_dim, self.output_dim))
        self.W = K.variable(initial_weight_value)
        self.b = K.zeros((input_dim,))
        self.trainable_weights = [self.W, self.b]

    def call(self, inputs, mask=None):
        y_global = inputs[0]  # (batch, 256)
        y_mid = inputs[1]     # (batch, 256, 28, 28)
        # the code below should be modified
        output = K.dot(K.concatenate([y_global, y_mid]), self.W)
        output += self.b
        return self.activation(output)

    def get_output_shape_for(self, input_shape):
        assert input_shape and len(input_shape) == 2
        return (input_shape[0], self.output_dim)

I think I got the __init__ and build methods right, but I don't know how to concatenate y_global (256 dimensions) with y_mid (256x28x28 dimensions) in the call method so that the output matches the equation above.
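For intuition about where the difficulty lies, here is a small NumPy sketch (single sample, batch dimension omitted; shapes taken from above): a rank-1 (256,) vector and a rank-3 (256, 28, 28) tensor cannot be concatenated directly, so the global vector first has to be broadcast across the 28x28 spatial grid.

```python
import numpy as np

y_global = np.zeros(256)           # Dense output
y_mid = np.zeros((256, 28, 28))    # Convolution2D output

# direct concatenation fails: the arrays have different numbers of dimensions
try:
    np.concatenate([y_global, y_mid])
except ValueError as e:
    print("naive concat fails:", e)

# broadcasting the vector to (256, 28, 28) first makes the shapes compatible,
# giving a (512, 28, 28) stack that a 512 -> 256 projection can consume
g = np.broadcast_to(y_global[:, None, None], (256, 28, 28))
stacked = np.concatenate([g, y_mid], axis=0)
print(stacked.shape)  # (512, 28, 28)
```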

How can I implement this equation in the call method?

Thanks so much...

UPDATE: Any other way to successfully integrate the data of these two layers is also acceptable to me... it doesn't have to be exactly the way mentioned in the paper, but it needs to at least return an acceptable output...

Solution

I had to ask this question on the Keras GitHub page, and someone helped me implement it properly... here's the issue on GitHub: https://github.com/fchollet/keras/issues/4505
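For reference, the per-position equation above can be sketched in plain NumPy (single sample, no batch dimension; the `fuse` helper name, the ReLU choice for σ, and the random initialization are illustrative assumptions, not code from the issue):

```python
import numpy as np

def fuse(y_global, y_mid, W, b):
    """Fuse a global feature vector with a mid-level feature map.

    y_global: (256,)          Dense layer output
    y_mid:    (256, 28, 28)   Convolution2D layer output
    W:        (256, 512), b: (256,)  learnable fusion parameters
    Returns a (256, 28, 28) fused feature map.
    """
    c, h, w = y_mid.shape
    # broadcast the global vector to every spatial position, then
    # stack it on top of the local features -> (512, 28, 28)
    g = np.broadcast_to(y_global[:, None, None], (c, h, w))
    stacked = np.concatenate([g, y_mid], axis=0)
    # apply the same (256 x 512) projection at each of the h*w positions
    flat = stacked.reshape(2 * c, h * w)          # (512, 784)
    fused = W @ flat + b[:, None]                 # (256, 784)
    return np.maximum(fused.reshape(c, h, w), 0)  # ReLU standing in for sigma

rng = np.random.default_rng(0)
y_global = rng.standard_normal(256)
y_mid = rng.standard_normal((256, 28, 28))
W = rng.standard_normal((256, 512)) * 0.01
b = np.zeros(256)
out = fuse(y_global, y_mid, W, b)
print(out.shape)  # (256, 28, 28)
```

In a Keras layer, the same broadcast-concatenate-project pattern would be expressed with the backend ops (repeating the global vector over the 28x28 grid, concatenating along the channel axis, then applying the shared dense projection).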
