Two parallel conv2d layers (keras)


Question


I want to build a neural network that takes two separate matrices with the same dimensions (for example grey-scale images) as input, and outputs a value between -1 and 1 (probably tanh).


I would like to build the network so that there are two separate convolutional layers as inputs. Each one takes one matrix (or image). These are then combined in a following layer. So I want it to look something like that:


My first question is: can I do this in Keras (or, if not, in TensorFlow)? The second question is: does it make sense? Because I could also very easily composite the two matrices together and use only one conv2d layer. So something like this:


Explaining what I want to do exactly would go too far here. But can you imagine a situation where the first version would make more sense?

Answer


You can do that in Keras, and it makes sense if the inputs are different. To do so in Keras you first need a multiple-input model, and you have to concatenate the outputs of the convolutional branches together.

from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense, concatenate
from tensorflow.keras.models import Model

# x, y, filter_size, kernel_size, num_classes are placeholders for your own values
input_1 = Input(shape=(x, y, 1), name='input_1')  # single-channel images need a channel axis
input_2 = Input(shape=(x, y, 1), name='input_2')

c1 = Conv2D(filter_size, kernel_size)(input_1)
p1 = MaxPooling2D(pool_size=(2, 2))(c1)
f1 = Flatten()(p1)

c2 = Conv2D(filter_size, kernel_size)(input_2)
p2 = MaxPooling2D(pool_size=(2, 2))(c2)
f2 = Flatten()(p2)

x = concatenate([f1, f2])
x = Dense(num_classes, activation='sigmoid')(x)

model = Model(inputs=[input_1, input_2], outputs=[x])
model.compile('adam', 'binary_crossentropy', metrics=['accuracy'])
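As a concrete, runnable version of the skeleton above (the 32×32 image size, 8 filters, and random training data are illustrative choices, not part of the original answer), note that a multi-input model is trained by passing a list of arrays, one per input:

```python
import numpy as np
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense, concatenate
from tensorflow.keras.models import Model

H, W = 32, 32  # hypothetical image size

input_1 = Input(shape=(H, W, 1), name='input_1')
input_2 = Input(shape=(H, W, 1), name='input_2')

# Two independent convolutional branches, one per input
c1 = Conv2D(8, (3, 3), activation='relu')(input_1)
p1 = MaxPooling2D(pool_size=(2, 2))(c1)
f1 = Flatten()(p1)

c2 = Conv2D(8, (3, 3), activation='relu')(input_2)
p2 = MaxPooling2D(pool_size=(2, 2))(c2)
f2 = Flatten()(p2)

merged = concatenate([f1, f2])
out = Dense(1, activation='sigmoid')(merged)

model = Model(inputs=[input_1, input_2], outputs=out)
model.compile('adam', 'binary_crossentropy', metrics=['accuracy'])

# fit() takes a list of arrays, one entry per named input
xa = np.random.rand(8, H, W, 1).astype('float32')
xb = np.random.rand(8, H, W, 1).astype('float32')
y = np.random.randint(0, 2, size=(8, 1))
model.fit([xa, xb], y, epochs=1, verbose=0)
```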


Depending on your data it could also be possible to share the convolutional layers: you define them once and reuse them on both inputs. The weights are shared in this case.

# Define the layers once...
conv = Conv2D(filter_size, kernel_size)
pooling = MaxPooling2D(pool_size=(2, 2))
flatten = Flatten()

# ...and apply the same layer objects to both inputs (shared weights)
f1 = flatten(pooling(conv(input_1)))
f2 = flatten(pooling(conv(input_2)))
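Putting the shared-layer variant together with the tanh output the question asks for, a minimal end-to-end sketch might look like this (the sizes and filter count are illustrative assumptions; mean squared error is one reasonable loss for a target in [-1, 1]):

```python
import numpy as np
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense, concatenate
from tensorflow.keras.models import Model

H, W = 32, 32  # hypothetical image size

input_1 = Input(shape=(H, W, 1), name='input_1')
input_2 = Input(shape=(H, W, 1), name='input_2')

# One set of layer objects, applied to both inputs -> shared weights
conv = Conv2D(8, (3, 3), activation='relu')
pooling = MaxPooling2D(pool_size=(2, 2))
flatten = Flatten()

f1 = flatten(pooling(conv(input_1)))
f2 = flatten(pooling(conv(input_2)))

merged = concatenate([f1, f2])
out = Dense(1, activation='tanh')(merged)  # single value in (-1, 1)

model = Model(inputs=[input_1, input_2], outputs=out)
model.compile(optimizer='adam', loss='mse')

# Smoke test on random data
a = np.random.rand(4, H, W, 1).astype('float32')
b = np.random.rand(4, H, W, 1).astype('float32')
pred = model.predict([a, b], verbose=0)
```

Because both branches reuse the same `Conv2D` object, gradients from both inputs update the same kernel, which is what you want when the two images should be processed the same way (e.g. for comparing them).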

