Combining the outputs of multiple models into one model


Problem Description

I am currently looking for a way to combine the outputs of multiple models into one model. I need to create a CNN network that does classification.

The image is separated into sections (as seen by the colors), and each section is given as input to a certain model (1, 2, 3, 4). The structure of each model is the same, but each section goes to a separate model to ensure that the same weights are not applied to the whole image: my goal is to avoid full weight sharing and keep the weight sharing local. Each model then performs convolution and max pooling, producing an output that is fed into a dense layer, which takes the outputs of the prior models (models 1, 2, 3, 4) and performs the classification.
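The four sections described here can be carved out of a single image with plain NumPy slicing. A minimal sketch, assuming a square image split into quadrants (the 100x100 size is an assumption for illustration):

```python
import numpy as np

# a dummy 100x100 RGB image standing in for a real input image
image = np.random.random((100, 100, 3))

# split it into four equal quadrants along height and width
h, w = image.shape[0] // 2, image.shape[1] // 2
top_left     = image[:h, :w]   # section for model 1
top_right    = image[:h, w:]   # section for model 2
bottom_left  = image[h:, :w]   # section for model 3
bottom_right = image[h:, w:]   # section for model 4

# each section is 50x50x3 and feeds its own branch of the network
```

Stacking such sections across a dataset (one array per quadrant) yields the four input arrays the model below expects.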

My question is: is it possible to create models 1, 2, 3, 4, connect them to the fully connected layer, and train all the models given the input sections and the output class, without having to define the outputs of the convolution and pooling layers in Keras?
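One way to avoid writing out every branch's intermediate outputs by hand is to wrap the shared structure in a builder function: calling it once per section creates a fresh (local, unshared) set of weights from the same code. A minimal sketch using the `tf.keras` functional API (the 50x50 section shape and layer sizes are assumptions for illustration):

```python
import numpy as np
from tensorflow.keras import Input, Model, layers

def build_branch(inp):
    # same structure for every branch, but a fresh set of weights per call,
    # so weight sharing stays local to each image section
    x = layers.Conv2D(64, (3, 3), activation='relu')(inp)
    x = layers.MaxPooling2D((3, 3))(x)
    return layers.Flatten()(x)

# four 50x50 RGB quadrants of the original image
inputs = [Input(shape=(50, 50, 3)) for _ in range(4)]
features = [build_branch(inp) for inp in inputs]

# merge per-section features and classify
merged = layers.concatenate(features)
dense = layers.Dense(256, activation='relu')(merged)
out = layers.Dense(10, activation='softmax')(dense)

model = Model(inputs=inputs, outputs=out)
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```

Training on the four section arrays and the class labels then optimizes all branches and the dense classifier end to end in a single `model.fit` call.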

Recommended Answer

Yes, you can create such models using multi-input and multi-output models; refer to the Keras documentation for more details. Here I am sharing a code sample, hope this helps:

import numpy as np
import keras
from keras.models import Model
from keras.layers import Dense, Flatten, Input, Conv2D, MaxPooling2D, concatenate

# Generate dummy data: one array per image section
train1 = np.random.random((100, 100, 100, 3))
train2 = np.random.random((100, 100, 100, 3))
train3 = np.random.random((100, 100, 100, 3))
train4 = np.random.random((100, 100, 100, 3))

y_train = keras.utils.to_categorical(np.random.randint(10, size=(100, 1)), num_classes=10)

# parallel inputs for the different sections of the image
inp1 = Input(shape=train1.shape[1:])
inp2 = Input(shape=train2.shape[1:])
inp3 = Input(shape=train3.shape[1:])
inp4 = Input(shape=train4.shape[1:])

# parallel conv and pooling layers, each processing its section independently
conv1 = Conv2D(64, (3, 3), activation='relu')(inp1)
conv2 = Conv2D(64, (3, 3), activation='relu')(inp2)
conv3 = Conv2D(64, (3, 3), activation='relu')(inp3)
conv4 = Conv2D(64, (3, 3), activation='relu')(inp4)

maxp1 = MaxPooling2D((3, 3))(conv1)
maxp2 = MaxPooling2D((3, 3))(conv2)
maxp3 = MaxPooling2D((3, 3))(conv3)
maxp4 = MaxPooling2D((3, 3))(conv4)

# more parallel conv/pool layers can be stacked here to reduce the feature size

flt1 = Flatten()(maxp1)
flt2 = Flatten()(maxp2)
flt3 = Flatten()(maxp3)
flt4 = Flatten()(maxp4)

# merge the per-section features into a single vector for the dense layers
mrg = concatenate([flt1, flt2, flt3, flt4])

dense = Dense(256, activation='relu')(mrg)

op = Dense(10, activation='softmax')(dense)

model = Model(inputs=[inp1, inp2, inp3, inp4], outputs=op)
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit([train1, train2, train3, train4], y_train,
          epochs=10, batch_size=28)

