How do I train multiple neural nets simultaneously in keras?


How do I train multiple models simultaneously and combine them at the output layer?

For example:

model_one = Sequential() #model 1
model_one.add(Convolution2D(32, 3, 3, activation='relu', input_shape=(1,28,28)))
model_one.add(Flatten())
model_one.add(Dense(128, activation='relu'))

model_two = Sequential() #model 2
model_two.add(Dense(128, activation='relu', input_shape=(784,)))
model_two.add(Dense(128, activation='relu'))

model_???.add(Dense(10, activation='softmax')) #combine them here

model.compile(loss='categorical_crossentropy', # continue together
          optimizer='adam',
          metrics=['accuracy'])


model.fit(X_train, Y_train, # continue together somehow, even though this would never work because X_train and Y_train have the wrong formats
      batch_size=32, nb_epoch=10, verbose=1)

I've heard I can do this through a graph model but I can't find any documentation on it.

EDIT: in reply to the suggestion below:

A1 = Conv2D(20,kernel_size=(5,5),activation='relu',input_shape=( 28, 28, 1))
---> B1 = MaxPooling2D(pool_size=(2,2))(A1)

throws this error:

AttributeError: 'Conv2D' object has no attribute 'get_shape'

Solution

Graph notation would do it for you. Essentially you give every layer a unique handle then link back to the previous layer using the handle in brackets at the end:

layer_handle = Layer(params)(prev_layer_handle)

Note that the first layer must be an Input(shape=(x,y)) with no prior connection.
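This is also what causes the `AttributeError` in the edit above: `A1` is a `Conv2D` *layer object*, not a tensor, because the layer was never called on an input. A layer must be applied to a tensor before the next layer can consume its output. A minimal corrected sketch (names are illustrative):

```python
from keras.layers import Input, Conv2D, MaxPooling2D
from keras.models import Model

inp = Input(shape=(28, 28, 1))   # the graph must start from an Input tensor

# Call each layer on the previous tensor; a1 and b1 are now tensors, not layers
a1 = Conv2D(20, kernel_size=(5, 5), activation='relu')(inp)
b1 = MaxPooling2D(pool_size=(2, 2))(a1)

model = Model(inputs=inp, outputs=b1)
```

With the layers chained this way, the pooling layer receives a tensor and the error disappears.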

Then when you make your model you need to tell it that it expects multiple inputs with a list:

model = Model(inputs=[in_layer1, in_layer2, ..], outputs=[out_layer1, out_layer2, ..])

Finally when you train it you also need to provide a list of input and output data that corresponds with your definition:

model.fit([x_train1, x_train2, ..], [y_train1, y_train2, ..])

Everything else stays the same, so you just need to combine the above to give you the network layout you want:

from keras.models import Model
from keras.layers import Input, Convolution2D, Flatten, Dense, Concatenate

# Note Keras 2.02, channel last dimension ordering

# Model 1
in1 = Input(shape=(28,28,1))
model_one_conv_1 = Convolution2D(32, (3, 3), activation='relu')(in1)
model_one_flat_1 = Flatten()(model_one_conv_1)
model_one_dense_1 = Dense(128, activation='relu')(model_one_flat_1)

# Model 2
in2 = Input(shape=(784, ))
model_two_dense_1 = Dense(128, activation='relu')(in2)
model_two_dense_2 = Dense(128, activation='relu')(model_two_dense_1)

# Model Final
model_final_concat = Concatenate(axis=-1)([model_one_dense_1, model_two_dense_2])
model_final_dense_1 = Dense(10, activation='softmax')(model_final_concat)

model = Model(inputs=[in1, in2], outputs=model_final_dense_1)

model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

model.fit([X_train_one, X_train_two], Y_train,
          batch_size=32, epochs=10, verbose=1)

Documentation can be found in the Functional Model API. I'd recommend reading around other questions or checking out Keras' repo as well since the documentation currently doesn't have many examples.
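Since both branches in the example consume the same MNIST-style images, the two input arrays can be derived from a single data tensor by reshaping. A minimal numpy sketch (the array names and random placeholder data are illustrative):

```python
import numpy as np

# Placeholder for 1000 grayscale 28x28 images
X_train = np.random.rand(1000, 28, 28).astype('float32')

# Branch 1 expects channels-last image tensors of shape (28, 28, 1)
X_train_one = X_train.reshape(-1, 28, 28, 1)

# Branch 2 expects flat 784-dimensional vectors
X_train_two = X_train.reshape(-1, 784)

print(X_train_one.shape, X_train_two.shape)  # (1000, 28, 28, 1) (1000, 784)
```

Both arrays are then passed as the input list to `model.fit`, paired with a single `Y_train` since the model has one output.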
