Using Keras to build an LSTM + Conv2D model
Problem description
I want to build a model similar to this architecture:
My current LSTM model is as follows:
from keras.layers import (Input, Embedding, SpatialDropout1D, Bidirectional,
                          CuDNNLSTM, GlobalAveragePooling1D, GlobalMaxPooling1D,
                          Dense, Dropout, concatenate)
from keras.models import Model

inp = Input(shape=(maxlen,))  # maxlen = padded sequence length
x = Embedding(max_features, embed_size, weights=[embedding_matrix], trainable=False)(inp)
x = SpatialDropout1D(0.1)(x)
x = Bidirectional(CuDNNLSTM(128, return_sequences=True))(x)
x = Bidirectional(CuDNNLSTM(64, return_sequences=True))(x)
avg_pool = GlobalAveragePooling1D()(x)
max_pool = GlobalMaxPooling1D()(x)
conc = concatenate([avg_pool, max_pool])
conc = Dense(64, activation="relu")(conc)
conc = Dropout(0.1)(conc)
outp = Dense(1, activation="sigmoid")(conc)
model = Model(inputs=inp, outputs=outp)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=[f1])  # f1 is a custom metric
How can I use a Conv2D layer followed by a 2D max pooling layer after the BiLSTM?
Answer
There are a few important points you need to pay attention to in order to create this (fairly complicated) model.
Here is the model itself, created using the functional API:
from keras import backend as K
from keras.layers import Input, Bidirectional, LSTM, Lambda, Conv2D, MaxPooling2D, Dense
from keras.models import Model

def expand_dims(x):
    return K.expand_dims(x, -1)

inp = Input(shape=(3, 3))
lstm = Bidirectional(LSTM(128, return_sequences=True))(inp)
lstm = Lambda(expand_dims)(lstm)
conv2d = Conv2D(filters=128, kernel_size=2, padding='same')(lstm)
max_pool = MaxPooling2D(pool_size=(2, 2))(conv2d)
predictions = Dense(10, activation='softmax')(max_pool)
model = Model(inputs=inp, outputs=predictions)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
Step-by-step explanation
First, create your input shape. From the image above it looks like you work with 7 samples, a window of 3, and 3 features -> a tensor of shape (7, 3, 3). Obviously, you can change this to whatever you like. Feed the input layer into your bidirectional LSTM layer.
inp = Input(shape=(3,3))
lstm = Bidirectional(LSTM(128, return_sequences=True))(inp)
Second, as @Amir mentioned, you need to expand the dimensions if you want to use a Conv2D layer. However, using the Keras backend alone is not sufficient, because a model created with the functional API must contain only Keras layers; otherwise you get the error 'NoneType' object has no attribute '_inbound_nodes'. Therefore, you need to extract expand_dims into its own function and wrap it in a Lambda layer:
def expand_dims(x):
return K.expand_dims(x, -1)
lstm = Lambda(expand_dims)(lstm)
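To see what this does to the tensor shape, here is a small NumPy sketch (np.expand_dims behaves like K.expand_dims for this purpose; the (3, 256) shape assumes the BiLSTM output above, i.e. 3 timesteps with 128 units per direction, concatenated):

```python
import numpy as np

# One sample coming out of the BiLSTM: 3 timesteps x 256 features.
lstm_out = np.zeros((3, 256))

# Appending a channel axis turns the 2D feature map into the
# single-channel "image" that Conv2D expects.
expanded = np.expand_dims(lstm_out, -1)

print(expanded.shape)  # (3, 256, 1)
```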
The rest is pretty straightforward once the above is sorted:
conv2d = Conv2D(filters=128, kernel_size=2, padding='same')(lstm)
max_pool = MaxPooling2D(pool_size=(2, 2))(conv2d)
predictions = Dense(10, activation='softmax')(max_pool)
model = Model(inputs=inp, outputs=predictions)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
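If you want to sanity-check the shapes without running Keras, the layer-by-layer arithmetic can be sketched in plain Python (the helper names below are illustrative, not Keras API):

```python
def conv2d_same(h, w, c_in, filters):
    # 'same' padding keeps the spatial dims unchanged, whatever the kernel size
    return (h, w, filters)

def maxpool2d(h, w, c, pool=2):
    # non-overlapping pooling: floor division of each spatial dim
    return (h // pool, w // pool, c)

shape = (3, 256, 1)                       # after Lambda(expand_dims)
shape = conv2d_same(*shape, filters=128)  # -> (3, 256, 128)
shape = maxpool2d(*shape)                 # -> (1, 128, 128)
print(shape)  # (1, 128, 128)
```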
The summary of the model is as follows:
Layer (type) Output Shape Param #
=================================================================
input_67 (InputLayer) (None, 3, 3) 0
_________________________________________________________________
bidirectional_29 (Bidirectio (None, 3, 256) 135168
_________________________________________________________________
lambda_7 (Lambda) (None, 3, 256, 1) 0
_________________________________________________________________
conv2d_19 (Conv2D) (None, 3, 256, 128) 640
_________________________________________________________________
max_pooling2d_14 (MaxPooling (None, 1, 128, 128) 0
_________________________________________________________________
dense_207 (Dense) (None, 1, 128, 10) 1290
=================================================================
Total params: 137,098
Trainable params: 137,098
Non-trainable params: 0
_________________________________________________________________
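The parameter counts in the summary can be verified by hand with the standard formulas; a quick sketch (the layer sizes are the ones used above):

```python
def lstm_params(input_dim, units):
    # 4 gates, each with input weights, recurrent weights, and a bias
    return 4 * (units * (input_dim + units) + units)

def dense_params(input_dim, units):
    return input_dim * units + units

bilstm = 2 * lstm_params(input_dim=3, units=128)  # two directions -> 135,168
conv = 2 * 2 * 1 * 128 + 128                      # 2x2 kernel, 1 in-channel, 128 filters, biases -> 640
dense = dense_params(input_dim=128, units=10)     # applied to the last axis -> 1,290

print(bilstm + conv + dense)  # 137,098 -- matches the summary
```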
Here is the visualization: