keras cnn_lstm input layer not accepting 1-D input

Problem Description

I have sequences of long 1-D vectors (3000 digits) that I am trying to classify. I have previously implemented a simple CNN to classify them with relative success:

from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, Flatten, Dense

def create_shallow_model(shape, repeat_length, stride):
    model = Sequential()
    model.add(Conv1D(75, repeat_length, strides=stride, padding='same',
                     input_shape=shape, activation='relu'))
    model.add(MaxPooling1D(repeat_length))
    model.add(Flatten())
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
    return model

However, I am looking to improve the performance by stacking an LSTM/RNN on the end of the network.

I am having difficulty with this as I cannot seem to find a way for the network to accept the data.

from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, Flatten, Dense, LSTM, TimeDistributed

def cnn_lstm(shape, repeat_length, stride):
    model = Sequential()
    model.add(TimeDistributed(Conv1D(75, repeat_length, strides=stride, padding='same', activation='relu'),
                              input_shape=(None,) + shape))
    model.add(TimeDistributed(MaxPooling1D(repeat_length)))
    model.add(TimeDistributed(Flatten()))
    model.add(LSTM(6, return_sequences=True))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='sparse_categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
    return model

model = cnn_lstm(X.shape[1:], 1000, 1)
tprs, aucs = calculate_roc(model, 3, 100, train_X, train_y, test_X, test_y, tprs, aucs)

However, I get the following error:

ValueError: Error when checking input: expected time_distributed_4_input to have 4 dimensions, but got array with shape (50598, 3000, 1)

My questions are:

  1. Is this a correct way of analysing this data?

  2. If so, how do I get the network to accept and classify the input sequences?

Recommended Answer

There is no need to add those TimeDistributed wrappers. Currently, before adding the LSTM layer, your model looks like this (I have assumed repeat_length=5 and stride=1):

Layer (type)                 Output Shape              Param #   
=================================================================
conv1d_2 (Conv1D)            (None, 3000, 75)          450       
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 600, 75)           0         
_________________________________________________________________
flatten_2 (Flatten)          (None, 45000)             0         
_________________________________________________________________
dense_4 (Dense)              (None, 1)                 45001     
=================================================================
Total params: 45,451
Trainable params: 45,451
Non-trainable params: 0
_________________________________________________________________

So if you want to add an LSTM layer, you can put it right after the MaxPooling1D layer, e.g. model.add(LSTM(16, activation='relu')), and just remove the Flatten layer. Now the model looks like this:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv1d_4 (Conv1D)            (None, 3000, 75)          450       
_________________________________________________________________
max_pooling1d_3 (MaxPooling1 (None, 600, 75)           0         
_________________________________________________________________
lstm_1 (LSTM)                (None, 16)                5888      
_________________________________________________________________
dense_5 (Dense)              (None, 1)                 17        
=================================================================
Total params: 6,355
Trainable params: 6,355
Non-trainable params: 0
_________________________________________________________________
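For concreteness, here is a minimal sketch of that suggested architecture (the function name cnn_lstm_fixed is mine, and I am again assuming shape=(3000, 1), repeat_length=5 and stride=1):

from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, LSTM, Dense

def cnn_lstm_fixed(shape, repeat_length, stride):
    model = Sequential()
    model.add(Conv1D(75, repeat_length, strides=stride, padding='same',
                     input_shape=shape, activation='relu'))
    model.add(MaxPooling1D(repeat_length))
    # The LSTM reads the pooled feature sequence directly, so no Flatten or
    # TimeDistributed wrapper is needed.
    model.add(LSTM(16, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
    return model

# e.g. model = cnn_lstm_fixed((3000, 1), 5, 1); model.summary()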

If you want, you can pass the return_sequences=True argument to the LSTM layer and keep the Flatten layer. But only do that after you have tried the first approach and gotten poor results, since adding return_sequences=True may not be necessary at all; it only increases your model size and may decrease model performance.
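For reference, a minimal sketch of that variant under the same assumptions as above (the name cnn_lstm_seq is hypothetical):

from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, LSTM, Flatten, Dense

def cnn_lstm_seq(shape, repeat_length, stride):
    model = Sequential()
    model.add(Conv1D(75, repeat_length, strides=stride, padding='same',
                     input_shape=shape, activation='relu'))
    model.add(MaxPooling1D(repeat_length))
    # return_sequences=True keeps the full time axis, e.g. (None, 600, 16) for
    # shape=(3000, 1) and repeat_length=5, which Flatten then unrolls.
    model.add(LSTM(16, return_sequences=True))
    model.add(Flatten())
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
    return model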

As a side note: why did you change the loss function to sparse_categorical_crossentropy in the second model? There is no need to do that, since binary_crossentropy would work fine.
