4D input in LSTM layer in Keras

Question

I have data with a shape of (10000, 20, 15, 4), where num samples = 10000, num series in time = 20, height = 15, width = 4. So I have a 15x4 table distributed over time. Here is the model I want to train on this data:

...
model.add((LSTM(nums-1,return_sequences=True,input_shape=(20,15,4), activation='relu')))
model.add((LSTM(nums-1,return_sequences=False,input_shape=(20,15,4), activation='tanh')))
model.add(Dense(15,activation='relu'))
...

However, I get the following error:

ValueError: Input 0 is incompatible with layer lstm_1: expected ndim=3, 
found ndim=4

How do I define an LSTM layer with a 4D input shape?

Answer

The LSTM layer accepts a 3D array as input, with shape (n_samples, n_timesteps, n_features). Since the features at each timestep in your data form a (15, 4) array, you need to first flatten them into a feature vector of length 60 and then pass it to your model:

X_train = X_train.reshape(10000, 20, -1)

# ...
model.add(LSTM(...,input_shape=(20,15*4), ...)) # modify input_shape accordingly
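
For reference, here is a minimal end-to-end sketch of this approach. The layer sizes, loss, and the standalone keras imports are placeholder assumptions, not taken from the question:

import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

# Placeholder data with the shapes described in the question.
X_train = np.random.rand(10000, 20, 15, 4)
y_train = np.random.rand(10000, 15)

# Flatten the (15, 4) table at each timestep into a vector of length 60.
X_train = X_train.reshape(10000, 20, -1)  # -> (10000, 20, 60)

model = Sequential()
model.add(LSTM(64, return_sequences=True, input_shape=(20, 15 * 4), activation='relu'))
model.add(LSTM(64, return_sequences=False, activation='tanh'))
model.add(Dense(15, activation='relu'))
model.compile(optimizer='adam', loss='mse')
model.fit(X_train, y_train, epochs=1, batch_size=32)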

Alternatively, you can use a Flatten layer wrapped in a TimeDistributed layer as the first layer of your model to flatten each timestep:

model.add(TimeDistributed(Flatten(), input_shape=(20, 15, 4)))
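
For completeness, a minimal sketch of this variant, reusing the placeholder layer sizes from the sketch above; here the full (timesteps, height, width) shape is given to the TimeDistributed wrapper, so no manual reshape of X_train is needed:

from keras.models import Sequential
from keras.layers import LSTM, Dense, Flatten, TimeDistributed

model = Sequential()
# Flatten the (15, 4) table at every timestep: (20, 15, 4) -> (20, 60)
model.add(TimeDistributed(Flatten(), input_shape=(20, 15, 4)))
model.add(LSTM(64, return_sequences=True, activation='relu'))
model.add(LSTM(64, return_sequences=False, activation='tanh'))
model.add(Dense(15, activation='relu'))
model.compile(optimizer='adam', loss='mse')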

Further, note that if each timestep (i.e., the (15, 4) array) is a feature map with a local spatial relationship between its elements, say like an image patch, you can also use a ConvLSTM2D layer instead of an LSTM layer. Otherwise, flattening the timesteps and using LSTM is fine.
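
If you take that route, the data needs an explicit channel dimension, since ConvLSTM2D expects 5D input of shape (samples, time, rows, cols, channels). A hedged sketch, with an arbitrary filter count and kernel size as placeholder assumptions:

import numpy as np
from keras.models import Sequential
from keras.layers import ConvLSTM2D, Flatten, Dense

# Treat each (15, 4) table as a single-channel "image":
# (10000, 20, 15, 4) -> (10000, 20, 15, 4, 1)
X_train = np.random.rand(10000, 20, 15, 4)
X_train = X_train[..., np.newaxis]

model = Sequential()
model.add(ConvLSTM2D(filters=8, kernel_size=(3, 3), padding='same',
                     return_sequences=False, input_shape=(20, 15, 4, 1)))
model.add(Flatten())
model.add(Dense(15, activation='relu'))
model.compile(optimizer='adam', loss='mse')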

As a side note: you only need to specify the input_shape argument on the first layer of the model. Specifying it on other layers is redundant and will be ignored, since Keras infers their input shapes automatically.
