Keras LSTM input dimension setting

Problem Description

I was trying to train an LSTM model using Keras, but I think I got something wrong here.

I got this error:

ValueError: Error when checking input: expected lstm_17_input to have 3 dimensions, but got array with shape (10000, 0, 20)

while my code looks like:

model = Sequential()
model.add(LSTM(256, activation="relu", dropout=0.25, recurrent_dropout=0.25, input_shape=(None, 20, 64)))
model.add(Dense(1, activation="sigmoid"))
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
model.fit(X_train, y_train,
          batch_size=batch_size,
          epochs=10)

where X_train has a shape of (10000, 20) and the first few data points look like:

array([[ 0,  0,  0, ..., 40, 40,  9],
       [ 0,  0,  0, ..., 33, 20, 51],
       [ 0,  0,  0, ..., 54, 54, 50],
       ...

and y_train has a shape of (10000,), which is a binary (0/1) label array.

Could someone point out where I went wrong here?

Recommended Answer

For the sake of completeness, here's what happened.

First up, LSTM, like all layers in Keras, accepts two shape arguments: input_shape and batch_input_shape. The difference is that, by convention, input_shape does not contain the batch size, while batch_input_shape is the full input shape, including the batch size.
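
As a quick illustration (a toy sketch; the unit count and batch size here are arbitrary, not from the question), the following two declarations describe the same per-sample input, one without and one with an explicit batch size:

from keras.models import Sequential
from keras.layers import LSTM

# input_shape omits the batch dimension: (timesteps, features)
model_a = Sequential()
model_a.add(LSTM(8, input_shape=(20, 1)))

# batch_input_shape spells out the full shape, fixing the batch size too:
# (batch_size, timesteps, features)
model_b = Sequential()
model_b.add(LSTM(8, batch_input_shape=(32, 20, 1)))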

Hence, the specification input_shape=(None, 20, 64) tells Keras to expect a 4-dimensional input (the implicit batch dimension plus the three dimensions you listed), which is not what you want. The correct value would have been just (20,).

But that's not all. The LSTM layer is a recurrent layer, hence it expects a 3-dimensional input of shape (batch_size, timesteps, input_dim). That's why the correct specification is input_shape=(20, 1) or batch_input_shape=(10000, 20, 1). Plus, your training array should also be reshaped to denote that it has 20 time steps and 1 input feature per step.

Hence, the solution:

import numpy as np

X_train = np.expand_dims(X_train, 2)  # adds a feature axis: (10000, 20) -> (10000, 20, 1)
...
model = Sequential()
model.add(LSTM(..., input_shape=(20, 1)))
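
Putting it together, here is a minimal end-to-end sketch of the corrected setup. The data below is randomly generated placeholder data matching the shapes from the question, and the batch size of 32 is an arbitrary choice (the original batch_size variable was not shown):

import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

# Placeholder data with the same shapes as in the question.
X_train = np.random.randint(0, 64, size=(10000, 20))
y_train = np.random.randint(0, 2, size=(10000,))

# Reshape to (samples, timesteps, features): 20 time steps, 1 feature each.
X_train = np.expand_dims(X_train, 2)  # (10000, 20) -> (10000, 20, 1)

model = Sequential()
model.add(LSTM(256, activation="relu", dropout=0.25, recurrent_dropout=0.25,
               input_shape=(20, 1)))
model.add(Dense(1, activation="sigmoid"))
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
model.fit(X_train, y_train, batch_size=32, epochs=10)  # batch size is arbitrary here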

