Many to one and many to many LSTM examples in Keras
Question
I am trying to understand LSTMs and how to build them with Keras. I found out that there are principally four modes to run an RNN (the four right ones in the picture).
Picture source: Andrej Karpathy
Now I wonder what a minimalistic code snippet for each of them would look like in Keras. So something like
model = Sequential()
model.add(LSTM(128, input_shape=(timesteps, data_dim)))
model.add(Dense(1))
for each of these four tasks, maybe with a little bit of explanation.
Answer
So:
One-to-one: you could use a Dense layer, as you are not processing sequences:
model.add(Dense(output_size, input_shape=input_shape))
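For intuition, the one-to-one case reduces to a single affine map with no time axis. A rough numpy sketch (the shapes and weights here are arbitrary, chosen only for illustration):

```python
import numpy as np

# A Dense layer is just an affine map y = xW + b; no timesteps involved.
rng = np.random.default_rng(0)
input_dim, output_size = 4, 2
W = rng.normal(size=(input_dim, output_size))
b = np.zeros(output_size)

x = np.ones((1, input_dim))  # one sample, no time dimension
y = x @ W + b
print(y.shape)  # (1, 2): one input vector maps to one output vector
```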
One-to-many: this option is not supported well, as chaining models is not very easy in Keras, so the following version is the easiest one:
model.add(RepeatVector(number_of_times, input_shape=input_shape))
model.add(LSTM(output_size, return_sequences=True))
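RepeatVector simply copies its single input vector along a new time axis, so the LSTM afterwards has a sequence to unroll over. A numpy sketch of that behaviour (the helper name is ours, not a Keras API):

```python
import numpy as np

def repeat_vector(x, n):
    # Mimics the effect of Keras' RepeatVector:
    # (batch, features) -> (batch, n, features)
    return np.tile(x[:, np.newaxis, :], (1, n, 1))

x = np.arange(6.0).reshape(2, 3)  # batch of 2 samples, 3 features each
out = repeat_vector(x, 4)
print(out.shape)  # (2, 4, 3): each sample repeated 4 times along a new axis
```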
Many-to-one: actually, your code snippet is (almost) an example of this approach:
model = Sequential()
model.add(LSTM(1, input_shape=(timesteps, data_dim)))
Many-to-many: this is the easiest snippet, for when the lengths of the input and output match the number of recurrent steps:
model = Sequential()
model.add(LSTM(1, input_shape=(timesteps, data_dim), return_sequences=True))
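The only difference between the two snippets above is return_sequences. A toy recurrence in numpy (not a real LSTM, just h_t = tanh(Wx_t + Uh_{t-1}) with random weights) shows the two output shapes the flag switches between:

```python
import numpy as np

def toy_rnn(x, hidden_size, return_sequences):
    # Toy recurrence, NOT a real LSTM: h_t = tanh(W @ x_t + U @ h_{t-1}).
    # It only illustrates the shapes controlled by return_sequences.
    rng = np.random.default_rng(0)
    timesteps, data_dim = x.shape
    W = rng.normal(size=(hidden_size, data_dim))
    U = rng.normal(size=(hidden_size, hidden_size))
    h = np.zeros(hidden_size)
    outputs = []
    for t in range(timesteps):
        h = np.tanh(W @ x[t] + U @ h)
        outputs.append(h)
    return np.stack(outputs) if return_sequences else h

x = np.ones((5, 3))  # 5 timesteps, 3 features
print(toy_rnn(x, 8, return_sequences=False).shape)  # (8,): last state only (many-to-one)
print(toy_rnn(x, 8, return_sequences=True).shape)   # (5, 8): one output per step (many-to-many)
```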
Many-to-many when the number of steps differs from the input/output length: this is freakishly hard in Keras. There are no easy code snippets for it.
EDIT: Ad 5
In one of my recent applications, we implemented something which might be similar to the many-to-many from the 4th image. In case you want a network with the following architecture (where an input is longer than the output):
O O O
| | |
O O O O O O
| | | | | |
O O O O O O
You could achieve this in the following manner:
model = Sequential()
model.add(LSTM(1, input_shape=(timesteps, data_dim), return_sequences=True))
model.add(Lambda(lambda x: x[:, -N:, :]))
Where N is the number of last steps you want to cover (in the image, N = 3).
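The Lambda layer only slices the time axis of the LSTM's output; the same expression applied to a plain numpy array of shape (batch, timesteps, features) shows what it keeps:

```python
import numpy as np

# Same slicing as the Lambda layer, on a (batch, timesteps, features) array.
N = 3
seq = np.arange(2 * 6 * 4).reshape(2, 6, 4)  # 2 samples, 6 timesteps, 4 features
last_n = seq[:, -N:, :]
print(last_n.shape)  # (2, 3, 4): only the last 3 timesteps survive
```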
From this point, getting to:
O O O
| | |
O O O O O O
| | |
O O O
is as simple as artificially padding the sequence, e.g. with 0 vectors, in order to adjust it to an appropriate length.
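A numpy sketch of that padding step (the helper name is ours, and whether to pad before or after the real steps depends on where the diagram places the inputs; here we append zeros after them):

```python
import numpy as np

def pad_sequence(x, target_len):
    # Hypothetical helper: append zero vectors to a (timesteps, features)
    # sequence until it has target_len timesteps.
    timesteps, features = x.shape
    pad = np.zeros((target_len - timesteps, features))
    return np.concatenate([x, pad], axis=0)

x = np.ones((3, 4))          # 3 real timesteps, 4 features
padded = pad_sequence(x, 6)  # now 6 timesteps; the last 3 are all zeros
print(padded.shape)  # (6, 4)
```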