Many to one and many to many LSTM examples in Keras


Question

I am trying to understand LSTMs and how to build them with Keras. I found out that there are principally four modes to run an RNN (the four on the right in the picture):

Image source: Andrej Karpathy

Now I wonder what a minimalistic code snippet for each of them would look like in Keras. So something like

model = Sequential()
model.add(LSTM(128, input_shape=(timesteps, data_dim)))
model.add(Dense(1))

for each of the 4 tasks, maybe with a little bit of explanation.

Answer

So:

  1. One-to-one: you could use a Dense layer as you are not processing sequences:

model.add(Dense(output_size, input_shape=input_shape))
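As a runnable sketch of the one-to-one case (the concrete values of `data_dim` and `output_size` below are assumed for illustration):

```python
from tensorflow import keras
from tensorflow.keras import layers

data_dim, output_size = 8, 4  # assumed sizes for illustration

# One-to-one: a single input vector mapped to a single output vector,
# so a plain Dense layer suffices -- no recurrence involved.
model = keras.Sequential([
    keras.Input(shape=(data_dim,)),
    layers.Dense(output_size),
])

print(model.output_shape)  # (None, 4): batch dimension plus output_size
```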

  2. One-to-many: this option is not well supported, as chaining models is not very easy in Keras, so the following version is the easiest one:

    model.add(RepeatVector(number_of_times, input_shape=input_shape))
    model.add(LSTM(output_size, return_sequences=True))
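
A minimal runnable version of this one-to-many stack (sizes assumed), which turns a single input vector into a sequence of `number_of_times` outputs:

```python
from tensorflow import keras
from tensorflow.keras import layers

data_dim, number_of_times, output_size = 8, 5, 16  # assumed sizes

model = keras.Sequential([
    keras.Input(shape=(data_dim,)),                   # one input vector
    layers.RepeatVector(number_of_times),             # copy it to every timestep
    layers.LSTM(output_size, return_sequences=True),  # one output per timestep
])

print(model.output_shape)  # (None, 5, 16)
```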
    

  3. Many-to-one: actually, your code snippet is (almost) an example of this approach:

    model = Sequential()
    model.add(LSTM(1, input_shape=(timesteps, data_dim)))
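
A self-contained version of the same idea (sizes assumed): with `return_sequences=False` (the default), the LSTM emits only its final state, collapsing the whole input sequence into one output:

```python
from tensorflow import keras
from tensorflow.keras import layers

timesteps, data_dim = 10, 8  # assumed sizes

model = keras.Sequential([
    keras.Input(shape=(timesteps, data_dim)),
    layers.LSTM(1),  # default return_sequences=False: only the last step's output
])

print(model.output_shape)  # (None, 1)
```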
    

  4. Many-to-many: this is the easiest snippet, for when the length of the input and output matches the number of recurrent steps:

    model = Sequential()
    model.add(LSTM(1, input_shape=(timesteps, data_dim), return_sequences=True))
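
In runnable form (sizes assumed): `return_sequences=True` makes the LSTM emit an output at every timestep, so input and output lengths match:

```python
from tensorflow import keras
from tensorflow.keras import layers

timesteps, data_dim = 10, 8  # assumed sizes

model = keras.Sequential([
    keras.Input(shape=(timesteps, data_dim)),
    layers.LSTM(1, return_sequences=True),  # one output per input timestep
])

print(model.output_shape)  # (None, 10, 1)
```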
    

  5. Many-to-many when the number of steps differs from the input/output length: this is fiendishly hard in Keras, and there is no easy code snippet for it.

    Regarding point 5:

    In one of my recent applications, we implemented something which might be similar to many-to-many from the 4th image. In case you want to have a network with the following architecture (when an input is longer than the output):

                                            O O O
                                            | | |
                                      O O O O O O
                                      | | | | | | 
                                      O O O O O O
    

    You could achieve this in the following manner:

    model = Sequential()
    model.add(LSTM(1, input_shape=(timesteps, data_dim), return_sequences=True))
    model.add(Lambda(lambda x: x[:, -N:, :]))  # select the last N timesteps of the output
    

    where N is the number of final steps you want to keep (in the image, N = 3).
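
The slicing performed by that Lambda layer can be checked on a plain NumPy array (shapes assumed: a batch of 2 sequences, 6 timesteps, 1 feature):

```python
import numpy as np

N = 3  # number of final timesteps to keep
# Stand-in for an LSTM's sequence output, shape (batch, timesteps, features)
x = np.arange(2 * 6 * 1).reshape(2, 6, 1)

last_n = x[:, -N:, :]  # the same expression used inside the Lambda layer
print(last_n.shape)    # (2, 3, 1)
```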

    From this point, getting to:

                                            O O O
                                            | | |
                                      O O O O O O
                                      | | | 
                                      O O O 
    

    is as simple as artificially padding the length-N sequence, e.g. with zero vectors, in order to adjust it to the appropriate size.
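
As an illustration of that padding step (all shapes assumed), the shorter length-N sequence can be extended with zero vectors until it spans the full number of timesteps; whether the zeros go before or after depends on which steps the real outputs should align with (prepended here):

```python
import numpy as np

timesteps, out_dim, N = 6, 1, 3    # assumed: 6 recurrent steps, real sequence length 3
target = np.ones((N, out_dim))     # the real length-N sequence

# Pad with zero vectors so the sequence covers all `timesteps` steps.
pad = np.zeros((timesteps - N, out_dim))
padded = np.concatenate([pad, target], axis=0)

print(padded.shape)  # (6, 1)
```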
