Difference in using LSTM for fixed-size input and variable-size input


Problem description

I wonder how I can train a (natural) LSTM model where I feed single time steps one by one in a loop, using all data points, rather than samples with a fixed-length window of history data points.

So, what is the difference, and why is it standard to use a fixed length when feeding all points step by step seems the more intuitive way?

Besides my theoretical question, is there a good example that uses a loop over single time steps instead of feeding the whole window to an LSTM?

Solution

Both the PyTorch and TensorFlow implementations of RNNs take tensors as inputs, which are processed with vectorized operations on the CPU or the GPU. This requires the full input tensor to be present in memory.
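For concreteness, here is a minimal sketch of that standard fixed-length usage, assuming PyTorch; the batch, window, and layer sizes are placeholders:

```python
import torch
import torch.nn as nn

# Standard usage: the whole fixed-length window of history is materialized
# as one tensor of shape (batch, seq_len, input_size) and processed at once.
batch, seq_len, input_size, hidden_size = 4, 32, 8, 16
lstm = nn.LSTM(input_size, hidden_size, batch_first=True)

window = torch.randn(batch, seq_len, input_size)  # full window in memory
outputs, (h_n, c_n) = lstm(window)  # outputs: (batch, seq_len, hidden_size)
```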

However, you could implement your own RNN as a model that is a single block and run it inside a for loop that feeds the output of the previous iteration back into the model as input, alongside the regular input of the current iteration.
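As an illustration of that loop (a minimal sketch, assuming PyTorch; nn.LSTMCell stands in for the single-block model, and the sizes and the random input stream are placeholders):

```python
import torch
import torch.nn as nn

# One LSTM cell applied step by step: the hidden and cell states produced by
# the previous iteration are fed back in alongside the current time step.
input_size, hidden_size = 8, 16
cell = nn.LSTMCell(input_size, hidden_size)

T = 100                                 # length of the data stream
stream = torch.randn(T, 1, input_size)  # T single time steps, batch of 1

h = torch.zeros(1, hidden_size)  # hidden state carried across iterations
c = torch.zeros(1, hidden_size)  # cell state carried across iterations

outputs = []
for t in range(T):
    h, c = cell(stream[t], (h, c))  # previous state re-enters as input
    outputs.append(h)

outputs = torch.stack(outputs)  # (T, 1, hidden_size)
```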

csirmaz built something similar that you can use as a reference: https://github.com/csirmaz/superloop.
