Time series forecast with recurrent Elman network in neurolab


Problem description

I use the Elman recurrent network from neurolab to predict a time series of continuous values. The network is trained from a sequence such that the input is the value at index i and the target is the value at index i+1.

To make predictions beyond the immediate next time step, the output of the net is fed back as input. If, for example, I intend to predict the value at i+5, I proceed as follows.

  1. Input the value from i
  2. Take the output and feed it to the net as the next input value (e.g. i+1)
  3. Repeat steps 1 and 2 four more times
  4. The output is a prediction of the value at i+5
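The closed-loop procedure above can be sketched with a minimal hand-rolled Elman cell. This is an illustration, not neurolab's API: the weights are random placeholders standing in for a trained net, and `elman_step` is a hypothetical helper. The key point is that the hidden state `h` must be carried from one call to the next:

```python
import numpy as np

def elman_step(x, h, W_in, W_rec, W_out):
    """One Elman step: new hidden state from the input and the previous hidden state."""
    h_new = np.tanh(W_in * x + W_rec @ h)
    y = float(W_out @ h_new)
    return y, h_new

# Placeholder weights; a real net would have trained values here.
rng = np.random.default_rng(0)
W_in = rng.normal(size=3)
W_rec = rng.normal(size=(3, 3)) * 0.5
W_out = rng.normal(size=3)

# Closed-loop prediction: feed each output back in as the next input.
x = 0.1                 # the value at index i
h = np.zeros(3)         # initial hidden (recurrent) state
for _ in range(5):      # five steps ahead -> a prediction for i+5
    x, h = elman_step(x, h, W_in, W_rec, W_out)
print(x)                # the final output plays the role of the i+5 prediction
```

If `h` were reset to zeros on every call, the loop would no longer be a recurrent computation, which is exactly the problem described below.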

So for predictions beyond the immediate next time step, recurrent networks must be activated with the output from a previous activation.

In most examples, however, the network is fed with an already complete sequence. See, for example, the functions train and sim in the example behind the link above. The first function trains the network with an already complete list of examples and the second function activates the network with a complete list of input values.

After some digging in neurolab, I found the function step to return a single output for a single input. Results from using step suggest, however, that the function does not retain the activation of the recurrent layer, which is crucial to recurrent networks.
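The difference between a state-retaining step and a state-losing one can be demonstrated with the same toy Elman cell as above (again a sketch with placeholder weights, not neurolab code). The stateless variant resets the hidden activation on every call, which is what a `step` function that discards the recurrent layer's activation effectively does:

```python
import numpy as np

def elman_step(x, h, W_in, W_rec, W_out):
    """One Elman step; returns the output and the new hidden state."""
    h_new = np.tanh(W_in * x + W_rec @ h)
    return float(W_out @ h_new), h_new

# Placeholder weights standing in for a trained net.
rng = np.random.default_rng(1)
W_in = rng.normal(size=4)
W_rec = rng.normal(size=(4, 4)) * 0.5
W_out = rng.normal(size=4)
seq = [0.3, -0.1, 0.7]

# Stateful: the hidden activation is carried across inputs.
h = np.zeros(4)
stateful = []
for x in seq:
    y, h = elman_step(x, h, W_in, W_rec, W_out)
    stateful.append(y)

# Stateless: the hidden activation is reset on every call.
stateless = [elman_step(x, np.zeros(4), W_in, W_rec, W_out)[0] for x in seq]

print(stateful[1:] == stateless[1:])  # the two runs diverge after the first input
```

Only the very first outputs agree (both start from a zero hidden state); from the second input on, the stateless variant has forgotten the history, so its outputs differ.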

How can I activate a recurrent Elman network in neurolab with a single input, such that it retains its internal state for the next single-input activation?

Accepted answer

It turns out it is quite normal for output that is generated from previous output to converge, sooner or later, towards a constant value. In effect, the output of a network cannot depend only on its own previous output.
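This convergence can be seen in a toy example. If the net's input-to-output mapping behaves like a contraction (slope of magnitude below 1, as a tanh output often is), then feeding outputs back in simply iterates that mapping, and the sequence settles on a fixed point. The mapping below is made up for illustration, not taken from any trained net:

```python
import numpy as np

# Treat the trained net as a fixed input -> output mapping f and iterate it.
f = lambda y: np.tanh(0.8 * y + 0.3)  # illustrative contraction (|slope| < 1)

y = 5.0                               # arbitrary starting value
trajectory = [y]
for _ in range(50):
    y = f(y)                          # feed the output back as the next input
    trajectory.append(y)

# The last steps barely move: the feedback loop has converged to a constant.
print(abs(trajectory[-1] - trajectory[-2]))
```

By the Banach fixed-point argument, each iteration shrinks the distance to the fixed point, so long-horizon closed-loop forecasts flatten out regardless of the starting value.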
