Shuffling training data with LSTM RNN

Question

Since an LSTM RNN uses previous events to predict current sequences, why do we shuffle the training data? Don't we lose the temporal ordering of the training data? How is it still effective at making predictions after being trained on shuffled training data?

Answer

In general, when you shuffle the training data (a set of sequences), you shuffle the order in which sequences are fed to the RNN; you do not shuffle the ordering within individual sequences. This is fine to do when your network is stateless.
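As a minimal sketch of what "shuffling" means here (the array shapes are hypothetical, not from the original answer): the permutation is applied to the sequence axis only, so each sequence's internal timestep order survives intact.

```python
import numpy as np

# Hypothetical training set: 100 independent sequences,
# each with 20 timesteps of 8 features.
X = np.random.randn(100, 20, 8)   # (sequences, timesteps, features)
y = np.random.randn(100, 1)

# Shuffle which sequence is seen first; the timestep order
# inside each sequence is left untouched.
perm = np.random.permutation(len(X))
X_shuffled, y_shuffled = X[perm], y[perm]
```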

Stateless case:

The network's memory only persists for the duration of a sequence. Training on sequence B before sequence A doesn't matter because the network's memory state does not persist across sequences.
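A minimal sketch of the stateless case in PyTorch (the data, model sizes, and variable names are illustrative assumptions): because the hidden state is zero-initialized for every batch, the order in which sequences arrive is irrelevant, and shuffling them is safe.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical data: 100 independent sequences, 20 timesteps, 8 features.
X = torch.randn(100, 20, 8)
y = torch.randn(100, 1)

# shuffle=True reorders whole sequences between epochs.
loader = DataLoader(TensorDataset(X, y), batch_size=10, shuffle=True)

lstm = nn.LSTM(input_size=8, hidden_size=32, batch_first=True)
head = nn.Linear(32, 1)
opt = torch.optim.Adam(list(lstm.parameters()) + list(head.parameters()))
loss_fn = nn.MSELoss()

for xb, yb in loader:
    out, _ = lstm(xb)                  # state implicitly starts at zeros per batch
    loss = loss_fn(head(out[:, -1]), yb)
    opt.zero_grad()
    loss.backward()
    opt.step()
```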

On the other hand:

Stateful case:

The network's memory persists across sequences. Here, you cannot blindly shuffle your data and expect optimal results. Sequence A should be fed to the network before sequence B because A comes before B, and we want the network to evaluate sequence B with memory of what was in sequence A.
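By contrast, here is a minimal sketch of the stateful case under the same illustrative assumptions: consecutive chunks of one long series are fed in temporal order, and the hidden state is carried forward from chunk to chunk instead of being reset.

```python
import torch
import torch.nn as nn

# Hypothetical: one long series split into 10 consecutive chunks
# (chunk 0 precedes chunk 1, and so on), batch size 1.
chunks  = torch.randn(10, 1, 20, 8)
targets = torch.randn(10, 1, 1)

lstm = nn.LSTM(input_size=8, hidden_size=32, batch_first=True)
head = nn.Linear(32, 1)
opt = torch.optim.Adam(list(lstm.parameters()) + list(head.parameters()))
loss_fn = nn.MSELoss()

state = None  # hidden/cell state carried across chunks, NOT reset
for x, t in zip(chunks, targets):      # temporal order: do NOT shuffle
    out, state = lstm(x, state)
    # Detach so gradients stop at the chunk boundary (truncated BPTT)
    # while the state values themselves still flow forward.
    state = (state[0].detach(), state[1].detach())
    loss = loss_fn(head(out[:, -1]), t)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Frameworks expose the same idea directly; in Keras, for example, this corresponds to building the LSTM with stateful=True and training with fit(..., shuffle=False).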
