Shuffling training data with LSTM RNN

Question

Since an LSTM RNN uses previous events to predict current sequences, why do we shuffle the training data? Don't we lose the temporal ordering of the training data? How is it still effective at making predictions after being trained on shuffled training data?

Answer

In general, when you shuffle the training data (a set of sequences), you shuffle the order in which sequences are fed to the RNN; you don't shuffle the ordering within individual sequences. This is fine to do when your network is stateless.
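For instance, here is a minimal NumPy sketch of that distinction (the array names and shapes are made up for illustration): the permutation reorders whole sequences, while each sequence keeps its internal time order.

```python
import numpy as np

rng = np.random.default_rng(0)

# X: 100 independent training sequences, each 20 time steps of 8 features.
X = rng.normal(size=(100, 20, 8))
y = rng.normal(size=(100, 1))

# Shuffle which sequence is seen first...
perm = rng.permutation(len(X))
X_shuffled, y_shuffled = X[perm], y[perm]

# ...but X_shuffled[i] still contains its 20 time steps in the original order.
```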

Stateless case:

The network's memory only persists for the duration of a sequence. Training on sequence B before sequence A doesn't matter because the network's memory state does not persist across sequences.
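As a concrete example, a stateless LSTM can be trained with shuffling enabled. This is a sketch assuming TensorFlow 2.x Keras, with made-up shapes and dummy data:

```python
import numpy as np
import tensorflow as tf

# Dummy data: 100 independent sequences of 20 time steps, 8 features each.
X = np.random.normal(size=(100, 20, 8)).astype("float32")
y = np.random.normal(size=(100, 1)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(20, 8)),  # stateful=False by default
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# shuffle=True permutes the order of whole sequences each epoch;
# the time steps inside each sequence are untouched.
model.fit(X, y, batch_size=10, epochs=5, shuffle=True)
```

Because the state is reset after every sequence, the arrival order of sequences carries no information, and shuffling only serves to decorrelate consecutive gradient updates.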

On the other hand:

Stateful case:

The network's memory persists across sequences. Here, you cannot blindly shuffle your data and expect optimal results. Sequence A should be fed to the network before sequence B because A comes before B, and we want the network to evaluate sequence B with memory of what was in sequence A.
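By contrast, a stateful setup must preserve the feed order. A sketch, again assuming TensorFlow 2.x Keras with made-up shapes, where the data stands in for consecutive chunks of one long chronological stream:

```python
import numpy as np
import tensorflow as tf

# Dummy data standing in for consecutive chunks of a chronological stream.
X = np.random.normal(size=(100, 20, 8)).astype("float32")
y = np.random.normal(size=(100, 1)).astype("float32")

model = tf.keras.Sequential([
    # stateful=True requires a fixed batch size; sample i of each batch
    # continues from where sample i of the previous batch left off.
    tf.keras.layers.LSTM(32, batch_input_shape=(10, 20, 8), stateful=True),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

for epoch in range(5):
    # shuffle=False preserves the feed order, so sequence A is still seen
    # before sequence B within every epoch.
    model.fit(X, y, batch_size=10, epochs=1, shuffle=False)
    # Clear the carried-over memory only at epoch boundaries.
    model.reset_states()
```

The manual epoch loop exists only so that states can be cleared between passes over the data; within a pass, state flows from batch to batch.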
