Looping through training data in Neural Networks Backpropagation Algorithm

Problem description

How many times do I use each sample of training data in one training cycle? Say I have 60 training instances. I go through the 1st row, do a forward pass, and adjust the weights using the results of the backward pass, using the sigmoid function as below:

Forward pass
S_i = sum_j (W_ij * U_j)
U_i = f(S_i) = 1 / (1 + e^(-S_i))

Backward pass
Output cell delta = (expected - U_i) * f'(S_i), where
f'(S_i) = U_i * (1 - U_i)
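As a rough illustration, the forward pass and the output-cell delta above might be sketched in Python like this (the function names and the single-output-unit simplification are mine, not from the original question):

```python
import math

def sigmoid(s):
    """f(S) = 1 / (1 + e^(-S))"""
    return 1.0 / (1.0 + math.exp(-s))

def forward(weights, inputs):
    """Forward pass for one unit: S_i = sum_j(W_ij * U_j), then U_i = f(S_i)."""
    s = sum(w * u for w, u in zip(weights, inputs))
    return sigmoid(s)

def output_delta(expected, u):
    """Output-cell delta: (expected - U_i) * f'(S_i), with f'(S_i) = U_i * (1 - U_i)."""
    return (expected - u) * u * (1.0 - u)
```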

Do I then go through the 2nd row and do the same process as the 1st, or do I keep going over the 1st row until its error is small enough?

I hope someone can help.

Answer

Training the network

You should use each instance of the training set once per training epoch.

A training epoch is a complete cycle through your dataset.

After you've looped through the dataset and calculated the deltas, you should adjust the weights of the network. Then you may perform a new forward pass on the neural network and do another training epoch, looping through your training dataset.

Graphical representation
A really great graphical representation of backpropagation may be found at this link.

There are two approaches to training your network to perform classification on a dataset. The easiest method is called single-step or online learning. This is the method you will find in most literature, and it is also the fastest to converge. As you train your network, you calculate the deltas for each layer and adjust the weights for every instance of your dataset.

Thus if you have a dataset of 60 instances, this means you should have adjusted the weights 60 times before the training epoch is over.
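To illustrate, a single-step (online) epoch for one sigmoid output unit might look like the sketch below; `train_online`, the learning rate `lr`, and the single-unit simplification are assumptions for illustration, not part of the answer:

```python
import math

def _sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def train_online(dataset, weights, lr=0.5, epochs=1):
    """Online learning: one weight update per instance.
    `dataset` is a list of (inputs, expected) pairs; with 60 instances,
    the weights change 60 times per epoch."""
    for _ in range(epochs):
        for inputs, expected in dataset:
            # forward pass for this instance
            u = _sigmoid(sum(w * x for w, x in zip(weights, inputs)))
            # backward pass: output-cell delta
            delta = (expected - u) * u * (1.0 - u)
            # adjust the weights immediately, before the next instance
            weights = [w + lr * delta * x for w, x in zip(weights, inputs)]
    return weights
```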

The other approach is called batch training or offline learning. This approach often yields a network with a lower residual error. When you train the network you should calculate the deltas for each layer for every instance of the dataset, and then finally average the individual deltas and correct the weights once per epoch.

If you have a dataset of 60 instances, this means you should have adjusted the weights once before the training epoch is over.
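The batch variant can be sketched in the same simplified setting; again, `train_batch` and the averaging of per-instance gradients into one update per epoch are illustrative assumptions, not code from the answer:

```python
import math

def _sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def train_batch(dataset, weights, lr=0.5, epochs=1):
    """Batch (offline) learning: accumulate the per-instance deltas,
    average them, and adjust the weights once per epoch."""
    for _ in range(epochs):
        grads = [0.0] * len(weights)
        for inputs, expected in dataset:
            # forward and backward pass, but no weight update yet
            u = _sigmoid(sum(w * x for w, x in zip(weights, inputs)))
            delta = (expected - u) * u * (1.0 - u)
            grads = [g + delta * x for g, x in zip(grads, inputs)]
        n = len(dataset)
        # single correction per epoch, using the averaged deltas
        weights = [w + lr * g / n for w, g in zip(weights, grads)]
    return weights
```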
