Epoch vs Iteration when training neural networks

Question

What is the difference between epoch and iteration when training a multi-layer perceptron?

Answer

In the neural network terminology:

  • one epoch = one forward pass and one backward pass of all the training examples
  • batch size = the number of training examples in one forward/backward pass. The higher the batch size, the more memory space you'll need.
  • number of iterations = number of passes, each pass using [batch size] number of examples. To be clear, one pass = one forward pass + one backward pass (we do not count the forward pass and backward pass as two different passes).

Example: if you have 1000 training examples, and your batch size is 500, then it will take 2 iterations to complete 1 epoch.
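
To make the arithmetic concrete, here is a minimal Python sketch of a mini-batch training loop (the dummy arrays and the commented-out train_step call are placeholders, not part of the original answer). It reproduces the case above: 1000 examples with a batch size of 500, so each epoch takes 2 iterations.

```python
import numpy as np

n_examples = 1000   # size of the training set
batch_size = 500    # examples per forward/backward pass
n_epochs = 3

X = np.random.randn(n_examples, 10)       # dummy features
y = np.random.randint(0, 2, n_examples)   # dummy labels

# iterations per epoch = ceil(number of examples / batch size) -> 2 here
iterations_per_epoch = int(np.ceil(n_examples / batch_size))

iteration = 0
for epoch in range(n_epochs):
    # one epoch = one forward + one backward pass over ALL training examples
    for start in range(0, n_examples, batch_size):
        X_batch = X[start:start + batch_size]   # one mini-batch
        y_batch = y[start:start + batch_size]
        # train_step(X_batch, y_batch)  # hypothetical forward + backward pass
        iteration += 1                  # one iteration = one mini-batch pass
    print(f"epoch {epoch + 1} done after {iterations_per_epoch} iterations "
          f"({iteration} total)")
```

Running it prints 2 iterations per epoch, so the 3 epochs take 6 iterations in total.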

For reference: Tradeoff batch size vs. number of iterations to train a neural network

The term "batch" is ambiguous: some people use it to designate the entire training set, and some people use it to refer to the number of training examples in one forward/backward pass (as I did in this answer). To avoid that ambiguity and make clear that batch corresponds to the number of training examples in one forward/backward pass, one can use the term mini-batch.
