How does a back-propagation training algorithm work?


Problem description

I've been trying to learn how back-propagation works with neural networks, but I have yet to find a good explanation from a less technical perspective.

How does back-propagation work? How does it learn from a provided training dataset? I will have to code this, but before then I need to gain a stronger understanding of it.

Recommended answer

Back-propagation works with logic very similar to that of feed-forward; the difference is the direction of data flow. In the feed-forward step, you have the inputs and the output observed from them, and you propagate the values forward through the neurons ahead.

In the back-propagation step, you cannot know the error made by every neuron, only the errors in the output layer. Calculating the errors of the output nodes is straightforward: you take the difference between the neuron's output and the actual output for that instance in the training set. The neurons in the hidden layers must update their errors from this, so you have to pass the error values back to them. From these values, each hidden neuron can update its error and other parameters using the weighted sum of the errors from the layer ahead.
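As a rough sketch of that backward pass, the following continues the feed-forward example above. The squared-error loss, the sigmoid derivative output * (1 - output), and the learning rate lr are assumptions for illustration; the answer itself does not prescribe them.

```python
def backprop_step(x, target, W1, b1, W2, b2, lr=0.1):
    hidden, output = feed_forward(x, W1, b1, W2, b2)

    # Output-layer error: the difference between the neuron's output and the
    # actual output for this training instance, scaled by the sigmoid slope.
    delta_out = (output - target) * output * (1.0 - output)

    # Hidden-layer error: the weighted sum of the errors from the layer
    # ahead, passed back through the transposed weight matrix.
    delta_hidden = (W2.T @ delta_out) * hidden * (1.0 - hidden)

    # Gradient-descent updates for both layers.
    W2 -= lr * np.outer(delta_out, hidden)
    b2 -= lr * delta_out
    W1 -= lr * np.outer(delta_hidden, x)
    b1 -= lr * delta_hidden
    return W1, b1, W2, b2
```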

A step-by-step demo of the feed-forward and back-propagation steps can be found here.

If you're a beginner to neural networks, you can begin by learning the Perceptron, then advance to NN, which is actually a multilayer perceptron.

