Part 2: Resilient backpropagation neural network

Problem Description

This is a follow-on question to this post (http://stackoverflow.com/questions/2865057/resilient-backpropagation-neural-network-question-about-gradient). For a given neuron, I'm unclear as to how to take the partial derivative of its error with respect to its weight.

Working from this web page, it's clear how the propagation works (although I'm dealing with Resilient Propagation). For a feedforward neural network, we have to: 1) while moving forward through the neural net, trigger the neurons; 2) from the output-layer neurons, calculate a total error; then 3) moving backwards, propagate that error through each weight in a neuron; then 4) coming forward again, update the weights in each neuron.
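
A minimal sketch of those four steps, assuming a tiny one-hidden-layer network with sigmoid activations (all names and sizes here are hypothetical, and plain gradient descent stands in for the Rprop update, which is discussed further below):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical 2-3-1 network: weights W1, W2 and biases b1, b2.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)
x, target = np.array([0.5, -0.2]), np.array([1.0])
lr = 0.1  # learning rate (not used by Rprop, which is sign-based)

# 1) Forward pass: trigger each neuron, layer by layer.
h = sigmoid(W1 @ x + b1)
y = sigmoid(W2 @ h + b2)

# 2) Total error at the output layer (sum of squared errors).
error = 0.5 * np.sum((target - y) ** 2)

# 3) Backward pass: propagate the error back through each weight.
delta_out = (y - target) * y * (1 - y)          # dE/d(net) at the output
delta_hid = (W2.T @ delta_out) * h * (1 - h)    # dE/d(net) at the hidden layer

# 4) Update every weight (and bias) from the deltas.
W2 -= lr * np.outer(delta_out, h); b2 -= lr * delta_out
W1 -= lr * np.outer(delta_hid, x); b1 -= lr * delta_hid
```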

Precisely, though, these are the things I don't understand.

A) For each neuron, how do you calculate the partial derivative of its error with respect to one of its weights, ∂E/∂w? My confusion is that, in calculus, a partial derivative is computed in terms of an n-variable function. I sort of understand ldog's and Bayer's answers in this post (http://stackoverflow.com/questions/2190732/understanding-the-neural-network-backpropagation). I even understand the chain rule. But it doesn't gel when I think, precisely, of how to apply it to the results of i) a linear combiner and ii) a sigmoid activation function.
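
A worked version of that chain rule, assuming the usual textbook setup (not notation from the post): the linear combiner gives the net input $net_j = \sum_i w_{ij} x_i$, the sigmoid gives the output $o_j = \sigma(net_j)$, and the error for target $t_j$ is $E = \tfrac{1}{2}(t_j - o_j)^2$. Then:

```latex
\frac{\partial E}{\partial w_{ij}}
  = \frac{\partial E}{\partial o_j}
    \cdot \frac{\partial o_j}{\partial net_j}
    \cdot \frac{\partial net_j}{\partial w_{ij}}
  = \underbrace{(o_j - t_j)}_{\text{error term}}
    \cdot \underbrace{o_j (1 - o_j)}_{\text{sigmoid derivative}}
    \cdot \underbrace{x_i}_{\text{linear-combiner input}}
```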

B) Using the Resilient Propagation approach, how would you change the bias in a given neuron? Or is there no bias or threshold in a NN that uses Resilient Propagation training?

C) How do you propagate a total error if there are two or more output neurons? Does the total-error × neuron-weight step happen for each output neuron's value?
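
For what it's worth, the usual textbook convention for several output neurons (an assumption on my part, not something the answer below confirms) is that the total error sums over the outputs, and each hidden neuron $j$ accumulates a delta from every output $k$ it feeds:

```latex
E = \frac{1}{2} \sum_{k \in \text{outputs}} (t_k - o_k)^2,
\qquad
\delta_j = o_j (1 - o_j) \sum_{k} \delta_k \, w_{jk}
```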

Thanks

Recommended Answer

Not 100% sure on the other points, but I can answer B at this moment:

B) The bias is updated based on the direction of the partial derivative, not on its magnitude. The size of the weight update is increased if the direction remains unchanged over consecutive iterations; oscillating directions reduce the size of the update. http://nopr.niscair.res.in/bitstream/123456789/8460/1/IJEMS%2012(5)%20434-442.pdf
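
A minimal sketch of that sign-based rule (roughly the iRprop− variant; the hyperparameter values are typical textbook choices, not taken from the paper linked above). The bias is treated exactly like any other weight, since a bias is just a weight on a constant input of 1:

```python
import numpy as np

# Hypothetical Rprop hyperparameters (common textbook values).
ETA_PLUS, ETA_MINUS = 1.2, 0.5
STEP_MAX, STEP_MIN = 50.0, 1e-6

def rprop_update(w, grad, prev_grad, step):
    """One Rprop step for a parameter array `w` (weights or biases).

    Only the *sign* of the gradient is used; `step` holds the
    per-parameter update size, grown when the gradient's sign repeats
    and shrunk when it flips (oscillation).
    """
    sign_change = grad * prev_grad
    step = np.where(sign_change > 0, np.minimum(step * ETA_PLUS, STEP_MAX), step)
    step = np.where(sign_change < 0, np.maximum(step * ETA_MINUS, STEP_MIN), step)
    # On a sign flip, zero the stored gradient so the next step
    # is not shrunk a second time (iRprop- convention).
    grad = np.where(sign_change < 0, 0.0, grad)
    w = w - np.sign(grad) * step
    return w, grad, step

# Usage: update the biases of a 3-neuron layer.
b = np.zeros(3)
step = np.full_like(b, 0.1)        # initial update size Delta_0
prev_g = np.zeros_like(b)
g = np.array([0.3, -0.1, 0.0])     # dE/db from backpropagation
b, prev_g, step = rprop_update(b, g, prev_g, step)
```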
