Neural network learning algorithm with Heaviside/step function

This article looks at training a neural network with a Heaviside/step activation function; the question and answer below may be a useful reference for anyone facing the same problem.

Problem Description


Is there any implementation (or straightforward description) of a training algorithm for feed-forward neural networks which doesn't use a sigmoid or linear squashing function, but a non-differentiable one, such as the Heaviside function?

I already found a paper on such an algorithm, but no corresponding implementation, which I find bewildering; it seems to me there should be something out there.

Any hints?

Solution

Backpropagation will not work with the Heaviside function because its derivative is zero over the entire domain, except at the point zero, where it is infinite. That is, the derivative of the Heaviside function is the Dirac delta.

The consequence is that the gradient is zero for any value other than zero, so no progress can be made. At the point zero, the derivative is infinite, so the update step is not manageable there either.
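To make this concrete, here is a minimal sketch (not part of the original answer; the single-neuron setup and squared-error loss are illustrative assumptions) showing that the backpropagation gradient vanishes under a Heaviside activation:

```python
# Why backpropagation stalls with a Heaviside activation: the chain rule
# multiplies by the activation's derivative, which is 0 everywhere
# except at zero (where it is the Dirac delta).
import numpy as np

def heaviside(z):
    return (z >= 0).astype(float)

def heaviside_grad(z):
    # Derivative is 0 for z != 0 and undefined (infinite) at z == 0,
    # so any practical implementation returns 0 almost everywhere.
    return np.zeros_like(z)

# One gradient-descent step for a single neuron with squared-error loss.
x = np.array([1.0, -2.0])   # input
w = np.array([0.5, 0.3])    # weights
target = 1.0

z = w @ x                   # pre-activation
y = heaviside(z)            # output
# Chain rule: dL/dw = (y - target) * h'(z) * x
grad_w = (y - target) * heaviside_grad(z) * x

print(grad_w)               # a zero gradient: the weights never change
```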

You can find an implementation of this function in Java online (http://franck.fleurey.free.fr/NeuralNetwork/javadoc/NeuralNetwork/HeavisideActivationFunction.html), but I still don't think using it is a good idea. If you increase the gamma factor in the sigmoid function, it becomes a very decent approximation of the Heaviside function, with the added benefit of differentiability.
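As a quick illustration (my own sketch, not from the answer; "gamma" is taken here to be a gain factor multiplying the sigmoid's argument), increasing gamma sharpens the sigmoid toward a step while keeping its derivative finite:

```python
# sigma(gamma * z) approaches the Heaviside step as gamma grows,
# while remaining differentiable everywhere.
import numpy as np

def sigmoid(z, gamma=1.0):
    return 1.0 / (1.0 + np.exp(-gamma * z))

z = np.array([-1.0, -0.1, 0.1, 1.0])
for gamma in (1, 10, 100):
    print(gamma, np.round(sigmoid(z, gamma), 4))
# gamma=1:   [0.2689 0.475  0.525  0.7311]
# gamma=10:  [0.     0.2689 0.7311 1.    ]  (approximately)
# gamma=100: [0.     0.     1.     1.    ]  (approximately)
# The derivative gamma * s * (1 - s) stays finite, so backprop still works.
```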

Check this paper to see if it has any information that might be of help to you.

This concludes the article on neural network learning algorithms with the Heaviside/step function. We hope the answer above is helpful, and thank you for supporting IT屋!
