Resilient backpropagation neural network - question about gradient


Question

First of all, I want to say that I am really new to neural networks and I don't understand them very well ;)

I've made my first C# implementation of the backpropagation neural network. I've tested it with XOR and it looks like it works.

Now I would like to change my implementation to use resilient backpropagation (Rprop - http://en.wikipedia.org/wiki/Rprop).

The definition says: "Rprop takes into account only the sign of the partial derivative over all patterns (not the magnitude), and acts independently on each weight."

Could somebody tell me what the partial derivative over all patterns is? And how should I compute this partial derivative for a neuron in the hidden layer?

Thanks a lot

UPDATE:

My implementation is based on this Java code: www_.dia.fi.upm.es/~jamartin/downloads/bpnn.java

My backPropagate method looks like this:

public double backPropagate(double[] targets)
    {
        double error, change;

        // calculate error terms for output
        double[] output_deltas = new double[outputsNumber];

        for (int k = 0; k < outputsNumber; k++)
        {

            error = targets[k] - activationsOutputs[k];
            output_deltas[k] = Dsigmoid(activationsOutputs[k]) * error;
        }

        // calculate error terms for hidden
        double[] hidden_deltas = new double[hiddenNumber];

        for (int j = 0; j < hiddenNumber; j++)
        {
            error = 0.0;

            for (int k = 0; k < outputsNumber; k++)
            {
                error = error + output_deltas[k] * weightsOutputs[j, k];
            }

            hidden_deltas[j] = Dsigmoid(activationsHidden[j]) * error;
        }

        //update output weights
        for (int j = 0; j < hiddenNumber; j++)
        {
            for (int k = 0; k < outputsNumber; k++)
            {
                change = output_deltas[k] * activationsHidden[j]; // = -dE/dw for the weight hidden j -> output k
                weightsOutputs[j, k] = weightsOutputs[j, k] + learningRate * change + momentumFactor * lastChangeWeightsForMomentumOutpus[j, k];
                lastChangeWeightsForMomentumOutpus[j, k] = change;

            }
        }

        // update input weights
        for (int i = 0; i < inputsNumber; i++)
        {
            for (int j = 0; j < hiddenNumber; j++)
            {
                change = hidden_deltas[j] * activationsInputs[i]; // = -dE/dw for the weight input i -> hidden j
                weightsInputs[i, j] = weightsInputs[i, j] + learningRate * change + momentumFactor * lastChangeWeightsForMomentumInputs[i, j];
                lastChangeWeightsForMomentumInputs[i, j] = change;
            }
        }

        // calculate error
        error = 0.0;

        for (int k = 0; k < outputsNumber; k++)
        {
            error = error + 0.5 * (targets[k] - activationsOutputs[k]) * (targets[k] - activationsOutputs[k]);
        }

        return error;
    }

So can I use the change = hidden_deltas[j] * activationsInputs[i] variable as the gradient (partial derivative) for checking the sign?


Solution

I think "over all patterns" simply means "in every iteration"... take a look at the RPROP paper.

For the partial derivative: you have already implemented the normal back-propagation algorithm, which is a method for efficiently calculating the gradient. There you compute the δ values for the individual neurons; multiplied by the incoming activation, each δ gives the negative partial derivative -∂E/∂w of the global error with respect to that weight. So yes, the change = hidden_deltas[j] * activationsInputs[i] value in your code is exactly -∂E/∂w, and its sign is what Rprop looks at.
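
As a concrete illustration (an assumption on my part, not code from the question), the derivative "over all patterns" could be accumulated into a hypothetical gradientInputs array during each epoch; the sign is flipped because change equals -∂E/∂w:

// Hedged sketch: accumulate the per-pattern derivative dE/dw over an epoch
// into a hypothetical gradientInputs array (same shape as weightsInputs).
// 'change' in backPropagate above equals -dE/dw, hence the sign flip here.
for (int i = 0; i < inputsNumber; i++)
{
    for (int j = 0; j < hiddenNumber; j++)
    {
        gradientInputs[i, j] += -(hidden_deltas[j] * activationsInputs[i]);
    }
}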

So instead of multiplying a step by the gradient's magnitude, you keep a separate update value per weight and multiply it by one of two constants (η+ > 1 or η− < 1), depending on whether the sign of the partial derivative has changed since the previous iteration; the weight is then moved by that update value against the sign of the current derivative.
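
To make that concrete, here is a minimal sketch (not from the original answer) of what such an update could look like for the hidden-to-output weights, assuming a per-weight derivative accumulated once per epoch as above. The arrays gradientOutputs, prevGradientOutputs and stepSizeOutputs, and the iRprop− convention of skipping the step after a sign flip, are all assumptions for illustration:

// Rprop-style update, run once per epoch after accumulating gradientOutputs.
// Constants are the typical values from the Rprop literature.
const double EtaPlus = 1.2, EtaMinus = 0.5;
const double StepMax = 50.0, StepMin = 1e-6;

for (int j = 0; j < hiddenNumber; j++)
{
    for (int k = 0; k < outputsNumber; k++)
    {
        double signProduct = gradientOutputs[j, k] * prevGradientOutputs[j, k];

        if (signProduct > 0)
        {
            // Same sign as last epoch: accelerate.
            stepSizeOutputs[j, k] = Math.Min(stepSizeOutputs[j, k] * EtaPlus, StepMax);
        }
        else if (signProduct < 0)
        {
            // Sign flipped: we overshot a minimum, so shrink the step
            // and skip this update (iRprop- convention).
            stepSizeOutputs[j, k] = Math.Max(stepSizeOutputs[j, k] * EtaMinus, StepMin);
            gradientOutputs[j, k] = 0.0;
        }

        // Move against the sign of the derivative; Math.Sign(0.0) is 0,
        // so a zeroed gradient produces no step.
        weightsOutputs[j, k] -= Math.Sign(gradientOutputs[j, k]) * stepSizeOutputs[j, k];

        prevGradientOutputs[j, k] = gradientOutputs[j, k];
        gradientOutputs[j, k] = 0.0; // reset the accumulator for the next epoch
    }
}

With a batch scheme like this, the learningRate and momentumFactor terms in backPropagate are no longer needed: backPropagate only accumulates derivatives, and the Rprop update runs once per epoch.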
