Library neurolab training newff

Problem description

I am pretty new to using Python and Neurolab, and I have a problem with the training of my feed-forward neural network. I have built the net as follows:

import neurolab as nl

# 64 inputs in [-1, 1], one hidden layer of 60 neurons, 1 output neuron
net = nl.net.newff([[-1, 1]] * 64, [60, 1])
net.init()
testerr = net.train(InputT, TargetT, epochs=100, show=1)

My target output is a vector of values between 0 and 4. When I use nl.train.train_bfgs, I see the following in the console:

testerr = net.train(InputT, TargetT, epochs=100, show=1)
Epoch: 1; Error: 55670.4462766;
Epoch: 2; Error: 55649.5;

As you can see, I fixed the number of epochs at 100, but it stops at the second epoch, and after testing the net with Netresults = net.sim(InputCross) the test output array is a vector of ones (totally wrong). If I use the other training functions, I get the same testing output vector full of ones, but in that case the epochs reach the number I set, while the displayed error never changes. The same happens if the target output vector is between -1 and 1. Any suggestions? Thank you very much!

Answer

Finally, after a few hours with the same problem, I more or less solved it.

Here is what is happening: Neurolab uses train_bfgs as its standard training algorithm. train_bfgs runs fmin_bfgs from scipy.optimize, passing it a function, epochf, as an argument. This function MUST be run after each iteration when training the network, in order for Neurolab to exit properly. Sadly, fmin_bfgs fails to do this when "Optimization terminated successfully" occurs (one can pass self.kwargs['disp'] = 1 to fmin_bfgs from /neurolab/train/spo.py to see the output from scipy). I have not investigated further why fmin_bfgs returns "Optimization terminated successfully", but it has to do with the error converging.
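
To make the mechanism concrete, here is a minimal, self-contained sketch of that interaction (the names make_epoch_hook and TrainStop are illustrative assumptions, not Neurolab's actual source): a per-iteration callback counts epochs and raises an exception to stop training, but when fmin_bfgs converges on its own it simply returns, and the callback is never called again.

import numpy as np
from scipy.optimize import fmin_bfgs

class TrainStop(Exception):
    # raised from the per-iteration hook to abort the optimizer
    pass

def make_epoch_hook(max_epochs):
    state = {'epoch': 0}
    def epochf(xk):
        # fmin_bfgs invokes this callback once per iteration
        state['epoch'] += 1
        print('Epoch:', state['epoch'])
        if state['epoch'] >= max_epochs:
            raise TrainStop()  # the intended way to end training early
    return epochf

f = lambda x: float(np.sum((x - 3.0) ** 2))  # toy objective standing in for the net error

try:
    fmin_bfgs(f, np.zeros(2), callback=make_epoch_hook(100), disp=True)
except TrainStop:
    print('stopped by the epoch hook')

Because this toy objective converges in a couple of iterations, fmin_bfgs returns on its own with "Optimization terminated successfully" and the hook fires only a few times -- the same early-exit pattern described above.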

I have tried Python 2.7 and Python 3 with scipy versions 0.12.0 to 0.15 without this behavior changing (as suggested elsewhere).

My solution is simply to switch from train_bfgs to regular train_gd (gradient descent), but I guess any other training algorithm would work as well:

net = nl.net.newff(inputNodes, [hidden, output])

# change the training function from the default train_bfgs
net.trainf = nl.train.train_gd
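
Any of the other trainers exposed in nl.train (for example train_gdx or train_rprop) should be assignable in the same way, if plain gradient descent turns out to converge too slowly.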

For completeness, the code I tested with was:

import neurolab as nl

hidden = 10
output = 1
# input and target are the same single binary feature in [0, 1]
test = [[0], [0], [0], [1], [1]]
net = nl.net.newff([[0, 1]], [hidden, output])
err = net.train(test, test, epochs=500, show=1)
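
Applying the workaround to that same test case might look like this (a sketch; the learning rate is left at train_gd's default):

import neurolab as nl

test = [[0], [0], [0], [1], [1]]
net = nl.net.newff([[0, 1]], [10, 1])
net.trainf = nl.train.train_gd  # swap out the default train_bfgs
err = net.train(test, test, epochs=500, show=100)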

The problem only occurs sometimes, so repeated tests are needed.
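
Since the failure is intermittent, one way to check for it is to run the same training repeatedly and count how many runs stop early. A sketch, assuming net.train returns the list of per-epoch errors as the Neurolab docs describe:

import neurolab as nl

test = [[0], [0], [0], [1], [1]]
early_stops = 0
for trial in range(10):
    net = nl.net.newff([[0, 1]], [10, 1])
    # goal=0.0 so that reaching the error goal cannot cause a legitimate early stop
    err = net.train(test, test, epochs=500, show=500, goal=0.0)
    if len(err) < 500:
        early_stops += 1
print(early_stops, 'of 10 runs stopped before 500 epochs')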

https://github.com/zueve/neurolab/Issues/25

Good luck!
