MATLAB's 'fminsearch' different from Octave's 'fmincg'
Question
I am trying to get consistent answers for a simple optimization problem between two functions in MATLAB and Octave. Here is my code:
options = optimset('MaxIter', 500, 'Display', 'iter', 'MaxFunEvals', 1000);
objFunc = @(t) lrCostFunction(t, X, y);
[result1] = fminsearch(objFunc, theta, options);
[result2] = fmincg(objFunc, theta, options);
(Bear in mind that X, y, and theta are defined earlier and are correct.) The problem is the following: when I run the above code in MATLAB using fmincg (comment out fminsearch), I get the correct answer.
However, if I comment out fmincg and run fminsearch instead, I get no convergence whatsoever. In fact, the output looks like this:
 Iteration   Func-count     min f(x)         Procedure
   491          893         0.692991         reflect
492 894 0.692991 reflect
493 895 0.692991 reflect
494 896 0.692991 reflect
495 897 0.692991 reflect
496 898 0.692991 reflect
497 899 0.692991 reflect
498 900 0.692991 reflect
499 901 0.692991 reflect
500 902 0.692991 reflect
Exiting: Maximum number of iterations has been exceeded
- increase MaxIter option.
Current function value: 0.692991
Increasing the number of iterations doesn't do jack. In contrast, when using fmincg, I see it converging, and it finally gives me the correct result:
Iteration 1 | Cost: 2.802128e-001
Iteration 2 | Cost: 9.454389e-002
Iteration 3 | Cost: 5.704641e-002
Iteration 4 | Cost: 4.688190e-002
Iteration 5 | Cost: 3.759021e-002
Iteration 6 | Cost: 3.522008e-002
Iteration 7 | Cost: 3.234531e-002
Iteration 8 | Cost: 3.145034e-002
Iteration 9 | Cost: 3.008919e-002
Iteration 10 | Cost: 2.994639e-002
Iteration 11 | Cost: 2.678528e-002
Iteration 12 | Cost: 2.660323e-002
Iteration 13 | Cost: 2.493301e-002
.
.
.
Iteration 493 | Cost: 1.311466e-002
Iteration 494 | Cost: 1.311466e-002
Iteration 495 | Cost: 1.311466e-002
Iteration 496 | Cost: 1.311466e-002
Iteration 497 | Cost: 1.311466e-002
Iteration 498 | Cost: 1.311466e-002
Iteration 499 | Cost: 1.311466e-002
Iteration 500 | Cost: 1.311466e-002
This gives the correct answer. So what gives? Why is fminsearch not working in this minimization case?
Additional context:
1) Octave is the language that has fmincg, btw; however, a quick Google search also retrieves this function, and my MATLAB can call either.
2) My problem has a convex error surface that is everywhere differentiable.
3) I only have access to fminsearch and fminbnd (which I can't use since this problem is multivariate, not univariate), so that leaves fminsearch. Thanks!
Answer
I assume that fmincg is implementing a conjugate-gradient-type optimization, while fminsearch is a derivative-free optimization method (Nelder-Mead simplex, as the "reflect" steps in your log show). So why would you expect them to give the same results? They are completely different algorithms.
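The distinction can be sketched with SciPy stand-ins (an assumed analogue, not your actual MATLAB/Octave setup): `'Nelder-Mead'` plays the role of fminsearch (derivative-free simplex) and `'CG'` the role of fmincg (nonlinear conjugate gradient). On a small convex problem both reach the same minimum, but by entirely different mechanisms.

```python
import numpy as np
from scipy.optimize import minimize

# A convex quadratic with a unique minimum at (1, -0.5).
f = lambda t: (t[0] - 1.0) ** 2 + 2.0 * (t[1] + 0.5) ** 2
t0 = np.zeros(2)

# Derivative-free simplex search (fminsearch analogue):
# walks a simplex via reflect/expand/contract steps, no gradients used.
res_nm = minimize(f, t0, method='Nelder-Mead')

# Nonlinear conjugate gradient (fmincg analogue):
# follows gradient information along conjugate search directions.
res_cg = minimize(f, t0, method='CG')

print(res_nm.x)  # close to [1.0, -0.5]
print(res_cg.x)  # close to [1.0, -0.5]
```

In two dimensions both succeed; the practical gap shows up as dimensionality grows, where simplex methods need far more function evaluations and can stall long before gradient-based methods do.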
I would expect fminsearch to find the global minimum of a convex cost function. At least, that has been my experience so far.
The first line of fminsearch's output suggests that objFunc(theta) is ~0.69, but this value is very different from the cost values in fmincg's output. So I would look for possible bugs outside fminsearch. Make sure you are giving the same cost function and initial point to both algorithms.