NLopt SLSQP discards good solution in favour of older, worse solution


Question


I'm solving a standard optimisation problem from Finance - portfolio optimisation. The vast majority of the time, NLopt is returning a sensible solution. However, on rare occasions, the SLSQP algorithm appears to iterate to the correct solution, and then for no obvious reason it chooses to return a solution from about one third of the way through the iterative process that is very obviously suboptimal. Interestingly, changing the initial parameter vector by a very small amount can fix the problem.


I have managed to isolate a relatively simple working example of the behaviour I am talking about. Apologies that the numbers are a bit messy. It was the best I could do. The following code can be cut-and-pasted into a Julia REPL and will run and print values of the objective function and parameters each time NLopt calls the objective function. I call the optimisation routine twice. If you scroll back through the output that is printed by the code below, you'll notice on the first call, the optimisation routine iterates to a good solution with objective function value of 0.0022 but then for no apparent reason goes back to a much earlier solution where the objective function is 0.0007, and returns it instead. The second time I call the optimisation function, I use a slightly different starting vector of parameters. Again, the optimisation routine iterates to the same good solution, but this time it returns the good solution with objective function value of 0.0022.


So, the question: Does anyone know why in the first case SLSQP abandons the good solution in favour of a much poorer one from only about a third of the way through the iterative process? If so, is there any way I can fix this?

#-------------------------------------------
#Load NLopt package (LinearAlgebra provides dot on Julia >= 0.7)
using NLopt
using LinearAlgebra
#Define objective function for the portfolio optimisation problem (maximise expected return subject to variance constraint)
function obj_func!(param::Vector{Float64}, grad::Vector{Float64}, meanVec::Vector{Float64}, covMat::Matrix{Float64})
    if length(grad) > 0
        tempGrad = meanVec - covMat * param
        for j = 1:length(grad)
            grad[j] = tempGrad[j]
        end
        println("Gradient vector = " * string(grad))
    end
    println("Parameter vector = " * string(param))
    fOut = dot(param, meanVec) - (1/2)*dot(param, covMat*param)
    println("Objective function value = " * string(fOut))
    return(fOut)
end
#Define standard equality constraint for the portfolio optimisation problem
function eq_con!(param::Vector{Float64}, grad::Vector{Float64})
    if length(grad) > 0
        for j = 1:length(grad)
            grad[j] = 1.0
        end
    end
    return(sum(param) - 1.0)
end
#Function to call the optimisation process with appropriate input parameters
function do_opt(meanVec::Vector{Float64}, covMat::Matrix{Float64}, paramInit::Vector{Float64})
    opt1 = Opt(:LD_SLSQP, length(meanVec))
    lower_bounds!(opt1, [0.0, 0.0, 0.05, 0.0, 0.0, 0.0])
    upper_bounds!(opt1, [1.0, 1.0, 1.0, 1.0, 1.0, 1.0])
    equality_constraint!(opt1, eq_con!)
    ftol_rel!(opt1, 0.000001)
    fObj = ((param, grad) -> obj_func!(param, grad, meanVec, covMat))
    max_objective!(opt1, fObj)
    (fObjOpt, paramOpt, flag) = optimize(opt1, paramInit)
    println("Returned parameter vector = " * string(paramOpt))
    println("Return objective function = " * string(fObjOpt))
end
#-------------------------------------------
#Inputs to optimisation
meanVec = [0.00238374894628471,0.0006879970888824095,0.00015027322404371585,0.0008440624572209092,-0.004949409024535505,-0.0011493778903180567]
covMat = [8.448145928621056e-5 1.9555283947528615e-5 0.0 1.7716366331331983e-5 1.5054664977783003e-5 2.1496436765051825e-6;
          1.9555283947528615e-5 0.00017068536691928327 0.0 1.4272576023325365e-5 4.2993023110905543e-5 1.047156519965148e-5;
          0.0 0.0 0.0 0.0 0.0 0.0;
          1.7716366331331983e-5 1.4272576023325365e-5 0.0 6.577888700124854e-5 3.957059294420261e-6 7.365234067319808e-6;
          1.5054664977783003e-5 4.2993023110905543e-5 0.0 3.957059294420261e-6 0.0001288060347757139 6.457128839875466e-6;
          2.1496436765051825e-6 1.047156519965148e-5 0.0 7.365234067319808e-6 6.457128839875466e-6 0.00010385067478418426]
paramInit = [0.0,0.9496114216578236,0.050388578342176464,0.0,0.0,0.0]

#Call the optimisation function
do_opt(meanVec, covMat, paramInit)

#Re-define initial parameters to very similar numbers
paramInit = [0.0,0.95,0.05,0.0,0.0,0.0]

#Call the optimisation function again
do_opt(meanVec, covMat, paramInit)


Note: I know my covariance matrix is positive-semi-definite, rather than positive definite. This is not the source of the issue. I've confirmed this by altering the diagonal element of the zero row to a small, but significantly non-zero value. The issue is still present in the above example, as well as others that I can randomly generate.
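The positive-semi-definiteness claim is easy to verify numerically: the zero row/column forces an exact zero eigenvalue, while the remaining eigenvalues should be strictly positive. A quick NumPy sketch of that check (Python used here for illustration; the matrix is the one from the Julia code above):

```python
# Verify covMat is positive-semi-definite but not positive definite:
# its smallest eigenvalue should be (numerically) zero.
import numpy as np

covMat = np.array([
    [8.448145928621056e-5, 1.9555283947528615e-5, 0.0, 1.7716366331331983e-5, 1.5054664977783003e-5, 2.1496436765051825e-6],
    [1.9555283947528615e-5, 0.00017068536691928327, 0.0, 1.4272576023325365e-5, 4.2993023110905543e-5, 1.047156519965148e-5],
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
    [1.7716366331331983e-5, 1.4272576023325365e-5, 0.0, 6.577888700124854e-5, 3.957059294420261e-6, 7.365234067319808e-6],
    [1.5054664977783003e-5, 4.2993023110905543e-5, 0.0, 3.957059294420261e-6, 0.0001288060347757139, 6.457128839875466e-6],
    [2.1496436765051825e-6, 1.047156519965148e-5, 0.0, 7.365234067319808e-6, 6.457128839875466e-6, 0.00010385067478418426],
])

# eigvalsh is the right routine for symmetric matrices; the zero
# row/column guarantees a zero eigenvalue, so covMat is PSD but singular.
eigs = np.linalg.eigvalsh(covMat)
print(eigs.min())  # ~0 up to floating-point noise
```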

Answer


SLSQP is a constrained optimization algorithm. At every iteration it tracks both the objective value and whether the constraints are satisfied, and the final output is the best objective value found at a point that satisfies the constraints.


Printing out the value of the constraint by changing eq_con! to:

function eq_con!(param::Vector{Float64}, grad::Vector{Float64})
    if length(grad) > 0
        for j = 1:length(grad)
            grad[j] = 1.0
        end
    end
    @show sum(param)-1.0
    return(sum(param) - 1.0)
end


shows that the last valid evaluation point in the first run has:

Objective function value = 0.0007628202546187453
sum(param) - 1.0 = 0.0


While in the second run, all the evaluation points satisfy the constraint. This explains the behavior and shows it's reasonable.
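The selection logic can be sketched with a toy example (this is not NLopt's actual code, just the idea): only evaluated points whose constraint violation is within the tolerance are eligible, and the best eligible objective wins. With a zero tolerance, a microscopic violation disqualifies the better point; a small positive tolerance admits it.

```python
# Toy illustration of "best feasible point wins" (not NLopt's internals):
# each iterate is (objective value, equality-constraint violation).
def best_feasible(iterates, tol):
    """Return the best objective among points with |violation| <= tol."""
    eligible = [f for f, viol in iterates if abs(viol) <= tol]
    return max(eligible) if eligible else None

# Hypothetical iterates mimicking the two evaluation points above:
iterates = [
    (0.0007628, 0.0),    # earlier point, constraint met exactly
    (0.0022,    1e-12),  # better point, but sum(param) - 1.0 not exactly 0
]

print(best_feasible(iterates, 0.0))   # the better point is disqualified
print(best_feasible(iterates, 1e-8))  # a small tolerance admits it
```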

Addendum:


The essential problem leading to parameter instability is the exact nature of the equality constraint. Quoting from the NLopt Reference (http://ab-initio.mit.edu/wiki/index.php/NLopt_Reference#Nonlinear_constraints):


For equality constraints, a small positive tolerance is strongly advised in order to allow NLopt to converge even if the equality constraint is slightly nonzero.


Indeed, switching the equality_constraint! call in do_opt to

    equality_constraint!(opt1, eq_con!, 0.00000001)


gives the 0.0022 solution for both initial parameter vectors.
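As a cross-check, the same problem can be set up in SciPy, whose SLSQP is based on the same underlying Fortran routine (Kraft's SLSQP) as NLopt's LD_SLSQP; SciPy's wrapper applies its own accuracy tolerance, so both starting vectors should reach the good solution there. A Python sketch (a cross-check, not the original setup):

```python
# Same mean-variance problem via SciPy's SLSQP. SciPy minimises,
# so the objective is negated.
import numpy as np
from scipy.optimize import minimize

meanVec = np.array([0.00238374894628471, 0.0006879970888824095,
                    0.00015027322404371585, 0.0008440624572209092,
                    -0.004949409024535505, -0.0011493778903180567])
covMat = np.array([
    [8.448145928621056e-5, 1.9555283947528615e-5, 0.0, 1.7716366331331983e-5, 1.5054664977783003e-5, 2.1496436765051825e-6],
    [1.9555283947528615e-5, 0.00017068536691928327, 0.0, 1.4272576023325365e-5, 4.2993023110905543e-5, 1.047156519965148e-5],
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
    [1.7716366331331983e-5, 1.4272576023325365e-5, 0.0, 6.577888700124854e-5, 3.957059294420261e-6, 7.365234067319808e-6],
    [1.5054664977783003e-5, 4.2993023110905543e-5, 0.0, 3.957059294420261e-6, 0.0001288060347757139, 6.457128839875466e-6],
    [2.1496436765051825e-6, 1.047156519965148e-5, 0.0, 7.365234067319808e-6, 6.457128839875466e-6, 0.00010385067478418426],
])

def neg_obj(w):
    # negated expected return minus half the variance penalty
    return -(w @ meanVec - 0.5 * w @ covMat @ w)

cons = ({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},)
bounds = [(0.0, 1.0)] * 6
bounds[2] = (0.05, 1.0)  # matches the 0.05 lower bound in the Julia code

results = []
for w0 in ([0.0, 0.9496114216578236, 0.050388578342176464, 0.0, 0.0, 0.0],
           [0.0, 0.95, 0.05, 0.0, 0.0, 0.0]):
    res = minimize(neg_obj, np.array(w0), method="SLSQP",
                   bounds=bounds, constraints=cons)
    results.append(res)
    print(res.success, -res.fun, np.sum(res.x))
    # both runs should report an objective near the good 0.0022 solution
```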
