NLopt with univariate optimization


Question

Does anyone know if NLopt works with univariate optimization? I tried to run the following code:

using NLopt

function myfunc(x, grad)
    x.^2
end

opt = Opt(:LD_MMA, 1)
min_objective!(opt, myfunc)
(minf,minx,ret) = optimize(opt, [1.234])
println("got $minf at $minx (returned $ret)")

but got the following error message:

> Error evaluating untitled
LoadError: BoundsError: attempt to access 1-element Array{Float64,1}:
1.234
at index [2]
in myfunc at untitled:8
in nlopt_callback_wrapper at /Users/davidzentlermunro/.julia/v0.4/NLopt/src/NLopt.jl:415
in optimize! at /Users/davidzentlermunro/.julia/v0.4/NLopt/src/NLopt.jl:514
in optimize at /Users/davidzentlermunro/.julia/v0.4/NLopt/src/NLopt.jl:520
in include_string at loading.jl:282
in include_string at /Users/davidzentlermunro/.julia/v0.4/CodeTools/src/eval.jl:32
in anonymous at /Users/davidzentlermunro/.julia/v0.4/Atom/src/eval.jl:84
in withpath at /Users/davidzentlermunro/.julia/v0.4/Requires/src/require.jl:37
in withpath at /Users/davidzentlermunro/.julia/v0.4/Atom/src/eval.jl:53
[inlined code] from /Users/davidzentlermunro/.julia/v0.4/Atom/src/eval.jl:83
in anonymous at task.jl:58
while loading untitled, in expression starting on line 13

If this isn't possible, does anyone know of a univariate optimizer where I can specify bounds and an initial condition?

Answer

You are missing a few things here.

  1. You need to specify the gradient (i.e. first derivative) of your function within the function. See the tutorial and examples on the github page for NLopt. Not all optimization algorithms require this, but the one that you are using, LD_MMA, looks like it does. See here for a listing of the various algorithms and which require a gradient.
  2. You should specify the tolerance for the conditions you need before you "declare victory"¹ (i.e. decide that the function is sufficiently optimized). This is the xtol_rel!(opt,1e-4) in the example below. See also ftol_rel! for another way to specify a different tolerance condition. According to the documentation, for example, xtol_rel will "stop when an optimization step (or an estimate of the optimum) changes every parameter by less than tol multiplied by the absolute value of the parameter" and ftol_rel will "stop when an optimization step (or an estimate of the optimum) changes the objective function value by less than tol multiplied by the absolute value of the function value." See here under the "Stopping Criteria" section for more information on the various options.
  3. The function that you are optimizing should have a unidimensional output. In your example, your output is a vector (albeit of length 1): x.^2 denotes a vector operation and produces a vector output. If your objective function doesn't ultimately output a single number, then it isn't clear what your optimization objective is (e.g. what does it mean to minimize a vector? You could minimize the norm of a vector, for instance, but minimizing a whole vector is not well defined).
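To make point 3 concrete, here is a minimal illustration (not part of the original answer) of the difference between the vector expression in the question and the scalar that an objective function should return:

```julia
x = [1.234]

v = x.^2    # broadcasts over the array: a 1-element Vector{Float64}, not a valid objective value
s = x[1]^2  # a plain Float64 scalar, which is what NLopt expects

println(typeof(v))  # an Array/Vector type
println(typeof(s))  # Float64
```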

Below is a working example, based on your code. Note that I included the printing output from the example on the github page, which can be helpful for diagnosing problems.

using NLopt    

count = 0 # keep track of # function evaluations    

function myfunc(x::Vector, grad::Vector)
    if length(grad) > 0
        grad[1] = 2*x[1]  # in-place gradient: d/dx x^2 = 2x
    end

    global count
    count::Int += 1
    println("f_$count($x)")  # print each evaluation for diagnostics

    x[1]^2  # scalar objective value
end

opt = Opt(:LD_MMA, 1)    

xtol_rel!(opt,1e-4)    

min_objective!(opt, myfunc)
(minf,minx,ret) = optimize(opt, [1.234])    

println("got $minf at $minx (returned $ret)")

¹ (In the words of optimization great Yinyu Ye.)
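As a sketch beyond the original answer: if writing gradient code is the obstacle, the same one-variable setup also works with a derivative-free NLopt algorithm, which additionally answers the follow-up question about specifying bounds and an initial condition. This assumes the same NLopt.jl API used above and swaps in :LN_COBYLA (LN_* algorithms are local and gradient-free), with box bounds set via lower_bounds!/upper_bounds!:

```julia
using NLopt

# The grad argument is ignored by derivative-free algorithms,
# but the callback signature still takes it.
function myfunc2(x::Vector, grad::Vector)
    x[1]^2
end

opt = Opt(:LN_COBYLA, 1)     # local, no-derivative algorithm
lower_bounds!(opt, [-5.0])   # box bounds on the single variable
upper_bounds!(opt, [5.0])
xtol_rel!(opt, 1e-6)

min_objective!(opt, myfunc2)
(minf, minx, ret) = optimize(opt, [1.234])  # [1.234] is the initial condition
println("got $minf at $minx (returned $ret)")
```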
