Integer step size in scipy optimize minimize


Problem description


I have a computer vision algorithm I want to tune up using scipy.optimize.minimize. Right now I only want to tune two parameters, but the number of parameters might eventually grow, so I would like to use a technique that can do high-dimensional gradient searches. The Nelder-Mead implementation in SciPy seemed like a good fit.


I got the code all set up, but it seems that the minimize function really wants to use floating-point values with a step size of less than one. The current parameters are both integers: one has a step size of one, and the other has a step size of two (i.e. the value must be odd; if it isn't, the thing I am trying to optimize will convert it to an odd number). Roughly speaking, one parameter is a window size in pixels and the other is a threshold (a value from 0-255).


For what it is worth, I am using a fresh build of SciPy from the git repo. Does anyone know how to tell SciPy to use a specific step size for each parameter? Is there some way I can roll my own gradient function? Is there a SciPy flag that could help me out? I am aware that this could be done with a simple parameter sweep, but I would eventually like to apply this code to much larger sets of parameters.

The code itself is pretty simple:

import numpy as np
from scipy.optimize import minimize
from ScannerUtil import straightenImg
import bson

def doSingleIteration(parameters):
    # do some machine vision magic
    # return the difference between my value and the truth value
    return 0  # placeholder body so the stub parses

parameters = np.array([11, 10])
res = minimize(doSingleIteration, parameters, method='Nelder-Mead',
               options={'xtol': 1e-2, 'disp': True, 'ftol': 1.0})  # not sure if these params do anything
print("~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~")
print(res)


This is what my output looks like. As you can see, we are repeating a lot of runs and not getting anywhere in the minimization.

+++++++++++++++++++++++++++++++++++++++++
[ 11.  10.]  <-- Output from scipy minimize
{'block_size': 11, 'degree': 10} <-- input to my algorithm rounded and made int
+++++++++++++++++++++++++++++++++++++++++
120  <-- output of the function I am trying to minimize
+++++++++++++++++++++++++++++++++++++++++
[ 11.55  10.  ]
{'block_size': 11, 'degree': 10}
+++++++++++++++++++++++++++++++++++++++++
120
+++++++++++++++++++++++++++++++++++++++++
[ 11.   10.5]
{'block_size': 11, 'degree': 10}
+++++++++++++++++++++++++++++++++++++++++
120
+++++++++++++++++++++++++++++++++++++++++
[ 11.55   9.5 ]
{'block_size': 11, 'degree': 9}
+++++++++++++++++++++++++++++++++++++++++
120
+++++++++++++++++++++++++++++++++++++++++
[ 11.1375  10.25  ]
{'block_size': 11, 'degree': 10}
+++++++++++++++++++++++++++++++++++++++++
120
+++++++++++++++++++++++++++++++++++++++++
[ 11.275  10.   ]
{'block_size': 11, 'degree': 10}
+++++++++++++++++++++++++++++++++++++++++
120
+++++++++++++++++++++++++++++++++++++++++
[ 11.    10.25]
{'block_size': 11, 'degree': 10}
+++++++++++++++++++++++++++++++++++++++++
120
+++++++++++++++++++++++++++++++++++++++++
[ 11.275   9.75 ]
{'block_size': 11, 'degree': 9}
+++++++++++++++++++++++++++++++++++++++++
120
+++++++++++++++++++++++++++++++++++++++++
~~~
SNIP 
~~~
+++++++++++++++++++++++++++++++++++++++++
[ 11.         10.0078125]
{'block_size': 11, 'degree': 10}
+++++++++++++++++++++++++++++++++++++++++
120
Optimization terminated successfully.
         Current function value: 120.000000
         Iterations: 7
         Function evaluations: 27
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  status: 0
    nfev: 27
 success: True
     fun: 120.0
       x: array([ 11.,  10.])
 message: 'Optimization terminated successfully.'
     nit: 7

Answer


Assuming that the function to minimize is arbitrarily complex (nonlinear), this is a very hard problem in general. It cannot be guaranteed to be solved optimally unless you try every possible option. I do not know if there are any integer-constrained nonlinear optimizers (I somewhat doubt it), and I will assume you know that Nelder-Mead should work fine if the function were continuous.


Considering the comment from @Dougal, I will just add here: set up a coarse + fine grid search first; if you then feel like trying whether Nelder-Mead works (and converges faster), the points below may help...
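For reference, here is a minimal sketch of such a coarse + fine search. It assumes the doSingleIteration objective from the question; the parameter ranges (odd block sizes 3-31, thresholds 0-255 in steps of 16) are illustrative guesses, not values from the source:

import itertools

def grid_search(objective, coarse_axes, radius=2):
    # Coarse pass: evaluate every combination on the coarse grid.
    best = min(itertools.product(*coarse_axes), key=objective)
    # Fine pass: step size one in a small neighborhood of the coarse winner.
    fine_axes = [range(b - radius, b + radius + 1) for b in best]
    return min(itertools.product(*fine_axes), key=objective)

best = grid_search(lambda p: doSingleIteration(list(p)),
                   [range(3, 32, 2), range(0, 256, 16)])

A real version would clamp the fine ranges to valid bounds and keep the block size odd, but the two-pass structure is the point.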


But maybe a few points will help:

  1. Considering how difficult the integer constraint makes the problem, maybe it would be an option to do some simple interpolation to help the optimizer. It should still converge to an integer solution. Of course this requires calculating extra points, but it might solve many other problems. (Even in linear integer programming it is common to solve the unconstrained system first, AFAIK.)
  2. Nelder-Mead starts with N+1 points; these are hard-wired in scipy (at least in older versions) to (1 + 0.05) * x0[j] (for j in all dimensions, unless x0[j] is 0), which you will see in your first evaluation steps. Maybe these can be supplied in newer versions; otherwise you could just change/copy the scipy code (it is pure Python) and set it to something more reasonable. Or, if you feel that is simpler, scale all input variables down so that (1 + 0.05) * x0 is of a sensible size.
  3. Maybe you should cache all function evaluations, since with Nelder-Mead I would guess you will always run into duplicate evaluations (at least at the end). A sketch combining this with points 1 and 2 follows the list.
  4. You have to check how likely it is that Nelder-Mead will just shrink to a single value and give up, because it always finds the same result.
  5. You generally must check whether your function is well behaved at all... This optimization is doomed if the function does not change smoothly over the parameter space, and even then it can easily run into local minima if there are any. (Since you cache all evaluations, see point 3, you can at least plot those and have a look at the error landscape without needing any extra evaluations.)
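To make points 1-3 concrete, here is a minimal sketch. It assumes the doSingleIteration objective from the question, rounds the incoming floats onto the integer grid, caches results so that duplicate simplex points cost nothing, and supplies a custom starting simplex (possible in SciPy 0.19 and later via the initial_simplex option) instead of the hard-wired 5% perturbation. The specific simplex offsets and tolerances are illustrative, not tuned values.

import numpy as np
from scipy.optimize import minimize

cache = {}

def wrapped(params):
    # Point 1: snap to the integer grid the vision code actually uses.
    key = (int(round(params[0])), int(round(params[1])))
    # Point 3: cache, since Nelder-Mead revisits rounded-identical points.
    if key not in cache:
        cache[key] = doSingleIteration(np.array(key))  # objective from the question
    return cache[key]

x0 = np.array([11.0, 10.0])
# Point 2: starting-simplex offsets of a few integer steps per parameter,
# rather than the default (1 + 0.05) * x0 perturbation.
simplex = np.array([[11.0, 10.0],
                    [15.0, 10.0],
                    [11.0, 40.0]])
res = minimize(wrapped, x0, method='Nelder-Mead',
               options={'initial_simplex': simplex, 'xatol': 0.9, 'fatol': 0.5})
print(res)

Because the evaluations are cached, the contents of cache also give you the data for the error-landscape plot suggested in point 5, at no extra cost.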

