Integer step size in scipy optimize minimize


Question

I have a computer vision algorithm I want to tune up using scipy.optimize.minimize. Right now I only want to tune up two parameters but the number of parameters might eventually grow so I would like to use a technique that can do high-dimensional gradient searches. The Nelder-Mead implementation in SciPy seemed like a good fit.

I got the code all set up but it seems that the minimize function really wants to use floating point values with a step size that is less than one. The current set of parameters are both integers; one has a step size of one and the other has a step size of two (i.e. the value must be odd, and if it isn't, the thing I am trying to optimize converts it to an odd number). Roughly speaking, one parameter is a window size in pixels and the other is a threshold (a value from 0-255).

For what it is worth I am using a fresh build of scipy from the git repo. Does anyone know how to tell scipy to use a specific step size for each parameter? Is there some way I can roll my own gradient function? Is there a scipy flag that could help me out? I am aware that this could be done with a simple parameter sweep, but I would eventually like to apply this code to much larger sets of parameters.

The code itself is pretty simple:

import numpy as np
from scipy.optimize import minimize
from ScannerUtil import straightenImg 
import bson

def doSingleIteration(parameters):
    # do some machine vision magic and
    # return the difference between my value and the truth value
    ...

parameters = np.array([11,10])
res = minimize(doSingleIteration, parameters, method='Nelder-Mead',
               options={'xtol': 1e-2, 'ftol': 1.0, 'disp': True})  # not sure if these params do anything
print("~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~")
print(res)

This is what my output looks like. As you can see we are repeating a lot of runs and not getting anywhere in the minimization.

+++++++++++++++++++++++++++++++++++++++++
[ 11.  10.]  <-- Output from scipy minimize
{'block_size': 11, 'degree': 10} <-- input to my algorithm rounded and made int
+++++++++++++++++++++++++++++++++++++++++
120  <-- output of the function I am trying to minimize
+++++++++++++++++++++++++++++++++++++++++
[ 11.55  10.  ]
{'block_size': 11, 'degree': 10}
+++++++++++++++++++++++++++++++++++++++++
120
+++++++++++++++++++++++++++++++++++++++++
[ 11.   10.5]
{'block_size': 11, 'degree': 10}
+++++++++++++++++++++++++++++++++++++++++
120
+++++++++++++++++++++++++++++++++++++++++
[ 11.55   9.5 ]
{'block_size': 11, 'degree': 9}
+++++++++++++++++++++++++++++++++++++++++
120
+++++++++++++++++++++++++++++++++++++++++
[ 11.1375  10.25  ]
{'block_size': 11, 'degree': 10}
+++++++++++++++++++++++++++++++++++++++++
120
+++++++++++++++++++++++++++++++++++++++++
[ 11.275  10.   ]
{'block_size': 11, 'degree': 10}
+++++++++++++++++++++++++++++++++++++++++
120
+++++++++++++++++++++++++++++++++++++++++
[ 11.    10.25]
{'block_size': 11, 'degree': 10}
+++++++++++++++++++++++++++++++++++++++++
120
+++++++++++++++++++++++++++++++++++++++++
[ 11.275   9.75 ]
{'block_size': 11, 'degree': 9}
+++++++++++++++++++++++++++++++++++++++++
120
+++++++++++++++++++++++++++++++++++++++++
~~~
SNIP 
~~~
+++++++++++++++++++++++++++++++++++++++++
[ 11.         10.0078125]
{'block_size': 11, 'degree': 10}
+++++++++++++++++++++++++++++++++++++++++
120
Optimization terminated successfully.
         Current function value: 120.000000
         Iterations: 7
         Function evaluations: 27
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  status: 0
    nfev: 27
 success: True
     fun: 120.0
       x: array([ 11.,  10.])
 message: 'Optimization terminated successfully.'
     nit: 7

Answer

Assuming that the function to minimize is arbitrarily complex (nonlinear), this is a very hard problem in general. It cannot be guaranteed to be solved optimally unless you try every possible option. I do not know of any integer-constrained nonlinear optimizers (and somewhat doubt they exist), and I assume you know that Nelder-Mead works best on continuous functions.

Considering the comment from @Dougal I will just add here: set up a coarse + fine grid search first; if you then feel like trying whether your Nelder-Mead works (and converges faster), the points below may help...
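The coarse + fine grid search above can be sketched as follows. The `objective` function here is a hypothetical stand-in for the vision pipeline (with a known minimum planted at block_size=15, degree=7), and `grid_search` with its ranges is purely illustrative, not a SciPy API:

```python
import itertools

def objective(params):
    # hypothetical stand-in for the vision pipeline; minimum at (15, 7)
    block_size, degree = params
    return (block_size - 15) ** 2 + (degree - 7) ** 2

def grid_search(obj, coarse_ranges, refine_radius=2):
    # coarse pass: evaluate every combination on a sparse integer grid
    coarse_best = min(itertools.product(*coarse_ranges), key=obj)
    # fine pass: exhaustive search in a small box around the coarse winner
    fine_ranges = [range(b - refine_radius, b + refine_radius + 1)
                   for b in coarse_best]
    return min(itertools.product(*fine_ranges), key=obj)

best = grid_search(objective, [range(3, 32, 4), range(1, 20, 4)])
```

For two parameters with small ranges this is cheap and guaranteed to find the grid optimum; it only becomes impractical as the number of parameters grows, which is exactly the asker's concern.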

A few things that might help, though:

  1. Considering how difficult the integer constraint is, maybe it would be an option to do some simple interpolation to help the optimizer. It should still converge to an integer solution. Of course this requires calculating extra points, but it might solve many other problems. (Even in linear integer programming it is common to solve the unconstrained system first, AFAIK.)
  2. Nelder-Mead starts with N+1 points; in scipy (at least older versions) these are hard-wired to (1+0.05) * x0[j] (for j in all dimensions, unless x0[j] is 0), which you will see in your first evaluation steps. Maybe these can be supplied in newer versions; otherwise you could just change/copy the scipy code (it is pure Python) and set it to something more reasonable. Or, if you feel that is simpler, scale all input variables down so that (1+0.05)*x0 is of a sensible size.
  3. Maybe you should cache all function evaluations, since with Nelder-Mead I would guess you will always run into duplicate evaluations (at least at the end).
  4. You have to check how likely it is that Nelder-Mead will just shrink to a single value and give up, because it always finds the same result.
  5. You generally must check whether your function is well behaved at all... This optimization is doomed if the function does not change smoothly over the parameter space, and even then it can easily run into local minima if there are any. (Since you cached all evaluations - see 3. - you can at least plot those and have a look at the error landscape without needing any extra evaluations.)
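Points 2 and 3 can be combined in a small wrapper: snap the incoming floats onto the integer grid before evaluating, and memoize results so that duplicate simplex points cost nothing. Newer SciPy versions also accept an `initial_simplex` entry in `options`, which addresses the hard-wired 5% starting points. The `true_objective` below and the simplex coordinates are illustrative assumptions, not the asker's actual pipeline:

```python
import numpy as np
from scipy.optimize import minimize

cache = {}  # maps integer parameter tuples -> objective values

def true_objective(block_size, degree):
    # hypothetical stand-in for the vision pipeline; minimum at (15, 7)
    return (block_size - 15) ** 2 + (degree - 7) ** 2

def wrapped(x):
    # snap to the integer grid: block_size forced odd via the low bit,
    # degree rounded to the nearest integer
    key = (int(round(x[0])) | 1, int(round(x[1])))
    if key not in cache:
        cache[key] = true_objective(*key)
    return cache[key]

x0 = np.array([11.0, 10.0])
# spread the starting simplex over whole integer steps instead of 5% of x0
simplex = np.array([[11.0, 10.0],
                    [15.0, 10.0],
                    [11.0, 13.0]])
res = minimize(wrapped, x0, method='Nelder-Mead',
               options={'initial_simplex': simplex,
                        'xatol': 1.0, 'fatol': 0.5})
```

Note that the rounding turns the objective into a piecewise-constant function, so Nelder-Mead can still stall on a plateau (point 4 above); the cache at least makes those repeated plateau evaluations free and gives you the data to plot the error landscape afterwards (point 5).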

