How should I scipy.optimize a multivariate and non-differentiable function with boundaries?
Question
I am facing the following optimization problem:
The target function is a multivariate and non-differentiable function that takes a list of scalars as its argument and returns a scalar. It is non-differentiable in the sense that the computation within the function is based on pandas and a series of rolling, std, etc. operations.
The pseudo-code looks like this:
def target_function(x: list) -> float:
    # calculations
    return output
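For concreteness, such an objective might look like the following hypothetical stand-in (the data, the rounded window, and the scale parameter are all invented for illustration; the non-differentiability comes from the integer rolling window and the absolute value):

```python
import numpy as np
import pandas as pd

# Hypothetical data series standing in for whatever the real objective uses.
prices = pd.Series(np.sin(np.linspace(0, 20, 200)) + np.linspace(0, 1, 200))

def target_function(x: list) -> float:
    # Rounding x[0] to an integer window makes the output change in steps,
    # so the function is not differentiable in x[0].
    window = max(2, int(round(x[0])))
    scale = x[1]
    vol = prices.rolling(window).std()  # rolling std; first rows are NaN
    return float((vol.dropna() - scale).abs().mean())
```

A gradient-based local minimizer sees a piecewise-flat landscape in `x[0]` here, which is why the question is about global, derivative-free methods.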
Besides, each component of the x argument has its own bounds, defined as a tuple (min, max). So how should I use the scipy.optimize library to find the global minimum of this function? Could any other libraries help?
I have already tried scipy.optimize.brute, which took forever, and scipy.optimize.minimize, which never produced a seemingly correct answer.
Answer
basinhopping, brute, and differential_evolution are the methods available for global optimization. As you've already discovered, brute-force global optimization is not going to be particularly efficient.
Differential evolution is a stochastic method that should do better than brute-force, but may still require a large number of objective function evaluations. If you want to use it, you should play with the parameters and see what will work best for your problem. This tends to work better than other methods if you know that your objective function is not "smooth": there could be discontinuities in the function or its derivatives.
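A minimal sketch of that approach, using a placeholder non-smooth objective rather than the asker's real one: differential_evolution needs only the function and one (min, max) tuple per component, which matches the bounds format described in the question.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical non-smooth objective (kinks at x[0] = 1 and x[1] = -2),
# standing in for the real pandas-based function.
def target_function(x):
    return np.abs(x[0] - 1.0) + np.abs(x[1] + 2.0)

# One (min, max) tuple per component of x.
bounds = [(-5.0, 5.0), (-5.0, 5.0)]

result = differential_evolution(target_function, bounds)
print(result.x, result.fun)
```

Parameters such as `popsize`, `mutation`, and `tol` are the ones worth experimenting with for a given problem; the defaults are a reasonable starting point.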
Basin-hopping, on the other hand, makes stochastic jumps but also applies a local relaxation after each jump. This is useful if your objective function has many local minima, but because of the local relaxation, the function should be smooth. If you can't easily get the gradient of your function, you can still try basin-hopping with a local minimizer that doesn't require this information.
The advantage of the scipy.optimize.basinhopping routine is that it is very customizable. You can use take_step to define a custom random jump, accept_test to override the test used for deciding whether to proceed with or discard the results of a random jump and relaxation, and minimizer_kwargs to adjust the local minimization behavior. For example, you might override take_step to stay within your bounds, and then select perhaps the L-BFGS-B minimizer, which can numerically estimate your function's gradient as well as take bounds on the parameters. L-BFGS-B does work better if you give it a gradient, but I've used it without one and it is still able to minimize well. Be sure to read about all of the parameters of the local and global optimization routines and adjust things like tolerances as acceptable to improve performance.