How should I scipy.optimize a multivariate and non-differentiable function with boundaries?


Problem description

I have the following optimization problem:

The target function is multivariate and non-differentiable: it takes a list of scalars as its argument and returns a scalar. It is non-differentiable in the sense that the computation inside the function is built from pandas operations such as rolling, std, and the like.

The pseudo-code looks like this:

def target_function(x: list) -> float:
    # calculations (pandas rolling, std, etc.)
    return output
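
For concreteness, the objective might look something like the sketch below. This is purely hypothetical (the price series, the window/threshold parameters, and all names are invented here); it only illustrates why a function built from pandas rolling/std operations and hard thresholds is non-smooth:

import numpy as np
import pandas as pd

# Hypothetical data; the real series would come from the actual problem.
prices = pd.Series(np.random.default_rng(0).normal(size=500).cumsum())

def target_function(x: list) -> float:
    # x[0]: rolling window length, x[1]: threshold (both invented parameters)
    window, threshold = int(round(x[0])), x[1]
    vol = prices.rolling(window).std()          # rolling std: piecewise in x[0]
    signal = (vol > threshold).astype(float)    # hard threshold: discontinuous in x[1]
    return float(-(signal.shift(1) * prices.diff()).sum())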

Besides, each component of the x argument has its own bounds, defined as a (min, max) tuple. So how should I use the scipy.optimize library to find the global minimum of this function? Could any other library help?

I already tried scipy.optimize.brute, which took practically forever, and scipy.optimize.minimize, which never produced a seemingly correct answer.

Recommended answer

basinhopping, brute, and differential_evolution are the methods available for global optimization. As you've already discovered, brute-force global optimization is not going to be particularly efficient.

Differential evolution is a stochastic method that should do better than brute force, but it may still require a large number of objective-function evaluations. If you want to use it, you should play with the parameters and see what works best for your problem. It tends to do better than other methods when you know your objective function is not "smooth": there could be discontinuities in the function or its derivatives.
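
A minimal sketch of how this could be wired up, assuming the illustrative target_function above and hypothetical (min, max) bounds for each component of x:

from scipy.optimize import differential_evolution

# One (min, max) tuple per component of x; these values are hypothetical.
bounds = [(2, 50), (0.1, 5.0)]

result = differential_evolution(
    target_function,
    bounds,
    maxiter=1000,    # generation cap; raise it if convergence stalls
    popsize=15,      # population size multiplier; larger explores more widely
    tol=0.01,        # relative convergence tolerance
    seed=42,         # fixed seed so runs are reproducible
)
print(result.x, result.fun)

Note that by default differential_evolution finishes with an L-BFGS-B polishing step (polish=True), which you may want to disable for a very rough objective.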

Basin-hopping, on the other hand, makes stochastic jumps but also applies a local relaxation after each jump. This is useful if your objective function has many local minima, but because of the local relaxation step the function should be smooth. If you can't easily compute the gradient of your function, you can still try basin-hopping with one of the local minimizers that doesn't require this information.

The advantage of the scipy.optimize.basinhopping routine is that it is very customizable. You can use take_step to define a custom random jump, accept_test to override the test used to decide whether to keep or discard the result of a random jump and relaxation, and minimizer_kwargs to adjust the local minimization behavior. For example, you might override take_step to stay within your bounds, and then select the L-BFGS-B minimizer, which can numerically estimate your function's gradient as well as honor bounds on the parameters. L-BFGS-B does work better if you give it a gradient, but I have used it without one and it was still able to minimize well. Be sure to read about all of the parameters of the local and global optimization routines, and adjust things like tolerances to improve performance.
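
A sketch of that setup, again assuming the hypothetical target_function and bounds from above; BoundedStep and accept_bounds are names invented here:

import numpy as np
from scipy.optimize import basinhopping

bounds = [(2, 50), (0.1, 5.0)]    # hypothetical, as above
lower = np.array([b[0] for b in bounds], dtype=float)
upper = np.array([b[1] for b in bounds], dtype=float)

class BoundedStep:
    """Random displacement clipped back into the box constraints."""
    def __init__(self, stepsize=0.5):
        self.stepsize = stepsize   # basinhopping tunes this attribute adaptively
    def __call__(self, x):
        x = x + np.random.uniform(-self.stepsize, self.stepsize, x.shape)
        return np.clip(x, lower, upper)

def accept_bounds(f_new=None, x_new=None, f_old=None, x_old=None):
    # Reject any candidate that the local relaxation pushed out of bounds.
    return bool(np.all(x_new >= lower) and np.all(x_new <= upper))

result = basinhopping(
    target_function,
    x0=(lower + upper) / 2,        # start at the midpoint of the box
    niter=200,
    take_step=BoundedStep(),
    accept_test=accept_bounds,
    minimizer_kwargs={"method": "L-BFGS-B", "bounds": bounds},
)
print(result.x, result.fun)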

