Example to understand scipy basin hopping optimization function


Problem description

I came across the basin hopping algorithm in scipy and created a simple problem to understand how to use it, but it doesn't seem to be working correctly for that problem. Maybe I'm doing something completely wrong.

Here is the code:

import scipy.optimize as spo
import numpy as np

minimizer_kwargs = {"method": "BFGS"}

# Objective function (note: linear, so it has no finite minimum).
f1 = lambda x: (x - 4)

def mybounds(**kwargs):
    # Accept test: only accept steps that stay inside [0, 1].
    x = kwargs["x_new"]
    tmax = bool(np.all(x <= 1.0))
    tmin = bool(np.all(x >= 0.0))
    print(x)
    print(tmin and tmax)
    return tmax and tmin

def print_fun(x, f, accepted):
    print("at minima %.4f accepted %d" % (f, int(accepted)))

x0 = [1.]
spo.basinhopping(f1, x0, accept_test=mybounds, callback=print_fun,
                 niter=200, minimizer_kwargs=minimizer_kwargs)

The solution it gives is x: array([ -1.80746874e+08]).

Answer

The function you are testing makes use of an approach called Metropolis-Hastings, which can be modified into a procedure called simulated annealing that can optimize functions in a stochastic way.

The way this works is as follows. First you pick a point, like your point x0. From that point, you generate a random perturbation (this is called a "proposal"). Once there is a proposed perturbation, you get your candidate for a new point by applying the perturbation to your current point. So, you could think of it like x1 = x0 + perturbation.

In regular old gradient descent, the perturbation term is just a deterministically calculated quantity, like a step in the direction of the gradient. But in Metropolis-Hastings, the perturbation is generated randomly (sometimes by using the gradient as a clue about where to randomly go, but sometimes purely at random, with no clues).
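To make the proposal step concrete, here is a minimal sketch of a random-walk proposal, assuming a uniform displacement; the names propose and stepsize are illustrative, not scipy's internals:

import numpy as np

rng = np.random.default_rng(0)

def propose(x_current, stepsize=0.5):
    # Draw a random displacement from [-stepsize, stepsize] and add it
    # to the current point: x1 = x0 + perturbation.
    perturbation = rng.uniform(-stepsize, stepsize, size=np.shape(x_current))
    return x_current + perturbation

x0 = np.array([1.0])
x1 = propose(x0)  # a candidate point near x0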

At this point, when you've got x1, you have to ask yourself: "Did I do a good thing by randomly perturbing x0, or did I just mess everything up?" One part of that has to do with staying inside of some bounds, such as your mybounds function. Another part of it has to do with how much better or worse the objective function's value has become at the new point.

So there are two ways to reject the proposal x1: first, it could violate the bounds you have set and be an infeasible point by the problem's definition; second, from the accept/reject assessment step of Metropolis-Hastings, it could just be a very bad point that should be rejected. In either case, you will reject x1, instead set x1 = x0, and pretend as if you had just stayed in the same spot to try again.
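For reference, the Metropolis accept/reject rule looks roughly like the sketch below; the helper name metropolis_accept is hypothetical, though basinhopping does expose the temperature as its T parameter:

import numpy as np

rng = np.random.default_rng(0)

def metropolis_accept(f_new, f_old, T=1.0):
    # Downhill moves are always accepted.
    if f_new < f_old:
        return True
    # Uphill moves are accepted only with probability
    # exp(-(f_new - f_old) / T), so much worse points rarely survive.
    return rng.random() < np.exp(-(f_new - f_old) / T)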

Contrast that with a gradient-type method, where you will definitely, no matter what, always make at least some kind of movement (a step in the direction of the gradient).

Whew, all right. With all that aside, let's think about how this plays out with the basinhopping function. From the documentation we can see that the step-taking routine is controlled by the take_step argument, with the documentation saying this: "The default step taking routine is a random displacement of the coordinates, but other step taking algorithms may be better for some systems." So, even apart from your mybounds bounds checker, the function will be making a random displacement of the coordinates to generate each new point to try. And since the gradient of this function is just the constant 1, the local minimizer will always be taking the same big steps in the direction of the negative gradient (for minimizing).
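If you wanted smaller displacements than the default, basinhopping lets you pass your own callable as take_step. A minimal sketch under that assumption (the class name SmallStep is hypothetical; per the scipy docs, basinhopping will adaptively tune a stepsize attribute if the callable has one):

import numpy as np

class SmallStep:
    def __init__(self, stepsize=0.1):
        self.stepsize = stepsize  # basinhopping may adjust this attribute
        self.rng = np.random.default_rng(0)

    def __call__(self, x):
        # Return the randomly perturbed candidate point.
        return x + self.rng.uniform(-self.stepsize, self.stepsize,
                                    size=np.shape(x))

# Usage: spo.basinhopping(f1, x0, take_step=SmallStep(0.1), ...)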

At a practical level, this means that the proposed points for x1 are always going to lie far outside of the interval [0, 1], and your bounds checker will always veto them.

When I run your code, I see this happening all the time:

In [5]: spo.basinhopping(f1,x0,accept_test=mybounds,callback=print_fun,niter=200,minimizer_kwargs=minimizer_kwargs)
at minima -180750994.1924 accepted 0
[ -1.80746874e+08]
False
at minima -180746877.5530 accepted 0
[ -1.80746873e+08]
False
at minima -180746877.3896 accepted 0
[ -1.80750991e+08]
False
at minima -180750994.7281 accepted 0
[ -1.80746874e+08]
False
at minima -180746878.2433 accepted 0
[ -1.80746874e+08]
False
at minima -180746877.5774 accepted 0
[ -1.80746874e+08]
False
at minima -180746878.3173 accepted 0
[ -1.80750990e+08]
False
at minima -180750994.3509 accepted 0
[ -1.80750991e+08]
False
at minima -180750994.6605 accepted 0
[ -1.80746874e+08]
False
at minima -180746877.6966 accepted 0
[ -1.80746874e+08]
False
at minima -180746877.6900 accepted 0
[ -1.80750990e+08]
False
at minima -180750993.9707 accepted 0
[ -1.80750990e+08]
False
at minima -180750994.0494 accepted 0
[ -1.80750991e+08]
False
at minima -180750994.5824 accepted 0
[ -1.80746874e+08]
False
at minima -180746877.5459 accepted 0
[ -1.80750991e+08]
False
at minima -180750994.6679 accepted 0
[ -1.80750991e+08]
False
at minima -180750994.5823 accepted 0
[ -1.80750990e+08]
False
at minima -180750993.9308 accepted 0
[ -1.80746874e+08]
False
at minima -180746878.0395 accepted 0
[ -1.80750991e+08]
False

# ... etc.

So it's never accepting the proposal points. The output is not telling you that it has found a solution. It's telling you that the random perturbing used to explore possible solutions keeps producing points that look better and better to the optimizer, but which keep failing to satisfy your criteria. It can't find its way back to [0, 1] to get points that do satisfy mybounds.
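Two changes would make the example behave as expected: the objective needs a real minimum inside the region of interest (f1 = x - 4 has none, so BFGS runs off toward negative infinity), and the bounds are more naturally enforced inside the local minimizer than through an accept test that vetoes everything. A minimal sketch under those assumptions; f2 is an illustrative objective, not from the original question:

import numpy as np
import scipy.optimize as spo

# An objective with its minimum at x = 0.5, inside [0, 1].
f2 = lambda x: (x[0] - 0.5) ** 2

# Let the local minimizer enforce the bounds directly.
minimizer_kwargs = {"method": "L-BFGS-B", "bounds": [(0.0, 1.0)]}

result = spo.basinhopping(f2, [1.0], niter=50,
                          minimizer_kwargs=minimizer_kwargs)
print(result.x)  # expected to end up close to [0.5]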
