error using L-BFGS-B in scipy


Problem description

I get some puzzling results when using the 'L-BFGS-B' method in scipy.optimize.minimize:

import scipy.optimize as optimize
import numpy as np

def testFun():
    prec = 1e3

    # func0 is smooth; func1 additionally quantizes its inputs to 1e-3 via round()
    func0 = lambda x: (float(x[0]*prec)/prec + 0.5)**2 + (float(x[1]*prec)/prec - 0.3)**2
    func1 = lambda x: (float(round(x[0]*prec))/prec + 0.5)**2 + (float(round(x[1]*prec))/prec - 0.3)**2

    result0 = optimize.minimize(func0, np.array([0, 0]), method='L-BFGS-B', bounds=((-1, 1), (-1, 1)))
    print(result0)
    print('func0 at [0,0]:', func0([0, 0]), '; func0 at [-0.5,0.3]:', func0([-0.5, 0.3]), '\n')

    result1 = optimize.minimize(func1, np.array([0, 0]), method='L-BFGS-B', bounds=((-1, 1), (-1, 1)))
    print(result1)
    print('func1 at [0,0]:', func1([0, 0]), '; func1 at [-0.5,0.3]:', func1([-0.5, 0.3]))

def main():
    testFun()

if __name__ == '__main__':
    main()

func0() and func1() are almost identical quadratic functions; the only difference is that func1() rounds its inputs to a precision of 0.001. The 'L-BFGS-B' method works well for func0(). However, just adding the round() call in func1() makes 'L-BFGS-B' stop searching after the first step and return the initial value [0,0] as the optimal point.

This is not restricted to round(): replacing round() in func1() with int() produces the same error.

Does anyone know the reason? Many thanks.

Recommended answer

The BFGS family of methods relies not only on function values but also on the gradient and Hessian (think of them as the first and second derivatives, if you wish). In your func1(), once round() is included, the gradient is no longer continuous: the function becomes a staircase of flat plateaus. The method therefore fails right after the first iteration (think of it this way: L-BFGS-B probes around the starting point, finds that the gradient does not change, and stops). I would expect other methods that require a gradient to fail in the same way as BFGS.
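A quick way to see this (a minimal sketch, not from the original post): with prec = 1e3, a finite-difference step on the order of scipy's default (about 1e-8) is far too small to change the value of round(x * prec), so the numerical gradient of func1 at the starting point is exactly zero:

```python
# Sketch: the quantized func1 looks perfectly flat to a finite-difference
# gradient at [0, 0], because a tiny step never crosses a rounding boundary.
prec = 1e3
func1 = lambda x: (round(x[0]*prec)/prec + 0.5)**2 + (round(x[1]*prec)/prec - 0.3)**2

eps = 1e-8  # roughly the default finite-difference step scipy would use
g0 = (func1([0 + eps, 0]) - func1([0, 0])) / eps
g1 = (func1([0, 0 + eps]) - func1([0, 0])) / eps
print(g0, g1)  # → 0.0 0.0: no gradient signal, so L-BFGS-B stops immediately
```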

You may be able to get it working by preconditioning or rescaling x. But better yet, you should try a gradient-free method such as 'Nelder-Mead' or 'Powell'.
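For example, here is an illustrative sketch using the same func1 as above with Nelder-Mead. The starting point [0.1, 0.1] is my own choice (not from the original post), so that the initial simplex steps are much larger than the 1e-3 quantization:

```python
import numpy as np
import scipy.optimize as optimize

prec = 1e3
func1 = lambda x: (round(x[0]*prec)/prec + 0.5)**2 + (round(x[1]*prec)/prec - 0.3)**2

# Nelder-Mead uses only function values, so the staircase shape of func1
# does not starve it of gradient information the way it starves L-BFGS-B.
result = optimize.minimize(func1, np.array([0.1, 0.1]), method='Nelder-Mead')
print(result.x)  # lands near [-0.5, 0.3], up to the 1e-3 quantization
```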

