Scipy selects nan as inputs while minimizing

Problem description

I have this objective function (in Python):

import numpy as np  # np stands for the numpy library

actions = [...]  # some array
Na = len(actions)

# maximize p0 * qr(s, a0, b0) + ... + pn * qr(s, an, bn)
def objective(x):
    p = x[:Na]        # p is a probability distribution
    b = x[Na:2 * Na]  # b is an array of positive unbounded scalars
    q = np.array([qr(s, actions[a], b[a]) for a in range(0, Na)])  # s is an array
    rez = -np.dot(p, q)
    return rez

qr and qc are regression trees; they are functions that map arrays to scalars.
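
For context, here is a minimal sketch of what such a function could look like, assuming a scikit-learn DecisionTreeRegressor trained on concatenated (s, action, b) rows; the tree, the state dimension and the feature encoding are illustrative assumptions, not details from the question:

import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Hypothetical stand-in for qr: the real trees come from the asker's model.
rng = np.random.default_rng(0)
s_dim = 4                               # assumed length of the state array s
X_train = rng.random((200, s_dim + 2))  # feature rows: [s..., action, b]
y_train = rng.random(200)
_tree_r = DecisionTreeRegressor(max_depth=4).fit(X_train, y_train)

def qr(s, action, b):
    # Map the array s plus (action, b) to a scalar: build one feature row
    # and return the tree's prediction for it.
    row = np.concatenate([np.asarray(s, dtype=float), [action, b]]).reshape(1, -1)
    return float(_tree_r.predict(row)[0])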

I have these constraints:

# p0 * qc(s,a0,b0) + ... + pn * qc(s,an,bn) < beta
def constraint(x):
    p = x[:Na]
    b = x[Na:2 * Na]
    q = np.array([qc(s, actions[a], b[a]) for a in range(0, Na)])
    rez = beta - np.dot(p, q) # beta is a scalar        
    return rez

# elements of p should sum to 1
def constraint_proba_sum_1(x):
    p = x[:Na]
    rez = 0
    for i in range(0, Na):
        rez += p[i]
    rez = 1 - rez
    return rez

Here is how I minimize:

import scipy.optimize as opt

constraints = ({'type': 'ineq', 'fun': constraint},
               {'type': 'eq', 'fun': constraint_proba_sum_1})

res = opt.minimize(fun=objective, x0=np.array([0.5, 0.5, 10, 10]),
                   constraints=constraints,
                   bounds=[(0, 1), (0, 1), (0, None), (0, None)])

The problem is that opt.minimize sometimes uses NaN arrays as inputs during its "slsqp" minimization process, so the qr tree raises errors. Why would it evaluate such arrays, and under what circumstances?

I do realize this issue is the same as in this post: Scipy optimizations methods select nan for input parameter, but that question was not resolved and it looks function-dependent.

EDIT: It appears that if I remove the constraint constraint_proba_sum_1(x), I no longer get NaN values as inputs.
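
A possible workaround, sketched here under the assumption that the equality constraint is what triggers the NaN inputs (this is not from the original post), is to drop constraint_proba_sum_1 and normalize the weights inside the objective instead:

import numpy as np

def objective_reparam(x):
    # The first Na entries are free non-negative weights; normalizing them here
    # replaces the explicit "sum(p) == 1" equality constraint.
    w = np.clip(x[:Na], 1e-12, None)
    p = w / w.sum()  # now a valid probability distribution
    b = x[Na:2 * Na]
    q = np.array([qr(s, actions[a], b[a]) for a in range(0, Na)])
    return -np.dot(p, q)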

EDIT 2: I tried another API, pyOpt with SLSQP optimization, and I have the same issue.

Recommended answer

I observed a similar behavior with scipy's differential_evolution optimizer, and I could trace it back to the polish argument, which runs a local minimization after the global optimization. Since this appeared only after the maximum number of iterations of the DE optimizer (apparently the parameters of my model cannot be identified from the data I have at hand), the initialization of the local minimizer from the DE optimizer object caused effects similar to those the other posts describe.

My fix was to check within the objective function for the occurrence of NaN values and raise an exception, since for me this happens only when the DE optimizer could not find an optimum.
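
A minimal sketch of that kind of guard, written against the objective from the question (the exception type and message are illustrative):

import numpy as np

def objective_guarded(x):
    # Fail fast when the optimizer hands over NaN values, instead of letting
    # the regression trees fail with a less informative error downstream.
    if np.any(np.isnan(x)):
        raise ValueError("optimizer evaluated a NaN input: {}".format(x))
    p = x[:Na]
    b = x[Na:2 * Na]
    q = np.array([qr(s, actions[a], b[a]) for a in range(0, Na)])
    return -np.dot(p, q)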
