Least-squares optimization in Python with a single equality constraint

Problem Description

I am looking to solve a least-squares problem in Python such that I minimize 0.5*||np.dot(A,x) - b||^2 subject to the constraint np.dot(C,x) = d, with the bounds 0 < x < np.inf, where

A : shape=[m, n]
C : shape=[m, n]
b : shape=[m]
d : shape=[m]

are all known.
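For concreteness, here is one way to build a small test instance (hypothetical data, purely so the snippets below can be run end to end; d is constructed from a nonnegative point so that the equality constraint is satisfiable):

import numpy as np

# hypothetical test data; any arrays with matching shapes work
rng = np.random.default_rng(0)
m, n = 4, 6
A = rng.standard_normal((m, n))
C = rng.standard_normal((m, n))
b = rng.standard_normal(m)
x_feas = np.abs(rng.standard_normal(n))  # a nonnegative point
d = C @ x_feas                           # makes C @ x = d feasible with x >= 0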

Unfortunately, it looks like the scipy.optimize.lsq_linear() function only works for an upper/lower bound constraint:

minimize 0.5 * ||A x - b||**2
subject to lb <= x <= ub

not an equality constraint. In addition, I would like to add bounds for the solution such that it is positive ONLY. Is there an easy or clean way to do this using scipy or NumPy built-in functions?
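For reference, the bounded call looks like this (a sketch; lsq_linear exposes a bounds argument, but no parameter for an equality constraint):

from scipy.optimize import lsq_linear
import numpy as np

# positivity via box bounds -- but nowhere to pass C @ x = d
res_box = lsq_linear(A, b, bounds=(0, np.inf))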

Recommended Answer

If you stick to scipy.optimize.lsq_linear, you could add a penalty term to the objective function, i.e.

minimize 0.5 * ||A x - b||**2 + beta * ||C x - d||**2
subject to lb <= x <= ub

and choose the scalar beta sufficiently large in order to transform your constrained optimization problem into a problem with simple box constraints; this penalty route is sketched below. However, it is more convenient to use scipy.optimize.minimize for constrained optimization problems.
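A minimal sketch of this penalty route, assuming A, C, b, d as above: since 0.5 * ||A x - b||**2 + beta * ||C x - d||**2 equals 0.5 * ||A_aug x - b_aug||**2 with A_aug = [A; sqrt(2 beta) C] and b_aug = [b; sqrt(2 beta) d], lsq_linear applies directly to the stacked system:

import numpy as np
from scipy.optimize import lsq_linear

beta = 1e6  # penalty weight (a hand-tuned assumption); larger enforces C @ x = d more strictly
A_aug = np.vstack([A, np.sqrt(2*beta) * C])
b_aug = np.concatenate([b, np.sqrt(2*beta) * d])

# minimize 0.5*||A_aug x - b_aug||**2 subject to 0 <= x
res_pen = lsq_linear(A_aug, b_aug, bounds=(0, np.inf))

With scipy.optimize.minimize, by contrast, the equality constraint can be imposed exactly: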

import numpy as np
from scipy.optimize import minimize

# ... Define your np.ndarrays A, C, b, d here

# equality constraint C @ x = d
cons = [{'type': 'eq', 'fun': lambda x: C @ x - d}]

# variable bounds: 0 <= x_i (positivity)
bnds = [(0, None) for _ in range(A.shape[1])]

# initial point (x has A.shape[1] components, one per column of A)
x0 = np.ones(A.shape[1])

# res.x contains your solution
res = minimize(lambda x: np.linalg.norm(A @ x - b)**2, x0=x0, bounds=bnds, constraints=cons)

where I used the Euclidean norm.

However, note that both the objective gradient and the constraint Jacobian are approximated by finite differences, which requires many evaluations of the objective and the constraint function. Consequently, the code might be too slow for large problems. In this case, it's worth providing the exact gradient and Jacobian. Furthermore, since

# f(x) = ||Ax-b||^2 = x'A'Ax - 2b'Ax + b'b
# grad f(x) = 2A'Ax - 2A'b

we can significantly reduce the time needed to evaluate the functions by precalculating A'A, b'A and b'b. Additionally, we only need to calculate A'Ax once:

# Precalculate the matrix-vector products
AtA = A.T @ A
btA = b.T @ A
btb = b.T @ b
Atb = btA.T

# return the objective value and the objective gradient
def obj_and_grad(x):
    AtAx = AtA @ x
    obj = x.T @ AtAx - 2*btA @ x + btb
    grad = 2*AtAx - 2*Atb
    return obj, grad

# constraint (including the jacobian)
cons = [{'type': 'eq', 'fun' : lambda x: C @ x - d, 'jac' : lambda x: C}]

# res.x contains your solution
res = minimize(obj_and_grad, jac=True, x0=x0, bounds=bnds, constraints=cons)

Here, the option jac=True tells minimize that our function obj_and_grad returns a tuple containing the objective function value and the gradient.
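A quick sanity check on the result (a sketch, reusing res from above; check res.success first to confirm the solver converged):

print(res.success)                 # did the solver report convergence?
print(res.x)                       # the minimizer
print(np.allclose(C @ res.x, d))   # equality constraint satisfied?
print((res.x >= 0).all())          # positivity bounds respected?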
