LinearConstraint in scipy.optimize


Problem description


I would like to use scipy.optimize to minimize a function (eventually non-linear) over a large set of linear inequalities. As a warm-up, I'm trying to minimize x+y over the box 0<=x<=1, 0<=y<=1. Following the suggestion of Johnny Drama below, I am currently using a dict-comprehension to produce the dictionary of inequalities, but am not getting the expected answer (min value = 0, attained at (0,0)).

New section of code (currently relevant):

import numpy as np
from scipy.optimize import minimize


# Create initial point.
x0 = [.1, .1]

# Create function to be minimized.
def obj(x):
    return x[0] + x[1]

# Create linear constraints  lbnd <= A*(x,y)^T <= upbnd.
A = np.array([[1, 0], [0, 1]])
b1 = np.array([0, 0])
b2 = np.array([1, 1])

cons = [{"type": "ineq", "fun": lambda x: np.matmul(A[i, :], x) - b1[i]} for i in range(A.shape[0])]
cons2 = [{"type": "ineq", "fun": lambda x: b2[i] - np.matmul(A[i, :], x)} for i in range(A.shape[0])]
cons.extend(cons2)

sol = minimize(obj, x0, constraints=cons)
print(sol)

Original version of question:

I would like to use the LinearConstraint object in scipy.optimize, as described in the tutorial here: "Defining linear constraints"

I've tried to do a simpler example, where it's obvious what the answer should be: minimize x+y over the square 0<=x<=1, 0<=y<=1. Below is my code, which returns the error "'LinearConstraint' object is not iterable", but I don't see how I'm trying to iterate.

EDIT 1: The example is deliberately oversimplified. Ultimately, I want to minimize a non-linear function over a large number of linear constraints. I know that I can use a dictionary comprehension to turn my matrix of constraints into a list of dictionaries, but I'd like to know whether LinearConstraint can be used as an off-the-shelf way to turn matrices into constraints.

EDIT 2: As pointed out by Johnny Drama, LinearConstraint is for a particular method. So above I've tried to use his suggestion of a dict-comprehension instead to produce the linear constraints, but I am still not getting the expected answer.

Original section of code (now irrelevant):

from scipy.optimize import minimize
from scipy.optimize import LinearConstraint


# Create initial point.
x0 = [.1, .1]

# Create function to be minimized.
def obj(x):
    return x[0] + x[1]

# Create linear constraints  lbnd <= A*(x,y)^T <= upbnd.
A = [[1, 0], [0, 1]]
lbnd = [0, 0]
upbnd = [1, 1]

lin_cons = LinearConstraint(A, lbnd, upbnd)

sol = minimize(obj, x0, constraints=lin_cons)
print(sol)

Solution

As newbie already said, use scipy.optimize.linprog if you want to solve an LP (linear program), i.e. when your objective function and all your constraints are linear. If either the objective or one of the constraints isn't linear, we are facing an NLP (nonlinear optimization problem), which can be solved by scipy.optimize.minimize:

minimize(obj_fun, x0=xinit, bounds=bnds, constraints=cons)

where obj_fun is your objective function, xinit an initial point, bnds a list of tuples for the bounds of your variables, and cons a list of constraint dicts.
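
Since the warm-up problem in your question (minimize x + y over the unit box) is itself an LP, linprog solves it directly. A minimal sketch, assuming the box formulation from the question; the cost vector [1, 1] encodes the objective x + y, and the box goes straight into bounds:

from scipy.optimize import linprog

# Minimize x + y subject to 0 <= x <= 1, 0 <= y <= 1.
res = linprog(c=[1, 1], bounds=[(0, 1), (0, 1)])
print(res.x)  # expected minimizer: [0., 0.]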


Here's an example. Suppose we want to solve the following NLP:

    minimize     (x_1 - 1)^2 + (x_2 - 2.5)^2
    subject to    x_1 - 2*x_2 >= -2
                 -x_1 - 2*x_2 >= -6
                 -x_1 + 2*x_2 >= -2
                  x_1 >= 0,  x_2 >= 0

Since all constraints are linear, we can express them by an affine-linear function A*x - b such that we have the inequality A*x >= b. Here A is a 3x2 matrix and b is the 3x1 right-hand-side vector:

import numpy as np
from scipy.optimize import minimize

obj_fun = lambda x: (x[0] - 1)**2 + (x[1] - 2.5)**2
A = np.array([[1, -2], [-1, -2], [-1, 2]])
b = np.array([-2, -6, -2])
bnds = [(0, None) for i in range(A.shape[1])]  # x_1 >= 0, x_2 >= 0
xinit = [0, 0] 

Now the only thing left to do is defining the constraints, each one has to be a dict of the form

{"type": "ineq", "fun": constr_fun}

where constr_fun is a callable function such that constr_fun >= 0 holds at any feasible point. Thus, we could define each constraint separately:

cons = [{'type': 'ineq', 'fun': lambda x:  x[0] - 2 * x[1] + 2},
        {'type': 'ineq', 'fun': lambda x: -x[0] - 2 * x[1] + 6},
        {'type': 'ineq', 'fun': lambda x: -x[0] + 2 * x[1] + 2}]

and we'd be done. In practice, however, this can become quite cumbersome when there are many constraints. Instead, we can pass all constraints at once:

cons = [{"type": "ineq", "fun": lambda x: A @ x - b}]

where @ denotes the matrix multiplication operator. This works because a single "ineq" dict whose fun returns a vector requires each component of the returned array to be nonnegative. Putting it all together,

res = minimize(obj_fun, x0=xinit, bounds=bnds, constraints=cons)
print(res)

yields

     fun: 0.799999999999998
     jac: array([ 0.79999999, -1.59999999])
 message: 'Optimization terminated successfully.'
    nfev: 16
     nit: 4
    njev: 4
  status: 0
 success: True
       x: array([1.39999999, 1.69999999])
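
A quick sanity check, reusing A, b and res from above: at the reported minimizer, every component of A @ x - b should be nonnegative (the first constraint is active, so its residual is only zero up to solver tolerance):

print(A @ res.x - b)                   # roughly [0., 1.2, 4.] -- all >= 0
print((A @ res.x - b >= -1e-8).all())  # True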


Likewise, you could use a LinearConstraint object:

from scipy.optimize import LinearConstraint

# lb <= A @ x <= ub. In our case: lb = b, ub = inf
lincon = LinearConstraint(A, b, np.inf*np.ones(3))

# rest as above
res = minimize(obj_fun, x0=xinit, bounds=bnds, constraints=(lincon,))
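
Depending on your SciPy version, the default solver chosen by minimize may not accept new-style constraint objects, which would also explain the "'LinearConstraint' object is not iterable" error from your original code. The trust-constr method supports LinearConstraint natively, so naming it explicitly is a safe variant:

# trust-constr understands LinearConstraint objects directly.
res = minimize(obj_fun, x0=xinit, bounds=bnds,
               constraints=lincon, method='trust-constr')
print(res.x)  # again approximately [1.4, 1.7]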


Edit: To answer your new question:

# b1 <= A*x    <==>   -b1 >= -A*x       <==>   A*x - b1 >= 0
# A*x <= b2    <==>    A*x - b2 <= 0    <==>  -A*x + b2 >= 0
cons = [{"type": "ineq", "fun": lambda x: A @ x - b1},
        {"type": "ineq", "fun": lambda x: -A @ x + b2}]
sol = minimize(obj, x0, constraints=cons)
print(sol)
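
Finally, a note on why the dict-comprehension in your new code ran but gave the wrong answer: Python closures bind the loop variable i late, so after the comprehension finishes, every lambda sees the final value of i and only the last row of A is ever enforced. Binding i as a default argument freezes the current value; a sketch of your comprehension with that one change:

# i=i captures the current loop value inside each lambda.
cons = [{"type": "ineq", "fun": lambda x, i=i: np.matmul(A[i, :], x) - b1[i]}
        for i in range(A.shape[0])]
cons += [{"type": "ineq", "fun": lambda x, i=i: b2[i] - np.matmul(A[i, :], x)}
         for i in range(A.shape[0])]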

