Restarting an optimisation with Pymoo


Problem description



I'm trying to restart an optimisation in pymoo.

I have a problem defined as:

class myOptProblem(Problem):
    """my body goes here"""

algorithm = NSGA2(pop_size=24)  

problem = myOptProblem(opt_obj=dp_ptr,
                       nvars=7,
                       nobj=4,
                       nconstr=0,
                       lb=0.3 * np.ones(7),
                       ub=0.7 * np.ones(7),
                       parallelization=('threads', cpu_count(),))

res = minimize(problem,
               algorithm,
               ('n_gen', 100),
               seed=1,
               verbose=True)

During the optimisation I write the design vectors and results to a .csv file. An example of design_vectors.csv is:

5.000000000000000000e+00, 4.079711567060104183e-01, 6.583544872784267143e-01, 4.712364759485179189e-01, 6.859360188593541796e-01, 5.653765991273791425e-01, 5.486782880836487131e-01, 5.275405748345924906e-01,
7.000000000000000000e+00, 5.211287914743063521e-01, 6.368123569438421949e-01, 3.496693260479644128e-01, 4.116734716044557763e-01, 5.343037085833151068e-01, 6.878382993278697732e-01, 5.244120877022839800e-01, 
9.000000000000000000e+00, 5.425317846613321171e-01, 5.275405748345924906e-01, 4.269449637288642574e-01, 6.954464617649794844e-01, 5.318980876983187001e-01, 4.520564690494201510e-01, 5.203792876471586837e-01, 
1.100000000000000000e+01, 4.579502451694219545e-01, 6.853050113762846340e-01, 3.695822666721857441e-01, 3.505318077758549089e-01, 3.540316632186925050e-01, 5.022648662707586142e-01, 3.086099221096791911e-01, 
3.000000000000000000e+00, 4.121775968257620493e-01, 6.157117313805953174e-01, 3.412904026310568106e-01, 4.791574104703620329e-01, 6.634382012372381787e-01, 4.174456593494717538e-01, 4.151101354345394512e-01, 

The results.csv is:

5.000000000000000000e+00, 1.000000000000000000e+05, 1.000000000000000000e+05, 1.000000000000000000e+05, 1.000000000000000000e+05, 
7.000000000000000000e+00, 1.041682833582066703e+00, 3.481167125962069189e-03, -5.235115318709097909e-02, 4.634480813876099177e-03, 
9.000000000000000000e+00, 1.067730307802263967e+00, 2.194702810002167534e-02, -3.195892023664552717e-01, 1.841232582360878426e-03, 
1.100000000000000000e+01, 8.986880344052742275e-01, 2.969022150977750681e-03, -4.346692726475211849e-02, 4.995468429444801205e-03, 
3.000000000000000000e+00, 9.638770499257821589e-01, 1.859596479928402393e-02, -2.723230073142696162e-01, 1.600910928983005632e-03, 

The first column is the index of the design vector - because I thread asynchronously, I specify the indices.
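A minimal sketch of reading these two files back and aligning them by that index column, using only NumPy (the inline sample rows and the column layout are taken from the listings above; `io.StringIO` stands in for the real `design_vectors.csv` / `results.csv` files):

```python
import io
import numpy as np

# Stand-ins for design_vectors.csv and results.csv: first column is the
# design-vector index, remaining columns are the data. Rows may arrive in
# any order because the evaluations ran asynchronously.
design_csv = io.StringIO(
    "5.0, 0.40, 0.65, 0.47\n"
    "7.0, 0.52, 0.63, 0.34\n"
    "3.0, 0.41, 0.61, 0.34\n"
)
results_csv = io.StringIO(
    "7.0, 1.04, 3.4e-03, -5.2e-02\n"
    "5.0, 1.0e+05, 1.0e+05, 1.0e+05\n"
    "3.0, 0.96, 1.8e-02, -2.7e-01\n"
)

design = np.genfromtxt(design_csv, delimiter=",")
results = np.genfromtxt(results_csv, delimiter=",")

# Sort both arrays by the index column so the rows line up,
# then drop the index column to obtain X and F.
design = design[np.argsort(design[:, 0])]
results = results[np.argsort(results[:, 0])]
assert np.array_equal(design[:, 0], results[:, 0])

X = design[:, 1:]   # design vectors
F = results[:, 1:]  # objective values
```

These `X` and `F` arrays are what the accepted answer below feeds into the `Population` object.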

I see that it should be possible to restart the optimisation via the sampling parameter for pymoo.algorithms.nsga2.NSGA2 but I couldn't find a working example. The documentation for both population and individuals is also not clear. So how can I restart a simulation with the previous results?

Solution

Yes, you can initialize the algorithm object with a population instead of doing it randomly.

I have written a small tutorial for a biased initialization: https://pymoo.org/customization/initialization.html

Because in your case the data already exists, in a CSV or in-memory file, you might want to create a dummy problem (I have called it Constant in my example) to set the attributes in the Population object. (In the population, X, F, G, CV and feasible need to be set.) Another way would be to set the attributes directly...

The biased initialization with a dummy problem is shown below. If you already use pymoo when storing the csv files, you can also just np.save the Population object directly and load it. Then all the intermediate steps become unnecessary.
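The save-and-reload route mentioned above can be sketched with plain NumPy archives. Note this is an assumption on my part: the answer only mentions np.save of the Population object itself, whereas here the raw `X` and `F` arrays are checkpointed and restored, which is enough to rebuild a population later:

```python
import os
import tempfile

import numpy as np

# Hypothetical checkpoint of the current population's data,
# matching the question's dimensions (pop_size=24, nvars=7, nobj=4).
X = np.random.random((24, 7))  # design vectors
F = np.random.random((24, 4))  # objective values

path = os.path.join(tempfile.mkdtemp(), "checkpoint.npz")
np.savez(path, X=X, F=F)

# Later: reload the arrays and use them to seed the next run.
data = np.load(path)
assert np.array_equal(data["X"], X)
assert np.array_equal(data["F"], F)
```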

I am planning to improve the checkpoint implementation in the future. So if you have further feedback or use cases that are not yet possible, please let me know.

import numpy as np

from pymoo.algorithms.nsga2 import NSGA2
from pymoo.algorithms.so_genetic_algorithm import GA
from pymoo.factory import get_problem, G1, Problem
from pymoo.model.evaluator import Evaluator
from pymoo.model.population import Population
from pymoo.optimize import minimize


class YourProblem(Problem):

    def __init__(self, n_var=10):
        super().__init__(n_var=n_var, n_obj=1, n_constr=0, xl=0, xu=1, type_var=np.double)

    def _evaluate(self, x, out, *args, **kwargs):
        out["F"] = np.sum(np.square(x - 0.5), axis=1)


problem = YourProblem()

# create initial data and set it on the population object - for you, this comes from your files
N = 300
X = np.random.random((N, problem.n_var))
F = np.random.random((N, problem.n_obj))
G = np.random.random((N, problem.n_constr))


class Constant(YourProblem):

    def _evaluate(self, x, out, *args, **kwargs):
        out["F"] = F
        out["G"] = G


pop = Population().new("X", X)
Evaluator().eval(Constant(), pop)

algorithm = GA(pop_size=100, sampling=pop)

minimize(problem,
         algorithm,
         ('n_gen', 10),
         seed=1,
         verbose=True)
