Resuming an optimization in scipy.optimize?


Question

scipy.optimize presents many different methods for local and global optimization of multivariate systems. However, I have a very long optimization run that may be interrupted (and in some cases I may want to interrupt it deliberately). Is there any way to restart... well, any of them? I mean, clearly one can provide the last, most optimized set of parameters found as the initial guess, but that's not the only state in play - for example, there are also gradients (Jacobians), populations in differential evolution, etc. I obviously don't want these to have to start over as well.

I see little way to provide these to scipy, nor to save its state. For functions that take a Jacobian, for example, there is a Jacobian argument ("jac"), but it's either a boolean (indicating that your evaluation function returns a Jacobian, which mine doesn't) or a callable function (I would only have the single result from the last run to provide). Nothing takes just the last Jacobian array available. And with differential evolution, loss of the population would be horrible for performance and convergence.

Are there any solutions to this? Any way to resume optimizations at all?

Answer

The general answer is no, there's no general solution apart from, just as you say, starting from the last estimate from the previous run.
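As a minimal sketch of that warm-start approach: run with a small iteration budget to simulate an interruption, then feed the best point found back in as the new initial guess. (The function and method here are just illustrative; internal solver state such as the simplex or gradient history is still lost, which is exactly the limitation described above.)

```python
import numpy as np
from scipy.optimize import minimize, rosen

x0 = np.array([-3.0, -4.0])

# First run, capped at a small iteration budget to simulate an interruption.
first = minimize(rosen, x0, method="Nelder-Mead", options={"maxiter": 50})

# "Resume" by restarting from the best point found so far. The new run
# rebuilds its internal state (here, the simplex) around that point.
second = minimize(rosen, first.x, method="Nelder-Mead", options={"maxiter": 500})
```

Because the restarted run begins at the previous best point, it can only improve on it, but any curvature or population information accumulated in the first run is discarded.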

For differential evolution specifically, though, you can instantiate the DifferentialEvolutionSolver, which you can pickle at a checkpoint and unpickle later to resume. (The suggestion comes from https://github.com/scipy/scipy/issues/6517)
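A rough sketch of that checkpointing pattern: step the solver generation by generation, serialize the whole object (population included), then deserialize and continue. Note that DifferentialEvolutionSolver lives in a private module (scipy.optimize._differentialevolution), so the import path and pickling behavior may change between scipy versions.

```python
import pickle

import numpy as np
from scipy.optimize import rosen
# NOTE: private module -- this import path may change between scipy versions
from scipy.optimize._differentialevolution import DifferentialEvolutionSolver

bounds = [(-5, 5), (-5, 5)]
solver = DifferentialEvolutionSolver(rosen, bounds, seed=1)

# Advance one generation at a time; each next() returns (best_x, best_energy).
for _ in range(20):
    best_x, best_energy = next(solver)

# Checkpoint: serialize the entire solver, population and RNG state included.
state = pickle.dumps(solver)
energy_at_checkpoint = best_energy

# ... later, possibly after an interruption: restore and keep iterating.
solver = pickle.loads(state)
for _ in range(20):
    best_x, best_energy = next(solver)
```

In a real run you would dump to a file inside the loop rather than keep the bytes in memory, so that an unexpected kill loses at most one generation of work.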
