Incremental Learning using MAXSMT

Problem description

Can we use the previous solution of a MaxSMT solver (optimize) in an incremental way in z3? Also, is there any way to print out the soft assertions on the optimizer?

Recommended answer

The answer is YES if you are asking whether it is technically possible to run either z3 or OptiMathSAT incrementally on a MaxSMT problem (using the API).
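
For instance, here is a minimal z3py sketch, assuming the Python API (the variables, constraints, weights, and ids below are illustrative, not taken from the question): an Optimize object is used incrementally with push/pop and add_soft, and its soft assertions can be inspected through sexpr(), which prints the SMT-LIB2 view of the current state including the assert-soft commands, and objectives().

```python
from z3 import Optimize, Int, sat

x, y = Int('x'), Int('y')

opt = Optimize()
opt.add(x >= 0, y >= 0)                       # hard constraints
opt.add_soft(x > 5, weight=1, id="goal_a")    # soft constraints (illustrative)
opt.add_soft(y > 5, weight=2, id="goal_a")

if opt.check() == sat:
    print("first call:", opt.model())

# Incremental step: push a frame, add more hard/soft constraints, solve
# again, then pop to restore the previous state.
opt.push()
opt.add(x + y <= 8)
opt.add_soft(x > 3, weight=1, id="goal_a")
if opt.check() == sat:
    print("second call:", opt.model())
opt.pop()

# Inspecting the soft assertions: sexpr() prints the SMT-LIB2 view of the
# current state, including the (assert-soft ...) commands; objectives()
# lists the objective terms the soft clauses were compiled into.
print(opt.sexpr())
print(opt.objectives())
```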

All soft-clauses with the same id, at the moment in which one performs a check-sat, are considered part of the same MaxSMT goal. In essence, the OMT solver evaluates the relevant set of soft-clauses of a MaxSMT goal lazily. This holds for both z3 and OptiMathSAT.
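
To make the grouping concrete, a small z3py sketch (the Boolean variables, weights, and ids are made up for illustration): soft clauses added under the same id before the check are merged into one MaxSMT goal, while a different id yields a separate goal.

```python
from z3 import Optimize, Bool

a, b, c = Bool('a'), Bool('b'), Bool('c')

opt = Optimize()
opt.add_soft(a, weight=1, id="obj1")
opt.add_soft(b, weight=1, id="obj1")   # same id: merged into the same MaxSMT goal
opt.add_soft(c, weight=3, id="obj2")   # different id: a separate goal

opt.check()                            # both goals are evaluated here, lazily
print(opt.objectives())                # two objective terms, one per id
```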

An optimal solution found at an early stage may be far away from the optimal solution of later stages of the iterative process.

When dealing with a MaxSMT problem, the ability of an OMT solver to reuse learned clauses across incremental calls may depend on the optimization algorithm that is being used.

I see two possible cases:

  • One is using a core-based MaxSMT engine. In this case, exploring formulations of the problem with an increasing level of complexity may help identify a tractable subset of the original problem. However, notice that lemmas involving soft-constraints learned at previous iterations may not be useful at later stages (in fact, the OMT solver is going to discard all of these clauses and re-compute them if necessary).

  • One is using a sat-based MaxSMT engine. In this case, the benefit of splitting the problem into smaller chunks is not clear to me, other than focusing the search on particular groups of (possibly related) soft-clauses. The OMT solver could be given all soft-constraints at once, together with a hard timeout, and it would still be able to yield a partial optimal solution when the alarm fires. (T-lemmas involving the cost function are not going to be useful across incremental calls because the cost function changes. In the best case, the OMT solver discards them; in the worst case, these T-lemmas remain in the environment and clutter the search without altering the solution.)

I admit that it is a bit hard to predict the performance of the OMT solver, because both approaches introduce overhead. On the one hand, we have the overhead of the incremental calls and the fact that the optimization process is restarted from scratch multiple times. On the other hand, we have the overhead of performing BCP over a much larger set of soft-clauses. I would guess that the balance tips in favor of the incremental approach for sufficiently large sets of soft-clauses. [This would be an interesting subject of investigation; I would love to read a paper about it!]
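
As a rough illustration of this trade-off (all constraints, chunk sizes, and the timeout below are made up), here is a z3py sketch of the two strategies: submitting the soft clauses in growing batches with one check() per batch, versus asserting everything at once under a hard timeout and querying the best bounds found when it fires.

```python
from z3 import Optimize, Int, sat

xs = [Int('x_%d' % i) for i in range(30)]
hard = [x >= 0 for x in xs]            # illustrative hard constraints
soft = [x >= 1 for x in xs]            # illustrative soft constraints

# Strategy 1: incremental calls over a growing subset of the soft clauses.
opt = Optimize()
opt.add(hard)
done = 0
for chunk_end in (10, 20, 30):
    for clause in soft[done:chunk_end]:     # only add the new clauses
        opt.add_soft(clause, weight=1, id="goal")
    done = chunk_end
    if opt.check() == sat:
        print("after", chunk_end, "soft clauses:", opt.model())

# Strategy 2: everything at once, under a hard timeout (in milliseconds).
opt2 = Optimize()
opt2.set("timeout", 5000)
opt2.add(hard)
handle = None
for clause in soft:
    handle = opt2.add_soft(clause, weight=1, id="goal")
result = opt2.check()                       # may be 'unknown' if the timeout fires
# Even then, the best penalty bounds found so far can still be queried.
print("one-shot:", result, opt2.lower(handle), opt2.upper(handle))
```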
