Parameter Optimization in Python

Question

Given a Python callable that returns a fitness measurement (0.0=horrible, 0.5=ok, 1.0=perfect) and a description of its parameters (type=bool|int|float|nominal, min, max), what are robust implementations of parameter optimizers that can find the combination of parameters that gets the fitness measure as high as possible? I'm not looking for an exhaustive search guaranteed to find the global optimum. An approximation would be fine.

I've seen scipy's optimize module referenced a lot, but also scikit-learn's gridsearch. What's the practical difference between these two? What are other options?

Answer

Given a parameter space and the task to find an optimum, gridsearch is probably the easiest thing you can do: discretize the parameter space, check all combinations by brute force, and return the parameter combination that yielded the best result.
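As an illustration, here is a minimal brute-force grid search over a toy two-parameter fitness function; the function, the grids, and the ranges are all made up for the example (scikit-learn's GridSearchCV does the equivalent for estimator hyperparameters):

```python
from itertools import product

def fitness(x, y):
    # made-up fitness function for the example: peaks at x=2, y=-1
    return 1.0 / (1.0 + (x - 2) ** 2 + (y + 1) ** 2)

# discretize each parameter's range into candidate values
x_grid = [i * 0.5 for i in range(-4, 9)]   # -2.0 .. 4.0
y_grid = [i * 0.5 for i in range(-8, 5)]   # -4.0 .. 2.0

# brute force: evaluate every combination, keep the best
best_params, best_fit = None, float("-inf")
for x, y in product(x_grid, y_grid):
    f = fitness(x, y)
    if f > best_fit:
        best_params, best_fit = (x, y), f

print(best_params, best_fit)  # → (2.0, -1.0) 1.0
```

With 13 candidate values per axis this already costs 169 evaluations; every additional parameter multiplies that count again, which is exactly the scaling problem described next.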

This works, but as you can imagine, it does not scale well: the number of combinations grows exponentially with the number of parameters. For high-dimensional optimization problems this is simply not feasible.

Strategies to improve on this depend on what additional information you have. In the best case, you are optimizing a smooth and differentiable function; then you can use numerical optimization.

In numerical optimization routines you exploit the fact that the gradient of a function points in the direction of steepest ascent. So if you want to increase the function value, you take a small step along the gradient; as long as the gradient is nonzero, this improves the function value.
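A minimal sketch of this idea, using plain gradient ascent with a finite-difference gradient on a made-up smooth fitness function (scipy's routines are far more sophisticated, but the principle is the same):

```python
def fitness(x):
    # made-up smooth fitness: maximum of 1.0 at x = 3
    return 1.0 / (1.0 + (x - 3) ** 2)

def gradient(x, h=1e-6):
    # numerical derivative via central differences
    return (fitness(x + h) - fitness(x - h)) / (2 * h)

# gradient ascent: repeatedly step in the direction of the gradient
x, step = 0.0, 0.5
for _ in range(1000):
    g = gradient(x)
    if abs(g) < 1e-8:   # gradient (approximately) zero: at an optimum
        break
    x += step * g

print(round(x, 3), round(fitness(x), 3))  # → 3.0 1.0
```

Note the stopping condition: once the gradient is (approximately) zero there is no direction left that improves the value, which is why these methods find local, not necessarily global, optima.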

Most of scipy's optimization routines exploit this powerful concept. This way you can optimize high-dimensional functions using the additional information you get about the neighborhood of your current position.

So if you do not have a smooth and differentiable function, scipy's gradient-based numerical routines cannot be used.

Note that information from the neighborhood of your current parameter vector can be exploited in non-smooth optimization as well. Basically you do the same thing: you check a window around your current estimate and try to improve by finding a better value in that window.
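A minimal sketch of such a neighborhood search (hill climbing) over the question's mixed parameter types; the fitness function and the neighborhood definition here are purely illustrative:

```python
def fitness(params):
    # made-up non-smooth fitness over a (bool, int) parameter pair:
    # enabling the flag helps, and the best integer setting is n = 10
    use_flag, n = params
    score = 0.7 if use_flag else 0.3
    score -= 0.01 * abs(n - 10)
    return score

def neighbors(params):
    # the "window" around the current estimate: flip the bool, nudge the int
    use_flag, n = params
    return [(not use_flag, n), (use_flag, n - 1), (use_flag, n + 1)]

# hill climbing: move to the best neighbor until no neighbor improves
current = (False, 0)
while True:
    best = max(neighbors(current), key=fitness)
    if fitness(best) <= fitness(current):
        break
    current = best

print(current, round(fitness(current), 2))  # → (True, 10) 0.7
```

Because no gradient is needed, this works for bool, int, and nominal parameters too; the price is that, like gradient methods, it can get stuck in a local optimum, which is why variants add random restarts or accept occasional downhill moves (e.g. simulated annealing).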
