Are the built-in probability density functions of `scipy.stats.distributions` slower than a user-provided one?

Problem Description

Suppose I have an array: adata=array([0.5, 1., 2., 3., 6., 10.]), and I want to calculate the log-likelihood of the Weibull distribution for this array, given the parameters [5., 1.5] and [5.1, 1.6]. It never occurred to me that I would need to write my own Weibull probability density function for this task, as it is already provided in scipy.stats.distributions. So, this ought to do it:

from scipy import stats
from numpy import *
adata=array([0.5, 1.,2.,3.,6.,10.])
def wb2LL(p, x): #log-likelihood of 2-parameter Weibull distribution
    # stats.weibull_min.pdf(x, c, loc, scale): p[1] is the shape c, p[0] the scale
    return sum(log(stats.weibull_min.pdf(x, p[1], 0., p[0])), axis=1)

And:

>>> wb2LL(array([[5.,1.5],[5.1,1.6]]).T[...,newaxis], adata)
array([-14.43743911, -14.68835298])
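
As an aside (this shape walkthrough is an added illustration, not part of the original question), the .T[..., newaxis] gymnastics exist so that both parameter pairs are evaluated against adata in a single broadcast:

from numpy import array, newaxis

adata = array([0.5, 1., 2., 3., 6., 10.])
p = array([[5., 1.5], [5.1, 1.6]]).T[..., newaxis]
print(p.shape)     # (2, 2, 1): p[0] stacks both scales, p[1] both shape parameters
print(p[0].shape)  # (2, 1): broadcasts against adata.shape == (6,) to give (2, 6)

Then sum(..., axis=1) collapses the length-6 data axis, leaving one log-likelihood per parameter set.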

Or I could reinvent the wheel and write a new Weibull pdf function, such as:

wbp=lambda p, x: p[1]/p[0]*((x/p[0])**(p[1]-1))*exp(-((x/p[0])**p[1]))  #hand-written Weibull pdf: p[0] scale, p[1] shape
def wb2LL1(p, x): #log-likelihood of 2-parameter Weibull
    return sum(log(wbp(p,x)), axis=1)

And:

>>> wb2LL1(array([[5.,1.5],[5.1,1.6]]).T[...,newaxis], adata)
array([-14.43743911, -14.68835298])

Admittedly, I have always taken it for granted that if something is already in scipy, it should be very well optimized, and reinventing the wheel is seldom going to make it faster. But here comes the surprise: with timeit, 100000 calls of wb2LL1(array([[5.,1.5],[5.1,1.6]]).T[...,newaxis], adata) take 2.156 s, while 100000 calls of wb2LL(array([[5.,1.5],[5.1,1.6]]).T[...,newaxis], adata) take 5.219 s; the built-in stats.weibull_min.pdf() is more than twice as slow.
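
For reference, here is a minimal, self-contained version of that benchmark built on the standard timeit module (a sketch; absolute numbers will vary with the machine and the SciPy version):

import timeit

setup = '''
from scipy import stats
from numpy import array, log, exp, newaxis
adata = array([0.5, 1., 2., 3., 6., 10.])
p = array([[5., 1.5], [5.1, 1.6]]).T[..., newaxis]

def wb2LL(p, x):
    return log(stats.weibull_min.pdf(x, p[1], 0., p[0])).sum(axis=1)

wbp = lambda p, x: p[1]/p[0]*((x/p[0])**(p[1]-1))*exp(-((x/p[0])**p[1]))
def wb2LL1(p, x):
    return log(wbp(p, x)).sum(axis=1)
'''

# 100000 calls each, mirroring the figures quoted above
print(timeit.timeit('wb2LL(p, adata)', setup=setup, number=100000))   # built-in pdf
print(timeit.timeit('wb2LL1(p, adata)', setup=setup, number=100000))  # hand-written pdf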

Checking the source code python_path/site-packages/scipy/stats/distributions.py didn't provide an easy answer, at least for me. If anything, from it I would expect stats.weibull_min.pdf() to be almost as fast as wbp().

Relevant source code, lines 2999-3033:

class frechet_r_gen(rv_continuous):
    """A Frechet right (or Weibull minimum) continuous random variable.

    %(before_notes)s

    See Also
    --------
    weibull_min : The same distribution as `frechet_r`.
    frechet_l, weibull_max

    Notes
    -----
    The probability density function for `frechet_r` is::

        frechet_r.pdf(x, c) = c * x**(c-1) * exp(-x**c)

    for ``x > 0``, ``c > 0``.

    %(example)s

    """
    def _pdf(self, x, c):
        return c*pow(x,c-1)*exp(-pow(x,c))
    def _logpdf(self, x, c):
        return log(c) + (c-1)*log(x) - pow(x,c)
    def _cdf(self, x, c):
        return -expm1(-pow(x,c))
    def _ppf(self, q, c):
        return pow(-log1p(-q),1.0/c)
    def _munp(self, n, c):
        return special.gamma(1.0+n*1.0/c)
    def _entropy(self, c):
        return -_EULER / c - log(c) + _EULER + 1
frechet_r = frechet_r_gen(a=0.0, name='frechet_r', shapes='c')
weibull_min = frechet_r_gen(a=0.0, name='weibull_min', shapes='c')

So, the question is: is stats.weibull_min.pdf() really slower? If so, how come?

Answer

The pdf() method is defined in the rv_continuous class, and it is this generic method that ends up calling frechet_r_gen._pdf(). The code of pdf() is:

def pdf(self,x,*args,**kwds):
    loc,scale=map(kwds.get,['loc','scale'])
    args, loc, scale = self._fix_loc_scale(args, loc, scale)
    x,loc,scale = map(asarray,(x,loc,scale))
    args = tuple(map(asarray,args))
    x = asarray((x-loc)*1.0/scale)
    cond0 = self._argcheck(*args) & (scale > 0)
    cond1 = (scale > 0) & (x >= self.a) & (x <= self.b)
    cond = cond0 & cond1
    output = zeros(shape(cond),'d')
    putmask(output,(1-cond0)+np.isnan(x),self.badvalue)
    if any(cond):
        goodargs = argsreduce(cond, *((x,)+args+(scale,)))
        scale, goodargs = goodargs[-1], goodargs[:-1]
        place(output,cond,self._pdf(*goodargs) / scale)
    if output.ndim == 0:
        return output[()]
    return output

So pdf() carries a lot of argument-processing code: handling loc and scale, validating arguments, building condition masks, and placing the results into the output array. All of that Python-level bookkeeping runs on every call, which is what makes it slow relative to the bare formula, especially for small inputs like adata.
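
If that overhead matters, e.g. inside an optimizer loop, one workaround is to apply the loc/scale transformation yourself and call the underlying _pdf() directly. This is only a sketch: _pdf() is a private method, it skips all input validation, and it may change between SciPy versions:

from scipy import stats
from numpy import array, log, newaxis

adata = array([0.5, 1., 2., 3., 6., 10.])
p = array([[5., 1.5], [5.1, 1.6]]).T[..., newaxis]

def wb2LL_fast(p, x):
    scale, c = p[0], p[1]
    # _pdf(y, c) computes c * y**(c-1) * exp(-y**c) with none of the loc/scale
    # handling, argument checks or masking done by pdf(); for loc = 0 the
    # scaled density is _pdf(x/scale, c) / scale
    return log(stats.weibull_min._pdf(x / scale, c) / scale).sum(axis=1)

print(wb2LL_fast(p, adata))  # array([-14.43743911, -14.68835298]), as before

Separately, when the goal is a log-likelihood it is usually better to call stats.weibull_min.logpdf() (which uses the _logpdf shown above) than to take log(pdf(...)), since that avoids underflow far out in the tails; it does, however, go through the same argument-processing path as pdf().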
