How do I perform a convolution in python with a variable-width Gaussian?


Question


I need to perform a convolution using a Gaussian, but the width of the Gaussian needs to change. I'm not doing traditional signal processing; instead, I need to take my perfect Probability Density Function (PDF) and "smear" it, based on the resolution of my equipment.

For instance, suppose my PDF starts out as a spike/delta-function. I'll model this as a very narrow Gaussian. After being run through my equipment, it will be smeared out according to some Gaussian resolution. I can calculate this using the scipy.signal convolution functions.

    import numpy as np
    import matplotlib.pylab as plt

    import scipy.signal as signal
    import scipy.stats as stats

    # Create the initial function. I model a spike
    # as an arbitrarily narrow Gaussian
    mu = 1.0    # Centroid
    sig = 0.001 # Width
    original_pdf = stats.norm(mu, sig)

    x = np.linspace(0.0, 2.0, 1000)
    y = original_pdf.pdf(x)
    plt.plot(x, y, label='original')

    # Create the "smearing" function to convolve with the
    # original function.
    # I use a Gaussian, centered at 0.0 (no bias) and
    # width of 0.5
    mu_conv = 0.0    # Centroid
    sigma_conv = 0.5 # Width
    convolving_term = stats.norm(mu_conv, sigma_conv)

    xconv = np.linspace(-5, 5, 1000)
    yconv = convolving_term.pdf(xconv)

    convolved_pdf = signal.convolve(y/y.sum(), yconv, mode='same')

    plt.plot(x, convolved_pdf, label='convolved')
    plt.ylim(0, 1.2*max(convolved_pdf))
    plt.legend()
    plt.show()

This all works no problem. But now suppose my original PDF is not a spike, but some broader function. For example, a Gaussian with sigma=1.0. And now suppose my resolution actually varies over x: at x=0.5, the smearing function is a Gaussian with sigma_conv=0.5, but at x=1.5, the smearing function is a Gaussian with sigma_conv=1.5. And suppose I know the functional form of the x-dependence of my smearing Gaussian. Naively, I thought I would change the line above to

    convolving_term = stats.norm(mu_conv,lambda x: 0.2*x + 0.1)

But that doesn't work, because the norm function expects a value for the width, not a function. In some sense, I need my convolving function to be a 2D array, where I have a different smearing Gaussian for each point in my original PDF, which remains a 1D array.

So is there a way to do this with functions already defined in Python? I have some code to do this that I wrote myself....but I want to make sure I've not just re-invented the wheel.

Thanks in advance!

Matt

Solution

Question, in brief:
How do I convolve with a non-stationary kernel (for example, a Gaussian that changes width at different locations in the data), and does Python have an existing tool for this?

Answer, sort-of:
It's difficult to prove a negative, but I do not think that a function to perform a convolution with a non-stationary kernel exists in scipy or numpy. Anyway, as you describe it, it can't really be vectorized well, so you may as well do a loop or write some custom C code.
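For reference, the loop alluded to above might look something like the following sketch. This is my own illustration, not code from the original post: the name `smear_pdf`, the per-source-point kernel normalization, and the assumption of a uniform x grid are all choices I've made here. Each source point spreads into its own Gaussian whose width is set by `sigma_of_x` at that point.

```python
import numpy as np

def smear_pdf(x, y, sigma_of_x):
    """Smear y (sampled on a uniform grid x) so that each source point
    spreads into a Gaussian whose width depends on that point's position."""
    out = np.zeros_like(y, dtype=float)
    for xj, yj in zip(x, y):
        sig = sigma_of_x(xj)                      # width at this source point
        kern = np.exp(-0.5 * ((x - xj) / sig) ** 2)
        kern /= kern.sum()                        # unit mass on the grid
        out += yj * kern                          # scatter this point's mass
    return out
```

Because each kernel is normalized on the grid, the total mass of `y` is preserved (up to edge truncation); the O(n²) loop is the price of the position-dependent kernel.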

One trick that might work for you is, instead of changing the kernel size with position, to stretch the data by the inverse scale (i.e., at places where you'd want the Gaussian width to be 0.5x the base width, stretch the data to 2x). This way, you can do a single warping operation on the data, a standard convolution with a fixed-width Gaussian, and then unwarp the data back to the original scale.

The advantages of this approach are that it's very easy to write and is completely vectorized, so it will probably be fairly fast to run.

Warping the data (using, say, an interpolation method) will cause some loss of accuracy, but if you choose things so that the data is always expanded and not reduced in your initial warping operation, the losses should be minimal.
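A sketch of that warp / convolve / unwarp idea, under assumptions of my own: the coordinate change u(x) = sigma_base * integral of dx'/sigma(x') (which makes the local kernel width a constant sigma_base in u), a uniform x grid, and `scipy.ndimage.gaussian_filter1d` for the fixed-width convolution. The function name `smear_by_warping` is hypothetical, and output normalization is not handled carefully here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smear_by_warping(x, y, sigma_of_x, sigma_base, n_warp=4000):
    """Variable-width smearing via warp -> fixed convolution -> unwarp.

    x must be a uniform grid; sigma_of_x maps an array of positions to
    the desired local Gaussian width (and must stay positive).
    """
    dx = x[1] - x[0]
    # u(x) = sigma_base * integral dx'/sigma(x'): where sigma is small,
    # u stretches the axis, so a fixed kernel in u is narrow in x.
    u = sigma_base * np.cumsum(1.0 / sigma_of_x(x)) * dx
    # Resample onto a uniform grid in u (the interpolation step).
    u_uniform = np.linspace(u[0], u[-1], n_warp)
    y_warp = np.interp(u_uniform, u, y)
    # One standard convolution with a fixed-width Gaussian.
    du = u_uniform[1] - u_uniform[0]
    y_warp = gaussian_filter1d(y_warp, sigma=sigma_base / du)
    # Unwarp back onto the original x grid.
    return np.interp(u, u_uniform, y_warp)
```

An easy sanity check: with a constant `sigma_of_x` the warp is just a linear rescaling, and this reduces to an ordinary fixed-width Gaussian convolution.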
