Efficiently calculating boundary-adapted neighbourhood average


Question


    I have an image with values ranging from 0 to 1. What I like to do is simple averaging.
    But, more specifically, for a cell at the border of the image I'd like to compute the average of the pixels for that part of the neighbourhood/kernel that lies within the extent of the image. In fact this boils down to adapt the denominator of the 'mean formula', the number of pixels you divide the sum by.
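For example, with a 3×3 kernel the top-left corner pixel has only 4 neighbourhood cells inside the image, so its boundary-adapted average divides by 4 instead of 9. A toy illustration of that (not the questioner's code):

```python
import numpy

# Toy 3x3 image: with a 3x3 kernel, the top-left pixel's neighbourhood
# contains only 4 cells that lie inside the image, so the divisor is 4, not 9.
image = numpy.array([[1.0, 2.0, 3.0],
                     [4.0, 5.0, 6.0],
                     [7.0, 8.0, 9.0]])

corner_avg = image[0:2, 0:2].sum() / 4.0  # (1 + 2 + 4 + 5) / 4
```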

    I managed to do this as shown below with scipy.ndimage.generic_filter, but this is far from time-efficient.

    def fnc(buffer, count):
        # number of cells that lie inside the image (padded cells carry 2.0)
        n = float(numpy.sum(buffer < 2.0))
        # remove the contribution of the padded cells from the window total
        total = numpy.sum(buffer) - ((count - n) * 2.0)
        return total / n
    
    avg = scipy.ndimage.generic_filter(image, fnc, footprint = kernel, \
                                       mode = 'constant', cval = 2.0,   \
                                       extra_keywords = {'count': countkernel})
    

    Details

    • kernel = square array (circle represented by ones)
    • Padding with 2's and not by zeroes since then I could not properly separate zeroes of the padded area and zeroes of the actual raster
    • countkernel = number of ones in the kernel
    • n = number of cells that lie within image by excluding the cells of the padded area identified by values of 2
    • Correct the sum by subtracting (number of padded cells * 2.0) from the original neighbourhood total sum
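The question does not show how `kernel` and `countkernel` were built; one assumed-equivalent construction of a circular footprint inside a square array is:

```python
import numpy

# One possible construction of the circular footprint described above;
# the radius is a hypothetical value, the question does not state it.
radius = 3
y, x = numpy.ogrid[-radius:radius + 1, -radius:radius + 1]
kernel = (x**2 + y**2 <= radius**2).astype(int)  # square array, circle of ones
countkernel = kernel.sum()                       # number of ones in the kernel
```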

    Update(s)

    1) Padding with NaNs increases the calculation time by about 30%:

        def fnc(buffer):
            return numpy.nansum(buffer) / numpy.sum(~numpy.isnan(buffer))
    
        avg = scipy.ndimage.generic_filter(image, fnc, footprint = kernel, \
                                           mode = 'constant', cval = numpy.nan)
    

    2) Applying the solution proposed by Yves Daoust (accepted answer) definitely reduces the processing time to a minimum:

        def fnc(buffer):
            return numpy.sum(buffer)
    
        sumbigimage = scipy.ndimage.generic_filter(image, fnc, \
                                                   footprint = kernel, \
                                                   mode = 'constant', \
                                                   cval = 0.0)
        summask     = scipy.ndimage.generic_filter(mask, fnc, \
                                                   footprint = kernel, \
                                                   mode = 'constant', \
                                                   cval = 0.0)
        avg = sumbigimage / summask
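The two generic_filter passes above still call a Python function once per pixel. Since both passes only sum over the footprint, they can be replaced by scipy.ndimage.convolve, which performs the summation in compiled code. A sketch, assuming `kernel` is the footprint of ones described earlier:

```python
import numpy
from scipy import ndimage

def boundary_adapted_mean(image, kernel):
    # Sum of image values under the kernel; cells outside the image count as 0.
    sumbigimage = ndimage.convolve(image, kernel, mode='constant', cval=0.0)
    # Number of in-image cells under the kernel at each position.
    summask = ndimage.convolve(numpy.ones_like(image), kernel,
                               mode='constant', cval=0.0)
    return sumbigimage / summask
```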
    

    3) Building on Yves' tip to use an additional binary image, which in fact is applying a mask, I stumbled upon the principle of masked arrays. As such only one array has to be processed because a masked array 'blends' the image and mask arrays together.
    A small detail about the mask array: instead of filling the inner part (extent of original image) with 1's and filling the outer part (border) with 0's as done in the previous update, you must do vice versa. A 1 in a masked array means 'invalid', a 0 means 'valid'.
    This code is even 50% faster than the code supplied in update 2):

        maskedimg = numpy.ma.masked_array(imgarray, mask = maskarray)
    
        def fnc(buffer):
            return numpy.mean(buffer)
    
        avg = scipy.ndimage.generic_filter(maskedimg, fnc, footprint = kernel, \
                                           mode = 'constant', cval = 0.0)
    

    --> I must correct myself here!
    I must have made a mistake during validation: after some more calculation runs it turned out that scipy.ndimage.<filters> cannot handle masked arrays, in the sense that the mask is simply not taken into account during the filter operation.
    Some other people mentioned this too, like here and here.
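This is straightforward to check: generic_filter appears to work on the raw data of a masked array, so its result matches filtering the unmasked data directly. A small verification sketch:

```python
import numpy
from scipy import ndimage

data = numpy.arange(16, dtype=float).reshape(4, 4)
mask = numpy.zeros((4, 4), dtype=bool)
mask[0, :] = True                      # mark the first row as invalid

maskedimg = numpy.ma.masked_array(data, mask=mask)

# generic_filter operates on the underlying data; the mask is silently
# ignored, so both calls produce identical results.
filtered_masked = ndimage.generic_filter(maskedimg, numpy.mean, size=3,
                                         mode='constant', cval=0.0)
filtered_plain = ndimage.generic_filter(data, numpy.mean, size=3,
                                        mode='constant', cval=0.0)
```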


    The power of an image...

    • grey: extent of image to be processed
    • white: padded area (in my case filled with 2.0's)
    • red shades: extent of kernel
      • dark red: effective neighbourhood
      • light red: part of neighbourhood to be ignored


    How can this rather pragmatic piece of code be changed to improve the performance of the calculation?

    Many thanks in advance!

    Solution

    Unsure if this will help, as I am not proficient in scipy: use an auxiliary image of 1's in the gray area and 0's in the white area (0's too in the source image). Then apply the filter to both images with a simple sum.

    There is some hope of a speedup if scipy provides a specialized version of the filter with a built-in function for summing.

    This done, you will need to divide both images pixel by pixel.
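If a plain square window is acceptable instead of the circular footprint, the same sum-and-divide idea can be written with scipy.ndimage.uniform_filter, which is separable and therefore very fast. A sketch under that assumption (this swaps the circle for a size×size square):

```python
import numpy
from scipy import ndimage

def boundary_adapted_mean_square(image, size=3):
    # uniform_filter divides each window sum by size**2 (with zero padding),
    # so that factor cancels when dividing the two filtered images.
    num = ndimage.uniform_filter(image, size=size, mode='constant', cval=0.0)
    den = ndimage.uniform_filter(numpy.ones_like(image), size=size,
                                 mode='constant', cval=0.0)
    return num / den
```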
