Improve min/max downsampling


Problem description


I have some large arrays (~100 million points) that I need to interactively plot. I am currently using Matplotlib. Plotting the arrays as-is gets very slow and is a waste since you can't visualize that many points anyway.


So I made a min/max decimation function that I tied to the 'xlim_changed' callback of the axis. I went with a min/max approach because the data contains fast spikes that I do not want to miss by just stepping through the data. There are additional wrappers that crop to the x-limits and skip processing under certain conditions, but the relevant part is below:

import numpy as np

def min_max_downsample(x, y, num_bins):
    """Break the data into num_bins and return the min/max for each bin."""
    pts_per_bin = x.size // num_bins

    # Create a temp to hold the reshaped & slightly cropped y
    y_temp = y[:num_bins*pts_per_bin].reshape((num_bins, pts_per_bin))
    y_out = np.empty((num_bins, 2))
    # Take the min/max by rows.
    y_out[:, 0] = y_temp.max(axis=1)
    y_out[:, 1] = y_temp.min(axis=1)
    y_out = y_out.ravel()

    # This duplicates the x-value for each min/max y-pair
    x_out = np.empty((num_bins, 2))
    x_out[:] = x[:num_bins*pts_per_bin:pts_per_bin, np.newaxis]
    x_out = x_out.ravel()
    return x_out, y_out


This works pretty well and is sufficiently fast (~80ms on 1e8 points & 2k bins). There is very little lag as it periodically recalculates & updates the line's x & y-data.
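For context, the callback wiring described above can be sketched roughly as follows. This is a minimal, illustrative version under assumptions: the `on_xlim_changed` handler, the cropping via `np.searchsorted`, and the raw-plot fallback are my own reconstruction, not the question's actual wrapper code, and the real version has additional skip conditions.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend, just for the sketch
import matplotlib.pyplot as plt

def min_max_downsample(x, y, num_bins):
    """Break the data into num_bins and return the min/max for each bin."""
    pts_per_bin = x.size // num_bins
    y_temp = y[:num_bins * pts_per_bin].reshape((num_bins, pts_per_bin))
    y_out = np.empty((num_bins, 2))
    y_out[:, 0] = y_temp.max(axis=1)
    y_out[:, 1] = y_temp.min(axis=1)
    x_out = np.empty((num_bins, 2))
    x_out[:] = x[:num_bins * pts_per_bin:pts_per_bin, np.newaxis]
    return x_out.ravel(), y_out.ravel()

x_big = np.linspace(0, 10, 1_000_000)
y_big = np.cos(x_big)

fig, ax = plt.subplots()
line, = ax.plot(*min_max_downsample(x_big, y_big, 2000))

def on_xlim_changed(ax):
    # Crop to the visible window (x_big is sorted), then re-downsample.
    lo, hi = ax.get_xlim()
    i0, i1 = np.searchsorted(x_big, [lo, hi])
    if i1 - i0 > 2 * 2000:
        line.set_data(*min_max_downsample(x_big[i0:i1], y_big[i0:i1], 2000))
    else:
        # Few enough points in view to just plot them raw.
        line.set_data(x_big[i0:i1], y_big[i0:i1])

ax.callbacks.connect('xlim_changed', on_xlim_changed)
ax.set_xlim(2, 4)  # zooming triggers the callback and re-downsamples
```

Note that `line.set_data` inside the handler does not change the x-limits, so the callback does not recurse.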


However, my only complaint is in the x-data. This code duplicates the x-value of each bin's left edge and doesn't return the true x-location of the y min/max pairs. I typically set the number of bins to double the axis pixel width. So you can't really see the difference because the bins are so small... but I know it's there... and it bugs me.


So here is attempt number 2, which does return the actual x-values for every min/max pair. However, it is about 5x slower.

def min_max_downsample_v2(x, y, num_bins):
    pts_per_bin = x.size // num_bins
    # Create a temp to hold the reshaped & slightly cropped y
    y_temp = y[:num_bins*pts_per_bin].reshape((num_bins, pts_per_bin))
    # Use argmax/argmin to get the column locations
    cc_max = y_temp.argmax(axis=1)
    cc_min = y_temp.argmin(axis=1)
    rr = np.arange(0, num_bins)
    # Compute the flat indices of these locations
    flat_max = cc_max + rr*pts_per_bin
    flat_min = cc_min + rr*pts_per_bin
    # Create a boolean mask of these locations
    mm_mask = np.full((x.size,), False)
    mm_mask[flat_max] = True
    mm_mask[flat_min] = True
    x_out = x[mm_mask]
    y_out = y[mm_mask]
    return x_out, y_out


This takes roughly 400+ ms on my machine, which becomes pretty noticeable. So my question is basically: is there a way to go faster and provide the same results? The bottleneck is mostly in the numpy.argmin and numpy.argmax functions, which are a good bit slower than numpy.min and numpy.max.
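The claimed gap between the plain reductions and the arg-reductions can be checked with a quick micro-benchmark. The array shape below is a scaled-down stand-in for the real reshaped (2000, 50000) array, and absolute timings are machine-dependent:

```python
import numpy as np
import timeit

# Scaled-down stand-in for the real (2000, 50000) reshaped y array.
y = np.random.rand(2000, 5000)

# Average time per call for max vs. argmax along the bin axis.
t_max = timeit.timeit(lambda: y.max(axis=1), number=10) / 10
t_argmax = timeit.timeit(lambda: y.argmax(axis=1), number=10) / 10
print(f"max:    {t_max*1e3:.3f} ms per call")
print(f"argmax: {t_argmax*1e3:.3f} ms per call")
```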


The answer might be to just live with version #1 since it visually doesn't really matter. Or maybe try to speed it up with something like Cython (which I have never used).


FYI using Python 3.6.4 on Windows ... example usage would be something like this:

x_big = np.linspace(0, 10, 100000000)
y_big = np.cos(x_big)
x_small, y_small = min_max_downsample(x_big, y_big, 2000)    # Fast but not exactly correct.
x_small, y_small = min_max_downsample_v2(x_big, y_big, 2000) # Correct but not exactly fast.

Answer


I managed to get an improved performance by using the output of arg(min|max) directly to index the data arrays. This comes at the cost of an extra call to np.sort but the axis to be sorted has only two elements (the min. / max. indices) and the overall array is rather small (number of bins):

def min_max_downsample_v3(x, y, num_bins):
    pts_per_bin = x.size // num_bins

    x_view = x[:pts_per_bin*num_bins].reshape(num_bins, pts_per_bin)
    y_view = y[:pts_per_bin*num_bins].reshape(num_bins, pts_per_bin)
    i_min = np.argmin(y_view, axis=1)
    i_max = np.argmax(y_view, axis=1)

    r_index = np.repeat(np.arange(num_bins), 2)
    c_index = np.sort(np.stack((i_min, i_max), axis=1)).ravel()

    return x_view[r_index, c_index], y_view[r_index, c_index]


I checked the timings for your example and I obtained:

  • min_max_downsample_v1: 110 ms ± 5 ms
  • min_max_downsample_v2: 240 ms ± 8.01 ms
  • min_max_downsample_v3: 164 ms ± 1.23 ms


I also checked returning directly after the calls to arg(min|max) and the result was equally 164 ms, i.e. there's no real overhead after that anymore.
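As a sanity check (not part of the original answer), one can verify that v3 returns exactly the same points as v2, assuming no bin is constant so the per-bin min and max never fall on the same sample:

```python
import numpy as np

def min_max_downsample_v2(x, y, num_bins):
    pts_per_bin = x.size // num_bins
    y_temp = y[:num_bins * pts_per_bin].reshape((num_bins, pts_per_bin))
    cc_max = y_temp.argmax(axis=1)
    cc_min = y_temp.argmin(axis=1)
    rr = np.arange(0, num_bins)
    flat_max = cc_max + rr * pts_per_bin
    flat_min = cc_min + rr * pts_per_bin
    mm_mask = np.full((x.size,), False)
    mm_mask[flat_max] = True
    mm_mask[flat_min] = True
    return x[mm_mask], y[mm_mask]

def min_max_downsample_v3(x, y, num_bins):
    pts_per_bin = x.size // num_bins
    x_view = x[:pts_per_bin * num_bins].reshape(num_bins, pts_per_bin)
    y_view = y[:pts_per_bin * num_bins].reshape(num_bins, pts_per_bin)
    i_min = np.argmin(y_view, axis=1)
    i_max = np.argmax(y_view, axis=1)
    r_index = np.repeat(np.arange(num_bins), 2)
    c_index = np.sort(np.stack((i_min, i_max), axis=1)).ravel()
    return x_view[r_index, c_index], y_view[r_index, c_index]

# Random data: with high probability every bin's min and max differ.
rng = np.random.default_rng(0)
x = np.arange(1_000_000, dtype=float)
y = rng.standard_normal(1_000_000)

x2, y2 = min_max_downsample_v2(x, y, 2000)
x3, y3 = min_max_downsample_v3(x, y, 2000)
print(np.array_equal(x2, x3) and np.array_equal(y2, y3))  # True
```

The orders match because v2's boolean mask yields ascending flat indices, while v3 sorts the two indices within each bin and emits the bins in order, which produces the same ascending sequence.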
