Numpy Array Slicing


Problem Description

I have a 1D numpy array and some offset/length values. I would like to extract from this array all entries which fall within [offset, offset+length), and use them to build up a new 'reduced' array from the original one, consisting only of the values picked out by the offset/length pairs.

For a single offset/length pair this is trivial with standard array slicing, a[offset:offset+length]. But how can I do this efficiently (i.e. without any loops) for many offset/length values?

Thanks,
Mark

Answer

There is the naive method; just doing the slices:

>>> import numpy as np
>>> a = np.arange(100)
>>> 
>>> offset_length = [(3,10),(50,3),(60,20),(95,1)]
>>>
>>> np.concatenate([a[offset:offset+length] for offset,length in offset_length])
array([ 3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 50, 51, 52, 60, 61, 62, 63,
       64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 95])


The following might be faster, but you would have to test/benchmark.

It works by constructing a list of the desired indices, which is a valid method of indexing a numpy array.

>>> indices = [offset + i for offset,length in offset_length for i in range(length)]
>>> a[indices]
array([ 3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 50, 51, 52, 60, 61, 62, 63,
       64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 95])

It's not clear if this would actually be faster than the naive method, but it might be if you have a lot of very short intervals. But I don't know.

(This last method is basically the same as @fraxel's solution, just using a different method of making the index list.)
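For reference, @fraxel's variant (it appears as `fraxel` in the benchmark below) builds the index array with `np.arange` instead of a Python list comprehension. A sketch, using the same example data as above:

```python
import numpy as np

a = np.arange(100)
offset_length = [(3, 10), (50, 3), (60, 20), (95, 1)]

# Build one index array from a per-pair np.arange, then fancy-index once.
indices = np.concatenate([np.arange(offset, offset + length)
                          for offset, length in offset_length])
result = a[indices]
```

Because `a` here is just `np.arange(100)`, `result` equals the index array itself: the runs 3..12, 50..52, 60..79, and the single value 95.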

I've tested a few different cases: a few short intervals, a few long intervals, lots of short intervals. I used the following script:

import timeit

setup = 'import numpy as np; a = np.arange(1000); offset_length = %s'

for title, ol in [('few short', '[(3,10),(50,3),(60,10),(95,1)]'),
                  ('few long', '[(3,100),(200,200),(600,300)]'),
                  ('many short', '[(2*x,1) for x in range(400)]')]:
  print('**', title, '**')
  print('dbaupp 1st:', timeit.timeit('np.concatenate([a[offset:offset+length] for offset,length in offset_length])', setup % ol, number=10000))
  print('dbaupp 2nd:', timeit.timeit('a[[offset + i for offset,length in offset_length for i in range(length)]]', setup % ol, number=10000))
  print('    fraxel:', timeit.timeit('a[np.concatenate([np.arange(offset,offset+length) for offset,length in offset_length])]', setup % ol, number=10000))

This outputs:

** few short **
dbaupp 1st: 0.0474979877472
dbaupp 2nd: 0.190793991089
    fraxel: 0.128381967545
** few long **
dbaupp 1st: 0.0416231155396
dbaupp 2nd: 1.58000087738
    fraxel: 0.228138923645
** many short **
dbaupp 1st: 3.97210478783
dbaupp 2nd: 2.73584890366
    fraxel: 7.34302687645

This suggests that my first method is the fastest when you have a few intervals (and it is significantly faster), and my second is the fastest when you have lots of intervals.
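Whichever variant is fastest for your data, the three approaches are interchangeable: they produce identical arrays. A quick sanity check, using the "few short" intervals from the benchmark:

```python
import numpy as np

a = np.arange(1000)
offset_length = [(3, 10), (50, 3), (60, 10), (95, 1)]

# dbaupp's first method: concatenate the slices directly.
naive = np.concatenate([a[o:o + l] for o, l in offset_length])

# dbaupp's second method: fancy-index with a Python list of indices.
index_list = a[[o + i for o, l in offset_length for i in range(l)]]

# fraxel's method: fancy-index with a concatenated np.arange array.
arange_idx = a[np.concatenate([np.arange(o, o + l) for o, l in offset_length])]

assert np.array_equal(naive, index_list)
assert np.array_equal(naive, arange_idx)
```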
