numpy.ndarray enumeration over a proper subset of the dimensions?

Problem Description

(In this post, let np be shorthand for numpy.)

Suppose a is an (n + k)-dimensional np.ndarray object, for some integers n > 1 and k > 1 (IOW, n + k > 3 is the value of a.ndim). I want to enumerate a over its first n dimensions; this means that, at each iteration, the enumerator/iterator produces a pair whose first element is a tuple ii of n indices, and whose second element is the k-dimensional sub-ndarray at a[ii].

Granted, it is not difficult to code a function to do this (in fact, I give an example of such a function below), but I want to know this:

does numpy provide any special syntax or functions for carrying out this type of "partial" enumeration?

(Normally, when I want to iterate over a multidimensional np.ndarray object, I use np.ndenumerate, but it wouldn't help here, because (as far as I can tell) np.ndenumerate would iterate over all n + k dimensions.)

Assuming that the answer to the question above is yes, then there's this follow-up:

what about the case where the n dimensions to iterate over are not contiguous?

(In this case, the first element of the pair returned at each iteration by the enumerator/iterator would be a tuple of r > n elements, some of which would be a special value denoting "all", e.g. slice(None); the second element of this pair would still be a k-dimensional ndarray.)
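
To make the desired behaviour concrete, here is a rough sketch of what I have in mind for the non-contiguous case (the name partial_enumerate_dims and its implementation are only illustrative, not an existing numpy facility):

import numpy as np
import itertools as it

def partial_enumerate_dims(nda, dims):
    """Sketch: enumerate NDA over the axes listed in DIMS, yielding pairs
    whose first element is an index tuple that uses slice(None) for the
    non-enumerated axes (trailing axes are simply omitted)."""
    dims = sorted(dims)
    for idx in it.product(*[range(nda.shape[d]) for d in dims]):
        # Build an index that fixes the enumerated axes and leaves the rest "open".
        key = [slice(None)] * (max(dims) + 1)
        for d, i in zip(dims, idx):
            key[d] = i
        yield tuple(key), nda[tuple(key)]

a = np.zeros((2, 3, 4, 5))
for key, sub in partial_enumerate_dims(a, [0, 2]):
    print(key, sub.shape)  # e.g. (0, slice(None, None, None), 0) (3, 5)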

Thanks!

The code below hopefully clarifies the problem specification. The function partial_enumerate does what I would like to achieve with any special numpy constructs available for the purpose. Following the definition of partial_enumerate is a simple example for the case n = k = 2.

import numpy as np
import itertools as it
def partial_enumerate(nda, n):
  """Enumerate over the first N dimensions of the numpy.ndarray NDA.

  Returns an iterator of pairs.  The first element of each pair is a tuple 
  of N integers, corresponding to a partial index I into NDA; the second element
  is the subarray of NDA at I.
  """

  # ERROR CHECKING & HANDLING OMITTED
  for ii in it.product(*[range(d) for d in nda.shape[:n]]):
    yield ii, nda[ii]

a = np.zeros((2, 3, 4, 5))
for ii, vv in partial_enumerate(a, 2):
    print(ii, vv.shape)

Each line of the output is a "pair of tuples", where the first tuple represents a partial set of n coordinates in a, and the second one represents the shape of the k-dimensional subarray of a at those partial coordinates (the value of this second tuple is the same for all lines, as expected from the regularity of the array):

(0, 0) (4, 5)
(0, 1) (4, 5)
(0, 2) (4, 5)
(1, 0) (4, 5)
(1, 1) (4, 5)
(1, 2) (4, 5)

In contrast, iterating over np.ndenumerate(a) in this case would result in a.size iterations, each visiting an individual cell of a.
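
As a quick check of the iteration counts (using the a and partial_enumerate defined above):

print(a.size)                                   # 120: total number of cells in a
print(sum(1 for _ in np.ndenumerate(a)))        # 120: np.ndenumerate visits every cell
print(sum(1 for _ in partial_enumerate(a, 2)))  # 6: one iteration per index pair of the first two axes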

Answer

You can use the numpy broadcasting rules to generate a Cartesian product. The numpy.ix_ function creates a list of the appropriate arrays. It's equivalent to the following:

>>> import numpy
>>> def pseudo_ix_gen(*arrays):
...     base_shape = [1 for arr in arrays]
...     for dim, arr in enumerate(arrays):
...         shape = base_shape[:]
...         shape[dim] = len(arr)
...         yield numpy.array(arr).reshape(shape)
... 
>>> def pseudo_ix_(*arrays):
...     return list(pseudo_ix_gen(*arrays))

Or, more concisely:

>>> def pseudo_ix_(*arrays):
...     shapes = numpy.diagflat([len(a) - 1 for a in arrays]) + 1
...     return [numpy.array(a).reshape(s) for a, s in zip(arrays, shapes)]
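
A quick sanity check (assuming the definitions above are in the same session) that this hand-rolled version agrees with numpy.ix_:

>>> ix_a = numpy.ix_([2, 4], [1, 3], [0, 2])
>>> ix_b = pseudo_ix_([2, 4], [1, 3], [0, 2])
>>> all(numpy.array_equal(x, y) for x, y in zip(ix_a, ix_b))
True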

The result is a list of broadcastable arrays:

>>> numpy.ix_(*[[2, 4], [1, 3], [0, 2]])
[array([[[2]],

       [[4]]]), array([[[1],
        [3]]]), array([[[0, 2]]])]

Compare this to the result of numpy.ogrid:

>>> numpy.ogrid[0:2, 0:2, 0:2]
[array([[[0]],

       [[1]]]), array([[[0],
        [1]]]), array([[[0, 1]]])]

As you can see, it's the same, but numpy.ix_ allows you to use non-consecutive indices. Now when we apply the numpy broadcasting rules, we get a cartesian product:

>>> list(numpy.broadcast(*numpy.ix_(*[[2, 4], [1, 3], [0, 2]])))
[(2, 1, 0), (2, 1, 2), (2, 3, 0), (2, 3, 2), 
 (4, 1, 0), (4, 1, 2), (4, 3, 0), (4, 3, 2)]

If, instead of passing the result of numpy.ix_ to numpy.broadcast, we use it to index an array, we get this:

>>> a = numpy.arange(6 ** 4).reshape((6, 6, 6, 6))
>>> a[numpy.ix_(*[[2, 4], [1, 3], [0, 2]])]
array([[[[468, 469, 470, 471, 472, 473],
         [480, 481, 482, 483, 484, 485]],

        [[540, 541, 542, 543, 544, 545],
         [552, 553, 554, 555, 556, 557]]],


       [[[900, 901, 902, 903, 904, 905],
         [912, 913, 914, 915, 916, 917]],

        [[972, 973, 974, 975, 976, 977],
         [984, 985, 986, 987, 988, 989]]]])

However, caveat emptor. Broadcastable arrays are useful for indexing, but if you literally want to enumerate the values, you might be better off using itertools.product:

>>> %timeit list(itertools.product(range(5), repeat=5))
10000 loops, best of 3: 196 us per loop
>>> %timeit list(numpy.broadcast(*numpy.ix_(*([range(5)] * 5))))
100 loops, best of 3: 2.74 ms per loop

So if you're incorporating a for loop anyway, then itertools.product will likely be faster. Still, you can use the above methods to get some similar data structures in pure numpy:

>>> pgrid_idx = numpy.ix_(*[[2, 4], [1, 3], [0, 2]])
>>> sub_indices = numpy.rec.fromarrays(numpy.indices((6, 6, 6)))
>>> a[pgrid_idx].reshape((8, 6))
array([[468, 469, 470, 471, 472, 473],
       [480, 481, 482, 483, 484, 485],
       [540, 541, 542, 543, 544, 545],
       [552, 553, 554, 555, 556, 557],
       [900, 901, 902, 903, 904, 905],
       [912, 913, 914, 915, 916, 917],
       [972, 973, 974, 975, 976, 977],
       [984, 985, 986, 987, 988, 989]])
>>> sub_indices[pgrid_idx].reshape((8,))
rec.array([(2, 1, 0), (2, 1, 2), (2, 3, 0), (2, 3, 2), 
           (4, 1, 0), (4, 1, 2), (4, 3, 0), (4, 3, 2)], 
          dtype=[('f0', '<i8'), ('f1', '<i8'), ('f2', '<i8')])
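
To tie this back to the question, here is a rough sketch (illustrative only; exact scalar formatting may vary by numpy version) of how numpy.ix_ plus broadcasting reproduces the partial_enumerate(a, 2) example from the question:

>>> a = numpy.zeros((2, 3, 4, 5))
>>> idx = numpy.ix_(*[range(d) for d in a.shape[:2]])
>>> for ii in numpy.broadcast(*idx):
...     print(ii, a[ii].shape)
(0, 0) (4, 5)
(0, 1) (4, 5)
(0, 2) (4, 5)
(1, 0) (4, 5)
(1, 1) (4, 5)
(1, 2) (4, 5)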
