python why use numpy.r_ instead of concatenate


Question

In which case is using objects like numpy.r_ or numpy.c_ better (more efficient, more suitable) than using functions like concatenate or vstack, for example?

I am trying to understand code where the programmer wrote something like:

return np.r_[0.0, 1d_array, 0.0] == 2

where 1d_array is an array whose values can be 0, 1 or 2. Why not use np.concatenate (for example) instead? Like:

return np.concatenate([[0.0], 1d_array, [0.0]]) == 2

It is more readable and apparently it does the same thing.

Answer

np.r_ is implemented in the numpy/lib/index_tricks.py file. This is pure Python code, with no special compiled stuff, so it is not going to be any faster than the equivalent written with concatenate, arange and linspace. It's useful only if the notation fits your way of thinking and your needs.
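If you want to check this on your own data, one rough way is to time both spellings. The array size and repeat count below are arbitrary choices for illustration, and the numbers will vary by machine:

import numpy as np
from timeit import timeit

a = np.arange(1000.0)  # arbitrary 1-D test array

# Both spellings build the same result; compare how long each takes.
print(timeit(lambda: np.r_[0.0, a, 0.0], number=10000))
print(timeit(lambda: np.concatenate([[0.0], a, [0.0]]), number=10000))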

In your example it just saves converting the scalars to lists or arrays:

In [452]: np.r_[0.0, np.array([1,2,3,4]), 0.0]
Out[452]: array([ 0.,  1.,  2.,  3.,  4.,  0.])

With the same arguments, np.concatenate raises an error:

In [453]: np.concatenate([0.0, np.array([1,2,3,4]), 0.0])
...
ValueError: zero-dimensional arrays cannot be concatenated

With the added [] it works:

In [454]: np.concatenate([[0.0], np.array([1,2,3,4]), [0.0]])
Out[454]: array([ 0.,  1.,  2.,  3.,  4.,  0.])

hstack takes care of that by passing all arguments through [atleast_1d(_m) for _m in tup]:

In [455]: np.hstack([0.0, np.array([1,2,3,4]), 0.0])
Out[455]: array([ 0.,  1.,  2.,  3.,  4.,  0.])
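To see what that conversion does on its own: np.atleast_1d wraps a scalar in a 1-element array and leaves arrays of one or more dimensions untouched, e.g.:

import numpy as np

print(np.atleast_1d(0.0))                     # [0.]
print(np.atleast_1d(np.array([1, 2, 3, 4])))  # unchanged: [1 2 3 4]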

So at least in simple cases it is most similar to hstack.

But the real usefulness of r_ comes when you want to use ranges:

np.r_[0.0, 1:5, 0.0]
np.hstack([0.0, np.arange(1,5), 0.0])
np.r_[0.0, slice(1,5), 0.0]
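For reference, all three of these spellings build the same array, since np.arange(1, 5) gives [1, 2, 3, 4]:

import numpy as np

# Each line prints the same result: [0. 1. 2. 3. 4. 0.]
print(np.r_[0.0, 1:5, 0.0])
print(np.hstack([0.0, np.arange(1, 5), 0.0]))
print(np.r_[0.0, slice(1, 5), 0.0])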

r_ lets you use the : syntax that is used in indexing. That's because it is actually an instance of a class that has a __getitem__ method. index_tricks uses this programming trick several times.
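A minimal sketch of the same trick (the class below is made up for illustration, not numpy's real implementation): subscripting an object with a[0.0, 1:5, 0.0] calls a.__getitem__ with a tuple containing the scalars and a slice object, which the object can then expand however it likes:

import numpy as np

class ToyConcatenator:
    """Toy illustration of the index_tricks idea."""
    def __getitem__(self, key):
        # A comma-separated subscript arrives as a tuple; a single item does not.
        items = key if isinstance(key, tuple) else (key,)
        parts = []
        for item in items:
            if isinstance(item, slice):
                # Expand slice notation into a range of values.
                start = 0 if item.start is None else item.start
                step = 1 if item.step is None else item.step
                parts.append(np.arange(start, item.stop, step))
            else:
                parts.append(np.atleast_1d(item))
        return np.concatenate(parts)

toy = ToyConcatenator()
print(toy[0.0, 1:5, 0.0])   # same result as np.r_[0.0, 1:5, 0.0]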

There are other bells and whistles thrown in.

Using an imaginary step expands the slice with np.linspace rather than np.arange:

np.r_[-1:1:6j, [0]*3, 5, 6]

produces:

array([-1. , -0.6, -0.2,  0.2,  0.6,  1. ,  0. ,  0. ,  0. ,  5. ,  6. ])
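Here the complex step 6j means "6 samples", so the -1:1:6j part of the expression is equivalent to np.linspace(-1, 1, 6), endpoint included:

import numpy as np

print(np.r_[-1:1:6j])         # [-1.  -0.6 -0.2  0.2  0.6  1. ]
print(np.linspace(-1, 1, 6))  # same six values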

There are more details in the documentation.

I did some time tests for many slices in https://stackoverflow.com/a/37625115/901925.

