Numpy longdouble arithmetic does not seem to be in long double with conversion


Problem Description



I have been playing with C99's quad-precision long double. It is my understanding that (platform-specific) numpy supports long double and 128-bit floats.

I have run across something I cannot explain, however.

Given:

>>> import numpy as np

Calculate a number that will require more than 64 bits but less than 128 bits to represent as an integer:

>>> 2**64+2
18446744073709551618          # note the '8' at the end
>>> int(2**64+2)
18446744073709551618          # same obviously

If I calculate the same number in C99 with a 128-bit long double, I get 18446744073709551618.000000.

Now, if I use numpy long double:

>>> a=np.longdouble(2)
>>> b=np.longdouble(64)
>>> a**b+a
18446744073709551618.0              # all good...

What about these incorrect results:

>>> np.longdouble(2**64+2)
18446744073709551616.0             # Note '6'; appears 2**64 not done in long double
>>> np.longdouble(int(2**64+2))
18446744073709551616.0             # can't force the use of a Python long
>>> n=int(2**64+2)
>>> np.longdouble(n)
18446744073709551616.0
>>> np.longdouble(18446744073709551618)
18446744073709551616.0             # It really does not want to do '8' at the end

But, this works:

>>> np.longdouble(2**64)+2
18446744073709551618.0

Question: Does numpy have issues converting values correctly into long doubles? Is there something I am doing incorrectly?

Solution

You're trying to perform a type conversion between non-directly-convertible types. Take a look at the stack:

#0  0x00002aaaaab243a0 in PyLong_AsDouble ()
   from libpython2.7.so.1.0
#1  0x00002aaaaab2447a in ?? ()
   from libpython2.7.so.1.0
#2  0x00002aaaaaaf8357 in PyNumber_Float ()
   from libpython2.7.so.1.0
#3  0x00002aaaae71acdc in MyPyFloat_AsDouble (obj=0x2aaaaae93c00)
    at numpy/core/src/multiarray/arraytypes.c.src:40
#4  0x00002aaaae71adfc in LONGDOUBLE_setitem (op=0x2aaaaae93c00, 
    ov=0xc157b0 "", ap=0xbf6ca0)
    at numpy/core/src/multiarray/arraytypes.c.src:278
#5  0x00002aaaae705c82 in PyArray_FromAny (op=0x2aaaaae93c00, 
    newtype=0x2aaaae995960, min_depth=<value optimized out>, max_depth=0, 
    flags=0, context=<value optimized out>)
    at numpy/core/src/multiarray/ctors.c:1664
#6  0x00002aaaae7300ad in longdouble_arrtype_new (type=0x2aaaae9938a0, 
    args=<value optimized out>, __NPY_UNUSED_TAGGEDkwds=<value optimized out>)
    at numpy/core/src/multiarray/scalartypes.c.src:2545

As you can see, the Python long (unlimited-precision integer) 2**64 + 2 is first converted to a float (i.e. a 64-bit double), which loses precision; that float is then used to initialise the long double, but the precision has already been lost.
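
A quick way to see this from the Python side (a hedged illustration using only built-ins, not taken from the original post) is that the intermediate 64-bit double has already rounded 2**64 + 2 down to 2**64 before the long double constructor ever runs:

>>> float(2**64 + 2)                  # the intermediate 64-bit double rounds to 2**64
1.8446744073709552e+19
>>> int(float(2**64 + 2))             # exactly 2**64 -- note the trailing '6'
18446744073709551616
>>> np.longdouble(float(2**64 + 2))   # the long double faithfully preserves the already-rounded value
18446744073709551616.0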

The problem is that a 128-bit long double is not a native Python type, so a Python long has no native conversion to it, only to a 64-bit double. It probably would be possible for NumPy to detect this situation and perform its own conversion using the long C API, but it might be fairly complicated for relatively little benefit (you can just do the arithmetic in np.longdouble from the start).
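
If you do need to convert an existing large Python int to a long double without losing the low bits, one workaround (a minimal sketch, not a NumPy API; int_to_longdouble is a hypothetical helper) is to feed the integer to np.longdouble in 32-bit chunks, so no intermediate value ever passes through a 64-bit double:

import numpy as np

def int_to_longdouble(n):
    # Hypothetical helper, not part of NumPy: build an np.longdouble from a
    # Python int by accumulating 32-bit chunks, avoiding the lossy trip
    # through an intermediate 64-bit double.
    negative = n < 0
    n = abs(n)
    chunks = []
    while n:
        chunks.append(n & 0xFFFFFFFF)   # peel off the low 32 bits
        n >>= 32
    result = np.longdouble(0)
    scale = np.longdouble(2) ** 32      # each 32-bit chunk is exact even in a double
    for chunk in reversed(chunks):      # most-significant chunk first
        result = result * scale + np.longdouble(chunk)
    return -result if negative else result

print(int_to_longdouble(2**64 + 2))     # 18446744073709551618.0 where long double has at least a 64-bit mantissa (e.g. x86 80-bit extended)

The conversion is exact as long as the final value fits in the long double's mantissa (64 bits for x86's 80-bit extended format), which is exactly what 2**64 + 2 needs.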
