Python numpy float16 datatype operations, and float8?


Question



When performing math operations on float16 NumPy numbers, the result is also a float16 number. My question is: how exactly is the result computed? Say I'm multiplying/adding two float16 numbers; does Python generate the result in float32 and then truncate/round it to float16? Or is the calculation carried out in 16-bit multiplier/adder hardware all the way through?

Another question: is there a float8 type? I couldn't find one... and if not, why? Thank you all!

Solution

To the first question: there's no hardware support for float16 on a typical processor (at least outside the GPU). NumPy does exactly what you suggest: convert the float16 operands to float32, perform the scalar operation on the float32 values, then round the float32 result back to float16. It can be proved that the results are still correctly-rounded: the precision of float32 is large enough (relative to that of float16) that double rounding isn't an issue here, at least for the four basic arithmetic operations and square root.
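This promote-compute-round behaviour is easy to check from Python itself; a minimal sketch (the particular operand values are just for illustration):

```python
import numpy as np

a = np.float16(0.1)
b = np.float16(0.2)

# What NumPy actually returns for float16 + float16 ...
direct = a + b

# ... matches "convert to float32, add, round back to float16" done by hand.
emulated = np.float16(np.float32(a) + np.float32(b))

assert direct == emulated
print(direct.dtype)  # float16
```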

In the current NumPy source, this is what the definition of the four basic arithmetic operations looks like for float16 scalar operations.

#define half_ctype_add(a, b, outp) *(outp) = \
        npy_float_to_half(npy_half_to_float(a) + npy_half_to_float(b))
#define half_ctype_subtract(a, b, outp) *(outp) = \
        npy_float_to_half(npy_half_to_float(a) - npy_half_to_float(b))
#define half_ctype_multiply(a, b, outp) *(outp) = \
        npy_float_to_half(npy_half_to_float(a) * npy_half_to_float(b))
#define half_ctype_divide(a, b, outp) *(outp) = \
        npy_float_to_half(npy_half_to_float(a) / npy_half_to_float(b))

The code above is taken from scalarmath.c.src in the NumPy source. You can also take a look at loops.c.src for the corresponding code for array ufuncs. The supporting npy_half_to_float and npy_float_to_half functions are defined in halffloat.c, along with various other support functions for the float16 type.
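Since float16 is IEEE 754 binary16, its 1-5-10 layout (sign / exponent / mantissa bits) can be inspected directly from Python by reinterpreting the raw bits, which is essentially what those conversion functions manipulate; a small sketch:

```python
import numpy as np

x = np.float16(1.5)
bits = int(x.view(np.uint16))   # reinterpret the raw binary16 bit pattern

sign = bits >> 15               # 1 sign bit
exponent = (bits >> 10) & 0x1F  # 5 exponent bits (bias 15)
mantissa = bits & 0x3FF         # 10 mantissa bits

# 1.5 = (-1)**0 * 1.1b * 2**(15 - 15): sign 0, biased exponent 15,
# stored mantissa 0b1000000000 = 512
print(sign, exponent, mantissa)  # 0 15 512
```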

For the second question: no, there's no float8 type in NumPy. float16 is a standardized type (described in the IEEE 754 standard), that's already in wide use in some contexts (notably GPUs). There's no IEEE 754 float8 type, and there doesn't appear to be an obvious candidate for a "standard" float8 type. I'd also guess that there just hasn't been that much demand for float8 support in NumPy.
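To get a feel for how coarse an 8-bit float would be: a hypothetical 1-4-3 layout (one possible split, not anything NumPy provides) leaves only 4 significand bits. A rough sketch of rounding a value onto such a grid, ignoring overflow, subnormals, and NaNs:

```python
import numpy as np

def quantize_e4m3(x):
    """Round x to a hypothetical 1-4-3 (sign/exponent/mantissa) float8 grid.

    Illustrative only: this keeps 4 significand bits (1 implicit + 3 stored)
    and ignores exponent range limits, subnormals, and NaN handling.
    """
    m, e = np.frexp(np.float64(x))   # x = m * 2**e with 0.5 <= |m| < 1
    m = np.round(m * 16) / 16        # round the significand to 4 bits
    return np.ldexp(m, e)

print(quantize_e4m3(0.3))  # 0.3125 -- the nearest representable value
```

The large gaps between representable values show why such a format is only attractive in special settings, which is consistent with the lack of demand noted above.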
