Denormalized Numbers - IEEE 754 Floating Point


Question


So I'm trying to learn more about Denormalized numbers as defined in the IEEE 754 standard for Floating Point numbers. I've already read several articles thanks to Google search results, and I've gone through several StackOverFlow posts. However I still have some questions unanswered.

First off, just to review my understanding of what a Denormalized float is:

Numbers which have fewer bits of precision, and are smaller (in magnitude) than normalized numbers

Essentially, a denormalized float has the ability to represent the SMALLEST (in magnitude) number that can be represented by any floating point value.

Does that sound correct? Anything more to it than that?

I've read that:

using denormalized numbers comes with a performance cost on many platforms

Any comments on this?

I've also read in one of the articles that

one should "avoid overlap between normalized and denormalized numbers"

Any comments on this?

In some presentations of the IEEE standard, when floating point ranges are presented the denormalized values are excluded and the tables are labeled as an "effective range", almost as if the presenter is thinking "We know that denormalized numbers CAN represent the smallest possible floating point values, but because of certain disadvantages of denormalized numbers, we choose to exclude them from ranges that will better fit common use scenarios" -- As if denormalized numbers are not commonly used.

I guess I just keep getting the impression that using denormalized numbers turns out to not be a good thing in most cases?

If I had to answer that question on my own I would want to think that:

Using denormalized numbers is good because you can represent the smallest (in magnitude) numbers possible -- As long as precision is not important, and you do not mix them up with normalized numbers, AND the resulting performance of the application fits within requirements.

Using denormalized numbers is a bad thing because most applications do not require representations so small -- The precision loss is detrimental, and you can shoot yourself in the foot too easily by mixing them up with normalized numbers, AND the performance is not worth the cost in most cases.

Any comments on these two answers? What else might I be missing or not understand about denormalized numbers?

Solution

Essentially, a denormalized float has the ability to represent the SMALLEST (in magnitude) number that can be represented by any floating point value.

That is correct.
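For a concrete feel, here is a minimal C sketch (assuming an IEEE-754 double and the C11 DBL_TRUE_MIN constant from <float.h>) that prints the smallest normal and the smallest denormal magnitudes:

    #include <stdio.h>
    #include <float.h>

    int main(void) {
        /* Smallest positive normal double: 2^-1022, about 2.2e-308 */
        printf("smallest normal:   %a  (%.17g)\n", DBL_MIN, DBL_MIN);
        /* Smallest positive denormal double: 2^-1074, about 4.9e-324 */
        printf("smallest denormal: %a  (%.17g)\n", DBL_TRUE_MIN, DBL_TRUE_MIN);
        return 0;
    }

Note also that precision shrinks along with magnitude: the smaller a denormal gets, the fewer significant bits remain, down to a single bit at 2^-1074.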

using denormalized numbers comes with a performance cost on many platforms

The penalty is different on different processors, but it can be up to 2 orders of magnitude. The reason? The same as for this advice:

one should "avoid overlap between normalized and denormalized numbers"

Here's the key: denormals are a fixed-point "micro-format" within the IEEE-754 floating-point format. In normal numbers, the exponent indicates the position of the binary point. Denormal numbers contain the last 52 bits in the fixed-point notation with an exponent of 2^-1074 for doubles.
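To see that fixed-point "micro-format" concretely, here is a sketch (assuming 64-bit IEEE-754 doubles; the constant 3.0e-320 is just an arbitrary denormal picked for illustration) that decodes the bit fields: the 11-bit exponent field of a denormal is all zeros, and its value is simply the 52-bit fraction scaled by 2^-1074:

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>
    #include <math.h>

    int main(void) {
        double d = 3.0e-320;              /* an arbitrary denormal double */
        uint64_t bits;
        memcpy(&bits, &d, sizeof bits);   /* reinterpret the IEEE-754 encoding */

        uint64_t exponent = (bits >> 52) & 0x7FF;       /* 11-bit exponent field */
        uint64_t fraction = bits & 0xFFFFFFFFFFFFFULL;  /* 52-bit fraction field */

        printf("exponent field: %llu\n", (unsigned long long)exponent);  /* 0 for denormals */
        printf("fraction field: %llu\n", (unsigned long long)fraction);
        /* A denormal's value is fraction * 2^-1074 (no implicit leading 1 bit). */
        printf("fraction * 2^-1074 = %a\n", (double)fraction * ldexp(1.0, -1074));
        printf("original value     = %a\n", d);
        return 0;
    }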

So, denormals are slow because they require special handling. In practice, they occur very rarely, and chip makers don't like to spend too many valuable resources on rare cases.
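If you want to observe the penalty yourself, a rough micro-benchmark sketch like the one below will show it on CPUs that handle denormals in microcode (timings are illustrative only; the slowdown varies by processor, and compiler options such as -ffast-math can enable flush-to-zero on some targets and hide it entirely):

    #include <stdio.h>
    #include <time.h>
    #include <float.h>

    /* Keep every intermediate result in the same range as x. */
    static double spin(double x, long iters) {
        volatile double acc = x;      /* volatile: stop the compiler from deleting the loop */
        for (long i = 0; i < iters; i++)
            acc = acc * 0.5 + x;      /* stays denormal when x is denormal */
        return acc;
    }

    static double seconds(double x, long iters) {
        clock_t t0 = clock();
        spin(x, iters);
        return (double)(clock() - t0) / CLOCKS_PER_SEC;
    }

    int main(void) {
        long iters = 10000000;
        printf("normal operands:   %.3f s\n", seconds(1.0, iters));
        printf("denormal operands: %.3f s\n", seconds(DBL_MIN / 4.0, iters));
        return 0;
    }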

Mixing denormals with normals is slow because then you're mixing formats and you have the additional step of converting between the two.

I guess I just keep getting the impression that using denormalized numbers turns out to not be a good thing in most cases?

Denormals were created for one primary purpose: gradual underflow. It's a way to keep the relative difference between tiny numbers small. If you go straight from the smallest normal number to zero (abrupt underflow), the relative change is infinite. If you go to denormals on underflow, the relative change is still not fully accurate, but at least more reasonable. And that difference shows up in calculations.
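A tiny sketch of that difference (assuming default floating-point behaviour, i.e. gradual underflow is not disabled by flush-to-zero/denormals-are-zero modes): halving the smallest normal double still yields an exact, nonzero denormal, whereas abrupt underflow would jump straight to zero and lose the entire value at once:

    #include <stdio.h>
    #include <float.h>

    int main(void) {
        double tiny = DBL_MIN;       /* smallest positive normal double */
        double half = tiny / 2.0;    /* exactly representable as a denormal */

        /* With gradual underflow this step loses nothing; with abrupt
           underflow, half would be 0.0 and the relative change total. */
        printf("tiny     = %a\n", tiny);
        printf("tiny / 2 = %a\n", half);
        printf("zero?      %s\n", half == 0.0 ? "yes" : "no");
        return 0;
    }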

To put it a different way. Floating-point numbers are not distributed uniformly. There are always the same amount of numbers between successive powers of two: 2^52 (for double precision). So without denormals, you always end up with a gap between 0 and the smallest floating-point number that is 2^52 times the size of the difference between the smallest two numbers. Denormals fill this gap uniformly.
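That ratio can be checked directly with nextafter (a C99 sketch): the spacing of doubles just above the smallest normal is 2^-1074, while the gap from zero up to the smallest normal is 2^-1022, which is 2^52 times wider:

    #include <stdio.h>
    #include <float.h>
    #include <math.h>

    int main(void) {
        double smallest_normal = DBL_MIN;                                   /* 2^-1022 */
        double spacing = nextafter(smallest_normal, 1.0) - smallest_normal; /* one ulp: 2^-1074 */

        /* Without denormals, nothing would exist between 0 and DBL_MIN:
           a hole 2^52 times wider than the spacing of the neighbouring
           normals. Denormals fill that hole uniformly. */
        printf("spacing near DBL_MIN: %a\n", spacing);
        printf("gap down to zero:     %a\n", smallest_normal);
        printf("ratio: %.0f (= 2^52)\n", smallest_normal / spacing);
        return 0;
    }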

As an example about the effects of abrupt vs. gradual underflow, look at the mathematically equivalent x == y and x - y == 0. If x and y are tiny but different and you use abrupt underflow, then if their difference is less than the minimum cutoff value, their difference will be zero, and so the equivalence is violated.

With gradual underflow, the difference between two tiny but different normal numbers gets to be a denormal, which is still not zero. The equivalence is preserved.
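Here is that example made concrete (a sketch, again assuming gradual underflow is enabled, which is the default): the difference of two adjacent tiny normals is a denormal, so the tests x == y and x - y == 0 agree; under flush-to-zero the subtraction result would be forced to 0.0 and the two tests would disagree:

    #include <stdio.h>
    #include <float.h>
    #include <math.h>

    int main(void) {
        double x = DBL_MIN * 1.5;        /* a tiny normal number */
        double y = nextafter(x, 0.0);    /* the next smaller normal number */
        double d = x - y;                /* exact; the result is a denormal */

        printf("x == y     : %s\n", x == y ? "true" : "false");
        printf("x - y == 0 : %s\n", d == 0.0 ? "true" : "false");
        printf("x - y      = %a\n", d);  /* nonzero thanks to gradual underflow */
        return 0;
    }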

So, using denormals on purpose is not advised, because they were designed only as a backup mechanism in exceptional cases.
