Layman's explanation for why JavaScript has weird floating math – IEEE 754 standard


Question



I never understand exactly what's going on with JavaScript when I do mathematical operations on floating point numbers. I've been downright fearful of using decimals, to the point where I just avoid them when at all possible. However, if I knew what was going on behind the scenes when it comes to the IEEE 754 standard, then I would be able to predict what would happen; with predictability, I'll be more confident and less fearful.

Could someone give me a simple explanation (as simple as explaining binary representations of integers) as to how the IEEE 754 standard works and how it gives this side effect: 0.1 + 0.2 != 0.3?

Thanks so much! :)

Solution

Decimal fractions like 0.1 can't be expressed cleanly in base 2

Let's say we want to express the decimal 0.1 in base-2. We know that it is equal to 1/10. The result of 1 divided by 10 in base-2 is 0.000110011001100... with a repeating sequence of digits.
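This long division can be reproduced mechanically. Here is a small sketch (the function name is my own) that emits the base-2 digits of a fraction by repeated doubling: at each step the remainder is doubled, and the integer part becomes the next bit.

```javascript
// Emit the first `count` base-2 fractional digits of numerator/denominator.
function binaryFractionDigits(numerator, denominator, count) {
  let bits = "";
  let remainder = numerator;
  for (let i = 0; i < count; i++) {
    remainder *= 2;
    if (remainder >= denominator) {
      bits += "1";
      remainder -= denominator;
    } else {
      bits += "0";
    }
  }
  return bits;
}

console.log(binaryFractionDigits(1, 10, 16)); // "0001100110011001"
```

Note how the 0011 cycle never terminates, no matter how many digits you ask for.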

Thus while in decimal form it's actually really easy to cleanly represent a number like 0.1, in base-2 you cannot express a rational number based on 10ths exactly. You can only approximate it by using as many bits as you are able to store.

Let's say for simplification that we only have enough storage space to reproduce the first, say, 8 significant binary digits of that number. The digits stored would be 11001100 (along with an exponent of 11). This translates back to 0.000110011 in base-2 which in decimal is 0.099609375, not 0.1. This is the amount of error that would happen if you converted 0.1 to a theoretical floating point variable which stores base values in 8 bits (not including the sign bit).
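As a quick check of the arithmetic above: 0.000110011 in base-2 is the integer 0b000110011 (= 51) divided by 2^9 (= 512), and both that fraction and 0.099609375 are exactly representable, so the comparison below is exact.

```javascript
// The 9 fractional bits 0.000110011 as an exact fraction: 51 / 512.
const approx = 0b000110011 / 2 ** 9;
console.log(approx);         // 0.099609375
console.log(approx === 0.1); // false
```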

How floating-point variables store values

The standard of IEEE 754 specifies a way of encoding a real number in binary, with a sign and a binary exponent. The exponent is applied in the binary domain, meaning that you don't shift the decimal point before converting to binary, you do it after.
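You can inspect this encoding directly from JavaScript. The sketch below (the helper name is my own) reads a double's raw bytes through a `DataView` and splits them into the sign, exponent and significand fields of an IEEE 754 binary64 value:

```javascript
// Decode the IEEE 754 binary64 bit fields of a JavaScript number.
function doubleBits(x) {
  const buf = new ArrayBuffer(8);
  new DataView(buf).setFloat64(0, x); // big-endian byte order by default
  let bits = "";
  for (const byte of new Uint8Array(buf)) {
    bits += byte.toString(2).padStart(8, "0");
  }
  return {
    sign: bits[0],
    exponent: bits.slice(1, 12), // 11-bit biased exponent
    mantissa: bits.slice(12),    // 52-bit significand (leading 1 is implicit)
  };
}

console.log(doubleBits(0.1));
```

For 0.1 the significand is the repeating 1001 pattern from the long division, with the final bits rounded up because the pattern had to be cut off.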

There are different sizes of IEEE floating-point number, each one specifying how many of the binary digits are used for the base number and how many for an exponent.

When you see 0.1 + 0.2 != 0.3, it's because you are not actually performing the calculation on 0.1 or 0.2, but on approximations of these numbers in floating-point binary to a certain precision only. Upon converting the result back to decimal, the result won't be exactly 0.3, due to this error. In addition, the result won't even be equal to the binary approximation of 0.3. The actual amount of error will depend on the size of the floating-point value, and thus how many bits of precision were used.
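Printing the stored approximations with extra precision makes this concrete: none of the three literals is stored exactly, and the sum lands on yet another nearby double.

```javascript
// The doubles actually stored for 0.1, 0.2 and 0.3, to 20 significant digits:
console.log((0.1).toPrecision(20)); // 0.10000000000000000555
console.log((0.2).toPrecision(20)); // 0.20000000000000001110
console.log((0.3).toPrecision(20)); // 0.29999999999999998890
console.log(0.1 + 0.2);             // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);     // false
```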

How rounding sometimes helps, but not in this case

In some cases, errors in calculation due to precision loss in the conversion to binary will be small enough to be rounded out of the value during the conversion back from binary again, and so you will never even notice any difference - it will look like it worked.
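For instance (my own example, not from the original answer): 0.1 and 0.4 are both stored inexactly, yet their sum happens to round to exactly the same double that the literal 0.5 parses to, so the error is invisible.

```javascript
console.log(0.1 + 0.4 === 0.5); // true – the rounding absorbs the error
console.log(0.1 + 0.2 === 0.3); // false – here it does not
```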

IEEE floating point has specific rules for how this rounding is to be done.

With 0.1 + 0.2 vs 0.3, however, the rounding does not cancel out the error. The result of adding the binary approximations of 0.1 and 0.2 will be different to the binary approximation of 0.3.
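A common practical workaround (a standard technique, not part of the original answer) is to compare floats within a small tolerance instead of with `===`. The sketch below scales `Number.EPSILON` by the magnitudes involved; the right tolerance ultimately depends on your use case.

```javascript
// Compare two floats within a tolerance instead of exact equality.
function nearlyEqual(a, b, epsilon = Number.EPSILON) {
  return Math.abs(a - b) < epsilon * Math.max(1, Math.abs(a), Math.abs(b));
}

console.log(0.1 + 0.2 === 0.3);           // false
console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true
```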
