Floating point math in different programming languages


Problem Description

I know that floating-point math can be ugly at best, but I am wondering if somebody can explain the following quirk. In most of the programming languages I tested, adding 0.4 and 0.2 gives a slight error, whereas 0.4 + 0.1 + 0.1 does not.

What causes this discrepancy between the two calculations, and what can be done in the respective programming languages to obtain the correct result?

In Python 2/3:

.4 + .2
0.6000000000000001
.4 + .1 + .1
0.6

The same happens in Julia 0.3:

julia> .4 + .2
0.6000000000000001

julia> .4 + .1 + .1
0.6

and Scala:

scala> 0.4 + 0.2
res0: Double = 0.6000000000000001

scala> 0.4 + 0.1 + 0.1
res1: Double = 0.6

and Haskell:

Prelude> 0.4 + 0.2
0.6000000000000001
Prelude> 0.4 + 0.1 + 0.1
0.6

but R v3 gets it right:

> .4 + .2
[1] 0.6
> .4 + .1 + .1
[1] 0.6


Solution

All these languages are using the system-provided floating-point format, which represents values in binary rather than in decimal. Values like 0.2 and 0.4 can't be represented exactly in that format, so the closest representable value is stored instead, resulting in a small error. For example, the numeric literal 0.2 results in a floating-point number whose exact value is 0.200000000000000011102230246251565404236316680908203125. Similarly, any arithmetic operation on floating-point numbers may produce a value that's not exactly representable, so the true mathematical result is replaced with the closest representable value. These are the fundamental reasons for the errors you're seeing.

However, this doesn't explain the differences between languages: in all of your examples, exactly the same computations are being performed and exactly the same results are being obtained. The difference lies in the way the various languages choose to display those results.
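To see this for yourself, here is a minimal Python 3 sketch (my own illustration, not part of the original answer): converting a float to decimal.Decimal is exact, so it exposes the binary64 value that the literal 0.2 actually stores.

from decimal import Decimal

# The float-to-Decimal conversion is exact, so this prints the exact value
# of the double produced by the literal 0.2.
print(Decimal(0.2))
# 0.200000000000000011102230246251565404236316680908203125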



Strictly speaking, none of the answers you show is correct. Making the (fairly safe) assumption of IEEE 754 binary64 arithmetic with round-to-nearest rounding, the exact value of the first sum is:

0.600000000000000088817841970012523233890533447265625

while the exact value of the second sum is:

0.59999999999999997779553950749686919152736663818359375
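A quick Python check (my own illustration, not part of the original answer) confirms that the two expressions really do land on two different binary64 values with exactly these decimal expansions:

from decimal import Decimal

a = 0.4 + 0.2
b = 0.4 + 0.1 + 0.1

print(a == b)       # False: the two sums are different doubles
print(Decimal(a))   # 0.600000000000000088817841970012523233890533447265625
print(Decimal(b))   # 0.59999999999999997779553950749686919152736663818359375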

However, neither of those outputs is particularly user-friendly, and clearly all of the languages you tested made the sensible decision to abbreviate the output when printing. They don't all adopt the same strategy for formatting that output, though, which is why you're seeing differences.

There are many possible formatting strategies, but three particularly common ones are listed below (a short Python sketch after the list illustrates all three):

  1. Compute and display 17 correctly rounded significant digits, possibly stripping trailing zeros where they appear. Seventeen digits guarantee that distinct binary64 floats have distinct representations, so that a floating-point value can be recovered unambiguously from its representation; 17 is the smallest integer with this property. This is the strategy that Python 2.6 uses, for example.

  2. Compute and display the shortest decimal string that rounds back to the given binary64 value under the usual round-ties-to-even rounding mode. This is rather more complicated to implement than strategy 1, but it preserves the property that distinct floats have distinct representations, and it tends to make for pleasanter output. This appears to be the strategy that all of the languages you tested, apart from R, are using.

  3. Compute and display 15 (or fewer) correctly rounded significant digits. This hides the errors involved in the decimal-to-binary conversions, giving the illusion of exact decimal arithmetic, but it has the drawback that distinct floats can share the same representation. This appears to be what R is doing. (Thanks to @hadley for pointing out in the comments that there is an R setting which controls the number of digits used for display; the default is 7 significant digits.)
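For concreteness, here is a small Python sketch of my own (not part of the original answer) approximating the three strategies with standard format specifiers:

a = 0.4 + 0.2        # prints as 0.6000000000000001 by default in Python 3
b = 0.4 + 0.1 + 0.1  # prints as 0.6 by default in Python 3

# Strategy 1: 17 correctly rounded significant digits (Python 2.6-style repr).
print(format(a, '.17g'))   # 0.60000000000000009
print(format(b, '.17g'))   # 0.59999999999999998

# Strategy 2: shortest string that round-trips to the same double
# (the repr used by Python 2.7 / 3.1 and later).
print(repr(a))             # 0.6000000000000001
print(repr(b))             # 0.6

# Strategy 3: 15 (or fewer) significant digits, hiding the conversion error
# (R's default display uses 7 significant digits).
print(format(a, '.15g'))   # 0.6
print(format(b, '.15g'))   # 0.6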


