Why would the same code yield different numeric results on 32 vs 64-bit machines?


Problem description


We are working on a library of numeric routines in C. We are not sure yet whether we will work with single precision (float) or double (double), so we've defined a type SP as an alias until we decide:

typedef float SP;


When we run our unit tests, they all pass on my machine (a 64-bit Ubuntu) but they fail on my colleague's (a 32-bit Ubuntu that was mistakenly installed on a 64-bit machine).


Using Git's bisect command, we found the exact diff that began yielding different results between his machine and mine:

-typedef double SP;
+typedef float SP;


In other words, going from double precision to single precision yields numerically different results on our machines (about 1e-3 relative difference in the worst cases).


We are fairly certain that we are never comparing unsigned ints to negative signed ints anywhere.

Why would a library of numeric routines yield different results on a 32-bit OS and on a 64-bit one?

Clarification


I'm afraid I might not have been clear enough: we have Git commit 2f3f671 that uses double precision, and where the unit tests pass equally well on both machines. Then we have Git commit 46f2ba, where we changed to single precision, and here the tests still pass on the 64-bit machine but not on the 32-bit machine.

Answer


You are encountering what is often called the x87 "excess-precision" bug.


In short: historically, (nearly) all floating-point computation on x86 processors was done using the x87 instruction set, which by default operates on an 80-bit floating-point type, but can be set, via some bits in a control register, to operate in (almost) single- or double-precision.


If single-precision operations are performed while the precision of the x87 control register is set to double- or extended-precision, the results will differ from what would be produced if the same operations were performed in single precision (unless the compiler is extraordinarily careful, storing the result of every computation and reloading it to force rounding to occur in the correct place).


Your code running on 32-bit uses the x87 unit for floating-point computation (apparently with the control register set for double precision), and thus encounters the issue described above. Your code running on 64-bit uses the SSE[2,3,...] instructions for floating-point computation, which provide native single- and double-precision operations and therefore do not carry excess precision. This is why your results differ.


You can work around this (to a point) by telling your compiler to use SSE for floating-point computation even on 32-bit (-mfpmath=sse with GCC). Even then, bit-exact results are not guaranteed because the various libraries that you link against may use x87, or simply use different algorithms depending on the architecture.
