Why does Math.Exp give different results between 32-bit and 64-bit, with same input, same hardware



I am using .NET 2.0 with PlatformTarget x64 and x86. I am giving Math.Exp the same input number, and it returns different results on each platform.

MSDN says you can't rely on a literal/parsed Double to represent the same number between platforms, but I think my use of Int64BitsToDouble below avoids this problem and guarantees the same input to Math.Exp on both platforms.

My question is why are the results different? I would have thought that:

  • the input is stored in the same way (double/64-bit precision)
  • the FPU would do the same calculations regardless of processor's bitness
  • the output is stored in the same way

I know I should not compare floating-point numbers after the 15/17th digit in general, but I am confused about the inconsistency here with what looks like the same operation on the same hardware.

Any one know what's going on under the hood?

double d = BitConverter.Int64BitsToDouble(-4648784593573222648L); // same as Double.Parse("-0.0068846153846153849") but with no concern about losing digits in conversion
Debug.Assert(d.ToString("G17") == "-0.0068846153846153849"
    && BitConverter.DoubleToInt64Bits(d) == -4648784593573222648L); // true on both 32 & 64 bit

double exp = Math.Exp(d);

Console.WriteLine("{0:G17} = {1}", exp, BitConverter.DoubleToInt64Bits(exp));
// 64-bit: 0.99313902928727449 = 4607120620669726947
// 32-bit: 0.9931390292872746  = 4607120620669726948

The results are consistent on both platforms with JIT turned on or off.

[Edit]

I'm not completely satisfied with the answers below so here are some more details from my searching.

http://www.manicai.net/comp/debugging/fpudiff/ says that:

So 32-bit is using the 80-bit FPU registers, 64-bit is using the 128-bit SSE registers.

And the CLI Standard says that doubles can be represented with higher precision if the hardware supports it:

[Rationale: This design allows the CLI to choose a platform-specific high-performance representation for floating-point numbers until they are placed in storage locations. For example, it might be able to leave floating-point variables in hardware registers that provide more precision than a user has requested. At the same time, CIL generators can force operations to respect language-specific rules for representations through the use of conversion instructions. end rationale]

http://www.ecma-international.org/publications/files/ECMA-ST/Ecma-335.pdf (12.1.3 Handling of floating-point data types)

I think this is what is happening here, because the results differ after Double's standard 15 digits of precision. The 64-bit Math.Exp result is more precise (it has an extra digit) because internally 64-bit .NET is using an FPU register with more precision than the FPU register used by 32-bit .NET.

Solution

Yes, rounding errors, and it is effectively NOT the same hardware. The 32-bit version targets a different set of instructions and register sizes.
