Floating-point precision when moving from i386 to x86_64

Question

I have an application that was developed for Linux x86 32 bits. There are lots of floating-point operations and a lot of tests depending on the results. Now we are porting it to x86_64, but the test results are different in this architecture. We don't want to keep a separate set of results for each architecture.

According to the article An Introduction to GCC - for the GNU compilers gcc and g++, the problem is that GCC on x86_64 assumes fpmath=sse while x86 assumes fpmath=387. The 387 FPU uses 80-bit internal precision for all operations and only converts the result to the given floating-point type (float, double or long double), while SSE uses the type of the operands to determine its internal precision.

I can force -mfpmath=387 when compiling my own code and all my operations work correctly, but whenever I call some library function (sin, cos, atan2, etc.) the results are wrong again. I assume it's because libm was compiled without the fpmath override.

I tried to build libm myself (glibc) using 387 emulation, but it caused a lot of crashes all around (don't know if I did something wrong).

Is there a way to force all code in a process to use the 387 emulation in x86_64? Or maybe some library that returns the same values as libm does on both architectures? Any suggestions?

Regarding the question "Do you need the 80-bit precision?", I have to say that this is not a problem for an individual operation. In that simple case the difference is really small and makes no difference. When compounding a lot of operations, though, the error propagates, and the difference in the final result is not so small any more and does make a difference. So I guess I need the 80-bit precision.

Solution

I'd say you need to fix your tests. You're generally setting yourself up for disappointment if you assume floating point math to be accurate. Instead of testing for exact equality, test whether it's close enough to the expected result. What you've found isn't a bug, after all, so if your tests report errors, the tests are wrong. ;)

As you've found out, every library you rely on is going to assume SSE floating point, so unless you plan to compile everything manually, now and forever, just so you can set the FP mode to x87, you're better off dealing with the problem now, and just accepting that FP math is not 100% accurate, and will not in general yield the same result on two different platforms. (I believe AMD CPUs yield slightly different results in x87 math as well.)

Do you absolutely need 80-bit precision? (If so, there obviously aren't many alternatives, other than to compile everything yourself to use 80-bit FP.)

Otherwise, adjust your tests to perform comparisons and equality tests within some small epsilon. If the difference is smaller than that epsilon, the values are considered equal.
