Why does the same code produce two different FP results on different machines?


Problem description

Here's the code:

#include <iostream>
#include <math.h>

const double ln2per12 = log(2.0) / 12.0;

int main() {
    std::cout.precision(100);
    double target = 9.800000000000000710542735760100185871124267578125;
    double unnormalizatedValue = 9.79999999999063220457173883914947509765625;
    double ln2per12edValue = unnormalizatedValue * ln2per12;
    double errorLn2per12 = fabs(target - ln2per12edValue / ln2per12);
    std::cout << unnormalizatedValue << std::endl;
    std::cout << ln2per12 << std::endl;
    std::cout << errorLn2per12 << " <<<<< its different" << std::endl;
}

If I try on my machine (MSVC), or here (GCC):

errorLn2per12 = 9.3702823278363212011754512786865234375e-12

Instead, here (GCC):

errorLn2per12 = 9.368505970996920950710773468017578125e-12

which is different. Is it due to machine epsilon? Or compiler precision flags? Or a different IEEE evaluation?

What's the cause of this drift? The problem seems to be in the fabs() function (since the other values seem the same).

Solution

Even without -Ofast, the C++ standard does not require implementations to be exact with log (or sin, or exp, etc.), only that they be within a few ulp (i.e. there may be some inaccuracies in the last binary places). This allows faster hardware (or software) approximations, which each platform/compiler may do differently.

(The only floating point math function that you will always get perfect results from on all platforms is sqrt.)

More annoyingly, you may even get different results between compilation (the compiler may use some internal library to be as precise as float/double allows for constant expressions) and runtime (e.g. hardware-supported approximations).

If you want log to give the exact same result across platforms and compilers, you will have to implement it yourself using only +, -, *, / and sqrt (or find a library with this guarantee). And avoid a whole host of pitfalls along the way.

If you need floating point determinism in general, I strongly recommend reading this article to understand how big of a problem you have ahead of you: https://randomascii.wordpress.com/2013/07/16/floating-point-determinism/
