gcc precision bug?


Problem description


I can only assume this is a bug. The first assert passes while the second fails:

double sum_1 =  4.0 + 6.3;
assert(sum_1 == 4.0 + 6.3);

double t1 = 4.0, t2 = 6.3;

double sum_2 =  t1 + t2;
assert(sum_2 == t1 + t2);

If not a bug, why?

Solution

This is something that has bitten me, too.

Yes, floating point numbers should never be compared for equality because of rounding error, and you probably knew that.
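As an aside, if all you need is a robust comparison, the usual approach is to test against a small tolerance instead of using ==. A minimal C sketch; the tolerance value is an arbitrary illustrative choice, not something taken from the code above:

#include <assert.h>
#include <math.h>

/* Compare two doubles against a tolerance instead of exact equality.
   EPS is an arbitrary value chosen for illustration; pick one that
   makes sense for your own data. */
static int nearly_equal(double a, double b)
{
    const double EPS = 1e-9;
    return fabs(a - b) <= EPS * fmax(1.0, fmax(fabs(a), fabs(b)));
}

With that, assert(nearly_equal(sum_2, t1 + t2)); passes regardless of how much extra precision the intermediate computation carried.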

But in this case, you're computing t1+t2, then computing it again. Surely that has to produce an identical result?

Here's what's probably going on. I'll bet you're running this on an x86 CPU, correct? The x86 FPU uses 80 bits for its internal registers, but values in memory are stored as 64-bit doubles.

So t1+t2 is first computed with 80 bits of precision, then -- I presume -- stored out to memory in sum_2 with 64 bits of precision -- and some rounding occurs. For the assert, it's loaded back into a floating point register, and t1+t2 is computed again, again with 80 bits of precision. So now you're comparing sum_2, which was previously rounded to a 64-bit floating point value, with t1+t2, which was computed with higher precision (80 bits) -- and that's why the values aren't exactly identical.
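If you want that second assert to pass as written, one workaround (assuming the x86/80-bit scenario above) is to push the freshly computed sum through a 64-bit store before comparing, for example via a volatile temporary, so both sides get rounded the same way:

#include <assert.h>

int main(void)
{
    double t1 = 4.0, t2 = 6.3;
    double sum_2 = t1 + t2;

    /* The volatile forces t1 + t2 out of the 80-bit FPU register and
       into a 64-bit double in memory, rounding it exactly the way
       sum_2 was rounded when it was stored. */
    volatile double recomputed = t1 + t2;
    assert(sum_2 == recomputed);
    return 0;
}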

Edit So why does the first test pass? In this case, the compiler probably evaluates 4.0+6.3 at compile time and stores it as a 64-bit quantity -- both for the assignment and for the assert. So identical values are being compared, and the assert passes.
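If you want to check what the compiler actually did with the constants, you can have gcc emit the assembly instead of an object file and read it yourself (the file name here is just a placeholder):

gcc -S -O0 fptest.c    # writes fptest.s containing the generated assembly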

Second Edit Here's the assembly code generated for the second part of the code (gcc, x86), with comments -- pretty much follows the scenario outlined above:

// t1 = 4.0
fldl    LC3
fstpl   -16(%ebp)

// t2 = 6.3
fldl    LC4
fstpl   -24(%ebp)

// sum_2 =  t1+t2
fldl    -16(%ebp)
faddl   -24(%ebp)
fstpl   -32(%ebp)

// Compute t1+t2 again
fldl    -16(%ebp)
faddl   -24(%ebp)

// Load sum_2 from memory and compare
fldl    -32(%ebp)
fxch    %st(1)
fucompp

Interesting side note: This was compiled without optimization. When it's compiled with -O3, the compiler optimizes all of the code away.
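One more thing worth knowing if the 80-bit intermediates are causing you trouble: on a target with SSE2 you can ask gcc to do double arithmetic in SSE registers, where everything stays 64-bit; -ffloat-store is an older, weaker knob that only forces named variables out to memory, not intermediate results (again, the file name is just a placeholder):

gcc -mfpmath=sse -msse2 fptest.c    # keep double math in 64-bit SSE registers
gcc -ffloat-store fptest.c          # spill variables (but not temporaries) to memory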
