Why does GDB evaluate floating-point arithmetic differently from C++?


Problem description


I've encountered something a little confusing while trying to deal with a floating-point arithmetic problem.

First, the code. I've distilled the essence of my problem into this example:

#include <iostream>
#include <iomanip>

using namespace std;
typedef union {long long ll; double d;} bindouble;

int main(int argc, char** argv) {
    bindouble y, z, tau, xinum, xiden;
    y.d = 1.0;
    z.ll = 0x3fc5f8e2f0686eee; // double 0.17165791262311053
    tau.ll = 0x3fab51c5e0bf9ef7; // double 0.053358253178712838
    // xinum = double 0.16249854626123722 (0x3fc4ccc09aeb769a)
    xinum.d = y.d * (z.d - tau.d) - tau.d * (z.d - 1);
    // xiden = double 0.16249854626123725 (0x3fc4ccc09aeb769b)
    xiden.d = z.d * (1 - tau.d);
    cout << hex << xinum.ll << endl << xiden.ll << endl;
}

xinum and xiden should have the same value (when y == 1), but because of floating-point roundoff error they don't. That part I get.
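For reference, the algebra that makes the two expressions equal when y == 1:

```latex
y\,(z-\tau) - \tau\,(z-1)
  \;\overset{y=1}{=}\; z - \tau - \tau z + \tau
  \;=\; z\,(1-\tau)
```

The identity holds exactly over the reals, but each intermediate operation in the two expressions rounds differently in floating point, so the computed results can differ in the last bit.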

The question came up when I ran this code (actually, my real program) through GDB to track down the discrepancy. If I use GDB to reproduce the evaluations done in the code, it gives a different result for xiden:

$ gdb mathtest
GNU gdb (Gentoo 7.5 p1) 7.5
...
This GDB was configured as "x86_64-pc-linux-gnu".
...
(gdb) break 16
Breakpoint 1 at 0x4008ef: file mathtest.cpp, line 16.
(gdb) run
Starting program: /home/diazona/tmp/mathtest 
...
Breakpoint 1, main (argc=1, argv=0x7fffffffd5f8) at mathtest.cpp:16
16          cout << hex << xinum.ll << endl << xiden.ll << endl;
(gdb) print xiden.d
$1 = 0.16249854626123725
(gdb) print z.d * (1 - tau.d)
$2 = 0.16249854626123722

You'll notice that if I ask GDB to calculate z.d * (1 - tau.d), it gives 0.16249854626123722 (0x3fc4ccc09aeb769a), whereas the actual C++ code that calculates the same thing in the program gives 0.16249854626123725 (0x3fc4ccc09aeb769b). So GDB must be using a different evaluation model for floating-point arithmetic. Can anyone shed some more light on this? How is GDB's evaluation different from my processor's evaluation?

I did look at this related question asking about GDB evaluating sqrt(3) to 0, but this shouldn't be the same thing because there are no function calls involved here.

Solution

It could be because the x86 FPU works in its registers at 80-bit precision, but rounds to 64 bits when a value is stored to memory. GDB will be storing to memory on every step of the (interpreted) computation.
