How does a computer do floating point arithmetic?

Problem description

I have seen long articles explaining how floating point numbers can be stored and how the arithmetic of those numbers is done, but please briefly explain why, when I write

cout << 1.0 / 3.0 << endl;

I see 0.333333, but when I write

cout << 1.0 / 3.0 + 1.0 / 3.0 + 1.0 / 3.0 << endl;

I see 1.

How does the computer do this? Please explain just this simple example. It is enough for me.
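
A minimal complete program, assuming a typical IEEE 754 double and the default stream precision of six significant digits, reproduces this behaviour:

#include <iostream>

int main() {
    std::cout << 1.0 / 3.0 << std::endl;                          // prints 0.333333 (6 significant digits by default)
    std::cout << 1.0 / 3.0 + 1.0 / 3.0 + 1.0 / 3.0 << std::endl;  // prints 1
    return 0;
}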

Recommended answer

The problem is that the floating point format represents fractions in base 2.

The first fraction bit is ½, the second ¼, and it goes on as 1/2ⁿ.
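
For instance, the binary expansion of 1/3 is 0.010101..., i.e. ¼ + 1/16 + 1/64 + ...; a small sketch (assuming IEEE 754 doubles) shows the partial sums creeping toward 1/3 without ever reaching it:

#include <iostream>
#include <iomanip>

int main() {
    // 1/3 in base 2 has a '1' at every even fraction position: 1/4 + 1/16 + 1/64 + ...
    double sum = 0.0;
    double bit = 0.25;                      // value of the first '1' fraction bit
    for (int ones = 1; ones <= 6; ++ones) {
        sum += bit;
        std::cout << std::fixed << std::setprecision(10)
                  << "after " << ones << " one-bits: " << sum << '\n';
        bit /= 4.0;                         // the next '1' bit is two places further right
    }
    return 0;
}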

And the problem with that is that not every rational number (a number that can be expressed as the ratio of two integers) actually has a finite representation in this base 2 format.
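
Printing 1/3 with more digits than the default makes this visible; a quick check, assuming a typical IEEE 754 double:

#include <iostream>
#include <iomanip>

int main() {
    // The stored double is only the nearest representable value to 1/3, not 1/3 itself.
    std::cout << std::setprecision(20) << 1.0 / 3.0 << '\n';  // 0.33333333333333331483 on a typical system
    return 0;
}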

(This makes the floating point format difficult to use for monetary values. Although these values are always rational numbers (n/100), only .00, .25, .50, and .75 actually have exact representations in any number of digits of a base two fraction.)
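
The same kind of check on two cent values (again assuming IEEE 754 doubles) shows 0.25 stored exactly and 0.10 stored only approximately:

#include <iostream>
#include <iomanip>

int main() {
    std::cout << std::setprecision(20) << 0.25 << '\n';  // 0.25 exactly (1/4 is a power-of-two fraction)
    std::cout << std::setprecision(20) << 0.10 << '\n';  // 0.10000000000000000555 on a typical system
    return 0;
}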

Anyway, when you add them back, the system eventually gets a chance to round the result to a number that it can represent exactly.

At some point, it finds itself adding the .666... number to the .333... one, like so:

  00111110 1  .o10101010 10101010 10101011
+ 00111111 0  .10101010 10101010 10101011o
------------------------------------------
  00111111 1 (1).0000000 00000000 0000000x  # the x isn't in the final result

The leftmost bit is the sign, the next eight are the exponent, and the remaining bits are the fraction. In between the exponent and the fraction is an assumed "1" that is always present, and therefore not actually stored, as the normalized leftmost fraction bit. I've written zeroes that aren't actually present as individual bits as o.
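
Those fields can be inspected directly by copying a float's bytes into an integer. A small sketch along those lines, assuming 32-bit IEEE 754 floats (the bit grouping differs slightly from the diagram above):

#include <cstdint>
#include <cstdio>
#include <cstring>

// Print a 32-bit float as its sign, exponent, and stored fraction fields.
static void dump_bits(float f) {
    std::uint32_t u;
    std::memcpy(&u, &f, sizeof u);                       // reinterpret the float's bytes
    std::putchar((u >> 31) ? '1' : '0');                 // sign bit
    std::putchar(' ');
    for (int i = 30; i >= 23; --i)                       // 8 exponent bits
        std::putchar(((u >> i) & 1) ? '1' : '0');
    std::putchar(' ');
    for (int i = 22; i >= 0; --i)                        // 23 stored fraction bits
        std::putchar(((u >> i) & 1) ? '1' : '0');
    std::printf("  (%.9g)\n", f);
}

int main() {
    dump_bits(1.0f / 3.0f);                              // the ".333..." operand
    dump_bits(1.0f / 3.0f + 1.0f / 3.0f);                // the ".666..." intermediate value
    dump_bits(1.0f / 3.0f + 1.0f / 3.0f + 1.0f / 3.0f);  // rounds to exactly 1.0
    return 0;
}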

A lot has happened here: at each step, the FPU has taken rather heroic measures to round the result. Two extra digits of precision (beyond what will fit in the result) have been kept, and in many cases the FPU knows whether any, or at least one, of the remaining rightmost bits was a one. If so, then that part of the fraction is more than 0.5 (scaled), and so it rounds up. The intermediate rounded values allow the FPU to carry the rightmost bit all the way over to the integer part and finally round to the correct answer.
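
The net effect can be checked with an exact comparison. On a typical IEEE 754 system with round-to-nearest the first test below holds, and the second shows that this is a property of these particular values rather than a general rule:

#include <iostream>

int main() {
    // The rounding described above makes the sum land exactly on 1.0.
    double third = 1.0 / 3.0;
    std::cout << std::boolalpha << (third + third + third == 1.0) << '\n';  // true

    // Other decimal values are not so lucky: 0.1 + 0.2 rounds to a value just above 0.3.
    std::cout << std::boolalpha << (0.1 + 0.2 == 0.3) << '\n';              // false
    return 0;
}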

This didn't happen because anyone added 0.5; the FPU just did the best it could within the limitations of the format. Floating point is not, actually, inaccurate. It's perfectly accurate, but most of the numbers we expect to see in our base-10, rational-number world view are not representable by the base-2 fraction of the format. In fact, very few are.
