(.1f + .2f == .3f) != (.1f + .2f).Equals(.3f) Why?


Question


My question is not about floating precision. It is about why Equals() is different from ==.

I understand why .1f + .2f == .3f is false (while .1m + .2m == .3m is true).
I get that == is reference and .Equals() is value comparison. (Edit: I know there is more to this.)

But why is (.1f + .2f).Equals(.3f) true, while (.1d+.2d).Equals(.3d) is still false?

 .1f + .2f == .3f;              // false
(.1f + .2f).Equals(.3f);        // true
(.1d + .2d).Equals(.3d);        // false

Solution

The question is confusingly worded. Let's break it down into many smaller questions:

Why is it that one tenth plus two tenths does not always equal three tenths in floating point arithmetic?

Let me give you an analogy. Suppose we have a math system where all numbers are rounded off to exactly five decimal places. Suppose you say:

x = 1.00000 / 3.00000;

You would expect x to be 0.33333, right? Because that is the closest number in our system to the real answer. Now suppose you said

y = 2.00000 / 3.00000;

You'd expect y to be 0.66667, right? Because again, that is the closest number in our system to the real answer. 0.66666 is farther from two thirds than 0.66667 is.

Notice that in the first case we rounded down and in the second case we rounded up.

Now when we say

q = x + x + x + x;
r = y + x + x;
s = y + y;

what do we get? If we did exact arithmetic then each of these would obviously be four thirds and they would all be equal. But they are not equal. Even though 1.33333 is the closest number in our system to four thirds, only r has that value.

q is 1.33332 -- because x was a little bit small, every addition accumulated that error and the end result is quite a bit too small. Similarly, s is too big; it is 1.33334, because y was a little bit too big. r gets the right answer because the too-big-ness of y is cancelled out by the too-small-ness of x and the result ends up correct.
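
Here is a minimal C# sketch of that five-decimal-place toy system, using decimal to stand in for the toy arithmetic. Round5 is a helper invented for the sketch, not a library function; it rounds every intermediate result to exactly five places, reproducing the values just described:

decimal Round5(decimal v) => Math.Round(v, 5, MidpointRounding.AwayFromZero);

decimal x = Round5(1.00000m / 3.00000m);            // 0.33333 -- rounded down
decimal y = Round5(2.00000m / 3.00000m);            // 0.66667 -- rounded up

decimal q = Round5(Round5(Round5(x + x) + x) + x);  // 1.33332 -- every step a little too small
decimal r = Round5(Round5(y + x) + x);              // 1.33333 -- errors cancel out
decimal s = Round5(y + y);                          // 1.33334 -- both terms a little too big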

Does the number of places of precision have an effect on the magnitude and direction of the error?

Yes; more precision makes the magnitude of the error smaller, but can change whether a calculation accrues a loss or a gain due to the error. For example:

b = 4.00000 / 7.00000;

b would be 0.57143, which rounds up from the true value of 0.571428571... Had we gone to eight places that would be 0.57142857, which has far, far smaller magnitude of error but in the opposite direction; it rounded down.

Because changing the precision can change whether an error is a gain or a loss in each individual calculation, this can change whether a given aggregate calculation's errors reinforce each other or cancel each other out. The net result is that sometimes a lower-precision computation is closer to the "true" result than a higher-precision computation because in the lower-precision computation you get lucky and the errors are in different directions.
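
The four-sevenths example can be sketched the same way; the default Math.Round is close enough to the toy system here because neither value sits exactly on a rounding midpoint:

decimal fourSevenths = 4.00000m / 7.00000m;      // 0.5714285714285714285714285714
decimal b5 = Math.Round(fourSevenths, 5);        // 0.57143    -- error rounds up
decimal b8 = Math.Round(fourSevenths, 8);        // 0.57142857 -- far smaller error, rounds down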

We would expect that doing a calculation in higher precision always gives an answer closer to the true answer, but this argument shows otherwise. This explains why sometimes a computation in floats gives the "right" answer but a computation in doubles -- which have twice the precision -- gives the "wrong" answer, correct?

Yes, this is exactly what is happening in your examples, except that instead of five digits of decimal precision we have a certain number of digits of binary precision. Just as one-third cannot be accurately represented in five -- or any finite number -- of decimal digits, 0.1, 0.2 and 0.3 cannot be accurately represented in any finite number of binary digits. Some of those will be rounded up, some of them will be rounded down, and whether or not additions of them increase the error or cancel out the error depends on the specific details of how many binary digits are in each system. That is, changes in precision can change the answer for better or worse. Generally the higher the precision, the closer the answer is to the true answer, but not always.
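
One way to see the binary representation error directly is to print the values the float literals actually store; widening each to double before formatting reveals the full stored value. The commented outputs are what a typical IEEE 754 .NET runtime prints:

Console.WriteLine(((double)0.1f).ToString("G17"));  // 0.10000000149011612
Console.WriteLine(((double)0.2f).ToString("G17"));  // 0.20000000298023224
Console.WriteLine(((double)0.3f).ToString("G17"));  // 0.30000001192092896
Console.WriteLine(0.1.ToString("G17"));             // 0.10000000000000001 -- double is off too, just less so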

How can I get accurate decimal arithmetic computations then, if float and double use binary digits?

If you require accurate decimal math then use the decimal type; it uses decimal fractions, not binary fractions. The price you pay is that it is considerably larger and slower. And of course as we've already seen, fractions like one third or four sevenths are not going to be represented accurately. Any fraction that is actually a decimal fraction however will be represented with zero error, up to about 29 significant digits.
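
A short sketch of both sides of that trade-off:

Console.WriteLine(0.1m + 0.2m == 0.3m);  // True -- tenths are exact decimal fractions
Console.WriteLine(1.0m / 3.0m);          // 0.3333333333333333333333333333 -- thirds still round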

OK, I accept that all floating point schemes introduce inaccuracies due to representation error, and that those inaccuracies can sometimes accumulate or cancel each other out based on the number of bits of precision used in the calculation. Do we at least have the guarantee that those inaccuracies will be consistent?

No, you have no such guarantee for floats or doubles. The compiler and the runtime are both permitted to perform floating point calculations in higher precision than is required by the specification. In particular, the compiler and the runtime are permitted to do single-precision (32 bit) arithmetic in 64 bit or 80 bit or 128 bit or whatever bitness greater than 32 they like.

The compiler and the runtime are permitted to do so however they feel like it at the time. They need not be consistent from machine to machine, from run to run, and so on. Since this can only make calculations more accurate this is not considered a bug. It's a feature. A feature that makes it incredibly difficult to write programs that behave predictably, but a feature nevertheless.

So that means that calculations performed at compile time, like the literals 0.1 + 0.2, can give different results than the same calculation performed at runtime with variables?

Yep.
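
For example, in the sketch below the first comparison can be folded to a constant by the compiler, while the second has to be evaluated at runtime; nothing in the rules forces the two to agree, and whether they actually differ depends on your compiler, runtime and hardware:

bool foldedByCompiler = 0.1 + 0.2 == 0.3;   // literal operands: the compiler may evaluate this itself

double x = 0.1, y = 0.2, z = 0.3;           // ordinary variables
bool computedAtRuntime = x + y == z;        // evaluated by the JIT, at whatever precision it picks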

What about comparing the results of 0.1 + 0.2 == 0.3 to (0.1 + 0.2).Equals(0.3)?

Since the first one is computed by the compiler and the second one is computed by the runtime, and I just said that they are permitted to arbitrarily use more precision than required by the specification at their whim, yes, those can give different results. Maybe one of them chooses to do the calculation only in 64 bit precision whereas the other picks 80 bit or 128 bit precision for part or all of the calculation and gets a different answer.
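
That is one plausible reading of the snippet in the question: the == comparison works on whatever precision the folded operands happen to have, while float.Equals(float) can only ever receive genuine 32-bit values. The commented results are what the asker observed; as established above, no particular outcome is guaranteed:

bool byOperator = .1f + .2f == .3f;        // may be folded and compared in double precision
bool byEquals = (.1f + .2f).Equals(.3f);   // float.Equals(float): both sides are true 32-bit floats
Console.WriteLine(byOperator);             // false (on the asker's machine)
Console.WriteLine(byEquals);               // true  (on the asker's machine)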

So hold up a minute here. You're saying not only that 0.1 + 0.2 == 0.3 can be different than (0.1 + 0.2).Equals(0.3). You're saying that 0.1 + 0.2 == 0.3 can be computed to be true or false entirely at the whim of the compiler. It could produce true on Tuesdays and false on Thursdays, it could produce true on one machine and false on another, it could produce both true and false if the expression appeared twice in the same program. This expression can have either value for any reason whatsoever; the compiler is permitted to be completely unreliable here.

Correct.

The way this is usually reported to the C# compiler team is that someone has some expression that produces true when they compile in debug and false when they compile in release mode. That's the most common situation in which this crops up because the debug and release code generation changes register allocation schemes. But the compiler is permitted to do anything it likes with this expression, so long as it chooses true or false. (It cannot, say, produce a compile-time error.)

This is craziness.

Correct.

Who should I blame for this mess?

Not me, that's for darn sure.

Intel decided to make a floating point math chip in which it was far, far more expensive to make consistent results. Small choices in the compiler about what operations to enregister vs what operations to keep on the stack can add up to big differences in results.

How do I ensure consistent results?

Use the decimal type, as I said before. Or do all your math in integers.

I have to use doubles or floats; can I do anything to encourage consistent results?

Yes. If you store any result into any static field, any instance field of a class or array element of type float or double then it is guaranteed to be truncated back to 32 or 64 bit precision. (This guarantee is expressly not made for stores to locals or formal parameters.) Also if you do a runtime cast to (float) or (double) on an expression that is already of that type then the compiler will emit special code that forces the result to truncate as though it had been assigned to a field or array element. (Casts which execute at compile time -- that is, casts on constant expressions -- are not guaranteed to do so.)
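
A sketch of both techniques; Compute here is a hypothetical stand-in for whatever produces your value:

class Stabilizer
{
    static float stored;                       // stores to static/instance fields truncate to 32 bits

    static float Compute(float a, float b) => a * b + a;   // hypothetical computation

    static bool CompareConsistently(float a, float b, float expected)
    {
        stored = Compute(a, b);                // guaranteed truncated by the field store
        float viaCast = (float)Compute(a, b);  // runtime identity cast also forces truncation
        return stored == expected && viaCast == expected;
    }
}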

To clarify that last point: does the C# language specification make those guarantees?

No. The runtime guarantees that stores into an array or field truncate. The C# specification does not guarantee that an identity cast truncates but the Microsoft implementation has regression tests that ensure that every new version of the compiler has this behaviour.

All the language spec has to say on the subject is that floating point operations may be performed in higher precision at the discretion of the implementation.
