Is .NET “decimal” arithmetic independent of platform/architecture?


Question


I asked about System.Double recently and was told that computations may differ depending on platform/architecture. Unfortunately, I cannot find any information to tell me whether the same applies to System.Decimal.

Am I guaranteed to get exactly the same result for any particular decimal computation independently of platform/architecture?

Solution

Am I guaranteed to get exactly the same result for any particular decimal computation independently of platform/architecture?

The C# 4 spec is clear that the value you get will be computed the same on any platform.

As LukeH's answer notes, the ECMA version of the C# 2 spec grants leeway to conforming implementations to provide more precision, so an implementation of C# 2.0 on another platform might provide a higher-precision answer.

For the purposes of this answer I'll just discuss the C# 4.0 specified behaviour.

The C# 4.0 spec says:


The result of an operation on values of type decimal is that which would result from calculating an exact result (preserving scale, as defined for each operator) and then rounding to fit the representation. Results are rounded to the nearest representable value, and, when a result is equally close to two representable values, to the value that has an even number in the least significant digit position [...]. A zero result always has a sign of 0 and a scale of 0.


Since the calculation of the exact value of an operation should be the same on any platform, and the rounding algorithm is well-defined, the resulting value should be the same regardless of platform.
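
To see the “round to even” rule from the quoted paragraph in action, here is a small sketch (the literal below is just 10^28, chosen to exhaust the 96-bit integer precision so that each sum must be rounded; on an implementation that follows the quoted text, both additions land exactly between two representable values and round to the even least significant digit):

decimal big = 10000000000000000000000000000m; // 10^28: 29 significant digits

// The exact sums would need 30 digits, so each result is rounded back to scale 0.
Console.WriteLine(big + 0.5m); // midpoint -> rounds to even: 10000000000000000000000000000
Console.WriteLine(big + 1.5m); // midpoint -> rounds to even: 10000000000000000000000000002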

However, note the parenthetical and that last sentence about the zeroes. It might not be clear why that information is necessary.

One of the oddities of the decimal type is that almost every quantity has more than one possible representation. Consider the exact value 123.456. A decimal is the combination of a 96-bit integer, a 1-bit sign, and an eight-bit scale that represents a power of ten from 10^0 down to 10^-28. That means the exact value 123.456 could be represented by the decimals 123456 x 10^-3 or 1234560 x 10^-4 or 12345600 x 10^-5. Scale matters.

The C# specification also mandates how information about scale is computed. The literal 123.456m would be encoded as 123456 x 10^-3, and 123.4560m would be encoded as 1234560 x 10^-4.
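
One way to observe those two encodings directly is decimal.GetBits, which exposes the 96-bit integer and the scale (a sketch; the array layout [lo, mid, hi, flags], with the scale in bits 16-23 of the flags word, is the one documented for the CLR implementation):

int[] a = decimal.GetBits(123.456m);   // integer 123456,  scale 3
int[] b = decimal.GetBits(123.4560m);  // integer 1234560, scale 4

Console.WriteLine($"{a[0]} x 10^-{(a[3] >> 16) & 0xFF}"); // 123456 x 10^-3
Console.WriteLine($"{b[0]} x 10^-{(b[3] >> 16) & 0xFF}"); // 1234560 x 10^-4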

Observe the effects of this feature in action:

decimal d1 = 111.111000m;  // scale 6: stored as 111111000 x 10^-6
decimal d2 = 111.111m;     // scale 3: stored as 111111 x 10^-3
decimal d3 = d1 + d1;      // addition preserves scale: result has scale 6
decimal d4 = d2 + d2;      // result has scale 3
decimal d5 = d1 + d2;      // mixed scales: result takes the larger scale, 6
Console.WriteLine(d1);
Console.WriteLine(d2);
Console.WriteLine(d3);
Console.WriteLine(d4);
Console.WriteLine(d5);
Console.WriteLine(d3 == d4);  // equality compares exact values, not representations
Console.WriteLine(d4 == d5);
Console.WriteLine(d5 == d3);
This produces

111.111000
111.111
222.222000
222.222
222.222000
True
True
True

Notice how information about significant zero figures is preserved across operations on decimals, and that decimal.ToString knows about that and displays the preserved zeroes if it can. Notice also how decimal equality knows to make comparisons based on exact values, even if those values have different binary and string representations.
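
The distinction is easy to demonstrate directly: two decimals can be equal as values while carrying different scales, so their strings differ.

decimal a = 111.111000m;
decimal b = 111.111m;

Console.WriteLine(a == b);                        // True:  equality uses the exact values
Console.WriteLine(a.ToString() == b.ToString());  // False: "111.111000" vs "111.111"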

I don't think the spec actually says that decimal.ToString() needs to correctly print out values with trailing zeroes based on their scales, but it would be foolish of an implementation not to do so; I would consider that a bug.

I also note that the internal memory format of a decimal in the CLR implementation is 128 bits, subdivided into: 16 unused bits, 8 scale bits, 7 more unused bits, 1 sign bit and 96 mantissa bits. The exact layout of those bits in memory is not defined by the specification, and if another implementation wants to stuff additional information into those 23 unused bits for its own purposes, it can do so. In the CLR implementation the unused bits are supposed to always be zero.
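
That layout can be read back with decimal.GetBits as well; a sketch, decoding the flags word as described above (bits 0-15 unused, bits 16-23 scale, bits 24-30 unused, bit 31 sign):

int flags = decimal.GetBits(-123.4560m)[3];

Console.WriteLine((flags >> 16) & 0xFF); // scale: 4
Console.WriteLine((flags >> 31) & 1);    // sign:  1 (negative)
Console.WriteLine(flags & 0x7F00FFFF);   // unused bits: 0 in the CLR implementation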
