Rounding issues .Net Core 3.1 vs. .Net Core 2.0/.Net Framework


Problem description


I'm experiencing some rounding issues between .NET Core 3.0 and .NET Framework/.NET Core 2.x.

I've been searching the web for a while, but I couldn't find the right term to search for, so I'm posting it here.

I wrote the following sample console app to illustrate my problem:

class Program
{
    static void Main(string[] args)
    {
        const double x = 123.4567890 / 3.14159265358979;
        Console.WriteLine(x);

        const double y = 98.76543210 / 3.14159265358979;
        Console.WriteLine(y);

        const double z = 11.2233445566778899 / 3.14159265358979;
        Console.WriteLine(z);

        Console.ReadKey();
    }
}

I ran this program on different frameworks and got the following output:

  • .NET Framework 4.7.2
    • 39,2975164552063
    • 31,4380134506439
    • 3,57250152843761
  • .NET Core 2.0:
    • 39,2975164552063
    • 31,4380134506439
    • 3,57250152843761
  • .NET Core 3.0:
    • 39,2975164552063
    • 31,438013450643936
    • 3,5725015284376096

As you can see, the 3.0 output differs from the first two and has more precision starting around the 13th digit after the decimal point.

I assume that .NET Core 3.0's output is simply more precise.

But my case is that I want to migrate from .NET Framework to .NET Core 3.0. Before migrating, I wrote tests for the .NET Framework library to make sure the calculations give the same output after migrating to .NET Core 3.0. For that, I just wrote tests like:

// Arrange
const double expectedValue = 0.1232342802302;

// Act
var result = Subject.Calculate();

// Assert
result.Should().Be(expectedValue);

If I migrate the code and run the tests, which I wrote against .NET Framework, the tests fail. I get minor differences like:

Expected item[0] to be 0.4451391569556069, but found 0.44513915698437145.
Expected result to be -13.142142181869094, but found -13.142142181869062.

My question here is: how do I force .NET Core 3.0 to round the same way as .NET Framework/.NET Core 2.0 does, so I won't get these minor differences?

And could anyone explain this difference / describe the rounding changes in .NET Core 3.1 versus .NET Framework?

Solution

This is a documented change that makes the formatter and parser compliant with IEEE 754-2008. From the IEEE Floating-Point section in the What's new in .NET Core 3.0 document:

Floating point APIs are being updated to comply with IEEE 754-2008 revision. The goal of these changes is to expose all required operations and ensure that they're behaviorally compliant with the IEEE spec. For more information about floating-point improvements, see the Floating-Point Parsing and Formatting improvements in .NET Core 3.0 blog post.

The examples in the blog post actually address what happened here with Pi (emphasis mine):

ToString(), ToString("G"), and ToString("R") will now return the shortest roundtrippable string. This ensures that users end up with something that just works by default.

An example of where it was problematic was Math.PI.ToString() where the string that was previously being returned (for ToString() and ToString("G")) was 3.14159265358979; instead, it should have returned 3.1415926535897931.

The previous result, when parsed, returned a value which was internally off by 7 ULP (units in last place) from the actual value of Math.PI. This meant that it was very easy for users to get into a scenario where they would accidentally lose some precision on a floating-point value when they needed to serialize/deserialize it.
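This precision loss is easy to reproduce. The following is a minimal sketch (not from the original answer): it parses the 15-digit string the pre-3.0 formatter produced for Math.PI and shows that the result no longer equals Math.PI:

```csharp
using System;
using System.Globalization;

class RoundTripDemo
{
    static void Main()
    {
        // The 15-digit string the pre-3.0 formatter produced for Math.PI.
        double parsed = double.Parse("3.14159265358979",
                                     CultureInfo.InvariantCulture);

        // The parsed value is off by a few ULP, so exact equality fails:
        Console.WriteLine(parsed == Math.PI);  // False

        // "G17" always emits enough digits to round-trip a double.
        Console.WriteLine(Math.PI.ToString("G17", CultureInfo.InvariantCulture));
        // 3.1415926535897931
    }
}
```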

The actual data hasn't changed. The y and z values do have greater precision, even in .NET 4.7. What did change is the formatter. Before Core 3.x, the formatter would use only 15 digits even if the values had greater precision.
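One quick way to convince yourself that only the formatting changed (a sanity check, not part of the original answer) is to look at the raw bits of the double, which are identical on every runtime:

```csharp
using System;
using System.Globalization;

class FormatterDemo
{
    static void Main()
    {
        double y = 98.76543210 / 3.14159265358979;

        // The stored bits are the same on every framework; only the
        // default ToString() output changed in .NET Core 3.0.
        Console.WriteLine(BitConverter.DoubleToInt64Bits(y));

        // "G15" reproduces the old 15-digit default output.
        Console.WriteLine(y.ToString("G15", CultureInfo.InvariantCulture));
        // 31.4380134506439
    }
}
```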

The blog post explains how to get the old behavior:

For ToString() and ToString("G") you can use G15 as the format specifier as this is what the previous logic would do internally.

The following code:

const double y = 98.76543210 / 3.14159265358979;
Console.WriteLine(y);
Console.WriteLine("{0:G15}", y);

will print:

31.438013450643936
31.4380134506439
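As for the failing tests themselves, an alternative to forcing the old formatting is to compare with a tolerance instead of exact equality (with FluentAssertions, which the `Should().Be` syntax suggests, that would be `BeApproximately`). A minimal sketch using the values from the failing assertion in the question:

```csharp
using System;

class ToleranceDemo
{
    static void Main()
    {
        // Values from the failing assertion in the question.
        double expected = 0.4451391569556069;
        double actual   = 0.44513915698437145;

        // Exact equality fails...
        Console.WriteLine(expected == actual);                  // False

        // ...but the values differ only far out in the decimals,
        // so a tolerance comparison treats them as equal.
        Console.WriteLine(Math.Abs(expected - actual) < 1e-9);  // True
    }
}
```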
