Mathematically determine the precision and scale of a decimal value


Problem description


I have been looking at some way to determine the scale and precision of a decimal in C#, which led me to several SO questions, yet none of them seem to have correct answers, or have misleading titles (they really are about SQL server or some other databases, not C#), or any answers at all. The following post, I think, is the closest to what I'm after, but even this seems wrong:

Determine the decimal precision of an input number


First, there seems to be some confusion about the difference between scale and precision. Per Google (per MSDN):


Precision is the number of digits in a number. Scale is the number of digits to the right of the decimal point in a number.


With that being said, the number 12345.67890M would have a scale of 5 and a precision of 10. I have not discovered a single code example that would accurately calculate this in C#.


I want to make two helper methods, decimal.Scale() and decimal.Precision(), such that the following unit test passes:

[TestMethod]
public void ScaleAndPrecisionTest()
{
    //arrange 
    var number = 12345.67890M;

    //act
    var scale = number.Scale();
    var precision = number.Precision();

    //assert
    Assert.IsTrue(precision == 10);
    Assert.IsTrue(scale == 5);
}


but I have yet to find a snippet that will do this, though several people have suggested using decimal.GetBits(), and others have said, convert it to a string and parse it.


Converting it to a string and parsing it is, in my mind, an awful idea, even disregarding the localization issue with the decimal point. The math behind the GetBits() method, however, is like Greek to me.


Can anyone describe what the calculations would look like for determining scale and precision in a decimal value for C#?

Answer


This is how you get the scale using the GetBits() function:

decimal x = 12345.67890M;
int[] bits = decimal.GetBits(x);
byte scale = (byte) ((bits[3] >> 16) & 0x7F); 
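As a quick sanity check (my own illustration, not part of the original answer): because `decimal` preserves trailing zeros, the scale read out of `GetBits()` reflects them, which is worth knowing before relying on this snippet:

```csharp
using System;

public static class ScaleDemo
{
    public static byte ScaleOf(decimal d)
    {
        // The scale is stored in bits 16-23 of the fourth element from GetBits.
        int[] bits = decimal.GetBits(d);
        return (byte)((bits[3] >> 16) & 0x7F);
    }

    public static void Main()
    {
        Console.WriteLine(ScaleOf(12345.67890M)); // 5
        Console.WriteLine(ScaleOf(1.200M));       // 3 -- trailing zeros survive in decimal
        Console.WriteLine(ScaleOf(100M));         // 0
    }
}
```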


And the best way I can think of to get the precision is to remove the decimal point (i.e. use the decimal constructor to rebuild the number without the scale extracted above) and then take the base-10 logarithm:

decimal x = 12345.67890M;
int[] bits = decimal.GetBits(x);
//Use false for the sign (false = positive), because we don't care about it.
//Use 0 for the last argument instead of bits[3] to drop the decimal point.
decimal xx = new Decimal(bits[0], bits[1], bits[2], false, 0);
int precision = (int)Math.Floor(Math.Log10((double)xx)) + 1;
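One caveat (my own observation, not raised in the original answer): casting the 96-bit `decimal` magnitude to `double` keeps only about 15-16 significant digits, so `Math.Log10` can over-count the precision of values such as 9999999999999999M, which rounds up to 1e16 as a double. A sketch that counts digits entirely in `decimal` arithmetic avoids the cast:

```csharp
using System;

public static class PrecisionSketch
{
    public static int Digits(decimal value)
    {
        // Rebuild the unscaled 96-bit magnitude, as in the logarithm approach.
        int[] bits = decimal.GetBits(value);
        decimal mantissa = new decimal(bits[0], bits[1], bits[2], false, 0);

        // Count digits by repeated division, staying in decimal arithmetic
        // so no double rounding can creep in. Zero counts as one digit here.
        int digits = 1;
        while ((mantissa = decimal.Truncate(mantissa / 10)) != 0)
            digits++;
        return digits;
    }

    public static void Main()
    {
        Console.WriteLine(Digits(12345.67890M));      // 10
        Console.WriteLine(Digits(9999999999999999M)); // 16
    }
}
```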


Now we can put them into extensions:

public static class Extensions
{
    public static int GetScale(this decimal value)
    {
        // The scale is stored in bits 16-23 of the flags element, bits[3].
        // No zero special case is needed: 0.00M correctly reports a scale of 2.
        int[] bits = decimal.GetBits(value);
        return (int)((bits[3] >> 16) & 0x7F);
    }

    public static int GetPrecision(this decimal value)
    {
        if (value == 0)
            return 0; // Log10(0) is undefined, so treat zero separately.
        int[] bits = decimal.GetBits(value);
        // Use false for the sign (false = positive), because we don't care about it.
        // Use 0 for the last argument instead of bits[3] to drop the decimal point.
        decimal d = new decimal(bits[0], bits[1], bits[2], false, 0);
        return (int)Math.Floor(Math.Log10((double)d)) + 1;
    }
}
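To see the extensions satisfy the unit test from the question, here is a self-contained run (the class body is a condensed copy of the answer's code so the program compiles on its own):

```csharp
using System;

public static class Extensions
{
    // Condensed copy of the answer's extension methods.
    public static int GetScale(this decimal value)
    {
        return (int)((decimal.GetBits(value)[3] >> 16) & 0x7F);
    }

    public static int GetPrecision(this decimal value)
    {
        if (value == 0)
            return 0; // Log10(0) is undefined.
        int[] bits = decimal.GetBits(value);
        decimal d = new decimal(bits[0], bits[1], bits[2], false, 0);
        return (int)Math.Floor(Math.Log10((double)d)) + 1;
    }
}

public static class Program
{
    public static void Main()
    {
        var number = 12345.67890M;
        Console.WriteLine(number.GetScale());     // 5
        Console.WriteLine(number.GetPrecision()); // 10
    }
}
```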

Here is the fiddle.

