When should I use double instead of decimal?


Problem description

I can name three advantages to using double (or float) instead of decimal:

  1. Uses less memory.
  2. Faster because floating point math operations are natively supported by processors.
  3. Can represent a larger range of numbers.

But these advantages seem to apply only to calculation intensive operations, such as those found in modeling software. Of course, doubles should not be used when precision is required, such as financial calculations. So are there any practical reasons to ever choose double (or float) instead of decimal in "normal" applications?
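
To make that concrete, here is a minimal C# sketch (the class and variable names are just chosen for illustration) of the kind of rounding I have in mind: summing ten 10-cent amounts with double does not land exactly on 1.00, while decimal does.

```csharp
using System;

class MoneyRounding
{
    static void Main()
    {
        double totalAsDouble = 0.0;
        decimal totalAsDecimal = 0.0m;

        // Ten items at $0.10 each; the exact total is $1.00.
        for (int i = 0; i < 10; i++)
        {
            totalAsDouble += 0.10;    // 0.10 has no exact binary representation
            totalAsDecimal += 0.10m;  // 0.10m is stored exactly in base 10
        }

        Console.WriteLine(totalAsDouble == 1.00);          // False
        Console.WriteLine(totalAsDouble.ToString("G17"));  // 0.99999999999999989
        Console.WriteLine(totalAsDecimal == 1.00m);        // True
    }
}
```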

Edited to add: Thanks for all the great responses, I learned from them.

One further question: a few people made the point that doubles can represent real numbers more precisely. When declared, I would think that they usually represent them more accurately as well. But is it true that the accuracy may decrease (sometimes significantly) when floating-point operations are performed?

Solution

I think you've summarised the advantages quite well. You are however missing one point: the decimal type is only more accurate at representing base-10 numbers (e.g. those used in currency/financial calculations). In general, the double type is going to offer at least as much precision (someone correct me if I'm wrong) and definitely greater speed for arbitrary real numbers. The simple conclusion is: when considering which to use, always use double unless you need the base-10 accuracy that decimal offers.
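
To give a rough sense of the speed difference (this is only a sketch; the iteration count is arbitrary and exact timings will vary by machine and runtime), double arithmetic runs on the hardware floating-point unit while decimal's 128-bit arithmetic is done in software, so a simple summation loop is typically many times slower with decimal:

```csharp
using System;
using System.Diagnostics;

class SpeedComparison
{
    static void Main()
    {
        const int iterations = 10_000_000;

        var sw = Stopwatch.StartNew();
        double doubleSum = 0.0;
        for (int i = 0; i < iterations; i++)
            doubleSum += 1.0 / 3.0;          // handled by the hardware FPU
        sw.Stop();
        Console.WriteLine($"double : {sw.ElapsedMilliseconds} ms (sum = {doubleSum})");

        sw.Restart();
        decimal decimalSum = 0.0m;
        for (int i = 0; i < iterations; i++)
            decimalSum += 1m / 3m;           // 128-bit decimal arithmetic done in software
        sw.Stop();
        Console.WriteLine($"decimal: {sw.ElapsedMilliseconds} ms (sum = {decimalSum})");
    }
}
```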

Edit:

Regarding your additional question about the decrease in accuracy of floating-point numbers after operations, this is a slightly more subtle issue. Indeed, precision (I use the term interchangeably with accuracy here) will steadily decrease after each operation is performed. This is due to two reasons: a) certain numbers (most obviously decimal fractions such as 0.1) can't be exactly represented in floating-point form, and b) rounding errors occur, just as they would if you were doing the calculation by hand. Whether these errors are significant enough to warrant much thought depends greatly on the context (how many operations you're performing). In any case, if you want to compare two floating-point numbers that should in theory be equivalent (but were arrived at via different calculations), you need to allow a certain degree of tolerance (how much varies, but it is typically very small).
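
Here is a minimal sketch of both points (the iteration count and the tolerance are arbitrary values chosen for illustration): adding 0.1 a million times as a double drifts slightly from the exact result, so the comparison needs a tolerance rather than ==.

```csharp
using System;

class AccumulatedError
{
    static void Main()
    {
        // Add 0.1 one million times; the exact answer is 100000.
        double sum = 0.0;
        for (int i = 0; i < 1_000_000; i++)
            sum += 0.1;                      // each addition rounds to the nearest double

        Console.WriteLine(sum == 100000.0);      // False
        Console.WriteLine(sum.ToString("G17"));  // roughly 100000.00000133288

        // Compare against a tolerance instead of using == directly.
        const double tolerance = 1e-9;           // relative tolerance, chosen for illustration
        bool closeEnough = Math.Abs(sum - 100000.0) <= tolerance * 100000.0;
        Console.WriteLine(closeEnough);          // True
    }
}
```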

For a more detailed overview of the particular cases where accuracy errors can be introduced, see the Accuracy section of the Wikipedia article on floating point. Finally, if you want a seriously in-depth (and mathematical) discussion of floating-point numbers/operations at machine level, try reading the oft-quoted article "What Every Computer Scientist Should Know About Floating-Point Arithmetic".
