Order of magnitude for double precision


Problem Description

What order of magnitude of difference should I expect for a subtraction between two theoretically equal double-precision numbers?

I have two double-precision arrays. They are expected to be theoretically identical. They are calculated by two completely different methodologies, so there is some numerical difference between them. I checked them element by element, and my maximum difference came out to be 6.5557799910909154E-008. My boss says that for double precision this is a very high difference, but I thought that if the difference is on the order of E-008, then it's alright.

Thank you,
Pradeep

Solution

Double-precision floating point has the following format:

• Sign bit: 1 bit
• Exponent width: 11 bits
• Significand precision: 53 bits (52 explicitly stored)

This gives 15 to 17 significant decimal digits of precision. If a decimal string with at most 15 significant decimal digits is converted to IEEE 754 double precision and then converted back to the same number of significant decimal digits, the final string should match the original; and if an IEEE 754 double is converted to a decimal string with at least 17 significant decimal digits and then converted back to double, the final number must match the original.
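As a quick check, here is a minimal Python sketch (standard library only) that prints these double-precision constants and demonstrates the 17-digit round trip:

```python
import sys

# Machine epsilon for IEEE 754 double precision: the gap between 1.0
# and the next representable double, 2**-52, about 2.22e-16.
print(sys.float_info.epsilon)   # 2.220446049250313e-16
print(sys.float_info.dig)       # 15: decimal digits guaranteed to round-trip
print(sys.float_info.mant_dig)  # 53: significand bits (52 stored plus 1 implicit)

# 17 significant decimal digits are enough to recover any double exactly.
x = 0.1
s = format(x, ".17g")   # '0.10000000000000001'
assert float(s) == x
```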


Single-precision floating point has the following format:

• Sign bit: 1 bit
• Exponent width: 8 bits
• Significand precision: 24 bits (23 explicitly stored)

This gives 6 to 9 significant decimal digits of precision. If a decimal string with at most 6 significant decimal digits is converted to IEEE 754 single precision and then converted back to the same number of significant decimal digits, the final string should match the original; and if an IEEE 754 single is converted to a decimal string with at least 9 significant decimal digits and then converted back to single, the final number must match the original.
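The corresponding single-precision figures can be checked the same way; a short sketch, assuming NumPy is available for its float32 type:

```python
import numpy as np

# IEEE 754 single precision, as exposed by NumPy's float32 type.
info = np.finfo(np.float32)
print(info.eps)        # 1.1920929e-07, i.e. 2**-23
print(info.precision)  # 6: decimal digits guaranteed to round-trip
print(info.nmant + 1)  # 24 significand bits (23 stored plus 1 implicit)

# Round-tripping a value through float32 keeps only ~7-8 decimal digits.
x = 0.1
print(float(np.float32(x)))  # 0.10000000149011612
```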


The maximum difference you are encountering indicates a loss of precision akin to converting to single precision.
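One way to see this is to round-trip an array of doubles through single precision and measure the worst-case damage; a minimal sketch, assuming NumPy, with synthetic values standing in for real data:

```python
import numpy as np

# Synthetic doubles in [0, 1); real data would come from either methodology.
rng = np.random.default_rng(0)
a = rng.random(1000)

# Round-trip through single precision and measure the largest change.
b = a.astype(np.float32).astype(np.float64)
print(np.max(np.abs(a - b)))  # ~3e-08: the same order as the observed 6.6e-08
```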

Do you know which of the two methods is more accurate? Is the main difference a trade-off between speed of computation and precision, or is one of the algorithms less numerically stable? What is the precision of the inputs? A difference at the 8th decimal digit may not be relevant if your inputs aren't that precise... or it could mean missing Mars on a planetary trajectory.
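When judging such a difference, the relative error is usually more informative than the absolute one, since a fixed absolute difference means very different things at different magnitudes. A sketch, again assuming NumPy, with placeholder arrays standing in for the two methodologies' outputs:

```python
import numpy as np

# Placeholder arrays standing in for the outputs of the two methodologies;
# 'b' is 'a' perturbed at the 1e-8 relative level to mimic the situation.
rng = np.random.default_rng(1)
a = rng.random(1000)
b = a * (1.0 + 1e-8 * rng.standard_normal(1000))

abs_diff = np.max(np.abs(a - b))              # largest element-wise difference
rel_diff = np.max(np.abs(a - b) / np.abs(a))  # same, scaled by magnitude

# 6.6e-08 absolute means ~8 lost digits for values of order 1, but would be
# close to full double precision for values of order 1e+6.
print(abs_diff, rel_diff)
```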



