setting python Decimal when defining precision yields inexact Decimal


Problem description

Why can't I explicitly declare an exact Decimal object within the precision I configured it to?

from decimal import Decimal, getcontext

getcontext().prec = 5

num = Decimal(0.1234)
print(num)

Expected output:

0.1234

Actual output:

0.12339999999999999580335696691690827719867229461669921875

Ultimately, I'm writing unit tests that compare Decimal objects, and they fail because the expected value is inexact.
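As a minimal sketch of the kind of comparison that fails (the test class and method names here are hypothetical, not from the original question):

import unittest
from decimal import Decimal, getcontext

getcontext().prec = 5

class TestDecimalExactness(unittest.TestCase):
    def test_expected_value(self):
        # Fails: Decimal(0.1234) stores the binary-float approximation,
        # which is not equal to the exact Decimal('0.1234').
        self.assertEqual(Decimal(0.1234), Decimal('0.1234'))

if __name__ == '__main__':
    unittest.main()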

Answer

0.1234 is first converted to a standard IEEE-754 radix-2 (binary) floating-point number, and that approximation is then converted to Decimal. Note that getcontext().prec only affects the results of arithmetic operations; the Decimal constructor always stores its argument exactly, however many digits that takes. Just pass the value as a string instead.

>>> Decimal(0.1234)
Decimal('0.12339999999999999580335696691690827719867229461669921875')
>>> Decimal('0.1234')
Decimal('0.1234')
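
If a value unavoidably arrives as a float, two standard decimal-module behaviors can help (a sketch, assuming Python 3 and the prec = 5 context above): unary plus rounds its operand to the current context's precision, and quantize rounds to a fixed exponent.

>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 5
>>> +Decimal(0.1234)            # unary plus applies the context precision
Decimal('0.12340')
>>> Decimal(0.1234).quantize(Decimal('0.0001'))   # fix four decimal places
Decimal('0.1234')
>>> Decimal(str(0.1234))        # str() gives the shortest repr of the float
Decimal('0.1234')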
