Why are doubles printed differently in dictionaries?


Question

let dic : [Double : Double] = [1.1 : 2.3, 2.3 : 1.1, 1.2 : 2.3]

print(dic) // [2.2999999999999998: 1.1000000000000001, 1.2: 2.2999999999999998, 1.1000000000000001: 2.2999999999999998]



let double : Double = 2.3
let anotherdouble : Double = 1.1

print(double) // 2.3
print(anotherdouble) // 1.1

I don't get why the compiler prints values from dictionaries differently. I'm on Swift 3, Xcode 8. Is this a bug, or some weird way of optimizing things?

What's even weirder:

Some values go over, some go below, and some stay as they are! 1.1 is less than 1.1000000000000001, while 2.3 is more than 2.2999999999999998, and 1.2 is just 1.2.

Answer

As already mentioned in the comments, a Double cannot store the value 1.1 exactly. Swift uses (like many other languages) binary floating point numbers according to the IEEE 754 standard.

The closest number to 1.1 that can be represented as a Double is

1.100000000000000088817841970012523233890533447265625

and the closest number to 2.3 that can be represented as a Double is

2.29999999999999982236431605997495353221893310546875
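To see these stored values yourself, one option (a quick sketch, assuming Foundation's String(format:) is available) is to format the literals with enough fractional digits to show the full stored value:

import Foundation

// Formatting with many fractional digits reveals the value actually stored
// (51 and 50 digits are enough to print these two values exactly).
print(String(format: "%.51f", 1.1))
// 1.100000000000000088817841970012523233890533447265625
print(String(format: "%.50f", 2.3))
// 2.29999999999999982236431605997495353221893310546875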

Printing that number means that it is converted to a string with a decimal representation again, and that is done with different precision, depending on how you print the number.

From the source code at HashedCollections.swift.gyb one can see that the description method of Dictionary uses debugPrint() for both keys and values, and debugPrint(x) prints the value of x.debugDescription (if x conforms to CustomDebugStringConvertible).

On the other hand, print(x) calls x.description if x conforms to CustomStringConvertible.
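To illustrate that dispatch, here is a minimal sketch (a hypothetical Wrapper type, not part of the original answer) showing that print goes through description while debugPrint goes through debugDescription:

// Hypothetical type conforming to both protocols, for illustration only.
struct Wrapper: CustomStringConvertible, CustomDebugStringConvertible {
    let value: Double
    var description: String { return "Wrapper(\(value))" }
    var debugDescription: String { return "Wrapper(debug: \(value.debugDescription))" }
}

let w = Wrapper(value: 2.3)
print(w)      // Wrapper(2.3)                       – uses description
debugPrint(w) // Wrapper(debug: 2.2999999999999998) – uses debugDescription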

So what you see is the different output of description and debugDescription of Double:

print(1.1.description) // 1.1
print(1.1.debugDescription) // 1.1000000000000001

From the Swift source code one can see that both use the swift_floatingPointToString() function in Stubs.cpp, with the Debug parameter set to false and true, respectively. This parameter controls the precision of the number to string conversion:

int Precision = std::numeric_limits<T>::digits10;
if (Debug) {
  Precision = std::numeric_limits<T>::max_digits10;
}

For the meaning of those constants, see std::numeric_limits:

  • digits10 – number of decimal digits that can be represented without change,
  • max_digits10 – number of decimal digits necessary to differentiate all values of this type.
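For Double (IEEE 754 binary64), digits10 is 15 and max_digits10 is 17. As a rough approximation of the two code paths above (a sketch using C-style format specifiers via Foundation, not the actual Swift conversion routine), the difference looks like this:

import Foundation

// 15 vs. 17 significant digits, mirroring the Precision values above:
print(String(format: "%.15g", 2.3)) // 2.3
print(String(format: "%.17g", 2.3)) // 2.2999999999999998
print(String(format: "%.17g", 1.1)) // 1.1000000000000001
print(String(format: "%.17g", 1.2)) // 1.2

This also explains the observation in the question that 1.2 stays as it is: rounded to 17 significant digits, its stored value comes out as exactly 1.2 again.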

So description creates a string with fewer decimal digits. That string can be converted to a Double and back to a string, giving the same result. debugDescription creates a string with more decimal digits, so that any two different floating-point values will produce different output.
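As a quick check of that round-trip behaviour (a sketch with the values from the question, not part of the original answer):

let x = 2.3
let short = x.description        // "2.3"
let long = x.debugDescription    // "2.2999999999999998"
print(Double(short)! == x)       // true – the short string parses back to the same Double
print(Double(long)! == x)        // true – the long string identifies the exact value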

