swift: issue in converting string to double
Here is a simple piece of code in an Xcode 7.3.1 playground:
var str = "8.7"
print(Double(str))
the output is surprising:
Optional(8.6999999999999993)
also, Float(str)
gives: 8.69999981
Any thoughts on or reasons for this? Any references would be appreciated.
Also, how should I then convert "8.7" to 8.7 as Double (or Float)?
Edit
in swift:
(str as NSString).doubleValue returns 8.7
Now, that is OK. But my question still does not get a complete answer: we have found an alternative, but why can we not rely on Double("8.7")? Please give a deeper insight on this.
Edit 2
("6.9" as NSString).doubleValue // prints 6.9000000000000004
So, the question opens up again.
There are two different issues here. First – as already mentioned in
the comments – a binary floating point number cannot represent the
number 8.7
precisely. Swift uses the IEEE 754 standard for representing
single- and double-precision floating point numbers, and if you assign
let x = 8.7
then the closest representable number is stored in x
, and that is
8.699999999999999289457264239899814128875732421875
Much more information about this can be found in the excellent Q&A Is floating point math broken?.
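A quick way to see this in a playground is to print the stored value with a high-precision C format string (a sketch; String(format:) requires Foundation):

```swift
import Foundation

let x = 8.7
// Ask for 50 decimal places to reveal the exact IEEE 754
// double that is actually stored in x:
print(String(format: "%.50f", x))
// 8.69999999999999928945726423989981412887573242187500

// The classic illustration of binary floating point rounding:
print(0.1 + 0.2 == 0.3)  // false
```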
The second issue is: Why is the number sometimes printed as "8.7" and sometimes as "8.6999999999999993"?
let str = "8.7"
print(Double(str)) // Optional(8.6999999999999993)
let x = 8.7
print(x) // 8.7
Is Double("8.7")
different from 8.7
? Is one more precise than
the other?
To answer these questions, we need to know how the print()
function works:
- If the argument conforms to CustomStringConvertible, the print function calls its description property and prints the result to the standard output.
- Otherwise, if the argument conforms to CustomDebugStringConvertible, the print function calls its debugDescription property and prints the result to the standard output.
- Otherwise, some other mechanism is used. (Not important here for our purpose.)
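This dispatch rule can be illustrated with a small hypothetical type that conforms only to CustomDebugStringConvertible (the type and its output string are made up for this sketch):

```swift
struct OnlyDebug: CustomDebugStringConvertible {
    var debugDescription: String { return "from debugDescription" }
}

// With no CustomStringConvertible conformance, print() falls
// through to the debugDescription property:
print(OnlyDebug())  // from debugDescription
```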
The Double
type conforms to CustomStringConvertible
, therefore
let x = 8.7
print(x) // 8.7
produces the same output as
let x = 8.7
print(x.description) // 8.7
But what happens in
let str = "8.7"
print(Double(str)) // Optional(8.6999999999999993)
Double(str)
is an optional, and struct Optional
does not
conform to CustomStringConvertible
, but to
CustomDebugStringConvertible
. Therefore the print function calls
the debugDescription
property of Optional
, which in turn
calls the debugDescription
of the underlying Double
.
Therefore – apart from being an optional – the number output is
the same as in
let x = 8.7
print(x.debugDescription) // 8.6999999999999993
But what is the difference between description
and debugDescription
for floating point values? From the Swift source code one can see
that both ultimately call the swift_floatingPointToString
function in Stubs.cpp, with the Debug
parameter set to false
and true
, respectively.
This controls the precision of the number to string conversion:
int Precision = std::numeric_limits<T>::digits10;
if (Debug) {
Precision = std::numeric_limits<T>::max_digits10;
}
For the meaning of those constants, see http://en.cppreference.com/w/cpp/types/numeric_limits:
- digits10 – number of decimal digits that can be represented without change,
- max_digits10 – number of decimal digits necessary to differentiate all values of this type.
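For an IEEE 754 double, digits10 is 15 and max_digits10 is 17. The effect of the two precisions can be reproduced in Swift with C format strings (a sketch, assuming Foundation is available):

```swift
import Foundation

let x = 8.7
// digits10 = 15: printed with 15 significant digits,
// 8.7 comes out as "8.7":
print(String(format: "%.15g", x))  // 8.7

// max_digits10 = 17: 17 significant digits are needed to
// distinguish all doubles, and they expose the stored value:
print(String(format: "%.17g", x))  // 8.6999999999999993

// The 17-digit string converts back to exactly the same Double:
print(Double(String(format: "%.17g", x)) == x)  // true
```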
So description creates a string with fewer decimal digits. That
string can be converted to a Double
and back to a string giving
the same result.
debugDescription
creates a string with more decimal digits, so that
any two different floating point values will produce a different output.
Summary:
- Most decimal numbers cannot be represented exactly as a binary floating point value.
- The description and debugDescription methods of the floating point types use different precisions for the conversion to a string.
- As a consequence, printing an optional floating point value uses a different conversion precision than printing a non-optional value.
Therefore in your case, you probably want to unwrap the optional before printing it:
let str = "8.7"
if let d = Double(str) {
print(d) // 8.7
}
For better control, use NSNumberFormatter
or formatted
printing with the %.<precision>f
format.
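A sketch of both options; note that the formatter class is named NumberFormatter in current Swift (it was NSNumberFormatter in the Swift 2 era of the question):

```swift
import Foundation

if let d = Double("8.7") {
    // Fixed precision via a C-style format string:
    print(String(format: "%.1f", d))  // 8.7

    // Locale-aware formatting with NumberFormatter:
    let formatter = NumberFormatter()
    formatter.maximumFractionDigits = 2
    print(formatter.string(from: NSNumber(value: d)) ?? "")
}
```

The NumberFormatter output depends on the current locale (e.g. the decimal separator), which is usually what you want for user-facing strings.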
Another option can be to use (NS)DecimalNumber
instead of Double
(e.g. for currency amounts), see e.g. Round Issue in swift.
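With Decimal, the Swift overlay type for NSDecimalNumber, the value is stored with a base-10 significand, so "8.7" needs no rounding at all (a minimal sketch):

```swift
import Foundation

// Decimal parses the base-10 string exactly:
if let d = Decimal(string: "8.7") {
    print(d)      // 8.7
    print(d + d)  // 17.4 (exact decimal arithmetic)
}
```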