How to convert strings to floats with perfect accuracy?
Problem Description
I'm trying to write a function in the D programming language to replace the calls to C's strtold. (Rationale: To use strtold from D, you have to convert D strings to C strings, which is inefficient. Also, strtold can't be executed at compile time.) I've come up with an implementation that mostly works, but I seem to lose some precision in the least significant bits.
The code for the interesting part of the algorithm is below, and I can see where the precision loss comes from, but I don't know how to get rid of it. (I've left out the parts of the code that aren't relevant to the core algorithm, to save people reading.) What string-to-float algorithm will guarantee that the result is as close as possible on the IEEE number line to the value represented by the string?
real currentPlace = 10.0L ^^ (pointPos - ePos + 1 + expon);  // place value of the last digit
real ans = 0;
// Scan the digits from least significant to most significant.
for(int index = ePos - 1; index > -1; index--) {
    if(str[index] == '.') {
        continue;  // skip the decimal point
    }
    if(str[index] < '0' || str[index] > '9') {
        err();
    }
    auto digit = cast(int) str[index] - cast(int) '0';
    ans += digit * currentPlace;  // this add, and the multiply below, can each round
    currentPlace *= 10;
}
return ans * sign;
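The precision loss in the loop above comes from the fact that both `ans += digit * currentPlace` and `currentPlace *= 10` can round on every iteration, so errors pile up. The classic fix (Clinger's "fast path") is to accumulate the digits exactly in an integer and apply the power of ten in a single operation, so only one rounding occurs. A minimal sketch in C (assuming C's double and strtod rather than D's real and strtold; parse_simple is a hypothetical name, and it handles only plain unsigned decimals with a short significand):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical sketch: parse the digits into an exact 64-bit integer,
   then apply the decimal scale with one division. This is only valid
   while the significand stays below 2^53 and the power of ten is
   exactly representable (10^k is exact in a double for k <= 22), in
   which case the single IEEE division rounds correctly once. */
double parse_simple(const char *s) {
    long long sig = 0;           /* exact while < 2^53 */
    int frac_digits = 0, seen_point = 0;
    for (; *s; s++) {
        if (*s == '.') { seen_point = 1; continue; }
        sig = sig * 10 + (*s - '0');
        if (seen_point) frac_digits++;
    }
    double pow10 = 1.0;
    for (int i = 0; i < frac_digits; i++)
        pow10 *= 10.0;           /* exact while the power stays <= 10^22 */
    return (double) sig / pow10; /* single, correctly rounded division */
}
```

Within those limits the result agrees with strtod, since both are correctly rounded; longer inputs or larger exponents need the arbitrary-precision fallback described in Gay's paper.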
Also, I'm using the unit tests for the old version, which did things like:
assert(to!(real)("0.456") == 0.456L);
Is it possible that the answers produced by my function are actually more accurate than the representation the compiler produces when parsing a floating-point literal, given that the compiler (which is written in C++) always agrees exactly with strtold because it uses strtold internally to parse floating-point literals?
Clinger and Steele & White developed fine algorithms for reading and writing floating point.
There's a retrospective here along with some references to implementations.
David Gay's paper improved on Clinger's work, and Gay's implementation in C is great. I have used it in embedded systems, and I believe Gay's dtoa made its way into many libc's.
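A handy byproduct of the Steele & White result: an IEEE binary64 value printed with 17 significant digits always re-parses to the bit-identical value, which gives a cheap sanity check for any string-to-float routine. A small C sketch (round_trips is a hypothetical name; it checks C's double/strtod, not D's real):

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* Returns 1 if printing x with 17 significant digits and re-parsing
   recovers the bit-identical double: 17 digits always suffice to
   round-trip an IEEE binary64 value. */
int round_trips(double x) {
    char buf[64];
    snprintf(buf, sizeof buf, "%.17g", x);
    return strtod(buf, NULL) == x;
}
```

Running this over a large set of random bit patterns is a quick way to smoke-test a parser before reaching for Gay's full test vectors.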