The precision of printf with specifier "%g"
Can anybody explain to me how the [.precision] in printf works with the specifier "%g"? I'm quite confused by the following output:
double value = 3122.55;
printf("%.16g\n", value); //output: 3122.55
printf("%.17g\n", value); //output: 3122.5500000000002
I've learned that %g uses the shortest representation.
But the following outputs still confuse me:
printf("%.16e\n", value); //output: 3.1225500000000002e+03
printf("%.16f\n", value); //output: 3122.5500000000001819
printf("%.17e\n", value); //output: 3.12255000000000018e+03
printf("%.17f\n", value); //output: 3122.55000000000018190
My question is: why does %.16g give the exact number while %.17g can't?
It seems 16 significant digits can be accurate. Could anyone tell me the reason?
%g uses the shortest representation.
Floating-point numbers usually aren't stored in base 10 but in base 2 (for reasons of performance, size, and practicality). However, whatever the base of your representation, there will always be rational numbers that cannot be expressed exactly within the fixed amount of storage a variable gets.
When you specify %.16g, you're saying that you want the shortest representation of the number given with a maximum of 16 significant digits.
If the exact decimal expansion needs more than 16 digits, printf rounds the number string to 16 significant digits, cutting off the 2 at the very end. That leaves 3122.550000000000, whose trailing zeros %g strips, giving 3122.55 as the shortest form and explaining the result you obtained.
In general, %g will always give you the shortest result possible, meaning that if the sequence of digits representing your number can be shortened without any loss of precision, it will be done.
To further the example, when you use %.17g, the 17th significant digit holds a value different from 0 (a 2 in particular), so you end up with the full number 3122.5500000000002.
"My question is: why does %.16g give the exact number while %.17g can't?"
It's actually %.17g that gives you the exact result, while %.16g gives you only a rounded approximation with an error (when compared to the value in memory).
If you want a fixed precision, use %f or %F instead.