Why are the bit strings representing 1.0 and 2.0 so different?
Question
I recently started using Julia and I came upon the bits function, which returns the bit-string representation of its numeric argument. For example:
julia> bits(1.0)
"0011111111110000000000000000000000000000000000000000000000000000"
However, while playing with this function, I was surprised to discover that bits returns very different bit strings for 1.0 and 2.0:
julia> bits(1.0)
"0011111111110000000000000000000000000000000000000000000000000000"
julia> bits(2.0)
"0100000000000000000000000000000000000000000000000000000000000000"
I would have expected those two values to be similar...
What is the meaning of those bits? I vaguely recall something about bits encoding the exponent (from my numerical-analysis class), but I really do not remember it well and I did not manage to find a good description online...
Answer

To understand why the ASCIIString values of bits(1.0) and bits(2.0) are "so different", you need to know a bit (!) about IEEE-754 (binary) floating-point numbers. Each such double-precision number is stored as a 64-bit word, broken down into three parts (pulled apart in the REPL sketch after this list):
- the sign bit (0 for nonnegative numbers, 1 for nonpositive numbers), followed by
- the biased exponent (11 bits), followed by
- the significand (52 bits).
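One way to see those three fields is simply to slice the string that bits returns; the field boundaries below follow the standard IEEE-754 binary64 layout (a small REPL sketch; on Julia 0.7 and later the function is called bitstring):
julia> s = bits(1.0)
"0011111111110000000000000000000000000000000000000000000000000000"
julia> s[1:1], s[2:12], s[13:64]   # sign, biased exponent, significand
("0", "01111111111", "0000000000000000000000000000000000000000000000000000")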
The value of a normalized number (such as 1.0 or 2.0) can be obtained by using the following formula:
(-1)^sign_bit x 1.significand x 2^(biased_exponent - bias)
(For double-precision floating-point numbers, the bias has a value of 2^10 - 1 = 1023)
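Base Julia also exposes these pieces directly through the significand and exponent functions, which makes the formula easy to check (a sketch; these are standard Base functions, not part of the original answer, and exponent returns the unbiased exponent):
julia> significand(2.0), exponent(2.0)   # 2.0 = +1.0 x 2^1
(1.0, 1)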
Now,
1.0 = +1.000... x 2^(1023 - bias)
and 1023 corresponds to (0)1111111111 in base 2, so the corresponding bit string is
0 01111111111 0000000000000000000000000000000000000000000000000000
2.0 = +1.000... x 2^(1024 - bias)
and 1024 corresponds to 10000000000 in base 2, so the corresponding bit string is
0 10000000000 0000000000000000000000000000000000000000000000000000
3.0 = +1.100... x 2^(1024 - bias)
so the corresponding bit string is
0 10000000000 1000000000000000000000000000000000000000000000000000
etc.
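The same fields can be read off numerically by reinterpreting the 64-bit word as an unsigned integer and masking out the 11 exponent bits (a sketch using Base's reinterpret; the shift/mask arithmetic is mine, not part of the original answer):
julia> u = reinterpret(UInt64, 2.0)
0x4000000000000000
julia> Int((u >> 52) & 0x7ff)          # biased exponent of 2.0
1024
julia> Int((u >> 52) & 0x7ff) - 1023   # subtract the bias: 2.0 = +1.0 x 2^1
1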
In summary, you can obtain the bit string of 2.0 by incrementing the biased-exponent field in the bit string of 1.0. That field holds 1023 = 2^10 - 1, one less than a power of two, and incrementing such a number flips every one of its bits, in the same way that incrementing 9999 (in decimal representation) causes all of its digits to change.
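You can watch that carry ripple through the word directly: since the exponent field starts at bit 52, adding 1 to it is the same as adding 2^52 to the raw integer representation (again a sketch, not part of the original answer):
julia> u = reinterpret(UInt64, 1.0)
0x3ff0000000000000
julia> reinterpret(Float64, u + (UInt64(1) << 52))   # bump the exponent field by one
2.0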