How do I convert an integer to binary in JavaScript?
Problem description
I’d like to see integers, positive or negative, in binary.
Rather like this question, but for JavaScript.
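For context: the widely used approach for values that fit in 32 bits is a zero-fill right shift followed by toString(2), which is exactly what the answer below falls back to for small inputs. A minimal sketch, not part of the original answer (the toBinary32 name is just for illustration):

// 32-bit two's complement view of an integer (only valid for |n| < 2**31)
function toBinary32(n) {
  // >>> 0 reinterprets the number as an unsigned 32-bit integer
  return (n >>> 0).toString(2)
}

console.log(toBinary32(8))   // "1000"
console.log(toBinary32(-8))  // 28 ones followed by "1000"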
This answer attempts to address integers with absolute values between Number.MAX_SAFE_INTEGER (or 2**53-1) and 2**31. The existing solutions only address signed integers within 32 bits, but this solution outputs 64-bit two's complement form using float64ToInt64Binary():
// IIFE to scope internal variables
var float64ToInt64Binary = (function () {
  // create a "union" of a 64-bit float and four 16-bit unsigned words
  var flt64 = new Float64Array(1)
  var uint16 = new Uint16Array(flt64.buffer)
  // 2**53 - 1
  var MAX_SAFE = 9007199254740991
  // 2**31
  var MAX_INT32 = 2147483648

  function uint16ToBinary() {
    var bin64 = ''
    // generate padded binary string a word at a time
    for (var word = 0; word < 4; word++) {
      bin64 = uint16[word].toString(2).padStart(16, '0') + bin64
    }
    return bin64
  }

  return function float64ToInt64Binary(number) {
    // negated comparison so that NaN also triggers the RangeError
    if (!(Math.abs(number) <= MAX_SAFE)) {
      throw new RangeError('Absolute value must be less than 2**53')
    }
    var sign = number < 0 ? 1 : 0
    // shortcut using the 32-bit answer for a sufficiently small range
    if (Math.abs(number) <= MAX_INT32) {
      return (number >>> 0).toString(2).padStart(64, String(sign))
    }
    // little endian byte ordering
    flt64[0] = number
    // remove the bias from the exponent bits; 1022 rather than 1023 so the
    // result counts the integer digits including the implicit leading bit
    var exponent = ((uint16[3] & 0x7FF0) >> 4) - 1022
    // encode implicit leading bit of mantissa
    uint16[3] |= 0x10
    // clear exponent and sign bit
    uint16[3] &= 0x1F
    // check sign bit
    if (sign === 1) {
      // apply two's complement
      uint16[0] ^= 0xFFFF
      uint16[1] ^= 0xFFFF
      uint16[2] ^= 0xFFFF
      uint16[3] ^= 0xFFFF
      // propagate carry bit
      for (var word = 0; word < 3 && uint16[word] === 0xFFFF; word++) {
        // apply integer overflow
        uint16[word] = 0
      }
      // complete increment
      uint16[word]++
    }
    // only keep the integer part of the mantissa
    var bin64 = uint16ToBinary().substr(11, Math.max(exponent, 0))
    // sign-extend the binary string
    return bin64.padStart(64, String(sign))
  }
})()
console.log('8')
console.log(float64ToInt64Binary(8))
console.log('-8')
console.log(float64ToInt64Binary(-8))
console.log('2**33-1')
console.log(float64ToInt64Binary(2**33-1))
console.log('-(2**33-1)')
console.log(float64ToInt64Binary(-(2**33-1)))
console.log('2**53-1')
console.log(float64ToInt64Binary(2**53-1))
console.log('-(2**53-1)')
console.log(float64ToInt64Binary(-(2**53-1)))
console.log('2**52')
console.log(float64ToInt64Binary(2**52))
console.log('-(2**52)')
console.log(float64ToInt64Binary(-(2**52)))
console.log('2**52+1')
console.log(float64ToInt64Binary(2**52+1))
console.log('-(2**52+1)')
console.log(float64ToInt64Binary(-(2**52+1)))
This answer heavily deals with the IEEE-754 Double-precision floating-point format, illustrated here:
seee eeee eeee ffff ffff ffff ffff ffff ffff ffff ffff ffff ffff ffff ffff ffff
---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ----
[ uint16[3] ] [ uint16[2] ] [ uint16[1] ] [ uint16[0] ]
[ flt64[0] ]
little endian byte ordering
s = sign = uint16[3] >> 15
e = exponent = (uint16[3] & 0x7FF0) >> 4
f = fraction
The solution works by creating a union between a 64-bit floating-point number and an unsigned 16-bit integer array in little-endian byte order. After validating the integer input range, it stores the input as a double-precision floating-point number in the shared buffer, then uses the integer view to gain bit-level access to the value and builds the binary string from the unbiased binary exponent and fraction bits.
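To make the union idea concrete, here is a minimal sketch (not part of the original answer) that reads the sign and unbiased exponent of a value through the same Float64Array/Uint16Array pair, assuming little-endian byte order as on typical JavaScript engines, so uint16[3] is the high word:

var f64 = new Float64Array(1)
var u16 = new Uint16Array(f64.buffer)

f64[0] = -8  // 8 = 1.0 * 2**3, so the unbiased exponent is 3

var sign = u16[3] >> 15                         // 1 (negative)
var exponent = ((u16[3] & 0x7FF0) >> 4) - 1023  // 3

console.log(sign, exponent)  // 1 3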
The solution is implemented in pure ECMAScript 5 except for the use of String#padStart(), for which a polyfill is available here.
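If the target environment lacks String#padStart(), a minimal ES5 sketch of such a polyfill (not the one linked above; real polyfills handle more edge cases) could look like this:

if (!String.prototype.padStart) {
  String.prototype.padStart = function (targetLength, padString) {
    targetLength = targetLength >> 0
    padString = String(padString !== undefined ? padString : ' ')
    if (this.length >= targetLength || padString === '') {
      return String(this)
    }
    var pad = ''
    // repeat the pad string until it is long enough, then trim
    while (pad.length + this.length < targetLength) {
      pad += padString
    }
    return pad.slice(0, targetLength - this.length) + String(this)
  }
}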