Java SHA-1 vs. JavaScript SHA-1 give different results
Question
I am a little bit confused. I want to get the bytes of a String, which is hashed with SHA-1.
JavaScript:
// Helpers must be defined before they are used, or the calls below throw a TypeError.
String.prototype.getBytes = function () {
    var bytes = [];
    for (var i = 0; i < this.length; i++) {
        bytes.push(this.charCodeAt(i));
    }
    return bytes;
};

Array.prototype.toString = function () {
    var result = "";
    for (var i = 0; i < this.length; i++) {
        result += this[i].toString(); // concatenates decimal char codes with no separator
    }
    return result;
};

var content = "somestring";
console.warn(content.getBytes().toString());
console.warn(CryptoJS.SHA1(content.getBytes().toString()).toString().getBytes());
which gives me
115111109101115116114105110103
[52, 99, 97, 54, 48, 56, 99, 51, 53, 54, 102, 54, 48, 53, 50, 49, 99, 51, 49, 51, 49, 100, 49, 97, 54, 55, 57, 55, 56, 55, 98, 52, 52, 52, 99, 55, 57, 102, 54, 101]
Java:
String message = "somestring";
byte[] sha1 = MessageDigest.getInstance("SHA1").digest(message.getBytes());
System.out.println(Arrays.toString(message.getBytes()));
System.out.println(Arrays.toString(sha1));
System.out.println(new String(sha1));
which gives me
[115, 111, 109, 101, 115, 116, 114, 105, 110, 103]
[-38, 99, -5, 105, -82, -80, 60, 119, 107, -46, 62, -111, -30, -63, -53, 61, -13, 1, 53, -45]
Úcûi®°<wkÒ>‘âÁË=ó5Ó
The first output is the same in JavaScript and Java, but the second is different. Why, and how is a checksum like Úcûi®°<wkÒ>‘âÁË=ó5Ó even possible?
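A sketch of what is going on, based on the outputs above: the JavaScript code never hashes `"somestring"` itself. `content.getBytes().toString()` concatenates the decimal char codes into `"115111109101115116114105110103"`, and that string is what CryptoJS receives, so the two sides digest different inputs. The garbled `Úcûi®°<wkÒ>‘âÁË=ó5Ó` is simply the 20 raw digest bytes forced through `new String(...)` in the platform charset; most of them are not printable characters. Hashing both inputs side by side shows the mismatch (the hex strings in the comments are transcribed from the byte arrays printed above):

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class Sha1Inputs {
    // SHA-1 of a string's UTF-8 bytes, rendered as 40 lowercase hex digits.
    static String sha1Hex(String s) throws Exception {
        byte[] d = MessageDigest.getInstance("SHA-1")
                                .digest(s.getBytes(StandardCharsets.UTF_8));
        return String.format("%040x", new BigInteger(1, d)); // zero-padded hex
    }

    public static void main(String[] args) throws Exception {
        // What the Java snippet hashes: the string itself.
        // Matches [-38, 99, -5, ...] above: da63fb69aeb03c776bd23e91e2c1cb3df30135d3
        System.out.println(sha1Hex("somestring"));

        // What the JavaScript snippet actually hashes: the concatenated char codes.
        // Matches the char codes [52, 99, 97, ...] above:
        // 4ca608c356f60521c3131d1a679787b444c79f6e
        System.out.println(sha1Hex("115111109101115116114105110103"));
    }
}
```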
Here's the solution:
Javascript:
var key = 'testKey';
var hashedKey = CryptoJS.SHA1(key);
console.log(hashedKey);
Output: 2420e186fcdb8d0ea08d82fdfbfb8722d6cbf606
Java:
String password = "testKey";
final MessageDigest md = MessageDigest.getInstance("SHA1");
ByteArrayOutputStream pwsalt = new ByteArrayOutputStream();
pwsalt.write(password.getBytes("UTF-8"));
byte[] unhashedBytes = pwsalt.toByteArray();
byte[] digestVonPassword = md.digest(unhashedBytes);
System.out.println(bytesToHex(digestVonPassword));
Output: 2420E186FCDB8D0EA08D82FDFBFB8722D6CBF606
Apart from uppercase vs. lowercase, the output is the same. It's hex-encoded, by the way.
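The answer calls a `bytesToHex` helper without showing it. A minimal sketch, with the name and signature assumed from the call above, could look like this (`%02X` relies on `java.util.Formatter` treating negative `Byte` values at byte width):

```java
public class HexUtil {
    // Hypothetical implementation of the bytesToHex helper used in the answer.
    static String bytesToHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder(bytes.length * 2);
        for (byte b : bytes) {
            // %02X formats each byte as two uppercase hex digits,
            // matching the uppercase output shown above.
            sb.append(String.format("%02X", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // First three digest bytes from the question: -38, 99, -5 -> DA63FB
        System.out.println(bytesToHex(new byte[] { -38, 99, -5 }));
    }
}
```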