Comparing the MD5 results of split files against the MD5 of the whole


Problem description


I have a situation where I have one VERY large file that I'm using the linux "split" command to break into smaller parts. Later I use the linux "cat" command to bring the parts all back together again.


In the interim, however, I'm curious...


If I get an MD5 fingerprint on the large file before splitting it, then later get the MD5 fingerprints on all the independent file parts that result from the split command, is there a way to take the independent fingerprints and somehow deduce that the sum or average (or whatever you like to call it) of their parts is equal to the fingerprint of the single large file?


By (very) loose example...

bigoldfile.txt MD5 = 737da789
smallfile1.txt MD5 = 23489a89
smallfile2.txt MD5 = 1238g89d
smallfile3.txt MD5 = 01234cd7


someoperator(23489a89,1238g89d,01234cd7) = 737da789 (the fingerprint of the original file)

Answer


You likely can't do that. MD5 processes input as a chain of fixed-size blocks, where each step depends on both the block's data and the running ("initial") hash state, and each part's final digest also folds in that part's own length padding, so the independent part digests cannot be recombined into the whole-file digest.
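A quick way to see why no simple combiner can exist (the byte strings below are illustrative stand-ins for file parts): swapping the order of the parts leaves every per-part digest unchanged, yet it changes the digest of the concatenation. So in particular no order-insensitive operation over the part digests, such as the sum or average the question mentions, can reproduce the whole-file fingerprint.

```python
import hashlib

# Two illustrative "file parts".
a, b = b"part one", b"part two"

# The set of per-part digests is identical regardless of part order...
part_digests = {hashlib.md5(a).hexdigest(), hashlib.md5(b).hexdigest()}

# ...but the digest of the concatenation depends on that order, so an
# order-insensitive function of the part digests cannot determine it.
forward = hashlib.md5(a + b).hexdigest()
reverse = hashlib.md5(b + a).hexdigest()
assert forward != reverse
print(forward)
print(reverse)
```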


You could instead generate "incremental" hashes - hash of first part, hash of first plus second part, etc.
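That incremental approach maps directly onto `hashlib`'s streaming interface in Python: feeding each part, in order, into a single `md5` object reproduces the digest of the whole file. A minimal sketch, with in-memory byte strings standing in for the split files:

```python
import hashlib

# Illustrative parts; in practice these would be the split files, read
# in the order the split command produced them.
parts = [b"first chunk of data ", b"second chunk ", b"third chunk"]
whole = b"".join(parts)

# Digest of the whole file in one shot.
whole_md5 = hashlib.md5(whole).hexdigest()

# Streaming digest: update() a single md5 object with each part in turn.
h = hashlib.md5()
for part in parts:
    h.update(part)
streamed_md5 = h.hexdigest()

# Successive update() calls are equivalent to hashing the concatenation,
# so the streamed digest matches the whole-file digest. Per-part digests,
# by contrast, are finalized (padded) and cannot be combined afterwards.
assert streamed_md5 == whole_md5
print(streamed_md5)
```

So rather than combining finished part fingerprints, you keep one running hash object and feed the parts through it in order; its final `hexdigest()` equals what `md5sum` reports for the original file.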
