C#, Fastest (Best?) Method of Identifying Duplicate Files in an Array of Directories
Question
I want to recurse several directories and find duplicate files between the n number of directories.
My knee-jerk idea at this is to have a global hashtable or some other data structure to hold each file I find; then check each subsequent file to determine if it's in the "master" list of files. Obviously, I don't think this would be very efficient and the "there's got to be a better way!" keeps ringing in my brain.
Any advice on a better way to handle this situation would be appreciated.
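For reference, the "master list" idea works reasonably well if the table is keyed by file *content* (a hash) rather than by name. Below is a minimal sketch of that approach; the `MasterList` class name, the choice of MD5, and the dictionary shape are my own assumptions, not anything specified in the question:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Security.Cryptography;

static class MasterList
{
    // Hash a file's contents; two files are treated as duplicates
    // exactly when their content hashes match.
    static string HashFile(string path)
    {
        using (var md5 = MD5.Create())
        using (var stream = File.OpenRead(path))
            return BitConverter.ToString(md5.ComputeHash(stream));
    }

    // Walk every directory recursively, recording each content hash as it
    // is seen; any hash that ends up with more than one path is a duplicate set.
    public static Dictionary<string, List<string>> Scan(IEnumerable<string> directories)
    {
        var seen = new Dictionary<string, List<string>>();
        foreach (var dir in directories)
        {
            foreach (var path in Directory.EnumerateFiles(dir, "*", SearchOption.AllDirectories))
            {
                var key = HashFile(path);
                if (!seen.TryGetValue(key, out var paths))
                    seen[key] = paths = new List<string>();
                paths.Add(path);
            }
        }
        return seen;
    }
}
```

The main inefficiency here is that every file gets hashed, even files whose sizes already prove they can have no duplicate; the answer below avoids that.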
Answer
You can use LINQ:
http://www.hosca.com/blog/post/2010/04/13/using-LINQ-to-detect-and-remove-duplicate-files.aspx
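In case the link goes dead, here is a sketch of a LINQ pipeline in the same spirit; the `FindDuplicates` name and the two-stage size-then-hash grouping are my own framing, not necessarily identical to the linked post:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Security.Cryptography;

static class DuplicateFinder
{
    static string HashFile(string path)
    {
        using (var md5 = MD5.Create())
        using (var stream = File.OpenRead(path))
            return BitConverter.ToString(md5.ComputeHash(stream));
    }

    // Group by file length first, so only files that share a size are ever
    // fully read; then confirm real duplicates by hashing their contents.
    public static IEnumerable<IGrouping<string, string>> FindDuplicates(IEnumerable<string> directories)
    {
        return directories
            .SelectMany(d => Directory.EnumerateFiles(d, "*", SearchOption.AllDirectories))
            .GroupBy(p => new FileInfo(p).Length)   // cheap pre-filter
            .Where(g => g.Count() > 1)
            .SelectMany(g => g.GroupBy(HashFile))   // expensive confirmation
            .Where(g => g.Count() > 1);
    }
}
```

Each group in the result is one set of identical files. Grouping by size first addresses the efficiency worry in the question: a file whose length is unique across all directories is never hashed at all.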