C#, Fastest (Best?) Method of Identifying Duplicate Files in an Array of Directories
Question

I want to recurse through several directories and find the duplicate files among the n directories.
My knee-jerk idea is to have a global hashtable or some other data structure that holds each file I find, then check each subsequent file against that "master" list. Obviously, I don't think this would be very efficient, and "there's got to be a better way!" keeps ringing in my brain.
Any advice on a better way to handle this situation would be appreciated.
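For reference, the hashtable idea described above can be sketched as follows. This is a minimal illustration, not the accepted answer's code: it keys a dictionary by a SHA-256 content hash, and it assumes .NET 5+ for `Convert.ToHexString`; the names (`FindDuplicates`, `seen`) are made up for the example.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Security.Cryptography;

class DuplicateFinder
{
    // Maps content hash -> all file paths whose contents produced that hash.
    // Entries with more than one path are duplicates.
    static Dictionary<string, List<string>> FindDuplicates(IEnumerable<string> directories)
    {
        var seen = new Dictionary<string, List<string>>();
        using var sha = SHA256.Create();
        foreach (var dir in directories)
        {
            foreach (var path in Directory.EnumerateFiles(dir, "*", SearchOption.AllDirectories))
            {
                using var stream = File.OpenRead(path);
                var key = Convert.ToHexString(sha.ComputeHash(stream));
                if (!seen.TryGetValue(key, out var paths))
                    seen[key] = paths = new List<string>();
                paths.Add(path);
            }
        }
        return seen;
    }
}
```

Note that hashing every file is itself O(total bytes); a common refinement is to group by file size first and only hash files whose sizes collide.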
Answer

You can use LINQ:
http://www.hosca.com/blog/post/2010/04/13/using-LINQ-to-detect-and-remove-duplicate-files
(Link updated.)
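In case the link rots, here is a sketch of that kind of LINQ approach, not the linked post's exact code: group candidate files by size as a cheap pre-filter, then confirm duplicates by content hash. It assumes .NET 5+ for `SHA256.HashData` and `Convert.ToHexString`, and `directories` is an `IEnumerable<string>` of root paths.

```csharp
using System;
using System.IO;
using System.Linq;
using System.Security.Cryptography;

// Files that share a length AND a SHA-256 hash are treated as duplicates.
var duplicateGroups = directories
    .SelectMany(dir => Directory.EnumerateFiles(dir, "*", SearchOption.AllDirectories))
    .GroupBy(path => new FileInfo(path).Length)   // cheap pre-filter: only equal sizes can match
    .Where(g => g.Count() > 1)
    .SelectMany(g => g)
    .GroupBy(path => Convert.ToHexString(
        SHA256.HashData(File.ReadAllBytes(path)))) // confirm by content hash
    .Where(g => g.Count() > 1);

foreach (var group in duplicateGroups)
    Console.WriteLine(string.Join(Environment.NewLine, group) + Environment.NewLine);
```

`File.ReadAllBytes` loads each candidate fully into memory; for very large files, hashing a `FileStream` with `sha.ComputeHash(stream)` avoids that.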