Removing duplicated lines from a txt file
Problem description
I am processing large text files (~20MB) containing data delimited by line. Most data entries are duplicated and I want to remove these duplications to only keep one copy.
Also, to make the problem slightly more complicated, some entries are repeated with an extra bit of info appended. In this case I need to keep the entry containing the extra info and delete the older versions.
e.g. I need to go from this:

BOB 123 1DB
JIM 456 3DB AX
DAVE 789 1DB
BOB 123 1DB
JIM 456 3DB AX
DAVE 789 1DB
BOB 123 1DB EXTRA BITS

to this:

JIM 456 3DB AX
DAVE 789 1DB
BOB 123 1DB EXTRA BITS

NB. the final order doesn't matter.
What is an efficient way to do this?
I can use awk, Python, or any standard Linux command-line tool.
Thanks.
How about the following (in Python):
prev = None
for line in sorted(open('file')):
    line = line.strip()
    # After sorting, an entry with extra bits appended sorts immediately
    # after its shorter version, so prev is dropped whenever the current
    # line repeats or extends it.
    if prev is not None and not line.startswith(prev):
        print(prev)
    prev = line
if prev is not None:
    print(prev)
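For instance, run against the sample data above (saved as 'file', the filename the snippet assumes), sorting places 'BOB 123 1DB' directly before 'BOB 123 1DB EXTRA BITS', so the shorter duplicates are skipped and the output is:

BOB 123 1DB EXTRA BITS
DAVE 789 1DB
JIM 456 3DB AX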
If you find memory usage an issue, you can do the sort as a pre-processing step using Unix sort (which is disk-based: http://vkundeti.blogspot.com/2008/03/tech-algorithmic-details-of-unix-sort.html) and change the script so that it doesn't read the entire file into memory.
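A minimal sketch of that streaming variant, assuming the pre-sorted lines arrive on stdin (e.g. sort file | python dedup.py, where dedup.py is just an illustrative name):

import sys

prev = None
for line in sys.stdin:  # input must already be sorted, e.g. by Unix sort
    line = line.rstrip('\n')
    # Same comparison as above: drop prev when the current line repeats
    # or extends it. Only one line is held in memory at a time.
    if prev is not None and not line.startswith(prev):
        print(prev)
    prev = line
if prev is not None:
    print(prev)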