Conditional splitting of a HUGE file

Problem Description

I have a really huge file (more than 500 million lines) that I want to split into several smaller files according to the first 3 characters of one of its columns.

It looks like this, where each element of columns 1 and 2 is unique:

A0A023GPI8  A0A023GPI8.1    232300  1027923628
A0A023GPJ0  A0A023GPJ0.2    716541  765680613
A0A023PXA5  A0A023PXA5.1    559292  728048729
A0A023PXB0  A0A023PXB0.1    559292  728048786
A0A023PXB5  A0A023PXB5.1    559292  728048524
A0A023PXB9  A0A023PXB9.1    559292  728048769
A0A023PXC2  A0A023PXC2.1    559292  728050382

I used the following script, thinking it would be quite fast because it seemed to involve only a single pass over the whole file. However, it has been running for several days and is far from finished. Any idea why, and any solutions to propose?

while IFS= read -r line
do
    # three processes (printf + two cuts) are forked for every input line
    PREFIX=$(printf '%s\n' "$line" | cut -f2 | cut -c1-3)
    printf '%s\n' "$line" >> "../split_DB/$PREFIX.part"
done < "$file"

Recommended Answer

It could be as simple as this:

$ awk '{s=substr($2,1,3); print >> s}' file

The >> redirects the print so that it appends to a file with the given name. The name is formed from the first 3 characters of the second column.
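
For instance, with the seven sample rows above, every second-column value starts with A0A, so a single output file named A0A appears in the current directory:

$ awk '{s=substr($2,1,3); print >> s}' file
$ head -2 A0A
A0A023GPI8  A0A023GPI8.1    232300  1027923628
A0A023GPJ0  A0A023GPJ0.2    716541  765680613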

This will be monumentally faster than the Bash loop, which forks several new processes for every single line of the file.

Usually an OS has a limit on the number of simultaneously open files. This may be an issue depending on how many character combinations can occur in the first 3 characters of the second column, and it will affect any solution that keeps the files of those names open while the given file is processed -- not just awk.
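
As a quick sketch (the exact numbers vary by system), you can inspect the current per-process limit and, within the hard limit, raise it for the current shell with ulimit:

$ ulimit -n          # soft limit on open file descriptors (often 1024)
1024
$ ulimit -Hn         # hard limit the soft limit may be raised to
4096
$ ulimit -n 4096     # raise the soft limit for this shell session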

If you have 000 to 999, that is 1,000 potential files open; if you have AAA to ZZZ, that is 17,576; if you have three alphanumeric characters with upper and lower case, that is 238,328 potential open files... If your data has only a few unique prefixes, you may not need to worry about this; if you state the details of the data, the solutions suggested here may differ.

(You can calculate the number of potential combinations by raising the size of the alphabet allowed in the 3 characters to the third power: ('0'..'9','A'..'Z') is base 36, ('0'..'9','a'..'z','A'..'Z') is base 62, and so on.)
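
Those counts are easy to verify with shell arithmetic:

$ echo $((10**3)) $((26**3)) $((62**3))
1000 17576 238328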

You can raise the limit on most Unix-style OSs if need be (within reason), or open and close files as needed, but raising the file limit to 238,328 would be impractical. You could also sort the data and close each output file as its prefix goes out of use, as sketched below.
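
A minimal sketch of that sort-then-close idea, assuming the standard sort and awk utilities; because the input arrives sorted on column 2, at most one output file is open at any time:

sort -k2,2 file |
awk '{
    s = substr($2, 1, 3)
    # when the prefix changes, the previous file is finished: close it
    if (s != prev) { if (prev != "") close(prev); prev = s }
    print >> s
}'

The one-time cost of the sort is the trade-off for never coming near the open-file limit.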
