Divide very large file into small ones following pattern (bash)
Problem description
I have been working on this problem with only little success, so I am coming here to get some fresh advice.
I am trying to extract the data of every scan into separate files.
The problem is that after 3196 files have been created, I receive the error message: awk "makes too many open files".
I understand that I need to close the files created by awk but I don't know how to do that.
The text input file looks like this (up to 80,000 scans):
Scan 1
11111 111
22222 221
...
Scan 2
11122 111
11122 111
...
Scan 3
11522 141
19922 141
...
For now I have been doing:
awk '/.*Scan.*/{n++}{print >"filescan" n }' inputfile
This gives me an incrementally numbered output file for every scan, and it crashes after 3196 files have been created.
cat filescan1
Scan 1
11111 111
22222 221
...
Any ideas? Thx.
Answer
You need to close the output files, as awk keeps their file handles open.
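Every file opened with print stays open until it is closed explicitly, so the script aborts once the per-process limit on open file descriptors is reached; the 3196 figure presumably reflects whatever that limit is on your system. On most Unix-like systems you can check it with:

ulimit -n    # maximum number of open file descriptors per process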
awk '/Scan/ {
    close(file)    # close the previous scan's file (harmless on the first match, when nothing is open yet)
    n++            # move on to the next file number
}
{
    file = "filescan" n
    print >> file  # ">>" appends, so reopening the file after close() does not truncate it
}' inputfile
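If it helps, a quick sanity check after the run (these commands assume the filescan prefix used above; the exact counts depend on your input):

ls | grep -c '^filescan'    # should report one output file per scan
head -3 filescan2           # each file should start with its own "Scan N" line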