Chunk a text database into N equal blocks and retain header

Problem Description

I have several large (30+ million line) text databases which I am cleaning up with the following code. I need to split each file into 1 million lines or less and retain the header line. I have looked at chunking and itertools but can't get a clear solution. It is for use in an ArcGIS model.

== updated code as per response from icyrock.com

import arcpy, os
#fc = arcpy.GetParameter(0)
#chunk_size = arcpy.GetParameter(1) # number of records in each dataset

fc = 'input.txt'
Name = fc[:fc.rfind('.')]
fl = Name + '_db.txt'

# Clean-up pass: drop the first three lines and fix the header field names.
with open(fc) as f:
  lines = f.readlines()
lines[:] = lines[3:]
lines[0] = lines[0].replace('Rx(db)', 'Rx_' + Name)
lines[0] = lines[0].replace('Best Unit', 'Best_Unit')
records = len(lines)
with open(fl, 'w') as f:
  # readlines() keeps each line's trailing newline, so write the lines back
  # unchanged; joining on '\n' would insert a blank line after every record.
  f.writelines(lines)

# Splitting pass: re-read the cleaned file and write it out in chunks,
# repeating the header line at the top of each part.
with open(fl) as file:
  lines = file.readlines()

headers = lines[0:1]
rest = lines[1:]
chunk_size = 1000000

def chunks(lst, chunk_size):
  for i in xrange(0, len(lst), chunk_size):
    yield lst[i:i + chunk_size]

def write_rows(rows, file):
  for row in rows:
    file.write('%s' % row)

part = 1
for chunk in chunks(rest, chunk_size):
  with open(Name + '_%d' % part + '.txt', 'w') as file:
    write_rows(headers, file)
    write_rows(chunk, file)
  part += 1

See Remove specific lines from a large text file in python and split a large text (xyz) database into x equal parts for background. I no longer want a cygwin-based solution, as it overcomplicates the model. I need a pythonic way. We can use the "records" count to iterate through, but what is not clear is how to specify lines 1 to 999,999 in db #1, lines 1,000,000 to 1,999,999 in db #2, etc. It's fine if the last dataset has fewer than 1m records.
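
For the boundary arithmetic itself: numbering the data rows from 0 (header excluded), part N holds rows (N-1) * chunk_size through N * chunk_size - 1, so the part a given row belongs to is a single floor division. A minimal sketch (part_for_row is a hypothetical helper, not from the code above):

def part_for_row(row_index, chunk_size=1000000):
  # 0-based data row -> 1-based part number
  return row_index // chunk_size + 1

assert part_for_row(0) == 1        # first row of db #1
assert part_for_row(999999) == 1   # last row of db #1
assert part_for_row(1000000) == 2  # first row of db #2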

Error with a 500 MB file (I have 16 GB RAM).

Traceback (most recent call last):
  File "P:\2012\Job_044_DM_Radio_Propogation\Working\test\clean_file.py", line 10, in <module>
    lines = f.readlines()
MemoryError

records 2249878

The record count above is not the total record count; it's just where it ran out of memory (I think).
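
The error itself comes from readlines(), which materialises every line of the file as one in-memory list; iterating the file object instead yields a single line at a time, so memory use stays flat regardless of file size. A minimal sketch of the difference ('input.txt' is a placeholder name):

# Loads the whole file into memory at once -- this is what raised MemoryError:
with open('input.txt') as f:
  lines = f.readlines()

# Streams one line at a time -- works for files of any size:
with open('input.txt') as f:
  for line in f:
    pass  # process the line here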

=== With the new code from Icyrock.

The chunking seems to work OK, but it gives errors when used in ArcGIS.

Start Time: Fri Mar 09 17:20:04 2012
WARNING 000594: Input feature 1945882430: falls outside of output geometry domains.
WARNING 000595: d:\Temp\cb_vhn007_1.txt_Features1.fid contains the full list of features not able to be copied.

I know it is an issue with the chunking, as the "Make Event Layer" process works fine with the full pre-chunk dataset.

Any ideas?

Recommended Answer

You can do something like this:

with open('file') as file:
  lines = file.readlines()

headers = lines[0:1]  # the first line is the header
rest = lines[1:]
chunk_size = 4

def chunks(lst, chunk_size):
  # Yield successive chunk_size-sized slices of lst.
  for i in xrange(0, len(lst), chunk_size):
    yield lst[i:i + chunk_size]

def write_rows(rows, file):
  for row in rows:
    file.write('%s' % row)

part = 1
for chunk in chunks(rest, chunk_size):
  with open('part%d' % part, 'w') as file:
    write_rows(headers, file)  # repeat the header at the top of every part
    write_rows(chunk, file)
  part += 1

Here's a test run:

$ cat file && python mkt.py && for p in part*; do echo ---- $p; cat $p; done
header
1
2
3
4
5
6
7
8
9
10
11
12
13
14
---- part1
header
1
2
3
4
---- part2
header
5
6
7
8
---- part3
header
9
10
11
12
---- part4
header
13
14

Obviously, change the value of chunk_size and how you fetch headers depending on their count.
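
For example, if the file carried a three-line header instead of one, the slices would become:

headers = lines[0:3]  # keep the first three lines as the header block
rest = lines[3:]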

Edit - to do this line-by-line to avoid memory issues, you can do something like this:

from itertools import islice

headers_count = 5
chunk_size = 250000

with open('file') as fin:
  # Read the header block once; islice consumes it from the file iterator.
  headers = list(islice(fin, headers_count))

  part = 1
  while True:
    line_iter = islice(fin, chunk_size)
    try:
      # Peek at the first line so no empty part is created at end-of-file.
      first_line = line_iter.next()
    except StopIteration:
      break
    with open('part%d' % part, 'w') as fout:
      for line in headers:
        fout.write(line)
      fout.write(first_line)
      for line in line_iter:
        fout.write(line)
    part += 1
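
Note that every islice call draws from the same fin iterator, so each chunk picks up exactly where the previous one stopped and the file is never held in memory as a whole. The explicit first_line fetch is what detects end-of-file: if the iterator is already exhausted, StopIteration breaks out of the loop before an empty part file is created.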

Test case (put the above in the file called mkt2.py):

Make a file containing a 5-line header and 1234567 data lines:

with open('file', 'w') as fout:
  for i in range(5):
    fout.write(10 * ('header %d ' % i) + '\n')
  for i in range(1234567):
    fout.write(10 * ('line %d ' % i) + '\n')

Shell script to test (put in file called rt.sh):

rm part*
echo ---- file
head -n7 file
tail -n2 file

python mkt2.py

for i in part*; do
  echo ---- $i
  head -n7 $i
  tail -n2 $i
done

Sample output:

$ sh rt.sh 
---- file
header 0 header 0 header 0 header 0 header 0 header 0 header 0 header 0 header 0 header 0 
header 1 header 1 header 1 header 1 header 1 header 1 header 1 header 1 header 1 header 1 
header 2 header 2 header 2 header 2 header 2 header 2 header 2 header 2 header 2 header 2 
header 3 header 3 header 3 header 3 header 3 header 3 header 3 header 3 header 3 header 3 
header 4 header 4 header 4 header 4 header 4 header 4 header 4 header 4 header 4 header 4 
line 0 line 0 line 0 line 0 line 0 line 0 line 0 line 0 line 0 line 0 
line 1 line 1 line 1 line 1 line 1 line 1 line 1 line 1 line 1 line 1 
line 1234565 line 1234565 line 1234565 line 1234565 line 1234565 line 1234565 line 1234565 line 1234565 line 1234565 line 1234565 
line 1234566 line 1234566 line 1234566 line 1234566 line 1234566 line 1234566 line 1234566 line 1234566 line 1234566 line 1234566 
---- part1
header 0 header 0 header 0 header 0 header 0 header 0 header 0 header 0 header 0 header 0 
header 1 header 1 header 1 header 1 header 1 header 1 header 1 header 1 header 1 header 1 
header 2 header 2 header 2 header 2 header 2 header 2 header 2 header 2 header 2 header 2 
header 3 header 3 header 3 header 3 header 3 header 3 header 3 header 3 header 3 header 3 
header 4 header 4 header 4 header 4 header 4 header 4 header 4 header 4 header 4 header 4 
line 0 line 0 line 0 line 0 line 0 line 0 line 0 line 0 line 0 line 0 
line 1 line 1 line 1 line 1 line 1 line 1 line 1 line 1 line 1 line 1 
line 249998 line 249998 line 249998 line 249998 line 249998 line 249998 line 249998 line 249998 line 249998 line 249998 
line 249999 line 249999 line 249999 line 249999 line 249999 line 249999 line 249999 line 249999 line 249999 line 249999 
---- part2
header 0 header 0 header 0 header 0 header 0 header 0 header 0 header 0 header 0 header 0 
header 1 header 1 header 1 header 1 header 1 header 1 header 1 header 1 header 1 header 1 
header 2 header 2 header 2 header 2 header 2 header 2 header 2 header 2 header 2 header 2 
header 3 header 3 header 3 header 3 header 3 header 3 header 3 header 3 header 3 header 3 
header 4 header 4 header 4 header 4 header 4 header 4 header 4 header 4 header 4 header 4 
line 250000 line 250000 line 250000 line 250000 line 250000 line 250000 line 250000 line 250000 line 250000 line 250000 
line 250001 line 250001 line 250001 line 250001 line 250001 line 250001 line 250001 line 250001 line 250001 line 250001 
line 499998 line 499998 line 499998 line 499998 line 499998 line 499998 line 499998 line 499998 line 499998 line 499998 
line 499999 line 499999 line 499999 line 499999 line 499999 line 499999 line 499999 line 499999 line 499999 line 499999 
---- part3
header 0 header 0 header 0 header 0 header 0 header 0 header 0 header 0 header 0 header 0 
header 1 header 1 header 1 header 1 header 1 header 1 header 1 header 1 header 1 header 1 
header 2 header 2 header 2 header 2 header 2 header 2 header 2 header 2 header 2 header 2 
header 3 header 3 header 3 header 3 header 3 header 3 header 3 header 3 header 3 header 3 
header 4 header 4 header 4 header 4 header 4 header 4 header 4 header 4 header 4 header 4 
line 500000 line 500000 line 500000 line 500000 line 500000 line 500000 line 500000 line 500000 line 500000 line 500000 
line 500001 line 500001 line 500001 line 500001 line 500001 line 500001 line 500001 line 500001 line 500001 line 500001 
line 749998 line 749998 line 749998 line 749998 line 749998 line 749998 line 749998 line 749998 line 749998 line 749998 
line 749999 line 749999 line 749999 line 749999 line 749999 line 749999 line 749999 line 749999 line 749999 line 749999 
---- part4
header 0 header 0 header 0 header 0 header 0 header 0 header 0 header 0 header 0 header 0 
header 1 header 1 header 1 header 1 header 1 header 1 header 1 header 1 header 1 header 1 
header 2 header 2 header 2 header 2 header 2 header 2 header 2 header 2 header 2 header 2 
header 3 header 3 header 3 header 3 header 3 header 3 header 3 header 3 header 3 header 3 
header 4 header 4 header 4 header 4 header 4 header 4 header 4 header 4 header 4 header 4 
line 750000 line 750000 line 750000 line 750000 line 750000 line 750000 line 750000 line 750000 line 750000 line 750000 
line 750001 line 750001 line 750001 line 750001 line 750001 line 750001 line 750001 line 750001 line 750001 line 750001 
line 999998 line 999998 line 999998 line 999998 line 999998 line 999998 line 999998 line 999998 line 999998 line 999998 
line 999999 line 999999 line 999999 line 999999 line 999999 line 999999 line 999999 line 999999 line 999999 line 999999 
---- part5
header 0 header 0 header 0 header 0 header 0 header 0 header 0 header 0 header 0 header 0 
header 1 header 1 header 1 header 1 header 1 header 1 header 1 header 1 header 1 header 1 
header 2 header 2 header 2 header 2 header 2 header 2 header 2 header 2 header 2 header 2 
header 3 header 3 header 3 header 3 header 3 header 3 header 3 header 3 header 3 header 3 
header 4 header 4 header 4 header 4 header 4 header 4 header 4 header 4 header 4 header 4 
line 1000000 line 1000000 line 1000000 line 1000000 line 1000000 line 1000000 line 1000000 line 1000000 line 1000000 line 1000000 
line 1000001 line 1000001 line 1000001 line 1000001 line 1000001 line 1000001 line 1000001 line 1000001 line 1000001 line 1000001 
line 1234565 line 1234565 line 1234565 line 1234565 line 1234565 line 1234565 line 1234565 line 1234565 line 1234565 line 1234565 
line 1234566 line 1234566 line 1234566 line 1234566 line 1234566 line 1234566 line 1234566 line 1234566 line 1234566 line 1234566 

Timing of the above:

real    0m0.935s
user    0m0.708s
sys     0m0.200s

Hope this helps.
