How to preprocess and load a "big data" tsv file into a python dataframe?


Problem description

I am currently trying to import the following large tab-delimited file into a dataframe-like structure within Python---naturally I am using pandas dataframe, though I am open to other options.

This file is several GB in size, and is not a standard tsv file---it is broken, i.e. the rows have a different number of columns. One row may have 25 columns, another has 21.

Here is a sample of the data:

Col_01: 14 .... Col_20: 25    Col_21: 23432    Col_22: 639142
Col_01: 8  .... Col_20: 25    Col_22: 25134    Col_23: 243344
Col_01: 17 .... Col_21: 75    Col_23: 79876    Col_25: 634534    Col_22: 5    Col_24: 73453
Col_01: 19 .... Col_20: 25    Col_21: 32425    Col_23: 989423
Col_01: 12 .... Col_20: 25    Col_21: 23424    Col_22: 342421    Col_23: 7    Col_24: 13424    Col_25: 67
Col_01: 3  .... Col_20: 95    Col_21: 32121    Col_25: 111231

As you can see, some of these columns are not in the correct order...

Now, I think the correct way to import this file into a dataframe is to preprocess the data such that you can output a dataframe with NaN values, e.g.

Col_01 .... Col_20    Col_21    Col_22   Col_23   Col_24   Col_25
8      .... 25        NaN       25134    243344   NaN      NaN
17     .... NaN       75        5        79876    73453    634534
19     .... 25        32425     NaN      989423   NaN      NaN
12     .... 25        23424     342421   7        13424    67
3      .... 95        32121     NaN      NaN      NaN      111231
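
For what it's worth, pandas will produce exactly this kind of NaN-filled frame once each row has been parsed into a dict of the key/value pairs it actually contains: keys missing from a row simply come out as NaN. Below is a minimal sketch of that idea; the file name data.tsv is an assumption, and the regex assumes keys contain no whitespace, as in the sample above. It ignores the memory problem discussed next.

import re
import pandas as pd

def parse_line(line):
    # 'Col_01: 14\tCol_20: 25 ...'  ->  {'Col_01': '14', 'Col_20': '25', ...}
    return dict(re.findall(r"(\S+):\s*(\S+)", line))

with open("data.tsv") as handle:                        # hypothetical file name
    records = [parse_line(line) for line in handle if line.strip()]

df = pd.DataFrame(records)                              # keys missing from a row become NaN
df = df[sorted(df.columns)].apply(pd.to_numeric, errors="coerce")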

To make this even more complicated, this is a very large file, several GB in size.

Normally, I try to process the data in chunks, e.g.

import pandas as pd

for chunk in pd.read_table(FILE_PATH, header=None, sep='\t', chunksize=10**6):
    # place chunks into a dataframe or HDF 

However, I see no way to "preprocess" the data first in chunks, and then use chunks to read the data into pandas.read_table(). How would you do this? What sort of preprocessing tools are available---perhaps sed? awk?
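
One option that stays in Python is to do the preprocessing yourself on chunks of lines: parse each line into a dict, build a per-chunk DataFrame with a fixed column layout, and append it to an on-disk store. The sketch below is only an outline and rests on assumptions: the columns are Col_01 through Col_25, FILE_PATH points at the raw file, and writing to an HDF5 store named parsed.h5 (which requires the tables package) is acceptable. The awk approach in the answer below does this preprocessing outside Python instead.

import itertools
import re
import pandas as pd

EXPECTED_COLS = ["Col_%02d" % i for i in range(1, 26)]   # assumption: Col_01 ... Col_25
CHUNK_LINES = 10**6                                      # lines per chunk; tune to memory

def parse_line(line):
    # split each "key: value" pair in the line into a dict entry
    return dict(re.findall(r"(\S+):\s*(\S+)", line))

with open(FILE_PATH) as handle, pd.HDFStore("parsed.h5", mode="w") as store:
    while True:
        lines = list(itertools.islice(handle, CHUNK_LINES))
        if not lines:
            break
        chunk = pd.DataFrame([parse_line(line) for line in lines])
        chunk = chunk.reindex(columns=EXPECTED_COLS)         # fixed layout, NaN for absent cols
        chunk = chunk.apply(pd.to_numeric, errors="coerce")  # consistent dtypes across chunks
        store.append("data", chunk, index=False)             # appendable table on disk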

This is a challenging problem, due to the size of the data and the formatting that must be done before loading into a dataframe. Any help appreciated.

Recommended answer

$ cat > pandas.awk
BEGIN {
    PROCINFO["sorted_in"]="@ind_str_asc" # traversal order for for(i in a)                  
}
NR==1 {       # the header cols are on the first line of the data file
              # FORGET THIS: if the header cols come from another file, replace NR==1 with NR==FNR and see * below
    split($0,a," ")                  # mkheader a[1]=first_col ...
    for(i in a) {                    # replace with a[first_col]="" ...
        a[a[i]]
        printf "%6s%s", a[i], OFS    # output the header
        delete a[i]                  # remove a[1], a[2], ...
    }
    # next                           # FORGET THIS * next here if cols from another file UNTESTED
}
{
    gsub(/: /,"=")                   # replace key-value separator ": " with "="
    split($0,b,FS)                   # split record on FS into b[1], b[2], ...
    for(i in b) {
        split(b[i],c,"=")            # split key=value to c[1]=key, c[2]=value
        b[c[1]]=c[2]                 # b[key]=value
    }
    for(i in a)                      # go thru headers in a[] and printf from b[]
        printf "%6s%s", (i in b?b[i]:"NaN"), OFS; print ""
}

Data sample (pandas.txt):

Col_01 Col_20 Col_21 Col_22 Col_23 Col_25
Col_01: 14  Col_20: 25    Col_21: 23432    Col_22: 639142
Col_01: 8   Col_20: 25    Col_22: 25134    Col_23: 243344
Col_01: 17  Col_21: 75    Col_23: 79876    Col_25: 634534    Col_22: 5    Col_24: 73453
Col_01: 19  Col_20: 25    Col_21: 32425    Col_23: 989423
Col_01: 12  Col_20: 25    Col_21: 23424    Col_22: 342421    Col_23: 7    Col_24: 13424    Col_25: 67
Col_01: 3   Col_20: 95    Col_21: 32121    Col_25: 111231

$ awk -f pandas.awk pandas.txt
Col_01 Col_20 Col_21 Col_22 Col_23 Col_25
    14     25  23432 639142    NaN    NaN 
     8     25    NaN  25134 243344    NaN 
    17    NaN     75      5  79876 634534 
    19     25  32425    NaN 989423    NaN 
    12     25  23424 342421      7     67 
     3     95  32121    NaN    NaN 111231 

All needed cols should be in the data file header. It probably wouldn't be a big job to collect the headers while processing: just keep the data in arrays and print everything at the end, maybe in version 3.

If you read the headers from a different file (cols.txt) than the data file (pandas.txt), execute the script (pandas.awk):

$ awk -f pandas.awk cols.txt pandas.txt
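
Once the awk step has produced a rectangular, whitespace-separated table with a header row and NaN placeholders, reading it into pandas is straightforward. A short sketch, assuming the awk output was redirected into a file named output.txt:

import pandas as pd

# e.g. produced with:  awk -f pandas.awk pandas.txt > output.txt
df = pd.read_csv("output.txt", sep=r"\s+")   # the literal string "NaN" is parsed as a missing value

# for the several-GB case the chunked pattern from the question still applies:
for chunk in pd.read_csv("output.txt", sep=r"\s+", chunksize=10**6):
    # place chunks into a dataframe or HDF
    pass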
