ValueError: import data via chunks into pandas.csv_reader()

Problem description

I have a large gzip file which I would like to import into a pandas dataframe. Unfortunately, the file has an uneven number of columns. The data has roughly this format:

.... Col_20: 25    Col_21: 23432    Col22: 639142
.... Col_20: 25    Col_22: 25134    Col23: 243344
.... Col_21: 75    Col_23: 79876    Col25: 634534    Col22: 5    Col24: 73453
.... Col_20: 25    Col_21: 32425    Col23: 989423
.... Col_20: 25    Col_21: 23424    Col22: 342421    Col23: 7    Col24: 13424    Col 25: 67
.... Col_20: 95    Col_21: 32121    Col25: 111231

As a test, I tried this:

import pandas as pd
filename = 'path/to/filename.gz'

for chunk in pd.read_csv(filename, sep='\t', chunksize=10**5, engine='python'):
    print(chunk)

This is the error I got:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/nfs/sw/python/python-3.5.1/lib/python3.5/site-packages/pandas/io/parsers.py", line 795, in __next__
    return self.get_chunk()
  File "/nfs/sw/python/python-3.5.1/lib/python3.5/site-packages/pandas/io/parsers.py", line 836, in get_chunk
    return self.read(nrows=size)
  File "/nfs/sw/python/python-3.5.1/lib/python3.5/site-packages/pandas/io/parsers.py", line 815, in read
    ret = self._engine.read(nrows)
  File "/nfs/sw/python/python-3.5.1/lib/python3.5/site-packages/pandas/io/parsers.py", line 1761, in read
    alldata = self._rows_to_cols(content)
  File "/nfs/sw/python/python-3.5.1/lib/python3.5/site-packages/pandas/io/parsers.py", line 2166, in _rows_to_cols
    raise ValueError(msg)
ValueError: Expected 18 fields in line 28, saw 22

How can you allocate a certain number of columns for pandas.read_csv()?
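
One common way to pin the column count (a minimal sketch, not from the original post: the cap of 30 fields and the generated column names are illustrative assumptions) is to pass explicit names so read_csv stops inferring the width from the first row:

import pandas as pd

filename = 'path/to/filename.gz'   # same placeholder path as above

# Assumption: no row in the file has more than 30 fields; pad the header
# out to that width so ragged rows no longer raise "Expected N fields ... saw M".
max_cols = 30
col_names = ['col_{}'.format(i) for i in range(max_cols)]

for chunk in pd.read_csv(filename, sep='\t', chunksize=10**5,
                         engine='python', names=col_names, header=None):
    print(chunk)

Rows shorter than max_cols are padded with NaN; if a row turns out wider than the cap, the parser will typically raise the same error again, so the cap has to be at least the true maximum width.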

Recommended answer

You could also try this:

for chunk in pd.read_csv(filename, sep='\t', chunksize=10**5, engine='python', error_bad_lines=False):
    print(chunk)

error_bad_lines would skip the bad lines, though. I will see if a better alternative can be found.
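
In newer pandas releases (1.3 and later) error_bad_lines is deprecated in favour of on_bad_lines; a minimal equivalent sketch, assuming one of those versions:

import pandas as pd

filename = 'path/to/filename.gz'   # same placeholder path as in the question

# on_bad_lines='skip' is the pandas >= 1.3 replacement for error_bad_lines=False;
# rows with too many fields are dropped instead of raising an error.
for chunk in pd.read_csv(filename, sep='\t', chunksize=10**5,
                         engine='python', on_bad_lines='skip'):
    print(chunk)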

In order to keep the lines that were skipped by error_bad_lines, we can parse the error message and add them back to the dataframe:

import pandas as pd

line     = []      # 0-based line numbers to skip on the next read attempt
expected = []      # number of fields the parser expected on each bad line
saw      = []      # number of fields it actually saw
cont     = True

while cont == True:
    try:
        data = pd.read_csv('file1.csv', skiprows=line)
        cont = False
    except Exception as e:
        msg = str(e)                      # e.message does not exist in Python 3
        errortype = msg.split('.')[0].strip()
        if errortype == 'Error tokenizing data':
            cerror = msg.split(':')[1].strip().replace(',', '')
            nums   = [n for n in cerror.split(' ') if str.isdigit(n)]
            expected.append(int(nums[0]))
            saw.append(int(nums[2]))
            line.append(int(nums[1]) - 1)
        else:
            cerror = 'Unknown'
            print('Unknown Error - 222')
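
The loop above only records which line numbers were skipped; a rough sketch (not part of the original answer) of pulling those raw lines back out of the file so they can be cleaned up and appended to data by hand:

# `line` holds the 0-based file line numbers collected by the loop above.
skipped  = set(line)
bad_rows = []
with open('file1.csv') as fh:
    for i, raw in enumerate(fh):
        if i in skipped:
            bad_rows.append(raw.rstrip('\n').split(','))

# `data` holds the cleanly parsed rows; each entry of `bad_rows` is the list of
# fields from one problematic line, ready for padding or manual cleanup.
print('recovered', len(bad_rows), 'skipped rows')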
