Easiest way to read csv files with multiprocessing in Pandas


Question


Here is my question.
I have a bunch of .csv files (or other files). Pandas makes it easy to read them and save them in DataFrame format. But when the number of files is huge, I want to read them with multiprocessing to save some time.


I manually divide the files into different paths and process each one separately:

os.chdir("./task_1")
files = os.listdir('.')
files.sort()
for file in files:
    filename, extname = os.path.splitext(file)
    if extname == '.csv':
        f = pd.read_csv(file)
        df = (f.VALUE.as_matrix()).reshape(75, 90)

Then I combine them.


How can I run them with a pool to solve my problem?
Any advice would be appreciated!

Answer

Use a Pool:

import os
import pandas as pd 
from multiprocessing import Pool

# wrap your csv importer in a function that can be mapped
def read_csv(filename):
    'converts a filename to a pandas dataframe'
    return pd.read_csv(filename)


def main():

    # get a list of file names
    files = os.listdir('.')
    file_list = [filename for filename in files if filename.endswith('.csv')]

    # set up your pool
    with Pool(processes=8) as pool: # or whatever your hardware can support

        # have your pool map the file names to dataframes
        df_list = pool.map(read_csv, file_list)

        # reduce the list of dataframes to a single dataframe
        combined_df = pd.concat(df_list, ignore_index=True)

    return combined_df

if __name__ == '__main__':
    main()
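As a variant of the same read-then-concat pattern, here is a minimal sketch using `concurrent.futures.ProcessPoolExecutor` instead of `multiprocessing.Pool`; the `load_all` function name and the glob pattern are illustrative, not from the original answer. Note that the `if __name__ == '__main__'` guard is required on platforms that spawn worker processes (e.g. Windows).

```python
import glob
import pandas as pd
from concurrent.futures import ProcessPoolExecutor

def read_csv(filename):
    """Convert a file name to a pandas DataFrame."""
    return pd.read_csv(filename)

def load_all(pattern='*.csv', workers=8):
    """Read every file matching `pattern` in parallel and concatenate."""
    file_list = sorted(glob.glob(pattern))
    with ProcessPoolExecutor(max_workers=workers) as executor:
        # executor.map preserves the order of file_list in its results
        df_list = list(executor.map(read_csv, file_list))
    return pd.concat(df_list, ignore_index=True)

if __name__ == '__main__':
    combined_df = load_all()
    print(combined_df.shape)
```

`executor.map` returns results in input order, so the concatenated frame has the same row order as a sequential loop over the sorted file list.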

