How to read a list of parquet files from S3 as a pandas dataframe using pyarrow?

Question

I have a hacky way of achieving this using boto3 (1.4.4), pyarrow (0.4.1) and pandas (0.20.3).

First, I can read a single parquet file locally like this:

import pyarrow.parquet as pq

# Read one local parquet file into an Arrow table, then convert to pandas.
path = 'parquet/part-r-00000-1e638be4-e31f-498a-a359-47d017a0059c.gz.parquet'
table = pq.read_table(path)
df = table.to_pandas()

I can also read a directory of parquet files locally like this:

import pyarrow.parquet as pq

# Point ParquetDataset at a directory to read all of its parquet files at once.
dataset = pq.ParquetDataset('parquet/')
table = dataset.read()
df = table.to_pandas()

Both work like a charm. Now I want to achieve the same remotely with files stored in an S3 bucket. I was hoping that something like this would work:

dataset = pq.ParquetDataset('s3n://dsn/to/my/bucket')

But it does not:

OSError: Passed non-file path: s3n://dsn/to/my/bucket

After reading pyarrow's documentation thoroughly, this does not seem possible at the moment. So I came up with the following solution:

Reading a single file from S3 and getting a pandas dataframe:

import io
import boto3
import pyarrow.parquet as pq

# Download the S3 object into an in-memory buffer...
buffer = io.BytesIO()
s3 = boto3.resource('s3')
s3_object = s3.Object('bucket-name', 'key/to/parquet/file.gz.parquet')
s3_object.download_fileobj(buffer)
# ...and read it with pyarrow (read_table seeks within the buffer itself).
table = pq.read_table(buffer)
df = table.to_pandas()

And here is my hacky, not-so-optimized solution to create a pandas dataframe from an S3 folder path:

import io
import boto3
import pandas as pd
import pyarrow.parquet as pq

bucket_name = 'bucket-name'

def download_s3_parquet_file(s3, bucket, key):
    # Download one object into an in-memory buffer.
    buffer = io.BytesIO()
    s3.Object(bucket, key).download_fileobj(buffer)
    return buffer

client = boto3.client('s3')
s3 = boto3.resource('s3')
# List the keys under the prefix and keep only the parquet files.
objects_dict = client.list_objects_v2(Bucket=bucket_name, Prefix='my/folder/prefix')
s3_keys = [item['Key'] for item in objects_dict['Contents'] if item['Key'].endswith('.parquet')]
# Download every file, read each one into a dataframe, and concatenate them.
buffers = [download_s3_parquet_file(s3, bucket_name, key) for key in s3_keys]
dfs = [pq.read_table(buffer).to_pandas() for buffer in buffers]
df = pd.concat(dfs, ignore_index=True)
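
One caveat worth noting: list_objects_v2 returns at most 1000 keys per call, so the snippet above silently truncates larger folders. A minimal sketch of the same listing using boto3's paginator (bucket and prefix names are placeholders):

import boto3

client = boto3.client('s3')
paginator = client.get_paginator('list_objects_v2')
s3_keys = []
for page in paginator.paginate(Bucket='bucket-name', Prefix='my/folder/prefix'):
    # 'Contents' is absent on empty pages, hence the .get() default.
    s3_keys.extend(item['Key'] for item in page.get('Contents', [])
                   if item['Key'].endswith('.parquet'))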

Is there a better way to achieve this? Maybe some kind of connector for pandas using pyarrow? I would like to avoid using pyspark, but if there is no other solution, then I would take it.

Answer

You should use the s3fs module, as proposed by yjk21. However, the result of calling ParquetDataset is a pyarrow.parquet.ParquetDataset object, so to get a pandas DataFrame you'll want to apply .read_pandas().to_pandas() to it:

import pyarrow.parquet as pq
import s3fs

# s3fs exposes S3 as a filesystem that pyarrow can traverse directly.
s3 = s3fs.S3FileSystem()

pandas_dataframe = pq.ParquetDataset('s3://your-bucket/', filesystem=s3).read_pandas().to_pandas()
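
Since the question is about a list of parquet files, it may also help that ParquetDataset's path_or_paths argument accepts an explicit list of file paths, not just a directory. A minimal sketch, assuming the same s3fs filesystem and a hypothetical bucket layout:

import pyarrow.parquet as pq
import s3fs

s3 = s3fs.S3FileSystem()
# List the bucket ourselves and hand ParquetDataset exactly the files we want.
keys = [key for key in s3.ls('your-bucket/my/folder/prefix') if key.endswith('.parquet')]
df = pq.ParquetDataset(keys, filesystem=s3).read_pandas().to_pandas()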
