Writing a pickle file to an s3 bucket in AWS

Problem Description

I'm trying to write a pandas dataframe as a pickle file into an s3 bucket in AWS. I know that I can write the dataframe new_df as a CSV to an s3 bucket as follows:

from io import StringIO

import boto3

bucket = 'mybucket'
key = 'path'

csv_buffer = StringIO()
s3_resource = boto3.resource('s3')

# Serialize the dataframe into an in-memory text buffer, then upload it.
new_df.to_csv(csv_buffer, index=False)
s3_resource.Object(bucket, key).put(Body=csv_buffer.getvalue())

I've tried using the same code as above with to_pickle(), but with no success.

Recommended Answer

I've found the solution: the buffer needs to be a BytesIO for pickle files instead of a StringIO (which is for CSV files), since pickle produces binary output that a text buffer will not accept.

import io

import boto3

pickle_buffer = io.BytesIO()
s3_resource = boto3.resource('s3')

# Pickle the dataframe into the binary buffer, then upload its contents.
new_df.to_pickle(pickle_buffer)
s3_resource.Object(bucket, key).put(Body=pickle_buffer.getvalue())
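
To verify the round trip, the object can be read back and unpickled. A minimal sketch, assuming the same bucket and key variables as above and a pandas version whose read_pickle accepts a binary file-like object:

import io

import boto3
import pandas as pd

s3_resource = boto3.resource('s3')

# Download the pickled bytes and rebuild the dataframe from a binary buffer.
obj = s3_resource.Object(bucket, key).get()
restored_df = pd.read_pickle(io.BytesIO(obj['Body'].read()))

As an aside, recent pandas versions with the optional s3fs dependency installed should also accept an s3:// URL directly, e.g. new_df.to_pickle('s3://mybucket/path'), which skips the explicit buffer entirely.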
