How to read content from the s3 bucket as url

Question

I have an S3 bucket URL as below:

s3_filename is s3://xx/xx/y/z/ion.csv

If it is a bucket and key, I can read it with the code below:

import boto3
import pandas as pd

def read_s3(bucket, key):
    # Fetch the object from S3 and load its body into a DataFrame
    s3 = boto3.client('s3')
    obj = s3.get_object(Bucket=bucket, Key=key)
    df = pd.read_csv(obj['Body'])
    return df

Answer

Since you appear to be using Pandas, please note that it actually uses s3fs under the covers. So, if your install is relatively recent and standard, you may directly do:

df = pd.read_csv(s3_path)
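
For instance, a minimal end-to-end sketch of this direct approach could look like the following (assuming s3fs is installed alongside pandas; the path is the one from the question):

import pandas as pd

s3_path = 's3://xx/xx/y/z/ion.csv'
# pandas recognizes the s3:// scheme and delegates the download to s3fs
df = pd.read_csv(s3_path)
print(df.head())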

If you have some specific config for your bucket, such as special credentials, KMS encryption, etc., you may use an explicitly configured s3fs filesystem, for example:

import s3fs

fs = s3fs.S3FileSystem(
    key=my_aws_access_key_id,
    secret=my_aws_secret_access_key,
    s3_additional_kwargs={
        'ServerSideEncryption': 'aws:kms',
        'SSEKMSKeyId': my_kms_key,
    },
)
# note: KMS encryption only used when writing; when reading, it is automatic if you have access

with fs.open(s3_path, 'r') as f:
    df = pd.read_csv(f)

# here we write the same df at a different location, making sure
# it is using my_kms_key:
with fs.open(out_s3_path, 'w') as f:
    df.to_csv(f)

That said, if you really want to handle getting the object yourself, and the question is just about how to strip a potential s3:// prefix and then split bucket/key, you could simply use:

bucket, key = re.sub(r'^s3://', '', s3_path).split('/', 1)
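
As a minimal sketch of how that one-liner could plug into the read_s3 helper from the question (the split_s3_path name is hypothetical):

import re

def split_s3_path(s3_path):
    # Strip an optional s3:// prefix, then split on the first slash
    # into (bucket, key)
    bucket, key = re.sub(r'^s3://', '', s3_path).split('/', 1)
    return bucket, key

bucket, key = split_s3_path('s3://xx/xx/y/z/ion.csv')
# bucket == 'xx', key == 'xx/y/z/ion.csv'
df = read_s3(bucket, key)   # the boto3-based helper from the question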

But that may miss more general cases and conventions handled by systems such as awscli or the s3fs library referenced above.

For more generality, you can take a look at how they do this in awscli. In general, doing so often provides a good indication of whether or not some functionality may already be built into boto3 or botocore. In this case, however, it would appear not (looking at a local clone of release-1.18.126). They simply do this from first principles: see awscli.customizations.s3.utils.split_s3_bucket_key as it is implemented here.

From the regex that is eventually used in that code, you can infer that the kinds of cases awscli allows for s3_path are quite diverse indeed:

_S3_ACCESSPOINT_TO_BUCKET_KEY_REGEX = re.compile(
    r'^(?P<bucket>arn:(aws).*:s3:[a-z\-0-9]+:[0-9]{12}:accesspoint[:/][^/]+)/?'
    r'(?P<key>.*)$'
)
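
If the awscli package happens to be installed in your environment, you could even import that helper instead of re-implementing it, keeping in mind that awscli.customizations is an internal module rather than a supported public API (a minimal sketch, assuming a 1.x awscli such as the release-1.18.126 mentioned above):

# Internal awscli helper; it strips the s3:// prefix itself before splitting.
from awscli.customizations.s3.utils import split_s3_bucket_key

bucket, key = split_s3_bucket_key('s3://xx/xx/y/z/ion.csv')
# bucket == 'xx', key == 'xx/y/z/ion.csv'
df = read_s3(bucket, key)   # the boto3-based helper from the question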
