Pull-style cross region replication for S3 buckets


Question

I need to pull data published to an S3 bucket by a different organization (therefore a different AWS account) in a different region, for subsequent processing with Lambda. I do have access to read it but cannot ask them to set up replication to my buckets.

Amazon's Cross-Region Replication looks like it's designed for pushing data from the source, and I'm not even sure the source organization has versioning enabled.

Is there a way to pull data? My need is for one-way only; I need to process that data shortly (within 10 minutes or so) after it arrives in the source S3 bucket.

Answer

You could run aws s3 sync on a schedule, for example every 10 minutes. If you want to run this in an AWS Lambda function, it looks like the NodeJS and Python Lambda environments have the AWS CLI tool pre-installed. I would suggest writing a short Python Lambda function that calls the AWS CLI to run an s3 sync command, and scheduling that Lambda function to run every 10 minutes.
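As a rough illustration of that approach, here is a minimal sketch of such a Lambda handler. The bucket names and regions are placeholders, and it assumes the aws executable is actually reachable on the runtime's PATH (if it is not, it would have to be bundled with the deployment package or a Lambda layer):

```python
# Minimal sketch: a Lambda handler that shells out to the AWS CLI to mirror
# a source bucket into a local bucket. Bucket names and regions below are
# placeholders, and the "aws" executable is assumed to be available on PATH
# (otherwise bundle it with the deployment package or a Lambda layer).
import subprocess

SOURCE_BUCKET = "s3://their-source-bucket"   # hypothetical source bucket
DEST_BUCKET = "s3://my-destination-bucket"   # hypothetical destination bucket


def lambda_handler(event, context):
    # "aws s3 sync" only copies objects that are new or changed, so running
    # it on a 10-minute schedule effectively pulls fresh data incrementally.
    result = subprocess.run(
        [
            "aws", "s3", "sync",
            SOURCE_BUCKET, DEST_BUCKET,
            "--source-region", "us-west-2",  # region of the source bucket (placeholder)
            "--region", "us-east-1",         # region of the destination bucket (placeholder)
        ],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        raise RuntimeError(f"s3 sync failed: {result.stderr}")
    return {"synced": True, "output": result.stdout}
```

The function itself can be triggered every 10 minutes with a scheduled CloudWatch Events/EventBridge rule such as rate(10 minutes), and its execution role needs read access to the source bucket (s3:ListBucket, s3:GetObject) plus write access to the destination bucket.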

