Is it possible to automatically delete objects older than 10 minutes in AWS S3?


Question

We want to delete objects from S3, 10 minutes after they are created. Is it possible currently?

Recommended answer

I have a working solution that was built serverless with the help of AWS's Simple Queue Service and AWS Lambda. This works for all objects created in an S3 bucket.

When any object is created in your S3 bucket, the bucket sends an event with the object details to an SQS queue configured with a 10-minute delivery delay. The SQS queue is also configured to trigger a Lambda function. The Lambda function reads the object details from the event and deletes the object from the S3 bucket. All three components involved (S3, SQS, and Lambda) are low cost, loosely coupled, serverless, and scale automatically to very large workloads.

  1. Set up your Lambda function first. In my solution, I used Python 3.7. The code for the function is:

import json
import boto3

def lambda_handler(event, context):
    # Each SQS record carries the S3 event notification as a JSON string in 'body'.
    for record in event['Records']:
        v = json.loads(record['body'])
        for rec in v["Records"]:
            bucketName = rec["s3"]["bucket"]["name"]
            objectKey = rec["s3"]["object"]["key"]
            # print("bucket is " + bucketName + " and object is " + objectKey)
            # Note: keys containing spaces or special characters arrive
            # URL-encoded in S3 events and may need urllib.parse.unquote_plus().

            # Delete the object that produced this (delayed) notification.
            sss = boto3.resource("s3")
            obj = sss.Object(bucketName, objectKey)
            obj.delete()

    return {
        'statusCode': 200,
        'body': json.dumps('Delete Completed.')
    }

This code and a sample message file were uploaded to a github repo.
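
For reference, the handler above expects SQS records whose body field is itself a JSON-encoded S3 event. A minimal sketch of that nesting is below; the field values are illustrative and only the fields the handler actually reads are shown (this is not the sample file from the repo):

import json

sample_sqs_event = {
    "Records": [
        {
            # SQS delivers the S3 notification as a JSON string in 'body'.
            "body": json.dumps({
                "Records": [
                    {
                        "s3": {
                            "bucket": {"name": "my-example-bucket"},
                            "object": {"key": "uploads/example.txt"}
                        }
                    }
                ]
            })
        }
    ]
}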

  2. Create a plain SQS queue. Then configure the SQS queue to have a 10-minute delivery delay. This setting can be found under Queue Actions -> Configure Queue (it is the fourth setting down).
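
If you prefer to script this step, a minimal boto3 sketch is below; the queue name is illustrative, and DelaySeconds of 600 is what gives the 10-minute delivery delay:

import boto3

sqs = boto3.client("sqs")

# DelaySeconds = 600 delays delivery of every message by 10 minutes,
# which is what turns the creation event into a delayed delete.
response = sqs.create_queue(
    QueueName="delete-after-10-minutes",   # illustrative name
    Attributes={"DelaySeconds": "600"},
)
print(response["QueueUrl"])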

  3. Configure the SQS queue to trigger the Lambda function you created in Step 1. To do this, use Queue Actions -> Configure Trigger for Lambda Function. The setup screen is self-explanatory. If you don't see your Lambda function from Step 1, redo it correctly and make sure you are using the same Region.
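
The same wiring can be done with boto3 through an event source mapping, sketched below with a placeholder queue ARN and function name. Note that the function's execution role also needs permission to receive and delete messages from the queue.

import boto3

lambda_client = boto3.client("lambda")

# Connect the delayed queue to the delete function from Step 1.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:123456789012:delete-after-10-minutes",
    FunctionName="delete-s3-object",   # placeholder function name
    BatchSize=10,
)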

  4. Set up your S3 bucket so that it fires an event to the SQS queue you created in Step 2. This is found on the main bucket screen: click the Properties tab and select Events. Click the plus sign to add an event and fill out the form.

The important points are to select All object create events and to select the queue you created in Step 2 in the last pull-down on this screen.
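
A boto3 sketch of the same bucket-to-queue notification is below, with a placeholder bucket name and queue ARN. Selecting All object create events corresponds to the s3:ObjectCreated:* event type. If you configure this through the API rather than the console, the queue's access policy must also explicitly allow the bucket to send messages to it (the console normally adds that statement for you).

import boto3

s3 = boto3.client("s3")

# Send every object-created event in this bucket to the delayed queue.
s3.put_bucket_notification_configuration(
    Bucket="my-example-bucket",
    NotificationConfiguration={
        "QueueConfigurations": [
            {
                "QueueArn": "arn:aws:sqs:us-east-1:123456789012:delete-after-10-minutes",
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)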

  5. Last step - add an execution policy to your Lambda function that allows it to delete only from that specific S3 bucket. You can do this via the Lambda console: scroll down the Lambda function screen and configure it under Execution role.
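
A sketch of such a policy attached with boto3 is below; the role name, policy name, and bucket are placeholders for whatever is shown under your function's Execution role section.

import json
import boto3

iam = boto3.client("iam")

# Allow the function's role to delete objects only in this one bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:DeleteObject",
            "Resource": "arn:aws:s3:::my-example-bucket/*",
        }
    ],
}

iam.put_role_policy(
    RoleName="delete-s3-object-role",           # placeholder role name
    PolicyName="allow-delete-from-one-bucket",  # placeholder policy name
    PolicyDocument=json.dumps(policy),
)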

This works for files I've copied into a single S3 bucket. The solution could support many S3 buckets feeding one queue and one Lambda.
