How to zip files in an Amazon S3 bucket and get its URL


Question

I have a bunch of files in an Amazon S3 bucket. I want to zip those files and then download the resulting archive via an S3 URL, using Java Spring.

Answer

S3 is not a file server, nor does it offer operating system file services, such as data manipulation.

If there are many "HUGE" files, your best bet is to:

  1. Fire up a simple EC2 instance
  2. Download all of those files to the EC2 instance, compress them, and re-upload the archive to the S3 bucket under a new object name
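On the EC2 instance, step 2 can be sketched in Java. The zipping itself only needs the standard library; the S3 download/upload wiring is shown in comments because it assumes the AWS SDK for Java v2 (`software.amazon.awssdk:s3`) is on the classpath, and the bucket and key names are placeholders:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Map;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

class S3Zipper {

    /** Zip named streams (e.g. S3 object bodies keyed by object key) into one archive. */
    static byte[] zip(Map<String, InputStream> entries) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ZipOutputStream zos = new ZipOutputStream(buf)) {
            for (Map.Entry<String, InputStream> e : entries.entrySet()) {
                zos.putNextEntry(new ZipEntry(e.getKey()));
                e.getValue().transferTo(zos);   // stream the body into the archive
                zos.closeEntry();
            }
        }
        return buf.toByteArray();
    }

    // With the AWS SDK v2, the surrounding download/re-upload would look roughly like:
    //
    //   S3Client s3 = S3Client.create();
    //   InputStream body = s3.getObject(
    //       GetObjectRequest.builder().bucket("my-bucket").key("a.txt").build());
    //   byte[] archive = zip(Map.of("a.txt", body));
    //   s3.putObject(
    //       PutObjectRequest.builder().bucket("my-bucket").key("all-files.zip").build(),
    //       RequestBody.fromBytes(archive));
    //
    // ("my-bucket", "a.txt", "all-files.zip" are placeholder names.)
}
```

Buffering the whole archive in memory keeps the sketch short; for genuinely huge files you would stream to a temporary file on the instance's disk instead.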

Yes, you can use AWS Lambda to do the same thing, but Lambda is bound to a 900-second (15-minute) execution timeout (so it is recommended to allocate more RAM to boost Lambda execution performance).
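If you do go the Lambda route, the timeout and memory are set in the function configuration. A minimal AWS SAM fragment might look like this (the function name and handler are placeholders):

```yaml
# Hypothetical SAM fragment: raise the timeout to the 900 s ceiling and
# allocate more memory (Lambda CPU share scales with memory).
ZipFilesFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: com.example.ZipHandler::handleRequest   # placeholder handler
    Runtime: java17
    Timeout: 900        # hard maximum for Lambda
    MemorySize: 3008    # more RAM also means more CPU
```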

Traffic from S3 to EC2 instances and other services in the same region is FREE.

If your main purpose is just to read those files within the same AWS region using EC2 or other services, then you don't need this extra step. Just access the files directly.

Note:

It is recommended to access and share files using the AWS API. If you intend to share a file publicly, you must take security seriously and impose download restrictions. AWS traffic out to the internet is never cheap.

