Retrieve files from S3 and put them into an EC2 Linux instance using Java
Question
Here are the different ways I can think of, but I'm not sure which is best:
- Create a console app in Java using GetObject provided in the AWS Java SDK.
- Use aws s3 sync.
- Use SNS > Lambda.
- Use a REST API.
- Use SNS > HTTPS (Java Servlet).
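For reference, option 1 might look roughly like this. A minimal sketch assuming the AWS SDK for Java v2 is on the classpath; the bucket name, object key, and local path are placeholders:

```java
import java.nio.file.Paths;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;

public class S3Downloader {
    public static void main(String[] args) {
        // Credentials come from the default provider chain
        // (EC2 instance profile, environment variables, etc.).
        try (S3Client s3 = S3Client.builder().region(Region.US_EAST_1).build()) {
            GetObjectRequest request = GetObjectRequest.builder()
                    .bucket("my-bucket")          // placeholder bucket name
                    .key("incoming/data.csv")     // placeholder object key
                    .build();
            // Streams the object straight to a file on the instance.
            s3.getObject(request, Paths.get("/tmp/data.csv"));
        }
    }
}
```

This downloads one known object; to notice new objects you would still need to list the bucket periodically, which is the polling concern raised below.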
Performance is important, as I may have to pull many files of varying sizes down to the Linux instance.
The problem I see with option 1 is that I would need to have some kind of polling behavior in place.
With option 2, I don't know (a) whether I need to run this command periodically or whether it keeps running and syncing files forever, and (b) if it only runs once, how do I wrap it in a Java program? I'm also new to Java and Linux. If this were .NET and Windows I would create a Windows service, but I'm not sure what the Java/Linux equivalent is.
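To answer (a) in passing: aws s3 sync copies only new or changed objects and then exits, so it has to be re-run to stay current. One way to wrap that in Java is a long-running process that re-invokes the CLI on a schedule; a sketch, assuming the AWS CLI is installed on the instance, with the bucket name, local directory, and interval as placeholders:

```java
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SyncDaemon {
    // aws s3 sync is incremental and one-shot, so build the command once
    // and run it repeatedly to keep the local directory up to date.
    static List<String> buildSyncCommand(String bucket, String localDir) {
        return List.of("aws", "s3", "sync", "s3://" + bucket, localDir);
    }

    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleWithFixedDelay(() -> {
            try {
                Process p = new ProcessBuilder(buildSyncCommand("my-bucket", "/data/s3"))
                        .inheritIO()   // pass the CLI's output through to our stdout/stderr
                        .start();
                p.waitFor();
            } catch (Exception e) {
                e.printStackTrace(); // log and keep the scheduler alive on failure
            }
        }, 0, 60, TimeUnit.SECONDS); // placeholder interval: every 60 seconds
    }
}
```

The closest Linux equivalent of a Windows service is a systemd unit (or a cron entry): systemd would start this JVM at boot and restart it if it dies, which covers the "run forever" part without any extra Java code.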
Option 3 is not on the table, as Lambda is ruled out (long story).
So which is a good approach in terms of performance, maintainability, and scalability? The number of S3 buckets I will need to monitor will vary (increase), as will the frequency and size of the files.
Thanks
Recommended Answer
Use a Lambda function to copy the file from the S3 bucket to EC2 via SSH. Trigger: S3 object creation.
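A rough shape of such a function, assuming the aws-lambda-java-events and JSch libraries are packaged with the Lambda; the host, user, key path, and target directory are all placeholders:

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import com.jcraft.jsch.ChannelExec;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;

public class S3ToEc2Handler implements RequestHandler<S3Event, String> {
    @Override
    public String handleRequest(S3Event event, Context context) {
        // Fired on S3 object creation; each record names one new object.
        event.getRecords().forEach(record -> {
            String bucket = record.getS3().getBucket().getName();
            String key = record.getS3().getObject().getKey();
            try {
                JSch jsch = new JSch();
                jsch.addIdentity("/tmp/ec2-key.pem");                          // placeholder key path
                Session session = jsch.getSession("ec2-user", "10.0.0.5", 22); // placeholder host
                session.setConfig("StrictHostKeyChecking", "no");
                session.connect();
                // Run 'aws s3 cp' on the instance so EC2 pulls the object itself.
                ChannelExec channel = (ChannelExec) session.openChannel("exec");
                channel.setCommand("aws s3 cp s3://" + bucket + "/" + key + " /data/");
                channel.connect();
                channel.disconnect();
                session.disconnect();
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        });
        return "ok";
    }
}
```

This is event-driven rather than polling, so files land on the instance shortly after they are created, regardless of how many buckets are wired to the trigger.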
See this link: https://privatedock.wordpress.com/2017/08/21/s3-bucket-ec2-directory-sync-using-lambda/