Can I pause and resume the multipart upload using Amazon High or Low Level API?
I'm trying to develop an Upload Manager (installable on your PC, not a web app) that uploads the selected image files to Amazon S3 using Java. So far, I can initialize the upload and pause it using TransferManager (the high-level API) from Amazon S3. The problem is, every time I resume the upload, it starts from the beginning. I have looked at several AWS blogs and AWS docs, but there is no straightforward answer to the question. I have also written code with the low-level API (using their sample code), but there I don't know how to pause the upload. So, my question is:
- How do I resume the upload from the point where it was paused?
I have two buttons (using JFrame and SwingWorker) named Pause and Resume. What should I do so that I can pause and resume repeatedly? How do I implement the code?
```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;

import com.amazonaws.AmazonClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.transfer.PauseResult;
import com.amazonaws.services.s3.transfer.PersistableTransfer;
import com.amazonaws.services.s3.transfer.PersistableUpload;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerConfiguration;
import com.amazonaws.services.s3.transfer.TransferProgress;
import com.amazonaws.services.s3.transfer.Upload;

public class UploadObjectMultipartUploadUsingHighLevelAPI {

    public void pauseUploading(TransferManager tm, Upload upload) throws Exception {
        long MB = 1024 * 1024;
        TransferProgress progress = upload.getProgress();
        System.out.println("The pause will occur once 5 MB of data is uploaded");
        while (progress.getBytesTransferred() < 5 * MB)
            Thread.sleep(2000);

        boolean forceCancel = true;
        float dataTransfered = (float) progress.getBytesTransferred();
        System.out.println("Data Transfered until now: " + dataTransfered);

        // A single tryPause is enough; its result already carries the info needed to resume.
        PauseResult<PersistableUpload> pauseResult = upload.tryPause(forceCancel);
        System.out.println("The upload has been paused. The code that we've got is "
                + pauseResult.getPauseStatus());
        PersistableUpload persistableUpload = pauseResult.getInfoToResume();

        System.out.println("Storing information into file");
        File f = new File("D:\\Example\\resume-upload");
        if (!f.exists())
            f.createNewFile();
        FileOutputStream fos = new FileOutputStream(f);
        persistableUpload.serialize(fos);
        fos.close();
    }

    public void resumeUploading(TransferManager tm) throws Exception {
        FileInputStream fis = new FileInputStream(new File("D:\\Example\\resume-upload"));
        System.out.println("Reading information from the file");
        PersistableUpload persistableUpload = PersistableTransfer.deserializeFrom(fis);
        System.out.println("Reading information completed");
        System.out.println("The system will resume upload now");
        tm.resumeUpload(persistableUpload);
        fis.close();
    }

    public static void main(String[] args) throws Exception {
        String existingBucketName = "Business.SkySquirrel.RawImages/Test";
        String keyName = "Pictures1.zip";
        String filePath = "D:\\Pictures1.zip";

        TransferManagerConfiguration configuration = new TransferManagerConfiguration();
        TransferManager tm = new TransferManager(new ProfileCredentialsProvider());
        // Force multipart uploads for anything over 1 MB so pause/resume kicks in.
        configuration.setMultipartUploadThreshold(1024 * 1024);
        tm.setConfiguration(configuration);

        System.out.println("************* Upload Manager *************");
        try {
            Upload upload = tm.upload(existingBucketName, keyName, new File(filePath));
            System.out.println("Upload Started");
            System.out.println("Transfer: " + upload.getDescription());

            UploadObjectMultipartUploadUsingHighLevelAPI multipartPause =
                    new UploadObjectMultipartUploadUsingHighLevelAPI();
            multipartPause.pauseUploading(tm, upload);

            UploadObjectMultipartUploadUsingHighLevelAPI multipartResume =
                    new UploadObjectMultipartUploadUsingHighLevelAPI();
            multipartResume.resumeUploading(tm);
        } catch (AmazonClientException amazonClientException) {
            System.out.println("Unable to upload file, upload was aborted.");
            amazonClientException.printStackTrace();
        }
    }
}
```
I'd appreciate sample code using either the high-level or the low-level API of Amazon S3.
I am using version 1.8.9.1 of the SDK. I have also added progress reporting while initializing, pausing, and resuming the upload, with the following code.
```java
long MB = 1024 * 1024;
TransferProgress progress = upload.getProgress();
float dataTransfered = progress.getBytesTransferred();
while (!upload.isDone()) {
    dataTransfered = progress.getBytesTransferred();
    System.out.println("Data Transfered: " + dataTransfered / MB + " MB");
    Thread.sleep(2000);
}
```
And I got the following result:

```
Passwords match
Following Files are selected:
D:\Pictures3\DSC02247 - Copy.JPG
Writing 'D:\Pictures3\DSC02247 - Copy.JPG' to zip file
D:\Pictures3\DSC02247.JPG
Writing 'D:\Pictures3\DSC02247.JPG' to zip file
D:\Pictures3\DSC02248.JPG
Writing 'D:\Pictures3\DSC02248.JPG' to zip file
************* Upload Manager *************
Upload Started
Transfer: Uploading to *******/****/****.zip
Data Transfered: 0.0 MB
Data Transfered: 0.0703125 MB
Data Transfered: 0.21875 MB
Data Transfered: 0.3203125 MB
Data Transfered: 0.4140625 MB
Data Transfered: 0.515625 MB
....
....
Data Transfered: 0.9609375 MB
Data Transfered: 1.0546875 MB
Pause Commencing
The pause will occur once 5 MB of data is uploaded
Data Transfered: 1.09375 MB
Data Transfered: 1.1640625 MB
Data Transfered: 1.265625 MB
Data Transfered: 1.359375 MB
....
....
Data Transfered: 4.734375 MB
Data Transfered: 4.8359375 MB
Data Transfered: 4.9296875 MB
The upload has been paused. The code that we've got is SUCCESS
Storing information into file
Upload Paused
Resume Commencing
Reading information from the file
Reading information completed
The system will resume upload now
Data Transfered: 0.0 MB
Data Transfered: 0.171875 MB
Data Transfered: 0.265625 MB
Data Transfered: 0.359375 MB
Data Transfered: 0.421875 MB
....
....
Data Transfered: 9.58182 MB
Data Transfered: 9.683382 MB
Data Transfered: 9.753695 MB
Upload Complete
```
Solution

With the REST API, this is trivially simple... you "pause" by just not sending any more parts, and you "resume" by sending the next part.
The "low level API" maps closely to the REST interface, so the functionality should be the same. S3, internally, has no concept of "pausing" a multipart upload. It's just waiting -- indefinitely -- for you to upload more parts and complete the request, or to abort the request. It will store the parts you've sent (with charges for storage) until the entire operation is completed or aborted... and it will literally wait for months (I've seen it... and presumably, it will wait forever) for you to "resume."
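Because parts are stored (and billed) until the overall upload is completed or aborted, it can be worth sweeping for stale multipart uploads. Below is a minimal sketch with the SDK 1.x low-level client; the `isStale` helper and the age cutoff are illustrative choices, not part of the S3 API:

```java
import java.util.Date;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.AbortMultipartUploadRequest;
import com.amazonaws.services.s3.model.ListMultipartUploadsRequest;
import com.amazonaws.services.s3.model.MultipartUpload;
import com.amazonaws.services.s3.model.MultipartUploadListing;

public class StaleUploadSweeper {

    // Hypothetical cutoff check: treat uploads older than maxAgeMillis as stale.
    static boolean isStale(Date initiated, Date now, long maxAgeMillis) {
        return now.getTime() - initiated.getTime() > maxAgeMillis;
    }

    public static void sweep(AmazonS3 s3, String bucketName, long maxAgeMillis) {
        // Lists in-progress (never completed, never aborted) multipart uploads.
        MultipartUploadListing listing =
                s3.listMultipartUploads(new ListMultipartUploadsRequest(bucketName));
        Date now = new Date();
        for (MultipartUpload mu : listing.getMultipartUploads()) {
            if (isStale(mu.getInitiated(), now, maxAgeMillis)) {
                // Aborting frees the stored parts and stops the storage charges.
                s3.abortMultipartUpload(new AbortMultipartUploadRequest(
                        bucketName, mu.getKey(), mu.getUploadId()));
            }
        }
    }
}
```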
But there are no low-level calls for pause/resume -- you just do it.
The catch is, you have to hold on, locally, to the etags for each part, and you have to send them along with the request to complete the multipart upload.
If you never complete or abort a multipart operation, the parts you did send are stored by S3, waiting for your next move.
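To make that concrete, here is a hedged sketch of "pause by simply stopping" with the low-level SDK 1.x API. Nothing in it is an official pause feature: the upload ID and the accumulated part ETags are simply kept by the caller (here in a local list; a real app would persist them to disk between runs), and resuming means continuing from the next part number. The class, method names, and part-splitting scheme are illustrative; only the SDK calls are real API:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.CompleteMultipartUploadRequest;
import com.amazonaws.services.s3.model.InitiateMultipartUploadRequest;
import com.amazonaws.services.s3.model.PartETag;
import com.amazonaws.services.s3.model.UploadPartRequest;

public class LowLevelPauseResume {

    static final long PART_SIZE = 5L * 1024 * 1024; // S3's minimum part size is 5 MB

    // Pure helper: how many parts a file of fileSize bytes needs at a given part size.
    static int partCount(long fileSize, long partSize) {
        return (int) ((fileSize + partSize - 1) / partSize);
    }

    // Uploads parts [firstPart..lastPart]; returning before the last part *is* the "pause".
    static void uploadParts(AmazonS3 s3, String bucket, String key, String uploadId,
                            File file, List<PartETag> eTags, int firstPart, int lastPart) {
        for (int part = firstPart; part <= lastPart; part++) {
            long offset = (part - 1) * PART_SIZE;
            long size = Math.min(PART_SIZE, file.length() - offset);
            UploadPartRequest req = new UploadPartRequest()
                    .withBucketName(bucket).withKey(key).withUploadId(uploadId)
                    .withPartNumber(part).withFile(file)
                    .withFileOffset(offset).withPartSize(size);
            // Every part's ETag must be kept; it is required to complete the upload.
            eTags.add(s3.uploadPart(req).getPartETag());
        }
    }

    static void example(AmazonS3 s3, String bucket, String key, File file) {
        String uploadId = s3.initiateMultipartUpload(
                new InitiateMultipartUploadRequest(bucket, key)).getUploadId();
        List<PartETag> eTags = new ArrayList<PartETag>();

        int total = partCount(file.length(), PART_SIZE);
        uploadParts(s3, bucket, key, uploadId, file, eTags, 1, total / 2);
        // "Paused" here: S3 simply holds the parts sent so far.
        // "Resume" by sending the remaining parts with the same uploadId:
        uploadParts(s3, bucket, key, uploadId, file, eTags, total / 2 + 1, total);

        // Completing requires the full ETag list collected above.
        s3.completeMultipartUpload(
                new CompleteMultipartUploadRequest(bucket, key, uploadId, eTags));
    }
}
```

If the process exits between the two `uploadParts` calls, persisting `uploadId` and the ETag list (and listing already-uploaded parts via `ListParts` on restart) is what makes the resume survive a restart.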
http://docs.aws.amazon.com/AmazonS3/latest/dev/llJavaUploadFile.html