Reading a large file from S3 bucket


Problem Description


I am trying to read a file of around 1 GB from an S3 bucket. My objective is to read the data from the file and send it over to another server.

At the moment, when I try to read a large file (1 GB), my system hangs and the server crashes. I am able to log the contents of a 240 MB file to the console with the following segment of code:

// Assumes the AWS SDK for JavaScript v2 and Express, as implied by the snippet
const AWS = require('aws-sdk');
const express = require('express');

const s3 = new AWS.S3();
const router = express.Router();

var bucketParams = {
    Bucket: "xyzBucket",
    Key: "input/something.zip"
};

router.get('/getData', function(req, res) {
    // getObject buffers the whole object in memory before the callback
    // runs, which is what exhausts RAM on a 1 GB file
    s3.getObject(bucketParams, function(err, data) {
        if (err) {
            console.log(err, err.stack); // an error occurred
        }
        else {
            console.log(data); // successful response
        }
    });
    // Send data over to another server
});

How would it work, when it comes to reading large files from S3?

Solution

To answer the question of reading large files from S3, I would recommend using the Range header to fetch the object one part at a time:

https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGET.html

Getting it part by part will keep you from exceeding the limits of your framework and from exhausting RAM.
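
A minimal sketch of such a part-by-part read, assuming the AWS SDK for JavaScript v2; the 16 MB chunk size and the handleChunk callback are illustrative names, not part of the original answer:

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

const CHUNK_SIZE = 16 * 1024 * 1024; // 16 MB per range request (assumed value)

async function readInChunks(bucket, key, handleChunk) {
    // Ask for the object's total size first so we know how many ranges to request
    const head = await s3.headObject({ Bucket: bucket, Key: key }).promise();
    const totalSize = head.ContentLength;

    for (let start = 0; start < totalSize; start += CHUNK_SIZE) {
        const end = Math.min(start + CHUNK_SIZE, totalSize) - 1; // Range end is inclusive
        const part = await s3.getObject({
            Bucket: bucket,
            Key: key,
            Range: `bytes=${start}-${end}`
        }).promise();
        // Only one chunk is held in memory at a time; forward it before fetching the next
        await handleChunk(part.Body, start, end);
    }
}

Each iteration holds at most one chunk in memory, so a 1 GB object never has to fit in RAM all at once.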

You can also leverage the Range support to improve bandwidth utilization with a multipart / multithreaded download.
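
As a rough sketch of that idea, the same headObject/getObject calls can be issued concurrently; the part count of 8 below is an arbitrary tuning choice, not something the S3 API prescribes:

async function downloadInParallel(bucket, key) {
    const head = await s3.headObject({ Bucket: bucket, Key: key }).promise();
    const totalSize = head.ContentLength;

    const PART_COUNT = 8; // assumed degree of parallelism
    const partSize = Math.ceil(totalSize / PART_COUNT);

    // Precompute the byte ranges so the last part is never out of bounds
    const ranges = [];
    for (let start = 0; start < totalSize; start += partSize) {
        ranges.push([start, Math.min(start + partSize, totalSize) - 1]);
    }

    // Fire all range requests at once to use more of the available bandwidth
    const parts = await Promise.all(
        ranges.map(([start, end]) =>
            s3.getObject({
                Bucket: bucket,
                Key: key,
                Range: `bytes=${start}-${end}`
            }).promise()
        )
    );

    // Reassembling buffers the whole object again; for a 1 GB file you would
    // instead forward each part as it arrives
    return Buffer.concat(parts.map(p => p.Body));
}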
