S3 PutObject in AWS Lambda (via Node) is doubling filesize when saving to bucket

Problem Description

I have been working with http.get and s3.putObject. Basically, I just want to get a file from an http location and save it, as is, to a bucket in S3. Seems rather simple. The original file size is 47kb.

The problem is, the retrieved file (47kb) is being saved to the S3 bucket (using s3.putObject) as 92.4kb in size. Somewhere along the way, the file has doubled in size, making it unusable.

How do I prevent the file from doubling in size by the time it is saved to the S3 bucket?

Here is the entire code being used:

var http = require('http');
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

exports.handler = function(event, context) {
    var imgSourceURL = "http://www.asite.com/an-image.jpg";
    var body;
    var stagingparams;
    http.get(imgSourceURL, function(res) {
        res.on('data', function(chunk) { body += chunk; });
        res.on('end', function() {
            var tmp_contentType = res.headers['content-type']; // Reported as image/jpeg
            var tmp_contentLength = res.headers['content-length']; // The reported filesize is 50kb (the actual filesize on disk is 47kb)
            stagingparams = {
                Bucket: "myspecialbucket",
                Key: "mytestimage.jpg",
                Body: body
            };
            // When putObject saves the file to S3, it doubles the size of the file to 92.4kb, thus making file non-readable.
            s3.putObject(stagingparams, function(err, data) {
                if (err) {
                    console.error(err, err.stack);
                }
                else {
                    console.log(data);
                }
            });
        });
    });
};

Answer

Use an array to store the readable stream's chunks, then concatenate all the Buffer instances in the array together before calling s3.putObject. In the original code, body += chunk coerces each binary chunk to a UTF-8 string (and since body starts out undefined, the result even begins with the literal text "undefined"); bytes that are not valid UTF-8 get replaced during that conversion, which corrupts the image and inflates its size. Keeping the chunks as Buffers and joining them with Buffer.concat preserves the raw bytes:

var http = require('http');
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

exports.handler = function(event, context) {
    var imgSourceURL = "http://www.asite.com/an-image.jpg";
    var body = [];
    var stagingparams;
    http.get(imgSourceURL, function(res) {
        res.on('data', function(chunk) { body.push(chunk); });
        res.on('end', function() {
            var tmp_contentType = res.headers['content-type']; // Reported as image/jpeg
            var tmp_contentLength = res.headers['content-length']; // The reported filesize is 50kb (the actual filesize on disk is 47kb)
            stagingparams = {
                Bucket: "myspecialbucket",
                Key: "mytestimage.jpg",
                Body: Buffer.concat(body)
            };
            // Body is now a single Buffer, so putObject stores the exact bytes that were downloaded.
            s3.putObject(stagingparams, function(err, data) {
                if (err) {
                    console.error(err, err.stack);
                }
                else {
                    console.log(data);
                }
            });
        });
    });
};
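
For reference, here is a minimal standalone sketch (not part of the original answer) demonstrating why string concatenation inflates binary data while Buffer.concat preserves it; the byte values are arbitrary sample data:

// Coercing a binary Buffer to a UTF-8 string turns every invalid byte
// sequence into the 3-byte U+FFFD replacement character, so the
// re-encoded payload grows and no longer matches the source bytes.
var binary = Buffer.from([0xff, 0xd8, 0xff, 0xe0]); // JPEG magic-number bytes

var asString = '' + binary; // what body += chunk effectively did
console.log(Buffer.byteLength(asString)); // 12 -- inflated and corrupted

var asBuffer = Buffer.concat([binary]); // what the fixed code does
console.log(asBuffer.length); // 4 -- identical to the input

As a design note, the AWS SDK for JavaScript (v2) also offers s3.upload, which accepts a readable stream as the Body directly, so the whole file never has to be buffered in memory.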
