Upload a file to Amazon S3 with NodeJS

Question

I ran into a problem while trying to upload a file to my S3 bucket. Everything works except that my file parameters do not seem appropriate. I am using the Amazon S3 SDK to upload from Node.js to S3.

This is my route setup:

var multiparty = require('connect-multiparty'),
    multipartyMiddleware = multiparty();
app.route('/api/items/upload').post(multipartyMiddleware, items.upload);

This is the items.upload() function:

exports.upload = function(req, res) {
    var file = req.files.file;
    var s3bucket = new AWS.S3({params: {Bucket: 'mybucketname'}});
    s3bucket.createBucket(function() {
        var params = {
            Key: file.name,
            Body: file
        };
        s3bucket.upload(params, function(err, data) {
            console.log("PRINT FILE:", file);
            if (err) {
                console.log('ERROR MSG: ', err);
            } else {
                console.log('Successfully uploaded data');
            }
        });
    });
};

Setting the Body param to a string like "hello" works fine. According to the docs, the Body param must be a Buffer, Typed Array, Blob, String, or ReadableStream. However, uploading a file object fails with the following error message:

[Error: Unsupported body payload object]

This is the file object:

{ fieldName: 'file',
  originalFilename: 'second_fnp.png',
  path: '/var/folders/ps/l8lvygws0w93trqz7yj1t5sr0000gn/T/26374-7ttwvc.png',
  headers: 
   { 'content-disposition': 'form-data; name="file"; filename="second_fnp.png"',
     'content-type': 'image/png' },
  ws: 
   { _writableState: 
      { highWaterMark: 16384,
        objectMode: false,
        needDrain: true,
        ending: true,
        ended: true,
        finished: true,
        decodeStrings: true,
        defaultEncoding: 'utf8',
        length: 0,
        writing: false,
        sync: false,
        bufferProcessing: false,
        onwrite: [Function],
        writecb: null,
        writelen: 0,
        buffer: [],
        errorEmitted: false },
     writable: true,
     domain: null,
     _events: { error: [Object], close: [Object] },
     _maxListeners: 10,
     path: '/var/folders/ps/l8lvygws0w93trqz7yj1t5sr0000gn/T/26374-7ttwvc.png',
     fd: null,
     flags: 'w',
     mode: 438,
     start: undefined,
     pos: undefined,
     bytesWritten: 261937,
     closed: true },
  size: 261937,
  name: 'second_fnp.png',
  type: 'image/png' }

Any help would be greatly appreciated!

Answer

So it looks like there are a few things going wrong here. Based on your post, it looks like you are attempting to support file uploads using the connect-multiparty middleware. What this middleware does is take the uploaded file, write it to the local filesystem, and then set req.files to the uploaded file(s).
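For illustration only (this sketch is not part of the original answer), here is roughly what a handler sees once the middleware has run; the handler name inspectUpload is hypothetical, and the properties match the file object dumped in the question:

// Sketch: connect-multiparty has already written the upload to a temporary
// path and populated req.files, keyed by the form field name.
exports.inspectUpload = function (req, res) {
    var file = req.files.file;          // form field named "file"
    console.log(file.path);             // temp location on the local filesystem
    console.log(file.originalFilename); // filename as sent by the client
    console.log(file.size, file.type);  // byte size and MIME type
    res.status(200).end();
};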

The configuration of your route looks fine; the problem looks to be with your items.upload() function, in particular this part:

var params = {
  Key: file.name,
  Body: file
};

As I mentioned at the beginning of my answer, connect-multiparty writes the file to the local filesystem, so you'll need to open the file and read it, upload it, and then delete it from the local filesystem.

That said, you could update your method to something like the following:

var AWS = require('aws-sdk'),
    fs = require('fs');

exports.upload = function (req, res) {
    var file = req.files.file;
    fs.readFile(file.path, function (err, data) {
        if (err) throw err; // Something went wrong!
        var s3bucket = new AWS.S3({params: {Bucket: 'mybucketname'}});
        s3bucket.createBucket(function () {
            var params = {
                Key: file.originalFilename, // file.name doesn't exist as a property
                Body: data
            };
            s3bucket.upload(params, function (err, data) {
                // Whether there is an error or not, delete the temp file
                fs.unlink(file.path, function (err) {
                    if (err) {
                        console.error(err);
                    }
                    console.log('Temp file deleted');
                });

                console.log("PRINT FILE:", file);
                if (err) {
                    console.log('ERROR MSG: ', err);
                    res.status(500).send(err);
                } else {
                    console.log('Successfully uploaded data');
                    res.status(200).end();
                }
            });
        });
    });
};

What this does is read the uploaded file from the local filesystem, upload it to S3, then delete the temporary file and send a response.

There are a few problems with this approach. First off, it's not as efficient as it could be: for large files you will load the entire file into memory before you upload it. Secondly, this process doesn't support multi-part uploads for large files (I think the cut-off is 5 MB before you have to do a multi-part upload).
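One way around both issues (a sketch added for illustration, not part of the original answer) is to hand the AWS SDK a read stream instead of a Buffer: the SDK's upload() is a managed uploader that accepts a ReadableStream as Body and splits large payloads into parts behind the scenes. The handler name uploadStreaming is hypothetical; the bucket name and middleware setup are the same as above:

var AWS = require('aws-sdk'),
    fs = require('fs');

exports.uploadStreaming = function (req, res) {
    var file = req.files.file;
    var s3bucket = new AWS.S3({params: {Bucket: 'mybucketname'}});
    var params = {
        Key: file.originalFilename,
        Body: fs.createReadStream(file.path) // streamed, not buffered in memory
    };
    s3bucket.upload(params, function (err, data) {
        // Always remove the temp file written by connect-multiparty
        fs.unlink(file.path, function (unlinkErr) {
            if (unlinkErr) console.error(unlinkErr);
        });
        if (err) return res.status(500).send(err);
        res.status(200).end();
    });
};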

What I would suggest instead is that you use a module I've been working on called S3FS, which provides an interface similar to the native fs module in Node.js but abstracts away some of the details, such as multi-part uploads and the S3 API (as well as adding some additional functionality like recursive methods).

If you were to pull in the S3FS library, your code would look something like this:

var fs = require('fs'),
    S3FS = require('s3fs'),
    s3fsImpl = new S3FS('mybucketname', {
        accessKeyId: 'XXXXXXXXXXX',
        secretAccessKey: 'XXXXXXXXXXXXXXXXX'
    });

// Create our bucket if it doesn't exist
s3fsImpl.create();

exports.upload = function (req, res) {
    var file = req.files.file;
    var stream = fs.createReadStream(file.path);
    return s3fsImpl.writeFile(file.originalFilename, stream).then(function () {
        fs.unlink(file.path, function (err) {
            if (err) {
                console.error(err);
            }
        });
        res.status(200).end();
    });
};

What this does is instantiate the module for the provided bucket and AWS credentials, and then create the bucket if it doesn't exist. When a request comes through to upload a file, we open a stream to the file and use it to write the file to the specified path in S3. This handles the multi-part upload piece behind the scenes (if needed) and has the benefit of being done through a stream, so you don't have to wait to read the whole file before you start uploading it.

If you prefer, you could change the code to use callbacks instead of Promises, or use the pipe() method with an event listener to determine the end/errors.
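Sticking with the Promise form, here is a small sketch (not from the original answer) of how you might determine the end/error states and report them to the client, assuming the same s3fsImpl as configured above:

exports.upload = function (req, res) {
    var file = req.files.file;
    var stream = fs.createReadStream(file.path);
    return s3fsImpl.writeFile(file.originalFilename, stream)
        .then(function () {
            res.status(200).end();      // upload finished
        })
        .catch(function (err) {         // surface S3 failures to the client
            console.error(err);
            res.status(500).send(err);
        })
        .then(function () {
            // Clean up the temp file whether the upload succeeded or not
            fs.unlink(file.path, function (unlinkErr) {
                if (unlinkErr) console.error(unlinkErr);
            });
        });
};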

If you're looking for additional methods, check out the documentation for s3fs, and feel free to open an issue if you are looking for some additional methods or having problems.
