Upload a file to Amazon S3 with NodeJS


Problem description

I ran into a problem while trying to upload a file to my S3 bucket. Everything works except that my file parameters do not seem appropriate. I am using the Amazon S3 SDK to upload from Node.js to S3.

These are my route settings:

var multiparty = require('connect-multiparty'),
    multipartyMiddleware = multiparty();
app.route('/api/items/upload').post(multipartyMiddleware, items.upload);
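
(For context: the route above assumes an Express app and the AWS SDK have already been set up elsewhere. A hypothetical sketch of that surrounding setup, with the require path for items and the region invented purely for illustration:)

var express = require('express'),
    AWS = require('aws-sdk'),
    items = require('./controllers/items'); // hypothetical path; its upload() is shown below

// Credentials are typically picked up from environment variables or the shared credentials file
AWS.config.update({region: 'us-east-1'}); // region chosen only for illustration

var app = express();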

This is the items.upload() function:

exports.upload = function(req, res) {
    var file = req.files.file;
    var s3bucket = new AWS.S3({params: {Bucket: 'mybucketname'}});
    s3bucket.createBucket(function() {
        var params = {
            Key: file.name,
            Body: file
        };
        s3bucket.upload(params, function(err, data) {
            console.log("PRINT FILE:", file);
            if (err) {
                console.log('ERROR MSG: ', err);
            } else {
                console.log('Successfully uploaded data');
            }
        });
    });
};

Setting the Body param to a string like "hello" works fine. According to the docs, the Body param must be one of Buffer, Typed Array, Blob, String, or ReadableStream. However, uploading a file object fails with the following error message:

[Error: Unsupported body payload object]

This is the file object:

{ fieldName: 'file',
  originalFilename: 'second_fnp.png',
  path: '/var/folders/ps/l8lvygws0w93trqz7yj1t5sr0000gn/T/26374-7ttwvc.png',
  headers: 
   { 'content-disposition': 'form-data; name="file"; filename="second_fnp.png"',
     'content-type': 'image/png' },
  ws: 
   { _writableState: 
      { highWaterMark: 16384,
        objectMode: false,
        needDrain: true,
        ending: true,
        ended: true,
        finished: true,
        decodeStrings: true,
        defaultEncoding: 'utf8',
        length: 0,
        writing: false,
        sync: false,
        bufferProcessing: false,
        onwrite: [Function],
        writecb: null,
        writelen: 0,
        buffer: [],
        errorEmitted: false },
     writable: true,
     domain: null,
     _events: { error: [Object], close: [Object] },
     _maxListeners: 10,
     path: '/var/folders/ps/l8lvygws0w93trqz7yj1t5sr0000gn/T/26374-7ttwvc.png',
     fd: null,
     flags: 'w',
     mode: 438,
     start: undefined,
     pos: undefined,
     bytesWritten: 261937,
     closed: true },
  size: 261937,
  name: 'second_fnp.png',
  type: 'image/png' }

Any help will be greatly appreciated!

Recommended answer

So it looks like there are a few things going wrong here. Based on your post it looks like you are attempting to support file uploads using the connect-multiparty middleware. What this middleware does is take the uploaded file, write it to the local filesystem, and then set req.files to the uploaded file(s).

The configuration of your route looks fine; the problem is in your items.upload() function, in particular this part:

var params = {
  Key: file.name,
  Body: file
};

As I mentioned at the beginning of my answer, connect-multiparty writes the file to the local filesystem, so req.files.file is just a descriptor of that file on disk (path, size, headers, and so on), which is why the SDK rejects it as an unsupported Body payload. You'll need to open the file and read it, then upload it, and then delete it from the local filesystem.

That said, you could update your method to something like the following:

var fs = require('fs');
exports.upload = function (req, res) {
    var file = req.files.file;
    fs.readFile(file.path, function (err, data) {
        if (err) throw err; // Something went wrong!
        var s3bucket = new AWS.S3({params: {Bucket: 'mybucketname'}});
        s3bucket.createBucket(function () {
            var params = {
                Key: file.originalFilename, //file.name doesn't exist as a property
                Body: data
            };
            s3bucket.upload(params, function (err, data) {
                // Whether there is an error or not, delete the temp file
                fs.unlink(file.path, function (err) {
                    if (err) {
                        console.error(err);
                    }
                    console.log('Temp file deleted');
                });

                console.log("PRINT FILE:", file);
                if (err) {
                    console.log('ERROR MSG: ', err);
                    res.status(500).send(err);
                } else {
                    console.log('Successfully uploaded data');
                    res.status(200).end();
                }
            });
        });
    });
};

What this does is read the uploaded file from the local filesystem, upload it to S3, delete the temporary file, and send a response.

There are a few problems with this approach. First off, it's not as efficient as it could be, since for large files you will load the entire file into memory before you write it. Secondly, this process doesn't support multi-part uploads for large files (I think the cut-off is 5 MB before you have to do a multi-part upload).
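
As an aside, the s3bucket.upload() call used above will also accept a ReadableStream as Body (it's in the list of accepted types quoted in the question), which at least avoids buffering the whole file in memory. A minimal sketch reusing the bucket name and multiparty file object from the question, with error handling kept to the basics:

var fs = require('fs');
var AWS = require('aws-sdk');

exports.upload = function (req, res) {
    var file = req.files.file;
    var s3bucket = new AWS.S3({params: {Bucket: 'mybucketname'}});
    var params = {
        Key: file.originalFilename,
        Body: fs.createReadStream(file.path) // stream the temp file instead of reading it into memory
    };
    s3bucket.upload(params, function (err) {
        // Remove the temp file whether or not the upload succeeded
        fs.unlink(file.path, function (unlinkErr) {
            if (unlinkErr) {
                console.error(unlinkErr);
            }
        });
        if (err) {
            return res.status(500).send(err);
        }
        res.status(200).end();
    });
};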

What I would suggest instead is that you use a module I've been working on called S3FS, which provides an interface similar to the native fs module in Node.js but abstracts away some of the details such as multi-part uploads and the S3 API (as well as adding some extra functionality like recursive methods).

If you were to pull in the S3FS library, your code would look something like this:

var fs = require('fs'),
    S3FS = require('s3fs'),
    s3fsImpl = new S3FS('mybucketname', {
        accessKeyId: 'XXXXXXXXXXX',
        secretAccessKey: 'XXXXXXXXXXXXXXXXX'
    });

// Create our bucket if it doesn't exist
s3fsImpl.create();

exports.upload = function (req, res) {
    var file = req.files.file;
    var stream = fs.createReadStream(file.path);
    return s3fsImpl.writeFile(file.originalFilename, stream).then(function () {
        fs.unlink(file.path, function (err) {
            if (err) {
                console.error(err);
            }
        });
        res.status(200).end();
    });
};

What this will do is instantiate the module for the provided bucket and AWS credentials and then create the bucket if it doesn't exist. When a request comes through to upload a file, we open a stream to the file and use it to write the file to S3 at the specified path. This handles the multi-part upload piece behind the scenes (if needed) and has the benefit of being done through a stream, so you don't have to wait to read the whole file before you start uploading it.

If you prefer, you could change the code from Promises to callbacks, or use the pipe() method (see https://github.com/RiptideCloud/s3fs/blob/2516ab9356470f066409b870de5d198726e3fb7b/test/file.js#L272) with event listeners to detect the end/errors.
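
For instance, staying with Promises but adding a rejection handler, so the client still gets a response when the upload fails. A sketch that reuses fs and s3fsImpl from the block above and assumes writeFile rejects its promise on failure:

exports.upload = function (req, res) {
    var file = req.files.file;
    var stream = fs.createReadStream(file.path);
    return s3fsImpl.writeFile(file.originalFilename, stream).then(function () {
        // Upload succeeded: clean up the temp file and respond
        fs.unlink(file.path, function (err) {
            if (err) {
                console.error(err);
            }
        });
        res.status(200).end();
    }, function (err) {
        // Upload failed: surface the error instead of leaving the request hanging
        console.error(err);
        res.status(500).send(err);
    });
};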

If you're looking for additional methods, check out the documentation for s3fs, and feel free to open an issue if you need extra methods or run into problems.
