How to pipe an archive (zip) to an S3 bucket

Problem description

I'm a bit confused about how to proceed. I am using archiver (a Node.js module) to write data to a zip file. Currently, my code works when I write to a file (local storage).

var fs = require('fs');
var archiver = require('archiver');

var output = fs.createWriteStream(__dirname + '/example.zip');
var archive = archiver('zip', {
     zlib: { level: 9 }  
});

archive.pipe(output);
archive.append(mybuffer, {name: 'msg001.txt'});

I'd like to modify the code so that the archive target is an AWS S3 bucket. Looking at the code examples, I can specify the bucket name and key (and body) when I create the S3 object or the upload request, as in:

var s3 = new AWS.S3();
var params = {Bucket: 'myBucket', Key: 'myMsgArchive.zip', Body: myStream};
s3.upload( params, function(err,data){
    … 
});

Or 

var s3 = new AWS.S3({ params: {Bucket: 'myBucket', Key: 'myMsgArchive.zip'}});
s3.upload( {Body: myStream})
    .send(function(err,data) {
    …
    });

With regard to my S3 example(s), myStream appears to be a readable stream, and I am confused about how to make this work, since archive.pipe requires a writable stream. Is this a case where we need to use a pass-through stream? I've found an example where someone created a pass-through stream, but the example is too terse for me to gain a proper understanding. The specific example I am referring to is:

Piping a stream to s3.upload()
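
From what I can tell, the linked example boils down to something like the following (a sketch using the names from my code above; I'm not certain this is correct): create a pass-through stream, hand it to s3.upload() as the Body, and pipe the archive into it.

var stream = require('stream');
var AWS = require('aws-sdk');

var s3 = new AWS.S3();
var pass = new stream.PassThrough();

// s3.upload() reads the pass-through stream as the request body
s3.upload({ Bucket: 'myBucket', Key: 'myMsgArchive.zip', Body: pass }, function (err, data) {
  // called once the upload finishes (or fails)
});

archive.pipe(pass);                               // archiver writes into the pass-through
archive.append(mybuffer, { name: 'msg001.txt' });
archive.finalize();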

Any help someone can give me would be greatly appreciated. Thanks.

Recommended answer

This could be useful for anyone else wondering how to use pipe.

Since you correctly referenced the example using the pass-through stream, here's my working code:

1 - The routine itself, zipping files with node-archiver

const AWS = require('aws-sdk')
const archiver = require('archiver')
const stream = require('stream')

// yourBucket and s3Folder below are placeholders for your own bucket and source folder
const s3client = new AWS.S3()

exports.downloadFromS3AndZipToS3 = () => {
  // These are the input files I want to read from S3 and add to the ZIP

  const files = [
    `${s3Folder}/myFile.pdf`,
    `${s3Folder}/anotherFile.xml`
  ]

  // In case you want to give them different names in the final ZIP

  const fileNames = [
    'finalPDFName.pdf',
    'finalXMLName.xml'
  ]

  // Use promises to get them all

  const promises = []

  files.map((file) => {
    promises.push(s3client.getObject({
      Bucket: yourBucket,
      Key: file
    }).promise())
  })

  // Define the ZIP target archive

  let archive = archiver('zip', {
    zlib: { level: 9 } // Sets the compression level.
  })

  // Pipe!

  archive.pipe(uploadFromStream(s3client, 'someDestinationFolderPathOnS3', 'zipFileName.zip'))

  archive.on('warning', function(err) {
    if (err.code === 'ENOENT') {
      // log warning
    } else {
      // throw error
      throw err;
    }
  })

  // Good practice to catch this error explicitly
  archive.on('error', function(err) {
    throw err;
  })

  // The actual archive is populated here 

  return Promise
    .all(promises)
    .then((data) => {
      data.map((thisFile, index) => {
        archive.append(thisFile.Body, { name: fileNames[index] })
      })

      archive.finalize()
    })
}
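
A hypothetical usage sketch of the routine above (the module path is an assumption):

const { downloadFromS3AndZipToS3 } = require('./zipToS3')

// Resolves after the source files have been appended and finalize() has been called;
// the S3 upload itself completes in the uploadFromStream callback (see below)
downloadFromS3AndZipToS3()
  .then(() => console.log('Archive finalized and streaming to S3'))
  .catch((err) => console.error(err))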

2 - The helper method

const uploadFromStream = (s3client, destinationFolder, fileName) => {
  const pass = new stream.PassThrough()

  const s3params = {
    Bucket: yourBucket,
    Key: `${destinationFolder}/${fileName}`,
    Body: pass,
    ContentType: 'application/zip'
  }

  s3client.upload(s3params, (err, data) => {
    if (err)
      console.log(err)

    if (data)
      console.log('Success')
  })

  return pass
}
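
One thing worth noting: as written, downloadFromS3AndZipToS3 resolves when finalize() is called, while the actual S3 upload completes later inside the helper's callback. If the caller needs to wait for the upload itself, one possible variant (a sketch, assuming the same aws-sdk v2 client; bucket and key names remain placeholders) is to return the upload promise alongside the pass-through stream:

const uploadFromStream = (s3client, destinationFolder, fileName) => {
  const pass = new stream.PassThrough()

  // s3.upload() reads the pass-through stream as the request body;
  // .promise() lets the caller await completion of the whole upload
  const uploadPromise = s3client.upload({
    Bucket: yourBucket,
    Key: `${destinationFolder}/${fileName}`,
    Body: pass,
    ContentType: 'application/zip'
  }).promise()

  return { writeStream: pass, uploadPromise }
}

The routine would then pipe into writeStream and return (or await) uploadPromise after calling archive.finalize().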
