stream uploading a gm-resized image to s3 with aws-sdk


Question


What I want to do is stream an image from a URL, process it with GraphicsMagick, and stream-upload it to S3. I just can't get it working.

Streaming the processed image to local disk (using fs.createWriteStream) works without a problem.
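
For reference, the working local-disk variant looks roughly like this (a minimal sketch; the output path is an assumption):

var fs = require('fs');
var request = require('request');
var gm = require('gm');

gm(request('http://www.some-domain.com/some-image.jpg'), 'my-image.jpg')
  .resize('100^', '100^')
  .stream(function(err, stdout, stderr) {
    // Piping gm's stdout straight to a file on disk produces a valid image.
    stdout.pipe(fs.createWriteStream('/tmp/my-image.jpg'));
  });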

When I buffer my stream, the final image in S3 at least has the expected size (KB-wise), but I cannot open that image.

This is my current progress:

var request = require('request');
var gm = require('gm');
var AWS = require('aws-sdk');
var mime = require('mime');

var s3 = new AWS.S3();

gm(request('http://www.some-domain.com/some-image.jpg'), "my-image.jpg")
  .resize("100^", "100^")
  .stream(function(err, stdout, stderr) {
    var str = '';
    stdout.on('data', function(data) {
       str += data;
    });
    stdout.on('end', function(data) {
      var data = {
        Bucket: "my-bucket",
        Key: "my-image.jpg",
        Body: new Buffer(str, 'binary'), // that's where I'm probably wrong
        ContentType: mime.lookup("my-image.jpg")
      };
      s3.client.putObject(data, function(err, res) {
        console.log("done");
      });
    });
  });

I did try some things like creating a file write stream and a file read stream (see the sketch below), but I think there should be a cleaner, nicer solution to this problem...
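
That detour would look roughly like this (a sketch; the temp path, the 'close' handler, and the ContentLength field are my assumptions, not code from the question):

var fs = require('fs');

gm(request('http://www.some-domain.com/some-image.jpg'), 'my-image.jpg')
  .resize('100^', '100^')
  .stream(function(err, stdout, stderr) {
    var tmp = '/tmp/my-image.jpg';
    var file = fs.createWriteStream(tmp);
    stdout.pipe(file);
    file.on('close', function() {
      s3.client.putObject({
        Bucket: 'my-bucket',
        Key: 'my-image.jpg',
        // A file on disk has a knowable size, unlike gm's stdout socket,
        // so the SDK can set the Content-Length header.
        Body: fs.createReadStream(tmp),
        ContentLength: fs.statSync(tmp).size,
        ContentType: mime.lookup('my-image.jpg')
      }, function(err, res) {
        console.log('done');
      });
    });
  });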

EDIT: The first thing I tried was setting the Body to stdout (the answer suggested by @AndyD):

var data = {
  Bucket: "my-bucket",
  Key: "my-image.jpg",
  Body: stdout,
  ContentType: mime.lookup("my-image.jpg")
};

But that returns the following error:

Cannot determine length of [object Object]

EDIT2:

  • node version: 0.8.6 (I also tried 0.8.22 and 0.10.0)
  • aws-sdk: 0.9.7-pre.8 (installed today)

The complete error:

{ [Error: Cannot determine length of [object Object]]
  message: 'Cannot determine length of [object Object]',
  object:
   { _handle:
      { writeQueueSize: 0,
        owner: [Circular],
        onread: [Function: onread] },
     _pendingWriteReqs: 0,
     _flags: 0,
     _connectQueueSize: 0,
     destroyed: false,
     errorEmitted: false,
     bytesRead: 0,
     _bytesDispatched: 0,
     allowHalfOpen: undefined,
     writable: false,
     readable: true,
     _paused: false,
     _events: { close: [Function], error: [Function: handlerr] } },
  name: 'Error' }

Solution

You don't need to read the stream yourself. In your case you seem to be converting from binary to string and back, because of var str = '' and then appending data (which is a binary Buffer) to it, etc.
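
A quick way to see why that string round-trip corrupts binary data (a small illustrative sketch, not part of the original answer):

// Bytes above 0x7f don't survive an implicit utf8 decode/encode cycle.
var orig = new Buffer([0xff, 0xd8, 0xff, 0xe0]); // JPEG magic bytes
var str = '' + orig;             // implicit Buffer#toString('utf8')
var back = new Buffer(str);      // re-encodes the mangled string as utf8
console.log(orig.length, back.length); // lengths differ: data is corrupted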

Try letting putObject pipe the stream like this:

gm(request('http://www.some-domain.com/some-image.jpg'), "my-image.jpg")
  .resize("100^", "100^")
  .stream(function(err, stdout, stderr) {
    var data = {
      Bucket: "my-bucket",
      Key: "my-image.jpg",
      Body: stdout,
      ContentType: mime.lookup("my-image.jpg")
    };
    s3.client.putObject(data, function(err, res) {
      console.log("done");
    });
  });

See these release notes for more info.

If streaming/piping doesn't work, then something like this might; it will load everything into memory and then upload. You're limited to 4 MB in this case, I think.

var buf = new Buffer('');
stdout.on('data', function(data) {
  buf = Buffer.concat([buf, data]);
});
stdout.on('end', function() {
  var data = {
    Bucket: "my-bucket",
    Key: "my-image.jpg",
    Body: buf,
    ContentType: mime.lookup("my-image.jpg")
  };
  s3.client.putObject(data, function(err, res) {
    console.log("done");
  });
});
