Downsizing image dimensions via pure JS leads to image size inflation (in bytes)


Problem Description



I'm a server-side dev learning the ropes of client side manipulation, starting with pure JS.

Currently I'm using pure JS to resize the dimensions of images uploaded via the browser.

I'm running into a situation where downsizing a 1018 x 1529 .jpg file to a 400 x 601 .jpeg is producing a file with a bigger size (in bytes). It goes from 70013 bytes to 74823 bytes.

My expectation is that there ought to be a size reduction, not inflation. What is going on, and is there any way to patch this kind of a situation?

Note: one point that especially perplexes me is that each image's compression starts without any prior knowledge of the target's previous compressions. Thus, any quality level below 100 should further degrade the image. This should accordingly always decrease the file size. But that strangely doesn't happen?


If required, my relevant JS code is:

var max_img_width = 400;
var wranges = [max_img_width, Math.round(0.8*max_img_width), Math.round(0.6*max_img_width),Math.round(0.4*max_img_width),Math.round(0.2*max_img_width)];

function prep_image(img_src, text, img_name, target_action, callback) { 
    var img = document.createElement('img');
    var fr = new FileReader();
    fr.onload = function(){
      var dataURL = fr.result;
      img.onload = function() {
          var img_width = this.width;
          var img_height = this.height;
          var img_to_send = resize_and_compress(this, img_width, img_height, "image/jpeg");
          callback(text, img_name, target_action, img_to_send);
        }
      img.src = dataURL;
    };
    fr.readAsDataURL(img_src);
}


function resize_and_compress(source_img, img_width, img_height, mime_type){
    var new_width;
    switch (true) {
      case img_width < wranges[4]:
         new_width = wranges[4];
         break;
      case img_width < wranges[3]:
         new_width = wranges[4];
         break;
      case img_width < wranges[2]:
         new_width = wranges[3];
         break;
      case img_width < wranges[1]:
         new_width = wranges[2];
         break;
      case img_width < wranges[0]:
         new_width = wranges[1];
         break;
      default:
         new_width = wranges[0];
         break;
    }
    var wpercent = (new_width/img_width);
    var new_height = Math.round(img_height*wpercent);
    var canvas = document.createElement('canvas');//supported
    canvas.width = new_width;
    canvas.height = new_height;
    var ctx = canvas.getContext("2d");
    ctx.drawImage(source_img, 0, 0, new_width, new_height);
    return dataURItoBlob(canvas.toDataURL(mime_type),mime_type);
}

// converting image data uri to a blob object
function dataURItoBlob(dataURI,mime_type) {
  var byteString = atob(dataURI.split(',')[1]);
  var ab = new ArrayBuffer(byteString.length);
  var ia = new Uint8Array(ab);//supported
  for (var i = 0; i < byteString.length; i++) { ia[i] = byteString.charCodeAt(i); }
  return new Blob([ab], { type: mime_type });
}

If warranted, here's the test image I've used:

Here's the image's original location.

Note that for several other images I tried, the code did behave as expected. It doesn't always screw up the results, but now I can't be sure that it'll always work. Let's stick to pure JS solutions for the scope of this question.

Solution

Why canvas is not the best option for shrinking an image's file size.

I won't go into too much detail or in-depth explanations, but I will try to explain the basics of what you encountered.

Here are a few concepts you need to understand (at least partially).

  • What is a lossy image format (like JPEG)
  • What happens when you draw an image to a canvas
  • What happens when you export a canvas image to an image format

Lossy image formats

Image formats can be divided into three categories:

  • raw image formats
  • lossless image formats (tiff, png, gif, bmp, webp ...)
  • lossy image formats (jpeg, ...)

Lossless image formats generally compress the data into a table mapping pixel colors to the pixel positions where each color is used.
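The table-mapping idea can be sketched with a toy run-length encoder. This is illustrative only: real PNG/GIF compression uses filters plus DEFLATE or LZW, and the function names here are mine.

```javascript
// Toy lossless compression: collapse runs of identical pixel values into
// [value, count] pairs. Decoding reproduces the input exactly -- no data
// is discarded, which is the defining property of a lossless format.
function rleEncode(pixels) {
  var runs = [];
  for (var i = 0; i < pixels.length; i++) {
    var last = runs[runs.length - 1];
    if (last && last[0] === pixels[i]) {
      last[1]++;
    } else {
      runs.push([pixels[i], 1]);
    }
  }
  return runs;
}

// Inverse operation: expand each [value, count] pair back into a run.
function rleDecode(runs) {
  var out = [];
  runs.forEach(function (run) {
    for (var i = 0; i < run[1]; i++) out.push(run[0]);
  });
  return out;
}
```

A flat-colored row such as `['red', 'red', 'red', 'blue']` encodes to `[['red', 3], ['blue', 1]]` and decodes back bit-for-bit; a noisy row with no repetition can encode larger than the input, which is also true of real lossless formats.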

On the other hand, lossy image formats will discard information and produce approximations of the data (artifacts) from the raw image in order to create a perceptually similar rendering using less data.

Approximation (artifacts) works because the decompression algorithm knows it will have to spread the color information over a given area, and thus it doesn't have to keep every pixel's information.

But once the algorithm has treated the raw image and produced the new one, there is no way to recover the lost data.
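The one-way nature of the loss can be mimicked with a toy quantizer. This is a stand-in of my own for JPEG's DCT-coefficient quantization, not the actual algorithm:

```javascript
// Toy lossy step: snap 8-bit channel values to multiples of `step`.
// Distinct inputs collapse onto one output value, so no decoder can tell
// which original produced it -- the loss is irreversible.
function quantize(values, step) {
  return values.map(function (v) {
    return Math.round(v / step) * step;
  });
}

quantize([100, 103, 98], 16); // → [96, 96, 96]; the originals are unrecoverable
```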


Drawing an image to the canvas.

When you draw an image on a canvas, the browser converts the image information to a raw image format.
It won't store any information about what image format was passed to it, and in the case of a lossy image, every pixel contained in the artifacts becomes a first-class citizen, just like every other pixel.


Exporting a canvas image

The canvas 2D API has three methods to export its raw data:

  • getImageData, which returns the raw pixels' RGBA values
  • toDataURL, which synchronously applies the compression algorithm corresponding to the MIME type you pass as an argument
  • toBlob, similar to toDataURL, but asynchronous

The cases we are interested in are toDataURL and toBlob, used with the "image/jpeg" MIME type.
Remember that when calling these methods, the browser only sees the current raw pixel data it has on the canvas. So it will apply the JPEG algorithm once again, removing some data and producing new approximations (artifacts) from this raw image.

So, yes, there is a 0 to 1 quality parameter available for lossy compression in these methods, so one could think we could try to find out what loss level was used to generate the original image. But even then, since we actually produced new image data in the drawing-to-canvas step, the algorithm might not be able to produce a good spreading scheme for these artifacts.
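If you do re-export, you can at least pass the quality argument explicitly instead of relying on the browser-defined default. A minimal browser-side sketch; `canvasToJpegBlob` and `clampQuality` are my own helper names, not part of any API:

```javascript
// Clamp quality to [0, 1]; per the HTML spec, out-of-range values make the
// browser silently fall back to its own default quality.
function clampQuality(q) {
  return Math.min(1, Math.max(0, q));
}

// Promise wrapper around canvas.toBlob with an explicit JPEG quality.
// (Browser-only: needs a real <canvas> element.)
function canvasToJpegBlob(canvas, quality) {
  return new Promise(function (resolve) {
    canvas.toBlob(resolve, 'image/jpeg', clampQuality(quality));
  });
}
```

In the question's resize_and_compress this would replace the toDataURL + dataURItoBlob round-trip, avoiding the base64 detour entirely, though it cannot avoid the re-compression loss described above.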

Another thing to take into consideration, mostly for toDataURL, is that browsers have to be as fast as possible when doing these operations, so they generally prefer speed over compression quality.


Alright, the canvas is not good for it. What then?

It's not so easy for jpeg images... jpegtran claims it can do lossless scaling of your jpeg images, so I guess it should be possible to make a JS port too, but I don't know of any...



Special note about lossless formats

Note that your resizing algorithm can also produce bigger png files. Here is an example case, but I'll let the reader guess why this happens:

var ctx= c.getContext('2d');
c.width = 501;
for(var i = 0; i<500; i+=10) {
  ctx.moveTo(i+.5, 0);
  ctx.lineTo(i+.5, 150);
}
ctx.stroke();

c.toBlob(b=>console.log('original', b.size));

c2.width = 500;
c2.height = (500 / 501) * c.height;
c2.getContext('2d').drawImage(c, 0, 0, c2.width, c2.height);
c2.toBlob(b=>console.log('resized', b.size));

<canvas id="c"></canvas>
<canvas id="c2"></canvas>
