Reducing color depth in an image is not reducing the file size?


Question


I use this code to reduce the depth of an image:

public void ApplyDecreaseColourDepth(int offset)
{
    int A, R, G, B;

    Color pixelColor;

    for (int y = 0; y < bitmapImage.Height; y++)
    {
        for (int x = 0; x < bitmapImage.Width; x++)
        {
            pixelColor = bitmapImage.GetPixel(x, y);

            A = pixelColor.A;

            R = ((pixelColor.R + (offset / 2)) - ((pixelColor.R + (offset / 2)) % offset) - 1);

            if (R < 0)
            {
                R = 0;
            }

            G = ((pixelColor.G + (offset / 2)) - ((pixelColor.G + (offset / 2)) % offset) - 1);

            if (G < 0)
            {
                G = 0;
            }

            B = ((pixelColor.B + (offset / 2)) - ((pixelColor.B + (offset / 2)) % offset) - 1);

            if (B < 0)
            {
                B = 0;
            }

            bitmapImage.SetPixel(x, y, Color.FromArgb(A, R, G, B));
        }
    }
}

The first question is: the offset that I pass to the function is not the depth, is that right?

The second is that when I try to save the image after reducing its color depth, I get the same file size as the original image. Isn't it logical that I should get a smaller file, or am I wrong?

This is the code that I use to save the modified image:

private Bitmap bitmapImage;

public void SaveImage(string path)
{
    bitmapImage.Save(path);
} 

Solution

Let's start by cleaning up the code a bit. The following pattern:

R = ((pixelColor.R + (offset / 2)) - ((pixelColor.R + (offset / 2)) % offset) - 1);
if (R < 0)
{
    R = 0;
}

is equivalent to this:

R = Math.Max(0, (pixelColor.R + offset / 2) / offset * offset - 1);
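
The equivalence is easy to spot-check. Here is a throwaway Python sketch (illustrative only, not part of the original C#) that mirrors the integer arithmetic and compares both forms over every byte value:

```python
# Mirror of the C# integer arithmetic; both forms clamp negatives to 0.
def long_form(p, offset):
    r = (p + offset // 2) - ((p + offset // 2) % offset) - 1
    return max(0, r)

def short_form(p, offset):
    return max(0, (p + offset // 2) // offset * offset - 1)

# The two expressions agree for every byte value and typical offsets,
# because s - (s % o) is exactly (s // o) * o for non-negative s.
for offset in (2, 4, 8, 16, 32, 64):
    for p in range(256):
        assert long_form(p, offset) == short_form(p, offset)
print("equivalent")  # reached only if every pair matched
```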

You can thus simplify your function to this:

public void ApplyDecreaseColourDepth(int offset)
{
    for (int y = 0; y < bitmapImage.Height; y++)
    {
        for (int x = 0; x < bitmapImage.Width; x++)
        {
            Color pixelColor = bitmapImage.GetPixel(x, y);

            int A = pixelColor.A;

            int R = Math.Max(0, (pixelColor.R + offset / 2) / offset * offset - 1);
            int G = Math.Max(0, (pixelColor.G + offset / 2) / offset * offset - 1);
            int B = Math.Max(0, (pixelColor.B + offset / 2) / offset * offset - 1);

            bitmapImage.SetPixel(x, y, Color.FromArgb(A, R, G, B));
        }
    }
}

To answer your questions:

  1. Correct; the offset is the size of the steps in the step function. The depth per color component is the original depth minus log2(offset). For example, if the original image has a depth of eight bits per component (bpc) and the offset is 16, then the depth of each component is 8 - log2(16) = 8 - 4 = 4 bpc. Note, however, that this only indicates how much entropy each output component can hold, not how many bits per component will actually be used to store the result.
  2. The size of the output file depends on the stored color depth and the compression used. Simply reducing the number of distinct values each component can have won't automatically result in fewer bits being used per component, so an uncompressed image won't shrink unless you explicitly choose an encoding that uses fewer bits per component. If you are saving a compressed format such as PNG, you might see an improvement with the transformed image, or you might not; it depends on the content of the image. Images with a lot of flat untextured areas, such as line art drawings, will see negligible improvement, whereas photos will probably benefit noticeably from the transform (albeit at the expense of perceptual quality).
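
Both points can be illustrated with a short Python sketch (illustrative only; it mirrors the C# arithmetic on raw bytes, a synthetic random buffer stands in for photo-like data, and `zlib` stands in for PNG's DEFLATE):

```python
import math
import random
import zlib

def quantize(p, offset):
    # Same arithmetic as the Math.Max one-liner above.
    return max(0, (p + offset // 2) // offset * offset - 1)

# Point 1: with offset = 16 the 256 possible byte values collapse to
# roughly 2**4 levels (17 here, because the clamp at 0 adds one extra
# level on top of the 4-bpc entropy estimate).
levels = {quantize(p, 16) for p in range(256)}
print(len(levels), 8 - int(math.log2(16)))  # 17 levels vs. 4 bpc

# Point 2: quantizing does not shrink the raw buffer at all, but the
# reduced symbol alphabet does help a general-purpose compressor.
random.seed(0)
raw = bytes(random.randrange(256) for _ in range(4096))  # noisy data
quant = bytes(quantize(b, 16) for b in raw)
assert len(raw) == len(quant)  # uncompressed size is unchanged
print(len(zlib.compress(quant)) < len(zlib.compress(raw)))  # True
```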
