Do I need to gamma correct the final color output on a modern computer/monitor


I've been under the assumption that my gamma correction pipeline should be as follows:

  • Use sRGB format for all textures loaded in (GL_SRGB8_ALPHA8) as all art programs pre-gamma correct their files. When sampling from a GL_SRGB8_ALPHA8 texture in a shader OpenGL will automatically convert to linear space.
  • Do all lighting calculations, post processing, etc. in linear space.
  • Convert back to sRGB space when writing final color that will be displayed on the screen.

Note that in my case the final color write involves me writing from a FBO (which is a linear RGB texture) to the back buffer.

My assumption has been challenged: if I gamma correct in the final stage, my colors are brighter than they should be. I set up for a solid color of value { 255, 106, 0 } to be drawn by my lights, but when I render I get { 255, 171, 0 } (as determined by print-screening and color picking). Instead of orange I get yellow. If I don't gamma correct at the final step I get exactly the right value of { 255, 106, 0 }.

According to some resources, modern LCD screens mimic CRT gamma. Do they always? If not, how can I tell whether I should gamma correct? Am I going wrong somewhere else?


Edit 1

I've now noticed that even though the color I write with the light is correct, places where I use colors from textures are not correct (but rather far darker as I would expect without gamma correction). I don't know where this disparity is coming from.


Edit 2

After trying GL_RGBA8 for my textures instead of GL_SRGB8_ALPHA8, everything looks perfect, even when using the texture values in lighting computations (if I halve the intensity of the light, the output color values are halved).

My code is no longer taking gamma correction into account anywhere, and my output looks correct.

This confuses me even more, is gamma correction no longer needed/used?


Edit 3 - In response to datenwolf's answer

After some more experimenting I'm confused on a couple points here.

1 - Most image formats are stored non-linearly (in sRGB space)

I've loaded a few images (in my case both .png and .bmp images) and examined the raw binary data. It appears to me as though the images are actually in the RGB color space, as if I compare the values of pixels with an image editing program with the byte array I get in my program they match up perfectly. Since my image editor is giving me RGB values, this would indicate the image stored in RGB.

I'm using stb_image.h/.c to load my images and followed it all the way through loading a .png and did not see anywhere that it gamma corrected the image while loading. I also examined the .bmps in a hex editor and the values on disk matched up for them.

If these images are actually stored on disk in linear RGB space, how am I supposed to (programmatically) know when to specify an image is in sRGB space? Is there some way to query for this that a more featured image loader might provide? Or is it up to the image creators to save their image as gamma corrected (or not), meaning establishing a convention and following it for a given project? I've asked a couple artists and neither of them knew what gamma correction is.
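For PNG there is in fact queryable metadata: the format defines optional `sRGB`, `gAMA` and `iCCP` chunks that declare the color space, although many files omit them (and stb_image ignores them), so a per-project convention remains the pragmatic answer. As an illustrative sketch, a scanner over the PNG chunk layout (hypothetical function name, CRCs not validated) could look like:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Return 1 if a PNG byte stream contains an sRGB chunk, 0 otherwise.
 * PNG layout: 8-byte signature, then chunks of
 * length(4, big endian) + type(4) + data(length) + CRC(4). */
int png_declares_srgb(const unsigned char *buf, size_t len)
{
    size_t pos = 8;                               /* skip the signature */
    while (pos + 12 <= len) {
        uint32_t clen = (uint32_t)buf[pos]     << 24
                      | (uint32_t)buf[pos + 1] << 16
                      | (uint32_t)buf[pos + 2] << 8
                      |            buf[pos + 3];
        if (memcmp(buf + pos + 4, "sRGB", 4) == 0)
            return 1;
        if (memcmp(buf + pos + 4, "IDAT", 4) == 0)
            break;    /* color space chunks must precede the image data */
        pos += 12 + clen;
    }
    return 0;
}
```

A full-featured loader such as libpng exposes the same information through its API.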

If I specify my images are sRGB, they are too dark unless I gamma correct in the end (which would be understandable if the monitor output using sRGB, but see point #2).

2 - "On most computers the effective scanout LUT is linear! What does this mean though?"

I'm not sure I can find where this thought is finished in your response.

From what I can tell, having experimented, all monitors I've tested on output linear values. If I draw a full screen quad and color it with a hard-coded value in a shader with no gamma correction the monitor displays the correct value that I specified.

What the sentence I quoted above from your answer and my results would lead me to believe is that modern monitors output linear values (i.e. do not emulate CRT gamma).

The target platform for our application is the PC. For this platform (excluding people with CRTs or really old monitors), would it be reasonable to do whatever your response to #1 is, then for #2 to not gamma correct (i.e. not perform the final RGB->sRGB transformation - either manually or using GL_FRAMEBUFFER_SRGB)?

If this is so, what are the platforms on which GL_FRAMEBUFFER_SRGB is meant for (or where it would be valid to use it today), or are monitors that use linear RGB really that new (given that GL_FRAMEBUFFER_SRGB was introduced 2008)?

--

I've talked to a few other graphics devs at my school and from the sounds of it, none of them have taken gamma correction into account and they have not noticed anything incorrect (some were not even aware of it). One dev in particular said that he got incorrect results when taking gamma into account so he then decided to not worry about gamma. I'm unsure what to do in my project for my target platform given the conflicting information I'm getting online/seeing with my project.


Edit 4 - In response to datenwolf's updated answer

Yes, indeed. If somewhere in the signal chain a nonlinear transform is applied, but all the pixel values go unmodified from the image to the display, then that nonlinearity has already been pre-applied on the image's pixel values. Which means, that the image is already in a nonlinear color space.

Your response would make sense to me if I was examining the image on my display. To be sure I was clear, when I said I was examining the byte array for the image I mean I was examining the numerical value in memory for the texture, not the image output on the screen (which I did do for point #2). To me the only way I could see what you're saying to be true then is if the image editor was giving me values in sRGB space.

Also note that I did try examining the output on monitor, as well as modifying the texture color (for example, dividing by half or doubling it) and the output appeared correct (measured using the method I describe below).

How did you measure the signal response?

Unfortunately my methods of measurement are far cruder than yours. When I said I experimented on my monitors, what I meant was that I output a solid-color full-screen quad, whose color was hard-coded in a shader, to a plain OpenGL framebuffer (which does not do any color space conversion when written to). When I output white, 75% gray, 50% gray, 25% gray and black, the correct colors are displayed. Now, my interpretation of "correct colors" could most certainly be wrong. I take a screenshot and then use an image editing program to see what the values of the pixels are (as well as a visual appraisal to make sure the values make sense). If I understand correctly, if my monitors were non-linear I would need to perform an RGB->sRGB transformation before presenting values to the display device for them to be correct.

I'm not going to lie, I feel I'm getting a bit out of my depth here. I'm thinking the solution I might pursue for my second point of confusion (the final RGB->sRGB transformation) will be a tweakable brightness setting, defaulted to what looks correct on my devices (no gamma correction).

Solution

First of all you must understand that the nonlinear mapping applied to the color channels is often more than just a simple power function. The sRGB nonlinearity can be approximated by roughly x^2.4, but that's not really the real deal: the actual curve is piecewise, with a linear segment near black. Anyway, your primary assumptions are more or less correct.

If your textures are stored in the more common image file formats, they will contain the values as they are presented to the graphics scanout. Now there are two common hardware scenarios:

  • The scanout interface outputs a linear signal and the display device will then internally apply a nonlinear mapping. Old CRT monitors were nonlinear due to their physics: The amplifiers could put only so much current into the electron beam, the phosphor saturating and so on – that's why the whole gamma thing was introduced in the first place, to model the nonlinearities of CRT displays.

  • Modern LCD and OLED displays either use resistor ladders in their driver amplifiers, or they have gamma ramp lookup tables in their image processors.

  • Some devices however are linear, and ask the image producing device to supply a proper matching LUT for the desired output color profile on the scanout.

On most computers the effective scanout LUT is linear! What does this mean though? A little detour:


For illustration I quickly hooked up my laptop's analogue display output (VGA connector) to my analogue oscilloscope: blue channel onto scope channel 1, green channel onto scope channel 2, external triggering on the line synchronization signal (HSync). A quick and dirty OpenGL program, deliberately written in immediate mode, was used to generate a linear color ramp:

#include <GL/glut.h>

void display()
{
    GLint win_width  = glutGet(GLUT_WINDOW_WIDTH);   /* glutGet returns int */
    GLint win_height = glutGet(GLUT_WINDOW_HEIGHT);

    glViewport(0,0, win_width, win_height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, 1, 0, 1, -1, 1);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glBegin(GL_QUAD_STRIP);
        glColor3f(0., 0., 0.);
        glVertex2f(0., 0.);
        glVertex2f(0., 1.);
        glColor3f(1., 1., 1.);
        glVertex2f(1., 0.);
        glVertex2f(1., 1.);
    glEnd();

    glutSwapBuffers();
}

int main(int argc, char *argv[])
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE);

    glutCreateWindow("linear");
    glutFullScreen();
    glutDisplayFunc(display);

    glutMainLoop();

    return 0;
}

The graphics output was configured with the Modeline

"1440x900_60.00"  106.50  1440 1528 1672 1904  900 903 909 934 -HSync +VSync

(because that's the same mode the flat panel runs in, and I was using cloning mode)

Then two different scanout LUTs were installed on the color channels through the graphics driver's gamma-ramp controls:

  • a gamma=2 LUT on the green channel
  • a linear (gamma=1) LUT on the blue channel

This is what the signals of a single scanout line look like (upper curve: Ch2 = green, lower curve: Ch1 = blue):

You can clearly see the x⟼x² and x⟼x mappings (parabola and linear shapes of the curves).


Now, after this little detour, we know that the pixel values which go to the main framebuffer go there as they are: the OpenGL linear ramp underwent no further changes, and only where a nonlinear scanout LUT was applied did it alter the signal sent to the display.

Either way, the values you present to the scanout (which means the on-screen framebuffers) will undergo a nonlinear mapping at some point in the signal chain. And for all standard consumer devices this mapping will be according to the sRGB standard, because it's the lowest common denominator (i.e. images represented in the sRGB color space can be reproduced on most output devices).

Since most programs, like web browsers, assume the output to undergo an sRGB-to-display color space mapping, they simply copy the pixel values of the standard image file formats to the on-screen frame as they are, without performing a color space conversion, thereby implying that the color values within those images are in sRGB color space (or they will often merely convert to sRGB if the image's color profile is not sRGB). The correct thing to do (if, and only if, the color values written to the framebuffer are scanned out to the display unaltered, assuming the scanout LUT is part of the display) would be a conversion to the color profile the display expects.

But this implies that the on-screen framebuffer itself is in sRGB color space (I don't want to split hairs about how idiotic that is, let's just accept this fact).

How to bring this together with OpenGL? First of all, OpenGL does all its color operations linearly. However, since the scanout is expected to be in some nonlinear color space, this means that the end result of OpenGL's rendering operations somehow must be brought into the on-screen framebuffer color space.

This is where the ARB_framebuffer_sRGB extension (which went core with OpenGL-3) enters the picture, which introduced new flags used for the configuration of window pixelformats:

New Tokens

    Accepted by the <attribList> parameter of glXChooseVisual, and by
    the <attrib> parameter of glXGetConfig:

        GLX_FRAMEBUFFER_SRGB_CAPABLE_ARB             0x20B2

    Accepted by the <piAttributes> parameter of
    wglGetPixelFormatAttribivEXT, wglGetPixelFormatAttribfvEXT, and
    the <piAttribIList> and <pfAttribIList> of wglChoosePixelFormatEXT:

        WGL_FRAMEBUFFER_SRGB_CAPABLE_ARB             0x20A9

    Accepted by the <cap> parameter of Enable, Disable, and IsEnabled,
    and by the <pname> parameter of GetBooleanv, GetIntegerv, GetFloatv,
    and GetDoublev:

        FRAMEBUFFER_SRGB                             0x8DB9

So if you have a window configured with such an sRGB pixelformat and enable sRGB rasterization mode in OpenGL with glEnable(GL_FRAMEBUFFER_SRGB);, the result of the linear colorspace rendering operations will be transformed into sRGB color space.

Another way would be to render everything into an off-screen FBO and do the color conversion in a postprocessing shader.

But that's only the output side of the rendering signal chain. You also have input signals, in the form of textures. And those are usually images, with their pixel values stored nonlinearly. So before those can be used in linear image operations, such images must be brought into a linear color space first. Let's just ignore for the time being that mapping nonlinear color spaces into linear color spaces opens several cans of worms by itself; that is why the sRGB color space is so ridiculously small, namely to avoid those problems.

So to address this an extension EXT_texture_sRGB was introduced, which turned out to be so vital, that it never went through being ARB, but went straight into the OpenGL specification itself: Behold the GL_SRGB… internal texture formats.

A texture loaded with this format undergoes a sRGB to linear RGB colorspace transformation, before being used to source samples. This gives linear pixel values, suitable for linear rendering operations, and the result can then be validly transformed to sRGB when going to the main on-screen framebuffer.



A personal note on the whole issue: Presenting images on the on-screen framebuffer in the target device color space IMHO is a huge design flaw. There's no way to do everything right in such a setup without going insane.

What one really wants is to have the on-screen framebuffer in a linear, contact color space; the natural choice would be CIEXYZ. Rendering operations would naturally take place in the same contact color space. Doing all graphics operations in contact color spaces, avoids the opening of the aforementioned cans-of-worms involved with trying to push a square peg named linear RGB through a nonlinear, round hole named sRGB.

And although I don't like the design of Weston/Wayland very much, at least it offers the opportunity to actually implement such a display system, by having the clients render and the compositor operate in contact color space and apply the output device's color profiles in a last postprocessing step.

The only drawback of contact color spaces is that it's imperative to use deep color (i.e. > 12 bits per color channel). In fact 8 bits are completely insufficient, even with nonlinear RGB (the nonlinearity helps a bit to cover up the lack of perceptible resolution).


Update

I've loaded a few images (in my case both .png and .bmp images) and examined the raw binary data. It appears to me as though the images are actually in the RGB color space, as if I compare the values of pixels with an image editing program with the byte array I get in my program they match up perfectly. Since my image editor is giving me RGB values, this would indicate the image stored in RGB.

Yes, indeed. If somewhere in the signal chain a nonlinear transform is applied, but all the pixel values go unmodified from the image to the display, then that nonlinearity has already been pre-applied on the image's pixel values. Which means, that the image is already in a nonlinear color space.

2 - "On most computers the effective scanout LUT is linear! What does this mean though?"

I'm not sure I can find where this thought is finished in your response.

This thought is elaborated in the section that immediately follows, where I show how the values you put into a plain (OpenGL) framebuffer go directly to the monitor, unmodified. The idea of sRGB is "put the values into the images exactly as they are sent to the monitor and build consumer displays to follow that sRGB color space".

From what I can tell, having experimented, all monitors I've tested on output linear values.

How did you measure the signal response? Did you use a calibrated power meter or similar device to measure the light intensity emitted from the monitor in response to the signal? You can't trust your eyes with that, because like all our senses our eyes have a logarithmic signal response.


Update 2

To me the only way I could see what you're saying to be true then is if the image editor was giving me values in sRGB space.

That's indeed the case. Because color management was added to all the widespread graphics systems as an afterthought, most image editors edit pixel values in their destination color space. Note that one particular design parameter of sRGB was that it should merely retroactively specify the unmanaged, direct-value-transfer color operations as they were (and mostly still are) done on consumer devices. Since there happens no color management at all, the values contained in the images and manipulated in editors must be in sRGB already. This works as long as images are not synthetically created in a linear rendering process; in the latter case the render system has to take the destination color space into account.

I take a screenshot and then use an image editing program to see what the values of the pixels are

Which gives you of course only the raw values in the scanout buffer without the gamma LUT and the display nonlinearity applied.

