Reflection/refraction with chromatic aberration - eye correction

Problem description

I am writing a GLSL shader that simulates chromatic aberration for simple objects. I am staying OpenGL 2.0 compatible, so I use the built-in OpenGL matrix stack. This is the simple vertex shader:

uniform vec3 cameraPos;

varying vec3 incident;
varying vec3 normal;

void main(void) {
    vec4 position = gl_ModelViewMatrix * gl_Vertex;
    incident = position.xyz / position.w - cameraPos;
    normal   = gl_NormalMatrix * gl_Normal;

    gl_Position = ftransform();
}

The cameraPos uniform is the position of the camera in model space, as one might imagine. Here is the fragment shader:

const float etaR = 1.14;
const float etaG = 1.12;
const float etaB = 1.10;
const float fresnelPower = 2.0;
const float F = ((1.0 - etaG) * (1.0 - etaG)) / ((1.0 + etaG) * (1.0 + etaG));

uniform samplerCube environment;

varying vec3 incident;
varying vec3 normal;

void main(void) {
    vec3 i = normalize(incident);
    vec3 n = normalize(normal);

    float ratio = F + (1.0 - F) * pow(1.0 - dot(-i, n), fresnelPower);

    vec3 refractR = vec3(gl_TextureMatrix[0] * vec4(refract(i, n, etaR), 1.0));
    vec3 refractG = vec3(gl_TextureMatrix[0] * vec4(refract(i, n, etaG), 1.0));
    vec3 refractB = vec3(gl_TextureMatrix[0] * vec4(refract(i, n, etaB), 1.0));

    vec3 reflectDir = vec3(gl_TextureMatrix[0] * vec4(reflect(i, n), 1.0));

    vec4 refractColor;
    refractColor.ra = textureCube(environment, refractR).ra;
    refractColor.g  = textureCube(environment, refractG).g;
    refractColor.b  = textureCube(environment, refractB).b;

    vec4 reflectColor;
    reflectColor    = textureCube(environment, reflectDir);

    vec3 combinedColor = mix(refractColor, reflectColor, ratio).rgb;

    gl_FragColor = vec4(combinedColor, 1.0);
}

The environment is a cube map that is rendered live from the drawn object's environment.

Under normal circumstances, the shader behaves (I think) as expected, yielding this result:

However, when the camera is rotated 180 degrees around its target, so that it now points at the object from the other side, the refracted/reflected image gets warped like so (this happens gradually for angles between 0 and 180 degrees, of course):

Similar artifacts appear when the camera is lowered/raised; it only seems to behave 100% correctly when the camera is directly over the target object (pointing towards negative Z, in this case).

I am having trouble figuring out which transformation in the shader is responsible for this warped image, but it should be something obvious related to how cameraPos is handled. What is causing the image to warp itself in this way?

Solution

This looks suspect to me:

vec4 position = gl_ModelViewMatrix * gl_Vertex;
incident = position.xyz / position.w - cameraPos;

Is your cameraPos defined in world space? You're subtracting a view space vector (position) from a supposedly world space cameraPos vector. You either need to do the calculation in world space or view space, but you can't mix them.
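For example, staying entirely in view space is the smaller change: in view space the camera sits at the origin, so the incident vector is simply the view-space vertex position and no cameraPos uniform is needed at all. A minimal sketch of that variant (the fragment shader is unchanged; it assumes gl_TextureMatrix[0] is loaded so that it takes the resulting view-space directions into the space the cube map was rendered in):

varying vec3 incident;
varying vec3 normal;

void main(void) {
    // View-space variant (sketch): the eye is at the origin here, so the
    // vector from the camera to the vertex is just the vertex position.
    vec4 position = gl_ModelViewMatrix * gl_Vertex;
    incident = position.xyz / position.w;
    normal   = gl_NormalMatrix * gl_Normal;  // gl_NormalMatrix is view-space as well

    gl_Position = ftransform();
}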

To do this correctly in world space you'll have to upload the model matrix separately to get the world space incident vector.
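A minimal sketch of that world-space route, assuming the application uploads the model (object-to-world) matrix under a hypothetical modelMatrix uniform and now passes cameraPos in world space (the uniform name is illustrative, not from the original code):

uniform mat4 modelMatrix;  // assumed extra uniform: the object's model matrix
uniform vec3 cameraPos;    // camera position, now consistently in world space

varying vec3 incident;
varying vec3 normal;

void main(void) {
    // World-space variant (sketch): both operands of the subtraction are in
    // world space, so the incident vector no longer mixes spaces.
    vec4 worldPos = modelMatrix * gl_Vertex;
    incident = worldPos.xyz / worldPos.w - cameraPos;

    // Rotating the normal by the model matrix is fine for rotations and
    // uniform scaling; with non-uniform scaling, use the inverse-transpose
    // of its upper-left 3x3 instead.
    normal = (modelMatrix * vec4(gl_Normal, 0.0)).xyz;

    gl_Position = ftransform();
}

With incident and normal both in world space, the reflect/refract directions come out in world space too, so gl_TextureMatrix[0] would presumably reduce to the identity (or only account for how the cube map itself is oriented).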
