Calculate clipspace.w from clipspace.xyz and (inv) projection matrix

Problem Description

I'm using a logarithmic depth algorithm which results in someFunc(clipspace.z) being written to the depth buffer and no implicit perspective divide.

I'm doing RTT / postprocessing so later on in a fragment shader I want to recompute eyespace.xyz, given ndc.xy (from the fragment coordinates) and clipspace.z (from someFuncInv() on the value stored in the depth buffer).

Note that I do not have clipspace.w, and my stored value is not clipspace.z / clipspace.w (as it would be when using fixed function depth) - so something along the lines of ...

float clip_z = ...; /* [-1 .. +1] */
vec2 ndc = vec2(gl_FragCoord.xy / viewport * 2.0 - 1.0);
vec4 clipspace = InvProjMatrix * vec4(ndc, clip_z, 1.0);
clipspace /= clipspace.w;

... does not work here.

So is there a way to calculate clipspace.w out of clipspace.xyz, given the projection matrix or its inverse?

Answer

clipspace.xy = gl_FragCoord.xy / viewport * 2.0 - 1.0;

This is wrong in terms of nomenclature. "Clip space" is the space that the vertex shader (or whatever the last Vertex Processing stage is) outputs. Between clip space and window space is normalized device coordinate (NDC) space. NDC space is clip space divided by the clip space W coordinate:

vec3 ndcspace = clipspace.xyz / clipspace.w;

So the first step is to take our window space coordinates and get NDC space coordinates. Which is easy:

vec3 ndcspace = vec3(gl_FragCoord.xy / viewport * 2.0 - 1.0, depth);

Now, I'm going to assume that your depth value is the proper NDC-space depth. I'm assuming that you fetch the value from a depth texture, then used the depth range near/far values it was rendered with to map it into a [-1, 1] range. If you didn't, you should.
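
For example, with the default depth range of glDepthRange(0, 1), that remapping is just a scale and bias. A minimal sketch, assuming the stored value is sampled from a depth texture (depth_texture and uv are illustrative names; in this question's setup the sampled value would first go through someFuncInv()):

float stored = texture(depth_texture, uv).r;  /* window-space depth, in [0, 1] by default */
float depth = stored * 2.0 - 1.0;             /* remap [0, 1] to the NDC [-1, 1] range */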

So, now that we have ndcspace, how do we compute clipspace? Well, that's obvious:

vec4 clipspace = vec4(ndcspace * clipspace.w, clipspace.w);

Obvious and... not helpful, since we don't have clipspace.w. So how do we get it?

To get this, we need to look at how clipspace was computed the first time:

vec4 clipspace = Proj * cameraspace;

This means that clipspace.w is computed by taking the dot product of cameraspace with the fourth row of Proj.

Well, that's not very helpful. It gets more helpful if we actually look at the fourth row of Proj. Granted, you could be using any projection matrix, and if you're not using the typical projection matrix, this computation becomes more difficult (potentially impossible).

The fourth row of Proj, using the typical projection matrix, is really just this:

[0, 0, -1, 0]

This means that clipspace.w is really just -cameraspace.z. How does that help us?

It helps by remembering this:

ndcspace.z = clipspace.z / clipspace.w;
ndcspace.z = clipspace.z / -cameraspace.z;

Well, that's nice, but it just trades one unknown for another; we still have an equation with two unknowns (clipspace.z and cameraspace.z). However, we do know something else: clipspace.z comes from dot-producting cameraspace with the third row of our projection matrix. The traditional projection matrix's third row looks like this:

[0, 0, T1, T2]

Where T1 and T2 are non-zero numbers. We'll ignore what these numbers are for the time being. Therefore, clipspace.z is really just T1 * cameraspace.z + T2 * cameraspace.w. And if we know cameraspace.w is 1.0 (as it usually is), then we can remove it:

ndcspace.z = (T1 * cameraspace.z + T2) / -cameraspace.z;

So, we still have a problem. Actually, we don't. Why? Because there is only one unknown in this equation. Remember: we already know ndcspace.z. We can therefore use ndcspace.z to compute cameraspace.z:

ndcspace.z = -T1 + (-T2 / cameraspace.z);
ndcspace.z + T1 = -T2 / cameraspace.z;
cameraspace.z = -T2 / (ndcspace.z + T1);
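
In code, T1 and T2 can be read straight out of the projection matrix. A minimal sketch, assuming the standard OpenGL perspective matrix and GLSL's column-major mat4 indexing (Proj[column][row]):

float T1 = Proj[2][2];  /* -(far + near) / (far - near) in the standard perspective matrix */
float T2 = Proj[3][2];  /* -(2.0 * far * near) / (far - near) */
float cameraspace_z = -T2 / (ndcspace.z + T1);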

T1 and T2 come right out of our projection matrix (the one the scene was originally rendered with). And we already have ndcspace.z. So we can compute cameraspace.z. And we know that:

clipspace.w = -cameraspace.z;

Therefore, we can do this:

vec4 clipspace = vec4(ndcspace * clipspace.w, clipspace.w);

Obviously you'll need a float for clipspace.w rather than the literal code, but you get my point. Once you have clipspace, to get camera space, you multiply by the inverse projection matrix:

vec4 cameraspace = InvProj * clipspace;
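
Putting it all together, a minimal end-to-end sketch of the reconstruction. The uniform names (depth_texture, viewport, Proj, InvProj) are illustrative, someFuncInv() stands for the inverse of your depth encoding, and the decoded value is assumed to be a proper NDC-space depth as discussed above:

uniform sampler2D depth_texture;  /* stores someFunc(...) from the original pass */
uniform vec2 viewport;            /* viewport size in pixels */
uniform mat4 Proj;                /* projection matrix the scene was rendered with */
uniform mat4 InvProj;             /* its inverse */

vec3 reconstruct_eyespace(vec2 frag_coord)
{
    vec2 uv = frag_coord / viewport;
    float ndc_z = someFuncInv(texture(depth_texture, uv).r);  /* decode to NDC depth */
    vec3 ndcspace = vec3(uv * 2.0 - 1.0, ndc_z);

    /* Recover clipspace.w from the third row of the standard projection matrix: */
    float T1 = Proj[2][2];
    float T2 = Proj[3][2];
    float cameraspace_z = -T2 / (ndcspace.z + T1);
    float clipspace_w = -cameraspace_z;

    /* Undo the perspective divide, then return to eye space: */
    vec4 clipspace = vec4(ndcspace * clipspace_w, clipspace_w);
    return (InvProj * clipspace).xyz;
}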
