When and how does openGL calculate F_depth (depth value)


Problem description


Meaning at this point the projection has already been done. This article gives us the projection matrix used by OpenGL, and the factor that affects the z-coordinate of a point is the row:

[ 0 0 -(f+n)/(f-n) -2fn/(f-n) ]

Note, this matrix is computed to map the ‘pyramidal’ frustum to a unit cube. Meaning the z-coordinate has also been mapped to [0,1] after this matrix is applied.

Then, the author in the depth value precision chapter tells us: These z-values in view space can be any values between the frustum’s near and far plane, and we need some way to transform them to [0,1]. The question is why this is needed at this point, when we had already mapped it while applying the projection matrix.

Also, he says: a linear depth buffer like this: F_depth = (z - near) / (far - near) is never used; for correct projection properties a non-linear depth equation is used:

F_depth = (1/z - 1/near) / (1/far - 1/near)

But, as we have seen, z is mapped within range using:

[ 0 0 -(f+n)/(f-n) -2fn/(f-n) ]

Which appears to be linear.

All these contradicting statements are making me really confused about when the depth for fragments is calculated and compared, and what equation is actually used to compute it. In my understanding, nothing more should need to be calculated for depth after the OpenGL projection matrix is applied, but after reading this I’m really confused. Any clarifications?

Solution

With perspective projection the depth is not linear, because of the perspective divide.

When a vertex coordinate is transformed by the projection matrix, the clip space coordinate is computed. The clip space coordinate is a Homogeneous coordinate. All the geometry which is not inside the clip volume (i.e. not in the viewing frustum) is then clipped. The clipping rule is:

-w <=  x, y, z  <= w

After that, the normalized device space coordinate is computed by dividing the x, y, z components by the w component (perspective divide). NDC are Cartesian coordinates, and the normalized device space is a unit cube with left, bottom, near at (-1, -1, -1) and right, top, far at (1, 1, 1). All the geometry in the cube is projected onto the 2-dimensional viewport.
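As a rough per-vertex illustration (a sketch in plain C++ with a made-up Vec4 helper type, not OpenGL API code; real clipping cuts primitives against the clip planes rather than discarding single points), the clip test and the perspective divide look like this:

struct Vec4 { float x, y, z, w; };   // hypothetical helper type

// Clip test: a clip space coordinate is kept if -w <= x, y, z <= w.
bool insideClipVolume(const Vec4 &c)
{
    return -c.w <= c.x && c.x <= c.w &&
           -c.w <= c.y && c.y <= c.w &&
           -c.w <= c.z && c.z <= c.w;
}

// Perspective divide: clip space -> normalized device coordinates in [-1, 1].
Vec4 toNdc(const Vec4 &c)
{
    return { c.x / c.w, c.y / c.w, c.z / c.w, 1.0f };
}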

Note, after the homogeneous vertex coordinate is multiplied by the perspective projection matrix (clip space), the z component is "linear", but it is not in the range [-1, 1]. After clipping and the perspective divide, the z coordinate is in the range [-1, 1] (NDC), but it is no longer "linear".
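Written out for the z component (z_eye is the eye space z coordinate, w_eye = 1; the z row from the question gives z_clip, and the w row (0, 0, -1, 0) gives w_clip):

z_clip = -(f+n)/(f-n) * z_eye - 2*f*n/(f-n)
w_clip = -z_eye

z_ndc  = z_clip / w_clip = (f+n)/(f-n) + 2*f*n / ((f-n) * z_eye)

The matrix alone only scales and offsets z_eye, which is linear; the division by w_clip = -z_eye introduces the 1/z_eye term, and that is what makes the depth distribution non-linear.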

The depth buffer can store values in range [0, 1]. Hence the z component of the normalized device space has to be mapped from [-1.0, 1.0] to [0.0, 1.0].
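A small numeric check (a minimal sketch in plain C++, no OpenGL required; the names zEye, n, f and the sample distances are just illustrative) shows that this whole chain — projection matrix, perspective divide, mapping [-1, 1] to [0, 1] — yields exactly the non-linear formula F_depth = (1/z - 1/near) / (1/far - 1/near) from the question:

#include <cstdio>

int main()
{
    const double n = 0.1, f = 100.0;                        // near and far plane distances
    for (double zEye : { 0.1, 1.0, 10.0, 50.0, 100.0 })     // distances in front of the camera
    {
        // z row of the projection matrix applied to (x, y, -zEye, 1); w_clip = zEye
        double zClip = (f + n) / (f - n) * zEye - 2.0 * f * n / (f - n);
        double wClip = zEye;

        double zNdc  = zClip / wClip;                       // perspective divide -> [-1, 1]
        double depth = zNdc * 0.5 + 0.5;                    // default glDepthRange -> [0, 1]

        double fDepth = (1.0 / zEye - 1.0 / n) / (1.0 / f - 1.0 / n);

        std::printf("zEye = %6.2f   depth = %.6f   F_depth = %.6f\n", zEye, depth, fDepth);
    }
    return 0;
}

Both columns print the same values: the matrix is linear in the eye space z, and it is the divide by w that produces the non-linear depth that ends up in the depth buffer.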


At Perspective Projection the projection matrix describes the mapping from 3D points in the world, as they are seen from a pinhole camera, to 2D points of the viewport.
The eye space coordinates in the camera frustum (a truncated pyramid) are mapped to a cube (the normalized device coordinates).

A perspective projection matrix can be defined by a frustum.
The distances left, right, bottom and top are the distances from the center of the view to the side faces of the frustum, on the near plane. near and far specify the distances to the near and far plane of the frustum.

r = right, l = left, b = bottom, t = top, n = near, f = far

x:    2*n/(r-l)      0              0                0
y:    0              2*n/(t-b)      0                0
z:    (r+l)/(r-l)    (t+b)/(t-b)    -(f+n)/(f-n)    -1
t:    0              0              -2*f*n/(f-n)     0
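As a sketch of how this matrix could be set up in code (plain C++; the Mat4 alias and the function name frustum are made-up names, not an OpenGL or GLM API), stored in memory in the same order as the x, y, z and t lines above (column-major, as OpenGL expects):

#include <array>

using Mat4 = std::array<float, 16>;   // column-major: element (row r, column c) at m[c*4 + r]

Mat4 frustum(float l, float r, float b, float t, float n, float f)
{
    Mat4 m = {
        2*n/(r-l),    0,            0,             0,    // x
        0,            2*n/(t-b),    0,             0,    // y
        (r+l)/(r-l),  (t+b)/(t-b),  -(f+n)/(f-n), -1,    // z
        0,            0,            -2*f*n/(f-n),  0     // t
    };
    return m;
}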

If the projection is symmetrical and the line of sight is the axis of symmetry of the frustum, the matrix can be simplified:

a  = w / h
ta = tan( fov_y / 2 );

2 * n / (r-l) = 1 / (ta * a)
2 * n / (t-b) = 1 / ta
(r+l)/(r-l)   = 0
(t+b)/(t-b)   = 0

The symmetric perspective projection matrix is:

x:    1/(ta*a)  0      0              0
y:    0         1/ta   0              0
z:    0         0     -(f+n)/(f-n)   -1
t:    0         0     -2*f*n/(f-n)    0
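A corresponding sketch for the symmetric case (again plain C++ with made-up names; fovY is the vertical field of view in radians and aspect is w/h, comparable to what gluPerspective sets up):

#include <array>
#include <cmath>

using Mat4 = std::array<float, 16>;   // column-major, as above

Mat4 perspective(float fovY, float aspect, float n, float f)
{
    const float ta = std::tan(fovY / 2.0f);
    Mat4 m = {
        1/(ta*aspect), 0,     0,             0,    // x
        0,             1/ta,  0,             0,    // y
        0,             0,     -(f+n)/(f-n), -1,    // z
        0,             0,     -2*f*n/(f-n),  0     // t
    };
    return m;
}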


See also

What exactly are eye space coordinates?

How to render depth linearly in modern OpenGL with gl_FragCoord.z in fragment shader?
