How to render depth linearly in modern OpenGL with gl_FragCoord.z in fragment shader?


Problem description


I read lots of information about getting depth with a fragment shader.

such as

http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=234519

but I still don't know whether or not gl_FragCoord.z is linear.

The GLSL specification says its range is [0,1] in screen space, without mentioning whether it is linear or not.

I think linearity is vital, since I will use the rendered model to match a depth map from a Kinect.

Then if it is not linear, how to linearize it in the world space?

Solution

but I still don't know whether or not gl_FragCoord.z is linear.

Whether gl_FragCoord.z is linear or not depends on the projection matrix: for Orthographic Projection gl_FragCoord.z is linear, but for Perspective Projection it is not.

In general, the depth (gl_FragCoord.z and gl_FragDepth) is calculated as follows (see GLSL gl_FragCoord.z Calculation and Setting gl_FragDepth):

// perspective divide: clip space -> normalized device coordinates
float ndc_depth = clip_space_pos.z / clip_space_pos.w;
// map NDC z to the depth range set by glDepthRange (nearZ/farZ, by default 0.0 and 1.0)
float depth = (((farZ-nearZ) * ndc_depth) + nearZ + farZ) / 2.0;

The projection matrix describes the mapping from 3D points of a scene to 2D points of the viewport. It transforms from eye space to clip space, and the coordinates in clip space are transformed to normalized device coordinates (NDC) by dividing by the w component of the clip coordinates.
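
For illustration, here is a minimal fragment-shader sketch that recomputes this value from an interpolated clip-space position (the varying v_clip_pos and the output frag_color are assumed names, not from the question); the result should match gl_FragCoord.z up to precision:

#version 330 core

in vec4 v_clip_pos;    // clip-space position, assumed to be passed through from the vertex shader
out vec4 frag_color;

void main()
{
    // perspective divide: clip space -> normalized device coordinates
    float ndc_depth = v_clip_pos.z / v_clip_pos.w;

    // map NDC z from [-1, 1] to the depth range set by glDepthRange (by default [0, 1])
    float depth = ((gl_DepthRange.far - gl_DepthRange.near) * ndc_depth
                  + gl_DepthRange.near + gl_DepthRange.far) / 2.0;

    // 'depth' should agree with gl_FragCoord.z (up to precision)
    frag_color = vec4(vec3(depth), 1.0);
}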

Orthographic Projection

With Orthographic Projection, the coordinates in eye space are linearly mapped to normalized device coordinates.

Orthographic Projection Matrix:

r = right, l = left, b = bottom, t = top, n = near, f = far 

2/(r-l)         0               0               0
0               2/(t-b)         0               0
0               0               -2/(f-n)        0
-(r+l)/(r-l)    -(t+b)/(t-b)    -(f+n)/(f-n)    1

With Orthographic Projection, the Z component is calculated by the linear function:

z_ndc = z_eye * -2/(f-n) - (f+n)/(f-n)
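
Written as a small GLSL helper (a sketch with assumed names; n and f are the near and far plane distances, and z_eye is negative in front of the camera):

// orthographic projection: eye-space Z -> NDC Z (linear in z_eye)
float ortho_ndc_z(float z_eye, float n, float f)
{
    return z_eye * -2.0 / (f - n) - (f + n) / (f - n);
}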

Perspective Projection

With Perspective Projection, the projection matrix describes the mapping from 3D points in the world, as they are seen from a pinhole camera, to 2D points of the viewport.
The eye space coordinates in the camera frustum (a truncated pyramid) are mapped to a cube (the normalized device coordinates).

Perspective Projection Matrix:

r = right, l = left, b = bottom, t = top, n = near, f = far

2*n/(r-l)      0              0               0
0              2*n/(t-b)      0               0
(r+l)/(r-l)    (t+b)/(t-b)    -(f+n)/(f-n)    -1    
0              0              -2*f*n/(f-n)    0

With Perspective Projection, the Z component is calculated by the rational function:

z_ndc = ( -z_eye * (f+n)/(f-n) - 2*f*n/(f-n) ) / -z_eye
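
The same mapping as a GLSL helper (again a sketch with assumed names):

// perspective projection: eye-space Z -> NDC Z (hyperbolic in z_eye)
float persp_ndc_z(float z_eye, float n, float f)
{
    return (-z_eye * (f + n) / (f - n) - 2.0 * f * n / (f - n)) / -z_eye;
}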

Depth buffer

Since the normalized device coordinates are in the range (-1,-1,-1) to (1,1,1), the Z coordinate has to be mapped to the depth buffer range [0,1]:

depth = (z_ndc + 1) / 2 
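
Or, as a tiny helper (assuming the default glDepthRange of [0, 1]):

// NDC Z in [-1, 1] -> depth buffer value in [0, 1]
float ndc_to_depth(float z_ndc)
{
    return (z_ndc + 1.0) / 2.0;
}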


Then if it is not linear, how to linearize it in the world space?

To convert from the depth of the depth buffer to the original Z coordinate, the projection (Orthographic or Perspective) and the near and far planes have to be known.

Orthographic Projection

n = near, f = far

z_eye = depth * (f-n) + n;
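
As a GLSL sketch (assumed names; the result is a positive distance from the camera, following the convention of the formula above):

// orthographic projection: depth buffer value in [0, 1] -> eye-space distance
float ortho_eye_z(float depth, float n, float f)
{
    return depth * (f - n) + n;
}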

Perspective Projection

n = near, f = far

z_ndc = 2.0 * depth - 1.0;
z_eye = 2.0 * n * f / (f + n - z_ndc * (f - n));
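
Put together as a complete fragment shader, this is one way to linearize gl_FragCoord.z for a perspective projection. This is only a sketch: the uniform names u_near and u_far are assumptions, and they have to be set to the same near/far values that were used to build the projection matrix.

#version 330 core

uniform float u_near;   // near plane of the projection (assumed uniform name)
uniform float u_far;    // far plane of the projection (assumed uniform name)

out vec4 frag_color;

// depth buffer value in [0, 1] (e.g. gl_FragCoord.z) -> eye-space distance
float linear_eye_z(float depth)
{
    float z_ndc = 2.0 * depth - 1.0;
    return 2.0 * u_near * u_far / (u_far + u_near - z_ndc * (u_far - u_near));
}

void main()
{
    // positive distance along the viewing axis, in the same units as u_near/u_far
    float z_eye = linear_eye_z(gl_FragCoord.z);

    // for debugging, visualize it normalized to [0, 1]
    frag_color = vec4(vec3((z_eye - u_near) / (u_far - u_near)), 1.0);
}

Since the result is the distance along the viewing axis in the units of the near/far planes, it should be comparable to a Kinect depth map (which also stores per-pixel depth along the view axis), once the units are matched.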

If the perspective projection matrix is known, this can be done as follows:

A = prj_mat[2][2]
B = prj_mat[3][2]
z_eye = B / (A + z_ndc)
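
A corresponding GLSL sketch (assumed function name; GLSL matrices are indexed column-major, so proj[2][2] and proj[3][2] are exactly the prj_mat[2][2] and prj_mat[3][2] terms above):

// perspective projection: depth buffer value -> eye-space distance,
// reading the needed terms from the projection matrix itself
float linear_eye_z_from_proj(float depth, mat4 proj)
{
    float z_ndc = 2.0 * depth - 1.0;
    float A = proj[2][2];   // -(f+n)/(f-n)
    float B = proj[3][2];   // -2*f*n/(f-n)
    return B / (A + z_ndc);
}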

See also the answer to

How to recover view space position given view space depth value and ndc xy
