How to render depth linearly in modern OpenGL with gl_FragCoord.z in fragment shader?


Question

I read a lot of information about getting depth with the fragment shader, for example:

http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=234519

but I still don't know whether or not gl_FragCoord.z is linear.

The GLSL specification says its range is [0,1] in screen space, without mentioning whether it is linear or not.

I think linearity is vital, since I will use the rendered model to match the depth map from a Kinect.

Then if it is not linear, how can I linearize it in world space?

Answer

but I still don't know whether or not gl_FragCoord.z is linear.

Whether gl_FragCoord.z is linear or not depends on the projection matrix: while for Orthographic Projection gl_FragCoord.z is linear, for Perspective Projection it is not.

In general, the depth (gl_FragCoord.z and gl_FragDepth) is calculated as follows (see GLSL gl_FragCoord.z Calculation and Setting gl_FragDepth):

// nearZ and farZ are the depth-range values set with glDepthRange (0.0 and 1.0 by default)
float ndc_depth = clip_space_pos.z / clip_space_pos.w;
float depth = (((farZ-nearZ) * ndc_depth) + nearZ + farZ) / 2.0;
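
For orientation, a minimal vertex-shader sketch of where clip_space_pos comes from (the names a_position and u_mvp are placeholders, not part of the original answer): the shader writes the clip-space position to gl_Position, and the divide by w plus the depth computation above are then performed by the fixed-function stages before gl_FragCoord.z is available in the fragment shader.

#version 330 core
layout(location = 0) in vec3 a_position;   // hypothetical vertex attribute
uniform mat4 u_mvp;                        // hypothetical model-view-projection matrix

void main()
{
    // gl_Position is the clip-space position; the perspective divide and the
    // depth-range mapping happen after the vertex shader runs.
    gl_Position = u_mvp * vec4(a_position, 1.0);
}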

The projection matrix describes the mapping from 3D points of a scene to 2D points of the viewport. It transforms from eye space to clip space, and the coordinates in clip space are transformed to normalized device coordinates (NDC) by dividing by the w component of the clip coordinates.

At Orthographic Projection the coordinates in the eye space are linearly mapped to normalized device coordinates.

Orthographic Projection Matrix:

r = right, l = left, b = bottom, t = top, n = near, f = far 

2/(r-l)         0               0               0
0               2/(t-b)         0               0
0               0               -2/(f-n)        0
-(r+l)/(r-l)    -(t+b)/(t-b)    -(f+n)/(f-n)    1

At Orthographic Projection, the Z component is calculated by the linear function:

z_ndc = z_eye * -2/(f-n) - (f+n)/(f-n)

At Perspective Projection the projection matrix describes the mapping from 3D points in the world, as they are seen from a pinhole camera, to 2D points of the viewport. The eye space coordinates in the camera frustum (a truncated pyramid) are mapped to a cube (the normalized device coordinates).

Perspective Projection Matrix:

r = right, l = left, b = bottom, t = top, n = near, f = far

2*n/(r-l)      0              0               0
0              2*n/(t-b)      0               0
(r+l)/(r-l)    (t+b)/(t-b)    -(f+n)/(f-n)    -1    
0              0              -2*f*n/(f-n)    0

At Perspective Projection, the Z component is calculated by the rational function below; for example, with n = 0.1 and f = 100, a point only 5 units in front of the camera already maps to z_ndc ≈ 0.96, so most of the depth resolution is concentrated near the near plane:

z_ndc = ( -z_eye * (f+n)/(f-n) - 2*f*n/(f-n) ) / -z_eye

Since the normalized device coordinates are in the range (-1,-1,-1) to (1,1,1), the Z-coordinate has to be mapped to the depth buffer range [0,1]:

depth = (z_ndc + 1) / 2 


Then if it is not linear, how can I linearize it in world space?

To convert from the depth of the depth buffer back to the original Z-coordinate, the projection (Orthographic or Perspective) and the near plane and far plane have to be known.

Orthographic Projection

n = near, f = far

z_eye = depth * (f-n) + n;
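
The same thing as a small GLSL sketch (u_near and u_far are assumed uniform names for the orthographic near and far planes, not from the original answer):

uniform float u_near;   // assumed uniform: near plane of the orthographic projection
uniform float u_far;    // assumed uniform: far plane of the orthographic projection

// Orthographic depth is already linear; only a rescale from [0,1] to [near, far] is needed.
float linearizeOrthoDepth(float depth)
{
    return depth * (u_far - u_near) + u_near;
}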

Perspective Projection

n = near, f = far

z_ndc = 2.0 * depth - 1.0;
z_eye = 2.0 * n * f / (f + n - z_ndc * (f - n));
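
A minimal fragment-shader sketch of this perspective linearization (the uniform names u_near and u_far are assumptions, not part of the original answer); it recovers the positive eye-space distance from gl_FragCoord.z and normalizes it to [0,1] for visualization:

#version 330 core
out vec4 fragColor;
uniform float u_near;   // assumed uniform: near plane distance of the projection
uniform float u_far;    // assumed uniform: far plane distance of the projection

// Recover the positive eye-space distance from a perspective depth value in [0,1].
float linearizeDepth(float depth)
{
    float z_ndc = 2.0 * depth - 1.0;
    return 2.0 * u_near * u_far / (u_far + u_near - z_ndc * (u_far - u_near));
}

void main()
{
    float z_eye = linearizeDepth(gl_FragCoord.z);
    float gray = (z_eye - u_near) / (u_far - u_near);   // map [near, far] to [0,1] for display
    fragColor = vec4(vec3(gray), 1.0);
}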

If the perspective projection matrix is known, this can be done as follows:

A = prj_mat[2][2]        // -(f+n)/(f-n)
B = prj_mat[3][2]        // -2*f*n/(f-n)
z_eye = B / (A + z_ndc)
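
As a GLSL sketch (u_projection is an assumed uniform name holding the perspective projection matrix): mat4 indexing is column-major, so u_projection[2][2] and u_projection[3][2] pick exactly the entries A and B above.

uniform mat4 u_projection;   // assumed uniform: the perspective projection matrix

// Recover the positive eye-space distance from a depth value using the projection matrix entries.
float linearizeDepthFromMatrix(float depth)
{
    float z_ndc = 2.0 * depth - 1.0;
    float A = u_projection[2][2];   // -(f+n)/(f-n)
    float B = u_projection[3][2];   // -2*f*n/(f-n)
    return B / (A + z_ndc);
}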

