How to recover view space position given view space depth value and ndc xy


Problem Description

I am writing a deferred shader and am trying to pack my g-buffer more tightly. However, I can't seem to compute the view space position correctly given the view space depth.

// depth -> (gl_ModelViewMatrix * vec4(pos.xyz, 1)).z; where pos is the model space position
// fov -> field of view in radians (0.62831855, 0.47123888)
// p -> ndc position, x, y [-1, 1]
vec3 getPosition(float depth, vec2 fov, vec2 p)
{
    vec3 pos;
    pos.x = -depth * tan( HALF_PI - fov.x/2.0 ) * (p.x);
    pos.y = -depth * tan( HALF_PI - fov.y/2.0 ) * (p.y);
    pos.z = depth;
    return pos;
}

The computed position is wrong. I know this because I am still storing the correct position in the g-buffer and testing using that.

Recommended Answer

3 Solutions to recover view space position in perspective projection

The projection matrix describes the mapping from 3D points of a scene to 2D points of the viewport. It transforms from view (eye) space to clip space, and the coordinates in clip space are transformed to normalized device coordinates (NDC) by dividing by the w component of the clip coordinates. The NDC range from (-1,-1,-1) to (1,1,1).
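For illustration, a minimal GLSL sketch of this chain; prjMat and viewPosition are assumed names for the projection matrix and a view space point:

vec4 clipPos = prjMat * vec4( viewPosition, 1.0 );  // view (eye) space -> clip space
vec3 ndcPos  = clipPos.xyz / clipPos.w;             // perspective divide -> NDC in [-1, 1]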

In a perspective projection, the projection matrix describes the mapping from 3D points in the world, as they are seen from a pinhole camera, to 2D points of the viewport.
The eye space coordinates in the camera frustum (a truncated pyramid) are mapped to a cube (the normalized device coordinates).

Perspective projection matrix:

r = right, l = left, b = bottom, t = top, n = near, f = far

2*n/(r-l)      0              0               0
0              2*n/(t-b)      0               0
(r+l)/(r-l)    (t+b)/(t-b)    -(f+n)/(f-n)    -1    
0              0              -2*f*n/(f-n)    0



From this it follows:

aspect = w / h
tanFov = tan( fov_y * 0.5 );

prjMat[0][0] = 2*n/(r-l) = 1.0 / (tanFov * aspect)
prjMat[1][1] = 2*n/(t-b) = 1.0 / tanFov

In a perspective projection, the Z component is calculated by the rational function:

z_ndc = ( -z_eye * (f+n)/(f-n) - 2*f*n/(f-n) ) / -z_eye

The depth (gl_FragCoord.z and gl_FragDepth) is calculated as follows:

z_ndc = clip_space_pos.z / clip_space_pos.w;
depth = (((farZ-nearZ) * z_ndc) + nearZ + farZ) / 2.0;
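Here nearZ and farZ are the depth range values set with glDepthRange; with the default range glDepthRange(0.0, 1.0) this simplifies to:

depth = z_ndc * 0.5 + 0.5;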


Since the projection matrix is defined by the field of view and the aspect ratio, it is possible to recover the view space position from the field of view and the aspect ratio, provided that the projection is a symmetrical perspective projection and that the normalized device coordinates, the depth, and the near and far planes are known.

Recover the Z distance in view space:

z_ndc = 2.0 * depth - 1.0;
z_eye = 2.0 * n * f / (f + n - z_ndc * (f - n));

Recover the view space position from the XY normalized device coordinates:

ndc_x, ndc_y = xy normalized device coordinates in range from (-1, -1) to (1, 1):

viewPos.x = z_eye * ndc_x * aspect * tanFov;
viewPos.y = z_eye * ndc_y * tanFov;
viewPos.z = -z_eye; 
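Putting solution 1 together, a minimal GLSL sketch; the uniform names near, far, aspect and tanFov are assumptions and have to be supplied by the application:

uniform float near;    // near plane distance n
uniform float far;     // far plane distance f
uniform float aspect;  // viewport width / height
uniform float tanFov;  // tan( fov_y * 0.5 )

vec3 getViewPos( vec2 ndc, float depth )
{
    float z_ndc = 2.0 * depth - 1.0;
    float z_eye = 2.0 * near * far / (far + near - z_ndc * (far - near));
    return vec3( z_eye * ndc.x * aspect * tanFov, z_eye * ndc.y * tanFov, -z_eye );
}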


The projection parameters defined by the field of view and the aspect ratio are stored in the projection matrix. Therefore the view space position can also be recovered from the values of the projection matrix of a symmetrical perspective projection.

Note the relation between projection matrix, field of view and aspect ratio:

prjMat[0][0] = 2*n/(r-l) = 1.0 / (tanFov * aspect);
prjMat[1][1] = 2*n/(t-b) = 1.0 / tanFov;

prjMat[2][2] = -(f+n)/(f-n)
prjMat[3][2] = -2*f*n/(f-n)

Recover the Z distance in view space:

A     = prj_mat[2][2];
B     = prj_mat[3][2];
z_ndc = 2.0 * depth - 1.0;
z_eye = B / (A + z_ndc);

Recover the view space position from the XY normalized device coordinates:

viewPos.x = z_eye * ndc_x / prjMat[0][0];
viewPos.y = z_eye * ndc_y / prjMat[1][1];
viewPos.z = -z_eye; 
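Analogously, a minimal GLSL sketch of solution 2; only the projection matrix is needed (the uniform name prjMat is an assumption):

uniform mat4 prjMat;   // symmetrical perspective projection matrix

vec3 getViewPos( vec2 ndc, float depth )
{
    float z_ndc = 2.0 * depth - 1.0;
    float z_eye = prjMat[3][2] / (prjMat[2][2] + z_ndc);  // B / (A + z_ndc)
    return vec3( z_eye * ndc.x / prjMat[0][0], z_eye * ndc.y / prjMat[1][1], -z_eye );
}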


Of course, the view space position can also be recovered with the inverse projection matrix.

mat4 inversePrjMat = inverse( prjMat );
vec4 viewPosH      = inversePrjMat * vec4( ndc_x, ndc_y, 2.0 * depth - 1.0, 1.0 );
vec3 viewPos       = viewPosH.xyz / viewPosH.w;
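This variant also works for asymmetric frusta. Since inverse() is comparatively expensive, it is usually better to compute the inverse projection matrix once on the CPU and pass it in as a uniform instead of inverting per fragment.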



See also the answers to the following question:

  • How to render depth linearly in modern OpenGL with gl_FragCoord.z in fragment shader?
