Depth as distance to camera plane in GLSL

Problem description

I have a pair of GLSL shaders that give me the depth map of the objects in my scene. What I get now is the distance from each pixel to the camera. What I need is to get the distance from the pixel to the camera plane. Let me illustrate with a little drawing

   *          |--*
  /           |
 /            |
C-----*       C-----*
 \            |
  \           |
   *          |--*

The 3 asterisks are pixels and the C is the camera. The lines from the asterisks are the "depth". In the first case, I get the distance from the pixel to the camera. In the second, I wish to get the distance from each pixel to the plane.

There must be a way to do this by using some projection matrix, but I'm stumped.

Here are the shaders I'm using. Note that eyePosition is camera_position_object_space.

Vertex shader:

varying vec3 position;

void main() {
    // Pass the object-space vertex position on to the fragment shader
    position = gl_Vertex.xyz;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

Pixel shader:

uniform vec3 eyePosition;
varying vec3 position;

void main(void) {
    vec3 temp = vec3(1.0, 1.0, 1.0);
    // Euclidean distance from the fragment to the camera position,
    // remapped to [0, 1] (the -1.0 and /49.0 map a 1..50 distance range)
    float depth = (length(eyePosition - position * temp) - 1.0) / 49.0;
    gl_FragColor = vec4(depth, depth, depth, 1.0);
}

Recommended answer

You're really trying to do this the hard way. Simply transform things to camera space, and work from there.

varying float distToCamera;

void main()
{
    // Transform the vertex into camera (eye) space
    vec4 cs_position = gl_ModelViewMatrix * gl_Vertex;
    // In camera space, the planar distance to the camera is just -z
    distToCamera = -cs_position.z;
    gl_Position = gl_ProjectionMatrix * cs_position;
}

In camera space (the space where everything is relative to the position/orientation of the camera), the planar distance to a vertex is just the negation of its Z coordinate (more negative Z means farther away).

So your fragment shader doesn't even need eyePosition; the "depth" comes directly from the vertex shader.
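
For completeness, a minimal fragment shader sketch that consumes the interpolated distToCamera could look like the following; the 1.0 and 49.0 remapping is only an assumption, chosen to mirror the near/far range implied by the question's original pixel shader:

varying float distToCamera;

void main(void)
{
    // Interpolated planar distance from the vertex shader,
    // remapped to [0, 1]; the assumed 1.0..50.0 range matches
    // the -1.0 / 49.0 scaling in the question's original shader
    float depth = (distToCamera - 1.0) / 49.0;
    gl_FragColor = vec4(depth, depth, depth, 1.0);
}

A common alternative is to divide distToCamera by the far-plane distance instead, so the grayscale output stays in [0, 1] regardless of the chosen range.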
