Depth Component of Converting from Window -> World Coordinates

Problem description

I'm working on a program that draws a 100x100 grid and allows the user to click on a cell and change the color.

Clicking currently works, but only when looking at the grid face-on (i.e. camPos.z equal to camLook.z) and when the grid is positioned in the center of the screen.

What I've been stuck on for the last few days is selecting the correct cell when looking at the grid from a different camera position or a different area of the screen.

My only guess would be that somehow the depth buffer does not reflect the current position of the camera or that there is some inconsistency between the buffer depth range and the near and far values of the camera. Or that the way I'm applying the projection/view matrix is ok for displaying the image, but something is going wrong when going back through the pipeline. But I can't quite figure it out.

(code updated/refactored since originally posting)

Vertex shader:

#version 330

layout(location = 0) in vec4 position;

smooth out vec4 theColor;

uniform vec4 color;
uniform mat4 pv;

void main() {
  gl_Position = pv * position;
  theColor = color;
}

Camera class (result of projectionViewMatrix() is the pv uniform above):

Camera::Camera()
{
  camPos = glm::vec3(1.0f, 5.0f, 2.0f);
  camLook = glm::vec3(1.0f, 0.0f, 0.0f);

  fovy = 90.0f;
  aspect = 1.0f;
  near = 0.1f;
  far = 1000.0f;
}

glm::mat4 Camera::projectionMatrix()
{
  return glm::perspective(fovy, aspect, near, far);
}

glm::mat4 Camera::viewMatrix()
{
  return glm::lookAt(
    camPos,
    camLook,
    glm::vec3(0.0f, 1.0f, 0.0f)
  );
}

glm::mat4 Camera::projectionViewMatrix()
{
  return projectionMatrix() * viewMatrix();
}

// view controls

void Camera::moveForward()
{
  camPos.z -= 1.0f;
  camLook.z -= 1.0f;
}

void Camera::moveBack()
{
  camPos.z += 1.0f;
  camLook.z += 1.0f;
}

void Camera::moveLeft()
{
  camPos.x -= 1.0f;
  camLook.x -= 1.0f;
}

void Camera::moveRight()
{
  camPos.x += 1.0f;
  camLook.x += 1.0f;
}

void Camera::zoomIn()
{
  camPos.y -= 1.0f;
}

void Camera::zoomOut()
{
  camPos.y += 1.0f;
}

void Camera::lookDown()
{
  camLook.z += 0.1f;
}

void Camera::lookAtAngle()
{
  if (camLook.z != 0.0f)
    camLook.z -= 0.1f;
}

Specific function in the camera class where I am trying to get world coordinates (x and y are screen coordinates):

glm::vec3 Camera::experiment(int x, int y)
{
  GLint viewport[4];
  glGetIntegerv(GL_VIEWPORT, viewport);

  GLfloat winZ;
  glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);
  printf("DEPTH: %f\n", winZ);

  glm::vec3 pos = glm::unProject(
    glm::vec3(x, viewport[3] - y, winZ),
    viewMatrix(),
    projectionMatrix(),
    glm::vec4(0.0f, 0.0f, viewport[2], viewport[3])
  );

  printf("POS: (%f, %f, %f)\n", pos.x, pos.y, pos.z);

  return pos;
}

Initialization and display:

void init(void)
{
  glewExperimental = GL_TRUE;
  glewInit();

  glEnable(GL_DEPTH_TEST);
  glDepthMask(GL_TRUE);
  glDepthFunc(GL_LESS);
  glDepthRange(0.0f, 1.0f);

  InitializeProgram();
  InitializeVAO();
  InitializeGrid();

  glEnable(GL_CULL_FACE);
  glCullFace(GL_BACK);
  glFrontFace(GL_CW);
}

void display(void)
{
  glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
  glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

  glUseProgram(theProgram);
  glBindVertexArray(vao);

  glUniformMatrix4fv(projectionViewMatrixUnif, 1, GL_FALSE, glm::value_ptr(camera.projectionViewMatrix()));

  DrawGrid();

  glBindVertexArray(0);
  glUseProgram(0);

  glutSwapBuffers();
  glutPostRedisplay();
}

int main(int argc, char** argv)
{
  glutInit(&argc, argv);

  glutInitDisplayMode(GLUT_RGB | GLUT_DEPTH);
  glutInitContextVersion(3, 2);
  glutInitContextProfile(GLUT_CORE_PROFILE);

  glutInitWindowSize(500, 500);
  glutInitWindowPosition(300, 200);

  glutCreateWindow("testing");

  init();

  glutDisplayFunc(display);
  glutReshapeFunc(reshape);
  glutKeyboardFunc(keyboard);
  glutMouseFunc(mouse);
  glutMainLoop();
  return 0;
}

Answer

It is actually very simple to cast a ray under the cursor to implement picking. It works with pretty much any projection and modelview matrix (except for some degenerate singular cases that transform the entire scene to infinity, etc.).

I've written a small demo which uses the deprecated fixed-function pipeline for simplicity, but the code will work with shaders as well. It begins by reading the matrices from OpenGL:

glm::mat4 proj, mv;
glGetFloatv(GL_PROJECTION_MATRIX, &proj[0][0]);
glGetFloatv(GL_MODELVIEW_MATRIX, &mv[0][0]);
glm::mat4 mvp = proj * mv;

Here mvp is what you would pass to your vertex shader. Then we define two points:

glm::vec4 nearc(f_mouse_x, f_mouse_y, 0, 1);
glm::vec4 farc(f_mouse_x, f_mouse_y, 1, 1);

These are the near and far cursor coordinates in normalized space (so f_mouse_x and f_mouse_y are in the [-1, 1] interval). Note that the z coordinates do not need to be 0 and 1; they just need to be two different arbitrary numbers. Now we can use the mvp to unproject them to worldspace:

glm::mat4 inv_mvp = glm::inverse(mvp); // invert once, use twice
nearc = inv_mvp * nearc;
nearc /= nearc.w; // dehomog
farc = inv_mvp * farc;
farc /= farc.w; // dehomog
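The /= w lines are the perspective (homogeneous) divide. Pulled out into a standalone helper with plain structs instead of GLM (a sketch, just to make the operation explicit), it is simply:

```cpp
struct Vec4f { float x, y, z, w; };
struct Vec3f { float x, y, z; };

// Perspective (homogeneous) divide: collapse a homogeneous point
// back to ordinary 3D coordinates by dividing through by w.
Vec3f dehomog(const Vec4f &v)
{
    return { v.x / v.w, v.y / v.w, v.z / v.w };
}
```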

Note that the homogeneous division is important here. This gives us the position of the cursor in worldspace, where your objects are defined (unless they have their own model matrices, but that is easy to incorporate).
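As an aside, deriving f_mouse_x and f_mouse_y from the window-space mouse position that GLUT reports (origin at the top-left, y growing downward) might look like the sketch below; win_w and win_h are assumed window dimensions, not names from the demo:

```cpp
struct Ndc { float x, y; };

// Map window-space mouse coordinates (origin top-left, y down) to
// normalized device coordinates in [-1, 1] (origin center, y up).
Ndc windowToNdc(int mx, int my, int win_w, int win_h)
{
    float f_mouse_x = 2.0f * mx / win_w - 1.0f;
    float f_mouse_y = 1.0f - 2.0f * my / win_h; // flip y: window y grows down
    return { f_mouse_x, f_mouse_y };
}
```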

Finally, the demo calculates intersection of the ray between nearc and farc and a plane on which there is a texture (your 100x100 grid):

glm::vec3 plane_normal(0, 0, 1); // plane normal
float plane_d = 0; // plane distance from origin
// this is the plane with the grid

glm::vec3 ray_org(nearc), ray_dir(farc - nearc);
ray_dir = glm::normalize(ray_dir);
// this is the ray under the mouse cursor

float t = glm::dot(ray_dir, plane_normal);
if(fabs(t) > 1e-5f)
    t = -(glm::dot(ray_org, plane_normal) + plane_d) / t;
else
    t = 0; // no intersection, the ray is parallel to the plane
glm::vec3 isect = ray_org + t * ray_dir;
// calculate ray-plane intersection

float grid_x = N * (isect.x + 1) / 2;
float grid_y = N * (isect.y + 1) / 2;
// map the intersection from [-1, 1] to grid coordinates in [0, N)
if(t && grid_x >= 0 && grid_x < N && grid_y >= 0 && grid_y < N) {
    int x = int(grid_x), y = int(grid_y);
    // integer cell coordinates

    tex_data[x + N * y] = 0xff0000ff; // red
    glBindTexture(GL_TEXTURE_2D, n_texture);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, N, N, GL_RGBA, GL_UNSIGNED_BYTE, &tex_data[0]);
    // upload the modified texture so the change is visible
}
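The ray-plane step above can also be packaged as a standalone helper. This is a sketch using a minimal vector type instead of GLM, so it can be read and tested in isolation:

```cpp
#include <cmath>

struct Vec3f { float x, y, z; };

float dot3(const Vec3f &a, const Vec3f &b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Intersect the ray org + t * dir with the plane dot(n, p) + d = 0.
// Returns the ray parameter t, or 0 when the ray is (nearly)
// parallel to the plane and there is no single intersection point.
float rayPlane(const Vec3f &org, const Vec3f &dir, const Vec3f &n, float d)
{
    float t = dot3(dir, n);
    if (std::fabs(t) > 1e-5f)
        return -(dot3(org, n) + d) / t;
    return 0.0f;
}
```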

The output turns out quite nice:

This is only a 20x20 texture, but it is trivial to go up to 100x100. You can get the full demo source and precompiled win32 binaries here. It depends on GLM. You can turn with the mouse or move with WASD.

More complicated objects than planes are possible; it is essentially raytracing. Using the depth component under the cursor (window z) is just as simple - only beware of the normalized coordinate ranges ([0, 1] vs. [-1, 1]). Also note that reading back the z value may hurt performance, as it requires CPU/GPU synchronization.
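The coordinate-range caveat amounts to a single remap: a depth value read back with glReadPixels lies in [0, 1] (with the default glDepthRange), while the normalized z used for the near/far points above lies in [-1, 1]. A sketch:

```cpp
// Remap a window-space depth value in [0, 1] (the default
// glDepthRange) to normalized-device-coordinate z in [-1, 1].
float depthToNdcZ(float winZ)
{
    return 2.0f * winZ - 1.0f;
}
```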
