How to map 2D display coordinates to 3D OpenGL space


Problem description

I am working on a 3D game ported to Android, and I want to handle touch events during 3D gameplay. I need a point in 3D space, right on the near clipping plane, but all I can get is 2D coordinates from the Android display. So, is there any way to map these (x, y) coordinates to (x, y, z) coordinates in 3D space?

Well, I am working on a racing game, and I want to insert some items on the course depending on where I click. I have this function:

void racing_mouse_cb(int button, int state, int x, int y) { //parameters (x,y) are coords of a display
    set_ill_fish(get_player_data( local_player())->view);
}

but for now I am inserting items at a fixed distance in front of the player:

void set_ill_fish(view_t view) {
    item_locs[num_items].ray.pt.x = view.plyr_pos.x;
    item_locs[num_items].ray.pt.z = view.plyr_pos.z - 5;
    item_locs[num_items].ray.pt.y = find_y_coord(view.plyr_pos.x,
            view.plyr_pos.z - 5) + 0.2;
    item_locs[num_items].ray.vec = make_vector(0, 1, 0);
    /* ... */
}

but I am clueless about how to translate this to the display surface.

Answer

To remap 2D display coordinates (display_x, display_y) to 3D object coordinates (x, y, z), you need to know:

  1. the depth display_z of the pixel at (display_x, display_y)
  2. the transformation T that maps clip-space coordinates (clip_x, clip_y, clip_z) to display coordinates
  3. the transformation M that maps object coordinates to clip-space coordinates (usually combining the camera and the perspective projection)

The display coordinates are computed as follows:

M.transform(x, y, z, 1) --> (clip_x, clip_y, clip_z, clip_w)

T.transform(clip_x / clip_w, clip_y / clip_w, clip_z / clip_w) --> (display_x, display_y, display_z)

M.transform is an invertible matrix multiplication, and T.transform is any invertible transformation.

You can recover (x, y, z) from (display_x, display_y, display_z) as follows:

T.inverse_transform(display_x, display_y, display_z) --> (a, b, c)

M.inverse_transform(a, b, c, 1) --> (X, Y, Z, W)

(X/W, Y/W, Z/W) --> (x, y, z)

The following gives intuition on why the above computation leads to the right solution:

T.inverse_transform(display_x, display_y, display_z) --> (clip_x / clip_w, clip_y / clip_w, clip_z / clip_w)

(clip_x / clip_w, clip_y / clip_w, clip_z / clip_w, clip_w / clip_w) == (clip_x, clip_y, clip_z, clip_w) / clip_w

M.inverse_transform((clip_x, clip_y, clip_z, clip_w) / clip_w) == M.inverse_transform(clip_x, clip_y, clip_z, clip_w) / clip_w

M.inverse_transform(clip_x, clip_y, clip_z, clip_w) / clip_w --> (x, y, z, 1) / clip_w

(x, y, z, 1) / clip_w == (x / clip_w, y / clip_w, z / clip_w, 1 / clip_w)

(x / clip_w, y / clip_w, z / clip_w, 1 / clip_w) == (X, Y, Z, W)

The above uses the following matrix (M), vector (v), scalar (a == 1 / clip_w) property:

M * (a * v) == a * (M * v)
