Screen-to-World coordinate conversion in OpenGLES an easy task?

Problem description

Screen-to-world problem on the iPhone

I have a 3D model (CUBE) rendered in an EAGLView and I want to be able to detect when I am touching the center of a given face (from any orientation angle) of the cube. Sounds pretty easy, but it is not...

The problem:
How do I accurately relate screen coordinates (the touch point) to world coordinates (a location in OpenGL 3D space)? Sure, converting a given point into a 'percentage' of the screen/world axis might seem like the logical fix, but problems arise when I need to zoom or rotate the 3D space. Note: rotating and zooming in and out of the 3D space will change the relationship of the 2D screen coords to the 3D world coords... Also, you'd have to allow for the 'distance' between the viewpoint and objects in 3D space. At first this might seem like an 'easy task', but that changes when you actually examine the requirements. And I've found no examples of people doing this on the iPhone. How is this normally done?

An 'easy' task?:
Sure, one might undertake the task of writing an API to act as a go-between for screen and world, but creating such a framework would require some serious design and would likely take time to do -- NOT something that can be one-manned in 4 hours... And 4 hours happens to be my deadline.

The question:

  • What is the easiest way to tell whether I am touching a specific location in 3D space in an iPhone OpenGL ES world?

Recommended answer

Two solutions present themselves. Both of them should achieve the end goal, albeit by a different means: rather than answering "what world coordinate is under the mouse?", they answer the question "what object is rendered under the mouse?".

One is to draw a simplified version of your model to an off-screen buffer, rendering the center of each face using a distinct color (and adjusting the lighting so color is preserved identically). You can then detect those colors in the buffer (e.g. pixmap), and map mouse locations to them.
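
A minimal sketch of such a color-ID pass, assuming OpenGL ES 1.1 in C; drawCubeWithFaceColors() is a hypothetical helper that draws each face after requesting its ID color:

```c
/* Color-picking sketch (OpenGL ES 1.1). drawCubeWithFaceColors() is a
 * hypothetical helper that draws face i after calling setColor(i). */
#include <OpenGLES/ES1/gl.h>

extern void drawCubeWithFaceColors(void (*setColor)(int face));

/* Encode face index 0..5 in the red channel; 0 is reserved for "miss". */
static void setPickColor(int face) {
    glColor4ub((GLubyte)(face + 1), 0, 0, 255);
}

/* Returns the face index under the touch point, or -1 for no hit. */
int pickFace(GLint touchX, GLint touchY, GLint viewHeight) {
    glDisable(GL_LIGHTING);   /* lighting, texturing, and dithering would */
    glDisable(GL_TEXTURE_2D); /* all perturb the flat ID colors we read   */
    glDisable(GL_DITHER);     /* back, so turn them off for this pass     */

    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    drawCubeWithFaceColors(setPickColor);

    GLubyte pixel[4];
    /* glReadPixels uses a bottom-left origin; UIKit touches are top-left. */
    glReadPixels(touchX, viewHeight - touchY, 1, 1,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixel);

    return (int)pixel[0] - 1;
}
```

Do this pass into an off-screen buffer (or before presenting the render buffer) so the ID colors never appear on screen.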

The other is to use OpenGL picking. There's a decent-looking tutorial here. The basic idea is to put OpenGL in select mode, restrict the viewport to a small (perhaps 3x3 or 5x5) window around the point of interest, and then render the scene (or a simplified version of it) using OpenGL "names" (integer identifiers) to identify the components making up each face. At the end of this process, OpenGL can give you a list of the names that were rendered in the selection viewport. Mapping these identifiers back to original objects will let you determine what object is under the mouse cursor.
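
For reference, a sketch of that classic selection-mode flow in C. Note that GL_SELECT and gluPickMatrix belong to desktop OpenGL and were never carried over into OpenGL ES, so on the iPhone this flow must be emulated (or replaced by the color-buffer approach above); drawSceneWithNames() is a hypothetical callback that calls glLoadName(faceId) before drawing each face, and the projection values are placeholders that should match your scene's:

```c
/* Classic selection-mode picking sketch (desktop OpenGL only). */
#include <GL/gl.h>
#include <GL/glu.h>

#define PICK_BUF_SIZE 64

extern void drawSceneWithNames(void);

/* Returns the OpenGL "name" of the nearest object under (x, y),
 * or 0 if nothing was hit. */
GLuint pickAt(int x, int y) {
    GLuint buf[PICK_BUF_SIZE];
    GLint viewport[4];
    glGetIntegerv(GL_VIEWPORT, viewport);

    glSelectBuffer(PICK_BUF_SIZE, buf);
    glRenderMode(GL_SELECT);
    glInitNames();
    glPushName(0);

    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    /* Restrict rendering to a 5x5-pixel window around the point of
     * interest; viewport y runs bottom-up, window y top-down. */
    gluPickMatrix((GLdouble)x, (GLdouble)(viewport[3] - y),
                  5.0, 5.0, viewport);
    gluPerspective(45.0, (GLdouble)viewport[2] / viewport[3], 0.1, 100.0);

    glMatrixMode(GL_MODELVIEW);
    drawSceneWithNames();

    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);

    GLint hits = glRenderMode(GL_RENDER);

    /* Each hit record is {name count, min z, max z, names...};
     * keep the name from the record with the smallest min z. */
    GLuint nearest = 0, bestZ = 0xFFFFFFFFu;
    GLuint *p = buf;
    for (GLint i = 0; i < hits; i++) {
        GLuint count = *p++;
        GLuint zMin  = *p++;
        p++;                          /* skip max z */
        if (count > 0 && zMin < bestZ) {
            bestZ = zMin;
            nearest = p[count - 1];   /* top of the name stack */
        }
        p += count;
    }
    return nearest;
}
```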
