picking in 3D with ray-tracing using NinevehGL or OpenGL i-phone


Question

I couldn't find a correct and understandable explanation of picking in 3D with the ray-tracing method. Has anyone implemented this algorithm in any language? Please share working code directly, because pseudocode cannot be compiled and is generally written with parts missing.

Answer

What you have is a position in 2D on the screen. The first thing to do is convert that point from pixels to normalized device coordinates, ranging from -1 to 1. Then you need to find the line in 3D space that the point represents. For this, you need the transformation matrix (or matrices) that your 3D app uses to create a projection and camera.

Typically you have 3 matrices: projection, view and model. When you specify vertices for an object, they're in "object space". Multiplying by the model matrix gives the vertices in "world space". Multiplying again by the view matrix gives "eye/camera space". Multiplying again by the projection gives "clip space". Clip space has non-linear depth. Adding a Z component to your mouse coordinates puts them in clip space. You can perform the line/object intersection tests in any linear space, so you must at least move the mouse coordinates to eye space, but it's more convenient to perform the intersection tests in world space (or object space depending on your scene graph).

To move the mouse coordinates from clip space to world space, add a Z-component and multiply by the inverse projection matrix and then the inverse camera/view matrix. To create a line, two points along Z will be computed — from and to.

In the following example, I have a list of objects, each with a position and bounding radius. The intersections of course never match perfectly but it works well enough for now. This isn't pseudocode, but it uses my own vector/matrix library. You'll have to substitute your own in places.

vec2f mouse = (vec2f(mousePosition) / vec2f(windowSize)) * 2.0f - 1.0f;
mouse.y = -mouse.y; //origin is top-left and +y mouse is down

mat44 toWorld = (camera.projection * camera.transform).inverse();
//equivalent to camera.transform.inverse() * camera.projection.inverse() but faster

vec4f from = toWorld * vec4f(mouse, -1.0f, 1.0f);
vec4f to = toWorld * vec4f(mouse, 1.0f, 1.0f);

from /= from.w; //perspective divide ("normalize" homogeneous coordinates)
to /= to.w;

int clickedObject = -1;
float minDist = 99999.0f;

for (size_t i = 0; i < objects.size(); ++i)
{
    float t1, t2;
    vec3f direction = to.xyz() - from.xyz();
    if (intersectSphere(from.xyz() - objects[i].position, direction, objects[i].radius, t1, t2))
    {
        //object i has been clicked. probably best to find the minimum t1 (front-most object)
        if (t1 < minDist)
        {
            minDist = t1;
            clickedObject = (int)i;
        }
    }
}

//clicked object is objects[clickedObject]

Instead of intersectSphere, you could use a bounding box or other implicit geometry, or intersect a mesh's triangles (this may require building a kd-tree for performance reasons).


Here's an implementation of the line/sphere intersection (based on the CGSociety wiki link in the code comment). It assumes the sphere is at the origin, so instead of passing from.xyz() as p, give from.xyz() - objects[i].position.

//ray at position p with direction d intersects sphere at (0,0,0) with radius r. returns intersection times along ray t1 and t2
bool intersectSphere(const vec3f& p, const vec3f& d, float r, float& t1, float& t2)
{
    //http://wiki.cgsociety.org/index.php/Ray_Sphere_Intersection
    float A = d.dot(d);
    float B = 2.0f * d.dot(p);
    float C = p.dot(p) - r * r;

    float dis = B * B - 4.0f * A * C;

    if (dis < 0.0f)
        return false;

    float S = sqrt(dis);    

    t1 = (-B - S) / (2.0f * A);
    t2 = (-B + S) / (2.0f * A);
    return true;
}
