Picking in 3D with ray-tracing using NinevehGL or OpenGL (iPhone)

Question

I couldn't find a correct, understandable explanation of picking in 3D using ray-tracing. Has anyone implemented this algorithm in any language? Please share working code directly; since pseudocode can't be compiled, it's generally written with parts missing.

Answer

What you have is a position in 2D on the screen. The first thing to do is convert that point from pixels to normalized device coordinates (-1 to 1). Then you need to find the line in 3D space that the point represents. For this, you need the transformation matrix/matrices your 3D app uses to create the projection and camera.
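The pixel-to-NDC step can be sketched as follows. This is a minimal standalone helper (the name pixelToNDC is illustrative, not from NinevehGL or OpenGL), assuming a top-left window origin with +y pointing down, as is typical for mouse coordinates:

```cpp
#include <cmath>

// Convert a pixel position to normalized device coordinates in [-1, 1].
// Assumes the window origin is top-left with +y pointing down.
void pixelToNDC(float px, float py, float width, float height,
                float& ndcX, float& ndcY)
{
    ndcX = (px / width) * 2.0f - 1.0f;      // [0, w] -> [-1, 1]
    ndcY = -((py / height) * 2.0f - 1.0f);  // flip: NDC +y is up
}
```

The y flip matches the mouse.y negation in the answer's code below.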

Typically you have 3 matrices: projection, view and model. When you specify vertices for an object, they're in "object space". Multiplying by the model matrix gives the vertices in "world space". Multiplying again by the view matrix gives "eye/camera space". Multiplying again by the projection gives "clip space". Clip space has non-linear depth. Adding a Z component to your mouse coordinates puts them in clip space. You can perform the line/object intersection tests in any linear space, so you must at least move the mouse coordinates to eye space, but it's more convenient to perform the intersection tests in world space (or object space depending on your scene graph).
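The chain of spaces above can be made concrete with raw row-major 4x4 matrices (a stand-in for whatever matrix library you use; the function names are illustrative). Here the model matrix is a translation and view/projection are identity, just to keep the arithmetic obvious:

```cpp
// Multiply a row-major 4x4 matrix by a column vector.
void mat4MulVec4(const float m[16], const float v[4], float out[4])
{
    for (int r = 0; r < 4; ++r)
        out[r] = m[r*4+0]*v[0] + m[r*4+1]*v[1]
               + m[r*4+2]*v[2] + m[r*4+3]*v[3];
}

// object space -> world space -> eye space -> clip space
void objectToClip(const float model[16], const float view[16],
                  const float proj[16], const float objPos[4],
                  float clipPos[4])
{
    float world[4], eye[4];
    mat4MulVec4(model, objPos, world); // model matrix: object -> world
    mat4MulVec4(view, world, eye);     // view matrix:  world  -> eye
    mat4MulVec4(proj, eye, clipPos);   // projection:   eye    -> clip
}
```

Unprojecting the mouse runs this chain in reverse, which is why the answer's code multiplies by the inverse matrices.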

To move the mouse coordinates from clip space to world space, add a Z component and multiply by the inverse projection matrix and then the inverse camera/view matrix. To create a line, two points along Z will be computed: from and to.

In the following example, I have a list of objects, each with a position and bounding radius. The intersections of course never match perfectly but it works well enough for now. This isn't pseudocode, but it uses my own vector/matrix library. You'll have to substitute your own in places.

vec2f mouse = (vec2f(mousePosition) / vec2f(windowSize)) * 2.0f - 1.0f;
mouse.y = -mouse.y; //origin is top-left and +y mouse is down

mat44 toWorld = (camera.projection * camera.transform).inverse();
//equivalent to camera.transform.inverse() * camera.projection.inverse() but faster

vec4f from = toWorld * vec4f(mouse, -1.0f, 1.0f);
vec4f to = toWorld * vec4f(mouse, 1.0f, 1.0f);

from /= from.w; //perspective divide ("normalize" homogeneous coordinates)
to /= to.w;

int clickedObject = -1;
float minDist = 99999.0f;

for (size_t i = 0; i < objects.size(); ++i)
{
    float t1, t2;
    vec3f direction = to.xyz() - from.xyz();
    //intersectSphere assumes a sphere at the origin, so shift the ray start by the object's position
    if (intersectSphere(from.xyz() - objects[i].position, direction, objects[i].radius, t1, t2))
    {
        //object i has been clicked. probably best to find the minimum t1 (front-most object)
        if (t1 < minDist)
        {
            minDist = t1;
            clickedObject = (int)i;
        }
    }
}

//clicked object is objects[clickedObject]

Instead of intersectSphere, you could use a bounding box or other implicit geometry, or intersect a mesh's triangles (this may require building a kd-tree for performance reasons).
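For the bounding-box alternative, a common choice is the slab method. Here is a sketch against plain float arrays (intersectAABB is an illustrative name, not from the original answer); division by a zero direction component is left to produce IEEE infinities, which the min/max logic handles for the usual cases:

```cpp
#include <algorithm>

// Ray at position p with direction d against an axis-aligned box
// [boxMin, boxMax]. Returns intersection times along the ray in t1, t2.
bool intersectAABB(const float p[3], const float d[3],
                   const float boxMin[3], const float boxMax[3],
                   float& t1, float& t2)
{
    t1 = -1e30f; t2 = 1e30f;
    for (int i = 0; i < 3; ++i)
    {
        float inv = 1.0f / d[i];                 // +/-inf when d[i] == 0
        float tNear = (boxMin[i] - p[i]) * inv;
        float tFar  = (boxMax[i] - p[i]) * inv;
        if (tNear > tFar) std::swap(tNear, tFar);
        t1 = std::max(t1, tNear);                // latest entry
        t2 = std::min(t2, tFar);                 // earliest exit
        if (t1 > t2) return false;               // slabs don't overlap: miss
    }
    return true;
}
```

It plugs into the same picking loop: call it with from.xyz() and direction and keep the minimum t1, exactly as with the sphere test.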


Here's an implementation of the line/sphere intersection (based on the CGSociety wiki link in the code comment). It assumes the sphere is at the origin, so instead of passing from.xyz() as p, give from.xyz() - objects[i].position.

//ray at position p with direction d intersects sphere at (0,0,0) with radius r. returns intersection times along ray t1 and t2
bool intersectSphere(const vec3f& p, const vec3f& d, float r, float& t1, float& t2)
{
    //http://wiki.cgsociety.org/index.php/Ray_Sphere_Intersection
    float A = d.dot(d);
    float B = 2.0f * d.dot(p);
    float C = p.dot(p) - r * r;

    float dis = B * B - 4.0f * A * C;

    if (dis < 0.0f)
        return false;

    float S = sqrt(dis);    

    t1 = (-B - S) / (2.0f * A);
    t2 = (-B + S) / (2.0f * A);
    return true;
}
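As a sanity check, the same quadratic can be compiled standalone with plain float arrays in place of the answer's vec3f (dot3 is a hypothetical helper standing in for vec3f::dot):

```cpp
#include <cmath>

static float dot3(const float a[3], const float b[3])
{
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

// Same math as the answer's intersectSphere: ray at p with direction d
// against a sphere of radius r at the origin.
bool intersectSphereRaw(const float p[3], const float d[3], float r,
                        float& t1, float& t2)
{
    float A = dot3(d, d);
    float B = 2.0f * dot3(d, p);
    float C = dot3(p, p) - r * r;
    float dis = B * B - 4.0f * A * C;   // discriminant
    if (dis < 0.0f)
        return false;                   // ray misses the sphere
    float S = std::sqrt(dis);
    t1 = (-B - S) / (2.0f * A);         // near hit
    t2 = (-B + S) / (2.0f * A);         // far hit
    return true;
}
```

A ray starting 5 units in front of a unit sphere and aimed at it should enter at t = 4 and exit at t = 6, which makes a convenient hand-checkable case.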
