Handling touch events in a 3D "scene" or Screen to 3D coordinates


Problem description

I'm trying to implement a "whack-a-mole" type game using 3D (OpenGL ES) in Android. For now, I have ONE 3D shape (a spinning cube) on the screen at any given time that represents my "mole". I have a touch event handler in my view which randomly sets some x,y values in my renderer, causing the cube to move around (using glTranslatef()).

I've yet to come across any tutorial or documentation that completely bridges the screen touch events to a 3D scene. I've done a lot of legwork to get to where I'm at but I can't seem to figure this out the rest of the way.

From developer.android.com I'm using what I guess could be considered helper classes for the matrices: MatrixGrabber.java, MatrixStack.java and MatrixTrackingGL.java.

I use those classes in my call to the GLU.gluUnProject method, which is supposed to do the conversion from real screen coordinates to 3D or object coordinates.

Snippet:

    MatrixGrabber mg = new MatrixGrabber();
    int viewport[] = {0, 0, renderer._width, renderer._height};
    mg.getCurrentModelView(renderer.myg);
    mg.getCurrentProjection(renderer.myg);
    float nearCoords[] = { 0.0f, 0.0f, 0.0f, 0.0f };
    float farCoords[] = { 0.0f, 0.0f, 0.0f, 0.0f };
    float x = event.getX();
    float y = event.getY();
    GLU.gluUnProject(x, y, -1.0f, mg.mModelView, 0, mg.mProjection, 0, viewport, 0, nearCoords, 0);
    GLU.gluUnProject(x, y, 1.0f, mg.mModelView, 0, mg.mProjection, 0, viewport, 0, farCoords, 0);

This snippet executes without error, but the output does not look correct. I know the screen has the origin (0,0) at the bottom left. And the 3D scene, at least mine, seems to have the origin right at the middle of the screen like a classic Cartesian system. So I ran my code with screen coordinates of (0, 718), from touching the bottom left. My outputs from the last parameters to gluUnProject are:

Near: {-2.544, 2.927, 2.839, 1.99}

Far: {0.083, 0.802, -0.760, 0.009}

Those numbers don't make any sense to me. My touch event was in the 3rd quadrant, so all my x,y values for near and far should be negative, but they aren't. The gluUnProject documentation doesn't mention any need to convert the screen coordinates. Then again, that same documentation would lead you to believe that Near and Far should have been arrays of size 3, but they have to be of size 4 and I have NO CLUE why.

So, I've got two questions (I'm sure more will come up).

  1. How can I make sure I'm getting the correct near and far coordinates based on the screen coordinates?
  2. Once I have the near and far coordinates, how do I use them to find out whether the line they create intersects an object on screen?

Answer

I remember running into problems with gluUnProject on Android back in my college days (that was in the early days of Android). One of my fellow students figured out that our calculations would get mangled by the 4th dimension in the result of gluUnProject. If I recall correctly, this was documented somewhere, but for some reason I haven't been able to dig that up again. I never dug into the specifics of it, but perhaps what helped us may also be of use to you. It's likely to do with the math we applied...

/**
 * Convert the 4D input into 3D space (or something like that, otherwise the gluUnproject values are incorrect)
 * @param v 4D input
 * @return 3D output
 */
private static float[] fixW(float[] v) { 
    float w = v[3];
    for(int i = 0; i < 4; i++) 
        v[i] = v[i] / w;
    return v;
}
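
For what it's worth, the reason this divide is needed (and the reason the output arrays must be of size 4, which the question ran into) appears to be that Android's GLU.gluUnProject hands back a homogeneous coordinate (x, y, z, w) and, at least on the Android versions this answer dates from, does not perform the perspective divide for you, unlike the C version of gluUnProject. Dividing the first three components by w turns the result back into an ordinary 3D point.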

We actually used the above method to fix up the gluUnProject results and do a pick/touch/select action on spherical objects in 3D space. The code below may provide a guide on how to do this. It's little more than casting a ray and doing a ray-sphere intersection test.

A few additional notes that may make the code below easier to understand:

  • Vector3f is a custom implementation of a 3D vector based on 3 float values and implements the usual vector operations (a minimal sketch of it follows after this list).
  • shootTarget is the spherical object in 3D space.
  • The 0 in calls like getXZBoundsInWorldspace(0) and getPosition(0) are simply an index. We implemented 3D model animations and the index determines which 'frame/pose' of the model to return. Since we ended up doing this specific hit test on a non-animated object, we always used the first frame.
  • Concepts.w and Concepts.h are simply the width and height of the screen in pixels - or perhaps differently said for a full screen app: the screen's resolution.
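
The answer never shows the Vector3f implementation itself, so here is a minimal sketch reconstructed from how the code below uses it. Everything in it is an assumption from context; in particular, min is taken to be component-wise subtraction, since that is what the ray-sphere math requires.

/**
 * Minimal sketch of the custom Vector3f described above (reconstructed
 * from usage, not the original implementation).
 */
public static class Vector3f {
    final float x, y, z;

    public Vector3f(float x, float y, float z) {
        this.x = x; this.y = y; this.z = z;
    }

    /** Dot product of this vector and v. */
    public float dot(Vector3f v) {
        return x * v.x + y * v.y + z * v.z;
    }

    /** This vector scaled by factor s. */
    public Vector3f mul(float s) {
        return new Vector3f(x * s, y * s, z * s);
    }

    /** This vector minus v (assumed: 'min' means component-wise subtraction). */
    public Vector3f min(Vector3f v) {
        return new Vector3f(x - v.x, y - v.y, z - v.z);
    }

    /** This vector scaled to unit length. */
    public Vector3f normalize() {
        float len = (float) Math.sqrt(dot(this));
        return new Vector3f(x / len, y / len, z / len);
    }
}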


/**
 * Checks if the ray, casted from the pixel touched on-screen, hits
 * the shoot target (a sphere). 
 * @param x
 * @param y
 * @return Whether the target is hit
 */
public static boolean rayHitsTarget(float x, float y) {
    float[] bounds = Level.shootTarget.getXZBoundsInWorldspace(0);
    float radius = (bounds[1] - bounds[0]) / 2f;
    Ray ray = shootRay(x, y);
    // Coefficients of the quadratic a*t^2 + b*t + c = 0 obtained by plugging
    // the ray into the sphere equation ('min' is vector subtraction).
    float a = ray.direction.dot(ray.direction);  // = 1, direction is normalized
    float b = ray.direction.mul(2).dot(ray.near.min(Level.shootTarget.getPosition(0)));
    float c = (ray.near.min(Level.shootTarget.getPosition(0))).dot(ray.near.min(Level.shootTarget.getPosition(0))) - (radius * radius);

    // A non-negative discriminant means the ray's line intersects the sphere.
    return (((b * b) - (4 * a * c)) >= 0);
}
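
To unpack the math a little: writing a point on the ray as near + t * direction and substituting it into the sphere equation |p - center|^2 = radius^2 gives the quadratic a*t^2 + b*t + c = 0 with exactly the coefficients computed above. The discriminant b^2 - 4ac is non-negative precisely when that quadratic has a real root, i.e. when the line through the ray hits the sphere. Note that t itself is never checked, so this effectively tests an infinite line; a sphere behind the camera would also count as a hit.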

/**
 * Casts a ray from screen coordinates x and y.
 * @param x
 * @param y
 * @return Ray fired from screen coordinate (x,y)
 */
public static Ray shootRay(float x, float y){
    float[] resultNear = {0,0,0,1};
    float[] resultFar = {0,0,0,1};

    float[] modelViewMatrix = new float[16];
    Render.viewStack.getMatrix(modelViewMatrix, 0);

    float[] projectionMatrix = new float[16];
    Render.projectionStack.getMatrix(projectionMatrix, 0);

    int[] viewport = { 0, 0, Concepts.w, Concepts.h };

    // Flip y: Android touch coordinates start at the top left, while OpenGL
    // window coordinates start at the bottom left.
    float x1 = x;
    float y1 = viewport[3] - y;

    // winZ is a normalized window depth (0 = near plane, 1 = far plane); any
    // two distinct depths still yield two points spanning the same pick line.
    GLU.gluUnProject(x1, y1, 0.01f, modelViewMatrix, 0, projectionMatrix, 0, viewport, 0, resultNear, 0);
    GLU.gluUnProject(x1, y1, 50f, modelViewMatrix, 0, projectionMatrix, 0, viewport, 0, resultFar, 0);
    //transform the results from 4d to 3d coordinates.
    resultNear = fixW(resultNear);
    resultFar = fixW(resultFar);
    //create the vector of the ray.
    Vector3f rayDirection = new Vector3f(resultFar[0]-resultNear[0], resultFar[1]-resultNear[1], resultFar[2]-resultNear[2]);
    //normalize the ray.
    rayDirection = rayDirection.normalize();
    return new Ray(rayDirection, resultNear, resultFar);
}

/**
 * @author MH
 * Provides some accessors for a casted ray.
 */
public static class Ray {
    Vector3f direction;
    Vector3f near;
    Vector3f far;

    /**
     * Casts a new ray based on the given direction, near and far params. 
     * @param direction
     * @param near
     * @param far
     */
    public Ray(Vector3f direction, float[] near, float[] far){
        this.direction = direction;
        this.near = new Vector3f(near[0], near[1], near[2]);
        this.far = new Vector3f(far[0], far[1], far[2]);
    }
}
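
Wiring this into the game is then just a matter of feeding the touch coordinates to rayHitsTarget from the view's touch handler. A minimal sketch, with a made-up GLSurfaceView subclass and hit reaction purely for illustration (it assumes rayHitsTarget is reachable from this class, e.g. via a static import):

import android.content.Context;
import android.opengl.GLSurfaceView;
import android.view.MotionEvent;

public class GameSurfaceView extends GLSurfaceView {

    public GameSurfaceView(Context context) {
        super(context);
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        if (event.getAction() == MotionEvent.ACTION_DOWN) {
            // Cast a ray from the touched pixel and test it against the mole.
            if (rayHitsTarget(event.getX(), event.getY())) {
                // Hit: relocate the mole, update the score, etc.
            }
        }
        return true;
    }
}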
