Handling touch events in a 3D "scene" or Screen to 3D coordinates


Problem Description

I'm trying to implement a "whack-a-mole" type game using 3D (OpenGL ES) in Android. For now, I have ONE 3D shape (spinning cube) at the screen at any given time that represents my "mole". I have a touch event handler in my view which randomly sets some x,y values in my renderer, causing the cube to move around (using glTranslatef()).

I've yet to come across any tutorial or documentation that completely bridges the screen touch events to a 3D scene. I've done a lot of legwork to get to where I'm at but I can't seem to figure this out the rest of the way.

From developer.android.com I'm using what I guess could be considered helper classes for the Matrices: MatrixGrabber.java, MatrixStack.java and MatrixTrackingGL.java (from the ApiDemos SpriteText sample: http://developer.android.com/resources/samples/ApiDemos/src/com/example/android/apis/graphics/spritetext/index.html).

I use those classes in my GLU.gluUnProject call, which is supposed to do the conversion from the real screen coordinates to the 3D or object coordinates.

Snippet:

    MatrixGrabber mg = new MatrixGrabber();
    int viewport[] = {0, 0, renderer._width, renderer._height};
    mg.getCurrentModelView(renderer.myg);
    mg.getCurrentProjection(renderer.myg);
    float nearCoords[] = { 0.0f, 0.0f, 0.0f, 0.0f };
    float farCoords[] = { 0.0f, 0.0f, 0.0f, 0.0f };
    float x = event.getX();
    float y = event.getY();
    GLU.gluUnProject(x, y, -1.0f, mg.mModelView, 0, mg.mProjection, 0, viewport, 0, nearCoords, 0);
    GLU.gluUnProject(x, y, 1.0f, mg.mModelView, 0, mg.mProjection, 0, viewport, 0, farCoords, 0);

This snippet executes without error but the output does not look correct. I know the screen has the origin (0,0) at the bottom left. And the 3D scene, at least mine, seems to have the origin right at the middle of the screen like a classic cartesian system. So I ran my code where the screen coordinates are (0, 718) from touching the bottom left. My outputs from the last parameters to gluUnProject are:

Near: {-2.544, 2.927, 2.839, 1.99}

Far: {0.083, 0.802, -0.760, 0.009}

Those numbers don't make any sense to me. My touch event was in the 3rd quadrant, so all my x,y values for near and far should be negative, but they aren't. The gluUnProject documentation doesn't mention any need to convert the screen coordinates. Then again, that same documentation would lead you to believe that Near and Far should have been arrays of size 3, but they have to be of size 4 and I have NO CLUE why.

So, I've got two questions (I'm sure more will come up).

  1. How can I make sure I am getting the proper near and far coordinates based on the screen coordinates?
  2. Once I have the near and far coordinates, how do I use them to find out whether the line they create intersects an object on the screen?

Solution

I remember running into problems with gluUnProject on Android back in my college days (that was in the early days of Android). One of my fellow students figured out that our calculations would get mangled by the 4th dimension in the result of gluUnProject. If I recall correctly, this was documented somewhere, but for some reason I haven't been able to dig that up again. I never dug into the specifics of it, but perhaps what helped us may also be of use to you. It's likely to do with the math we applied...

/**
 * Convert the 4D input into 3D space (or something like that, otherwise the gluUnproject values are incorrect)
 * @param v 4D input
 * @return 3D output
 */
private static float[] fixW(float[] v) { 
    float w = v[3];
    for(int i = 0; i < 4; i++) 
        v[i] = v[i] / w;
    return v;
}
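
For what it's worth, that 4th value is the homogeneous coordinate w: gluUnProject essentially multiplies the window coordinates with the inverted projection * modelview matrix and, at least on the Android versions we were using, hands back the raw 4-component result without doing the perspective divide itself (which is also why the output arrays have to be of size 4). Applied to the snippet from the question, the fix would look roughly like this (a minimal sketch reusing the question's nearCoords/farCoords arrays):

// After the two gluUnProject calls from the question:
nearCoords = fixW(nearCoords); // divides every component by nearCoords[3]
farCoords = fixW(farCoords);   // same for the far point
// nearCoords[0..2] and farCoords[0..2] are now usable 3D points on the near/far plane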

We actually used this fixW method to fix up the gluUnProject results and do a pick/touch/select action on spherical objects in 3D space. The code below may provide a guide on how to do this. It's little more than casting a ray and doing a ray-sphere intersection test.

A few additional notes that may make the code below easier to understand:

  • Vector3f is a custom implementation of a 3D vector based on 3 float values and implements the usual vector operations (a minimal stand-in sketch follows these notes).
  • shootTarget is the spherical object in 3D space.
  • The 0 in calls like getXZBoundsInWorldspace(0) and getPosition(0) is simply an index. We implemented 3D model animations and the index determines which 'frame/pose' of the model to return. Since we ended up doing this specific hit test on a non-animated object, we always used the first frame.
  • Concepts.w and Concepts.h are simply the width and height of the screen in pixels - or perhaps differently said for a full screen app: the screen's resolution.

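Vector3f isn't an Android class, so if you want to try the code below you'll need your own. A minimal stand-in could look something like this (an assumption on my part, not the original class; note that min is used in the hit test as "minus", i.e. component-wise subtraction):

/** Minimal stand-in for the custom Vector3f used below (not the original implementation). */
public static class Vector3f {
    final float x, y, z;

    public Vector3f(float x, float y, float z) { this.x = x; this.y = y; this.z = z; }

    /** Dot product. */
    public float dot(Vector3f o) { return x * o.x + y * o.y + z * o.z; }

    /** Scalar multiplication. */
    public Vector3f mul(float s) { return new Vector3f(x * s, y * s, z * s); }

    /** Component-wise subtraction ("minus"), matching how min is used in rayHitsTarget. */
    public Vector3f min(Vector3f o) { return new Vector3f(x - o.x, y - o.y, z - o.z); }

    /** This vector scaled to unit length. */
    public Vector3f normalize() {
        float len = (float) Math.sqrt(x * x + y * y + z * z);
        return new Vector3f(x / len, y / len, z / len);
    }
}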

/**
 * Checks if the ray, cast from the pixel touched on-screen, hits
 * the shoot target (a sphere).
 * @param x
 * @param y
 * @return Whether the target is hit
 */
public static boolean rayHitsTarget(float x, float y) {
    float[] bounds = Level.shootTarget.getXZBoundsInWorldspace(0);
    float radius = (bounds[1] - bounds[0]) / 2f;
    Ray ray = shootRay(x, y);
    // Quadratic coefficients of the ray-sphere intersection, where min() is the
    // Vector3f difference (ray origin minus sphere center).
    float a = ray.direction.dot(ray.direction);  // = 1, since the direction is normalized
    float b = ray.direction.mul(2).dot(ray.near.min(Level.shootTarget.getPosition(0)));
    float c = (ray.near.min(Level.shootTarget.getPosition(0))).dot(ray.near.min(Level.shootTarget.getPosition(0))) - (radius * radius);

    // The ray hits the sphere if the discriminant is non-negative.
    return ((b * b) - (4 * a * c)) >= 0;

}
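
Note that checking only the discriminant also reports a hit when the sphere lies behind the ray origin. If that matters for your game, you can additionally solve the quadratic for the nearest intersection distance and require it to be non-negative. A sketch of that variant (rayHitsTargetInFront is a name I made up, not part of the original code):

public static boolean rayHitsTargetInFront(float x, float y) {
    float[] bounds = Level.shootTarget.getXZBoundsInWorldspace(0);
    float radius = (bounds[1] - bounds[0]) / 2f;
    Ray ray = shootRay(x, y);
    Vector3f oc = ray.near.min(Level.shootTarget.getPosition(0)); // ray origin minus sphere center
    float a = ray.direction.dot(ray.direction);                   // = 1 for a normalized direction
    float b = ray.direction.mul(2).dot(oc);
    float c = oc.dot(oc) - (radius * radius);
    float discriminant = (b * b) - (4 * a * c);
    if (discriminant < 0) return false;                           // the ray misses the sphere entirely
    float t = (-b - (float) Math.sqrt(discriminant)) / (2 * a);   // distance to the nearest intersection
    return t >= 0;                                                // only count hits in front of the ray origin
}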

/**
 * Casts a ray from screen coordinates x and y.
 * @param x
 * @param y
 * @return Ray fired from screen coordinate (x,y)
 */
public static Ray shootRay(float x, float y){
    float[] resultNear = {0,0,0,1};
    float[] resultFar = {0,0,0,1};

    float[] modelViewMatrix = new float[16];
    Render.viewStack.getMatrix(modelViewMatrix, 0);

    float[] projectionMatrix = new float[16];
    Render.projectionStack.getMatrix(projectionMatrix, 0);

    int[] viewport = { 0, 0, Concepts.w, Concepts.h };

    float x1 = x;
    float y1 = viewport[3] - y;

    GLU.gluUnProject(x1, y1, 0.01f, modelViewMatrix, 0, projectionMatrix, 0, viewport, 0, resultNear, 0);
    GLU.gluUnProject(x1, y1, 50f, modelViewMatrix, 0, projectionMatrix, 0, viewport, 0, resultFar, 0);
    //transform the results from 4d to 3d coordinates.
    resultNear = fixW(resultNear);
    resultFar = fixW(resultFar);
    //create the vector of the ray.
    Vector3f rayDirection = new Vector3f(resultFar[0]-resultNear[0], resultFar[1]-resultNear[1], resultFar[2]-resultNear[2]);
    //normalize the ray.
    rayDirection = rayDirection.normalize();
    return new Ray(rayDirection, resultNear, resultFar);
}

/**
 * @author MH
 * Provides some accessors for a cast ray.
 */
public static class Ray {
    Vector3f direction;
    Vector3f near;
    Vector3f far;

    /**
     * Casts a new ray based on the given direction, near and far params. 
     * @param direction
     * @param near
     * @param far
     */
    public Ray(Vector3f direction, float[] near, float[] far){
        this.direction = direction;
        this.near = new Vector3f(near[0], near[1], near[2]);
        this.far = new Vector3f(far[0], far[1], far[2]);
    }
}
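
To tie this back to the question: the touch handler only has to forward the raw event coordinates, since shootRay() already flips y into GL window coordinates. Roughly how this might be hooked up inside a GLSurfaceView (or View) subclass, where onMoleWhacked() is a placeholder for whatever your game does on a hit:

@Override
public boolean onTouchEvent(MotionEvent event) {
    if (event.getAction() == MotionEvent.ACTION_DOWN) {
        // Pass the raw screen coordinates; shootRay() handles the y-axis flip.
        if (rayHitsTarget(event.getX(), event.getY())) {
            onMoleWhacked(); // placeholder: react to the mole being hit
        }
    }
    return true;
}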
