gluUnProject Android OpenGL ES 1.1 Usage


Problem description


I'm trying to use gluUnProject to convert window coords to world coords. I'm not trying to get a working sample in the emulator or on older Android systems (with OpenGL ES v1.0), and this is not a question about GL function availability. I'm working on a real device with OpenGL ES 1.1, where the glGet functions return non-zero results.

Here is the sample:

public Vector3 translateScreenCoordsToWorld(Vector2 screenCoords) {
    gl.glLoadIdentity();

    final Matrix4x4 modelViewMatrix = new Matrix4x4();
    final Matrix4x4 projectMatrix = new Matrix4x4();

    int[] viewVectorParams = new int[4];
    gl.glGetIntegerv(GL11.GL_VIEWPORT, viewVectorParams, 0);
    gl.glGetFloatv(GL11.GL_MODELVIEW_MATRIX, modelViewMatrix.buffer, 0);
    gl.glGetFloatv(GL11.GL_PROJECTION_MATRIX, projectMatrix.buffer, 0);

    float[] output = new float[4];
    GLU.gluUnProject(
        screenCoords.x, screenCoords.y, 0,
        modelViewMatrix.buffer, 0,
        projectMatrix.buffer, 0,
        viewVectorParams, 0,
        output, 0);

    return new Vector3(output[0], output[1], output[2]);
}

Matrix4x4 is just a wrapper around a float[] buffer.

With this function I'm trying to create a plane that fills the whole screen, or to detect the maximum world coords for the current projection matrix, but it's not working at all, because I'm not really sure that I'm using these functions correctly.

For example, when I try to run translateScreenCoordsToWorld(new Vector2(480, 800)), it returns very small coordinate values: Vector3(0.27f, 0.42f, -1.0f).
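For reference, the first thing gluUnProject does internally is map window coordinates back into normalized device coordinates in [-1, 1], which is why values close to ±1 like the ones above suggest a point that is still near the NDC cube rather than in world units. A minimal sketch of that first step (plain Python, not the Android implementation):

```python
def window_to_ndc(win_x, win_y, win_z, viewport):
    """Map window coordinates to normalized device coordinates in [-1, 1]."""
    x0, y0, width, height = viewport  # as returned by glGetIntegerv(GL_VIEWPORT)
    ndc_x = 2.0 * (win_x - x0) / width - 1.0
    ndc_y = 2.0 * (win_y - y0) / height - 1.0
    ndc_z = 2.0 * win_z - 1.0  # winZ = 0 is the near plane, winZ = 1 the far plane
    return (ndc_x, ndc_y, ndc_z)

# On a 480x800 viewport the screen corner maps to the NDC corner:
print(window_to_ndc(480, 800, 0, (0, 0, 480, 800)))  # (1.0, 1.0, -1.0)
```

The inverse modelview/projection transform is then applied to this NDC point; with winZ = 0 the result always lies on the near plane.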

Could anyone provide a good sample usage of gluUnProject, for GL_PROJECTION mode with a positioned camera?

Update: Thanks for the good links. But it's still not working for me :( Now my function looks like this:

public Vector3 translateScreenCoordsToWorld(Vector2 screenCoords) {
    float winX = screenCoords.x, winY = screenCoords.y, winZ = 0;
    winY = (float) currentViewVectorParams[3] - winY;

    float[] output = new float[4];
    GLU.gluUnProject(
        winX, winY, winZ,
        currentModelViewMatrix.buffer, 0,
        currentProjectMatrix.buffer, 0,
        currentViewVectorParams, 0,
        output, 0
    );

    Vector3 nearPlane = new Vector3(output[0], output[1], output[2]);

    winZ = 1.0f;
    GLU.gluUnProject(
        winX, winY, winZ,
        currentModelViewMatrix.buffer, 0,
        currentProjectMatrix.buffer, 0,
        currentViewVectorParams, 0,
        output, 0
    );
    Vector3 farPlane = new Vector3(output[0], output[1], output[2]);

    farPlane.sub(nearPlane);
    farPlane.div(nearPlane.length());

    float dot1, dot2;

    Vector3 pointInPlane = new Vector3(), pointPlaneNormal = new Vector3(0, 0, -1);
    pointInPlane.sub(nearPlane);

    dot1 = (pointPlaneNormal.x * pointInPlane.x) + (pointPlaneNormal.y * pointInPlane.y) + (pointPlaneNormal.z * pointInPlane.z);
    dot2 = (pointPlaneNormal.x * farPlane.x) + (pointPlaneNormal.y * farPlane.y) + (pointPlaneNormal.z * farPlane.z);

    float t = dot1 / dot2;
    farPlane.mul(t);

    return farPlane.add(nearPlane);
}

And this is where my camera is configured:

public void updateCamera() {
    Camera camera = scene.getCamera();
    GLU.gluLookAt(gl,
        camera.position.x, camera.position.y, camera.position.z,
        camera.target.x, camera.target.y, camera.target.z,
        camera.upAxis.x, camera.upAxis.y, camera.upAxis.z
    );

    gl.glGetIntegerv(GL11.GL_VIEWPORT, currentViewVectorParams, 0);
    gl.glGetFloatv(GL11.GL_MODELVIEW_MATRIX, currentModelViewMatrix.buffer, 0);
    gl.glGetFloatv(GL11.GL_PROJECTION_MATRIX, currentProjectMatrix.buffer, 0);
}

The camera is configured with the following coords:

camera.position = { 0, 0, 65 };
camera.target   = { 0, 0, 0 };
camera.upAxis   = { 0, 1, 0 };
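For a camera like this, gluLookAt builds a view matrix from the forward, side, and up axes and then translates the eye to the origin, so the target should land 65 units down the −z axis in eye space. A minimal sketch of the transform gluLookAt produces (plain Python, same eye/target/up as above):

```python
import math

def look_at(eye, target, up):
    """Build the view transform gluLookAt produces: world point -> eye space."""
    sub = lambda a, b: tuple(x - y for x, y in zip(a, b))
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    cross = lambda a, b: (a[1] * b[2] - a[2] * b[1],
                          a[2] * b[0] - a[0] * b[2],
                          a[0] * b[1] - a[1] * b[0])
    norm = lambda v: tuple(x / math.sqrt(dot(v, v)) for x in v)

    f = norm(sub(target, eye))  # forward axis (eye looks down -z)
    s = norm(cross(f, up))      # side (right) axis
    u = cross(s, f)             # recomputed up axis

    def transform(p):
        d = sub(p, eye)
        return (dot(s, d), dot(u, d), -dot(f, d))
    return transform

view = look_at((0, 0, 65), (0, 0, 0), (0, 1, 0))
print(view((0, 0, 0)))  # the target ends up at (0.0, 0.0, -65.0) in eye space
```

A quick sanity check like this against the GL_MODELVIEW_MATRIX grabbed in updateCamera can confirm the matrix being fed to gluUnProject is the one you expect.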

Solution

Okay, normally when using gluUnProject you are trying to get the world-space coordinate of a pixel on screen. You need three pieces of information (other than the matrices) to put into gluUnProject: a screen x coordinate, a screen y coordinate, and the depth-buffer value corresponding to that pixel. The process normally goes like this:

  1. Draw a frame, so all the depth buffer information is ready.
  2. Grab the viewport/matrices/screen coords.
  3. Invert the screen y coordinate.
  4. Read the depth value at the given pixel. This is effectively a normalised z coordinate.
  5. Input the x, y and z into gluUnProject.
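The steps above can be sketched end-to-end. The block below (plain Python, not the Android API; matrices as row-major nested lists rather than GLU's column-major flat arrays) maps window coordinates to NDC and applies the inverse of projection × modelview, which is what gluUnProject does internally. With identity matrices the world point simply equals the NDC point, which makes the result easy to check:

```python
def unproject(win_x, win_y, win_z, modelview, projection, viewport):
    """Invert projection * modelview, the way gluUnProject does."""
    # 1. Window coords -> normalized device coords in [-1, 1]
    x0, y0, w, h = viewport
    ndc = [2.0 * (win_x - x0) / w - 1.0,
           2.0 * (win_y - y0) / h - 1.0,
           2.0 * win_z - 1.0,
           1.0]
    # 2. Combined matrix m = projection * modelview
    m = [[sum(projection[i][k] * modelview[k][j] for k in range(4))
          for j in range(4)] for i in range(4)]
    # 3. Invert m by Gauss-Jordan elimination on the augmented matrix [m | I]
    aug = [row[:] + [float(i == j) for j in range(4)] for i, row in enumerate(m)]
    for col in range(4):
        pivot = max(range(col, 4), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        aug[col] = [v / aug[col][col] for v in aug[col]]
        for r in range(4):
            if r != col:
                aug[r] = [a - aug[r][col] * b for a, b in zip(aug[r], aug[col])]
    inv = [row[4:] for row in aug]
    # 4. Transform NDC back to world space and divide by w
    out = [sum(inv[i][j] * ndc[j] for j in range(4)) for i in range(4)]
    return tuple(v / out[3] for v in out[:3])

identity = [[float(i == j) for j in range(4)] for i in range(4)]
# Centre of a 480x800 viewport at winZ = 0.5 is NDC (0, 0, 0) -> world (0, 0, 0):
print(unproject(240, 400, 0.5, identity, identity, (0, 0, 480, 800)))
```

Feeding known matrices through a reference like this is a quick way to tell whether the odd results come from the unproject call itself or from the matrices being read out of GL.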

This ends up giving the world-space coordinate of the fragment at the given pixel. A much more detailed guide can be found here. Anyway, I can see two possible mistakes in your code. The first is that normally you have to invert the y screen coordinate (the tutorial at the link details why); the second is that you are always passing 0 as the z value to gluUnProject. Doing this un-projects the vertex as if it were on the near plane. Is this what you want?

Anyway, forgive me if I misread your question.
