How does the gluProject function work? I can't understand it


Problem Description


I need to show a square polygon at 100% of the width of the screen, so I suppose that I must zoom it (along the Z axis) until the polygon borders are touching the screen borders.

I'm trying to achieve this using gluProject to project a coordinate in 3D into a 2D screen coordinate. If the screen coordinate is either 0 or matches the width or height, then it is touching a screen border.

The problem is that something is going wrong: the outputCoords array returned by gluProject is giving me these values: 0, 0, 0.5, but my square is centered on the screen, and with Z=-5.0f!

I don't understand these values...

This is the code I'm using to obtain the 2D projection of my square polygon on the screen:

This code is in the onSurfaceCreated method of the GLSurfaceView class. Does it have to be put in another method? Where?

/////////////// NEW CODE FOR SCALING THE AR IMAGE TO THE DESIRED WIDTH /////////////////

        mg.getCurrentModelView(gl);  
        mg.getCurrentProjection(gl);   

        float [] modelMatrix = new float[16];
        float [] projMatrix = new float[16];        
        modelMatrix=mg.mModelView;
        projMatrix=mg.mProjection;
        int [] mView = new int[4];
        // Fill this with your window width and height
        mView[0] = 0;
        mView[1] = 0;
        mView[2] = 800; //width
        mView[3] = 480; //height
        // Make sure you have 3 components in this array even if the screen only needs 2
        float [] outputCoords = new float[3];
        // objX, objY, objZ are the coordinates of one of the borders
        GLU.gluProject(-1.0f, -1.0f, 0.0f, modelMatrix, 0, projMatrix, 0, mView, 0, outputCoords, 0);

This is my square class:

public class Square {
//Vertex buffer
private FloatBuffer vertexBuffer;
//Texture coordinate buffer
private FloatBuffer textureBuffer;
//Texture handles
private int[] textures = new int[3];
//The item to render
private Bitmap image;
//Vertex definition

private float vertices[] = 
{ 
    -1.0f, -1.0f, 0.0f,     //Bottom Left
    1.0f, -1.0f, 0.0f,      //Bottom Right
    -1.0f, 1.0f, 0.0f,      //Top Left
    1.0f, 1.0f, 0.0f        //Top Right
};

private float texture[] =
{
    //Mapping coordinates for the vertices
    0.0f, 1.0f,
    1.0f, 1.0f,
    0.0f, 0.0f,
    1.0f, 0.0f
};
//Initialize the buffers
public Square(Bitmap image) {
    ByteBuffer byteBuf = ByteBuffer.allocateDirect(vertices.length * 4);
    byteBuf.order(ByteOrder.nativeOrder());
    vertexBuffer = byteBuf.asFloatBuffer();
    vertexBuffer.put(vertices);
    vertexBuffer.position(0);

    byteBuf = ByteBuffer.allocateDirect(texture.length * 4);
    byteBuf.order(ByteOrder.nativeOrder());
    textureBuffer = byteBuf.asFloatBuffer();
    textureBuffer.put(texture);
    textureBuffer.position(0);

    this.image=image;
} 
//Draw function
public void draw(GL10 gl) {
    gl.glFrontFace(GL10.GL_CCW);
    //gl.glEnable(GL10.GL_BLEND);
    //Bind our only previously generated texture in this case
    gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
    //Point to our vertex buffer
    gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
    gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
    //Enable vertex buffer
    gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    //Draw the vertices as triangle strip
    gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, vertices.length / 3);
    //Disable the client state before leaving
    gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    //gl.glDisable(GL10.GL_BLEND);      
}
//Texture loading
public void loadGLTexture(GL10 gl, Context context) {
    //Generate a texture handle
    gl.glGenTextures(1, textures, 0);
    //and assign it to our array
    gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
    //Create texture filters
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST);
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
    //Other possible texture parameters: GL10.GL_CLAMP_TO_EDGE
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_REPEAT);
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_REPEAT);     
    /*
    String imagePath = "radiocd5.png";
    AssetManager mngr = context.getAssets();
    InputStream is=null;
    try {
        is = mngr.open(imagePath);
    } catch (IOException e1) {  e1.printStackTrace();   }
    */
    //Get the texture from the Android resource directory
    InputStream is=null;
    /*
    if (item.equals("rim"))
        is = context.getResources().openRawResource(R.drawable.rueda);
    else if (item.equals("selector"))
        is = context.getResources().openRawResource(R.drawable.selector);
    */      
    /*
    is = context.getResources().openRawResource(resourceId);
    Bitmap bitmap = null;
    try {
        bitmap = BitmapFactory.decodeStream(is);
    } finally {
        try {
            is.close();
            is = null;
        } catch (IOException e) {
        }
    }
    */
    Bitmap bitmap =image;       
    //The following code resizes images that are larger than 256x256.
    int newW=bitmap.getWidth();
    int newH=bitmap.getHeight();
    float fact;
    if (newH>256 || newW>256)
    {
        if (newH>256)
        {
            fact=(float)255/(float)newH; //factor to multiply by to bring the size down to 256
            newH=(int)(newH*fact); //height reduced by the required factor
            newW=(int)(newW*fact); //width reduced by the required factor
        }
        if (newW>256)
        {
            fact=(float)255/(float)newW; //factor to multiply by to bring the size down to 256
            newH=(int)(newH*fact); //height reduced by the required factor
            newW=(int)(newW*fact); //width reduced by the required factor
        }
        bitmap=Bitmap.createScaledBitmap(bitmap, newW, newH, true);
    }       
    //The following code converts non-power-of-two images into power-of-two (POT) images
    //by placing the NPOT bitmap inside a POT bitmap, so that no white textures appear.
    int nextPot=256;
    int h = bitmap.getHeight();
    int w = bitmap.getWidth();
    int offx=(nextPot-w)/2; //offset from the left, so the image is centered in the new POT image
    int offy=(nextPot-h)/2; //offset from the top, so the image is centered in the new POT image
    Bitmap bitmap2 = Bitmap.createBitmap(nextPot, nextPot, Bitmap.Config.ARGB_8888); //creates a transparent bitmap thanks to ARGB_8888
    Canvas comboImage = new Canvas(bitmap2);
    comboImage.drawBitmap(bitmap, offx, offy, null);
    comboImage.save();

    //Use Android GLUtils to specify a two-dimensional texture for our bitmap
    GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap2, 0);

    //Check whether the GL context is version 1.1 and generate the mipmaps via flag; if not, call our own implementation
    if(gl instanceof GL11) {
        gl.glTexParameterf(GL11.GL_TEXTURE_2D, GL11.GL_GENERATE_MIPMAP, GL11.GL_TRUE);
        GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap2, 0);
    } else {
        buildMipmap(gl, bitmap2);
    }   
    //Clean up the bitmaps
    bitmap.recycle();
    bitmap2.recycle();
}
//Our MipMap implementation. Scale the original bitmap down by a factor of 2 and assign it as the next mipmap level
private void buildMipmap(GL10 gl, Bitmap bitmap) {
    int level = 0;
    int height = bitmap.getHeight();
    int width = bitmap.getWidth();
    while(height >= 1 || width >= 1) {
        GLUtils.texImage2D(GL10.GL_TEXTURE_2D, level, bitmap, 0);
        if(height == 1 || width == 1) {
            break;
        }
        level++;
        height /= 2;
        width /= 2;
        Bitmap bitmap2 = Bitmap.createScaledBitmap(bitmap, width, height, true);
        bitmap.recycle();
        bitmap = bitmap2;
    }
}
}

Solution

gluProject does exactly what the fixed-function transformation pipeline would do, too:

  1. The 3D vertex is expanded to homogeneous coordinates by appending a 1 as fourth coordinate: v[3]=1.

  2. Then this homogeneous vertex is multiplied by the modelview matrix and the projection matrix: v'=P*M*v.

  3. Then comes the perspective division. By dividing by the fourth coordinate we account for perspective distortion (if you have an orthographic projection, e.g. using glOrtho, then v'[3]==1 and there is no perspective distortion): v"=v'/v'[3].

  4. Now everything in your viewing volume (the visible area of your scene) has been transformed into normalized device coordinates, the [-1,1]-cube. So what needs to be done is transform this into screen coordinates [0,w] x [0,h]: x=w * (v"[0]+1) / 2 and y = h * (v"[1]+1) / 2. And finally, the z-coordinate is transformed from [-1,1] to [0,1] to give the normalized depth value that is written into the depth buffer: z = (v"[2]+1) / 2. (A small code sketch of these four steps follows below.)
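To make those four steps concrete, here is a minimal plain-Java sketch of the same math. It is my own illustration, not the actual GLU source, and the helper names project and mulMatVec are made up. It assumes column-major float[16] matrices, as delivered by the MatrixGrabber in the question, and a viewport given as {x, y, width, height}:

static float[] project(float objX, float objY, float objZ,
                       float[] model, float[] proj, int[] view) {
    // 1. expand to homogeneous coordinates: v = (objX, objY, objZ, 1)
    float[] v = { objX, objY, objZ, 1.0f };

    // 2. v' = P * M * v (modelview first, then projection)
    float[] eye  = mulMatVec(model, v);
    float[] clip = mulMatVec(proj, eye);

    // 3. perspective division
    if (clip[3] == 0.0f) return null; // degenerate point, cannot be projected
    float ndcX = clip[0] / clip[3];
    float ndcY = clip[1] / clip[3];
    float ndcZ = clip[2] / clip[3];

    // 4. viewport transform: [-1,1] NDC -> window coordinates
    //    (view[0] and view[1] are 0 in the question, so this matches the
    //    formulas x = w*(v"[0]+1)/2 and y = h*(v"[1]+1)/2 above)
    float winX = view[0] + view[2] * (ndcX + 1.0f) / 2.0f;
    float winY = view[1] + view[3] * (ndcY + 1.0f) / 2.0f;
    float winZ = (ndcZ + 1.0f) / 2.0f; // normalized depth in [0,1]
    return new float[] { winX, winY, winZ };
}

// column-major 4x4 matrix times a 4-component vector
static float[] mulMatVec(float[] m, float[] v) {
    float[] r = new float[4];
    for (int i = 0; i < 4; i++) {
        r[i] = m[i]*v[0] + m[4 + i]*v[1] + m[8 + i]*v[2] + m[12 + i]*v[3];
    }
    return r;
}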

So the key to understanding what happens to the z value is to realize that the distance to the camera (the z value in view space) is first transformed into the [-1,1] range by the projection matrix, depending on the near-far range (the near and far values you put into glOrtho, glFrustum or gluPerspective). Then this normalized value is transformed into the [0,1] range to give the final depth value that gets written into the depth buffer and that gluProject computes as the z value of the window coordinates.
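To make that non-linearity concrete (a worked formula, assuming the standard glFrustum/gluPerspective matrix and the default glDepthRange(0, 1)): a point at distance d in front of the camera, with near <= d <= far, ends up with the window depth

z = far * (d - near) / (d * (far - near))

which is 0 at d = near and 1 at d = far, but uses up most of the [0,1] range close to the near plane. For an orthographic projection the corresponding mapping is simply the linear z = (d - near) / (far - near).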

So what you actually got out, (0, 0, 0.5), is the lower left corner of your screen with a depth of 0.5. With an orthographic matrix (without any perspective distortion) and an identity modelview matrix this would be equal to a coordinate of (left, bottom, (far-near)/2), where bottom, left, near and far are the corresponding arguments you put into the glOrtho function call (or something with similar functionality). So the vertex is in the middle of the near-far range and in the lower left corner of the viewing volume (as seen from the camera). But this won't hold for a perspective projection, as in this case the transformation from the view-space z-coordinate to the depth value is not linear (though still monotonic, of course).

Since you put in the vertex (-1, -1, 0), this could mean your modelview matrix is identity and your projection matrix corresponds to a matrix created with glOrtho(-1, 1, -1, 1, -1, 1), which is also nearly the identity matrix (though with a mirrored z value, but because the input z is 0, you might not notice it). So if these are not the values you would have expected (after understanding the workings of gluProject, of course), it may also just be that your matrices haven't been retrieved correctly and you just got identity matrices instead of your actual modelview and projection matrices.
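You can check that interpretation with the numbers from the question (this is just the arithmetic, assuming identity modelview and projection matrices and the 800x480 viewport): v = (-1, -1, 0, 1) stays v' = (-1, -1, 0, 1), the perspective division changes nothing since v'[3] = 1, so v" = (-1, -1, 0), and the window coordinates become x = 800*(-1+1)/2 = 0, y = 480*(-1+1)/2 = 0 and z = (0+1)/2 = 0.5, which is exactly the (0, 0, 0.5) that gluProject returned.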

So I think there is nothing wrong with your gluProject function. You might also look at the answers to this question to gain some more insight into OpenGL's default transformation pipeline. Although with the advent of vertex shaders some of the stages can be computed differently, you normally still follow the idiomatic model -> view -> projection approach.
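If the matrices are indeed the problem, one likely cause (an assumption on my part, since the question says the snippet lives in onSurfaceCreated) is that they are grabbed before any projection or modelview matrix has been set up for a frame. A hypothetical sketch of grabbing them per frame instead, reusing the MatrixGrabber-style mg object and the gluProject call from the question (square and mg are assumed to be fields of the renderer):

public void onDrawFrame(GL10 gl) {
    // ... set up the projection and modelview matrices and draw the scene ...
    square.draw(gl);

    // grab the matrices that were actually used for this frame
    mg.getCurrentModelView(gl);
    mg.getCurrentProjection(gl);

    int[] view = { 0, 0, 800, 480 }; // viewport: x, y, width, height
    float[] win = new float[3];
    GLU.gluProject(-1.0f, -1.0f, 0.0f,                 // bottom-left vertex of the square
                   mg.mModelView, 0, mg.mProjection, 0,
                   view, 0, win, 0);
    // win[0] == 0 would mean this vertex touches the left screen border,
    // win[0] == view[2] would mean it touches the right one
}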
