OpenGL Projective Texture Mapping via Shaders


Problem Description

I am trying to implement a simple projective texture mapping approach by using shaders in OpenGL 3+. While there are some examples on the web I am having trouble creating a working example with shaders.

I am actually planning on using two shaders, one which does a normal scene draw, and another for projective texture mapping. I have a function for drawing a scene, void ProjTextureMappingScene::renderScene(GLFWwindow *window), and I am using glUseProgram() to switch between shaders. The normal drawing works fine. However, it is unclear to me how I am supposed to render the projective texture on top of an already textured cube. Do I somehow have to use a stencil buffer or a framebuffer object (the rest of the scene should be unaffected)?

I also don't think that my projective texture mapping shaders are correct, since the second time I render a cube it shows black. Further, I tried to debug by using colors, and only the t component in the shader seems to be non-zero (so the cube appears green). I am overriding the texColor in the fragment shader below just for debugging purposes.

Vertex Shader

#version 330

uniform mat4 TexGenMat;
uniform mat4 InvViewMat;

uniform mat4 P;
uniform mat4 MV;
uniform mat4 N;

layout (location = 0) in vec3 inPosition;
//layout (location = 1) in vec2 inCoord;
layout (location = 2) in vec3 inNormal;

out vec3 vNormal, eyeVec;
out vec2 texCoord;
out vec4 projCoords;

void main()
{
    vNormal = (N * vec4(inNormal, 0.0)).xyz;

    vec4 posEye    = MV * vec4(inPosition, 1.0);
    vec4 posWorld  = InvViewMat * posEye;
    projCoords     = TexGenMat * posWorld;

    // only needed for specular component
    // currently not used
    eyeVec = -posEye.xyz;

    gl_Position = P * MV * vec4(inPosition, 1.0);
}

Fragment Shader

#version 330

uniform sampler2D projMap;
uniform sampler2D gSampler;
uniform vec4 vColor;

in vec3 vNormal, lightDir, eyeVec;
//in vec2 texCoord;
in vec4 projCoords;

out vec4 outputColor;

struct DirectionalLight
{
    vec3 vColor;
    vec3 vDirection;
    float fAmbientIntensity;
};

uniform DirectionalLight sunLight;

void main (void)
{
    // suppress the reverse projection
    if (projCoords.q > 0.0)
    {
        vec2 finalCoords = projCoords.st / projCoords.q;
        vec4 vTexColor = texture(gSampler, finalCoords);
        // only t has non-zero values..why?
        vTexColor = vec4(finalCoords.s, finalCoords.t, finalCoords.r, 1.0);
        //vTexColor = vec4(projCoords.s, projCoords.t, projCoords.r, 1.0);
        float fDiffuseIntensity = max(0.0, dot(normalize(vNormal), -sunLight.vDirection));
        outputColor = vTexColor*vColor*vec4(sunLight.vColor * (sunLight.fAmbientIntensity + fDiffuseIntensity), 1.0);
    }
}

Creating the TexGen Matrix

biasMatrix = glm::mat4(0.5f, 0.0f, 0.0f, 0.5f,
                       0.0f, 0.5f, 0.0f, 0.5f,
                       0.0f, 0.0f, 0.5f, 0.5f,
                       0.0f, 0.0f, 0.0f, 1.0f);

// 4:3 perspective with 45 degree fov
projectorP = glm::perspective(45.0f * zoomFactor, 4.0f / 3.0f, 0.1f, 1000.0f);
projectorOrigin = glm::vec3(-3.0f, 3.0f, 0.0f);
projectorTarget = glm::vec3(0.0f, 0.0f, 0.0f);
projectorV = glm::lookAt(projectorOrigin,              // projector origin
                         projectorTarget,              // project on object at origin
                         glm::vec3(0.0f, 1.0f, 0.0f)); // Y axis is up
mModel = glm::mat4(1.0f);
...
texGenMatrix = biasMatrix * projectorP * projectorV * mModel;
invViewMatrix = glm::inverse(mModel * mModelView);

Rendering the Cube Again

It is also unclear to me what the modelview of the cube should be. Should it use the view matrix from the slide projector (as it is now) or the normal view projector? Currently the cube is rendered black (or green if debugging) in the middle of the scene view, as it would appear from the slide projector (I made a toggle hotkey so that I can see what the slide projector "sees"). The cube also moves with the view. How do I get the projection onto the cube itself?

mModel = glm::translate(projectorV, projectorOrigin);
// bind projective texture
tTextures[2].bindTexture();
// set all uniforms
...
// bind VBO data and draw
glBindVertexArray(uiVAOSceneObjects);
glDrawArrays(GL_TRIANGLES, 6, 36);

Switching Between the Main Scene Camera and the Slide Projector Camera

if (useMainCam)
{
    mCurrent   = glm::mat4(1.0f);
    mModelView = mModelView*mCurrent;
    mProjection = *pipeline->getProjectionMatrix();
}
else
{
    mModelView  = projectorV;
    mProjection = projectorP;
}

Answer

I have solved the problem. One issue I had is that I confused the matrices in the two camera systems (world and projective texture camera). Now when I set the uniforms for the projective texture mapping part I use the correct matrices for the MVP values - the same ones I use for the world scene.

glUniformMatrix4fv(iPTMProjectionLoc, 1, GL_FALSE, glm::value_ptr(*pipeline->getProjectionMatrix()));
glUniformMatrix4fv(iPTMNormalLoc, 1, GL_FALSE, glm::value_ptr(glm::transpose(glm::inverse(mCurrent))));
glUniformMatrix4fv(iPTMModelViewLoc, 1, GL_FALSE, glm::value_ptr(mCurrent));
glUniformMatrix4fv(iTexGenMatLoc, 1, GL_FALSE, glm::value_ptr(texGenMatrix));
glUniformMatrix4fv(iInvViewMatrix, 1, GL_FALSE, glm::value_ptr(invViewMatrix));

Further, the invViewMatrix is just the inverse of the view matrix not the model view (this didn't change the behaviour in my case, since the model was identity, but it is wrong). For my project I only wanted to selectively render a few objects with projective textures. To do this, for each object, I must make sure that the current shader program is the one for projective textures using glUseProgram(projectiveTextureMappingProgramID). Next, I compute the required matrices for this object:

texGenMatrix = biasMatrix * projectorP * projectorV * mModel;
invViewMatrix = glm::inverse(mView);
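To see why InvViewMat must be the inverse of the view matrix (and not of the model-view), it helps to trace the transform chain the vertex shader performs. This derivation is an editorial addition, using the symbols from the code above, with B the bias matrix:

```latex
\begin{aligned}
\text{posEye}     &= MV \cdot v = V\,M\,v \\
\text{posWorld}   &= \text{InvViewMat}\cdot\text{posEye} = V^{-1}\,V\,M\,v = M\,v \\
\text{projCoords} &= \text{TexGenMat}\cdot\text{posWorld}
                   = B\,P_{proj}\,V_{proj}\,M_{model}\,(M\,v)
\end{aligned}
```

One caveat (again an editorial observation): because texGenMatrix is built with mModel on the right, TexGenMat expects object-space input, yet the shader feeds it the world-space posWorld. The two conventions coincide only when the model matrix is the identity, as it is here; for a non-identity model matrix, either build texGenMatrix without mModel (world-space convention) or feed the object-space position to TexGenMat directly.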

Coming back to the shaders, the vertex shader is correct except that I re-added the UV texture coordinates (inCoord) for the current object and stored them in texCoord.

For the fragment shader I changed the main function to clamp the projective texture so that it doesn't repeat (I couldn't get it to work with the client side GL_CLAMP_TO_EDGE) and I am also using the default object texture and UV coordinates in case the projector does not cover the whole object (I also removed lighting from the projective texture since it is not needed in my case):

void main (void)
{
    vec2 finalCoords    = projCoords.st / projCoords.q;
    vec4 vTexColor      = texture(gSampler, texCoord);
    vec4 vProjTexColor  = texture(projMap, finalCoords);
    //vec4 vProjTexColor  = textureProj(projMap, projCoords);
    float fDiffuseIntensity = max(0.0, dot(normalize(vNormal), -sunLight.vDirection));

    // suppress the reverse projection
    if (projCoords.q > 0.0)
    {
        // CLAMP PROJECTIVE TEXTURE (for some reason gl_clamp did not work...)
        if(projCoords.s > 0 && projCoords.t > 0 && finalCoords.s < 1 && finalCoords.t < 1)
            //outputColor = vProjTexColor*vColor*vec4(sunLight.vColor * (sunLight.fAmbientIntensity + fDiffuseIntensity), 1.0);
            outputColor = vProjTexColor*vColor;
        else
            outputColor = vTexColor*vColor*vec4(sunLight.vColor * (sunLight.fAmbientIntensity + fDiffuseIntensity), 1.0);
    }
    else
    {
        outputColor = vTexColor*vColor*vec4(sunLight.vColor * (sunLight.fAmbientIntensity + fDiffuseIntensity), 1.0);
    }
}

If you are stuck and for some reason cannot get the shaders to work, you can check out an example in "OpenGL 4.0 Shading Language Cookbook" (textures chapter). I actually missed this until I got it working by myself.

In addition to all of the above, a great help for debugging if the algorithm is working correctly was to draw the frustum (as wireframe) for the projective camera. I used a shader for frustum drawing. The fragment shader just assigns a solid color, while the vertex shader is listed below with explanations:

#version 330

// input vertex data
layout(location = 0) in vec3 vp;

uniform mat4 P;
uniform mat4 MV;
uniform mat4 invP;
uniform mat4 invMV;
void main()
{
    /*The transformed clip space position c of a
    world space vertex v is obtained by transforming 
    v with the product of the projection matrix P 
    and the modelview matrix MV

    c = P MV v

    So, if we could solve for v, then we could 
    generate vertex positions by plugging in clip 
    space positions. For your frustum, one line 
    would be between the clip space positions 

    (-1,-1,near) and (-1,-1,far), 

    the lower left edge of the frustum, for example.

    NB: If you would like to mix normalized device 
    coords (x,y) and eye space coords (near,far), 
    you need an additional step here. Modify your 
    clip position as follows

    c' = (c.x * c.z, c.y * c.z, c.z, c.z)

    otherwise you would need to supply both the z 
    and w for c, which might be inconvenient. Simply 
    use c' instead of c below.


    To solve for v, multiply both sides of the equation above with 

          -1       
    (P MV) 

    This gives

          -1      
    (P MV)   c = v

    This is equivalent to

      -1  -1      
    MV   P   c = v

     -1
    P   is given by

    |(r-l)/(2n)     0         0      (r+l)/(2n) |
    |     0    (t-b)/(2n)     0      (t+b)/(2n) |
    |     0         0         0         -1      |
    |     0         0   -(f-n)/(2fn) (f+n)/(2fn)|

    where l, r, t, b, n, and f are the parameters in the glFrustum() call.

    If you don't want to fool with inverting the 
    model matrix, the info you already have can be 
    used instead: the forward, right, and up 
    vectors, in addition to the eye position.

    First, go from clip space to eye space

         -1   
    e = P   c

    Next go from eye space to world space

    v = eyePos - forward*e.z + right*e.x + up*e.y

    assuming x = right, y = up, and -z = forward.
    */
    vec4 fVp = invMV * invP * vec4(vp, 1.0);
    gl_Position = P * MV * fVp;
}

The uniforms are used like this (make sure you use the right matrices):

// projector matrices
glUniformMatrix4fv(iFrustumInvProjectionLoc, 1, GL_FALSE, glm::value_ptr(glm::inverse(projectorP)));
glUniformMatrix4fv(iFrustumInvMVLoc, 1, GL_FALSE, glm::value_ptr(glm::inverse(projectorV)));
// world camera
glUniformMatrix4fv(iFrustumProjectionLoc, 1, GL_FALSE, glm::value_ptr(*pipeline->getProjectionMatrix()));
glUniformMatrix4fv(iFrustumModelViewLoc, 1, GL_FALSE, glm::value_ptr(mModelView));

To get the input vertices needed for the frustum's vertex shader you can do the following to get the coordinates (then just add them to your vertex array):

glm::vec3 ftl = glm::vec3(-1, +1, pFar); //far top left
glm::vec3 fbr = glm::vec3(+1, -1, pFar); //far bottom right
glm::vec3 fbl = glm::vec3(-1, -1, pFar); //far bottom left
glm::vec3 ftr = glm::vec3(+1, +1, pFar); //far top right
glm::vec3 ntl = glm::vec3(-1, +1, pNear); //near top left
glm::vec3 nbr = glm::vec3(+1, -1, pNear); //near bottom right
glm::vec3 nbl = glm::vec3(-1, -1, pNear); //near bottom left
glm::vec3 ntr = glm::vec3(+1, +1, pNear); //near top right

glm::vec3   frustum_coords[36] = {
    // near
    ntl, nbl, ntr, // 1 triangle
    ntr, nbl, nbr,
    // right
    nbr, ftr, ntr,
    ftr, nbr, fbr,
    // left
    nbl, ftl, ntl,
    ftl, nbl, fbl,
    // far
    ftl, fbl, fbr,
    fbr, ftr, ftl,
    //bottom
    nbl, fbr, fbl,
    fbr, nbl, nbr,
    //top
    ntl, ftr, ftl,
    ftr, ntl, ntr
};

After all is said and done, it's nice to see how it looks:

As you can see I applied two projective textures, one of a biohazard image on Blender's Suzanne monkey head, and a smiley texture on the floor and a small cube. You can also see that the cube is partly covered by the projective texture, while the rest of it appears with its default texture. Finally, you can see the green frustum wireframe for the projector camera - and everything looks correct.
