Analysis of a shader in VR


Question

I would like to create a shader like this one, which takes world coordinates and creates waves. I would like to analyse the video and understand the steps required. I'm not looking for code, just for ideas on how to implement this in GLSL, HLSL, or any other shading language.

Here is a low-quality, low-fps GIF in case the link breaks.

Here is the fragment shader:

#version 330 core

// Interpolated values from the vertex shaders
in vec2 UV;
in vec3 Position_worldspace;
in vec3 Normal_cameraspace;
in vec3 EyeDirection_cameraspace;
in vec3 LightDirection_cameraspace;

// highlight effect
in float pixel_z;       // fragment z coordinate in [LCS]
uniform float animz;    // highlight animation z coordinate [GCS]

// Output data
out vec4 color;
vec3 c;

// Values that stay constant for the whole mesh.
uniform sampler2D myTextureSampler;
uniform mat4 MV;
uniform vec3 LightPosition_worldspace;

void main(){

    // Light emission properties
    // You probably want to put them as uniforms
    vec3 LightColor = vec3(1,1,1);
    float LightPower = 50.0f;

    // Material properties
    vec3 MaterialDiffuseColor = texture( myTextureSampler, UV ).rgb;
    vec3 MaterialAmbientColor = vec3(0.1,0.1,0.1) * MaterialDiffuseColor;
    vec3 MaterialSpecularColor = vec3(0.3,0.3,0.3);

    // Distance to the light
    float distance = length( LightPosition_worldspace - Position_worldspace );

    // Normal of the computed fragment, in camera space
    vec3 n = normalize( Normal_cameraspace );
    // Direction of the light (from the fragment to the light)
    vec3 l = normalize( LightDirection_cameraspace );
    // Cosine of the angle between the normal and the light direction, 
    // clamped above 0
    //  - light is at the vertical of the triangle -> 1
    //  - light is perpendicular to the triangle -> 0
    //  - light is behind the triangle -> 0
    float cosTheta = clamp( dot( n,l ), 0,1 );

    // Eye vector (towards the camera)
    vec3 E = normalize(EyeDirection_cameraspace);
    // Direction in which the triangle reflects the light
    vec3 R = reflect(-l,n);
    // Cosine of the angle between the Eye vector and the Reflect vector,
    // clamped to 0
    //  - Looking into the reflection -> 1
    //  - Looking elsewhere -> < 1
    float cosAlpha = clamp( dot( E,R ), 0,1 );

    c = 
        // Ambient : simulates indirect lighting
        MaterialAmbientColor +
        // Diffuse : "color" of the object
        MaterialDiffuseColor * LightColor * LightPower * cosTheta / (distance*distance) +
        // Specular : reflective highlight, like a mirror
        MaterialSpecularColor * LightColor * LightPower * pow(cosAlpha,5) / (distance*distance);


    float z;
    z=abs(pixel_z-animz);   // distance to animated z coordinate
    z*=1.5;                 // scale to change highlight width
    if (z<1.0)
        {
        z*=0.5*3.1415926535897932384626433832795;   // z=<0,M_PI/2> 0 in the middle
        z=0.5*cos(z);
        c+=vec3(0.0,z,z);   // add the highlight to the computed color c (c is vec3; color is vec4)
        }

    color=vec4(c,1.0);

}
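The lighting in this fragment shader is the standard Phong model: an ambient term plus diffuse and specular terms attenuated by the inverse square of the distance to the light. As a quick per-channel sanity sketch of that math (the `phong` helper and its default values are my own illustration, not part of the shader):

```python
def phong(diffuse, ambient_k=0.1, specular=0.3,
          light_power=50.0, distance=5.0,
          cos_theta=1.0, cos_alpha=0.0):
    """Per-channel Phong shading as in the fragment shader:
    ambient + diffuse*power*cosTheta/d^2 + specular*power*cosAlpha^5/d^2."""
    att = light_power / (distance * distance)   # inverse-square attenuation
    return (ambient_k * diffuse
            + diffuse * att * cos_theta
            + specular * att * cos_alpha ** 5)

# A white fragment facing the light head-on, no specular alignment:
phong(diffuse=1.0, distance=5.0, cos_theta=1.0, cos_alpha=0.0)
# ambient 0.1 + attenuated diffuse 1.0*2.0 = 2.1 (clamped by the framebuffer)
```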

Here is the vertex shader:

#version 330 core

// Input vertex data, different for all executions of this shader.
layout(location = 0) in vec3 vertexPosition_modelspace;
layout(location = 1) in vec2 vertexUV;
layout(location = 2) in vec3 vertexNormal_modelspace;

// Output data ; will be interpolated for each fragment.
out vec2 UV;
out vec3 Position_worldspace;
out vec3 Normal_cameraspace;
out vec3 EyeDirection_cameraspace;
out vec3 LightDirection_cameraspace;

out float pixel_z;      // fragment z coordinate in [LCS]

// Values that stay constant for the whole mesh.
uniform mat4 MVP;
uniform mat4 V;
uniform mat4 M;
uniform vec3 LightPosition_worldspace;

void main(){


    pixel_z=vertexPosition_modelspace.z;
    // Output position of the vertex, in clip space : MVP * position
    gl_Position =  MVP * vec4(vertexPosition_modelspace,1);

    // Position of the vertex, in worldspace : M * position
    Position_worldspace = (M * vec4(vertexPosition_modelspace,1)).xyz;

    // Vector that goes from the vertex to the camera, in camera space.
    // In camera space, the camera is at the origin (0,0,0).
    vec3 vertexPosition_cameraspace = ( V * M * vec4(vertexPosition_modelspace,1)).xyz;
    EyeDirection_cameraspace = vec3(0,0,0) - vertexPosition_cameraspace;

    // Vector that goes from the vertex to the light, in camera space. M is omitted because it's the identity.
    vec3 LightPosition_cameraspace = ( V * vec4(LightPosition_worldspace,1)).xyz;
    LightDirection_cameraspace = LightPosition_cameraspace + EyeDirection_cameraspace;

    // Normal of the vertex, in camera space
    Normal_cameraspace = ( V * M * vec4(vertexNormal_modelspace,0)).xyz; // Only correct if ModelMatrix does not scale the model ! Use its inverse transpose if not.

    // UV of the vertex. No special space for this one.
    UV = vertexUV;
}

Answer

There are two approaches I can think of for this:

  1. Based on 3D reconstruction

So you need to reconstruct the 3D scene from motion (not an easy task, and not my cup of tea). Then you simply apply the modulation to the selected mesh texture based on the u,v texture-mapping coordinates and the animation time.

Describing such a topic would not fit in an SO answer, so you should search for some CV (computer vision) books/papers on the subject instead.

  2. Based on image processing

You simply segment the image based on color continuity/homogeneity: group neighboring pixels that have similar color and intensity (region growing). When done, try to fake a 3D surface reconstruction based on intensity gradients, similar to this:
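The region-growing step can be sketched as a flood fill that collects neighboring pixels within an intensity tolerance of the seed (a minimal Python illustration; `grow_region` and the toy image are my own, not the answer's code):

```python
from collections import deque

def grow_region(img, seed, tol=10):
    """Flood-fill from seed, collecting 4-connected pixels whose
    intensity differs from the seed pixel by at most tol."""
    h, w = len(img), len(img[0])
    sy, sx = seed
    base = img[sy][sx]
    region, todo = {seed}, deque([seed])
    while todo:
        y, x = todo.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in region \
               and abs(img[ny][nx] - base) <= tol:
                region.add((ny, nx))
                todo.append((ny, nx))
    return region

# Toy grayscale image: a dark 2x2 patch next to bright pixels.
img = [[10, 12, 90],
       [11, 13, 95],
       [80, 85, 99]]
print(len(grow_region(img, (0, 0))))   # → 4 (the 2x2 patch of ~10s)
```

A real implementation would run this (or a scanline variant) over every unvisited pixel to partition the whole frame into regions.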

and after that create a u,v mapping where one axis is depth.

When done, just apply your sine-wave effect modulation to the color.

I would divide this into two stages: the first pass does the segmentation (I would choose the CPU side for this), and the second renders the effect (on the GPU).

As this is a form of augmented reality, you should also read this:

By the way, what is done in that video is neither of the above options. They most likely already have the mesh for that car in vector form and use silhouette matching to obtain its orientation in the image, then render as usual. So it would not work for any object in the scene, only for that car... Something like this:

[Edit1] GLSL highlight effect

I took this example:

And added the highlight to it like this:

  1. On the CPU side I added an animz variable

It determines the z coordinate in the object's local coordinate system (LCS) where the highlight is actually placed. I animate it in a timer between the min and max z values of the rendered mesh (a cube), +/- some margin, so the highlight does not teleport at once from one side of the object to the other...

// global
float animz=-1.0;
// in timer
animz+=0.05; if (animz>1.5) animz=-1.5; // my object z = <-1,+1> 0.5 is margin
// render
id=glGetUniformLocation(prog_id,"animz"); glUniform1f(id,animz);
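The timer update above simply sweeps animz across the object's z range (plus margin) and wraps around; the same logic as a small Python helper (names and values copied from the snippet, the helper itself is my own):

```python
def step_animz(animz, dz=0.05, zmin=-1.5, zmax=1.5):
    """Advance the highlight z coordinate; wrap back to zmin once past zmax,
    matching: animz += 0.05; if (animz > 1.5) animz = -1.5;"""
    animz += dz
    if animz > zmax:
        animz = zmin
    return animz

step_animz(1.49)   # 1.54 exceeds zmax, so it wraps to -1.5
```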

  • Vertex shader

    I just take the vertex z coordinate and pass it, untransformed, to the fragment shader:

    out float pixel_z;      // fragment z coordinate in [LCS]
    pixel_z=pos.z;
    

  • Fragment shader

    After computing the target color c (by standard rendering) I compute the distance between pixel_z and animz; if it is small, I modulate c with a sine wave that depends on the distance:

    // highlight effect
    float z;
    z=abs(pixel_z-animz);   // distance to animated z coordinate
    z*=1.5;                 // scale to change highlight width
    if (z<1.0)
        {
        z*=0.5*3.1415926535897932384626433832795;   // z=<0,M_PI/2> 0 in the middle
        z=0.5*cos(z);
        c+=vec3(0.0,z,z);
        }
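The modulation above is a cosine window centered on animz: full strength where pixel_z equals animz, fading to zero at the edge of the band. The same falloff as a plain function (a sketch mirroring the snippet's constants, not shader code):

```python
import math

def highlight(pixel_z, animz, width_scale=1.5):
    """Cosine highlight: strength 0.5 at pixel_z == animz, fading to 0
    where width_scale * |pixel_z - animz| reaches 1."""
    z = abs(pixel_z - animz) * width_scale
    if z >= 1.0:
        return 0.0                      # outside the highlight band
    return 0.5 * math.cos(z * 0.5 * math.pi)

highlight(0.0, 0.0)   # 0.5 at the center of the band
highlight(1.0, 0.0)   # 0.0, outside the band (1.5 >= 1.0)
```

The returned value is added to the green and blue channels, which produces the cyan glow.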
    

  • Here are the full GLSL shaders...

    Vertex:

    #version 400 core
    #extension GL_ARB_explicit_uniform_location : enable
    layout(location = 0) in vec3 pos;
    layout(location = 2) in vec3 nor;
    layout(location = 3) in vec3 col;
    layout(location = 0) uniform mat4 m_model;  // model matrix
    layout(location =16) uniform mat4 m_normal; // model matrix with origin=(0,0,0)
    layout(location =32) uniform mat4 m_view;   // inverse of camera matrix
    layout(location =48) uniform mat4 m_proj;   // projection matrix
    out vec3 pixel_pos;     // fragment position [GCS]
    out vec3 pixel_col;     // fragment surface color
    out vec3 pixel_nor;     // fragment surface normal [GCS]
    
    // highlight effect
    out float pixel_z;      // fragment z coordinate in [LCS]
    
    void main()
        {
        pixel_z=pos.z;
        pixel_col=col;
        pixel_pos=(m_model*vec4(pos,1)).xyz;
        pixel_nor=(m_normal*vec4(nor,1)).xyz;
        gl_Position=m_proj*m_view*m_model*vec4(pos,1);
        }
    

    Fragment:

    #version 400 core
    #extension GL_ARB_explicit_uniform_location : enable
    layout(location =64) uniform vec3 lt_pnt_pos;// point light source position [GCS]
    layout(location =67) uniform vec3 lt_pnt_col;// point light source color&strength
    layout(location =70) uniform vec3 lt_amb_col;// ambient light source color&strength
    in vec3 pixel_pos;      // fragment position [GCS]
    in vec3 pixel_col;      // fragment surface color
    in vec3 pixel_nor;      // fragment surface normal [GCS]
    out vec4 col;
    
    // highlight effect
    in float pixel_z;       // fragment z coordinate in [LCS]
    uniform float animz;    // highlight animation z coordinate [GCS]
    
    void main()
        {
        // standard rendering
        float li;
        vec3 c,lt_dir;
        lt_dir=normalize(lt_pnt_pos-pixel_pos); // vector from fragment to point light source in [GCS]
        li=dot(pixel_nor,lt_dir);
        if (li<0.0) li=0.0;
        c=pixel_col*(lt_amb_col+(lt_pnt_col*li));
        // highlight effect
        float z;
        z=abs(pixel_z-animz);   // distance to animated z coordinate
        z*=1.5;                 // scale to change highlight width
        if (z<1.0)
            {
            z*=0.5*3.1415926535897932384626433832795;   // z=<0,M_PI/2> 0 in the middle
            z=0.5*cos(z);
            c+=vec3(0.0,z,z);
            }
        col=vec4(c,1.0);
        }
    

    Preview:

    This approach requires neither textures nor u,v mapping.

    [Edit2] Highlight start point

    There are many ways to implement this. I chose the distance from the start point as the highlight parameter, so the highlight grows from that point in all directions. Here is a preview for two different touch-point locations:

    The white bold cross is the location of the touch point, rendered for visual verification. Here is the code:

    Vertex:

    // Vertex
    #version 400 core
    #extension GL_ARB_explicit_uniform_location : enable
    layout(location = 0) in vec3 pos;
    layout(location = 2) in vec3 nor;
    layout(location = 3) in vec3 col;
    layout(location = 0) uniform mat4 m_model;  // model matrix
    layout(location =16) uniform mat4 m_normal; // model matrix with origin=(0,0,0)
    layout(location =32) uniform mat4 m_view;   // inverse of camera matrix
    layout(location =48) uniform mat4 m_proj;   // projection matrix
    out vec3 LCS_pos;       // fragment position [LCS]
    out vec3 pixel_pos;     // fragment position [GCS]
    out vec3 pixel_col;     // fragment surface color
    out vec3 pixel_nor;     // fragment surface normal [GCS]
    
    void main()
        {
        LCS_pos=pos;
        pixel_col=col;
        pixel_pos=(m_model*vec4(pos,1)).xyz;
        pixel_nor=(m_normal*vec4(nor,1)).xyz;
        gl_Position=m_proj*m_view*m_model*vec4(pos,1);
        }
    

    Fragment:

    // Fragment
    #version 400 core
    #extension GL_ARB_explicit_uniform_location : enable
    layout(location =64) uniform vec3 lt_pnt_pos;// point light source position [GCS]
    layout(location =67) uniform vec3 lt_pnt_col;// point light source color&strength
    layout(location =70) uniform vec3 lt_amb_col;// ambient light source color&strength
    in vec3 LCS_pos;        // fragment position [LCS]
    in vec3 pixel_pos;      // fragment position [GCS]
    in vec3 pixel_col;      // fragment surface color
    in vec3 pixel_nor;      // fragment surface normal [GCS]
    out vec4 col;
    
    // highlight effect
    uniform vec3  touch;    // highlight start point [GCS]
    uniform float animt;    // animation parameter <0,1> or -1 for off
    uniform float size;     // highlight size
    
    void main()
        {
        // standard rendering
        float li;
        vec3 c,lt_dir;
        lt_dir=normalize(lt_pnt_pos-pixel_pos); // vector from fragment to point light source in [GCS]
        li=dot(pixel_nor,lt_dir);
        if (li<0.0) li=0.0;
        c=pixel_col*(lt_amb_col+(lt_pnt_col*li));
        // highlight effect
        float t=length(LCS_pos-touch)/size; // distance from start point
        if (t<=animt)
            {
            t*=0.5*3.1415926535897932384626433832795;   // t=<0,M_PI/2> 0 at the touch point
            t=0.75*cos(t);
            c+=vec3(0.0,t,t);
            }
        col=vec4(c,1.0);
        }
    

    You can control it via these uniforms:

    uniform vec3  touch;    // highlight start point [GCS]
    uniform float animt;    // animation parameter <0,1> or -1 for off
    uniform float size;     // max distance of any point of object from touch point
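Per fragment, the Edit2 effect reduces to a normalized-distance test against animt with a cosine falloff; a Python sketch of that logic (the `touch_highlight` helper is my own illustration, mirroring the shader's constants):

```python
import math

def touch_highlight(lcs_pos, touch, animt, size):
    """Growing highlight: t is the distance from the touch point in LCS,
    normalized by size; lit only while t <= animt (animt = -1 turns it off),
    with a 0.75*cos falloff as in the fragment shader."""
    t = math.dist(lcs_pos, touch) / size
    if t > animt:                        # also covers animt = -1 (off)
        return 0.0
    return 0.75 * math.cos(t * 0.5 * math.pi)

touch_highlight((0, 0, 0), (0, 0, 0), animt=0.5, size=2.0)   # 0.75 at the touch point
touch_highlight((2, 0, 0), (0, 0, 0), animt=0.5, size=2.0)   # 0.0, t = 1.0 > animt
```

Animating animt from 0 to 1 on the CPU side makes the lit region expand outward from the touch point until it covers the whole object.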
    
