Analysis of a shader in VR


Problem description


      I would like to create a shader, like the one in the video, that takes world coordinates and creates waves. I would like to analyse the video and understand the steps required. I'm not looking for code; I'm just looking for ideas on how to implement it using GLSL, HLSL, or any other language.

      Here is a low-quality, low-fps GIF in case the link breaks.

      Here is the fragment shader:

      #version 330 core
      
      // Interpolated values from the vertex shaders
      in vec2 UV;
      in vec3 Position_worldspace;
      in vec3 Normal_cameraspace;
      in vec3 EyeDirection_cameraspace;
      in vec3 LightDirection_cameraspace;
      
      // highlight effect
      in float pixel_z;       // fragment z coordinate in [LCS]
      uniform float animz;    // highlight animation z coordinate [LCS]
      
      // Output data
      out vec4 color;
      vec3 c;
      
      // Values that stay constant for the whole mesh.
      uniform sampler2D myTextureSampler;
      uniform mat4 MV;
      uniform vec3 LightPosition_worldspace;
      
      void main(){
      
          // Light emission properties
          // You probably want to put them as uniforms
          vec3 LightColor = vec3(1,1,1);
          float LightPower = 50.0f;
      
          // Material properties
          vec3 MaterialDiffuseColor = texture( myTextureSampler, UV ).rgb;
          vec3 MaterialAmbientColor = vec3(0.1,0.1,0.1) * MaterialDiffuseColor;
          vec3 MaterialSpecularColor = vec3(0.3,0.3,0.3);
      
          // Distance to the light
          float distance = length( LightPosition_worldspace - Position_worldspace );
      
          // Normal of the computed fragment, in camera space
          vec3 n = normalize( Normal_cameraspace );
          // Direction of the light (from the fragment to the light)
          vec3 l = normalize( LightDirection_cameraspace );
          // Cosine of the angle between the normal and the light direction, 
          // clamped above 0
          //  - light is at the vertical of the triangle -> 1
          //  - light is perpendicular to the triangle -> 0
          //  - light is behind the triangle -> 0
          float cosTheta = clamp( dot( n,l ), 0,1 );
      
          // Eye vector (towards the camera)
          vec3 E = normalize(EyeDirection_cameraspace);
          // Direction in which the triangle reflects the light
          vec3 R = reflect(-l,n);
          // Cosine of the angle between the Eye vector and the Reflect vector,
          // clamped to 0
          //  - Looking into the reflection -> 1
          //  - Looking elsewhere -> < 1
          float cosAlpha = clamp( dot( E,R ), 0,1 );
      
          c = 
              // Ambient : simulates indirect lighting
              MaterialAmbientColor +
              // Diffuse : "color" of the object
              MaterialDiffuseColor * LightColor * LightPower * cosTheta / (distance*distance) +
              // Specular : reflective highlight, like a mirror
              MaterialSpecularColor * LightColor * LightPower * pow(cosAlpha,5) / (distance*distance);
      
      
          float z;
          z=abs(pixel_z-animz);   // distance to animated z coordinate
          z*=1.5;                 // scale to change highlight width
          if (z<1.0)
              {
              z*=0.5*3.1415926535897932384626433832795;   // z=<0,M_PI/2> 0 in the middle
              z=0.5*cos(z);
              c+=vec3(0.0,z,z);   // add the highlight to c before it is written out
              }

          color=vec4(c,1.0);
      
      }
      

      Here is the vertex shader:

      #version 330 core
      
      // Input vertex data, different for all executions of this shader.
      layout(location = 0) in vec3 vertexPosition_modelspace;
      layout(location = 1) in vec2 vertexUV;
      layout(location = 2) in vec3 vertexNormal_modelspace;
      
      // Output data ; will be interpolated for each fragment.
      out vec2 UV;
      out vec3 Position_worldspace;
      out vec3 Normal_cameraspace;
      out vec3 EyeDirection_cameraspace;
      out vec3 LightDirection_cameraspace;
      
      out float pixel_z;      // fragment z coordinate in [LCS]
      
      // Values that stay constant for the whole mesh.
      uniform mat4 MVP;
      uniform mat4 V;
      uniform mat4 M;
      uniform vec3 LightPosition_worldspace;
      
      void main(){
      
      
          pixel_z=vertexPosition_modelspace.z;
          // Output position of the vertex, in clip space : MVP * position
          gl_Position =  MVP * vec4(vertexPosition_modelspace,1);
      
          // Position of the vertex, in worldspace : M * position
          Position_worldspace = (M * vec4(vertexPosition_modelspace,1)).xyz;
      
          // Vector that goes from the vertex to the camera, in camera space.
          // In camera space, the camera is at the origin (0,0,0).
          vec3 vertexPosition_cameraspace = ( V * M * vec4(vertexPosition_modelspace,1)).xyz;
          EyeDirection_cameraspace = vec3(0,0,0) - vertexPosition_cameraspace;
      
          // Vector that goes from the vertex to the light, in camera space. M is omitted because it is the identity.
          vec3 LightPosition_cameraspace = ( V * vec4(LightPosition_worldspace,1)).xyz;
          LightDirection_cameraspace = LightPosition_cameraspace + EyeDirection_cameraspace;
      
          // Normal of the vertex, in camera space
          Normal_cameraspace = ( V * M * vec4(vertexNormal_modelspace,0)).xyz; // Only correct if ModelMatrix does not scale the model! If it does, use its inverse transpose instead.
      
          // UV of the vertex. No special space for this one.
          UV = vertexUV;
      }
      

      Solution

      There are two approaches I can think of for this:

      1. 3D reconstruction based

        So you need to reconstruct the 3D scene from motion (not an easy task, and not my cup of tea). Then you simply apply a modulation to the selected mesh's texture, based on the u,v texture-mapping coordinates and the animation time.

        Describing such a topic will not fit in an SO answer, so you should google some CV (computer vision) books/papers on the subject instead.

      2. Image processing based

        You simply segment the image based on color continuity/homogeneity, i.e. you group neighboring pixels that have similar color and intensity (region growing). When done, try to fake a surface 3D reconstruction based on intensity gradients, similar to this:

        After that, create a u,v mapping where one axis is the depth.

        When done, just apply your sine-wave effect as a modulation of the color (see the sketch after this list).

        I would divide this into two stages: a first pass that segments the image (I would choose the CPU side for this) and a second pass that renders the effect (on the GPU).
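
      For illustration, here is a minimal sketch (in the style of the shaders below) of the modulation step that both approaches end with: a sine wave travelling along one u,v axis, added to an already-computed surface color. The inputs UV and c_in and the uniforms t, speed and width are assumed names for this sketch only, not part of any code in this answer:

      // Fragment: hypothetical sine-wave color modulation over u,v and time
      #version 330 core
      in vec2 UV;             // u,v mapping (one axis runs along the wave direction)
      in vec3 c_in;           // surface color from standard rendering (assumed vertex output)
      out vec4 color;
      uniform float t;        // animation time [s] (assumed uniform)
      uniform float speed;    // wave speed [u,v units per second]
      uniform float width;    // spatial frequency of the wave

      void main()
          {
          vec3 c=c_in;
          float w=0.5+0.5*cos((UV.x*width)-(t*speed));  // wave value in <0,1>
          c+=0.5*vec3(0.0,w,w);                         // add a cyan highlight, like the effect below
          color=vec4(c,1.0);
          }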

      As this is a form of augmented reality, you should also read this:

      BTW, what is done in that video is neither of the above options. They most likely already have the mesh for that car in vector form and use silhouette matching to obtain its orientation in the image... then render it as usual... so it would not work for an arbitrary object in the scene, only for that car. Something like this:

      [Edit1] GLSL highlight effect

      I took this example:

      And added the highlight to it like this:

      1. On the CPU side I added an animz variable

        It determines the z coordinate, in the object's local coordinate system (LCS), where the highlight is currently placed. I animate it in a timer between the min and max z values of the rendered mesh (a cube), +/- some margin, so the highlight does not teleport at once from one side of the object to the other...

        // global
        float animz=-1.0;
        // in timer
        animz+=0.05; if (animz>1.5) animz=-1.5; // my object z = <-1,+1> 0.5 is margin
        // render
        id=glGetUniformLocation(prog_id,"animz"); glUniform1f(id,animz);
        

      2. Vertex shader

        I just take the vertex z coordinate and pass it, without any transform, to the fragment shader:

        out float pixel_z;      // fragment z coordinate in [LCS]
        pixel_z=pos.z;
        

      3. Fragment shader

        After computing the target color c (by standard rendering), I compute the distance between pixel_z and animz; if it is small, I modulate c with a sine wave that depends on that distance.

        // highlight effect
        float z;
        z=abs(pixel_z-animz);   // distance to animated z coordinate
        z*=1.5;                 // scale to change highlight width
        if (z<1.0)
            {
            z*=0.5*3.1415926535897932384626433832795;   // z=<0,M_PI/2> 0 in the middle
            z=0.5*cos(z);
            c+=vec3(0.0,z,z);
            }
        

      Here are the full GLSL shaders...

      Vertex:

      #version 400 core
      #extension GL_ARB_explicit_uniform_location : enable
      layout(location = 0) in vec3 pos;
      layout(location = 2) in vec3 nor;
      layout(location = 3) in vec3 col;
      layout(location = 0) uniform mat4 m_model;  // model matrix
      layout(location =16) uniform mat4 m_normal; // model matrix with origin=(0,0,0)
      layout(location =32) uniform mat4 m_view;   // inverse of camera matrix
      layout(location =48) uniform mat4 m_proj;   // projection matrix
      out vec3 pixel_pos;     // fragment position [GCS]
      out vec3 pixel_col;     // fragment surface color
      out vec3 pixel_nor;     // fragment surface normal [GCS]
      
      // highlight effect
      out float pixel_z;      // fragment z coordinate in [LCS]
      
      void main()
          {
          pixel_z=pos.z;
          pixel_col=col;
          pixel_pos=(m_model*vec4(pos,1)).xyz;
          pixel_nor=(m_normal*vec4(nor,1)).xyz;
          gl_Position=m_proj*m_view*m_model*vec4(pos,1);
          }
      

      Fragment:

      #version 400 core
      #extension GL_ARB_explicit_uniform_location : enable
      layout(location =64) uniform vec3 lt_pnt_pos;// point light source position [GCS]
      layout(location =67) uniform vec3 lt_pnt_col;// point light source color&strength
      layout(location =70) uniform vec3 lt_amb_col;// ambient light source color&strength
      in vec3 pixel_pos;      // fragment position [GCS]
      in vec3 pixel_col;      // fragment surface color
      in vec3 pixel_nor;      // fragment surface normal [GCS]
      out vec4 col;
      
      // highlight effect
      in float pixel_z;       // fragment z coordinate in [LCS]
      uniform float animz;    // highlight animation z coordinate [LCS]
      
      void main()
          {
          // standard rendering
          float li;
          vec3 c,lt_dir;
          lt_dir=normalize(lt_pnt_pos-pixel_pos); // vector from fragment to point light source in [GCS]
          li=dot(pixel_nor,lt_dir);
          if (li<0.0) li=0.0;
          c=pixel_col*(lt_amb_col+(lt_pnt_col*li));
          // highlight effect
          float z;
          z=abs(pixel_z-animz);   // distance to animated z coordinate
          z*=1.5;                 // scale to change highlight width
          if (z<1.0)
              {
              z*=0.5*3.1415926535897932384626433832795;   // z=<0,M_PI/2> 0 in the middle
              z=0.5*cos(z);
              c+=vec3(0.0,z,z);
              }
          col=vec4(c,1.0);
          }
      

      And a preview:

      This approach requires neither textures nor u,v mapping.

      [Edit2] highlight with start point

      There are many ways to implement this. I chose the distance from the start point as the highlight parameter, so the highlight grows from that point in all directions. Here is a preview for two different touch-point locations:

      The bold white cross is the touch-point location, rendered for visual verification. Here is the code:

      Vertex:

      // Vertex
      #version 400 core
      #extension GL_ARB_explicit_uniform_location : enable
      layout(location = 0) in vec3 pos;
      layout(location = 2) in vec3 nor;
      layout(location = 3) in vec3 col;
      layout(location = 0) uniform mat4 m_model;  // model matrix
      layout(location =16) uniform mat4 m_normal; // model matrix with origin=(0,0,0)
      layout(location =32) uniform mat4 m_view;   // inverse of camera matrix
      layout(location =48) uniform mat4 m_proj;   // projection matrix
      out vec3 LCS_pos;       // fragment position [LCS]
      out vec3 pixel_pos;     // fragment position [GCS]
      out vec3 pixel_col;     // fragment surface color
      out vec3 pixel_nor;     // fragment surface normal [GCS]
      
      void main()
          {
          LCS_pos=pos;
          pixel_col=col;
          pixel_pos=(m_model*vec4(pos,1)).xyz;
          pixel_nor=(m_normal*vec4(nor,1)).xyz;
          gl_Position=m_proj*m_view*m_model*vec4(pos,1);
          }
      

      Fragment:

      // Fragment
      #version 400 core
      #extension GL_ARB_explicit_uniform_location : enable
      layout(location =64) uniform vec3 lt_pnt_pos;// point light source position [GCS]
      layout(location =67) uniform vec3 lt_pnt_col;// point light source color&strength
      layout(location =70) uniform vec3 lt_amb_col;// ambient light source color&strength
      in vec3 LCS_pos;        // fragment position [LCS]
      in vec3 pixel_pos;      // fragment position [GCS]
      in vec3 pixel_col;      // fragment surface color
      in vec3 pixel_nor;      // fragment surface normal [GCS]
      out vec4 col;
      
      // highlight effect
      uniform vec3  touch;    // highlight start point [LCS]
      uniform float animt;    // animation parameter <0,1> or -1 for off
      uniform float size;     // highlight size
      
      void main()
          {
          // standard rendering
          float li;
          vec3 c,lt_dir;
          lt_dir=normalize(lt_pnt_pos-pixel_pos); // vector from fragment to point light source in [GCS]
          li=dot(pixel_nor,lt_dir);
          if (li<0.0) li=0.0;
          c=pixel_col*(lt_amb_col+(lt_pnt_col*li));
          // highlight effect
          float t=length(LCS_pos-touch)/size; // distance from start point
          if (t<=animt)
              {
              t*=0.5*3.1415926535897932384626433832795;   // t=<0,M_PI/2>, highlight peaks at the touch point
              t=0.75*cos(t);
              c+=vec3(0.0,t,t);
              }
          col=vec4(c,1.0);
          }
      

      You control the effect with these uniforms:

      uniform vec3  touch;    // highlight start point [LCS]
      uniform float animt;    // animation parameter <0,1> or -1 for off
      uniform float size;     // max distance of any point of object from touch point
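
      Not part of the original answer, but a minimal sketch of driving these uniforms from the CPU side, in the same style as the animz timer from [Edit1]. The touch coordinates tx,ty,tz, the 0.05 step, and size=1.5 are placeholder assumptions; prog_id is assumed to be the linked shader program:

      // global
      float animt=-1.0;                // <0,1> grow animation, -1 means highlight off
      float tx=0.3,ty=0.2,tz=1.0;     // touch point [LCS] (placeholder values)
      // on touch event
      animt=0.0;                       // restart the grow animation from the touch point
      // in timer
      if (animt>=0.0) { animt+=0.05; if (animt>1.0) animt=-1.0; } // grow, then switch off
      // render
      id=glGetUniformLocation(prog_id,"touch"); glUniform3f(id,tx,ty,tz);
      id=glGetUniformLocation(prog_id,"animt"); glUniform1f(id,animt);
      id=glGetUniformLocation(prog_id,"size");  glUniform1f(id,1.5); // ~max distance inside the mesh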
      
