Explanation of the working principle of OpenGL

Question

I'm trying to understand how coding in OpenGL works. I found this code on the internet and I want to understand it clearly.

For my vertex shader I have:

uniform vec3 fvLightPosition;
varying vec2 Texcoord;
varying vec2 Texcoordcut;
varying vec3 ViewDirection;
varying vec3 LightDirection;
uniform mat4 extra;

attribute vec3 rm_Binormal;
attribute vec3 rm_Tangent;

uniform float fSinTime0_X;
uniform float fCosTime0_X;

void main( void )
{
   gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex * extra;
   Texcoord    = gl_MultiTexCoord0.xy;
   Texcoordcut = gl_MultiTexCoord0.xy;

   vec4 fvObjectPosition = gl_ModelViewMatrix * gl_Vertex;

   vec3 rotationLight = vec3( fCosTime0_X, 0.0, fSinTime0_X );
   ViewDirection  = -fvObjectPosition.xyz;
   LightDirection = (-rotationLight) * gl_NormalMatrix;
}

And for my fragment shader, I created a white color on the picture to create a hole in it:

uniform vec4 fvAmbient;
uniform vec4 fvSpecular;
uniform vec4 fvDiffuse;
uniform float fSpecularPower;

uniform sampler2D baseMap;
uniform sampler2D bumpMap;

varying vec2 Texcoord;
varying vec2 Texcoordcut;
varying vec3 ViewDirection;
varying vec3 LightDirection;

void main( void )
{
   vec3  fvLightDirection = normalize( LightDirection );
   vec3  fvNormal         = normalize( ( texture2D( bumpMap, Texcoord ).xyz * 2.0 ) - 1.0 );
   float fNDotL           = dot( fvNormal, fvLightDirection ); 
   vec3  fvReflection     = normalize( ( ( 2.0 * fvNormal ) * fNDotL ) - fvLightDirection ); 
   vec3  fvViewDirection  = normalize( ViewDirection );
   float fRDotV           = max( 0.0, dot( fvReflection, fvViewDirection ) );
   vec4  fvBaseColor      = texture2D( baseMap, Texcoord );

   vec4  fvTotalAmbient   = fvAmbient * fvBaseColor; 
   vec4  fvTotalDiffuse   = fvDiffuse * fNDotL * fvBaseColor; 
   vec4  fvTotalSpecular  = fvSpecular * ( pow( fRDotV, fSpecularPower ) );

   if( fvBaseColor == vec4( 1.0, 1.0, 1.0, 1.0 ) ){
      discard;
   }else{
      gl_FragColor = ( fvTotalDiffuse + fvTotalSpecular );
   }
}

Can somebody explain to me clearly what everything does? I understand the basic idea of it, but not always why you need it, or what happens when you use other variables. What happens now is that the light around the teapot appears and fades over time. How is this linked to the cosine and sine variables? And what if I want the light to come from above and move down to the bottom of the teapot?

  • What do these lines mean?

vec4 fvObjectPosition = gl_ModelViewMatrix * gl_Vertex;

And why is there a minus before the variable?

ViewDirection = -fvObjectPosition.xyz;

Why do we use a negative rotationLight?

LightDirection = (-rotationLight) * gl_NormalMatrix;

Why do they use *2.0 ) - 1.0 for calculating the normal vector? Isn't that possible with Normal = normalize( gl_NormalMatrix * gl_Normal );?

vec3 fvNormal = normalize( ( texture2D( bumpMap, Texcoord ).xyz * 2.0 ) - 1.0 );

Answer

Too lazy to fully analyze the code without the proper context of what you are sending to the shaders ... but your sub-questions are easy enough:

  1. What do these lines mean? vec4 fvObjectPosition = gl_ModelViewMatrix * gl_Vertex;

This converts gl_Vertex (polygon edge points) from the object/model coordinate system to the camera coordinate system. In other words, it applies all the rotations and translations of your vertices. The z axis is the camera view axis, pointing to or from the screen, and the x, y axes are the same as the screen's. No projections/clippings/clampings are applied yet !!! The resulting point is stored in the fvObjectPosition 4D vector (x, y, z, w). I strongly recommend you read Understanding 4x4 homogenous transform matrices; the sub-links there are also worth looking into.

  2. Why is there a minus before the variable? ViewDirection = -fvObjectPosition.xyz;

Most likely because you need the direction from the surface to the camera, so direction_from_surface = camera_pos - surface_pos. As your surface_pos is already in the camera coordinate system, the camera position in those coordinates is (0,0,0), so the result is direction_from_surface = (0,0,0) - surface_pos = -surface_pos. Or you've got a negative z-axis view direction (it depends on the format of your matrices); that is hard to determine without background info.
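
Written out as a vertex-shader sketch (cameraPos is a made-up name; in camera space the camera sits at the origin):

vec4 fvObjectPosition = gl_ModelViewMatrix * gl_Vertex;

// camera position in camera space is the origin:
vec3 cameraPos = vec3( 0.0, 0.0, 0.0 );

// direction from the surface point towards the camera:
ViewDirection = cameraPos - fvObjectPosition.xyz;   // == -fvObjectPosition.xyz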

  3. Why do we use a negative rotationLight? LightDirection = (-rotationLight) * gl_NormalMatrix;

Most likely for the same reasons as bullet 2.

  4. Why do they use *2.0 ) - 1.0 for calculating the normal vector?

The shader uses normal/bump mapping, which means you've got a texture with normal vectors encoded as RGB. As RGB textures are clamped to the range <0,1> and normal vector coordinates are in the range <-1,+1>, you just need to rescale the texel (spelled out in the sketch after this list), so:

  • RGB*2.0 is in range <0,2>
  • RGB*2.0-1.0 is in range <-1,+1>
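
As code, the question's decode line step by step (the intermediate names rgb and scaled are made up):

vec3 rgb      = texture2D( bumpMap, Texcoord ).xyz;  // each channel in <0,1>
vec3 scaled   = rgb * 2.0;                           // now in <0,2>
vec3 fvNormal = normalize( scaled - 1.0 );           // now in <-1,+1>, unit length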

This obtains your normal vector in the polygon (tangent) coordinate system, so you need to convert it to the coordinate system your equations work in, usually global world space or camera space (a sketch of one common way to do this follows below). The normalize is not necessary if your normal/bump map is already normalized. Normal textures are distinctive in their colors ...

  • a flat surface has normal=(0.0,0.0,+1.0), so in RGB it would be (0.5,0.5,1.0)

That is the common bluish/magenta color often seen in textures (see the link above).
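
The question's vertex shader already declares rm_Tangent and rm_Binormal but never uses them; a sketch of the standard TBN approach for this conversion (not the original author's code, and the varying name TBN is an assumption):

// vertex shader: build the tangent-space basis in camera space
attribute vec3 rm_Tangent;
attribute vec3 rm_Binormal;
varying   mat3 TBN;          // hypothetical varying

void main( void )
{
   gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
   TBN = mat3( normalize( gl_NormalMatrix * rm_Tangent ),
               normalize( gl_NormalMatrix * rm_Binormal ),
               normalize( gl_NormalMatrix * gl_Normal ) );
}

// fragment shader: rotate the decoded texel into camera space, e.g.
// vec3 fvNormal = normalize( TBN * ( texture2D( bumpMap, Texcoord ).xyz * 2.0 - 1.0 ) );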

But yes, you can use Normal = normalize( gl_NormalMatrix * gl_Normal );

But that will eliminate the bump/normal map and you would get just flat surfaces instead.
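
A sketch of that per-vertex variant (the varying name vNormal is made up):

// vertex shader: interpolated per-vertex normal, no bump map
varying vec3 vNormal;        // hypothetical varying

void main( void )
{
   gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
   vNormal     = gl_NormalMatrix * gl_Normal;   // camera-space normal
}

// fragment shader: use the interpolated normal instead of the bump-map texel
// vec3 fvNormal = normalize( vNormal );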

vec3(fCosTime0_X, 0, fSinTime0_X) looks like the light direction. This one rotates around the y axis. If you want to change the light direction to something else, just make it a uniform and pass it directly to the shader instead of fCosTime0_X, fSinTime0_X.
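
Both options as a sketch (the uniform name fvLightDir is an assumption, to be set from the application, e.g. with glUniform3f):

// option 1 (inside main): rotate around the x axis instead of y, so the
// light sweeps from above the teapot down to its bottom:
vec3 rotationLight = vec3( 0.0, fCosTime0_X, fSinTime0_X );

// option 2 (global scope): a fixed direction fed in by the application:
uniform vec3 fvLightDir;
// ... and inside main: LightDirection = (-fvLightDir) * gl_NormalMatrix;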
