Improved Area Lighting in WebGL & ThreeJS


Question


I have been working on an area lighting implementation in WebGL similar to this demo:

http://threejs.org/examples/webgldeferred_arealights.html

The above implementation in three.js was ported from the work of ArKano22 over on gamedev.net:

http://www.gamedev.net/topic/552315-glsl-area-light-implementation/

Though these solutions are very impressive, they both have a few limitations. The primary issue with ArKano22's original implementation is that the calculation of the diffuse term does not account for surface normals.

I have been augmenting this solution for some weeks now, working with the improvements by redPlant to address this problem. Currently I have normal calculations incorporated into the solution, but the result is still flawed.

Here is a sneak preview of my current implementation:

Introduction

The steps for calculating the diffuse term for each fragment are as follows:

  1. Project the vertex onto the plane that the area light sits on, so that the projected vector is coincident with the light's normal/direction.
  2. Check that the vertex is on the correct side of the area light plane by comparing the projection vector with the light's normal.
  3. Calculate the 2D offset of this projected point on the plane from the light's center/position.
  4. Clamp this 2D offset vector so that it sits inside the light's area (defined by its width and height).
  5. Derive the 3D world position of the projected and clamped 2D point. This is the nearest point on the area light to the vertex.
  6. Perform the usual diffuse calculations that you would for a point light by taking the dot product between the vertex-to-nearest-point vector (normalised) and the vertex normal.
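The projection-and-clamp procedure above can be sketched in plain JavaScript. The array-based vector helpers and the `nearestPointOnAreaLight` name are illustrative stand-ins for the THREE.Vector3 code in my sketch, not quotations from it:

```javascript
// Minimal vector helpers (stand-ins for THREE.Vector3 operations).
const sub = (a, b) => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
const add = (a, b) => [a[0] + b[0], a[1] + b[1], a[2] + b[2]];
const scale = (a, s) => [a[0] * s, a[1] * s, a[2] * s];
const dot = (a, b) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
const clamp = (x, lo, hi) => Math.min(Math.max(x, lo), hi);

// Steps 1-5: nearest point on a rectangular area light to a vertex.
// `light` carries position, normal, right, up (unit vectors), width, height.
function nearestPointOnAreaLight(vertex, light) {
  const toVertex = sub(vertex, light.position);
  // Steps 1-2: signed distance along the light normal (positive = lit side).
  const distance = dot(toVertex, light.normal);
  // Step 3: 2D offset of the projected point within the light's plane.
  const u = dot(toVertex, light.right);
  const v = dot(toVertex, light.up);
  // Step 4: clamp the offset into the light's rectangle.
  const cu = clamp(u, -light.width / 2, light.width / 2);
  const cv = clamp(v, -light.height / 2, light.height / 2);
  // Step 5: rebuild the world-space position of the clamped point.
  const nearest = add(light.position,
    add(scale(light.right, cu), scale(light.up, cv)));
  return { nearest, onLitSide: distance > 0 };
}
```

Step 6 then treats `nearest` as an ordinary point-light position for the Lambert term.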

Problem

The issue with this solution is that the lighting calculations are done from the nearest point only and do not account for other points on the light's surface that could be illuminating the fragment even more strongly. Let me try and explain why…

Consider the following diagram:

The area light is both perpendicular to the surface and intersects it. Each of the fragments on the surface will always return a nearest point on the area light where the surface and the light intersect. Since the surface normal and the vertex-to-light vectors are always perpendicular, the dot product between them is zero. Subsequently, the calculation of the diffuse contribution is zero despite there being a large area of light looming over the surface.

Potential Solution

I propose that rather than calculate the light from the nearest point on the area light, we calculate it from a point on the area light that yields the greatest dot product between the vertex-to-light vector (normalised) and the vertex normal. In the diagram above, this would be the purple dot, rather than the blue dot.

Help!

And so, this is where I need your help. In my head, I have a pretty good idea of how this point can be derived, but don't have the mathematical competence to arrive at the solution.

Currently I have the following information available in my fragment shader:

  • vertex position
  • vertex normal (unit vector)
  • light position, width and height
  • light normal (unit vector)
  • light right (unit vector)
  • light up (unit vector)
  • projected point from the vertex onto the light's plane (3D)
  • projected point offset from the light's center (2D)
  • clamped offset (2D)
  • world position of this clamped offset – the nearest point (3D)

To put all this information into a visual context, I created this diagram (hope it helps):

To test my proposal, I need the casting point on the area light – represented by the red dots, so that I can perform the dot product between the vertex-to-casting-point (normalised) and the vertex normal. Again, this should yield the maximum possible contribution value.

UPDATE!!!

I have created an interactive sketch over on CodePen that visualises the mathematics that I currently have implemented:

http://codepen.io/wagerfield/pen/ywqCp

The relevant code that you should focus on is at line 318.

castingPoint.location is an instance of THREE.Vector3 and is the missing piece of the puzzle. You should also notice that there are 2 values at the lower left of the sketch – these are dynamically updated to display the dot product between the relevant vectors.

I imagine that the solution would require another pseudo plane that aligns with the direction of the vertex normal AND is perpendicular to the light's plane, but I could be wrong!

Solution

The good news is there is a solution; but first the bad news.

Your approach of using the point that maximizes the dot product is fundamentally flawed, and not physically plausible.

In your first illustration above, suppose that your area light consisted of only the left half.

The "purple" point -- the one that maximizes the dot-product for the left half -- is the same as the point that maximizes the dot-product for both halves combined.

Therefore, if one were to use your proposed solution, one would conclude that the left half of the area light emits the same radiation as the entire light. Obviously, that is impossible.

The solution for computing the total amount of light that the area light casts on a given point is rather complicated, but for reference, you can find an explanation in the 1994 paper The Irradiance Jacobian for Partially Occluded Polyhedral Sources.
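For orientation, the quantity the answer's shader accumulates is the vector irradiance of an unoccluded polygonal source. In the paper's terms (my paraphrase, not a quotation):

```latex
% Unit vectors v_i point from the shaded point x toward the polygon's
% vertices; indices are taken modulo the vertex count n.
\vec{\Phi}(x) \;=\; \frac{1}{2}\sum_{i=0}^{n-1} \Theta_i\,\hat{\Gamma}_i,
\qquad
\Theta_i = \arccos\!\left(v_i \cdot v_{i+1}\right),
\qquad
\hat{\Gamma}_i = \frac{v_i \times v_{i+1}}{\left\lVert v_i \times v_{i+1} \right\rVert}
```

The Lambertian diffuse factor is then \(\max(\vec{\Phi}\cdot\hat{n},\,0)/\pi\); the shader folds the leading \(1/2\) into its \(2\pi\) denominator.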

I suggest you look at Figure 1, and a few paragraphs of Section 1.2 -- and then stop. :-)

To make it easy, I have coded a very simple shader that implements the solution using the three.js WebGLRenderer -- not the deferred one.

EDIT: Here is an updated fiddle: http://jsfiddle.net/hh74z2ft/1/

The core of the fragment shader is quite simple

// direction vectors from point to area light corners

for( int i = 0; i < NVERTS; i ++ ) {

    lPosition[ i ] = viewMatrix * lightMatrixWorld * vec4( lightverts[ i ], 1.0 ); // in camera space

    lVector[ i ] = normalize( lPosition[ i ].xyz + vViewPosition.xyz ); // dir from vertex to areaLight

}

// vector irradiance at point

vec3 lightVec = vec3( 0.0 );

for( int i = 0; i < NVERTS; i ++ ) {

    vec3 v0 = lVector[ i ];
    vec3 v1 = lVector[ int( mod( float( i + 1 ), float( NVERTS ) ) ) ]; // ugh...

    lightVec += acos( clamp( dot( v0, v1 ), -1.0, 1.0 ) ) * normalize( cross( v0, v1 ) ); // clamp guards acos() against floating-point drift

}

// irradiance factor at point

float factor = max( dot( lightVec, normal ), 0.0 ) / ( 2.0 * 3.14159265 );
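The same loop can be exercised on the CPU. This is a direct JavaScript port of the shader above; the array-based helpers are my own, not three.js:

```javascript
// Array-based vector helpers (stand-ins for GLSL vec3 built-ins).
const sub3 = (a, b) => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
const dot3 = (a, b) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
const cross3 = (a, b) => [
  a[1] * b[2] - a[2] * b[1],
  a[2] * b[0] - a[0] * b[2],
  a[0] * b[1] - a[1] * b[0],
];
const norm3 = (a) => {
  const l = Math.hypot(a[0], a[1], a[2]);
  return [a[0] / l, a[1] / l, a[2] / l];
};

// corners: world-space light-corner positions with consistent winding.
// Returns the diffuse irradiance factor at `point` with unit `normal`.
function irradianceFactor(point, normal, corners) {
  // Unit direction vectors from the point to each light corner.
  const lVector = corners.map((c) => norm3(sub3(c, point)));
  let lightVec = [0, 0, 0];
  for (let i = 0; i < lVector.length; i++) {
    const v0 = lVector[i];
    const v1 = lVector[(i + 1) % lVector.length];
    // Edge contribution: subtended angle times the edge-plane normal.
    const theta = Math.acos(Math.min(Math.max(dot3(v0, v1), -1), 1));
    const gamma = norm3(cross3(v0, v1));
    lightVec = lightVec.map((x, k) => x + theta * gamma[k]);
  }
  return Math.max(dot3(lightVec, normal), 0) / (2 * Math.PI);
}
```

For a 2×2 square light one unit above a point whose normal faces it, the factor comes out to roughly 0.55: large, but short of the 1.0 a full hemisphere of light would give.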

More Good News:

  1. This approach is physically correct.
  2. Attenuation is handled automatically. ( Be aware that smaller lights will require a larger intensity value. )
  3. In theory, this approach should work with arbitrary polygons, not just rectangular ones.

Caveats:

  1. I have only implemented the diffuse component, because that is what your question addresses.
  2. You will have to implement the specular component using a reasonable heuristic -- similar to what you already have coded, I expect.
  3. This simple example does not handle the case where the area light is "partially below the horizon" -- i.e. not all 4 vertices are above the plane of the face.
  4. Since WebGLRenderer does not support area lights, you can't "add the light to the scene" and expect it to work. This is why I pass all necessary data into the custom shader. ( WebGLDeferredRenderer does support area lights, of course. )
  5. Shadows are not supported.

three.js r.73

