Deferred Rendering with OpenGL, experiencing heavy pixelization near lit boundaries on surfaces


Problem Explanation

I am currently implementing point lights for a deferred renderer and am having trouble determining where the heavy pixelization/triangulation that is only noticeable near the borders of lights is coming from.

The problem appears to be caused by loss of precision somewhere, but I have been unable to track down the precise source. Normals are an obvious possibility, but I have a classmate who is using DirectX and is handling his normals in a similar manner with no issues.

From about 2 meters away in our game's units (64 units/meter):

A few centimeters away. Note that the "pixelization" does not change size in the world as I approach it. However, it will appear to swim if I change the camera's orientation:

A comparison with a closeup from my forward renderer, which demonstrates the spherical banding that one would expect with an RGBA8 render target (only 256 possible values, 0-255, for each color). Note that in my deferred picture the back walls exhibit normal spherical banding:

The light volume is shown here as the green wireframe:

As can be seen, the effect isn't visible unless you get close to the surface (around one meter in our game's units).


Position reconstruction

First, I should mention that I am using a spherical mesh to render only the portion of the screen that the light overlaps. I render only the back-faces, and only where their depth is greater than or equal to the depth already in the depth buffer, as suggested here.
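
In OpenGL state terms, that back-face test boils down to a few calls; a minimal sketch, assuming depth writes are disabled for the light pass:

// Draw the light volume's back-faces only, and only where the volume lies
// at or behind the scene geometry already in the depth buffer.
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);      // cull front faces so only back-faces rasterize
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_GEQUAL);    // pass where volume depth >= stored scene depth
glDepthMask(GL_FALSE);     // the light pass should not write depth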

To reconstruct the camera-space position of a fragment, I take the vector from the camera to the camera-space fragment on the light volume, normalize it, and scale it by the linear depth from my gbuffer. This is sort of a hybrid of the methods discussed here (using linear depth) and here (spherical light volumes).
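
Since the first gbuffer target below (e_dist_32f) stores a camera-space distance, the reconstruction is just a scaled, normalized view ray. A sketch of that math with glm (the function name is mine):

#include <glm/glm.hpp>

// Sketch of the reconstruction math (hypothetical function name).
// frag_on_volume: camera-space position of the rasterized light-volume fragment.
// stored_distance: camera-space distance read from the e_dist_32f target.
glm::vec3 reconstruct_position(const glm::vec3& frag_on_volume, float stored_distance)
{
  // In camera space the eye sits at the origin, so the fragment position
  // itself is the ray direction once normalized.
  glm::vec3 view_ray = glm::normalize(frag_on_volume);
  return view_ray * stored_distance; // walk out to the surface the gbuffer recorded
}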


Geometry Buffer

My gBuffer setup is:

// Layout: camera-space distance (32F), diffuse albedo, normals + specular
// power, and light accumulation + specular intensity.
enum render_targets { e_dist_32f = 0, e_diffuse_rgb8, e_norm_xyz8_specpow_a8, e_light_rgb8_specintes_a8, num_rt };
//...
GLint internal_formats[num_rt] = {  GL_R32F, GL_RGBA8, GL_RGBA8, GL_RGBA8 };
GLint formats[num_rt]          = {   GL_RED,  GL_RGBA,  GL_RGBA,  GL_RGBA };
GLint types[num_rt]            = { GL_FLOAT, GL_FLOAT, GL_FLOAT, GL_FLOAT };
for(uint i = 0; i < num_rt; ++i)
{
  glBindTexture(GL_TEXTURE_2D, _render_targets[i]);
  // No pixel data is uploaded here (nullptr), so formats/types only need to be a valid pairing.
  glTexImage2D(GL_TEXTURE_2D, 0, internal_formats[i], _width, _height, 0, formats[i], types[i], nullptr);
}
// Separate non-linear depth buffer used for depth testing
glBindTexture(GL_TEXTURE_2D, _depth_tex_id);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32, _width, _height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
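
The question doesn't show the FBO wiring; for context, attaching these targets might look roughly like this (a sketch, assuming a framebuffer object is already generated and bound):

// Attach each gbuffer texture as a color target and the depth texture
// as the depth attachment, then enable all color outputs for MRT.
GLenum draw_buffers[num_rt];
for(GLuint i = 0; i < num_rt; ++i)
{
  glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D, _render_targets[i], 0);
  draw_buffers[i] = GL_COLOR_ATTACHMENT0 + i;
}
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, _depth_tex_id, 0);
glDrawBuffers(num_rt, draw_buffers);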


Answer

Normal Precision

The problem was that my normals just didn't have enough precision. At 8 bits per component that means only 256 discrete possible values. Examining the normals in my gbuffer overlaid on top of the lighting showed a 1:1 correspondence between normal values and lit "pixel" values.
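
To make the quantization concrete, here is a sketch of the round-trip one normal component takes through an 8-bit UNORM channel (helper names are hypothetical):

#include <cmath>

// Round-trip of one normal component through an 8-bit UNORM channel,
// illustrating the quantization described above.
float encode_unorm8(float n)    // [-1, 1] -> integer code in {0..255}
{
  return std::round((n * 0.5f + 0.5f) * 255.0f);
}
float decode_unorm8(float code) // code -> [-1, 1], step size 2/255 ~= 0.008
{
  return code / 255.0f * 2.0f - 1.0f;
}
// e.g. n = 0.5000 and n = 0.5019 both encode to 191 and decode to ~0.498:
// every normal inside that bucket lights identically, producing one visible band.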

I am unsure why my classmate does not get the same issue (he is going to investigate further).

After some more research I found that a term for this is quantization. Another example of it can be seen here with a specular highlight on page 19.


Solution

After changing my normal render target to RG16F, the problem was resolved.

Using the method suggested here to store and retrieve normals, I get the following results:

I now need to store my normals more compactly (I only have room for 2 components). This is a good survey of techniques if anyone finds themselves in the same situation.
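
For reference, one well-known two-component scheme covered in that survey is the spheremap transform; a minimal sketch with glm (function names are mine, and this may not be the exact method linked above):

#include <cmath>
#include <glm/glm.hpp>

// Spheremap-transform encode/decode; n must be a normalized view-space
// normal. Degenerate only at n = (0, 0, -1).
glm::vec2 encode_spheremap(const glm::vec3& n)
{
  float p = std::sqrt(n.z * 8.0f + 8.0f);
  return glm::vec2(n.x, n.y) / p + 0.5f; // lands in [0,1]^2, ready for storage
}

glm::vec3 decode_spheremap(const glm::vec2& enc)
{
  glm::vec2 fenc = enc * 4.0f - 2.0f;
  float f = glm::dot(fenc, fenc);
  float g = std::sqrt(1.0f - f / 4.0f);
  return glm::vec3(fenc * g, 1.0f - f / 2.0f);
}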


[EDIT 1]

As both Andon and GuyRT have pointed out in the comments, 16 bits is overkill for what I need. I've switched to RGB10_A2 as they suggested, and it gives very satisfactory results, even on rounded surfaces. The extra 2 bits help a lot (1024 vs. 256 discrete values).
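
The switch is essentially a one-line change to the texture allocation; a sketch (note that RGB10_A2 leaves only 2 alpha bits, so the 8-bit specular power from the original layout would have to live elsewhere):

// Reallocate the normal target with 10 bits per color component. Values are
// unsigned normalized, so normals still need the usual n * 0.5 + 0.5 remap.
glBindTexture(GL_TEXTURE_2D, _render_targets[e_norm_xyz8_specpow_a8]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB10_A2, _width, _height, 0,
             GL_RGBA, GL_UNSIGNED_INT_2_10_10_10_REV, nullptr);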

Here's what it looks like now.

It should also be noted (for anyone who references this post in the future) that the image I posted for RG16F has some undesirable banding caused by the method I was using to compress/decompress the normal (there was some error involved).


[EDIT 2]

After discussing the issue some more with a classmate (who is using RGB8 with no ill effects), I think it is worth mentioning that I might just have the perfect combination of elements to make this appear. The game I'm building this renderer for is a horror game that places you in pitch-black environments with a sonar-like ability. Normally in a scene you would have a number of lights at different angles (my classmate's environments are all very well lit - they're making an outdoor racing game). That, combined with the fact that the artifact only appears on very round objects viewed relatively close up, might be why I provoked it. This is all just a (slightly educated) guess on my part.
