CPU to GPU normal mapping


Problem description

I'm creating a terrain mesh, and following this SO answer I'm trying to migrate my CPU-computed normals to a shader-based version, in order to improve performance by reducing my mesh resolution and using a normal map computed in the fragment shader.

I'm using a MapBox height map for the terrain data. Tiles look like this:

And elevation at each pixel is given by the following formula:

const elevation = -10000.0 + ((red * 256.0 * 256.0 + green * 256.0 + blue) * 0.1);

My original code first creates a dense mesh (256*256 squares of 2 triangles each) and then computes triangle and vertex normals. To get a visually satisfying result I was dividing the elevation by 5000 to match the tile's width & height in my scene (in the future I'll do a proper computation to display the real elevation).

I was drawing with these simple shaders:

Vertex shader:

uniform mat4 u_Model;
uniform mat4 u_View;
uniform mat4 u_Projection;

attribute vec3 a_Position;
attribute vec3 a_Normal;
attribute vec2 a_TextureCoordinates;

varying vec3 v_Position;
varying vec3 v_Normal;
varying mediump vec2 v_TextureCoordinates;

void main() {

  v_TextureCoordinates = a_TextureCoordinates;
  v_Position = vec3(u_View * u_Model * vec4(a_Position, 1.0));
  v_Normal = vec3(u_View * u_Model * vec4(a_Normal, 0.0));
  gl_Position = u_Projection * u_View * u_Model * vec4(a_Position, 1.0);
}

Fragment shader:

precision mediump float;

varying vec3 v_Position;
varying vec3 v_Normal;
varying mediump vec2 v_TextureCoordinates;

uniform sampler2D texture;

void main() {

    vec3 lightVector = normalize(-v_Position);
    float diffuse = max(dot(v_Normal, lightVector), 0.1);

    highp vec4 textureColor = texture2D(texture, v_TextureCoordinates);
    gl_FragColor = vec4(textureColor.rgb * diffuse, textureColor.a);
}

It was slow but gave visually satisfying results:

Now I've removed all the CPU-based normal computation code and replaced my shaders with these:

Vertex shader:

#version 300 es

precision highp float;
precision highp int;

uniform mat4 u_Model;
uniform mat4 u_View;
uniform mat4 u_Projection;

in vec3 a_Position;
in vec2 a_TextureCoordinates;

out vec3 v_Position;
out vec2 v_TextureCoordinates;
out mat4 v_Model;
out mat4 v_View;

void main() {

  v_TextureCoordinates = a_TextureCoordinates;
  v_Model = u_Model;
  v_View = u_View;

  v_Position = vec3(u_View * u_Model * vec4(a_Position, 1.0));
  gl_Position = u_Projection * u_View * u_Model * vec4(a_Position, 1.0);
}

Fragment shader:

#version 300 es

precision highp float;
precision highp int;

in vec3 v_Position;
in vec2 v_TextureCoordinates;

in mat4 v_Model;
in mat4 v_View;

uniform sampler2D u_dem;
uniform sampler2D u_texture;

out vec4 color;

const vec2 size = vec2(2.0,0.0);
const ivec3 offset = ivec3(-1,0,1);

float getAltitude(vec4 pixel) {

  float red = pixel.x;
  float green = pixel.y;
  float blue = pixel.z;

  return (-10000.0 + ((red * 256.0 * 256.0 + green * 256.0 + blue) * 0.1)) * 6.0; // Why * 6 and not / 5000 ??
}

void main() {

    float s01 = getAltitude(textureOffset(u_dem, v_TextureCoordinates, offset.xy));
    float s21 = getAltitude(textureOffset(u_dem, v_TextureCoordinates, offset.zy));
    float s10 = getAltitude(textureOffset(u_dem, v_TextureCoordinates, offset.yx));
    float s12 = getAltitude(textureOffset(u_dem, v_TextureCoordinates, offset.yz));

    vec3 va = (vec3(size.xy, s21 - s01));
    vec3 vb = (vec3(size.yx, s12 - s10));

    vec3 normal = normalize(cross(va, vb));
    vec3 transformedNormal = normalize(vec3(v_View * v_Model * vec4(normal, 0.0)));

    vec3 lightVector = normalize(-v_Position);
    float diffuse = max(dot(transformedNormal, lightVector), 0.1);

    highp vec4 textureColor = texture(u_texture, v_TextureCoordinates);
    color = vec4(textureColor.rgb * diffuse, textureColor.a);
}

It now loads nearly instantly, but something is wrong:

  • In the fragment shader I have to multiply the elevation by 6 instead of dividing by 5000 to get something close to my original code.
  • The result isn't as good. Especially when I tilt the scene, the shadows are very dark (the more I tilt, the darker they get):

Can you spot what causes that difference?

I created two JSFiddles:

  • first version, with CPU-computed vertex normals: http://jsfiddle.net/tautin/tmugzv6a/10
  • second version, with a GPU-computed normal map: http://jsfiddle.net/tautin/8gqa53e1/42

The problem appears when you play with the tilt slider.

Solution

I can spot three problems.

One you saw and fixed by trial and error: the scale of your height calculation was wrong. On the CPU your color coordinates vary from 0 to 255, but in GLSL texture values are normalized to the range 0 to 1, so the correct height calculation is:

return (-10000.0 + ((red * 256.0 * 256.0 + green * 256.0 + blue) * 0.1 * 256.0)) / Z_SCALE;

But for this shader's purposes the -10000.0 offset doesn't matter, so you can simply do:

return (red * 256.0 * 256.0 + green * 256.0 + blue) * 0.1 * 256.0 / Z_SCALE;
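
Z_SCALE here stands in for whatever vertical scaling you apply. A minimal sketch of how it could be defined, assuming it should match the original CPU division by 5000:

const float Z_SCALE = 5000.0; // assumed: matches the CPU code's division by 5000

float getAltitude(vec4 pixel) {
  float red = pixel.x;
  float green = pixel.y;
  float blue = pixel.z;
  // Texture reads are normalized to [0, 1], hence the extra * 256.0.
  return (red * 256.0 * 256.0 + green * 256.0 + blue) * 0.1 * 256.0 / Z_SCALE;
}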

The second problem is that the scale of your x and y coordinates was also wrong. In the CPU code the distance between two neighboring points is (SIZE * 2.0 / (RESOLUTION + 1)), but in the GPU code you had set it to 1. The correct way to define the size variable is:

const float SIZE = 2.0;
const float RESOLUTION = 255.0;

const vec2 size = vec2(2.0 * SIZE / (RESOLUTION + 1.0), 0.0);

Notice that I increased the resolution to 255, because I assume this is what you want (the texture resolution minus one). This is also needed to match the value of offset, which you defined as:

const ivec3 offset = ivec3(-1,0,1);

To use a different RESOLUTION value you will have to adjust offset accordingly, e.g. for RESOLUTION == 127, offset = ivec3(-2,0,2). In other words, the offset must be <real texture resolution>/(RESOLUTION + 1), which limits the possible values of RESOLUTION, since offsets must be integers. A concrete pairing is sketched below.
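
As an example (a sketch, assuming a 256-pixel DEM tile), a half-resolution mesh would pair these constants:

// Assumed: 256-px DEM tile, 127*127-square mesh.
// Each mesh cell then spans 256 / (127 + 1) = 2 texels.
const float SIZE = 2.0;
const float RESOLUTION = 127.0;

const vec2 size = vec2(2.0 * SIZE / (RESOLUTION + 1.0), 0.0);
const ivec3 offset = ivec3(-2, 0, 2);

Also note that textureOffset requires a constant offset within the implementation's texel-offset range (GLSL ES 3.00 guarantees at least [-8, 7]), which further caps how coarse RESOLUTION can get.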

The third problem is that you used a different normal-calculation algorithm on the GPU, which strikes me as having a lower effective resolution than the one used on the CPU, because you use the four outer pixels of a cross but ignore the central one. That doesn't seem to be the full story, though, and I can't explain why the results are so different. I tried to implement exactly what I thought the CPU algorithm was, but it yielded different results. Instead, I had to use the following algorithm, which is similar but not exactly the same, to get an almost identical result (if you increase the CPU resolution to 255):

    float s11 = getAltitude(texture(u_dem, v_TextureCoordinates));                  // center sample
    float s21 = getAltitude(textureOffset(u_dem, v_TextureCoordinates, offset.zy)); // one texel in +x
    float s10 = getAltitude(textureOffset(u_dem, v_TextureCoordinates, offset.yx)); // one texel in -y

    vec3 va = vec3(size.xy, s21 - s11); // surface vector along x, from the height difference
    vec3 vb = vec3(size.yx, s10 - s11); // surface vector along y, from the height difference

    vec3 normal = normalize(cross(va, vb));
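
Putting the three fixes together, the core of the corrected fragment shader looks roughly like this. It's only a sketch assembled from the snippets above, reusing the question's in/out and uniform declarations and assuming Z_SCALE = 5000.0; the final fiddle below remains the authoritative version:

const float SIZE = 2.0;
const float RESOLUTION = 255.0;

const vec2 size = vec2(2.0 * SIZE / (RESOLUTION + 1.0), 0.0);
const ivec3 offset = ivec3(-1, 0, 1);
const float Z_SCALE = 5000.0; // assumed to match the CPU division by 5000

float getAltitude(vec4 pixel) {
  // Texture reads are normalized to [0, 1], hence the extra * 256.0.
  return (pixel.x * 256.0 * 256.0 + pixel.y * 256.0 + pixel.z) * 0.1 * 256.0 / Z_SCALE;
}

void main() {
    float s11 = getAltitude(texture(u_dem, v_TextureCoordinates));
    float s21 = getAltitude(textureOffset(u_dem, v_TextureCoordinates, offset.zy));
    float s10 = getAltitude(textureOffset(u_dem, v_TextureCoordinates, offset.yx));

    vec3 va = vec3(size.xy, s21 - s11);
    vec3 vb = vec3(size.yx, s10 - s11);

    vec3 normal = normalize(cross(va, vb));
    vec3 transformedNormal = normalize(vec3(v_View * v_Model * vec4(normal, 0.0)));

    vec3 lightVector = normalize(-v_Position);
    float diffuse = max(dot(transformedNormal, lightVector), 0.1);

    vec4 textureColor = texture(u_texture, v_TextureCoordinates);
    color = vec4(textureColor.rgb * diffuse, textureColor.a);
}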

This is the original CPU solution, but with RESOLUTION=255: http://jsfiddle.net/k0fpxjd8/

This is the final GPU solution: http://jsfiddle.net/7vhpuqd8/
