GLSL Geometry shader to replace glLineWidth

Question

I'm trying to write a geometry shader to replace glLineWidth behavior. I want to draw lines with a customizable width (doing this with a uniform suffices for now). The lines should always have the same thickness, regardless of the camera projection or distance to where the lines are.

Based on a lot of googling, I've come up with the following geometry shader:

#version 330

layout (lines) in;
layout (triangle_strip, max_vertices = 4) out;

uniform mat4    u_model_matrix;
uniform mat4    u_view_matrix;
uniform mat4    u_projection_matrix;
uniform float   u_thickness = 4; // just a test default

void main()
{
    float r = u_thickness / 2; // half-thickness, used as an offset radius

    // Transform both endpoints to view space.
    mat4 mv = u_view_matrix * u_model_matrix;
    vec4 p1 = mv * gl_in[0].gl_Position;
    vec4 p2 = mv * gl_in[1].gl_Position;

    // Line direction and its perpendicular in the view-space XY plane.
    vec2 dir = normalize(p2.xy - p1.xy);
    vec2 normal = vec2(dir.y, -dir.x);

    vec4 offset1, offset2;
    offset1 = vec4(normal * r, 0, 0);
    offset2 = vec4(normal * r, 0, 0);

    vec4 coords[4];
    coords[0] = p1 + offset1;
    coords[1] = p1 - offset1;
    coords[2] = p2 + offset2;
    coords[3] = p2 - offset2;

    // Project each corner to clip space and emit the quad as a triangle strip.
    for (int i = 0; i < 4; ++i) {
        coords[i] = u_projection_matrix * coords[i];
        gl_Position = coords[i];
        EmitVertex();
    }
    EndPrimitive();
}

For completeness, here is the vertex shader:

#version 330

in vec4 a_position;

void main() {
    gl_Position = a_position;
}

... and my fragment shader:

#version 330

uniform vec4 u_color = vec4(1, 0, 1, 1);
out vec4 fragColor;

void main() {
    fragColor = u_color;
}

I can't get the math to work in all situations. With an orthographic camera, the above works fine.

But with a perspective camera, the problem is that the line is not a fixed size: it gets bigger and smaller depending on how far away the object is.

I expected the line to be the same size using a perspective camera as well. What am I doing wrong?

Answer

I managed to fix it by taking into account the viewport size, and scaling my r using that. I do not know if this is the most efficient way to solve this problem (I am by no means a math head), but it does work.

In the code below, I now do all the work in screen space rather than camera/view space, and I use the u_viewportInvSize vec2 (which is 1/viewportSize) to scale my desired radius!

#version 330

layout (lines) in;                              // now we can access 2 vertices
layout (triangle_strip, max_vertices = 4) out;  // always (for now) producing 2 triangles (so 4 vertices)

uniform vec2    u_viewportInvSize;
uniform mat4    u_modelviewprojection_matrix;
uniform float   u_thickness = 4;

void main()
{
    float r = u_thickness; // desired thickness in pixels

    // Transform both endpoints straight to clip space.
    vec4 p1 = u_modelviewprojection_matrix * gl_in[0].gl_Position;
    vec4 p2 = u_modelviewprojection_matrix * gl_in[1].gl_Position;

    // Line direction and its perpendicular in the XY plane.
    vec2 dir = normalize(p2.xy - p1.xy);
    vec2 normal = vec2(dir.y, -dir.x);

    // Scale each offset by the w of its own endpoint so that, after the
    // perspective divide, the offset has a constant size in NDC and hence
    // a constant size in pixels on screen.
    vec4 offset1 = vec4(normal * u_viewportInvSize * (r * p1.w), 0, 0);
    vec4 offset2 = vec4(normal * u_viewportInvSize * (r * p2.w), 0, 0);

    vec4 coords[4];
    coords[0] = p1 + offset1;
    coords[1] = p1 - offset1;
    coords[2] = p2 + offset2;
    coords[3] = p2 - offset2;

    for (int i = 0; i < 4; ++i) {
        gl_Position = coords[i];
        EmitVertex();
    }
    EndPrimitive();
}
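
As a side note (not part of the original answer), below is a minimal host-side sketch in C of how these uniforms might be fed. It assumes an OpenGL 3.3 context whose functions are loaded by glad, a linked program object containing the shaders above, and a helper name (set_line_uniforms) made up purely for illustration.

#include <glad/glad.h> // assumes glad (or another loader) exposes the GL 3.3 API

// Hypothetical helper: upload the uniforms the geometry shader above expects.
void set_line_uniforms(GLuint program,
                       int viewport_width, int viewport_height,
                       float thickness_px)
{
    glUseProgram(program);

    // 1 / viewport size, used by the geometry shader to turn a pixel
    // radius into NDC units.
    glUniform2f(glGetUniformLocation(program, "u_viewportInvSize"),
                1.0f / (float)viewport_width,
                1.0f / (float)viewport_height);

    // Desired line thickness in pixels.
    glUniform1f(glGetUniformLocation(program, "u_thickness"), thickness_px);
}

Something like this would be called whenever the framebuffer is resized (alongside glViewport) so the pixel-based radius stays in sync with the actual viewport; u_modelviewprojection_matrix and u_color can be uploaded the same way with glUniformMatrix4fv and glUniform4f.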
