OpenGL: debugging "Single-pass Wireframe Rendering"


Problem description

I'm trying to implement the paper "Single-Pass Wireframe Rendering", which seems pretty simple, but it's not giving me what I'd expect as far as thick, dark values.

The paper didn't give the exact code to figure out the altitudes, so I did it as I thought fit. The code should project the three vertices into viewport space, get their "altitudes" and send them to the fragment shader.

The fragment shader determines the distance of the closest edge and generates an edgeIntensity. I'm not sure what I'm supposed to do with this value, but since it's supposed to scale between [0,1], I multiply the inverse against my outgoing color, but it's just very weak.

I had a few questions that I'm not sure are addressed in the paper. First, should the altitudes be calculated in 2D instead of 3D? Second, they cite DirectX features, and DirectX has a different viewport-space z-range, correct? Does that matter? I'm premultiplying the outgoing altitude distances by the w-value of the viewport-space coordinates, as they recommend, to correct for perspective projection.
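As a point of reference, the window-space position that screen-space altitudes would be measured in comes from the fixed-function steps OpenGL applies after the vertex shader: perspective divide, then viewport transform. A minimal sketch of that mapping in plain C (vp_w and vp_h are assumed viewport dimensions in pixels):

```c
typedef struct { float x, y; } Vec2;

/* Map a clip-space (x, y, w) coordinate to window coordinates:
 * divide by w to reach NDC in [-1, 1], then scale/bias into
 * [0, vp_w] x [0, vp_h].  (GL's window origin is bottom-left,
 * so no y-flip is needed.) */
static Vec2 clip_to_window(float x, float y, float w,
                           float vp_w, float vp_h)
{
    Vec2 p;
    p.x = (x / w * 0.5f + 0.5f) * vp_w;
    p.y = (y / w * 0.5f + 0.5f) * vp_h;
    return p;
}
```

The clip-space origin lands at the viewport center, e.g. (0, 0, 1) maps to (400, 300) in an 800x600 viewport.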

Image with attempted perspective correction:

Without correction (no premultiplication by the w-value):

The non-corrected image seems to have clear problems not correcting for the perspective on the more away-facing sides, but the perspective-corrected one has very weak values.

Can anyone see what's wrong with my code or how to go about debugging it from here?

My vertex code in GLSL...

float altitude(in vec3 a, in vec3 b, in vec3 c) { // for an ABC triangle
  vec3 ba = a - b;
  vec3 bc = c - b;
  vec3 ba_onto_bc = dot(ba,bc) * bc;
  return(length(ba - ba_onto_bc));
}

in vec3 vertex; // incoming vertex
in vec3 v2; // first neighbor (CCW)
in vec3 v3; // second neighbor (CCW)
in vec4 color;
in vec3 normal;
varying vec3 worldPos;
varying vec3 worldNormal;
varying vec3 altitudes;
uniform mat4 objToWorld;
uniform mat4 cameraPV;
uniform mat4 normalToWorld;
void main() {
  worldPos = (objToWorld * vec4(vertex,1.0)).xyz;
  worldNormal = (normalToWorld * vec4(normal,1.0)).xyz;
  //worldNormal = normal;
  gl_Position = cameraPV * objToWorld * vec4(vertex,1.0);
  // also put the neighboring polygons in viewport space
  vec4 vv1 = gl_Position;
  vec4 vv2 = cameraPV * objToWorld * vec4(v2,1.0);
  vec4 vv3 = cameraPV * objToWorld * vec4(v3,1.0);
  altitudes = vec3(vv1.w * altitude(vv1.xyz,vv2.xyz,vv3.xyz),
                   vv2.w * altitude(vv2.xyz,vv3.xyz,vv1.xyz),
                   vv3.w * altitude(vv3.xyz,vv1.xyz,vv2.xyz));
  gl_FrontColor = color;
}

...and my fragment code...

varying vec3 worldPos;
varying vec3 worldNormal;
varying vec3 altitudes;
uniform vec3 cameraPos;
uniform vec3 lightDir;
uniform vec4 singleColor;
uniform float isSingleColor;
void main() {
    // determine frag distance to closest edge
    float d = min(min(altitudes.x, altitudes.y), altitudes.z);
    float edgeIntensity = exp2(-2.0*d*d);
    vec3 L = lightDir;
    vec3 V = normalize(cameraPos - worldPos);
    vec3 N = normalize(worldNormal);
    vec3 H = normalize(L+V);
    //vec4 color = singleColor;
    vec4 color = isSingleColor*singleColor + (1.0-isSingleColor)*gl_Color;
    //vec4 color = gl_Color;
    float amb = 0.6;
    vec4 ambient = color * amb;
    vec4 diffuse = color * (1.0 - amb) * max(dot(L, N), 0.0);
    vec4 specular = vec4(0.0);
    gl_FragColor = (edgeIntensity * vec4(0.0)) + ((1.0-edgeIntensity) * vec4(ambient + diffuse + specular));
}

Answer

I have implemented swine's idea, and the result is perfect; here is my screenshot:

struct MYBUFFEREDVERTEX {
    float x, y, z;
    float nx, ny, nz;
    float u, v;
    float bx, by, bz;
};

const MYBUFFEREDVERTEX g_vertex_buffer_data[] = {
    -1.0f, -1.0f, 0.0f,
    0.0f, 0.0f, 1.0f,
    0.0f, 0.0f,
    1.0f, 0.0f, 0.0f,

    1.0f, -1.0f, 0.0f,
    0.0f, 0.0f, 1.0f,
    1.0f, 0.0f,
    0.0f, 1.0f, 0.0f,

    -1.0f, 1.0f, 0.0f,
    0.0f, 0.0f, 1.0f,
    0.0f, 1.0f,
    0.0f, 0.0f, 1.0f,

    1.0f, 1.0f, 0.0f,
    0.0f, 0.0f, 1.0f,
    1.0f, 1.0f,
    1.0f, 0.0f, 0.0f,
};

glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
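For context, the attribute pointers for this interleaved layout would use byte offsets into the struct with a stride of `sizeof(MYBUFFEREDVERTEX)`. A sketch verifying those offsets (the struct is repeated from the answer above; assuming 4-byte floats and no padding, which holds on common platforms):

```c
#include <stddef.h>

/* Interleaved layout: position, normal, texcoord, barycentric.
 * The matching glVertexAttribPointer calls would pass these byte
 * offsets with stride sizeof(MYBUFFEREDVERTEX). */
typedef struct {
    float x, y, z;     /* a_position    at offset  0 */
    float nx, ny, nz;  /* a_normal      at offset 12 */
    float u, v;        /* a_texcoord    at offset 24 */
    float bx, by, bz;  /* a_barycentric at offset 32 */
} MYBUFFEREDVERTEX;   /* 11 floats -> 44-byte stride */
```

Computing the offsets with `offsetof` rather than hard-coding them keeps the attribute setup correct if the struct is ever reordered.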

The vertex shader:

#ifdef GL_ES
// Set default precision to medium
precision mediump int;
precision mediump float;
#endif

uniform mat4 u_mvp_matrix;
uniform vec3 u_light_direction;

attribute vec3 a_position;
attribute vec3 a_normal;
attribute vec2 a_texcoord;
attribute vec3 a_barycentric;

varying vec2 v_texcoord;
varying float v_light_intensity;
varying vec3 v_barycentric;

void main()
{
    // Calculate vertex position in screen space
    gl_Position = u_mvp_matrix * vec4(a_position, 1.0);
    // calculate light intensity, range of 0.3 ~ 1.0
    v_light_intensity = max(dot(u_light_direction, a_normal), 0.3);
    // Pass texture coordinate to fragment shader
    v_texcoord = a_texcoord;
    // Pass bary centric to fragment shader
    v_barycentric = a_barycentric;
}

The fragment shader:

#ifdef GL_ES
// Set default precision to medium
precision mediump int;
precision mediump float;
#endif

uniform sampler2D u_texture;

varying vec2 v_texcoord;
varying float v_light_intensity;
varying vec3 v_barycentric;

void main()
{
    float min_dist = min(min(v_barycentric.x, v_barycentric.y), v_barycentric.z);
    float edgeIntensity = 1.0 - step(0.005, min_dist);
    // Set diffuse color from texture
    vec4 diffuse = texture2D(u_texture, v_texcoord) * vec4(vec3(v_light_intensity), 1.0);
    gl_FragColor = edgeIntensity * vec4(0.0, 1.0, 1.0, 1.0) + (1.0 - edgeIntensity) * diffuse;
}
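The edge test in this fragment shader reduces to "smallest barycentric coordinate below a threshold": interpolation of the (1,0,0)/(0,1,0)/(0,0,1) attributes yields the fragment's barycentric coordinates, and a coordinate near zero means the fragment is near the opposite edge. A CPU-side restatement of that logic in plain C, mirroring the shader above:

```c
/* A fragment lies on the wireframe when its smallest barycentric
 * coordinate falls below the threshold (0.005 in the shader).
 * Equivalent to edgeIntensity = 1.0 - step(threshold, min_dist). */
static int on_edge(float bx, float by, float bz, float threshold)
{
    float m = bx < by ? bx : by;
    if (bz < m) m = bz;
    return m < threshold;
}
```

Note this threshold is in barycentric units, so the drawn line gets thinner as triangles shrink on screen; the paper's screen-space-distance approach avoids that.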
