What are shaders in OpenGL and what do we need them for?



    When I'm trying to get through the OpenGL wiki and the tutorials on www.learnopengl.com, I never end up with an intuitive understanding of how the whole concept works. Can someone maybe explain to me in a more abstract way how it works? What are the vertex shader and the fragment shader, and what do we use them for?

    Solution

    The OpenGL wiki gives a good definition:

    A Shader is a user-defined program designed to run on some stage of a graphics processor.

    History lesson

    In the past, graphics cards were non-programmable pieces of silicon which performed a set of fixed algorithms:

    • inputs: 3D coordinates of triangles, their colors, light sources
    • output: a 2D image

    all using a single fixed parametrized algorithm, typically similar to the Phong reflection model. Image from Wiki:

    But that was too restrictive for programmers who wanted to create many different complex visual effects.

    So as semiconductor manufacturing technology advanced, and GPU designers were able to cram more transistors per square millimeter, vendors started allowing some parts of the rendering pipeline to be programmed in C-like programming languages such as GLSL.

    Those languages are then converted to semi-undocumented instruction sets that run on the small "CPUs" built into those newer GPUs.

    In the beginning, those shader languages were not even Turing complete!

    The term General Purpose GPU (GPGPU) refers to this increased programmability of modern GPUs.

    Overview of the modern shader pipeline

    In the OpenGL 4 model, only the blue stages of the following diagram are programmable:

    Image source.

    Shaders take the input from the previous pipeline stage (e.g. vertex positions, colors, and rasterized pixels) and customize the output to the next stage.

    The two most important ones are:

    • Vertex shader:

      • input: the 3D position of each vertex
      • output: the projected 2D screen position of each vertex, plus any per-vertex attributes (such as colors) passed on to the later stages

    • Fragment shader:

      • input: the 2D position of all pixels of a triangle + (color of edges or a texture image) + lighting parameters
      • output: the color of each pixel of the triangle (if it is not occluded by another, closer triangle), usually interpolated between the vertices

      Fragments are discretized from the previously computed triangle projections.

    Related question: What are Vertex and Pixel shaders?

    From this we see that the name "shader" is not very descriptive for current architectures. The name originates, of course, from "shadows", which are handled by what we now call the "fragment shader". But "shaders" in GLSL now also manage vertex positions, as is the case for the vertex shader, not to mention the OpenGL 4.3 GL_COMPUTE_SHADER, which allows arbitrary calculations completely unrelated to rendering, much like OpenCL.
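    As a small taste of that flexibility, here is a minimal GL_COMPUTE_SHADER sketch that simply doubles every element of a buffer, with no relation to rendering at all (assumed host-side setup: compile it with glCreateShader(GL_COMPUTE_SHADER), bind a shader storage buffer to binding point 0, and launch it with glDispatchCompute):

```glsl
#version 430
/* 64 invocations per work group; each invocation handles one element. */
layout(local_size_x = 64) in;

/* A shader storage buffer bound by the host program at binding point 0. */
layout(std430, binding = 0) buffer Data {
    float xs[];
};

void main() {
    uint i = gl_GlobalInvocationID.x;
    xs[i] = xs[i] * 2.0;
}
```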

    TODO could OpenGL be efficiently implemented with OpenCL alone, i.e., making all stages programmable? Of course, there must be a performance / flexibility trade-off.

    The first GPUs with shaders even used different specialized hardware for vertex and fragment shading, since those have quite different workloads. Current architectures, however, use multiple passes of a single type of hardware (basically small CPUs) for all shader types, which saves some hardware duplication. This design is known as a Unified Shader Model:

    Adapted from this image, SVG source.

    Source code example

    To truly understand shaders and all they can do, you have to look at many examples and learn the APIs. https://github.com/JoeyDeVries/LearnOpenGL for example is a good source.

    In modern OpenGL 4, even hello world triangle programs use super simple shaders, instead of older deprecated immediate APIs like glBegin and glColor.

    Consider this triangle hello world example that has both the shader and immediate versions in a single program: https://stackoverflow.com/a/36166310/895245

    main.c

    #include <stdio.h>
    #include <stdlib.h>
    
    #define GLEW_STATIC
    #include <GL/glew.h>
    
    #include <GLFW/glfw3.h>
    
    #define INFOLOG_LEN 512
    
    static const GLuint WIDTH = 512, HEIGHT = 512;
    /* vertex data is passed as input to this shader
     * ourColor is passed as input to the to the fragment shader. */
    static const GLchar* vertexShaderSource =
        "#version 330 core\n"
        "layout (location = 0) in vec3 position;\n"
        "layout (location = 1) in vec3 color;\n"
        "out vec3 ourColor;\n"
        "void main() {\n"
        "    gl_Position = vec4(position, 1.0f);\n"
        "    ourColor = color;\n"
        "}\n";
    static const GLchar* fragmentShaderSource =
        "#version 330 core\n"
        "in vec3 ourColor;\n"
        "out vec4 color;\n"
        "void main() {\n"
        "    color = vec4(ourColor, 1.0f);\n"
        "}\n";
    GLfloat vertices[] = {
    /*   Positions            Colors */
         0.5f, -0.5f, 0.0f,   1.0f, 0.0f, 0.0f,
        -0.5f, -0.5f, 0.0f,   0.0f, 1.0f, 0.0f,
         0.0f,  0.5f, 0.0f,   0.0f, 0.0f, 1.0f
    };
    
    int main(int argc, char **argv) {
        int immediate = (argc > 1) && argv[1][0] == '1';
    
        /* Used in !immediate only. */
        GLuint vao, vbo;
        GLuint shaderProgram;
    
        glfwInit();
        GLFWwindow* window = glfwCreateWindow(WIDTH, HEIGHT, __FILE__, NULL, NULL);
        glfwMakeContextCurrent(window);
        glewExperimental = GL_TRUE;
        glewInit();
        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glViewport(0, 0, WIDTH, HEIGHT);
        if (immediate) {
            float ratio;
            int width, height;
            glfwGetFramebufferSize(window, &width, &height);
            ratio = width / (float) height;
            glClear(GL_COLOR_BUFFER_BIT);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glOrtho(-ratio, ratio, -1.f, 1.f, 1.f, -1.f);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            glBegin(GL_TRIANGLES);
            glColor3f(  1.0f,  0.0f, 0.0f);
            glVertex3f(-0.5f, -0.5f, 0.0f);
            glColor3f(  0.0f,  1.0f, 0.0f);
            glVertex3f( 0.5f, -0.5f, 0.0f);
            glColor3f(  0.0f,  0.0f, 1.0f);
            glVertex3f( 0.0f,  0.5f, 0.0f);
            glEnd();
        } else {
            /* Build and compile shader program. */
            /* Vertex shader */
        GLuint vertexShader = glCreateShader(GL_VERTEX_SHADER);
            glShaderSource(vertexShader, 1, &vertexShaderSource, NULL);
            glCompileShader(vertexShader);
            GLint success;
            GLchar infoLog[INFOLOG_LEN];
            glGetShaderiv(vertexShader, GL_COMPILE_STATUS, &success);
            if (!success) {
                glGetShaderInfoLog(vertexShader, INFOLOG_LEN, NULL, infoLog);
            printf("ERROR::SHADER::VERTEX::COMPILATION_FAILED\n%s\n", infoLog);
            }
            /* Fragment shader */
        GLuint fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
            glShaderSource(fragmentShader, 1, &fragmentShaderSource, NULL);
            glCompileShader(fragmentShader);
            glGetShaderiv(fragmentShader, GL_COMPILE_STATUS, &success);
            if (!success) {
                glGetShaderInfoLog(fragmentShader, INFOLOG_LEN, NULL, infoLog);
            printf("ERROR::SHADER::FRAGMENT::COMPILATION_FAILED\n%s\n", infoLog);
            }
            /* Link shaders */
            shaderProgram = glCreateProgram();
            glAttachShader(shaderProgram, vertexShader);
            glAttachShader(shaderProgram, fragmentShader);
            glLinkProgram(shaderProgram);
            glGetProgramiv(shaderProgram, GL_LINK_STATUS, &success);
            if (!success) {
                glGetProgramInfoLog(shaderProgram, INFOLOG_LEN, NULL, infoLog);
            printf("ERROR::SHADER::PROGRAM::LINKING_FAILED\n%s\n", infoLog);
            }
            glDeleteShader(vertexShader);
            glDeleteShader(fragmentShader);
    
            glGenVertexArrays(1, &vao);
            glGenBuffers(1, &vbo);
            glBindVertexArray(vao);
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
            /* Position attribute */
            glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat), (GLvoid*)0);
            glEnableVertexAttribArray(0);
            /* Color attribute */
            glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat), (GLvoid*)(3 * sizeof(GLfloat)));
            glEnableVertexAttribArray(1);
            glBindVertexArray(0);
            glUseProgram(shaderProgram);
            glBindVertexArray(vao);
            glDrawArrays(GL_TRIANGLES, 0, 3);
            glBindVertexArray(0);
        }
        glfwSwapBuffers(window);
    
        /* Main loop. */
        while (!glfwWindowShouldClose(window)) {
            glfwPollEvents();
        }
    
        if (!immediate) {
            glDeleteVertexArrays(1, &vao);
            glDeleteBuffers(1, &vbo);
            glDeleteProgram(shaderProgram);
        }
        glfwTerminate();
        return EXIT_SUCCESS;
    }
    

    Adapted from Learn OpenGL, my GitHub upstream.

    Compile and run on Ubuntu 20.04:

    sudo apt install libglew-dev libglfw3-dev
    gcc -ggdb3 -O0 -std=c99 -Wall -Wextra -pedantic -o main.out main.c -lGL -lGLEW -lglfw
    # Shader
    ./main.out
    # Immediate
    ./main.out 1
    

    Identical outcome of both:

    From that we see how:

    • the vertex and fragment shader programs are represented as C-style strings containing GLSL code (vertexShaderSource and fragmentShaderSource) inside a regular C program that runs on the CPU

    • this C program makes OpenGL calls which compile those strings into GPU code, e.g.:

      glShaderSource(fragmentShader, 1, &fragmentShaderSource, NULL);
      glCompileShader(fragmentShader);
      

    • the shaders define their expected inputs, and the C program supplies them by uploading a block of memory to the GPU. For example, the vertex shader defines its expected inputs as an array of vertex positions and colors:

      "layout (location = 0) in vec3 position;\n"
      "layout (location = 1) in vec3 color;\n"
      "out vec3 ourColor;\n"
      

      and it also defines one of its outputs, ourColor, as an array of colors, which then becomes an input to the fragment shader:

      static const GLchar* fragmentShaderSource =
          "#version 330 core\n"
          "in vec3 ourColor;\n"
      

      The C program then provides the array containing the vertex positions and colors from the CPU to the GPU:

          glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
      

    In the immediate non-shader example, however, we see that magic API calls are made which explicitly give positions and colors:

    glColor3f(  1.0f,  0.0f, 0.0f);
    glVertex3f(-0.5f, -0.5f, 0.0f);
    

    We understand therefore that this represents a much more restricted model, since the positions and colors are no longer arbitrary user-defined arrays in memory processed by an arbitrary user-provided program, but merely inputs to a Phong-like model.

    In both cases, the rendered output normally goes straight to the video output, without passing back through the CPU, although it is possible to read the pixels back to the CPU, e.g. if you want to save them to a file: How to use GLUT/OpenGL to render to a file?

    Cool non-trivial shader applications to 3D graphics

    One classic cool application of a non-trivial shader are dynamic shadows, i.e. shadows cast by one object on another, as opposed to shadows that only depend on the angle between the normal of a triangle and the light source, which was already covered in the Phong model:

    Image source.

    Cool non-3D fragment shader applications

    https://www.shadertoy.com/ is a "Twitter for fragment shaders". It contains a huge selection of visually impressive shaders, and can serve as a "zero setup" way to play with fragment shaders. Shadertoy runs on WebGL, an OpenGL interface for the browser, so when you click on a shadertoy, it renders the shader code in your browser. Like most "fragment shader graphing applications", they just have a fixed, simple vertex shader that draws two triangles covering the screen right in front of the camera: WebGL/GLSL - How does a ShaderToy work?, so the users only code the fragment shader.

    Here are some more science-oriented examples hand-picked by me:
