What are shaders in OpenGL and what do we need them for?


Question

I'm not a native English speaker, and when I try to get through the OpenGL wiki and the tutorials on www.learnopengl.com, I never manage to understand intuitively how the whole concept works. Can someone explain to me in a more abstract way how it works? What are vertex shaders and fragment shaders, and what do we use them for?

Solution

The OpenGL wiki gives a good definition:

A shader is a user-defined program designed to run on some stage of a graphics processor.

In the past, graphics cards were non-programmable pieces of silicon that performed a fixed set of algorithms:

  • inputs: the 3D coordinates of triangles, their colors, and the light sources
  • output: a 2D image

all using a single fixed, parametrized algorithm, typically similar to the Phong reflection model.

[Image: diagram of the Phong reflection model, from Wikipedia.]
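To give a feel for what that fixed algorithm computed, here is a minimal sketch of Phong-style lighting written as a GLSL fragment shader (shaders themselves are introduced below). All variable and uniform names (fragPos, lightPos, and so on) are illustrative assumptions, not a fixed API:

```glsl
#version 330 core
// Minimal, hypothetical Phong-style fragment shader. All names are illustrative.
in vec3 fragPos;    // fragment position, interpolated from the vertex shader
in vec3 normal;     // surface normal, interpolated from the vertex shader

uniform vec3 lightPos;      // position of a single point light
uniform vec3 viewPos;       // camera position
uniform vec3 objectColor;   // base color of the surface

out vec4 fragColor;

void main() {
    vec3 n = normalize(normal);
    vec3 lightDir = normalize(lightPos - fragPos);

    // Ambient term: a small constant amount of light everywhere.
    vec3 ambient = 0.1 * objectColor;

    // Diffuse term: proportional to the angle between normal and light direction.
    float diff = max(dot(n, lightDir), 0.0);
    vec3 diffuse = diff * objectColor;

    // Specular term: bright highlight where the reflected light hits the eye.
    vec3 viewDir = normalize(viewPos - fragPos);
    vec3 reflectDir = reflect(-lightDir, n);
    float spec = pow(max(dot(viewDir, reflectDir), 0.0), 32.0);
    vec3 specular = vec3(0.5) * spec;

    fragColor = vec4(ambient + diffuse + specular, 1.0);
}
```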

But that was too restrictive for programmers who wanted to create many different complex visual effects.

So, as semiconductor manufacturing technology advanced and GPU designers were able to cram more transistors per square millimeter, vendors started allowing parts of the rendering pipeline to be programmed in languages like the C-like GLSL. Those languages are then compiled to semi-undocumented instruction sets that run on the small "CPUs" built into those newer GPUs.

In the beginning, those shader languages were not even Turing complete!

The term General Purpose GPU (GPGPU) refers to this increased programmability of modern GPUs.

In the OpenGL 4 model, only the blue stages of the following diagram are programmable:

[Image: OpenGL 4 rendering pipeline diagram.]

Shaders take their input from the previous pipeline stage (e.g. vertex positions, colors, or rasterized pixels) and customize their output to the next stage.

The two most important ones are:

  • vertex shader:

    • input: the 3D position of a point in space
    • output: the 2D projection of the point (computed, e.g., as explained in How to use glOrtho() in OpenGL?)

  • fragment shader:

    • input: the 2D positions of all the pixels of a triangle + (the colors of its edges or a texture image) + lighting parameters
    • output: the color of each pixel of the triangle (if it is not occluded by a closer triangle), usually interpolated between the vertices

Related question: What are Vertex and Pixel shaders? A minimal sketch of such a vertex/fragment pair follows below.
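To make this concrete, here is a minimal pair in GLSL. It simply projects each vertex with a user-supplied matrix and paints every fragment a single color; the names (mvp, aPos, fragColor) are illustrative assumptions, not mandated by OpenGL:

```glsl
#version 330 core
// Vertex shader: runs once per vertex.
layout(location = 0) in vec3 aPos;  // 3D position read from the vertex buffer
uniform mat4 mvp;                   // model-view-projection matrix set by the host program

void main() {
    // Output: the clip-space (eventually 2D screen) position of the vertex.
    gl_Position = mvp * vec4(aPos, 1.0);
}
```

```glsl
#version 330 core
// Fragment shader: runs once per rasterized pixel of each triangle.
out vec4 fragColor;

void main() {
    // Output: the color of this pixel (a constant red here; real shaders
    // typically interpolate vertex colors or sample textures instead).
    fragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
```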

From this we can see that the name "shader" is not very descriptive for current architectures. The name of course originates from shading, which is handled by what we now call the fragment shader. But "shaders" in GLSL now also manage vertex positions, as the vertex shader does, not to mention the OpenGL 4.3 GL_COMPUTE_SHADER, which allows arbitrary calculations completely unrelated to rendering, much like OpenCL.
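For instance, a compute shader needs no geometry at all. The following hypothetical GLSL 4.30 sketch just doubles every value in a buffer; the buffer layout and names are assumptions for illustration:

```glsl
#version 430
// Hypothetical compute shader: no vertices, no pixels, just raw computation.
layout(local_size_x = 64) in;             // 64 invocations per work group
layout(std430, binding = 0) buffer Data { // a plain float array bound by the host program
    float values[];
};

void main() {
    uint i = gl_GlobalInvocationID.x;
    // Each invocation doubles one element, much like an OpenCL kernel would.
    values[i] *= 2.0;
}
```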

GPU fragment shaders also have a number of cool "non-3D" applications.

TODO: could OpenGL be implemented efficiently with OpenCL alone, i.e., with all stages programmable? Of course, there would have to be a performance/flexibility trade-off.

The first GPUs with shaders used different specialized hardware for vertex and fragment shading, since those have quite different workloads. Current architectures, however, use multiple passes of a single type of hardware (basically small CPUs) for all shader types, which saves some hardware duplication. This design is known as the Unified Shader Model:

[Image: Unified Shader Model architecture diagram.]

To truly understand shaders and all they can do, you have to look at many examples and learn the APIs. https://github.com/JoeyDeVries/LearnOpenGL, for example, is a good source.

In modern OpenGL 4, even hello-world triangle programs use super simple shaders instead of the older, deprecated immediate-mode APIs such as glBegin and glColor. Here is an example: https://stackoverflow.com/a/36166310/895245

Dynamic shadows are a classic cool application of a non-trivial shader:

[Image: a scene with dynamic shadows.]
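The usual technique here is shadow mapping: first render the scene's depth from the light's point of view into a texture, then compare against that depth when shading each fragment. Here is a minimal sketch of the comparison step; all names are illustrative assumptions:

```glsl
#version 330 core
// Sketch of the classic shadow-mapping test behind dynamic shadows.
in vec4 fragPosLightSpace;   // fragment position projected into the light's clip space
uniform sampler2D shadowMap; // depth map rendered beforehand from the light's viewpoint
uniform vec3 litColor;       // surface color when fully lit

out vec4 fragColor;

void main() {
    // Perspective divide, then remap from [-1, 1] to [0, 1] texture coordinates.
    vec3 proj = fragPosLightSpace.xyz / fragPosLightSpace.w;
    proj = proj * 0.5 + 0.5;

    float closestDepth = texture(shadowMap, proj.xy).r; // nearest surface the light sees
    float currentDepth = proj.z;                        // this fragment's depth from the light
    float bias = 0.005;                                 // avoids self-shadowing ("shadow acne")
    float shadow = currentDepth - bias > closestDepth ? 0.3 : 1.0; // darken occluded fragments

    fragColor = vec4(shadow * litColor, 1.0);
}
```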

