OpenGL define vertex position in pixels

Question

I've been writing a 2D basic game engine in OpenGL/C++ and learning everything as I go along. I'm still rather confused about defining vertices and their "position". That is, I'm still trying to understand the vertex-to-pixels conversion mechanism of OpenGL. Can it be explained briefly or can someone point to an article or something that'll explain this. Thanks!

Answer

This is rather basic knowledge that your favourite OpenGL learning resource should teach you as one of the first things. But anyway, the standard OpenGL pipeline is as follows:

  1. The vertex position is transformed from object-space (local to some object) into world-space (with respect to some global coordinate system). This transformation specifies where your object (to which the vertices belong) is located in the world.

  2. Now the world-space position is transformed into camera/view-space. This transformation is determined by the position and orientation of the virtual camera by which you see the scene. In OpenGL these two transformations are actually combined into one, the modelview matrix, which directly transforms your vertices from object-space to view-space.

  3. Next the projection transformation is applied. Whereas the modelview transformation should consist only of affine transformations (rotation, translation, scaling), the projection transformation can be a perspective one, which basically distorts the objects to realize a real perspective view (with farther away objects being smaller). But in your case of a 2D view it will probably be an orthographic projection, which does nothing more than a translation and scaling. This transformation is represented in OpenGL by the projection matrix.

  4. After these 3 (or 2) transformations (and the following perspective division by the w component, which actually realizes the perspective distortion, if any) what you have are normalized device coordinates. This means after these transformations the coordinates of the visible objects should be in the range [-1,1]. Everything outside this range is clipped away.

  5. In a final step the viewport transformation is applied and the coordinates are transformed from the [-1,1] range into the [0,w]x[0,h]x[0,1] cube (assuming a glViewport(0, 0, w, h) call), which is the vertex's final position in the framebuffer and therefore its pixel coordinates.
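
To make these steps concrete, here is a small numeric sketch (not part of the original answer) that performs the same chain of transformations by hand with the GLM math library; the particular modelview matrix and the 800x600 viewport are made-up assumptions:

    #include <cstdio>
    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    int main()
    {
        // Steps 1+2: modelview (here just a translation placing the object in view-space)
        glm::mat4 modelview  = glm::translate(glm::mat4(1.0f), glm::vec3(100.0f, 50.0f, 0.0f));
        // Step 3: orthographic projection for an assumed 800x600 view
        glm::mat4 projection = glm::ortho(0.0f, 800.0f, 0.0f, 600.0f, -1.0f, 1.0f);

        glm::vec4 object(10.0f, 20.0f, 0.0f, 1.0f);        // object-space vertex
        glm::vec4 clip = projection * modelview * object;  // clip-space position
        // Step 4: perspective division -> normalized device coordinates in [-1,1]
        glm::vec3 ndc = glm::vec3(clip) / clip.w;

        // Step 5: viewport transformation for glViewport(0, 0, 800, 600)
        float winX = (ndc.x * 0.5f + 0.5f) * 800.0f;
        float winY = (ndc.y * 0.5f + 0.5f) * 600.0f;
        float winZ =  ndc.z * 0.5f + 0.5f;                  // depth in [0,1]

        std::printf("pixel position: (%.1f, %.1f), depth %.2f\n", winX, winY, winZ);
        return 0;
    }

For this translated vertex the program prints a pixel position of (110.0, 70.0), i.e. the original (10, 20) moved by the (100, 50) translation.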

When using a vertex shader, steps 1 to 3 are actually done in the shader and can therefore be done in any way you like, but usually one conforms to this standard modelview -> projection pipeline, too.
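
For illustration, such a vertex shader can be as small as the following sketch (the uniform and attribute names are made up for this example), stored here as a GLSL source string inside the C++ program:

    // GLSL 1.20-style vertex shader performing steps 1 to 3:
    // object-space -> view-space (modelview) -> clip-space (projection).
    const char* vertexShaderSource =
        "#version 120\n"
        "uniform mat4 uModelView;\n"   // combined object-to-view transform
        "uniform mat4 uProjection;\n"  // projection, e.g. orthographic for 2D
        "attribute vec4 aPosition;\n"  // object-space vertex position
        "void main()\n"
        "{\n"
        "    gl_Position = uProjection * uModelView * aPosition;\n"
        "}\n";

The perspective division and the viewport transformation are still applied by the fixed parts of the pipeline after the shader has written gl_Position.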

The main thing to keep in mind is that after the modelview and projection transforms, every vertex with coordinates outside the [-1,1] range will be clipped away. So the [-1,1]-box determines your visible scene after these two transformations.

So from your question I assume you want to use a 2D coordinate system with units of pixels for your vertex coordinates and transformations? In this case this is best done by using glOrtho(0.0, w, 0.0, h, -1.0, 1.0) with w and h being the dimensions of your viewport. This basically counters the viewport transformation and therefore transforms your vertices from the [0,w]x[0,h]x[-1,1]-box into the [-1,1]-box, which the viewport transformation then transforms back to the [0,w]x[0,h]x[0,1]-box.
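
In legacy fixed-function OpenGL that setup could look roughly like this sketch (it assumes a window of w x h pixels and an already created GL context, e.g. via GLUT or GLFW):

    #include <GL/gl.h>

    // Call once per frame or on resize; w and h are the window size in pixels.
    void setupPixelCoordinates(int w, int h)
    {
        glViewport(0, 0, w, h);              // map NDC to the full window
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0.0, w, 0.0, h, -1.0, 1.0);  // pixel coordinates -> [-1,1] box
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
    }

    // Example: draw a 100x100 pixel quad with its lower-left corner at (50, 50).
    void drawQuad()
    {
        glBegin(GL_QUADS);
        glVertex2f( 50.0f,  50.0f);
        glVertex2f(150.0f,  50.0f);
        glVertex2f(150.0f, 150.0f);
        glVertex2f( 50.0f, 150.0f);
        glEnd();
    }

With this projection the coordinates you pass to glVertex2f are effectively pixel positions, with (0, 0) at the lower-left corner of the window.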

These have been quite general explanations without mentioning that the actual transformations are done by matrix-vector multiplications and without talking about homogeneous coordinates, but they should have explained the essentials. This documentation of gluProject might also give you some insight, as it actually models the transformation pipeline for a single vertex. But in this documentation they actually forgot to mention the division by the w component (v'' = v' / v'(3)) after the v' = P x M x v step.
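
As a sketch of how gluProject replays this pipeline for a single vertex (it assumes a current GL context whose modelview matrix, projection matrix and viewport have already been set up):

    #include <GL/glu.h>

    // Project the object-space vertex (10, 20, 0) to window/pixel coordinates.
    void projectExample()
    {
        GLdouble model[16], proj[16];
        GLint view[4];
        glGetDoublev(GL_MODELVIEW_MATRIX, model);   // current modelview matrix
        glGetDoublev(GL_PROJECTION_MATRIX, proj);   // current projection matrix
        glGetIntegerv(GL_VIEWPORT, view);           // current viewport

        GLdouble winX, winY, winZ;
        // Internally: v' = P x M x v, division by the w component, then the viewport transform.
        gluProject(10.0, 20.0, 0.0, model, proj, view, &winX, &winY, &winZ);
    }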

Don't forget to look at the first link in epatel's answer, which explains the transformation pipeline a bit more practically and in more detail.
