OpenGL: define vertex positions in pixels


Problem Description



I've been writing a 2D basic game engine in OpenGL/C++ and learning everything as I go along. I'm still rather confused about defining vertices and their "position". That is, I'm still trying to understand the vertex-to-pixels conversion mechanism of OpenGL. Can it be explained briefly or can someone point to an article or something that'll explain this. Thanks!

Solution

This is rather basic knowledge that your favourite OpenGL learning resource should teach you as one of the first things. But anyway, the standard OpenGL pipeline is as follows (a worked sketch of all five steps in code follows the list):

  1. The vertex position is transformed from object-space (local to some object) into world-space (in respect to some global coordinate system). This transformation specifies where your object (to which the vertices belong) is located in the world.

  2. Now the world-space position is transformed into camera/view-space. This transformation is determined by the position and orientation of the virtual camera by which you see the scene. In OpenGL these two transformations are actually combined into one, the modelview matrix, which directly transforms your vertices from object-space to view-space.

  3. Next the projection transformation is applied. Whereas the modelview transformation should consist only of affine transformations (rotation, translation, scaling), the projection transformation can be a perspective one, which basically distorts the objects to realize a real perspective view (with farther away objects being smaller). But in your case of a 2D view it will probably be an orthographic projection, that does nothing more than a translation and scaling. This transformation is represented in OpenGL by the projection matrix.

  4. After these 3 (or 2) transformations (and then following perspective division by the w component, which actually realizes the perspective distortion, if any) what you have are normalized device coordinates. This means after these transformations the coordinates of the visible objects should be in the range [-1,1]. Everything outside this range is clipped away.

  5. In a final step the viewport transformation is applied and the coordinates are transformed from the [-1,1] range into the [0,w]x[0,h]x[0,1] cube (assuming a glViewport(0, 0, w, h) call), which are the vertex's final positions in the framebuffer and therefore its pixel coordinates.
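To make these steps concrete, here is a minimal, self-contained C++ sketch (no OpenGL calls, just the math) that pushes a single vertex through the pipeline by hand. The matrix values mirror what glOrtho(0, 800, 0, 600, -1, 1) would produce, combined with an identity modelview and a glViewport(0, 0, 800, 600) viewport; the 800x600 window size is just an assumption for the example.

```cpp
#include <array>
#include <cstdio>

// Column-major 4x4 matrix times 4-vector, as OpenGL stores matrices.
using Vec4 = std::array<double, 4>;
using Mat4 = std::array<double, 16>;

Vec4 mul(const Mat4& m, const Vec4& v) {
    Vec4 r{};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            r[row] += m[col * 4 + row] * v[col];
    return r;
}

int main() {
    // Orthographic projection equivalent to glOrtho(0, 800, 0, 600, -1, 1):
    // scales [0,800]x[0,600] into the [-1,1] box (column-major storage).
    Mat4 proj = {
        2.0 / 800, 0,         0,  0,
        0,         2.0 / 600, 0,  0,
        0,         0,        -1,  0,
       -1,        -1,         0,  1,
    };
    // Identity modelview: object space == world space == view space (steps 1+2).
    Mat4 modelview = {1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1};

    Vec4 v = {400, 300, 0, 1};                 // a vertex in "pixel" units
    Vec4 clip = mul(proj, mul(modelview, v));  // steps 1-3

    // Step 4: perspective division, clip space -> normalized device coordinates.
    double ndcX = clip[0] / clip[3], ndcY = clip[1] / clip[3], ndcZ = clip[2] / clip[3];

    // Step 5: viewport transform (glViewport(0, 0, 800, 600)), NDC -> pixels.
    double winX = (ndcX + 1) * 0.5 * 800;
    double winY = (ndcY + 1) * 0.5 * 600;
    double winZ = (ndcZ + 1) * 0.5;            // default depth range [0,1]

    std::printf("window coords: %.1f %.1f %.2f\n", winX, winY, winZ);  // 400.0 300.0 0.50
}
```

As expected with this setup, the vertex at (400, 300) in "pixel" units lands exactly on pixel (400, 300) in the framebuffer.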

When using a vertex shader, steps 1 to 3 are actually done in the shader and can therefore be done in any way you like, but usually one conforms to this standard modelview -> projection pipeline, too.
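For illustration, a minimal vertex shader doing exactly that could look like the following (GLSL embedded as a C++ raw string; the uniform names modelview and projection are my own choice for the sketch, not a fixed API):

```cpp
// Minimal GLSL vertex shader performing steps 1-3; uniform names are illustrative.
const char* vertexShaderSource = R"(
    #version 120
    uniform mat4 modelview;   // object space -> view space (steps 1 + 2)
    uniform mat4 projection;  // view space   -> clip space (step 3)
    void main() {
        // Equivalent to the fixed-function transform of gl_Vertex.
        gl_Position = projection * modelview * gl_Vertex;
    }
)";
```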

The main thing to keep in mind is that after the modelview and projection transforms, every vertex with coordinates outside the [-1,1] range will be clipped away. So the [-1,1]-box determines your visible scene after these two transformations.

From your question I assume you want to use a 2D coordinate system with units of pixels for your vertex coordinates and transformations. In this case this is best done by using glOrtho(0.0, w, 0.0, h, -1.0, 1.0) with w and h being the dimensions of your viewport. This basically counters the viewport transformation and therefore transforms your vertices from the [0,w]x[0,h]x[-1,1]-box into the [-1,1]-box, which the viewport transformation then transforms back into the [0,w]x[0,h]x[0,1]-box.
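A minimal fixed-function setup sketch along these lines, assuming a current OpenGL context and a window of size w x h created elsewhere (e.g. with GLUT or GLFW):

```cpp
#include <GL/gl.h>

// Sketch: map one unit to one pixel, origin in the bottom-left corner.
void setupPixelCoordinates(int w, int h) {
    glViewport(0, 0, w, h);              // NDC -> [0,w]x[0,h] pixels
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, w, 0.0, h, -1.0, 1.0);  // pixels -> NDC, countering the viewport
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}

// Usage: a 100x50 pixel rectangle with its lower-left corner at (10, 20).
void drawRect() {
    glBegin(GL_QUADS);
    glVertex2f(10.0f,  20.0f);
    glVertex2f(110.0f, 20.0f);
    glVertex2f(110.0f, 70.0f);
    glVertex2f(10.0f,  70.0f);
    glEnd();
}
```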

These have been quite general explanations without mentioning that the actual transformations are done by matrix-vector multiplications and without talking about homogeneous coordinates, but they should have explained the essentials. The documentation of gluProject might also give you some insight, as it actually models the transformation pipeline for a single vertex. But in this documentation they actually forgot to mention the division by the w component (v" = v' / v'(3)) after the v' = P x M x v step.
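If you want to see those numbers for your own geometry, a small sketch using gluProject itself, assuming a current context whose matrices and viewport are already set up:

```cpp
#include <GL/gl.h>
#include <GL/glu.h>
#include <cstdio>

// Let GLU replay the whole pipeline for one object-space vertex.
void printWindowCoords(double objX, double objY, double objZ) {
    GLdouble model[16], proj[16];
    GLint viewport[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, model);   // current modelview matrix
    glGetDoublev(GL_PROJECTION_MATRIX, proj);   // current projection matrix
    glGetIntegerv(GL_VIEWPORT, viewport);       // current viewport rectangle

    GLdouble winX, winY, winZ;
    if (gluProject(objX, objY, objZ, model, proj, viewport,
                   &winX, &winY, &winZ) == GL_TRUE)
        std::printf("pixel coords: %.1f %.1f (depth %.2f)\n", winX, winY, winZ);
}
```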

EDIT: Don't forget to look at the first link in epatel's answer, which explains the transformation pipeline in a somewhat more practical and detailed way.
