What is the interval when rasterizing primitives?


Question


Usually in computer science when I have something from a to b the interval is [a, b). Is this true when rasterizing geometric primitives?

For example, when I have a line that starts at position (0, 0) and ends at position (0, 10), will the line contain point (0, 10) when using parallel projection with 1 GPU unit mapped to 1 pixel on screen?

EDIT: Same question in the same conditions, but for textures:

If I have a 2x2 texture mapped on a quad from (0, 0) to (2, 2) using a (0, 0) to (1, 1) mapping, will it be "pixel perfect" (one pixel from the texture on one pixel on screen), or will the texture be scaled? If the interval is [0, 2] the quad will be 3x3 and the texture has to be scaled...

LATER EDIT: This might help: http://msdn.microsoft.com/en-us/library/windows/desktop/bb219690%28v=vs.85%29.aspx

Answer

First of all this entirely depends on the particular rasterization framework used (e.g. OpenGL, Direct3D, GDI, ...), so I'll base this answer on the question being tagged opengl.

This is not such an easy question, because usually the actual window coordinates of a drawn primitive (or rather of a fragment thereof) are not integral but floating-point or fixed-point coordinates, resulting from a bunch of (possibly inexact) transformations, not all of which can be customized to identity using shaders (especially the fixed-function viewport transformation from normalized device coordinates to window coordinates). So even if you configure your transformation pipeline to specify the vertex coordinates directly in window space, don't expect the resulting fragments' window coordinates to be perfect integers in all cases. Take a look at this question and its answers for some more insight into OpenGL's transformation pipeline, if needed.

Then it depends on the primitive type. For a polygon (triangle/quad), the rasterizer checks for each fragment if the fragment's center lies within the polygon boundaries as defined by the polygon vertices' window coordinates. So if we have a rectangle (well, two triangles, but let's just treat it as a rectangle) that spans from window coordinates (0,0) to (2,2), it would cover a 2x2 region, because only the fragments (0,0) (with center coordinate (0.5,0.5)) and (1,1) (with center coordinate (1.5,1.5)), and combinations thereof, lie within the rectangle.
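The center-based coverage rule described above can be sketched in a few lines. This is only an illustration of the principle (a real rasterizer uses edge functions and tie-breaking fill rules for centers that land exactly on an edge, not a point-in-box test), and `covered_pixels` is a made-up helper name:

```python
# Illustrative sketch of the center-based coverage rule for an axis-aligned
# rectangle: a pixel (i, j) is covered iff its center (i+0.5, j+0.5) lies
# strictly inside the rectangle. Real rasterizers add fill rules for ties.
def covered_pixels(x0, y0, x1, y1):
    """Pixels whose center (i+0.5, j+0.5) lies inside the given rectangle."""
    pixels = []
    for j in range(int(y1) + 1):
        for i in range(int(x1) + 1):
            cx, cy = i + 0.5, j + 0.5
            if x0 < cx < x1 and y0 < cy < y1:
                pixels.append((i, j))
    return pixels

print(covered_pixels(0, 0, 2, 2))  # [(0, 0), (1, 0), (0, 1), (1, 1)] -> a 2x2 region
```

Note that the rectangle reaching up to coordinate 2 covers only pixels 0 and 1 in each axis, which is exactly the half-open [0, 2) behaviour asked about in the question.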

For lines it is a bit more complicated, using the so-called "diamond exit rule". I won't discuss this here in detail, but for horizontal or vertical lines this essentially means that the last pixel is also exclusive. But in fact this also means that integer window coordinates are the worst to use for lines, because due to rounding issues in the transformation pipeline it is hard to decide to which pixel such a fragment belongs, since the "decision threshold" for lines lies at the integer fragment boundaries, rather than at the fragment center as it does for polygons.
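For an axis-aligned line with half-pixel endpoints, the "last pixel exclusive" behaviour can be sketched like this (a deliberate simplification of the diamond-exit rule for the vertical case only; `vertical_line_pixels` is a hypothetical helper, not a GL function):

```python
# Simplified model of the diamond-exit rule for a vertical line: the line
# produces a pixel for every diamond it exits, so the pixel containing the
# endpoint is exclusive. Endpoints at half-pixel positions (i+0.5) keep the
# decision safely away from the integer pixel borders.
def vertical_line_pixels(x, y_start, y_end):
    """Pixels hit by a vertical line; the end pixel is exclusive."""
    col = int(x)          # x assumed to sit at a half-pixel position
    first = int(y_start)  # first diamond the line exits
    last = int(y_end)     # exclusive: the line ends inside this diamond
    return [(col, y) for y in range(first, last)]

print(vertical_line_pixels(0.5, 0.5, 10.5))  # 10 pixels: (0, 0) through (0, 9)
```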

But when considering texturing, there comes another problem into play, namely the interpolation. While a quad from (0,0) to (2,2) would cover a 2x2 pixel region, the values of the varyings (the vertex attributes interpolated across the primitive, like the color or texture coordinates) are interpolated from the actual window coordinates. Thus the pixel (0,0) (corresponding to the fragment with center coordinate (0.5,0.5)) won't have the exact values of the lower left quad vertex, but the values interpolated into the interior by half the size of a fragment (and likewise for the other corners).

What also matters then is how OpenGL filters textures. When using linear filtering, the exact texel color is returned for texture coordinates at texel centers (i.e. (i+0.5)/size), while integer divisors of the texture size (i.e. i/size) result in a half-way blend between neighbouring texel colors. When using nearest-neighbour sampling instead (which is advisable when trying to do something pixel- or texel-accurate), the floating-point texture coordinate is rounded down (floor operation), and thus the "decision threshold" that decides if the color comes from one texel or its neighbour lies at the texel borders (thus at integer divisors of the texture size as texture coordinates). So sampling at texel centers is advisable both with linear filtering (which is in turn not advisable when working pixel-exact) and with nearest filtering, since that reduces the chance of "flipping" from one texel to another due to inexactnesses and rounding errors in the texture coordinate interpolation.
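The nearest-neighbour threshold can be modelled as follows (an illustrative sketch, not the exact wording of the GL specification; `nearest_texel` is a made-up name):

```python
# Model of GL_NEAREST lookup: the texel index is floor(u * size), so the
# decision threshold between two texels sits at integer divisors of the
# texture size (u = i/size), while texel centers (u = (i+0.5)/size) are
# maximally far from any threshold.
import math

def nearest_texel(u, size):
    """Texel index selected by nearest filtering for a coordinate u in [0, 1]."""
    return min(int(math.floor(u * size)), size - 1)  # clamp the u == 1.0 edge

size = 2
print(nearest_texel(0.25, size))   # texel 0: a texel center, safe
print(nearest_texel(0.75, size))   # texel 1: a texel center, safe
print(nearest_texel(0.499, size))  # still texel 0: small errors don't flip it
```

Sampling at u = 0.5 (an integer divisor, 1/size) would sit exactly on the threshold, which is why tiny interpolation errors there can flip the result between texels.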


So let's look at your particular example. We have a quad with coordinates

(0,2) - (2,2)
  |       |
(0,0) - (2,0)

and texture coordinates

(0,1) - (1,1)
  |       |
(0,0) - (1,0)

So if those positions are already given in window/viewport space, this results in the fragments with centers

(0.5,1.5) (1.5,1.5)
(0.5,0.5) (1.5,0.5)

covered, representing the 2x2-[0,1]-pixel square. The texture coordinates of those fragments after interpolation would be

(0.25,0.75) (0.75,0.75)
(0.25,0.25) (0.75,0.25)

And for a 2x2 texture those indeed are at the texel centers. So everything plays out nicely. There could be rounding and precision errors resulting in e.g. coordinates of 1.999 or texCoords of 0.255, but this is still not a problem, since we're far from the points where we would "snap over" to the neighbouring pixels or texels (assuming we use nearest filtering, but even with linear filtering you wouldn't usually notice a difference from the exact texel color).
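The interpolated values above can be reproduced with a small sketch, assuming plain screen-space bilinear interpolation over the axis-aligned quad (which is what the fixed-function pipeline amounts to in this flat, unprojected case; `interp` is a made-up helper):

```python
# Bilinear interpolation of the quad's corner texture coordinates at each
# covered fragment center, reproducing the worked example above.
def interp(p, q00, q10, q01, q11, w, h):
    """Bilinearly interpolate corner values (u, v) across a w x h quad."""
    fx, fy = p[0] / w, p[1] / h
    u = (1-fy) * ((1-fx)*q00[0] + fx*q10[0]) + fy * ((1-fx)*q01[0] + fx*q11[0])
    v = (1-fy) * ((1-fx)*q00[1] + fx*q10[1]) + fy * ((1-fx)*q01[1] + fx*q11[1])
    return (u, v)

corners = ((0, 0), (1, 0), (0, 1), (1, 1))  # texcoords at the quad corners
for center in [(0.5, 0.5), (1.5, 0.5), (0.5, 1.5), (1.5, 1.5)]:
    print(center, "->", interp(center, *corners, 2, 2))
# The four results are (0.25,0.25), (0.75,0.25), (0.25,0.75), (0.75,0.75):
# exactly the texel centers of a 2x2 texture.
```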

Though for the line example it is hard to say, due to precision, rounding and implementation issues, whether it will go from (0,0) to (0,9), or from (-1,0) to (-1,9) (thus clipped away), or even come out skewed. You should rather use (0.5,0.5) and (0.5,10.5), which will definitely result in a line from (0,0) to (0,9).


So to sum up, OpenGL is not really made for completely pixel exact operations, but with some care it can be achieved. But to achieve the best results, you should first configure your transformations to specify the vertex positions directly in window coordinates, e.g.

glViewport(0, 0, width, height);
glOrtho(0, width, 0, height, -1, 1); //or something similar when using shaders

Then for polygons use integer positions and integer divisors of the texture size as texture coordinates (and use GL_NEAREST filtering). But for lines use half-pixel positions (i.e. i+0.5) and texel centers as texture coordinates (i.e. (i+0.5)/size). This should give you pixel- and texel-exact rasterization of your primitives and the half-open intervals described in your question. But always keep in mind that in this case the corner pixels of a rasterized polygon don't match its vertex corners (they are shifted half a pixel inward). For texturing this plays out nicely with the filtering rules, but for other attributes, like colors, this means the lower left pixel of a rectangle won't have exactly (what is "exactly" in this limited-precision context anyway?) the color of the lower left vertex. For lines, though, they will indeed match (as far as possible).
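Putting the polygon recipe together, a quick sketch (not actual GL code; `pixel_to_texel` is a hypothetical helper combining the interpolation and nearest-filtering models from above) shows that integer quad coordinates plus 0..1 texture coordinates map each covered pixel center to exactly one texel under nearest filtering:

```python
# With a quad from (0,0) to (quad_size, quad_size) carrying texcoords 0..1
# over a tex_size x tex_size texture, each pixel center (px+0.5, py+0.5)
# interpolates to a texel center, so floor-based nearest filtering maps
# every screen pixel to exactly one texel.
import math

def pixel_to_texel(px, py, quad_size, tex_size):
    u = (px + 0.5) / quad_size  # interpolated texcoord at the pixel center
    v = (py + 0.5) / quad_size
    return (int(math.floor(u * tex_size)), int(math.floor(v * tex_size)))

size = 2
mapping = {(x, y): pixel_to_texel(x, y, size, size)
           for y in range(size) for x in range(size)}
print(mapping)  # each pixel maps to the texel with the same index
```

When quad_size equals tex_size, the mapping is the identity: the "pixel perfect" case from the question.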
