Why do we need texture filtering in OpenGL?


Question


When mapping a texture to geometry, we can choose between the GL_NEAREST and GL_LINEAR filtering methods.

In the examples, a texture coordinate is shown surrounded by the nearest texels.

It's explained how each algorithm chooses what color the fragment will be, for example by linearly interpolating all the neighboring texels based on their distance from the texture coordinate.
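As a sketch of the difference between the two strategies, here is a one-dimensional toy version with made-up texel values (these are plain Python functions for illustration, not OpenGL API calls):

```python
import math

# Two neighboring texel "colors" in one dimension (toy values).
texels = [0.0, 1.0]

def nearest(x):
    """GL_NEAREST-style: return the color of the single closest texel."""
    return texels[int(x + 0.5)]  # snap to the nearest integer index

def linear(x):
    """GL_LINEAR-style: blend the two neighbors by distance from x."""
    x0 = math.floor(x)
    f = x - x0                   # fractional distance past the left texel
    return texels[x0] * (1 - f) + texels[x0 + 1] * f

print(nearest(0.3))  # 0.0 -- snaps to texel 0
print(linear(0.3))   # 0.3 -- a blended value no texel actually holds
```

Nearest snapping picks one stored value; linear blending invents a new value between them, which is exactly the distinction the question is about.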

Isn't each texture coordinate essentially the fragment position, which is mapped to a pixel on screen? So how can these coordinates be smaller than the texels, which are essentially pixels and the same size as fragments?

Solution

A (2D) texture can be looked at as a function t(u, v), whose output is a "color" value. This is a pure function, so it will return the same value for the same u and v values. The value comes from a lookup table stored in memory, indexed by u and v, rather than through some kind of computation.

Texture "mapping" is the process whereby you associate a particular location on a surface with a particular location in the space of a texture. That is, you "map" a surface location to a location in a texture. As such, the inputs to the texture function t are often called "texture coordinates". Some surface locations may map to the same position on a texture, and some texture positions may not have surface locations mapped to them. It all depends on the mapping

An actual texture image is not a smooth function; it is a discrete function. It has a value at the texel location (0, 0), and another value at (1, 0), but the value of the texture at (0.5, 0) is undefined. In image space, u and v are integers.
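As a minimal illustration of that discreteness, a texture can be modeled as a plain lookup table indexed only by integers (the values below are arbitrary placeholders):

```python
# A toy 2x2 "texture": a lookup table indexed by integer texel
# coordinates (u, v); the stored values are made up for the example.
tex = [
    [0.2, 0.8],   # row v = 0
    [0.5, 0.1],   # row v = 1
]

def t(u, v):
    """The discrete texture function t(u, v), defined only at integers."""
    return tex[v][u]

print(t(0, 0))  # 0.2
print(t(1, 0))  # 0.8
# t(0.5, 0) is meaningless here: list indexing rejects non-integer
# indices, mirroring how the image itself has no value "between" texels.
```
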

Your picture of a zoomed-in part of the texture is incorrect. There are no values "between" the texels, because "between the texels" is not possible. There is no number between 0 and 1 on an integer number line.

However, any useful mapping from a surface to the texture function is going to need to happen in a continuous space, not a discrete space. After all, it's unlikely that every fragment will land exactly on a location that maps to an exact integer within a texture. Moreover, especially in shader-based rendering, a shader can just invent a mapping arbitrarily. The "mapping" could be based on light directions (projective texturing), the elevation of a fragment relative to some surface, or anything a user might want. To a fragment shader, a texture is just a function t(u, v) which can be evaluated to produce a value.

So we really want that function to be in a continuous space.

The purpose of filtering is to create a continuous function t by inventing values in-between the discrete texels. This allows you to declare that u and v are floating-point values, rather than integers. We also get to normalize the texture coordinates, so that they're on the range [0, 1] rather than being based on the texture's size.
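A sketch of how such a continuous t could be built from the discrete texels by bilinear interpolation (one plausible reading of GL_LINEAR; edge clamping and OpenGL's half-texel center offset are ignored for brevity, and the name t_linear and the texel values are illustrative, not OpenGL API):

```python
import math

# A toy 2x2 grayscale "texture": a black column next to a white one.
tex = [
    [0.0, 1.0],
    [0.0, 1.0],
]

def t_linear(u, v):
    """A continuous t(u, v) invented from discrete texels by blending
    the four surrounding values, weighted by distance. Coordinates are
    in texel space; clamping at the edges is omitted for brevity."""
    u0, v0 = math.floor(u), math.floor(v)
    fu, fv = u - u0, v - v0
    top = tex[v0][u0] * (1 - fu) + tex[v0][u0 + 1] * fu
    bottom = tex[v0 + 1][u0] * (1 - fu) + tex[v0 + 1][u0 + 1] * fu
    return top * (1 - fv) + bottom * fv

# Halfway between the black and white columns, the filter produces
# mid-gray: a value stored in no texel, existing only in the invented
# continuous function.
print(t_linear(0.5, 0.0))  # 0.5
```

This is the sense in which filtering "invents" values: the 0.5 returned above appears nowhere in the lookup table, yet the function is now well-defined for any floating-point (u, v) inside the texture.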
