The internalformat of a Texture


Question


Look at the following OpenGL function:

void glTexImage2D(GLenum    target,
                  GLint     level,
                  GLint     internalFormat,
                  GLsizei   width,
                  GLsizei   height,
                  GLint     border,
                  GLenum    format,
                  GLenum    type,
                  const GLvoid * data);

I know that the parameters format and type describe the format and type of the image data, but I don't understand the parameter internalFormat. How should I set its value in my application?

For example, I create a texture like this:

glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE8, size, size, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, buffer);

When I access the texture in my GLSL shader, it seems that the value I get is in [0, 1]. Why? Shouldn't it be in [0, 255]?

Part of my shader code is:

vec = EntryPoint + delta_dir * texture(noiseTex,EntryPoint.xy * 32).x;

Part of my C++ code:

for (int i = 0; i < temp; ++i)
    {
        buffer[i] = 255.0 * rand() / (float)RAND_MAX;
    }
    glGenTextures(1, &noiseTex);
    glActiveTexture(GL_TEXTURE0 + activeTexUnit);
    glBindTexture(GL_TEXTURE_2D, noiseTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE8, size, size,
                 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, buffer);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

Solution

The format and type parameters describe the data you are passing to OpenGL as part of a pixel transfer operation. The internalformat describes the format of the texture itself. You're telling OpenGL that you're giving it data that looks like X, and OpenGL will store it in a texture where the data is Y. The internalformat is "Y".

The GL_LUMINANCE8 internal format represents a normalized unsigned integer format. This means that the data is conceptually floating-point, but stored in a normalized integer form as a means of compression.

For that matter, the format of GL_LUMINANCE says that you're passing either floating-point data or normalized integer data (the type says that it's normalized integer data). Of course, since there's no GL_LUMINANCE_INTEGER (which is how you say that you're passing integer data, to be used with integer internal formats), you can't really use luminance data like this.

Use GL_RED_INTEGER for the format and GL_R8UI for the internal format if you really want 8-bit unsigned integers in your texture. Note that integer texture support requires OpenGL 3.x-class hardware.

That being said, you cannot use sampler2D with an integer texture. If you are using a texture that uses an unsigned integer texture format, you must use usampler2D.
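A minimal GLSL sketch of what sampling an integer texture looks like (the uniform name is illustrative, and this assumes the texture was created with GL_R8UI as described above); note that usampler2D returns unsigned integers rather than normalized floats:

```glsl
#version 330 core

uniform usampler2D noiseTexInt;  // bound to a GL_R8UI texture

out vec4 fragColor;

void main() {
    // texelFetch on a usampler2D returns a uvec4; .x is in [0, 255]
    uint raw = texelFetch(noiseTexInt, ivec2(gl_FragCoord.xy) % 32, 0).x;
    // Convert manually if a [0, 1] value is wanted
    fragColor = vec4(float(raw) / 255.0);
}
```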
