Purpose of binding points in OpenGL?


Question


I don't understand what the purpose is of binding points (such as GL_ARRAY_BUFFER) in OpenGL. To my understanding glGenBuffers() creates a sort of pointer to a vertex buffer object located somewhere within GPU memory.

So:

glGenBuffers(1, &bufferID)

means I now have a handle, bufferID, to 1 vertex object on the graphics card. Now I know the next step would be to bind bufferID to a binding point

glBindBuffer(GL_ARRAY_BUFFER, bufferID)

so that I can use that binding point to send data down using the glBufferData() function like so:

glBufferData(GL_ARRAY_BUFFER, sizeof(data), data, GL_STATIC_DRAW)
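
Put together, my understanding of the full sequence is something like this (assuming data is a plain array of vertex data declared in scope):

GLuint bufferID;
glGenBuffers(1, &bufferID);               // get a handle to a new buffer object
glBindBuffer(GL_ARRAY_BUFFER, bufferID);  // attach it to the GL_ARRAY_BUFFER binding point
glBufferData(GL_ARRAY_BUFFER, sizeof(data), data, GL_STATIC_DRAW);  // upload through the binding point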

But why couldn't I just use the bufferID to specify where I want to send the data instead? Something like:

glBufferData(bufferID, sizeof(data), data, GL_STATIC_DRAW)

Then, when calling a draw function, I would just pass in the ID of whichever VBO I want it to draw from. Something like:

glDrawArrays(bufferID, GL_TRIANGLES, 0, 3)

Why do we need the extra step of indirection with glBindBuffer?

Solution

OpenGL uses object binding points for two things: to designate an object to be used as part of a rendering process, and to be able to modify the object.

Why it uses them for the former is simple: OpenGL requires a lot of objects to be able to render.

Consider your overly simplistic example:

glDrawArrays(bufferID, GL_TRIANGLES, 0, 3)

That API doesn't let me have separate vertex attributes come from separate buffers. Sure, you might then propose glDrawArrays(GLint count, GLuint *object_array, ...). But how do you connect a particular buffer object to a particular vertex attribute? Or how do you have 2 attributes come from buffer 0 and a third attribute from buffer 1? Those are things I can do right now with the current API. But your proposed one can't handle it.
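
For reference, here is how that works with the actual API (a sketch; buffer0 holds interleaved positions and normals, buffer1 holds colors, all names are placeholders, and a VAO is assumed to be bound):

glBindBuffer(GL_ARRAY_BUFFER, buffer0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void*)0);                    // positions
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void*)(3 * sizeof(float)));  // normals
glBindBuffer(GL_ARRAY_BUFFER, buffer1);
glVertexAttribPointer(2, 4, GL_FLOAT, GL_FALSE, 0, (void*)0);                                    // colors
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);

Each glVertexAttribPointer call latches onto whichever buffer is bound to GL_ARRAY_BUFFER at that moment, which is exactly the per-attribute association that a single-ID draw call has no way to express.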

And even that is putting aside the many other objects you need to render: program/pipeline objects, texture objects, UBOs, SSBOs, transform feedback objects, query objects, etc. Having all of the needed objects specified in a single command would be fundamentally unworkable (and that leaves aside the performance costs).

And every time the API would need to add a new kind of object, you would have to add new variations of the glDraw* functions. And right now, there are over a dozen such functions. Your way would have given us hundreds.

So instead, OpenGL defines ways for you to say "the next time I render, use this object in this way for that process." That's what binding an object for use means.
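
Concretely, a draw call gathers its inputs from everything bound at that moment (a sketch; program, vao, and texture are placeholder object names):

glUseProgram(program);                  // which shaders to run
glBindVertexArray(vao);                 // where vertex attributes come from
glActiveTexture(GL_TEXTURE0);           // subsequent texture binds affect unit 0
glBindTexture(GL_TEXTURE_2D, texture);  // what the shader samples from unit 0
glDrawArrays(GL_TRIANGLES, 0, 3);       // consumes all of the bindings above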


But why couldn't I just use the bufferID to specify where I want to send the data instead?

This is about binding an object for the purpose of modifying it, not about saying that it will be used for rendering. That is... a different matter.

The obvious answer is, "You can't do it because the OpenGL API (until 4.5) doesn't have a function to let you do it." But I rather suspect the question is really why OpenGL doesn't have such APIs (until 4.5, where glNamedBufferStorage and such exist).
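
For comparison, the 4.5 version is nearly the call the question asks for (a sketch; note that the named functions pair with glCreateBuffers, which creates a complete object up front, whereas glGenBuffers only reserves a name):

GLuint bufferID;
glCreateBuffers(1, &bufferID);
glNamedBufferStorage(bufferID, sizeof(data), data, 0);  // upload by ID; no binding point involved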

Indeed, the fact that 4.5 does have such functions proves that there is no technical reason for pre-4.5 OpenGL's bind-object-to-modify API. It really was a "decision" that came about by the evolution of the OpenGL API from 1.0, thanks to following the path of least resistance. Repeatedly.

Indeed, just about every bad decision that OpenGL has made can be traced back to taking the path of least resistance in the API. But I digress.

In OpenGL 1.0, there was only one kind of object: display list objects. That means that even textures were not stored in objects. So every time you switched textures, you had to re-specify the entire texture with glTexImage*D. That means re-uploading it. Now, you could (and people did) wrap each texture's creation in a display list, which allowed you to switch textures by executing that display list. And hopefully the driver would realize you were doing that and instead allocate video memory and so forth appropriately.

So when 1.1 came around, the OpenGL ARB realized how mind-bendingly silly that was. So they created texture objects, which encapsulate both the memory storage of a texture and the various state within. When you wanted to use the texture, you bound it. But there was a snag. Namely, how to change it.

See, 1.0 had a bunch of already existing functions like glTexImage*D, glTexParameter and the like. These modify the state of the texture. Now, the ARB could have added new functions that do the same thing but take texture objects as parameters.

But that would mean dividing all OpenGL users into 2 camps: those who used texture objects and those who did not. It meant that, if you wanted to use texture objects, you had to rewrite all of your existing code that modified textures. If you had some function that made a bunch of glTexParameter calls on the current texture, you would have to change that function to call the new texture object function. But you would also have to change the function of yours that calls it so that it would take, as a parameter, the texture object that it operates on.

And if that function didn't belong to you (because it was part of a library you were using), then you couldn't even do that.

So the ARB decided to keep those old functions around and simply have them behave differently based on whether a texture was bound to the context or not. If one was bound, then glTexParameter/etc would modify the bound texture, rather than the context's normal texture.
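
So, post-1.1, modifying a texture looks like this (a sketch; textureID, width, height, and pixels are placeholders):

glBindTexture(GL_TEXTURE_2D, textureID);                           // select the texture the next calls will modify
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);  // modifies textureID
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);                   // so does this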

This one decision established the general paradigm shared by almost all OpenGL objects.

ARB_vertex_buffer_object used this paradigm for the same reason. Notice how the various gl*Pointer functions (glVertexAttribPointer and the like) work in relation to buffers. You have to bind a buffer to GL_ARRAY_BUFFER, then call one of those functions to set up an attribute array. When a buffer is bound to that slot, the function will pick that up and treat the pointer as an offset into the buffer that was bound at the time the *Pointer function was called.
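
In code (a sketch; the point is the last argument of glVertexAttribPointer):

glBindBuffer(GL_ARRAY_BUFFER, bufferID);
// With a buffer bound, the final "pointer" parameter is read as a byte offset
// into bufferID, not as a pointer into client memory.
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glBindBuffer(GL_ARRAY_BUFFER, 0);  // the association was already captured; unbinding changes nothing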

Why? For the same reason: ease of compatibility (or to promote laziness, depending on how you want to see it). ATI_vertex_array_object had to create new analogs to the gl*Pointer functions. Whereas ARB_vertex_buffer_object just piggybacked off of the existing entrypoints.

Users didn't have to change from using glVertexPointer to glVertexBufferOffset or some other function. All they had to do was bind a buffer before calling a function that set up vertex information (and of course change the pointers to byte offsets).

It also meant that they didn't have to add a bunch of glDrawElementsWithBuffer-type functions for rendering with indices that come from buffer objects.

So this wasn't a bad idea in the short term. But as with most short-term decisions, it became less reasonable as time went on.

Of course, if you have access to GL 4.5/ARB_direct_state_access, you can do things the way they ought to have been done originally.
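
A sketch of that under GL 4.5 (placeholder names again; every function names the object it operates on):

GLuint buffer, vao;
glCreateBuffers(1, &buffer);
glNamedBufferStorage(buffer, sizeof(data), data, 0);

glCreateVertexArrays(1, &vao);
glVertexArrayVertexBuffer(vao, 0, buffer, 0, 3 * sizeof(float));  // binding index 0 sources from buffer
glVertexArrayAttribFormat(vao, 0, 3, GL_FLOAT, GL_FALSE, 0);      // layout of attribute 0
glVertexArrayAttribBinding(vao, 0, 0);                            // attribute 0 uses binding index 0
glEnableVertexArrayAttrib(vao, 0);

Binding for use still exists even here: you still bind the VAO and program when you actually draw. What 4.5 removed is the need to bind just to modify.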
