Using glMultiDrawElements in 64bit OS
Problem description
I have recently migrated from a 32bit environment to a 64bit one, and it has gone smoothly apart from one problem: glMultiDrawElements uses some arrays that do not work without some tweaking under a 64bit OS.
glMultiDrawElements( GL_LINE_LOOP, fCount_, GL_UNSIGNED_INT,
                     reinterpret_cast< const GLvoid** >( iOffset_ ),
                     mesh().faces().size() );
I am using VBOs for both the vertices and vertex indices. fCount_ and iOffset_ are arrays of GLsizei. Since a buffer is bound to GL_ELEMENT_ARRAY_BUFFER, iOffset_'s elements are used as byte offsets from the beginning of the VBO. This works perfectly under a 32bit OS.
If I change glMultiDrawElements to glDrawElements and put it into a loop, it works fine on both platforms:
int offset = 0;
for ( Sy_meshData::Faces::ConstIterator i = mesh().faces().constBegin();
      i != mesh().faces().constEnd(); ++i ) {
    glDrawElements( GL_LINE_LOOP, i->vertexIndices.size(), GL_UNSIGNED_INT,
                    reinterpret_cast< const GLvoid* >( sizeof( GLsizei ) * offset ) );
    offset += i->vertexIndices.size();
}
I think what I am seeing is OpenGL reading 64bit chunks of iOffset_, leading to massive numbers, but glMultiDrawElements does not support any type wider than 32bit (GL_UNSIGNED_INT), so I'm not sure how to correct it.
Has anyone else had this situation and solved it? Or am I handling this entirely wrong and was just lucky on a 32bit OS?
Swapping out my existing code for:
typedef void ( *testPtr )( GLenum mode, const GLsizei* count, GLenum type,
                           const GLuint* indices, GLsizei primcount );
testPtr ptr = (testPtr)glMultiDrawElements;
ptr( GL_LINE_LOOP, fCount_, GL_UNSIGNED_INT, iOffset_, mesh().faces().size() );
gives exactly the same result.
Recommended answer
The simple reason is that glMultiDrawElements doesn't expect an array of integer offsets (32bit on your platform), but an array of pointers (64bit on your platform), interpreted as buffer offsets.
But you are just casting the array of (or pointer to) integers to an array of (or pointer to) pointers, which won't work, as the function now just reinterprets your n consecutive 32bit values as n consecutive 64bit values. Of course it works for glDrawElements, because there you are casting a single integer into a single pointer, which actually converts your 32bit value into a 64bit value.
What you need to do is not cast your pointer/array, but each individual value in this offset array:
// Widen each 32bit offset into a pointer-sized value (needs <cstdint>;
// an integer cannot be static_cast directly to void*).
std::vector<void*> pointers( mesh().faces().size() );
for ( size_t i = 0; i < pointers.size(); ++i )
    pointers[i] = reinterpret_cast<void*>( static_cast<std::uintptr_t>( iOffset_[i] ) );
glMultiDrawElements( GL_LINE_LOOP, fCount_, GL_UNSIGNED_INT,
                     &pointers.front(), mesh().faces().size() );
Or better, just store your offsets as pointers instead of integers right from the start.