Why does OpenGL's glDrawArrays() fail with GL_INVALID_OPERATION under Core Profile 3.2, but not 3.3 or 4.2?


Question


      I have OpenGL rendering code calling glDrawArrays that works flawlessly when the OpenGL context is (automatically / implicitly obtained) 4.2 but fails consistently (GL_INVALID_OPERATION) with an explicitly requested OpenGL core context 3.2. (Shaders are always set to #version 150 in both cases but that's beside the point here I suspect.)

      According to the spec, there are only two cases in which glDrawArrays() fails with GL_INVALID_OPERATION:

      • "if a non-zero buffer object name is bound to an enabled array and the buffer object's data store is currently mapped" -- I'm not doing any buffer mapping at this point

      • "if a geometry shader is active and mode​ is incompatible with [...]" -- nope, no geometry shaders as of now.

      Furthermore:

      1. I have verified & double-checked that it's only the glDrawArrays() calls failing. Also double-checked that all arguments passed to glDrawArrays() are identical under both GL versions, buffer bindings too.

      2. This happens across 3 different nvidia GPUs and 2 different OSes (Win7 and OSX, both 64-bit -- of course, in OSX we have only the 3.2 context, no 4.2 anyway).

      3. It does not happen with an integrated "Intel HD" GPU but for that one, I only get an automatic implicit 3.3 context (trying to explicitly force a 3.2 core profile with this GPU via GLFW here fails the window creation but that's an entirely different issue...)
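
      (For reference, the explicit 3.2 core context is requested with window hints along these lines. The sketch below uses hint names from the go-gl/glfw v3 binding, which is an assumption -- the GLFW 2.x bindings of the time spell these differently:)

      // Sketch: explicitly requesting a 3.2 core profile before creating
      // the window. A forward-compatible context is required on OSX.
      func createCoreWindow() (*glfw.Window, error) {
          glfw.WindowHint(glfw.ContextVersionMajor, 3)
          glfw.WindowHint(glfw.ContextVersionMinor, 2)
          glfw.WindowHint(glfw.OpenGLProfile, glfw.OpenGLCoreProfile)
          glfw.WindowHint(glfw.OpenGLForwardCompatible, glfw.True)
          return glfw.CreateWindow(1024, 768, "scene", nil, nil)
      }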

      For what it's worth, here's the relevant routine excerpted from the render loop, in Golang:

      func (me *TMesh) render() {
          // curMesh, curTechnique and curProg are package-level state set
          // elsewhere in the render loop.
          curMesh = me
          curTechnique.OnRenderMesh()
          gl.BindBuffer(gl.ARRAY_BUFFER, me.glVertBuf)
          if me.glElemBuf > 0 {
              // Indexed path: bind the element buffer and draw by indices.
              gl.BindBuffer(gl.ELEMENT_ARRAY_BUFFER, me.glElemBuf)
              gl.VertexAttribPointer(curProg.AttrLocs["aPos"], 3, gl.FLOAT, gl.FALSE, 0, gl.Pointer(nil))
              gl.DrawElements(me.glMode, me.glNumIndices, gl.UNSIGNED_INT, gl.Pointer(nil))
              gl.BindBuffer(gl.ELEMENT_ARRAY_BUFFER, 0)
          } else {
              // Non-indexed path: identical attribute setup, then DrawArrays.
              gl.VertexAttribPointer(curProg.AttrLocs["aPos"], 3, gl.FLOAT, gl.FALSE, 0, gl.Pointer(nil))
              /* BOOM! GL_INVALID_OPERATION under the 3.2 core profile */
              gl.DrawArrays(me.glMode, 0, me.glNumVerts)
          }
          gl.BindBuffer(gl.ARRAY_BUFFER, 0)
      }
      

      So of course this is part of a bigger render loop, though the whole *TMesh construction for now amounts to just two instances, one a simple cube and the other a simple pyramid. What matters is that the entire drawing loop works flawlessly, with no errors reported when GL is queried for them (using the helper sketched below), under both 3.3 and 4.2 -- yet on 3 different nvidia GPUs with an explicit 3.2 core profile it fails with an error code that, per the spec, arises in only two specific situations, neither of which applies here as far as I can tell.
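
      (checkGLError is my name for that query helper; it is not part of the original code, and assumes the binding exposes plain GetError:)

      // checkGLError drains all pending GL errors rather than reading just
      // one: GL keeps an error flag per category, so a single GetError call
      // can miss earlier errors. Assumes a standard "log" import.
      func checkGLError(where string) {
          for e := gl.GetError(); e != gl.NO_ERROR; e = gl.GetError() {
              log.Printf("GL error 0x%04X after %s", e, where)
          }
      }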

      What could be wrong here? Have you ever run into this? Any ideas what I have been missing?

      Solution

      It's not just DrawArrays -- I was mistaken there. Somehow the way I call glVertexAttribPointer is the problem, in any strict core profile, whether 3.2 or 4.2; I will investigate further. In a 4.2 non-strict context, no problem.
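
      For anyone hitting the same wall: the classic cause of exactly this symptom is a missing vertex array object. A strict core profile provides no default VAO, so glVertexAttribPointer and the subsequent draw calls raise GL_INVALID_OPERATION until one is generated and bound, while non-strict and compatibility contexts still supply the implicit VAO 0 -- which would explain why 3.3 and 4.2 worked here. A minimal sketch of that fix, run once at init time (call names assume the binding wraps glGenVertexArrays and glBindVertexArray; the gl.Uint type is likewise an assumption):

      // Sketch: create and bind a VAO once during setup. Core profiles have
      // no default vertex array object, so attribute-pointer and draw calls
      // are invalid without one bound.
      var vao gl.Uint

      func setupVertexArray() {
          gl.GenVertexArrays(1, &vao)
          gl.BindVertexArray(vao)
      }

      With a VAO bound, the unchanged glVertexAttribPointer / glDrawArrays sequence above becomes valid again under a 3.2 core profile.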
