Copying a single layer of a 2D Texture Array from GPU to CPU

Question

I'm using a 2D texture array to store some data. As I often want to bind single layers of this 2D texture array, I create individual GL_TEXTURE_2D texture views for each layer:

for (int l = 0; l < m_layers; l++)
{
    // Create a GL_TEXTURE_2D view onto layer l of the array texture
    QOpenGLTexture * view_texture = m_texture.createTextureView(QOpenGLTexture::Target::Target2D,
                                                                m_texture_format,
                                                                0, 0,   // min/max mipmap level
                                                                l, l);  // min/max layer
    assert(view_texture != nullptr);  // check before dereferencing the view
    view_texture->setMinMagFilters(QOpenGLTexture::Filter::Linear, QOpenGLTexture::Filter::Linear);
    view_texture->setWrapMode(QOpenGLTexture::WrapMode::MirroredRepeat);

    m_texture_views.push_back(view_texture);
}

These 2D TextureViews work fine. However, if I want to retrieve the 2D texture data from the GPU side using that texture view it doesn't work.

In other words, the following copies no data (but throws no GL errors):

glGetTexImage(GL_TEXTURE_2D, 0, m_pixel_format, m_pixel_type, (GLvoid*) m_raw_data[layer]);

However, retrieving the entire GL_TEXTURE_2D_ARRAY does work:

glGetTexImage(GL_TEXTURE_2D_ARRAY, 0, m_pixel_format, m_pixel_type, (GLvoid*) data );

There would obviously be a performance loss if I need to copy across all layers of the 2D texture array when only data for a single layer has been modified.
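As an aside, slicing one layer out of a full-array readback on the CPU is at least simple, even though the full transfer cost remains. A minimal sketch of a hypothetical helper, assuming tightly packed layers:

```cpp
#include <cassert>
#include <cstring>
#include <vector>

// Extract one layer from a full GL_TEXTURE_2D_ARRAY readback.
// Layers are laid out back to back, so layer `layer` starts at
// byte offset layer * width * height * bytesPerPixel.
std::vector<unsigned char> extractLayer(const std::vector<unsigned char> &allLayers,
                                        int width, int height,
                                        int bytesPerPixel, int layer)
{
    const size_t layerSize = static_cast<size_t>(width) * height * bytesPerPixel;
    std::vector<unsigned char> out(layerSize);
    std::memcpy(out.data(), allLayers.data() + layer * layerSize, layerSize);
    return out;
}
```

This only sidesteps the bookkeeping, not the GPU->CPU bandwidth of transferring every layer.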

Is there a way to copy GPU->CPU only a single layer of a GL_TEXTURE_2D_ARRAY? I know the opposite direction (i.e. CPU->GPU) is possible, so I would be surprised if there wasn't.

Answer

What version of GL are you working with?

You are probably not going to like this, but... GL 4.5 introduces glGetTextureSubImage (...) to do precisely what you want. That is a pretty hefty version requirement for something so simple; it is also available in extension form, but that extension is relatively new as well.

There is no special hardware requirement for this functionality, but it requires a very recent driver.
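For reference, the GL 4.5 call might look like this. This is a sketch, not code from the original answer; `tex`, `width`, `height`, and `layer` are assumed variables, and an RGBA8 texture is assumed. For a GL_TEXTURE_2D_ARRAY, the zoffset/depth pair selects the layer range:

```cpp
// GL 4.5 (or ARB_get_texture_sub_image): read back a single layer.
std::vector<GLubyte> pixels(width * height * 4);           // RGBA8 storage for one layer
glGetTextureSubImage(tex,                                  // texture name (DSA, no bind needed)
                     0,                                    // mip level
                     0, 0, layer,                          // xoffset, yoffset, zoffset = layer
                     width, height, 1,                     // width, height, depth = 1 layer
                     GL_RGBA, GL_UNSIGNED_BYTE,
                     static_cast<GLsizei>(pixels.size()),  // bufSize, for bounds checking
                     pixels.data());
```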

I would not despair just yet, though.

You can copy the entire texture array into a PBO and then read a sub-rectangle of that PBO back using the buffer object API (e.g. glGetBufferSubData (...)). That requires extra memory on the GPU-side, but will allow you to transfer a single slice of this 2D array.
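A sketch of that PBO route, under the same assumptions as above (RGBA8, tightly packed layers; `array_tex`, `numLayers`, and `m_raw_data` are assumed names):

```cpp
// Copy the whole array into a PBO on the GPU, then read back only one layer.
const GLsizeiptr layerSize = (GLsizeiptr)width * height * 4;   // RGBA8, tightly packed
const GLsizeiptr totalSize = layerSize * numLayers;

GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, totalSize, nullptr, GL_STREAM_READ);

// With a buffer bound to GL_PIXEL_PACK_BUFFER, the last argument of
// glGetTexImage is an offset into that buffer, not a client pointer,
// so this copy stays on the GPU.
glBindTexture(GL_TEXTURE_2D_ARRAY, array_tex);
glGetTexImage(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid*)0);

// Pull back only the slice we care about.
glGetBufferSubData(GL_PIXEL_PACK_BUFFER, layer * layerSize, layerSize,
                   m_raw_data[layer]);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
```

The GPU-to-GPU copy into the PBO is cheap compared with the transfer across the bus, which now moves only one layer's worth of data.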
