Writing and reading from the same texture for an iterative DE solver on OpenGL


Problem description


I am trying to write a fluid simulator that requires iteratively solving some differential equations (Lattice-Boltzmann Method). I want it to be a real-time graphical visualisation using OpenGL. I ran into a problem. I use a shader to perform the relevant calculations on the GPU. What I want is to pass the texture describing the state of the system at time t into the shader; the shader performs the calculation and returns the state of the system at time t+dt; I render the texture on a quad and then pass the texture back into the shader. However, I found that I cannot read from and write to the same texture at the same time. But I am sure I have seen implementations of such calculations on the GPU. How do they work around it? I think I saw a few discussions on different ways of working around the fact that OpenGL cannot read and write the same texture, but I could not quite understand them and adapt them to my case. To render to texture I use: glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, renderedTexture, 0);

Here is my rendering routine:

do{


    //count frames
    frame_counter++;


    // Render to our framebuffer
    glBindFramebuffer(GL_FRAMEBUFFER, FramebufferName);
    glViewport(0,0,windowWidth,windowHeight); // Render on the whole framebuffer, complete from the lower left corner to the upper right

    // Clear the screen
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Use our shader
    glUseProgram(programID);
    // Bind our texture in Texture Unit 0
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, renderTexture);


    glUniform1i(TextureID, 0);

    printf("Inv Width: %f", (float)1.0/windowWidth);
    //Pass inverse widths (put outside of the cycle in future)
    glUniform1f(invWidthID, (float)1.0/windowWidth);
    glUniform1f(invHeightID, (float)1.0/windowHeight);

    // 1st attribute buffer : vertices
    glEnableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, quad_vertexbuffer);
    glVertexAttribPointer(
                          0,                  // attribute 0. No particular reason for 0, but must match the layout in the shader.
                          3,                  // size
                          GL_FLOAT,           // type
                          GL_FALSE,           // normalized?
                          0,                  // stride
                          (void*)0            // array buffer offset
                          );

    // Draw the triangles !
    glDrawArrays(GL_TRIANGLES, 0, 6); // 2*3 indices starting at 0 -> 2 triangles

    glDisableVertexAttribArray(0);
    // Render to the screen
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    // Render on the whole framebuffer, complete from the lower left corner to the upper right
    glViewport(0,0,windowWidth,windowHeight);

    // Clear the screen
    glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Use our shader
    glUseProgram(quad_programID);

    // Bind our texture in Texture Unit 0
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, renderedTexture);
    // Set our "renderedTexture" sampler to use Texture Unit 0
    glUniform1i(texID, 0);

    glUniform1f(timeID, (float)(glfwGetTime()*10.0f) );

    // 1st attribute buffer : vertices
    glEnableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, quad_vertexbuffer);
    glVertexAttribPointer(
                          0,                  // attribute 0. No particular reason for 0, but must match the layout in the shader.
                          3,                  // size
                          GL_FLOAT,           // type
                          GL_FALSE,           // normalized?
                          0,                  // stride
                          (void*)0            // array buffer offset
                          );

    // Draw the triangles !
    glDrawArrays(GL_TRIANGLES, 0, 6); // 2*3 indices starting at 0 -> 2 triangles

    glDisableVertexAttribArray(0);

    glReadBuffer(GL_BACK);
    glBindTexture(GL_TEXTURE_2D, sourceTexture);
    glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, windowWidth, windowHeight, 0);


    // Swap buffers
    glfwSwapBuffers(window);
    glfwPollEvents();



} while (glfwWindowShouldClose(window) == 0); // loop until the window is closed


What happens now is that when I render to the framebuffer, the texture I get as an input is empty, I think. But when I render the same texture on screen, it successfully renders what I expect.

Recommended answer


Okay, I think I've managed to figure something out. Instead of rendering to a framebuffer, what I can do is use glCopyTexImage2D to copy whatever got rendered on the screen into a texture. Now, however, I have another issue: I can't figure out whether glCopyTexImage2D will work with a framebuffer. It works with on-screen rendering, but I am failing to get it to work when I am rendering to a framebuffer. I am not sure if this is even possible in the first place. I made a separate question about this: Does glCopyTexImage2D work when rendering offscreen?

