I'm trying to use OpenGL with the Windows API on different threads


Problem Description


So basically I am using the Windows API to create an empty window, and then I use OpenGL to draw to that window from different threads. I managed to do this with just one thread, but getting and dispatching system messages so that the window stays usable was slowing down the frame rate I was able to get, so I'm trying to get another thread to do that in parallel while I draw in the main thread.

To do this I have a second thread which creates an empty window and enters an infinite loop to handle the Windows message loop. Before entering the loop it passes the HWND of the empty window to the main thread so OpenGL can be initialised. To do that I use the PostThreadMessage function with the message code WM_USER, using the wParam of the message to send the window handle back. Here is the code for that secondary thread:

#include <windows.h>
#include <GL/gl.h>
#include <thread>
#include <cstdint>

bool t2main(DWORD parentThreadId, int x = 0, int y = 0, int w = 256, int h = 256, int pixelw = 2, int pixelh = 2, const char* windowName = "Window") {

// Basic drawing values
int sw = w, sh = h, pw = pixelw, ph = pixelh;
int ww = 0; int wh = 0;

// Windows API window handler
HWND windowHandler;

// Calculate total window dimensions
ww = sw * pw; wh = sh * ph;

// Create the window handler
WNDCLASS wc;
wc.hIcon = LoadIcon(NULL, IDI_APPLICATION);
wc.hCursor = LoadCursor(NULL, IDC_ARROW);
wc.style = CS_HREDRAW | CS_VREDRAW | CS_OWNDC;
wc.hInstance = GetModuleHandle(nullptr);
wc.lpfnWndProc = DefWindowProc;
wc.cbClsExtra = 0;
wc.cbWndExtra = 0;
wc.lpszMenuName = nullptr;
wc.hbrBackground = nullptr;
wc.lpszClassName = "windowclass";

RegisterClass(&wc);

DWORD dwExStyle = WS_EX_APPWINDOW | WS_EX_WINDOWEDGE;
DWORD dwStyle = WS_CAPTION | WS_SYSMENU | WS_VISIBLE | WS_THICKFRAME;

RECT rWndRect = { 0, 0, ww, wh };
AdjustWindowRectEx(&rWndRect, dwStyle, FALSE, dwExStyle);
int width = rWndRect.right - rWndRect.left;
int height = rWndRect.bottom - rWndRect.top;

windowHandler = CreateWindowEx(dwExStyle, "windowclass", windowName, dwStyle, x, y, width, height, NULL, NULL, GetModuleHandle(nullptr), NULL);

if(windowHandler == NULL) { return false; }

PostThreadMessageA(parentThreadId, WM_USER, (WPARAM) windowHandler, 0);

for(;;) {

    MSG msg;

    PeekMessageA(&msg, NULL, 0, 0, PM_REMOVE);
    DispatchMessageA(&msg);

}
}

This function gets called from the main entry point, which correctly receives the window handle and then tries to set up OpenGL with it. Here is the code:

int main() {

// Basic drawing values
int sw = 256, sh = 256, pw = 2, ph = 2;
int ww = 0; int wh = 0;
const char* windowName = "Window";

// Thread stuff
DWORD t1Id, t2Id;
HANDLE t1Handler, t2Handler;

// Pixel array
Pixel* pixelBuffer = nullptr;

// OpenGl device context to draw
HDC glDeviceContext;
HWND threadWindowHandler;

t1Id = GetCurrentThreadId();

std::thread t = std::thread(&t2main, t1Id, 0, 0, sw, sh, pw, ph, windowName);
t.detach();

t2Handler = t.native_handle();
t2Id = GetThreadId(t2Handler);

while(true) {

    MSG msg;

    PeekMessageA(&msg, NULL, WM_USER, WM_USER + 100, PM_REMOVE);

    if(msg.message == WM_USER) {

        threadWindowHandler = (HWND) msg.wParam;
        break;

    }

}

// Initialise OpenGL with the window handle that we just created
glDeviceContext = GetDC(threadWindowHandler);

PIXELFORMATDESCRIPTOR pfd = {
    sizeof(PIXELFORMATDESCRIPTOR), 1,
    PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER,
    PFD_TYPE_RGBA, 32, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
    PFD_MAIN_PLANE, 0, 0, 0, 0
};

int pf = ChoosePixelFormat(glDeviceContext, &pfd);
SetPixelFormat(glDeviceContext, pf, &pfd);

HGLRC glRenderContext = wglCreateContext(glDeviceContext);
wglMakeCurrent(glDeviceContext, glRenderContext);

// Create an OpenGl buffer
GLuint glBuffer;

glEnable(GL_TEXTURE_2D);
glGenTextures(1, &glBuffer);
glBindTexture(GL_TEXTURE_2D, glBuffer);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);

// Create a pixel buffer to hold the screen data and allocate space for it
pixelBuffer = new Pixel[sw * sh];
for(int32_t i = 0; i < sw * sh; i++) {
    pixelBuffer[i] = Pixel();
}

// Test a pixel
pixelBuffer[10 * sw + 10] = Pixel(255, 255, 255);

// Push the current buffer into view
glViewport(0, 0, ww, wh);

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, sw, sh, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelBuffer);

glBegin(GL_QUADS);
glTexCoord2f(0.0, 1.0); glVertex3f(-1.0f, -1.0f, 0.0f);
glTexCoord2f(0.0, 0.0); glVertex3f(-1.0f, 1.0f, 0.0f);
glTexCoord2f(1.0, 0.0); glVertex3f(1.0f, 1.0f, 0.0f);
glTexCoord2f(1.0, 1.0); glVertex3f(1.0f, -1.0f, 0.0f);
glEnd();

SwapBuffers(glDeviceContext);

for(;;) {}

}

To hold the pixel information I'm using this struct:

struct Pixel {
union {
    uint32_t n = 0xFF000000; //Default 255 alpha
    struct {
        uint8_t r;  uint8_t g;  uint8_t b;  uint8_t a;
    };
};

Pixel() {

    r = 0;
    g = 0;
    b = 0;
    a = 255;

}

Pixel(uint8_t red, uint8_t green, uint8_t blue, uint8_t alpha = 255) {

    r = red;
    g = green;
    b = blue;
    a = alpha;

}

};

When I try to run this code I don't get the desired pixel output; instead I just get the empty window, as if OpenGL hadn't initialised correctly. When I use the same code but all in one thread, I get the empty window with the pixel in it. What am I doing wrong here? Is there something I need to do before I initialise OpenGL in another thread? I appreciate all kinds of feedback. Thanks in advance.

Solution

There are several issues here. Let's address them in order.

First, let's recall the rules of:

OpenGL and threads

The basic rules about OpenGL with regard to windows, device contexts, and threads are:

  1. An OpenGL context is not associated with a particular window or device context.

  2. You can make an OpenGL context "current" on any device context (HDC, usually associated with a window) that is compatible with the device context with which the context was originally created.

  3. An OpenGL context can be "current" on only one thread at a time, or not be active at all.

  4. To move an OpenGL context's "current state" from one thread to another (see the sketch after this list), you do the following:

    • first: unmake the context "current" on the thread it is currently used on
    • second: make it "current" on the thread you want it to be current on.
  5. More than one thread in a process (up to all of them) can have an OpenGL context "current" at the same time.

  6. Multiple OpenGL contexts (up to all of them), which by rule 5 may be current in different threads, can be current with the same device context (HDC) at the same time.

  7. There are no defined rules for drawing commands that happen concurrently on different threads but are current on the same HDC. Ordering must be imposed by the user, by placing appropriate locks that work together with OpenGL's synchronization primitives. Until the introduction of explicit, fine-grained synchronization objects into OpenGL, the only synchronization available was glFinish and the implicitly synchronizing OpenGL calls (e.g. glReadPixels).
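
As a minimal sketch of rule 4 (using rule 7's glFinish before the hand-off), here is one way moving a context between threads could look. The names renderThenHandOff, takeOver, dc, and rc, and the condition-variable signalling, are illustrative assumptions, not anything from the question's code:

#include <windows.h>
#include <GL/gl.h>
#include <condition_variable>
#include <mutex>

std::mutex gCtxMutex;
std::condition_variable gCtxCond;
bool gReleased = false;

// Runs on the thread that currently owns the context.
void renderThenHandOff(HDC dc, HGLRC rc) {
    wglMakeCurrent(dc, rc);             // the context is current on this thread
    // ... issue OpenGL commands ...
    glFinish();                         // be sure the commands have completed
    wglMakeCurrent(nullptr, nullptr);   // rule 4, step 1: un-make it current
    { std::lock_guard<std::mutex> lock(gCtxMutex); gReleased = true; }
    gCtxCond.notify_one();
}

// Runs on the thread that is taking the context over.
void takeOver(HDC dc, HGLRC rc) {
    std::unique_lock<std::mutex> lock(gCtxMutex);
    gCtxCond.wait(lock, [] { return gReleased; });  // wait for step 1
    wglMakeCurrent(dc, rc);             // rule 4, step 2: make it current here
    // ... issue OpenGL commands from this thread ...
}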

Misconceptions in your understanding of what OpenGL does

This comes from reading the comments in your code:

int main() {

Why is your thread function called main? main is a reserved name, to be used exclusively for the process entry function. Even if your entry point is WinMain, you must not use main as a function name.

// Pixel array
Pixel* pixelBuffer = nullptr;

It's unclear what pixelBuffer is meant for at this point. Later on you upload it into a texture, but apparently you don't set up the drawing to use that texture.

t1Id = GetCurrentThreadId();

std::thread t = std::thread(&t2main, t1Id, 0, 0, sw, sh, pw, ph, windowName);
t.detach();

t2Handler = t.native_handle();
t2Id = GetThreadId(t2Handler);

What, I don't even. What is this supposed to do in the first place? First things first: don't mix the Win32 thread API and C++ std::thread. Decide on one, and stick with it.

while(true) {

    MSG msg;

    PeekMessageA(&msg, NULL, WM_USER, WM_USER + 100, PM_REMOVE);

    if(msg.message == WM_USER) {

        threadWindowHandler = (HWND) msg.wParam;
        break;

    }

}

Why the hell are you passing the window handle through a thread message? This is wrong on so many levels. Threads all live in the same address space, so you could use a queue, or a global variable, or pass it as a parameter to the thread entry function, and so on.
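
One hedged alternative, for instance: hand the HWND back through a std::promise/std::future pair. The names windowThread and hwndPromise are illustrative, and the built-in "STATIC" window class merely keeps the sketch self-contained where the real code would register its own class as t2main does:

#include <windows.h>
#include <future>
#include <thread>

void windowThread(std::promise<HWND> hwndPromise) {
    HWND hwnd = CreateWindowExA(0, "STATIC", "Window",
                                WS_OVERLAPPEDWINDOW | WS_VISIBLE,
                                0, 0, 512, 512,
                                nullptr, nullptr, GetModuleHandle(nullptr), nullptr);
    hwndPromise.set_value(hwnd);  // wakes the waiting thread, exactly once

    MSG msg;
    while (GetMessageA(&msg, nullptr, 0, 0) > 0) {  // blocks instead of spinning
        TranslateMessage(&msg);
        DispatchMessageA(&msg);
    }
}

int main() {
    std::promise<HWND> promise;
    std::future<HWND> future = promise.get_future();
    std::thread t(windowThread, std::move(promise));

    HWND hwnd = future.get();  // blocks until the window exists
    // ... GetDC(hwnd), pixel format, wglCreateContext, drawing as before ...
    t.join();
}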

Furthermore, you could simply have created the OpenGL context in the main thread and then passed it over.

wglMakeCurrent(glDeviceContext, glRenderContext);

// Create an OpenGl buffer
GLuint glBuffer;

glEnable(GL_TEXTURE_2D);
glGenTextures(1, &glBuffer);

That doesn't create an OpenGL buffer object; it creates a texture name.

glBindTexture(GL_TEXTURE_2D, glBuffer);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);

// Create a pixel buffer to hold the screen data and allocate space for it
pixelBuffer[10 * sw + 10] = Pixel(255, 255, 255);

Uhh, no, you don't supply drawable buffers to OpenGL in that way. Heck, you don't even supply draw buffers to OpenGL explicitly at all (this is not D3D12, Metal or Vulkan, where you do).

// Push the current buffer into view
glViewport(0, 0, ww, wh);

Noooo. That's not what glViewport does!

glViewport is part of the transformation pipeline state: ultimately it sets the destination rectangle, i.e. where inside a drawable the clip-space volume will be mapped to. It has absolutely nothing to do with the drawable buffers.
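
A hedged illustration with made-up numbers: in a 512x512 window, this call maps the full [-1, 1] clip-space square onto only the lower-left quarter of the drawable, and touches nothing else:

glViewport(0, 0, 256, 256);  // x, y, width, height of the destination rectangle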

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, sw, sh, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelBuffer);

I think you don't understand what a texture is for. What this call does is copy the contents of pixelBuffer into the currently bound texture. After that, OpenGL is no longer concerned with pixelBuffer at all.
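
One consequence, as a hedged sketch reusing the question's sw, sh, and pixelBuffer: a later edit of the array only reaches the texture through another explicit upload:

pixelBuffer[0] = Pixel(255, 0, 0);  // changes the CPU-side copy only ...
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, sw, sh,
                GL_RGBA, GL_UNSIGNED_BYTE, pixelBuffer);  // ... until re-uploaded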

glBegin(GL_QUADS);
glTexCoord2f(0.0, 1.0); glVertex3f(-1.0f, -1.0f, 0.0f);
glTexCoord2f(0.0, 0.0); glVertex3f(-1.0f, 1.0f, 0.0f);
glTexCoord2f(1.0, 0.0); glVertex3f(1.0f, 1.0f, 0.0f);
glTexCoord2f(1.0, 1.0); glVertex3f(1.0f, -1.0f, 0.0f);
glEnd();

Here you draw something, but you never enabled the use of the texture in the first place. So all that ado about setting up the texture was for nothing.
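
What the answer is asking for would look roughly like this (a hedged sketch reusing the question's glBuffer texture name; by rules 1-3 this state has to be set on the thread where the context is current):

glEnable(GL_TEXTURE_2D);                 // texturing must be on at draw time
glBindTexture(GL_TEXTURE_2D, glBuffer);  // and the right texture bound
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f, -1.0f, 0.0f);
glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f,  1.0f, 0.0f);
glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f,  1.0f, 0.0f);
glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f, -1.0f, 0.0f);
glEnd();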

SwapBuffers(glDeviceContext);

for(;;) {}

}

So after swapping the window buffers you make the thread spin forever. There are two problems with that: the main message loop over in the other thread still handles other messages for the window, possibly including WM_PAINT, and depending on whether you've set a background brush and/or on how you handle WM_ERASEBKGND, whatever you just drew might vanish again immediately.
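
One hedged way to guard against that: give the window class its own procedure instead of bare DefWindowProc (WndProc is an illustrative name; wc.lpfnWndProc = WndProc; would replace the assignment in t2main) and claim the erase and paint messages:

LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam) {
    switch (msg) {
    case WM_ERASEBKGND:
        return 1;                     // report "already erased": nothing clears GL's output
    case WM_PAINT:
        ValidateRect(hWnd, nullptr);  // mark the window clean without touching its pixels
        return 0;
    default:
        return DefWindowProc(hWnd, msg, wParam, lParam);
    }
}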

And by spinning the thread you're consuming CPU time for no reason whatsoever. You could just as well end the thread.
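
If the thread must stay alive anyway, a hedged sketch of what could replace the for(;;) {} from the question:

Sleep(INFINITE);  // park the thread without burning a core; or just return and end it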
