How do GTK+ and OpenGL libraries cooperate on a single X server?


Problem description




The graphical user interface hides mysterious mechanics under its curtain. It mixes 2D and 3D contexts on a single screen and allows for seamless composition of these two, much different worlds. But in what way and at which level are they actually interleaved?

Practice has shown that an OpenGL context can be embedded into a 2D widget library, and so the whole 2D interface can be backed with OpenGL. Also, some applications may exploit hardware acceleration while others don't (while being rendered on the same screen). Does the graphics card "know" about 2D and 3D areas on the screen, and does the window manager create the illusion of a cohesive front end? ...one can notice accelerated windows (3D, video) "hopping" to fit into the 2D interface when, e.g., scrolling a web page or moving a video player across the screen.

The question seems to be trivial, but I haven't met anybody able to give me a comprehensive answer: an answer that would enable me to embed an OpenGL context into a GTK+ application and understand why and how it works. I've tried GtkGLExt and GLUT, but I would like to understand the topic deeply and write my own solution as part of an academic project. I'd like to know what the relations between X, GLX, GTK+, OpenGL and the window manager are, and how to explore this network of libraries in order to program against it consciously.

I don't expect anyone to write a dissertation here, but I will be grateful for any pointers, suggestions or links to articles on the topic.

Solution

You're thinking much, much too complicated. Toolkits like GTK+ or Qt add quite a layer of abstraction over something that's actually rather simple: your system's graphics device consists of a processor and some memory it can operate on. In the simplest case the processor is the regular system CPU and the memory is the normal system memory. Modern computers feature a special-purpose graphics processor (GPU), though, which has its own high-bandwidth memory.

The memory holds framebuffers. Logically, a framebuffer is a 2D array of values. The GPU can be programmed to process the values in the framebuffers in a certain way; that can be used to draw into framebuffers. A monitor displaying a picture is connected to a special piece of circuitry (usually called a RAMDAC or CRTC) which continuously feeds the data of a certain framebuffer in memory to the screen. So in the GPU's memory there's a framebuffer that goes directly to the screen. If you draw there, things appear on the screen.

A program like the X11 server can load drivers that "know" how to program the GPU to draw graphical primitives. X11 itself defines certain graphics primitives, and extension modules can add further ones. X11 also allows segregating the framebuffers in GPU memory into logical areas called Drawables. Drawables on the on-screen framebuffer are called Windows. Since logical Windows can overlap, the X server also manages a Z stack, which it uses to sort the Windows for redraw. Every time a client wants to draw to some Window, the X11 server tells the GPU that the drawing operations may modify only those pixels of the framebuffer in which the Window being drawn to is visible (this is called the "pixel ownership test"). The X11 server will also create Drawables (i.e. framebuffers) that are not part of the on-screen framebuffer memory area. Those are called PBuffers or Pixmaps in X11 terminology (with a special extension it is also possible to move a Window off-screen).

However, all those Drawables are just memory. Technically they are canvases to draw on with something; that something is called "graphics primitives". X11 itself provides a certain set, named X core. There's also a de-facto standard extension called XRender, which provides primitives not found in X core. However, neither X11 core nor XRender provide graphics primitives with which the impression of a 3D drawing could be generated. So there's yet another extension, called GLX, which teaches the X11 server another set of graphics primitives, namely in the form of OpenGL.

However, X core, XRender and GLX/OpenGL are all just different pens, brushes and pencils that operate on the same kind of canvas, namely a simple framebuffer managed by X11.

And what do toolkits like Qt or GTK+ do, then? Well, they use X11 and the graphics primitives it provides to actually draw widgets like buttons, menus and the like, which X11 itself knows nothing about.

