How do virtual machines render GUIs?

Question

So I have been doing a lot of reading about execution environments (Python's, the JVM...) and I am starting to implement one of my own. It is a register-based environment written in C. I have a basic bytecode format defined and execution is going pretty smoothly so far. My question is: how do VEs render GUIs? To describe my work so far in a bit more detail: my VE has a screen buffer (I am experimenting with it). Every time I poke it, I dump the entire screen buffer to see the output.
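To make the setup concrete, here is a minimal sketch in C of what a screen buffer like that might look like. Everything here (the `vm_t` struct, the dimensions, the poke/dump names) is invented for illustration, not taken from my actual code:

```c
#include <stdio.h>
#include <stdint.h>

/* Illustrative only: a tiny text-mode screen buffer owned by the VM. */
#define SCREEN_W 40
#define SCREEN_H 12

typedef struct {
    uint8_t screen[SCREEN_H][SCREEN_W]; /* one byte per cell */
    /* ...registers, program counter, etc. would live here... */
} vm_t;

/* Handler for a hypothetical "poke" opcode: write one byte into the buffer. */
static void vm_poke(vm_t *vm, int x, int y, uint8_t value) {
    if (x >= 0 && x < SCREEN_W && y >= 0 && y < SCREEN_H)
        vm->screen[y][x] = value;
}

/* Dump the whole buffer to stdout, as described above. */
static void vm_dump_screen(const vm_t *vm) {
    for (int y = 0; y < SCREEN_H; y++) {
        for (int x = 0; x < SCREEN_W; x++)
            putchar(vm->screen[y][x] ? vm->screen[y][x] : '.');
        putchar('\n');
    }
}

int main(void) {
    vm_t vm = {0};
    vm_poke(&vm, 3, 2, 'X');
    vm_dump_screen(&vm);
    return 0;
}
```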

So far so good with basic calculations and such, but I hit a bump when I wanted to understand how to render GUIs. I am getting nowhere with this. Any help would be appreciated. Even if I am thinking about this completely wrong, any pointers to get started in the right direction would be really great. Thanks.

Answer

All GUI toolkits on Python are wrappers around C/C++ code. On Java there are some "pure" Java toolkits like Swing, but at the lowest level they depend on C code to do the drawing and handle user input. There is no special support for things like graphics in the Java VM.
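One way to see what "no special support" means in practice: the VM only needs a generic way to call out to native C functions, and a toolkit-style binding supplies the actual drawing code. The sketch below is a guess at how such a binding could look; the opcode handler and dispatch table are invented for illustration and don't correspond to any particular VM:

```c
#include <stdio.h>

/* Illustrative native-call table: the VM knows nothing about graphics,
 * only how to dispatch to registered C functions by index. */
typedef void (*native_fn)(int a, int b);

static void native_draw_pixel(int x, int y) {
    /* A real binding would call into GDI, Xlib, SDL, or similar here. */
    printf("draw_pixel(%d, %d)\n", x, y);
}

static native_fn native_table[] = {
    native_draw_pixel, /* native function #0 */
};

/* Handler for a hypothetical NATIVE_CALL opcode. */
static void op_native_call(int fn_index, int a, int b) {
    native_table[fn_index](a, b);
}

int main(void) {
    op_native_call(0, 10, 20); /* bytecode requested native function #0 */
    return 0;
}
```

This is essentially what JNI or Python's C extension mechanism provide in a far more elaborate form: a bridge from managed code to native drawing code.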

As for how the GUI gets rendered at the lowest level, it depends. On Windows, user-mode software isn't allowed direct access to the video hardware. Ultimately any C/C++ GUI code has to go through either GDI or Direct3D to do the rendering. The kernel-mode GDI code is able to do all the rendering itself by writing to the framebuffer, but it also supports acceleration by passing operations to the display driver. On the other hand, the Direct3D kernel code passes pretty much everything to the driver, which in turn passes everything on to the GPU. Almost all of the kernel-mode code is written in C, while code running on the GPU is a mixture of hand-coded assembly and code written in higher-level shading languages.
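As a small demonstration of the "everything goes through GDI" point, the following sketch (Windows-only, assuming a normal desktop session) draws a line of pixels on the screen's device context with `SetPixel`. It is deliberately naive; the point is that user-mode C code calls GDI entry points and never touches the framebuffer itself:

```c
#include <windows.h>

int main(void) {
    /* GetDC(NULL) returns a device context for the entire screen. */
    HDC hdc = GetDC(NULL);

    /* Each SetPixel call goes through GDI into kernel mode; user-mode
     * code never writes to video memory directly. */
    for (int x = 0; x < 100; x++)
        SetPixel(hdc, 50 + x, 50, RGB(255, 0, 0));

    ReleaseDC(NULL, hdc);
    return 0;
}
```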

Note that GPU assembly language is very different from Intel x86 assembly language, and varies considerably between manufacturers and GPU generations.

I'm not sure what current practice is on Linux and other Unix-type operating systems, but it used to be common to give the X server, which is a user-mode process, direct access to the framebuffer. C code in the X server was ultimately responsible for rendering. Presumably this has changed at least somewhat now that GPU acceleration is more common.
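For example, on Linux a sufficiently privileged process can still map the kernel's framebuffer device and write pixels directly, which is roughly what the classic X server arrangement amounted to. A minimal sketch, assuming a 32-bits-per-pixel display and permission to open /dev/fb0 (a robust version would also query FBIOGET_FSCREENINFO for the real line stride):

```c
#include <fcntl.h>
#include <linux/fb.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0)
        return 1;

    /* Ask the kernel for the display geometry. */
    struct fb_var_screeninfo vinfo;
    if (ioctl(fd, FBIOGET_VSCREENINFO, &vinfo) < 0)
        return 1;

    /* Assumes stride == xres_virtual * bytes-per-pixel and 32 bpp. */
    size_t len = (size_t)vinfo.yres_virtual * vinfo.xres_virtual
               * (vinfo.bits_per_pixel / 8);
    uint32_t *fb = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED)
        return 1;

    /* Write one red pixel at (100, 100), straight into video memory. */
    fb[100 * vinfo.xres_virtual + 100] = 0x00FF0000;

    munmap(fb, len);
    close(fd);
    return 0;
}
```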
