Graph rendering using 3D acceleration


Problem description


We generate graphs for huge datasets. We are talking 4096 samples per second, and 10 minutes per graph. A simple calculation makes for 4096 * 60 * 10 = 2457600 samples per linegraph. Each sample is a double-precision (8-byte) FP value. Furthermore, we render multiple linegraphs on one screen, up to about a hundred. That means we render about 25M samples on a single screen. Using common sense and simple tricks, we can get this code performant using the CPU, drawing onto a 2D canvas. Performant, meaning the render times fall below one minute. As this is scientific data, we cannot omit any samples. Seriously, this is not an option. Do not even start thinking about it.
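For reference, the per-graph numbers above work out as follows (a quick sanity check, nothing more):

```python
# Per-graph sample count and raw memory footprint, as stated above.
samples_per_second = 4096
seconds_per_graph = 10 * 60
samples_per_graph = samples_per_second * seconds_per_graph  # 2457600

bytes_per_graph = samples_per_graph * 8  # double precision, 8 bytes each
mib_per_graph = bytes_per_graph / 2**20  # 18.75 MiB per linegraph
```

So a single linegraph is roughly 19 MB of raw doubles before any rendering work even begins.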

Naturally, we want to improve render times using all techniques available. Multicore, pre-rendering, and caching are all quite interesting, but they do not cut it. We want 30 FPS rendering with these datasets at minimum, 60 FPS preferred. We know this is an ambitious goal.

A natural way to offload graphics rendering is to use the system's GPU. GPUs are made to work with huge datasets and process them in parallel. Some simple HelloWorld tests showed us a night-and-day difference in rendering speed when using the GPU.

Now the problem is: GPU APIs such as OpenGL, DirectX and XNA are designed with 3D scenes in mind. Thus, using them to render 2D linegraphs is possible, but not ideal. In the proofs of concept we developed, we found that we needed to transform the 2D world into a 3D world. Suddenly we have to work with an XYZ coordinate system, polygons, vertices, and more of that goodness. That is far from ideal from a development perspective. Code gets unreadable, maintenance is a nightmare, and more issues boil up.

What would your suggestion or idea be to do this in 3D? Is the only way to do this to actually convert the two systems (2D coordinates versus 3D coordinates & entities)? Or is there a sleeker way to achieve this?
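One common way to sidestep a full 2D-to-3D conversion is an orthographic projection, which lets the code keep thinking in 2D data coordinates while the GPU sees a degenerate "3D" scene with z pinned to a constant. A minimal sketch of the mapping such a projection performs (the function name and bounds are illustrative, not taken from any particular API):

```python
def ortho_2d(x, y, left, right, bottom, top):
    """Map a 2D data coordinate into normalized device coordinates.

    Both axes land in [-1, 1]; z is fixed at 0.0, so although the
    pipeline is nominally 3D, the scene never leaves the plane.
    """
    ndc_x = 2.0 * (x - left) / (right - left) - 1.0
    ndc_y = 2.0 * (y - bottom) / (top - bottom) - 1.0
    return ndc_x, ndc_y, 0.0
```

In OpenGL, this is the mapping that glOrtho (or an equivalent projection matrix) sets up for you; the application code then only ever deals in sample index versus sample value.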

Why is it useful to render multiple samples on one pixel? Because it represents the dataset better. Say one pixel covers the values 2, 5 and 8. Due to some sample-omitting algorithm, only the 5 is drawn. The line would then only go to 5, and not to 8, so the data is distorted. You could argue for the opposite too, but the fact of the matter is that the first argument counts for the datasets we work with. This is exactly why we cannot omit samples.

Solution

A really popular toolkit for scientific visualization is VTK, and I think it suits your needs:

  1. It's a high-level API, so you won't have to use OpenGL (VTK is built on top of OpenGL). There are interfaces for C++, Python, Java, and Tcl. I think this would keep your codebase pretty clean.

  2. You can import all kinds of datasets into VTK (there are tons of examples from medical imaging to financial data).

  3. VTK is pretty fast, and you can distribute VTK graphics pipelines across multiple machines if you want to do very large visualizations.

  4. Regarding:

    That means we render about 25M samples on a single screen.

    [...]

    As this is scientific data, we cannot omit any samples. Seriously, this is not an option. Do not even start thinking about it.

You can render large datasets in VTK by sampling and by using LOD models. That is, you'd have a model where you see a lower-resolution version from far out, but if you zoom in you would see a higher-resolution version. This is how a lot of large dataset rendering is done.
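The stride selection behind such an LOD scheme can be as simple as capping how many samples land in each pixel column. A toy sketch (the names and the 2-samples-per-pixel cap are arbitrary illustrative choices, not VTK API):

```python
def lod_stride(visible_samples, pixel_width, max_per_pixel=2):
    """Pick a decimation stride for the current zoom level.

    Keeps at most ~max_per_pixel samples per pixel column;
    a stride of 1 means full resolution (fully zoomed in).
    """
    return max(1, visible_samples // (pixel_width * max_per_pixel))
```

At full zoom-out on a 1920-pixel-wide plot, one 2457600-sample graph would draw every 640th sample; zoom in to a 3840-sample window and the stride drops back to 1, i.e. every sample is drawn.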

You don't need to eliminate points from your actual dataset, but you can surely incrementally refine it when the user zooms in. It does you no good to render 25 million points to a single screen when the user can't possibly process all that data. I would recommend that you take a look at both the VTK library and the VTK user guide, as there's some invaluable information in there on ways to visualize large datasets.
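One detail worth adding: decimation does not have to discard extremes. A per-pixel min/max reduction, the trick behind most oscilloscope-style renderers, keeps both the 2 and the 8 from the question's example in every column it collapses, so peaks survive even though far fewer vertices are drawn. A rough pure-Python sketch, illustrative only:

```python
def minmax_per_pixel(samples, pixel_columns):
    """Collapse samples to one (min, max) pair per pixel column.

    Drawing a vertical segment per pair preserves every extreme,
    unlike naive subsampling, which might keep only the 5 out of
    2, 5 and 8 and so clip the peak.
    """
    n = len(samples)
    out = []
    for col in range(pixel_columns):
        lo = col * n // pixel_columns
        hi = max(lo + 1, (col + 1) * n // pixel_columns)
        chunk = samples[lo:hi]
        out.append((min(chunk), max(chunk)))
    return out
```

This reduces 2457600 samples to at most two vertices per pixel column without visually losing any sample, which may make the "no omitted samples" constraint compatible with an LOD pipeline.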
