Differences between VexCL, Thrust, and Boost.Compute



With just a cursory understanding of these libraries, they look to be very similar. I know that VexCL and Boost.Compute use OpenCL as a backend (although the v1.0 release of VexCL also supports CUDA as a backend) and Thrust uses CUDA. Aside from the different backends, what's the difference between them?

Specifically, what problem space do they address, and why would I want to use one over the other?

Also, the Thrust FAQ states:

The primary barrier to OpenCL support is the lack of an OpenCL compiler and runtime with support for C++ templates

If this is the case, how is it possible that VexCL and Boost.Compute even exist?

Solution

I am the developer of VexCL, but I really like what Kyle Lutz, the author of Boost.Compute, had to say on the same subject on the Boost mailing list. In short, from the user's standpoint, Thrust, Boost.Compute, AMD's Bolt, and probably Microsoft's C++ AMP all implement an STL-like API, while VexCL is an expression-template-based library that is closer to Eigen in nature. I believe the main difference between the STL-like libraries is their portability:

  1. Thrust only supports NVIDIA GPUs, but may also work on CPUs through its OpenMP and TBB backends.
  2. Bolt uses AMD extensions to OpenCL, which are only available on AMD GPUs. It also provides Microsoft C++ AMP and Intel TBB backends.
  3. The only compiler that supports Microsoft C++ AMP is Microsoft Visual C++ (although work on bringing C++ AMP beyond Windows is being done).
  4. Boost.Compute seems to be the most portable solution of those, as it is based on standard OpenCL.

Again, all of these libraries try to implement an STL-like interface, so they have very broad applicability. VexCL, by contrast, was developed with scientific computing in mind. If Boost.Compute had been developed a bit earlier, I could probably have based VexCL on top of it :). Another library for scientific computing worth looking at is ViennaCL, a free open-source linear algebra library for computations on many-core architectures (GPUs, MIC) and multi-core CPUs. See [1] for a comparison of VexCL, ViennaCL, CMTL4, and Thrust in that field.

Regarding the quoted inability of the Thrust developers to add an OpenCL backend: Thrust, VexCL, and Boost.Compute (I am not familiar with the internals of the other libraries) all use metaprogramming techniques to do what they do. But since CUDA supports C++ templates, the job of the Thrust developers is probably a bit easier: they write metaprograms that generate CUDA programs with the help of the C++ compiler. The VexCL and Boost.Compute authors write metaprograms that generate programs that generate OpenCL source code. Have a look at the slides where I tried to explain how VexCL is implemented. So I agree that Thrust's current design prohibits them from adding an OpenCL backend.

[1] Denis Demidov, Karsten Ahnert, Karl Rupp, Peter Gottschling, Programming CUDA and OpenCL: A Case Study Using Modern C++ Libraries, SIAM J. Sci. Comput., 35(5), C453–C472. (an arXiv version is also available).

Update: @gnzlbg commented that there is no support for C++ functors and lambdas in OpenCL-based libraries. Indeed, OpenCL is based on C99 and is compiled from sources stored in strings at runtime, so there is no easy way to fully interact with C++ classes. But to be fair, the OpenCL-based libraries do support user-defined functions and even lambdas to some extent.

Having said that, CUDA-based libraries (and maybe C++ AMP) have the obvious advantage of an actual compile-time compiler (can you even say that?), so integration with user code can be much tighter.
