Integer calculations on GPU

Question

For my work it's particularly interesting to do integer calculations, which obviously are not what GPUs were made for. My question is: do modern GPUs support efficient integer operations? I realize this should be easy to figure out for myself, but I find conflicting answers (for example yes vs. no), so I thought it best to ask.

Also, are there any libraries/techniques for arbitrary precision integers on GPUs?

Answer

First, you need to consider the hardware you're using: GPU performance differs widely from one vendor to another.
Second, it also depends on the operations considered: for example, additions may be faster than multiplications.

In my case, I only use NVIDIA devices. For this kind of hardware, the official documentation states equivalent performance for 32-bit integers and 32-bit single-precision floats on the newer architecture (Fermi). The previous architecture (Tesla) offered equivalent performance for 32-bit integers and floats, but only for additions and logical operations.
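
Whether the documented numbers hold for a particular device and instruction mix is easy to check empirically. Below is a rough micro-benchmark sketch in CUDA; the kernel names, iteration count and launch configuration are arbitrary choices made for this illustration, not something taken from the documentation, and the dependent-operation loops are only meant to keep the integer units busy.

// Minimal sketch: compare 32-bit integer add throughput with integer
// multiply throughput on whatever device is installed.
#include <cstdio>
#include <cuda_runtime.h>

#define N_ITER 20000   // arbitrary; long enough that launch overhead is negligible

__global__ void int_add_kernel(unsigned int *out)
{
    unsigned int x = 2 * threadIdx.x + 1;
    unsigned int y = 2 * blockIdx.x + 3;
    for (int i = 0; i < N_ITER; ++i) {
        // Dependent additions so the compiler cannot collapse the loop.
        x += y;
        y += x;
    }
    out[blockIdx.x * blockDim.x + threadIdx.x] = x ^ y;  // keep the result live
}

__global__ void int_mul_kernel(unsigned int *out)
{
    unsigned int x = 2 * threadIdx.x + 1;  // odd start values: products of odd
    unsigned int y = 2 * blockIdx.x + 3;   // numbers stay odd and never collapse to zero
    for (int i = 0; i < N_ITER; ++i) {
        // Dependent multiplications, same structure as the add kernel.
        x *= y;
        y *= x;
    }
    out[blockIdx.x * blockDim.x + threadIdx.x] = x ^ y;
}

int main()
{
    const int blocks = 256, threads = 256;
    unsigned int *d_out;
    cudaMalloc(&d_out, blocks * threads * sizeof(unsigned int));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    // Warm-up launch so the first timed run is not skewed by initialization.
    int_add_kernel<<<blocks, threads>>>(d_out);
    cudaDeviceSynchronize();

    float ms_add = 0.0f, ms_mul = 0.0f;

    cudaEventRecord(start);
    int_add_kernel<<<blocks, threads>>>(d_out);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    cudaEventElapsedTime(&ms_add, start, stop);

    cudaEventRecord(start);
    int_mul_kernel<<<blocks, threads>>>(d_out);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    cudaEventElapsedTime(&ms_mul, start, stop);

    printf("32-bit integer adds      : %.3f ms\n", ms_add);
    printf("32-bit integer multiplies: %.3f ms\n", ms_mul);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(d_out);
    return 0;
}

Compiled with nvcc, the two timings can then be compared directly on the device at hand.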

But once again, this may not be true depending on the device and the instructions you use.
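
As for the arbitrary-precision part of the question, which the answer above does not address: the usual building block is to store each big integer as an array of fixed-size limbs and to propagate carries explicitly. The sketch below only illustrates that idea and is not code from any particular library; the limb count, the names and the one-number-per-thread layout are assumptions made for the example.

#include <cstdint>
#include <cuda_runtime.h>

#define NUM_LIMBS 8   // 8 x 32-bit limbs = 256-bit integers (an arbitrary fixed width)

// Each thread adds one pair of NUM_LIMBS-limb numbers: c = a + b.
// Limbs are stored least-significant first.
__global__ void bigint_add(const uint32_t *a, const uint32_t *b,
                           uint32_t *c, int count)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= count) return;

    const uint32_t *pa = a + idx * NUM_LIMBS;
    const uint32_t *pb = b + idx * NUM_LIMBS;
    uint32_t       *pc = c + idx * NUM_LIMBS;

    uint32_t carry = 0;
    for (int i = 0; i < NUM_LIMBS; ++i) {
        // 64-bit accumulation keeps the carry handling simple; tuned code
        // would use the hardware's add-with-carry instructions instead.
        uint64_t sum = (uint64_t)pa[i] + pb[i] + carry;
        pc[i] = (uint32_t)sum;           // low 32 bits are the result limb
        carry = (uint32_t)(sum >> 32);   // anything above carries into the next limb
    }
    // A non-zero carry left over here means the fixed width overflowed.
}

A launch such as bigint_add<<<(count + 255) / 256, 256>>>(d_a, d_b, d_c, count) would then add count pairs of 256-bit numbers in parallel, with d_a, d_b and d_c being device buffers of count * NUM_LIMBS limbs each; multiplication and carries that cross thread boundaries are where real multi-precision libraries become considerably more involved.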
