CUDA array-to-array sum


Problem description

I have a small piece of code as follows:

typedef struct {
  double sX;
  double sY;
  double vX;
  double vY;
  int rX;
  int rY;
  int mass;
  int species;
  int boxnum;
} particle;

typedef struct {
  double mX;
  double mY;
  double count;
  int rotDir;
  double cX; 
  double cY; 
  int superDir;
} box;
//....
int i, boxnum;
// assign each particle to a box based on its position
for(i=0;i<PART_COUNT;i++) {
    particles[i].boxnum = ((((int)(particles[i].sX+boxShiftX))/BOX_SIZE)%BWIDTH+BWIDTH*((((int)(particles[i].sY+boxShiftY))/BOX_SIZE)%BHEIGHT));
}
// accumulate per-box momentum and particle count
for(i=0;i<PART_COUNT;i++) {
    //sum the momenta
    boxnum = particles[i].boxnum;
    boxes[boxnum].mX += particles[i].vX*particles[i].mass;
    boxes[boxnum].mY += particles[i].vY*particles[i].mass;
    boxes[boxnum].count++;
}



Now, I want to port this to CUDA. The first step is easy; spreading the calculation across a bunch of threads is no problem. The issue is the second loop: since any two particles are equally likely to end up in the same box, I'm not sure how to partition the work so as to avoid conflicts.
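For reference, a minimal sketch of that easy first step might look like the following, assuming a one-thread-per-particle mapping and that PART_COUNT, BOX_SIZE, BWIDTH and BHEIGHT are visible as constants in device code; the kernel name and launch configuration are illustrative, not part of the original code:

__global__ void assignBoxes(particle *particles, double boxShiftX, double boxShiftY)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per particle
    if (i < PART_COUNT) {
        particles[i].boxnum = ((((int)(particles[i].sX + boxShiftX)) / BOX_SIZE) % BWIDTH
                             + BWIDTH * ((((int)(particles[i].sY + boxShiftY)) / BOX_SIZE) % BHEIGHT));
    }
}

// illustrative launch, assuming d_particles points to device memory:
// assignBoxes<<<(PART_COUNT + 255) / 256, 256>>>(d_particles, boxShiftX, boxShiftY);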


Ideas?

Recommended answer

You can try to use atomicAdd operations to modify your boxes array. Atomic operations on global memory are very slow, but at the same time it's quite impossible to do any optimization involving shared memory, for two reasons:


  1. Under the assumption that the boxnum properties of particles[0]..particles[n] aren't ordered and don't lie within any small range (on the order of a block size), you can't predict which boxes to load from global memory into shared memory. You would first have to collect all the box numbers.
  2. If you try to collect all the box numbers, you can't use an array indexed by every possible box number, since there are far too many boxes to fit into shared memory. So you'd have to collect indices with a queue (realized with an array, a pointer to the next free slot, and atomic operations), but then you'd still have conflicts, because the same box number could occur multiple times in your queue (see the sketch after this list).
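To make reason 2 concrete, a rough, purely illustrative sketch of such a per-block queue is shown below (the shared array size and a block size of 256 are assumptions); the duplicate box numbers that end up in the queue are exactly the remaining conflict:

__global__ void collectBoxnums(const particle *particles)
{
    __shared__ int queue[256];   // one slot per thread; assumes blockDim.x == 256
    __shared__ int nextFree;     // pointer to the next free slot

    if (threadIdx.x == 0) nextFree = 0;
    __syncthreads();

    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < PART_COUNT) {
        int slot = atomicAdd(&nextFree, 1);  // shared-memory atomic grabs a slot
        queue[slot] = particles[i].boxnum;   // the same boxnum can land in many slots
    }
    __syncthreads();
    // any per-block accumulation over queue[] would still have to resolve
    // those duplicate boxnums, so the write conflicts remain
}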

Conclusion: atomicAdd will give you at least correct behavior. Try it out and test the performance. If you aren't satisfied with the performance, think about whether there's another way to do the same computation that would profit from shared memory.
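As a starting point, a minimal sketch of the atomicAdd version of the second loop might look like this, with one thread per particle; the kernel name and launch configuration are assumptions, and note that atomicAdd on double requires compute capability 6.0 or newer (older devices need a compare-and-swap based workaround):

__global__ void sumMomenta(const particle *particles, box *boxes)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per particle
    if (i < PART_COUNT) {
        int b = particles[i].boxnum;
        atomicAdd(&boxes[b].mX, particles[i].vX * particles[i].mass);
        atomicAdd(&boxes[b].mY, particles[i].vY * particles[i].mass);
        atomicAdd(&boxes[b].count, 1.0);   // count is a double in the box struct
    }
}

// illustrative launch, assuming d_particles and d_boxes are device pointers
// and the boxes array was zeroed beforehand:
// sumMomenta<<<(PART_COUNT + 255) / 256, 256>>>(d_particles, d_boxes);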
