Parallel cumulative (prefix) sums in OpenMP: communicating values between threads

Problem Description

Assume I have a function f(i) which depends on an index i (among other values which cannot be precomputed). I want to fill an array a so that a[n] = sum(f(i)) from i=0 to n, i.e. an inclusive running total, as in the code below.

After a comment by Hristo Iliev I realized what I am doing is a cumulative/prefix sum.

This can be written in code as

float sum = 0;
for(int i=0; i<N; i++) {
    sum += f(i);
    a[i] = sum;  // a[i] = f(0) + f(1) + ... + f(i)
}

Now I want to use OpenMP to do this in parallel. One way I could do this with OpenMP is to write out the values of f(i) in parallel and then take care of the dependency in serial. If f(i) is a slow function then this could work well, since the non-parallel loop is simple.

#pragma omp parallel for
for(int i=0; i<N; i++) {
    a[i] = f(i);  // pass 1: evaluate f in parallel
}
for(int i=1; i<N; i++) {
    a[i] += a[i-1];  // pass 2: serial in-place prefix sum
}

But it is possible to do this without the non-parallel loop using OpenMP. The solution I have come up with, however, is complicated and perhaps hackish. So my question is: is there a simpler, less convoluted way to do this with OpenMP?

The code below basically runs the first code I listed for each thread. The result is that values of a in a given thread are correct up to a constant. I save the sum for each thread to an array suma with nthreads+1 elements. This allows me to communicate between threads and determine the constant offset for each thread. Then I correct the values of a[i] with the offset.

// requires <omp.h> for omp_get_thread_num() / omp_get_num_threads()
float *suma;
#pragma omp parallel
{
    const int ithread = omp_get_thread_num();
    const int nthreads = omp_get_num_threads();
    // split [0, N) into one contiguous chunk per thread
    const int start = ithread*N/nthreads;
    const int finish = (ithread+1)*N/nthreads;
    #pragma omp single
    {
        suma = new float[nthreads+1];
        suma[0] = 0;
    }
    // the implicit barrier at the end of single makes suma visible to all threads
    float sum = 0;
    for (int i=start; i<finish; i++) {
        sum += f(i);  // local prefix sum over this thread's chunk
        a[i] = sum;
    }
    suma[ithread+1] = sum;  // publish this chunk's total
    #pragma omp barrier
    // each thread's offset is the sum of all preceding chunks' totals
    float offset = 0;
    for(int i=0; i<(ithread+1); i++) {
        offset += suma[i];
    }
    for(int i=start; i<finish; i++) {
        a[i] += offset;  // shift this chunk into place
    }
}
delete[] suma;

A simple test is just to set f(i) = i. Then the solution is a[i] = i*(i+1)/2 (and at infinity it's -1/12).
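
For illustration, here is a minimal self-contained version of that test (my own harness, not from the original post). It runs the two-pass variant above with f(i) = i and checks the result against the closed form; for N = 1000 all the values involved are exactly representable as floats, so an exact comparison is safe. Compile with OpenMP enabled, e.g. g++ -fopenmp.

#include <cstdio>
#include <vector>

static float f(int i) { return (float)i; }  // test function: f(i) = i

int main() {
    const int N = 1000;
    std::vector<float> a(N);

    // pass 1: evaluate f(i) in parallel
    #pragma omp parallel for
    for (int i = 0; i < N; i++) a[i] = f(i);

    // pass 2: serial in-place prefix sum
    for (int i = 1; i < N; i++) a[i] += a[i-1];

    // verify a[i] == i*(i+1)/2
    for (int i = 0; i < N; i++) {
        if (a[i] != (float)(i*(i+1)/2)) {
            std::printf("mismatch at i=%d\n", i);
            return 1;
        }
    }
    std::printf("ok: a[%d] = %g\n", N-1, a[N-1]);
    return 0;
}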

Recommended Answer

You can extend your strategy to an arbitrary number of sub-regions, and reduce them recursively, using tasks:

#include <cstddef>
#include <iostream>
#include <vector>

using namespace std;

const int n          = 10000;
const int baseLength = 100;  // below this length, scan serially

int f(int ii) {
  return ii;
}

// In-place inclusive prefix sum over [begin, end); returns the total.
int recursiveSumBody(int * begin, int * end){

  size_t length = end - begin;
  size_t mid    = length/2;
  int    sum    = 0;

  if ( length < baseLength ) {
    // base case: serial in-place scan
    for(size_t ii = 1; ii < length; ii++ ){
        begin[ii] += begin[ii-1];
    }
  } else {
    // scan the two halves concurrently as tasks
#pragma omp task shared(sum)
    {
      sum = recursiveSumBody(begin, begin+mid);
    }
#pragma omp task
    {
      recursiveSumBody(begin+mid, end);
    }
#pragma omp taskwait

    // shift the right half by the left half's total; note this is a
    // nested parallel region, so with nested parallelism disabled (the
    // usual default) it runs on one thread, which costs performance
    // but not correctness
#pragma omp parallel for
    for(size_t ii = mid; ii < length; ii++) {
      begin[ii] += sum;
    }
  }
  return begin[length-1];
}

void recursiveSum(int * begin, int * end){
  // one thread spawns the task tree; the rest of the team executes tasks
#pragma omp single
  {
    recursiveSumBody(begin, end);
  }
}

int main() {

  vector<int> a(n, 0);

#pragma omp parallel
  {
    #pragma omp for
    for(int ii=0; ii < n; ii++) {
      a[ii] = f(ii);
    }

    // &a[0] + n avoids &a[n], which would index one past the end
    recursiveSum(&a[0], &a[0] + n);
  }
  cout << n*(n-1)/2 << endl;  // closed form: 49995000
  cout << a[n-1]    << endl;  // should match

  return 0;
}
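
For reference (my note, not part of the original answer): built with an OpenMP-enabled compiler, e.g. g++ -fopenmp, this program should print 49995000 twice, once from the closed form n*(n-1)/2 and once from a[n-1].

As an aside, compilers that implement OpenMP 5.0 can express this pattern directly with a built-in scan reduction. A minimal sketch (assuming, as all the parallel versions above already do, that f is safe to call concurrently):

float sum = 0;
#pragma omp parallel for reduction(inscan, +:sum)
for (int i = 0; i < N; i++) {
    sum += f(i);                     // input phase: feed f(i) into the scan
    #pragma omp scan inclusive(sum)  // sum becomes f(0) + ... + f(i)
    a[i] = sum;                      // store the inclusive prefix
}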
