Thread for interprocess communication in OpenMP


Problem Description

I have an OpenMP parallelized program that looks like that:

[...]
#pragma omp parallel
{
//initialize threads

#pragma omp for
for(...)
  {
  //Work is done here

  }

}

Now I'm adding MPI support. What I will need is a thread that handles the communication, in my case, calls GatherAll all the time and fills/empties a linked list for receiving/sending data from the other processes. That thread should send/receive until a flag is set. So right now there is no MPI stuff in the example, my question is about the implementation of that routine in OpenMP. How do I implement such a thread? For example, I tried to introduce a single directive here:

[...]
int kill = 0;
#pragma omp parallel shared(kill)
{
  //initialize threads
#pragma omp single nowait
  {
    while (!kill)
      send_receive();
  }
#pragma omp for
  for (...)
  {
    //Work is done here
  }
  kill = 1;
}

but in this case the program gets stuck because the implicit barrier after the for-loop waits for the thread in the while-loop above.

Thanks, Rugmini.

Answer

You could try adding a nowait clause to your single construct.

Edit: in reply to the first comment

If you enable nested parallelism in OpenMP, you might be able to achieve what you want with two levels of parallelism. At the top level you have two concurrent parallel sections: one for the MPI communications, the other for the local computation. That second section can itself be parallelized, giving you the second level of parallelism. Only the threads executing that level are affected by the barriers inside it.

#include <iostream>
#include <omp.h>

int main()
{
  int kill = 0;   // completion flag shared between the two sections
#pragma omp parallel sections
  {
#pragma omp section
    {
      // Communication section: poll until the computation section is done.
      int done = 0;
      while (!done) {
        /* manage MPI communications */
#pragma omp atomic read
        done = kill;   // atomic read avoids a data race on the flag
      }
    }

#pragma omp section
    {
      // Computation section: needs nested parallelism enabled
      // (OMP_NESTED=true or omp_set_nested(1)) to get a second team.
#pragma omp parallel
#pragma omp for
      for (int i = 0; i < 10000 ; ++i) {
        /* your workload */
      }
#pragma omp atomic write
      kill = 1;   // signal the communication section to stop
    }
  }
}

However, you must be aware that this code will break if you don't have at least two threads: with a single thread, the first section's while-loop never terminates, because kill is only set in the second section. That means you're breaking the assumption that the sequential and parallelized versions of the code should do the same thing.

It would be much cleaner to wrap your OpenMP kernel inside a more global MPI communication scheme (potentially using asynchronous communications to overlap communications with computations).
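In pseudocode, that cleaner structure might look like the following. This is a sketch only: the exact MPI calls depend on the communication pattern, and MPI_THREAD_FUNNELED is an assumption that only the master thread makes MPI calls.

```
MPI_Init_thread(..., MPI_THREAD_FUNNELED, &provided)
for each work batch:
    MPI_Irecv / MPI_Isend        # post nonblocking exchanges
    #pragma omp parallel for     # compute on data already available
    for i in local range:
        work(i)
    MPI_Waitall(requests)        # complete the overlap before the next batch
MPI_Finalize()
```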
