Difference between MPI_Allgather and MPI_Alltoall functions?


Question

What is the main difference betweeen the MPI_Allgather and MPI_Alltoall functions in MPI?

I mean can some one give me examples where MPI_Allgather will be helpful and MPI_Alltoall will not? and vice versa.

I am not able to understand the main difference. It looks like in both cases all the processes send send_cnt elements to every other process participating in the communicator and receive them?

Thanks

Answer

A picture says more than a thousand words, so here are several ASCII art pictures:

rank    send buf                        recv buf
----    --------                        --------
 0      a,b,c         MPI_Allgather     a,b,c,A,B,C,#,@,%
 1      A,B,C        ---------------->  a,b,c,A,B,C,#,@,%
 2      #,@,%                           a,b,c,A,B,C,#,@,%

This is just the regular MPI_Gather, only in this case all processes receive the data chunks, i.e. the operation is root-less.
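The data movement above can be modelled in plain Python. This is only a sketch of what the collective does with the buffers, not real MPI; `allgather` is a hypothetical helper name, and each list plays the role of one rank's buffer:

```python
# Plain-Python model of MPI_Allgather's data movement (no actual MPI).
# send_bufs[r] stands for rank r's send buffer; the result gives each
# rank's receive buffer: the concatenation of every rank's send buffer.
def allgather(send_bufs):
    gathered = [item for buf in send_bufs for item in buf]
    return [list(gathered) for _ in send_bufs]

send = [["a", "b", "c"], ["A", "B", "C"], ["#", "@", "%"]]
recv = allgather(send)
# every rank ends up with a,b,c,A,B,C,#,@,%
```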

rank    send buf                        recv buf
----    --------                        --------
 0      a,b,c          MPI_Alltoall     a,A,#
 1      A,B,C        ---------------->  b,B,@
 2      #,@,%                           c,C,%

(a more elaborate case with two elements per process)

rank    send buf                        recv buf
----    --------                        --------
 0      a,b,c,d,e,f    MPI_Alltoall     a,b,A,B,#,@
 1      A,B,C,D,E,F  ---------------->  c,d,C,D,%,$
 2      #,@,%,$,&,*                     e,f,E,F,&,*

(looks better if each element is coloured by the rank that sends it but...)

MPI_Alltoall works as a combined MPI_Scatter and MPI_Gather - the send buffer in each process is split as in MPI_Scatter, and then each column of chunks is gathered by the respective process whose rank matches the number of the chunk column. MPI_Alltoall can also be seen as a global transposition operation acting on chunks of data.
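The chunk transposition can likewise be sketched in plain Python. Again, `alltoall` is a hypothetical helper that only mimics the data movement of the collective, not real MPI:

```python
# Plain-Python model of MPI_Alltoall's data movement (no actual MPI).
# Each of the n send buffers is split into n chunks of `chunk` elements
# (the MPI_Scatter-like step); rank r then receives chunk column r from
# every process (the MPI_Gather-like step) - a transpose of the chunks.
def alltoall(send_bufs, chunk=1):
    n = len(send_bufs)
    chunks = [[buf[i * chunk:(i + 1) * chunk] for i in range(n)]
              for buf in send_bufs]
    return [[item for src in range(n) for item in chunks[src][r]]
            for r in range(n)]

send = [["a", "b", "c"], ["A", "B", "C"], ["#", "@", "%"]]
recv = alltoall(send)
# rank 0 receives a,A,#; rank 1 receives b,B,@; rank 2 receives c,C,%
```

Passing `chunk=2` reproduces the two-elements-per-process diagram above.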

Is there a case where the two operations are interchangeable? To answer this question properly, one simply has to analyse the sizes of the data in the send buffer and of the data in the receive buffer:

operation      send buf size      recv buf size
---------      -------------      -------------
MPI_Allgather  sendcnt            n_procs * sendcnt
MPI_Alltoall   n_procs * sendcnt  n_procs * sendcnt

The receive buffer size is actually n_procs * recvcnt, but MPI mandates that the number of basic elements sent should be equal to the number of basic elements received, hence if the same MPI datatype is used in both send and receive parts of MPI_All..., then recvcnt must be equal to sendcnt.

It is immediately obvious that for the same size of received data, the amount of data sent by each process is different. For the two operations to be equivalent, a necessary condition is that the send buffer sizes in both cases be equal, i.e. n_procs * sendcnt == sendcnt, which is only possible if n_procs == 1, i.e. there is only one process, or if sendcnt == 0, i.e. no data is being sent at all. Hence there is no practically viable case where the two operations are really interchangeable. But one can simulate MPI_Allgather with MPI_Alltoall by repeating the same data n_procs times in the send buffer (as already noted by Tyler Gill). Here is the action of MPI_Allgather with one-element send buffers:

rank    send buf                        recv buf
----    --------                        --------
 0      a             MPI_Allgather     a,A,#
 1      A            ---------------->  a,A,#
 2      #                               a,A,#

And here is the same result achieved with MPI_Alltoall:

rank    send buf                        recv buf
----    --------                        --------
 0      a,a,a          MPI_Alltoall     a,A,#
 1      A,A,A        ---------------->  a,A,#
 2      #,#,#                           a,A,#
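This equivalence can be checked with plain-Python models of the two collectives' data movement (hypothetical helpers that only mimic the buffer shuffling, not real MPI):

```python
# Plain-Python models of the two collectives' data movement (no actual MPI).
def allgather(send_bufs):
    gathered = [item for buf in send_bufs for item in buf]
    return [list(gathered) for _ in send_bufs]

def alltoall(send_bufs, chunk=1):
    n = len(send_bufs)
    chunks = [[buf[i * chunk:(i + 1) * chunk] for i in range(n)]
              for buf in send_bufs]
    return [[item for src in range(n) for item in chunks[src][r]]
            for r in range(n)]

send = [["a"], ["A"], ["#"]]
# repeating each rank's data n_procs times makes alltoall act like allgather
replicated = [buf * len(send) for buf in send]
assert alltoall(replicated) == allgather(send)
```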

The reverse is not possible - one cannot simulate the action of MPI_Alltoall with MPI_Allgather in the general case.
