Is MPI_Gather the best choice?


Question



There are 4 processes, and one of them (0) is the master, which has to build the matrix C as follows

-1  0  0 -1  0
 0 -1  0  0 -1
-1  1  1 -1  1
 1 -1  1  1 -1
-1  2  2 -1  2
 2 -1  2  2 -1
-1  3  3 -1  3
 3 -1  3  3 -1

To do so, the matrix is declared as REAL, DIMENSION(:,:), ALLOCATABLE :: C and allocated with

IF (myid == 0) THEN
        ALLOCATE(C(2*nprocs,-2:+2))
END IF

where nprocs is the number of processes. Process 0 also sets C = -1. For the communications I first tried with

CALL MPI_GATHER((/0.0+myid,0.0+myid/),&
              & 2,MPI_REAL,&
              & C(:,0),&
              & 2,MPI_REAL,&
              & 0,MPI_COMM_WORLD,ieri)

to fill up the central column, and this worked. Then I tried with

CALL MPI_GATHER((/myid, myid, myid, myid/),&
              & 4,MPI_REAL,&
              & (/C(1:2*nprocs:2,-1),C(2:2*nprocs:2,-2),C(1:2*nprocs:2,+2),C(2:2*nprocs:2,+1)/),&
              & 4,MPI_REAL,&
              & 0,MPI_COMM_WORLD,ierr)

to fill the other columns, but it didn't work, giving errors like the following

Fortran runtime error: Index '1' of dimension 1 of array 'c' outside of expected range (140735073734712:140735073734712).

To understand why, I tried to fill the first column alone with the call

CALL MPI_GATHER((/0.0-myid/),&
              & 1,MPI_REAL,&
              & C(1:2*nprocs:2,-2),&
              & 1,MPI_REAL,&
              & 0,MPI_COMM_WORLD,ierr)

but more or less the same error occurred.

I solved the problem by allocating C for all the processes (i.e. regardless of the process id). Why does this make the call work?

After this I made a small change (before trying again to fill all the columns at once): I simply wrapped the receive buffer in (/.../)

CALL MPI_GATHER((/0.0-myid/),&
              & 1,MPI_REAL,&
              & (/C(1:2*nprocs:2,-2)/),&
              & 1,MPI_REAL,&
              & 0,MPI_COMM_WORLD,ieri)

but this makes the call ineffective (no errors, but not even one element in C changed).

I hope someone can explain to me:

  • what's wrong with the constructor (/.../) in the receive buffer?
  • why does the receive buffer have to be allocated in the non-root processes?
  • is it necessary to use MPI_Gatherv to accomplish the task?
  • is there a better way to build up such a matrix?

EDIT Is it possible to use MPI derived data types to build the matrix?

Solution

First of all, do use use mpi instead of include mpif.h if you are not doing that already. Some of these errors might be caught by the compiler thanks to that.

You cannot use an array constructor as a receive buffer. Why? The array created by a constructor is an expression. You cannot use it where a variable is required.

In the same way, you cannot pass 1+1 to a subroutine that changes its argument. 1+1 is an expression, and you need a variable if it is to be changed.
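A minimal Fortran sketch of that point (the subroutine name double_it is made up for illustration):

```fortran
SUBROUTINE double_it(x)
   INTEGER, INTENT(INOUT) :: x
   x = 2*x
END SUBROUTINE double_it

! ... at the call site:
INTEGER :: n
n = 1
CALL double_it(n)      ! fine: n is a variable, it can be modified
! CALL double_it(1+1)  ! rejected: 1+1 is an expression, there is nothing to change
```

A constructor such as (/C(1:2*nprocs:2,-2)/) is an expression in exactly the same sense: at best the compiler passes a temporary copy, and anything MPI writes into it is lost.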


Secondly, every array into which you write or from which you read must be allocated. In MPI_Gather the receive buffer is ignored on all non-root processes. BUT when you make a subarray such as C(1:2*nprocs:2,-2) from C, the array C must be allocated for that expression to be legal. This is a Fortran requirement, not an MPI one.
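One way to satisfy both constraints at once (a sketch under the question's assumptions; the temporary array tmp is hypothetical): receive into a named, contiguous array that is allocated on every rank, and let root copy it into the strided section afterwards.

```fortran
REAL, DIMENSION(:), ALLOCATABLE :: tmp
ALLOCATE(tmp(nprocs))                    ! allocated on ALL ranks, not only root

! tmp is a variable, not an expression, so it is a valid receive buffer
CALL MPI_GATHER((/0.0-myid/), 1, MPI_REAL, &
              & tmp, 1, MPI_REAL,          &
              & 0, MPI_COMM_WORLD, ierr)

IF (myid == 0) C(1:2*nprocs:2,-2) = tmp  ! only root touches C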


If the number of elements received from each rank is the same, you can use MPI_Gather; you don't need MPI_Gatherv.


You may consider simply receiving the data into a 1D buffer and reordering it as necessary. Another option is to decompose the matrix along the last dimension instead.
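A hedged sketch of that first suggestion, reusing the names from the question (myid, nprocs, C, ierr) and assuming C has already been allocated on rank 0, set to -1, and had its central column filled as before. Each rank contributes its four values in a single gather, and root then scatters the flat buffer into the strided positions:

```fortran
REAL, DIMENSION(:), ALLOCATABLE :: buf
INTEGER :: p

ALLOCATE(buf(4*nprocs))                  ! allocated on every rank

CALL MPI_GATHER((/0.0+myid, 0.0+myid, 0.0+myid, 0.0+myid/), 4, MPI_REAL, &
              & buf, 4, MPI_REAL, 0, MPI_COMM_WORLD, ierr)

IF (myid == 0) THEN
   DO p = 0, nprocs-1                    ! rank p's block is buf(4*p+1:4*p+4)
      C(2*p+1, -1) = buf(4*p+1)          ! odd row,  column -1
      C(2*p+2, -2) = buf(4*p+2)          ! even row, column -2
      C(2*p+1, +2) = buf(4*p+3)          ! odd row,  column +2
      C(2*p+2, +1) = buf(4*p+4)          ! even row, column +1
   END DO
END IF
```

The reordering loop mirrors the strided sections the question tried to pack into a constructor, but here the receive buffer is an ordinary allocated variable, so both Fortran and MPI are happy.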
