Using MPI_Send/Recv to handle chunk of multi-dim array in Fortran 90

Question

I have to send and receive (MPI) a chunk of a multi-dimensional array in FORTRAN 90. The line

MPI_Send(x(2:5,6:8,1),12,MPI_Real,....)

is not supposed to be used, as per the book "Using MPI..." by Gropp, Lusk, and Skjellum. What is the best way to do this? Do I have to create a temporary array and send it or use MPI_Type_Create_Subarray or something like that?

Answer

The reason not to use array sections with MPI_SEND is that with some MPI implementations the compiler has to create a temporary copy. This is due to the fact that Fortran can only properly pass array sections to subroutines with explicit interfaces; in all other cases it generally has to generate a temporary "flattened" copy, usually on the stack of the calling subroutine. Unfortunately, before the TR 29113 extension to F2008, Fortran had no way to declare subroutines that take arguments of varying type, so MPI implementations usually resort to language hacks, e.g. MPI_Send is implemented entirely in C and relies on Fortran always passing the data as a pointer.
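
For the case in the question, a datatype created with MPI_Type_create_subarray describes the section in place, so no temporary copy is needed. Here is a minimal sketch, assuming x is declared as REAL :: x(10,10,10) (the full extents are not given in the question) and that dest and tag are set elsewhere:

INTEGER :: subarr, ierr
INTEGER, DIMENSION(3) :: sizes, subsizes, starts

sizes    = (/ 10, 10, 10 /)  ! extents of the whole array x (assumed here)
subsizes = (/  4,  3,  1 /)  ! extents of the section x(2:5,6:8,1), 12 elements
starts   = (/  1,  5,  0 /)  ! zero-based offsets of the section: 2-1, 6-1, 1-1

CALL MPI_Type_create_subarray(3, sizes, subsizes, starts, &
                              MPI_ORDER_FORTRAN, MPI_REAL, subarr, ierr)
CALL MPI_Type_commit(subarr, ierr)
! The buffer argument is the whole array; the datatype selects the section
CALL MPI_Send(x, 1, subarr, dest, tag, MPI_COMM_WORLD, ierr)
CALL MPI_Type_free(subarr, ierr)

Note that the count is 1, i.e. one instance of the subarray type, rather than 12 individual REALs.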

Some MPI libraries work around this issue by generating a huge number of overloads for MPI_SEND:

  • one that takes a single INTEGER
  • one that takes a 1-d array of INTEGER
  • one that takes a 2-d array of INTEGER
  • and so on

The same is then repeated for CHARACTER, LOGICAL, DOUBLE PRECISION, etc. This is still a hack, as it does not cover cases where one passes a user-defined type. Further, it greatly complicates the C implementation, since it now has to understand Fortran array descriptors, which are very compiler-specific.
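
For illustration, such a set of overloads looks roughly like the following interface block (the specific procedure names here are hypothetical; real libraries generate them mechanically for every type/rank combination):

INTERFACE MPI_Send
   SUBROUTINE MPI_Send_int_scalar(buf, count, datatype, dest, tag, comm, ierr)
      INTEGER, INTENT(IN)  :: buf
      INTEGER, INTENT(IN)  :: count, datatype, dest, tag, comm
      INTEGER, INTENT(OUT) :: ierr
   END SUBROUTINE MPI_Send_int_scalar
   SUBROUTINE MPI_Send_int_1d(buf, count, datatype, dest, tag, comm, ierr)
      INTEGER, DIMENSION(*), INTENT(IN) :: buf
      INTEGER, INTENT(IN)  :: count, datatype, dest, tag, comm
      INTEGER, INTENT(OUT) :: ierr
   END SUBROUTINE MPI_Send_int_1d
   ! ... and likewise for 2-d, 3-d, ... arrays and for each other type
END INTERFACE MPI_Send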

Fortunately, times are changing. The TR 29113 extension to Fortran 2008 includes two new features:

  • assumed-type arguments: TYPE(*)
  • assumed-rank arguments: DIMENSION(..)

The combination of the two, i.e. TYPE(*), DIMENSION(..), INTENT(IN) :: buf, describes an argument that can be of varying type and of any rank. This is already being taken advantage of in the new mpi_f08 interface of MPI-3.
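
For example, the mpi_f08 binding of MPI_Send (paraphrased from the MPI-3 standard) declares the buffer with assumed type and assumed rank:

SUBROUTINE MPI_Send(buf, count, datatype, dest, tag, comm, ierror)
   TYPE(*), DIMENSION(..), INTENT(IN) :: buf
   INTEGER, INTENT(IN) :: count, dest, tag
   TYPE(MPI_Datatype), INTENT(IN) :: datatype
   TYPE(MPI_Comm), INTENT(IN) :: comm
   INTEGER, OPTIONAL, INTENT(OUT) :: ierror
END SUBROUTINE

A single such interface accepts scalars, arrays of any rank, and any element type, which removes the need for the overload zoo above.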

Non-blocking calls present bigger problems in Fortran, going beyond what Alexander Vogt has described. The reason is that Fortran traditionally had no concept of suppressing compiler optimisations on a variable (the VOLATILE attribute was only added in Fortran 2003). The following code might not run as expected:

INTEGER :: data
INTEGER :: req, ierr   ! request handle and error code

data = 10
! Post a non-blocking receive into data
CALL MPI_IRECV(data, 1, MPI_INTEGER, 0, 0, MPI_COMM_WORLD, req, ierr)
! data is not used here
! ...
CALL MPI_WAIT(req, MPI_STATUS_IGNORE, ierr)
! data is used here

One might expect that after the call to MPI_WAIT, data would contain the value received from rank 0, but this might very well not be the case. The reason is that the compiler cannot know that data might change asynchronously after MPI_IRECV returns, and may therefore keep its value in a register instead. That's why non-blocking MPI calls are generally considered dangerous in Fortran.

TR 29113 has a solution for that second problem too, with the ASYNCHRONOUS attribute. If you take a look at the mpi_f08 definition of MPI_IRECV, its buf argument is declared as:

TYPE(*), DIMENSION(..), INTENT(OUT), ASYNCHRONOUS :: buf

Even if buf is a scalar argument, i.e. no temporary copy is created, a TR 29113 compliant compiler would not resort to register optimisations for the buffer argument.
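
Putting the pieces together, here is a sketch of the earlier fragment rewritten against the mpi_f08 module, with the user's buffer itself marked ASYNCHRONOUS so the compiler must not cache it in a register across the MPI_Irecv/MPI_Wait window (it still assumes a matching send is posted by rank 0):

USE mpi_f08
IMPLICIT NONE
INTEGER, ASYNCHRONOUS :: data   ! buffer of an outstanding non-blocking receive
TYPE(MPI_Request) :: req

data = 10
! ierror is OPTIONAL in mpi_f08 and can be omitted
CALL MPI_Irecv(data, 1, MPI_INTEGER, 0, 0, MPI_COMM_WORLD, req)
! ... work that does not touch data ...
CALL MPI_Wait(req, MPI_STATUS_IGNORE)
! data now safely holds the received value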
