How do I retain precision for a Fortran MPI program in a portable way?

Question

I have a Fortran program where I specify the kind of the numeric data types in an attempt to retain a minimum level of precision, regardless of what compiler is used to build the program. For example:

integer, parameter :: rsp = selected_real_kind(4)
...
real(kind=rsp) :: real_var

The problem is that I have used MPI to parallelize the code and I need to make sure the MPI communications are specifying the same type with the same precision. I was using the following approach to stay consistent with the approach in my program:

call MPI_Type_create_f90_real(4,MPI_UNDEFINED,rsp_mpi,mpi_err)
...
call MPI_Send(real_var,1,rsp_mpi,dest,tag,MPI_COMM_WORLD,err)

However, I have found that this MPI routine is not particularly well-supported for different MPI implementations, so it's actually making my program non-portable. If I omit the MPI_Type_create routine, then I'm left to rely on the standard MPI_REAL and MPI_DOUBLE_PRECISION data types, but what if that type is not consistent with what selected_real_kind picks as the real type that will ultimately be passed around by MPI? Am I stuck just using the standard real declaration for a datatype, with no kind attribute and, if I do that, am I guaranteed that MPI_REAL and real are always going to have the same precision, regardless of compiler and machine?
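
One workaround I have been considering (just a rough sketch, not something I have in production; the program name check_kind and the abort-on-mismatch behaviour are only illustrative) is to compare, at run time, the byte size of my chosen kind against the size MPI reports for MPI_DOUBLE_PRECISION, and bail out if they differ:

program check_kind

   use mpi

   implicit none

   integer, parameter :: rsp = selected_real_kind(15)
   integer :: err
   integer :: mpi_bytes
   real(rsp) :: real_var

   call MPI_Init(err)

   ! Size in bytes of one MPI_DOUBLE_PRECISION element for this MPI build.
   call MPI_Type_size(MPI_DOUBLE_PRECISION, mpi_bytes, err)

   ! storage_size (Fortran 2008) returns bits, so divide by 8 for bytes.
   if (mpi_bytes /= storage_size(real_var)/8) then
      print *, 'kind mismatch: real(rsp) is ', storage_size(real_var)/8, &
         ' bytes but MPI_DOUBLE_PRECISION is ', mpi_bytes, ' bytes'
      call MPI_Abort(MPI_COMM_WORLD, 1, err)
   end if

   call MPI_Finalize(err)

end program check_kind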

Update:

I created a simple program that demonstrates the issue I see when my internal reals have higher precision than what is afforded by the MPI_DOUBLE_PRECISION type:

program main

   use mpi

   implicit none

   integer, parameter :: rsp = selected_real_kind(16)
   integer :: err
   integer :: rank

   real(rsp) :: real_var

   call MPI_Init(err)
   call MPI_Comm_rank(MPI_COMM_WORLD,rank,err)

   if (rank.eq.0) then
      real_var = 1.123456789012345
      call MPI_Send(real_var,1,MPI_DOUBLE_PRECISION,1,5,MPI_COMM_WORLD,err)
   else
      call MPI_Recv(real_var,1,MPI_DOUBLE_PRECISION,0,5,MPI_COMM_WORLD,&
         MPI_STATUS_IGNORE,err)
   end if

   print *, rank, real_var

   call MPI_Finalize(err)

end program main

If I build and run with 2 cores, I get:

       0   1.12345683574676513672      
       1   4.71241976735884452383E-3998

Now change the 16 to a 15 in selected_real_kind and I get:

       0   1.1234568357467651     
       1   1.1234568357467651  

Is it always going to be safe to use selected_real_kind(15) with MPI_DOUBLE_PRECISION no matter what machine/compiler is used to do the build?

Answer

Use the Fortran 2008 intrinsic STORAGE_SIZE to determine the number of bytes that each number requires and send the data as bytes. Note that STORAGE_SIZE returns the size in bits, so you will need to divide by 8 to get the size in bytes.

This solution works for moving data but does not help you use reductions. For that you will have to implement a user-defined reduction operation; a rough sketch follows the example below. If that's important to you, I will update my answer with the details.

For example:

program main

   use mpi

   implicit none

   integer, parameter :: rsp = selected_real_kind(16)
   integer :: err
   integer :: rank

   real(rsp) :: real_var

   call MPI_Init(err)
   call MPI_Comm_rank(MPI_COMM_WORLD,rank,err)

   if (rank.eq.0) then
      real_var = 1.123456789012345
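      ! storage_size (Fortran 2008) returns bits, so divide by 8 for the byte count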
      call MPI_Send(real_var,storage_size(real_var)/8,MPI_BYTE,1,5,MPI_COMM_WORLD,err)
   else
      call MPI_Recv(real_var,storage_size(real_var)/8,MPI_BYTE,0,5,MPI_COMM_WORLD,&
         MPI_STATUS_IGNORE,err)
   end if

   print *, rank, real_var

   call MPI_Finalize(err)

end program main

I confirmed that this change corrects the problem and the output I see is:

   0   1.12345683574676513672      
   1   1.12345683574676513672  
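
Regarding the reduction caveat above, here is a rough, untested sketch of the kind of user-defined operation I mean (the names rsp_reduce, rsp_sum, rsp_type and rsp_sum_op are only illustrative): wrap the bytes of one real(rsp) in a contiguous datatype so the callback's len argument counts reals rather than bytes, then register a sum over that type.

module rsp_reduce

   use mpi

   implicit none

   integer, parameter :: rsp = selected_real_kind(16)

contains

   subroutine rsp_sum(invec, inoutvec, len, datatype)
      ! Signature required by MPI_Op_create: combine invec into inoutvec.
      ! len counts elements of the committed rsp_type, i.e. real(rsp) values.
      integer :: len, datatype
      real(rsp) :: invec(len), inoutvec(len)
      inoutvec(1:len) = inoutvec(1:len) + invec(1:len)
   end subroutine rsp_sum

end module rsp_reduce

program reduce_demo

   use mpi
   use rsp_reduce

   implicit none

   integer :: err, rank, rsp_type, rsp_sum_op
   real(rsp) :: x, total

   call MPI_Init(err)
   call MPI_Comm_rank(MPI_COMM_WORLD, rank, err)

   ! One element of rsp_type covers exactly one real(rsp) in memory.
   call MPI_Type_contiguous(storage_size(x)/8, MPI_BYTE, rsp_type, err)
   call MPI_Type_commit(rsp_type, err)

   ! Register the user-defined sum; .true. marks it as commutative.
   call MPI_Op_create(rsp_sum, .true., rsp_sum_op, err)

   x = real(rank, rsp) + 0.5_rsp
   call MPI_Reduce(x, total, 1, rsp_type, rsp_sum_op, 0, MPI_COMM_WORLD, err)

   if (rank == 0) print *, 'sum = ', total

   call MPI_Op_free(rsp_sum_op, err)
   call MPI_Type_free(rsp_type, err)
   call MPI_Finalize(err)

end program reduce_demo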
