MPI-IO: MPI_File_Set_View vs. MPI_File_Seek


Question



I encounter a problem when trying to write a file with MPI-IO in Fortran 90. If I do the following, using MPI_File_Set_View:

program test
  implicit none

  include "mpif.h"

  integer :: myrank, nproc, fhandle, ierr
  integer :: xpos, ypos
  integer, parameter :: loc_x=10, loc_y=10
  integer :: loc_dim
  integer :: nx=2, ny=2
  real(8), dimension(loc_x, loc_y) :: data, data_read
  integer :: written_arr
  integer, dimension(2) :: wa_size, wa_subsize, wa_start
  integer :: int_size, double_size
  integer(kind=MPI_OFFSET_KIND) :: offset

  call MPI_Init(ierr)
  call MPI_Comm_Rank(MPI_COMM_WORLD, myrank, ierr)
  call MPI_Comm_Size(MPI_COMM_WORLD, nproc, ierr)

  xpos = mod(myrank, nx)
  ypos = mod(myrank/nx, ny)

  data = myrank

  loc_dim    = loc_x*loc_y

  ! Write using MPI_File_Set_View
  wa_size    = (/ nx*loc_x, ny*loc_y /)
  wa_subsize = (/ loc_x, loc_y /)
  wa_start   = (/ xpos, ypos /)*wa_subsize
  call MPI_Type_Create_Subarray(2, wa_size, wa_subsize, wa_start &
       , MPI_ORDER_FORTRAN, MPI_DOUBLE_PRECISION, written_arr, ierr)
  call MPI_Type_Commit(written_arr, ierr)

  call MPI_Type_Size(MPI_INTEGER, int_size, ierr)
  call MPI_Type_Size(MPI_DOUBLE_PRECISION, double_size, ierr)

  call MPI_File_Open(MPI_COMM_WORLD, "file_set_view.dat" &
       , MPI_MODE_WRONLY + MPI_MODE_CREATE, MPI_INFO_NULL, fhandle, ierr)
  call MPI_File_Set_View(fhandle, 0, MPI_DOUBLE_PRECISION, written_arr &
       , "native", MPI_INFO_NULL, ierr)
  call MPI_File_Write_All(fhandle, data, loc_dim, MPI_DOUBLE_PRECISION &
       , MPI_STATUS_IGNORE, ierr)
  call MPI_File_Close(fhandle, ierr)

  call MPI_Finalize(ierr)

end program test

I get a 69 GB file, which is way too big considering what I am writing in it. By the way, the size of the file does not change if I increase loc_x and loc_y.
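For reference, the size the file should have follows directly from the subarray dimensions: the full array is nx*loc_x by ny*loc_y elements of 8-byte reals. A quick sketch of that arithmetic (the names mirror the Fortran variables above):

```python
# Expected file size for the full nx*loc_x by ny*loc_y array
# of MPI_DOUBLE_PRECISION (8-byte) elements.
nx, ny = 2, 2
loc_x, loc_y = 10, 10
double_size = 8  # bytes per MPI_DOUBLE_PRECISION element

expected_bytes = (nx * loc_x) * (ny * loc_y) * double_size
print(expected_bytes)  # 3200
```

So the correct output file is 3200 bytes, nowhere near 69 GB.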

However, if I use MPI_File_Seek, it works much better: a file of a reasonable size is created, containing the data I want to write.

program test
  implicit none

  include "mpif.h"

  integer :: myrank, nproc, fhandle, ierr
  integer :: xpos, ypos
  integer, parameter :: loc_x=10, loc_y=10
  integer :: loc_dim
  integer :: nx=2, ny=2
  real(8), dimension(loc_x, loc_y) :: data, data_read
  integer :: written_arr
  integer, dimension(2) :: wa_size, wa_subsize, wa_start
  integer :: int_size, double_size
  integer(kind=MPI_OFFSET_KIND) :: offset

  call MPI_Init(ierr)
  call MPI_Comm_Rank(MPI_COMM_WORLD, myrank, ierr)
  call MPI_Comm_Size(MPI_COMM_WORLD, nproc, ierr)

  xpos = mod(myrank, nx)
  ypos = mod(myrank/nx, ny)

  data = myrank

  loc_dim    = loc_x*loc_y

  ! Write using MPI_File_Seek
  call MPI_File_Open(MPI_COMM_WORLD, "file_seek.dat" &
       , MPI_MODE_WRONLY + MPI_MODE_CREATE, MPI_INFO_NULL, fhandle, ierr)
  offset = loc_x*loc_y*myrank
  print*, 'myrank, offset, data: ', myrank, offset, data(1,:2)
  call MPI_File_Seek(fhandle, offset, MPI_SEEK_SET, ierr)
  call MPI_File_Write_All(fhandle, data, loc_dim, MPI_DOUBLE_PRECISION &
       , MPI_STATUS_IGNORE, ierr)
  call MPI_File_Close(fhandle, ierr)

  call MPI_Finalize(ierr)

end program test

It seems to me that these two methods should produce the same result; in particular, I don't see why the first method should create such a large file.

I compile my code with gfortran 4.6.3 and Open MPI 1.6.2.

Any help would be appreciated!

Solution

The answer was actually given in Hristo Iliev's answer to this question:

Replace the 0 in the MPI_FILE_SET_VIEW call with 0_MPI_OFFSET_KIND, or declare a constant of type INTEGER(KIND=MPI_OFFSET_KIND) with a value of zero and pass that instead.

The reason is that include "mpif.h" provides only implicit interfaces, so the compiler cannot check argument kinds: the literal 0 is a 4-byte default integer, while MPI_File_Set_View expects an 8-byte INTEGER(KIND=MPI_OFFSET_KIND) displacement, and the mismatch makes the routine read garbage bytes as part of the displacement.

call MPI_File_Set_View(fhandle, 0_MPI_OFFSET_KIND, MPI_DOUBLE_PRECISION, ...

or

integer(kind=MPI_OFFSET_KIND), parameter :: zero_off = 0
...
call MPI_File_Set_View(fhandle, zero_off, MPI_DOUBLE_PRECISION, ...

Both methods lead to an output file of size 3200 bytes (as expected).
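The multi-gigabyte file in the first version is consistent with this kind mismatch: the caller supplies only 4 bytes of zero, and the 8-byte read picks up whatever bytes happen to sit next to them. A minimal sketch of the effect, assuming a little-endian machine; the "garbage" byte value here is invented purely for illustration:

```python
import struct

# The caller passes a 4-byte default-integer zero...
four_byte_zero = struct.pack('<i', 0)

# ...but the callee reads 8 bytes, so 4 adjacent bytes are included.
# This byte value is hypothetical, chosen only to show the scale of the error.
adjacent_garbage = b'\x10\x00\x00\x00'

misread_disp = struct.unpack('<q', four_byte_zero + adjacent_garbage)[0]
print(misread_disp)  # 68719476736, a displacement of about 64 GiB
```

A single stray non-zero bit in the upper half is enough to shift every write by tens of gigabytes, which matches the absurd file size observed.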
