MPI struct datatype with an array


Problem description

I would like to easily send a someObject in one MPI_SEND/RECV call in MPI.

   type someObject
     integer :: foo
     real :: bar,baz
     double precision :: a,b,c
     double precision, dimension(someParam) :: x, y
   end type someObject

I started using MPI_TYPE_STRUCT, but then realized the sizes of the arrays x and y depend on someParam. I initially thought of nesting an MPI_TYPE_CONTIGUOUS type in the struct to represent the arrays, but cannot seem to get this to work, if it is even possible.

  ! Setup description of the 1 MPI_INTEGER field
  offsets(0) = 0
  oldtypes(0) = MPI_INTEGER
  blockcounts(0) = 1
  ! Setup description of the 2 MPI_REAL fields
  call MPI_TYPE_EXTENT(MPI_INTEGER, extent, ierr)
  offsets(1) = blockcounts(0) * extent
  oldtypes(1) = MPI_REAL
  blockcounts(1) = 2
  ! Setup description of the 3 MPI_DOUBLE_PRECISION fields
  call MPI_TYPE_EXTENT(MPI_DOUBLE_PRECISION, extent, ierr)
  offsets(2) = offsets(1) + blockcounts(1) * extent
  oldtypes(2) = MPI_DOUBLE_PRECISION
  blockcounts(2) = 3
  ! Setup x and y MPI_DOUBLE_PRECISION array fields
  call MPI_TYPE_CONTIGUOUS(someParam, MPI_DOUBLE_PRECISION, sOarraytype, ierr)
  call MPI_TYPE_COMMIT(sOarraytype, ierr)
  call MPI_TYPE_EXTENT(sOarraytype, extent, ierr)
  offsets(3) = offsets(2) + blockcounts(2) * extent
  oldtypes(3) = sOarraytype
  blockcounts(3) = 2 ! x and y

  ! Now Define structured type and commit it
  call MPI_TYPE_STRUCT(4, blockcounts, offsets, oldtypes, sOtype, ierr)
  call MPI_TYPE_COMMIT(sOtype, ierr)

What I would like to do:

...
type(someObject) :: newObject, rcvObject
double precision, dimension(someParam) :: x, y
do i=1,someParam
  x(i) = i
  y(i) = i
end do
newObject = someObject(1,0.0,1.0,2.0,3.0,4.0,x,y)
call MPI_SEND(newObject, 1, sOtype, 1, 1, MPI_COMM_WORLD, ierr) ! master
...
! slave would:
call MPI_RECV(rcvObject, 1, sOtype, master, MPI_ANY_TAG, MPI_COMM_WORLD, status, ierr)
WRITE(*,*) rcvObject%foo
do i=1,someParam
  WRITE(*,*) rcvObject%x(i), rcvObject%y(i)
end do
...

So far I am just getting segmentation faults, without much indication of what I'm doing wrong or whether this is even possible. The documentation never said I couldn't use a contiguous datatype inside a struct datatype.
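As a debugging note added here (not part of the original question): two of the offsets above are computed with the wrong extents. offsets(2) uses the MPI_DOUBLE_PRECISION extent although the preceding block holds two MPI_REALs, and offsets(3) uses the extent of the whole sOarraytype (someParam doubles) although the preceding block holds only three doubles, so the x/y block is placed far past a, b, and c. A quick sanity check, assuming 4-byte INTEGER/REAL and 8-byte DOUBLE PRECISION, is to compare MPI_TYPE_SIZE of the committed type with the payload the object should carry:

  integer :: tsize, ierr
  ! One integer, two reals, three doubles, and 2*someParam doubles
  call MPI_TYPE_SIZE(sOtype, tsize, ierr)
  write(*,*) 'sOtype size =', tsize, ' expected =', 4 + 2*4 + 3*8 + 2*someParam*8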

Solution

From what I can tell, you can't nest those kinds of datatypes this way; my attempt above was a completely wrong solution.

Thanks to: http://static.msi.umn.edu/tutorial/scicomp/general/MPI/mpi_data.html and http://www.osc.edu/supercomputing/training/mpi/Feb_05_2008/mpi_0802_mod_datatypes.pdf for guidance.

The right way to define the MPI_TYPE_STRUCT is as follows:

type(someObject) :: newObject, rcvObject
double precision, dimension(someParam) :: x, y
data x/someParam * 0/, y/someParam * 0/
integer sOtype, oldtypes(0:7), blocklengths(0:7), offsets(0:7), iextent, rextent, dpextent
! Define MPI datatype for someObject object
! set up extents
call MPI_TYPE_EXTENT(MPI_INTEGER, iextent, ierr)
call MPI_TYPE_EXTENT(MPI_REAL, rextent, ierr)
call MPI_TYPE_EXTENT(MPI_DOUBLE_PRECISION, dpextent, ierr)
! setup blocklengths /foo,bar,baz,a,b,c,x,y/
data blocklengths/1,1,1,1,1,1,someParam,someParam/
! setup oldtypes
oldtypes(0) = MPI_INTEGER
oldtypes(1) = MPI_REAL
oldtypes(2) = MPI_REAL
oldtypes(3) = MPI_DOUBLE_PRECISION
oldtypes(4) = MPI_DOUBLE_PRECISION
oldtypes(5) = MPI_DOUBLE_PRECISION
oldtypes(6) = MPI_DOUBLE_PRECISION
oldtypes(7) = MPI_DOUBLE_PRECISION
! setup offsets
offsets(0) = 0
offsets(1) = iextent * blocklengths(0)
offsets(2) = offsets(1) + rextent*blocklengths(1)
offsets(3) = offsets(2) + rextent*blocklengths(2)
offsets(4) = offsets(3) + dpextent*blocklengths(3)
offsets(5) = offsets(4) + dpextent*blocklengths(4)
offsets(6) = offsets(5) + dpextent*blocklengths(5)
offsets(7) = offsets(6) + dpextent*blocklengths(6)
! Now Define structured type and commit it
call MPI_TYPE_STRUCT(8, blocklengths, offsets, oldtypes, sOtype, ierr)
call MPI_TYPE_COMMIT(sOtype, ierr)

That allows me to send and receive the object with the way I originally wanted!
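One caveat worth adding (my note, not part of the original answer): summing extents by hand assumes the compiler inserts no padding between the fields of the derived type, which is not guaranteed. A more robust sketch, using the same someObject fields, measures the displacements from an actual instance with MPI_GET_ADDRESS and builds the type with MPI_TYPE_CREATE_STRUCT, the MPI-2 replacements for the deprecated MPI_TYPE_EXTENT and MPI_TYPE_STRUCT:

type(someObject) :: sample
integer :: sOtype, oldtypes(0:7), blocklengths(0:7), ierr, i
integer(kind=MPI_ADDRESS_KIND) :: base, disps(0:7)

blocklengths = (/ 1, 1, 1, 1, 1, 1, someParam, someParam /)
oldtypes(0)   = MPI_INTEGER
oldtypes(1:2) = MPI_REAL
oldtypes(3:7) = MPI_DOUBLE_PRECISION

! Measure where each field actually lives inside a real instance,
! so any compiler padding is accounted for automatically.
call MPI_GET_ADDRESS(sample%foo, disps(0), ierr)
call MPI_GET_ADDRESS(sample%bar, disps(1), ierr)
call MPI_GET_ADDRESS(sample%baz, disps(2), ierr)
call MPI_GET_ADDRESS(sample%a,   disps(3), ierr)
call MPI_GET_ADDRESS(sample%b,   disps(4), ierr)
call MPI_GET_ADDRESS(sample%c,   disps(5), ierr)
call MPI_GET_ADDRESS(sample%x,   disps(6), ierr)
call MPI_GET_ADDRESS(sample%y,   disps(7), ierr)
base = disps(0)
do i = 0, 7
  disps(i) = disps(i) - base  ! displacements relative to the first field
end do

call MPI_TYPE_CREATE_STRUCT(8, blocklengths, disps, oldtypes, sOtype, ierr)
call MPI_TYPE_COMMIT(sOtype, ierr)

With the displacements measured this way, the same MPI_SEND/MPI_RECV calls work regardless of how the compiler lays out the derived type.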
