R + Fortran + MPI memory not mapped error


Problem description



I'm trying to use, in R, a Fortran module that uses MPI.

This is the Fortran module:

Module Fortranpi
  USE MPI
  IMPLICIT NONE
contains

  subroutine dboard(darts, dartsscore)
    integer, intent(in)           :: darts
    double precision, intent(out) :: dartsscore
    double precision              :: x_coord, y_coord
    integer                       :: score, n

    score = 0
    do n = 1, darts
      call random_number(x_coord)
      call random_number(y_coord)

      if ((x_coord**2 + y_coord**2) <= 1.0d0) then
        score = score + 1
      end if
    end do

    dartsscore = 4.0d0*score/darts

  end subroutine dboard

  subroutine MPIpi(avepi, DARTS, ROUNDS) bind(C, name="pi2_")
    use, intrinsic :: iso_c_binding, only : c_double, c_int
    real(c_double), intent(out) :: avepi
    integer(c_int), intent(in)  :: DARTS, ROUNDS
    integer                     :: i, n, mynpts, ierr, numprocs, proc_num
    integer, allocatable        :: seed(:)
    double precision            :: pi_est, y, sumpi

    call mpi_init(ierr)
    call mpi_comm_size(MPI_COMM_WORLD, numprocs, ierr)
    call mpi_comm_rank(MPI_COMM_WORLD, proc_num, ierr)

    if (numprocs .eq. 0) then
      mynpts = ROUNDS - (numprocs-1)*(ROUNDS/numprocs)
    else
      mynpts = ROUNDS/numprocs
    endif

    ! initialize the random number generator
    ! we make sure the seed is different for each task
    call random_seed()
    call random_seed(size = n)
    allocate(seed(n))
    seed = 12 + proc_num*11
    call random_seed(put=seed(1:n))
    deallocate(seed)

    y = 0.0d0
    do i = 1, mynpts
      call dboard(darts, pi_est)
      y = y + pi_est
    end do

    call mpi_reduce(y, sumpi, 1, mpi_double_precision, mpi_sum, 0, &
                    mpi_comm_world, ierr)
    if (proc_num==0) avepi = sumpi/ROUNDS
    call mpi_finalize(ierr)
  end subroutine MPIpi

end module Fortranpi

I can compile it with:

mpif90 -fpic -shared -o Fpi.so Fpi.f90

This is the R code I'm trying to run:

# SPMD-style program: start all workers via mpirun
library(Rmpi)
dyn.load("Fpi.so")
DARTS=5000
ROUNDS=1000
MyPi <- .Fortran("pi2", avepi = as.numeric(1), DARTS =  as.integer(DARTS), ROUNDS =  as.integer(ROUNDS))$avepi
saveRDS(MyPi, file = 'MyPi.RDS')

# Finalize MPI and quit
mpi.quit()

This is what I get when I run it:

$ mpirun -n 2 R --slave -f MyPi.R

*** caught segfault ***
  address 0x44000098, cause 'memory not mapped'
--------------------------------------------------------------------------
  Calling MPI_Init or MPI_Init_thread twice is erroneous.
--------------------------------------------------------------------------

  Traceback:
  1: .Fortran("pi2", avepi = as.numeric(1), DARTS = as.integer(DARTS),     ROUNDS = as.integer(ROUNDS))
aborting ...

*** caught segfault ***
  address 0x44000098, cause 'memory not mapped'

Traceback:
  1: .Fortran("pi2", avepi = as.numeric(1), DARTS = as.integer(DARTS),     ROUNDS = as.integer(ROUNDS))
aborting ...
--------------------------------------------------------------------------
  mpirun noticed that process rank 1 with PID 6400 on node 2d60fd60575b exited on signal 11 (Segmentation fault).
--------------------------------------------------------------------------
  2 total processes killed (some possibly by mpirun during cleanup)
$ 

What am I doing wrong?

Solution

Don't do

library(Rmpi)
dyn.load("Fpi.so")

Put your code into a package, install it on all nodes, and have it loaded on all nodes. I like

clusterEvalQ(cl, library(myPackage))

for that to ensure it is loaded (where cl is a snow cluster object).

I also prefer r (from the littler package) or Rscript as the scripting frontend ...
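
For illustration, here is a minimal sketch of that workflow, assuming a hypothetical package myPackage (built around Fpi.f90) that is already installed on every node and exports an R wrapper my_pi(darts, rounds); the package name, wrapper name, worker count, and launch line are assumptions, not part of the original answer. The master script might be launched once with something like mpirun -np 1 Rscript pi_master.R.

# Sketch only: "myPackage" and my_pi() are assumed names, not real ones.
library(Rmpi)
library(snow)

# Spawn an MPI-backed snow cluster with two workers.
cl <- makeCluster(2, type = "MPI")

# Make sure the pre-installed package is loaded on every worker,
# as suggested above.
clusterEvalQ(cl, library(myPackage))

# Call the wrapped Fortran routine on each worker and combine the estimates.
res <- clusterCall(cl, function() my_pi(darts = 5000, rounds = 1000))
print(mean(unlist(res)))

stopCluster(cl)
mpi.quit()

Whichever way the script is launched, the Fortran side would also have to stop calling mpi_init itself once it runs inside an Rmpi session, because Rmpi has already initialized MPI; that double initialization is what the "Calling MPI_Init or MPI_Init_thread twice is erroneous" message in the output above is pointing at.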
