Why do I have to call MPI.Finalize() inside the destructor?


Question

I am currently trying to understand mpi4py. I set mpi4py.rc.initialize = False and mpi4py.rc.finalize = False because I don't see why we need automatic initialization and finalization. The default behavior is that MPI.Init() is called when MPI is imported. I assume this is because for each rank a separate instance of the Python interpreter is running and each instance runs the whole script, but that is just a guess. In the end, I like to keep things explicit.

Now this introduces some problems. I have this code:

import numpy as np
import mpi4py
mpi4py.rc.initialize = False  # do not initialize MPI automatically
mpi4py.rc.finalize = False # do not finalize MPI automatically

from mpi4py import MPI # import the 'MPI' module
import h5py

class DataGenerator:
    def __init__(self, filename, N, comm):
        self.comm = comm
        self.file = h5py.File(filename, 'w', driver="mpio", comm=comm)

        # Create datasets
        self.data_ds = self.file.create_dataset("indices", (N, 1), dtype='i')

    def __del__(self):
        self.file.close()
        

if __name__=='__main__':
    MPI.Init()
    world = MPI.COMM_WORLD
    world_rank = MPI.COMM_WORLD.rank

    filename = "test.hdf5"
    N = 10
    data_gen = DataGenerator(filename, N, comm=world)

    MPI.Finalize()

This results in:

$ mpiexec -n 4 python test.py 
*** The MPI_Barrier() function was called after MPI_FINALIZE was invoked.
*** This is disallowed by the MPI standard.
*** Your MPI job will now abort.
[eu-login-04:01559] Local abort after MPI_FINALIZE started completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
*** The MPI_Barrier() function was called after MPI_FINALIZE was invoked.
*** This is disallowed by the MPI standard.
*** Your MPI job will now abort.
[eu-login-04:01560] Local abort after MPI_FINALIZE started completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
--------------------------------------------------------------------------
Primary job  terminated normally, but 1 process returned a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
*** The MPI_Barrier() function was called after MPI_FINALIZE was invoked.
*** This is disallowed by the MPI standard.
*** Your MPI job will now abort.
[eu-login-04:01557] Local abort after MPI_FINALIZE started completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
--------------------------------------------------------------------------
mpiexec detected that one or more processes exited with non-zero status, thus causing the job to be terminated. The first process to do so was:

  Process name: [[15050,1],3]
  Exit code:    1
--------------------------------------------------------------------------

I am a bit confused about what is happening here. If I move MPI.Finalize() to the end of the destructor, it runs fine.

Note that I also use h5py, which uses MPI for its parallelization, so I have parallel file I/O here. Note also that h5py needs to be compiled with MPI support. You can easily do this by setting up a virtual environment and running pip install --no-binary=h5py h5py.

Answer

The way you have written it, data_gen stays alive until the main block finishes. But you call MPI.Finalize inside that block, so the destructor runs after Finalize. The h5py.File.close method appears to call MPI.Comm.Barrier internally, and calling that after Finalize is forbidden by the MPI standard.
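The timing problem can be reproduced without MPI at all. In CPython, a local object's __del__ runs only when the name goes out of scope at the end of the function, i.e. after every statement in the body, including the one standing in for MPI.Finalize(). A minimal sketch (the Handle class and events list are illustrative stand-ins, not part of mpi4py):

```python
# Pure-Python analog of the bug: the destructor of a local object fires
# at scope exit, AFTER the finalize stand-in that appears later in the body.
events = []

class Handle:
    """Stands in for DataGenerator; __del__ stands in for file.close()."""
    def __del__(self):
        events.append("destructor (file.close)")

def main():
    h = Handle()                      # like: data_gen = DataGenerator(...)
    events.append("MPI.Finalize()")   # runs while h is still alive

main()  # h is only destroyed here, after "MPI.Finalize()" was recorded
print(events)
```

Running this shows the destructor entry recorded after the finalize entry, which is exactly the ordering that makes the Barrier inside close() illegal in the MPI version.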

If you want explicit control, make sure all objects are destroyed before you call MPI.Finalize. Of course, even that may not be enough if some objects are only destroyed by the garbage collector rather than by the reference counter.
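One way to get that explicit control is to drop the last reference yourself with del before finalizing; in CPython, reference counting then runs the destructor deterministically at that point. A sketch of the ordering (again a pure-Python analog, no mpi4py required):

```python
# Sketch of the "destroy everything first" fix: an explicit `del` drops the
# last reference, so the destructor runs BEFORE the finalize stand-in.
events = []

class Handle:
    def __del__(self):
        events.append("closed")

def main():
    h = Handle()
    del h                       # destructor runs here (CPython refcounting)
    events.append("finalized")  # like: MPI.Finalize()

main()
print(events)
```

In the original script this corresponds to writing del data_gen immediately before MPI.Finalize(). It works, but it is fragile: any lingering reference (or a reference cycle left to the garbage collector) silently restores the broken ordering.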

To avoid this, use a context manager instead of the destructor.

class DataGenerator:
    def __init__(self, filename, N, comm):
        self.comm = comm
        self.file = h5py.File(filename, 'w', driver="mpio", comm=comm)

        # Create datasets
        self.data_ds = self.file.create_dataset("indices", (N, 1), dtype='i')

    def __enter__(self):
        return self

    def __exit__(self, type, value, traceback):
        self.file.close()


if __name__=='__main__':
    MPI.Init()
    world = MPI.COMM_WORLD
    world_rank = MPI.COMM_WORLD.rank

    filename = "test.hdf5"
    N = 10
    with DataGenerator(filename, N, comm=world) as data_gen:
        pass
    MPI.Finalize()
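The reason this fixes the problem is that a with block guarantees __exit__ (and thus file.close) runs when the block ends, before any later statement, regardless of when the object itself is eventually collected. The guarantee can be checked with a plain-Python analog (Gen and events are illustrative names, not mpi4py API):

```python
# The with-block runs __exit__ at the end of the block, so the close step
# is always recorded before the finalize stand-in that follows the block.
events = []

class Gen:
    def __enter__(self):
        return self
    def __exit__(self, exc_type, exc_value, traceback):
        events.append("closed")   # stands in for self.file.close()

def main():
    with Gen() as g:
        pass                      # do the parallel I/O work here
    events.append("finalized")    # like: MPI.Finalize()

main()
print(events)
```

Note that __exit__ also runs when the block is left via an exception, so the file is closed before Finalize even on error paths, which the destructor-based version cannot guarantee.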
