MPI processes working in a burst


Problem description


I am using mpi4py to model a distributed application.

I have n processes that access a shared file and write some log records to it during their execution. I've noticed that the logs are not written in a uniform, interleaved order. Here is an example of how the logs end up in the shared file:

process0.log0
process0.log1
process0.log2
process0.log3
process0.log4
process2.log0
process2.log1
process2.log2
process1.log0
process1.log1

Ideally it should look like this:

process0.log0
process1.log0
process2.log0
process0.log1
process2.log1
process1.log1
process0.log2

Can anyone tell me what is possibly wrong with my implementation? I am writing to the file using the pickle module.

The following is the function that dumps a log record:

import pickle

log_file_name = "store.log"

def writeLog(data):
    # Append one pickled record. "ab" creates the file if it does not
    # exist, and pickle requires a binary-mode file object.
    with open(log_file_name, "ab") as fp:
        pickle.dump(obj=data, file=fp)

def readLog():
    # Read records back one by one until the end of the file is reached.
    data = []
    try:
        with open(log_file_name, "rb") as fp:
            while True:
                data.append(pickle.load(fp))
    except EOFError:
        pass
    return data

All n processes call this function to dump their data.
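For context, a driver of the kind described might look like the sketch below, appended to the listing above; the loop count and record format are assumptions for illustration:

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Hypothetical driver: every rank logs a few records during its work.
# Launched with e.g.: mpiexec -n 3 python app.py
for i in range(5):
    # ... some computation for this step ...
    writeLog("process%d.log%d" % (rank, i))

Each rank opens the same store.log in append mode, so all records end up in one shared file.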

Solution

There are lots of existing questions/answers that explain the phenomenon you're seeing here.

Even though those discussions are (mostly) about printing to the screen, the problem is the same. MPI is a distributed model, which means that some processes will execute faster than others, and the order will probably be different on every run, depending on the workload/ordering of each process.
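A minimal way to reproduce that effect on the screen (a sketch, assuming you launch several ranks, e.g. mpiexec -n 4 python hello.py):

from mpi4py import MPI

comm = MPI.COMM_WORLD
# Each rank prints one line; stdout from the ranks is merged in whatever
# order the processes happen to reach the print, so the line order
# changes from run to run.
print("hello from rank %d of %d" % (comm.Get_rank(), comm.Get_size()))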

If ordering is important, you can use synchronization functions to enforce it, or you can use something fancier like MPI I/O for writing to files (not my specialty, so I can't tell you much more about it).
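As a sketch of the synchronization approach (an illustration, not part of the original answer): let the ranks take turns in rank order, with a barrier after each turn. The orderedWriteLog name and the inlined writeLog are assumptions.

from mpi4py import MPI
import pickle

log_file_name = "store.log"
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

def writeLog(data):
    # Append one pickled record; binary mode is required by pickle.
    with open(log_file_name, "ab") as fp:
        pickle.dump(data, fp)

def orderedWriteLog(data):
    # Take turns in rank order; the barrier after each turn keeps every
    # other rank from writing until the current rank has finished.
    for turn in range(size):
        if rank == turn:
            writeLog(data)
        comm.Barrier()

# Each call now produces one "round" of records in rank order:
# process0.log0, process1.log0, process2.log0, process0.log1, ...
for i in range(3):
    orderedWriteLog("process%d.log%d" % (rank, i))

The barriers serialize every write, so this trades performance for ordering; MPI I/O (for example, MPI.File.Write_ordered on a shared file pointer in mpi4py) can produce rank-ordered output collectively instead.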
