How to parallelize this nested loop in Python that calls Abaqus


Problem description

I have the nested loops below. How can I parallelize the outer loop so that it is distributed across 4 simultaneous runs, and wait for all 4 runs to complete before moving on with the rest of the script?

    for r in range(4):
        for k in range( r*nAnalysis/4, (r+1)*nAnalysis/4 ):

            # - Write Abaqus INP file - #
            writeABQfile(ppos,props,totalTime[k],recInt[k],inpFiles[k],i,lineNum[k],aPath[k])

            # - Delete LCK file to Enable Another Analysis - #
            delFile(aPath[k]+"/"+inpFiles[k]+".lck")

            # - Run Analysis - #
            runABQfile(inpFiles[k],aPath[k])

I tried using multiprocessing.Pool as below, but it never gets into the worker function:

    def parRunABQfiles(nA,nP,r,ppos,prop0,prop1,totalTime2Run_,recIntervals_,inpFiles_,i,lineNumbers_,aPath_):
        from os import path
        from auxFunctions import writeABQfile, runABQfile
        print("I am Here")
        for k in range( r*nA/nP, (r+1)*nA/nP ):
            # - Write Abaqus INP file - #
            writeABQfile(ppos,prop0,prop1,totalTime2Run_,recIntervals_,inpFiles_,i,lineNumbers_,aPath_)
            # - Delete LCK file to Enable Another Analysis - #
            delFile(aPath_+"/"+inpFiles[k]+".lck")
            # - Run Analysis - #
            runABQfile(inpFiles_,aPath_)
            # - Make Sure Analysis is not Bypassed - #
            while os.path.isfile(aPath_+"/"+inpFiles[k]+".lck") == True:
                sleep(0.1)
        return k

    results = zip(*pool.map(parRunABQfiles, range(0, 4, 1)))
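(As an aside, Pool.map hands the target function exactly one item from the iterable per call, so a worker that needs several values usually receives them bundled into a single tuple, or is wrapped beforehand. A minimal, hypothetical sketch of the tuple pattern, with made-up names unrelated to the Abaqus code above:)

    from multiprocessing import Pool

    def run_batch(args):
        # hypothetical worker: unpacks one tuple of arguments per call
        r, n_analysis, n_procs = args
        return list(range(r * n_analysis // n_procs, (r + 1) * n_analysis // n_procs))

    if __name__ == "__main__":
        pool = Pool(processes=4)
        jobs = [(r, 20, 4) for r in range(4)]   # one tuple of arguments per batch
        results = pool.map(run_batch, jobs)     # blocks until all 4 batches finish
        pool.close()
        pool.join()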

runABQfile is just a subprocess.call to a shell script that runs Abaqus:

     def runABQfile(inpFile,path):    
         import subprocess
         import os

         prcStr1 = ('sbatch '+path+'/runJob.sh')

         process = subprocess.call(prcStr1, stdin=None, stdout=None, stderr=None, shell=True )

         return

No errors show up, so I am not sure why it never gets in there. I know it doesn't because writeABQfile never writes the input file. The question again is:

How can I parallelize the outer loop so that it is distributed across 4 simultaneous runs, and wait for all 4 runs to complete before moving on with the rest of the script?

Solution

Use the concurrent.futures module if multiprocessing is what you want.

    from concurrent.futures import ProcessPoolExecutor

    def each(r):
        for k in range( r*nAnalysis/4, (r+1)*nAnalysis/4 ):
            writeABQfile(ppos,props,totalTime[k],recInt[k],inpFiles[k],i,lineNum[k],aPath[k])
            delFile(aPath[k]+"/"+inpFiles[k]+".lck")
            runABQfile(inpFiles[k],aPath[k])

    with ProcessPoolExecutor(max_workers=4) as executor:
        output = executor.map(each, range(4)) # returns an iterable
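Note that leaving the with block implicitly calls executor.shutdown(wait=True), so the script blocks right there until all 4 batches have finished, which is exactly the "wait for all 4 runs to complete before moving on" requirement.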

If you just want to "do" stuff rather than "produce" results, check out the as_completed function from the same module. There are direct examples in the docs.
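For instance, a minimal sketch (reusing the each function from the snippet above) that handles each batch as soon as it finishes might look like this:

    from concurrent.futures import ProcessPoolExecutor, as_completed

    with ProcessPoolExecutor(max_workers=4) as executor:
        # submit the 4 batches individually so each one returns a Future
        futures = {executor.submit(each, r): r for r in range(4)}
        # as_completed yields futures in the order they finish, not the order submitted
        for future in as_completed(futures):
            r = futures[future]
            try:
                future.result()  # re-raises any exception from the worker process
                print("batch %d done" % r)
            except Exception as exc:
                print("batch %d failed: %s" % (r, exc))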

