Function that multiprocesses another function


Question


I'm analysing time series from simulations. Basically, the same tasks are performed at every time step. Since there is a very large number of time steps, and since the analysis of each one is independent, I wanted to create a function that can multiprocess another function. The latter has arguments and returns a result.

Using a shared dictionary and the concurrent.futures library, I managed to write this:

import multiprocessing as mlp
import concurrent.futures as Cfut

def multiprocess_loop_grouped(function, param_list, group_size, Nworkers, *args):
    # function   : function that is run in parallel
    # param_list : list of items to process
    # group_size : size of the groups handed to each task
    # Nworkers   : number of groups/items running at the same time
    # *args      : fixed parameters passed on to every call

    manager = mlp.Manager()
    dic = manager.dict()
    executor = Cfut.ProcessPoolExecutor(Nworkers)

    # grouper() splits param_list into chunks of group_size (helper not shown)
    futures = [executor.submit(function, param, dic, *args)
               for param in grouper(param_list, group_size)]

    Cfut.wait(futures)
    return [dic[i] for i in sorted(dic.keys())]

Typically, I use it like this:

def read_file(files, dictionnary):
    for file in files:
        # parse the time-step index from the filename
        i = int(file[4:9])
        if 'bz2' in file:
            # decompress, read, then recompress the file
            os.system('bunzip2 ' + file)
            file = file[:-4]
        dictionnary[i] = np.loadtxt(file)
        os.system('bzip2 ' + file)

Map = np.array(multiprocess_loop_grouped(read_file, list_alti, Group_size, N_thread))

or like this:

def autocorr(x):
    # autocorrelation of x, keeping only the non-negative lags
    result = np.correlate(x, x, mode='full')
    return result[result.size//2:]

def find_lambda_finger(indexes, dic, Deviation):
    for i in indexes:
        # first maximum of the autocorrelation of row i of Deviation
        dic[i] = Anls.find_first_max(autocorr(Deviation[i,:]), valmax=True)

args = [Deviation]
Temp = Rescal.multiprocess_loop_grouped(find_lambda_finger, range(Nalti), Group_size, N_thread, *args)

Basically, it works, but not well. Sometimes it crashes. Sometimes it actually launches a number of Python processes equal to Nworkers, and sometimes only 2 or 3 of them are running at a time even though I specified Nworkers = 15.

For example, a classic error I get is described in the following question I raised: Calling matplotlib AFTER multiprocessing sometimes results in error : main thread not in main loop

What is a more Pythonic way to achieve what I want? How can I improve my control over this function? How can I better control the number of running Python processes?

Solution

One of the basic concepts of Python multiprocessing is using queues. It works quite well when you have an input list that can be iterated over and that does not need to be altered by the subprocesses. It also gives you good control over all the processes, because you spawn exactly the number you want, and you can keep them idle or stop them.

It is also a lot easier to debug. Explicitly shared data is usually much more difficult to set up correctly.

Queues can hold almost anything, since any picklable object can be put on them. So you can fill them with file-path strings for reading files, plain numbers for doing calculations, or even images for drawing.
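The worker loops in the layout below rely on the two-argument form of iter(): it keeps calling in_queue.get() and stops as soon as the sentinel value 'STOP' comes out of the queue. A minimal, self-contained sketch of just that pattern (a toy squaring task, not the asker's code):

import multiprocessing as mp

def square_worker(in_queue, out_queue):
    # iter(callable, sentinel): call in_queue.get() until it returns 'STOP'
    for a in iter(in_queue.get, 'STOP'):
        out_queue.put({a: a * a})  # result keyed by its input

if __name__ == '__main__':
    in_q, out_q = mp.Queue(), mp.Queue()
    for a in range(5):
        in_q.put(a)
    in_q.put('STOP')                    # one sentinel per worker process
    p = mp.Process(target=square_worker, args=(in_q, out_q))
    p.start()
    results = {}
    for _ in range(5):
        results.update(out_q.get())     # drain the output before joining
    p.join()
    print(results)                      # {0: 0, 1: 1, 2: 4, 3: 9, 4: 16}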

In your case, a layout could look like this:

import multiprocessing as mp
import numpy as np
import itertools as it


def worker1(in_queue, out_queue):
    # blocks while nothing is available, stops when 'STOP' is seen
    for a in iter(in_queue.get, 'STOP'):
        result = ...  # do something with a
        out_queue.put({a: result})  # return your result linked to the input

def worker2(in_queue, out_queue):
    for a in iter(in_queue.get, 'STOP'):
        result = ...  # do something differently with a
        out_queue.put({a: result})  # return your result linked to the input

def multiprocess_loop_grouped(function, param_list, group_size, Nworkers, *args):
    # group_size and *args are kept for interface compatibility with the
    # question's version; they are not used here
    # your final result
    result = {}

    in_queue = mp.Queue()
    out_queue = mp.Queue()

    # fill your input
    for a in param_list:
        in_queue.put(a)
    # one stop command per worker at the end of the input
    for n in range(Nworkers):
        in_queue.put('STOP')

    # set up your worker processes doing the task as specified
    processes = [mp.Process(target=function,
                            args=(in_queue, out_queue), daemon=True)
                 for x in range(Nworkers)]

    # run the processes
    for p in processes:
        p.start()

    # collect your results from the calculations: one result per input item.
    # Drain the output queue *before* joining, otherwise a worker whose output
    # has not been flushed yet may never terminate
    for a in param_list:
        result.update(out_queue.get())

    # wait for the processes to finish
    for p in processes:
        p.join()

    return result

temp = multiprocess_loop_grouped(worker1, param_list, group_size, Nworkers, *args)
map = multiprocess_loop_grouped(worker2, param_list, group_size, Nworkers, *args)
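To connect this back to the question: the asker's read_file translates to this worker pattern almost mechanically. Each filename arrives individually from in_queue instead of in groups, and the write into the shared dictionary becomes an out_queue.put. This is a sketch based on the question's code (the filename slicing file[4:9] and the bzip2 calls are kept as-is and assumed to match the asker's file layout):

import os
import numpy as np

def read_file_worker(in_queue, out_queue):
    # each queue item is one filename; 'STOP' ends the loop
    for file in iter(in_queue.get, 'STOP'):
        i = int(file[4:9])              # time-step index parsed from the name
        if 'bz2' in file:
            os.system('bunzip2 ' + file)
            file = file[:-4]
        out_queue.put({i: np.loadtxt(file)})  # result keyed by the index
        os.system('bzip2 ' + file)

Map would then be rebuilt in the parent from the returned dictionary, e.g. np.array([result[i] for i in sorted(result)]).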

It can be made a bit more dynamic if you are afraid that your queues will run out of memory. Then you need to fill and empty the queues while the processes are running. See this example here.
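The linked example is not reproduced here, but one possible shape (a minimal sketch, with the bound max_items chosen arbitrarily) is to give the input queue a maxsize and feed it from the parent while the workers are already consuming: put() then blocks whenever the queue is full, so only a bounded number of items is ever held in memory at once:

def multiprocess_loop_bounded(function, param_list, Nworkers, max_items=1000):
    # reuses the 'import multiprocessing as mp' from the layout above
    in_queue = mp.Queue(maxsize=max_items)   # put() blocks when full
    out_queue = mp.Queue()

    workers = [mp.Process(target=function, args=(in_queue, out_queue), daemon=True)
               for _ in range(Nworkers)]
    for p in workers:
        p.start()

    result = {}
    n_sent = 0
    for a in param_list:                 # feed while the workers are running
        in_queue.put(a)                  # blocks if max_items are already queued
        n_sent += 1
        while not out_queue.empty():     # drain opportunistically
            result.update(out_queue.get())
    for _ in range(Nworkers):
        in_queue.put('STOP')

    # assumes each input yields exactly one result under a unique key
    while len(result) < n_sent:
        result.update(out_queue.get())
    for p in workers:
        p.join()
    return result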

Final words: it is not the more Pythonic way you asked for, but it is easier for a newbie to understand ;-)
