python multiprocessing map mishandling of last processes

Problem description

There is a strange behavior of map when using Python's multiprocessing.Pool. In the example below, a pool of 4 worker processes works on 28 tasks. This should take seven passes, each taking 4 seconds.

However, it takes 8 passes. In the first six passes all processors are engaged. In the 7th pass only two tasks are completed (two processors idle). The remaining 2 tasks are finished in the 8th pass (again with two processors idle). This behavior appears for seemingly random combinations of the number of CPUs and the number of tasks, needlessly wasting time.

This example has been reproduced on both an Intel Xeon Haswell (20 cores) and an Intel i7 (4 cores).

Any ideas on how to force Pool to make use of all available processors in all the passes?

import time
import datetime
import multiprocessing
from multiprocessing import Pool

def f(values):
    # Log which worker picked up this task and when.
    now = str(datetime.datetime.now())
    proc_id = str(multiprocessing.current_process())
    print(proc_id + ' ' + now)
    a = values ** 2
    time.sleep(4)  # simulate 4 seconds of work per task
    return a

if __name__ == '__main__':
    p = Pool(4)  # number of worker processes
    processed_values = p.map(f, range(28))
    p.close()
    p.join()
    print(processed_values)

The output of the run is given below:

<Process(PoolWorker-1, started daemon)> 2016-05-13 17:08:49.604065
<Process(PoolWorker-2, started daemon)> 2016-05-13 17:08:49.604189
<Process(PoolWorker-3, started daemon)> 2016-05-13 17:08:49.604252
<Process(PoolWorker-4, started daemon)> 2016-05-13 17:08:49.604866
<Process(PoolWorker-1, started daemon)> 2016-05-13 17:08:53.608475
<Process(PoolWorker-2, started daemon)> 2016-05-13 17:08:53.608878
<Process(PoolWorker-3, started daemon)> 2016-05-13 17:08:53.608931
<Process(PoolWorker-4, started daemon)> 2016-05-13 17:08:53.609503
<Process(PoolWorker-1, started daemon)> 2016-05-13 17:08:57.612831
<Process(PoolWorker-2, started daemon)> 2016-05-13 17:08:57.613135
<Process(PoolWorker-3, started daemon)> 2016-05-13 17:08:57.613555
<Process(PoolWorker-4, started daemon)> 2016-05-13 17:08:57.614065
<Process(PoolWorker-1, started daemon)> 2016-05-13 17:09:01.616974
<Process(PoolWorker-2, started daemon)> 2016-05-13 17:09:01.617273
<Process(PoolWorker-3, started daemon)> 2016-05-13 17:09:01.617699
<Process(PoolWorker-4, started daemon)> 2016-05-13 17:09:01.618190
<Process(PoolWorker-1, started daemon)> 2016-05-13 17:09:05.621284
<Process(PoolWorker-2, started daemon)> 2016-05-13 17:09:05.621489
<Process(PoolWorker-3, started daemon)> 2016-05-13 17:09:05.622130
<Process(PoolWorker-4, started daemon)> 2016-05-13 17:09:05.622404
<Process(PoolWorker-1, started daemon)> 2016-05-13 17:09:09.625522
<Process(PoolWorker-2, started daemon)> 2016-05-13 17:09:09.625631
<Process(PoolWorker-3, started daemon)> 2016-05-13 17:09:09.626555
<Process(PoolWorker-4, started daemon)> 2016-05-13 17:09:09.626566
<Process(PoolWorker-1, started daemon)> 2016-05-13 17:09:13.629761
<Process(PoolWorker-2, started daemon)> 2016-05-13 17:09:13.629846
<Process(PoolWorker-1, started daemon)> 2016-05-13 17:09:17.634003
<Process(PoolWorker-2, started daemon)> 2016-05-13 17:09:17.634317
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144, 169, 196, 225, 256, 289, 324, 361, 400, 441, 484, 529, 576, 625, 676, 729]

This is related to the following question, which does not have a clear or correct answer: Python: Multiprocessing Map takes longer to complete last few processes

Recommended answer

This is caused by the way Pool.map chunks up the iterable you pass it and sends it to each worker in the Pool. If you force the chunksize to be 1, you'll see the behavior you expect:

import time
import datetime
import multiprocessing
from multiprocessing import Pool

def f(values):
    now = str(datetime.datetime.now())
    proc_id = str(multiprocessing.current_process())
    print(proc_id + ' ' + now)
    a = values ** 2
    time.sleep(4)  # simulate 4 seconds of work per task
    return a

if __name__ == '__main__':
    p = Pool(4)  # number of worker processes
    processed_values = p.map(f, range(28), chunksize=1)  # force one task per dispatch
    p.close()
    p.join()
    print(processed_values)

Output:

<Process(PoolWorker-1, started daemon)> 2016-05-13 21:34:06.548733
<Process(PoolWorker-2, started daemon)> 2016-05-13 21:34:06.548803
<Process(PoolWorker-3, started daemon)> 2016-05-13 21:34:06.549013
<Process(PoolWorker-4, started daemon)> 2016-05-13 21:34:06.549052
<Process(PoolWorker-4, started daemon)> 2016-05-13 21:34:10.549509
<Process(PoolWorker-3, started daemon)> 2016-05-13 21:34:10.551091
<Process(PoolWorker-1, started daemon)> 2016-05-13 21:34:10.553057
<Process(PoolWorker-2, started daemon)> 2016-05-13 21:34:10.553263
<Process(PoolWorker-2, started daemon)> 2016-05-13 21:34:14.553765
<Process(PoolWorker-4, started daemon)> 2016-05-13 21:34:14.553821
<Process(PoolWorker-3, started daemon)> 2016-05-13 21:34:14.554953
<Process(PoolWorker-1, started daemon)> 2016-05-13 21:34:14.557262
<Process(PoolWorker-3, started daemon)> 2016-05-13 21:34:18.556535
<Process(PoolWorker-2, started daemon)> 2016-05-13 21:34:18.556611
<Process(PoolWorker-4, started daemon)> 2016-05-13 21:34:18.558019
<Process(PoolWorker-1, started daemon)> 2016-05-13 21:34:18.561597
<Process(PoolWorker-2, started daemon)> 2016-05-13 21:34:22.560039
<Process(PoolWorker-3, started daemon)> 2016-05-13 21:34:22.560097
<Process(PoolWorker-4, started daemon)> 2016-05-13 21:34:22.562236
<Process(PoolWorker-1, started daemon)> 2016-05-13 21:34:22.565912
<Process(PoolWorker-2, started daemon)> 2016-05-13 21:34:26.564383
<Process(PoolWorker-3, started daemon)> 2016-05-13 21:34:26.564430
<Process(PoolWorker-4, started daemon)> 2016-05-13 21:34:26.564589
<Process(PoolWorker-1, started daemon)> 2016-05-13 21:34:26.570232
<Process(PoolWorker-2, started daemon)> 2016-05-13 21:34:30.568634
<Process(PoolWorker-3, started daemon)> 2016-05-13 21:34:30.568647
<Process(PoolWorker-4, started daemon)> 2016-05-13 21:34:30.568752
<Process(PoolWorker-1, started daemon)> 2016-05-13 21:34:30.574456
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144, 169, 196, 225, 256, 289, 324, 361, 400, 441, 484, 529, 576, 625, 676, 729]

The default chunksize is computed inside Pool.map (this is taken from CPython's multiprocessing source) as follows:

    if chunksize is None:
        chunksize, extra = divmod(len(iterable), len(self._pool) * 4)
        if extra:
            chunksize += 1
    if len(iterable) == 0:
        chunksize = 0

For an iterable of size 28, that comes out to 2. This means that each worker process is grabbing two items from your iterable at a time, not one. So, when there are only four items left in the queue, the first free worker gets two, and the second free worker gets two, leaving no more for the other two workers.
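
A quick standalone check of that arithmetic, mirroring the divmod logic quoted above (a minimal sketch, not part of the original answer):

# Default chunksize for 28 tasks on a 4-worker pool
n_tasks = 28
n_workers = 4
chunksize, extra = divmod(n_tasks, n_workers * 4)  # divmod(28, 16) -> (1, 12)
if extra:
    chunksize += 1
print(chunksize)  # 2: each worker grabs two tasks at a time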

The reason for the chunking in the first place is that it greatly improves performance when dealing with very large iterables, by reducing IPC overhead. For smaller iterables it tends not to make much difference, and can even hurt performance, as it does in this case.
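
To see how this plays out with equal-duration tasks, here is a small helper (a sketch using hypothetical names, not from the original answer) that counts the 4-second "passes" a run needs for a given chunksize:

import math

def passes(n_tasks, n_workers, chunksize):
    # Workers pull whole chunks, and the tasks inside a chunk run
    # serially, so each round of chunk dispatches costs `chunksize` passes.
    n_chunks = math.ceil(n_tasks / chunksize)
    rounds = math.ceil(n_chunks / n_workers)
    return rounds * chunksize

print(passes(28, 4, 2))  # 8 passes with the default chunksize of 2
print(passes(28, 4, 1))  # 7 passes with chunksize=1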
