How to make sure that all the python **pool.apply_async()** calls are executed before the pool is closed?
Question
How to make sure that all the pool.apply_async() calls are executed and their results are accumulated through callback before a premature call to pool.close() and pool.join()?
numofProcesses = multiprocessing.cpu_count()
pool = multiprocessing.Pool(processes=numofProcesses)
jobs = []
for arg1, arg2 in arg1_arg2_tuples:
    jobs.append(pool.apply_async(function1,
                                 args=(arg1, arg2, arg3,),
                                 callback=accumulate_apply_async_result))
pool.close()
pool.join()
Answer
You need to wait on the appended AsyncResult objects before exiting the pool. That's

for job in jobs:
    job.wait()

before pool.close().
But you may be working too hard here. You could

with multiprocessing.Pool() as pool:
    for result in pool.starmap(function1,
            ((arg_1, arg_2, arg_3) for arg_1, arg_2 in sim_chr_tuples)):
        accumulate_apply_async_result(result)
- the default for Pool is cpu_count() so no need to add it
- with does the close/join for you
- starmap waits for results for you
A complete working example is:
import multiprocessing

result_list = []

def accumulate_apply_async_result(result):
    result_list.append(result)

def function1(arg1, arg2, arg3):
    return arg1, arg2, arg3

sim_chr_tuples = [(1, 2), (3, 4), (5, 6), (7, 8)]
arg_3 = "third arg"

if __name__ == "__main__":
    with multiprocessing.Pool() as pool:
        for result in pool.starmap(function1,
                ((arg_1, arg_2, arg_3)
                 for arg_1, arg_2 in sim_chr_tuples)):
            accumulate_apply_async_result(result)
    for r in result_list:
        print(r)