Persistent Processes Post Python Pool


Problem Description

I have a Python program that takes around 10 minutes to execute. So I use Pool from multiprocessing to speed things up:

from multiprocessing import Pool
p = Pool(processes=6)  # I have an 8-thread processor
results = p.map(function, argument_list)  # distributes work over 6 processes!

It runs much quicker, just from that. God bless Python! And so I thought that would be it.

However I've noticed that each time I do this, the processes and their considerably sized state remain, even when p has gone out of scope; effectively, I've created a memory leak. The processes show up in my System Monitor application as Python processes, which use no CPU at this point, but considerable memory to maintain their state.

Pool has functions close, terminate, and join, and I'd assume one of these will kill the processes. Does anyone know which is the best way to tell my pool p that I am finished with it?

Many thanks for your help!

Answer

From the Python docs, it looks like you need to do:

p.close()
p.join()

after the map() to indicate that the workers should terminate and then wait for them to do so.

