How to Clean Up subprocess.Popen Instances Upon Process Termination
Problem Description
I have a JavaScript application running on a Python / PyQt / QtWebKit foundation which creates subprocess.Popen objects to run external processes.
Popen objects are kept in a dictionary and referenced by an internal identifier so that the JS app can call Popen's methods via a pyqtSlot, such as poll() to determine whether the process is still running, or kill() to kill a rogue process.
If a process is not running any more, I would like to remove its Popen object from the dictionary for garbage collection.
What would be the recommended approach to cleaning up the dictionary automatically to prevent a memory leak?
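For context, such a registry might look like the following minimal sketch. The class and method names here are hypothetical illustrations, not the original app's code; it only shows the dictionary-of-Popen-objects pattern the question describes, including a manual reap() step that the question wants to automate:

```python
import subprocess

class ProcessRegistry:
    """Keeps Popen objects in a dict, keyed by an internal identifier."""

    def __init__(self):
        self._procs = {}
        self._next_id = 0

    def spawn(self, args):
        # Start an external process and return its internal identifier
        proc = subprocess.Popen(args)
        self._next_id += 1
        self._procs[self._next_id] = proc
        return self._next_id

    def is_running(self, ident):
        # poll() returns None while the process is still alive
        return self._procs[ident].poll() is None

    def kill(self, ident):
        # Kill a rogue process by its identifier
        self._procs[ident].kill()

    def reap(self):
        # Drop entries whose processes have terminated; returns removed keys
        dead = [k for k, p in self._procs.items() if p.poll() is not None]
        for k in dead:
            del self._procs[k]
        return dead
```

Without calling reap() (or an equivalent automatic mechanism), finished processes stay in the dictionary indefinitely, which is exactly the leak in question.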
My current ideas:

- Call Popen.wait() in a thread per spawned process to perform an automatic cleanup right upon termination.
  PRO: Immediate cleanup; the threads probably do not cost much CPU power, as they should be sleeping, right?
  CON: Many threads, depending on spawning activity.
- Use a thread to call Popen.poll() on all existing processes, check returncode to see whether they have terminated, and clean up in that case.
  PRO: Just one worker thread for all processes, lower memory usage.
  CON: Periodic polling necessary; higher CPU usage if there are many long-running processes or lots of processes spawned.
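Option #1 could be sketched like this (a minimal illustration with hypothetical names, not the original app's code): each spawn starts a watcher thread that blocks in Popen.wait() and removes the dictionary entry as soon as the child exits.

```python
import threading
import subprocess

PROCESS_DICT = {}
_LOCK = threading.Lock()  # guard the dict against concurrent access

def spawn(key, args):
    # Start the process and a watcher thread that reaps it on exit
    proc = subprocess.Popen(args)
    with _LOCK:
        PROCESS_DICT[key] = proc

    def watcher():
        proc.wait()  # sleeps until the child terminates
        with _LOCK:
            PROCESS_DICT.pop(key, None)

    threading.Thread(target=watcher, daemon=True).start()
    return proc
```

The lock matters here: the watcher threads and the main (Qt) thread both touch the dictionary, and daemon=True keeps the watchers from blocking interpreter shutdown.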
Which one would you choose, and why? Or are there any better solutions?
Recommended Answer
For a platform-agnostic solution, I'd go with option #2, since the "CON" of high CPU usage can be circumvented with something like...
import time

# Assuming the Popen objects are in the dictionary values
PROCESS_DICT = { ... }

def my_thread_main():
    while True:
        dead_keys = []
        # poll() updates returncode without blocking
        for k, v in PROCESS_DICT.items():
            v.poll()
            if v.returncode is not None:
                dead_keys.append(k)
        if not dead_keys:
            time.sleep(1)  # Adjust sleep time to taste
            continue
        # Delete outside the iteration to avoid mutating the dict mid-loop
        for k in dead_keys:
            del PROCESS_DICT[k]
...whereby, if no processes died on an iteration, you just sleep for a bit.

So, in effect, your thread would still be sleeping most of the time, and although there's potential latency between a child process dying and its subsequent 'cleanup', it's really not a big deal, and this should scale better than using one thread per process.
There are better platform-dependent solutions, however.
For Windows, you should be able to use the WaitForMultipleObjects function via ctypes as ctypes.windll.kernel32.WaitForMultipleObjects, although you'd have to look into the feasibility.
For OSX and Linux, it's probably easiest to handle SIGCHLD asynchronously, using the signal module.
A quick n' dirty example...
import os
import time
import signal
import subprocess

# Map child PID to Popen object
SUBPROCESSES = {}

# Define handler
def handle_sigchld(signum, frame):
    # Reap the finished child and drop its Popen object
    pid = os.wait()[0]
    print('Subprocess PID=%d ended' % pid)
    del SUBPROCESSES[pid]

# Handle SIGCHLD
signal.signal(signal.SIGCHLD, handle_sigchld)

# Spawn a couple of subprocesses
p1 = subprocess.Popen(['sleep', '1'])
SUBPROCESSES[p1.pid] = p1
p2 = subprocess.Popen(['sleep', '2'])
SUBPROCESSES[p2.pid] = p2

# Wait for all subprocesses to die
while SUBPROCESSES:
    print('tick')
    time.sleep(1)

# Done
print('All subprocesses died')