Python Multiprocessing: Crash in subprocess?


Problem Description


What happens when a python script opens subprocesses and one process crashes?

https://stackoverflow.com/a/18216437/311901

Will the main process crash?


Will the other subprocesses crash?


Is there a signal or other event that's propagated?

Answer


When using multiprocessing.Pool, if one of the subprocesses in the pool crashes, you will not be notified at all, and a new process will immediately be started to take its place:

>>> import multiprocessing
>>> p = multiprocessing.Pool()
>>> p._processes
4
>>> p._pool
[<Process(PoolWorker-1, started daemon)>, <Process(PoolWorker-2, started daemon)>, <Process(PoolWorker-3, started daemon)>, <Process(PoolWorker-4, started daemon)>]
>>> [proc.pid for proc in p._pool]
[30760, 30761, 30762, 30763]

Then in another window:

dan@dantop:~$ kill 30763

Back in the pool:

>>> [proc.pid for proc in p._pool]
[30760, 30761, 30762, 30767]  # New pid for the last process


You can continue using the pool as if nothing happened. However, any work item that the killed child process was running at the time it died will not be completed or restarted. If you were running a blocking map or apply call that was relying on that work item to complete, it will likely hang indefinitely. There is a bug filed for this, but the issue was only fixed in concurrent.futures.ProcessPoolExecutor, rather than in multiprocessing.Pool. Starting with Python 3.3, ProcessPoolExecutor will raise a BrokenProcessPool exception if a child process is killed, and disallow any further use of the pool. Sadly, multiprocessing didn't get this enhancement. For now, if you want to guard against a pool call blocking forever due to a sub-process crashing, you have to use ugly workarounds.
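The BrokenProcessPool behavior described above can be demonstrated with a short script (a sketch assuming Python 3.3+; the worker simulates a hard crash with os._exit, which kills the process abruptly without raising a Python exception, the way a segfault or external kill would):

```python
import os
from concurrent.futures import ProcessPoolExecutor
from concurrent.futures.process import BrokenProcessPool

def crash():
    # Die abruptly, without raising a Python exception.
    os._exit(1)

def run():
    pool = ProcessPoolExecutor(max_workers=1)
    future = pool.submit(crash)
    try:
        future.result()
        return "completed"
    except BrokenProcessPool:
        # The executor noticed the dead worker; the pool
        # refuses any further submissions from now on.
        return "broken"

if __name__ == "__main__":
    print(run())  # broken
```

With multiprocessing.Pool itself, the usual workaround is to use apply_async/map_async and pass a timeout to AsyncResult.get(), so a crashed worker surfaces as a multiprocessing.TimeoutError instead of an indefinite hang.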


Note: The above only applies to a process in a pool actually crashing, meaning the process completely dies. If a subprocess raises an exception, that exception will be propagated up to the parent process when you try to retrieve the result of the work item:

>>> def f(): raise Exception("Oh no")
... 
>>> pool = multiprocessing.Pool()
>>> result = pool.apply_async(f)
>>> result.get()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.7/multiprocessing/pool.py", line 528, in get
    raise self._value
Exception: Oh no
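If you would rather be notified of such exceptions without blocking on get(), apply_async also accepts an error_callback (Python 3) that the parent invokes with the exception once the task fails. A minimal sketch:

```python
import multiprocessing

def fail():
    raise ValueError("Oh no")

def collect_error():
    errors = []
    with multiprocessing.Pool(1) as pool:
        # error_callback runs in the parent, in the pool's
        # result-handler thread, with the worker's exception.
        result = pool.apply_async(fail, error_callback=errors.append)
        result.wait()  # returns once the task has finished (here: failed)
    return errors

if __name__ == "__main__":
    errs = collect_error()
    print(type(errs[0]).__name__)  # ValueError
```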


When using a multiprocessing.Process directly, the process object will show that the process has exited with a non-zero exit code if it crashes:

>>> import time
>>> def f(): time.sleep(30)
... 
>>> p = multiprocessing.Process(target=f)
>>> p.start()
>>> p.join()  # Kill the process while this is blocking, and join immediately ends
>>> p.exitcode
-15
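The -15 above is the negative of the signal number that killed the process (15 is SIGTERM on Linux). A self-contained, Unix-only sketch of the same experiment, with the parent delivering the signal itself instead of a second terminal:

```python
import multiprocessing
import os
import signal
import time

def worker():
    time.sleep(30)

def kill_and_get_exitcode(sig):
    p = multiprocessing.Process(target=worker)
    p.start()
    time.sleep(0.5)          # give the child a moment to start up
    os.kill(p.pid, sig)      # deliver the signal, like `kill <pid>`
    p.join()
    return p.exitcode        # negative of the signal number

if __name__ == "__main__":
    print(kill_and_get_exitcode(signal.SIGTERM))  # -15 on Linux
```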


The behavior is similar if an exception is raised:

from multiprocessing import Process

def f(x):
    raise Exception("Oh no")

if __name__ == '__main__':
    p = Process(target=f)
    p.start()
    p.join()
    print(p.exitcode)
    print("done")

Output:

Process Process-1:
Traceback (most recent call last):
  File "/usr/lib/python3.2/multiprocessing/process.py", line 267, in _bootstrap
    self.run()
  File "/usr/lib/python3.2/multiprocessing/process.py", line 116, in run
    self._target(*self._args, **self._kwargs)
TypeError: f() takes exactly 1 argument (0 given)
1
done


As you can see, the traceback from the child is printed, but it doesn't affect execution of the main process, which is able to show that the exitcode of the child was 1.
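Putting the two cases together, a small helper like the one below (describe_exit is a hypothetical name, not part of multiprocessing) can classify how a child ended using only Process.exitcode; this relies on the documented convention that an uncaught exception gives exitcode 1 and a fatal signal N gives exitcode -N:

```python
import multiprocessing
import signal

def describe_exit(p):
    # Interpret Process.exitcode after join():
    #   None -> still running, 0 -> clean exit,
    #   >0   -> sys.exit(n) or an uncaught exception (exitcode 1),
    #   <0   -> killed by signal -exitcode.
    code = p.exitcode
    if code is None:
        return "still running"
    if code == 0:
        return "exited cleanly"
    if code > 0:
        return "exited with error code %d" % code
    return "killed by signal %s" % signal.Signals(-code).name

def run_and_describe(target):
    p = multiprocessing.Process(target=target)
    p.start()
    p.join()
    return describe_exit(p)

def ok():
    pass

def boom():
    raise RuntimeError("Oh no")

if __name__ == "__main__":
    print(run_and_describe(ok))    # exited cleanly
    print(run_and_describe(boom))  # exited with error code 1
```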
