How to properly terminate child processes with multiprocessing in python


Question

I have a few callback functions that I'd like to launch as multiple processes, and I want them all to terminate via a signal from the parent process.

My current way of doing this is creating a shared c_bool with multiprocessing.Value, setting it to True, and then distributing it to all of my processes when they are created. My processes all run a while loop that checks the shared bool, like so:

while myC_bool.value:  # the shared c_bool is read through the Value wrapper's .value attribute
    ...  # keep running

I can then just switch the bool to False from my parent process, and all of the child processes will complete their final loop and exit.
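For concreteness, here is a minimal sketch of that shared-flag setup, using only the standard multiprocessing API (the worker function, the count of four children, and the sleep intervals are illustrative placeholders, not taken from the original code):

import ctypes
import time
from multiprocessing import Process, Value

def worker(keep_running):
    # Each child polls the shared flag and exits once the parent flips it to False.
    while keep_running.value:
        time.sleep(0.1)  # placeholder for the real work

if __name__ == "__main__":
    keep_running = Value(ctypes.c_bool, True)  # the shared c_bool, initially True
    children = [Process(target=worker, args=(keep_running,)) for _ in range(4)]
    for p in children:
        p.start()

    time.sleep(1)                # let the children run for a bit
    keep_running.value = False   # ask every child to finish its current loop
    for p in children:
        p.join()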

I've been told by many people, and have read in the docs, that one should try to avoid using shared memory when using multiprocessing. I was told the best way to avoid this is to daemonize the processes, give them a custom signal handler, and send them a SIGINT/SIGTERM/etc.
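A hedged sketch of that signal-based alternative, assuming a POSIX platform where Process.terminate() delivers SIGTERM; the worker and handler names are illustrative, not from the original post:

import signal
import time
from multiprocessing import Process

def worker():
    stop = False

    def handle_term(signum, frame):
        # Custom handler: record the request and let the loop exit cleanly.
        nonlocal stop
        stop = True

    signal.signal(signal.SIGTERM, handle_term)
    while not stop:
        time.sleep(0.1)  # placeholder for the real work

if __name__ == "__main__":
    children = [Process(target=worker, daemon=True) for _ in range(4)]
    for p in children:
        p.start()

    time.sleep(1)
    for p in children:
        p.terminate()  # sends SIGTERM to the child on POSIX
    for p in children:
        p.join()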

My question is: is exclusively using the bool to keep a loop alive, only ever altering its value from my parent process, and reading it from multiple child processes, a suitable solution to make all of my child processes terminate quickly and safely? I feel like there is less overhead in having all the children just look at the one shared bool than in sending x number of SIGINTs to them.

Would daemonizing be a better solution? If so, I'd like some help understanding why.

Answer

There are a lot of good reasons to go with your solution:

  • It's easier to reason about than signals.
  • There are fewer cross-platform issues to deal with.
  • You already have code that works this way.
  • It's easy to add a "graceful shutdown" mechanism later if you need one.

…and so on.

Keep in mind that, unless you can prove to yourself that multiprocessing and the underlying OS primitives, on every platform you care about, are guaranteed to work without synchronization here, you need to put a Lock or something else around every access to the shared bool. That isn't exactly complicated, but… once you've done that, using, e.g., an Event without the shared bool might be even simpler.
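For comparison, here is a sketch of the Event-based variant the answer alludes to, with the same illustrative placeholders as the earlier sketches:

import time
from multiprocessing import Event, Process

def worker(stop_event):
    # Event handles the cross-process synchronization itself; no separate Lock is needed.
    while not stop_event.is_set():
        time.sleep(0.1)  # placeholder for the real work

if __name__ == "__main__":
    stop_event = Event()
    children = [Process(target=worker, args=(stop_event,)) for _ in range(4)]
    for p in children:
        p.start()

    time.sleep(1)
    stop_event.set()  # a single call tells every child to stop
    for p in children:
        p.join()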

At any rate, if any of those was your reason, I'd say great, do it that way. But according to your question, you actually chose this approach because of performance:

I feel like there is less overhead for all the children to just look at the one shared bool than to send x number of SIGINTs to them

If that's your reason, you're almost certainly wrong. The children have to look at the shared bool (and acquire the shared lock!) every time through the loop, while a signal only has to be sent to each child once. So the overhead is almost certainly going to be much higher this way.

But really, I can't imagine the overhead of sending one signal per child process, or even of grabbing an interprocess lock once per loop per process, being anywhere close to a bottleneck in any useful program, so… why does the overhead even matter here in the first place? Do what makes the most sense in the simplest way.
