Python : sharing a lock between spawned processes


Question


The end goal is to execute a method in the background, but not in parallel: when multiple objects call this method, each should wait its turn to proceed. To achieve running in the background, I have to run the method in a subprocess (not a thread), and I need to start it using spawn (not fork). To prevent parallel execution, the obvious solution is a global lock shared between processes.
When processes are forked, which is the default on Unix, this is easy to achieve, as highlighted in both of the following snippets.
We can share it as a class variable:

import multiprocessing as mp
from time import sleep

class OneAtATime:

    l = mp.Lock()

    def f(self):
        with self.l:
            sleep(1)
        print("Hello")

if __name__ == "__main__":
    a = OneAtATime()
    b = OneAtATime()
    p1 = mp.Process(target = a.f)
    p2 = mp.Process(target = b.f)
    p1.start()
    p2.start()


Or we can pass it to the method:

import multiprocessing as mp
from time import sleep

class OneAtATime:
    def f(self, l):
        with l:
            sleep(1)
        print("Hello")

if __name__ == "__main__":
    a = OneAtATime()
    b = OneAtATime()
    l = mp.Lock()
    p1 = mp.Process(target = a.f, args = (l,))
    p2 = mp.Process(target = b.f, args = (l,))
    p1.start()
    p2.start()


Both of these snippets show the appropriate behaviour, printing "hello" at one-second intervals. However, when the start method is changed to 'spawn', they break.
The first one (1) prints both "hello"s at the same time. This is because class attributes are not pickled: each spawned child re-imports the module and re-executes the class body, so the processes do not end up with the same lock.
The second one (2) fails with FileNotFoundError at runtime. I think it has to do with the fact that locks cannot be pickled: see Python sharing a lock between processes.
In this answer, two fixes are suggested (side note: I cannot use a pool because I want to create an arbitrary number of processes at random times).
I haven't found a way to adapt the second fix, but I tried to implement the first one:

import multiprocessing as mp
from time import sleep

if __name__ == "__main__":
    mp.set_start_method('spawn')

class OneAtATime:
    def f(self, l):
        with l:
            sleep(1)
        print("Hello")

if __name__ == "__main__":
    a = OneAtATime()
    b = OneAtATime()
    m = mp.Manager()
    l = m.Lock()
    p1 = mp.Process(target = a.f, args = (l,))
    p2 = mp.Process(target = b.f, args = (l,))
    p1.start()
    p2.start()


This fails with AttributeError and FileNotFoundError (3). In fact, it also fails (BrokenPipe) when the fork method is used (4).
What is the proper way to share a lock between spawned processes?
A quick explanation of the four numbered failures would be nice, too. I'm running Python 3.6 under Arch Linux.

Answer


The last code snippet works, provided the script does not exit prematurely. Joining the processes is enough:

import multiprocessing as mp
from time import sleep

class OneAtATime:
    def f(self, l):
        with l:
            sleep(1)
        print("Hello")

if __name__ == "__main__":
    mp.set_start_method('spawn')
    a = OneAtATime()
    b = OneAtATime()
    m = mp.Manager()
    l = m.Lock()
    p1 = mp.Process(target = a.f, args = (l,))
    p2 = mp.Process(target = b.f, args = (l,))
    p1.start()
    p2.start()
    p1.join()
    p2.join()
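The same Manager-lock pattern scales to the "arbitrary number of processes" case mentioned in the question. Below is a minimal sketch, not part of the original answer: the `run` helper, the process count, and the sleep duration are illustrative choices, and the timing check simply confirms that the critical sections ran one at a time.

```python
import multiprocessing as mp
from time import sleep, monotonic

class OneAtATime:
    def f(self, l):
        with l:
            sleep(0.3)          # the "background work"; one caller at a time
        print("Hello")

def run(n):
    ctx = mp.get_context('spawn')
    m = ctx.Manager()
    l = m.Lock()                # a proxy object, safe to pass to spawned children
    procs = [ctx.Process(target=OneAtATime().f, args=(l,)) for _ in range(n)]
    start = monotonic()
    for p in procs:
        p.start()
    for p in procs:
        p.join()                # keep the parent alive until all children finish
    return monotonic() - start

if __name__ == "__main__":
    print(run(4))               # total time is at least 4 * 0.3 s
```

Because the critical sections are serialized by the shared proxy lock, the total elapsed time is bounded below by n times the sleep duration.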


More info on the error it was causing is here: https://stackoverflow.com/a/25456494/8194503.
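An alternative worth noting (my addition, not part of the accepted answer): a plain Lock created from the spawn context can also be passed through the Process args, since multiprocessing handles its synchronization primitives specially when handing them to a child. Joining is still required so the parent outlives the children. A sketch, with illustrative names (`f`, `run`) and timings:

```python
import multiprocessing as mp
from time import sleep, monotonic

def f(lock):
    with lock:
        sleep(0.5)              # serialized critical section
    print("Hello")

def run(n=2):
    ctx = mp.get_context('spawn')
    lock = ctx.Lock()           # a plain Lock from the spawn context
    procs = [ctx.Process(target=f, args=(lock,)) for _ in range(n)]
    start = monotonic()
    for p in procs:
        p.start()
    for p in procs:
        p.join()                # joining is still required, as in the answer
    return monotonic() - start

if __name__ == "__main__":
    print(run())
```

The difference from the failing snippet (2) in the question is that here the parent waits on join, so the lock's underlying resource is still reachable when the children start.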
