Monitor concurrency (sharing object across processes) in Python


Problem description

I'm new here and I'm Italian (forgive me if my English is not so good). I am a computer science student and I am working on a concurrent programming project in Python.

We are supposed to use monitors: a class with its own methods and data (such as condition variables). An instance (object) of this monitor class should be shared across all of our processes (created by os.fork or by the multiprocessing module), but we don't know how to do that. It would be simpler with threads, because they already share memory, but we MUST use processes. Is there any way to make this object (the monitor) shareable across all processes?

Hoping I'm not talking nonsense... thanks a lot to everyone for your attention. Waiting for answers.

Lorenzo

Recommended answer

Shared memory between processes is usually a poor idea. When os.fork() is called, the operating system marks all of the memory used by the parent and inherited by the child as copy-on-write; if either process attempts to modify a page, that page is instead copied to a new location that is not shared between the two processes.
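As a minimal sketch of that copy-on-write behaviour (Unix only, since it relies on os.fork()): a change made by the child after the fork is never seen by the parent.

```python
import os

# An ordinary Python object living in the parent's memory.
counter = {"value": 0}

pid = os.fork()
if pid == 0:
    # Child: this write lands on the child's private copy-on-write page.
    counter["value"] = 42
    print("child sees:", counter["value"])    # 42
    os._exit(0)
else:
    os.waitpid(pid, 0)
    # Parent: the child's modification is invisible here.
    print("parent sees:", counter["value"])   # still 0
```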

This means that your usual threading primitives (locks, condition variables, et cetera) are not usable for communicating across process boundaries.

There are two ways to resolve this. The preferred way is to use a pipe and serialize the communication on both ends; Brian Cain's answer, using multiprocessing.Queue, works in exactly this way. Because pipes do not hold any shared state and use a robust IPC mechanism provided by the kernel, it is unlikely that you will end up with processes in an inconsistent state.
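A minimal sketch of that message-passing style (not Brian Cain's exact code; the worker function and queue names here are illustrative): each process only ever touches data it received through a queue, so no state is shared.

```python
import multiprocessing as mp

def worker(tasks, results):
    """Pull items off the task queue and push answers back."""
    while True:
        item = tasks.get()
        if item is None:              # sentinel: no more work
            break
        results.put(item * item)

if __name__ == "__main__":
    tasks = mp.Queue()
    results = mp.Queue()
    procs = [mp.Process(target=worker, args=(tasks, results)) for _ in range(2)]
    for p in procs:
        p.start()

    for n in range(10):
        tasks.put(n)
    for _ in procs:                   # one sentinel per worker
        tasks.put(None)

    print(sorted(results.get() for _ in range(10)))

    for p in procs:
        p.join()
```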

The other option is to allocate some memory in a special way so that the OS will let you use shared memory; the most natural way to do that is with mmap. CPython won't place native Python objects in shared memory, though, so you would still need to work out how to use this shared region. A reasonable library for this is numpy, which can map the untyped binary memory region into useful arrays of some sort. Shared memory is much harder to work with in terms of managing concurrency, however, since there is no simple way for one process to know how another process is accessing the shared region. The only time this approach makes much sense is when a small number of processes need to share a large volume of data, since shared memory avoids copying the data through pipes.
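A minimal sketch of that approach, assuming the "fork" start method on Unix (the region size, the slice layout, and the explicit multiprocessing.Lock are all choices made for this example): an anonymous mmap region is viewed as a numpy array in every process, and the lock supplies the coordination that the raw memory itself does not.

```python
import mmap
import multiprocessing as mp
import numpy as np

N = 1000  # number of float64 slots in the shared region

def fill(buf, lock, start, stop):
    """Write one slice of the shared region, holding the lock while doing so."""
    arr = np.frombuffer(buf, dtype=np.float64)   # view onto the mmap, no copy
    with lock:
        arr[start:stop] = np.arange(start, stop)

if __name__ == "__main__":
    mp.set_start_method("fork")       # children must inherit the mapping (Unix only)
    buf = mmap.mmap(-1, N * 8)        # anonymous shared mapping, 8 bytes per float64
    lock = mp.Lock()                  # the raw memory carries no access rules,
                                      # so concurrency is managed explicitly

    procs = [mp.Process(target=fill, args=(buf, lock, i * N // 2, (i + 1) * N // 2))
             for i in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

    result = np.frombuffer(buf, dtype=np.float64)
    print(result[:5], result[-5:])    # the parent sees both children's writes
```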
