Properly disconnect multiprocessing remote manager

Question

When using multiprocessing Manager objects to create a server and connect remotely to that server, the client needs to maintain a connection to the remote server. If the server goes away before the client shuts down, the client will try to connect to the server's expected address forever.

I'm running into a deadlock in client code that tries to exit after the server has gone away: my client process never exits.

If I del my remote objects and my client's manager before the server goes down, the process exits normally, but deleting my client's manager object and remote objects immediately after use is less than ideal.
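
A minimal sketch of that workaround, assuming a hypothetical server that registers a shared queue under the name get_queue (the registration name, host, and authkey here are placeholders, not from the original question):

from multiprocessing.managers import BaseManager

class QueueManager(BaseManager):
    pass

# 'get_queue' is a hypothetical registration exposed by the server.
QueueManager.register('get_queue')

m = QueueManager(address=('my.remote.server.dns', 50000), authkey=b'mykey')
m.connect()
queue = m.get_queue()
queue.put('work item')

# Drop every proxy and the manager as soon as we are done with them,
# so nothing is left holding a connection when the server goes away.
del queue
del m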

Is that the best I can do? Is there another (more proper) way to disconnect from a remote manager object? Is there a way to cleanly exit a client after the server has gone down and/or the connection is lost?

I know socket.setdefaulttimeout doesn't work with multiprocessing, but is there a way to set a connection timeout for the multiprocessing module specifically? This is the code I'm having trouble with:

from multiprocessing.managers import BaseManager

# authkey must be bytes on Python 3
m = BaseManager(address=('my.remote.server.dns', 50000), authkey=b'mykey')
# this next line hangs forever if my server is not running or gets disconnected
m.connect()

UPDATE: This is broken in multiprocessing. The connection timeout needs to happen at the socket level (and the socket needs to be non-blocking to do that), but non-blocking sockets break multiprocessing. There is no way to give up on making a connection if the remote server is not available.
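
One illustrative workaround (not from the original post, and only an approximation of a socket-level timeout) is to probe the address with an ordinary socket that has its own timeout before asking the manager to connect; the host, port, and authkey below are placeholders:

import socket
from multiprocessing.managers import BaseManager

address = ('my.remote.server.dns', 50000)

# Pre-flight check with a plain socket and an explicit timeout;
# m.connect() itself is left untouched.
try:
    probe = socket.create_connection(address, timeout=5)
    probe.close()
except (socket.timeout, socket.error):
    raise SystemExit('manager server is unreachable, giving up')

m = BaseManager(address=address, authkey=b'mykey')
m.connect()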

Answer

"is there a way to set a connection timeout for the multiprocessing module specifically?"

Yes, but it's a hack. I hope someone with greater python-fu can improve this answer. The timeout for multiprocessing connections is defined in multiprocessing/connection.py:

# A very generous timeout when it comes to local connections...
CONNECTION_TIMEOUT = 20.
...
def _init_timeout(timeout=CONNECTION_TIMEOUT):
    return time.time() + timeout

Specifically, the way I was able to make it work was by monkey-patching the _init_timeout function as follows:

import time

from multiprocessing import connection
from multiprocessing.managers import BaseManager

def _new_init_timeout():
    # Deadline 5 seconds from now instead of the default 20.
    return time.time() + 5

# Monkey-patch the module-level helper so connection attempts
# use the shorter deadline.
connection._init_timeout = _new_init_timeout

m = BaseManager(address=('somehost', 50000), authkey=b'secret')  # authkey must be bytes on Python 3
m.connect()

Where 5 is the new timeout value. If there's an easier way, I'm sure someone will point it out. If not, this might be a candidate for a feature request to the multiprocessing dev team; I think something as elementary as setting a timeout should be easier than this. On the other hand, they may have philosophical reasons for not exposing a timeout in the API.
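
With the patch applied, a refused connection should stop being retried after roughly the new deadline instead of forever. A hedged usage sketch, wrapping the connect() call from the snippet above (the exact exception type can vary by platform and Python version):

import socket

try:
    m.connect()
except (socket.error, OSError) as exc:
    # The timeout expired without reaching the server.
    print('could not connect to the manager server: %s' % exc)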

Hope that helps.
