How can I limit total concurrent subscriber connections to a ZeroMQ publisher endpoint?


Question

When building a pub-sub service using ZeroMQ on a Linux system, is there any way to enforce concurrent subscriber limits?

For example, I might want to create a ZeroMQ publisher service on a resource-limited system, and want to prevent overloading the system by setting a limit of, say, 100 concurrent connections to the tcp publisher endpoint. After that limit is reached, all subsequent connection attempts from ZeroMQ subscribers would fail.

I understand ZeroMQ doesn't provide notifications about connect/disconnect, but I've been looking for socket options that might allow such limits -- so far, no luck.

Or is this something that should be handled at some other level, perhaps within the protocol?

Answer

Yes, ZeroMQ is a Can-Do messaging framework:

Besides the trivial Formal Communication Pattern Framework elements (the library primitives), the strongest power behind ZeroMQ is the ability to develop one's own messaging system(s).

In your case, it is enough to enrich the scene with a few additional things: a SUB-process -> PUB-process message-flow-channel, so as to allow the PUB-side process to count the number of SUB-process instances concurrently connected, and to allow for a disconnect once the limit is dynamically reached (a step delegated rather "back" to the SUB-process side as a suicide move, since the classical PUB-process, intentionally, has no instrumentation to manage subscriptions). Plus add some dynamics to the inter-node signalling to start re-counting, and/or equip the SUB-process side(s) with a self-advertising mechanism that pushes keepAliveSIG-s to the PUB-side. Expect this signalling to be a weak and informative-only indication, as there are many real-world collisions where a decentralised node simply fails to deliver a "guaranteed-delivery" message; a well-designed, distributed, low-latency, high-performance system has to cope well with this reality and have self-healing state-recovery policies designed and built into its own behaviour.
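To make the idea concrete, here is a minimal sketch in Python with pyzmq, assuming a REP "gate" socket running next to the PUB socket. The endpoints, the JOIN / PING / OK / FULL vocabulary, the MAX_SUBSCRIBERS limit and the timeout values are all illustrative choices for this example, not anything ZeroMQ itself prescribes. The PUB-side process counts live subscribers and refuses a JOIN once the limit is reached:

```python
import time
import zmq

MAX_SUBSCRIBERS = 100        # hypothetical limit for a resource-limited box
KEEPALIVE_TIMEOUT = 10.0     # seconds of silence before a slot is reclaimed

context = zmq.Context()
pub = context.socket(zmq.PUB)
pub.bind("tcp://*:5556")     # data channel (illustrative endpoint)
gate = context.socket(zmq.REP)
gate.bind("tcp://*:5557")    # registration / keep-alive back-channel

last_seen = {}               # subscriber-id -> time of its last signal

poller = zmq.Poller()
poller.register(gate, zmq.POLLIN)

try:
    while True:
        now = time.time()
        # re-counting: reclaim slots whose keep-alives went silent
        for sid in [s for s, t in last_seen.items() if now - t > KEEPALIVE_TIMEOUT]:
            del last_seen[sid]

        # serve at most one gate request per cycle, never blocking the feed
        if poller.poll(100):
            verb, sid = gate.recv_multipart()
            if verb == b"JOIN" and len(last_seen) >= MAX_SUBSCRIBERS:
                gate.send(b"FULL")    # the subscriber is expected to go away
            else:
                last_seen[sid] = now  # JOIN and PING both refresh the slot
                gate.send(b"OK")

        pub.send_multipart([b"topic", b"an update"])
finally:
    pub.close(0)
    gate.close(0)
    context.term()
```

The SUB-side counterpart registers first, performs the suicide move itself when told the publisher is full, and keeps pushing a periodic keepAliveSIG so that its slot can be reclaimed if it dies silently:

```python
import os
import time
import zmq

context = zmq.Context()
sid = str(os.getpid()).encode()       # any unique id will do for the sketch

gate = context.socket(zmq.REQ)
gate.connect("tcp://localhost:5557")
sub = context.socket(zmq.SUB)
sub.connect("tcp://localhost:5556")
sub.setsockopt(zmq.SUBSCRIBE, b"topic")

try:
    gate.send_multipart([b"JOIN", sid])
    if gate.recv() == b"FULL":
        raise SystemExit("limit reached -- the SUB-side suicide move")

    poller = zmq.Poller()
    poller.register(sub, zmq.POLLIN)
    last_ping = time.time()
    while True:
        if poller.poll(1000):
            print(sub.recv_multipart())
        if time.time() - last_ping > 5.0:  # the periodic keepAliveSIG
            gate.send_multipart([b"PING", sid])
            gate.recv()                    # REQ/REP must strictly alternate
            last_ping = time.time()
finally:
    sub.close(0)
    gate.close(0)
    context.term()
```

Note how the keep-alives stay exactly the weak, informative-only indication described above: the PUB-side never gets a guaranteed disconnect notification, it merely reclaims a slot after a period of silence.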

(Figure courtesy of imatix/ZeroMQ)

Kindly assume the ZeroMQ library to be rather a very powerful LEGO-tool-box for designing cool distributed systems than a ready-made / batteries-included, stiff, quasi-solution-for-just-a-few-academic-cases (well, it might be considered such, but just for some no-brainer's life, while our lives are much more colourful & teasing, aren't they?)

It is worth, definitely worth, a few days to read both of Pieter Hintjens' books, and a few weeks to shift one's mind towards designing with the full powers of ZeroMQ on one's side.

All it takes is a few Python add-on habits: a zmq.Context() early setup, and not forgetting a finally: aContext.term(), as sketched below.
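For instance, the bare skeleton of those habits might look like this (the endpoint is, again, just an illustrative placeholder):

```python
import zmq

context = zmq.Context()               # the zmq.Context() early setup
sub = context.socket(zmq.SUB)
sub.connect("tcp://localhost:5556")   # illustrative endpoint
sub.setsockopt(zmq.SUBSCRIBE, b"")    # subscribe to everything
try:
    print(sub.recv())                 # ... the actual work goes here ...
finally:
    sub.close(0)                      # close sockets first, so term() can return
    context.term()                    # the finally: aContext.term() habit
```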

You will definitely love this smart world.
