How to synchronize the publishers and subscribers in an extended PUB-SUB pattern with an intermediary in ZeroMQ in C++?


Problem description

I have multiple publishers and multiple subscribers in a use case with 1 intermediary.

In the ZeroMQ guide, I learnt about synchronizing 1 publisher and 1 subscriber using additional REQ/REP sockets. I tried to write synchronization code for my use case, but it gets messy if I follow the logic given for the 1-1 PUB/SUB case.

The publisher code when we have only 1 publisher is:

#include "zhelpers.hpp"   //  s_send() / s_recv() helpers from the ZeroMQ guide

#define SUBSCRIBERS_EXPECTED  10   //  we wait for 10 subscribers

zmq::context_t context (1);

//  Socket to receive sync requests
zmq::socket_t syncservice (context, ZMQ_REP);
syncservice.bind("tcp://*:5562");

//  Get synchronization from subscribers
int subscribers = 0;
while (subscribers < SUBSCRIBERS_EXPECTED) {

    //  - wait for synchronization request
    s_recv (syncservice);

    //  - send synchronization reply
    s_send (syncservice, "");

    subscribers++;
}

The subscriber code when we have only 1 subscriber is:

#include "zhelpers.hpp"   //  s_send() / s_recv() helpers from the ZeroMQ guide

zmq::context_t context (1);

//  Socket used to synchronize with the publisher
zmq::socket_t syncclient (context, ZMQ_REQ);
syncclient.connect("tcp://localhost:5562");

//  - send a synchronization request
s_send (syncclient, "");

//  - wait for synchronization reply
s_recv (syncclient);

Now, when I have multiple subscribers, then does each subscriber need to send a request to every publisher?

The publishers in my use case come and go. Their number is not fixed.

So, a subscriber won't have any knowledge about how many nodes to connect to or which publishers are currently present.

Please suggest a logic for synchronizing the extended PUB/SUB code.

Answer

Given the XPUB/XSUB mediator node is present, the actual PUB-node discovery may be completely effortless for the XSUB-mediator-side (actually, it is principally avoided as such).

Just reverse the usual wiring, using XSUB.bind()-s on the mediator and PUB.connect()-s on the publishers, and the problem ceases to exist at all.

Smart, isn't it?
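
A minimal sketch of that reversed wiring, assuming cppzmq ( zmq.hpp, a release recent enough to provide zmq::proxy() taking socket references ) and port numbers 5561 / 5562 chosen here purely for illustration:

#include <zmq.hpp>

int main ()
{
    zmq::context_t context (1);

    //  XSUB-side faces the publishers: the mediator .bind()-s here,
    //  so PUB-nodes may .connect() and disconnect at will and the
    //  mediator never has to discover them
    zmq::socket_t frontend (context, ZMQ_XSUB);
    frontend.bind ("tcp://*:5561");

    //  XPUB-side faces the subscribers, for the same reason
    zmq::socket_t backend (context, ZMQ_XPUB);
    backend.bind ("tcp://*:5562");

    //  shuttle publications one way, subscription requests the other
    zmq::proxy (frontend, backend);

    return 0;
}

Each PUB-node then simply .connect()-s to the mediator's XSUB-side address and starts sending; no per-peer REQ/REP handshake against every subscriber is needed any more.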

PUB-nodes may come and go, yet the XSUB-side of the Policy-mediator node need not bother about that (beyond a few initial .setsockopt( { LINGER, IMMEDIATE, CONFLATE, RCVHWM, MAXSIZE } ) calls for performance tuning and increased robustness). It keeps enjoying the still valid and working composition of the actual Topic-filter .setsockopt( zmq.SUBSCRIBE, ... ) settings in service, and it may maintain that composition centrally, remaining principally agnostic about the state and dynamics of the semi-temporal group of live or dysfunctional PUB-side Agent-nodes that .connect() now or later.

Better, isn't it?
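
A minimal sketch of those initial pre-settings on the mediator's XSUB-side socket, assuming cppzmq; the numeric values are illustrative assumptions only, not recommendations, and CONFLATE is left out here as it is documented only for a subset of socket types:

zmq::socket_t frontend (context, ZMQ_XSUB);

int     linger     = 0;              //  do not block on context termination
frontend.setsockopt (ZMQ_LINGER,     &linger,     sizeof (linger));

int     immediate  = 1;              //  queue only on completed connections
frontend.setsockopt (ZMQ_IMMEDIATE,  &immediate,  sizeof (immediate));

int     rcvhwm     = 1000;           //  bound the inbound queue depth
frontend.setsockopt (ZMQ_RCVHWM,     &rcvhwm,     sizeof (rcvhwm));

int64_t maxmsgsize = 1024 * 1024;    //  disconnect peers sending frames > 1 MB
frontend.setsockopt (ZMQ_MAXMSGSIZE, &maxmsgsize, sizeof (maxmsgsize));

frontend.bind ("tcp://*:5561");      //  set the options before .bind()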
