Majordomo broker: handling large number of connections


Problem Description



I am using the majordomo code found here (https://github.com/zeromq/majordomo) in the following manner:

Instead of using a single broker to process the requests and replies, I start two brokers such that one of them handles all the requests, and the other handles all the replies.

I did some testing to see how many connections the majordomo broker can handle:

num of reqs per client     num of requests handled without pkt loss

          1                         614 (614 clients)
         10                        6000 (600 clients)
        100                       35500 (355 clients)
       1000                      300000 (300 clients)
       5000                      750000
      10000                      600000
      15000                      450000
      20000                      420000
      25000                      375000
      30000                      360000

I am not able to understand the results properly.

Why is the broker able to handle only 614 clients when each one is sending only a single request?

I ran this test within a single machine, but still 614 seems very low.

Can someone please tell me what could be going wrong?


So I set the HWM as follows:

Broker’s HWM on send/receive is set to  40 k.
TCP send/receive buffer      is set to  10 MB.
Worker’s HWM on send/receive is set to 100 k.
Client’s HWM on send         is set to 100,
         and on receive      is set to 100 k.
All the clients run on the same machine.
All the workers (10 workers running the echo service),
and the two broker instances run on a single ec2 instance.

The client program simply sends all the requests in a blast (all at once).

My understanding of HWM on send is that when the HWM is reached, the socket will block. That is why I have set the client's send HWM to 100 messages, hoping that this would give me some sort of flow control.
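
As a concrete illustration of the setup described above, here is a minimal sketch in plain libzmq C. It is not the actual majordomo client: the endpoint, the DEALER socket type and the payload are my own assumptions, and the MDP envelope frames are omitted. It only shows where the listed HWM / TCP-buffer options would be applied and where a blocking send provides the expected back-pressure.

/* Minimal sketch, NOT the majordomo client: a DEALER socket configured with
 * the HWM / TCP-buffer values listed above, blasting requests at a broker.
 * Endpoint, socket type and payload are illustrative assumptions; the MDP
 * envelope frames are omitted.  Assumes libzmq >= 3.2.                     */
#include <string.h>
#include <zmq.h>

int main (void)
{
    void *ctx    = zmq_ctx_new ();
    void *client = zmq_socket (ctx, ZMQ_DEALER);

    int snd_hwm = 100;                /* client's send HWM, as in the setup above */
    int rcv_hwm = 100000;             /* client's receive HWM                     */
    int tcp_buf = 10 * 1024 * 1024;   /* 10 MB kernel send/receive buffers        */
    zmq_setsockopt (client, ZMQ_SNDHWM, &snd_hwm, sizeof (snd_hwm));
    zmq_setsockopt (client, ZMQ_RCVHWM, &rcv_hwm, sizeof (rcv_hwm));
    zmq_setsockopt (client, ZMQ_SNDBUF, &tcp_buf, sizeof (tcp_buf));
    zmq_setsockopt (client, ZMQ_RCVBUF, &tcp_buf, sizeof (tcp_buf));

    zmq_connect (client, "tcp://localhost:5555");    /* hypothetical broker address */

    const char *request = "echo-me";
    for (int i = 0; i < 10000; i++)
        /* Once this connection's send pipe holds SNDHWM (100) queued messages,
         * zmq_send() blocks until the I/O thread drains the pipe; that is the
         * back-pressure a small send HWM is meant to provide.                */
        zmq_send (client, request, strlen (request), 0);

    zmq_close (client);
    zmq_ctx_term (ctx);
    return 0;
}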

Now, I see packet loss when I have 10 clients sending 10,000 requests (all in one go). And when clients send 10,000 requests each, but only the first 1,000 are sent in one go, packet loss occurs once 128 clients run in parallel.

With the broker's HWM set to 40 k, why does it drop packets when the blast size is less than 40,000 (like the ones I have used above)? I know that the zmq guide says that the allocated capacity of the pipe will be around 60% of what we have set it to, but 10,000 is only 25% of what I have set it to (40,000), and 1,000 is only 2.5%. So I don't understand what causes the broker to lose packets. HWM is supposed to be per peer connection, isn't it? Please help me understand this behavior.

Solution

WHY DOES THAT HAPPEN?

TLDR

Let me quote from a marvelous and precious source -- Pieter HINTJENS' book

"Code Connected, Volume 1"

( definitely worth anyone's time to step through the PDF copy ... the key messages are in the text and stories that Pieter has crafted into his 300+ thrilling pages )


High-Water Marks

When you can send messages rapidly from process to process, you soon discover that memory is a precious resource, and one that can be trivially filled up. A few seconds of delay somewhere in a process can turn into a backlog that blows up a server unless you understand the problem and take precautions.

...

ØMQ uses the concept of HWM (high-water mark) to define the capacity of its internal pipes. Each connection out of a socket or into a socket has its own pipe, and HWM for sending, and/or receiving, depending on the socket type. Some sockets (PUB, PUSH) only have send buffers. Some (SUB, PULL, REQ, REP) only have receive buffers. Some (DEALER, ROUTER, PAIR) have both send and receive buffers.

In ØMQ v2.x, the HWM was infinite by default. This was easy but also typically fatal for high-volume publishers. In ØMQ v3.x, it’s set to 1,000 by default, which is more sensible. If you’re still using ØMQ v2.x, you should always set a HWM on your sockets, be it 1,000 to match ØMQ v3.x or another figure that takes into account your message sizes and expected subscriber performance.

When your socket reaches its HWM, it will either block or drop data depending on the socket type. PUB and ROUTER sockets will drop data if they reach their HWM, while other socket types will block. Over the inproc transport, the sender and receiver share the same buffers, so the real HWM is the sum of the HWM set by both sides.

Lastly, the HWM-s are not exact; while you may get up to 1,000 messages by default, the real buffer size may be much lower (as little as half), due to the way libzmq implements its queues.
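
The block-or-drop distinction quoted above is the crux here: the majordomo broker fronts its clients and workers with a ROUTER socket, so once a given peer's pipe reaches its HWM, further messages to that peer are silently dropped rather than blocked. As a diagnostic aid (my own sketch, not part of the original setup), ZMQ_ROUTER_MANDATORY can turn those silent drops into visible send errors on a reasonably recent libzmq:

/* Diagnostic sketch, assuming libzmq >= 3.2: with ZMQ_ROUTER_MANDATORY set,
 * a ROUTER socket reports routing problems instead of silently dropping the
 * message.  Endpoint and peer identity below are hypothetical.              */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <zmq.h>

int main (void)
{
    void *ctx    = zmq_ctx_new ();
    void *router = zmq_socket (ctx, ZMQ_ROUTER);

    int mandatory = 1;
    zmq_setsockopt (router, ZMQ_ROUTER_MANDATORY, &mandatory, sizeof (mandatory));
    zmq_bind (router, "tcp://*:5555");               /* hypothetical broker endpoint */

    /* ROUTER envelope: first frame is the destination identity, then the body. */
    const char *peer_id = "some-client";             /* hypothetical identity       */
    int rc = zmq_send (router, peer_id, strlen (peer_id), ZMQ_SNDMORE | ZMQ_DONTWAIT);
    if (rc == -1)
        /* EHOSTUNREACH: unknown peer; on recent libzmq, EAGAIN if that peer's
         * pipe has reached its HWM.                                           */
        printf ("send failed: %s\n", zmq_strerror (errno));
    else
        zmq_send (router, "payload", 7, ZMQ_DONTWAIT);

    zmq_close (router);
    zmq_ctx_term (ctx);
    return 0;
}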


SOLUTION

Experiment with adjusting your RCVHWM / SNDHWM and other low-level I/O-thread / API parameters so that your testing setup keeps a feasible memory footprint, stays stable, and performs well in accord with your IO-resources-incompressible-data-"hydraulics".
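
To make that concrete, here is a minimal tuning sketch (my own illustration; the values are placeholders to iterate on, not recommendations) of the kind of knobs meant above: context-level I/O-thread and socket-limit settings together with per-socket HWM and listen backlog.

/* Tuning sketch, assuming libzmq >= 3.2.  All numbers are placeholders to
 * experiment with, not recommendations.                                     */
#include <zmq.h>

int main (void)
{
    void *ctx = zmq_ctx_new ();

    /* More I/O threads can help when many TCP connections are active at once. */
    zmq_ctx_set (ctx, ZMQ_IO_THREADS, 4);
    /* Raise the per-context socket limit if one process opens many sockets,
     * e.g. when running hundreds of client instances inside one process.      */
    zmq_ctx_set (ctx, ZMQ_MAX_SOCKETS, 4096);

    void *frontend = zmq_socket (ctx, ZMQ_ROUTER);

    int hwm     = 40000;   /* per-connection pipe capacity (SNDHWM / RCVHWM)    */
    int backlog = 1024;    /* listen(2) backlog for bursts of client connects   */
    zmq_setsockopt (frontend, ZMQ_SNDHWM,  &hwm,     sizeof (hwm));
    zmq_setsockopt (frontend, ZMQ_RCVHWM,  &hwm,     sizeof (hwm));
    zmq_setsockopt (frontend, ZMQ_BACKLOG, &backlog, sizeof (backlog));

    zmq_bind (frontend, "tcp://*:5555");   /* hypothetical broker endpoint */

    /* ... broker / proxy loop would go here ...                              */

    zmq_close (frontend);
    zmq_ctx_term (ctx);
    return 0;
}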
