Max MQTT connections


Problem description

I need to create a server farm that can handle 5+ million connections and 5+ million topics (one per client), and process 300k messages/sec.

To see what various message brokers are capable of, I am currently using two RHEL EC2 instances (r3.4xlarge) so there are plenty of resources available. To save you looking it up: that is 16 vCPUs and 122 GB RAM each. I am nowhere near those limits in usage.

I cannot get past a limit of 600k connections. Since there do not appear to be any O/S limits in the way on either the client or the server (plenty of RAM/CPU/etc.), what is limiting me?

I have edited /etc/security/limits.conf as follows:

* soft  nofile  20000000
* hard  nofile  20000000

* soft  nproc  20000000
* hard  nproc  20000000

root  soft  nofile 20000000
root  hard  nofile 20000000

I have edited /etc/sysctl.conf as follows:

net.ipv4.ip_local_port_range = 1024 65535  
net.ipv4.tcp_tw_reuse = 1 
net.ipv4.tcp_mem = 5242880  5242880 5242880 
net.ipv4.tcp_tw_recycle = 1 
fs.file-max = 20000000 
fs.nr_open = 20000000 
net.ipv4.tcp_syncookies = 0

net.ipv4.tcp_max_syn_backlog = 10000 
net.ipv4.tcp_synack_retries = 3 
net.core.somaxconn=65536 
net.core.netdev_max_backlog=100000 
net.core.optmem_max = 20480000

For Apollo: export APOLLO_ULIMIT=20000000

For ActiveMQ:

ACTIVEMQ_OPTS="$ACTIVEMQ_OPTS -Dorg.apache.activemq.UseDedicatedTaskRunner=false"
ACTIVEMQ_OPTS_MEMORY="-Xms50G -Xmx115G"

I created 20 additional private addresses for eth0 on the client, then assigned them: ip addr add 11.22.33.44/24 dev eth0

I am FULLY aware of the 65k port limit per source IP, which is why I did the above.

  • For ActiveMQ I got to: 574,309
  • For Apollo I got to: 592,891
  • For Rabbit I got to 90k, but the logging was awful; I could not figure out how to go further, though I know it is possible.
  • For Hive I hit the trial cap of 1,000. Awaiting a license.
  • IBM wants to charge the price of my house to use theirs - no!

Answer

ANSWER: While doing this I realized I had misspelled net.ipv4.ip_local_port_range in the client's /etc/sysctl.conf file.

I am now able to connect 956,591 MQTT clients to my Apollo server in 188 seconds.

More info: To isolate whether this is an O/S connection limitation or the broker, I decided to write a simple client/server.

Server:

    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.ArrayList;
    import java.util.List;

    // Accept connections forever, holding every accepted socket open.
    List<Socket> clients = new ArrayList<>();
    ServerSocket server = new ServerSocket(1884);
    while (true) {
        Socket client = server.accept();
        clients.add(client);
    }

Client:

    while (true) {
        // Bind each client socket to the next aliased local IP;
        // local port 0 means "pick any free ephemeral port".
        InetAddress clientIPToBindTo = getNextClientVIP();
        Socket client = new Socket(hostname, 1884, clientIPToBindTo, 0);
        clients.add(client);
    }
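The getNextClientVIP() helper is not shown in the post; a minimal round-robin sketch might look like the following (the address strings here are made-up placeholders, not the real aliases, which were created with ip addr add as described above):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class VipRotator {
    // Hypothetical aliased client addresses; substitute the real ones
    // assigned to eth0.
    private static final String[] VIPS = { "10.0.0.101", "10.0.0.102", "10.0.0.103" };
    private static int next = 0;

    // Round-robin over the aliases so each local address gets its own
    // ~64k ephemeral-port space.
    static synchronized InetAddress getNextClientVIP() throws UnknownHostException {
        InetAddress addr = InetAddress.getByName(VIPS[next]);
        next = (next + 1) % VIPS.length;
        return addr;
    }
}
```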

With 21 IPs, I would expect (65535 - 1024) × 21 = 1,354,731 to be the boundary. In reality I am able to achieve 1,231,734:
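That boundary figure can be checked directly; this just reproduces the post's own calculation (it treats the configured 1024-65535 range as 64,511 usable ports per source IP):

```java
public class PortBudget {
    public static void main(String[] args) {
        int portsPerIp = 65535 - 1024; // ephemeral range configured in sysctl
        int ips = 21;                  // primary address + 20 aliases
        System.out.println(portsPerIp * ips); // theoretical max client sockets
    }
}
```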

[root@ip ec2-user]# cat /proc/net/sockstat
sockets: used 1231734
TCP: inuse 5 orphan 0 tw 0 alloc 1231307 mem 2
UDP: inuse 4 mem 1
UDPLITE: inuse 0
RAW: inuse 0
FRAG: inuse 0 memory 0

So the socket/kernel/IO side is worked out.

I am STILL unable to achieve this using any broker.

Again, these are the kernel settings immediately after my client/server test.

Client:

[root@ip ec2-user]# sysctl -p
net.ipv4.ip_local_port_range = 1024     65535
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 5242880      5242880 15242880
net.ipv4.tcp_tw_recycle = 1
fs.file-max = 20000000
fs.nr_open = 20000000

[root@ip ec2-user]# cat /etc/security/limits.conf
* soft  nofile  2000000
* hard  nofile  2000000    
root  soft  nofile 2000000
root  hard  nofile 2000000

Server:

[root@ ec2-user]# sysctl -p
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 5242880      5242880 5242880
net.ipv4.tcp_tw_recycle = 1
fs.file-max = 20000000
fs.nr_open = 20000000
net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_max_syn_backlog = 1000000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 65535
net.core.netdev_max_backlog = 1000000
net.core.optmem_max = 20480000
