Increasing the maximum number of TCP/IP connections in Linux

Problem Description

I am programming a server and it seems like my number of connections is being limited since my bandwidth isn't being saturated even when I've set the number of connections to "unlimited".

How can I increase or eliminate a maximum number of connections that my Ubuntu Linux box can open at a time? Does the OS limit this, or is it the router or the ISP? Or is it something else?

Recommended Answer

The maximum number of connections is impacted by certain limits on both the client and server sides, albeit a little differently.

On the client side: Increase the ephemeral port range, and decrease the tcp_fin_timeout

To find out the default values:

sysctl net.ipv4.ip_local_port_range
sysctl net.ipv4.tcp_fin_timeout

The ephemeral port range defines the maximum number of outbound sockets a host can create from a particular IP address. The fin_timeout defines the minimum time these sockets will stay in the TIME_WAIT state (unusable after being used once). Usual system defaults are:

  • net.ipv4.ip_local_port_range = 32768 61000
  • net.ipv4.tcp_fin_timeout = 60

This basically means your system cannot consistently guarantee more than (61000 - 32768) / 60 = 470 sockets per second. If you are not happy with that, you could begin by increasing the port range; setting it to 15000 61000 is pretty common these days. You could further increase availability by decreasing the fin_timeout. If you do both, you should more readily see over 1500 outbound connections per second.
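
The arithmetic can be reproduced directly in the shell:

# (61000 - 32768) ephemeral ports, each tied up for 60 seconds in TIME_WAIT,
# works out to roughly 470 usable sockets per second (integer division).
echo $(( (61000 - 32768) / 60 ))    # prints 470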

To change the values:

sysctl net.ipv4.ip_local_port_range="15000 61000"
sysctl net.ipv4.tcp_fin_timeout=30
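
Note that these sysctl invocations only change the running kernel and are lost on reboot. A minimal sketch for making them persistent, assuming your distribution reads /etc/sysctl.conf at boot (the same idea applies to every other sysctl setting in this answer):

# Append the settings to /etc/sysctl.conf so they survive a reboot.
# Run as root; adjust the path if your distribution uses /etc/sysctl.d/*.conf.
cat >> /etc/sysctl.conf <<'EOF'
net.ipv4.ip_local_port_range = 15000 61000
net.ipv4.tcp_fin_timeout = 30
EOF
sysctl -p    # reload /etc/sysctl.conf without rebooting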

The above should not be interpreted as the factors impacting the system's capability to make outbound connections per second; rather, these factors affect the system's ability to handle concurrent connections in a sustainable manner over long periods of "activity".

Default sysctl values on a typical Linux box for tcp_tw_recycle and tcp_tw_reuse would be:

net.ipv4.tcp_tw_recycle=0
net.ipv4.tcp_tw_reuse=0

These do not allow a connection from a "used" socket (in wait state) and force the sockets to last the complete time_wait cycle. I recommend setting:

sysctl net.ipv4.tcp_tw_recycle=1
sysctl net.ipv4.tcp_tw_reuse=1 

This allows fast cycling of sockets in the TIME_WAIT state and re-using them. But before you make this change, make sure it does not conflict with the protocols used by the application that needs these sockets. Make sure to read the post "Coping with the TCP TIME-WAIT" by Vincent Bernat to understand the implications. The net.ipv4.tcp_tw_recycle option is quite problematic for public-facing servers, as it won't handle connections from two different computers behind the same NAT device, which is a problem that is hard to detect and waiting to bite you. Note that net.ipv4.tcp_tw_recycle has been removed from Linux 4.12.
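
Because the recycle knob no longer exists on newer kernels, it is worth checking what your kernel actually offers before trying to set it; a quick check along these lines should do:

uname -r                               # 4.12 and later no longer have tcp_tw_recycle
sysctl net.ipv4.tcp_tw_reuse           # still present on current kernels
sysctl net.ipv4.tcp_tw_recycle 2>/dev/null \
  || echo "tcp_tw_recycle is not available on this kernel"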

On the server side: The net.core.somaxconn value has an important role. It limits the maximum number of requests queued to a listen socket. If you are sure of your server application's capability, bump it up from the default of 128 to something like 1024. Now you can take advantage of this increase by modifying the listen backlog variable in your application's listen call to an equal or higher integer.

sysctl net.core.somaxconn=1024
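
To verify that your application's listen backlog actually took effect (the kernel silently caps it at net.core.somaxconn), one option is to inspect the listening socket with ss; port 8080 below is only an illustrative placeholder for your server's port:

# For sockets in the LISTEN state, ss reports the configured backlog limit in
# the Send-Q column and the current accept-queue length in Recv-Q.
ss -ltn 'sport = :8080'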

The txqueuelen parameter of your Ethernet cards also has a role to play. The default value is 1000, so bump it up to 5000 or even more if your system can handle it.

ifconfig eth0 txqueuelen 5000
echo "/sbin/ifconfig eth0 txqueuelen 5000" >> /etc/rc.local

Similarly bump up the values for net.core.netdev_max_backlog and net.ipv4.tcp_max_syn_backlog. Their default values are 1000 and 1024 respectively.

sysctl net.core.netdev_max_backlog=2000
sysctl net.ipv4.tcp_max_syn_backlog=2048

Now remember to start both your client- and server-side applications after increasing the FD ulimits in the shell.
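
A minimal sketch of that, assuming a limit of 100000 descriptors is enough for your workload and with ./your_server standing in for the actual binary:

# Raise the open-file-descriptor limit for this shell and everything it spawns,
# then launch the application from the same shell. The value 100000 and the
# path ./your_server are placeholders; adjust them to your setup.
ulimit -n 100000
./your_server

# For a permanent limit, entries of this form in /etc/security/limits.conf are
# the usual approach (takes effect on the next login):
#   youruser  soft  nofile  100000
#   youruser  hard  nofile  100000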

Besides the above, one more popular technique used by programmers is to reduce the number of TCP write calls. My own preference is to use a buffer into which I push the data I wish to send to the client, and then at appropriate points I write out the buffered data to the actual socket. This technique allows me to use large data packets, reduce fragmentation, and lower my CPU utilization both in user land and at the kernel level.
