Increasing the maximum number of TCP/IP connections in Linux


Problem description

I am programming a server, and it seems like my number of connections is being limited, since my bandwidth isn't being saturated even when I've set the number of connections to "unlimited".

How can I increase or eliminate the maximum number of connections that my Ubuntu Linux box can open at a time? Does the OS limit this, or is it the router or the ISP? Or is it something else?

Recommended answer

The maximum number of connections is affected by certain limits on both the client and server sides, albeit a little differently.

On the client side: increase the ephemeral port range, and decrease tcp_fin_timeout.

To find the default values:

sysctl net.ipv4.ip_local_port_range
sysctl net.ipv4.tcp_fin_timeout

The ephemeral port range defines the maximum number of outbound sockets a host can create from a particular IP address. tcp_fin_timeout defines the minimum time these sockets will stay in the TIME_WAIT state (unusable after being used once). The usual system defaults are:

  • net.ipv4.ip_local_port_range = 32768 61000
  • net.ipv4.tcp_fin_timeout = 60

This basically means your system cannot consistently guarantee more than (61000 - 32768) / 60 = 470 sockets per second. If you are not happy with that, you could begin by increasing the port range; setting the range to 15000 61000 is pretty common these days. You can further increase availability by decreasing the fin_timeout. Doing both should let you see more than 1500 outbound connections per second fairly readily.

To change the values:

sysctl net.ipv4.ip_local_port_range="15000 61000"
sysctl net.ipv4.tcp_fin_timeout=30
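
These changes take effect immediately but do not survive a reboot. A minimal sketch of making them permanent, assuming your distribution reads /etc/sysctl.conf at boot (many modern systems use drop-in files under /etc/sysctl.d/ instead):

# Append the tuned values (run as root), then reload all sysctl settings
cat >> /etc/sysctl.conf <<'EOF'
net.ipv4.ip_local_port_range = 15000 61000
net.ipv4.tcp_fin_timeout = 30
EOF
sysctl -p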

The above should not be interpreted as the factors impacting the system's capability for making outbound connections per second; rather, these factors affect the system's ability to handle concurrent connections in a sustainable manner over long periods of "activity".

The default sysctl values on a typical Linux box for tcp_tw_recycle and tcp_tw_reuse are:

net.ipv4.tcp_tw_recycle=0
net.ipv4.tcp_tw_reuse=0

These do not allow a connection from a "used" socket (one in the wait state) and force sockets to last the complete time_wait cycle. I recommend setting:

sysctl net.ipv4.tcp_tw_recycle=1
sysctl net.ipv4.tcp_tw_reuse=1 

This allows fast cycling of sockets in the time_wait state and re-uses them. But before you make this change, be sure it does not conflict with the protocols used by the application that needs these sockets. Make sure to read the post "Coping with the TCP TIME-WAIT" by Vincent Bernat to understand the implications. The net.ipv4.tcp_tw_recycle option is quite problematic for public-facing servers, as it cannot handle connections from two different computers behind the same NAT device, which is a problem that is hard to detect and is waiting to bite you. Note that net.ipv4.tcp_tw_recycle was removed in Linux 4.12.
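
Because the option no longer exists on newer kernels, a quick check before relying on it could look like this (the fallback message is just illustrative):

# Prints the current value on older kernels; fails with "No such file or directory" on 4.12+
sysctl net.ipv4.tcp_tw_recycle 2>/dev/null || echo "tcp_tw_recycle not available on this kernel"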

On the server side: the net.core.somaxconn value has an important role. It limits the maximum number of requests queued to a listen socket. If you are sure of your server application's capability, bump it up from the default of 128 to something like 1024. You can then take advantage of this increase by modifying the listen backlog variable in your application's listen call to an equal or higher integer.

sysctl net.core.somaxconn=1024
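
A quick way to verify the effective accept-queue limit for a listening socket (assuming the iproute2 ss tool is installed; port 8080 is only an example) is to look at the Send-Q column for LISTEN sockets, which reflects the smaller of the application's listen backlog and net.core.somaxconn:

# For LISTEN sockets, Recv-Q is the current accept-queue length and Send-Q is its limit
ss -lnt 'sport = :8080'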

The txqueuelen parameter of your Ethernet cards also has a role to play. The default value is 1000, so bump it up to 5000 or even more if your system can handle it.

ifconfig eth0 txqueuelen 5000
echo "/sbin/ifconfig eth0 txqueuelen 5000" >> /etc/rc.local

Similarly, bump up the values for net.core.netdev_max_backlog and net.ipv4.tcp_max_syn_backlog. Their default values are 1000 and 1024 respectively.

sysctl net.core.netdev_max_backlog=2000
sysctl net.ipv4.tcp_max_syn_backlog=2048
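
You can confirm the defaults on your own box, or check that the changes took effect, by querying both keys at once:

# sysctl accepts multiple keys in a single invocation
sysctl net.core.netdev_max_backlog net.ipv4.tcp_max_syn_backlog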

Now remember to start both your client- and server-side applications after increasing the FD ulimits in the shell.
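
A minimal sketch of doing that, assuming a limit of 65535 and a hypothetical binary called ./my_server (both are placeholders for your own values):

# Raise the open-file limit for this shell and its children, then launch from the same shell
ulimit -n 65535
./my_server    # hypothetical binary; start your real client or server here

For a limit that survives new logins, the usual place is /etc/security/limits.conf (soft and hard nofile entries for the user that runs the application).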

Besides the above, one more popular technique used by programmers is to reduce the number of TCP write calls. My own preference is to use a buffer into which I push the data I wish to send to the client, and then at appropriate points write the buffered data out to the actual socket. This technique allows me to use large data packets, reduce fragmentation, and lower my CPU utilization both in user land and at the kernel level.

