Increasing TCP Window Size


Problem Description

I have some doubts about increasing the TCP window size in my application. In my C++ application, we send data packets of around 1 KB from client to server over a blocking TCP/IP socket. Recently I came across the concept of TCP window size, so I tried increasing the value to 64 KB using setsockopt() for both SO_SNDBUF and SO_RCVBUF. After increasing this value, I saw some performance improvement over the WAN connection, but not over the LAN connection.

As per my understanding of TCP window size:

The client sends data packets to the server. Upon reaching the TCP window size, it waits until an ACK is received from the server for the first packet in the window. Over a WAN connection, the ACK from the server is delayed because of the round-trip latency of around 100 ms. So in this case, increasing the TCP window size compensates for the ACK wait time and thereby improves performance.

I want to understand how the performance improves in my application.

In my application, even though the TCP window size (both send and receive buffers) is increased using setsockopt at the socket level, we still keep the same packet size of 1 KB (i.e. the bytes we send from client to server in a single socket send). We have also disabled the Nagle algorithm (the built-in option that consolidates small packets into a larger one, thereby avoiding frequent socket calls).

My doubts are as follows:

  1. Since I am using a blocking socket, each 1 KB send should block if no ACK has come back from the server. So how does performance improve after increasing the TCP window size on the WAN connection alone? If I have misunderstood the concept of TCP window size, please correct me.

  2. To send 64 KB of data, I believe I still need to call the socket send function 64 times (since I send 1 KB per send through the blocking socket), even though I increased my TCP window size to 64 KB. Please confirm this.

  3. What is the maximum TCP window size with window scaling enabled per RFC 1323?

My English is not very good.

Answer

First of all, there is a big misconception evident in your question: that the TCP window size is what SO_SNDBUF and SO_RCVBUF control. This is not true.

In a nutshell, the TCP window size determines how much follow-up data (packets) your network stack is willing to put on the wire before receiving an acknowledgement for the earliest packet that has not yet been acknowledged.

The TCP stack has to live with, and account for, the fact that once a packet has been determined to be lost or mangled in transit, every packet sent from that one onwards has to be re-sent, since packets may only be acknowledged in order by the receiver. Therefore, allowing too many unacknowledged packets to exist at the same time consumes the connection's bandwidth speculatively: there is no guarantee that the bandwidth used will actually produce anything useful.

On the other hand, not allowing multiple unacknowledged packets at the same time would simply kill the bandwidth of connections that have a high bandwidth-delay product. Therefore, the TCP stack has to strike a balance between using up bandwidth for no benefit and not driving the pipe aggressively enough (thus allowing some of its capacity to go unused).

The TCP window size determines where this balance is struck.

So what do SO_SNDBUF and SO_RCVBUF control? They control the amount of buffer space that the network stack has reserved for servicing your socket. These buffers accumulate, respectively, outgoing data that the stack has not yet been able to put on the wire, and data that has been received from the wire but not yet read by your application.

If one of these buffers is full, you won't be able to send or receive more data until some space is freed. Note that these buffers only affect how the network stack handles data on the "near" side of the network interface (before it has been sent or after it has arrived), while the TCP window affects how the stack manages data on the "far" side of the interface (i.e. on the wire).

  1. No. If that were the case, you would incur a round-trip delay for each packet sent, which would totally destroy the bandwidth of connections with high latency.

  2. Yes, but that has nothing to do with either the TCP window size or the size of the buffers allocated to that socket.

  3. According to all the sources I have been able to find, window scaling allows the window to reach a maximum size of 1 GB.

