TCP receiving window size higher than net.core.rmem_max


Question

I am running iperf measurements between two servers connected through a 10 Gbit link. I am trying to correlate the maximum window size that I observe with the system configuration parameters.

In particular, I have observed a maximum window size of 3 MiB. However, I cannot find the corresponding values in the system files.

By running sysctl -a I get the following values:

net.ipv4.tcp_rmem = 4096        87380   6291456
net.core.rmem_max = 212992

The first value tells us that the maximum receiver window size is 6 MiB. However, TCP tends to allocate twice the requested size, so the maximum receiver window size should be 3 MiB, exactly as I have measured it. From man tcp:

Note that TCP actually allocates twice the size of the buffer requested in the setsockopt(2) call, and so a succeeding getsockopt(2) call will not return the same size of buffer as requested in the setsockopt(2) call. TCP uses the extra space for administrative purposes and internal kernel structures, and the /proc file values reflect the larger sizes compared to the actual TCP windows.

However, the second value, net.core.rmem_max, states that the maximum receiver window size cannot be more than 208 KiB. And this is supposed to be the hard limit, according to man tcp:

tcp_rmem max: the maximum size of the receive buffer used by each TCP socket. This value does not override the global net.core.rmem_max. This is not used to limit the size of the receive buffer declared using SO_RCVBUF on a socket.

So, how come I observe a maximum window size larger than the one specified in net.core.rmem_max?

NB: I have also calculated the bandwidth-delay product: window_size = bandwidth x RTT, which is about 3 MiB (10 Gbps @ 2 ms RTT), thus verifying my traffic capture.

Answer

A quick search turns up:

https://github.com/torvalds/linux/blob/4e5448a31d73d0e944b7adb9049438a09bc332cb/net/ipv4/tcp_output.c

void tcp_select_initial_window()

if (wscale_ok) {
    /* Set window scaling on max possible window
     * See RFC1323 for an explanation of the limit to 14
     */
    space = max_t(u32, sysctl_tcp_rmem[2], sysctl_rmem_max);
    space = min_t(u32, space, *window_clamp);
    while (space > 65535 && (*rcv_wscale) < 14) {
        space >>= 1;
        (*rcv_wscale)++;
    }
}

max_t takes the higher value of its arguments. So the bigger value takes precedence here.

One other reference to sysctl_rmem_max is made, where it is used to limit the argument to SO_RCVBUF (in net/core/sock.c).

All other TCP code refers to sysctl_tcp_rmem only.

So, without looking deeper into the code, you can conclude that a bigger net.ipv4.tcp_rmem will override net.core.rmem_max in all cases except when setting SO_RCVBUF (whose check can be bypassed using SO_RCVBUFFORCE).

