UDP receive queue full?

Problem description

I have an application that receives heavy UDP traffic on port 12201, and I have noticed that some of the UDP packets never make it into the application (they are received by the kernel only).

When I run

netstat -c --udp -an | grep 12201

I can see that Recv-Q is almost always 126408, rarely going below that and never going above it:

Proto Recv-Q Send-Q Local Address Foreign Address State
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*

Does this mean that the receive queue is full? Where does the number 126408 come from? How can I increase it?

Sysctl configuration:

# sysctl -a | grep mem
vm.overcommit_memory = 0
vm.nr_hugepages_mempolicy = 0
vm.lowmem_reserve_ratio = 256   256     32
vm.meminfo_legacy_layout = 1
vm.memory_failure_early_kill = 0
vm.memory_failure_recovery = 1
net.core.wmem_max = 124928
net.core.rmem_max = 33554432
net.core.wmem_default = 124928
net.core.rmem_default = 124928
net.core.optmem_max = 20480
net.ipv4.igmp_max_memberships = 20
net.ipv4.tcp_mem = 365760       487680  731520
net.ipv4.tcp_wmem = 4096        16384   4194304
net.ipv4.tcp_rmem = 4096        87380   4194304
net.ipv4.udp_mem = 262144       327680  393216
net.ipv4.udp_rmem_min = 4096
net.ipv4.udp_wmem_min = 4096

Answer

It looks like your application is using the system default receive buffer, which is defined via the sysctl setting

net.core.rmem_default = 124928

Hence the upper limit you see in Recv-Q stays close to that value. Try setting the SO_RCVBUF socket option in your application to a higher value, possibly up to the maximum allowed by the sysctl setting net.core.rmem_max = 33554432.
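
The question does not show the application code, so the following is only a minimal C sketch of that suggestion, assuming a plain IPv6 UDP socket bound to port 12201 as in the netstat output above (the 8 MB request is an illustrative value, not something from the question). It asks for a larger buffer with SO_RCVBUF and reads back with getsockopt what the kernel actually granted, since the request is capped by net.core.rmem_max:

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
    int fd = socket(AF_INET6, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    /* Request a larger receive buffer; the kernel caps it at net.core.rmem_max. */
    int rcvbuf = 8 * 1024 * 1024;              /* illustrative 8 MB request */
    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf)) < 0)
        perror("setsockopt(SO_RCVBUF)");

    /* Read back what was actually granted (Linux reports a doubled value
       that includes its own bookkeeping overhead). */
    int granted = 0;
    socklen_t len = sizeof(granted);
    if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &granted, &len) == 0)
        printf("effective receive buffer: %d bytes\n", granted);

    struct sockaddr_in6 addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin6_family = AF_INET6;
    addr.sin6_addr = in6addr_any;
    addr.sin6_port = htons(12201);
    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }

    /* ... the application's recvfrom() loop would follow here ... */
    return 0;
}

If changing the application code is not an option, raising net.core.rmem_default has the same effect for sockets that never set SO_RCVBUF themselves.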

The count of packets dropped because the queue was full can be seen via netstat -us (look for "packet receive errors").
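
netstat -us reads these UDP counters from /proc/net/snmp, where "packet receive errors" corresponds to the InErrors field (and RcvbufErrors, on kernels that export it, counts drops caused specifically by a full receive buffer). As a small illustration of where those numbers come from, this C sketch, written here only as an example, parses that file and prints the two counters, assuming the usual Linux layout of a "Udp:" header line followed by a "Udp:" value line:

#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/net/snmp", "r");
    if (!f) { perror("/proc/net/snmp"); return 1; }

    char header[512], values[512];
    while (fgets(header, sizeof(header), f)) {
        /* The UDP section is a "Udp:" field-name line followed by a "Udp:" value line. */
        if (strncmp(header, "Udp: ", 5) != 0)
            continue;
        if (!fgets(values, sizeof(values), f))
            break;

        /* Walk field names and values in lockstep. */
        char *hsave, *vsave;
        char *h = strtok_r(header + 5, " \n", &hsave);
        char *v = strtok_r(values + 5, " \n", &vsave);
        while (h && v) {
            if (strcmp(h, "InErrors") == 0 || strcmp(h, "RcvbufErrors") == 0)
                printf("%s = %s\n", h, v);
            h = strtok_r(NULL, " \n", &hsave);
            v = strtok_r(NULL, " \n", &vsave);
        }
        break;
    }
    fclose(f);
    return 0;
}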
