Why are particular UDP messages always getting dropped below a particular buffer size?


Problem Description

3 different messages are being sent to the same port at different rates:

Message    Size (bytes)    Sent every    Transmit speed
High       232             10 ms         100 Hz
Medium     148             20 ms         50 Hz
Low        20              60 ms         16.6 Hz

I can only process one message every ~6 ms. Single threaded. Blocking read.
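
Putting those rates together (a back-of-the-envelope sketch of my own, not something from the original post), the aggregate arrival rate and the processing rate are almost identical:

/* Rough arrival-vs-service arithmetic using only the figures quoted above. */
#include <stdio.h>

int main(void)
{
    double dArrivalHz = 100.0 + 50.0 + (1000.0 / 60.0);  /* high + medium + low ~= 166.7 msg/s */
    double dServiceHz = 1000.0 / 6.0;                     /* one message every ~6 ms ~= 166.7 msg/s */

    printf("arrival rate : %.1f msg/s\n", dArrivalHz);
    printf("service rate : %.1f msg/s\n", dServiceHz);
    printf("headroom     : %.1f msg/s\n", dServiceHz - dArrivalHz);
    return 0;
}

With essentially no headroom, any burst the receive buffer cannot absorb turns directly into drops, which is why the buffer size matters so much here.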

A strange situation is occurring, and I don't have an explanation for it. When I set my receive buffer to 4,799 bytes, all of my low speed messages get dropped. I see maybe one or two get processed, and then nothing.

When I set my receive buffer to 4,800 bytes (or higher!), it appears as though all of the low speed messages start getting processed. I see about 16 or 17 per second.

This has been observed consistently. The application sending the packets is always started before the receiving application. The receiving application always has a long delay after the sockets are created and before it begins processing. So the buffer is always full when the processing starts, and it is not the same starting buffer each time a test occurs. This is because the socket is created after the sender is already sending out messages, so the receiver might start listening in the middle of a send cycle.

Why does increasing the receive buffer size by a single byte cause such a drastic change in how the low speed messages are handled?

I built a table to better visualize the expected processing:

As some of these messages get processed, more messages presumably get put on the queue instead of being dropped.

Nonetheless, I would expect a 4,799 byte buffer to behave the same way as 4,800 bytes.

However, that is not what I observe.

I think the issue is related to the fact that the low speed messages are sent at the same time as the other two messages. They are always received after the high/medium speed messages. (This has been confirmed with Wireshark.)

For example, assuming the buffer was empty to begin with, it is clear that the low speed message would need to be queued longer than the other messages.
*One message every 6 ms works out to about 5 messages every 30 ms.

This still doesn't explain the buffer size, though: the one-byte difference is smaller than even the smallest (20-byte) message, so by payload-byte accounting alone both limits should be able to hold exactly the same datagrams.

We are running VxWorks, and using their sockLib, which is an implementation of Berkeley sockets. Here is a snippet of what our socket creation looks like:
SOCKET_BUFFER_SIZE is what I'm changing.

struct sockaddr_in tSocketAddress;                          // Socket address
int     nSocketAddressSize = sizeof(struct sockaddr_in);    // Size of socket address structure
int     nSocketOption = 0;

// Already created
if (*ptParameters->m_pnIDReference != 0)
    return FALSE;

// Create UDP socket
if ((*ptParameters->m_pnIDReference = socket(AF_INET, SOCK_DGRAM, 0)) == ERROR)
{
    // Error
    CreateSocketMessage(ptParameters, "CreateSocket: Socket create failed with error.");

    // Not successful
    return FALSE;
}

// Valid local address
if (ptParameters->m_szLocalIPAddress != SOCKET_ADDRESS_NONE_STRING && ptParameters->m_usLocalPort != 0)
{
    // Set up the local parameters/port
    bzero((char*)&tSocketAddress, nSocketAddressSize);
    tSocketAddress.sin_len = (u_char)nSocketAddressSize;
    tSocketAddress.sin_family = AF_INET;
    tSocketAddress.sin_port = htons(ptParameters->m_usLocalPort);

    // Check for any address
    if (strcmp(ptParameters->m_szLocalIPAddress, SOCKET_ADDRESS_ANY_STRING) == 0)
        tSocketAddress.sin_addr.s_addr = htonl(INADDR_ANY);
    else
    {
        // Convert IP address for binding
        if ((tSocketAddress.sin_addr.s_addr = inet_addr(ptParameters->m_szLocalIPAddress)) == ERROR)
        {
            // Error
            CreateSocketMessage(ptParameters, "Unknown IP address.");

            // Cleanup socket
            close(*ptParameters->m_pnIDReference);
            *ptParameters->m_pnIDReference = ERROR;

            // Not successful
            return FALSE;
        }
    }

    // Bind the socket to the local address
    if (bind(*ptParameters->m_pnIDReference, (struct sockaddr *)&tSocketAddress, nSocketAddressSize) == ERROR)
    {
        // Error
        CreateSocketMessage(ptParameters, "Socket bind failed.");

        // Cleanup socket
        close(*ptParameters->m_pnIDReference);
        *ptParameters->m_pnIDReference = ERROR;

        // Not successful
        return FALSE;
    }
}

// Receive socket
if (ptParameters->m_eType == SOCKTYPE_RECEIVE || ptParameters->m_eType == SOCKTYPE_RECEIVE_AND_TRANSMIT)
{
    // Set the receive buffer size
    nSocketOption = SOCKET_BUFFER_SIZE;
    if (setsockopt(*ptParameters->m_pnIDReference, SOL_SOCKET, SO_RCVBUF, (char *)&nSocketOption, sizeof(nSocketOption)) == ERROR)
    {
        // Error
        CreateSocketMessage(ptParameters, "Socket buffer size set failed.");

        // Cleanup socket
        close(*ptParameters->m_pnIDReference);
        *ptParameters->m_pnIDReference = ERROR;

        // Not successful
        return FALSE;
    }
}
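
One cheap cross-check, not in the original code: some stacks round or clamp the value handed to setsockopt(), so reading the option back right after setting it shows what was actually granted. A minimal sketch reusing the same socket handle and the SOCKET_BUFFER_SIZE macro:

    // Read back the receive buffer size actually granted by the stack
    int nGrantedSize = 0;
    int nOptionLength = sizeof(nGrantedSize);

    if (getsockopt(*ptParameters->m_pnIDReference, SOL_SOCKET, SO_RCVBUF,
                   (char *)&nGrantedSize, &nOptionLength) != ERROR)
    {
        printf("SO_RCVBUF: requested %d, granted %d\n", SOCKET_BUFFER_SIZE, nGrantedSize);
    }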

and the socket receive that's being called in an infinite loop:
*The buffer size is definitely large enough

int SocketReceive(int nSocketIndex, char *pBuffer, int nBufferLength)
{
    int nBytesReceived = 0;
    char szError[256];

    // Invalid index or socket
    if (nSocketIndex < 0 || nSocketIndex >= SOCKET_COUNT || g_pnSocketIDs[nSocketIndex] == 0)
    {
        sprintf(szError,"SocketReceive: Invalid socket (%d) or ID (%d)", nSocketIndex, g_pnSocketIDs[nSocketIndex]);
        perror(szError);
        return -1;
    }

    // Invalid buffer length
    if (nBufferLength == 0)
    {
        perror("SocketReceive: zero buffer length");
        return 0;
    }

    // Receive data
    nBytesReceived = recv(g_pnSocketIDs[nSocketIndex], pBuffer, nBufferLength, 0);

    // Error in receiving
    if (nBytesReceived == ERROR)
    {
        // Create error string
        sprintf(szError, "SocketReceive: Data Receive Failure: <%d> ", errno);

        // Set error message
        perror(szError);

        // Return error
        return ERROR;
    }

    // Bytes received
    return nBytesReceived;
}
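
For context, here is a hedged sketch of how SocketReceive() might be driven from the single processing thread; the task function, buffer size, and dispatch-by-length idea are my own illustration rather than the original application code:

#define RX_BUFFER_SIZE 512   // comfortably larger than the biggest (232-byte) message

void ReceiveTask(int nSocketIndex)
{
    char szBuffer[RX_BUFFER_SIZE];
    int  nBytes;

    // Blocking read in an infinite loop, one datagram per call
    for (;;)
    {
        nBytes = SocketReceive(nSocketIndex, szBuffer, RX_BUFFER_SIZE);
        if (nBytes <= 0)
            continue;

        // recv() on a UDP socket returns exactly one datagram, so the
        // length identifies which of the three messages arrived
        switch (nBytes)
        {
            case 232: /* handle high speed message   */ break;
            case 148: /* handle medium speed message */ break;
            case 20:  /* handle low speed message    */ break;
            default:  /* unexpected size             */ break;
        }
    }
}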

Any clues as to why increasing the buffer size to 4,800 results in the low speed messages being read successfully and consistently?

Answer

The basic answer to why an SO_RCVBUF size of 4,799 loses the low speed messages while a size of 4,800 works fine is that, given the mixture of UDP packets coming in, the rate at which they arrive, the rate at which you process incoming packets, and the sizing of the mbuf and cluster pools in your VxWorks kernel, the larger setting allows just enough network stack throughput that the low speed messages are no longer discarded.

The SO_SNDBUF option description in the setsockopt() man page at http://www.vxdev.com/docs/vx55man/vxworks/ref/sockLib.html#setsockopt has this to say about the size specified and its effect on mbuf usage:

The effect of setting the maximum size of buffers (for both SO_SNDBUF and SO_RCVBUF, described below) is not actually to allocate the mbufs from the mbuf pool. Instead, the effect is to set the high-water mark in the protocol data structure, which is used later to limit the amount of mbuf allocation.

UDP packets are discrete units. If you send 10 packets of size 232, that is not considered to be 2,320 bytes of data in contiguous memory. Instead, it is 10 memory buffers within the network stack, because UDP delivers discrete packets while TCP is a continuous stream of bytes.
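
One way to make that concrete is to compare payload bytes with the buffer space the datagrams occupy once each one is rounded up to a cluster. This is a sketch only: the cluster sizes below are assumptions for illustration, and the real pool layout (and whether headers share the cluster) depends on how the kernel's network pools are configured.

#include <stdio.h>

// Hypothetical cluster sizes; the actual pools are set by the kernel's
// network configuration and are not known from the question.
static int ClusterSizeFor(int nPayload)
{
    static const int anClusterSizes[] = { 64, 128, 256, 512, 1024, 2048 };
    int i;

    for (i = 0; i < (int)(sizeof(anClusterSizes) / sizeof(anClusterSizes[0])); i++)
        if (nPayload <= anClusterSizes[i])
            return anClusterSizes[i];

    return nPayload;
}

int main(void)
{
    // One 60 ms cycle from the table above: 6 high + 3 medium + 1 low
    int nPayloadBytes = 6 * 232 + 3 * 148 + 1 * 20;
    int nClusterBytes = 6 * ClusterSizeFor(232) + 3 * ClusterSizeFor(148) + ClusterSizeFor(20);

    printf("payload bytes per 60 ms cycle : %d\n", nPayloadBytes);  // 1856
    printf("cluster bytes per 60 ms cycle : %d\n", nClusterBytes);  // 2368 with these assumed sizes
    return 0;
}

Under that kind of accounting the 20-byte low speed message still costs a whole cluster, so the effective capacity of the receive queue tracks packet counts and pool sizing rather than payload bytes, which is consistent with the cutoff sitting at a value that looks arbitrary in byte terms.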

See How do I tune the network buffering in VxWorks 5.4? on the DDS community web site, which discusses the interdependence of the mixture of mbuf sizes and network clusters.

See How do I resolve a problem with VxWorks buffers? on the DDS community web site.

See this PDF of a 2004 slide presentation, A New Tool to study Network Stack Exhaustion in VxWorks, which discusses using tools such as mBufShow and inetStatShow to see what is happening in the network stack.
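
If it helps, those routines can be invoked directly from a task or the target shell (which accepts C-style calls). This is a sketch only: the names below are taken from the slides as cited, and the exact spellings and availability depend on the VxWorks version and on which show libraries are built into the image.

// Names as given in the referenced slides; availability and exact casing
// may vary between VxWorks versions and images.
mBufShow();       // mbuf/cluster pool usage
inetStatShow();   // per-socket queue state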
