What happens when a TCP/UDP server is publishing faster than the client is consuming?


Question


I am trying to get a handle on what happens when a server publishes (over tcp, udp, etc.) faster than a client can consume the data.


Within a program I understand that if a queue sits between the producer and the consumer, it will start to get larger. If there is no queue, then the producer simply won't be able to produce anything new, until the consumer can consume (I know there may be many more variations).


I am not clear on what happens when data leaves the server (which may be a different process, machine or data center) and is sent to the client. If the client simply can't respond to the incoming data fast enough, assuming the server and the consumer are very loosely coupled, what happens to the in-flight data?


Where can I read to get details on this topic? Do I just have to read the low level details of TCP/UDP?

Thanks

Answer


With TCP there's a TCP window which is used for flow control. TCP only allows a certain amount of data to remain unacknowledged at a time. If the server is producing data faster than the client is consuming it, then the amount of unacknowledged data will increase until the TCP window is 'full'; at that point the sending TCP stack will wait and will not send any more data until the client acknowledges some of the data that is pending.
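You can observe this window-full behaviour locally. The following is a minimal sketch (not part of the original answer) that uses a connected socketpair as a stand-in for a real TCP connection, shrinks the send buffer so it fills quickly, and never reads on the other end; the buffer sizes and timeout are illustrative:

```python
import socket

# A connected pair of stream sockets stands in for a real
# client/server TCP connection; the flow-control behaviour is the same.
writer, reader = socket.socketpair()

# Shrink the send buffer so the "window" fills quickly.
writer.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4096)

# Give send() a timeout so we can observe that it *would* block
# instead of hanging the demo forever.
writer.settimeout(0.5)

chunk = b"x" * 4096
sent = 0
blocked = False
try:
    # The reader never calls recv(), so once the kernel buffers on
    # both ends are full, send() can make no progress and times out.
    while True:
        sent += writer.send(chunk)
except socket.timeout:
    blocked = True

print(f"kernel buffered {sent} bytes, then send() stalled: {blocked}")
writer.close()
reader.close()
```

Once the reader starts consuming (and the receiving stack acknowledges data), the blocked sender would be able to make progress again.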


With UDP there's no such flow control system; it's unreliable, after all. The UDP stacks on both client and server are allowed to drop datagrams if they feel like it, as are all routers between them. If you send more datagrams than the link can deliver to the client, or if the link delivers more datagrams than your client code can receive, then some of them will get thrown away. The server and client code will likely never know unless you have built some form of reliable protocol over basic UDP. Though actually you may find that datagrams are NOT thrown away by the network stack and that the NIC drivers simply chew up all available non-paged pool and eventually crash the system (see this blog posting for more details).
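The silent-drop behaviour is easy to demonstrate on a single machine. This sketch (mine, not from the original answer) gives a UDP receiver a deliberately tiny receive buffer and never drains it while a burst of datagrams arrives; the counts and sizes are illustrative, and the exact number that survives is OS-dependent:

```python
import socket

# Receiver with a deliberately tiny receive buffer, never drained
# while the burst is in flight.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4096)
rx.bind(("127.0.0.1", 0))          # let the OS pick a free port
addr = rx.getsockname()

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

SENT = 500
for _ in range(SENT):
    try:
        tx.sendto(b"d" * 512, addr)
    except OSError:
        pass  # some systems report ENOBUFS instead of dropping silently

# Now drain whatever actually survived in the receive buffer.
rx.setblocking(False)
received = 0
while True:
    try:
        rx.recvfrom(2048)
        received += 1
    except BlockingIOError:
        break

print(f"sent {SENT} datagrams, received {received}")
tx.close()
rx.close()
```

Note that the sender gets no indication of the loss: every `sendto()` that the local stack accepts "succeeds", regardless of what reaches the application on the other side.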


Back to TCP: how your server code deals with the TCP window becoming full depends on whether you are using blocking I/O, non-blocking I/O or async I/O.


  • If you are using blocking I/O then your send calls will block and your server will slow down; effectively your server is now in lock step with your client. It can't send more data until the client has received the pending data.


  • If the server is using non-blocking I/O then you'll likely get an error return that tells you that the call would have blocked; you can do other things in the meantime, but your server will need to resend the data at a later date...


  • If you're using async I/O then things may be more complex. With async I/O using I/O Completion Ports on Windows, for example, you won't notice anything different at all. Your overlapped sends will still be accepted just fine, but you might notice that they are taking longer to complete. The overlapped sends are being queued on your server machine and are using memory for your overlapped buffers, and probably using up 'non-paged pool' as well. If you keep issuing overlapped sends then you run the risk of exhausting non-paged pool memory or using a potentially unbounded amount of memory as I/O buffers. Therefore, with async I/O and servers that COULD generate data faster than their clients can consume it, you should write your own flow control code that you drive using the completions from your writes. I have written about this problem on my blog here and here and my server framework provides code which deals with it automatically for you.
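The non-blocking case is straightforward to see in a few lines. A sketch of my own (reusing the socketpair stand-in from above, with an illustrative buffer size): the kernel accepts data until its buffers are full, then `send()` fails with EWOULDBLOCK/EAGAIN, which Python surfaces as `BlockingIOError`:

```python
import socket

writer, reader = socket.socketpair()
writer.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4096)
writer.setblocking(False)          # non-blocking mode

chunk = b"x" * 4096
sent = 0
would_block = False
try:
    while True:
        sent += writer.send(chunk)  # may be a partial send each time
except BlockingIOError:
    # EWOULDBLOCK/EAGAIN: the kernel buffers are full. The application
    # must keep the unsent data and retry later, typically when
    # select/poll/epoll reports the socket writable again.
    would_block = True

print(f"kernel accepted {sent} bytes before EWOULDBLOCK: {would_block}")
writer.close()
reader.close()
```

This is the "error return that tells you the call would have blocked" from the bullet above; the retry bookkeeping is exactly what your server code has to provide.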


As far as the data 'in flight' is concerned, the TCP stacks in both peers will ensure that the data arrives as expected (i.e. in order and with nothing missing); they'll do this by resending data as and when required.
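The answer's async-I/O advice amounts to bounding the amount of data your application keeps in flight. A language-agnostic sketch of that idea (mine, using Python's asyncio with illustrative names and sizes; the original describes the same pattern driven by IOCP write completions): a bounded queue between producer and consumer suspends the producer whenever too many writes are pending.

```python
import asyncio

MAX_IN_FLIGHT = 8  # illustrative bound on queued, unsent messages

async def producer(queue: asyncio.Queue) -> None:
    # A bounded queue gives back-pressure for free: once it holds
    # MAX_IN_FLIGHT items, put() suspends until the consumer catches up.
    for i in range(100):
        await queue.put(f"message {i}".encode())
    await queue.put(None)          # sentinel: no more data

async def consumer(queue: asyncio.Queue, out: list) -> None:
    while True:
        item = await queue.get()
        if item is None:
            break
        await asyncio.sleep(0)     # stand-in for the slow client/socket write
        out.append(item)

async def main() -> list:
    queue: asyncio.Queue = asyncio.Queue(maxsize=MAX_IN_FLIGHT)
    out: list = []
    await asyncio.gather(producer(queue), consumer(queue, out))
    return out

messages = asyncio.run(main())
print(f"delivered {len(messages)} messages, never more than "
      f"{MAX_IN_FLIGHT} queued at once")
```

Whatever the I/O model, the principle is the same: memory used for pending sends stays bounded, and the producer is throttled to the consumer's pace instead of queueing without limit.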

