Why does TCP/IP on Windows 7 take 500 sends to warm up? (Windows 10 and 8 proved not to suffer)


Problem description

We're seeing a strange and unexplained phenomenon with ZeroMQ on Windows 7, sending messages over TCP (or over inproc, as ZeroMQ uses TCP internally for signalling on Windows).

The phenomenon is that the first 500 messages arrive slower and slower, with latency rising steadily. Then latency drops and messages arrive consistently rapidly, except for spikes caused by CPU/network contention.

The issue is described here: https://github.com/zeromq/libzmq/issues/1608

It is consistently 500 messages. If we send without a delay, then messages are batched so we see the phenomenon stretch over several thousand sends. If we delay between sends, we see the graph more clearly. Even delaying as much as 50-100 msec between sends does not change things.
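As an illustration of this kind of measurement (not the original test harness; the echo server and helper names below are made up), a per-send round-trip latency probe over a local TCP connection might look like:

```python
import socket
import threading
import time

def echo_server(listener, n_messages):
    # Accept one connection and echo each message back to the sender.
    conn, _ = listener.accept()
    with conn:
        for _ in range(n_messages):
            conn.sendall(conn.recv(16))

def measure_latencies(n_messages=100, delay=0.0):
    """Time n_messages request/echo round trips, optionally pausing between sends."""
    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))
    listener.listen(1)
    t = threading.Thread(target=echo_server, args=(listener, n_messages))
    t.start()

    latencies = []
    with socket.create_connection(listener.getsockname()) as s:
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # avoid Nagle batching
        for _ in range(n_messages):
            start = time.perf_counter()
            s.sendall(b"0123456789")   # 10-byte message, as in the question
            s.recv(16)                 # wait for the echo: one round trip
            latencies.append(time.perf_counter() - start)
            if delay:
                time.sleep(delay)      # optional gap between sends
    t.join()
    listener.close()
    return latencies
```

On an affected Windows 7 box, plotting the returned list should show the rising curve over the first ~500 entries; elsewhere it should stay flat.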

Message size is also irrelevant. I've tested with 10-byte messages and 10K messages, with the same results.

The maximum latency is always 2 msec (2,000 usec).

On Linux boxes we do not see this phenomenon.

What we'd like to do is eliminate this initial curve, so messages leave on a fresh connection with their normal low latency (around 20-100 usec).


Update: the issue does not show on Windows 10 or 8. It seems to happen only on Windows 7.


Accepted answer

We've found the cause and a workaround. This is a general issue with all TCP activity on Windows 7 (at least), caused by buffering on the receiver side. You can find some hints online under "TCP slow start."

On a new connection, or if the connection is idle for (I think) 150 msec or more, the receiver buffers incoming packets and does not provide these to the application until the receive buffer is full and/or some timeout expires (it's unclear which).

Our workaround in ZeroMQ, where we are using TCP sockets for interthread signalling, is to send a dummy chunk of data on new signal pairs. This forces the TCP stack to work "normally" and we then see consistent latencies of around 100-150 usec.
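A minimal sketch of that warm-up idea on a plain localhost socket pair (the helper name and the 1 KB dummy size are assumptions; ZeroMQ applies this internally on its signaler sockets, not through an API like this):

```python
import socket

WARMUP_SIZE = 1024  # assumed size: enough dummy traffic to push the stack past its cold state

def make_warmed_pair():
    """Create a connected localhost TCP pair and send one dummy chunk across it."""
    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))
    listener.listen(1)
    writer = socket.create_connection(listener.getsockname())
    reader, _ = listener.accept()
    listener.close()
    writer.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

    # Send the dummy chunk and drain it on the reader side so the
    # application-level protocol never sees it.
    writer.sendall(b"\0" * WARMUP_SIZE)
    remaining = WARMUP_SIZE
    while remaining:
        remaining -= len(reader.recv(remaining))
    return writer, reader
```

After `make_warmed_pair()` returns, real traffic on the pair would no longer be the connection's first data, which is the property the workaround relies on.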

I'm not sure whether this is generally useful; for most applications it's profitable to wait a little on reception, so the TCP stack can deliver more to the calling application.

However for apps that send many small messages, this workaround may be helpful.

Note that if the connection is idle, the slow start happens again, so connections should heartbeat every 100 msec or so, if this is critical.
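A hedged sketch of such a heartbeat, assuming a dedicated sender thread and a reserved 1-byte frame (both illustrative; this is not how ZeroMQ frames its traffic):

```python
import socket
import threading

HEARTBEAT = b"\xff"   # assumed 1-byte frame reserved for heartbeats
INTERVAL = 0.1        # 100 msec, per the answer's suggestion

def start_heartbeat(sock, stop_event):
    """Send a heartbeat byte every INTERVAL seconds until stop_event is set."""
    def loop():
        # Event.wait doubles as the sleep; it returns True once stop_event is set.
        while not stop_event.wait(INTERVAL):
            try:
                sock.sendall(HEARTBEAT)
            except OSError:
                break  # peer gone; stop quietly
    t = threading.Thread(target=loop, daemon=True)
    t.start()
    return t
```

The receiving side would need to recognize and discard the heartbeat byte; the point is only that the connection never sits idle long enough to fall back into slow start.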
