Why does latency vary in a WebSocket when it's a static connection?


Question

HTTP creates a new connection for each piece of data transferred over the network, whereas a WebSocket is static: the connection is established once at the start and stays open until the transmission is done. But if the WebSocket is static, why does the latency differ for each data packet?

The latency test app I created shows varying time lags. So what is the advantage of a WebSocket being a static connection, or is this a common issue with WebSockets?

Do I need to create a buffer to control the flow of data, since the data transmission is continuous? Does latency increase when data transmission is continuous?

Answer

There is no overhead to establish a new connection with a statically open WebSocket (the connection is already open and established), but when you make a request halfway around the world, networking takes time, so there is latency when you talk to a server halfway around the world.

That's just how networking works.

You get a near-immediate response from a server on your own LAN, and the further away the server is (in terms of network topology), the more routers each packet must transit through and the more total delay there is. As you saw in your earlier question on this topic, when you ran a tracert from your location to your server's location, there were a LOT of different hops each packet has to traverse. The time for each of these hops adds up, and busy routers may each add a small delay if they aren't processing your packet instantly.

The latency between sending a packet and getting a response is just 2x the packet transit time, plus whatever time your server takes to respond, plus perhaps a tiny bit of overhead for TCP (since it's a reliable protocol, it needs acknowledgements). You cannot speed up the transit time unless you pick a server that is closer or somehow influence the packets onto a faster route (mostly not under your control once you've selected a local ISP).
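The latency breakdown above can be sketched as a toy model (the function name and the 1 ms TCP-overhead default are illustrative assumptions, not measured values):

```javascript
// Hypothetical model of one request/response round trip, per the answer:
// latency ≈ 2 × one-way transit time + server processing time + small TCP overhead.
function roundTripLatencyMs(oneWayTransitMs, serverProcessingMs, tcpOverheadMs = 1) {
  return 2 * oneWayTransitMs + serverProcessingMs + tcpOverheadMs;
}

// A server 80 ms away that takes 5 ms to respond:
console.log(roundTripLatencyMs(80, 5)); // 166
```

Note that the dominant term for a distant server is the doubled transit time, which no client-side change can reduce.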

No amount of buffering on your end will decrease the round-trip time to your server.

In addition, the more network hops there are between your client and server, the more variation you may see in the transit time from one moment to the next. Every router the packet traverses and every link it travels on has its own load, congestion, etc. that varies with time. There will likely be a minimum transit time you ever observe (it will never be faster than some x), but many things can make it slower than that at any given moment. An ISP may even take a router offline for maintenance, putting more load on the other routers handling the traffic, or a route between hops may go down so that a temporary but slower and longer route is substituted in its place. There are literally hundreds of things that can cause the transit time to vary from moment to moment. In general, it won't vary much from one minute to the next, but it can easily vary over the course of a day or longer.
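One way to see this floor-plus-variation behavior is to summarize a series of round-trip samples. This helper is a hypothetical sketch (not taken from the question's test app); it reports the minimum observed time, which approximates the path's floor, and the jitter above it:

```javascript
// Summarize round-trip samples (in ms): the minimum is the best case the
// path ever achieved; "jitter" here is the spread above that floor.
function summarizeLatency(samplesMs) {
  const min = Math.min(...samplesMs);
  const max = Math.max(...samplesMs);
  const mean = samplesMs.reduce((a, b) => a + b, 0) / samplesMs.length;
  return { min, max, mean, jitter: max - min };
}

console.log(summarizeLatency([120, 135, 121, 180, 124]));
// → { min: 120, max: 180, mean: 136, jitter: 60 }
```

Seeing a stable minimum with occasional spikes is normal and says nothing about the WebSocket itself; the variation lives in the network path.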

You haven't said whether this is relevant or not, but when you have poor latency on a given round trip, or when performance is very important, what you want to do is minimize the number of round trips you wait for. You can do that in a couple of ways:

1. Don't sequence small pieces of data. The slowest way to send lots of data is to send a little bit, wait for a response, send a little more, wait for a response, and so on. If you had 100 bytes to send, sent them 1 byte at a time waiting for a response each time, and your round-trip time was X, your total time to send all the data would be 100X. Instead, collect a larger piece of data and send it all at once. If you send the 100 bytes at once, you'd probably have a total delay of only X rather than 100X.
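The arithmetic in point 1 can be made explicit. This toy model (names are illustrative) compares the total wait time for sequenced sends versus one batched send:

```javascript
// Total time when each piece waits for its response before the next is sent:
function sequencedTimeMs(pieceCount, roundTripMs) {
  return pieceCount * roundTripMs;
}

// Total time when all pieces go out in one message: a single round trip.
function batchedTimeMs(roundTripMs) {
  return roundTripMs;
}

// 100 one-byte sends at a 50 ms round trip vs. one 100-byte send:
console.log(sequencedTimeMs(100, 50)); // 5000
console.log(batchedTimeMs(50));        // 50
```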

2. If you can, send data in parallel. As explained above, the pattern of send data, wait for response, send more data, wait for response is slow when the round-trip time is poor. If your data can be tagged so that each piece stands on its own, you can sometimes send data in parallel without waiting for prior responses. In the example above, sending 1 byte, waiting for the response, sending the next byte, and waiting again was very slow. But if you send one byte, then the next, then the next, and process all the responses some time later, you get much, much better throughput. Obviously, if you already have 100 bytes of data you may as well send it all at once, but if the data arrives in real time you may want to send it out as it arrives rather than wait for prior responses. Whether you can do this depends entirely on the data protocol between your client and server.
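A common way to implement the tagging described in point 2 is to give each outgoing message an id and match responses back to it, so sends never block on earlier replies. This is a minimal sketch assuming a JSON message protocol and an injected `send` function (both are assumptions, since the question doesn't specify its protocol):

```javascript
// Track in-flight requests by id so responses can arrive in any order.
class PipelinedClient {
  constructor(send) {
    this.send = send;         // e.g. a WebSocket's send method
    this.nextId = 1;
    this.pending = new Map(); // id → resolve callback
  }

  // Fire off a request immediately; resolve later when its response arrives.
  request(payload) {
    const id = this.nextId++;
    const promise = new Promise((resolve) => this.pending.set(id, resolve));
    this.send(JSON.stringify({ id, payload }));
    return promise;
  }

  // Call this from the socket's message handler.
  handleResponse(raw) {
    const { id, result } = JSON.parse(raw);
    const resolve = this.pending.get(id);
    if (resolve) {
      this.pending.delete(id);
      resolve(result);
    }
  }
}
```

Because responses are matched by id rather than by arrival order, throughput is bounded by bandwidth instead of by the number of round trips.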

3. Send bigger pieces of data at a time. If you can, send bigger chunks of data at once. Depending on your app, it may or may not make sense to wait for data to accumulate before sending it, but if you already have 100 bytes of data, try to send them all at once rather than in smaller pieces.
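Point 3 might look like the following coalescing buffer (a sketch; the class name, the byte threshold, and the injected `send` function are illustrative assumptions):

```javascript
// Accumulate small writes and flush them as one larger message.
class SendBuffer {
  constructor(send, flushBytes = 100) {
    this.send = send;   // e.g. a WebSocket's send method
    this.flushBytes = flushBytes;
    this.chunks = [];
    this.size = 0;
  }

  write(str) {
    this.chunks.push(str);
    this.size += str.length;
    if (this.size >= this.flushBytes) this.flush();
  }

  // Flush whatever has accumulated (also call on a timer or before close).
  flush() {
    if (this.size === 0) return;
    this.send(this.chunks.join(""));
    this.chunks = [];
    this.size = 0;
  }
}
```

Remember that, per the answer, this reduces the number of round trips you wait for; it cannot reduce the round-trip time of any single message.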
