Hard downsides of long polling?


Question

For interactive web apps, things like WebSockets are getting more popular. However, as the client and proxy world is not always fully compliant, one usually uses a complex framework like Socket.IO, which hides several different transport mechanisms so that a fallback is available for any case in which one of them is blocked.

I just wonder what the downsides of a properly implemented long polling setup are, because with today's servers like node.js it is quite easy to implement, and it relies on old HTTP technology which is well supported (even though the long-polling behaviour itself may still break on some intermediaries).
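To make the "easy to implement on node.js" point concrete, a minimal long-polling endpoint could look roughly like the sketch below; the /poll route, the publish() helper and the 25-second heartbeat timeout are illustrative assumptions, not details from the question.

```javascript
// Minimal long-polling sketch using Node's built-in http module (illustrative only).
const http = require('http');

let waiting = []; // { res, timer } for every client currently parked on /poll

// Call this whenever the server has news to push: it answers all parked requests.
function publish(message) {
  const clients = waiting;
  waiting = [];
  for (const { res, timer } of clients) {
    clearTimeout(timer);
    res.end(JSON.stringify({ events: [message] }));
  }
}

http.createServer((req, res) => {
  if (req.url !== '/poll') {
    res.writeHead(404);
    res.end();
    return;
  }
  res.writeHead(200, { 'Content-Type': 'application/json' });
  const entry = { res, timer: null };
  // Heartbeat / timeout: answer with an empty event list so the client reconnects
  // before some proxy or load balancer kills the seemingly idle request.
  entry.timer = setTimeout(() => {
    waiting = waiting.filter((e) => e !== entry);
    res.end(JSON.stringify({ events: [] }));
  }, 25000);
  waiting.push(entry);
  res.on('close', () => { // client went away before we answered
    clearTimeout(entry.timer);
    waiting = waiting.filter((e) => e !== entry);
  });
}).listen(3000);
```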

From a high-level view, long polling (despite some additional overhead, feasible for medium-traffic apps) resembles the true push behaviour of WebSockets, as the server actually sends its answer whenever it likes (subject to some timeout / heartbeat mechanism).

So we have some more overhead due to the additional TCP/IP acknowledgements, I guess, but no constant traffic like frequent polling would produce.

And using an event-driven server, we would have no thread overhead from keeping the connections blocked.

So is there any other hard downside that forces medium-traffic apps like chats to use WebSockets rather than long polling?

Answer

Overhead

It will create a new connection each time, so it will send the HTTP headers... including the cookie header, which may be large.

Also, just "checking whether there is something new" is another connection for nothing. Connections imply work for many components: firewalls, load balancers, web servers, etc. Establishing the connection is probably the most time-consuming part as soon as your IT infrastructure has several of these inspectors in the path.

If you are using HTTPS, you are performing the most expensive operation, the TLS handshake, again and again. TLS performance is good once the connection is established and the symmetric encryption is working, but the process of establishing the connection, the key exchange and all that jazz is not fast.

Also, every time a connection is made, log entries are written somewhere, counters are incremented somewhere, memory is consumed, objects are created... etc. For example, the reason why we have different logging configurations in production and in development is that writing log entries also affects performance.

When is a long-polling user connected or disconnected? If you check this at a given moment in time... how long would you reliably have to wait before double-checking, to be sure the user is really disconnected or still connected?

This may be totally irrelevant if your application just broadcasts stuff, but it may be very relevant if your application is a game.

This matters.
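With a WebSocket, a disconnect is an explicit event; with long polling you can only infer it from the absence of the next poll. A common workaround is a per-user grace timer, as in this sketch (the 30-second grace period and the markOnline/markOffline helpers are assumptions for illustration):

```javascript
// Presence tracking for long polling (sketch): a user only counts as disconnected
// once no new poll has arrived within GRACE_MS after the previous one ended.
const GRACE_MS = 30000;          // assumed grace period between polls
const graceTimers = new Map();   // userId -> pending "mark offline" timer

function onPollStarted(userId) {
  clearTimeout(graceTimers.get(userId)); // the user is evidently still here
  graceTimers.delete(userId);
  markOnline(userId);                    // hypothetical presence helper
}

function onPollEnded(userId) {
  // Do not mark the user offline yet: the client should reconnect within the grace period.
  const timer = setTimeout(() => {
    graceTimers.delete(userId);
    markOffline(userId);                 // hypothetical helper; only now trust the disconnect
  }, GRACE_MS);
  graceTimers.set(userId, timer);
}
```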

Since a new connection is created each time, if you have load-balanced servers, in a round-robin scenario you cannot know on which server the next connection is going to land.

When the server a user is connected to is known, as when using a WebSocket, you can push events to that server straight away, and the server will relay them over the connection. If the user disconnects, the server can notify straight away that the user is not connected anymore, and when he connects again he can subscribe again.

If it is unknown which server the user is connected to at the moment an event for him is generated, you have to wait for the user to connect, so that you can then say "hey, user 123 is here, give me all the news since this timestamp", which makes it a little more cumbersome. Long polling is not really push technology but request-response, so if you plan an EDA architecture, at some point you are going to have some level of impedance to address; for example, you need an event aggregator that can give you all the events from a given timestamp (the last time that user connected to ask for news).
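A minimal version of such an event aggregator could just buffer events per user under a monotonically increasing sequence number (a sequence number is safer than a raw timestamp for ordering); the names and the in-memory Map below are illustrative assumptions:

```javascript
// Sketch of an event aggregator: events are stored per user, and a reconnecting
// long-poll client asks for everything newer than the last sequence number it saw.
const eventsByUser = new Map(); // userId -> [{ seq, payload }]
let nextSeq = 1;

function recordEvent(userId, payload) {
  if (!eventsByUser.has(userId)) eventsByUser.set(userId, []);
  eventsByUser.get(userId).push({ seq: nextSeq++, payload });
}

function eventsSince(userId, lastSeq) {
  // "Hey, user 123 is here, give me all the news since this point."
  return (eventsByUser.get(userId) || []).filter((e) => e.seq > lastSeq);
}
```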

SignalR (I guess it is the .NET equivalent of socket.io), for example, has a message bus named backplane that relays all the messages to all the servers, as the key for scaling out. Therefore, when a user connects to another server, "his" pending events are there "as well" (!). It is a "not too bad approach", but as you can guess, it affects the throughput:

Limitations

Using a backplane, the maximum message throughput is lower than it is when clients talk directly to a single server node. That's because the backplane forwards every message to every node, so the backplane can become a bottleneck. Whether this limitation is a problem depends on the application. For example, here are some typical SignalR scenarios:

  • Server broadcast (e.g., stock ticker): Backplanes work well for this scenario, because the server controls the rate at which messages are sent.

  • Client-to-client (e.g., chat): In this scenario, the backplane might be a bottleneck if the number of messages scales with the number of clients; that is, if the rate of messages grows proportionally as more clients join.

  • High-frequency realtime (e.g., real-time games): A backplane is not recommended for this scenario.

For some projects, this may be a showstopper.

Some applications just broadcast general data, but others have connection semantics, like for example a multiplayer game, where it is important to send the right events to the right connections.

Long polling is a good solution for small projects, but it becomes a big burden for highly scalable apps that need high-frequency and/or very segmented event sending.

