HTTP pipelining - concurrent responses per connection


Question

I was just reading this Wikipedia article on HTTP pipelining and from the diagram it appears that responses can be sent concurrently on one connection. Am I misinterpreting the diagram or is this allowed?

Section 8.1.2.2 of RFC 2616 states:


A server MUST send its responses to those requests in the same order that the requests were received.

Whilst that stops short of explicitly ruling out concurrent responses, it does not mention a need to ensure that responses must not only start in the correct order with relation to requests, but also finish in the correct order.

I also cannot imagine the practicalities of dealing with concurrent responses - how would the client know to which response the received data applies?

Therefore my interpretation of the RFC is that whilst additional requests can be made while the response to the first request is being processed, it is not allowed for the client to send concurrent requests or the server to send concurrent responses on the same connection.

Is this correct? I've attached a diagram below to illustrate my interpretation.

It would prevent the problems I mentioned from occurring, but it does not appear to completely align with the diagram in Wikipedia.

Answer

Short answer: Yes, clients and servers can send requests and responses concurrently.

However, a server cannot send multiple responses to one request, i.e. the request-response pattern still applies. RFC 2616 (and the Wikipedia article you are referring to) simply states that a client does not need to wait for the server's response before sending an additional request on the same connection. So the requests in your diagram look good :).
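To make that concrete, here is a minimal client-side sketch in Python of pipelining on one TCP connection: all three requests are written back to back without waiting for responses, and the responses are then read off the same byte stream. The host and paths are only placeholders, and the sketch assumes the server actually supports pipelining and keeps the connection open.

import socket

HOST = "example.com"   # placeholder host, for illustration only
PORT = 80

# Three requests, pipelined: sent back to back without waiting for a response
# in between. "Connection: close" on the last one lets us read the responses
# simply by draining the socket until EOF.
requests = (
    f"GET /request1.html HTTP/1.1\r\nHost: {HOST}\r\n\r\n"
    f"GET /request2.html HTTP/1.1\r\nHost: {HOST}\r\n\r\n"
    f"GET /request3.html HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n"
).encode()

with socket.create_connection((HOST, PORT)) as sock:
    sock.sendall(requests)        # all requests on the wire at once
    raw = b""
    while True:                   # the responses come back on the same stream,
        chunk = sock.recv(4096)   # in the same order as the requests were sent
        if not chunk:
            break
        raw += chunk

print(raw.decode(errors="replace"))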

But the server doesn't have to wait for each of its responses to finish before it can start transmission of the next response. It can just send the responses to the client as it receives the client's requests. (Which results in the diagram shown in the Wikipedia article.)

Well, let's ignore that whole network delay stuff for a minute here and assume that pipelined request or response messages arrive at once but only after all of them have been sent.


  1. The client sends its requests in a certain order (without waiting for responses in between requests).
  2. The server receives the requests in the same order (TCP guarantees that) all at once.
  3. The server takes the first request message, processes it, and stores the response in a queue.
  4. The server takes the second request message, processes it, and stores the response in a queue.
  5. (You get the idea...)
  6. The server sends the contents of that queue to the client. The responses are stored in order, so the response to the first request is at the beginning of that queue, followed by the response to the second request, and so on (see the sketch after this list).
  7. The client receives the responses in the same order (TCP guarantees that) and associates the first response with the first request it made, and so on.
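A rough sketch, in Python, of what steps 2-6 could look like on the server side. The handle_request() helper is hypothetical, and the sketch assumes body-less GET requests so that a blank line marks the end of each request; a real server would parse the messages properly.

import socket
from collections import deque

def handle_request(raw_request: bytes) -> bytes:
    """Hypothetical handler: turn one request into one complete response."""
    body = b"hello\n"
    return (b"HTTP/1.1 200 OK\r\n"
            b"Content-Length: " + str(len(body)).encode() + b"\r\n"
            b"\r\n" + body)

def serve_connection(conn: socket.socket) -> None:
    # Step 2: the pipelined requests arrive on the connection in order.
    data = conn.recv(65536)
    raw_requests = [r for r in data.split(b"\r\n\r\n") if r.strip()]

    # Steps 3-5: process each request in order and queue its response.
    responses = deque(handle_request(r) for r in raw_requests)

    # Step 6: send the queued responses back in the same order.
    while responses:
        conn.sendall(responses.popleft())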

This still works even if we don't assume that we receive all the messages at once, because TCP guarantees that the data that was sent is received in the same order.

We could also ignore the network completely and just look at the messages that are transferred between server and client.

Client -> Server

GET /request1.html HTTP/1.1
Host: example.com
...

GET /request2.html HTTP/1.1
Host: example.com
...

GET /request3.html HTTP/1.1
Host: example.com
...



Server -> Client

HTTP/1.1 200 OK
Content-Length: 234
...

HTTP/1.1 200 OK
Content-Length: 123
...

HTTP/1.1 200 OK
Content-Length: 345
...

The great thing about TCP is that this particular stream of messages always looks the same. You can send all of the requests first and then receive the responses; you can send request 1 first, receive the first response, send the remaining requests, and receive the remaining responses; you can send the first and part of the second request, receive part of the first response, send the remaining requests, receive the remaining responses; etc. Because TCP guarantees to keep the order of the transmitted messages, we can always associate the first request with the first response and so on.
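To show what "associate the first request with the first response" can look like in practice, here is a small Python sketch that frames each response by its Content-Length header (chunked encoding is ignored for simplicity) and pairs it with the path that was requested at the same position. The two wire-format responses are made up for illustration.

# Two example responses as they would appear on the wire, in request order.
raw = (
    b"HTTP/1.1 200 OK\r\nContent-Length: 5\r\n\r\nfirst"
    b"HTTP/1.1 404 Not Found\r\nContent-Length: 6\r\n\r\nsecond"
)

def split_responses(stream: bytes, count: int):
    """Yield (headers, body) for `count` Content-Length-framed responses, in order."""
    for _ in range(count):
        head, _, rest = stream.partition(b"\r\n\r\n")
        headers = dict(
            line.split(b": ", 1)
            for line in head.split(b"\r\n")[1:]   # skip the status line
            if b": " in line
        )
        length = int(headers.get(b"Content-Length", b"0"))
        body, stream = rest[:length], rest[length:]
        yield head, body

requested_paths = ["/request1.html", "/request2.html"]   # the order we sent them in
for path, (head, body) in zip(requested_paths, split_responses(raw, 2)):
    status_line = head.split(b"\r\n", 1)[0].decode()
    print(f"{path} -> {status_line} ({len(body)} body bytes)")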

I hope this answers your question...
