HTTP2 with node.js behind nginx proxy

Question

I have a node.js server running behind an nginx proxy. node.js is running an HTTP 1.1 (no SSL) server on port 3000. Both are running on the same server.

I recently set up nginx to use HTTP2 with SSL (h2). It seems that HTTP2 is indeed enabled and working.

However, I want to know whether the fact that the proxy connection (nginx <--> node.js) is using HTTP 1.1 affects performance. That is, am I missing the HTTP2 benefits in terms of speed because my internal connection is HTTP 1.1?
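
For context, the setup described amounts to something like the following minimal sketch (the handler body is a placeholder; only the port and the absence of TLS come from the description above):

```javascript
// A minimal sketch of the described backend: a plain HTTP/1.1 (no TLS)
// Node.js server on port 3000, with nginx terminating HTTPS/HTTP2 in front.
// The response body is a placeholder.
const http = require('http');

http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from the HTTP/1.1 backend\n');
}).listen(3000, '127.0.0.1', () => {
  console.log('Backend listening on http://127.0.0.1:3000');
});
```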

Answer

In general, the biggest immediate benefit of HTTP/2 is the speed increase offered by multiplexing for browser connections, which are often hampered by high latency (i.e. slow round trip times). Multiplexing also reduces the need for (and expense of) multiple connections, which are a workaround HTTP/1.1 uses to try to achieve similar performance benefits.

For internal connections (e.g. between a web server acting as a reverse proxy and the back end app servers) the latency is typically very, very low, so the speed benefits of HTTP/2 are negligible. Additionally, each app server will typically already be a separate connection, so again there are no gains here.

So you will get most of your performance benefit from just supporting HTTP/2 at the edge. This is a fairly common setup - similar to the way HTTPS is often terminated on the reverse proxy/load balancer rather than going all the way through.
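
As an illustration, a minimal sketch of such an edge-only HTTP/2 setup might look like the following nginx server block. The server_name and certificate paths are placeholders; only the backend port (3000) comes from the question:

```nginx
# Hedged sketch: HTTP/2 terminated at the edge, HTTP/1.1 to the backend.
server {
    listen 443 ssl http2;                  # h2 is terminated here, at the edge
    server_name example.com;               # placeholder

    ssl_certificate     /etc/nginx/ssl/example.com.crt;   # placeholder path
    ssl_certificate_key /etc/nginx/ssl/example.com.key;   # placeholder path

    location / {
        proxy_http_version 1.1;            # the internal hop stays HTTP/1.1
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:3000;  # plain HTTP to the Node backend
    }
}
```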

However, there are potential benefits to supporting HTTP/2 all the way through. For example, it could allow server push all the way from the application. There are also potential benefits from the reduced packet size on that last hop, due to the binary nature of HTTP/2 and header compression. Though, like latency, bandwidth is typically less of an issue for internal connections, so the importance of this is arguable. Finally, some argue that a reverse proxy does less work connecting an HTTP/2 connection to another HTTP/2 connection than to an HTTP/1.1 connection, as there is no need to convert one protocol to the other, though I'm sceptical whether that is even noticeable since they are separate connections (unless it's acting simply as a TCP pass-through proxy). So, to me, the main reason for end-to-end HTTP/2 is to allow end-to-end server push, but even that is probably better handled with HTTP Link headers and 103 Early Hints, due to the complications of managing push across multiple connections.
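
A hedged sketch of that Link-header alternative might look like this on the Node side. The asset path is a placeholder, and whether the edge turns the header into a push (e.g. via nginx's http2_push_preload, where supported) or the browser simply preloads the resource depends on the deployment:

```javascript
// Minimal sketch (not the questioner's code): the HTTP/1.1 Node backend
// advertises a resource via a Link preload header instead of relying on
// end-to-end HTTP/2 server push. The asset path is a placeholder.
const http = require('http');

http.createServer((req, res) => {
  // An HTTP/2-capable edge can turn this into a push (where supported),
  // or the browser can simply start fetching the asset early.
  res.setHeader('Link', '</assets/app.css>; rel=preload; as=style');
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end('<link rel="stylesheet" href="/assets/app.css">Hello');
}).listen(3000);
```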

For now, while servers are still adding support and server push usage is low (and still being experimented with to define best practice), I would recommend having HTTP/2 only at the end point. At the time of writing, nginx also doesn't support HTTP/2 for proxied (proxy_pass) connections to backends (though Apache does) and has no plans to add this, and the nginx developers make an interesting point about whether a single HTTP/2 connection might even introduce slowness (emphasis mine):

Is HTTP/2 proxy support planned for the near future?

Short answer:

No, there are no plans.

Long answer:

There is almost no sense to implement it, as the main HTTP/2 benefit is that it allows multiplexing many requests within a single connection, thus [almost] removing the limit on number of simultaneous requests - and there is no such limit when talking to your own backends. Moreover, things may even become worse when using HTTP/2 to backends, due to single TCP connection being used instead of multiple ones.

On the other hand, implementing HTTP/2 protocol and request multiplexing within a single connection in the upstream module will require major changes to the upstream module.

Due to the above, there are no plans to implement HTTP/2 support in the upstream module, at least in the foreseeable future. If you still think that talking to backends via HTTP/2 is something needed - feel free to provide patches.

Finally, it should also be noted that, while browsers require HTTPS for HTTP/2 (h2), most servers don't, and so this final hop could be supported over plain HTTP (h2c). So there would be no need for end-to-end encryption if that is not present on the Node part (as it often isn't). Though, depending on where the backend server sits in relation to the front end server, using HTTPS even for this connection is perhaps something that should be considered if traffic will be travelling across an unsecured network (e.g. CDN to origin server across the internet).
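
To illustrate the h2c point only (this is not something nginx's proxy_pass can currently consume, as discussed above), Node's built-in http2 module can serve HTTP/2 without TLS:

```javascript
// Hedged sketch of an h2c (HTTP/2 over cleartext) backend using Node's
// built-in http2 module: createServer() without TLS options speaks h2c.
// The port and response body are placeholders.
const http2 = require('http2');

const server = http2.createServer();

server.on('stream', (stream, headers) => {
  stream.respond({ ':status': 200, 'content-type': 'text/plain' });
  stream.end('Hello over h2c\n');
});

server.listen(3000);
```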
