HTTP2 with node.js behind nginx proxy

Problem Description

I have a node.js server running behind an nginx proxy. node.js is running an HTTP 1.1 (no SSL) server on port 3000. Both are running on the same server.

I recently set up nginx to use HTTP2 with SSL (h2). It seems that HTTP2 is indeed enabled and working.
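As a concrete illustration, a setup like the one described might use an nginx server block along these lines. The server name and certificate paths are placeholders, and `listen ... http2` assumes an nginx version before 1.25 (where the separate `http2 on;` directive was introduced):

```nginx
server {
    # Terminate TLS and speak HTTP/2 (h2) to browsers.
    listen 443 ssl http2;
    server_name example.com;                            # placeholder

    ssl_certificate     /etc/nginx/certs/example.com.pem;  # placeholder paths
    ssl_certificate_key /etc/nginx/certs/example.com.key;

    location / {
        # The internal hop to node.js stays plain HTTP/1.1.
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

With this, browsers negotiate h2 with nginx while the proxied connection remains HTTP/1.1 without TLS, exactly the situation the question asks about.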

However, I want to know whether the fact that the proxy connection (nginx <--> node.js) is using HTTP 1.1 affects performance. That is, am I missing the HTTP2 benefits in terms of speed because my internal connection is HTTP 1.1?

Recommended Answer

In general, the biggest immediate benefit of HTTP/2 is the speed increase offered by multiplexing for browser connections, which are often hampered by high latency (i.e. slow round-trip times). Multiplexing also reduces the need for (and expense of) multiple connections, which is a workaround used to achieve similar performance benefits in HTTP/1.1.

For internal connections (e.g. between a webserver acting as a reverse proxy and back-end app servers) the latency is typically very, very low, so the speed benefits of HTTP/2 are negligible. Additionally, each app server will typically already have its own separate connection, so again there are no gains here.

So you will get most of your performance benefit from just supporting HTTP/2 at the edge. This is a fairly common setup - similar to the way HTTPS is often terminated on the reverse proxy/load balancer rather than carried all the way through.

However, there are potential benefits to supporting HTTP/2 all the way through. For example, it could allow server push all the way from the application. There are also potential benefits from reduced packet size on that last hop, due to the binary nature of HTTP/2 and its header compression. Though, like latency, bandwidth is typically less of an issue for internal connections, so the importance of this is arguable. Finally, some argue that a reverse proxy does less work connecting an HTTP/2 connection to an HTTP/2 connection than to an HTTP/1.1 connection, as there is no need to convert one protocol to the other - though I'm sceptical whether that is even noticeable, since they are separate connections (unless the proxy is acting simply as a TCP pass-through). So, to me, the main reason for end-to-end HTTP/2 is to allow end-to-end server push, but even that is probably better handled with HTTP Link headers and 103 Early Hints, due to the complications of managing push across multiple connections, and I'm not aware of any HTTP proxy server that supports this (few enough support HTTP/2 at the backend, never mind chaining HTTP/2 connections like this) - so you would need a layer-4 load balancer forwarding TCP packets rather than chaining HTTP requests, which brings other complications.
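To sketch the Link-header approach mentioned above: the application can send a preload hint such as `Link: </style.css>; rel=preload; as=style`, and nginx (1.13.9 and later, before push support was removed in 1.25.1) can translate it into an HTTP/2 push at the edge with a single directive. The server name and backend address here are illustrative:

```nginx
server {
    listen 443 ssl http2;
    server_name example.com;                 # placeholder

    # Turn "Link: ...; rel=preload" headers emitted by the proxied
    # backend into HTTP/2 server pushes at the edge.
    http2_push_preload on;

    location / {
        proxy_pass http://127.0.0.1:3000;    # HTTP/1.1 hop to the app server
    }
}
```

This way the application keeps control over what is pushed, while the push itself happens only on the edge connection where HTTP/2 is actually in play.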

For now, while servers are still adding support and server push usage is low (and best practice is still being experimented with), I would recommend only having HTTP/2 at the end point. At the time of writing, nginx also doesn't support HTTP/2 for proxy_pass connections (though Apache does), and has no plans to add this; they make an interesting point about whether a single HTTP/2 connection might introduce slowness (emphasis mine):

Is HTTP/2 proxy support planned for the near future?

Short answer:

No, there are no plans.

Long answer:

There is almost no sense to implement it, as the main HTTP/2 benefit is that it allows multiplexing many requests within a single connection, thus [almost] removing the limit on the number of simultaneous requests - and there is no such limit when talking to your own backends. Moreover, things may even become worse when using HTTP/2 to backends, due to a single TCP connection being used instead of multiple ones.

On the other hand, implementing HTTP/2 protocol and request multiplexing within a single connection in the upstream module will require major changes to the upstream module.

Due to the above, there are no plans to implement HTTP/2 support in the upstream module, at least in the foreseeable future. If you still think that talking to backends via HTTP/2 is something needed - feel free to provide patches.

Finally, it should also be noted that, while browsers require HTTPS for HTTP/2 (h2), most servers don't, and so this final hop could be supported over plain HTTP (h2c). So there would be no need for end-to-end encryption if it is not present on the Node part (as it often isn't). Though, depending on where the backend server sits in relation to the front-end server, using HTTPS even for this connection is perhaps something that should be considered if traffic will be travelling across an unsecured network (e.g. CDN to origin server across the internet).
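If the hop to the backend does cross an untrusted network, nginx can proxy over TLS while still speaking HTTP/1.1. A sketch, with a hypothetical upstream name and certificate path:

```nginx
location / {
    # HTTP/1.1 over TLS to the origin; no HTTP/2 on this hop.
    proxy_pass https://origin.internal.example:3000;   # hypothetical upstream
    proxy_ssl_server_name on;        # send SNI to the upstream
    proxy_ssl_verify on;             # verify the upstream's certificate
    proxy_ssl_trusted_certificate /etc/nginx/certs/origin-ca.pem;  # placeholder CA bundle
}
```

Note that `proxy_ssl_verify` is off by default in nginx, so enabling it (with a trusted CA bundle) is what actually protects this hop against man-in-the-middle attacks.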

Edit August 2021

HTTP/1.1 being text-based rather than binary does make it vulnerable to various request smuggling attacks. At Defcon 2021, PortSwigger demonstrated a number of real-life attacks, mostly related to issues that arise when downgrading front-end HTTP/2 requests to back-end HTTP/1.1 requests. These could probably mostly be avoided by speaking HTTP/2 all the way through, but given the current level of support in front-end servers and CDNs for speaking HTTP/2 to the backend, and in backends for HTTP/2, it seems it will take a long time for this to become common, and front-end HTTP/2 servers ensuring these attacks aren't exploitable seems like the more realistic solution.
