Optimizing File Caching and HTTP/2


Question

Our site is considering making the switch to HTTP/2.

My understanding is that HTTP/2 renders optimization techniques like file concatenation obsolete, since a server using HTTP/2 just sends one request.

Instead, the advice I am seeing is that it's better to keep file sizes smaller so that they are more likely to be cached by a browser.

It probably depends on the size of a website, but how small should a website's files be if it's using HTTP/2 and wants to focus on caching?

In our case, our many individual JS and CSS files fall in the 1 KB to 180 KB range. jQuery and Bootstrap might be more. Cumulatively, a fresh download of a page on our site is usually less than 900 KB.

So I have two questions:

Are these file sizes small enough to be cached by browsers?

If they are small enough to be cached, is it good to concatenate files anyway for users whose browsers don't support HTTP/2?

Would it hurt to have larger file sizes in this case AND use HTTP/2? This way, it would benefit users running either protocol, because the site could be optimized for both HTTP/1.1 and HTTP/2.

Answer

Let's clear up a few things:

My understanding is that http2 renders optimization techniques like file concatenation obsolete, since a server using http2 just sends one request.

HTTP/2 renders optimisation techniques like file concatenation somewhat obsolete, since HTTP/2 allows many files to download in parallel across the same connection. Previously, in HTTP/1.1, the browser could request a file and then had to wait until that file was fully downloaded before it could request the next file. This led to workarounds like file concatenation (to reduce the number of files required) and multiple connections (a hack to allow downloads in parallel).

However, there's a counter-argument that there are still overheads with multiple files, including requesting them, caching them, reading them from cache... etc. These are much reduced in HTTP/2 but not gone completely. Additionally, gzipping one larger text file works better than gzipping lots of smaller files separately. Personally, however, I think the downsides outweigh these concerns, and I think concatenation will die out once HTTP/2 is ubiquitous.
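The compression point is easy to check. Below is a minimal sketch (the file contents are invented for illustration) comparing ten small stylesheet-like files compressed separately against compressing their concatenation; a single combined stream lets the compressor exploit redundancy across files and pays the per-stream overhead only once:

```python
import zlib

# Ten hypothetical small CSS files that share a lot of boilerplate.
files = [f".widget-{i} {{ color: #333; margin: 0 auto; padding: 4px; }}\n" * 20
         for i in range(10)]

def compressed_size(text: str) -> int:
    # zlib uses the same DEFLATE algorithm as gzip, so size is a fair proxy.
    return len(zlib.compress(text.encode(), 9))

separate = sum(compressed_size(f) for f in files)   # each file compressed on its own
combined = compressed_size("".join(files))          # one concatenated bundle
print(separate, combined)  # combined is noticeably smaller
```

Real JS/CSS bundles show the same effect, though the gap shrinks as individual files grow larger.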

Instead, the advice I am seeing is that it's better to keep file sizes smaller so that they are more likely to be cached by a browser.

It probably depends on the size of a website, but how small should a website's files be if it's using http2 and wants to focus on caching?

The file size has no bearing on whether it will be cached or not (unless we are talking about truly massive files bigger than the cache itself). The reason splitting files into smaller chunks is better for caching is that, if you make any changes, any files which have not been touched can still be served from the cache. If you have all your JavaScript (for example) in one big .js file and you change one line of code, then the whole file needs to be downloaded again - even if it was already in the cache.
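A common way to get this finer-grained caching is to fingerprint each file with a hash of its content so that only files whose content actually changed get a new URL. A minimal sketch of such a build-step helper (the function name and hash length are my own choices, not from any particular tool):

```python
import hashlib

def hashed_filename(name: str, content: bytes) -> str:
    """Return e.g. 'app.<hash>.js' so unchanged files stay cached across deploys."""
    digest = hashlib.md5(content).hexdigest()[:8]
    stem, _, ext = name.rpartition(".")
    return f"{stem}.{digest}.{ext}"

v1 = hashed_filename("app.js", b"console.log('v1');")
v2 = hashed_filename("app.js", b"console.log('v2');")
print(v1, v2)  # different content -> different URL, so only changed files re-download
```

Fingerprinted files can then be served with far-future cache headers, since a change in content always produces a new URL.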

Similarly, if you have an image sprite map, that's great for reducing separate image downloads in HTTP/1.1, but it requires the whole sprite file to be downloaded again if you ever need to edit it - to add one extra image, for example. Not to mention that the whole thing is downloaded even for pages which use just one of those image sprites.

However, saying all that, there is a train of thought that says the benefit of long-term caching is overstated. See this article, and in particular the section on HTTP caching, which goes to show that most people's browser cache is smaller than you think, so it's unlikely your resources will be cached for very long. That's not to say caching is not important - but rather that it's useful for browsing around in that session rather than long term. So each visit to your site will likely download all your files again anyway - unless the visitor is very frequent, has a very big cache, or doesn't surf the web much.

is it good to concatenate files anyway for users who use browsers that don't support http2?

Possibly. However, other than on Android, HTTP/2 browser support is actually very good, so it's likely most of your visitors are already HTTP/2 enabled.

Saying that, there are no extra downsides to concatenating files under HTTP/2 that weren't there already under HTTP/1.1. OK, it could be argued that a number of small files could be downloaded in parallel over HTTP/2, whereas a larger file needs to be downloaded as one request, but I don't buy that that slows it down much, if at all. I have no proof of this, but gut feel suggests the data still needs to be sent either way, so you have a bandwidth problem or you don't. Additionally, the overhead of requesting many resources, although much reduced in HTTP/2, is still there. Latency is still the biggest problem for most users and sites - not bandwidth. Unless your resources are truly huge, I doubt you'd notice the difference between downloading one big resource in one go and the same data split into ten little files downloaded in parallel over HTTP/2 (though you would in HTTP/1.1). Not to mention the gzipping issues discussed above.

So, in my opinion, there's no harm in continuing to concatenate for a little while longer. At some point you'll need to make the call as to whether the downsides outweigh the benefits given your user profile.

Would it hurt to have larger file sizes in this case AND use HTTP2? This way, it would benefit users running either protocol because a site could be optimized for both http and http2.

Absolutely wouldn't hurt at all. As mentioned above, there are (basically) no extra downsides to concatenating files under HTTP/2 that weren't there already under HTTP/1.1. It's just not that necessary under HTTP/2 anymore, and it has downsides (it potentially reduces caching effectiveness, requires a build step, and makes debugging more difficult as deployed code isn't the same as the source code... etc.).

Use HTTP/2 and you'll still see big benefits for any site - except the simplest sites, which will likely see no improvement but also no negatives. And, as older browsers can stick with HTTP/1.1, there are no downsides for them. When, or if, you decide to stop implementing HTTP/1.1 performance tweaks like concatenation is a separate decision.

In fact, the only reason not to use HTTP/2 is that implementations are still fairly bleeding edge, so you might not be comfortable running your production website on it just yet.

**** Edit August 2016 ****

This post from an image-heavy, bandwidth-bound site has recently caused some interest in the HTTP/2 community as one of the first documented examples of where HTTP/2 was actually slower than HTTP/1.1. This highlights the fact that HTTP/2 technology and understanding are still new and will require some tweaking for some sites. There is no such thing as a free lunch, it seems! Well worth a read, though bear in mind that this is an extreme example, and most sites are far more impacted, performance-wise, by latency issues and connection limitations under HTTP/1.1 than by bandwidth issues.

