After more than 15 years of living with HTTP/1.1, we can finally start to enjoy the benefits of HTTP/2! As an early adopter, I've taken a look at some of the key improvements in HTTP/2 and at how we might have to undo some of the changes we made as developers to accommodate the limitations of HTTP/1.1 in the real world.
After many, many years of working around limitations in HTTP/1.x, the specification for HTTP/2 was published earlier this year by the IETF as RFC 7540, and you can read it if you like. Whilst early drafts of HTTP/2 required the use of TLS, that requirement didn't make the final cut, but any browser that currently supports HTTP/2 will only do so if the site is served using TLS. From my initial testing HTTP/2 is considerably faster than HTTP/1.1, and it replaces my support for the SPDY protocol. Some of the key changes in HTTP/2 are the compression of header data, which is now sent in a binary format rather than plain text, and the fact that HTTP/2 uses a single, multiplexed connection to a host rather than opening multiple connections like HTTP/1.1 did. This is where some of the performance optimisations we implemented for HTTP/1.1 could actually degrade the experience of using HTTP/2.
HTTP/1.1 vs. HTTP/2
Let's take a look through a few of the optimisations we made when using HTTP/1.1, how they might impact the use of HTTP/2 and what we can do about them.
A common method of increasing the number of resources a browser can download in parallel was domain sharding. This is where a host might serve their image files from a subdomain like images.scotthelme.co.uk to work around the limit on the number of connections a browser will open to any given host. If the browser will only open 8 connections to scotthelme.co.uk then it can only download 8 resources at any given time. If images on the page are served from images.scotthelme.co.uk then the browser can now open up to 16 connections to download resources: 8 to scotthelme.co.uk and another 8 to images.scotthelme.co.uk. This would give you faster page load times thanks to the higher number of resources being downloaded in parallel. In HTTP/2, all requests and their responses are multiplexed over a single TCP connection. This means we don't need multiple connections to download multiple resources in parallel; that single TCP connection will do the trick just nicely. Unfortunately, for sites that currently domain shard and plan to migrate to HTTP/2, this could actually degrade performance due to the now-unnecessary overhead of establishing a second connection and the extra DNS lookup that comes with it.
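In HTML terms, sharding simply means pointing some asset URLs at the extra hostname. A minimal sketch, using the subdomain from above (the file names are hypothetical):

```html
<!-- HTTP/1.1-era sharding: assets are split across hostnames so the
     browser opens a separate pool of connections for each one -->
<img src="https://images.scotthelme.co.uk/header.jpg">
<img src="https://images.scotthelme.co.uk/photo-1.jpg">
<script src="https://scotthelme.co.uk/js/main.js"></script>
```

Under HTTP/2, pointing everything back at the one hostname lets all of these share the single multiplexed connection.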
Another trick for reducing the number of requests made over HTTP/1.1 was to concatenate resources like JS and CSS into a single file. With the new multiplexing in HTTP/2 we no longer need to be concerned about the cost of additional requests. If anything, concatenation now makes the situation worse, as the browser has to download a much larger file before it can run what might have been only a small snippet of JS.
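The HTTP/1.1-era build step amounted to little more than this (a minimal sketch; the file names and contents are hypothetical):

```shell
# Create two small, hypothetical scripts
printf 'console.log("a");\n' > a.js
printf 'console.log("b");\n' > b.js

# HTTP/1.1-era optimisation: merge them into one bundle so the
# browser fetches a single file instead of making two requests
cat a.js b.js > bundle.js

# The bundle must be downloaded in full before any part can run
wc -c bundle.js
```

Under HTTP/2 the two small files can simply be served as-is and fetched in parallel over the one connection.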
Similar to concatenating resources, image sprites combine several images into a single file to make for one, more efficient download. Individual images are then retrieved from the sprite and used. Again, the problem with this approach is that the browser can't use any single image until the entire sprite, containing all of the images, has been downloaded.
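A sprite is typically wired up with CSS background positioning, something like the following sketch (the sprite path and icon names are hypothetical):

```css
/* One combined image download instead of many small ones */
.icon {
    background-image: url("/img/sprite.png"); /* hypothetical sprite */
    width: 16px;
    height: 16px;
}

/* Each icon selects its own region of the sprite by offset */
.icon-home  { background-position:   0     0; }
.icon-user  { background-position: -16px   0; }
.icon-email { background-position: -32px   0; }
```

No icon can render until the whole of sprite.png has arrived, whereas separate images over HTTP/2 can be displayed as each one completes.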
Alongside the above, another favourite was to inline small chunks of CSS or JS, and even images in some circumstances, in the HTML itself. Google's PageSpeed performance module even had an option to do this automatically on the server side. Under HTTP/2, all this now serves to do is bulk out the HTML and delay page rendering until the larger page has been downloaded.
New in HTTP/2
There are also some new features in HTTP/2 that we didn't have comparable workarounds for in HTTP/1.1 that bring some nice performance gains.
In HTTP/1.1 connections, the HTTP response headers would be sent over the network in plain text, resulting in a larger payload. Now in HTTP/2 all HTTP headers are compressed (using HPACK) and sent in binary form, resulting in a much smaller payload. Historically this might not have been considered much of a problem, but with the recent rise of security-based HTTP response headers they have the potential to grow considerably in size. Content Security Policy can result in a fairly lengthy response header, as can HTTP Public Key Pinning. Couple that with the huge range of other HTTP response headers we can send and there's definitely some room for optimisation.
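To give a sense of scale, here's the kind of Content Security Policy header I have in mind (a hypothetical example policy, already a few hundred bytes, sent with every single response under HTTP/1.1):

```http
Content-Security-Policy: default-src 'self'; script-src 'self' https://cdn.example.com; style-src 'self' 'unsafe-inline'; img-src 'self' data:; report-uri /csp-report
```

With HTTP/2's header compression, a repeated header like this is effectively sent once and then referenced by a short index on subsequent requests and responses.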
Now that all of our requests are being sent down a single multiplexed connection, the ordering of the responses becomes more important. Each resource being downloaded can be given a priority and can also be made dependent on another resource being downloaded. This means the browser can indicate which files it wants first and which others can wait until later or until another resource has been downloaded.
Although this feature isn't currently supported in NginX, it will no doubt be coming along very soon. Server push allows the server to send files to the client before they have actually been requested. If your page uses a set of CSS and JS files, they can be pushed to the client before it has even parsed the HTML and realised it needs to request them, speeding up the process. In a way, we already do this with inlining: the CSS/JS is embedded in the page and sent down with the initial HTML response, before being requested, to avoid the burden of a subsequent connection and request to fetch it.
Having a single connection under HTTP/2 provides us with a great many benefits. Beyond those listed above there are additional gains like minimising TLS handshakes. We can also do away with a lot of development and deployment overhead by removing things like file concatenation and image spriting. Smaller JS and CSS files no longer need to be pushed inline either. These changes will also allow caching to work better, increasing the likelihood that a given asset can successfully be served from cache. Being able to remove the need for domain sharding also presents some quite significant gains, as the number of domains we administer is reduced. This means less configuration and less maintenance.
At the time of writing I've just built the latest mainline version of NginX, which is 1.9.5, to get HTTP/2 support. It's not difficult to switch over, and I will be following up with a short blog post on how to do it if you're interested. Apache has support for HTTP/2 in version 2.4.12 using the mod_h2 module, and IIS has support in Windows 10 and Server 2016.
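For reference, enabling it in NginX 1.9.5 is a one-word change to the listen directive. A minimal sketch (the certificate paths are placeholders; browsers will only speak HTTP/2 over TLS, so a certificate is required):

```nginx
server {
    # Adding 'http2' to the listen directive enables HTTP/2 for TLS connections
    listen 443 ssl http2;
    server_name scotthelme.co.uk;

    # Placeholder paths - point these at your real certificate and key
    ssl_certificate     /path/to/fullchain.pem;
    ssl_certificate_key /path/to/privkey.pem;
}
```

Clients that don't support HTTP/2 will simply fall back to HTTP/1.1 over the same port.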