We have quite a few security features at our disposal to help us better protect our websites and our visitors. I talk about them a lot on my blog and a few of them, mainly security headers, get a lot of coverage. Is it possible to use these security features for bad things?


The idea

The idea of taking a feature intended for good and using it for something bad isn't mine and certainly isn't new. Given my interest in security headers, I was particularly interested when HSTS Super Cookies became a thing. Just the other week a good friend of mine, Per Thorsheim, was attending DEF CON 24 and sent me some pictures of a talk that covered using HPKP for nasty purposes. That prompted me to go over some of these 'attacks', explain what's going on, and share a few thoughts I've had along the way.


HSTS Super Cookies

In short, an attacker can set HSTS on or off for an arbitrary number of subdomains of a domain they own. Then, if they embed requests to these subdomains in a page and observe whether your browser makes each request using HTTP or HTTPS, they can effectively fingerprint your browser. The blog post linked above goes into more detail if you want to dig into it a bit more.
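The bit-per-subdomain trick can be sketched like this (the subdomain names and the 8-bit ID size here are illustrative, not from the original attack):

```python
# Hypothetical sketch of the HSTS "super cookie" idea: each tracking
# subdomain acts as one bit of a visitor ID. The attacker sets HSTS
# (bit = 1) or not (bit = 0) on n subdomains, then later reads the bits
# back by checking which subdomain requests the browser upgrades to HTTPS.

SUBDOMAINS = [f"t{i}.tracker.example" for i in range(8)]  # 8 bits = 256 IDs

def encode_id(visitor_id: int) -> set[str]:
    """Return the subdomains that should send an HSTS header (bit = 1)."""
    return {sub for i, sub in enumerate(SUBDOMAINS) if visitor_id >> i & 1}

def decode_id(upgraded: set[str]) -> int:
    """Reconstruct the ID from the subdomains the browser upgraded."""
    return sum(1 << i for i, sub in enumerate(SUBDOMAINS) if sub in upgraded)
```

Because the HSTS cache persists across normal browsing sessions, the decoded ID survives cookie clearing, which is what makes it a "super" cookie.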


Sniffly

There was also another attack, created by Yan Zhu, called Sniffly, which abused HSTS coupled with CSP. The attack could be used to effectively sniff your browser history when you visit a page controlled by an attacker. The page would try to load an image over HTTP from a HSTS domain, like facebook.com, but would use CSP to restrict images to being loaded via HTTP only. The HTTP-only restriction causes the HTTPS load to fail, resulting in onerror being called, which then timed how long the redirect took. If it was a few milliseconds, it was an internal HSTS redirect that didn't hit the network, meaning the browser had been to facebook.com and picked up their HSTS policy. If the redirect took tens of milliseconds, then it hit the network, meaning the browser hadn't been to facebook.com to pick up their HSTS policy. Rinse and repeat for x websites and you can build up a browser history.
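The timing side of the attack can be sketched as a simple classifier. The 10 ms threshold and the example timings below are illustrative; the real attack calibrates against the user's network conditions:

```python
# A toy version of Sniffly's timing check: if the CSP-blocked HTTPS
# redirect fails within a few milliseconds, it never hit the network,
# so the browser must already have the site's HSTS policy cached
# (i.e. the user has visited that site before).

THRESHOLD_MS = 10  # illustrative cut-off between cached and network

def visited_before(error_delay_ms: float) -> bool:
    """Classify one onerror timing: fast failure => internal HSTS redirect."""
    return error_delay_ms < THRESHOLD_MS

def sniff_history(timings: dict[str, float]) -> list[str]:
    """Return the sites whose HSTS policy appears to be cached."""
    return [site for site, ms in timings.items() if visited_before(ms)]
```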


HSTS causing issues without bad guys

I've talked about HSTS a lot and one of the common concerns is that anyone in an organisation who has the ability to set a HTTP response header can turn HSTS on. That could be any one of a number of roles, including server admins and developers. This means that people who perhaps shouldn't can set the header, or people set it without fully thinking it through and then, even worse, HSTS preload the domain. It's become quite a thing and there's even a bug on the Chromium bug tracker to list removals and edits to the HSTS Preload list. A large number of these are along the lines of "we turned it on and it broke some things we didn't expect" or "we were magically added to the list but it wasn't us".

uber.com: Issues with subdomains maintained by contractors.


etoprekrasno.ru: We had to switch to Wix hosting which doesn't support HTTPS on custom domains.


Remove subdomains from segurosocial.gov, socialsecurity.gov, and ssa.gov: The problem is that many of our intranet sites are not HTTPS, and we are seeing issues in our rollout.


attotech.net: The site operator believes that they never requested to be added.


lucameraga.it: tried HSTS on CloudFlare, changed their mind


These were accidental preloads, actually initiated by those responsible for the site, that broke things and meant the domain had to be removed from the preload list. Removal can potentially take months and there is no assurance that other browser vendors that scrape the list will remove you at all, so preloading should be viewed as a one-way street. Even without preloading, you can still set HSTS with a max-age of 1 year and cause some serious long-term problems. I published a blog just a few days ago about sites sending the preload token that didn't seem like they should be preloaded; it looked like they'd just copied and pasted a config from somewhere. The blog is suitably named Death by Copy/Paste.


How can the bad guys abuse this?

Stepping away from the more extravagant attacks listed above, HSTS is set on a per-domain basis and has a flag that cascades the policy down to all subdomains below it. Looking at some of the requests for removal from the preload list, breaking subdomains on a site is a very real problem. All an attacker needs is the ability to inject a HTTP response header on one of your pages somewhere, anywhere, and they have an avenue to start causing problems. Take the following page:

https://facebook.com/profile/scotthelme/notes/blahblah

If there was a bug on this page that gave me the ability to inject an arbitrary response header then I could set a HSTS policy for the facebook.com domain and all subdomains.

Strict-Transport-Security: max-age=31536000; includeSubDomains

Anyone who now visits this page would receive this policy, cache it and apply it. Perhaps not the end of the world in itself, but the more widespread this becomes and the more pages you can do it on, the more likely it is to start causing problems. You could even request the page from other sites, by loading it in an iframe or the src attribute of another tag, and the browser will still receive and cache the HSTS policy. Obviously you want to get as close to the bare domain as possible for the biggest impact, but if you can inject a header on the homepage then you really can do some damage:

Strict-Transport-Security: max-age=31536000; includeSubDomains; preload

That little preload token on the end of the header means you now have the authority to submit the site to the Chromium preload list to be hard coded into the source of all mainstream browsers. This means even if the site fixes the header injection flaw, all browsers that saw the header will cache it for a year and the site owners now have a removal from the preload list to contend with. Getting into the preload list can take a little time, I'm tracking it in another blog, but if you can get the header there and it goes unnoticed, you can cause some real harm. I guess I don't need to mention what a disgruntled employee could do...
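To see exactly what the browser ends up caching from an injected header, here's a rough sketch of HSTS header parsing (real browsers follow RFC 6797 more strictly; this just shows which directives matter):

```python
# Minimal sketch of turning a Strict-Transport-Security header value
# into the policy a browser would cache: how long to force HTTPS,
# whether it cascades to subdomains, and whether the site is asserting
# its consent to be preloaded.

def parse_hsts(header: str) -> dict:
    policy = {"max_age": 0, "include_subdomains": False, "preload": False}
    for directive in header.split(";"):
        directive = directive.strip().lower()
        if directive.startswith("max-age="):
            policy["max_age"] = int(directive.split("=", 1)[1])
        elif directive == "includesubdomains":
            policy["include_subdomains"] = True
        elif directive == "preload":
            policy["preload"] = True
    return policy
```

Run against the injected header above, this yields a year-long policy covering every subdomain, plus the preload assertion.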


Using HPKP for evil

I didn't see the talk at DEF CON by Bryant Zadegan and Ryan Lester but I read the slide deck and caught up with Bryant on Skype. They had a pretty cool idea on how an attacker could abuse HPKP. In the scenario of your server being compromised you're already in a pretty bad place. An attacker has somehow found a way in and they can do whatever they want really. Once you get control back, their ability to affect you is gone. You can restore the site and continue as normal. By abusing HPKP the attacker can have a much more devastating impact. They can continue to cripple you long after they're gone.


HPKP Suicide

The term HPKP Suicide was coined early on in the creation of the HPKP standard for when a site sets some pins but then loses control of them. You're now pinned to these keys but you can't use them and you've effectively committed suicide for your site for the duration of max-age. This is also known as the HPKP Footgun. The talk at DEF CON was taking HPKP Suicide and pushing it that one step further.


RansomPKP

If an attacker gets access to your server and commits HPKP Suicide on your behalf, you're really screwed. This is HPKP Ransom, or RansomPKP (I got the term from the linked slide deck). Once the attacker is on your server they issue a HPKP header that ties you to their keys.

Public-Key-Pins: max-age=31536000; includeSubDomains;
    pin-sha256="LOCKOUT_KEY";
    pin-sha256="RANSOM_KEY"

The lockout key can be generated on the server, and the attacker can get it signed somewhere like Let's Encrypt, as they can answer challenges with control of the server. The ransom key is generated offline somewhere, presumably to be handed over when the ransom terms are met. The attacker then simply rotates the keys/certs as often as they like, so when the host gets control back they only have access to one of the pinned keys. The ransom key remains constant throughout, so the attacker can sell it back as a single solution to the entire problem.
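For reference, the pin-sha256 value in that header is the base64-encoded SHA-256 hash of the key's DER-encoded SubjectPublicKeyInfo. Given the raw SPKI bytes (assumed here to have been extracted elsewhere, e.g. with openssl), computing a pin is a one-liner:

```python
import base64
import hashlib

# HPKP pins as defined in RFC 7469: base64(SHA-256(SPKI)). The input is
# the DER-encoded SubjectPublicKeyInfo of the certificate's public key.

def spki_pin(spki_der: bytes) -> str:
    """Compute the HPKP pin-sha256 value for a DER-encoded SPKI."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode("ascii")
```

This is exactly why the ransom key can stay offline: only its hash needs to appear in the header, and the attacker keeps the private key until payment.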


HPKP abuse with header injection

Similar to the above approach with HSTS, you could also inject an arbitrary HPKP header, but the effects are a little less disastrous. Without compromising the server, as you would need to in the HPKP Ransom scenario, all you could do with header injection is pin against the site's current public key and a backup key of your creation. This doesn't really have any immediate downside other than preventing the site from rotating their leaf key until max-age has expired. How much of an impact that has will depend on the site, but it could still cause a pretty large amount of inconvenience and downtime.

For either of these approaches the only saving grace is that Chrome caps the HPKP max-age at 60 days (bug), regardless of what is set in the delivered policy, so even if the attacker sets a higher value, Chrome will not respect it. According to the slide deck, Firefox will also be following suit.
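The cap works out like this (a minimal sketch; values are in seconds):

```python
# Chrome's mitigation: clamp whatever max-age the HPKP header delivers
# to the browser's own ceiling of 60 days, so an attacker's one-year
# pin only sticks for 60 days at most.

CHROME_MAX_AGE_CAP = 60 * 24 * 60 * 60  # 60 days in seconds

def effective_max_age(delivered: int) -> int:
    """Return the max-age the browser actually honours."""
    return min(delivered, CHROME_MAX_AGE_CAP)
```

So a delivered max-age of one year (31536000) is honoured as only 5184000 seconds, and anything shorter than 60 days passes through unchanged.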


Conclusion

Introducing the above features has brought some potential problems, but it has also brought about huge improvements to our security. HSTS Super Cookies and Sniffly both required the user to visit a page under the control of the attacker, or at least make requests to it, and the HSTS/HPKP issues required a vulnerability like HTTP header injection or compromise of the host. None of these features can simply be picked up on their own and used for bad things. Looking at the benefits of CSP like XSS and mixed-content mitigation, enforcing HTTPS with HSTS and reducing the risk of rogue certificate issuance with HPKP, we're definitely better off with these things than without them.