We've seen a few notable news events this year along the same lines. Major websites have suffered serious breaches not because they were hacked directly, but because of a compromise in a 3rd party dependency. Two of the biggest stories involved targeted attacks on payment data: attackers were skimming credit card details right off the website. There is, however, something you can do to prevent this from happening, and it might be easier than you think.
The problem is getting worse
Back in Feb 2018, several thousand government websites around the world were found to be running a cryptominer, and I wrote about it right here. I know government budgets are tight, but hijacking the resources of a visitor's device? It turns out that a 3rd party JS plugin that all of those sites loaded had been compromised and altered to include the cryptominer. The government sites load the infected JS file and it's game over.
Ticketmaster suffered a similar fate in mid-2018 when Inbenta, an AI chatbot provider, had some of their JS compromised and attackers included a keylogger. Of course, Ticketmaster loaded the chatbot onto every page on their site, including the payment page, and the attackers made off with a whole heap of credit card data.
More recently, and the inspiration for writing this blog, was the British Airways breach that's currently making headlines as I write this. Whilst slightly different to the others, it's still essentially the same problem, compromised JS resulted in a keylogger being loaded onto payment pages and goodbye credit card data.
Up last is this excellent article about how to steal a heap of credit card data from websites using a tainted npm package. It looks like it gives you pretty colours when logging to the console but it contains a little something 'extra'.
How do we solve the problem?
Well, if 3rd party JS/code is the problem, it's easy to fix, right? Throw away all the 3rd party code and host all of your own JS! Except, just no. Advice that technically solves the problem but has absolutely no chance of being followed isn't really good advice; we need a better solution. And we have one. These steps don't necessarily need to be applied to your entire website either. As a couple of suggestions to get started, apply them to really sensitive pages like those that accept credit card / bank details, and your login form.
1) Define a Content-Security-Policy
Regular readers know I'm a huge fan of CSP but honestly it's one of the key solutions to part of the problem here. CSP allows you to define a list of locations that resources can be loaded from on your pages. Ideally you'd define a CSP for your whole site but you can start with mega sensitive pages like your payment page. Where should this page load scripts from? Just you and your payment processor? That's easy to lock down, just like this:
Content-Security-Policy: script-src 'self' stripe.com
That CSP would completely neutralise any script tag injected into the page that tried to load from a different domain. Doing CSP site-wide can be a little more work, but for now, start with the sensitive parts of your app. If you want to see a really cool trick with CSP, you can ask the browser to tell you if there's JS on your page that shouldn't be there! This is exactly what I created Report URI for, and you can find out about these issues in real time. As soon as the first visitor lands on a page containing hostile JS, the browser will send a report and you'll know about it. That's much better than the alternative of not knowing, because now you can start reacting to the problem much faster.
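As a quick sketch, adding reporting to the earlier policy is just one extra directive; the endpoint here is a placeholder, swap in your own Report URI (or other reporting) address:

```
Content-Security-Policy: script-src 'self' stripe.com; report-uri https://example.report-uri.com/r/d/csp/enforce
```

With that in place, any attempt to load a script from a domain outside the list is both blocked and reported.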
2) Trust your 3rd parties, but verify with Subresource Integrity
We all like loading scripts from a CDN, especially if they're larger files. The CDN is probably faster than we are and closer to the user, we don't have to pay for bandwidth, and there's a good chance a user may already have the file in their local cache too. It's a great situation, until it's not. A typical script tag simply tells the browser to load a file.
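Something along these lines, where the URL is purely illustrative:

```
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script>
```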
It tells the browser I need this file, that it should go fetch it and include it in the page. How do we know what file we actually get back? What if the CDN changes the file, would the browser know? Of course not, the browser just has a URL, it will fetch the file and use it, no matter what it gets back, virus, keylogger or otherwise. You can change that with SRI though and it honestly couldn't be easier.
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js" integrity="sha256-hwg4gsxgFZhOsEEamdOYGBf13FyQuiTwlAQgxVSNgt4=" crossorigin="anonymous"></script>
See how that's just a standard script tag with two extra attributes? The main one to focus on is the integrity attribute. (The crossorigin attribute is also required here, so the browser can fetch the cross-origin file in a way that allows the integrity check.) When the browser downloads the file from the CDN, it will hash it and compare the result to the hash in the integrity attribute. If the hashes match, we got the file we expected and can use it safely. If they don't match, the file has been tampered with and is not the file we were expecting, so the browser will reject it and refuse to load it. That nasty keylogger or malware that had been added to the file? It's now been completely neutralised. That wasn't so hard, was it!
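You don't have to compute that hash by hand either. Assuming you have OpenSSL available, generating the integrity value for a file is a one-liner (the filename here is just an example):

```shell
# SHA-256 hash the file, then base64-encode the raw digest
# to produce the value for the integrity attribute.
openssl dgst -sha256 -binary jquery.min.js | openssl base64 -A
```

Prefix the output with `sha256-` and drop it into the integrity attribute. One thing to remember: if you ever update the file, regenerate the hash too, or the browser will refuse to load the new version.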
3) Make sure all assets on sensitive pages use SRI
One of the things we need to make sure of on our sensitive pages is that all assets loaded use SRI. If a developer puts a new script tag onto the page, it could well be whitelisted in the CSP and so it will load, but the script tag might not have an SRI integrity attribute. We need a technical measure to make sure that every script tag has SRI enabled, and I'm happy to say we have one!
Content-Security-Policy: require-sri-for script style
This directive in your CSP tells the browser that all script and style tags on the page should have SRI enforced. We don't want an asset to somehow sneak into the page and not have SRI because that would allow a way to bypass the safety it offers. Just like the other CSP reporting features, the browser will not only block the script from loading if it doesn't have SRI, it will also report back to you and tell you that you have a problem on the page!
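Putting all the pieces together, a policy for a sensitive page might end up looking something like this (the domains and reporting endpoint are placeholders, and the header is shown wrapped over multiple lines purely for readability, it's sent as a single line):

```
Content-Security-Policy: script-src 'self' stripe.com;
                         require-sri-for script style;
                         report-uri https://example.report-uri.com/r/d/csp/enforce
```

Scripts can now only load from the whitelisted locations, every script and style must carry a valid integrity attribute, and anything that falls foul of either rule gets blocked and reported.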
CSP and SRI really would have saved them
The cryptojacking attack that hit thousands of government websites earlier this year was a really big problem. Like I said back then, just imagine if the attackers had more imagination and had done something a lot worse than cryptojacking... The NCSC here in the UK, which is part of GCHQ, was involved in the incident response and gave advice along the same lines as what I'm saying here: website operators should use CSP + SRI to protect themselves from attacks just like this. The same can be said for Ticketmaster and their credit card breach, and for British Airways with a small side note.

In the BA attack the script file that was modified wasn't hosted by a 3rd party, it was hosted by another application within BA. Now, I'd agree that in a lot of scenarios using SRI on your own assets doesn't really make sense. After all, if an attacker can change the content of files on your server then you really do have bigger problems to deal with! That said, with an organisation the size of BA, and the likelihood that different applications are hosted/operated/maintained by completely different teams within the organisation, do you want a breach in one application to be able to spread like it did here? The script file could be loaded from a different origin, or even a different path on the same origin if they're routing at layer 7, so it might look like SRI would be pointless, but if they're separate applications then SRI will still help contain the spread of a compromise.

If they'd had SRI on that script file then the compromise of the CMS application wouldn't have affected the one they were skimming credit cards from. Taking a 'zero trust' stance, even on assets loaded from somewhere you might control, is certainly going to help limit or even remove the impact of an attack like this.