An Analysis of Software Supply Chain Attacks
Editor’s Note: Below is part one of a two-part series dedicated to understanding software supply chain attacks.
Security failures in the software supply chain seem to be on the increase – at least they are definitely getting more attention. But what’s to be done about it, short of following advice to “be careful”? To answer that one, we need to dig deep. So, let’s sit back and start with a look at why the problem arises. Once we know that, the way forward should become clear.
Intentional Software Flaws?
Security has always been about making sure what shouldn’t happen doesn’t happen. It means proving that something bad can never occur, even when things aren’t used properly. With software this is far harder than showing that correct use produces the right behaviour, because you can’t test all possible operations in all possible states – the state space is effectively unbounded. Generally, with software, you can show it works as required, but not that it doesn’t do something extra you might not like.
When the software consumer signs up for a licence and starts using it, they’re happy if it functions the way they need. But just because it works for them doesn’t mean it’s secure. It just means they haven’t (yet) used it in a way that makes it go wrong and cause damage. A feeling of security comes from using software that is widely used, because the more users there are, the more likely it is that someone will have done something wrong and exposed any security flaws, allowing them to be fixed.
But what if the software vendor intentionally added a security flaw to their software? If it were subtle and stealthy, the flaw could be widely distributed and persist for a long time, causing a lot of damage. But surely the problem would eventually be discovered, and the vendor would see their business collapse as customers took their custom elsewhere. Well, that’s where it gets more complicated, because software does not have one author.
Software has Multiple Authors
Not since the very dawn of computing has software had a single author. With the advent of the first assembly languages in 1949, software was used to help write software, the end result being a joint product of the authors of both pieces of software. And in 1984, in his Turing Award lecture “Reflections on Trusting Trust”, Ken Thompson described how this might be a problem.
In this hypothetical case, extra functionality was written into the software tools (a compiler) used to create software, designed to inject a password backdoor into the logon software of the Unix operating system. Anyone who knew of the flaw could then log in as anybody without knowing their password. But the tools are also used to build themselves, and the really clever bit is that the extra functionality adds itself back whenever the compiler rebuilds itself. Once this is done, the injection code can be removed from the compiler’s source, yet the flaw persists in every compiler built from it – it is invisible in any source you can inspect.
This means the hypothetical Unix operating system logon software really has two authors – the one who created the tools and the one who created the logon software using those tools. If the author of the tools takes Ken Thompson’s approach, the author of the logon software will not realise the software they produce has the additional backdoor functionality. The software gets widely used, with no apparent ill effects, and consumers of the software feel secure. The attack is totally stealthy with little chance of discovery. We can only hope that nobody put Thompson’s idea into practice.
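As a toy illustration – not Thompson’s actual code, and with every name, source string and password invented here – the two-stage trick can be sketched in Python, modelling “compilation” as simple string rewriting:

```python
# Toy model of the "trusting trust" attack. "Compiling" is just string
# rewriting of Python source; all names and the password are invented.

LOGIN_SRC = (
    "def check_password(stored, password):\n"
    "    return password == stored\n"
)

# Clean, trustworthy compiler source: it changes nothing.
CLEAN_COMPILER_SRC = (
    "def compile_src(source):\n"
    "    return source\n"
)

# Payload: when the login program is compiled, weaken its check so the
# attacker's password always works.
PAYLOAD = (
    "    if 'def check_password' in source:\n"
    "        source = source.replace('password == stored',\n"
    "            \"(password == stored or password == 'letmein')\")\n"
)

def evil_compile(source):
    """A compromised compiler binary, modelled as a function."""
    # Stage 1: backdoor the login program.
    if "def check_password" in source:
        source = source.replace(
            "password == stored",
            "(password == stored or password == 'letmein')",
        )
    # Stage 2: when the compiler is rebuilt from clean source, graft the
    # payload back in, so the flaw survives the rebuild. (Thompson's full
    # version also makes the payload re-insert itself, quine-style;
    # that is omitted here for brevity.)
    if "def compile_src" in source:
        source = source.replace("    return source",
                                PAYLOAD + "    return source")
    return source

# Rebuilding the compiler from clean source still yields a compromised
# compiler, which in turn still backdoors the login program:
ns = {}
exec(evil_compile(CLEAN_COMPILER_SRC), ns)
ns2 = {}
exec(ns["compile_src"](LOGIN_SRC), ns2)
check_password = ns2["check_password"]
```

The point of the sketch is that `CLEAN_COMPILER_SRC` and `LOGIN_SRC` contain nothing suspicious – the flaw lives only in the already-compromised compiler, and it survives every rebuild from clean source.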
Fast forward to today, and those basic tools have expanded into complete ecosystems of software produced by myriad authors. The functionality demanded by users requires software so complex that it can only be created using tools and components that others have produced. And software must be delivered fast, so even simple functions get the same treatment. The net effect is that nobody really knows what any piece of software does in total, and if a flaw is discovered it can be very difficult to pin down who its author is.
We want rich functionality, which means complex software, which is too complex to test exhaustively. And the scale is such that no single author can create it all, or understand every imported tool and component in enough detail to be sure there are no flaws. So the software vendor cannot be sure the end result has no undesirable backdoor functionality. This is not necessarily a big problem, because any attack introduced this way has to be targeted or coordinated – but systems are opening up more and more, and that is making more room for this kind of attack.
How Attackers Exploit Backdoors
To succeed, an attack based on a backdoor introduced into widely used software must not cause obvious damage until it reaches its target; otherwise it will be discovered and fixed first.
If the attack has a very specific target, say a particular bank, then it needs a way of establishing that it is running inside the target before causing damage. In every other system it must lie low and avoid detection. This makes the attack “one shot”: once it has fired at its target it will be discovered, and the flaw will then get removed. So the attack must be worth the not inconsiderable effort that went into creating it.
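A hypothetical sketch of this “establish the target, then lie low” logic – the hostname and fingerprint here are invented for illustration – might look like:

```python
# Hypothetical target gate for a planted backdoor. Shipping only a hash
# of the target's hostname means a reader of the code cannot tell who
# the target is; on every other machine the payload silently does nothing.
import hashlib
import socket

# Invented example target - the "particular bank" of the text.
TARGET_FINGERPRINT = hashlib.sha256(b"core-ledger.bank.example").hexdigest()

def maybe_activate(hostname=None):
    """Return True only when running on the intended target host."""
    hostname = hostname or socket.gethostname()
    fingerprint = hashlib.sha256(hostname.encode()).hexdigest()
    if fingerprint != TARGET_FINGERPRINT:
        return False  # lie low: no observable behaviour off-target
    # ...the damaging payload would fire here - one shot...
    return True
```

Hashing the target identifier is one plausible way to keep the attack dormant and unattributable everywhere except the one system it was built for.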
To get more value out of the attack, the attacker must keep it stealthy even after use. So instead of causing direct damage, the backdoor could be used to steal credentials that give direct access to the target system, or to install a secondary, independent backdoor. Any damage caused now leads back to the stolen credentials or the additional backdoor, not to the original one built into the software. That remains undiscovered and can be used to infiltrate other systems.
For attacks like this to work, the target system must be accessible. It must face the Internet so the stolen credentials can be used or the additional backdoor reached. Until recently, business systems tended to be isolated – accessed from inside a corporate office, with limited access to external resources – so attacks like these also needed the right physical access, which meant they could only be highly targeted. But the move to cloud services and home working means this is no longer the case.
Businesses are now an Internet of Things – components like workstations, mobile devices, data stores, collaboration services, sensors and actuators, all connected by the public Internet. This means any component can be reached by anyone on the net, and any component can reach out to anyone. Strong authentication and access controls are applied to keep attackers out, but they don’t work if the attacker has a backdoor in the software of those components, because then the attacker is already on the inside.
This is why we are seeing more attacks entering through the software supply chain. The attacks start by getting into software components that are built into other software that is trusted. The software is then deployed inside critical systems, which have the connectivity needed by the attacker to control it. The attacker does not have to break in, they are invited in – a genuine Trojan Horse.
Having looked closely at the problem, we should now be in a position to find a way out of it. I’ll dive into that in part two soon.