Could a cyber-attack kill you?
The return this month of the Triton malware raises the question, and serves as a stark reminder that the vulnerabilities in computing systems can extend beyond data loss and financial fraud.
For those who don’t remember, Triton is a powerful piece of malware that first popped up in the industrial control systems of a Saudi oil and gas plant back in 2017. Infected systems included physical controllers and the associated software designed to kick in when dangerous conditions are detected – stabilising processes by closing valves and triggering pressure-release mechanisms, or shutting machinery down completely.
Hackers had got their malware into plant systems that didn’t hold information of any particular commercial value – but which would have been the last line of defence against a life-threatening disaster.
In the worst case, Triton’s rogue code could have caused toxic hydrogen sulphide gas to be released, or set off a cascade of mechanical failures ending in an explosion.
At the time it felt like some sort of Rubicon had been crossed. The hackers were likely state sponsored, and the infection was probably a trial run – testing industrial defences and searching for additional vulnerabilities to exploit later. But what if the capability had been deployed in a military conflict? What if it fell into the wrong hands?
In a world where more and more industrial processes depend on computers sending instructions to robots, where critical systems are increasingly entrusted to AI-driven automation, and vehicles of every kind are internet connected, are we reaching a point where cybersecurity becomes physical – and the threat of harm becomes personal?
We don’t do the fear thing here at The Defence Works. Cybersecurity is packed with companies stoking breach anxiety, and that’s a game we don’t want to play. But still, some things make you go ‘hmmm’. The prospect of a hacker taking control of your (future) driverless car as it speeds down the motorway is one of them.
It doesn’t seem so far-fetched. Today’s automobiles contain more than 100 million lines of code. The electronic control units (ECUs) in a passenger car communicate over a network contained within the vehicle. If a hacker were to gain access to that network through the vehicle’s wireless communication systems, vehicle functions could potentially be manipulated.
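Part of the reason such attacks are plausible is that the dominant in-vehicle network, the CAN bus, was designed for reliability rather than security: frames carry no sender-authentication or integrity field at all. As a purely illustrative sketch (the arbitration ID and payload below are made up, and the layout shown is the Linux SocketCAN representation of a classic CAN frame), packing a frame makes the gap visible:

```python
import struct

# Linux SocketCAN classic frame layout: 32-bit arbitration ID,
# 8-bit data length, 3 padding bytes, up to 8 data bytes (16 bytes total).
CAN_FRAME_FMT = "=IB3x8s"

def build_can_frame(can_id: int, data: bytes) -> bytes:
    """Pack a classic CAN frame. Note what is missing: there is no field
    identifying or authenticating the sender, and no integrity check
    beyond the link-level CRC the hardware adds."""
    if len(data) > 8:
        raise ValueError("classic CAN payload is at most 8 bytes")
    return struct.pack(CAN_FRAME_FMT, can_id, len(data), data.ljust(8, b"\x00"))

# Hypothetical example: arbitration ID 0x244 with a 4-byte payload.
frame = build_can_frame(0x244, b"\x01\x02\x03\x04")
# Receiving ECUs filter only on the arbitration ID, so any node that can
# transmit on the bus is implicitly trusted.
```

This is why gaining a foothold on one internet-connected component – an infotainment unit, say – can be enough to send commands that other ECUs will act on.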
This isn’t science fiction:
- In 2015, researchers proved that they could take control of a Jeep Cherokee remotely and send it off the road.
- That same year, hackers found a weakness in BMW’s ConnectedDrive technology and exploited it to take control of vehicle functions.
- In 2016, hackers proved they could break into Volkswagen electronic keys and security systems to unlock vehicles remotely.
Experts have also been warning about vulnerabilities in air traffic control systems since at least 2015.
As for industrial facilities, the arrival of Triton raised questions about how hackers were able to get into critical systems – many of which weren’t directly connected to the internet.
It also arrived at a time when companies were embedding internet of things (IoT) technology into everything from manufacturing systems to logistics, transportation and kitchen appliances.
More connectivity lets businesses monitor equipment more closely and gather data faster, making operations more efficient – but it also gives hackers more attack vectors.
Don’t be scared, be smart
Hype and fear-mongering don’t help anybody. At the moment, hacked cars, compromised air traffic systems and smart kettles set to explode feel more like urban myths – like crocodiles in the sewers – than an imminent reality.
And even if/when they do become real, improvements in technology already underway in major R&D facilities will play a part in defending against them.
People will too.
The security weaknesses in air traffic control systems have so far been human-centric – operating systems that hadn’t been patched in months, passwords stored in plain text – and the Triton malware, no matter how clever and insidious, needed basic human nature to help it along.
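The plain-text password weakness, at least, has a well-understood fix: passwords shouldn’t be encrypted (or stored) at all, but run through a slow, salted one-way hash so that a stolen database doesn’t reveal them. A minimal sketch using Python’s standard library – the iteration count and salt size are illustrative choices, not a recommendation for any particular system:

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None, rounds: int = 200_000):
    """Derive a salted hash; store (salt, digest), never the password itself."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes,
                    rounds: int = 200_000) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
```

Checking a login attempt is then `verify_password(attempt, salt, digest)` – the original password never needs to be kept anywhere.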
When it first started affecting systems at the Saudi plant in 2017, managers assumed they were dealing with a standard mechanical glitch. The malware first triggered a safety-system alarm in June, bringing the plant to a standstill. Two months later, other systems tripped, causing another shutdown. It wasn’t until August that plant managers decided to bring in IT consultants to investigate what was happening.
A failure of awareness
What Triton – and in fact most hacks – tells us about cyber defence is that the signs of an attack are often directly observable, or at least detectable once people have been taught the telltale behaviours of malware on a network.
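One simple way defenders turn that awareness into a routine check is to compare file hashes against published indicators of compromise. A hedged sketch – the indicator set here is a placeholder (the SHA-256 of an empty file), and real deployments would use threat-intelligence feeds rather than a hard-coded list:

```python
import hashlib
import tempfile
from pathlib import Path

# Hypothetical indicator list: SHA-256 digests of known-bad files.
# The single entry below is the digest of an empty file, used as a placeholder.
KNOWN_BAD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large files don't need to fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(directory: Path) -> list:
    """Return files whose hash matches a known indicator of compromise."""
    return [p for p in sorted(directory.rglob("*"))
            if p.is_file() and sha256_of(p) in KNOWN_BAD_HASHES]

# Demo against a throwaway directory with one matching and one benign file.
with tempfile.TemporaryDirectory() as tmp:
    (Path(tmp) / "suspect.bin").write_bytes(b"")       # matches the placeholder
    (Path(tmp) / "benign.txt").write_bytes(b"hello")
    flagged = scan(Path(tmp))
```

Tools like this only catch what someone already knows to look for – which is exactly why the humans watching the alarms matter as much as the automation.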
Leaving operating systems in critical infrastructure unpatched for months, or allowing submarines to run on Windows XP, can hardly be blamed on bad technology. Those weaknesses are structural and arise from failures of awareness up and down the organisational hierarchy. They also indicate a failure to empower people fully.
Everyone in an organisation has a role to play in cybersecurity, and companies can promote that principle with a programme of security awareness training. As well as targeting instrumented systems, cyber criminals try to exploit staff, who sometimes become the source of a breach through simple error.
There’s no body armour for breaches, but employees can act as a kind of cyber defence force, armed with information and awareness. We can beat back malware’s physical threat by giving employees the skills to spot odd or adverse behaviour on the systems they use every day.