First published: 15th September 2010
Presented by Allan G. Dyer at the 1st INTERPOL Information Security Conference, Hong Kong Police Headquarters, in the second panel session on Day 1.
Abstract
We are seeing increasing numbers of warnings, some almost hysterical, about the dangers of catastrophic digital attacks on "critical infrastructure", but hard data on what could happen, and its likelihood, is difficult to find.
Some say that attacks have already happened. The Slammer worm, for example, disrupted the internet and severely affected banks and financial organisations, often defined as part of the critical infrastructure. If that is the worst that can happen, so what? Also, alleged cyberwars in Estonia and Georgia have not proved to be a devastating new form of war.
On the other hand, security experts have long pointed at the lax "security by obscurity" approach in SCADA and related industrial control systems, and now the Win32/Stuxnet worm that uses a zero-day vulnerability and targets Siemens WinCC SCADA systems has emerged.
Is this another example of "cybergeddon" failing to meet expectations, or the final wake-up call? What roles should technical defences, education and human intelligence play in our response?
Presentation
When I started preparing for this presentation, two news items caught my attention: one was the Stuxnet worm, which targets SCADA systems; the other was the publication of an investigation into the 2008 crash of Spanair flight JK 5022, which mentioned that an airline computer that should have issued a warning was infected with a trojan. I'll talk a little about the detail of these incidents later, but my first point is that both of these news stories have since disappeared from the headlines. Both involved malicious software affecting critical infrastructure – industrial control systems and air transportation – and, in both cases, the underlying problem has not been addressed, but media interest has evaporated. On the other hand, there have not been any new, related catastrophes either. The evil mastermind behind the SCADA worm didn't use it to send nuclear power stations into meltdown, or shut off water supplies for major cities; and airlines have been neither scrambling to remove trojans from their computers, nor issuing smug press releases saying they are unaffected.
You could say that digital threats to critical infrastructure have consistently failed to live up to the hype. The phrase "digital Pearl Harbour" was first published in 1991, and there are frequent news stories about these threats, yet there have been no catastrophic incidents. Even in the 2008 war in South Ossetia, when Georgia claimed that Russia launched "cyberattacks", the digital casualties were Government websites. Bringing down a website might be a propaganda victory, but it does not affect critical infrastructure. The loss of a few websites did not affect the outcome of the war, or the number of people killed. It is also nothing new: intercepting and disrupting enemy communications has always been a part of warfare, though usually with more valuable targets.
Categorising the Targets
Critical infrastructure covers a wide range of assets, broadly everything essential for the functioning of a society or economy. For the moment, I would like to divide them into three groups.
Functions like agriculture, health care and emergency services rely on personnel with only incidental digital support, and the personnel can adapt to disruption, minimising the consequences of a cyberattack. The fundamental unit of healthcare is a doctor – he or she can continue to operate (literally, if necessary) with no digital support. The same is true for farming and policing – you still know what the law is, even if you are out of communications. These functions are resilient against cyberattack.
A second group, financial services and telecommunications, cannot function without digital support, but a certain level of failures or attacks is normal. Banking transactions have been totally dependent on digital support for a long time – even if you hand over cash, it hasn't happened until the computer record exists. Telecommunications is the digital glue that holds the information society together. Obviously, an attack on this second category of critical infrastructure would be a nightmare scenario. But wait! We have already had major, disruptive cyberattacks. Think about worms like Slammer, Blaster and, earlier, Loveletter: they disrupted worldwide communications and, in some cases, affected banks, but the result was not the collapse of Society. More recent attacks have struggled to get headlines; for example, a successful DoS attack on the Russian stock exchange stopped trading for over an hour in February 2006. The impact has not lived up to the hype, I suspect, because these institutions are constantly under cyberattack: the IT staff are constantly reminded that there is a threat, and they continuously learn by responding. There are successful attacks, but the victory is short-lived, and the services soon resume. Slammer, for example, was largely controlled by individual ISPs studying the traffic and unilaterally deciding what to block. The infected systems were, largely, not cleaned up until the system owners got back to their offices the following Monday and realised something was wrong. This second group expects constant attacks, and even major, unexpected incidents such as Slammer or Blaster merely cause delay.
The third group is the industrial parts of the critical infrastructure – electricity, gas and oil production and distribution, and water and sewage treatment. These assets are digitally controlled and appear to be highly attractive targets, yet they are poorly defended and have not suffered a catastrophic attack. There are, however, urban legends about incidents. Last year I came across a claim in a paper I was editing for the education department: "Recently, a student created a virus, which halted the entire electric supply network in North China". This would have been a stunning incident – a virus bringing down a major power grid – so I tried to find corroborating references, but came up blank. I've come to the conclusion that either the well-publicised discussions about the possibility of the US power grid being brought down by a virus have morphed into a rumour about the China power grid; or a student really did take down the North China power grid, but it has been hushed up really well.
This industrial section of the critical infrastructure really looks like the low-hanging fruit. SCADA systems nowadays are heavily networked and control vital functions, but have minimal security. Yet attackers had not targeted them – until the first of my examples, Stuxnet.
In July 2010, a worm, Win32/Stuxnet, was found spreading via USB devices. Two features made it enormously interesting. First, it uses a zero-day vulnerability in how Windows handles .lnk files to get executed even if Autorun is turned off. Secondly, and more pertinently to this discussion, it contains and uses the default password for Siemens' WinCC SCADA systems. In fact, it does a lot more than that: it can query details of a SCADA system's structure and upload them to a remote site, and it can insert new code modules into PLCs and hide them. In short, it can allow the attacker to take control of your industrial plant.
Stuxnet has really revealed the negligence and complacency in the SCADA field. First, there is that default password – OK, we know that many systems have a default password, and that it is often left unchanged, a basic security error. The stunning news is Siemens' advice: DON'T change the password, your plant might stop working. It seems that the "default password" is hard-coded in so many places that it is infeasible to change it. It isn't really a password at all; it is an additional, fixed command parameter. Siemens' further advice is equally surprising – if you try to remove the worm, your plant might stop working. That is, removing unauthorised, untested software may prevent the authorised, tested software behaving correctly! Or rather, it is preferable to leave an unknown attacker in control of your industrial plant, because Siemens has no idea what will happen if you try to put things right.
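To see why "just change it" is not practical advice, consider this minimal sketch of my own (the names and values are placeholders, not Siemens' actual code or credentials): when the same password literal is compiled into every component that talks to the plant database, it stops being a credential and becomes a fixed constant of the system.

```python
# Hypothetical sketch of a hard-coded credential; all names and values
# are placeholders, not the real Siemens ones.
SERVER_PASSWORD = "factory-default"     # what the database server expects

def connect(component: str, password: str) -> bool:
    """Stand-in for the real database login."""
    ok = (password == SERVER_PASSWORD)
    print(f"{component}: {'connected' if ok else 'LOGIN FAILED'}")
    return ok

# Dozens of components, some from third parties, each carrying its own
# copy of the literal. Rotate the password on the server and all of them
# fail at once.
connect("HMI panel", "factory-default")
connect("historian", "factory-default")
connect("report generator", "factory-default")
```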
Stuxnet also demolishes some anti-virus myths. How about, "systems protected by an air-gap firewall can't get infected"? Wrong: Stuxnet travels on USB drives, and sneakernet bypasses the air-gap. Or, "code signing would eliminate the virus problem"? Wrong: several Stuxnet components are signed with valid certificates from known hardware manufacturers. Code signing doesn't protect you after the keys have been compromised.
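The reason is simple, and a small sketch of my own makes it concrete (generic RSA signing for illustration, not the Authenticode scheme Windows actually uses): verification only proves that the signer held the private key; it says nothing about whether that key was stolen.

```python
# Minimal illustration: a signature check cannot tell a legitimate signer
# from an attacker holding a stolen key. Requires the 'cryptography' package.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

vendor_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

legitimate = b"device driver v1.0, built by the hardware vendor"
malicious = b"rootkit driver, planted by whoever stole the key"

# Both payloads are signed with the same key, as happens after a compromise.
for payload in (legitimate, malicious):
    signature = vendor_key.sign(payload, padding.PKCS1v15(), hashes.SHA256())
    # verify() raises an exception on failure; here both pass silently.
    vendor_key.public_key().verify(signature, payload,
                                   padding.PKCS1v15(), hashes.SHA256())
    print("signature valid:", payload.decode())
```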
There are also tantalising clues about its author's motives. First, there's that zero-day exploit. Zero-day exploits are very valuable on the black market, so either the author didn't know about that market, or didn't care about selling the exploit for cash. Then, Stuxnet is a worm – it spreads – but it stops after just three generations, presumably to limit how far it travels. So where did Stuxnet spread? Top of the list is Iran, followed by Indonesia and India. The results might be skewed by anti-virus usage in those places, but Iran certainly looks the likely target. What about the code signing? The author had the resources to get hold of the signing keys of two Taiwanese companies; whether that was by bribing trusted employees, or by pressuring the companies in some other way, is uncertain. That sort of resource also argues against the author not knowing about the black market for zero-day exploits. Symantec also reports that it has received samples of an earlier version of Stuxnet, without the zero-day exploit, which might have been written at least a year ago, so the author had long-term commitment. It is all circumstantial evidence, but it points to a well-organised, well-funded attacker, with friendly links to Taiwan, wanting to find out about and control industrial systems in Iran. Perhaps my co-panellists can speculate on possible suspects?
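The reported three-generation limit is easy to picture as a counter carried with each copy. This is a toy simulation of my own, not Stuxnet's actual code:

```python
# Toy simulation of a propagation counter: each new copy carries one less
# "generation", so spread stops a fixed number of hops from the seed.
def infect(host: str, generations_left: int, neighbours: dict) -> list:
    if generations_left == 0:
        return []                       # counter exhausted: this copy is inert
    reached = [host]
    for next_host in neighbours.get(host, []):
        reached += infect(next_host, generations_left - 1, neighbours)
    return reached

# A chain of USB hand-offs: A -> B -> C -> D -> E
chain = {"A": ["B"], "B": ["C"], "C": ["D"], "D": ["E"]}
print(infect("A", 3, chain))            # ['A', 'B', 'C'] - D and E stay clean
```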
However, even if we suspect Stuxnet was some cloak-and-dagger spy operation run by a major government, the fact remains that it is now a template for how to do it. Everything I've talked about is publicly available, on various company and news organisation websites, and we would be foolish to assume that the black hats are taking no interest. Terrorists would be the major concern, but criminals could use SCADA attacks for extortion – similar to the DDoS attacks on gambling sites: demonstrate the ability to shut down the system, and demand a fee not to do it at a critical time.
My second example was the report into the 2008 crash of Spanair flight JK 5022. In the first news story I saw, a trojan was cited as a contributing factor in the crash and the resulting 154 deaths. My initial reaction was horrified resignation – it had finally happened, malicious software had killed someone. This was the scare story from the early days of computer viruses:
What is the worst thing a computer virus could do?
Well – I suppose it could, say, infect a life-support machine in a hospital.
And then what?
Umm – the machine wouldn't work, and someone would die.
Oh No, Horror, computer viruses are going to kill us all!
Reasoned voices would then point out that it had never happened, that life-support machines and similar critical systems are controlled by specialised computers that don't get PC viruses, and that, anyway, these systems are isolated. Since then, we've had increasing commoditisation of hardware and increased networking, so it is only a matter of time before the components come together in a tragedy. Perhaps flight JK 5022 was that tragedy.
It turns out I was wrong. Later reports clarified that the infected system was keeping maintenance records: it would have given a warning when there were three problem reports for the aircraft, but the most recent report had not been entered by the maintenance engineers. No warning would have been issued even by a clean system, so the trojan definitely did not cause the computer to fail to give a warning.
The point remains, however, that commodity hardware and software are spreading. Even when the control systems are specialised, the monitoring and support systems are not, as we have seen with Stuxnet and with the maintenance computer. There is also no such thing as an isolated computer, as both these examples demonstrate.
How Likely is an Attack?
I've already said that we've seen massive, crippling attacks in some areas: Slammer and Blaster disrupted global communications, but the consequences were not too severe. What about collapse-of-Society severity attacks? Here, the low-hanging fruit is SCADA: these systems are wide open to abuse, and the consequences could be catastrophic. However, I doubt that they will be, for two reasons. One is that the engineers building those systems might not have been concerned about security, but safety is a different matter: the most critical systems will have fail-safe mechanisms. The second is that the attacks are unlikely to be scalable. SCADA systems are highly customised, so gaining access and changing a few controls – a valve here, a motor there – might do anything: send a reactor into meltdown, or divert a planeload of luggage to the wrong destination. Both are attacks on critical infrastructure; only one is a catastrophe. For a widespread, effective attack, the attacker would need a lot of reconnaissance, starting with which SCADA systems are of interest, followed by detailed evaluation: what changes will cause the desired disaster?
Defending Critical Infrastructure
I think there are some lessons to learn here. First, there is no magic bullet, no one solution to the problem. We cannot gather all the critical infrastructure together and protect it in one way; it goes everywhere, and connects to everything. Each system will need its own security posture, adapted to its unique circumstances. Fortunately, each system has its own experts who can be co-opted for this; that is, the task is one of educating the responsible parties and empowering them to make the appropriate decisions for their systems.
Second, defence in depth is required. Stuxnet has shown that security through obscurity can be defeated, air-gap firewalls are not impervious, and digital signatures don't guarantee safe code, but they can all contribute to making an attack harder. Well, except security through obscurity – that assumes your enemy can't do research, and doesn't know how to spell Google. The lesson is: don't build one or two strong layers and become complacent; add more layers, assume they are going to fail, and plan accordingly.
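A back-of-envelope sketch of my own shows why the extra layers pay off, under the optimistic assumption that layers fail independently:

```python
# If n independent layers each fail with probability p, an attack succeeds
# only when every layer fails: p ** n.
p_layer_fails = 0.1                     # assume each layer stops 90% of attacks
for n_layers in (1, 2, 3):
    print(n_layers, "layer(s): breach probability",
          round(p_layer_fails ** n_layers, 6))
# 1 layer(s): breach probability 0.1
# 2 layer(s): breach probability 0.01
# 3 layer(s): breach probability 0.001
# Caveat: Stuxnet shows real failures are often correlated (one stolen key,
# one shared password), so treat independence as the best case, not a promise.
```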
Third, with so many diverse systems, there are countless possible attack scenarios, so it is infeasible to make plans for each one. Ignore the "movie-plot" scenarios, and focus on the principles of good security.
The diversity of systems has an upside. It makes planning a catastrophic attack difficult. Taking out one target, one bank or one power station, might be easy, but taking out every bank or every power station will require reconnaissance and planning for each one, which implies manpower and therefore provides an opportunity for human intelligence. Surveillance and monitoring can identify people who are researching internal details of many systems, and flag them for further investigation.
Critical to Whom?
Two final thoughts:
What is critical depends on who you are. A particular company, especially if it is an SME, that is unable to close a deal because its bank has been hit, or gets no web sales because its ISP has been hit, would define the failure that puts it out of business as critical. The economy is built from these companies.
If SCADA vulnerabilities seem frightening, consider the plans for smart electricity meters. They won't have the advantages of security through obscurity or diversity of systems. Imagine that an attacker takes control of them and, instead of reducing peak electricity demand, increases it. Smart meters have great Green potential, but it would be a really good idea to consider the security implications before putting one in every home.
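To give a sense of scale, here is some rough arithmetic of my own, with purely illustrative numbers: a compromised meter population switching load in unison behaves like a single enormous actuator on the grid.

```python
# Illustrative numbers only: what synchronised switching across a large
# compromised meter population could add to peak demand.
compromised_meters = 1_000_000          # meters an attacker controls
load_per_home_kw = 2                    # load each meter switches on at once

surge_gw = compromised_meters * load_per_home_kw / 1_000_000  # kW -> GW
print(f"simultaneous surge: {surge_gw:.1f} GW")   # 2.0 GW, on the order of
                                                  # a large power station
```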