The zero effect
Some fifteen years ago I reviewed a book about the scary future of our dependence on computers. Its concluding note: that criminals could take advantage of a zero-day exploit to disseminate a virus around the world that, at a given moment, would shut down all the world's computers.
I'm fairly sure I thought this was absurd and said so. Since when have we been able to get every computer to do anything on command? But there was always the scarier and less unlikely prospect that building computers into more and more everyday things would multiply the chances for code to put the physical world at risk of attack.
By any measure, the Stuxnet worm that has been dominating this week's technology news is an impressive bit of work (it may even have had a beta test). So impressive, in fact, that you imagine its marketing brochure said, like the one for the spaceship Heart of Gold in The Hitchhiker's Guide to the Galaxy, "Be the envy of other major governments."
The least speculative accounts, like those of Bruce Schneier, Business Standard, and Symantec, and that of long-time Columbia University researcher Steve Bellovin, agree on a number of important points.
First, whoever coded this worm had the backing of an extremely well-resourced organization. Symantec estimates the effort would have required five to ten people and six months - and, given that the worm is nearly bug-free, teams of managers and quality assurance folks. (Nearly bug-free: how often can you say that of any software?) In a paper he gave at Black Hat several years ago, Peter Gutmann documented the highly organized nature of the malware industry (PDF). Other security researchers have agreed: there is a flourishing ecosystem around malware that includes myriad types of specialist groups providing all the features of other commercial sectors, up to and including customer service.
In addition, the writers were willing to use up three zero-day exploits (many reports say four, but Schneier notes that one has been identified as a previously reported vulnerability). This is expensive: such vulnerabilities are hard to find, and each can effectively be used only once, because once it's spotted in the wild it will be patched. You don't waste them on small stuff.
Plus, the coders were able to draw on rather specialised knowledge of the inner workings of Siemens programmable logic controller (PLC) systems, and to gain access to the certificates needed to sign device drivers. And the worm was able both to infect widely and to target specifically. Interesting.
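Those certificates are worth a moment's thought. Here is a minimal sketch of the general principle in Python (my illustration, using the third-party cryptography package - not Windows' actual Authenticode driver checks, and certainly not Stuxnet's code): a signature produced with a vendor's stolen private key verifies perfectly against the vendor's public certificate, so the verifier cannot tell malicious code from the vendor's own.

```python
# Illustrative sketch only: the general code-signing principle.
# pip install cryptography
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Stand-in for the vendor's signing key. In Stuxnet's case the attackers
# had the real keys, so they could produce exactly this kind of signature.
vendor_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

driver = b"driver binary, benign or malicious"
signature = vendor_key.sign(driver, padding.PKCS1v15(), hashes.SHA256())

# The loader sees only the public key. verify() raises InvalidSignature on
# a bad signature and passes silently here, whoever actually signed the code.
vendor_key.public_key().verify(signature, driver,
                               padding.PKCS1v15(), hashes.SHA256())
print("Signature verifies: the system trusts the code, whoever wrote it.")
```

Revoking a compromised certificate closes that door - but only after the theft has been discovered.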
The big remaining question is what the goal was: to send a message? To hit one specific, as yet unidentified, target? A simple proof of concept? Whatever the purpose was, it's safe to say that this will not be the last piece of weapons-grade malware (as Bellovin calls it) to be unleashed on the world. If existing malware is any guide, future Stuxnets will be less visible, harder to find and stop, and written to more specific goals. Yesterday's teenaged bedroom hacker defacing Web pages has been replaced by financial criminals whose malware cleans other viruses off the systems it infects and steals very specifically useful data. Today's Stuxnet programmers will most likely be followed by more complex organizations with much clearer and more frightening agendas. They won't stop all the world's computers (they'll need their own to keep running); but does that matter if they can disrupt the electrical grid and the water supply, or reroute trains and snarl air traffic control?
Schneier notes that press reports incorrectly identified the Siemens systems Stuxnet attacked as SCADA (Supervisory Control and Data Acquisition) systems rather than PLCs. But that doesn't mean SCADA systems are invulnerable: Tom Fuller, who ran the now-defunct Blindside project for the consultancy Kable under a UK government contract in 2006-2007, spotted the potential threats to SCADA systems as long ago as that. Post-Stuxnet, others are beginning to audit these systems and agree. An Australian audit of Victoria's water systems concluded that they are vulnerable to attack, and it seems likely many more such reports will follow.
But the point Bellovin makes that is most likely to be overlooked is this one: building a separate "secure" network will not provide a strong defense. To be sure, we are adding new vulnerabilities to more and more pieces of infrastructure. Many security experts agree that wireless electricity meters and the "smart grid" are being deployed without adequate attention to the privacy and security issues they are going to raise.
The temptation to overlook Bellovin's point is going to be very strong. But the real-world equivalent is imagining that your home computer can't be stolen because it sits on a desert island, surrounded by a moat filled with alligators. In reality, a family member or invited guest can still copy your data and make off with it, or some joker can drop in by helicopter.
Computers are porous. Infrastructure security must assume that and limit the consequences.
Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. This blog eats all non-spam comments; I still don't know why.