Dangerous corner
It's a sign of the times that the most immediately useful explanation of the Intel CPU security flaws announced this week is probably the one at Lawfare. There, Nicholas Weaver explains the problems that Meltdown and Spectre create, and gives a simple, practical, immediate workaround for unnerved users: install an ad blocker to protect yourself from JavaScript exploits embedded in the vulnerable advertising supply chain.
There is, of course, plenty of other coverage. On Twitter, New York Times cybersecurity reporter Nicole Perlroth has an explanatory stream. The Guardian says it's the worst CPU bug in history, Ars Technica has a reasonably understandable technical explanation of the basics, and Bruce Schneier is collecting further links. At his blog, Steve Bellovin reminds us that security is a *systems* property - that is, that a system is not secure unless every one of its components is, even the ones we've never heard of.
The gist: the Meltdown flaw affects almost all Intel processors back to 1995. The Spectre bug affects billions of processors: all CPUs from Intel, AMD, and ARM. The workaround - not really a solution - is operating system patches that replace the compromised hardware functions and therefore slow performance somewhat. The longer-term solution to the latter is to redesign processors, though since Perlroth's posting CERT appears to have changed its recommended solution from replacing the processors to patching the operating system. Because the problem is in the underlying hardware, no operating system escapes the consequences.
More entertaining is Thomas Claburn's SorryWatch-style analysis of Intel's press release. If you have a few minutes, you may like to use Claburn's translation to play Matt Blaze's Security Problem Excuse Bingo. It's also worth citing Blaze's comment on Meltdown and Spectre: "Meltdown and Spectre are serious problems. I look forward to seeing the innovative ways in which their impact will be both wildly exaggerated and foolishly dismissed over the coming weeks."
Doubtless there's a lot more to learn about the flaws and their consequences. Desktop operating systems and iPhones/iPads will clearly have fixes. What's less clear is what will happen with operating systems with less active update teams, such as older versions of Android, which are rarely, if ever, updated. Other older software matters, too: as we saw last year with WannaCry, there are a noticeable number of people still running Windows XP. For some of those, upgrading isn't really possible because the software that runs on those machines is itself irreplaceable. Those machines should not be connected to the internet, but as we wrote in 2014 when Microsoft discontinued all support for XP, software is forever. We must assume that there are systems lurking in all sorts of places that will never be updated, though they may migrate if and when their underlying hardware eventually fails. How long does it take to replace 20 years of processors?
The upshot is that we must take the view that we should have taken all along: nothing is completely secure. The Purdue professor Gene Spafford summed this up years ago: "The only truly secure system is one that is powered off, cast in a block of concrete, and sealed in a lead-lined room with armed guards - and even then I have my doubts."
Since a computer secured in line with Spafford's specifications is unusable, clearly we must make compromises.
One of the problems that WannaCry made clear is the terrible advice that emerges in the wake of a breach. The first go-to piece of advice is usually to change your password. But this would have made no difference in the WannaCry case, where the problem was aging systems exploited by targeted malware. After the system has been replaced or remediated (Microsoft did, exceptionally, release a security patch for XP on that occasion), *then*, sure, change your password. It would make no difference in this case, either, since no in-the-wild exploitation of these bugs is known to date, and Spectre in particular requires a high level of expertise and resources to exploit.
The most effective course would be to replace our processors. This is obviously not so easy, since at least one of the flaws affects all Intel processors back to 1995 (so...Windows 95, anyone?) and many ARM, AMD, and even some Qualcomm processors as well. The problem appears to have been, as Thomas Claburn explains in an update, that processor designers made incorrect assumptions about the safety of the "speculative execution" aspect of the design. They thought it was adequately fenced off; Google's researchers beg to differ. Significantly, they appear not to have revisited their original decision since 1995, despite the clearly escalating threats and attacks since then.
The result of the whole exercise is to expose two truths that have been the subject of much denial. The first is yet more evidence that security is a market failure. Everything down to the basics needs to be reviewed and rethought for an era in which we must design everything on the assumption that an adversary is waiting to attack it. The second is that we, the users, need to be careful about the assumptions we make about the systems we use. The most dangerous situation is to believe you are secure, because it's then that you unknowingly take the biggest risks.
Illustrations: Satan inside (via Wikimedia).
Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard - or follow on Twitter.