The Cyber Armageddon
Published in IEEE Spectrum Magazine, Sept 2010
I ran across one of those the-end-is-near cartoons. A scruffy person holds a sign that says "The World Will End in 2000"—except the "2000" is crossed out and amended to "2012." The many dire predictions about cyberwar feel a lot like that. The word has shown up frequently on magazine newsstands this summer—but, my editor reminds me, it was also on the cover of Time back in August 1995.
I've had the opportunity to listen to lots of smart people about the cyber problem, and to be honest, I don't know what conclusion to draw. My fear is that no one else knows, either. There is no lack of information about how bad the problem is, but there is almost nothing written about what to do about it. In the end, I believe it comes down to intelligent risk management—something we're not often good at.
For one thing, much of the threat comes from users already behind the Maginot Line of network firewalls. Even well-intentioned people do risky or forgetful things, such as leaving a laptop at airport security. Moreover, an insider isn't just a systems administrator; it's anyone or anything that touches your network, including all the equipment and the whole supply chain behind it. All it takes is one employee using the same USB drive on two different networks—the IT equivalent of a surgeon not washing his hands between operations—to fatally compromise a system's security.
If your data is valuable enough, there is almost nothing you can do to provide total security against an expert adversary. Simply put, the attacker may be smarter than anyone you have defending the network. Cyberattacks are not only impossible to block, they're often difficult to detect; you may not even know you're under attack in the first place. Then there's the problem of attribution—not just identifying the source of the attack but also determining intent and responsibility.
If you talk to knowledgeable defenders about attribution, they will say that they know how to trace attacks. If you talk to offensive experts, they will of course say nothing, but with a smile that projects an unmistakable confidence. An expert in the field summed it up by observing that there is a huge imbalance: the probability that an attacker is detected and identified is low, the consequences to the attacker even when caught are relatively minor, and the impact of a defensive failure is enormous.
Yet for all that, what cyber catastrophes have we actually experienced? The financial system has been seen as particularly vulnerable—so much so that a proposal has been floated in diplomatic circles for nation-states to eschew attacks on the financial structure, much as in conventional warfare we agree not to harm churches and hospitals. Yet of all the banks that have gone under recently (and there have been a lot of them), none did so because of a cyberattack. The Internet itself has proven resilient, and though parts of it can go down, the organic growth of pathways and the diversity of equipment provide enormous robustness. Many of us engineers remember when Bob Metcalfe, the inventor of Ethernet, famously and quite literally ate the words he had written in 1995 predicting a collapse of the Internet the following year.
The cybersecurity problem has many dimensions, and technology can be only a part of any proposed defensive strategy. I think that any objective analysis of the situation would conclude that perfect security is not possible, other than through the draconian proposition of complete isolation from networks. But if computer security is fundamentally impossible, what is Plan B? My own belief is that it can only be the acknowledgment of fallibility, the acceptance of risk, and the preparedness for continued operation under degraded cyber conditions. I wish I had better wisdom, but this is truly a wicked problem.