By Peter W. Singer and Allan Friedman
This is an excerpt from Singer and Friedman’s new book, “Cybersecurity and Cyberwar: What Everyone Needs to Know,” released Monday by Oxford University Press. Another excerpt, “Exercise is good for you,” ran yesterday.
“Cyber offense may provide the means to respond in-kind. The protected conventional capability should provide credible and observable kinetic effects globally. Forces supporting this capability are isolated and segmented from general-purpose forces to maintain the highest level of cyber resiliency at an affordable cost. Nuclear weapons would remain the ultimate response and anchor the deterrence ladder.”
These lines come from a 2013 US Defense Science Board report, one of the highest-level official advisory groups to the Secretary of Defense. While the text reads like typical Pentagonese, what these lines translate into is a proposal to create a new US military force specially designed to retaliate against a cyber strike. Of note, it wouldn’t just be able to respond with counter cyber weapons, but also would include “Global selective strike systems e.g. penetrating bomber, submarines with long range cruise missiles, and Conventional Prompt Global Strike [a ballistic missile].” Foreign Policy magazine’s reaction to the news perhaps sums it up best: “Wow.”
When we think about deterrence, what most often comes to mind is the Cold War model of MAD, mutually assured destruction. Any attack would be met with an overwhelming counterstrike that would destroy the aggressor as well as most life on the planet, making any first strike literally mad.
Yet rather than just getting MAD, deterrence really is about the ability to alter an adversary’s actions by changing its cost-benefit calculations. It reflects subjective, psychological assessments, a “state of mind,” as the US Department of Defense says, “brought about by the existence of a credible threat of unacceptable counteraction.” In addition to massive retaliation, the adversary’s decisions can also be affected by defenses, in what has been called “deterrence by denial.” If you can’t get what you want by attacking, then you won’t attack in the first place.
Theorists and strategists have worked for decades to fully understand how deterrence works, but one of the key differences in the cyber realm, as we have explored, is the problem of “who” to deter or retaliate against. Specifically, this is the issue of attribution we explored earlier.
The effect of this on real-world politics is driven by the fact that the question of “who” in cyberspace is far more difficult than ever could have been imagined by the original thinkers on deterrence theory back in the 1950s. Tanks and missile launches are hard to disguise, while networks of compromised machines or tools like Tor make anonymity easy. The threat of counterstrike requires knowing who launched the initial attack, a difficult thing to prove in cyberspace, especially in a fast-moving crisis. Computer code does not have a return address, and sophisticated attackers have grown adept at hiding their tracks. So painstaking forensic research is required, and, as we saw, it’s rarely definitive.
Moreover, for the purposes of deterrence, it’s not enough to trace an attack back to a computer or find out who was operating a specific computer. Strategically, we must know what political actor was responsible, in order to change their calculations.
This problem has made improving attribution (or at least making people think you have improved attribution) a key strategic priority for nations that believe themselves at risk of cyberattack. So, in addition to considering the massive retaliatory forces outlined by the Defense Science Board, the United States has grown its messaging efforts on this front. In 2012, for example, then Secretary of Defense Panetta laid down a public marker that “Potential aggressors should be aware that the United States has the capacity to locate them and to hold them accountable for their actions that may try to harm America.” In turn, these potential aggressors must now weigh whether that claim was bluster or real.
The “who” of deterrence is not just about identification but also context. The United States has approached deterrence very differently when facing terrorists, rogue nations, and major powers. While the theory often lays out a series of set actions and counteractions, the reality is that different actors can dictate very different responses. Imagine, for example, what the Bush administration’s reaction might have been if the groups attacking the United States’ NATO partner Estonia in 2007 had been linked to Tehran instead of Moscow.
If the actor is known, the next component in deterrence is the commitment to retaliate, a decision whether to match or escalate the use of force. Unlike when the United States and the Soviet Union pointed nuclear weapons at each other’s territory, the players and stakes in the cyber realm are far more amorphous. Some even argue that if one wants to change an adversary’s “state of mind,” the “credible threat” against cyberattack needs to go beyond the cyber realm.
This is the essence of the Pentagon’s plan for a mixed cyber- and real-world retaliatory force, which has also been proposed even in situations of espionage. But going back to the issue of context, the challenge of intellectual property theft is that an in-kind response would not be effective; the very fact that your secrets are being stolen is a pretty good indicator that the enemy doesn’t have anything worth stealing back. Likewise, the traditional deterrence and retaliation model in espionage (they arrest your spies, you arrest theirs or deport some embassy staff) doesn’t translate well when the spy is thousands of miles away and likely outside of the government. Thus, some have argued that alternative means have to be found to influence an enemy’s calculations. Dmitri Alperovitch, who watched the massive Shady RAT attacks play out, argues that we should try to “raise the economic costs on the adversary through the use of such tools as sanctions, trade tariffs, and multilateral diplomatic pressure to impact their cost benefit analysis of these operations.”
Timing also plays a more complicated role in cyber deterrence. In the nuclear age, speed was key to MAD. It was crucial to show that you could get your retaliatory missiles and bombers off the ground before the other side’s first strike. In the cyber age, however, there is simultaneously no time and all the time in the world to respond. The first strike might play out in nanoseconds, but there are many compelling reasons to delay a counterstrike, such as to gain better attribution or better plan a response.
Similarly, how much of a guarantee of reprisal is needed? In the nuclear realm, the game theory that guided American Cold War planners mandated comparable “survivable” counterstrike forces that would make sure the other guy got nuked even if he tried a sneak attack. In a cyber era, it’s unclear what a “survivable” counterforce would look like, hence the US plan to anchor retaliation in its nuclear equivalent.
The same lack of clarity extends to the signals that the two sides send each other, so key to the game of deterrence. If you fire back with a missile, the other side knows you have retaliated. But fire back with malware, and the effect is not always so evident, especially as its impact can sometimes play out just like a normal systems failure. This means that different types of cyber weapons will be needed for different purposes in deterrence. When you want to signal, “noisy” cyber weapons with obvious effects may be better, while stealthy weapons might be better suited to offensive operations. The result, though, is something that would be familiar to those wrestling with past deterrence strategies: in the effort to head off war, new weapons will be in constant development, driving forward an arms race.
In short, the growing capacity to carry out multiple types of cyberattack is further complicating the already complex field of deterrence. Without a clear understanding or real reservoir of test cases to study for what works, countries may have to lean more heavily on deterrence by denial than during the nuclear age.
Ultimately, while the technologies may be shifting, the goals of deterrence remain the same: to reshape what an enemy thinks. Cyber deterrence may play out on computer networks, but it’s all about a state of mind.
Peter Warren Singer is Senior Fellow and Director of the Center for 21st Century Security and Intelligence at the Brookings Institution. He is a contributing editor to Armed Forces Journal.
Allan Friedman is a Visiting Scholar at the Cyber Security Policy Research Institute in the School of Engineering and Applied Sciences at George Washington University, where he works on cybersecurity policy.