Features

January 8, 2014  

Exercise is good for you

Navy Cyber Defense Operations Command, Joint Expeditionary Base Little Creek-Fort Story, Va. (U.S. Navy)

How can we better prepare for cyber incidents?

By Peter W. Singer and Allan Friedman

This is an excerpt from Singer and Friedman’s new book, “Cybersecurity and Cyberwar: What Everyone Needs to Know,” released Monday by Oxford University Press. Another excerpt is here: “What about deterrence in an era of cyberwar?”

Twice in six months sophisticated attackers were able to gain access to the production code that runs Facebook’s website, used by over a billion people around the world. The first time, a Facebook engineer’s computer was compromised by an unpatched zero-day exploit. This enabled the attacker to “push” their own malicious computer code into the “live build” that runs the website.

The second time, in early 2013, several engineers’ computers were compromised after visiting a website that launched a zero-day exploit on its victims. But this time, the attacker was unable to get inside sensitive systems and could cause no major damage.

The reason these two attacks caused such differing effects lies in their origin. The attackers in the first incident were actually part of a security training exercise in 2012, led by an independent “red team.” This preparation meant that when real attackers tried to harm Facebook in the second incident just a few months later, they weren’t able to do much at all.

The challenge of defending against cyber threats is not just due to their varied and diffuse nature, but also because so much depends on how organizations react and respond when cyber push comes to cyber shove.

Cybersecurity and Cyberwar by Peter W. Singer and Allan Friedman


Prussian Gen. Helmuth Graf von Moltke’s famous military adage should serve as warning: “No plan survives first contact with the enemy.” It is one thing to develop a plan; it’s quite another to understand how well that plan will hold up when tested. In the cyber world this holds even truer. Responses must be considered at every level, from national security strategy to enterprise risk management, down to the technical level, where engineers must make fast decisions about network incursions. It is not just about protection; the wrong response could be worse than the attack itself.

This is where the value of exercises and simulations comes in. They don’t just test defenses at the pointy end of cyberspace; they also help everyone better understand the effects of their plans and procedures.

At the technical level, controlled environments offer a semiscientific setting in which to study both attacks and defenses. “Test beds” are extensible simulations of systems, networks and operational environments that can be attacked over and over again. This repetition allows researchers to simulate failures, test the interoperability of equipment and standards, and understand how attacks and defenses interact. And, of course, you can carry out actions in a test bed that you would never want to in the real world. One test bed created by the National Institute of Standards and Technology allows researchers to repeatedly crash a simulated version of the electrical power grid to observe its failure modes and resiliency, something that would obviously be problematic with the actual power grid!

Controlled environments can be used to study the offensive side of cyber operations as well. One tactic used by security researchers is the “honeypot,” an isolated machine that is intentionally exposed to attacks. By observing how different types of malware attack these machines, researchers can identify new types of attacks and devise defenses. Entire “honey nets” simulate complete networks or even regions of the whole Internet. During these tests, a cat-and-mouse game plays out between researchers and attackers: Sophisticated hackers try to determine whether they are inside a honey net, in which case they change their behavior to avoid disclosing their tactics and tricks.
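For readers curious what a honeypot looks like in its simplest form, the sketch below (our illustration, not from the book) shows a minimal low-interaction decoy: a script that listens on a TCP port, presents a fake service banner, and logs the address of anyone who connects. The port choice and banner text are illustrative assumptions; real honeypots are far more elaborate.

```python
# Minimal low-interaction honeypot sketch (illustrative only).
# It listens on a local TCP port, records each connection attempt,
# and replies with a decoy service banner before closing.
import socket
import threading

def run_honeypot(host="127.0.0.1", port=0,
                 banner=b"220 FTP Server ready\r\n",  # fake banner (assumed)
                 max_conns=1, log=None):
    """Accept up to max_conns connections, recording each peer address."""
    if log is None:
        log = []
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))          # port=0 lets the OS pick a free port
    srv.listen(5)
    bound_port = srv.getsockname()[1]

    def serve():
        for _ in range(max_conns):
            conn, addr = srv.accept()
            log.append(addr)        # record who probed the decoy
            conn.sendall(banner)    # present the fake service banner
            conn.close()
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return bound_port, log

if __name__ == "__main__":
    port, log = run_honeypot()
    # Simulate an attacker probing the decoy service.
    client = socket.create_connection(("127.0.0.1", port))
    print(client.recv(64).decode().strip())
    client.close()
```

A real deployment would log timestamps and payloads to durable storage and emulate the protocol more deeply; the point here is only the core idea of an instrumented, intentionally exposed listener.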

Meanwhile, military offensive capacity can be refined in “cyber ranges.” One of the challenges in developing cyber weapons is understanding how an attack will spread. If a cyber weapon is to be used with precision, it is imperative both to avoid detection and to minimize collateral damage beyond the intended target. This precision becomes even more important if the attack is meant to interfere with physical processes.

In the case of Stuxnet, for example, many believe that practice was needed to understand how the software would deploy and how altering the industrial controllers would impact the targeted process of uranium enrichment. Reportedly, the new cyber weapon was tested at Israel’s secretive nuclear facility in Dimona.

As one source told the New York Times about the test effort, “To check out the worm, you have to know the machines. …  The reason the worm has been effective is that the Israelis tried it out.”

On the defensive side, vulnerability tests and practice exercises are quite valuable for actors in cyberspace ranging from militaries to private companies. This can be as simple as penetration testing, in which a “red team” of outside security experts looks for vulnerabilities to exploit. These experts know how to attack live networks in a controlled fashion, revealing the openings a more damaging attack might use, without putting actual operations at risk.

More sophisticated exercises can be completely simulated, like a traditional war game. Again, the notion of a “war game” is a bit of a misnomer in that such exercises can help any cyber defender, whether a military unit, a university or a private firm, better understand what threats it faces and where it is vulnerable. More important, they then help defenders better understand their own likely and needed responses.

As an illustration, one company that went through a war game studied by the McKinsey consulting firm found that its entire security team was completely dependent on email and instant messaging, with no backup communication plan to coordinate defenses during a full-scale network-based attack.

These exercises also underscore the importance of coordinating technical decisions with the business mission. In another war game, McKinsey found that disconnecting a company’s network to secure it hurt customers more than remediating the issue while staying online would have.

Exercises also help key leaders grasp what matters before a real crisis occurs. Senior management, which too often dismisses cybersecurity concerns as either too technical or too unlikely, can get a hands-on understanding of the importance of planning. This exposure can prevent future panic and make managers more willing to commit resources for defense, resiliency and response.

Indeed, raising awareness and securing management buy-in for cybersecurity is a key outcome of many a simulation. As one Estonian defense official explained, leaders have many priorities and interests, so a health minister “who will be yawning through cybersecurity talk” might pay attention if the exercise involves an attack on something relevant to his department, such as a pension database.

There is a natural trade-off between the scale of the exercise and the level of detail in what can be learned, however. This has often been an issue in simulations at the national defense level, where too often the events emphasize the performative aspects of the exercise. The Bipartisan Policy Center’s 2010 “Cyber ShockWave” was an attempt at a war game featuring former senior officials playing the roles of government officials responding as the nation was hit by a series of crippling cyberattacks. The exercise was criticized by some as focusing more on appearances than on substance, especially when fake news coverage of the game was later broadcast on CNN under the title “We Were Warned.”

Given the cost of these larger, more complex simulations, the designers must have a clear vision of the goals of the exercise and design the game appropriately. For example, finding vulnerabilities is a different task from discovering better modes for coordination, just as testing strategy is different from raising public awareness.

Exercises can also create useful opportunities to strengthen personal networks of cooperation between different agencies and even different governments. For instance, the European Network and Information Security Agency’s “Cyber Europe” war game is based on a fairly simple scenario but really has a goal of inducing key officials from different European countries to interact more on cyber issues. The whole idea is that you don’t want these people to talk for the first time in the midst of an actual cyber crisis.

Such exercises can even be used as a means to help defuse tensions between nations seemingly at odds. Think tanks in Washington and Beijing have, for instance, cooperated to run a series of small-scale cyber simulations involving the United States and China. While these were not official collaborations between governments, representatives from the State Department and the Pentagon participated, along with their Chinese counterparts. The goal was to build a shared understanding of how cyber weapons can be used and how each side approaches the problem. The hope is that, in the long run, such exchanges will help build trust and reduce the likelihood of miscommunication or poor assumptions during a real crisis.

Exercises and simulations are useful, but certain obstacles limit their effectiveness. Like any model, they are simplifications and must strike a balance between verisimilitude and generalizability. If they are too focused and realistic, fewer lessons can be learned and applied, and it is easier to “fight the scenario.” On the other hand, if they are too general, it is unclear what will be learned.

This balancing act also applies to the scenarios and the “red team” that is implementing them. When groups are testing themselves, there is a risk that they will go easy to show off how good they already are or staff their “attackers” with people from their own organization, who think and act in the very same ways. However, if the goal is to explore other areas, such as cooperation, making the test too hard can also be a problem. Former DHS official Stewart Baker highlighted this tension in scenario-building: “If it’s so one-sided the attackers win all the time … then the exercise is not actually teaching people anything.”

This gets back to the purpose of an exercise. While unexpected lessons are a lucky outcome, it is important to have a stated goal of the exercise to inform the scenario creation and simulation parameters along with a plan for debriefing and implementing lessons learned. One of the noteworthy features of cybersecurity is that it spans traditional boundaries, whether they are national or organizational. Responses require the interactions of numerous individuals with different responsibilities, management structures and incentives, not to mention different professional paradigms. Here is where exercises and simulations may bring the most value. IT professionals, managers, lawyers and public relations experts may all instinctively approach a cyber incident differently. Thus, engaging in simulations and exercises allows a more direct confrontation between those personal and organizational viewpoints and roles.

The Facebook team learned this from its own self-testing experience. The team saw how an effective response required cooperation among groups ranging from the developers of the Facebook commercial product to the internal information security team responsible for the company network. The prior testing didn’t just help them plug vulnerabilities; it yielded lessons in cooperation that proved critical when the war game came to life in an actual attack. As Facebook’s director of incident response, Ryan McGeehan, said, “We’re very well-prepared now and I attribute that to the drill.”

Peter Warren Singer is Senior Fellow and Director of the Center for 21st Century Security and Intelligence at the Brookings Institution. He is a contributing editor to Armed Forces Journal.

Allan Friedman is a Visiting Scholar at the Cyber Security Policy Research Institute in the School of Engineering and Applied Sciences at George Washington University, where he works on cybersecurity policy.