January 1, 2009  

Botnets outmaneuvered

Georgia’s cyberstrategy disproves cyberspace carpet-bombing theory

In May, Col. Charles Williamson proposed that the U.S. project power by building a military botnet “to target computers [so] that they can no longer communicate,” thereby “[creating] the deterrent we lack.” (“Carpet bombing in cyberspace,” AFJ, May)

The July and August distributed denial-of-service (DDoS) botnet attacks against the country of Georgia were the first substantial, real-world tests of his theory. We now have tangible, hard evidence upon which to base judgments. So how did Williamson’s proposed botnet strategy hold up against the unflinching, harsh reality of real cyberconflict? It appears the Georgian government didn’t stick to the script. Williamson raises insightful points in his review of the Defense Department’s current cyberposture; nonetheless, Georgia’s reaction to the 2008 botnet attacks disproves his theory in several areas: shooting back, cybermaneuver, information targeting, defense in breadth and deterrence.

Shooting back

On July 19, unknown parties used a computer located at a U.S. “.com” IP address to command and control a DDoS attack against the Web site of Georgia’s president, Mikheil Saakashvili. The attack rendered the site inoperable. Although unable to pinpoint the party that controlled the U.S. computer, cyberexperts identified the server as a MachBot DDoS controller written in Russian and frequently attributed to Russian hackers.
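A DDoS controller such as the one described above wins through sheer request volume, and that volume is also what defenders key on: abusive sources stand out by rate. As a purely illustrative sketch, not anything drawn from the article (the class, thresholds and addresses are invented), a defender-side filter might flag any source that exceeds a per-window request budget:

```python
from collections import defaultdict, deque

class FloodDetector:
    """Flag source addresses whose request rate exceeds a budget within
    a sliding time window -- a simplified model of how defenders
    separate DDoS traffic from normal load."""

    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = defaultdict(deque)  # source IP -> request timestamps

    def record(self, source_ip, now):
        """Record one request at time `now`; return True if the source
        has exceeded its budget for the current window."""
        q = self.hits[source_ip]
        q.append(now)
        # Discard timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_requests

detector = FloodDetector(max_requests=100, window_seconds=1.0)
# A bot firing 500 requests in an instant trips the detector;
# an ordinary client sending a single request does not.
flagged = [detector.record("203.0.113.7", now=0.5) for _ in range(500)]
normal = detector.record("198.51.100.2", now=0.5)
```

The deque gives a cheap sliding window: old timestamps fall off the left as new ones arrive on the right, so each source is judged only on its recent behavior.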

To reiterate the point, unknown parties used a civilian computer in U.S. territory to control a botnet attack against Georgia, a presumed U.S. ally. This should sound familiar, as Williamson states: “More challenging is the problem of an attack coming from an ally’s civilian computers. … If we attack them as a matter of proportionate response, it would only be because computers in their territory are attacking us.” He further states: “If the enemy is using civilian computers in his country so as to cause us harm, then we may attack them.” According to Williamson’s model, the July attack could give Georgia — or any government — legitimate reason for proportionate counterattack against the U.S. because hackers used a civilian computer in U.S. territory.

The Georgia cyberevent turns Williamson’s construct of “them attacking us” on its head. In the Georgia case, it was “us attacking them.” This scenario depicts the reality of current cybertrends, as verified in a recent SecureWorks Inc. report indicating that the U.S. is by far the greatest source of global botnet attacks. According to SecureWorks, 20.6 million attacks originated from U.S. computers as of September, compared with 7.7 million from second-place China. To compound the situation, cyberattackers are often not physically based in the U.S., but nonetheless use compromised civilian U.S. computers to launch worldwide attacks.

This point must be hammered home: It is highly likely that the vast majority of the 20.6 million botnet attacks originating from the U.S. came from civilian (.com, .org, etc.) computers, not government (.gov) or military (.mil) systems. Williamson does not take this fact into account when he postulates his “proportionate response” theory.

Given the above statistics, the U.S. could ironically end up on the receiving end of Williamson’s proportionate response proposition. Williamson approves of a suggestion to “[mount] botnet code on the Air Force’s high-speed intrusion-detection systems” because it “allows a quick response by directly linking our counterattack to the system that detects an incoming attack.” Lest we forget, those on the receiving end of U.S.-based botnet attacks can play this game, too, and Williamson’s proportionate response proposal would provide them the roadmap and the apparent justification for doing so.

In July, it appeared to the Georgian government that it was being attacked by a presumed ally — the U.S., or at least from a civilian computer in U.S. territory. According to Williamson, Georgia would have been justified, assuming it determined a threat to its national security, to launch a proportionate cyber-counterattack against the U.S. The question is, against whom in the U.S. would Georgia launch its proportional response? The White House’s Web site? The Internet service provider in the U.S. from which the unknown bot-herders controlled the attack? In a reversal of Williamson, how would Georgia explain to its best friends that it had to shut down U.S. computers?

It is inconceivable that the U.S. government would agree with Williamson’s fundamental assertion that international law should encourage governments to launch proportional counterattack botnets against other countries, especially given the attribution challenge. As James Lewis of the Center for Strategic and International Studies warns: “It is difficult to deter by threatening counterattack if you do not know who is attacking … [or] if you cannot estimate the degree of collateral damage. It would be a bold president who authorized counterstrikes when he or she does not know the target or the possible extent of collateral damage.”

It is incontrovertible that the U.S. government, as a party to the 2001 Council of Europe Convention on Cybercrime (COE Convention), considers as criminal acts “damaging, deleting, deteriorating, altering or suppressing computer data.” Articles 4 (data interference) and 5 (system interference) of the COE Convention clearly characterize the botnet attack against Georgia as cybercrime. From the COE Convention’s perspective, which implies the U.S. government’s perspective, Interpol and the FBI, rather than NATO or a DoD botnet, are the proper response to botnet attacks. A legal task team from the NATO-accredited and highly regarded Cooperative Cyber Defence Centre of Excellence in Tallinn, Estonia, drew a similar conclusion in stating that “it is highly problematic to apply the Law of Armed Conflict to the Georgian cyber attacks — the objective facts of the case are too vague to meet the necessary criteria of both state involvement and gravity of effect.” U.S. precedent exists, as the Justice Department has successfully prosecuted several high-profile criminal cases over the past two years involving botnet attacks. As we shall see next, in response to a second round of botnet attacks the Georgian government employed a creative strategy — one that Williamson’s model does not account for.

Cybermaneuver

In August, cybersecurity experts observed a second, much larger wave of DDoS attacks against Georgian government Web sites. In response, the Georgian government took an unorthodox step and sought cyberrefuge in the U.S., Poland and Estonia. Within the U.S., Georgia located its cybercapabilities on servers at Tulip Systems (TSHost) in Atlanta, Ga., and at Google in California. When Estonia experienced a cyberattack in 2007, it essentially defended in place; Georgia, on the other hand, maneuvered. It elegantly relocated strategic IP-based cybercapabilities to other defensive points on the Internet, thereby ensuring continued wartime communications with Georgian citizens and forces. By doing so, the Georgian government partially defeated the botnet cyberattack by flowing a portion of its strategic C2 through the U.S. and other allies. Future cyberopponents, taking note of the Georgia example, could merely maneuver their information to avoid Williamson’s botnet carpet bombing.
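In network terms, such a maneuver amounts to repointing public hostnames at pre-arranged hosting the attackers cannot saturate. The sketch below is hypothetical (the hostnames, mirror list and health check are all invented for illustration), but it captures the failover logic a relocation of this kind implies:

```python
# Illustrative failover: when the primary host is unreachable, repoint
# the public hostname at a pre-arranged mirror on hardened third-party
# infrastructure. All hostnames below are invented.

PRIMARY = "www.president.example.ge"
MIRRORS = [
    "mirror-us.example.com",  # e.g., a large U.S. hosting provider
    "mirror-pl.example.pl",   # Poland
    "mirror-ee.example.ee",   # Estonia
]

def choose_host(reachable):
    """Return the first host that answers, preferring the primary.

    `reachable` is a callable reporting whether a host responds; in
    practice this would be a health check, but here it is injected so
    the logic is easy to follow and test."""
    for hostname in [PRIMARY] + MIRRORS:
        if reachable(hostname):
            return hostname
    return None  # every site saturated: the attacker's (unmet) goal

# Under DDoS the primary fails its check, so traffic is served from
# the first healthy mirror instead of stopping altogether.
host = choose_host(lambda h: h != PRIMARY)
```

The defender's advantage is clear in the loop: the attacker must saturate every entry in the list simultaneously, while the defender only needs one healthy mirror anywhere on the Internet.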

The core feature of Georgia’s creative cyberstrategy was the belief that botnets could not take down TSHost or Google’s Internet servers, given their much larger infrastructure and world-class ability to defend it. During the conflict, an astute analyst noted that “Georgia has turned to using the Google blog service as a method of communication … and it has proved to be a sustainable resource. Governments will need to have strategies in place to prepare for this type of attack.” According to TSHost, DDoS attackers also apparently took aim at Georgia’s Web sites on the company’s servers, but to no avail, as its Web servers continued to function.

John Lowry, a member of Professionals for Cyber Defense, points to a counterintuitive trend in the attacker-defender system, in that “unfortunately for the attacker, the defender can frequently implement defenses faster than the attacker can develop attacks.” In response to a botnet attack, Williamson would symmetrically counter with brute force; Georgia, on the other hand, à la Sun Tzu, flowed like water and asymmetrically moved around the attack. Russian botnet attackers didn’t anticipate Georgia’s strategy and couldn’t react quickly enough to its cyber-retrograde. They never contemplated the need to execute a DDoS attack against Google in order to prevent Georgian communications; thus they were unprepared and arguably incapable.

Oddly, as a solution to the fixed fortress defense problem, Williamson structures a fixed cyberdefense. He espouses a strategy that masses fixed hardware and software on the NIPRNET. This system then becomes a nonmaneuverable, point-defense/offense target with limited options. Williamson’s strategy also presumes a fixed opponent and does not compensate for cybermaneuver. Georgia’s rapid cybermaneuver would reduce Williamson’s strategy to reactive cyber-Whack-a-Mole, which, as Georgia proved and Russian hackers can attest, doesn’t work. Ultimately, in an unscripted, fully transparent counter-DDoS operation, subject to the scrutiny of international cyberexperts, Georgia demonstrated that cybermaneuver can easily defeat Williamson’s proposed botnet strategy.

Information targeting

Williamson’s botnet strategy targets opponent equipment (computers) rather than capability (information). Williamson proposes to launch “such massive amounts of traffic to target computers that they can no longer communicate and become no more useful to our adversaries than hunks of metal and plastic.” The axis of success in Williamson’s model rests on botnets destroying computers; the effect, according to Williamson, is that “the target computers are cut off from the Internet.”

Georgia’s real-world response: “So what?”

Williamson’s strategy focuses on the wrong effect. The Georgian government understood that information, not equipment, is the prize, and merely maneuvered its information away from Williamson’s “hunks of metal.” Botnets (means) may have temporarily shut down Georgia’s Web servers, but Georgia’s strategic information capability (ends) remained intact and turned international opinion in its favor.

John Arquilla and David Ronfeldt, authors of “Networks and Netwars: The Future of Terror, Crime, and Militancy,” postulated in 1993 that information and perception are the targets in cyberattacks, not computer equipment. Arquilla and Ronfeldt define netwar as “trying to disrupt, damage, or modify what a target population knows or thinks about itself and the world around it.” They further describe cyberwar as “turning the balance of information and knowledge in one’s favor.” Cyberprofessor Richard Harknett refined these definitions to include attacks on societal connectivity or linkages essential to the functioning of modern society. Importantly, neither netwar nor cyberwar requires the presence of advanced technology. As retired Air Force Lt. Gen. Charles Croom, former director of the Defense Information Systems Agency, stressed: “Information is America’s greatest weapon system.” By targeting equipment, rather than information or perception, Williamson’s model is an ineffective strategy for netwar or cyberwar. Georgia’s ability to communicate despite a botnet attack provides convincing real-world proof.

Hackers indeed temporarily shut down Georgian government IP-based communications capabilities, i.e., Web servers. However, Georgia — undeterred — simply pulled a creative on-the-fly maneuver and relocated its information assets to other IP-based communications channels in the U.S., Poland and Estonia. Hackers employed botnets as a means against Georgia’s computers, but to what end? If the hackers’ goal was to prevent the Georgian government from communicating, they ultimately failed. The botnets did delay Georgian communications, but did not prevent Georgia from getting its message out. The Georgian case confirms Arquilla and Ronfeldt’s prediction that netwar and cyberwar are pre-eminently about targeting and affecting information and perception, rather than computers or network equipment. As Georgia demonstrated, Williamson’s “turn computers into rocks” model failed the real-world test.

Defense in breadth

Williamson justifies his botnet model, in part, on the assertion that it would be part of “layered defense in depth with firewalls, software patches, good information assurance and brilliant defenders.” Williamson assumes that the rest of the Internet community also relies on the defense-in-depth approach. Georgia dismissed this notion.

Contrary to Williamson’s assertion that “the U.S. still needs a layered defense in depth,” the Defense Department and industry are actually moving toward a “defense in breadth” concept. Robert Lentz, the deputy assistant secretary of defense for information and identity assurance, indicates that Pentagon information assurance (IA), which previously relied on a defense-in-depth approach, is now evolving toward the defense-in-breadth construct. Rather than resting on firewalls and software patches, the Defense Department is moving toward integrated, multilayer, multidimensional protection. In fact, the department’s rewrite of its IA policy will refocus the entire IA strategy on the defense-in-breadth approach.

As Dorene Kewley and John Lowry point out in a paper prepared for the Defense Advanced Research Projects Agency, the goal of the defense-in-breadth approach is to “cause attack points to move to more manageable locations where the adversary’s actions can be contained and possibly monitored.” The key term is “move.” Williamson’s defense-in-depth approach is based on the increasingly outdated view of static “point defense” equipment.

Again, Williamson focuses on the fortress mentality solution of “computers on NIPRNET” protected behind firewalls. Georgia’s chosen strategy of defense in breadth focused on using the entire Internet security ecosystem as part of a complete defensive strategy.

Information is becoming highly mobile. Cyberspace’s movement toward cloud computing, mobile code, widgetization and platform as a service brings the entire Internet into the realm of defense. Williamson’s model does not consider that the Internet writ large can itself act as a defense mechanism. By maneuvering to other Internet points, Georgia found locations to better manage its cyberdefense and reduce DDoS vulnerability. Georgia confronted botnet hackers with the daunting challenge of having to simultaneously attack Georgia, the U.S. (Google and Tulip Systems), Poland and Estonia. This strategy ensured that Georgia’s IP-based communications continued nearly unaffected by the botnet attacks. In short, Georgia demonstrated that the defense-in-breadth strategy can defeat Williamson’s botnets.

Deterrence

Contrary to Williamson’s assertion, Georgia’s ability to outmaneuver a DDoS attack shows that botnets are not a valid cyberdeterrent model. Williamson correctly points out that America lacks a credible deterrent in cyberspace. However, the Georgian case illuminates errors in his supposition that the af.mil botnet will “create the deterrent we lack.”

More than a decade ago, Harknett argued that, “Deterrence models developed during the Cold War will provide poor guidance for strategic thinking about” information warfare.

Harknett stressed that a properly designed deterrence strategy will cause the challenger to calculate that the expected benefits of the use of force will be negated by the costs inflicted by the deterrer’s response. As long as the costs associated with a deterrent threat can be viewed by an opponent as contestable to a significant degree, deterrence is unlikely to hold under great stress. Finally, Harknett said, the competitive quest for strategic advantage revolves around shared information and rationality. In order for deterrence to function, the challenger and the deterrer must possess specific shared information about each other and comprehend each other’s rationality. The well-regarded Cyber Conflict Studies Association indicates that to date, there is no compelling evidence that refutes Harknett’s position.

Incomplete or incorrect information about the challenger can lead to insufficient deterrence. Without clear information about the opponent’s capabilities, the deterrence strategy will be vulnerable to the challenger’s countermeasures — weapons, tactics or strategy.

Russian cyberattackers had incomplete information about Georgia’s strategy. The assumed goal of the cyberattackers, as a proxy for Russia, was to communicate to the Georgian government that if Georgia attempted future operations in South Ossetia, the cost would be prohibitive. Georgia’s response to the botnet attack was to maneuver.

Because the hackers did not anticipate Georgia’s actions, they were unable to sufficiently deter Georgia. The Georgian case demonstrates that Williamson’s botnet operators, as the deterrer, cannot know enough about a challenger to successfully raise the deterrent costs high enough. Further, given that no permanent damage was done to Georgia’s cybercapabilities, it is unlikely that the botnet attack will deter Georgia from future operations.

As Ethan Zuckerman of Harvard’s Berkman Center for Internet and Society notes: “In this ‘cyberwar,’ the most serious consequence is that a Web site becomes temporarily inaccessible.” In fact, now that Georgia has survived Russian hacker DDoS attacks, the fear of botnets is unlikely to be a credible deterrent in the calculus of Georgian strategists. Others in the international community, taking note, may reach a similar conclusion. Georgia’s relatively easy counter to the botnet attack demonstrates Harknett’s supposition that Williamson’s botnet strategy remains contestable, and therefore not a credible deterrent.

To promote a decision-making process that leads the challenger to evaluate rationally the consequences of deterrent threats in a way that matches the expectations of the deterrer, uncertainty needs to be reduced. A challenger must be made aware that the deterrent threat, when employed, will inflict severe and specific costs.

In the Georgian case, botnet hackers were unable to influence the rationality of Georgia’s decision-making. Because the hackers targeted Georgia’s Web servers rather than information, Georgia was undeterred. Georgia moved its assets to alternate sites and continued communicating. Russian hackers never followed through on larger threats to Georgia’s Internet capabilities, and did not attack Georgian assets after relocation to the U.S., Poland and Estonia.

Lowry said coercion can only work when the defender has no other rational or cost-effective option but to submit. Consequently, adaptability of the defender is key. Clearly, when the defender can adapt with relatively little cost, coercion fails. This is precisely why Williamson’s rationale of the botnet as a deterrent failed in the Georgian case. Georgia adapted with little coercive penalty. The Georgian case certainly appears to support Harknett’s thesis, refuting Williamson’s Cold War-based “cyber carpet bombing” model. In fairness, the Georgian event will not deter future opponents from attacking the U.S. However, an opponent, noting Georgia’s strategy, could easily maneuver to avoid Williamson’s attempt to botnet them back to the cyber-stone age.

In summary, Williamson offers the “NIPRNET botnet” to confront adversary botnet attacks against the U.S. military. Georgian strategists chose a different course — maneuver. Georgia’s response to the August DDoS attacks provides evidence of alternative strategies to counter botnets. Georgia’s actions are not Williamson’s easily dismissed “parade of horribles”; rather, they are the glare of concrete cyber-reality. In the end, Williamson’s proposal contains two fatal mistakes: fighting the last war and countering with a symmetrical response. Sun Tzu admonishes strategists to “not repeat the tactics which have gained you one victory, but let your methods be regulated by the infinite variety of circumstances.” Russian hackers apparently aren’t reading Sun Tzu, because in succession they have attacked Estonia, Lithuania and now Georgia. Hackers, and Williamson, are fighting the last war.

Admittedly, a military DDoS tool could be of limited value in the near term for certain cyberoperations, such as temporarily denying connectivity to an individual target. However, DDoS and botnets are no longer asymmetrical; they are expected. Vint Cerf, regarded as the “father of the Internet,” recently compared the spread of botnets to a pandemic that “could undermine the Internet’s future.” In reaction, industry heavyweights associated with the International Botnet Task Force are investing in solutions to defeat botnets. In addition, ongoing research at the Internet Engineering Task Force, DARPA’s Strategic Technologies Office and the National Science Foundation has the combined potential to render botnet attacks ineffective or eliminate them altogether. These efforts afford the Defense Department the opportunity to develop creative cyberdefense strategies from among numerous alternatives, rather than be trapped in Williamson’s mirror-imaging botnet solution.

Georgia’s strategic cybermaneuver and ability to operate in the face of massive DDoS traffic raise serious doubts regarding Williamson’s theory that a NIPRNET botnet would provide effective cyberdeterrence. Given this, it would appear challenging for a future Air Force or defense secretary to advocate on Capitol Hill for appropriations to fund Williamson’s botnet strategy, and unlikely that Congress would authorize the program. Cyberexperts suspect that certain nation-states have far more sinister cybercapabilities than botnets.

Therefore, the Pentagon would be wise to focus instead on Williamson’s well-stated, valid point of improving our cyberdefense, while ensuring our cyberarmory is filled with other, more effective cyberoptions to create the desired effects. Williamson proposes: “We want potential adversaries to know this [botnet] capability works and will be used when needed.” Georgia demonstrated that a botnet capability does not work well against an adaptive cyberopponent, and thus is not a strategy the Pentagon should pursue.

Col. Stephen W. Korns is vice director of strategy, plans, policy and international relations at the Joint Task Force-Global Network Operations. He assists in development of cyber policy and strategy for operations and defense of Defense Department networks, including the Global Information Grid.