Features

February 1, 2011  

Controlling armed robots

I AM an engineering psychologist in an Army organization that is at the forefront of research, development, and applications work on military robotics and automated battle command systems. Hence, I read Paul Scharre’s letter to the editor in the November issue [“Appropriate autonomy”] with some interest. I retrieved the article that prompted it in the July/August issue [“Robot revolution”]. As a point of interest, I was the technical lead on an Army effort looking at human performance contributors to the fratricides by the Patriot air and missile defense system during Operation Iraqi Freedom. These were among the “inappropriate engagements” noted by Scharre.

After reading the article, I agree with Scharre’s characterization of those authors’ discussion of autonomous modes of operation for armed robotic systems as “overly simplistic.” Admittedly, a more detailed discussion of the ins and outs of effective control for armed robotic systems in combat operations was probably beyond the scope of the article by Col. Christopher Carlile and retired Lt. Col. Glenn Rizzi. I have observed, however, that their rather superficial discussion of the topic of effective control of such systems is the norm among military decision makers.

From my perspective, after having worked on systems like Patriot for more than 30 years, the central operational issue in using armed robotic systems in combat is balancing autonomy with effective human control. P.W. Singer, in his recent book “Wired for War,” refers to the topic of effective control as the “Issue-That-Must-Not-Be-Discussed.”

The reality associated with greater autonomy on the part of armed robotic systems is that there will likely be many more “oops moments” (Singer’s term) than are politically or operationally tolerable. Our assessment of the contributors to the Patriot fratricides indicates that those incidents were examples of “oops moments” on the part of an armed robotic system used in automatic mode. If the past is any indicator of the future, such incidents will result in initial “surprise” and “shock” on the part of the leadership that these advanced systems behaved this way, followed by the imposition of restrictive rules of engagement that effectively take the offending system out of the fight.

I support the view implied in Scharre’s letter that we need a more realistic, nuanced assessment by those in policymaking jobs of the potential problems associated with the use of armed, autonomous robotic systems in actual combat. The British cognitive and computer scientist Noel Sharkey cautions that with respect to the widespread use of such systems, “We are sleepwalking into a brave new world.”

Scharre is correct in his assertion that armed robotic systems have been and will continue to be fielded. They will be allowed to operate autonomously. Oops moments will occur. And unpleasant fallout and scapegoating will take place in the aftermath of such incidents. For example, in the early days of Patriot (circa 1985), the “official” position was that the system would not be used in automatic mode. That view lasted until Operation Desert Storm, when tactical ballistic-missile engagements opened the door for automatic engagement operations. The view that the system could be used safely in automatic mode against tactical ballistic missiles and related threats persisted through Iraqi Freedom and the fratricides, in spite of test results indicating that track classification was not always accurate.

After 25 years of experience, the Army is backing away (albeit slowly) from using the Patriot system in fully automatic mode. That move should not go unnoticed by potential users of armed autonomous systems in other areas.

The issue of control in accord with human intent, versus the illusion of control, is complex and will not be easily resolved. Software glitches aside, oops moments will mostly result from what has been termed the “brittleness problem of automata”: an inability to reliably handle unusual or ambiguous situations. That was the core issue with Patriot, exacerbated by inadequate crew training. Scharre is correct in his assertion that “appropriate autonomy” and “adequate safeguards” for armed, autonomous robotic systems will be a “significant challenge for the future.”

— John K. Hawley, U.S. Army Research Laboratory, Human Research and Engineering Directorate, Fort Bliss Field Element, Fort Bliss, Texas