Features

April 1, 2012  

Evaluating humanitarian ops

A modern system of measurements would boost effectiveness

“It’s déjà vu all over again” — the clever aphorism from philosopher and baseball legend Yogi Berra — comes to my mind nearly every time I hear or read a report about a military humanitarian operation.

In the 1980s, I commanded a portable hospital at Clark Air Base in the Philippines. My unit deployed to a rural area of Luzon with a list of wartime tasks to practice, and we treated indigent host-nation citizens as part of that training. Our care produced no capacity building and no lasting public health impact. There was little coordination with host-nation stakeholders before the mission, and none afterward, so we missed a chance to learn about unintended consequences or even antipathy that we may have inadvertently created. The essay-style “after-action report” of our mission did not give our headquarters leadership and planners the ability to determine the relative value of our work, compared with other humanitarian missions they supported. In fact, we were judged largely by how many patients we saw and whether we stayed within budget.

Today’s missions still fit all these descriptions, so Berra’s quote is appropriate.

As a senior medical planner in the later years of my Air Force career, I realized that nonmilitary humanitarian groups — the U.S. Agency for International Development (USAID) and the nongovernmental organizations (NGOs) — were moving beyond the evaluation paradigm I described. They were focused on creating a sustainable impact, giving program ownership to the host nation (“owner-driven, not donor-driven”), and using ranked outcomes of missions to reshape future budgets for a better return on investment. Their donors expected accountable, effective use of resources, so the NGOs banded together into several consortia to pool their efforts to improve evaluation of their work. These consortia set consensus performance and effectiveness standards. They moved beyond simple indicators, like dollars spent or patients seen, to evaluating outcomes for sustainability, stakeholder ownership and relative return on investment.

The Defense Department has a large humanitarian program in each regional area of focus. “Hearts and minds” and “photo ops” remain important components of this effort, as they should. Many missions are launched based on the success and relationship priorities in the Theater Security Cooperation Plan, and thus conform to strategic security and diplomatic goals. Yet in many ways, DoD has not moved beyond simplistic Cold War evaluation techniques. To do so could bring immediate, additional value to nonkinetic programs. I believe this can be done in the current fiscal year, without new authorizations, appropriations or agencies. Although my own humanitarian experience, both civil and military, is confined largely to medical work, the modern methods of humanitarian mission evaluation can also apply to broader military humanitarian programs, such as civil engineering and civil affairs.

Three Steps

I recommend three essential steps toward the creation of a modern evaluation program for DoD. First is the rigorous measurement of long-term attributable impact. For most missions today, the mission commander writes a narrative after-action report soon after redeploying, much like I did at Clark 25 years ago. While regional headquarters staffers may hear further news from the military group at the U.S. embassy, this anecdotal evidence rarely gets back to the mission commander or into the report. Even if it does, the essay-style format of the report makes it difficult to search for lessons learned. There is little effort to archive the reports in a useful way. And there is rarely any longer-term feedback of positive or negative results that came from the partner nation after redeployment.

USAID and NGOs often set aside a small portion of the mission budget (typically 3 percent to 5 percent) for the long-term impact evaluation, to be done months after the mission is finished. A qualified individual or small team returns to the deployment site and gathers outcomes from as many stakeholders as possible. With this important information, the final report is then completed.

A well-organized mission is planned with an eye toward outcomes that can be measured and entered into a searchable database. Outcomes are more than simple metrics, like dollars spent or patients seen. Attributable public health improvement is an outcome. For example, if the mission team immunized schoolchildren, did disease rates go down? If a new well was dug to provide cleaner water, did diarrhea rates decrease?

Host-nation capacity building is an outcome. A decade ago, I was a member of the U.S. military teaching team in an exercise-based disaster-response course in El Salvador. After we redeployed, the host nation held an unprecedented civil-military disaster exercise based on our curriculum. When a severe earthquake struck a few months later, our students and their colleagues saved lives with the coordination, communications and disaster-response skills created by the two exercises. And no emergency U.S. resources were needed for casualty treatment.

The second essential step is engaging the proper host-nation stakeholders. Some host-nation input has been part of bilateral and multilateral military exercise planning since before my time in service. The variety and scope of today’s stakeholder group, however, are too limited, and expanding this group would bring immediate value to DoD’s humanitarian work. In both deliberate exercises and disaster response, there are key host-nation groups that can bring additional insights to the mission, both in the planning stages and after the event, when the dust has settled. Ministry-level stakeholders may get their say, but regional and local stakeholders may not. Polling a large variety of the focus population can bring rewards in lessons learned.

The third essential step is to determine the relative value of most or all of the humanitarian missions in a group, to allow comparisons and future-year resource prioritization. Missions share many common features, such as coordination and planning tasks, achievement of desired end states and outcomes, and completion of required military training. These can be delineated and judged by a variety of stakeholders, resulting in a “score” for the mission and a relative ranking among all scored missions. This ranking of return on investment, together with other strategic and political considerations, can be part of the equation used to guide leadership and planners in future years.

The Defense Security Cooperation Agency (DSCA) has fiscal oversight of Title 10 humanitarian programs. Its administrative tracking software, Overseas Humanitarian Assistance Shared Information System (OHASIS), was recently upgraded to allow some outcome data and begin to answer the needs I cite above. Studies of OHASIS data have shown that there is little serious evaluation or effort to apply lessons to future programs.

DoD could expect several immediate benefits from a more scientific evaluation program. First, a modern, rigorous evaluation program could identify the unintended consequences of humanitarian efforts. For example, local health care providers may be discredited or have their patient base displaced by a transient U.S. medical team. Some patients may resent the triage that led to care for their neighbor’s child but none for theirs. There may be complications of U.S. treatment that do not emerge until the U.S. team has redeployed. When mission results seem positive or negative, our host-nation friends can help our teams understand the subtle external factors that contribute to perceived results. Under the current system of reporting mission activity, some of these lessons are lost. Subsequent missions may repeat the same mistakes.

Second, building host-nation capacity can create regional synergy that reduces DoD requirements and costs. The exemplary legacy of the El Salvador disaster-response course can be a much more common result of DoD humanitarian programs.

Finally, the ranking of relative value of missions within each theater or worldwide would allow leaders and planners to better prioritize resources for subsequent fiscal years. While short-term strategic and political goals are usually more important than demonstrating sustainable impact or stakeholder ownership, DoD can have both in most circumstances, and the latter may ultimately provide more mutual security than the former in some cases.

In summary, DoD humanitarian programs bring political and public relations value to our regional and national security efforts. They can do much more, with few additional resources. Three simple evaluation methods can be implemented during the current fiscal year without additional authorizations, appropriations or agencies, and DoD can get great added value for its humanitarian relief efforts in short order.

Our philosopher-athlete, Yogi Berra, also said, “If you come to a fork in the road, take it.” Now is such an opportunity for DoD. We should take it.

Dr. Stephen G. Waller, a retired Air Force colonel, is an associate professor in the Department of Preventive Medicine and Biometrics at the Uniformed Services University of the Health Sciences in Bethesda, Md.