Features

May 1, 2010  

Nearer the holodeck

Virtual environments replicate our complex wars

It is April 2009. Not far from Los Angeles International Airport, I am in the giant Hughes Aircraft hangar, which once housed the Spruce Goose and is now home to the crew and heavy-breathing computing power of James Cameron’s virtual movie set. Holding a large flat-panel computer screen in front of me, I step forward into a virtual world imagined by Cameron more than a decade ago and now brought to life by Peter Jackson’s Weta Digital, Glenn Derry, Vince Pace and the rest of the crew of the movie “Avatar.”

As I walk through the football-field-sized space that Cameron calls “the volume,” my point of view on the screen moves with me, as if I were carrying a window through which I view this fantastic world. Huge ships come in to land on the lush Pandora moon, while soldiers in mechanized exoskeletons (called AMP suits) move past without acknowledging my presence, their eyes fixed on the technicolor jungle beyond. In the distance I see Na’vi avatars walking. Is that avatar Sigourney Weaver’s?

This technology is reshaping the fundamental interactions between humans and the worlds — real and virtual — in which we exist. The motion-capture systems, stereoscopic 3-D cameras and virtual world compositing techniques in this hangar will transform how human capabilities evolve in the information age.

When I walk back to the live set where Stephen Lang is filming his epic battle, things look a bit more familiar. There is the actor in his large exoskeleton prop, camera pointing at him and green screen behind, gaffers adjusting the lights. Derry and his team of computer gaming programmers are monitoring and controlling the real-time 3-D environment, and that looks comfortably familiar to my experienced game developer gaze. But when the action starts, Cameron becomes electrified, and I realize that something historic is happening. He directs the action, monitors the camera shot, views the stereoscopic composite image and, afterward, gives direction to the real-time 3-D team to alter the world in small ways to set up for the next shot.

The creative control afforded by real and virtual worlds composited together in high-fidelity stereoscopic 3-D makes the mind reel. Cameron can have any shot he wishes from any angle, either now or in post-production, whether the actors are available or not.

To meet the challenge that U.S. Joint Forces Command commander Marine Gen. James M. Mattis laid before the entire defense industry at the I/ITSEC 2009 conference in Orlando, Fla., in December, we will need to do the opposite. We will need to composite live humans into convincing, compelling virtual environments to improve their performance in complex situations in the real world.

Success or failure against the threats we now face will not be determined by firepower, but by decisions made on the ground by small groups of men and women. Their training and situational understanding are paramount to our success and are our only hope for a secure world.

“We need a giant leap forward in our simulated training environment for small units in ground combat … to replicate to the degree practical using modern simulation, combat scenarios that will test our small units,” Mattis said.

Army Gen. Stanley McChrystal, commander of U.S. forces in Afghanistan, has also provided guidance to our forces to make it clear that we are not there to simply pull triggers. The skills required for success are complex and require the ability to quickly assess information and act accordingly.

AN EXPONENTIAL AGE

We are in an exponential age, an age of increasing complexity and uncertainty. Aided by advancing computer power, we humans are creating systems thousands of times more complex than we can comprehend, with emergent behavior we cannot anticipate. It should not be surprising in an interconnected world of such complexity to see events frequently spin out of our control and overwhelm us — to see “Black Swans” and unanticipated events appear more frequently. Grains of sand cause avalanches. Butterflies flap their wings and unleash hurricanes on distant shores. Financial markets crash and digital viruses wrack the Internet while biological viruses take wing with air travel to sweep through entire populations. On the network-centric battlefield and in the technology-laden hospital operating room, we are asking humans to adapt to enormous complexity and perform flawlessly where mistakes lead to death.

Once, we would take 18-year-olds from the cornfields of Nebraska or the tobacco fields of North Carolina and clothe, equip and train them to fight and follow orders; today, we drop them into Afghanistan or Haiti and ask them to perform complex missions requiring delicately nuanced understanding of the culture, language and disposition of local populations. Those who command these young soldiers are no longer merely planners of straightforward logistics, maneuver and concentration of firepower. Now they must consider the so-called DIME (diplomatic, informational, military and economic) and PMESII (political, military, economic, social, infrastructure and information systems) effects of every action while attempting to protect a mistrustful foreign population in the face of a vicious and adaptive enemy. How do we prepare our leadership and soldiers for the complex missions of the future?

In his 2009 book, “The Age of the Unthinkable,” Joshua Cooper Ramo argues that humanity’s best bet is to harness the same digital tools that are remaking the world to gain whatever advantage in preparation or comprehension we can glean. As witnessed in recent computer games and Cameron’s “Avatar” virtual world, computer simulations permit us to create increasingly convincing environments and characters. New ways to create content and to interface with simulations are removing cost and technical barriers while also making our interaction with these worlds more natural.

In the defense industry, we have decades of experience with flight simulation. We know that if we ask a human to perform a complex task in a high-risk environment using very expensive and complicated technology, the best way to achieve the shortest path to mastery is to provide that human with a high-fidelity, safe practice environment. Simulated safe practice environments permit humans to try things such as landing an airplane on a river, or to respond to wind shear, engine flameouts and other emergencies that are too expensive or dangerous to practice in a real aircraft.

We have now learned that this same safe practice experience of “failing forward” provides dramatic opportunities for performance improvement in a wide spectrum of human endeavor, from long-haul trucking to setting up communications equipment, from managing a fast-food restaurant to anesthesiology. A shorter path to mastery is achieved through practicing encounters with every conceivable situation from a wide variety of perspectives.

It would be a fairly straightforward process, unhampered by any technical obstacle, to create vibrant, high-fidelity, safe practice virtual environments modeled on every potential area of interest and hot spot around the globe. Afghan villages, Haitian cities and Sudanese townships could all be modeled with great physical accuracy and populated with real and computer-generated citizens driven by ever-advancing artificial intelligence who will go about their lives and interact with trainees with increasingly complex and realistic dialogue.

Imagine that we had created a high-fidelity model of Port-au-Prince, Haiti, prior to the earthquake and that a tiger team of U.S. Agency for International Development, National Guard and other personnel had spent a year training in the environment. Our previously novice Nebraskans and North Carolinians could arrive in Port-au-Prince already familiar with the physical layout of the city; they would be comfortable with the cultural patterns, traditions and language of the people, and they would have already practiced setting up field hospitals, generators and supply lines. They would reach mastery more quickly, allowing them to perform better alongside seasoned experts.

In his 2008 book, “Outliers,” Malcolm Gladwell repeatedly returns to the “10,000-hour rule,” derived from the research of Anders Ericsson, which holds that achieving expertise in any field requires 10,000 hours of practice. A 2008 I/ITSEC paper by Paul Roman describes a study demonstrating that simulation may shorten training times by as much as 50 percent while also improving effectiveness. This raises the question: Could more advanced simulation dramatically reduce the time it takes to create experts in small-unit operations around the world?

HOLOGRAPHIC ENVIRONMENT

The concept of the holodeck, the holographic virtual environment used for education and recreation in the “Star Trek: The Next Generation” television series, has been part of the human imagination for decades. More than a half-century ago, Ray Bradbury conjured the ability to create objects with thought in “The Illustrated Man.” In the 1960s, computer scientist Ivan Sutherland dreamed of making a computer that could control the existence of matter, and the “X-Men” comic book series described “the Danger Room,” where heroes could engage in realistic mission rehearsal in computer-generated environments.

In just the last year, we have seen rapid advances in technologies that, collectively organized in an imaginative way, could very well provide our dismounted soldiers with the equivalent of a flight simulator for ground forces. At Microsoft, Project Natal is a breakthrough capability engineered to create a better home gaming experience: its sensors watch, listen to and track the player, allowing a more natural interface free from keyboards and game controllers. The result holds great promise for the future of human-to-human and human-to-machine interaction. Add this to the large-scale tracking done by the technology team in the Hughes Aircraft hangar “volume” for “Avatar,” and we now have the ability to take a unit of 10 people through their paces in a large space, track them accurately and composite them into a virtual environment. We will also be able to include vehicles and aircraft by compositing in other simulators, such as the close-combat convoy trainer.

Rapid advances in artificial intelligence, rendering, computing power and networking are likewise bringing the possibility of a holodecklike experience ever closer.

I predict that we will be able to afford our ground units this capability within the next five years.

RICHARD BOYD is the director of 3D Learning Solutions and chief architect at Lockheed Martin Simulation, Training & Support’s Virtual World Labs.