July 1, 2012  

How to build good stuff

Five rules for a sensible approach to acquisition

It probably goes without saying that the defense acquisition community wants to build “stuff that works good.” For that matter, we generally want our stuff to be better than anyone else’s stuff. I want my gun, for example, to have better range, accuracy and reliability than the guy who’s shooting back at me. And since I have to carry the thing around, I wouldn’t mind if it was lighter, too. But mostly, I want to make sure my buddies and I have them when we need them.

Despite a universal desire to build good stuff, military technology programs often leave much to be desired — or as Gen. Martin Dempsey put it when he was Army chief of staff, our formal acquisition process performs “not so well” compared with faster, cheaper alternative methods.

What’s the problem here? I suspect many acquisition problems are rooted not primarily in bad processes, but in bad definitions of “good.” You see, the way we define a program’s goals has a big impact on how we approach those goals — or whether we approach them at all. The way we define good also influences our process design, but the process itself — cumbersome and ineffective though it may be — is more symptom than cause. The root issue goes much deeper.

For example, if we think the sign of a good weapon system is that it has lots of features, carries a big price tag and takes a long time to produce, we’re going to make certain decisions and take certain actions. It doesn’t matter how streamlined the process is. We’ll still figure out a way to spend lots of time and money in our quest to add lots of capabilities.

The result of this mentality tends to be stuff that’s bloated, broken, unaffordable, operationally irrelevant and technically obsolete. Or, as a 2011 Harvard Business School report on acquisition reform summarized things, such systems will “require more than 15 years to deliver less capability than planned, often at two to three times the planned cost.” That’s not a very good outcome.

Let’s look at an example. Based on my time in Afghanistan, I can attest that I don’t need a billion-dollar, 700-pound rifle with an integrated espresso maker — particularly if I’d have to wait 30 years to get it. Such a weapon may have more features and a bigger price tag than the M4 I actually was issued, but it wouldn’t be better. Unless, of course, it makes really, really good espresso, with the foam just right. In that case, it’s definitely worth a billion dollars. I’ll take two.

Kidding! If I had two Future Airman Rifle Combat Espresso weapons (with optional M-829 Joint Milk Frother/Night Vision Scope), I’d be constantly wired and would never get any sleep. One of those would be more than enough. Much more.

Of course, there is no 30-year, billion-dollar acquisition program to build a coffee maker/rifle combo. That would be silly. The point is that adding features does not necessarily lead to superior gear. We’re almost always better off exercising restraint, constraining the budget and keeping the schedule as short as possible.

So, what exactly constitutes a good system, if not a huge feature set and a price tag to match? How can we distinguish between alternatives and pick the best new gear? Here are five comparatives to consider.

1. Real Beats Hypothetical

In a fight between me and Batman, I always win, even without my espresso-rifle. What’s that, you say? You think Batman could beat me? Not a chance. Here’s why: Actual capabilities beat imaginary ones, every single time, and unlike Batman, I’m real. That means I would also beat starship captains Kirk and Picard in a fight — at the same time. Guess what: You’d beat them, too.

That sounds obvious, I know, but bear it in mind the next time you hear someone talking about the superior performance of the next-gen system they’re planning to deliver in 10 years, just as soon as someone funds it, designs it, builds it, tests it — you get the picture. That hypothetical system is not nearly as good as a real one.

From a capability perspective, there’s a huge temptation to focus on what a recently developed (or soon-to-be developed) system will do rather than on what it actually does do. This temptation particularly pops up in demonstrations, where the presenter says things like, “The next version will …” Any conversation involving what the thing “will do” is dealing with hypotheticals. What we need is a demonstration of what it “does do.” There’s a big difference.

That’s why I’m a huge fan of the fly-before-you-buy approach, but even there, we aren’t free of the danger of buying “this” because the system’s advocates promise that someday it’ll do “that.”

If you’re ever in that position, please don’t get taken in. And if you’re the one doing the demo, make sure you focus on what the thing actually does. If you don’t, I just might have to fight you. Nobody wants that.

2. Now Beats Later

It is entirely possible for a capability to be real but still unavailable on an operationally meaningful timeline, for any number of reasons. Maybe there are delays in production or delivery, maybe we haven’t figured out how to integrate it into the relevant environment, maybe there are some testing hiccups or maybe we just can’t afford it right now.

In those cases, the capability’s reality is undermined by its unavailability.

Regardless of the reason for the delay, the further a capability’s delivery gets pushed to the right, the less that capability is worth. I’m not saying a future capability is worthless, just that a current ability to do something is better than a future ability to do it. Heck, a current ability to do something could even be better than a future ability to do more, largely because of the uncertainty surrounding any future capability (but also because schedule delays are evil).

Don’t worry: This does not mean we should sacrifice all long-term research projects in favor of short-term developments. I’ve spent much of my career in a research lab, and I know full well that all of those new technologies have to come from somewhere. Since breakthroughs in basic science are notoriously difficult to schedule in advance or to do as quickly as we’d like, we should focus our system development efforts on solving engineering problems rather than science problems.

Keep in mind there is nothing shortsighted about establishing and maintaining a capacity to rapidly respond to operational needs, particularly if this ability is accompanied by a continuing effort to make additional improvements to whatever we field, and a commitment to modular designs and shared interface standards. Stated another way, a tactical ability to rapidly field new capabilities is itself a strategic capability.

Yes, maintaining such a capability requires a solid tech base to draw from. One more time — the DoD doesn’t need to get out of the long-term research business. But there should be a clear distinction between building new systems (out of mature technologies) and pursuing new technology breakthroughs, which then feed into new systems. Both can be pursued with speed in mind, but we have a lot more potential for control and predictability over system development when we stick to short timelines and available components.

3. Simple Beats Complex

Complexity is often viewed as a sign of sophistication — an unfortunate and counterproductive design bias. The truth is, excessive complexity has a negative impact on a technical system. For starters, complexity reduces reliability, largely by increasing the number of possible failure modes. The more pieces a thing has, the more ways it can break, and the harder it is to diagnose and repair. So, simpler equals more reliable, all else being equal.
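To put rough numbers on the reliability point, consider a textbook back-of-the-envelope model (a standard series-system approximation, assuming independent parts that all must work — not data from any actual program): if each of a system’s n parts works with reliability r, the whole thing works with reliability of roughly r raised to the nth power. At r = 0.99, ten parts give 0.99^10 ≈ 0.90, while a hundred parts give 0.99^100 ≈ 0.37. The model is crude, but the direction is unmistakable: every part you add multiplies in another chance to fail.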

Simplicity also tends to reduce the production and maintenance costs of the thing, allowing us to buy and operate more of them for the same amount of money.

But an acquisition program isn’t just made of tech — it’s also charts and procedures and meetings and integrated product teams. As you might expect, complexity reaches its slimy tentacles into every nook and cranny. It infests our processes, organizations and communications — including our briefing charts. It’s scary to think we might build systems the same way we build PowerPoint presentations. There are clearly plenty of opportunities to simplify across the entire spectrum of acquisition decision making.

However, as long as we treat complexity as both desirable and inevitable, we’ll never even make the attempt to follow Thoreau’s famous advice and “simplify, simplify.” Once we decide to reduce complexity, we can turn to all sorts of tools and techniques (check out TRIZ and the Simplicity Cycle for starters). But the first step is to understand that simple beats complex.

4. Interim Beats Future

Not long ago, in an age that was perhaps too enamored with the appearance of progress and futuristic-liciousness, it was fashionable to put the word “future” into the name of a new project (e.g., the Future Imagery Architecture, Future Combat Systems). Turns out, any project named the Future System is doomed to fail (FIA and FCS were both canceled).

In contrast, one of the first projects I ever worked on was called the Interim Intel Feed, or IIF for short. It was an inexpensive stopgap solution we quickly put in place while we waited for the long-delayed, rather expensive “real” system to be developed, tested and delivered.

You can probably guess the rest of the story. After countless delays, the so-called real system was never delivered, and the interim solution went on to be used far longer than anyone ever anticipated. Why? Because it worked and the users liked it. That pattern has been repeated countless times in the years since.

This isn’t really about the way we name things. It’s about what the name represents. When we call something a Future System, what are we saying about the plans to deliver it? Aren’t we saying we’ll deliver it “in the future,” even though we mere mortals are persistently stuck in the present? And aren’t we counting on currently unavailable Future Tech and Future Developments? Surely the Future System wouldn’t be built out of Today’s Technology, right? That wouldn’t be very futuristic-ish.

Psychologically, labeling something a Future System reduces the time pressure to deliver, because it’s not the future yet, so we clearly aren’t late yet. It also fosters a tendency to overreach, over-promise and over-engineer, to the detriment of anyone who actually needs to use the thing.

And then there’s the question of what to call the Future System once it goes operational. True, we seem to have avoided this dilemma so far by never actually delivering any Future Systems, but once it’s in actual use, it’s not a Future System anymore, right? It’s a current system with a funny name.

Thankfully, we don’t see very many programs trying to build Future Systems, but the remnants of the Future mentality persist. We really are better off building interim solutions instead. Because they work. And they’re not expensive. And the users like them. That’s got to count for something.

5. What You Do Beats How You Do It

For all the talk about the importance and value of good processes, I’d like to reiterate that process is a symptom, not the problem. A bad process can get in the way, a great process can help a lot, but a great product is what matters. We should direct our focus accordingly.

Yes, there is such a thing as a good process, and therefore, there also is such a thing as a bad process. But the ultimate indicator of process quality has little to do with efficient execution and everything to do with the product itself. Thus, the best processes are product-centered.

Unfortunately, all too often the Process Improvement Mafia directs its considerable energy toward “how we do stuff” while largely neglecting the question of “what we’re doing in the first place.”

Just as complexity advocates take massive comfort in the sheer complexity of their technology, tools and processes while failing to consider performance-oriented measures like usability or reliability, process mavens sleep well at night because their process is so efficient, regardless of whether the product is effective.

As Eli Goldratt pointed out in his book “The Goal,” if a factory’s manufacturing process efficiently delivers quality products but doesn’t make a profit, that’s a major failure, not something to be proud of. It’s entirely possible to efficiently produce a whole series of overly complex, irrelevant and unnecessary pieces of military gear. Hooray for process.

Similarly, if the process looks stupid and inefficient but delivers good stuff, it isn’t stupid. If the process is poorly documented but consistently produces what we need, that might not be so bad. And despite what you may have heard, such outcomes are entirely possible.

Technique and efficiency are great, but at the end of the day, what you accomplished is more important than how you did it.

Those of us in the business of spending tax dollars to build new gear for life-and-death missions have a special obligation to make sure we deliver good stuff. Accordingly, it is incumbent upon us — government and industry alike — to have a solid grasp of what exactly constitutes a good system. If we’re operating under the faulty assumptions that complexity is a sign of sophistication or that spending lots of time and money increases a system’s quality, we’re going to fail — hard. Same goes for getting distracted by imaginary future capabilities or an excessive emphasis on process over product. Far better to focus on delivering real capabilities over short timelines.

On that note, I think I’ll go get an espresso. Afterwards, I just might pick a fight with Batman, just ’cause I can.

Lt. Col. Dan Ward is an Air Force acquisition officer currently deployed to the International Security Assistance Force headquarters in Kabul, Afghanistan. The views expressed in this article are solely those of the author and do not reflect the official policy or position of the Air Force or the Department of Defense.