Squeezing More Value From Test



Even in the current era of record defense budgets, quickly delivering lethal, effective weapon systems remains an elusive goal in the Defense Acquisition System, especially for major programs. Congress, the Office of the Secretary of Defense and the Services continue to enact laws and publish policies to field capabilities faster. Examples include Joint and Service urgent needs, Fiscal Year (FY) 2016 National Defense Authorization Act Section 804 rapid fielding and prototyping authorities, and streamlined decision chains such as those in the 2003 Air Force Rapid Capabilities Office and 2018 Air Force rapid procurement charters. All of these initiatives seem to assist most at program inception, as they guide structuring the acquisition. For those programs already under way, however, these initiatives may not provide much help.

Verification Chips on the Table

Between 1990 and 2010, testing comprised roughly 50 percent of aircraft platform acquisition schedules and a whopping 80 percent of program schedules for weapons. These figures indicate verification programs are fertile ground for Service leadership to influence both the timing and quality of new capabilities delivered through acquisitions. This influence, which traditionally manifests in the form of trade-offs, may be applied by shining a light on test program content in a senior leader forum.
In fact, an area of strategic interest to program executive officers (PEOs) is the system verification campaign in Major Defense Acquisition Programs. This group voiced its opinion in “Improving Acquisition from Within—Suggestions From Our PEOs,” an article in Defense AT&L magazine (the predecessor to this publication):

Configuration Steering Boards (CSBs) and Testing: CSBs have been especially helpful in adjusting requirements (both to provide a forum for the deliberate addition of some requirements as well as removing some requirements where they don’t make sense). This process should be extended to include using the CSB process to adjust test plans and requirements as well rather than allowing independent members of the test community virtually unlimited authority to commit programs to cost and schedule of tests that the operational leaders of the Service do not believe are warranted. Similarly, it would provide a forum for those same uniformed leaders to insist on testing that might otherwise be overlooked.

Trading Test Activities for Program Schedule

There are several ways programs cut testing to meet schedule. With their backs to the wall, program managers may be tempted to simply cut the tail end of testing to meet a fielding milestone date. Test campaign cuts may also be made with an eye on systems engineering criteria such as risk to achieve a subset of specifications. They may even go one step further and include consideration for “big R” user requirements in Joint Requirements Oversight Council or Service-level capabilities documents. 
But these types of traditional systems engineering criteria may not be relevant to the tactical user. Corner-case negative thermal margins in the guidance and control section of a munition may be an issue for a systems engineer, but for the pilot who rarely, if ever, would employ the munition under tactical conditions producing that thermal environment, it would be imprudent to chase this specification at the cost of a delay to Initial Operational Capability. In general, this type of behavior can be self-defeating for Department of Defense (DoD) acquisitions writ large.

And traditional systems engineering is reluctant to revisit the type of testing initially proposed by the developer and accepted by the government. As a result, decisions on whether to cut testing often come down to a binary choice, with no middle ground considered along the continuum of verification fidelity.
This is not to say that operational relevance and verification fidelity of a requirement are not in the cross-check of systems engineers or program managers. But our acquisition tool box may not provide a ubiquitous way of conveying this information in decisional meetings.

A Top-Level Focus

In my experiences as a test director, test manager and program manager on five ACAT I programs and one ACAT II program, I have observed a common but informal practice of viewing verification programs through the lenses of operational relevance and test fidelity. These programs attempted to find opportunities to skinny down schedules and deliver a slice of essential capabilities to the field as fast as possible. These efforts always occurred after the acquisition strategy was approved—during execution in Engineering and Manufacturing Development (EMD) when annual CSBs typically would be required for an ACAT I program. I believe that this observed practice aligns with the PEOs’ consensus of how test programs should be managed, and it should be considered for elevation to the CSB level. 

Why these two lenses? Operational relevance and test fidelity cut directly to the ultimate purpose of verification in a program; they answer the fundamental leadership questions of what’s needed and how we know it works. This is especially true after an acquisition strategy has been approved and program execution inflicts delays and adds risk to schedule milestones. With these answers in hand, programs can continue along a line of inquiry to help guide adjustments to the verification program to achieve acquisition goals. Finally, I believe that these two views of a verification program enable the CSB to better see if tests are warranted or overlooked, as recognized by the PEOs.

Operational relevance from the perspective of a verification program involves threats—both naturally occurring and developed by adversaries—faced by a weapon system conducting a particular mission or mission set. Depending on how the system is used, the weapon may or may not face all the anticipated threats it was designed to withstand or counter. When delivery schedule is paramount, it may be in the best interest of the Service and the Department of Defense to qualify and test a weapon system for the most urgent subset of missions that expose the item to limited natural environments and adversary threats. 

As a tangible example, consider a joint air-launched missile intended for employment by both carrier-capable and conventional runway aircraft designed to counter current and future high-end threats (one type of missile to do nearly everything). Imagine a technical risk arises from setting such a robust employment requirement. This issue leads to a redesign and requalification of an internal isolation structure for a flight computer to mitigate the repeated shock impulses of carrier catapults and landings (cats and traps). Furthermore, the latest Validated Online Life-cycle Threat (VOLT) projects a delay in a next-generation adversary capability, allowing the program to delay verification of an advanced self-protection countermeasure. These two events—a schedule setback and threat relief possibility—provide opportunities to adjust the verification program schedule, albeit with implications to the acquisition strategy’s capabilities fielding date. This is the type of situation the CSB is meant to address. 

Test fidelity is akin to the verification method, except it is richer in detail. It affords an opportunity to more precisely tailor verification needs to what the stakeholders value as essential to a particular mission or mission set. Exactly this level of insight is needed to provide the 360-degree awareness enabling more sophisticated tailoring by senior decision makers. Test fidelity illuminates the interdependency between “speed to need” and operational effectiveness with a deeper understanding of the risks posed to confirming that the weapon system will function as intended in the field. 

Test fidelity expands on the broadly grouped verification methods used in the DoD—examination, analysis, demonstration and test—to include more aspects such as the test environment, test article configuration and use case. For example, the verification method “test” could be expanded to include the test article’s representativeness of the production configuration and its tactical similarity to the planned employment scenario. These additional attributes of the verification method describe its test fidelity, and this fineness of detail can be used to make strategic decisions such as taking credit for operational test points during a program’s developmental test phase. This example, of course, is the integrated test policy codified in DoD Instruction 5000.02 implemented to accelerate acquisitions. By continuing to examine test fidelity at finer levels of detail, the DoD can find additional efficiencies in verification programs to speed weapon systems to the field.
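To make the idea concrete, the enrichment of a verification method with fidelity attributes could be sketched as a simple data record. This is a hypothetical illustration only: the attribute names and values below are invented for this sketch and are not drawn from any DoD standard or program artifact.

```python
from dataclasses import dataclass

# The four broad DoD verification methods named in the article.
METHODS = {"examination", "analysis", "demonstration", "test"}

@dataclass
class VerificationEvent:
    """One verification event, enriched with hypothetical fidelity attributes."""
    requirement: str
    method: str               # one of the four broad verification methods
    article_config: str       # e.g., "production-representative" vs. "prototype"
    scenario_similarity: str  # e.g., "tactical profile" vs. "laboratory"
    environment: str          # e.g., "open-air range" vs. "chamber"

    def __post_init__(self):
        if self.method not in METHODS:
            raise ValueError(f"unknown verification method: {self.method}")

# Example: a flight test whose fidelity attributes might support taking
# integrated-test credit for operational test points.
event = VerificationEvent(
    requirement="Countermeasure resistance",
    method="test",
    article_config="production-representative",
    scenario_similarity="tactical profile",
    environment="open-air range",
)
print(event.method, event.article_config)
```

A record like this carries the “fineness of detail” the article describes: two events with the same method can differ sharply in fidelity, and that difference is what a decision forum would trade against schedule.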

Arming annual CSBs with these insights would be a step toward formalizing consideration of operational relevance and test fidelity at a decision-making level especially empowered to compel action. This is true even if the action drives a change in the approved acquisition strategy for executing programs. An easily digestible, conceptualized visualization for adjusting verification program requirements through these two lenses could provide the decision aid structure to facilitate implementing the PEOs’ vision for future CSBs.

Operational Relevance and Test Fidelity

So what should this visualization tool look like? One way to answer this question is to introduce a hypothetical (yet plausible) example. 

Consider a scenario in which a USAF missile EMD program is executing all-up system integration testing, but the program is projected to overrun a major program date. Of all the capabilities to be verified, performance against threat countermeasures is causing negative schedule margin along the driving path to the milestone. The annual CSB date is approaching, and test and schedule trade-offs are expected to be prominent in the discussion based on pre-meeting staff work. In preparation for the board, the program office builds slides according to the SAF/AQ (Air Force Acquisition) template and adds a chart with the graphic shown in the figure. The image shows which threat countermeasures still require testing, the percentage of threats employing a particular countermeasure type, and the fidelity of testing. Additionally, the program office labels blocks of testing with planned durations pulled from the latest Integrated Master Schedule (IMS), and this information is presented in the context of the overarching schedule milestones to aid in the review. The tool guides discussions among the technical experts, warfighter representatives and program managers present at the meeting, leading to a decision by the board that satisfies the varied needs of all stakeholders.

The decision in this scenario is unimportant. Rather, it is the structure and information provided by the visualization tool that guide shrewd managerial discourse at a level of authority empowered to effect change for the better. Its simplicity provides a clear picture to senior leadership by conveying a conceptual view of verification progress and what trades in test fidelity and operational impacts can be made to meet milestone dates. Importantly, this crisp visualization tool facilitates exactly what the PEOs recommended: using the CSB process to “adjust test plans and requirements” and enabling operational leaders in the Services to influence test scope.

Building the Visualization Tool

Information needed to create this decision aid can be sourced readily from existing program data. In the previous example, operational relevance was represented as the type of countermeasure, which may be found in the VOLT and related intelligence documents, and frequency of encounter, which may be drawn from mission- and engagement-level simulations already populated with threat data. Test fidelity can be found in many places including an overarching system verification plan (if contracted) and the IMS, which could also provide the duration of test activities. 

Deciding which capabilities to track should be based on critical capabilities designed into the weapon system, verification tasks on schedule-driving paths, or any other consideration deemed important by the program. Critical capabilities may be described in select Key Performance Parameters, Key System Attributes, Additional Performance Attributes and Technical Performance Measures—or, if finer granularity is required, some other salient system capability, depending on program timing. Beyond criticality, the selection of capabilities for monitoring may be based on schedule durations impacting important intermediate milestones (this selection criterion was used in the preceding example).
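As a rough illustration of how this decision-aid data might be assembled, the sketch below builds a small table pairing operational relevance (countermeasure type and threat frequency) with test fidelity and remaining test duration. Every capability name, percentage and duration here is an invented placeholder; in a real program these values would come from the VOLT, mission- and engagement-level simulations, the system verification plan and the IMS.

```python
# Hypothetical decision-aid data: (capability, % of threats employing it,
# test fidelity description, remaining test days per the schedule).
rows = [
    ("Countermeasure type A", 60, "open-air range / tactical profile", 45),
    ("Countermeasure type B", 25, "hardware-in-the-loop laboratory", 20),
    ("Countermeasure type C", 15, "digital simulation", 5),
]

# Rank by remaining duration so the items driving the schedule surface
# first for the board's trade-off discussion.
rows.sort(key=lambda r: r[3], reverse=True)

print(f"{'Capability':<24}{'Threats %':>10}  {'Fidelity':<34}{'Days':>5}")
for cap, pct, fidelity, days in rows:
    print(f"{cap:<24}{pct:>10}  {fidelity:<34}{days:>5}")
```

The point of the sketch is only that the inputs already exist in standard program artifacts; turning them into a one-chart view for the CSB is a sorting and presentation exercise, not a new data collection effort.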


This article advocates elevating an observed practice in program offices to the CSB to help make better-informed decisions on verification program adjustments and squeeze more value out of test. A simple yet powerful visualization tool is proposed as a decision aid to help guide discussions regarding test program adjustments to meet a broad set of stakeholder needs. This visualization tool can be built leveraging data already available to program offices. By viewing test programs through the lenses of operational relevance and test fidelity, CSBs can address the concern voiced by PEOs regarding the amount of influence Service operational leaders wield on test programs. 

Petrucci is an Air Force acquisition and test professional with more than 18 years of experience. He has served as program manager, test manager and test director on five Acquisition Category (ACAT) I programs and one ACAT II program covering all phases of acquisitions in both traditional and non-traditional approaches. He is a deputy technical director in the 413th Flight Test Squadron at Duke Field in Florida for the ACAT IC HH-60W Combat Rescue Helicopter program. 

The author can be contacted at [email protected].