
Analysis of Alternatives (AoA)

Handbook

A Practical Guide to the Analysis of Alternatives

10 June 2013

Office of Aerospace Studies


Air Force Materiel Command (AFMC) OAS/A5
1655 1st Street SE
Kirtland AFB, NM 87117-5522
(505) 846-8322, DSN 246-8322
www.oas.kirtland.af.mil

For public release. Distribution unlimited. 377ABW-2013-0453
Table of Contents
PREFACE.................................................................................................................................................... 6
1 INTRODUCTION .............................................................................................................................. 7
1.1 PURPOSE OF THE AOA ........................................................................................................8
1.2 AOA ENTRY CRITERIA .......................................................................................................8
1.3 AOA STUDY GUIDANCE ...................................................................................................10
1.4 AOA PRODUCTS ...............................................................................................................11
1.5 DECISION MAKER EXPECTATIONS OF AN AOA .................................................................13
1.6 REFERENCE INFORMATION ...............................................................................................15
2 DECISION MAKING AND HOW THE AOA FITS .................................................................... 17
2.1 MAJOR PROCESSES ...........................................................................................................17
2.2 WHAT DECISIONS MUST BE MADE/SUPPORTED? .............................................................19
2.3 ROLE OF ANALYSIS IN THE MATERIEL SOLUTION ANALYSIS PHASE ................................22
2.4 WHO USES THE ANALYSIS ................................................................................................22
2.5 RELATIONSHIP BETWEEN AOA AND OTHER ACTIVITIES ...................................................25
3 PLANNING THE ANALYTICAL EFFORT................................................................................. 31
3.1 SCOPING THE ANALYSIS ...................................................................................................31
3.2 DEFINING THE ALTERNATIVE CONCEPTS ..........................................................................35
3.3 IDENTIFYING STAKEHOLDER COMMUNITY .......................................................................35
3.4 DETERMINING LEVEL OF EFFORT .....................................................................................36
3.5 ESTABLISHING THE STUDY TEAM .....................................................................................38
3.6 STUDY PLAN PREPARATION AND REVIEW ........................................................................41
4 PERFORMING THE EFFECTIVENESS ANALYSIS ................................................................ 43
4.1 EFFECTIVENESS METHODOLOGY ......................................................................................43
4.2 EFFECTIVENESS ANALYSIS METHODOLOGY .....................................................................45
4.3 LEVELS OF ANALYSIS .......................................................................................................53
4.4 SENSITIVITY ANALYSIS ....................................................................................................57
4.5 EFFECTIVENESS ANALYSIS RESULTS PRESENTATION .......................................................58
5 PERFORMING COST ANALYSIS................................................................................................ 60
5.1 GENERAL COST ESTIMATING ............................................................................................60
5.2 AOA COST ESTIMATING ...................................................................................................60
5.3 LIFE CYCLE COST CONSIDERATIONS ................................................................................62
5.4 COST ANALYSIS RESPONSIBILITY .....................................................................................65
5.5 COST ANALYSIS METHODOLOGY .....................................................................................67
5.6 COST RESULTS PRESENTATION.........................................................................................77
5.7 COST DOCUMENTATION ...................................................................................................78
6 PERFORMING THE RISK ANALYSIS ....................................................................................... 80
6.1 RISK ASSESSMENT FRAMEWORK ......................................................................................80
6.2 RISK IDENTIFICATION .......................................................................................................83
6.3 USING PREVIOUS ANALYSES ............................................................................................85
7 ASSESSING SUSTAINABILITY IN THE ANALYSIS OF ALTERNATIVES STUDY .......... 86
7.1 INTRODUCTION .................................................................................................................86
7.2 WHAT IS SUSTAINABILITY? ..............................................................................................86
7.3 DEFINING THE MAINTENANCE CONCEPT AND PRODUCT SUPPORT STRATEGY .................86
7.4 SUSTAINABILITY PERFORMANCE, COST, AND RISK ..........................................................87
7.5 RELIABILITY, AVAILABILITY, MAINTAINABILITY AND COST RATIONALE REPORT ...........94
7.6 SUSTAINMENT KEY PERFORMANCE PARAMETER .............................................................95
8 ALTERNATIVE COMPARISONS ................................................................................................ 97
8.1 ALTERNATIVE COMPARISON METHODOLOGY ..................................................................97
9 DOCUMENTING ANALYTICAL FINDINGS ........................................................................... 104
APPENDIX A: ACRONYMS ............................................................................................................... 107
APPENDIX B: REFERENCES AND INFORMATION SOURCES ................................................ 113
APPENDIX C: STUDY PLAN TEMPLATE ...................................................................................... 114
APPENDIX D: FINAL REPORT TEMPLATE ................................................................................. 121
APPENDIX E: STUDY PLAN ASSESSMENT .................................................................................. 126
APPENDIX F: FINAL REPORT ASSESSMENT .............................................................................. 127
APPENDIX G: LESSONS LEARNED ................................................................................................ 129
APPENDIX H: OAS REVIEW OF DOCUMENTS FOR AFROC................................................... 130
APPENDIX I: JOINT DOD-DOE NUCLEAR WEAPONS ACQUISITION ACTIVITIES .......... 131
APPENDIX J: HUMAN SYSTEMS INTEGRATION (HSI) ............................................................ 140
APPENDIX K: ACQUISITION INTELLIGENCE IN THE AOA PROCESS ................................ 148
APPENDIX L: MISSION TASKS, MEASURES DEVELOPMENT, AND DATA IN DETAIL ... 154
APPENDIX M: GAO CEAG, TABLE 2 .............................................................................................. 167
APPENDIX N: DEVELOPING A POINT ESTIMATE .................................................................... 170
APPENDIX O: CAPE AOA STUDY GUIDANCE TEMPLATE ..................................................... 182

List of Figures
Figure 1-1: Air Force AoA Activities Overview ..................................................................... 12
Figure 2-1: Capabilities Based Planning (CBP) Process ....................................................... 18
Figure 2-2: Decision Framework ............................................................................................. 19
Figure 3-1: Example Study Team Structure .......................................................................... 40
Figure 4-1: General Approach for Effectiveness Analysis .................................................... 44

Figure 4-2: Hierarchy of Analysis ........................................................................................... 54
Figure 4-3: Notional Example of Tool and Measure Linkage ............................................... 56
Figure 4-4: Effectiveness Analysis Results Presentation ....................................................... 59
Figure 5-1: Comparing All Alternatives Across the Same Life Cycle ...................................... 65
Figure 5-2: Cost by fiscal year and appropriation ................................................................. 77
Figure 5-3: General LCC Summary (By Alternative) ........................................................... 78
Figure 6-1: Standard Air Force Risk Scale Definitions ......................................................... 81
Figure 7-1: Sustainment Key Performance Parameter ......................................................... 95
Figure 8-1: Aircraft Survivability System Cost/Capability Tradeoff Example .................. 99
Figure 8-2: Target Defeat Weapon Cost/Capability Tradeoff Example............................ 101
Figure 8-3: Example of Critical MOE Results ..................................................................... 102
Figure 8-4: Example of Comparing Alternatives by Effectiveness, Risk, and Cost ......... 103

List of Tables
Table 4-1: Weighting Measures ............................................................................................... 45
Table 4-2: JCIDS JCAs ............................................................................................................ 48
Table 5-1: GAO's Basic Characteristics of Credible Cost Estimates .................................. 61
Table 5-2: Most Common Cost Estimating Methods
Table 5-3: Cost As an Independent Variable (CAIV)................................................................. 73
Table 5-4: A Hardware Risk Scoring Matrix .......................................................................... 75
Table 5-5: A Software Risk Scoring Matrix ............................................................................ 76
Table 7-1: Sustainability Concepts/Attributes ....................................................................... 88
Table 7-2: Measure of Suitability Description Example ....................................................... 90
Table 7-3: Operations and Support Cost Element Categories ............................................. 92

Preface
This is the 2013 version of the OAS AoA Handbook. It has undergone a major rewrite based
upon recent changes to OSD and Air Force policy and guidance. It incorporates a number of
recommendations we received from the AoA community about the 2010 version. This
handbook is intended to make you think; it is not a one-approach-fits-all recipe. It is also not
intended to be a standalone self-help manual. OAS has a companion handbook, the Pre-MDD
Handbook, which addresses key analytic issues that precede the AoA.

Since every AoA is different, the handbook emphasizes the whats and whys of requirements
analysis more than the how. The details of how are very problem specific and are best worked
one-on-one with an OAS advisor. Philosophically, we have focused on broadly addressing the right
requirements questions to the level of detail that the senior decision makers need. We advocate
frequent and open communication both to understand what the senior decision makers need and
to convey what the analysis uncovers. We advocate sound analytic processes, not specific tools.
While detailed analytic tools are often necessary for key parts of an AoA, much can be learned
with good use of simpler approaches such as parametric analyses and expert elicitation. We
encourage you to read through Chapters 1 through 3 to get a general understanding of how the
AoA relates to the requirements decision processes. More specific information regarding the
AoA process can be found in Chapters 4 through 9.

This handbook is grounded in over twenty years of providing analytic advice on Air Force and
DoD AoAs. It has been shaped by best practices we have gathered from well over two hundred
AoAs, and by what we have observed to be the expectations of Air Force and OSD senior
decision makers. Those expectations keep evolving, and in response so will this handbook. If
you have questions regarding the currency of your AoA handbook version, please contact OAS
at OAS.DIR@kirtland.af.mil to ensure that you have the most recent version.

We encourage you to contact us if parts of the handbook are not clear, if you are not sure how
they apply to your situation, or if you have suggestions on how to improve the
document. We always appreciate feedback.

Jeff Erikson
Director, Office of Aerospace Studies

1 Introduction
The Analysis of Alternatives (AoA) is an analytical comparison of the operational effectiveness,
suitability, risk, and life cycle cost (or total ownership cost, if applicable) of alternatives that
satisfy validated capability needs (usually stipulated in an approved Initial Capabilities
Document (ICD)). An AoA typically occurs during the Materiel Solution Analysis (MSA)
phase and applies to all Acquisition Category (ACAT) initiatives in accordance with the
Weapon Systems Acquisition Reform Act (WSARA) of 2009, Department of Defense
Instruction (DoDI) 5000.02, Air Force Instruction (AFI) 10-601 and AFI 63-101 direction. The
AoA must make a compelling statement about the capabilities and military worth that the
alternatives provide. In short, the AoA must provide decision-quality information that enables
senior decision makers to debate and assess a potential program's operational capability and
affordability, and maximize its investment.

AoAs are essential elements of three Department of Defense (DoD) processes that work in
concert to deliver the capabilities required by warfighters: the requirements process, the
acquisition process, and the Planning, Programming, Budgeting, and Execution (PPBE) process.
Details of how AoAs support and interact with these processes are discussed in Chapter 2 of this
handbook.

Other Services/DoD Components have their own processes for executing AoAs. When the Air
Force is directed to support an AoA led by another Service/DoD Component, the Air Force will
follow the lead organization's procedures and guidance. The Air Force's direct involvement in
the lead organization's process will ensure that Air Force interests are considered and addressed
in the AoA. Likewise, for Air Force-led AoAs, it is imperative that the Air Force represent,
address, and analyze the supporting organizations' issues and concerns. Finally, all AoAs must
be conducted in accordance with the Acquisition Decision Memorandum (ADM) issued by the
Milestone Decision Authority (MDA) at the Materiel Development Decision (MDD) and any
additional guidance provided by appropriate requirements and acquisition authorities.

The processes outlined in this handbook apply to all AoAs regardless of ACAT level or Joint
Staffing Designator (JSD), formerly known as Joint Potential Designator (JPD) (for additional
insight on the JSD definitions, see the JCIDS Manual). They ensure that the recommendations
from the AoA represent credible, defensible results. The only difference between AoAs of
different ACAT levels or JSDs is the level of effort, oversight, and approval required.
According to the Weapon Systems Acquisition Reform Act (WSARA) of 2009, Director, Cost
Assessment and Program Evaluation (CAPE) has responsibility for all efforts that are JROC
Interest regardless of ACAT level. Approval processes are dictated more by the JSD than by
the ACAT level since any effort may have JROC or OSD interest, regardless of ACAT level.

The AoA process consists of the following primary activities, which are described throughout
the remainder of this handbook: study guidance development, planning, execution, and
reporting.

1.1 Purpose of the AoA


There are two primary goals of the AoA. The first is to provide decision-quality analysis and
results to inform the Milestone Decision Authority (MDA) and other stakeholders at the next
milestone or decision point. The AoA should shape and scope courses of action (COA) for new
materiel to satisfy operational capability needs and the Request for Proposal (RFP) for the next
acquisition phase. The AoA provides the analytic basis for performance parameters
documented in the appropriate requirements documents (e.g., Joint DCR, AF Form 1067,
Capability Development Document (CDD), and/or Capability Production Document (CPD)).
Additionally, the AoA should use the capability gaps from the ICD(s) as starting points rather
than as minimum standards to disqualify alternatives. The AoA should provide feedback to the
requirements process of any recommended changes to validated capability requirements that
appear unachievable and/or undesirable from a cost, schedule, performance, and/or risk point of
view. The AoA results enable decision-makers to have the appropriate cost, schedule,
performance, and/or risk tradeoff discussions.

The second goal of the AoA is to provide an understanding of why alternatives do well or
poorly. The AoA results should characterize the circumstances in which a given alternative
appears superior and the conditions under which its outcome degrades. The AoA should
explain the cost drivers and other factors, such as the Concept of Operations (CONOPS),
manpower, and the performance parameters of the alternatives, that impact the ratings of the
assessment metrics.

AoAs are useful in determining the appropriate investment strategy for validated, prioritized,
operational needs. In addition to considering operational effectiveness, suitability, risk, and life
cycle cost, the AoA should also highlight the impact to each alternative with respect to domestic
and foreign policy, technical maturity, industrial base, environment, treaties, etc. AoAs also
provide a foundation for the development of documents required at the next acquisition
milestone, such as the Technology Development Strategy (TDS), Test and Evaluation Strategy
(TES), and Systems Engineering Plan (SEP).

1.2 AoA Entry Criteria


Air Force policy requires that the following information be presented to the Air Force
Requirements Review Group (AFRRG) for approval before a team may proceed with initiating an
AoA study plan:

A completed Capabilities Based Assessment (CBA) that has been approved by the sponsoring command
An Air Force Requirements Oversight Council (AFROC) approved ICD
An operational risk assessment of not filling the capability gap, approved by the sponsoring command and the AFROC
A list of the most promising alternatives to address the gap
Intelligence Support Considerations approved by AF/A2
AoA Study Guidance signed by the Air Force and, when appropriate, Cost Assessment and Program Evaluation (CAPE)

The list of promising alternatives may include representatives of the following solution
categories:

Legacy systems
Modified legacy systems
Modified commercial/government/allied off-the-shelf systems
Dual-use items
Additional production of previously developed U.S./allied systems
New development alternatives, which can include:
o Cooperative development with allies
o New Joint Service development
o New DoD component-unique development

For programs that have OSD (AT&L) as the MDA, the AoA entry criteria items discussed
above, as well as an AFROC-validated and CAPE-approved study plan, are required at MDD.

According to the Defense Acquisition Board (DAB) template, OSD (AT&L) requires that the
following additional information be presented before an AoA will be approved at MDD:

CONOPS summary - provides operational context for understanding the need and solution tradespace
o Desired operational outcome
o Effects produced to achieve outcome
o How capability complements joint forces
o Enabling capabilities
Development Planning (DP)
o Non-materiel solution approaches to mitigate the capability gap
 - Review of what was considered
   - Includes buying more of existing
   - Includes using existing differently than current employment
   - Includes tolerating the gap
 - Evidence of how well (or not) they mitigate the gap
o Materiel solution approaches which could address the capability gap
 - Review of what was considered
 - Evidence that these approaches provide the desired operational performance parameters
o Which approaches are included in AoA guidance and/or study plan
 - Evidence of technical feasibility to meet needed timeframe
 - Basic capabilities that solution has to fill capability gap within needed timeframe (mission effectiveness)
 - For each alternative, what are the implications or dependencies
   - Includes portfolio implications
   - Includes existing system impacts
   - Includes related ICDs
   - Includes additional capabilities needed to address the gap (System of Systems (SoS)/Family of Systems (FoS))
 - How are dependencies factored into the AoA study plan
o When is the capability needed
 - Is proposed materiel solution expected to be available
 - What is being done to address the gap until the materiel solution becomes available
Affordability constraints (describe how much DoD is willing to spend to fill the gap)

In addition to the information required by policy, the following may be available from previous
analytical efforts and early systems engineering activities and can be used in the AoA:

Scenarios and threats utilized in the CBA


Analysis measures (mission tasks, measures of effectiveness, etc.) utilized as evaluation
criteria in the CBA
Definition of baseline capabilities from the CBA
Core team members from the ICD High Performance Team (HPT) membership
Initial Concept Characterization and Technical Descriptions (CCTDs) from Early
Systems Engineering (ESE) and DP activities

This information is discussed further in Section 3.1.1 Using Previous Analysis as the
Foundation.

1.3 AoA Study Guidance


AoA study guidance is required prior to the MDD and is developed to address the critical areas
that the senior decision makers want explored during the AoA. For Air Force-led AoAs, the
study guidance will build upon the initial input identified during the ICD High Performance
Team (HPT). A Best Practice is to maintain continuity of HPT membership
from the ICD through the AoA (and beyond). Having enduring HPT membership will help
provide continuity, greatly facilitate AoA planning, and ensure the stakeholder communities are
properly represented.

AF/A5R requires the sponsoring organization to notify AF/A5R-P for approval to proceed prior
to drafting AoA study guidance. The notification must also identify which specific AFROC-
validated/prioritized topic and the associated ICD(s) this analysis will address. Additionally, an
HPT is required to develop AoA study guidance and is usually conducted in conjunction with
the ICD HPT. AF/A5R-P will review and approve the HPT membership prior to approving an
HPT. Air Force policy has identified core organizations that have enduring membership for all
HPTs. The sponsoring organization develops and coordinates the guidance and submits the
document to AF/A5R-P. AF/A5R will approve all Air Force sponsored AoA study guidance
before it is submitted to CAPE. For those AoAs where Director, CAPE elects not to provide
AoA study guidance, AF/A5R will serve as the approval authority.

The AoA study guidance provides direction and insight to the team to plan and execute the
study. CAPE-issued AoA study guidance should be drafted using the AoA Study Guidance
Template included in Appendix O of this handbook. There may be additional sections and
major considerations required by the Air Force (either supplemental to CAPE guidance or when
there is no CAPE guidance).

1.4 AoA Products


Most AoAs produce four major products:
1. A study plan which outlines the background, key questions, guidance, methodologies,
tools, data, schedule, and other elements of the AoA
2. An interim progress briefing to summarize early findings (this often results in
refinement to the direction of the AoA)
3. A final report to document the AoA results in detail
a. Includes identification of the tradespace explored
b. Includes cost/performance/schedule tradespace findings
4. A final briefing, including the Requirements Correlation Table (RCT), to summarize the
final results of the AoA

Figure 1-1 illustrates the review process and sequence of events for these products beginning
with the ICD HPT/draft AoA study guidance through the decisions supported by the AoA final
report and briefing.

Figure 1-1: Air Force AoA Activities Overview

The study plan is critical to the AoA process because it defines what will be accomplished
during the AoA and how it will be done. As shown in Figure 1-1, the AoA study plan must be
completed and approved prior to the MDD. The study plan must be reviewed by the AFRRG
and validated by the AFROC. For those efforts with OSD oversight, the study plan must also
be approved by CAPE prior to the MDD. The study plan should clearly identify how the effort
will address all AoA guidance received from the Air Force and OSD. [Note: this requires the
study team to develop and obtain AFROC validation of the study plan before the formal MDA
decision is made to conduct an AoA]. Appendix C contains a recommended template for the
study plan.

The interim progress briefing is designed to provide interim results and to permit redirection of
the AoA by senior reviewers. The most common reasons for interim progress briefings include:

Changes to key assumptions and constraints


Significant modification of the mission tasks and/or measures
Knowledge gained in the analysis to date that would impact requirements decisions,
such as:
o Alternatives showing insufficient mitigation of the gaps
o Alternatives demonstrating unaffordability
o Recommendations for the focus of remaining analysis

o Early identification of areas requiring sensitivity analysis [Note: this is critical
to end results because it identifies the key areas of the tradespace that need to be
explored sufficiently.]

The final report is the repository for AoA information describing what was done, how it was
accomplished, and the results/findings. The final report requires significant time and effort to
produce and staff. It should include detailed descriptions of the analysis and results of the AoA
effort. Since team members may disperse quickly after their parts of the study are completed, it
is important to continuously document the process and results throughout the study. If the final
report is not finalized shortly after the end of the study, there may be little to show for what was
accomplished during the AoA. A study not documented is a study not done.

The final briefing is generated from the final report and is the mechanism to illustrate the
answers to important questions and issues, and summarize the findings for the decision makers.
[Note: the briefing is usually referred to more frequently than the final report. Therefore, the team must
ensure that it is an appropriate representation of the final report.] It is important that both the
final report and final briefing address the following:

Enablers such as logistics, intelligence, Human Systems Integration, and communications and their impact on the cost, risk, and effectiveness of the study alternatives
Key study questions sufficiently to inform the decision makers
Appropriate operational performance parameters and threshold/objective values for the
RCT
Alternatives' capabilities to mitigate gaps
Performance, cost, and risk drivers
Tradespace explored:
o Trade-offs between performance, cost, and risk
o Sensitivity analysis results (e.g., dependency of results on assumptions,
scenarios, CONOPS/CONEMP, technology maturity, etc.)
Affordability constraints identified at MDD

Both the final report and final briefing should follow the entire review/oversight process to the
AFROC and OSD for approval as outlined in AFI 10-601.

As the products are created and briefed, the Study Director should consider the classification
and proprietary nature of the information. The Study Director should also ensure the team
understands the AoA products may contain pre-decisional information and should not be
released outside the AoA study team or the approval chain. During the creation and briefing of
each of these products, OAS stands ready to assist the study team.

1.5 Decision Maker Expectations of an AoA


Senior leaders and decision makers continually refine their expectations of an AoA. CAPE and
the AFROC have identified the following key expectations for an AoA:

Unbiased inquiry into the costs and capabilities of options (identify the strengths and
weaknesses of all options analyzed)
Identification of key trades among cost, schedule, and performance using the capability
requirements (e.g., ICD and CDD gaps) as reference points
Identification of potential Key Performance Parameters (KPPs)/Key System Attributes (KSAs) and an assessment of the consequence of not
meeting them
Explanation of how key assumptions drive results, focused on the rationale for the
assumption
Explanation of WHY alternatives do or do not meet requirements and close capability
gaps
Identification of the best value alternatives based on results of sensitivity analysis
Increased emphasis on affordability assessments (conditions and assumptions under
which a program may or may not be affordable)
Increased emphasis on legacy upgrades and non-developmental solutions versus new
starts
o Explore how to better use existing capabilities
o Explore lower cost alternatives that sufficiently mitigate capability gaps but may
not provide full capability
Increased emphasis on expanding cost analysis to focus beyond investment, for
example, O&S across the force beyond the alternatives being analyzed
Explore the impact of a range of legacy and future force mixes on the alternatives
Increased emphasis on exploring an operationally realistic range of scenarios to
determine impact on performance capabilities and affordability

CAPE and the AFROC have also identified what is NOT valued in an AoA:

Building a case for a preferred solution by attempting to validate a pre-selected option instead of allowing results to inform the decision.
Over-emphasis on meeting KPP thresholds by automatically disqualifying alternatives
instead of exploring what is needed to sufficiently mitigate the gap and achieve an
acceptable level of operational risk.
Over-emphasis on performance capability without adequately addressing cost and
schedule risks.
Focus on the best system versus the best value for the investment.
Focus on assumptions, data, and scenarios that unfairly skew the results toward a
preferred alternative, at the expense of understanding the actual need.

In summary, the AoA must identify what is the best value, not what is the best system.

1.6 Reference Information
The following terms and definitions are provided to facilitate the use of this handbook and the
AoA process:

Cost Driver - any element within the cost work breakdown structure (WBS)
that has a noticeable or significant effect on the overall cost.
Decision framework - the series of informal and formal investment decisions
supported by operational requirements analysis.
Enablers - any element, such as a system, process, or information, that is
required for the success of an assigned task or mission in support of an
operational capability.
Problem space - the area to be explored to determine how well AoA options
mitigate the specific identified gaps associated with a specific mission or core
function.
Solution space - the range of alternatives that adequately address the capability gaps
defined by the problem space and whose characteristics/performance parameters will
define the tradespace to be analyzed.
Stakeholders - any organization, agency or Service with a vested interest (a
stake) in the outcome of the pre-acquisition analyses. A stakeholder may
contribute directly or indirectly to these pre-acquisition activities. A
stakeholder usually stands to gain or lose depending on the decisions made
from these pre-acquisition activities.
Tradespace - the range of options to address operational requirements that
explore trade-offs among performance, risk, cost, and schedule.
Tradespace analysis - analysis to highlight key trades among performance, risk,
cost, and schedule, if they exist. These trades highlight where the tradespace
exists to determine what level of required capability should be acquired, when,
and at what cost. This analysis should also identify where there is no
tradespace.

The following references provide more detailed information regarding AoA processes and
products:

AFI 10-601
AFI 63-101
AT&L's MDD Defense Acquisition Board (DAB) Template
AT&L's MDD Checklist
CAPE's AoA Guidance Template
Defense Acquisition Guidebook (DAG)
(https://acc.dau.mil/CommunityBrowser.aspx?id=526151)
Defense Acquisition Portal (https://dap.dau.mil/Pages/Default.aspx)

DoDI 5000.02
JCIDS Manual

2 Decision Making and How the AoA Fits
The purpose of this chapter is to describe the operational capability requirements development
process. It defines the decisions supported by various analytical efforts, the role of analysis, and
the appropriate decision makers. Additionally, it shows the interfaces between the analysis and
other activities.

2.1 Major Processes

Capability Based Planning (CBP) supports operational capability requirements development.


Figure 2-1 shows the six key processes that make up CBP. The six processes include the
following, of which the first three have the greatest impact on the AoA:

Strategic guidance is issued by the Office of the Secretary of Defense (OSD) to provide
strategic direction for all subsequent decisions and to provide planning and programming
guidance for the building of the Program Objective Memorandum (POM) and the development
of acquisition programs. This information is the foundational guidance for all analysis.
Support for Strategic Analysis (formerly known as the Analytic Agenda) is driven by
guidance from OSD; the analysis and modeling and simulation are executed by the
JS/J-8 to identify potential force structure issues and to provide detail on the Defense
Planning Scenarios (DPSs) and Integrated Security Constructs (ISCs) used for
identifying capability needs.
The JCIDS process identifies capability needs based on input from the concepts and the
Strategic Analysis and feeds the results to the acquisition and budgeting processes.
The Planning, Programming, Budgeting and Execution (PPBE) process is directed by the
Comptroller and the OSD Director, Cost Assessment and Program Evaluation, to ensure
appropriate funding for the Department's efforts.
AT&L provides policy guidance and oversight on the acquisition process, makes
acquisition decisions on Major Defense Acquisition Programs (MDAP) and Major
Automated Information Systems (MAIS), and coordinates program decisions through
use of capability roadmaps.
The J7 manages the Joint Concepts development and approval process. These top-down
identified concepts become the baseline for developing capability needs.

Figure 2-1: Capabilities Based Planning (CBP) Process

For the Air Force, AFI 10-601, Operational Capability Requirements Development, outlines the
processes for development of operational requirements documents (what JS/J8 calls JCIDS
documents). There are several categories of operational requirements development efforts that
may or may not require an AoA study for mitigating or closing gaps. The Air Force
Requirements Review Group (AFRRG) reviews and approves the initial Requirements Strategy
Review which defines the strategy for mitigating the identified gaps and the requirement for an
AoA. Two categories where an AoA is not required are:

Efforts that support technology refreshment of existing systems, but provide no new
capability. An AoA is not required since there will be no new acquisition effort (e.g.,
AF Form 1067).
Efforts to address current urgent operational needs are handled through the Urgent
Operational Need (UON)/Joint Urgent Operational Need (JUON) process.

If the effort addresses future new capabilities (including sustainment efforts that contain
requirements for new capabilities) and requires articulated capability needs via JCIDS (i.e.,
ICD/CDD), then an AoA is required.

2.2 What Decisions Must Be Made/Supported?
It is important to understand how the AoA fits into the decision framework and the decisions
made prior to and following the AoA. As illustrated in Figure 2-2, there are four major decision
points within the decision framework.

Figure 2-2: Decision Framework

The focus of Decision Point 1 is to examine the problem space and decide if additional
analysis is required. The problems to study may be downward directed (e.g., by the Chief of Staff of
the Air Force) or identified by the Core Function Lead Integrator (CFLI). This decision often
identifies the need to conduct a Capabilities Based Assessment (CBA) or other analysis (e.g.,
DOTmLPF-P analysis). The analysis resulting from this decision point should answer:

What parts of the problem are already understood?


What is the gap(s)? What is the cause(s)? What will happen if we stay on the present
course?
What is the operational risk(s) caused by the gap(s)?
How can the community better use what they already have? What gaps and risks
remain?
How much can be solved through modifications to existing systems? What gaps and
risks remain?
Is a new materiel option even feasible or is S&T investment needed?

The results from this analysis inform Decision Point 2. At this point, a decision is made to
develop (or not develop) an ICD. AF Policy requires the Air Force Requirements Review
Group (AFRRG) to review and approve an initial requirements strategy review (RSR) prior to
Decision Point 2. The purpose of this review is to conduct a strategic requirements overview
that will accomplish the following:
Examine previous analysis (Is the CBA or other analyses sound?)
Assess operational risk (Is there compelling operational risk to drive a need for a
materiel solution?)
Evaluate affordability (Do the costs merit an investment decision?)
Validate capability gap(s) (Are the gaps still operationally relevant and a priority?)
Examine materiel and non-materiel solutions (Will the solutions proposed for
investigation likely close or sufficiently mitigate the identified gaps?)

Information presented at the RSR, along with previous analysis, informs Decision Point 2. The decision
may include:

Determining which gaps will not be addressed (risk is acceptable)


Identifying DOTmLPF-P changes to better use current systems
Deciding where to invest in Science and Technology (S&T) or modify current systems
Approving development of an ICD when a new materiel solution is likely needed
Identifying, scoping, and prioritizing the additional analysis needed before the Air Force
will be ready to request a Materiel Development Decision (MDD)

Note that Decision Point 2 does not authorize the start of an AoA. After Decision Point 2, ICD
development, Development Planning (DP), and associated pre-MDD analysis are the primary
activities that occur in preparation for the MDD (Decision Point 3). The focus of these
activities is to identify the following information:

Which gaps can be mitigated by non-developmental materiel solutions?


What are the COTS/GOTS solution types?
What are the broad cross-capability solution types?
o Air, Space, Surface, Subsurface, Cyber?
o Manned/unmanned? Sensor/shooter?
How would these solutions be employed? What are the implied capabilities needed to
make them work (logistics, command and control, communications, intelligence, etc.)?
What are the risks (technical, operations, integration, political, etc.) with each of these
solutions?
Which solutions demonstrate a capability to address the appropriate gap(s)? Which
solutions are affordable? Which meet the scheduled need-date?

The results from these activities (ICD development, DP, pre-MDD Analysis) become inputs for
determining if any of the viable (existing and new) materiel solutions are likely to be affordable
in the AF budget (funded through next milestone), mitigate enough of the gap(s) to be worth the
cost and risks, and have an acceptable level of impact on other systems, functions and enablers.

The following are several significant policy requirements that must be addressed prior to the
start of an AoA:
The AFROC requires that each lead command present, bi-annually, a list of ongoing and
forecasted AoAs with traceability to the appropriate Core Function Master Plans
(CFMPs). The AFROC reviews these lists for validation, de-confliction, and
prioritization. As a result of this requirement, the lead command must identify the
specific AFROC validated/prioritized AoA topic to analyze and its associated ICD(s).
AF policy requires that the entry criteria identified in Section 1.2 be presented for
approval prior to proceeding to AoA planning and development of the AoA study plan.
AT&L policy and the AT&L MDD DAB template require the following information be
presented at the MDD for approval to execute the AoA:
o Approved ICD (Joint Staff/Service Sponsor)
Definition of the CONOPS
Identification of capability gap and operational risk of not filling the gap
o Development Planning (Service Sponsor)
Range of materiel solution approach(es) which could address the gap
[Note: this is where the CCTD content is discussed. CCTDs must be
approved by SAF/AQR prior to submission of the AoA Study Plan to the
AFROC (see Section 3.1.1)]
Evidence of technical feasibility and external implications of the
alternatives in the AoA
Timeliness to capability need
o Approved AoA study guidance and plan (CAPE/Service Sponsor)
o Acquisition Plans (Service Sponsor)
Materiel solution analysis phase funding and staffing
Program schedule, affordability constraints
Entry criteria for next milestone/ADM content

The following are the types of questions asked by the AFRRG/AFROC to determine if an AoA
is required:

Is this an AF priority now?


Can we afford it now?
What is the capability gap(s)?
o Is it still valid in this budget reality?
What is the operational impact/risk?
What is the risk of maintaining the baseline?
What is the risk of divesting this capability/system?

AoAs contribute significantly to the acquisition process by providing the MDA critical
information to inform milestone decisions. The MDA may authorize entry into the process at
any point consistent with phase-specific entrance criteria and statutory requirements. AoAs are
typically accomplished in the Materiel Solution Analysis phase, but may be accomplished in any
subsequent acquisition phase to answer questions not addressed by a previous AoA or that
require updating. Results from the AoA provide information that allows the MDA to make an
informed decision on whether an acquisition program is appropriate and at which milestone the
program should begin. It also allows the Program Manager (PM) to structure a tailored,
responsive, and innovative program.

OAS can assist the analysis team in ensuring they have a defensible need to conduct an AoA.

2.3 Role of Analysis in the Materiel Solution Analysis Phase


According to the JCIDS Manual, analysis executed during the Materiel Solution Analysis Phase
supports the following:

Assessment of potential materiel solutions to mitigate validated capability gaps identified in an ICD
Identification of required DOTmLPF-P changes
Identification of best course of action to inform the MDA on how to mitigate prioritized
capability gaps
Development of the Capability Development Document and Technology Development
Strategy

According to the Defense Acquisition Guidebook, analysis executed during the Materiel
Solution Analysis Phase supports the following:

Assessment of industrial and manufacturing capability for each evaluated alternative in the AoA. This information should be used when developing the TDS
Identification of new or high risk manufacturing capability or capacity risks, if
applicable. This should include any risks associated with production scale-up efforts
and/or potential supply chain issues
Consideration of possible trade-offs among cost, schedule, and performance objectives
for each materiel solution analyzed.
Assessment of whether or not the operational capability requirement can be met
consistent with the cost and schedule objectives identified by the JROC [Note: if the
operational capability requirement cannot be met consistent with the JROC objectives,
the operational impact needs to be identified.]

2.4 Who Uses the Analysis


AoA study plans and results are usually briefed at high levels in the Air Force and the OSD.
These products inform the decision making process to potentially change doctrine, tactics,
techniques and procedures and, if appropriate, support acquisition of new capabilities. AoA
results influence the investment of significant DoD resources. Therefore, AoAs receive multi-
layered direction and oversight from start to finish. This direction and oversight is necessary
achieve agreement on the results and findings by all stakeholders.

For all AoA efforts, the following organizations will typically review the analysis:

Stakeholders
AFRRG/AFROC
Senior Review Group (SRG)/Senior Advisory Group (SAG)
Approval authority of AoA Study Guidance (AF/A5R and CAPE, where appropriate)
MDA
OAS
A5R functionals
SAF/AQ
SAF/A8

Stakeholders are representatives from any organization, Agency or Service with a vested
interest in the outcome of the AoA. They include representatives from the requirements
community, appropriate operational communities, engineers, logisticians, intelligence analysts,
maintainers, etc. The primary role of the stakeholders is to represent the mission
area/community associated with the problem being studied in the AoA and those who will
design, support, and maintain these capabilities. They review the analysis to ensure that the
trade-offs being examined inform the degree to which the gaps can be mitigated and the
operational risk reduced.

The AFRRG/AFROC, Senior Advisory Group, Guidance Approval Authority representatives, and MDA representatives ensure the study team accomplishes what it planned to complete.
They review the analysis to determine if the key questions are being answered and to provide
redirection, if necessary. They are also the principal oversight organizations for the execution
of the AoA. Their primary objective is to determine the degree to which each solution can
mitigate the gap(s) and reduce the operational risk. The analysis used to inform the decision
makers includes cost, effectiveness, risk, sensitivity, and tradespace. This information will also
be used to inform AF and DoD investment decisions in order to resolve/mitigate the identified
gap(s). The following are key areas that are scrutinized throughout the study:

What enablers (e.g., logistics, intelligence, communications, and Human Systems Integration) are addressed? How well are their interdependencies understood? What are the costs associated with the enablers?
Are the key questions answered sufficiently for the decision makers?
How well does the analysis determine each alternative's capability to mitigate each gap?
How well does the analysis determine potential key parameters and threshold and
objective values to inform development of the RCT?
How well does the analysis explore the tradespace? What sensitivity analysis is
accomplished to refine the threshold and objective values?
How sensitive is each solution to the analysis assumptions?
How sensitive are the threshold and objective values to cost and schedule? What are the
cost, schedule, and performance drivers?
How do the costs compare with any affordability constraints identified at the MDD
(based on rough cost estimate)?
What questions are still unanswered? What information is still needed?
What are the risks?
Which parts of the analysis are sound and which are the best that could be done, but
introduce more error?

The exact questions and issues change over time and are based upon
personalities and politics. OAS attends every AFROC and has a
representative co-located with AF/A5R, giving us the ability to quickly
adjust to new questions being asked and assist teams and sponsoring
organizations to be better prepared.

For AoAs where AT&L is the MDA, CAPE may also identify a Study Advisory Group (SAG)
in the AoA study guidance. AT&L may use the Overarching Integrated Product Team (OIPT)
and/or the Cost Performance IPT (CPIPT) to support their oversight of the AoA. See the
Defense Acquisition Guidebook for information concerning these panels.

AoAs that are Joint Requirements Oversight Council (JROC) Interest or Joint Capabilities
Board (JCB) Interest must be presented to the Functional Capabilities Board (FCB), JCB, and
JROC. Their primary role is to validate the threshold and objective values in the RCT.
Additionally, they provide informed advice to the MDA on the best course of action to mitigate
the specified prioritized capability gaps. During the Technology Development phase, the
program office will explore the tradespace using the threshold and objective values.

Before the final report is presented to the JROC, CAPE also has the responsibility of
accomplishing an AoA sufficiency review at the completion of the AoA. This is accomplished
for all efforts that are JROC Interest, JCB Interest, and/or have OSD oversight. CAPE's
sufficiency review is primarily focused on the following:

Were the key questions answered?


Does the analysis support the proposed acquisition investment strategy?
Can the Joint military requirement be met in a manner consistent with the cost and
schedule objectives recommended by the JROC?

OAS assists the study team in ensuring the analysis results are presented in a clear and
comprehensive manner that addresses the questions and issues identified above. OAS conducts
a risk assessment of the study plan, interim results, and final report for the AFRRG and AFROC
principals. This assessment assists the Air Force in determining the investment course of action
based on the analysis results.

In addition to these roles, OAS provides assistance to the AoA Study Director in identifying
each of these organizations before the initiation of the AoA, working with the stakeholder
community, and preparing for each of the appropriate reviews and analytical decision points.

2.5 Relationship between AoA and Other Activities


This section outlines the specific relationships between the AoA and other requirements related
activities.

2.5.1 Activities that shape the AoA

The main activities that lay the foundation for and provide a starting point for the AoA are:

Capability Based Planning (which includes the CBA)


Doctrine, Organization, Training, materiel, Leadership/Education, Personnel, Facilities,
and Policy (DOTmLPF-P) Analysis
Early Systems Engineering and Development Planning (DP)
Materiel Development Decision (MDD)

Depending on the AoA, there may be activities in non-DoD organizations (e.g., other Federal
Departments, the Intelligence Community, etc.) to consider. If so, contact OAS for assistance
in making the right contacts with the appropriate agencies for analytical support.

2.5.1.1 Capability Based Planning Contributions

The primary information that Capability Based Planning contributes to the AoA is:

Definition of the existing/programmed capabilities (also known as the baseline)


Threats and scenarios utilized in the conduct of the CBA
Measures and metrics utilized in the conduct of the CBA
o Tasks, conditions, and standards
o Evaluation criteria utilized in the CBA that demonstrate a gap exists
Capability gaps identified during the conduct of the CBA
o Assessment of the operational risk if the gap remains
o Identification of the cause of the gap
Results of the CBA documented in a CBA final report and, where appropriate, an
Initial Capabilities Document (ICD)
Identification of the measures (or other metrics) that demonstrate that a capability gap
exists from sources other than the CBA

2.5.1.2 DOTmLPF-P Analysis Contributions

Prior DOTmLPF-P analyses, conducted as part of the CBA or as separate studies, focus on
whether non-materiel approaches sufficiently mitigate any of the capability gaps by
recommending changes to one or more DOTmLPF-P areas. The small m refers to existing
materiel solutions, not the initiation of a new program of record.

According to the JCIDS Manual, the most common non-materiel approaches are:

Alternative doctrinal approaches and alternative CONOPS. Investigating alternative
CONOPS is a JCIDS requirement. Where applicable, alternatives should also consider
CONOPS involving allied/partner nation or interagency participation.
Policy Alternatives. When considering policy alternatives, the CBA must document
which policies are contributing to capability gaps and under which circumstances. A
policy change that allows new applications of existing capabilities or modifies force
posture to increase deterrence is always of interest and should be considered.
Organizational and personnel alternatives. This means examining ways in which
certain functions can be strengthened to eliminate gaps and point out mismatches
between force availability and force needs.

A Joint DCR is generated when the DOTmLPF-P analysis shows that the capability gap can be
sufficiently addressed by one of the three approaches below. In these situations, an ICD and
AoA are not required.

New non-materiel solution


Recommending changes to existing capabilities of the Joint force in one or more of the
eight DOTmLPF-P areas
Increased quantities of existing capability solutions

This DOTmLPF-P analysis should also identify any interdependencies between any potential
solutions and S&T and/or experimentation recommendations.

Refer to Chapter 4 of OAS's Pre-MDD Analysis Handbook, dated June 2010, for additional
guidance on the execution of this analysis.

2.5.1.3 Early Systems Engineering and Development Planning Contributions

The focus of Early Systems Engineering is not to engineer a system but to better understand the
systems engineering aspects of the solution space and the technical feasibility. The goal is to
determine the most viable, affordable solutions to be explored in post-MDD activities and
processes, such as the AoA or S&T activities. This process is used to investigate types or
categories of potential solutions (e.g., satellite, armed airframe, ground-launched weapon, cyber
solutions) vice specific systems (e.g., B-2 with weapons modifications).

Early Systems Engineering should also identify architectures appropriate to the capability areas
being explored. These architectures enable a better understanding of the complexity of each of
the materiel solutions. Since architectures integrate visions, requirements, and capabilities, they
help provide unique insights for the MDA about the discriminating differences between the
potential solutions. Some solutions may have multiple, complex interdependencies which must
be identified to the decision makers. Architectures can also provide insight into the logistics
and sustainment requirements of each of the solutions as well as the ease of improving them
(such as technology insertions) over their individual life cycles.

The final consideration for use of architectures is to illustrate how the capabilities provided by
existing and to-be architectures address human system integration issues. This information
will be critical to the MDD presentation and AoA planning efforts, especially:

Understanding the impacts of each solution on other parts of the architecture


Developing ROM costs across the architecture
Gaining insights into the affordability aspects of MDD

This will provide information to the decision makers about the external implications of the
alternatives. [Note: when considering how architectures will be utilized, the objective is not to
accomplish an architecture study at this time, but rather to use them to aid in the illustration of
complex materiel solutions (i.e., family-of-systems (FoS) and system-of-systems (SoS) solutions)
and interdependencies. Additionally, each solution should be examined to see how it impacts
the baseline architecture and, in some cases, may result in a future architecture study.]

In support of SAF/AQR's Early Systems Engineering initiative, Air Force Materiel Command
(AFMC) and Air Force Space Command (AFSPC) established DP with the overall objective of
ensuring the launch of high-confidence programs capable of delivering warfighting systems
with required capabilities on time and within budget. While it applies across the entire life cycle
of a given capability, simply stated, DP is the process by which the AoA solution space is
defined and CCTDs are developed prior to MDD. DP support is obtained by submitting a DP
effort request to either AFMC or AFSPC, as appropriate.

As stated in the SAF/AQ Early Systems Engineering Guidebook, the intent of early systems
engineering is to enhance the quality and fidelity of proposed future military system concepts
that may eventually be considered in AoAs. The primary means of meeting this intent is via
development of a CCTD for each conceptual materiel approach being considered to fill the

related capability gaps/shortfalls. The CCTD provides a mechanism for documenting and
communicating the data and information associated with the prospective materiel solution
analyzed during the Pre-MDD analysis. This enables the analysis to produce increasing levels
of detail regarding the materiel concepts under consideration. Chapter 3 of the SAF/AQ guide
describes how this analysis effort and identification of the tradespace is supported. CCTDs are
required at the Air Force Review Board (AFRB) that is conducted prior to the MDD. They are
included as an appendix to the AoA study plan and final report.

A Best Practice is to capture other information about the solution space in addition to that
found in the CCTD or DCR (for non-materiel solutions). Some examples of other information
to help define the solution space include:

Overarching assumptions (these are the assumptions that are specific to the problem and
apply to all potential solutions)
Overarching operational concept/employment concept (this is what is defined in the ICD
and refined as the overarching concepts independent of the individual solutions)
Overarching operational considerations (this is problem specific and applies to all
potential solutions equally)
Overall DOTmLPF-P implications (these are the implications that apply regardless of
solution)

2.5.1.4 MDD Contributions

MDD is the formal decision to conduct an AoA; it is the point at which the MDA accepts the approved AoA
study guidance and AoA study plan. Current requirements development and acquisition
policies identify the following recommended approaches, in order of preference:

1. Implementation of DOTmLPF-P changes which do not require development and
procurement of a new materiel capability solution.
2. Procurement or modification of commercially available products, services, and
technologies, from domestic or international sources, or the development of dual-use
technologies.
3. The additional production or modification of previously-developed U.S. and/or allied
military or interagency systems or equipment.
4. A cooperative development program with one or more allied nations.
5. A new, joint, DoD component or interagency development program.
6. A new DoD component-unique development program.

If sponsors select a less preferred approach from the list above (for example, the preferred
solution is #1 but the sponsoring organization selects #2), they must explain and provide
supporting evidence to justify the decision. It is critical to ensure that the range of alternatives
includes non-Air Force solutions and U.S. government solution options. Since the Joint Staff
now requires that all studies be identified in a central repository, the J8 study repository
(currently known as Knowledge Management/Decision Support (KM/DS)) is a great resource
for reviewing previously assessed solutions and other studies in the same mission area. It is also
critical to ensure that pre-MDD activities include sufficient time to identify and explore these
non-Air Force solutions.

2.5.2 Activities shaped by the AoA

The AoA is a primary input to:

Development of Technology Development Strategy (TDS)
Development of Capability Development Document (CDD)

The AoA provides information for the development of the TDS in the following manner:

Informs technology maturation activities such as the building and evaluation of
competitive prototypes and refinements of the user capability requirements leading up to
a preliminary design review
Informs development of a Requirements Correlation Table (RCT) which will identify the operational
performance parameters, thresholds, and objectives to be explored during the TD phase.
Aids in identifying which performance parameters require further refinement during the
TD phase. The RCT must be included in the AoA final report.
Informs assessments of the technical maturity of the proposed solution and when an
MDA should consider abbreviating or eliminating the TD phase based on these
assessments
Informs development of the TDS and RFPs for the TD phase following the MS A
decision where appropriate.

With respect to updates to a JCIDS document such as a CDD, the sponsor must review the
AoA to determine if it continues to be relevant. If a CDD update invalidates the previous
AoA, the sponsor will update or initiate a new AoA to support the CDD update. A CDD
may not be submitted for staffing and validation until the AoA is completed.

The AoA provides information for the development of the CDD in the following manner:

Provides sufficient information to define Key Performance Parameters (KPPs) and Key
System Attributes (KSAs) for multiple capability increments. A single CDD may be
validated to support the MS B decisions for all of the described increments. Therefore,
the CDD must clearly articulate the unique set of KPPs and KSAs for each increment
and identify any KPPs and KSAs that apply to all increments.
Provides a summary of the alternatives, performance criteria, assumptions,
recommendations, and conclusions.
Identifies where relevant training criteria and alternatives were evaluated. This
information provides the analytical foundation for establishing the Training KPP.
Training aspects should be part of the cost, schedule, and performance trade-offs
examined during the AoA.

3 Planning the Analytical Effort
This section describes AoA planning activities. Since AoAs are used to support the decisions
identified in Section 2.2, it is important to understand how to scope the AoA, utilize previous
efforts as a foundation for analysis, identify the stakeholders, and form the team. Additionally,
it is important to determine the level of effort required for the AoA. This section also describes
use of the enduring HPT membership discussed earlier in the handbook. Lastly, this section
addresses how to capture all planning details in an AoA study plan and the review/approval
processes required for the plan.

3.1 Scoping the Analysis


As identified in the JCIDS Manual, the study guidance and study plan should build on
appropriate prior analysis including that conducted as part of the JCIDS process. AoAs are
designed to provide decision quality information to inform investment decisions. As a result, it
is important to tailor the AoA appropriately to focus on the information required for those
decisions. Senior leaders are not looking to reinvent the wheel and repeat analysis that has
already been accomplished, but instead on identifying what additional analysis is needed before
the next milestone decision. It is important to understand what information the MDA needs to
make an informed decision.

Since planning for the AoA begins before there is any formal guidance or direction, it is critical
that scoping discussions occur with decision makers early in the planning stage. This
discussion assists in shaping the MDD Acquisition Decision Memorandum (ADM) and AoA
study guidance. This also helps ensure that the AoA effort is tailored to address only those
issues that the decision makers need for their next decision. Therefore, it is essential that the
Study Director have frequent interaction with the CAPE and MDA staff.

A Best Practice is to contact OAS as soon as there is discussion about an upcoming AoA.
OAS can facilitate and provide introductions, where necessary, to the appropriate CAPE,
MDA, and Air Staff functional representatives for the effort. This can help ensure that the AoA
is properly scoped and tailored to meet AF and DoD needs.

Many of the items that define the scope of the AoA will come from the CBA and/or Doctrine,
Organization, Training, materiel, Leadership, Personnel, Facilities, and Policy (DOTmLPF-P)
analysis that precede the AoA. The following are typically used to establish the scope of the
AoA:

Capability gaps and any identified prioritization
Mission areas and tasks
Operational concepts and environment
Threats and scenarios

Measures and standards
Approaches and alternative concepts, including the baseline
Maturity of the technologies
Operational risk
Timeframes
Ground rules, constraints, and assumptions
Science and Technology (S&T) activities and DOTmLPF-P Change Recommendations
(DCRs)

The following are examples of key overarching questions decision makers ask:

How well does each alternative close the capability gaps?
How does each alternative compare to the baseline (current capability)?
What are all the enabling capabilities (C3, ISR, HSI, logistics, etc.)?
What are the risks (technical, operational, integration, political, etc.)?
What are the life cycle cost estimates (LCCE) for each alternative?
What are the significant performance parameters?
What are the trade-offs between effectiveness, cost, risk, and schedule for each
alternative?

3.1.1 Using Previous Analysis as the Foundation

It is important to understand what corporate knowledge exists within the relevant stakeholder
organizations. Other potential sources for information include:
AF/A9
Institute for Defense Analyses (IDA)
RAND
Defense Technical Information Center (DTIC)
AF Knowledge Now/Intelink
J8 Study Repository (currently known as KM/DS)
CAPE
Army Training and Doctrine Command's (TRADOC) Army Experiment and Study
Information System (https://cac.arcicportal.army.mil/ext/aesis/aesis/default.aspx)
AT&L's Acquisition Information Repository (https://www.dodtechipedia.mil/AIR)

OAS can also assist in identifying relevant studies for the effort.

This research is focused on identifying the following:

Where there is extensive knowledge
Where there is little knowledge
Where there is no knowledge
The next step is to determine the applicability of the previous analysis to the effort. Previous
analyses are useful for identifying the baseline and other alternatives, and refining or developing
scenarios and measures. Just because the title seems to fit does not mean that it is applicable.
Contact the office associated with the previous analyses for assistance in determining
applicability. It is important to understand the objectives of the previous analyses and the
conditions under which it was executed to determine its relevance to the current study. By
reviewing the previous analyses, the study team will be better prepared to conduct the AoA.

3.1.1.1 Using Scenarios from Previous Analyses

AoA alternatives must be studied in realistic operational settings to provide reasonable comparisons
of their relative performances. The AoA does this by adopting or developing one or more
appropriate military scenarios. Scenarios define operational locations, the enemy order of battle,
and the corresponding enemy strategy and tactics ("the threat"). Scenarios are chosen with
consideration of AoA mission need, constraints and assumptions, and the physical environments
expected. The scenarios selected from previous analyses should be considered first when
determining which scenarios should be used in the AoA.

Threats and scenarios determine the nature of the physical environment in which the alternatives
operate. However, there is often a need to operate in a range of physical environments and this can
drive the selection of scenarios. The environment reflects both man-made and natural conditions.
Natural conditions include weather, climate, terrain, vegetation, geology, etc. Depending on the
alternative, these conditions can impact the target selection process, the aircraft and munitions
selection process, aircraft sortie rate, aircraft survivability, navigation and communications
capabilities, logistics, etc. Man-made conditions such as jamming and chemical/biological warfare,
have their own impacts. Chemical or biological warfare, for example, may impact the working
environment for operational crews and logistics support personnel. This can impact the results of the
war or how it is executed. Such real or potential threats may in turn affect aircraft basing decisions
and sortie rates.

The threat is most often developed and defined by the AoA study team working in conjunction with
the intelligence community. Engagement of the intelligence community should begin early in the
AoA process. MAJCOM intelligence organizations, DIA, and other intelligence organizations can
provide detailed threat and target information. If System Threat Assessment Reports (STARs or
STAs) are available, they could serve as the basis for the AoA threat description.

The Defense Planning Guidance/Illustrative Planning Scenario (DPG/IPS) provides broad context
for a limited number of scenarios and should be used as a starting point for scenario development.
The DPG contains a strategic framework and general description of potential military operations in
several areas of the world and for various contingencies. Variance from the DPG/IPS (called
scenario excursions) must be identified, explained, and approved by DIA after sponsoring command
A2 review.

The Multi-Service Force Deployment (MSFD) or other digital force projections are resources
providing details on enemy, friendly, and non-aligned forces in these areas. In Joint AoAs, Army,
Navy, and Marine forces must be considered, as well as the Air Force. The order of battle and roles
of allied and non-aligned forces must also be considered. Environmental factors that impact
operations (e.g., climate, atmospherics, vegetation and terrain) are important as well.

Typical threat elements addressed in an AoA are:

The enemy order of battle
Limitations on threat effectiveness, such as logistics, command and control,
operational capabilities, strategy or tactics, and technology
Countermeasures and changes in enemy strategy and tactics in response to the new
system's capabilities (i.e., reactive threats)
A range of threats to account for uncertainties in the estimates
A target set representing a cross section of all possible targets
Threat laydown showing potential threat systems and their location

In summary, scenarios must portray realistic operational environments. A range of scenarios may be
needed to investigate the full potential of the alternatives and their sensitivities to variations in
constraints and assumptions, particularly with regard to threats.

Refer to Section 3.1 of the OAS Pre-MDD Analysis Handbook for additional guidance on
scenario selection.

3.1.2 Identifying Ground Rules, Constraints, and Assumptions (GRC&A)

GRC&As help scope the AoA and must be carefully documented and coordinated with senior
decision makers. Some GRC&As will be general in nature and encompass the entire study,
while other GRC&As will be more specific and cover only a portion of the analysis. Many of
these assumptions will be described in the AoA study guidance provided to the team prior to
creation of the study plan.

In this context, the specific definitions are:

Ground rules - broadly stated procedures that govern the general process, conduct, and
scope of the study. An example is: the working group leads will be members of the risk
review board.
Constraints - imposed limitations that can be physical or programmatic. Human
physical or cognitive limitations or a specific operating frequency range are
examples of physical constraints. Specifying the latest acceptable initial
operational capability (IOC) date illustrates a programmatic constraint.
Assumptions - conditions that apply to the analysis. Examples include specific
manpower levels, inclusion of a target type that will proliferate in the future
thus forcing consideration of a specific threat system, or that certain
infrastructure or architectures will be provided by another program

GRC&A arise from many sources. IOC time constraints, for example, may be imposed by an
estimated fielding date of a new threat or by the need to replace an aging system. Net-centricity
or interoperability with the Global Information Grid (GIG), for example, may be dictated in the
ADM. Regardless of the source, each GRC&A must be explicitly identified, checked for
consistency, fully documented, and then accounted for in the scope of the AoA. Later they will
need to be accounted for in the analytical methodologies. The source of and rationale for the
GRC&A should also be noted, if known.
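Some teams find it useful to keep the GRC&A in a simple structured log rather than scattered prose. The Python sketch below is one minimal, illustrative way to do this; the field names and example entries are hypothetical and do not represent a prescribed format.

from dataclasses import dataclass

@dataclass
class GRCA:
    """One ground rule, constraint, or assumption tracked by the study team."""
    kind: str        # "ground rule", "constraint", or "assumption"
    statement: str   # the GRC&A itself
    source: str      # where it came from (e.g., ADM, study guidance), if known
    rationale: str   # why it applies to this study

grca_log = [
    GRCA("constraint", "IOC no later than FY2030", "notional ADM direction",
         "replaces an aging system"),
    GRCA("assumption", "supporting infrastructure provided by another program",
         "study team", "stated in the illustrative CCTD"),
]

# Print the log so each GRC&A is explicitly identified and documented for review.
for item in grca_log:
    print(f"[{item.kind}] {item.statement} (source: {item.source}; rationale: {item.rationale})")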

The GRC&A are subject to scrutiny, particularly if not reviewed with the MDA, AF/A5R,
CAPE and critical stakeholders early in the process. It is critical that the team thoroughly
document each GRC&A. The study plan will contain an initial set of GRC&A, which may change
as the study progresses. Any changes to the GRC&A should be vetted with stakeholders and
decision makers and documented in the final report.

3.2 Defining the Alternative Concepts

The AT&L MDD DAB template requires alternatives be identified, fully understood, and
presented at MDD. In addition, the AoA study guidance will identify a minimum set of
alternatives that must be included in the AoA. The Air Force uses CCTD documents to
describe the technical and operational aspects of each alternative. The CCTDs should be
created during Development Planning (DP) and Early System Engineering. The AoA study
team will refine the CCTDs to ensure they have sufficient information to support the
effectiveness, cost, and risk analyses. SAF/AQR is responsible for approving the CCTDs prior
to the MDD.

At a minimum, the AoA must include the following alternatives:

The baseline, which represents the existing, currently programmed system funded and
operated according to current plans
Alternatives based on potential, yet unfunded improvements to the baseline, generally
referred to as the baseline+ or modified baseline. [Note: it is not always best to include
all potential improvements to the baseline in one alternative; consider having multiple
alternatives in this category.]
Alternatives identified in the AoA study guidance (for example, COTS/GOTS, allied
systems, etc.)

3.3 Identifying Stakeholder Community


Stakeholder is defined as any agency, Service, or organization with a vested interest (a stake) in
the outcome of the pre-acquisition analyses. A stakeholder may contribute directly or indirectly
to the pre-acquisition activities and is usually affected by decisions made as a result of these
activities. Asking the following questions can help identify members of the stakeholder
community:

Who are the end-users (e.g., COCOMs, warfighters, etc.) of the capability?
What enablers (intelligence, HSI, logistics, communications, etc.) have
interdependencies within the solution space being analyzed in the AoA?
How do the other Services, DoD agencies, and other government agencies fit into the
mission area being explored in the AoA?

The stakeholder community can assist the AoA study team in identifying other solutions available
from other Services or agencies (within or outside DoD). Additionally, allied and partner
nations may offer possible solutions.

OAS can assist in identifying the stakeholder community.

3.4 Determining Level of Effort


The level of effort (LoE) for the analysis will depend on various factors such as the study
questions, complexity of the problem, time constraints, manpower and resource constraints, and
type of analysis methodology. By controlling the scope of the study, the LoE is more likely to
remain manageable over the course of the analysis. All study scoping decisions should be
coordinated with stakeholders to ensure that expectations are managed. This ensures that the
LoE and resources required are understood for each scoping decision. The results of these
discussions should be documented so that everyone understands what is within scope and what
is not.

Answers to the following questions will aid in determining the LoE:

How much analysis has been accomplished to date? (See Section 3.1.1)
What remaining information needs to be learned from the AoA? (See Section 3.1)
Who in the stakeholder community is available to participate in the effort?
Are the right experts available and can they participate?
How much government expertise is available? Contractor support?
What data and tools are needed to execute the AoA?
How much time and funding is available to execute the AoA?
What level of analytic rigor is required?
Where and what amount of analytic risk is acceptable to the decision makers?

There is a relationship between the level of effort and study risks. When defining the level of
effort, it is important to identify areas of risk associated with the time and resources allotted to
conduct the study. The answers to the above questions will aid in identifying the LoE and study risks.
There are other risks associated with uncertainties inherent in the study process such as the
effectiveness and cost analysis methodologies, funding and resources limitations, and
insufficient time to conduct the study. For example, a study with limited time and resources
may reach different conclusions compared to a similar study with less constrained time and
resources. In this example, the less constrained study could utilize more empirically based
research methods that enhance confidence in the study findings. It is important that the team
recognizes and documents these uncertainties, identifies the potential impacts, and provides this
information to the decision makers.
Once the LoE and study risks are identified, the team should discuss the implications with the
senior decision makers. This discussion should include courses of action which identify
possible tradeoffs to mitigate the risk (e.g., providing more resources and/or reducing scope to
meet an aggressive study schedule). This discussion will ensure the LoE and risks are
acceptable to senior decision makers. These agreed upon risk areas should be included in
presentations to the AFRRG and AFROC.

OAS, AF/A5R and SAF/AQ can aid in determining the appropriate LoE and study risks.

3.4.1 Joint Staffing Designator (JSD) and Acquisition Category (ACAT) Determination

The JSD and ACAT level will also influence the LoE required. An effort that is expected to be
designated as JROC Interest will have the same level of scrutiny as an ACAT I program effort
or Major Defense Acquisition Program (MDAP). The following types of efforts will have OSD
oversight and CAPE-issued guidance: ACAT ID, ACAT IAM, JROC Interest, and those labeled as
special interest.

OSD and JROC determine the classification using the following criteria:

DoDI 5000.02 specifies: The USD(AT&L) shall designate programs as ACAT ID or
IAM when the program has special interest based on one or more of the following
factors: technological complexity; Congressional interest; a large commitment of
resources; the program is critical to achievement of a capability or set of capabilities;
or a program is a joint program. Exhibiting one or more of these characteristics,
however, shall not automatically lead to an ACAT ID or IAM designation.
Capabilities in Battlespace Awareness (BA), Command & Control (C2), Logistics and
Net-Centric are initially considered JROC Interest because the capabilities are enablers
that cut across Service boundaries

If not a Special Interest, ACAT I, or JROC Interest, capabilities in Force Application
(FA) and Protection are initially considered Independent or Joint Information
If not Special Interest, capabilities in Force Support, Building Partnerships, and
Corporate Management & Support are initially considered Independent
Revised definition of MDAP based on implementation of WSARA (DTM 09-027):
o An MDAP is a DoD acquisition program that is not a highly sensitive classified
program and:
1. That is designated by the USD(AT&L) as a MDAP; or
2. That is estimated to require an eventual total expenditure for RDT&E,
INCLUDING ALL PLANNED INCREMENTS, of more than $365 million
(based on FY 2000 constant dollars) or an eventual total expenditure for
procurement, INCLUDING ALL PLANNED INCREMENTS, of more than
$2.19 billion (based on FY 2000 constant dollars).
o This revised definition may result in a change in Milestone Decision Authority
(MDA).

3.4.2 Contract Support


Technical support contractors often conduct substantial parts of the analysis. It is important to
understand the study objectives before making contract support arrangements. This will
increase the likelihood that the chosen contractor is well suited to perform the required tasks.
Know the needs first, and then contract. It is important to remember that the responsibility for
the AoA rests with the lead command and this responsibility should not be delegated to the
contractor. Questions to answer to determine contractor support requirements:

Is there adequate expertise available within the government?
Are sources of funding available?
For which study areas do I need contract support?
Which contractors are qualified?
What are the available contract vehicles?
How will the contract be administered?

Experienced and qualified contractors are often obtained through the Air Force product centers
and program offices. For most product centers, access to technical support contractors is
available through scientific, engineering, technical, and analytical (SETA) contracts. Also,
Federally Funded Research and Development Centers (FFRDC) are available to some product
centers. Use of an existing contract for the best-qualified contractor can reduce the AoA
initiation and development time considerably.

3.5 Establishing the Study Team

The Study Director leads the study team in conducting the AoA. The Study Director is
normally appointed by the sponsor (most often the operational user) designated as the lead for
the AoA.

Management and integration of the information/products from each working group is
undertaken by a core team of government representatives usually comprised of the Study
Director, Deputy Study Director, lead and deputy lead from each working group, and the OAS
representative. The enduring HPT membership should serve as the foundation of this core team
membership to maintain continuity of the effort. Ideally, this team also includes members from
previous applicable studies. Finally, the study team should include appropriate members of the
stakeholder community (sponsoring command/ organization, other Air Force commands and
agencies, Army, Navy and Marines, DoD, Joint Staff, and civilian government agencies).

OAS and AF/A5R facilitate the AoA study guidance and study plan HPTs. OAS provides an
advisor to the Study Director. The advisor assists in training, planning, executing, and
facilitating the accomplishment of the AoA. The level of assistance from OAS is determined by
the scope of the AoA and where the AoA fits in the overall Air Force prioritization. OAS is
focused on ensuring quality, consistency, and value in AoAs.

A Best Practice is to organize in a way that meets the study needs. The structure of the AoA
study team depends upon the scope of the AoA and the level of effort required. Not all study
teams are identical, but are instead tailored in size and skill sets to meet the objectives of the
AoA. Team membership may include operators, logisticians, intelligence analysts, cost
estimators, and other specialists. Depending on the scope of the AoA, the team is usually
organized along functional lines to conduct the effectiveness, risk, and cost analyses.

If the AoA is only focused on conducting sensitivity analysis of the assumptions from previous
analysis and updating the cost estimates, the AoA study team will consist primarily of those
members needed to conduct those specific tasks. In other words, each study team structure is
dependent upon the questions the effort must answer and the specific scope of the AoA. Small
AoA teams with dedicated members are often better able to react to the timeline demands of the
AoA and may be more productive.

Early and proper organization is the key to a successful study. Ideally, the working group leads
and their deputies should be subject matter experts able to lead people, manage multiple
situations, and facilitate their groups. It can be difficult to find individuals with all of these
abilities. If unable to find a working group leader (or deputy) who can facilitate a group, OAS
can assist.

After the core team members have been identified, OAS can provide training to the team. The
training will be tailored to the specific analytic effort and is best accomplished prior to the AoA
study plan HPT.

Figure 3-1 illustrates an example study team structure and various oversight and support
organizations. In situations when stakeholder organizations have conflicting interests, consider
selecting working group co-leads from those organizations to facilitate their buy-in. Ad hoc
working groups are formed to accomplish specific tasks in support of the other working groups.
For example, the Alternative Comparison working group may be an ad hoc group because it is
formed from members of other working groups to synthesize all of the analysis results to
compare the alternatives (this is described further in Chapter 8 of this handbook).

Figure 3-1: Example Study Team Structure

Once the team is established, the working groups meet separately to address their fundamental
issues. They also meet with other working groups and/or the entire study team to exchange
information. Frequent and open exchanges of ideas and data are essential to a successful AoA.
When the team is geographically dispersed, maintaining frequent and open communication is
usually more challenging. Documenting questions, answers, and decisions made in the various
work groups facilitates clear and effective communication. This can be done through taking
and distributing minutes of study group meetings. Frequent interaction via telephone and e-mail
at all levels should also take place. If possible, keep the study team intact throughout the AoA.
A changing membership adversely impacts continuity and may create delays as new personnel
are integrated into the effort.

3.6 Study Plan Preparation and Review
An approved study plan is required prior to convening the Materiel Development Decision
(MDD). The study plan should illustrate with sufficient detail how the team will execute the
AoA to ensure the critical areas identified in the AoA study guidance are addressed. Appendix
C of this handbook contains the template for the study plan.

According to Air Force policy, prior to initiating a study plan, the sponsor will present the
information associated with the entry criteria identified in Section 1.2 to the AFRRG for
approval to proceed.

An HPT is required for development of the study plan. AF/A5R-P must review and approve the
membership prior to convening the HPT. The membership of the study guidance HPT should
be the foundation for this HPT and the core membership of the study team. The study plan HPT
membership can be altered at the discretion of the AF/A5R-P.

The AF/A5R process for review and staffing of the study plan is:

After the study plan has been prepared and coordinated with AoA stakeholders,
sponsors will provide the AoA study plan and AFROC briefing to the AF/A5R
Functional Division Chief and AFMC/OAS for assessment, simultaneously. The AF/A5R
Functional Division Chief will forward the AoA study plan and AFROC briefing to
AF/A5R-P with an AFMC/OAS assessment, simultaneously. AF/A5R-P will review
(allow for five working days) the study plan and determine if the AoA study plan is ready
to be submitted to the AFRRG for approval. Once the AFRRG concurs with the AoA
study plan, the study plan and AFROC briefing will be submitted to the AFROC for
validation.

A widespread review of the plan is useful in improving the plan and ensuring stakeholder
support for its execution. The review should start within the originating command and key team
member organizations. The external review should be solicited from a variety of agencies,
including OAS, appropriate AF/A5R functional divisions, AFMC/A3, other Services, and
CAPE (for ACAT I and JROC Interest programs).

According to AFI 10-601, the study plan should include the following to ensure approval:

Identification of the specific gaps that are being addressed in the AoA
Definition of the baseline (existing and planned) capability
Identification of the stakeholders and their roles/responsibilities in the AoA
Identification of the key questions identified in the study guidance

Identification of the alternatives identified by the study guidance. This includes
discussion about the implications and/or dependencies identified for each alternative
and how those dependencies will be factored into the analysis.
Description of the methodologies to be utilized, which must include the following:
o Measures of effectiveness, performance, and suitability
o Decomposition of the gaps and key questions
o Traceability to measures used to establish minimum values in the ICD (from the
CBA)
o Cost work breakdown structure
o Methodology to determine alternatives' ability to mitigate gaps
o Methodology to explore tradespace and description of what sensitivity analysis
will be done to determine key performance parameters and threshold and
objective values for the RCT
o Methodology to conduct the cost/capability tradeoff analysis
o Methodology for factoring in the dependencies identified for each alternative
o Scenarios to represent the operational environment

An OAS assessment of the study plan and its associated briefing is required prior to submission
to AF/A5R-P. Appendix E contains the study plan assessment criteria used by OAS in their
independent assessment of a study plan and associated briefing. This assessment is presented in
bullet fashion, highlighting the risk areas associated with the credibility and defensibility of the analysis
results. OAS will provide an initial assessment and get-well plan after the initial review to
determine readiness for submission to AF/A5R.

4 Performing the Effectiveness Analysis
Effectiveness analysis is normally the most complex element of the AoA and consumes a
significant amount of AoA resources. The effectiveness analysis working group (EAWG) is
responsible for accomplishing the effectiveness analysis tasks. The goal of the effectiveness
analysis is to determine the military worth of the alternatives in performing Mission Tasks
(MTs). The MTs are typically derived from the capabilities identified in the Initial Capabilities
Document (ICD). A Capability Development Document (CDD), Capability Production
Document (CPD), or Concept Characterization Technical Description (CCTD) may exist for the
current baseline, and can be useful in determining MTs and measures for the EA effort.
However, while there may be existing requirements documents, the team should use whatever
documents provide the best, most current information. Avoid using information from sources
that are superseded by or do not accurately reflect the current capabilities or required mission
tasks. The ability to satisfy the MTs is determined from estimates of alternatives' performance
with respect to measures of effectiveness (MOEs), measures of performance (MOPs), and
measures of suitability (MOSs). Additionally, AoAs and other supporting analyses can provide
the analytical foundation for determining the appropriate thresholds and objectives for system
attributes and aid in determining which of these attributes should be KPPs or KSAs.

4.1 Effectiveness Methodology


The effectiveness methodology is the sum of the processes used to conduct the EA even if some
pieces are done by other parts of the larger AoA team. The development of the effectiveness
methodology is almost always iterative: a methodology will be suggested, evaluated against the
resources and data available to support it, and then modified to correspond to what is both
possible and adequate. As the AoA progresses, this development sequence may be repeated as
more is understood about the nature of the alternatives, the models or analysis tools, and what is
necessary to support the AoA decision. Analysis continues throughout the conduct of the AoA
and, based on what the team learns as it progresses, methodologies may be refined. Figure 4-1,
General Approach for Effectiveness Analysis, shows the flow of analysis tasks discussed in this chapter.

Figure 4-1: General Approach for Effectiveness Analysis

OAS does not recommend the use of the Analytical Hierarchy Process (AHP) or similar
methods which implement weighting schemes as part of AoA effectiveness methodology.
Typically, employing AHP/weighting adds complexities to the study results which are difficult
to understand and difficult to explain to decision makers. OAS suggests keeping the
effectiveness methodology as simple as possible in order to evaluate and present accurate,
informative results.

Measure weighting schemes can oversimplify the results and potentially mask important
information. Table 4-1 below illustrates how measure weighting is dependent on the group
determining the weighting and may not be representative of what senior leaders, stakeholders,
or decision makers would consider important.

Table 4-1: Weighting Measures
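As a complementary illustration of the point behind Table 4-1, the short Python sketch below uses hypothetical scores and weights to show how the "preferred" alternative can flip solely because of the weighting group's choices, even though the measure-level results never change.

# Hypothetical 0-to-1 scores for two alternatives on two measures (illustrative only).
scores = {
    "Alt A": {"survivability": 0.9, "range": 0.4},
    "Alt B": {"survivability": 0.5, "range": 0.8},
}

def weighted_score(alt, weights):
    """Simple weighted sum of the kind produced by AHP-style roll-ups."""
    return sum(weights[m] * value for m, value in scores[alt].items())

for weights in ({"survivability": 0.7, "range": 0.3},
                {"survivability": 0.3, "range": 0.7}):
    ranking = sorted(scores, key=lambda alt: weighted_score(alt, weights), reverse=True)
    print(weights, "->", ranking)
# The top-ranked alternative flips with the weights even though the underlying
# performance data never change, which is how a roll-up can mask the measure-level
# results decision makers actually need to see.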

4.2 Effectiveness Analysis Methodology


Discussion of the EA methodology must begin very early in the process, even before the AoA
study officially begins. In fact, since the study team is required to present its study plan along
with the guidance at MDD, it is very important to provide a well-developed and comprehensive
plan at MDD. The plan must at a minimum identify the actual alternatives to be studied, the
relevant mission tasks, gaps, and measures, and include specific information regarding the
analysis tools and methodologies to be used to conduct the analysis. There should be clear logic
linking the tasks, gaps, measures, and methodologies.
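One way to make that linkage explicit and easy to review is to record it as simple traceability data. The minimal Python sketch below uses entirely hypothetical gap, mission task, measure, and tool names to show the kind of mapping the study plan should be able to produce.

# Hypothetical traceability from capability gaps to mission tasks, measures, and methods.
traceability = {
    "Gap 1: limited standoff detection range": {
        "mission_task": "Detect targets of interest at operationally useful ranges",
        "measures": ["MOE 1.1: probability of detection",
                     "MOP 1.1.1: detection range (nm)"],
        "methodology": "engagement-level simulation (notional tool)",
    },
    "Gap 2: untimely dissemination of track data": {
        "mission_task": "Disseminate track data to users",
        "measures": ["MOE 2.1: time to deliver message (s)"],
        "methodology": "communications network model (notional tool)",
    },
}

# A quick completeness check: every gap should trace to at least one measure and a methodology.
for gap, row in traceability.items():
    assert row["measures"] and row["methodology"], f"incomplete traceability for {gap}"
    print(gap, "->", row["mission_task"])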

The EA methodology is designed to compare the effectiveness of the alternatives based on
military and operational worth. It encompasses and is influenced by the MTs, measures
(MOEs, MOPs, MOSs), alternatives, threats, scenarios, operations concept, prior analysis, study
schedule, and available analysis resources. The methodology must be systematic and logical. It
must be executable and repeatable, and it must not be biased for or against any alternative.

It is important that the team determine the appropriate level of detail required in the analysis.
Because the methodology depends on many factors, it can approach its final form only after the
above factors are defined. The identification and selection of suitable analysis tools and input
data sources must await development of the MTs, measures, selection of the alternatives, and
determination of analysis level of detail. It is important to note though that, before measures
can be developed, there must be agreement among the decision makers and stakeholders
regarding which capability gaps to address first, followed by agreement on which are the
appropriate mission tasks associated with the capability gaps. Finally, once the appropriate
level of detail is determined and suitable analysis tools are identified, the team must be sure to
secure the buy-in of the senior decision makers.

4.2.1 Terms and Definitions

While there are certainly several other definitions in use by many different organizations, the
following terms and definitions are those used by OAS to describe parameters associated with
capabilities, mission tasks, and measures.

Capability - the ability to achieve a desired effect under specified standards and
conditions through combinations of means and ways across the DOTmLPF-P to perform
a set of tasks to execute a specified course of action. (JCIDS Manual)
Mission Task - tasks a system will be expected to perform; the effectiveness of system
alternatives is measured in terms of the degree to which the tasks would be attained.
Attribute - a quality or feature of something. Attributes of mission tasks (e.g.,
survivability, persistence, availability, accuracy, etc.) form the basis for identifying and
drafting measures.
Measure - a device designed to convey information about an entity being
addressed. It is the dimensions, capacity, or amount of an attribute an entity possesses.
A measure is used to provide the basis for comparison or for describing varying levels of
an attribute.
Metric - a unit of measure that coincides with a specific method, procedure, or analysis
(e.g., function or algorithm). Examples include: mean, median, mode, percentage, and
percentile.
Criteria - the acceptable levels or standards of performance for a metric. It is often
expressed as a minimum acceptable level of performance (threshold) and desired
acceptable level of performance (objective).
Data - an individual measurement used to compute the metric for a measure.
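To make these terms concrete, the following Python sketch strings them together for a single hypothetical measure (time to deliver a message); the attribute, data values, threshold, and objective shown are illustrative only.

from dataclasses import dataclass, field
from statistics import median

@dataclass
class MeasureRecord:
    """Illustrative container tying together the terms defined above."""
    attribute: str        # quality of interest, e.g., timeliness
    measure: str          # what is measured
    metric: str           # how the collected data are summarized
    threshold: float      # criteria: minimum acceptable level
    objective: float      # criteria: desired level
    data: list = field(default_factory=list)  # individual measurements

msg_timeliness = MeasureRecord(
    attribute="Timeliness",
    measure="Time to deliver message (seconds)",
    metric="median",
    threshold=10.0,  # hypothetical criteria values
    objective=5.0,
    data=[4.2, 6.1, 7.8, 5.5, 9.0],
)

value = median(msg_timeliness.data)
print(f"{msg_timeliness.measure}: {value:.1f} s "
      f"(meets threshold: {value <= msg_timeliness.threshold}, "
      f"meets objective: {value <= msg_timeliness.objective})")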

4.2.2 Mission Tasks (MTs)

Because the goal of the AoA is to identify the most promising solution(s), MTs must not be
stated in solution-specific language. Each MT will have at least one measure supporting it. In
general, measures should not call for optimizing aspects of a task or effect, because this often
has unintended impacts on cost or other aspects of the alternatives performance. For example,
one solution to minimizing aircraft attrition could be not flying missions at all; however, this
solution would hardly be conducive to placing targets at risk. Similarly, maximizing targets
destroyed may result in unacceptable attrition. There may be other cases, however, where
optimization is desirable, such as ensuring maximum personnel survivability. Regardless,
measures must be grounded in requirements documents or through decision maker questions
contained in the guidance.

While the alternatives' performance will be compared to each other, the team should resist rank
ordering them or making recommendations based on a rank order. Remember, the alternatives
should be evaluated on their capability to accomplish mission tasks and meet established
requirements. It is possible that all the alternatives might not meet some or all requirements.
Conversely, while all the alternatives might meet all requirements, the highest performer might
also be the most costly and/or most risky. The lowest performer that still meets all the
requirements might be the preferred solution given its associated cost and risk.

4.2.3 Attributes

Once the MTs are well defined and understood, the next step is to identify the necessary
attributes of successful mission tasks. An attribute is essentially a property or characteristic of
an entity - some desired characteristic of the entity. An entity may have many attributes, not
all of which are of interest. Attributes should be problem specific and should be used insofar
as they enlighten decision makers, answer key questions, and respond to guidance. The key
should be to keep it logical, identify the desired attributes first, and then craft the measures to
address them.

The January 2012 JCIDS manual briefly describes attributes and provides several examples,
although this list is neither exhaustive nor directive. According to the manual:

The Capabilities Based Assessment (CBA) produces a set of tasks and measures used
to assess the programmed capabilities of the force. These measures should be based on
the list of capability attributes outlined in Appendix A to Enclosure A. The Enclosure
provides examples of appropriate attributes which should be used where applicable,
although other attributes may be identified and used when those in Appendix A to this
Enclosure are not appropriate.

Additionally, an excerpt from the Air Force Operational Test and Evaluation Center (AFOTEC)
Measures Primer, May 2007 identifies several characteristics of attributes that are useful to
study teams when developing measures. AFOTEC defines an attribute as:

A property or characteristic of an entity that can be distinguished quantitatively or
qualitatively by human or automated means. An entity may have many attributes, only
some of which may be of interest for the information needs. Some of these attributes or
characteristics might likely be found in ICDs or other requirements documents. A given
attribute may be incorporated in multiple measurement constructs supporting different
information needs. Measurement is the process of assigning numbers to the attributes of
an entity in such a way that relationships of the numbers reflect relationships of the
attribute being measured.

A measure specifically addresses one or more attributes. Further, an attribute may have more
than one measure associated with it.

Since the AoA should trace its MTs back to the Joint Capability Areas (JCAs), it is useful to
link the study measures back using those attributes and their associated JCAs found in
Appendix A, Enclosure A of the JCIDS Manual. [Note: these attributes are not
comprehensive and not all JCAs are yet represented in the manual; however, these examples
illustrate a variety of attributes that a team may identify]. In general, teams should not feel they
are tied to any or all of these. Other attributes, not included in these examples, may be
appropriate for certain mission tasks. Nevertheless, the team should, at a minimum, link its
mission tasks back to the applicable JCAs as this linkage is required in the to-be-developed
CDD which includes the AoA-developed Requirements Correlation Table (RCT) with identified
Key Performance Parameters (KPPs) and Key System Attributes (KSAs). Note that these JCAs
may not apply to non-DoD mission tasks. As identified in the January 2012 JCIDS manual, the
attributes for four of the JCAs are provided below.

Table 4-2: JCIDS JCAs

4.2.4 Measures

Measures are a central element when conducting an AoA. Without them, there is no way to
determine the effectiveness and suitability of an alternative and its ability to close gaps either
partially or completely. Properly formed and explicitly stated, measures will:

Specify what to measure (what data to collect, e.g., time to deliver message)
Determine the type of data to collect (e.g., transmit start and stop times)
Identify the source of the data (e.g., human observation)
Establish personnel and equipment required to perform data collection
Identify how the data can be analyzed and interpreted
Provide the basis for the assessment and conclusions drawn from the assessment

There is no universal definition for a measure within the analytic and test communities. Each
organization, developer, as well as academia and industry, defines the concept of a measure
slightly differently. While there is no universal definition, there are certain tenets that apply to all
measures. Measures are not requirements although they are developed from requirements.
Measures are typically not conditions such as altitude, temperature, or terrain but they will be
measured under various conditions. In some situations, however, certain conditions may be
measures of interest. For instance, altitude may be something a team might want to measure if it
is critical to platform survivability. Finally, measures are not criteria although they will be
evaluated against established criteria. Remember, measures should be framed by the tasks,
conditions, and standards (criteria).

Results from measures not only make it possible to compare alternatives, they also can be used
to investigate performance sensitivities to variations of key assumptions and measure values.
Such analyses help define input to follow-on requirements and acquisition documents such as
the CDD, CPD, and TDS.

There are a variety of terms used to describe the value of a capability to the operator/user and
measures should be stated in terms of their capability to provide this value. Frequently used
terms include military worth, military utility, operational utility and operational significance.
Success can be measured relative to the immediate goals of the system (attack, communicate,
detect, etc.) or relative to high-level goals related to "winning the war." However, in many
cases, this determination is much more difficult and attributing winning the war to the
performance of one particular system may not be possible. Nevertheless, some examples of
measures demonstrating military worth are:

Reduction in fratricide
Loss/exchange ratio
Targets held at risk
Targets defeated
Level of collateral damage
Attrition rate
Quantity (and types) of resources consumed
Number of operating locations needed

Measures may come from a variety of sources. For some Air Force AoAs, the operational
utility may be expressed in terms of the Air Force's end customer, which may be other
departments and organizations such as U.S. Army, DHS, DOS, etc. The team should consider
potential future studies and testing and attempt to craft measures that link to and are relevant for
these events.

4.2.5 Types of Measures

There are several different types of measures:

Measures of Effectiveness (MOEs)
Measures of Suitability (MOSs)
Measures of Performance (MOPs)

4.2.5.1 Measures of Effectiveness (MOEs)

Measures associated with attributes of operational effectiveness are referred to as MOEs.

Operational Effectiveness: The overall degree of mission accomplishment of a system
when used by representative personnel in the environment planned or expected for
operational employment of the system considering organization, doctrine, tactics,
survivability, vulnerability, and threat.
Measure of Effectiveness: A measure of operational success that must be closely related
to the objective of the mission or operation being evaluated.

MOEs are a qualitative or quantitative measure of an alternative's performance or characteristic
that indicates the degree to which it performs the task or meets a requirement under specified
conditions. They are a measure of operational success that must be closely related to the
objective of the mission or operation being evaluated. There will be at least one MOE to support
each MT. Each alternative is evaluated against each MOE criterion (requirement), and the results
are used to differentiate performance and capability among the alternatives. MOEs should be
focused on operational outcomes and closing the operational gaps rather than specific technical
performance parameters.

MOEs are usually developed by the study team. If possible, MOEs should be chosen to provide
suitable assessment criteria for use during later developmental and operational testing. The
team should look to the CBA, earlier analytic activities, requirements documents, and the
testing community to help identify these criteria. This linking of the AoA to testing is valuable

to the test community and the decision maker. Involvement of the testing community is
extremely helpful when developing testable measures.

MOEs should be reviewed by principal stakeholders during development of the AoA study plan.
Suitable selection of MOEs helps later independent review and evaluation of the AoA study
plan and results.

MOEs should be as independent of the alternatives as possible. The measures selected should
not bias the alternatives in some way and all alternatives should be evaluated using all MOEs.
Additionally, the team should be cautious of measures that are strongly correlated with one
another to avoid overemphasizing particular aspects of the alternatives. In these cases, the team
must be cognizant of the relationship among the measures to clearly understand the capabilities
and limitations of the alternatives.

Finally, MOEs should normally represent raw quantities like numbers of something or
frequencies of occurrence. Attempts to disguise these quantities through a mathematical
transformation (for example, through normalization), no matter how well meaning, may reduce
the information content and might be regarded as tampering with the data. Although ratios are
typically used for presenting information such as attrition rates and loss/exchange ratios, one
should still use caution as a ratio can also essentially hide both quantities. This can be
particularly misleading when sample sizes are small. It is generally better to identify the
proportion (e.g. 4 of 5).
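A small numerical illustration of this caution is shown below; the Python snippet uses hypothetical counts to show how a ratio alone hides sample size, while reporting the proportion preserves it.

# Two hypothetical result sets with the same success ratio but very different sample sizes.
results = [("Alt A", 4, 5), ("Alt B", 400, 500)]

for name, successes, trials in results:
    ratio = successes / trials
    # Reporting only the ratio (0.80 in both cases) hides how much evidence supports it;
    # reporting the proportion keeps the sample size visible to the decision maker.
    print(f"{name}: ratio = {ratio:.2f}, proportion = {successes} of {trials}")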

4.2.5.2 Measures of Suitability (MOSs)


Measures associated with attributes of operational suitability are referred to as MOSs.
Operational Suitability: The degree to which a system can be placed satisfactorily in
field use with consideration given to availability, compatibility, transportability,
interoperability, reliability, wartime usage rates, maintainability, safety, Human Systems
Integration, manpower supportability, logistics supportability, natural environmental
effects and impacts, documentation, and training requirements.
Measure of Suitability: A measure of a system's ability to support mission/task
accomplishment with respect to reliability, availability, maintainability, transportability,
supportability, and training.

It is important for the study team to consider suitability areas when evaluating operational
effectiveness. Suitability issues such as reliability, availability, maintainability (RAM), and
deployability can be significant force effectiveness multipliers.

A suitable system results in increased combat capability with smaller, more responsive
deployable systems requiring fewer spare parts and people and less specialized equipment. In
addition to significantly impacting mission capability, an alternative's suitability performance
could be a major factor in its life cycle cost. Maintainability issues could dramatically increase
the number of maintainers needed to sustain a system. Major Human Systems Integration (HSI)
issues might increase operator workload. Additionally, significant reliability issues could result
in low operational availability.

In developing requirements, lead commands must identify RAM and deployability performance
parameters. Support requirements should relate to a system's operational effectiveness,
operational suitability, and total ownership cost. And, in fact, all AoAs are required to address
these measures for all alternatives considered.

Finally, sustainment is a mandatory Key Performance Parameter (KPP) for all ACAT I
programs (for ACAT II and below programs, the sponsor will determine the applicability of the
KPP). MOSs and the importance of examining sustainability during the AoA are discussed in
much more detail in the Sustainability section.

As will be discussed later in the Alternative Comparison section, the study team can and should
identify not only trades among overall operational effectiveness, cost and risk, but also between
effectiveness and suitability. For instance, can improvements in reliability be achieved if some
requirements for performance are relaxed?

4.2.5.3 Measures of Performance (MOPs)

Measures associated with a quantitative measure of physical performance or physical
characteristics are MOPs.
Measure of Performance: A measure of the lowest level of physical performance (e.g.,
range, velocity, throughput, etc.) or physical characteristic (e.g., height, weight, volume,
frequency, etc.).

MOPs are chosen to support the assessment of one or more MOEs. MOPs will support the
MOEs by providing causal explanation for the MOE and/or highlighting high-interest aspects or
contributors of the MOE. MOPs may apply universally to all alternatives or, unlike MOEs,
they may be system specific in some instances. In order to determine how well an alternative
performs, each MOP should have an initial minimally acceptable value of performance (often
the threshold value). In addition to a minimum performance value, each MOP might also
have an initial, more demanding value (often the objective value). While these values may
come from existing requirements documents, there will be some cases where these documents
and requirements simply do not exist prior to AoA initiation. In these cases, the team might rely
on subject matter experts (SMEs), search Combat Air Force (CAF) standards, CONOPS,
Concepts of Employment (CONEMP), and Tactics, Techniques, and Procedures (TTP), or use
some combination of sources to help define performance standards. However, if these
documents (or sources) are dated, have been superseded, or do not reflect current capabilities, the
team should find other legitimate sources for defining required performance parameters. In
some cases where no legitimate source(s) can be found, one of the purposes of the analysis may
be to determine what those required values should be. Regardless of the source, these initial
values and the rationale for their selection should be well documented as the MOPs and their
performance criteria may later be directly or indirectly reflected in system performance
parameters in the ICD/CDD/CPD or other documents. It is possible that the lack of identified
performance values could signify that valid capability gaps have not been established. In this
case, the team should look to earlier analysis (if any exists) such as the CBA to ensure
capability gaps exist, a materiel solution is warranted, and the AoA is the next prudent path to
pursue.
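
To make the use of threshold and objective values concrete, the sketch below shows one simple way a study team might score a single MOP value against its performance criteria. It is illustrative only; the MOP, the units, and the numeric values are invented and are not drawn from any requirements document.

    # Illustrative sketch only: scoring a hypothetical MOP value against its
    # threshold and objective values. The example MOP and numbers are invented.
    def score_mop(value, threshold, objective, higher_is_better=True):
        """Return a simple qualitative rating for a single MOP value."""
        if not higher_is_better:
            # Flip the sense so that "more is better" logic applies uniformly.
            value, threshold, objective = -value, -threshold, -objective
        if value < threshold:
            return "does not meet threshold"
        if value >= objective:
            return "meets objective"
        return "meets threshold"

    # Example: a notional detection range (nmi) with threshold 40 and objective 60.
    print(score_mop(52.0, threshold=40.0, objective=60.0))  # meets threshold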

Keep in mind that not all measures of interest for the study (MOEs, MOSs, and MOPs and their
minimum performance values) will necessarily be explicitly identified in any source document.
It is up to the team to identify what needs to be measured to adequately evaluate the
alternatives' capability to accomplish the required mission tasks and close the capability gaps.

Finally, as with MOEs, the MOPs should be linked (where possible) to future testing
requirements.

As stated earlier, there is no universal definition of measures within the analytic and test
communities. As will be discussed in Appendix L, different organizations not only use MOPs
in different ways, but also use the term MOP to refer to different items or factors. Appendix L
also contains more detailed information regarding the mechanics of developing mission tasks,
measures, criteria, and conducting data analysis.

4.3 Levels of Analysis


In the world of military operations analysis, levels of effectiveness analysis are characterized by
the number and types of alternatives, threat elements, and the levels of fidelity needed for the
study. A typical four-level classification for model selection is shown in Figure 4-2.

At the base of the triangle is the engineering analysis performed on individual components of an
alternative or threat system. One level up, engagement analysis can model the interaction
between a single element of the alternative and a single threat. An example of this analysis is
weapon versus target, or aircraft versus aircraft. Engagement analysis also looks at interactions
of larger quantities of the same elements, or few-on-few.

At the top two levels, mission/battle and theater/campaign (many on many), the analysis
becomes very complex involving the modeling of most or all of the forces in a specific,
complex scenario. At these higher levels the focus of the analysis changes. The applicable
models and simulations (M&S) will also change, as does the complexity of the analysis.
Analysis at higher levels may require inputs from supporting analysis at lower levels.

While the supporting analysis may come from sources outside the AoA, it will often be
performed by the AoA team. MOP values tend to be produced from engineering and one-on-one
analyses. MOE values tend to come from higher levels of analyses. MOS values may come
from either source. There are no hard and fast rules, though, because of the range of issues
considered in AoAs.

Given the increasing complexity of the analysis encountered in moving up the pyramid, every
effort must be made to use the appropriate level needed to answer the AoA's questions. In some
cases, a team may need to use several levels of analysis to adequately address all AoA issues.
Figure 4-2 depicts the analysis hierarchy.

Figure 4-2: Hierarchy of Analysis

Once measures have been identified and the methodologies to be used for each analytical effort
determined, it is time to decide which tools will be used to develop measure data. The term
"tools" is defined here as spreadsheets, SMEs, methods, processes, and Modeling & Simulation
(M&S). The analysis tools are the heart and soul of analysis and can consist of everything from
hand-written steps executed with a "stubby pencil" to elegant mathematical formulations
represented by thousands of lines of computer code. In some cases, they may include
person-in-the-loop simulations or the informed judgment of SMEs. Whatever their complexity or form,
there comes a point when the AoA team must decide which tools to use to generate measure
data for alternative comparisons.

The measures developed for the analysis should dictate which tools are needed. Never develop
measures based on the availability or familiarity of a particular analysis tool. Doing so (for
example, because of easy accessibility to a particular M&S) may result in the wrong issues
being investigated and the wrong alternatives being identified as promising. Once the measures

are identified, the necessary level(s) of analysis can be determined and a search conducted for
tools suitable for those measure calculations. [Note: the study questions and the methodology
to address those questions should always drive tool selection and who should do the analysis,
not the other way around.]

When selecting analysis tools consider the following:

Information or input data requirements and the quality of the data sources
Credibility and acceptance of the tool output or process results (e.g., SME assessments)
Who is available to run the M&S, develop/manipulate the spreadsheets or participate in
SME assessments
Whether or not the tool can be applied to support the analysis within time and funding
constraints
Cost of running M&S

Tool inputs come from all aspects of the AoA: threats and scenarios, alternative definitions,
employment concepts, constraints and assumptions, etc. These may also be derived from the
outputs of other tools. Before selecting an M&S tool, the sources of all inputs should be
identifiable and credible. Where the best available tools fall short, the team must identify these
shortfalls to decision makers. Information regarding some commonly accepted models can be
obtained from the Air Force Standard Analysis Toolkit (AFSAT) located at the HAF/A9 portal
page.

Before deciding on a final integrated set of tools, it is useful to check that the toolset is adequate
for evaluating all measures in the AoA. Constructing a linkage diagram as illustrated in Figure
4-3 may be useful for this.

As shown, this diagram depicts the source of data to resolve the measure data values and
provides a system level diagram of how the selected analysis tools are expected to work
together. It should also show what information is expected to flow from one tool (or process) to
another. A review of the linkage diagram should also ensure that a common set of assumptions
is made across all the tools. Including a linkage diagram in the Study Plan should also enhance
the understanding of those reading or reviewing the plan.
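
As a simple complement to the linkage diagram, a study team could also record the tool-to-measure linkage in a small script and check that every measure has an identified data source. The sketch below is notional; the tool and measure names are placeholders, not items from any actual AoA.

    # Minimal sketch (not from the handbook): record which tool or process is
    # expected to produce data for each measure, then check that no measure in
    # the study is left without an identified data source.
    tool_outputs = {
        "engagement_model": ["MOE-1", "MOP-2"],
        "campaign_model": ["MOE-2"],
        "sme_panel": ["MOS-1"],
    }
    required_measures = {"MOE-1", "MOE-2", "MOP-1", "MOP-2", "MOS-1"}

    covered = {m for measures in tool_outputs.values() for m in measures}
    uncovered = required_measures - covered
    if uncovered:
        print("Measures with no identified data source:", sorted(uncovered))  # MOP-1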

Figure 4-3: Notional Example of Tool and Measure Linkage

4.3.1 M&S Accreditation

The DODI 5000 series requires that digital M&S used in support of acquisition decisions be
formally accredited for use by an Accreditation Authority. Additionally, AFI 16-1001
Verification, Validation, and Accreditation (VV&A) establishes policy, procedures, and
responsibilities for the VV&A of Air Force owned or managed M&S. MIL-STD-3022, DoD
Standard Practice, Documentation of VV&A for Models and Simulations, outlines
the templates for the M&S accreditation plan and report.

Accreditation is an official determination by the accreditation authority that a model (or
methodology, tools, or data) is acceptable for a specific purpose and identifies risks associated
with using that model. Accreditation provides credibility to the study by demonstrating the
pedigree of the model, offering evidence that the model is credible, and establishing that it is
appropriate for its use within the study. The study team should allow time for the M&S
accreditation process within the AoA schedule; this process should be discussed in the study
plan and the accreditation plan should be included as an appendix to the study plan. OAS can
help tailor an appropriate accreditation plan.

Model accreditation begins with development of the accreditation plan. The plan contains
criteria for model assessment based on the ability of the model to accept the required input data
and to provide appropriate output information to resolve the MOEs. All data used for model

input and scenario configuration should also be accredited to ensure credibility of the output.
While accreditation is important, the study team must balance the extent of work required to do
the accreditation with the study questions at hand. In other words, the accreditation authority
must determine what the appropriate level of accreditation is for this problem. Once the model
assessment is complete, a final accreditation report is prepared.

4.3.2 Study Risk

Fundamentally, AoAs consist of three primary analysis components: effectiveness, cost, and
risk. Risk, in this sense, refers to the operational, technical, and programmatic risks associated
with the alternative solutions. Various factors such as technical maturity, survivability, and
dependency on other programs are considered in determining these risks.

There are other risks (uncertainties) in AoAs associated with the conduct of the study rather
than the alternatives themselves. Generally, these uncertainties pertain to factors that could
impact the conduct of the study as described in Section 3.4, such as time and resource
constraints.

In terms of uncertainties associated with the effectiveness analysis, consider, for instance, a
situation where a team is evaluating both existing, established, mature systems and newer,
cutting edge technologies for which little historical data exists. While the team has access to
sufficient, credible information (maintenance records, prior test results, historical performance
data, etc.) regarding the operational capabilities of the mature technologies, it will need to make
certain assumptions regarding the newer technologies that may or may not actually be true.

Additionally, the team may have only a limited set of data and/or subject matter expertise to
rely on for analysis. While the SMEs may conclude that the newer alternative
should be capable of performing the required tasks to the required standards, they do not have
any hard evidence to unequivocally support this conclusion. Due to the uncertainty associated
with this data, the team will have less confidence in the conclusions drawn for the newer
system. While the SMEs believe it will be operationally effective, it may not.

These areas of uncertainty are excellent starting points for sensitivity analysis. Given the
uncertainty of the information used to form some conclusion, what is the operational impact if
the team is wrong? It is important that the team recognizes and documents these uncertainties,
identifies the operational impact if an unanticipated outcome occurs, and provides this
information to the decision makers.

4.4 Sensitivity Analysis


Alternatives whose effectiveness is stable over a range of conditions provide greater utility and
less risk than those lacking such stability. Alternatives in an AoA are typically defined with
certain appropriate assumptions made about their performance parameters: weight, volume,
power consumption, speed, accuracy, impact angle, etc. These alternatives are then assessed
against AoA-defined threats and scenarios under a set of AoA-defined assumptions. This
provides very specific cost and performance estimates, but does little to assess the stability of
alternative performance to changes in system parameters or AoA threats, scenarios,
employment, and other assumptions.

Stability can only be investigated through sensitivity analyses in which the most likely critical
parameters are varied; for instance: reduced speed or increased weight, greater or less accuracy,
different basing options, reduced enemy radar cross section, or when overarching assumptions
are changed. This form of parametric analysis can often reveal strengths and weaknesses in
alternative performance that are valuable in making decisions to keep or eliminate alternatives
from further consideration. Sensitivity analyses should always be performed with an emphasis
on alternatives that survived early screening processes. It should be budgeted for in the original
plan; however, the specific sensitivity analysis to be conducted usually will not be known until
well into the AoA. Sensitivity analysis can also add credibility to the information developed
during the effectiveness analysis. Of course, it is always necessary to balance the amount of
sensitivity analysis against its potential value and the available resources.

In addition to sensitivity analysis, the team may want to consider examining various excursions
from the original scenarios and other "what if" analyses to provide a more thorough and robust
evaluation of the capabilities and limitations of the alternatives in differing operational
environments and when employment situations change.
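
The sketch below illustrates, in notional form, the kind of one-factor-at-a-time excursion described above: a single alternative parameter is varied while the others are held at their baseline values, and the resulting change in a measure is recorded. The effectiveness function is only a stand-in for whatever accredited model, simulation, or SME process actually produces the measure value, and all numbers are invented.

    # Notional sketch of a one-factor-at-a-time sensitivity sweep.
    def probability_of_success(speed_kts, weight_lbs):
        """Placeholder effectiveness model; a real study would use accredited M&S."""
        return max(0.0, min(1.0, 0.9 - 0.00005 * weight_lbs + 0.0004 * speed_kts))

    baseline = {"speed_kts": 450.0, "weight_lbs": 12000.0}
    print("baseline:", round(probability_of_success(**baseline), 3))
    for factor, delta in [("speed_kts", -0.10), ("weight_lbs", +0.10)]:
        excursion = dict(baseline)
        excursion[factor] *= (1.0 + delta)  # vary one parameter, hold the rest constant
        print(factor, delta, round(probability_of_success(**excursion), 3))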

4.5 Effectiveness Analysis Results Presentation


Once the effectiveness analysis has been completed, the most important task for the team is to
provide a concise, cogent, and clear picture of the effectiveness of each alternative in relation to
the requirements. The team must determine how to convey the critical information learned. In
most AoAs, this is an art form far more than a science and requires serious operational and
military judgment. One method to do this (similar to the figure below) is to present the values
for the measures of each alternative using a color scheme indicating how well each measure was
accomplished. This is only one example, which may not be suitable in every case, particularly if
the analysis included numerous measures. If a presentation such as this is used, a methodology
needs to be developed to map measured values to the colors displayed. Any method chosen,
however, should map measure values in relation to the threshold value and associated changes
in military utility and reduction in the gaps, not in relation to one another. As discussed in
section 4.1 above, OAS discourages roll-up, aggregation, and weighting schemes that tend to
mask important information and potentially provide misleading results. Therefore, for studies
with an abundance of measure information, a balance must be achieved between
providing credible results with sufficient clarity and overwhelming or confusing the audience.
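
A minimal sketch of such a mapping is shown below. It assigns a stoplight color to each alternative based solely on where its measure value falls relative to the threshold and objective values, not on how it compares to the other alternatives. The breakpoints and values are notional.

    # Illustrative sketch: map a measure value to a stoplight color relative to
    # the threshold and objective values. Alternatives and numbers are notional.
    def stoplight(value, threshold, objective):
        if value >= objective:
            return "green"
        if value >= threshold:
            return "yellow"
        return "red"

    alternatives = {"Alt A": 62.0, "Alt B": 44.0, "Baseline": 31.0}
    for name, value in alternatives.items():
        print(name, stoplight(value, threshold=40.0, objective=60.0))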

Figure 4-4: Effectiveness Analysis Results Presentation

5 Performing Cost Analysis
5.1 General Cost Estimating
Generally, cost estimates are required for government acquisition programs, as they are used to
support funding decisions. Developing a sound cost estimate requires stable program
requirements, access to detailed documentation and historical data, and well-trained,
experienced cost analysts. Cost estimating combines concepts from such disciplines as
accounting, budgeting, economics, engineering, mathematics, and statistics. Establishing
realistic estimates for projected costs supports effective resource allocation and increases the
probability of a program's success. In addition, cost estimates are used to develop annual
budget requests, evaluate resource requirements at key decision points, and to develop
performance measurement baselines.

Cost estimating is defined as the process of collecting and analyzing historical data and
applying quantitative models, techniques, and tools to predict the future cost of an item,
product, program, or task. Cost estimating is an integral part of the AoA and is used to support
the following activities:

Evaluating program or sponsor viability, structure, and resource requirements


Supporting a program's or sponsor's planning, programming, budgeting, and execution
(PPBE) process
Predicting future costs based on known historical technology and manpower
requirements
Evaluating alternative courses of action
Supporting milestone decisions and reviews
Forming the basis for budget requests to Congress

5.2 AoA Cost Estimating


The Life Cycle Cost Estimate (LCCE) includes more than just the procurement cost of the
system. Although procurement cost is important, it is often not the largest portion of the overall
cost of an alternative. The LCCE can provide the following insights to inform acquisition
decisions:

Total cost to the Federal Government (to the U.S. Treasury) of developing, procuring,
fielding, and sustaining operations for each alternative for its expected life cycle
The annual breakdown of costs expected for the alternative by funding categories (e.g.,
3300 Military Construction, 3400 O&M, 3500 Military Personnel, 3600 RDT&E, etc.)
Trade-off analysis/Cost As an Independent Variable (CAIV) to identify solutions that,
given a fixed cost, provide the greatest (may be less than 100% solution) capability
(CAIV is discussed in paragraph 5.5.4 below)
The cost drivers of alternatives (i.e., those items having the greatest impact on the
overall costs)
Cost of enablers and operational support for the capability being evaluated
Estimated life cycle costs that represent what is necessary to deliver the predicted
operational effectiveness for each alternative
Projected costs associated with various operational, basing, fielding, or programmatic
decisions expected for each alternative evaluated
Uncertainty and risk associated with the cost estimate

It is critical that all cost estimates included in AoAs be credible and clearly documented. Table
5-1 describes characteristics of credible cost estimates from the Government Accountability
Office (GAO) Cost Estimating guide. This guide has been referenced in several recent reports
to Congress on how to improve DoD's acquisition process. It is provided to help study teams
understand what is necessary to produce credible cost estimates.

Table 5-1: GAOs Basic Characteristics of Credible Cost Estimates

Clear identification of task: The estimator must be provided with the system description, ground rules and assumptions, and technical and performance characteristics. The estimate's constraints and conditions must be clearly identified to ensure the preparation of a well-documented estimate.

Broad participation in preparing estimates: All stakeholders should be involved in deciding mission need and requirements and in defining system parameters and other characteristics.

Availability of valid data: Numerous sources of suitable, relevant, and available data should be used from similar systems to project costs of new systems; these data should be directly related to the systems' performance characteristics.

Standardized structure for the estimate: A standard work breakdown structure, as detailed as possible, should be used. It should be refined as the cost estimate matures and the system becomes more defined. The work breakdown structure ensures that no portions of the estimate are omitted and allows comparisons to similar systems and programs.

Provision for program uncertainties: Uncertainties should be identified and allowance developed to cover the cost effect.

Recognition of inflation: The estimator should ensure that economic changes, such as inflation, are properly and realistically reflected in the life cycle cost estimate.

Recognition of excluded costs: All costs associated with a system should be included; any excluded costs should be disclosed and given a rationale.

Independent review of estimates: Conducting an independent review of an estimate is crucial to establishing confidence in the estimate; the independent reviewer should verify, modify, and correct an estimate to ensure realism, completeness, and consistency.

The life cycle cost in an AoA captures the total cost of each alternative over its expected life
and includes costs incurred for research and development, investment, operations and support,
and end of life disposal. Sunk costs (funds already spent or obligated) are not included in the
LCCEs; however, they may be of interest to decision makers and should be identified
separately. All AoA LCCEs are based on peacetime operations and do not include any
war-related costs such as replacement of expended or destroyed assets or increased costs associated
with wartime operational tempo.

The study team should determine what is included in peacetime operations and what is included
in contingency operations. For example, airlift operations during peacetime may entail
scheduled flights in commercial airspace and operating at various established commercial and
military airfields. On the other hand, some Special Operations missions during peacetime may
appear to be contingency operations, such as flying limited sorties into areas not serviced by
commercial airspace and landing in unimproved areas. It will be the study team's responsibility
to determine where the defining line between peacetime and contingency operations falls for
their study, and to obtain WIPT, SRG, SAG, and AFROC concurrence with this critical
assumption.

5.3 Life Cycle Cost Considerations

5.3.1 Sunk Costs

Sunk costs are those that either have already occurred or will be incurred before the AoA can inform
any decisions on their expenditure. The best method of determining the cutoff for sunk costs is
to use the fiscal year in which the AoA is to be completed. Any costs that are expected to be
incurred after that fiscal year should be included in the AoA LCCEs.
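
As a notional illustration of this cutoff rule, the sketch below separates costs at or before an assumed AoA completion fiscal year (treated as sunk and reported separately) from the costs that would be included in the LCCE. The years and dollar values are invented.

    # Notional sketch: separate sunk costs from costs to include in the LCCE,
    # using the fiscal year the AoA completes as the cutoff.
    aoa_completion_fy = 2014  # assumed cutoff year for illustration
    costs_by_fy = {2012: 5.0, 2013: 8.0, 2014: 10.0, 2015: 25.0, 2016: 40.0}  # $M, notional

    sunk = {fy: c for fy, c in costs_by_fy.items() if fy <= aoa_completion_fy}
    included = {fy: c for fy, c in costs_by_fy.items() if fy > aoa_completion_fy}
    print("Sunk (report separately):", sum(sunk.values()), "$M")
    print("Included in LCCE:", sum(included.values()), "$M")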

5.3.2 Research and Development (R&D) Costs

The costs of all R&D phases, including Advanced Technology Demonstration (including
Concept Development), Technology Development, and Engineering and Manufacturing
Development, are included in this cost element. There are many types of R&D costs:
prototypes, engineering development, equipment, test hardware, contractor system test and
evaluation, and government support to the test program. Engineering costs for environmental
safety, supportability, reliability, and maintainability efforts are also included, as are support
equipment, training, and data acquisition supporting R&D efforts.

5.3.3 Investment Costs

The cost of investment (low rate initial production, full rate production, and fielding) includes
the cost of procuring the prime mission equipment and its support. This includes training, data,
initial spares, support equipment, integration, pre-planned product improvement (P3I) items,
and military construction (MILCON). MILCON cost is the cost of acquisition, construction, or
modification of facilities (barracks, mess halls, maintenance bays, hangars, training facilities,
etc.) necessary to accommodate an alternative. The disposal of this infrastructure should be
captured in the disposal costs (discussed in paragraph 5.3.5). The cost of all related
procurement (including transportation, training, support equipment, etc.) is included in the
investment phase.

5.3.4 Operations and Support (O&S) Costs

O&S costs are those program costs necessary to operate, maintain, and support system
capability through its operational life. These costs include all direct and indirect elements of a
defense program and encompass costs for personnel, consumable and repairable materiel, and
all appropriate levels of maintenance, facilities, and sustaining investment. Manpower
estimates should be consistent with the Manpower Estimate Report (MER), which is produced
by the operating command's manpower office. For more information, refer to the OSD Cost
Analysis Improvement Group's Operations and Support Cost Estimating Guide, October 2007.

5.3.5 Disposal Costs

Disposal costs represent the cost of removing excess or surplus property (to include MILCON)
or materiel from the inventory. It may include costs of demilitarization, detoxification,
divestiture, demolition, redistribution, transfer, donation, sales, salvage, destruction, or long
term storage.

It may also reflect the costs of hazardous waste disposition, storage, and environmental
cleanup. Disposal costs may occur during any phase of the acquisition cycle. If during
development or testing some form of environmentally unsafe materials are created, the costs to
dispose of those materials are captured here.
5.3.6 Baseline Extension Costs

The baseline is the existing, currently programmed system funded and operated according to
current plans. Baseline extension costs are those costs associated with maintaining the current
capabilities (i.e., the baseline alternative) through the life cycle identified in the study. Only
improvements that are included in the POM are part of the baseline. This may require Service
Life Extension Program (SLEP) efforts, additional procurement, additional maintenance, or
other efforts to continue to provide the baseline level of capability. Capabilities that may be
provided by other alternatives but are not provided by the baseline alternative should be
addressed as continued shortfalls in the baseline capability. For other study alternatives, these
costs must be continued until such time as an alternative providing that additional capability is
fielded and operational (Full Operational Capability (FOC), which will be based upon the study
assumptions).

5.3.7 Life Cycle Time Frame

The cost of each alternative (baseline and all proposed alternatives) must be evaluated for the
same life cycle time frame. The time frame should span from the end of the AoA to the end of
the life cycle as defined in the study (e.g., 20 year life cycle). This allows for a fair comparison
of each alternative and may require service life extension efforts for other alternatives (including
the baseline) with shorter expected useful lives, or the calculation of residual values for
alternatives that may continue to provide capability past the study cutoff dates. It is important to
estimate the costs associated with providing a capability (albeit possibly at different levels for
different alternatives) for the same period of time.

Figure 5-1 below illustrates the concept of comparing all alternatives across the same life cycle.
In this example all alternatives provide their evaluated capability from FY02 through FY38.
The assumption is that alternative 1 has the longest life and ends its useful life (and incurs
disposal costs) in FY38. Each alternative has a different Initial Operational Capability (IOC)
date where it becomes an operational asset and requires at least one Service Life Extension
Program (SLEP) effort during its life. Alternative 2 may have some residual value at the end of
the analysis time frame which should be included in the LCCE. The baseline alternative is
shown incurring costs until such time as its capabilities are replaced by the new alternatives.
There will likely be a ramp-down in baseline costs from IOC to FOC for each new alternative
along with a corresponding ramp-up of alternative operational costs for the alternative being
evaluated.

Figure 5-1: Comparing All Alternatives Across the Same Life Cycle

5.3.8 Pre-fielding Costs

Pre-fielding costs are those associated with maintaining the capabilities being analyzed in the
AoA until a specific alternative can be fielded to provide them. Pre-fielding costs must include
the costs of maintaining the current baseline alternative (or capability) until such time as the
other alternatives can be fielded (FOC). There may be a ramp-up of new alternatives and a
corresponding ramp-down of baseline capabilities from IOC to FOC, depending on the study
and its assumptions.

5.4 Cost Analysis Responsibility


The working group created to evaluate costs for an AoA should be led by a government cost
analyst (also referred to as cost estimator in this handbook) familiar with the type of capability
being studied. This group should also include representatives from operating and implementing
command organizations (stakeholders) with expertise in cost analysis and knowledge of the
system alternatives. Additionally, other specialists can assist the team in assessing the cost
implications of enablers (e.g., logisticians, intelligence analysts, Human Systems Integration
practitioners, and communications specialists). OAS will serve as an advisor and assist the cost
team throughout the AoA. As one of its first official duties, the cost analysis working group
should request support from the Air Force Cost Analysis Agency (AFCAA). Specifically, this
support should include AFCAA participation in the cost analysis working group, review and
validation of the methodologies, and an independent review of the final cost estimates. In
response to this request, AFCAA may provide a representative to support the working group in
developing the cost analysis methodology. If that is not possible, AFCAA should respond to the
team's request and identify what, if any, involvement they will have in the AoA. Their
involvement may include providing regulatory guidance, reviewing and approving proposed
cost analysis methodologies, and performing a sufficiency review, which is a form of
Non-Advocate Cost Assessment (NACA), per AFPD 65-5 (August 2008).

The cost group is responsible for the following cost analysis tasks:

Request AFCAA support for the cost analysis


Identify key cost analysis team support (stakeholders, modelers, etc.) requirements
Develop appropriate cost analysis ground rules and assumptions and ensure they are
consistent with other ground rules and assumptions in the study
Develop the Work Breakdown Structure (WBS) to be used in the cost analysis; the WBS
is a hierarchical organization of the items to be costed
Develop cost analysis approaches and methodologies
Locate and determine the suitability and availability of cost models and data required
Define the enabling (logistics, intelligence, Human Systems Integration, etc.) elements
necessary to create the cost analysis
Prepare point estimates and confidence ranges for the baseline and each viable
alternative, as determined by the screening process
Bound the LCCE point estimates with uncertainty ranges (or cumulative distribution
functions) specifically identifying the 50th and 80th percentile points
Document the cost analysis so that a qualified cost analyst can reconstruct the estimate
using only the documentation and references provided in the final report
Crosscheck the estimates to ensure the methodology and the ground rules and
assumptions are consistent across all alternatives and that the LCCE is complete
Include programmatic data in the cost analyses documentation, such as quantities and
delivery schedules (whether known or developed by the cost team)
Identify cost drivers (those elements to which estimates are most sensitive)
and perform sensitivity analyses on significant cost drivers demonstrating the impact of
changing assumptions on the overall LCCE
Coordinate with the effectiveness analysis working group to evaluate and identify any
possible relationships between cost drivers and aspects of the alternatives that may drive
capability delivery
Address any funding and affordability constraints and specify schedule limitations
Assuming such constraints are identified, provide necessary cost data to perform CAIV
analyses
Provide support to Core Function Lead Integrator (CFLI) for an affordability assessment
of the alternatives' impact on the entire mission area in accordance with DAG Section
3.2.

Present all costs in base-year dollars (BY$) and then-year dollars (TY$)
Identify and use the appropriate inflation indices for creating TY$ estimates
(the most current OSD indices are published on the SAF/FMC web page)
Separately identify sunk costs for each alternative
Address manpower implications (government and contract manpower) to include all
costs associated with employing each person for each alternative in the O&S cost
Address appropriate environmental regulations, treaties, risk mitigation, etc. in
determining disposal costs
Address sources that are driving cost risk and uncertainty for each alternative and
provide mitigation plans where possible
Write cost section of the study plan, final report, and review group (WIPT, SRG,
AFROC, etc.) briefings
Participate in the alternative comparison and risk analysis efforts to ensure LCCE data is
appropriately used and interpreted

5.5 Cost Analysis Methodology


Cost analysis allows alternatives to be compared to the baseline system using their relative
estimated costs. The cost methodologies to be used are initially outlined in the study plan and
updated as the AoA proceeds. A recommended approach for structuring the LCCE process
during AoAs is outlined in Table 2 (The Twelve Steps of a High-Quality Cost Estimating
Process) of the GAO Cost Estimating and Assessment Guide (GAO CEAG), March 2009 (see
Appendix M). These steps guide cost estimators of all levels of experience in developing the LCCE.

The cost analysis group will use the same general work breakdown structure (WBS) to compute
cost estimates for all viable alternatives. See Section 5.5.1 for a description of WBS. The level
of alternative description available and the fidelity of the cost estimate will vary depending on
the detail of alternative definition and its technological maturity. The definition of each
alternative in the CCTD will serve as the foundation for the cost, effectiveness, and risk analysis
efforts during the AoA. It is crucial that the same version of the CCTD be used as the basis for
all analysis. As part of the cost methodology, the AoA study plan should identify general cost
ground rules and assumptions underlying the analysis (for example: all maintenance will be
provided with military personnel) as well as those specific to particular cost elements or life
cycle phases (for example: System Engineering/Project Management (SEPM) will be estimated
at 5% of green aircraft cost). At a minimum, the preliminary list of cost ground rules and
assumptions should address the following:

Cost basis of the estimate (specified in BY$)


Duration (years) alternatives are to be operational (life cycle) for costing purposes
Specific inflation indices used (OSD unless otherwise justified)

Definition of sunk costs (date separating costs expended or contractually committed
from those to be included in the estimate)
Schedule issues, including major milestones and significant events (IOC and FOC dates,
production schedules and quantities)
Basing, logistics, and maintenance concepts for each alternative
Fully Burdened Cost of Energy (FBCE)
MILCON requirements
Intelligence, Human Systems Integration, and other enabler support requirements
Environmental costs
Personnel requirements and constraints
Affordability constraints

5.5.1 Work Breakdown Structure (WBS)

The cost estimating methodology is generally based on a WBS. A WBS is a product-oriented
(as opposed to functionally oriented) tree composed of hardware, software, services, data, and
facilities that define the product to be developed and produced. The following is a notional
WBS for an aircraft system; it illustrates the typical elements found at the first three WBS levels
(succeeding levels contain greater detail).

Aircraft System
  Air Vehicle
    Airframe
    Propulsion
    Air vehicle software
    Armament
    Weapons delivery
  Systems Engineering and Program Management
    (no Level 3 breakdown)
  System Test & Evaluation (T&E)
    Development T&E
    Operational T&E
    T&E support
    Test facilities
  Training
    Equipment
    Services
    Facilities
  Data
    Technical publications
    Engineering data
    Management data
    Support data
  Peculiar Support Equipment
    Test & measurement equipment
    Support & handling equipment
  Common Support Equipment
    Test and measurement equipment
    Support and handling equipment
  Operational/Site Activation
    System assembly, installation and checkout
    Contractor technical support
    Site construction
  Industrial Facilities
    Construction, conversion, or expansion
    Equipment acquisition or modernization
    Maintenance (industrial facilities)
  Initial Spares and Repair Parts
    (no Level 3 breakdown)

Once the WBS has been created, cost estimates are collected for the WBS elements and then
used to develop an overall point estimate for each alternative. It is recommended that study
teams include a WBS to at least level 3 in their AoA study plans. This demonstrates to decision
makers that the team understands each alternative. The CCTD is the best source of information
to use in developing the WBS. Although the CCTD may not be complete when the study plan
development effort begins, there should be enough information available to initiate development
of the level 3 WBS. Each alternative's WBS will be further defined and lower levels added
during the analysis. For further information on WBS, refer to MIL-STD-881 Revision C, Work
Breakdown Structures for Defense Materiel Items (3 October 2011).
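
As a notional illustration, the sketch below represents a fragment of such a WBS as a nested structure, with leaf elements holding cost estimates, and rolls them up to a point estimate. The element names and dollar values are placeholders and do not represent any real estimate.

    # Minimal sketch (names and values notional): a partial WBS as nested
    # dictionaries, with leaf values holding cost estimates in $M, rolled up
    # to a point estimate for the alternative.
    wbs = {
        "Air Vehicle": {"Airframe": 120.0, "Propulsion": 45.0, "Air vehicle software": 30.0},
        "System Test & Evaluation": {"Development T&E": 15.0, "Operational T&E": 10.0},
        "Training": {"Equipment": 5.0, "Services": 3.0},
    }

    def roll_up(node):
        """Sum the leaf estimates below a WBS node."""
        if isinstance(node, dict):
            return sum(roll_up(child) for child in node.values())
        return node

    print("Point estimate ($M):", roll_up(wbs))  # 228.0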

5.5.2 Cost Estimating Methodologies

Once the cost estimating team has developed the WBS, the next step is to determine how to
develop cost estimates for each element of the WBS. These individual estimates will form the
basis of the overall point estimate. There are multiple cost estimating methods available which
span the Acquisition Life Cycle to facilitate the cost estimating process.

Depending on project scope, estimate purpose, project maturity, and availability of cost
estimating resources, the estimator may use one, or a combination, of these techniques.
Generally speaking, the estimating team should identify an overarching methodology that will
frame the entire estimating effort, and also identify the specific methodology that is most
appropriate for estimating each individual WBS element. As the level of project definition
increases, the estimating methodology tends to progress from conceptual techniques to
deterministic and definitive techniques.

The cost team must choose the methodology appropriate to where the program
or effort is in its life cycle. Early in the program, definitions may be somewhat limited and
actual costs may not have been accrued. Once a program is in production, cost and technical
data from the development phase can be used to estimate the remainder of the program. DoD
5000.4M, Cost and Software Data Reporting (CSDR) Manual, identifies five analytical cost
estimating methods and techniques commonly used to develop cost estimates for DoD
acquisition systems:

1. Analogy
2. Engineering buildup
3. Parametric
4. Extrapolation from actual costs
5. Expert opinion

For definitions and explanations of the analogy, engineering build-up, and parametric methods, refer
to Appendix N, which is an excerpt from Chapter 11 of the GAO Cost Estimating and
Assessment Guide.
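
As a notional illustration of the parametric method, the sketch below fits a simple cost estimating relationship (cost as a power function of weight) to a few invented historical analog data points and then applies it to a new alternative. The data, the functional form, and the result are illustrative only and are not drawn from any real program.

    # Notional parametric sketch: fit cost = a * weight^b to invented analog
    # data by linear regression in log space, then estimate a new alternative.
    import math

    historical = [(8000.0, 95.0), (10000.0, 120.0), (15000.0, 170.0)]  # (weight lbs, cost $M)

    n = len(historical)
    xs = [math.log(w) for w, _ in historical]
    ys = [math.log(c) for _, c in historical]
    x_bar, y_bar = sum(xs) / n, sum(ys) / n
    b = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / sum((x - x_bar) ** 2 for x in xs)
    a = math.exp(y_bar - b * x_bar)

    new_weight = 12000.0
    print("CER estimate ($M):", round(a * new_weight ** b, 1))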

Table 5-2 compares the most common cost estimating methods:

Table 5-2: Comparison of Common Cost Estimating Methods

Further information or details in applying any of these methods can be obtained through
discussions with the Office of Aerospace Studies or by consulting the GAO Cost Estimating and
Assessment Guide (March 2009).

5.5.3 Sensitivity Analysis

Sensitivity analysis reveals how the cost estimate is affected by changes in assumptions, ground
rules, and cost drivers. The cost estimator must examine the effect of changing one assumption,
ground rule, or cost driver at a time while holding all other variables constant. By doing so, it is
easier to understand which variable most affects the cost estimate. In some cases, a sensitivity
analysis can be conducted to examine the effect of multiple assumptions changing in relation to
a specific scenario.

Since estimates are built on a number of predicted technologies and assumptions, it is necessary
to determine the sensitivity of the cost elements to changes in assumptions, ground rules, and
cost drivers. If possible, cost estimators should quantify the risks they identify. This can be
done through both a sensitivity analysis and an uncertainty analysis (discussed in paragraph
5.5.5).

Uncertainty about the values of some, if not most, of the technical parameters is common early
in an alternatives design and development. Many assumptions made at the start of a study may
prove to be inaccurate. Therefore, once the point estimate has been developed, it is important to
determine how sensitive the total cost estimate is to changes in the study assumptions, ground
rules, and cost drivers.
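
The sketch below illustrates this one-at-a-time approach in notional form: each cost driver is varied individually while the others are held at their baseline values, and the change in the total estimate is recorded. The cost function and all values are invented for illustration.

    # Notional sketch of one-at-a-time cost sensitivity analysis.
    def total_cost(quantity, unit_cost, labor_rate, labor_hours):
        return quantity * unit_cost + labor_rate * labor_hours  # $M

    baseline = {"quantity": 100, "unit_cost": 2.0, "labor_rate": 0.0001, "labor_hours": 500000}
    base = total_cost(**baseline)
    for driver in baseline:
        excursion = dict(baseline)
        excursion[driver] = excursion[driver] * 1.20  # +20 percent, one driver at a time
        print(driver, "+20% ->", round(total_cost(**excursion) - base, 1), "$M change")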

5.5.3.1 Sensitivity Factors

Some factors that are often varied in a sensitivity analysis are:

Duration of life cycle


Volume, mix, or pattern of workload
Threshold/objective criteria
Operational requirements
Hardware, software, or facilities configurations
Assumptions about program operations, fielding strategy, inflation rate, technology
heritage savings, and development time
Learning curves
Performance characteristics
Testing requirements
Acquisition strategy (multiyear procurement, dual sourcing, etc.)
Labor rates
Software lines of code or amount of software reuse

Scope of the program
Manpower levels and personnel types
Occupational health issues
Quantity planned for procurement
Purchase schedule

Many of these are usually cost drivers in AoAs and are responsible for sizable changes in early
cost estimates.

5.5.3.2 Cost as an Independent Variable (CAIV)

CAIV is one of the most common types of sensitivity analysis. CAIV is a technique for varying
the expected cost of the alternative(s) and changing performance and schedule to determine the
impact of funding limitations. This technique allows the cost team to perform "what if"
analysis with funding levels even before such levels have been determined or included in
budgets. It is good practice for the cost team to vary the point estimate they have developed
by decrements (for example, 0, 10, and 25 percent) and then, with the alternative development
team, derive the number of units, performance characteristics, and schedules that such reduced
funding levels would represent. It is likely this effort will identify a point at which it is not
advisable to proceed with one or more alternatives. These results can provide important
information to the decision maker.

There are no set levels by which the cost should be varied, nor are there any set formats for
displaying this information. Table 5-4 shows a recommended way to display CAIV results in
the AoA; however, teams may have other methods that provide greater insights.
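
A notional sketch of this kind of CAIV excursion is shown below: the point estimate is decremented by set percentages and, under an assumed fixed-plus-unit-cost structure, the number of units that would fit under each funding level is backed out. All values are invented; a real analysis would rederive performance and schedule with the alternative development team as described above.

    # Notional CAIV-style sketch: decrement the point estimate and back out the
    # affordable quantity, assuming fixed costs plus a constant unit cost.
    point_estimate = 1200.0   # $M, baseline LCCE for the planned quantity
    fixed_costs = 400.0       # $M, development and other quantity-independent costs
    unit_cost = 8.0           # $M per unit
    planned_units = int((point_estimate - fixed_costs) / unit_cost)  # 100

    for decrement in (0.0, 0.10, 0.25):
        funding = point_estimate * (1.0 - decrement)
        affordable_units = int((funding - fixed_costs) / unit_cost)
        print(f"-{int(decrement * 100)}% funding: {funding:.0f} $M -> "
              f"{affordable_units} units (vs {planned_units} planned)")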

Table 5-4: Cost As an Independent Variable (CAIV)

5.5.4 Cost Models and Data

Cost models incorporating the five methodologies are available to assist the cost analyst in
developing the LCC estimates. The models and data intended for use in the AoA should be
identified and described in the study plan. Cost models and data generally accepted by the Air
Force cost analysis community should be used. AFCAA and CAPE can provide a
comprehensive list of acceptable cost models and databases. Cost models frequently used
include:

ACEIT (integrated)
COCOMO (software)
CRYSTAL BALL (risk)
LSC (logistics)
SEER (software/hardware)
SEM (software)

PRICE-H (hardware)
PRICE-S (software)

5.5.5 Cost Risk and Uncertainty

Because the LCCEs may be used as estimates for future program costs, it is important to
determine the amount of uncertainty associated with the estimate. For example, data from the
past may not always be relevant in the future, because new manufacturing processes may
change a learning curve slope or new composite materials may change the relationship between
weight and cost. Moreover, a cost estimate is usually composed of many lower-level WBS
elements, each of which comes with its own source of error. Once these elements are added
together, the resulting cost estimate can contain a great deal of uncertainty.

5.5.5.1 The Difference Between Risk and Uncertainty (GAO-09-3SP, GAO Cost
Estimating Guide)

Risk and uncertainty refer to the fact that because a cost estimate is a forecast, there is always a
chance that the actual cost will differ from the estimate. Moreover, lack of knowledge about the
future is only one possible reason for the difference. Another equally important reason is the
error resulting from historical data inconsistencies, assumptions, cost estimating equations, and
factors typically used to develop an estimate.

In addition, biases are often found in estimating program costs and developing program
schedules. The biases may be cognitive, often based on estimators' inexperience, or
motivational, where management intentionally reduces the estimate or shortens the schedule to
make the project look good to stakeholders.

Recognizing the potential for error, and deciding how best to quantify it, is the purpose of both
risk and uncertainty analysis.

It is inaccurate to simply add up the most likely estimates for the WBS elements to derive a
program cost estimate, since their sum is not usually the most likely estimate for the total
program, even if the elements are estimated without bias.
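
The sketch below illustrates this point in notional form with a simple Monte Carlo simulation: each WBS element is assigned a low/most likely/high spread, the elements are summed over many trials, and the 50th and 80th percentile funding levels are read from the resulting distribution, which typically exceed the simple sum of the most likely values when the spreads are skewed toward overruns. The element spreads are invented for illustration.

    # Notional cost uncertainty (Monte Carlo) sketch.
    import random
    random.seed(1)

    elements = {  # (low, most likely, high) in $M, invented
        "Air Vehicle": (180.0, 200.0, 260.0),
        "Training": (8.0, 10.0, 16.0),
        "Support Equipment": (25.0, 30.0, 45.0),
    }

    trials = []
    for _ in range(10000):
        trials.append(sum(random.triangular(lo, hi, ml) for lo, ml, hi in elements.values()))
    trials.sort()

    most_likely_sum = sum(ml for _, ml, _ in elements.values())
    print("Sum of most likely values ($M):", most_likely_sum)            # 240.0
    print("50th percentile ($M):", round(trials[len(trials) // 2], 1))
    print("80th percentile ($M):", round(trials[int(len(trials) * 0.8)], 1))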

Quantifying risk and uncertainty is a cost estimating Best Practice addressed in many guides
and references. DoD specifically directs that uncertainty be identified and quantified. The
Clinger-Cohen Act requires agencies to assess and manage the risks of major information
systems, including the application of the risk-adjusted return on investment criterion in deciding
whether to undertake particular investments.

While risk and uncertainty are often used interchangeably, in statistics their definitions are
distinct:

Risk is the chance of loss or injury. In a situation that includes favorable and
unfavorable events, risk is the probability that an unfavorable event will occur.
Uncertainty is the indefiniteness about the outcome of a situation. It is assessed in
cost estimate models to estimate the risk (or probability) that a specific funding level
will be exceeded.

Therefore, while both risk and uncertainty can affect a program's cost estimate, enough data
will never be available in most situations to develop a known frequency distribution. Cost
estimates are analyzed more often for uncertainty than for risk, although many textbooks use both
terms to describe the effort.

5.5.5.2 Technology Readiness Levels (TRLs)

Technology Readiness Levels (TRLs) are often used in early analysis to determine potential
costs, uncertainties, and risks. However, given the early stages of some alternative development
there may not be credible TRL scores available. Table 5-5, from the 2003 Society of Cost
Estimating and Analysis (SCEA) Cost Risk Analysis paper, may be of some assistance in
identifying the risk areas associated with technology and hardware components of alternatives.
Using the risk category (which relates somewhat to the phase of the life cycle) and the descriptors of the
current state of the alternative or subsystem, a risk score (0-10) can be assigned which will help
to identify relative risks among alternatives.

For example, in the Technology Development phase of the life cycle (roughly equivalent to
Technology advancement in the table), if the alternative being considered represents the
state of the art (i.e., is being used today), then the risk of requiring significant investment dollars
for R&D would be low (or 0 in the table). On the other hand, in the Engineering and
Manufacturing Development phase (roughly equivalent to Engineering development in the
table), if the alternative has only a concept defined, the risk of having to invest sizable
amounts of funding into development and testing of that concept is high (or 10 in the table).
This matrix is provided as a tool to help teams evaluate potential cost risks and uncertainty as they
apply to AoA alternatives; other methods may be appropriate as well.
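
As a notional illustration of applying such a scoring matrix, the sketch below records hardware risk scores assigned from Table 5-5 for two hypothetical alternatives and flags the categories scored as significant or higher. The scores are invented and would, in practice, come from SME assessment of each alternative's CCTD.

    # Minimal sketch: record notional hardware risk scores (0-10) per category
    # and flag the categories scored 6 or above for each alternative.
    scores = {
        "Alt A (mature system)": {"Technology advancement": 0, "Engineering development": 1,
                                  "Reliability": 2, "Producibility": 1,
                                  "Alternative item": 0, "Schedule": 3},
        "Alt B (new concept)":   {"Technology advancement": 8, "Engineering development": 9,
                                  "Reliability": 10, "Producibility": 6,
                                  "Alternative item": 4, "Schedule": 7},
    }

    for alt, by_category in scores.items():
        high = [c for c, s in by_category.items() if s >= 6]
        print(alt, "- high-risk categories:", high or "none")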

Table 5-5: A Hardware Risk Scoring Matrix


Risk score: 0 = low, 5 = medium, 10 = high

1. Technology advancement: (0) Completed, state of the art; (1-2) Minimum advancement required; (3-5) Modest advancement required; (6-8) Significant advancement required; (9-10) New technology
2. Engineering development: (0) Completed, fully tested; (1-2) Prototype; (3-5) Hardware and software development; (6-8) Detailed design; (9-10) Concept defined
3. Reliability: (0) Historically high for same system; (1-2) Historically high on similar systems; (3-5) Modest problems known; (6-8) Serious problems known; (9-10) Unknown
4. Producibility: (0) Production and yield shown on same system; (1-2) Production and yield shown on similar system; (3-5) Production and yield feasible; (6-8) Production feasible and yield problems; (9-10) No known production experience
5. Alternative item: (0) Exists or availability on other items not important; (1-2) Exists or availability on other items somewhat important; (3-5) Potential alternative in development; (6-8) Potential alternative in design; (9-10) Alternative does not exist and is required
6. Schedule: (0) Easily achieved; (1-2) Achievable; (3-5) Somewhat challenging; (6-8) Challenging; (9-10) Very challenging

Source: 2003, Society of Cost Estimating and Analysis (SCEA), Cost Risk Analysis.

5.5.5.3 Software Cost Risk

Another source of cost risk to alternatives is software development, modification, and
integration. These aspects are evaluated in a similar fashion to the hardware aspects described in the
previous section. Table 5-6 is a guide to make sure that estimates reflect what we know and
what we don't know about the development effort required for the software piece of the
estimate. Like other sources of risk, this needs to be addressed in the LCCE. Table 5-6, the
Software Risk Scoring Matrix, was developed by the Air Force and published in the GAO Cost
Estimating Guide. It helps the study team decide where software cost risk is
prevalent. As an example of how to use the matrix, a subject matter
expert (SME) would determine whether Design Engineering is scored as 0 (design
complete and validated) or as high as 10 (requirements partly defined).

Table 5-6: A Software Risk Scoring Matrix


Risk score: 0 = low, 5 = medium, 10 = high

1. Technology Advancement: (0) Proven conventional analytic approach, standard methods; (1-2) Undemonstrated conventional approach, standard methods; (3-5) Emerging approaches, new applications; (6-8) Unconventional approach, concept in development; (9-10) Unconventional approach, concept unproven
2. Design Engineering: (0) Design complete and validated; (1-2) Specifications defined and validated; (3-5) Specifications defined; (6-8) Requirements defined; (9-10) Requirements partly defined
3. Coding: (0) Fully integrated code available and validated; (1-2) Fully integrated code available; (3-5) Modules integrated; (6-8) Modules exist but not integrated; (9-10) Wholly new design, no modules exist
4. Integrated instructions: (0) Thousands of instructions; (1-2) Tens of thousands of instructions; (3-5) Hundreds of thousands of instructions; (6-8) Millions of instructions; (9-10) Tens of millions of instructions
5. Testing: (0) Tested with system; (1-2) Tested by simulation; (3-5) Structured walk-throughs conducted; (6-8) Modules tested but not as a system; (9-10) Untested modules
6. Alternatives: (0) Alternatives exist; alternative design not important; (1-2) Alternatives exist; design somewhat important; (3-5) Potential for alternatives in development; (6-8) Potential alternatives being considered; (9-10) Alternative does not exist but is required
7. Schedule and management: (0) Relaxed schedule, serial activities, high review cycle frequency, early first review; (1-2) Modest schedule, few concurrent activities, reasonable review cycle, late first review; (3-5) Modest schedule, many concurrent activities, occasional reviews; (6-8) Fast track on schedule, many concurrent activities; (9-10) Fast track, missed milestones, review at demonstrations only, no periodic reviews
Source: U.S. Air Force.

5.6 Cost Results Presentation


The format illustrated in Figure 5-2 is used to display the AoA cost analysis results; it allows
the costs for each alternative and LCC element to be directly compared. This format can be
used to present both Base Year (BY$) and Then Year (TY$) costs.

Figure 5-2: Cost by fiscal year and appropriation

Figure 5-3 presents each alternative's cost in terms of fiscal year spread and appropriation.
Again, this format can be used for both BY$ and TY$. The results should be graphically
displayed for presentation. Notice that sunk costs are excluded from the estimates in both examples.

Figure 5-3: General LCC Summary (By Alternative)

5.7 Cost Documentation


A complete set of cost documentation is an essential part of the AoA cost analysis. Without an
explanation of the data sources and methodology used for each element of the estimates, the
costs cannot be replicated and therefore may lack credibility. Chapter 3 of AFI 65-508, Cost
Analysis Guidance and Procedures, provides guidance on the level of documentation required.
Attachment 5 to the same instruction contains a cost documentation checklist useful in
determining the completeness of the cost documentation.

5.7.1 Tradespace Analysis during Alternative Comparison

Once the team determines the format and data requirements for use in developing tradespace
analysis during the alternative comparison phase of the study, cost analysis inputs will need to
be developed to feed that process. The actual format of the data required will vary from study
to study, so it will be incumbent upon the Study Director to identify both the cost and
effectiveness data required for production of the tradespace analysis early in the study to make the analysis
useful. Refer to Chapter 8 for more detail.

5.7.2 Cost Reviews

The AoA study team reviews the cost estimates for consistency and completeness. OAS also
reviews the cost section of the study plan and the final results as part of the overall AoA
assessment provided to the AFROC. Recent GAO reviews and AFROC requests have
reinforced the preference for an independent review of the cost estimates. This review should
be performed by an organization which has not been involved in creating the estimate.
All AoAs, regardless of Acquisition Category (ACAT) level or Joint Staffing Designator (JSD),
should have their cost analyses independently reviewed at the end of the study. An AFCAA
independent review is the most desirable as it is the most robust and includes a sufficiency
memorandum. However, resource realities and priorities mean AFCAA is not always able to
perform the independent review of all AoAs. When AFCAA is unable to provide a complete
independent review, the team needs to identify the level of independent review they can
achieve.

The recommended process for obtaining an independent review of an AoA study cost analyses
is as follows:

1. Study Directors must contact AFCAA and determine if they will conduct the
independent review. This is best done by the lead command sending a memorandum to
AFCAA requesting they participate in the study and conduct the independent review. If
AFCAA can provide this support, they will be included as part of the Cost Analysis
Working Group (CAWG) and allowed to provide real-time assistance and guidance in
order to shorten the review process at the completion of the study.
2. If AFCAA is not able to conduct the review they still may be willing to participate in the
cost methodology development and planning, but not commit to the more formal
independent review.
3. Once the level of AFCAA participation is known, the study team needs to consider whether
to approach another organization to do the independent review. The options may
include OAS, a Product Center Financial Management (FM) office, or a MAJCOM FM office.
The correct choice will be based in part upon issues of independence and resource
availability. In Joint AoAs it may even be an appropriate financial management office
from another Service or Agency.
4. In the absence of a government led independent review, Study Directors may choose to
find a contractor organization to perform the review.
5. In all cases in which AFCAA is not able to perform the sufficiency/non-advocate
reviews, the sufficiency review will only address the sufficiency and completeness of
the costing. It is not considered a Service Cost Position (SCP), as only the Service Cost
Agency can certify an SCP.

6 Performing the Risk Analysis
In addition to analyzing the operational effectiveness and life cycle cost, the study team
examines the risks associated with the various alternatives using the Risk Assessment
Framework (RAF). The RAF is a scalable Air Force enterprise-wide risk assessment approach
that fosters consistency and uniformity in the use of risk-related terminology within and across
the Air Force. The RAF is linked to the Chairman's Risk Assessment definitions and the CJCS
Integrated Risk Matrix.

This chapter provides a brief overview of the RAF to help study teams identify and rate risks
associated with the alternatives. This risk assessment does not address the risks of conducting
the AoA; those risks are addressed in Section 3.3.

The application of the RAF to the AoA is new and requires more definition. OAS is exploring
approaches for developing the methodology to implement the RAF in AoAs. More explicit
details regarding how to use the RAF will be provided in future materials.

6.1 Risk Assessment Framework


The RAF provides a structured way for identifying and translating risks into a consistent and
comparable format. The RAF is based on a tree structure where the base of the tree represents
the aggregation of Service Core Functional objectives. Branches of the tree connect to nodes
representing activities that are vital to the accomplishment of the objectives. Finally, the
activities are connected to metrics that are designed to measure resource, schedule, or other
performance factors that impact the activities. The following describes how risk assessments
are accomplished up to the activity level. A similar assessment approach is used for levels
above the activity level (see the HAF/A9 website on the Air Force portal for additional details).

The RAF requires development of metrics with specific threshold values to assess risk. The
metrics are associated with activities that are impacted by resource, schedule, or other
performance factors as measured by the risk metrics. In the AoA, the activities may be
associated with the capability gaps, mission tasks, or measures of effectiveness and suitability.

Each risk metric for an activity is defined with two points (typically the success and failure
endpoints) and successive levels between the two endpoints. The lowest point of risk for a
metric is set such that the activity is assured of success as far as that metric is concerned. In
other words, no additional improvement in that metric will increase the activity's chance of
success. Similarly, the highest point of risk for a metric is set such that the activity is assured to
fail as a result of the critical factor associated with that metric. In other words, no further degradation
in that metric will worsen the activity's chance of failure. In between the low and high risk
points, there are thresholds marking risk assessment transitions from low to moderate to
significant to high (see Figure 6-1).
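
To make the threshold construct concrete, the following is a minimal illustrative sketch, not an
official RAF implementation; the metric, its orientation (higher value = worse), and the
threshold values are notional assumptions chosen only to show the mechanics.

    # Illustrative only -- notional metric and thresholds, not an official RAF tool.
    # Assumes a metric where larger values indicate greater risk.
    def assess_metric(value, thresholds):
        """Map a metric value to a risk rating using ascending breakpoints.

        thresholds: (moderate, significant, high) breakpoints; values below the
        first breakpoint rate 'low', values at or above the last rate 'high'.
        """
        moderate, significant, high = thresholds
        if value < moderate:
            return "low"
        elif value < significant:
            return "moderate"
        elif value < high:
            return "significant"
        return "high"

    # Notional example: days of delay in closing a capability gap.
    print(assess_metric(45, thresholds=(30, 90, 180)))   # -> "moderate"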

Once the metrics have been defined, the study team can use various techniques such as
professional military judgment, modeling and simulation, and data analysis to determine the risk
rating. The analysis is conducted to determine where on the scale the particular metric falls for
any given time frame and set of scenarios. The study team should explore the impact of
changes to assumptions, criteria, scenarios, force structures, and time frames on the risk ratings.
Those changes should be highlighted when discussing the results. The AoA risk assessment
should address the following questions:

- What defines success and failure in the scenario context? This should build upon the
operational effectiveness analysis results.
o Which scenarios, timeframes, and force structure assumptions were used?
o How were the success and failure points determined for each scenario/timeframe,
etc.?
- What is being done or recommended in the future to mitigate the identified risks?
o For operational risk - answer how well the gap can be mitigated by each alternative
and to what level the operational risk is reduced. This enables decision makers
to determine if that is an acceptable level.
o For schedule and technical/manufacturing risk - identify mitigation strategies that
should be considered if there is a follow-on acquisition.

Using the metrics associated with each activity, the study team assesses the risk level for each
activity as low, moderate, significant, or high (see Figure 6-1). Typically, the risk assessment of
the activity is the same as the highest (worst) risk for the supporting metrics. If the worst-case
is not appropriate, professional military judgment may be applied, but the rationale should be
explained.
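
As a simple illustration of this worst-case aggregation rule, the sketch below (notional Python,
with hypothetical metric names) rolls individual metric ratings up to an activity-level rating
using the scale ordering from Figure 6-1.

    # Illustrative only -- aggregates notional metric ratings to an activity rating.
    RISK_ORDER = ["low", "moderate", "significant", "high"]  # least to most severe

    def activity_risk(metric_ratings):
        """Return the worst (most severe) rating among an activity's supporting metrics."""
        return max(metric_ratings, key=RISK_ORDER.index)

    # Hypothetical metrics supporting one activity:
    ratings = {"time_to_close_gap": "moderate", "coverage": "low", "sortie_rate": "significant"}
    print(activity_risk(ratings.values()))  # -> "significant"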

Figure 6-1: Standard Air Force Risk Scale Definitions

Presentation of the risk assessment results is expected to utilize a common-format risk
statement. A risk statement is required for each of the risks identified during the risk
assessment. The study team will also need to identify the scenario(s), timeline(s), and force
structure(s) utilized for the AoA and their relationship to the identified risk element. The format
for the risk statement is:

According to (organization), the (type) risk of (activity) is (assessment) with an (analytical
rigor level) for (context/timeframe/force structure) assuming (mitigation measures/authority).

The key terms in the risk statement are defined as follows:

- Organization - the organization accomplishing the risk assessment (study team)
- Type of risk - operational, schedule, or technology/manufacturing (see Section 6.2 for
definitions of these risks)
- Activity - actions that are impacted by resource, schedule, or other performance factors
as measured by the risk metrics (for AoAs, activities could be associated with the
capability gaps, mission tasks, or measures)
- Assessment - defined risk levels of low, moderate, significant, and high (see Figure 6-1
for risk level definitions). This is done for each of the risks associated with each
activity. Each activity's assessment will be the same as the highest (worst) risk assessed
for its supporting metrics. If the worst-case risk level is not appropriate for the activity,
professional military judgment may be applied but must be documented and
substantiated for traceability and defensibility. This is usually the situation for AoAs
due to the limited knowledge about the alternatives and risks at this point in the process.
- Analytic Rigor Level - gives leadership a quick understanding of how well the
assessment embodies the desired attributes (defendable, measurable, repeatable,
traceable, linkable, implementable, scalable, and incorporates military judgment). Each
activity's analytic rigor level will be set at the lowest rigor level of the metrics driving
the activity-level risk assessment. Levels 1-3 are the most appropriate for an AoA. The
assessment levels are defined as:
o Level 1 - findings are based heavily on subject matter expertise. The assessment
process was not documented and lacks a tree structure and metrics. As a result, a
different set of subject matter experts could reasonably develop different results.
o Level 2 - assessment has limited structure. Discrete metrics are in place prior to
execution of the assessment. There is some ability to trace metrics to the core
function risk assessments via a tree structure.
o Level 3 - assessment process has a fully developed tree structure. There is a
traceable understanding of the linkages between vital objectives, activities and
metrics that compose the assessment.
o Level 4 - maximum level achievable to support an AFROC Risk Assessment.
Prior to the assessment, fully defensible linkages between the assessed metrics
and user (planning requirements) have been presented to the Assessment
Requestor for validation. Assessors have explored cross-functional mitigation
options and have included results in their assessment. Metric assessments are
conducted via documented and analytically rigorous methods. This level is rare
for an AoA; it will only be used when the AoA results are combined with other
analysis results in order to depict findings across one or more Service Core
Functions.

- Scenario - provides the additional information needed to specifically frame the
environment in which the activity is assessed.
- Timeframe - the timeframe for each assessment must be provided in guidance since it will
drive both friendly and hostile force assumptions.
- Force Structure - provides the force structure assumption behind the assessment (e.g.,
programmed force or programmed force extended).
- Mitigation Measures/Authority - identifies mitigation actions already taken or assumed
across the areas of DOTMLPF-P by the organization making the assessment. This
information is essential to aid decision makers in understanding what actions have been
taken to date in order to best evaluate the situation and explore their risk management
options.
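
Drawing on the terms defined above, the sketch below simply fills in the common-format
statement; every field value shown (organization, activity, scenario, mitigation, and so on) is
hypothetical and used only to illustrate the format.

    # Illustrative only -- all field values below are hypothetical.
    fields = {
        "organization": "the study team",
        "type": "operational",
        "activity": "closing the long-range strike gap",
        "assessment": "moderate",
        "rigor": "analytic rigor level 2",
        "context": "ISC scenario X, 2020 timeframe, programmed force",
        "mitigation": "currently programmed DOTMLPF-P actions",
    }

    statement = ("According to {organization}, the {type} risk of {activity} is "
                 "{assessment} with an {rigor} for {context} assuming {mitigation}."
                 ).format(**fields)
    print(statement)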

6.2 Risk Identification


Although many types of risks may exist (e.g., political, interoperability, etc.), senior decision
makers expect, at a minimum, the following risk assessments to be conducted in the AoA:

- Operational risk assessment - the degree to which the operational risk associated with
the specified gap could be mitigated if the alternative were implemented.
- Schedule and technology/manufacturing risk assessment - an assessment of the
Technology Readiness Levels (TRLs)/Manufacturing Readiness Levels (MRLs) for an alternative's
critical technology elements (CTEs) which could impact the likelihood of delivering the
required capability on schedule and within budget.

The following lists some areas for the study team to consider when identifying operational,
schedule, and technology/manufacturing risks:

- Determine the operational impact, if any, of revised thresholds based on the sensitivity
analysis performed during the effectiveness analysis. In other words, does the threshold
value need further adjustment based on the risk identified?
- Consider what might happen if threat capabilities evolve either before, or in response to,
the fielding of a potential alternative.
- Examine current and proposed IOC/FOC schedules, design, suppliers, operational
employment, resources, dependencies, etc.
- Identify testing requirements and their impacts on the various alternative timelines.
- Analyze negative trends in the industry or suppliers.
- Determine the impact of interdependencies with other programs/efforts needed to provide
the full capability required to appropriately mitigate the specified gap.

- Determine the level of coalition force needed and the probability of getting that support.

A best practice is to recognize that risk identification is the responsibility of every member
of the AoA team and should occur throughout the conduct of the study.

The study team should consider the following when identifying sources of risk:

- Threat - The sensitivity of the alternatives to uncertainty in the threat description, the
degree to which the alternative or its employment would have to change if the threat's
parameters change, or the vulnerability of the alternative to foreign intelligence
collection efforts (sensitivity to threat countermeasures).
- Test and Evaluation - The adequacy and capability of the test and evaluation process
and community to assess attainment of performance parameters and determine whether
the alternative is operationally effective, operationally suitable, and interoperable.
[Note: this requires T&E membership on the study team.]
- Modeling and Simulation (M&S) - The adequacy and capability of M&S to support all
life cycle phases of an alternative using verified, validated, and accredited models and
simulations.
- Technology - The degree to which the technology proposed for the alternative has
demonstrated sufficient maturity (TRL) to be realistically capable of providing the
required capability.
- Logistics - The ability of the alternative's support concepts to achieve the sustainment
KPP thresholds based on the alternative technical description, maintenance concept,
expected availability of support data and resources, and the ability of the associated
maintenance concept to handle the expected workload.
- Concurrency - The sensitivity of the alternative to uncertainty resulting from the
combining or overlapping of life cycle phases or activities.
- Industrial Capabilities - The degree to which the manufacturing/industrial base has
demonstrated sufficient maturity (MRL) to be realistically capable of providing the
required capability.
- Schedule - The sufficiency of the time allocated by the estimated schedule to deliver the
required capability by IOC/FOC.
- Command and Control - The ability of the alternative to work within the existing C2
environment as well as the ability of alternatives being evaluated to perform C2
functions in the operational environment, if appropriate.
- Interoperability - The ability of alternatives being evaluated to work with existing or
planned systems in the operational environment. This may be C2 interoperability, the
ability to coordinate fires from another weapon system, or the ability of a new
component in an existing system to operate with the remaining subsystems.
- CONOPS - The impact of various aspects of the operational concept for an alternative
on its mission effectiveness. For example, will basing in certain areas impact targets
held at risk? What risk does that represent in operational or political terms?
- Intelligence - The ability of resources expected to be available at IOC/FOC to provide
the intelligence data required by the alternative, in the right format, in a timely fashion
to allow the alternative to function as envisioned.

6.3 Using Previous Analyses


The completed CBA(s) or other analyses that identified the specific gaps to be analyzed in the
AoA also should have identified the operational risk associated with not filling that gap. This
information should be used as a starting point for determining risks associated with the
alternatives. The information may also reduce the requirement for additional analysis to support
the risk assessment.

The following resources will aid in conducting the risk assessment:

- Chairman's Risk Assessment
- CJCS Integrated Risk Matrix and associated AF/A9 Risk Assessment Framework (RAF)
- DoD Technology Readiness Assessment (TRA) Deskbook, July 2009, prepared by DDR&E
- Defense Acquisition Guidebook
- Risk Management Guide for DoD Acquisition
- SAF/AQ Guidance Memorandum on Life Cycle Risk Management (as based on the Risk
Management Guide for DoD Acquisition)

7 Assessing Sustainability in the Analysis of Alternatives Study

7.1 Introduction

Acquiring systems that are both effective in meeting mission requirements and sustainable at lower total ownership costs continues to be a top priority in the Air Force. Early decisions in the acquisition life cycle have long-term sustainability implications that impact costs and mission effectiveness. Since most of the life cycle costs of a program are locked in early during the technology development phase, it is important to address sustainability early in the acquisition process. The early stages of the acquisition process provide the best opportunity to maximize potential sustainability and mission capability. Accordingly, sustainability should be addressed in the AoA study to ensure Air Force senior leaders make informed decisions that result in sustainable and effective systems that meet mission requirements.

7.2 What is Sustainability?

Sustainability is a system's capability to maintain the necessary level and duration of operations to achieve military objectives. Sustainability depends on ready forces, materiel, and consumables in enough quantities and working order to support military efforts. Sustainability encompasses a wide range of elements such as systems, spare parts, personnel, facilities, documentation, and data. Sustainability performance not only impacts mission capability, but is also a major factor that drives the life cycle cost of a system. Maintainability issues, for example, could considerably increase life cycle costs by increasing the number of maintainers needed to sustain a system in the field. In other situations, significant Human Systems Integration (HSI) issues may increase an operator's workload, or poor reliability performance could result in low operational availability.

7.3 Defining the Maintenance Concept and Product Support Strategy

Defining how alternatives will be employed in the operational environment is an essential step in conducting the sustainability analysis in the AoA study. The concept of employment (CONEMP) for each alternative should be defined in the CCTD document and include descriptions of the projected maintenance concept and product support strategy. Given that the alternatives are primarily developmental or conceptual at this early stage of the life cycle, defining the maintenance concept and product support strategy can be challenging and may require the assistance of system engineers and acquisition logistics, maintenance, supply, and transportation specialists. In some situations, the maintenance concept and product support strategy may be based on similar existing systems that are relevant to the alternatives being considered in the AoA study. In situations where the alternative systems are new concepts, there may not be any existing systems that are sufficiently similar to use in defining the maintenance concept and product support strategy. In these cases, assistance from system engineers and other logistics specialists to help define the maintenance concept and product support strategy is particularly important.
The maintenance concept is a general description of the maintenance tasks required in support of a given system or equipment and the designation of the maintenance level for performing each task. The maintenance concept is eventually implemented through a Life Cycle Sustainment Plan. As an example, assume the system is a computer, with a CPU, keyboard, and mouse. The maintenance concept for this system is a two-level concept, organizational and depot. The organizational level maintenance will restore the computer to service by the removal and replacement of the Line Replaceable Units (LRU) (e.g., the CPU, mouse, and keyboard). The organizational level will forward the failed LRU to the depot for repair by removal or replacement of failed assemblies, subassemblies, or parts based on economic criteria (i.e., repair or discard).
Product support consists of the management and technical activities and resources needed to implement the maintenance concept, and establish and maintain the readiness and operational capability of a weapon system, its subsystems, and its sustainment infrastructure. Product support encompasses materiel management, distribution, technical data management, maintenance, training, cataloging, configuration management, engineering support, repair parts management, failure reporting and analyses, and independent logistics assessments.
Product support is implemented by the Performance-based Logistics (PBL) strategy which seeks to optimize system availability while minimizing cost and the logistics footprint. The PBL strategy should be tailored to fit the individual system in the intended operational environment for the duration of its projected service life. The PBL strategy defines performance in terms of military objectives using criteria such as operational availability, operational reliability, total cost, logistics footprint, and logistics response time. PBL applies to both retail (base or organizational level) logistics operations and wholesale (depot) logistics operations. While the provider of the support may be public, private, or a public-private partnership, the focus is to achieve maximum weapon system availability at the lowest Total Ownership Cost (TOC).

7.4 Sustainability Performance, Cost, and Risk

Sustainability of materiel solutions should be analyzed in the AoA study in terms of performance, cost, and risk. The following provides key methodological insights into the analysis of sustainability with respect to performance, cost, and risk. More detailed information can be found in the reference sources listed at the end of this section.

7.4.1 Sustainability Performance Analysis

The AoA study provides the analytic basis for establishing an initial set of performance measures associated with concepts of sustainability such as reliability, availability, and maintainability. These measures are referred to as measures of suitability (MOS) and are designed to measure a system's capability to support mission accomplishment. MOSs are essential for conducting the sustainability analysis and should address sustainability-related performance requirements identified or implied in previous studies such as Capabilities-Based Assessments (CBAs) and requirements documents such as the Initial Capabilities Document (ICD). The analyst should consider the sustainment concepts and attributes described in Table 7-1 in developing the MOSs.
Table 7-1: Sustainability Concepts/Attributes

Availability - A measure of the degree to which an item is in an operable and committable state at the start of a mission when the mission is called for at an unknown (random) time. (MIL-HDBK-502, 30 May 1997)
Reliability - The ability of a system and its parts to perform its mission without failure, degradation, or demand on the support system. (AFI63-101, 8 April 2009)
Maintainability - The ability of an item to be retained in, or restored to, a specified condition when maintenance is performed by personnel having specified skills using prescribed procedures and resources at each prescribed level of maintenance and repair. (AFI63-101, 8 April 2009)
Deployability - The inherent ability of resources to be moved, used, sustained, and recovered with ease, speed, and flexibility to meet mission requirements. (AFPAM63-128, 5 October 2009)
Supportability - The degree to which system design characteristics and planned logistics resources, including manpower, meet system peacetime readiness and wartime utilization requirements. (AFI63-101, 8 April 2009)
Interoperability - The ability of U.S. and coalition partner systems, units, or forces to provide data, information, materiel, and services to and accept the same from other systems, units, or forces, and to use the data, information, materiel, and services so exchanged to enable them to operate effectively together. (JCIDS Manual, 31 January 2011)
Compatibility - The capability of two or more items or components of equipment or material to exist or function in the same system or environment without mutual interference. Common types of compatibility include electrical, electromagnetic, human-systems interface, and physical. (Human Systems Integration Requirements Pocket Guide, USAF Human Systems Integration Office, September 2009)
Transportability - The capability of material to be moved by towing, self-propulsion, or carrier through any means such as railways, highways, waterways, pipelines, oceans, space, and airways. (Joint Publication 1-02, DoD Dictionary of Military and Associated Terms, 8 November 2010)
Environment - Air, water, land, space, cyberspace, markets, organizations, living things, built infrastructure, cultural resources, and the interrelationships that exist among them. Environmental considerations may affect the concept of operations and requirements to protect systems from the environment and to protect the environment from system design, manufacturing, operations, sustainment, and disposal activities. (Human Systems Integration Requirements Pocket Guide, USAF Human Systems Integration Office, September 2009)
Human Systems Integration - The integrated, comprehensive analysis, design and assessment of requirements, concepts and resources for system Manpower, Personnel, Training, Environment, Safety, Occupational Health, Habitability, Survivability and Human Factors. (AFI10-601)
System Training - All training methodologies (embedded, institutional, Mobile Training Team, computer, and web-based) that can be used to train and educate operator and maintainer personnel in the proper technical employment and repair of the equipment and components of a system and to educate and train the commanders and staffs in the doctrinal tactics, techniques, and procedures for employing the system in operations and missions. (JCIDS Manual, 31 January 2011)
Safety - Promotes system design characteristics and procedures to minimize the potential for accidents or mishaps that cause death or injury to operators, maintainers, and support personnel, or that threaten the operation of a system or cause cascading failures in other systems. (Human Systems Integration Requirements Pocket Guide, USAF Human Systems Integration Office, September 2009)
Occupational Health - Promotes system design features and procedures that serve to minimize the risk of injury, acute or chronic illness or disability, and enhance job performance of personnel who operate, maintain, or support the system. (Human Systems Integration Requirements Pocket Guide, USAF Human Systems Integration Office, September 2009)
Utilization Rate - The average life units expended or missions attempted (launched and airborne) per system or subsystem during a specified interval of time. (AFPAM63-128, 5 October 2009)
Documentation - Operator and maintenance instructions, repair parts lists, and support manuals, as well as manuals related to computer programs and system software such as the software load instruction, user manuals, and system administrator manuals. (AFOTECPAM99-104, 9 November 2010)

The analyst must consider various factors such as the study questions and objectives, the maturity of the alternative concepts, and data availability when selecting measures for the analysis. For example, emerging or developmental systems may not have sufficient data to measure certain aspects of sustainability. Given these factors, the analyst must use some judgment in determining whether the selected measures are sufficient for conducting the sustainability performance analysis.

As stated in Chapter 4, the description of the MOSs should include the supported mission task, attribute, measure statement, criteria, and data information. Table 7-2 provides an example of a sustainability task and its associated measure parameters. At a minimum, the measure criteria should identify the threshold standard (i.e., the minimum acceptable operational value of a system capability or characteristic below which the utility of the system becomes questionable) and, if necessary, an objective standard (i.e., an operationally significant increment above the threshold). An objective value may be the same as the threshold when an operationally significant increment above the threshold is not identifiable.

Table 7-2: Measure of Suitability Description Example

Analysts typically rely on a combination of study methods to collect and analyze data and assess the sustainability of alternative systems. Selection of the study method depends largely on the data requirements, availability of applicable tools or techniques, and the maturity and specificity of the alternatives. Several commonly used methods are described below:

Modeling and Simulation (M&S): A model is a physical, mathematical, or logical representation of a system, entity, phenomenon, or process that allows for investigation of the properties of the system. A simulation is a method for implementing a model over time. M&S offers several advantages such as repeatability and control since events can be replicated under controlled conditions.

An example of M&S that has been used to analyze sustainability of systems is the Logistics Composite Model (LCOM). LCOM is an Air Force Standard Analysis Toolkit (AFSAT) model used to identify the best mix of logistical resources to support a given weapon system under certain operational constraints (e.g., aircraft sortie rates, maintenance and supply policies, manpower levels, and spare part quantities). Logistics resources include manpower, spare parts, support equipment, and facilities. The supportability of design alternatives can be evaluated by varying the reliability and maintainability characteristics of the components and tasks contained in the database. The impact of policy decisions (e.g., organizational, maintenance concepts, and personnel) upon resource requirements or sortie generation capability can be analyzed as well.

Concept Characterization: Also referred to as alternative characterization, this method uses data and information gleaned from CCTD documents, Requests for Information (RFI), and other documents (e.g., reports, studies, and analyses). Once verified by the analyst, the data and information can be used in various ways. For example, data may be used as inputs to parametric, statistical, or simulation models (e.g., altitude and range parameters are used along with other variables as inputs to a model to determine survivability of a system). Other possible uses of the data and information include resolving measures (e.g., the number of 463L pallet positions required for transport of an alternative identified in the CCTD is used to determine whether the alternative meets the two pallet position threshold standard for transport) as well as identifying operational, technical, and programmatic risks associated with sustainability.

Expert Elicitation: Expert elicitation is a structured approach of gathering subject matter expert judgment and answering questions concerning issues or problems of interest in a study. Since expert judgment is affected by the approach used to gather it, a specially designed process is required that includes procedures for developing questions, conducting the elicitation, and handling biases that may arise. Although the process is formal and structured, it can differ in terms of the degree of interaction between experts, level of detail in information elicited, number of meetings, type of communication method, and degree of structure in the elicitation process. Individual or group interviews are commonly used to elicit the information.

Expert elicitation is particularly useful for collecting information from subject matter experts regarding the deployability, transportability, and maintainability of alternatives. For example, after reviewing technical and design information associated with each alternative, maintenance experts are asked to answer a series of questions on the ease of maintainability of critical components of each alternative.

Comparative Analysis: The purpose of the comparative analysis is to select or develop a Baseline Comparison System (BCS) that represents characteristics of the new system for projecting supportability-related parameters, making judgments concerning the feasibility of the new system supportability parameters, and determining the supportability, cost, and readiness drivers of the new system.

A BCS may be developed using a composite of elements from different existing systems when a composite most closely represents the design, operation, and support characteristics of a new system alternative. The analysis requires the use of experience and historical data on similar existing systems that are relevant to the materiel solutions being considered in the AoA study. If support parameters (e.g., resupply time, turnaround times, transportation times, and personnel constraints) are to be projected, then current systems (support systems) which are similar to the new system's support concept must be identified. This may be a support system completely different in design characteristics than the one supporting similar systems.

The level of detail required in describing comparative systems will vary depending on the amount of detail known about the new system's design, operational, and support characteristics and the accuracy required in the estimates for new system parameters. Early in the system life cycle, when the design concept for the new system is very general, only a general level comparative system description should be established. For this preliminary analysis, the analyst should identify existing systems and subsystems (hardware, operational, and support) useful for comparative purposes with new system alternatives. The results of the analyses can help identify supportability, cost, and readiness drivers of each significantly different new system alternative.

7.4.2 Operations and Support Cost Analysis

Operations and Support (O&S) cost is the cost element associated with sustainability. In determining O&S cost, the cost analysis should include the support resources necessary to achieve specified levels of readiness for a range of assumptions regarding various aspects such as system reliability, maintainability, usage rates, and operating scenarios. Because of their potential impact on product performance, readiness, and cost, all manpower and personnel requirements (i.e., quantities, skills, and skill levels) should be identified and evaluated early. Due to the uncertainty in estimating resource costs such as manpower and energy, sensitivity analyses should be performed to help identify the various factors which drive life cycle costs.

The O&S cost element structure is divided into six major categories (Table 7-3). If a cost applies to a system, the cost structure identifies where a specific type of cost should appear in the estimate. Some cost elements refer to expenses that may not apply to every system. For example, ground radar systems do not have training munitions or expendable stores. In this case, the O&S estimate for the radar system would omit (or record as zero) that portion of the cost structure.

Table 7-3: Operations and Support Cost Element Categories

1.0 Unit Personnel - Cost of operators, maintainers, and other support personnel assigned to operating units. Includes active and reserve military, government civilian, and contractor personnel costs. While the cost elements in this category separate operators, maintainers, and other direct support personnel, unit personnel sometimes serve in more than one capacity. If this occurs, ensure that all three types are accounted for in one of the categories and group the personnel specialties using their predominant responsibility. To the extent possible, government personnel costs will be based on personnel grades and skill categories. Costs of military, government civilian, and contractor personnel will be separately shown in the estimate of unit personnel costs.
2.0 Unit Operations - Cost of unit-level consumption of operating materials such as fuel, POL, electricity, expendable stores, training munitions, and other operating materials. Also included are any unit-funded support activities; training devices or simulator operations that uniquely support an operational unit; temporary additional duty/temporary duty (TAD/TDY) associated with the unit's normal concept of operations; and other unit-funded services. Unit-funded service contracts for administrative equipment as well as unit-funded equipment and software leases are included in this portion of the estimate. Unit operating costs provided through a system support contract will be separately identified from those provided organically.
3.0 Maintenance - Cost of all maintenance other than maintenance personnel assigned to operating units (includes contractor maintenance). Includes the costs of labor above the organizational level and materials at all levels of maintenance in support of the primary system, simulators, training devices, and associated support equipment. Where costs cannot be separately identified to separate levels of maintenance, use the category that represents the predominant costs. All maintenance costs provided through a system support contract will be separately identified within the appropriate cost element.
4.0 Sustaining Support - Cost of support activities other than maintenance that can be attributed to a system and are provided by organizations other than operating units. Includes support services provided by centrally managed support activities not funded by the units that own the operating systems. It is intended that costs included in this category represent costs that can be identified to a specific system and exclude costs that must be arbitrarily allocated. Where a single cost element includes multiple types of support, each should be separately identified in the cost estimate.
5.0 Continuing System Improvement - Cost of hardware and software modifications to keep the system operating and operationally current. Includes the costs of hardware and software updates that occur after deployment of a system that improve a system's safety, reliability, maintainability, or performance characteristics to enable the system to meet its basic operational requirements throughout its life. These costs include government and contract labor, materials, and overhead costs. Costs will be separated into government and contractor costs within each cost element.
6.0 Indirect Support - Cost of support activities that provide general services that cannot be directly attributed to a system. Indirect support is generally provided by centrally managed activities that support a wide range of activities. Indirect support costs are those installation and personnel support costs that cannot be directly related to the units and personnel that operate and support the system being analyzed. O&S cost analyses should include marginal indirect costs. The intention is to include only the costs that would likely result in changes to DoD budgets if the action being analyzed (e.g., new system development, etc.) occurs.
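
As a simple illustration of how the six cost element categories roll up to an O&S total, and of
the kind of sensitivity excursion on a single cost driver discussed above, consider the sketch
below; all dollar values, the assumed fuel share of Unit Operations, and the fuel growth factor
are notional assumptions, not estimates.

    # Illustrative only -- notional O&S cost roll-up by cost element category ($M, then-year).
    os_estimate = {
        "1.0 Unit Personnel": 820.0,
        "2.0 Unit Operations": 640.0,          # includes fuel/energy
        "3.0 Maintenance": 910.0,
        "4.0 Sustaining Support": 180.0,
        "5.0 Continuing System Improvement": 250.0,
        "6.0 Indirect Support": 140.0,
    }

    total_os = sum(os_estimate.values())
    print(f"Total O&S estimate: ${total_os:,.0f}M")

    # Notional sensitivity excursion: assume fuel/energy is 40% of Unit Operations and
    # examine the impact of a 25% fuel price increase on the O&S total.
    fuel_share, fuel_growth = 0.40, 0.25
    delta = os_estimate["2.0 Unit Operations"] * fuel_share * fuel_growth
    print(f"O&S total with +25% fuel price: ${total_os + delta:,.0f}M")
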
The Department of Defense is increasingly using contract support for many aspects of system operations and support, including functions that have historically been provided by government organizations. Knowing the maintenance concept and product support strategy for each O&S function is important to cost estimators. O&S cost estimates should clearly identify the expected source of support for each element of an O&S cost estimate.

Interim contractor support (ICS) provides logistics support on a temporary basis until a government support capability is established. The scope and duration of ICS varies depending on acquisition strategy and other management decisions. ICS costs are normally included in the O&S estimate, unless explicitly covered in the production/investment cost estimate.

7.4.3 Sustainability Risk Assessment

The design, maintenance concept, product support strategy, support system design, and availability of support data and resources are significant sources of risk to the sustainability of a system. Risks associated with sustainability should be assessed early in the acquisition since failing to do so could cause significant consequences in the program's later phases.

The risk assessment of sustainability constraints and concepts should be an integral part of the sustainability analysis. The assessments should identify risk drivers, determine the sensitivity of interrelated risks, and quantify risk impacts. Again, the analyst should rely on experience and historical data to help identify risk factors.

For more information, refer to the following sources of information:

- Air Force Analysis of Alternatives Measure Development Process and Guidelines. July 2011. Office of Aerospace Studies, Kirtland AFB, NM.
- Air Force Analysis of Alternatives Suitability Measures. July 2011. Office of Aerospace Studies, Kirtland AFB, NM.
- AFPAM 63-128, Guide to Acquisition and Sustainment Life Cycle Management. October 5, 2009.
- AFI 63-101, Acquisition and Sustainment Life Cycle Management. April 8, 2009.
- Department of Defense Risk Management Guide for DoD Acquisition, Sixth Edition. August 2006.
- Department of Defense Acquisition Logistics Handbook, MIL-HDBK-502. May 30, 1997. USAMC Logistics Support Activity, Redstone Arsenal, AL.

7.5 Reliability, Availability, Maintainability and Cost Rationale Report

For efforts designated as JROC Interest, an initial Reliability, Availability, Maintainability and Cost Rationale Report (RAM-C Report) should be developed as part of the AoA study. For all other efforts, the required RAM-C Report is determined by the DoD component. The report is designed to ensure effective collaboration between the requirements and acquisition communities in the establishment of RAM requirements for the Milestone A decision. Although this report may be limited in scope due to the many unknowns at this stage, it will still describe the reliability, availability, and maintainability requirements, assumptions, rationale, and ownership costs to ensure that effective sustainment is addressed early in the life cycle for all systems.

For more information, refer to the following sources of information:

- Department of Defense Reliability, Availability, Maintainability, and Cost Rationale Report Manual. June 1, 2009. Washington, DC, Office of the Secretary of Defense.
- Department of Defense Guide for Achieving Reliability, Availability, and Maintainability. August 3, 2005.
- USD(AT&L) Directive-Type Memorandum (DTM) 11-003, Reliability Analysis, Planning, Tracking and Reporting. March 21, 2011.

7.6 Sustainment Key Performance Parameter

For all projected Acquisition Category (ACAT) I programs, sustainment is a mandatory Key Performance Parameter (KPP) that should be addressed in the AoA study and the associated initial RAM-C Report. For all other programs, the sponsoring command will determine the applicability of the sustainment KPP. See Appendix L for additional information on KPPs.

As shown in Figure 7-1, the sustainment KPP consists of a KPP, availability, and two supporting Key System Attributes (KSAs), reliability and ownership cost (see Appendix L for more information on KSAs). It is important to note that AoA studies and other supporting analyses provide the analytic foundation for determining appropriate threshold and objective values of system attributes and aid in determining which attributes should be KPPs or KSAs.

Figure 7-1: Sustainment Key Performance Parameter

The availability KPP has two components, materiel availability and operational availability. Materiel availability (MA) is the measure of the total inventory of a system operationally capable (ready for tasking) of performing an assigned mission at a given time, based on materiel condition. Development of this measure is a program manager responsibility and is determined later in the acquisition cycle. Consequently, the measure would not be addressed in an AoA study in the pre-Milestone A phase.

Operational availability (AO) is the percentage of time that a system or group of systems within a unit is operationally capable of performing an assigned mission. Development of this measure is the requirements manager responsibility and requires an analysis of the projected system and planned use as identified in the CONEMP.
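
One commonly used formulation, shown below in standard reliability notation rather than drawn
from this handbook, expresses operational availability as uptime over total time, where MTBM is
mean time between maintenance and MDT is mean downtime:

    A_O \;=\; \frac{\text{Uptime}}{\text{Uptime} + \text{Downtime}}
        \;\approx\; \frac{\text{MTBM}}{\text{MTBM} + \text{MDT}}
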
Reliability is the ability of a system and its parts to perform its mission without failure, degradation, or demand on the support system. The basic concept of reliability is that the system performs satisfactorily, where satisfactorily implies a lack of a broad variety of undesirable events and subsequent impact. Development of the measure is a requirements manager responsibility. There are two aspects of reliability, mission reliability and materiel or logistics reliability.

Mission reliability is the capability of a system to perform its required function for the stated mission duration or for a specified time into the mission. The mission reliability calculation depends on the system and may include numbers of operating hours, critical failures, successful missions, and sorties flown. Typical measures of mission reliability include break rate (BR), mean time between critical failure (MTBCF), and weapon system reliability (WSR).

Materiel (logistics) reliability is the capability of a system to perform failure free, under specified conditions and time, without demand on the support system. All incidents that require a response from the logistics system (i.e., both maintenance and supply systems) are addressed in the measure. The materiel (logistics) reliability calculation depends on the system and may include number of flight hours, maintenance events, operating hours, and possessed hours. A typical measure of materiel (logistics) reliability is the mean time between maintenance (MTBM).
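
As a minimal illustration of how these mission and materiel (logistics) reliability measures are
typically computed from operating data, the sketch below uses entirely notional counts; the
counting rules (what constitutes a critical failure or a maintenance event) are system-specific
assumptions that the study team would have to define.

    # Illustrative only -- notional operating data for one alternative over an evaluation period.
    operating_hours = 12_000.0
    critical_failures = 15      # failures of a mission-essential function
    maintenance_events = 400    # all events requiring a maintenance or supply response

    mtbcf = operating_hours / critical_failures    # mean time between critical failure (mission reliability)
    mtbm = operating_hours / maintenance_events    # mean time between maintenance (materiel reliability)

    print(f"MTBCF: {mtbcf:.0f} hours, MTBM: {mtbm:.0f} hours")
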
Finally, ownership cost is the operations and support (O&S) cost associated with the availability KPP. Ownership cost provides balance to the sustainment solution by ensuring that the O&S costs associated with availability are considered in making decisions. The cost includes energy (e.g., fuel, petroleum, oil, lubricants, and electricity), maintenance, manpower/personnel costs, support sustainment, and continuing system improvements regardless of funding source. All costs cover the planned life cycle timeframe. Fuel costs are based on the fully burdened cost of fuel. The analysis should identify the associated sources of reference data, cost models, and parametric cost estimating techniques or tools used to create the cost estimates. The ownership cost is included as part of the total life cycle cost estimate in the AoA study.

312
313 Air Force Analysis of Alternatives Suitability Measures. July 2011. Office of Aerospace
314 Studies, Kirtland AFB, NM.
315 Manual for the Operation of the Joint Capabilities Integration and Development System
316 (JCIDS Manual). January 19, 2012.
317 Chairman of the Joint Chiefs of Staff (CJCSI 3170.01G) Joint Capabilities Integration and
318 Development System. January 10, 2012.
319
320
321

8 Alternative Comparisons

The AoA must explore tradespace in performance, cost, risk, and schedule across a full range of alternatives to address validated capability requirements. Therefore, once the operational effectiveness analysis results, life cycle cost estimates, and risk assessments are completed, it is time to bring that information together and address overall sensitivities and tradeoffs through comparative analysis.

Comparing the alternatives involves the simultaneous consideration of the alternatives' cost, operational effectiveness, and associated risks; the outcome of this comparison highlights the factors that influence the tradespace. Consumers are familiar with the concept of comparing alternatives, whether buying laundry detergent, a new car, or a home. They collect data on costs and make assessments on how well the alternatives will meet their needs (the effectiveness of the alternatives) and any potential risks associated with each option. With data in hand, consumers make comparisons and identify the tradespace to consider before buying the product or service. In an AoA, the process is essentially the same. Keep in mind that there is rarely a clear-cut single answer.

8.1 Alternative Comparison Methodology

8.1.1 Sensitivity Analysis during Alternative Comparison

Sensitivity analysis continues during this phase. It should leverage the sensitivity analysis accomplished in the operational effectiveness analysis, cost analysis, and risk assessments. The previous sensitivity analyses should have identified the cost, schedule, risk, and performance drivers to be considered as part of the tradespace analysis being conducted during this phase. The sensitivity analysis associated with this comparative analysis must accomplish the following to ensure it meets the decision makers' expectations and the requirements for the AFROC and CAPE sufficiency reviews:

- Identify the proposed parameters for the RCT, along with recommended threshold/objective values for further exploration in the tradespace.
- Identify why those parameters are proposed for the RCT.
- Identify the assumptions and variables highlighted by the sensitivity analysis.
- Explore the sensitivity of the RCT values by addressing the impact of changes to cost, effectiveness, and performance on the alternatives' ability to mitigate gaps.
- Identify key assumptions that drive results.
- Identify the conditions and assumptions for which an alternative is or is not affordable.
- Identify the conditions and assumptions for which an alternative does or does not adequately mitigate the operational gap.
- Identify how legacy forces complement the alternatives.
- Examine the robustness of the results. This should address the effectiveness, cost, and risk changes that alter the comparative relationships among alternatives.
- Examine variations of cost elements identified as significant drivers. This is intended to identify the point at which further expenditure provides little additional value.
- Identify performance parameters that make significant changes to mission effectiveness or are most likely to influence development and/or production cost.

8.1.2 Cost/Capability Tradeoff Analysis

The study team uses the cost/capability tradeoff analysis to determine the best value alternative that provides acceptable capability to the warfighter. In conducting the analysis, the study team should consider the affordability constraints expressed in the AoA guidance or ADM.

Figure 8-1 shows an example presentation of the cost/capability tradeoff analysis results for a notional Aircraft Survivability System. Alternatives 1 and 2 are the most viable of the alternatives analyzed and are shown in the figure (note that non-viable alternatives are not displayed). The life cycle cost estimates are shown in $B along the x-axis. The y-axis shows the probability of survival for a specific ISC and vignette. The results from other scenarios and vignettes can be shown in separate charts to help the decision makers understand how robust the alternatives are in different scenarios/vignettes. Alternatively, the results associated with all the scenarios and vignettes analyzed in the study can be combined and presented in one chart. Probability of survival was selected since it will be a Key Performance Parameter (note that the threshold and objective values are highlighted on the chart). Other possibilities for the y-axis include reduction in lethality and loss exchange rate.
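
A chart of this kind can be mocked up quickly; the sketch below uses matplotlib with entirely
notional cost, probability-of-survival, threshold, and objective values to show the general
presentation style described above, not actual AoA results.

    # Illustrative only -- notional data for a Figure 8-1 style cost/capability chart.
    import matplotlib.pyplot as plt

    points = {  # option: (life cycle cost in $B, probability of survival)
        "Alt 1 (basic)": (2.1, 0.58),
        "Alt 1 + A": (2.6, 0.71),
        "Alt 1 + B": (3.4, 0.74),
        "Alt 2 (basic)": (2.8, 0.70),
        "Alt 2 + X": (4.0, 0.86),
        "Alt 2 + Y": (4.9, 0.88),
    }
    threshold, objective = 0.70, 0.85   # notional KPP values

    fig, ax = plt.subplots()
    for label, (cost, ps) in points.items():
        ax.scatter(cost, ps)
        ax.annotate(label, (cost, ps), textcoords="offset points", xytext=(5, 5))
    ax.axhline(threshold, linestyle="--", label="Threshold")
    ax.axhline(objective, linestyle=":", label="Objective")
    ax.set_xlabel("Life Cycle Cost Estimate ($B)")
    ax.set_ylabel("Probability of Survival (single ISC/vignette)")
    ax.set_title("Notional Cost/Capability Tradeoff")
    ax.legend()
    plt.show()
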
The table below the graph provides a summary showing the probability of survival and LCCE values as well as the overall risk rating for each increment of capability of each alternative. The color rating for the probability of survival is based on whether the alternative meets the threshold/objective value (a simple sketch of this rating logic follows the color definitions below):

- Red: Did not meet threshold, significant shortfall
- Yellow: Did not meet threshold, not a significant shortfall
- Green: Met threshold
- Blue: Met objective
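
A minimal sketch of the color rating logic is shown below; the definition of a "significant"
shortfall (here, more than 10% below threshold) is an assumption that the study team and
stakeholders would need to agree on, as discussed later in this chapter.

    # Illustrative only -- maps a measured value against threshold/objective criteria.
    def color_rating(value, threshold, objective, significant_margin=0.10):
        """Return the presentation color for a 'higher is better' measure."""
        if value >= objective:
            return "Blue"        # met objective
        if value >= threshold:
            return "Green"       # met threshold
        if value >= threshold * (1 - significant_margin):
            return "Yellow"      # missed threshold, not a significant shortfall
        return "Red"             # significant shortfall
        # When threshold equals objective, only Red/Yellow/Green would normally be reported.

    print(color_rating(0.72, threshold=0.70, objective=0.85))   # -> "Green"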

Figure 8-1: Aircraft Survivability System Cost/Capability Tradeoff Example

Alternative 1 with the basic capability is significantly below the threshold value and is therefore rated red, whereas alternative 2 with the basic capability meets the threshold and is rated green. Alternative 1 with the A and B increments of capability meet the threshold and are rated green, while alternative 2 with the X and Y increments of capability meet the objective value, and are therefore rated blue. In situations where there is no objective value (threshold = objective), then only the red, yellow, and green ratings should be used. In other situations where threshold and objective values do not exist, the team will need to explain the difference in performance without referencing these values. In this example, Alternative 1 with the A increment and Alternative 2 with the basic capability (circled in red) may be the best value options. Alternative 2 with the X and Y increments (circled in blue) are the high performance, cost, and risk options.

Figure 8-2 shows another example presentation of the cost/capability tradeoff analysis results for a notional Target Defeat Weapon. Alternatives 1, 2, and 3 are the most viable of the alternatives analyzed and are shown in the chart. The life cycle cost estimates are shown in $B along the x-axis. The y-axis shows the probability of functional kill for two ISC vignettes. The vertical bars show the Target Template Sets (TTS) analyzed in the study. TTS range from very simple to extremely complex and are defined in terms of hardness, depth, construction design, and function (e.g., command and control, operations, storage, leadership, etc.). The current baseline performance is shown on the chart (probability of functional kill = .55).

Alternative 1 provides increased probability of functional kill (+.11 over the current baseline systems) and is capable of functional kills in the TTS-F and G that are not possible with the existing baseline weapons. LCCE is $3B and the overall risk was rated moderate. Alternative 2 provides additional functional kill capability (+.17 over the current baseline systems) and is capable of functional kills in the TTS-F, G, H, I, and J that are not possible with the existing baseline weapons. LCCE is $4.2B and the overall risk was rated high. Finally, Alternative 3 provides the most functional kill capability (+.22 over current baseline systems) and is capable of functional kills in the TTS-F, G, H, I, J, and K that are not possible with existing baseline weapons. LCCE is $5.3B and the overall risk was rated high.

It is important to note that none of the alternatives are capable of functional kills in the TTS-L, N, O, and Q. If TTS-L, N, O, and Q include targets that are the most critical to the warfighter, the determination of whether any of the alternatives is a best value option becomes more difficult despite the additional capability each of the alternatives provides over the baseline.

Figure 8-2: Target Defeat Weapon Cost/Capability Tradeoff Example

There may be other ways to present cost/capability trade information, but regardless of the method, the message must be clear and cogent. It is important to avoid rolling up or aggregating effectiveness results since it can hide important information.

8.1.3 Alternative Comparison Presentation

The objective of the comparative analysis presentation is to show how the capabilities of each alternative close or mitigate the capability gap(s) and present the associated tradespace to the senior decision makers. Typically, there may be several viable alternatives, each with different costs, effectiveness, and risks. There is no requirement for an AoA to identify a single solution.

The study team must answer the high-level concerns/questions from the guidance and those that arise during the course of the AoA. The study team must also address the capabilities of the alternatives to close or mitigate the capability gap(s) and the associated reduction in operational risk (as identified in the CBA, ICD, and appropriate Core Function Master Plans (CFMPs)).

The study team should illustrate the operational impact of failing to meet threshold values. The presentation should show why each key performance parameter in the RCT was chosen and the tradespace analysis that resulted in the threshold values. The RCT should contain those key parameters that are so critical that failure to meet them brings the military utility of the solution into question and risks appropriately mitigating the gap. Finally, the study team should identify and discuss the key items that discriminate among the alternatives. This will aid in the presentation of the critical findings.

Once all of the analysis is complete and understood, the study team must determine the most appropriate way to present the critical findings of the AoA. Decision makers expect this information to be presented for the capability gap(s), illustrating the trade-offs identified. There are several ways to display this, as shown in the following examples.

In addition to the presentation examples discussed in the cost/capability tradeoff analysis section, Figure 8-3 shows a notional example of an alternative comparison. In this illustration, MOEs a, b, and c are all critical, enabling the study team to show important differences in performance by alternative.

Figure 8-3: Example of Critical MOE Results

478 Figure 8-4 shows a second example of presenting findings from the analysis. The example
479 presents the effectiveness, operational risk, technical maturity, other risk factors, and costs of each
480 alternative. If this approach is utilized, it is important to define the color rating scheme. The
481 color ratings for the measures are based on the capability of the alternatives to meet the measure
482 criteria (threshold, objective values). In this example, the operational risk and technical maturity
483 color rating methods should be discussed and agreed upon by stakeholders and decision makers
484 prior to executing the alternative comparisons. Once the study team applies the rating method,
485 they should conduct a review of the results to determine whether the method is sound or must be
486 revised.
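As a minimal sketch of one such pre-agreed rating rule, the fragment below rates a measure green when the objective is met, yellow when only the threshold is met, and red otherwise; the measure names and values are invented, and any actual cut-off logic would be whatever the stakeholders and decision makers agree to beforehand.

    # Sketch of a simple color rating rule keyed to threshold/objective values.
    # Measure names and values are illustrative only.
    def rate(value: float, threshold: float, objective: float) -> str:
        if value >= objective:
            return "green"
        if value >= threshold:
            return "yellow"
        return "red"

    measures = {
        # measure: (alternative's result, threshold, objective)
        "MOE a": (0.82, 0.75, 0.90),
        "MOE b": (0.95, 0.80, 0.90),
        "MOE c": (0.60, 0.70, 0.85),
    }

    for name, (value, threshold, objective) in measures.items():
        print(f"{name}: result {value:.2f} -> {rate(value, threshold, objective)}")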
487

488
489 Figure 8-4: Example of Comparing Alternatives by Effectiveness, Risk, and Cost
490
491 It is important to ensure the information presented is clear, concise, cogent, and unbiased. The
492 presentation should accurately depict the analysis results, present understandable interpretations,
493 and support recommendations. The more clearly and straightforwardly the results are presented, the
494 easier it becomes to understand the differences among the alternatives. The study team's job is to
495 help the decision makers understand those differences.
496
497 The Study Director should determine the best way to tell the story of what was learned in the
498 AoA. OAS and the A9 community can assist the team in developing the story. Every effort is
499 different, but OAS and the A9 community will be able to share examples to aid in this
500 development. Finally, the Study Director should plan sufficient time for stakeholder and senior
501 decision maker review of the results.
502
503
504
505

506 9 Documenting Analytical Findings
507
508 With analysis complete, the final report and final results briefing document what was learned
509 concerning the ability to solve or mitigate the examined capability gaps. This documentation
510 contains answers to key questions from decision makers. Appendix D contains the template for
511 the final report.
512
513 The primary purpose of the documentation is to illustrate, for decision makers, the cost, schedule,
514 performance, and risk implications of tradespace around the validated capability requirements.
515 The final report should also provide feedback to the requirements process. This feedback
516 addresses those validated capability requirements that, upon further study, appear unachievable
517 and/or undesirable from cost, schedule, performance, and risk points of view.
518
519 The team provides the documents to the AFRRG and AFROC which determine Air Force
520 investment decisions prior to delivery to the MDA and any other appropriate oversight groups.
521 The JSD associated with the effort determines which decision makers, senior leaders, and review
522 groups receive the documents. For JROC Interest efforts, the documents will also be provided to
523 CAPE (for sufficiency review), the JROC, JCB, FCB, AT&L-led OIPT, DAB, and the Study
524 Advisory Group.
525
526 According to Air Force AoA final report approval criteria, the report should include:
527
528 Assumptions and rating criteria used for evaluation in each of the analyses
529 Answers to the key questions outlined in the study guidance. These must be answered
530 sufficiently for decision makers to support the upcoming decisions.
531 Identification of enablers and how they align with those outlined at the MDD and in the
532 AoA guidance.
533 Identification of the effectiveness, cost, and risk drivers and how they were fully explored
534 in sensitivity analysis.
535 Discussion of the tradespace through cost, effectiveness, and risk analysis. This must
536 clearly identify for the decision makers where the trade-offs exist, the operational risk
537 associated with the performance, and the degree to which the capability gap(s) have been
538 mitigated.
539 Identification of the key parameters in the RCT and analytical evidence to support the
540 thresholds and objectives identified. This must include identifying what the associated
541 cost drivers are for those values and how sensitive the cost is to those values.
542 Identification of the sensitivity of each alternative to the analysis assumptions and their
543 sensitivity to specific scenarios.
544 Identification of technical feasibility of thresholds and objectives identified in the RCT
545 based on the affordability constraints identified.
546 Identification and scope of additional information/analysis required prior to initiation of
547 acquisition activities (e.g., requesting a milestone decision).

548 Identification of how the cost of each alternative aligns with the affordability constraints
549 identified at MDD and in the study guidance.
550 Identification of sustainability considerations for the operational environment.
551
552 To facilitate a successful CAPE sufficiency review for the MDA, the team should provide the
553 following:
554
555 Identification of the measures evaluated, including any cost, performance, and schedule
556 trade-off analyses conducted.
557 Evaluation of benefit versus risk. Risks should include an examination of technical, cost,
558 and schedule risks in addition to operational risks. It is important for the team to address
559 the non-operational benefits and risks with the same level of fidelity/rigor as the
560 operational benefits and risks. Regarding risks, it is equally important for the team to
561 carefully examine the non-operational risks as they can be significant contributors to
562 program failure.
563 Explanation of why alternatives do well or poorly. This must include rationale for the
564 results.
565 Explanation of how variations to CONOPS or performance parameters might mitigate cost
566 or change effectiveness ratings. This should include characterizing the circumstances in
567 which each alternative appears superior and the conditions under which it degrades.
568 Identification of estimated schedules for each alternative. This should include an
569 assessment of existing TRLs/MRLs for critical technologies. This assessment includes the
570 impact of not completing development, integration, and operational testing on schedule and
571 within budget, and the likelihood of achieving the proposed schedule.
572 Identification of practical risk mitigation strategies (if they exist) to minimize impact to
573 delivering operational capability, and any potential workarounds that may be applied if a
574 risk comes to fruition.
575 Identification of all DOTMLPF-P implications for each alternative.
576 Identification and rationale for any questions not answered or analysis that remains
577 incomplete, and recommendations to address these in the future.
578
579 An HPT is not required for development of the final report; however, key stakeholders and
580 representatives should be involved. Members of the enduring HPT are expected to develop the
581 RCT and review the final report.
582
583 A widespread review of the report is useful in ensuring the report appropriately addresses the
584 areas outlined above. The review should start within the originating command. Outside review
585 can be solicited from a variety of agencies, including OAS, appropriate AF/A5R functional
586 divisions, stakeholders, other Services, as appropriate, and CAPE (for ACAT I and JROC Interest
587 programs).
588
589 According to AF/A5R procedures, the following process is to be followed for review and staffing
590 of the final report:
591
592 The MAJCOM provides the final report and briefing simultaneously to the AF/A5R Functional
593 Division Chief and OAS for assessment.
594 The AF/A5R Functional Division Chief will forward the report and briefing (including the
595 OAS risk assessment) simultaneously to AF/A5R-P.
596 After AF/A5R-P Gate Keeper review, the final report will be submitted to the AFRRG.
597 If the AFRRG concurs with the final report, it will be submitted to the AFROC for
598 validation or approval depending on JSD.
599
600 The OAS assessment criteria are applied to evaluate the report's credibility and completeness in light of
601 the requirements outlined above and the study guidance. Appendix F of this handbook contains
602 the OAS assessment criteria for the final report. See Appendix H for the recommended timelines
603 for OAS review of documents prior to submission.
604
605
606

607 Appendix A: Acronyms
608
ACAT Acquisition Category
ACEIT Automated Cost Estimating Integrated Tools
ACTD Advanced Concept Technology Demonstration
ADM Acquisition Decision Memorandum
AF Air Force
AF/A2 Air Force Assistant Chief of Staff for Intelligence
AF/A5R Air Force Director of Requirements
AF/A5R-P Directorate of Operational Capability Requirements, Chief of
Requirements Policy and Process Division
AF/A9 Director, Studies & Analyses, Assessments and Lessons Learned
AFCAA Air Force Cost Analysis Agency
AFI Air Force Instruction
AFMC Air Force Materiel Command
AFOTEC Air Force Operational Test & Evaluation Center
AFP Air Force Pamphlet
AoA Analysis of Alternatives
BA Battlespace Awareness
BCS Baseline Comparison System
BR Break Rate
BY$ Base Year Dollars
CAIV Cost As an Independent Variable
CAPE Cost Assessment and Program Evaluation (OSD)
CAWG Cost Analysis Working Group
CBA Capabilities Based Assessment
CBP Capabilities Based Planning
CCTD Concept Characterization and Technical Description
CDD Capability Development Document
CJCSI Chairman Joint Chiefs of Staff Instruction
CONEMP Concept of Employment

CONOPS Concept of Operations
CPD Capability Production Document
CPIPT Cost Performance Integrated Product Team
DAB Defense Acquisition Board
DAG Defense Acquisition Guidebook
DAP Defense Acquisition Process
DCAPE Director, CAPE
DCR DOTmLPF-P Change Recommendation
DOE Department of Energy
DoD Department of Defense
DODD Department of Defense Directive
DOTMLPF-P Doctrine, Organization, Training, Materiel, Leadership and Education, Personnel,
Facilities, and Policy

DOTmLPF-P* *Note: in this version of the acronym, "m" refers to existing materiel in
the inventory (Commercial Off the Shelf (COTS) or Government Off the Shelf (GOTS)).
Shelf (GOTS)).

DP Development Planning
EA Effectiveness Analysis
EAWG Effectiveness Analysis Working Group
ECWG Employment Concepts Working Group
FCB Functional Capabilities Board
FFRDC Federally Funded Research and Development Center
FM Financial Management
FoS Family of Systems
FOC Full Operational Capability
GAO Government Accountability Office
GAO CEAG GAO Cost Estimating Assessment Guide
GIG Global Information Grid
GRC&A Ground Rules, Constraints & Assumptions
HPT High Performance Team

HSI Human Systems Integration
IC Implementing Command
ICD Initial Capabilities Document
ICS Interim Contractor Support
IDA Institute for Defense Analyses
IIPT Integrating Integrated Product Team
IOC Initial Operational Capability
IPT Integrated Product Team
ISA Intelligence Supportability Analysis
ISR Intelligence, Surveillance and Reconnaissance
ISWG Intelligence Supportability Working Group
IT Information Technology
JCA Joint Capability Area
JCB Joint Capabilities Board
JCD Joint Capabilities Document
JCIDS Joint Capabilities Integration and Development System
JCTD Joint Concept Technology Demonstration
JFACC Joint Force Air Component Commander
JROC Joint Requirements Oversight Council
JS Joint Staff
JSD Joint Staffing Designator
KM/DS Knowledge Management/Decision Support
KPP Key Performance Parameter
KSA Key System Attribute
LCOM Logistic Composite Model
LC Lead Command
LCC Life Cycle Cost
LCCE Life Cycle Cost Estimate
LCMC Life Cycle Management Center
LoE Level of Effort

LRU Line Replaceable Unit
LSC Logistics Support Cost
M&S Modeling & Simulation
MA Materiel Availability
MAJCOM Major Command
MDA Milestone Decision Authority
MDAP Major Defense Acquisition Program
MDD Materiel Development Decision
MER Manpower Estimate Report
MILCON Military Construction
MOE Measure of Effectiveness
MOP Measure of Performance
MOS Measure of Suitability
MOU Memorandum of Understanding
MRL Manufacturing Readiness Level
MS Milestone
MSFD Multi-Service Force Deployment
MT Mission Task
MTBCF Mean Time Between Critical Failure
MTBM Mean Time Between Maintenance
NACA Non-Advocate Cost Assessment
NSSA National Security Space Acquisition
O&S Operations and Support
O&M Operations and Maintenance
OAS Office of Aerospace Studies
OCWG Operations Concepts Working Group
OIPT Overarching Integrated Product Team
OSD Office of the Secretary of Defense
OSD/AT&L Office of the Secretary of Defense for Acquisition Technology & Logistics
OSD/CAPE Office of the Secretary of Defense, Cost Assessment and Program Evaluation
P3I Pre-Planned Product Improvement
PBL Performance-based Logistics
PM Program Manager
POL Petroleum, Oils, and Lubricants
POM Program Objective Memorandum
PPBE Planning, Programming, Budgeting, and Execution
R&D Research and Development
RAF Risk Assessment Framework
RCT Requirements Correlation Table
RDT&E Research, Development, Test & Evaluation
RFI Request for Information
RFP Request for Proposal
S&T Science and Technology
SAF Secretary of the AF
SAF/AQ Assistant Secretary of the AF for Acquisition
SAF/FMC Deputy Assistant Secretary of the AF for Cost and Economics
SAG Study Advisory Group
SCEA Society of Cost Estimating and Analysis
SEER Systems/Software Estimating and Evaluation of Resources
SEM Software Estimating Model
SEP System Engineering Plan
SETA Scientific, Engineering, Technical, and Analytical
SLEP Service Life Extension Program
SME Subject Matter Expert
SoS System of Systems
SRG Senior Review Group
STINFO Scientific & Technical Information
T&E Test and Evaluation
TAWG Technology & Alternatives Working Group

TDS Technology Development Strategy
TEMP Test and Evaluation Master Plan
TES Test and Evaluation Strategy
TOC Total Ownership Cost
TRL Technology Readiness Level
TSWG Threats and Scenarios Working Group
TY$ Then-year (dollars)
USD (AT&L) Undersecretary of Defense for Acquisition, Technology and Logistics
USAF United States Air Force
VCSAF Vice Chief of Staff Air Force
WBS Work Breakdown Structure
WG Working Group
WIPT Working-Level Integrated Product Team
WSARA Weapon Systems Acquisition Reform Act
WSR Weapon System Reliability

609

610 Appendix B: References and Information Sources
611
612 A. Joint Capabilities Integration and Development System (JCIDS) Manual
613 B. CJCSI 3170.01H, JCIDS Instruction
614 C. Capabilities-Based Assessment (CBA) Users Guide
615 D. DODD 5000.01, The Defense Acquisition System
616 E. DODI 5000.02, Operation of the Defense Acquisition System
617 F. Defense Acquisition Guidebook
618 G. DODD 5101.2, DoD Executive Agent for Space
619 H. National Security Space Acquisition (NSSA) PolicyInterim Guidance
620 I. DOD 5000.4-M, Cost Analysis Guidance & Procedures
621 J. Risk Management Guide for DoD Acquisition (and AF-specific implementation)
622 K. AFPD 63-1 Capability-Based Acquisition System
623 L. AFI 10-601 Capabilities-Based Requirements Development
624 M. AFI 10-604 Capabilities-Based Planning
625 N. Information Technology (IT) Related Policies
626 O. Clinger-Cohen Act 1996
627 P. CJCSI 6212.01C - Interoperability and Supportability of IT and NSS
628 Q. DODD 4630.5 Interoperability and Supportability of IT and NSS
629 R. DODI 4630.8 Procedures for Interoperability and Supportability of IT and NSS
630 S. DODD 8100.1 Global Information Grid (GIG) Overarching Policy
631 T. Joint Pub 6-0 Doctrine for C4 Systems Support to Joint Operations
632 U. MIL-HDBK-881B Work Breakdown Structures for Defense Materiel Items (3 October
633 2011)
634 V. DoD Reliability, Availability, Maintainability, and Cost (RAM-C) Rationale Report
635 Manual
636 W. Weapon Systems Acquisition Reform Act (WSARA) of 2009
637 X. OSD Operating and Support Cost-Estimating Guide, May 1992
638

639 Appendix C: Study Plan Template
640
641 This appendix contains the AoA Study Plan template required for the AoA. (CAPE desires a
642 maximum of ten to fifteen pages.)

643

644 -----------------------------Cover Page -----------------------------


645

646 <Name of Project Here>


647

648 Analysis of Alternatives (AoA)


649 Study Plan
650
651 <Lead MAJCOM>
652 <Date>
653
654 Distribution Statement
655 Refer to these sources for more information:
656 1. Department of Defense Directive (DODD) 5230.24, Distribution Statements on Technical
657 Documents
658 2. Air Force Pamphlet (AFP) 80-30, Marking Documents with Export-Control and Distribution-
659 Limitation Statements (to be reissued as Air Force Instruction (AFI) 61-204)
660 Ask the Scientific & Technical Information (STINFO) Officer for help in choosing which of the
661 available statements best fits the AoA
662 REMEMBER -- AoA information may be PROPRIETARY, SOURCE SELECTION
663 SENSITIVE, OR CLASSIFIED
664
665
666

667 -----------------------Table of Contents---------------------
668 1. Introduction
669 1.1. Background
670 1.2. Purpose and Scope
671 1.3. Study Guidance
672 1.4. Capability Gaps
673 1.5. Stakeholders
674 1.6. Ground Rules, Constraints, and Assumptions
675 2. Alternatives
676 2.1. Description of Alternatives
677 2.2. Operational Concepts
678 2.3. Scenarios and Operational Environment
679 3. Effectiveness Analysis
680 3.1. Effectiveness Methodology
681 3.2. Measures
682 3.3. Sensitivity Analysis Methodology
683 3.4. Analysis Tools and Data
684 3.5. Modeling and Simulation Accreditation
685 4. Cost Analysis
686 4.1. Life Cycle Cost Methodology
687 4.2. Work Breakdown Structure
688 4.3. Cost Tools and Data
689 4.4. Cost Sensitivity and Risk Methodology
690 5. Risk Assessment
691 5.1. Risk Assessment Methodology
692 5.2. Risk Assessment Tools
693 6. Alternative Comparison
694 6.1. Alternative Comparison Methodology and Presentations
695 6.2. Cost/Capability Tradeoff Analysis Methodology
696 7. Organization and Management
697 7.1. Study Team Organization
698 7.2. AoA Review Process
699 7.3. Schedule
700 Appendices
701 A. Acronyms
702 B. References
703 C. CCTD(s)
704 D. Modeling and Simulation Accreditation Plan
705 E. Other appendices as necessary
706 ---------------------Plan Section Contents-----------------------
707
708 1. Introduction
709 1.1. Background
710 Briefly describe the history of the effort and related programs. Summarize relevant
711 analyses that preceded this study such as applicable Joint Concept Technology
712 Demonstrations (JCTDs) or Advanced Concept Technology Demonstrations (ACTDs).
713 This should include any lessons learned from previous efforts, especially those that were
714 cancelled.
715 Explain why the study is being conducted now and the key decisions that have been made
716 to this point.
717 1.2. Purpose and Scope
718 Describe the scope and purpose of the AoA. Describe any tailoring or streamlining used
719 to focus the study.
720 Identify potential areas of risk and/or roadblocks pertinent to the study (particularly
721 schedule, lack of required data, lack of stakeholder participation, etc.)
722 Identify the key acquisition or other issues that will be addressed in the analysis. Also
723 explain why any key issues will not be considered or addressed in the analysis.
724 Identify the milestone decision the analysis will inform.
725 1.3. Study Guidance
726 Summarize the AoA study guidance from the Air Force and/or CAPE, as appropriate.
727 Identify the key questions in the guidance.
728 1.4. Capability Gaps
729 Identify and describe the specific AFROC or JROC approved capability gaps that will be
730 addressed in the AoA. Identify the validated sources of these gaps.
731 Identify the threshold/objective requirement values in the ICD and how they will be
732 treated as reference points to explore the tradespace.
733 Identify the timeframe for the operational need.
734 1.5. Stakeholders
735 Identify the stakeholders for this AoA and explain their roles/responsibilities in the AoA.
736 Describe how methodologies, alternatives, evaluation criteria, and results will be reviewed
737 by the stakeholders and oversight groups (e.g., Senior Review Group, Study Advisory
738 Group, etc.).
739 1.6. Ground Rules, Constraints, and Assumptions
740 Identify the AoA ground rules, constraints, and assumptions. Describe the implications of
741 the ground rules, constraints, and assumptions. Reference appropriate assumptions
742 identified in the ICD or AoA guidance and describe their implications to the study.
743 Identify the projected Initial Operating Capability (IOC) and Full Operating Capability
744 (FOC) milestones.
745 2. Alternatives
746 2.1. Description of Alternatives
747 Describe the baseline (existing and planned systems) capability.
748 Describe the alternatives specified in the AoA study guidance and how the alternatives
749 will be employed in the operational environment. Explain the rationale for including them
750 in the study. Explain the rationale for excluding any specific types of alternatives in the
751 study.
752 Discuss dependencies associated with each alternative and how the dependencies will be
753 addressed in the analysis.
754 Identify the appendix that contains the CCTD(s) for baseline and each alternative.
755 2.2. Operational Concepts
756 Identify organizational functions and operations performed during the mission. This
757 includes describing logistics and maintenance concepts.
758 Describe what enablers exist and how they interface with the alternatives. This includes
759 identifying the dependencies of each alternative.
760 Discuss significant tactics, techniques, procedures, and doctrine used.
761 Discuss significant interfaces with other systems.
762 Identify any peacetime and contingency operation implications. Describe any deployment
763 issues.
764 2.3. Scenarios and Operational Environment
765 Describe the scenarios that will be used in the AoA and rationale for their selection. This
766 includes an explanation of how the scenarios represent the operational environment.
767 Describe the expected operational environment, including terrain, weather, location, and
768 altitude. Describe how the environment will impact the alternatives.
769 Describe the enemy tactics (include potential countermeasures).
770 3. Effectiveness Analysis
771 3.1. Effectiveness Methodology
772 Describe the effectiveness methodology, including the types of analysis (e.g., parametric,
773 expert elicitation, modeling and simulation, etc.). This includes describing how
774 performance drivers will be identified and fully explored in the sensitivity analysis.
775 Describe how the methodology and associated measures will be reviewed by the
776 appropriate stakeholder and oversight groups (e.g., Senior Review Group, Study Advisory
777 Group, etc.).
778 Describe how the dependencies identified for each alternative will be addressed in the
779 analysis.
780 Describe the decomposition of the capability gaps and how they will be addressed in the
781 analysis.
782 Describe the methodology to explore the tradespace and give a brief description of what
783 sensitivity analysis will be accomplished to determine Key Performance Parameters/Key
784 System Attributes and threshold/objective (T/O) values for the Requirements Correlation
785 Table (RCT). This includes describing how the tradespace around the capability threshold
786 values will be explored to determine if adjustments need to be recommended based on the
787 results.
788 Describe the methodology to assess sustainability concepts such as reliability, availability,
789 and maintainability.
790 3.2. Measures
791 Identify the Measures of Effectiveness, Suitability, and Performance.
792 Describe the traceability of the AoA measures to the requirements and associated
793 minimum values identified in the ICD (from the CBA).
794 Describe the traceability of the AoA measures to the capability gaps and mission tasks.
795 Discuss how the measures are measurable and will support the development of the post-
796 AoA documents (e.g., CDD, CPD, TES, TEMP).
797 3.3. Sensitivity Analysis Methodology
798 Describe the sensitivity analysis that will be conducted to determine key performance
799 parameters/key system attributes and threshold/objective values for the RCT.
800 3.4. Analysis Tools and Data
801 Describe the analysis methods and tools that will be used to conduct the analysis and the
802 rationale for selection. Describe the input data to be used and corresponding sources.
803 Discuss how the data for the scenarios, threats, and each of the alternatives will be current,
804 accurate, and unbiased (technically sound and doctrinally correct).
805 Describe how the analysis methods and tools will provide data to address the measures.
806 Illustrate how the analysis methods and tools are linked (suggest using the confederation
807 of tools diagram described in Chapter 4 of this handbook).
808 3.5. Modeling and Simulation Accreditation
809 Describe the modeling and simulation accreditation plan.
810 Discuss any potential model biases, such as man-in-the-loop biases.
811 4. Cost Analysis
812 4.1. Life Cycle Cost Methodology
813 Describe the cost analysis methodology. Describe how the cost drivers will be identified
814 and fully explored in sensitivity analysis.
815 Describe how the cost analysis methodology will be reviewed by the stakeholders and
816 oversight groups (e.g., Senior Review Group, Study Advisory Group, etc.).
817 Describe how the dependencies identified for each alternative will be addressed in the
818 analysis.
819 Identify the economic operating life of the alternatives (e.g., 10 year, 20 year, 25 year
820 Operations and Support cost).
821 Describe the methodology for costing Research and Development (R&D), Investment,
822 Operations and Support (O&S), Disposal, and total LCC for each alternative.
823 Identify the sunk costs for information purposes only.
824 4.2. Work Breakdown Structure
825 Describe the cost work breakdown structure.
826 4.3. Cost Tools and Data
827 Describe the cost analysis methods (e.g., analogy, expert opinion, etc.) and models (e.g.,
828 ACEIT, Crystal Ball, etc.) that will be used and the reason for their selection.
829 Describe the input data to be used and corresponding sources.
830 Discuss any potential model shortfalls.
831 4.4. Cost Sensitivity and Risk Methodology
832 Describe the methodology to identify the cost drivers.

833 Describe the methodology for determining the level of uncertainty for each element of
834 LCC and each cost driver.
835 Describe how the cost of each alternative will be assessed with respect to the affordability
836 constraints identified at MDD and in the AoA study guidance.
837 5. Risk Assessment
838 5.1. Risk Assessment Methodology
839 Describe the methodology for identifying risk (operational, technical, cost, and
840 schedule). Discuss how empirical data will be used to assess technical risk, especially in
841 the area of integration risk.
842 Describe the methodology to identify schedule drivers.
843 5.2. Risk Assessment Tools
844 Describe the risk assessment tools or models that will be used in the analysis.
845 6. Alternative Comparison
846 6.1. Alternative Comparison Methodology and Presentations
847 Describe the alternative comparison methodology. If using a color scheme (e.g., red,
848 yellow, green), describe how the color rating will be determined from the values.
849 Describe how the alternative comparison methodology will be reviewed by the
850 stakeholders and oversight groups (e.g., SAG).
851 Describe the methodology for performing the sensitivity tradeoff analysis. This includes
852 describing how knees in the curves for cost drivers will be determined to identify cost-
853 effective solutions rather than single-point solutions.
854 Describe the methodology for identifying the assumptions and variables that, when changed,
855 will significantly change the schedule, performance, and/or cost-effectiveness of the
856 alternatives.
857 Describe the methodology for identifying performance parameters that, when changed, will
858 significantly change operational effectiveness. Also identify performance parameters that, if
859 fixed as performance specifications, are most likely to influence development and
860 production cost.
861 6.2. Cost/Capability Tradeoff Analysis Methodology
862 Describe the cost/capability tradeoff analysis methodology to determine the best value
863 alternative(s) that provide acceptable capability to the warfighter.
864 7. Organization and Management
865 7.1. Study Team Organization
866 Identify how the team is organized and a general description of the responsibilities of each
867 working group.
868 Describe the stakeholders and oversight groups (e.g., Senior Review Group, Study
869 Advisory Group, etc.) and their roles.
870 7.2. AoA Review Process
871 Describe the review process and the oversight groups involved (e.g., Senior Review
872 Group, Study Advisory Group, Milestone Decision Authority, etc.).
873

874 7.3. Schedule
875 Describe the AoA schedule (a chart of the timeline with key decision points and events is
876 suggested). Discuss the ability of the study team to execute the study plan according to
877 the schedule. Identify potential schedule risk pertinent to the study.
878
879 APPENDICES
880 A. Acronyms
881 B. References
882 C. CCTD(s)
883 D. Modeling and Simulation Accreditation Plan
884 E. Other appendices as necessary
885

886 Appendix D: Final Report Template
887
888 This appendix contains the AoA Final Report template required for the AoA.

889

890 -----------------------------Cover Page -----------------------------


891

892 <Name of Project Here>


893

894 Analysis of Alternatives (AoA)


895 Final Report
896
897 <Lead MAJCOM>
898 <Date>
899
900 Distribution Statement
901 Refer to these sources for more information:
902 1. Department of Defense Directive (DODD) 5230.24, Distribution Statements on Technical
903 Documents
904 2. Air Force Pamphlet (AFP) 80-30, Marking Documents with Export-Control and Distribution-
905 Limitation Statements (to be reissued as Air Force Instruction (AFI) 61-204)
906 Ask the Scientific & Technical Information (STINFO) Officer for help in choosing which of the
907 available statements best fits the AoA
908 REMEMBER -- AoA information may be PROPRIETARY, SOURCE SELECTION
909 SENSITIVE, OR CLASSIFIED
910
911
912

913 -----------------------Table of Contents---------------------
914
915 Executive Summary
916 1. Introduction
917 1.1. Purpose and Scope
918 1.2. Study Guidance
919 1.3. Capability Gaps
920 1.4. Stakeholders
921 1.5. Ground Rules, Constraints, and Assumptions
922 1.6. Description of Alternatives
923 2. Operational Effectiveness Analysis Results
924 2.1. Operational Effectiveness Analysis Results
925 2.2. Operational Effectiveness Sensitivity Analysis Results
926 2.3. Requirements Correlation Table (RCT)
927 3. Cost Analysis
928 3.1. Life Cycle Cost Results
929 3.2. Cost Sensitivity and Risk Results
930 4. Risk Assessment
931 4.1. Risk Assessment Results
932 5. Alternative Comparison
933 5.1. Alternative Comparison Results
934 5.2. Sensitivity Analysis Results
935 5.3. Conclusions and Recommendations

936 Appendices
937 A. Acronyms
938 B. References
939 C. CCTD(s)
940 D. Analysis Methodology Details
941 E. Modeling and Simulation Accreditation Final Report
942 F. RAM-C Report (JROC interest and other ACAT I efforts)
943 G. Intelligence Supportability Analysis (ISA)
944 H. Other appendices as necessary
945
946
947
948
949
950 --------------------- Report Section Contents-----------------------
951
952 Executive Summary
953 Describe the purpose of the study.
954 Identify key organizations associated with the study.
955 Summarize the results of the study. This must include a summary of the answers to the
956 key questions in the AoA study guidance and identification of where the trade-offs exist,
957 operational risk associated with the performance and to what degree the capability gap(s)
958 have been mitigated by each alternative.
959 Summarize the key parameters in the RCT and the analytical evidence to support them.
960 1. Introduction
961 1.1. Purpose and Scope
962 Describe the scope and purpose of the AoA. Discuss how the AoA scope was tailored to
963 address the AoA study guidance and ADM. Explain the reason for any incomplete
964 analysis and the plan to complete any remaining analysis.
965 Identify any key MDA or other issues that were not considered or addressed in the
966 analysis. Explain the reason for any unanswered questions and the plan to address them.
967 Identify the Milestone Decision the analysis results will inform.
968 1.2. Study Guidance
969 Summarize the AoA study guidance from the AF and CAPE, as appropriate.
970 Identify the key questions in the guidance.
971 1.3. Capability Gaps
972 Identify and describe the specific AFROC or JROC approved capability gaps that were
973 addressed in the AoA. Identify the validated source of these gaps.
974 1.4. Stakeholders
975 Identify the stakeholders for the AoA and explain their roles/responsibilities in the AoA.
976 Describe how the methodologies, alternatives, evaluation criteria, and results were
977 reviewed and accepted by the stakeholders and oversight groups (e.g., Study Advisory
978 Group).
979 1.5. Ground Rules, Constraints, and Assumptions for the AoA
980 Summarize the overarching AoA ground rules, constraints, and assumptions.
981 Describe the expected need timeframe.
982 1.6. Description of Alternatives
983 Describe the baseline (existing and planned systems) capability.
984 Describe each of the alternatives assessed in the AoA (include any discriminating
985 features).
986 Describe what enablers were addressed and how they align with those identified at MDD
987 and in the AoA guidance.
988 Identify all DOTmLPF-P implications for each alternative.
989 2. Operational Effectiveness Analysis
990 2.1. Operational Effectiveness Analysis Results
991 Describe the results of the effectiveness and sustainability analysis.
992 2.2. Operational Effectiveness Sensitivity Analysis Results
993 Describe the sensitivity analysis conducted.
994 Identify the key parameters highlighted by the sensitivity analysis (performance drivers)
995 and how they were fully explored.
996 2.3. Requirements Correlation Table (RCT)
997 Identify the key parameters in the RCT and analytical evidence to support the thresholds
998 and objectives identified.
999 3. Cost Analysis
1000 3.1. Life Cycle Cost Results
1001 Describe the results of the cost analysis. This includes presentation of the life cycle cost
1002 estimates (LCCEs) and any total ownership costs.
1003 3.2. Cost Sensitivity and Risk Results
1004 Identify the cost drivers highlighted by the sensitivity analysis and how they were fully
1005 explored.
1006 Identify the level of uncertainty for each cost driver.
1007 Identify how the cost of each alternative aligns with the affordability constraints identified
1008 at MDD and in the AoA study guidance.
1009 4. Risk Assessment
1010 4.1. Risk Assessment Results
1011 Describe the results of the risk analysis. Identify operational and non-operational (e.g.,
1012 technical, cost, schedule) risks.
1013 Describe the initial acquisition schedule for each alternative, assessment of existing
1014 TRLs/MRLs for critical technologies which may impact likelihood of completing
1015 development, integration, operational testing on schedule and within budget. This should
1016 include an assessment of the likelihood of achieving the proposed schedule.
1017 For significant risks, identify practical mitigation strategies to minimize impact to
1018 delivering operational capability and, if applicable, potential workarounds in the event
1019 risks are realized.
1020 5. Alternative Comparison
1021 5.1. Alternative Comparison Results
1022 Describe the results of the alternative comparison.
1023 Explain the rationale for disqualifying any alternatives from further consideration.
1024 If appropriate, identify recommended changes to validated capability requirements for
1025 consideration if changes would result in acceptable tradeoffs.
1026 Explain why alternatives do well or poorly (include rationale for the results).
1027 Describe the results of the cost/capability tradeoff analysis. This must clearly identify
1028 where the tradeoffs exist and to what degree the capability gap(s) have been mitigated.
1029 5.2. Sensitivity Analysis Results
1030 Identify the performance, cost, and risk drivers and how they were fully explored in the
1031 sensitivity analysis.
1032 Identify how sensitive the alternatives are to changes in assumptions and how they are
1033 sensitive to changes in specific scenarios.
1034 Identify how sensitive the alternatives are to changes in the threshold and objective values
1035 identified in the RCT (include the associated cost drivers for the values and how sensitive
1036 the cost is to the values).
1037 Explain how variations to CONOPS could mitigate cost drivers or effectiveness
1038 (performance) shortfalls. This should include characterizing the conditions under which
1039 the performance of each alternative improves and degrades.
1040 5.3. Conclusions and Recommendations
1041 Provide conclusions and recommendations based on the analysis.
1042 Provide answers to the key questions identified in the AoA study guidance (must be
1043 answered sufficiently to inform the upcoming decision).
1044 Identify what additional information/analysis is needed prior to initiation of future
1045 acquisition activities and milestone decisions.
1046
1047 APPENDICES
1048 A. Acronyms
1049 B. References
1050 C. CCTD(s)
1051 D. Detailed Description of the AoA methodologies
1052 E. Lessons Learned
1053 F. Modeling and Simulation Accreditation Final Report
1054 G. RAM-C Report (JROC interest and other ACAT I efforts)
1055 H. Intelligence Supportability Analysis (ISA)
1056 I. Other appendices as necessary
1057

1058 Appendix E: Study Plan Assessment
1059
1060 This appendix contains the AoA Study Plan assessment criteria used by OAS in its independent
1061 assessment of an AoA Study Plan and associated briefing for presentation to the AFROC and
1062 OSD/CAPE. This assessment will be presented in bullet fashion, highlighting risk areas associated with
1063 the credibility and defensibility of the analysis results as they progress outside of the AF to the
1064 decision makers. OAS will provide an initial assessment and get-well plan after the initial review
1065 to determine readiness for submission to AF/A5R.
1066 1. AoA purpose, definition and scope consistent with guidance
1067 Identification of the specific gaps that are being addressed in the AoA.
1068 Identification of the key questions identified in the AoA study guidance.
1069 Definition of the baseline (existing and planned systems) capability.
1070 Identification of the alternatives identified by the AoA study guidance. This includes
1071 discussion about the implications and/or dependencies identified about the alternative and
1072 how the dependencies will be addressed in the analysis.
1073 Discussion of previous related studies and their relevance to this study.
1074 2. Appropriate stakeholders, issues, constraints addressed
1075 Identification of the stakeholders and their roles/responsibilities in the AoA.
1076 Identification of how each part of the stakeholder and oversight communities will
1077 participate in the study and review processes.
1078 Addresses all assumptions and constraints in guidance. Additional assumptions and
1079 constraints are reasonable and do not artificially constrain the outcome of the study.
1080 3. Analytic Methodology
1081 Measures of Effectiveness, Suitability, and Performance identified.
1082 Modeling and Simulation Accreditation Plan is acceptable.
1083 Decomposition of the capability gaps.
1084 Traceability of the AoA measures to the requirements and associated minimum values
1085 identified in the ICD (from the CBA).
1086 Cost work breakdown structure.
1087 Methodology to determine capability of alternatives to close or mitigate gaps.
1088 Methodology to explore tradespace and description of what sensitivity analysis will be
1089 accomplished to determine key parameters and T/O values for RCT.
1090 Methodology to conduct the cost/capability tradeoff analysis.
1091 Methodology for addressing the dependencies identified for each alternative.
1092 Scenarios to represent the operational environment.
1093 4. Level of effort and schedule is reasonable
1094 Includes a schedule for AoA activities.
1095 Addresses potential milestones that are driving the AoA.
1096 Addresses the ability of the AoA study team to execute the study plan.
1097 Identifies potential areas of risk and/or roadblocks pertinent to the study (particularly
1098 schedule risk, lack of required data, lack of stakeholder participation, etc.).
1099 Appendix F: Final Report Assessment
1100
1101 This appendix contains the AoA assessment criteria used by OAS for its independent
1102 assessment of AoA Final Reports and associated briefings for presentation to the AFROC. This
1103 assessment will be presented in bullet fashion, highlighting risk areas associated with the completeness,
1104 credibility, and defensibility of the analysis results as they progress outside of the AF to the
1105 decision makers. OAS will provide an initial assessment and get-well plan after the initial review
1106 to determine readiness for submission to AF/A5R.
1107 1. Scope and problem definition consistent with guidance
1108 Description of the scope and purpose of the AoA. Demonstrated consistency with
1109 guidance. Discussed how AoA scope was tailored to address the AoA study guidance
1110 and ADM
1111 Identified any key MDA or other issues that were not considered or addressed in the
1112 analysis (if applicable). This included identification and rationale for any unanswered
1113 questions and/or incomplete analysis and description of the recommended plan to answer
1114 these questions and to bring any remaining analysis to closure.
1115
1116 2. Appropriate stakeholders, issues, constraints addressed
1117 Identification of stakeholder and oversight communities and explanation of their
1118 roles/responsibilities in the AoA
1119 Description of how methodologies, evaluation criteria, and results were reviewed and
1120 accepted by stakeholder and oversight communities
1121
1122 3. Analytic Execution
1123 Assumptions and rating criteria used in the evaluation
1124 Identification of which enablers were addressed and how they align with those outlined at
1125 the MDD and in the AoA guidance
1126 Identification of the performance, cost, and risk drivers and how they were fully explored
1127 in sensitivity analysis.
1128 Identification of how sensitive each of the alternatives is to the analysis assumptions and
1129 if they are sensitive to specific scenarios.
1130 Identification of the key parameters in the RCT and analytical evidence to support the
1131 thresholds and objectives identified. This must include identifying what the associated
1132 cost drivers are for those values and how sensitive the cost is to those values.
1133 Identification of technical feasibility of thresholds and objectives identified in the RCT
1134 based on the affordability constraints identified.
1135 Identification and scoping of what additional information/analysis is needed prior to
1136 initiation of any acquisition activities; to include requesting a milestone decision.
1137 Identification of how the cost of each alternative lines up with the affordability constraints
1138 identified at MDD and in the AoA study guidance.
1139 Identification of Measures of Suitability and how they are intended to be supported in the
1140 intended operational environment.

1141 Identification of the metrics used, any weighting factors applied, and the rationale for
1142 applying each weighting factor. Analysis should illustrate interrelationship between the
1143 metrics and cost to facilitate cost/capability/risk/schedule tradespace discussions.
1144 Identification of the operational and non-operational (e.g., technical, cost, schedule) risks.
1145 It is important that the study team address the non-operational risks with the same level of
1146 fidelity/rigor as the operational risks. Non-operational risks can be significant
1147 contributors to future program failure.
1148 Identification of all DOTmLPF-P implications for each alternative
1149 Description of each alternative under consideration including discriminating features
1150
1151 4. Recommendations and Conclusions Supported by AoA Findings
1152 Answers to the key questions identified in the AoA study guidance. These must be
1153 answered sufficiently for decision makers to support the upcoming decisions.
1154 Illustration of the cost/capability/risk tradespace. This must clearly identify for the
1155 decision makers where the trade-offs exist, operational risk associated with the
1156 performance and to what degree the capability gap(s) have been mitigated.
1157 Rationale for disqualifying any alternatives from further consideration.
1158 If appropriate, recommended changes to validated capability requirements for
1159 consideration if change would enable more appropriate tradespace.
1160 Explanation of why alternatives do well or poorly. This must include rationale for the
1161 results.
1162 Explanation of how variations to CONOPS or attributes might mitigate cost drivers or low
1163 ratings on assessment metrics. This should include characterizing the circumstances in
1164 which each alternative appears superior and the conditions under which it degrades.
1165 Identification of estimated schedules for each alternative, assessment of existing
1166 TRLs/MRLs for critical technologies which may impact likelihood of completing
1167 development, integration, operational testing on schedule and within budget. This should
1168 include an assessment of the likelihood of achieving the proposed schedule based on DoD
1169 experience.
1170

1171 Appendix G: Lessons Learned
1172
1173 This appendix provides rationale and guidance for capturing and documenting lessons learned.
1174 Lessons learned provide current and future AoA study teams with valuable knowledge derived
1175 from past and present AoA efforts. This knowledge includes information about the strengths and
1176 weaknesses of initiating, planning, and executing an AoA. Lessons learned from the beginning of
1177 the AoA to completion of the AoA process should be thoroughly documented. By capturing and
1178 documenting lessons learned, each AoA team can add to and benefit from the collective wisdom
1179 and Best Practices related to the AoA process.
1180
1181 Some of the most commonly recurring Study Team lessons learned include:
1182
1183 Meet regularly either in person or virtually
1184 Team composition of both Air Force and contractor personnel provides good
1185 complementary technical support
1186 Study Advisory Groups provide guidance, support and sanity checks
1187 The Study Director and the core team must lead the entire effort
1188 Small numbers of people meeting are more productive
1189 Buy-in of the senior leaders at all levels is critical
1190 Things will change; documentation and communication are critical
1191 Utilization of High Performance Teams can increase efficiency and has the potential to
1192 shorten timelines. They are especially useful when a team is faced with a very aggressive
1193 schedule
1194
1195

1196 Appendix H: OAS Review of Documents for AFROC
1197
1198 This appendix provides a general timeline to follow for review of AoA related documents (study
1199 plans, final reports, and interim status briefings) in preparation for presentation to the AFROC.
1200 This timeline applies a staffing process that begins with the AoA document delivery to OAS and
1201 concludes with the presentation of the document at the AFROC. The staffing process may
1202 conclude prior to AFROC presentation if an intermediate body (AFRRG, MAJCOM, etc.)
1203 recommends that the document either 1) does not contain sufficient information, or 2) is not
1204 appropriate for presentation at the next scheduled AFROC. This schedule is not fixed, but it does
1205 define the recommended minimum timeline necessary for satisfactory review and staffing by the
1206 organizations with an interest in AoA documentation. The first two weeks of review are designed
1207 to assist the study team in understanding the OAS assessment and provide the team with the time
1208 needed to make any adjustments they see fit in preparation for the remainder of the staffing
1209 process.
1210
1211 This timeline only applies to efforts that have had an OAS member embedded with the team
1212 throughout the effort. For studies in which an OAS member has not been embedded, the AoA
1213 team should plan for a lengthier review process in order for OAS to become familiarized with the
1214 study, its objectives and the products subject to review.
1215
Suspense: Activity
6 weeks prior to AFROC: AoA Team submits document to OAS. Provides presentation and document to OAS. OAS works with teams to refine products.
5 weeks prior to AFROC: OAS provides assessment of presentation and document to AoA Team & MAJCOM/Lead Command. OAS works with teams to refine products.
4 weeks prior to AFROC: AoA Team submits documents to AF Functional for review. OAS submits assessment to AF Functional.
3 weeks prior to AFROC: AF Functional submits document to AF/A5R-P in preparation for AFRRG.
2 weeks prior to AFROC: AoA Team presents document at AFRRG. OAS submits assessment to AFRRG.
1 week prior to AFROC: AoA Team revises document based on feedback/direction from AFRRG.
Week of AFROC: AoA Team presents document to AFROC. OAS submits assessment to AFROC.
1216 Figure H-1. Example Timeline for Review of Documents and Briefings to the AFROC
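As an illustration only, the sketch below turns the week offsets in Figure H-1 into calendar suspense dates for a hypothetical planned AFROC date; the date and the shortened activity descriptions are placeholders.

    # Sketch: compute suspense dates backward from a planned AFROC date using
    # the week offsets in Figure H-1. The AFROC date is a placeholder.
    from datetime import date, timedelta

    afroc_date = date(2014, 3, 17)  # hypothetical planned AFROC

    suspenses = [
        (6, "AoA Team submits document and presentation to OAS"),
        (5, "OAS provides assessment to AoA Team and MAJCOM/Lead Command"),
        (4, "AoA Team submits documents to AF Functional; OAS submits assessment"),
        (3, "AF Functional submits document to AF/A5R-P for the AFRRG"),
        (2, "AoA Team presents document at the AFRRG"),
        (1, "AoA Team revises document based on AFRRG feedback"),
        (0, "AoA Team presents document to the AFROC"),
    ]

    for weeks_prior, activity in suspenses:
        due = afroc_date - timedelta(weeks=weeks_prior)
        print(f"{due.isoformat()}  ({weeks_prior} week(s) prior): {activity}")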

1217 Appendix I: Joint DoD-DOE Nuclear Weapons Acquisition
1218 Activities
1219
1220 This appendix provides additional information for those Department of Defense (DoD)/Air Force
1221 Acquisition efforts which are part of the nuclear enterprise. It provides information regarding
1222 important stakeholders, councils, and committees which should be included in the AoA processes.
1223 It also points out the significantly longer process that is used by the Department of Energy (DOE)
1224 during the acquisition of nuclear weapons. These timelines can greatly impact the fielding of a
1225 new capability to the warfighter and close coordination is vital for program success.
1226
1227 Complementary DoD and DOE Departmental Responsibilities
1228
1229 Although there is a dual-agency division of responsibilities between the DoD and the DOE, these
1230 responsibilities are complementary. These complementary responsibilities are based on law and
1231 formal agreements to provide a safe, secure, and militarily effective nuclear weapons stockpile.
1232 All nuclear weapon development, production, sustainment, and retirement projects shall be
1233 coordinated fully between the DoD and the DOE, and shall consider total weapon cost and
1234 performance (including DOE costs and other resource requirements) in establishing military
1235 requirements and design objectives. The DoD and DOE will jointly determine the classification of
1236 developmental systems.
1237

1238
1239
1240 Councils and Committees
1241
1242 In addition to establishing roles and responsibilities, two primary entities were established to
1243 provide support and advice in matters involving oversight, guidance, and day-to-day matters
1244 concerning nuclear stockpile activities: the Nuclear Weapon Council and the Nuclear Weapons
1245 Council Standing and Safety Committee.
1246
1247 Nuclear Weapon Council (NWC)

1248 The DoDD 3150.1, E2.1.5. defines the Nuclear Weapon Council (NWC) as: An
1249 advisory/approval body established under DoD-DOE MOU (reference (f)) and 10 U.S.C. 179
1250 (reference (g)) to provide high-level oversight, coordination and guidance to nuclear weapons
1251 stockpile activities. It is chaired by USD (AT&L), with the Vice Chairman of the Joint Chiefs of
1252 Staff and a senior representative from the DOE as members. All communication between DoD
1253 and DOE should be transmitted through the NWC.
1254

1255
1256
1257 The Council provides an inter-agency forum for reaching consensus and establishing priorities
1258 between the two Departments. It also provides policy guidance and oversight of the nuclear
1259 stockpile management process to ensure high confidence in the safety, security, and reliability of
1260 U.S. nuclear weapons.
1261
1262 The NWC serves as an oversight and reporting body and is accountable to both the Legislative
1263 and Executive branches of the government. The NWC meets regularly to raise and resolve issues
1264 between the DoD and the NNSA regarding concerns and strategies for stockpile management.
1265 The Council is also required to report regularly to the President regarding the safety and reliability
1266 of the U.S. stockpile.
1267
1268 Air Force Nuclear Weapons Center (AFNWC)
1269
1270 The Air Force Nuclear Weapons Center (AFNWC) was established with the following Strategic
1271 Nuclear Goals in mind:
1272
1273a. Maintain nuclear weapons system surety through timely and credible processes.
1274b. Enable the development and implementation of nuclear technology to maintain air, space and
1275 information dominance.
1276c. Sustain war-winning nuclear capabilities to ensure operational readiness and effectiveness.
1277
1278 Highlighted below are the individual and joint responsibilities of the DoD and DOE. AFNWC has
1279 the DoD lead responsibility to coordinate nuclear weapon arsenal requirements with the DOE for
1280 refurbishments to be acquired within the Joint DOE-DoD 6.X Life Cycle process.
1281

1282
1283 National Nuclear Security Administration (NNSA)
1284
1285 The National Nuclear Security Administration (NNSA) is a separately organized and funded
1286 agency within the Department of Energy (DOE) and has a multi-billion dollar per year budget to
1287 maintain the safety and reliability of the nation's nuclear weapon stockpile. NNSA manages life
1288 extension efforts using a multipart nuclear weapon refurbishment process, referred to as the 6.X
1289 Process, which separates the life extension process into phases.
1290
1291 Nuclear weapons are developed, produced, maintained in the stockpile, and then retired and
1292 dismantled. This sequence of events is known as the nuclear weapons life cycle. As a part of
1293 nuclear weapons management, the DoD and the National Nuclear Security Administration
1294 (NNSA) have specific responsibilities related to nuclear weapons life cycle activities. Therefore,
1295 the DoD and the NNSA share responsibility for all U.S. nuclear weapons/warheads.
1296
1297 The following chart depicts the Joint DoD and DOE relationships. The left side of the diagram
1298 includes the DoD organizational flow, beginning with the PM to the PEO, followed by the
1299 SAF/AQ, AFROC, JROC, up to the Under Secretary of Defense Acquisition, Technology and
1300 Logistics (USD(AT&L)). The USD(AT&L) coordinates with the NWC.
1301
1302 On the DOE side, the flow begins with the NNSA PM to the NA-10 Defense Programs, and up to
1303 the NNSA N-1. The NNSA N-1 coordinates with the NWC. Joint DoD/DOE activity involves
1304 coordination between the DoD PM and the NNSA PM.
1305

1306
1307
1308
1309 DoD Responsibilities
1310
1311 The DoD is responsible for:
1312
1313 Participating in approved feasibility studies
1314 Developing requirements documents that specify operational characteristics for each
1315 warhead-type and the environments in which the warhead must perform or remain safe
1316 Participating in the coordination of engineering interface requirements between the
1317 warhead and the delivery system
1318 Determining design acceptability
1319 Specifying military/national security requirements for specific quantities of warheads
1320 Receiving, transporting, storing, securing, maintaining, and (if directed by the President)
1321 employing fielded warheads
1322 Accounting for individual warheads in DoD custody
1323 Participating in the joint nuclear weapons decision process (including working groups, the
1324 warhead Project Officer Group (POG), the NWC Standing & Safety Committee
1325 (NWCSSC), and the NWC)
1326 Developing and acquiring the delivery vehicle and launch platform for a warhead; and
1327 storing retired warheads awaiting dismantlement in accordance with jointly approved
1328 plans.
1329
1330 DOE Responsibilities
1331
1332 The DOE is responsible for:
1333
- Participating in approved feasibility studies
- Evaluating and selecting the baseline warhead design approach
- Determining the resources (funding, nuclear and non-nuclear materials, facilities, etc.) required for the program
- Performing development engineering to establish and refine the warhead design
- Engineering and establishing the required production lines
- Producing or acquiring required materials and components
- Assembling components and sub-assemblies into stockpile warheads (if approved by the President)
- Providing secure transport within the U.S.
- Developing maintenance procedures and producing replacement limited-life components (LLCs)
- Conducting a jointly-approved quality assurance program
- Developing a refurbishment plan, when required, for sustained stockpile shelf-life
- Securing warheads, components, and materials while at DOE facilities
- Accounting for individual warheads in DOE custody
- Participating in the joint nuclear weapons decision process
- Receiving and dismantling retired warheads
- Disposing of components and materials from retired warheads
1353
1354 Joint Nuclear Acquisition Process
1355
1356 The Joint DoD-NNSA Nuclear Weapons Life Cycle consists of seven phases from initial research
1357 and Concept Design through Retirement, Dismantlement, and Disposal. This process is similar to,
1358 and has many parallels with, the Defense Acquisition System described in DoD 5000 directives
1359 and instructions.
1360
The United States has not conducted a weapon test involving a nuclear yield since 1992. This prohibition against nuclear testing is based on diplomatic and political considerations, including the international Comprehensive Test Ban Treaty of 1996, which the United States observes even though the treaty was not formally ratified. Although entirely new nuclear weapons could be developed without testing involving a nuclear yield, the United States has refrained from developing new devices and instead has emphasized stockpile management, including programs to upgrade, update, and extend the service life of current weapons. This sustainment effort is called the 6.X Process.
1369
1374 The Phase 6.X Process
1375

1376
1377
1378 The Nuclear Weapons Council (NWC) has a major role in the refurbishment and maintenance of
1379 the enduring nuclear weapons stockpile. To manage and facilitate the refurbishment process, the
1380 NWC approved the Phase 6.X Procedural Guideline in April 2000.
1381
1385 Phase 6.1 Concept Assessment (On-going)
1386
The Concept and Assessment Phase consists of continuing studies by the DoD, the NNSA, and
1388 the Project Officer Group (POG). A continuous exchange of information, both formal and
1389 informal, is conducted among various individuals and groups. This exchange results in the
1390 focusing of sufficient interest on an idea for a nuclear weapon or component refurbishment to
1391 warrant a Program Study. During the 6.1 Phase, the NWC must be informed in writing before the
1392 onset of any activity jointly conducted by the DoD and the NNSA.
1393
1394 Phase 6.2 Feasibility Study (9-18 months)
1395
1396 After the NWC approves entry into Phase 6.2, the DoD and the NNSA embark on a Phase 6.2
1397 Study, which is managed by the POG for that weapon system. In a Phase 6.2 Study, design
1398 options are developed and the feasibility of a Phase 6.X refurbishment program for that particular
1399 nuclear weapon is evaluated. The NNSA tasks the appropriate DOE laboratories to identify
1400 various design options to refurbish the nuclear weapon. The POG performs an in-depth analysis
1401 of each design option. At a minimum, this analysis considers the following:
1402
1403 Nuclear safety
1404 System design, trade-offs, and technical risk analyses
1405 Life expectancy issues
1406 Research and development requirements and capabilities
1407 Qualification and certification requirements
1408 Production capabilities and capacities
1409 Life cycle maintenance and logistics issues
1410 Delivery system and platform issues
1411 Rationale for replacing or not replacing components during the refurbishment
1412
1413 The Phase 6.2 Study includes a detailed review of the fielded and planned support equipment
1414 (handling gear, test gear, use control equipment, trainers, etc.) and the technical publications
1415 associated with the weapon system. This evaluation is performed to ensure that logistics support
1416 programs can provide the materials and equipment needed during the planned refurbishment time
1417 period.
1418
1419 Military considerations, which are evaluated in tandem with design factors, include (at a
1420 minimum):
1421
1422 Operational impacts and/or benefits that would be derived from the design options
1423 Physical and operational security measures
1424 Requirements for joint non-nuclear testing
1425
Refurbishment options are developed in preparation for the option down-select package. This package includes any major impacts on the NNSA nuclear weapons complex.
1428
1429 Phase 6.2A Design Definition and Cost Study (3-6 months)
1430
1431 The NNSA will work with the National Nuclear Laboratories to identify production issues and to
1432 develop process development plans and proposed workload structures for the refurbishment. The
1433 National Nuclear Laboratories will continue to refine the design and to identify qualification
1434 testing and analysis in order to verify that the design meets the specified requirements. Cost
1435 estimates are developed for the design, testing, production, and maintenance activities for the
projected life of the Life Extension Program (LEP) refurbishment.
1437
1438 Phase 6.3 Development Engineering (1 - 3 years)
1439
Phase 6.3 begins when the NWC prepares a Phase 6.3 letter requesting joint DoD and NNSA development engineering.
1441 The NNSA, in coordination with the DoD, conducts experiments, tests, and analyses to validate
1442 the design option(s). Also at this time, the production facilities assess the producibility of the
1443 proposed design, initiate process development activities, and produce test hardware as required.
1444
1445 At the end of Phase 6.3, the weapon refurbishment design is demonstrated to be feasible in terms
1446 of:
1447
1448 Safety
1449 Use control
1450 Performance
1451 Reliability
1452 Producibility
1453
1454 The design is thereby ready to be released to the production facilities for stockpile production
1455 preparation activities. These activities are coordinated with parallel DoD activities.
1456
1457 The Lead Service may decide that a Preliminary Safety Study of the system is required in order to
1458 examine design features, hardware, and procedures as well as aspects of the concept of operation
1459 that affect the safety of the weapon system. During this Study, the Nuclear Weapon System
1460 Safety Group (NWSSG) identifies safety-related concerns and deficiencies so that timely and
1461 cost-efficient corrections can be made during this Phase.
1462
1463 Phase 6.4 Production Engineering (1-3 years)
1464
1465 When development engineering is sufficiently mature, the NNSA authorizes the initiation of
1466 Phase 6.4. This Phase includes activities to adapt the developmental design into a producible
1467 design as well as activities that prepare the production facilities for refurbishment component
1468 production.
1469
1470 Generally, Phase 6.4 ends after the completion of production engineering, basic tooling, layout,
1471 and adoption of fundamental assembly procedures, and when NNSA engineering releases indicate
1472 that the production processes, components, subassemblies, and assemblies are qualified.
1473
1474 Phase 6.5 First Production (3-6 months)
1475
1476 When sufficient progress has been made in Phase 6.4, the NNSA initiates Phase 6.5. During this
1477 Phase, the production facilities begin production of the first refurbished weapons. These weapons
1478 are evaluated by the DoD and the NNSA. At this time, the NNSA preliminarily evaluates the
1479 refurbished weapon for suitability and acceptability. A final evaluation is made by the NNSA and
1480 the Labs after the completion of an engineering evaluation program for the weapon.
1481
1482 If the DoD requires components, circuits, or software for test or training purposes prior to final
1483 approval by the NNSA, the weapons or items would be utilized with the understanding that the
1484 NNSA has not made its final evaluation. The POG coordinates specific weapons requirements for
1485 test or training purposes.
1486
The POG informs the NWCSSC that the Life Extension Program (LEP) refurbishment program
1488 is ready to proceed to IOC and full deployment of the refurbished weapon. The Lead Service
1489 conducts a Pre-Operational Safety Study at a time when specific weapon system safety rules can
1490 be coordinated, approved, promulgated, and implemented 60 days before Initial Operational
1491 Capability (IOC) or first weapon delivery.
1492
1493 During this Study, the NWSSG examines system design features, hardware, procedures, and
1494 aspects of the concept of operation that affect the safety of the weapon system to determine if the
1495 DoD nuclear weapon system safety standards can be met. If safety procedures or rules must be
1496 revised, the NWSSG recommends draft revised weapon system safety rules to the appropriate
1497 Military Departments.
1498
1499 Phase 6.6 Full-Scale Production (Varies)
1500
1501 Upon NWC approval to initiate Phase 6.6, the NNSA undertakes the necessary full-scale
1502 production of refurbished weapons for entry into the stockpile. Phase 6.6 ends when all planned
1503 refurbishment activities, certifications, and reports are complete.
1504
Aligning the DOE Phase 6.X process with DoD acquisition activities requires extensive planning and coordination. The timelines for the DOE process are significantly longer than current DoD projections, and data flow has to be tightly coordinated to ensure successful program execution that meets warfighter timelines and requirements.
1509
1510 Appendix J: Human Systems Integration (HSI)
1511
1512 OPR: SAF/AQ-AFHSIO
1513 hsi.workflow@pentagon.af.mil
1514
1515 Introduction
1516
1517 System hardware and software components are defined and transformed as technology evolves,
1518 but the human being is the one (and often the only) known factor as we begin to define a materiel
1519 solution to a capability gap. To begin planning and developing a new system, it is important to
1520 consider the human first. This is called Human Systems Integration (HSI).
1521
1522 HSI defined:
1523
1524 Human Systems Integration (HSI): interdisciplinary technical and management processes
1525 for integrating human considerations within and across all system elements; an essential
1526 enabler to systems engineering practice (International Council on Systems Engineering
1527 (INCOSE), 2007)
1528
1529 [The goal of HSI is] to optimize total system performance, minimize total ownership
1530 costs, and ensure that the system is built to accommodate the characteristics of the user
1531 population that will operate, maintain, and support the system. DoDI 5000.02
1532
1533 The integrated, comprehensive analysis, design and assessment of requirements, concepts
1534 and resources for system Manpower, Personnel, Training, Environment, Safety,
1535 Occupational Health, Habitability, Survivability and Human Factors AFI 10-601
1536
1537 Air Force HSI is frequently broken out into nine elements, known as domains, to help people
1538 think about and manage various aspects of human involvement/impacts. The AF HSI domains
1539 are: Manpower, Personnel, Training, Environment, Safety, Occupational Health, Human Factors
1540 Engineering, Survivability, and Habitability. These domains are explained later in this appendix.
1541
1542 Integrating the human when developing alternatives
1543
The Department of Defense (DoD) places a high priority on our people, and its policies reflect that. The Air Force priority of "Develop and care for Airmen" is a guiding tenet for this work. The earlier humans are considered in the Integrated Life Cycle, and the more consistently they are considered throughout all phases, the better the system will be and the better it will be for the people who use it. In fact, the DoD requires that acquisition programs give the human equal treatment to hardware and software as systems are developed:
1550
1551 The human and ever increasingly complex defense systems are inextricably
1552 linked. Systems, composed of hardware and software, enable the ability of
1553 humans to perform tasks that successfully project combat power in difficult and
1554 lethal environments. High levels of human effectiveness are typically required for
1555 a system to achieve its desired effectiveness. The synergistic interaction between
1556 the human and the system is key to attaining improvements in total system
1557 performance and minimizing total ownership costs. Therefore, to realize the full
1558 and intended potential that complex systems offer, the Department must apply
1559 continuous and rigorous approaches to HSI to ensure that the human capabilities
are addressed throughout every aspect of system acquisition. The DoD has embraced HSI
1561 as a systemic approach. The concept of HSI embraces the total human involvement with
1562 the system throughout its life cycle. In summary, this means that the human in
1563 acquisition programs is given equal treatment to hardware and software. (FY11
1564 Department of Defense Human Systems Integration Management Plan, 2011. Washington,
1565 DC: DDRE, Director, Mission Assurance, and Director of Human Performance, Training
1566 & Biosystems.)
1567
1568 These principles apply to unmanned platforms as well:
1569
"Unmanned systems are unmanned in name only. While there may be no Airman onboard the actual vehicle, there indeed are airmen involved in every step of the process, including the pilots who operate the vehicle's remote controls and sensors, and maintenance personnel." (Gen Fraser, VCSAF, 23 July 2009)
1574
1575 We have learned through the years that because of maintenance and data processing requirements,
1576 unmanned systems often require MORE people to operate and maintain them than traditional
1577 platforms.
1578 The importance of HSI
1579 HSI ensures that the people who touch the system in any way are accommodated and provided
1580 with effective, usable, maintainable, safe equipment, which is integrated and designed properly.
1581 The earlier humans are considered in CONOPS, capability gaps, capability based analysis
1582 assumptions and constraints, and JCIDS documents, the better chance the equipment will be
1583 designed and funded for development with human considerations intact.
- Human performance (HP) capabilities and limitations often become serious design constraints; they should be factored into analytical assumptions and constraints
- Effective HSI results in systems with excellent usability, availability, safety, suitability, accessibility, and maintainability
- Early recognition and consideration of the human element may preclude deficiencies often found in OT&E that can be very costly to redesign
1590
1591 HSI responsibilities
1592
1593 DODI 5000.02, Enclosure 8 (new draft 5000.02 will be Enclosure 2-6) stipulates that the
1594 Program Manager (PM) is responsible for doing HSI during the development of the program. As
a member of the Analysis of Alternatives study team, consider the human when addressing questions and when developing the study's assumptions and constraints. Determine costs related to mission tasks, threats, environments, manpower requirements, skill levels, effectiveness, system performance, usability, accessibility, maintainability, safety, etc. The analysis will tell PMs and System Engineers (SEs) about the necessary characteristics and performance parameters of the new system. In turn, that information will be used to make decisions impacting the lives of Airmen. It is important, therefore, to provide a thorough basis for analytic decisions.
1602
1603 "A knowledgeable, interdisciplinary HSI team is generally required to address the full
1604 spectrum of human considerations, and the systems engineer is key to ensuring that HSI is
included throughout the system's life cycle" (INCOSE Systems Engineering Handbook,
1606 Para 9.12.1 HSI is Integral to the SE Process, Version 3.2, 2010)
1607
1608 The HSI process
1609 The objective of Human Systems Integration is to appropriately integrate the human into
1610 the system design to optimize total system effectiveness. To accomplish this, all aspects of
1611 human interaction with the system must be understood to the highest degree possible to
1612 optimize the system and human capabilities. For each program or system, different
1613 emphasis is placed on the human interaction depending on the type of system and its
mission and environment. (FY11 Department of Defense Human Systems Integration
1615 Management Plan, 2011. Washington, DC: DDRE, Director, Mission Assurance, and
1616 Director of Human Performance, Training & Biosystems.)
1617
1618 Human roles, needs and impacts in maintenance, logistics, training, intelligence, security and
1619 other support functions can and should be accounted for in the HSI process but only if they are
1620 first addressed in the CBA and AoA. Starting with the human as the integrating focus is a rational
1621 methodology to inform early systems engineering, development planning and concept
1622 characterization efforts.
1623
How HSI traces through the JCIDS documents (if the correct HSI language does not appear in any one document, it cannot be inserted at the next stage or put on contract):
1626
1627 1. HSI in CBA: cites inability to safely perform mission, and unacceptable O&S costs due to
1628 long maintenance times.
1629 2. HSI in ICD: cites need for operator safety and accessibility of components for
1630 maintenance.
1631 3. HSI in AoA: MOEs/MOPs for safety, usability, accessibility
1632 4. HSI in CDD: KPP for Force Protection, KPP for Safety, KPP for Sustainment, KSA for
1633 Maintainability, attributes for accessibility
1634 5. HSI in CPD: KPP for Force Protection, KPP for Safety, KPP for Sustainment, KSA for
1635 Maintainability, attributes for accessibility
1636 6. (after JCIDS) HSI in SRD: cites applicable parts of HSI DID and MIL STD 1472 for
1637 safety, usability, accessibility
1638
1639 [NOTE: A simple tool for considering human involvement, impacts, constraints, and trade-offs
1640 can be found at the end of this appendix.]
1641
1642 Domains defined
1643
1644 The HSI domains represent a diverse set of human resources-related issues to stakeholders in the
1645 acquisition process, many of whom are primary bill payers during the operations and support
1646 and disposal phases of the system life cycle. Since changes in one domain will likely impact
1647 another, domains should not be considered as separate stove-pipes. An integrated perspective,
1648 balancing the equities of all stakeholders, needs to start prior to MS A and continue through
1649 development and sustainment.
1650
1651 Below are the AF HSI domains and some human considerations that might influence decisions
1652 and thoughts about effectiveness, usability, costs, etc. Notice that some human considerations
1653 appear in more than one domain. The placement of a consideration will often be overlapping, and
1654 it may be contextual (for example: vehicle exhaust may cause an Occupational Health issue, or an
1655 Environment issue, or both):
1656
1657 Manpower:
- Wartime/peacetime manning requirements
- Future technology and human aptitudes
- Deployment considerations
- System manpower estimates
- Force structure
- Maintenance and logistics concepts
- Operating strength
- BRAC considerations
- Manning concepts
- Manpower policies
- Life Cycle Cost implications of manpower decisions
1658
1659 Personnel:
- Selection and classification
- Personnel/training pipeline
- Demographics
- Qualified personnel
- Knowledge, Skills and Abilities
- Projected user population/recruiting
- Accession/attrition
- Cognitive, physical, educational profiles
- Career progression/retention
- Promotion flow
- Life Cycle Cost implications of personnel decisions

1661 Training:
- Training strategy and concepts
- Training development and methods
- Impact on schoolhouse or resources
- Simulation/embedded/emulation
- Virtual applications
- Operational tempo
- Trainer currency
- Training system costs, Operational Safety, Suitability, and Effectiveness (OSS&E) efficiency
- Training vs. job aids
- Refresher and certification training
- Required lead time for training, and timeliness of delivery
- Manpower and Personnel policy implications for training flow and costs
1662
1663 Environment:
- System hazards that affect or impact the human or earth
- Natural resources
- Air
- Local communities/political/cultural
- Water
- Pollution prevention
- Earth
- Exhaust/toxic emissions
- Noise
- Disposal
- Wildlife
- Recycle/reuse
1664
1665 Safety:
- Safety types: Human, Flight, Weapon, Ground, CBRNE/NBC, Equipment
- Procedures: Normal, Emergency
- Design safety
- System risk reduction
- Human error
- Redundancy of systems
- Total system reliability
- Fault reduction
- Communication
1666
1667 Occupational Health:
- Occupational health hazards
- Operational environments
- Temperature
- Humidity/salt spray
- Weather
- Acoustics
- Shock and vibration
- Biological and chemical
- Laser protection
- Radiation
- Ballistic spall/fragmentation
- Oxygen deficiency
- Exhaust/toxic emissions
- Air/water pressure
- Health care
- Lighting
- Medications
- Weight/load weight distribution
- Diagnose, treat and manage illness and trauma
- Heat, cold, hydration
- Stress
- Exercise & fitness mission readiness
- Personal protection
- Disease prevention (vaccines/hygiene)
- Fatigue
1668
1669 Human Factors Engineering:
- Human-centered design
- Implications of design on performance
- Human-system interface (aka Human-Machine Interface (HMI))
- Simplicity of operation, maintenance, and support
- Design impact on skill, knowledge, & aptitudes
- Design-driven human effectiveness, efficiency, safety and survivability
- Cognitive load, workload
- Costs of design-driven human error, inefficiency, or effectiveness
- Human performance
- Personnel Survivability/Force Protection
- Situational awareness
- Threats
- Ergonomics
- Fratricide & Identification Friend, Foe, Neutral
- Operational arenas
- Potential damage to crew compartment
- Camouflage/concealment
- Protective equipment
- Sensors
- Medical injury (self, buddy, etc.)
- Fatigue & stress
- Degraded operations (long/short term)

1671 Habitability:
- Living/work environment (ergonomics, bed, toilet, bath, food prep, rest and eating areas)
- Impact on sustained mission effectiveness
- Support services (food, medical, cleaning, recreation, etc.)
- Impact on recruitment, retention
- Ingress/egress (normal, emergency, evacuation of casualties, while wearing/carrying equipment)
- Security for personnel/personal items
- Storage space for equipment, spares, food, water, supplies
- Power requirements for food safety, lighting, temperature control, safety/security
1672
1673
1674 HSI practitioners can assist the AoA teams by providing information and focusing thoughts on
1675 effectiveness, usability, task analysis, safety considerations, habitability/facility needs,
1676 occupational health impacts, etc.
1677
1678 HSI Support throughout the AoA
1679
1680 The Air Force Human Systems Integration Office (SAF/AQ-AFHSIO)
1681 hsi.workflow@pentagon.af.mil should be the first call for HSI support. They can coordinate the
1682 appropriate expertise from across the Air Force including AFLCMC, AFNWC, AFSPC/SMC, the
1683 MAJCOM HSI cells, and the 711 Human Performance Wing.
1684
Human Systems Integration Tool. Below is a simple matrix to help remember the domains and consider the many users of a program. It is populated with a few examples to show how it can be used to keep all the humans using or touching a system (and the costs associated with them) in mind when planning a new system. Use this tool when incorporating data into the assessment, and keep in mind that considerations in one domain may cause significant risks or trade-offs in another.
1691
[Matrix: HSI domains (rows) versus the humans using the system (columns). The columns cover Operators; Maintainers; Trainers; Logisticians; Security; supporting users (e.g., ground station, fuel, mess, weather); supported users (e.g., those the system provides transport or sensor intelligence to); and others (e.g., medical). The rows are the HSI domains: Manpower, Personnel, Training, Human Factors Engineering, Environment, Safety, Occupational Health, Personnel Survivability (aka Force Protection), and Habitability. Example cell entries include wartime/peacetime manning numbers (Manpower for operators and maintainers), number of guards, gates, and shifts (Manpower for security), cockpit design and accessibility constraints (Human Factors Engineering), exhaust and HAZMAT concerns (Environment), and space for tools, workbenches, and seating (Habitability).]
1701
Example 1: Reducing maintainer Manpower may cause a Safety or Occupational Health problem for the operator.
Example 2: Decreasing the Personnel skill levels of operators may create a need for longer and more intense training, more trainers, or more advanced technical design to make up for the shortfall.
1707
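If a team prefers to keep the matrix in a machine-readable form, one simple option is a nested mapping of domain by user type. The sketch below is a notional illustration only; the domains, user categories, and cell entries mirror a few of the examples above and are not an official format.

    # Notional HSI matrix: keys are HSI domains, inner keys are the humans using the
    # system, and each cell holds a consideration or open question for that combination.
    hsi_matrix = {
        "Manpower": {
            "Operators": "Wartime/peacetime manning numbers",
            "Maintainers": "Wartime/peacetime manning numbers",
            "Security": "How many guards, gates, and shifts?",
        },
        "Training": {
            "Operators": "Pilot, sims, SERE",
            "Maintainers": "Schools, OJT",
        },
        "Human Factors Engineering": {
            "Operators": "Cockpit design?",
            "Maintainers": "Accessibility constraints?",
        },
    }

    # Listing the cells still phrased as questions shows where HSI analysis is needed.
    for domain, users in hsi_matrix.items():
        for user, consideration in users.items():
            if consideration.endswith("?"):
                print(f"Open question - {domain} / {user}: {consideration}")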
1708 Appendix K: Acquisition Intelligence in the AoA Process
1709
Analyses of Alternatives (AoAs) that are intelligence sensitive (i.e., either produce or consume intelligence products during development and/or operation) require acquisition intelligence support and Intelligence Supportability Analysis (ISA).
1713
Acquisition Intelligence is the process of planning for and implementing the intelligence information and infrastructure necessary to successfully acquire and employ future Air Force capabilities. Acquisition Intelligence has two primary goals: 1) to identify intelligence requirements early in the life cycle to minimize cost, schedule, and performance risks, and 2) to support programs/initiatives throughout their life cycle with high-quality intelligence material through a set of standard acquisition intelligence processes and tools.
1720
1721 Background
1722
1723 Intelligence integration in support of Air Force and Joint systems development has never been
more important or challenging than it is in today's environment. When intelligence is not fully integrated into the Air Force's acquisition and sustainment processes, the results often include costly workarounds or modifications, scheduling delays, unplanned adjustments to operations and maintenance, and/or delivery of a weapon system that is vulnerable to, or less effective against, emerging threats. As future systems become more intelligence-dependent, the cost of omitting intelligence integration will increase significantly. Late identification of requirements hampers the ability of the intelligence community to conduct the long-term planning, funding, and development of the collection and production capability needed to support users' requirements. Intelligence customers, in turn, are forced to use existing intelligence products or contract out intelligence production, degrading weapon capabilities and/or increasing program costs.
1735
1736 The expanded role of intelligence in the acquisition and sustainment processes is intended to
1737 minimize program cost, schedule, technical, and performance risk by enabling long term support
1738 planning by the intelligence community. Recent changes to DoDI 5000.02, CJCSI 3170.01, and
1739 AFI 63-101 and the publishing of the new DoDD 5250.01 have significantly increased the
1740 intelligence supportability requirements for weapon system programs. Specifically, program
1741 managers are responsible to ensure an ISA is conducted in collaboration with the local
1742 AFLCMC/IN intelligence office (also referred to as the local Senior Intelligence Officer-SIO,
1743 throughout this document) to establish program intelligence sensitivity, document intelligence
1744 requirements, and ensure current, authoritative threat data is used for analysis throughout the
1745 program life cycle.
1746
1747 Intelligence Supportability Analysis (ISA) for AoAs
1748
After establishing that an AoA is intelligence sensitive (i.e., either producing or consuming intelligence products during development and/or operation), analysis is conducted to fill in the intelligence infrastructure details for capability enablers, their DOTmLPF implications, and associated costs. That established framework is then applied to the proposed alternatives, resulting in a more complete and accurate picture of each proposal's intelligence support requirements and associated intelligence supportability shortfalls. Ideally, ISA will already have been conducted on the baseline alternative (consult with the local SIO), requiring analysis only on the alternatives to identify and document additional intelligence supportability needs.
1757
1758 ISA is the process by which AF intelligence, acquisition, and operations analysts identify,
1759 document and plan for requirements, needs, and supporting intelligence infrastructure necessary
1760 to successfully acquire and employ AF capabilities, thereby ensuring intelligence supportability.
The ISA results provide the stakeholders with the information needed to compare a capability's stated or derived intelligence (data and infrastructure) support requirements with the intelligence support capabilities expected throughout the capability's life cycle. ISA results in the identification
1764 of derived intelligence requirements (DIRs) and deficiencies, along with associated impacts to
1765 both acquisition and operational capability if the required intelligence is not provided. Through
1766 ISA, stakeholders identify, document, and plan for derived requirements and supporting
1767 intelligence infrastructure necessary to successfully acquire and field Air Force capabilities.
1768
Several major types of intelligence products and services are needed by weapon systems (see Figure K-1).
[Figure K-1 groups examples of these products and services into four columns: Threat (e.g., the System Threat Assessment Report (STAR) and its SAP annex, threat assessments, order of battle, FME reports, jammer studies, EWIRDB, TMAP models, characteristics and performance data, IR/RF signature data, TTPs, dynamic and CAD models, covering mostly adversary but also neutral, commercial, coalition, and US systems); Scenarios; Geospatial Intelligence Information and Services (GI&S) (e.g., Digital Terrain Elevation Data (DTED), Digital Point Positioning Data Base (DPPDB), Controlled Image Base (CIB), paper and electronic maps, imagery, terrain and vector data, targeting data); and Infrastructure (e.g., fighter squadron and acquisition intelligence manpower and personnel, clearances, training, procedures, facilities such as SCI facilities and tools, and computer systems and connectivity such as SIPRNET). Some products are periodic; most are aperiodic.]
1771
1772 Figure K-1: Intelligence Products and Services
1773 Acquisition Intelligence Analysts Support to AoAs
An AoA is an evaluation of the performance, operational effectiveness, operational suitability, and estimated costs of alternative systems to meet a mission capability. The analysis assesses the advantages and disadvantages of alternatives versus the baseline capability, including the sensitivity of each alternative in the available tradespace. Acquisition intelligence has a role in all of the AoA working groups (WGs), identifying intelligence infrastructure requirements and implications.
1780
1781 Threats and Scenarios Working Group (TSWG). The TSWG is responsible for identifying and
1782 providing the scenario(s) to be used during an AoA to assess the military utility and operational
1783 effectiveness of solutions being considered for possible AF acquisition to meet a valid
1784 requirement. Additionally, the TSWG provides threat performance and characteristic information
from intelligence sources to enable the AoA's Effectiveness Analysis Working Group (EAWG) to
1786 simulate potential threats to mission effectiveness. The TSWG will be staffed primarily with
1787 acquisition intelligence professionals and SMEs. Members support the TSWG by providing
1788 relevant intelligence information to sustain TSWG decisions. The TSWG is the forum tasked to
1789 track, anticipate, and mitigate issues potentially impacting the identification, selection and
1790 recommendation of scenarios to the AoA WIPT. Other members may be added on an ad hoc basis
1791 to resolve issues, as they arise.
1792
1793 Technology and Alternatives Working Group (TAWG). The TAWG acts as the interface
1794 with alternative providers, crafting the requirements request, receiving alternative data, and
1795 resolving questions between the providers and the rest of the AoA WGs. The acquisition
1796 intelligence specialist's role as a TAWG member is to ensure an ISA is conducted on all of the
1797 alternatives and intelligence needs are identified, and potential intelligence shortfalls are
1798 highlighted, including inputs from the acquisition intelligence costs analyst detailing what data is
1799 required to frame the ISR infrastructure costing analysis report. An example of decomposing
1800 how Alternative A navigates: Alternative A navigates to a target using a seeker. What type of
1801 data is needed to develop or operate the seeker (i.e. electro-optical signatures)? Does the IC
1802 produce that type of data? If not, identify this as a potential intelligence shortfall.
1803
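The sketch below is illustrative only; the alternative names, the derived data needs, and the list of intelligence community (IC) products are all hypothetical placeholders. In practice the TAWG and the local intelligence office supply this information.

    # Hypothetical derived intelligence requirements (DIRs) for each alternative,
    # keyed by the subsystem that drives the need.
    derived_needs = {
        "Alternative A": {"seeker": ["electro-optical signatures"],
                          "navigation": ["digital terrain elevation data"]},
        "Alternative B": {"seeker": ["RF signatures"],
                          "datalink": ["threat emitter parametrics"]},
    }

    # Hypothetical set of data types the IC currently produces.
    ic_products = {"digital terrain elevation data", "RF signatures"}

    # Flag any derived need the IC does not currently produce as a potential shortfall.
    for alternative, needs in derived_needs.items():
        for subsystem, data_types in needs.items():
            for data_type in data_types:
                if data_type not in ic_products:
                    print(f"{alternative}: potential intelligence shortfall - "
                          f"{subsystem} requires '{data_type}'")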
1804 The table below illustrates a typical AoA set of alternatives, from an intelligence supportability
1805 perspective. Each column represents an area of intelligence infrastructure that would be required
for system development or operation. Refer to Figure K-1 for intelligence products and services.
1807
Alternative    Comm    Signatures    Training    Facilities
Baseline
Baseline +     X
A              X       Shortfall     X           X
B              Y       X             X           X
C              Z       X             X           X
1808
Operating Concept WG. The acquisition intelligence specialist's role is to review the CONOPS from an intelligence perspective to ensure intelligence supportability issues/needs are noted. This WG may also be called the Enabling Concept Working Group (ECWG).
1812
1813 Effectiveness Analysis Working Group. The acquisition intelligence specialist participates in
1814 the creation of the analysis assumptions from the perspective of valid intelligence supportability
1815 and aids in the identification/supply of required data.
1816
Cost Analysis Working Group. Acquisition intelligence cost analysts, in coordination with members of the other working groups, support the AoA by providing cost data on intelligence support-related activities external to the proposed solutions/alternatives (i.e., DOTMLPF).
1821
1822 Acquisition Intelligence Analysts Support to Decision-makers
1823
When an AoA (or AoA Study Plan) is scheduled to go before the AFROC, acquisition intelligence analysts are asked to assess the AoA and highlight any ISA concerns. This is done via an Intelligence Health Assessment (IHA) Memorandum for Record (MFR). In the MFR, the acquisition intelligence analyst identifies any potential concerns associated with the alternatives, or with the AoA Study Plan if the plan is being reviewed by the AFROC. Sample IHA MFRs are below.
1829
1830 Contact Information
1831 AFLCMC/IN, Ms. Mary Knight, 21st IS, DSN 986-7604, mary.knight@us.af.mil
1832
1833

1834
1835 MEMORANDUM FOR RECORD
1836
1837 FROM: AFLCMC/IN
1838 Building 556, Area B
1839 2450 D Street
1840 Wright-Patterson AFB, OH 45433
1841
1842 SUBJECT: Deployable Tactical Radar Replacement (DTR2) Analysis of Alternatives (AoA)
1843 Update
1844
1. Intelligence Supportability Analysis for the DTR2 (YELLOW): AFLCMC/IN is supporting the development of the 3D Expeditionary Long Range Radar (3DELRR), one of the possible solutions in the DTR2 AoA. However, the AoA's tradespace did not include the Control and Reporting Center (CRC), so intelligence (mission data) needs were not considered. This approach has led to intelligence supportability issues in previous AF ISR systems, such as Global Hawk's integration with DCGS.
1851
2. The intelligence community's capabilities to support probable mission data needs are (YELLOW): The solutions to two of the primary operational capability gaps require information that has either been identified as a gap by other programs or will require information in a different format or fidelity than what is currently being provided by the intelligence community. These are:
1857
1858 a. (YELLOW) Gap: Does not detect and track stressing air breathing targets and/or
1859 theater ballistic missiles. The intelligence data required to develop, test, and provide
updates for the mission data to support identification for the CRC are not likely to be available at the fidelity necessary (a new requirement over the legacy system).
1862
1863 b. (YELLOW) Gap: Does not have the ability to conduct non-cooperative combat target
1864 recognition (NCTR). NCTR will require signature data for the air breathing and
1865 ballistic targets that will be classified by DTR2. Since tactical radars have not needed
this information in the past, signatures will likely be needed in different frequencies or fidelities than are currently produced by the intelligence community.
1868
1869 3. Questions can be addressed to the undersigned, DSN XXX-XXXX, @wpafb.af.mil.
1870
1871 //Signed//
1872 MEMORANDUM FOR RECORD
1873
1874 FROM: AFLCMC/IN
1875 Building 556, Area B
1876 2450 D Street
1877 Wright-Patterson AFB, OH 45433
1878
1879 SUBJECT: Acquisition Intelligence Input for Synthetic Aperture Radar (SAR)/Moving Target
1880 Indicator (MTI) JSTARS Mission Area (JMA) AoA Topic at 14-15 Sep 11 AFROC
1881
1882 1. ISA performed (YELLOW): AFLCMC/IN performed ISA for the SAR/MTI JMA AoA. The
1883 Intelligence Supportability Working Group (ISWG) worked directly with AoA members and
1884 leadership during the final six months of the AoA. ISA was not included in the tradespace but
1885 was considered parallel analysis. The ISA results were captured as an appendix within the
1886 classified AoA Final Report completed in August 2011 but not integrated into the main
1887 analysis. They were briefed to AoA Leadership during the July 2011 pre-AFROC AoA
1888 Executive WIPT. The results also informed ACC/A2X of recommended planning
1889 considerations associated with alternatives.
1890
1891 2. ISA Results (YELLOW): The ISA resulted in identification of Intel requirements for five
combinations of platforms/sensors, with SME-based assessments of risk and cost estimates for AoA discriminators; manpower was the greatest cost driver, based on the large amounts of new sensor data.
1895
1896 - Intel concerns are (YELLOW). The focus of the AoA was exclusively on sensors and
1897 platforms modeled against limited BMC2 missions and simulated against watch box
data. This constraint limited the ability to perform meaningful ISA. The results for the Target ID Measure of Effectiveness were one of the key indicators that further ISA will be needed if alternatives are narrowed down for acquisition consideration. The lack of existing or emerging cohesive SAR/MTI AF doctrine (BMC2 vs. ISR), coupled with immature technology for SAR/MTI Processing, Exploitation, and Dissemination (PED), is an area of potential future concern.
1904
1905 3. Address questions to the AFLCMC/IN POC, DSN XXX-XXXX.
1906
1907
1908 //Signed//
1909
1910
1911 Appendix L: Mission Tasks, Measures Development, and
1912 Data in Detail
1913
1914 1. Data Categories and Levels of Measurement
1915
Figure L-1 describes data categories and the levels of measurement associated with each
1917 category. Study teams must understand these different levels to determine the type of data to
1918 collect, decide how to interpret the data for a measure, and determine what analysis is appropriate
1919 for the measure. These levels are nominal, ordinal, interval, and ratio. Understanding the levels
1920 of measurement guides how to interpret the data. It helps prevent illogical statements, especially
1921 when comparing results to criteria.
1922

1923
1924 Figure L-1: Categories and Levels of Measurement
1925
Some organizations include an "absolute" category, generally described as data requiring absolute counts, such as the number of operators. The four categories described above, however, are those most commonly found in analytical and statistical literature.
1929
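To make the distinction concrete, the sketch below shows how the level of measurement drives the summary statistic a team might report: a mode for nominal data, a median for ordinal data, and a mean for interval/ratio data. The variable names and sample values are invented for illustration only.

    from statistics import mode, median, mean

    # Notional data for one alternative (values are invented for illustration).
    seeker_type = ["EO", "IR", "EO", "RF", "EO"]          # nominal: categories only
    operator_rating = [2, 4, 4, 5, 3]                     # ordinal: 1=Strongly Disagree ... 5=Strongly Agree
    detection_range_nm = [42.1, 39.8, 44.6, 40.2, 41.7]   # ratio: true zero, equal intervals

    # Nominal data supports counts and the mode, but not an average.
    print("Most common seeker type:", mode(seeker_type))

    # Ordinal data supports the median; an arithmetic mean assumes equal spacing
    # between response categories, which an ordinal scale does not guarantee.
    print("Median operator rating:", median(operator_rating))

    # Interval/ratio data supports the full range of descriptive statistics.
    print("Mean detection range (nm):", mean(detection_range_nm))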
1930 2. Four Possible Combinations of Categories and Collection Methods
1931
1932 Measures are categorized using the terms quantitative or qualitative, and the terms objective or
1933 subjective. While related, they are not the same thing, but are sometimes used incorrectly or
1934 interchangeably. To clarify, quantitative and qualitative refer to the type of data we need to
1935 collect while objective and subjective refer to the method used to collect the data. There are four
1936 possible combinations of these terms:
1937
1938 Quantitative data with objective data collection
1939
1940 Measure example: Target Location Error
1941 Rationale: The measure is quantitative since the measurement scale is on a ratio scale.
1942 The accuracy of a targeting pod is best answered by quantitative data (distance) with the
1943 data collected objectively (using a tape measure).
1944
1945 Quantitative data with subjective data collection
1946
1947 Measure example: Operator estimate of the probability of survival
1948 Rationale: Measure is quantitative since the measurement is on a ratio scale (0.0 to 1.0
1949 probability). The data is collected subjectively by first-person responses to a
1950 questionnaire item.
1951
1952 Qualitative data with objective data collection
1953
1954 Measure example: Color of munitions
1955 Rationale: Measure is qualitative since the measurement is on a nominal scale (e.g., blue,
1956 red, green). The data is collected objectively by noting the color (human reading) or
1957 measuring the wavelength of light.
1958
1959 Qualitative data with subjective data collection
1960
1961 Measure example: Operator rating of display
1962 Rationale: Measure is qualitative since the measurement is on a ordinal scale (e.g.,
1963 Strongly Disagree, Disagree, Neither Agree or Disagree, Agree, Strongly Agree). The
1964 data is collected subjectively (first-person reports) by asking operators via questionnaires
1965 for their opinion or perception.
1966
1967 As discussed above, sometimes these terms are used incorrectly or interchangeably. Quantitative
1968 and objective are often used as synonyms for one another while qualitative and subjective tend to
1969 be treated as synonyms. Figure L-2 clarifies their use and relationship to each other. In addition,
1970 the table describes the appropriate statistical techniques to use for each data type/data collection
1971 type combination.
1972
1984 Figure L-2: Data Types and Collection Methods
1985
1986 3. Data Collection Methods
1987
1988 Not all situations allow, or even call for, quantitative-objective data. Qualitative-subjective data is
1989 appropriate in many cases:
1990
Someone's judgment or perception of a system's performance, capabilities, and/or
1992 characteristics is important in the assessment
1993 In some cases, data for attributes can only be obtained through judgment or perception of
1994 individuals (e.g., human mental states like workload or situational awareness)
1995 Qualitative-subjective data is needed to help explain quantitative-objective data
Quantitative-objective data may not exist, or it may be impossible to collect and/or analyze
1997
1998 Quantitative-objective data can be more difficult to collect than qualitative-subjective data, but
1999 has many other advantages:
2000
2001 Most informative and easiest to substantiate
2002 Sensitive analytical techniques (e.g., mean, correlation, regression, analysis of variance)
2003 can be applied to analyze data
2004 Errors and bias are easier to identify and quantify
2005
In addition to measure data, other information not necessarily used to compute a metric should be collected, such as conditions or factors; stakeholder (user, operator, maintainer) and AoA team comments; and data from other reports or events.
2009
2010 4. Determining how to use data
2011
2012 Data can be used in several different ways:
2013
2014 Compute metrics for measures
2015 Serve as inputs to models
2016 Describe factors or conditions
2017
2018 In past studies, some teams have referred to each of the above ways of using data as their MOPs.
It's up to each study team to determine what data is important enough to be measured, and how
2020 all other data should/should not be used and reported. How the data are used and what it may be
2021 called will likely vary from study to study. Although significant amounts of data may exist, study
2022 teams must consider a number of things in determining how to use data:
2023
2024 Study objectives, questions, limitations, constraints, and guidance
2025 Characteristics of interest in the materiel alternatives being analyzed
2026 Availability of the data and confidence in the data
2027
2028 The example below shows how the altitude data element can be used in different ways:
2029

2030
2031 Figure L-3: Altitude Data Element Example
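As a notional illustration of the same idea in code (the altitude values, the 22,000 ft floor, and the sensor-range relationship are all invented), a single altitude data element can feed a computed metric, serve as a model input, and simply describe a test condition:

    # Notional altitude samples (feet) recorded during a simulated mission.
    altitude_ft = [21500, 23800, 25100, 24400, 22900]

    # 1. Compute a metric for a measure, e.g., percent of samples above a notional 22,000 ft floor.
    pct_above_floor = 100 * sum(a >= 22000 for a in altitude_ft) / len(altitude_ft)
    print(f"Percent of samples at or above 22,000 ft: {pct_above_floor:.0f}%")

    # 2. Serve as an input to a model, e.g., a purely illustrative sensor-range relationship.
    def notional_sensor_range_nm(altitude: float) -> float:
        return 0.002 * altitude  # stand-in relationship, not a validated model

    print("Modeled sensor range (nm):", notional_sensor_range_nm(altitude_ft[-1]))

    # 3. Describe a factor or condition under which other results were collected.
    run_conditions = {"altitude_band": "medium", "weather": "clear"}
    print("Run conditions:", run_conditions)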
2032
2033 5. Creating Mission Tasks and Measures
2034
2035 Figure L-4 provides an example of MTs and measures that can be derived from the following
2036 Mission Statement in the ICD:
2037
2038 The Theater commander must provide moving target indicator support to maneuver and surface
2039 forces across a Corps sized area. It must detect, track, and identify a wide range of potential
2040 target categories and classes and communicate that information to enable the targeting and
2041 prosecution of those targets.
2042

2043
2044 Figure L-4: Notional MTs/Measures
2045
2046 5.1 Measure Criteria
2047
2048 Measure criteria represent a level of performance against which system characteristics and
2049 capabilities are compared. Measure criteria are expressed as threshold and objective values or
2050 standards:
2051
2052 Threshold: A minimum acceptable operational value of a system capability or
2053 characteristic below which the utility of the system becomes questionable. (CJCSI
2054 3170.01G and AFI 10-601)
2055 Objective: An operationally significant increment above the threshold. An objective
2056 value may be the same as the threshold when an operationally significant increment above
2057 the threshold is not identifiable. (CJCSI 3170.01G and AFI 10-601)
2058
2059 5.2 Types of Measure Criteria
2060
2061 User-established criteria are explicitly stated or implied in a requirements document such as the
2062 ICD, CDD, CPD, and the TDS. They can be qualitative or quantitative.
2063
2064 An identified standard can be developed from requirements that describe system characteristics
2065 and performance but have no explicitly stated or implied metrics and criteria standards. Identified
2066 standards can be drawn from sources such as CONOPS, TTPs, SME input, studies, etc. They also
2067 can be qualitative or quantitative.
2068
2069 Any perceived credibility concern can be mitigated by obtaining user concurrence. For both user-
2070 established criteria and identified standards, it is important to document source and rationale. It is
also important to keep in mind that requirements may evolve over the course of the AoA, requiring updates to the measure criteria. Changes occurring after the publication of the study plan must also be clearly documented in the final report.
2074
2075 5.3 Measures Development Guidelines
2076
- Keep the measure as simple as possible; a simple measure requires only a single measurement
- Develop measures that are important to understanding and assessing the alternatives as well as measures that enable discrimination among alternatives
- Measures should not be listed more than once for a mission task, but the same measure may be listed under different mission tasks
- Focus on the outputs, the results of performance, or the process to achieve the activity
- Check to ensure the units of the metric match the criteria values (a notional sketch of one way to record this follows the list)
- Understand the type of data being collected and the appropriate statistics that can be used in the analysis
- Do not apply weights to measures, although some measures may be more important than others
2089
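A lightweight way to enforce several of these guidelines is to record each measure, its metric, its units, and its criteria in one place. The sketch below is notional; the field names and the example values are invented and simply demonstrate checking that the criteria are expressed in the same units as the metric.

    from dataclasses import dataclass

    @dataclass
    class Measure:
        name: str            # what is being measured, with no embedded metric or criteria
        metric: str          # how it is computed (e.g., "median target location error")
        units: str           # units of the metric
        threshold: float     # minimum acceptable operational value
        objective: float     # operationally significant increment above the threshold
        criteria_units: str  # units the criteria were written in

        def units_consistent(self) -> bool:
            return self.units == self.criteria_units

    # Notional example; values are illustrative only.
    tle = Measure(name="Target location error",
                  metric="median target location error",
                  units="meters", threshold=50.0, objective=25.0,
                  criteria_units="meters")

    if not tle.units_consistent():
        print(f"Warning: criteria units do not match metric units for '{tle.name}'")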
2090 5.4 Measures Examples
2091
2092 Two examples of well-crafted measures and their associated attributes, metrics, and criteria are
2093 provided below. For each example there are two different forms of the same measure. The first
2094 shows a poorly developed measure, the second is the preferred form. These are examples of the
2095 mechanics of building a measure and are provided for illustrative purposes only. They do not
2096 cover the entire range of potential measures the study team may need. For development of the
2097 specific criteria, the team must refer to initial requirements documents, study guidance, etc. For
2098 more comprehensive information regarding building measures, OAS has developed a primer on
2099 measures development and guidelines and can provide in-person training. A third example is
2100 provided to illustrate a subjective/qualitative measure and its associated attribute and standard.
2101
2102 Example 1: Measure descriptions should not contain metrics, criteria, or conditions, although
2103 metrics, criteria, and conditions are always associated with a measure.
2104
2105 Example 2: Do not define success with several attributes unless all must be met concurrently for
2106 the process to be successful. Define separate measures for each.
2107

2108
2109
2110 Example 3: Measure is qualitative since it will be measured using an ordinal scale (e.g., Strongly
2111 Disagree, Disagree, Neither Agree or Disagree, Agree, Strongly Agree). The data is collected
2112 subjectively (first-person reports) by asking operators via questionnaires for their opinion or
2113 perception.
2114
2115
2116
2117
2118 5.5 Supporting Measures
2119
2120 While not required, there are cases when supporting measures are appropriate. Supporting
2121 measures are used to explicitly show a relationship between measures and can be MOEs, MOSs,
2122 or MOPs that refer to a parent measure (Example 1). They support a parent measure by
2123 providing causal explanation for the parent measure and/or highlighting high-interest aspects or
2124 contributors of the parent measure. A parent measure may have one or more supporting
2125 measures.
2126
2127 As shown in Example 1, the three supporting measures provide more insights into the probability
of survival (parent measure). In this example, the ability to detect, identify, and jam threat emitters is critically important to aircraft survivability. By using supporting
2130 measures, more information is collected to help explain the probability of aircraft survival. For
2131 instance, low survivability may result from poor detection capability (i.e., the aircraft systems are
2132 incapable of detecting a significant number of threat emitters, thereby making the aircraft
2133 vulnerable to these threats). In other situations, performance in identification and/or jamming
2134 capabilities may explain survivability performance.
2135
2136 Example 1:
2137

2138
2139 It is important not to use supporting measures to roll-up or summarize to a parent measure
2140 (Example 2). A parent measure should stand by itself; it should not be a placeholder or umbrella
2141 for supporting measures. A parent measure should have its own metric and criteria. Aggregating
2142 supporting measures in some mathematical or qualitative manner can introduce skepticism since
2143 the aggregation approach used may not be acceptable to all readers. Although one aggregation
2144 approach is shown in this example, there are really many possible ways to aggregate the results
2145 that could produce different parent measure ratings. For example, one might assign a number to
2146 the color code rating for each supporting measure and take an average to compute the parent
2147 measure color code rating or one might use subject matter experts to review the supporting
2148 measure results and assign a color code rating to the parent measure. Finally, the benefit of using
2149 supporting measures to gain insights into the parent measure performance is lost since the parent
2150 measure is not being measured.
2151
2152 Example 2:
2153

2154
2155
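The pitfall is easy to demonstrate. In the notional sketch below (the ratings and the numeric mapping are invented), two defensible aggregation rules applied to the same supporting-measure results yield different parent-measure ratings, which is exactly why the parent measure should be measured in its own right.

    # Notional supporting-measure ratings for one alternative.
    supporting_ratings = ["green", "green", "red"]

    # Rule 1: map colors to numbers and average (one arbitrary mapping of many).
    score = {"green": 3, "yellow": 2, "red": 1}
    average = sum(score[r] for r in supporting_ratings) / len(supporting_ratings)
    rule1 = "green" if average >= 2.5 else "yellow" if average >= 1.5 else "red"

    # Rule 2: a "weakest link" rule - the parent is no better than its worst supporter.
    rule2 = min(supporting_ratings, key=lambda r: score[r])

    print("Averaging rule gives:   ", rule1)   # yellow
    print("Weakest-link rule gives:", rule2)   # red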
6. High Interest Measures: Key Performance Parameter (KPP) and Key System Attribute (KSA)
2158
2159 KPP: Attributes or characteristics of a system that are considered critical or essential to the
2160 development of an effective military capability. Some KPPs are mandatory depending on the
2161 program: Survivability, Net Ready (Interoperability, Information Assurance), Force Protection,
2162 Sustainment (Availability). Additionally, some KPPs are selectively applied depending on the
2163 program: System Training, Energy Efficiency (JCIDS Manual, Appendix A, Enclosure B).
2164
2165 KSA: System attributes considered critical or essential for an effective military capability but not
2166 selected as KPPs. KSAs provide decision makers with an additional level of capability
2167 prioritization below the KPP but with senior sponsor leadership control (generally 4-star level,
2168 Defense Agency commander, or Principal Staff Assistant). For the Sustainment KPP
2169 (Availability), there are two mandatory supporting KSAs: Reliability and Ownership Cost
2170 Efficiency (JCIDS Manual, Appendix A, Enclosure B).
2171
2172 [Note: please refer to JCIDS Manual for explanation of the mandatory KPPs and KSAs]
2173
2174 CBAs, AoAs, and other supporting analyses provide the analytic foundation for determining the
2175 appropriate thresholds and objectives for system attributes and aid in determining which attributes
2176 should be KPPs or KSAs. In fact, one of the requirements of the AoA is to produce an initial
2177 RCT which identifies an initial set of possible KPPs and KSAs. This RCT should be included in
2178 the AoA final report and it, or a later refined version of it, will become an essential part of the
2179 CDD.
2180
2181 7. KPP and KSA Development Guidelines:
2182
2183 Determine which attributes are most critical or essential to a system and designate them as
2184 KPPs or KSAs (see JCIDS Manual, Enclosure B for guidance in selecting KPP and KSAs)
The number of KPPs and KSAs beyond those that are mandatory should be kept to a minimum
2186 to maintain program flexibility
2187 Must contain sufficient KPPs/KSAs to capture the minimum operational
2188 effectiveness, suitability, and sustainment attributes needed to achieve the overall
2189 desired capabilities for the system
2190 Some mission tasks may have more than one KPP and/or KSA, other mission tasks
2191 may not have a KPP or KSA
2192 Develop KPP/KSA measures (MOEs, MOPs, MOSs) that measure the critical or essential
2193 attribute
2194 Differences between the threshold and objective values set the tradespace for meeting the
2195 thresholds of multiple KPPs and KSAs
2196
8. Rating Measures and Mission Tasks

8.1 Measures

Once the effectiveness analysis has been completed, the values for the measures of each alternative need to be presented in a comprehensive manner. The following section provides one method for presenting each alternative using a color scheme to indicate how well each measure and mission task was accomplished. This is certainly not the only manner in which the results can be displayed; there are likely a multitude of methods for presenting information. Whatever method the team chooses, keep in mind that OAS discourages roll-up and weighting schemes that tend to mask important information or potentially provide misleading results.

The assessment process begins by rating measures. Measures are the foundation for assessing mission tasks since they are indicators of the successful (or unsuccessful) accomplishment of the mission tasks. Measures are typically rated against their threshold evaluation criteria with four possible measure ratings. As described earlier, these criteria can be either user-established or an identified standard. However, when objective values are absolutely required for performance (i.e., not just to provide a trade space), an alternative rating scale that incorporates an additional rating for the objective evaluation criteria can be used. As discussed previously, in some cases the capability/cost/risk results space may be used to help identify the threshold and objective values. In the following example, the measure would receive a blue code if it met or exceeded the objective value:

[Figure: example measure rating scale with an additional blue rating for meeting or exceeding the objective value.]

When a measure does not meet the threshold evaluation criteria, operational significance becomes a key consideration. Study teams should leverage operational experience to apply judgment and determine the significance of the identified effectiveness and/or suitability shortfalls on the mission task. Use all available information (e.g., CONOPS, OPLANS) to help in making a determination. Key questions to consider when determining the significance of an operational or suitability shortfall include:

• How close to the threshold value is the measure?
• What is the operational significance (i.e., what is the consequence or impact on the mission task if the threshold criterion is missed by a certain amount)?
• If the shortfall occurs only under some operational conditions (e.g., adverse weather, mountainous terrain), what is the significance of the impact?

The impact on the mission task ultimately determines whether the shortfall is significant or not. When a shortfall has only minimal operational impact on the mission task, it should be assessed as not a significant shortfall. However, when a shortfall has substantial or severe operational impact on the mission task, it should be assessed as a significant shortfall. Inconclusive ratings are used when there is insufficient information to assess a measure. On the other hand, a measure should be rated as not assessed when there is no information to assess it, or when the team decides not to assess it for a specified reason. For instance, the assessment of a particular measure might be delayed until a later phase in the study. In either case (inconclusive or not assessed), the final report should explain the reason for the rating.
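The rating logic described above can be summarized in a short sketch. The function below is only one plausible encoding: the color assignments for shortfalls, the optional blue rating for meeting the objective, and the example threshold and objective values are assumptions made for illustration, and the significant_shortfall input stands in for the operational judgment that the study team must supply and that cannot be automated.

# Illustrative sketch of measure rating against threshold/objective criteria.
# Color assignments and example values are assumed, not prescribed by this handbook;
# "significant_shortfall" represents the study team's operational judgment.

def rate_measure(value, threshold, objective=None, significant_shortfall=None,
                 higher_is_better=True):
    """Rate one measure against its evaluation criteria."""
    if value is None:
        return "not assessed"  # no information, or assessment deliberately deferred
    meets = value >= threshold if higher_is_better else value <= threshold
    if meets:
        if objective is not None:
            meets_obj = value >= objective if higher_is_better else value <= objective
            if meets_obj:
                return "blue"   # meets or exceeds the objective value
        return "green"          # meets the threshold value
    if significant_shortfall is None:
        return "inconclusive"   # insufficient information to judge operational impact
    return "red" if significant_shortfall else "yellow"

# Hypothetical measure: probability of detection with threshold 0.80 and objective 0.90
print(rate_measure(0.92, 0.80, objective=0.90))               # blue
print(rate_measure(0.78, 0.80, significant_shortfall=False))  # yellow
print(rate_measure(None, 0.80))                               # not assessed

In practice the team, not the code, decides whether a shortfall is operationally significant; the sketch simply makes the resulting rating rules explicit and repeatable.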
Consider the following guidelines when reporting results:

• State the measures supporting the mission task with the associated criteria
• Include ratings and visual indicators (Red, Yellow, Green) for the measures
• Include a narrative describing how the measures were rated
• Apply sound military and operational judgment in determining the significance of capability shortfalls

Remember, the measures are the foundation for assessing mission tasks.

8.2 Mission Tasks

After all measures have been rated, the focus of the assessment shifts from individual shortfalls at the measure level to the collective operational impact at the mission task level. In many cases, it is useful to show how well the alternative solutions accomplish the designated mission tasks. Often, some alternatives will perform some mission tasks well, but not others. Decision makers should be made aware of the capabilities and limitations of the alternatives as they relate to accomplishing all mission tasks and how well they do so. However, while it is necessary to rate the measures, rating the mission tasks is optional. Each study team must determine the best way to present the information to the decision makers. Nevertheless, as with measure assessment, teams should use operational experience and judgment to determine the overall impact of any capability shortfalls on the mission tasks.

• There may be one or more prominent or critical measures (e.g., KPPs) that are very influential on how well the mission task is achieved; such measures may drive the degree of operational impact on the mission task
• There may be measures that have interdependencies (e.g., messages can be sent quickly, but they are incomplete) which need to be understood and considered when determining the significance of impact
• Do not simply rely on the preponderance of measure ratings (e.g., 3 out of 5 measures met the criteria) to rate the mission task; use operational judgment and experience
• When there is insufficient information to assess a mission task, it should be rated inconclusive, with an accompanying explanation

Four possible mission task ratings include the following:

[Figure: the four possible mission task ratings.]

Rating measures and mission tasks is not a simple task. Do not rely on mathematical or heuristic-based measure roll-up or weighting schemes to derive a rating. Although simple to use, they are never the best way to communicate results.
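One way to honor that guidance when presenting results is to keep the measure-level detail and the team's rationale visible for every mission task rather than computing any single rolled-up score. The sketch below shows one such structure; the alternative, task, and measure names and ratings are hypothetical.

# Sketch of a results structure that keeps measure-level detail and the rationale
# visible for each mission task instead of computing a rolled-up score.
# All names and ratings below are hypothetical.

results = {
    "Alternative A": {
        "Detect target": {
            "rating": "green",  # assigned by the study team using operational judgment
            "measures": {"Probability of detection": "green",
                         "Time to detect (min)": "yellow"},
            "rationale": "Detection is slightly slow in adverse weather; minimal mission impact.",
        },
    },
}

def print_summary(results):
    """Print each alternative's mission tasks with supporting measures and rationale."""
    for alternative, tasks in results.items():
        print(alternative)
        for task, detail in tasks.items():
            print(f"  {task}: {detail['rating']}")
            for measure, rating in detail["measures"].items():
                print(f"    - {measure}: {rating}")
            print(f"    rationale: {detail['rationale']}")

print_summary(results)

A structure like this lets a reviewer see both the team's judgment and the underlying measure results that support it.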
Appendix M: GAO CEAG, Table 2

Following these 12 steps should result in reliable and valid cost estimates that management can use for making informed decisions. The entire guide can be found on the GAO website: http://www.gao.gov/new.items/d093sp.pdf

[Table not reproduced: the GAO Cost Estimating and Assessment Guide's 12-step cost estimating process.]

Appendix N: Developing a Point Estimate

[Figures not reproduced.]

Appendix O: CAPE AoA Study Guidance Template

The following is provided by CAPE as a template to begin drafting the AoA Study Guidance. The word "draft" appears to indicate that any study guidance developed from this template will be draft guidance; the template itself is not a draft.


DRAFT (XXXXX PROGRAM NAME)
ANALYSIS OF ALTERNATIVES GUIDANCE


Month XX, 2xxx

Program Name (Abbreviation) Analysis of Alternatives Guidance

Purpose

The goal of Analysis of Alternatives (AoA) guidance is to facilitate high-caliber analysis, fair treatment of options, and decision-quality outcomes to inform the Milestone Decision Authority (MDA) at the next Milestone and shape/scope the Request For Proposal (RFP) for the next acquisition phase. CAPE guidance should direct the AoA to explore tradespace in performance, schedule, risk, and cost across a full range of options to address validated capability requirements. Additionally, the guidance should support an AoA feedback mechanism to the requirements process for recommending changes to validated capability requirements that, upon further study, appear unachievable and/or undesirable from a cost, schedule, risk, and/or performance point of view.

Background

The guidance should provide a brief background on why the AoA is being conducted and how we got here. It should discuss the history of the effort and characterize related programs, to include lessons learned from previous cancellations. This section should also include a discussion of the Joint Requirements Oversight Council (JROC)-approved capability gaps and their role in the AoA study. The guidance should make clear that the values of the capability gaps in the Initial Capabilities Document (ICD) and draft Capability Development Document (CDD) should be treated as reference points to frame decision space rather than minimum standards to disqualify options. The AoA should illuminate the operational, schedule, risk, and cost implications of tradespace around the validated capability gaps.

Assumptions and Constraints

Defining and understanding key assumptions and constraints are important in properly scoping the issue, defining excursions, and limiting institutional bias. Assumptions that are standard or trivial, and therefore provide limited insight into what is actually driving the answer, are not of interest. Since assumptions can determine outcomes, the guidance should direct the study team to identify the key assumptions driving the AoA results. Significant assumptions can include U.S.-to-enemy force ratios, threat characterization, CONOPs, etc. All major/key assumptions and constraints should be validated by the Study Advisory Group (SAG) as they are developed, but prior to beginning analysis.

Alternatives

This section should delineate the base case set of alternatives. These alternatives typically include a baseline (legacy systems and their approved modifications through the current POM), modified legacy systems, modified commercial/government/allied off-the-shelf systems, and new development alternatives. The alternatives should be distinctly defined, with enough detail to support the analytic approaches used. The alternatives should be grounded in industry, national lab, or other agency responses; the AoA should avoid contriving unrealistic, idealized options.

The guidance should direct the AoA to explore a full range of viable modifications to legacy systems. For all alternatives, the AoA should assess features that appear to provide substantive operational benefit and apply to all viable alternatives (e.g., if a type of sensor is found to provide notably improved effectiveness for one alternative, the AoA should explore incorporating that feature in all alternatives).

Alternatives should also consider variations or excursions for attributes that are significant cost drivers. The intent is to find the knee in the curve for the cost driver to ensure consideration of cost-effective solutions rather than single point solutions that turn out to be unaffordable.

Analysis

The analysis should be based on sound methodologies and data that are briefly outlined in the Study Plan. The guidance should establish an early milestone/date for the AoA team to present their detailed methodology and data approaches, tools, scenarios, metrics, and data in depth to the SAG and other stakeholders.

The AoA should spell out the scenarios and CONOPS used and explain the rationale for the inclusion of non-standard scenarios. If non-standard scenarios are employed, the study team should explain in depth any outcomes unique to those scenarios. The guidance should direct that a range of less stressing and more stressing scenarios be used, rather than using only highly demanding scenarios.

The guidance should instruct the AoA to spell out the metrics used, any weighting factors applied to these metrics, and the rationale for applying each weighting factor. Metrics should include comparisons between the (weighted) metrics and cost to facilitate cost, performance, and schedule tradeoff discussions.

A problem with many legacy AoAs is that they have focused on operational benefits and downplayed technical, schedule, and cost risk. To avoid this, the guidance should instruct the AoA team to give full treatment to non-operational risks, since these factors have been a major cause of failed programs in the past. Within the technical risk area, empirical data should guide the AoA's assessment, with particular focus on integration risk.

The guidance should direct the AoA team to explain the rationale for the results, which goes well beyond simply presenting outcomes. The AoA team should understand that the value of the analysis is in understanding why options do well or poorly. The study guidance should require the AoA team to acknowledge the limitations of and confidence in the results due to lack of mature or reliable data at the time of the AoA. The team should also explain how/if variations to CONOPS or attributes of alternatives might mitigate cost drivers or low ratings on assessment metrics. Also, many AoAs have presented preferred options only for those cases advantageous to the option. The guidance should instruct the AoA to characterize the circumstances in which a given option appears superior and the conditions under which its outcomes degrade (a useful example of this was in the AoA for the replacement of the M113 armored personnel carrier, which showed how casualties varied according to the explosive weight of improvised explosive devices).

Cost Analysis. Provide an analysis of life-cycle costs that includes estimates of development, production, operating and support (O&S), and disposal costs. These estimates should be of sufficient quality to support acquisition and investment decisions, but are not to be of budget quality.

• O&S cost estimates will cover a common life-cycle period for the system under consideration (for most, a 20-year period) for all alternatives, consistent with the Operating and Support Cost-Estimating Guide (Cost Analysis Improvement Group, Office of the Secretary of Defense, October 2007). The estimates shall include point estimates for the Average Procurement Unit Cost (APUC), as well as total life-cycle cost.
• Life-cycle estimates should be calculated as point estimates and also shown at the 50% and 80% confidence levels.
• The cost analysis will identify APUC estimates for varying procurement quantities, if applicable. Present-value discounting should be used in comparing the alternatives, in accordance with OSD and Office of Management and Budget guidelines (a simple discounting sketch follows this list).
• Costs should be expressed in current-year dollars and, if appropriate in the context of FYDP funding, in then-year dollars. Costs should be presented at the major appropriation level, with defined risk ranges to communicate the uncertainty associated with the estimates.
• The cost portion of the analysis should include an assessment of how varying the annual procurement rate affects cost and manufacturing risk when appropriate (e.g., procuring items faster to complete the total buy sooner vice buying them more slowly over a longer period of time).
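As a minimal sketch of the present-value discounting called for above, the example below discounts two hypothetical constant-dollar cost streams to year zero. The 3% real discount rate and the 20-year cost profiles are placeholder assumptions; the actual rate and cost data come from current OMB and OSD guidance and from the AoA cost estimates.

# Minimal sketch of present-value discounting for comparing life-cycle cost streams.
# The 3% real discount rate and the cost profiles are placeholder assumptions;
# actual rates come from current OMB/OSD guidance.

def present_value(annual_costs, rate=0.03):
    """Discount a list of constant-dollar annual costs back to year 0."""
    return sum(cost / (1.0 + rate) ** year for year, cost in enumerate(annual_costs))

# Hypothetical 20-year O&S profiles (constant-year $M) for two alternatives
alt_a = [50.0] * 20                # steady O&S costs
alt_b = [30.0] * 10 + [80.0] * 10  # cheaper early, more expensive later

print(f"Alt A present value: {present_value(alt_a):.0f} $M")
print(f"Alt B present value: {present_value(alt_b):.0f} $M")

In this illustrative case the undiscounted totals differ by 100 $M, but discounting narrows the gap because Alt B's larger costs fall later in the period; phasing effects like this are why discounting matters when comparing alternatives.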

Schedule and Technology/Manufacturing Risk Assessment. The AoA should include estimated schedules for each alternative, as well as an assessment of existing Technology Readiness Levels (TRLs)/Manufacturing Readiness Levels (MRLs) for critical technologies that may impact the likelihood of completing development, integration, and operational testing activities on schedule and within budget. Since legacy AoAs have often proposed development and procurement schedules that were more aggressive than we actually achieved, future AoAs should include an assessment of the likelihood of achieving the proposed schedule based on our experience. Where significant risks are identified, the assessment should outline practical mitigation strategies to minimize impact to delivering the operational capability to the warfighter, and if applicable, notional workarounds in the event the risks are realized.
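A simple way to make that assessment traceable is to screen each critical technology against target readiness levels. In the sketch below, the TRL 6 / MRL 6 targets are commonly used planning benchmarks rather than thresholds set by this guidance, and the technologies listed are hypothetical.

# Simple screen of critical technologies against assumed readiness targets.
# The TRL 6 / MRL 6 targets are illustrative planning benchmarks, not thresholds
# from this guidance; the technologies listed are hypothetical.

TARGET_TRL = 6
TARGET_MRL = 6

critical_technologies = [
    {"name": "Advanced seeker", "trl": 5, "mrl": 4},
    {"name": "Datalink waveform", "trl": 7, "mrl": 6},
]

for tech in critical_technologies:
    gaps = []
    if tech["trl"] < TARGET_TRL:
        gaps.append(f"TRL {tech['trl']} below target {TARGET_TRL}")
    if tech["mrl"] < TARGET_MRL:
        gaps.append(f"MRL {tech['mrl']} below target {TARGET_MRL}")
    status = "; ".join(gaps) if gaps else "meets assumed targets"
    print(f"{tech['name']}: {status}")

Technologies flagged by a screen like this are the ones whose maturation schedules most need the mitigation strategies and workarounds called for above.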

Sensitivity Analysis. The AoA will identify assumptions, constraints, variables, and metric thresholds that, when altered, may significantly change the relative schedule, performance, and/or cost-effectiveness of the alternatives. The sensitivity analysis should identify cost, schedule, and performance drivers to illuminate the trade space for decision makers (e.g., identify performance attributes that make the largest changes to the force's mission effectiveness or are likely to most influence development and/or production cost).
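A one-at-a-time parameter sweep is one simple way to surface those drivers. The sketch below varies each assumption over a low/high excursion and reports the swing in a toy effectiveness measure; the model, parameter names, and ranges are placeholder assumptions, not part of this guidance.

# Sketch of a one-at-a-time sensitivity sweep to identify driving assumptions.
# The toy effectiveness model, parameter names, and excursion ranges are
# placeholder assumptions made for illustration.

def effectiveness(params):
    """Toy model: expected targets found per day."""
    return params["p_detect"] * params["sorties_per_day"] * (1.0 - params["attrition"])

baseline = {"p_detect": 0.80, "sorties_per_day": 2.0, "attrition": 0.05}
excursions = {
    "p_detect": (0.60, 0.95),
    "sorties_per_day": (1.5, 3.0),
    "attrition": (0.02, 0.15),
}

base_value = effectiveness(baseline)
for name, (low, high) in excursions.items():
    swings = [effectiveness({**baseline, name: value}) - base_value for value in (low, high)]
    print(f"{name}: swing {min(swings):+.2f} to {max(swings):+.2f} about baseline {base_value:.2f}")

The parameters with the largest swings are the ones that warrant deeper excursions and explicit treatment in the tradespace presentation.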

Other specified analysis as required:

• All mandatory Key Performance Parameters (KPPs) as noted in the Joint Capabilities Integration and Development System (JCIDS) manual should be analyzed, as applicable. Additionally, if a value has been specified within the requirements documents for these KPPs, describe the risk incurred for failing to achieve these values.

• DOTmLPF-P Assessment. The AoA will evaluate the implications for doctrine, organization, training, materiel, leadership and education, personnel, facilities, and policy (DOTmLPF-P) for each alternative.

• Operational Energy Assessment. If applicable, the AoA will include an examination of demand for fuel or alternative energies under each of the alternatives, using fully burdened costs (a simple fully burdened fuel-cost sketch follows this list). The study director will:
  o Ensure the Fully Burdened Cost of Energy (FBCE) method is used in computing costs for the Life Cycle Cost Estimate (LCCE) and documented in the final report.
  o Brief the SAG as to whether FBCE significantly differentiates between the alternatives being considered.
  o In cases where it does not significantly differentiate between alternatives, the Service shall complete the FBCE work external to the AoA.
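The sketch below shows the kind of comparison the FBCE examination is after: annual fuel demand for each alternative priced at a fully burdened rate that adds delivery and protection burdens to the commodity fuel price. The prices, burden adder, and demand figures are placeholder assumptions; actual FBCE factors come from the Service's approved FBCE methodology.

# Sketch of a fully burdened fuel-cost comparison across alternatives.
# The commodity price, burden adder, fuel demands, and 20-year period are
# placeholder assumptions, not values from this guidance or any FBCE methodology.

COMMODITY_PRICE = 3.50   # $/gal, assumed standard fuel price
DELIVERY_BURDEN = 10.50  # $/gal, assumed logistics and force-protection burden

def fully_burdened_fuel_cost(gallons_per_year, years=20):
    """Life-cycle fuel cost at the fully burdened rate, in constant dollars."""
    return gallons_per_year * (COMMODITY_PRICE + DELIVERY_BURDEN) * years

# Hypothetical annual fuel demand for two alternatives
for name, demand in {"Alt A": 1.2e6, "Alt B": 0.8e6}.items():
    print(f"{name}: ${fully_burdened_fuel_cost(demand) / 1e6:,.0f}M fully burdened fuel cost")

If the fully burdened difference is small relative to the alternatives' total life-cycle costs, it may not differentiate the alternatives, which is the situation the last sub-bullet above addresses.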

Specific questions to be answered by the AoA

Additional program-specific questions should be included that do not repeat the requirements described elsewhere in the guidance. Rather, these questions should probe issues that are specific to the program, e.g., how a program would achieve high reliability; how a program might mitigate risk if the technology required fails to materialize; how a program might trade lethality versus survivability if cost (or weight) is a limiting factor. This section of the guidance should describe ideas that are substantive to the specific program and pose questions that, when answered, will highlight the truly important aspects of the tradespace for the program.

Administrative Guidance

A SAG will oversee the conduct of the AoA and ensure that the study complies with CAPE guidance. The group will be co-chaired by OSD CAPE and a Service representative and will include representatives from OUSD(AT&L), OUSD(P), OUSD(C), OUSD(P&R), ASD(R&E), ASD(OEPP), DOT&E, the Joint Staff, and the Services. The SAG is responsible for ensuring that the study complies with this guidance. The SAG has the authority to change the study guidance.

The organization performing the AoA will present an AoA study plan (not to exceed 10 pages) for CAPE approval 30 days after the issuance of the AoA Study Guidance, or no less than 30 days prior to the Materiel Development Decision. The organization performing the AoA will work with OSD CAPE to develop a schedule for briefing the SAG on the AoA study team's progress. The briefings should be held bimonthly unless needed more frequently. In between briefings to the SAG, the study lead will maintain dialogue with OSD CAPE.

The guidance should set strict time limits on the analysis timeline; shorter is better. If the AoA analysis is expected to take longer than 6-9 months, the scope of work should be reconsidered to ensure the analysis planned is truly necessary to inform the milestone decision.

The final deliverables will include a briefing to the SAG and a written report. The written AoA report is due to D,CAPE at least 60 days prior to the Milestone Decision (to allow for sufficiency review) and to the other SAG members to properly inform the stakeholders prior to the release of the RFP for the next acquisition stage. The final report will provide a detailed written record of the AoA's results and findings and shall be no more than 50 pages in length, plus the Executive Summary, which should be no more than 10 pages in length.