
2017 Cost And Schedule Symposium Abstract Collection The Strategic Investments Division would like to welcome you to the 2017 NASA Cost and Schedule Symposium. This document contains the names of the authors/presenters and abstracts for all the presentations that will be presented this year. In the Symposium Agenda you will notice that each presentation has an associated “ID”. These IDs can be used to find the presentation abstract that you are interested in.

This year the NASA Cost and Schedule Symposium has a record number of excellent presentations, engaging special sessions, and an awards presentation full of worthy nominations all packed in an eventful three days!

Contents

01_Commercial Crew’s Meets-the-Intent Joint Confidence Level, Formulation and Execution
03_The Zodiac Cost Model: World’s Best Spacecraft CER
04_Standing Review Board Independent Programmatic Assessment Forensics
05_Schedule Execution Metrics
07_Cobra and Empower – EVM Tools That Can Support Cost and Schedule Estimating
08_Project Management Capability Survey Findings and Results
09_Space System Test Schedule Estimating
10_Cloud Solutions – Infrastructure, Platform or Software: Where Should You Go?
13_ONCE
14_NICM
15_Being Certain about Uncertainty, Part 1
16_2017 NASA Cost & Schedule Symposium
17_NASA SMART SER Development
18_Reserve Strategy Using Results & Data from JACS
19_NASA Flight Software Estimation Model: A Web-Based Cost Analysis Tool
20_Concept to Cost to Feasibility & Assessment: Cluster Based NASA Mission Cost Models
22_Trust, but Verify: An Improved Estimating Technique Using the Integrated Master Schedule (IMS)


24_Showcasing the Product-Based Structure to Manage Project Deliveries
25_Integrated Schedule Risk Analysis using JACS
26_Calcomo
27_Beyond RIFT: Improved Metrics to Manage Cost and Schedule
28_EVM: Truth or Fiction?
29_The Undoing Project: A Year That Changed My Mind About Cost Estimation
31_The Silent S in NICM - NICM Schedule Capabilities
32_Programmatic Analysis Capability: Five Essential Development Activities
33_Integrating the Cost, Schedule and Risk Processes in the JCL Analysis
34_Using Business Intelligence to Enhance JCL Analysis
35_How to Assess a Joint Confidence Level (JCL) Model
37_Concurrently Verifying and Validating the Critical Path and Margin Allocation Using Probabilistic Analysis
39_Data Unification at Goddard Space Flight Center
40_SEER for Space Systems Prototype
41_The Burden of Government Oversight Activities on Contractors Involved in Space Systems Acquisitions
43_The Signal and the Noise in Cost Estimating
44_Cost Analysis Data Requirement (CADRe)
47_Estimating Planetary Instrument Schedule
48_Data Science Applications to Scheduling
49_Techniques for Assessing a Project’s Cost and Schedule Performance
50_Modeling Schedule and Cost Stochastically
51_Schedule Optimization Analysis: Seeing Beyond the Results
54_An Insight into Being an Effective Scheduler in the Project Chain of Command
55_Multidimensional Risk Analysis (MRISK)


57_Bridging the Schedule Gap – How NASA Glenn and its Contractor Team Set the Standard for ISS Research Projects
59_Technology Development Cost and Schedule Modeling
60_So Your Project Is Falling Behind? A Polaris Schedule Risk Analysis Case Study
61_Validation of Allocation of Cost by Top Line (ACTL) 25% Rule
62_‘All in One’ Programmatic Analysis?: Assessing How Different Abstractions of the Same Variables Affect Analysis Results
65_Mission Operations Cost Estimation Tool (MOCET) FY17 Update
66_Cost Effects of Destination on Space Mission Cost with Focus on L1, L2 Orbits


01_ Commercial Crew’s Meets-the-Intent Joint Confidence Level, Formulation and Execution Authors: John Aitchison

Abstract: This paper will cover the formulation, development, implementation, and execution of cost and schedule analysis/assessment support to the certification phase of the Commercial Crew Program (CCP). There were many challenges to overcome, including the unanticipated awarding of two separate contracts, data rights, program support for a Joint Confidence Level (JCL) process, formulation of a quantitative cost/schedule risk analysis, NASA oversight versus insight, and a lack of substantial financial information from the Partners.

The goal of the CCP is twofold: a public purpose, to facilitate U.S. private industry development of safe, reliable, and cost-effective human space transportation to and from low Earth orbit and the International Space Station for use by the U.S. Government and other customers; and to enable the eventual purchase by NASA of commercial services to meet its ISS crew transportation needs once the capability is matured and available. This is done through a firm-fixed-price contract with high-level requirements the Partners must meet in order to achieve NASA certification. For comparison, there are only 227 technical requirements in the CCP firm-fixed-price contract; Orion’s Exploration Flight Test-1 (EFT-1) had nearly 950, and the Constellation Program’s spacesuit contract alone had over 450, both cost-plus contracts.

The contractual deliverables of the Partners do not require any traditional financial reports, so a quantitative cost risk analysis (QRA) was devised to provide program management with useful cost risk information about threats to both the Government and the Partners. The schedule risk analysis (SRA) was created, independent of the quantitative cost risk analysis, as a tool based on Program and Partner schedule risks to predict when the Partners would complete Contract Line Item Number (CLIN) 001, the development phase of the contract.

Prior to contract award, Program Management expressed concern about the JCL requirement. Specifically, since the contract would provide much less information than delivered under a traditional cost plus procurement and there are other ways of assessing a program’s cost and schedule commitments, they felt the standard JCL analysis was not a practical methodology for assessing CCP’s programmatic risk posture. The result was a “meets-the-intent” JCL through a quantitative cost risk analysis and an independent schedule risk analysis.

In execution, the strategy was forced to adjust to accommodate the certification of two Partners instead of the planned single-Partner certification. Additionally, both Partners went through a schedule rebaseline three months after contract award, and cost risk data from one Partner was impossible to obtain or predict. CCP follows an insight management model, which limits the cost and schedule data provided to the Program engineering support team as compared to previous programs, and this ultimately impacted the validity of the cost and schedule risk models developed for CCP. Most existing historical cost data is of limited value, especially for one of the Partners, because the commercial model has not previously been used in this way (cost-plus development versus fixed-price human-rated development). Lastly, the contract data requirement deliverable for cost, the quarterly program report, contained only net working capital, not highly relevant cost risk information.

The key decision point (KDP) for CCP was equivalent key decision point 1 (eKDP-1). At eKDP-1, the program plan was to establish the Agency Baseline Commitment (ABC) and Management Agreement (MA) using the QRA/SRA as the primary input. In the run-up to eKDP-1, the program was independently reviewed by a Standing Review Board (SRB), with a programmatic team reviewing the QRA/SRA inputs, risks, risk consequence scoring, uncertainty, and outputs.

Ultimately, the program successfully passed the programmatic review gate of eKDP-1 and moved into the final certification phase of the contract. In review, while there are some limitations to a QRA/SRA, when appropriately tailored it is an acceptable substitute and meets the intent of the NPR 7120.5E JCL requirement for a firm-fixed-price human-rated development contract.

03_The Zodiac Cost Model: World’s Best Spacecraft CER Authors: Joe Hamaker

Abstract: One day a senior cost estimator arrived at work early, and as she was walking across the nearly empty parking lot from her car to her office building, a piece of paper blew across the pavement. Being a good citizen, she picked up the paper, planning to dispose of it properly once inside the building. To her surprise, she noticed that across the top of the piece of paper, in bold letters, it said “The Zodiac Cost Model: World’s Best Spacecraft CER.” The body of the paper contained an equation. Studying the equation a bit more, she determined that it purported to be a CER that would estimate the development cost of spacecraft based on several variables, variables that didn’t make any engineering sense at all. The variables were:

• The Zodiac sign of the Project Manager coded as follows:

o Mars in Aries = 1
o Mars in Taurus = 2
o Mars in Gemini = 3
o Etc.

• The Chief Engineer’s shoe size in European units

• The percent of the project team who were vegetarians

• The ratio of PC to Mac users on the team.

• For an indicator variable called Zeus, the model instructed the user to wear a tin foil hat to shield the brain from electromagnetic fields; then flip a coin: Heads assign 0, Tails assign 1 to the variable Zeus.

• The number of alphanumeric characters in the headlines of the morning edition of the New York Times on the first day of the Project PDR

• Take the Project Office cost estimate in millions, multiply it by pi, and use the resulting number as the spacecraft mass in kilograms

The CER on the sheet of paper was:

Cost in $M = Veg%^(−Zeus) × PC% + (Mass kg)^0.5 + Headline


While recognizing that this was ridiculous, our estimator was working on a project for which she thought she could gather the values for the model (although obtaining the Zodiac sign of the PM and the shoe size of the Chief Engineer was a bit awkward). Exhibit 1 shows this data.

Exhibit 1

Variable                                                      Value
Zodiac sign of the Project Manager (Mars in Gemini)           3
Chief Engineer’s shoe size in European units                  34
Percent of the team that are vegetarians                      7%
Percent of PC users vs. Mac users on the team                 55%
Zeus variable (from coin flip in tin foil hat)                1
Headline variable                                             11
Mass variable (Project estimate of $370M × 3.14 = 1162 kg)    1162

Next our intrepid estimator exercised the CER as shown below, obtaining an estimate of $710.8M compared to the Project estimate of $370M.

Cost in $M = 7^(−1) × 55% + (1162)^0.5 + 11 = $710.8M

That in itself proved little. So with greater effort over the next several days, as time permitted, she gathered the data for a dozen historical missions and ran them through the CER. The percent errors were all very small, all within a few percentage points of the actual cost, and just about evenly distributed between positive and negative. Not a single percent error was over 5%. In fact, the average percent error was very close to zero. This CER, as measured by percent error, was better than anything she had ever seen before for a spacecraft cost model!
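The validation exercise described here (gathering historical missions and scoring a CER by percent error) can be sketched in a few lines of Python. The mission figures below are hypothetical placeholders, not the estimator's data.

```python
# Score a CER against historical actuals using signed percent error,
# as described in the abstract. All mission figures are hypothetical.

def percent_error(estimate, actual):
    """Signed percent error of an estimated cost vs. the actual cost."""
    return 100.0 * (estimate - actual) / actual

# (estimated cost $M, actual cost $M) for hypothetical historical missions
missions = [(412.0, 405.0), (198.0, 203.0), (760.0, 752.0), (95.0, 97.0)]

errors = [percent_error(est, act) for est, act in missions]
avg_error = sum(errors) / len(errors)
within_5_percent = all(abs(e) <= 5.0 for e in errors)

print(f"average percent error: {avg_error:+.2f}%")
print(f"all within 5% of actuals: {within_5_percent}")
```

A low average error with small, evenly distributed residuals is exactly the pattern the story describes; whether that pattern validates the model is the question the class is left with.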

Now as it happened, our estimator was teaching a beginning cost estimating class a few days later and she showed her students the results of the Zodiac model. She told the class exactly how she came by the CER and that she had no knowledge of the data base behind it. She asked the class to ruminate on this over lunch, and when they returned, she wanted to lead them in a discussion of the efficacy, or lack thereof, of this CER.

After lunch, about 1/3 of the students objected to the CER totally, about 1/3 weren’t sure and about 1/3 thought that since it seemed to work so well it should be used.

Our senior estimator prepared to voice her opinion. She paused for dramatic effect. The students all leaned forward expectantly. Then she said…

What do you think the teacher should have told her class?

04_Standing Review Board Independent Programmatic Assessment Forensics Authors: Robin Smith, Michele King, Diana Abel, Justin Hornback

Abstract: This presentation conveys the results of a recently completed forensics effort capturing and analyzing NASA independent programmatic assessment findings. The effort focused on programs and projects that underwent a life cycle review during the six years prior to the disestablishment of the Independent Program Assessment Office (IPAO), 2009 - 2015. Assessment findings gathered for study were focused on the programmatic aspects of NPR 7120.5E assessment criteria. Findings were then analyzed to uncover commonalities, gaps, and trends, as well as to identify areas for independent improvement.


05_ Schedule Execution Metrics Authors: Ed Knox and Michelle Jones

Abstract: In response to National Reconnaissance Office (NRO) Senior Management's need for metrics that are leading indicators of program execution challenges, the NRO Cost and Acquisition Assessment Group (CAAG) Earned Value Management Center of Excellence (ECE) developed a suite of Schedule Execution Metrics.

The metrics are now part of the NRO corporate tool box for Senior Leadership decision support. The metrics are derived from the contractor’s integrated master schedule and answer the questions:

• Is the contractor executing the baseline plan?

• Is the contractor ahead or behind?

• Is the forecast realistic?

This briefing will define schedule execution metrics, explain how they are being used at NRO, and describe ongoing research tasks to interpret the metrics.
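One widely used IMS-derived metric of the kind described above is the Baseline Execution Index (BEI), which addresses the first question: is the contractor executing the baseline plan? The sketch below is illustrative only; it is not the CAAG's actual metric suite, and the task data is invented.

```python
# Baseline Execution Index (BEI): tasks actually completed divided by
# tasks baselined to be complete as of the status date. BEI < 1.0
# suggests the contractor is falling behind the baseline plan.
# Illustrative sketch only; not the NRO CAAG implementation.
from datetime import date

def baseline_execution_index(tasks, status_date):
    """tasks: dicts with 'baseline_finish' and 'actual_finish' (None if open)."""
    baselined = sum(1 for t in tasks if t["baseline_finish"] <= status_date)
    completed = sum(1 for t in tasks
                    if t["actual_finish"] is not None
                    and t["actual_finish"] <= status_date)
    return completed / baselined if baselined else 1.0

# Hypothetical IMS extract
tasks = [
    {"baseline_finish": date(2017, 3, 1),  "actual_finish": date(2017, 3, 5)},
    {"baseline_finish": date(2017, 3, 10), "actual_finish": None},
    {"baseline_finish": date(2017, 4, 1),  "actual_finish": date(2017, 3, 28)},
    {"baseline_finish": date(2017, 6, 1),  "actual_finish": None},
]

bei = baseline_execution_index(tasks, status_date=date(2017, 4, 15))
print(f"BEI = {bei:.2f}")  # below 1.0: behind the baseline plan
```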

07_Cobra and Empower – EVM Tools That Can Support Cost and Schedule Estimating Authors: David Warren and Kristen Kehrer

Abstract: Cobra and Empower are COTS software tools available to all NASA projects to use for earned value management (EVM). Cobra is used for planning and managing project costs, measuring earned value, and analyzing budgets, actuals, and forecasts. Empower is a server-based analytical tool that combines earned value, schedule, and other key information to provide an integrated interactive system for proactive management of complex projects. The purpose of this presentation is to familiarize NASA’s cost and schedule community with Cobra and Empower and to demonstrate how data in both tools can support the cost and scheduling communities to:

• Prepare estimates at completion (EACs), including best-case / most-likely / worst-case ranges

• Develop Cost Estimating Relationships (CERs)

• Understand reasons for cost and schedule growth

• Ensure cost and schedule data validity

• Integrate cost and schedule data

• Other
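As a sketch of the first bullet, the textbook EVM index formulas below produce a best-case / most-likely / worst-case EAC range from the kind of cost and earned-value data such tools hold. The numbers are hypothetical, and this is not the actual output of Cobra or Empower.

```python
# Independent estimate-at-completion (EAC) range from standard EVM
# performance indices. Formulas are the textbook ones; values hypothetical.

BAC  = 100.0  # budget at completion, $M
BCWP = 40.0   # earned value (budgeted cost of work performed), $M
ACWP = 50.0   # actual cost of work performed, $M
BCWS = 45.0   # planned value (budgeted cost of work scheduled), $M

cpi = BCWP / ACWP  # cost performance index
spi = BCWP / BCWS  # schedule performance index

eac_best   = ACWP + (BAC - BCWP)                 # remaining work at budgeted rates
eac_likely = ACWP + (BAC - BCWP) / cpi           # current cost efficiency persists
eac_worst  = ACWP + (BAC - BCWP) / (cpi * spi)   # cost and schedule pressure persist

print(f"CPI={cpi:.2f} SPI={spi:.2f}")
print(f"EAC range: best ${eac_best:.1f}M, "
      f"likely ${eac_likely:.1f}M, worst ${eac_worst:.1f}M")
```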

08_ Project Management Capability Survey Findings and Results Authors: Jeff Kottmeyer and Kristen Kehrer

Abstract: NASA projects are increasingly a collaboration of multiple Centers and suppliers, which is both exciting and challenging. A certain level of consistency is needed to ensure interoperability of project planning and control systems. This is accomplished through standard structures, which integrate data and hence support project management reporting and insights to produce management decisions and actions.

The NASA EVM Capability Team developed a survey in 2012 to assess the consistency of project planning and control software, procedures, and training throughout the Agency. The survey data gathered empirical evidence to determine whether the processes and tools are common or distinctive and unique. Questions were asked in the areas of fiscal calendars, organizational breakdown structures, work breakdown structures, rates/resources, project planning, tools, maintenance, and implementation. The survey results showed some commonality in these areas; however, there was a high degree of variability among the responses.

Since 2012, strides have been made in these areas to improve project planning and control systems at NASA Centers. GSFC has implemented various process improvements that establish standards while at the same time allowing flexibility. Software tools and the interfaces between these tools are being modified to better support project planning and control and projects’ cost and schedule performance.

09_Space System Test Schedule Estimating Authors: Daniel Barkmeyer

Abstract: The National Reconnaissance Office’s parametric model for estimating spacecraft testing schedule duration has been updated. The updated model incorporates data from government and commercial spacecraft contracts. The intent of this presentation is to share the model with the space estimating community. The latest model will be presented, along with a case study of a satellite program that illustrates the importance of the chosen schedule drivers to system test schedule.

10_ Cloud Solutions – Infrastructure, Platform or Software: Where should you go? Authors: Arlene Minkiewicz

Abstract: According to PRNewswire, 90% of medium to large enterprises plan to increase or maintain annual spending on cloud computing solutions in 2016, with 47% of those surveyed citing increased efficiency as the main reason. Clearly, cloud computing is here to stay. Furthermore, according to Bernard Golden of CIO Magazine, the battle of the infrastructure is over; applications will be the push going forward. More and more organizations are starting to move their applications into the cloud.

Cloud Computing as defined by National Institute of Standards and Technology (NIST):

“Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g. networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”

Although the term cloud computing is relatively new, the concepts and technologies behind cloud computing have been emerging and evolving for some time. Consumers of cloud computing access hardware, software, and networking capabilities from a third-party provider in much the same way they get electricity or water from their utility companies. This utility computing model offered in the cloud is likely to bring benefits – especially to small and medium enterprises as well as startup businesses. In addition to the cost savings associated with not having to purchase the hardware, software, and infrastructure associated with running a business, cloud solutions bring agility, scalability, portability, and on-demand availability.

While the potential for cost savings is real, there is no such thing as a free lunch. A company with firmly entrenched legacy systems needs to think about the trade-offs associated with migrating from the status quo into the cloud. Migration could spur a host of activities, including installation and configuration, possible code changes, migration of data, and integrations with other systems, not to mention the changes in business processes and culture that are sure to occur.

Migration of capability to the cloud comes with several planning and management challenges. How does an organization determine the right solutions to migrate and the right platform for migration? What challenges do the various cloud solutions present: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), or Software as a Service (SaaS)? This paper delves into the specifics of each of these three cloud solutions: the challenges, benefits, and obstacles. The second section of this paper provides a brief overview of cloud computing and presents the different types of cloud platforms. Section three goes into the particulars of Infrastructure as a Service, Platform as a Service, and Software as a Service. Section four presents a case study showing how each of these models might be evaluated by an organization considering a migration of specific capability to the cloud. Section five contains a general discussion about the limitations of the case study, other cloud-related considerations, and final thoughts on the subject.

13_ONCE Authors: James Johnson, Eric Plumer, Julie McAfee, Mike Blandford

Abstract: This presentation will provide an overview of the One NASA Cost Engineering (ONCE) database, as well as cover several of the new features and enhancements that have been incorporated in the last year.

14_NICM Authors: Joe Mrozinski

Abstract: The Cryocooler estimating relationship in NICM VII behaves as expected for new, large, one-of-a-kind instrument cryocooler subsystems. However, the NICM team has received feedback that users often short-circuit this equation for smaller systems relying on commercial-off-the-shelf (COTS) parts. The NICM team has developed new cost estimating capabilities for both of these classes of cryocoolers, as well as for other instrument thermal subsystem costs. The analysis and draft equations will be presented, with the final equations due to be integrated into the NICM VIII release in 2018.

15_Being Certain about Uncertainty, Part 1 Authors: Frank A. (Andy) Prince

Abstract: Doing cost risk analysis is hard because we don’t really know what a good cost risk analysis looks like. In this paper we will explore the challenges to doing good cost risk analysis and discuss ways to know if your cost risk analysis is any good. We will also examine the phenomenon of extreme cost growth and lay the groundwork for future research and analysis.

Note: There will also be Special Sessions demos for ONCE and NICM – check your agenda.
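The kind of simulation-based cost risk analysis under discussion can be sketched with a minimal Monte Carlo model: sample uncertain WBS element costs, sum them, and read confidence levels off the resulting S-curve. The element spreads below are hypothetical illustrations, not a real project's inputs.

```python
# Minimal Monte Carlo cost-risk sketch: sample uncertain WBS element
# costs, sum per trial, and read confidence levels off the S-curve.
# Distributions and values are hypothetical.
import random

random.seed(42)  # reproducible illustration

# (low, mode, high) triangular spreads per WBS element, $M
wbs = [(8, 10, 15), (18, 20, 30), (4, 5, 9)]

trials = 20000
totals = sorted(
    sum(random.triangular(lo, hi, mode) for lo, mode, hi in wbs)
    for _ in range(trials)
)

def confidence_level(pct):
    """Total cost at the given percentile of the simulated S-curve."""
    return totals[int(pct / 100 * (trials - 1))]

print(f"50% CL: ${confidence_level(50):.1f}M")
print(f"70% CL: ${confidence_level(70):.1f}M")
```

The gap between percentiles is one simple window into the skew that drives extreme cost growth: the further the 70% level sits above the 50% level, the heavier the right tail.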

16_2017 NASA Cost & Schedule Symposium Authors: Brian Alford, Richard Webb, Mark Jacobs, Shawn Hayes

Abstract: The newest release of the Project Cost Estimating Capability (PCEC), version 2.2 (PCEC v2.2), is an incremental update that includes enhancements to the functionality of the tool, new capabilities for life cycle estimating, and some minor updates to existing cost estimating relationships (CERs). The primary usability update in PCEC v2.2 is a change to the Launch feature that allows a user to resume the automated creation of an estimate file at a later time. Combined with the Launch feature’s WBS editing capabilities, this makes updating an estimate much easier than before. As a result of new data collection, normalization, and analysis activities, life cycle estimating capabilities have been added to the Robotic Spacecraft (SC) model in this release via Mission Operations & Data Analysis (MO&DA) CERs and a capability to integrate the results from a Space Operations Cost Model (SOCM) estimate. Both will provide additional options and flexibility for estimating spacecraft Phase E costs. The Crewed and Space Transportation Systems (CASTS) model updates in PCEC v2.2 are relatively small: a few CERs have been adjusted to reflect minor data normalization changes, but v2.2 adds the ability to integrate results from the Operations Cost Model (OCM) into a PCEC estimate file to support life cycle estimates for launch vehicles and crewed spacecraft.

In this presentation, we will talk about the newest release and also discuss several additional PCEC-related topics of interest to the cost and schedule community including an update on CASTS model documentation, guidance on how to use PCEC to conduct integrated architecture analyses, and a preview of ongoing research into the development of a milestone estimating capability for spacecraft. In addition, we will provide a status update on the REDSTAR library.

17_ NASA SMART SER Development Authors: Mohamed Elghefari

Abstract: The Schedule Management and Relationship Tool (SMART) uses analogous comparisons and schedule estimating relationships (SERs) for estimating unmanned earth-orbiting and planetary project development schedule durations. SMART is based on high-level technical and programmatic characteristics of historically flown NASA missions. This paper provides a detailed discussion of the overall project development phase SERs (ATP-LRD) and lower level milestone duration SERs (e.g. SRR-PDR, PDR-CDR, CDR-SIR, etc.) within SMART. The content of each SER, the exploratory and statistical regression analysis performed on the data, and the SER development process will be presented.
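The SER development process described above is essentially regression in log space. As an illustration only (the mission records, the explanatory variable, and the resulting coefficients below are entirely hypothetical, not SMART's actual data or functional form), a power-law duration SER can be fit like this:

```python
import numpy as np

# Hypothetical historical records: (spacecraft dry mass in kg, ATP-LRD duration in months).
# These are made-up values for illustration; SMART's actual data and drivers differ.
data = np.array([
    [350.0, 38.0],
    [600.0, 46.0],
    [1200.0, 55.0],
    [150.0, 30.0],
    [2500.0, 68.0],
    [800.0, 48.0],
])

# Ordinary least squares in log space: ln(duration) = b0 + b1 * ln(mass),
# i.e. duration = exp(b0) * mass**b1 -- the usual power-law SER/CER form.
X = np.column_stack([np.ones(len(data)), np.log(data[:, 0])])
y = np.log(data[:, 1])
(b0, b1), *_ = np.linalg.lstsq(X, y, rcond=None)

def estimate_duration(mass_kg: float) -> float:
    """Predicted development duration (months) from the fitted SER."""
    return float(np.exp(b0 + b1 * np.log(mass_kg)))

print(round(estimate_duration(1000.0), 1))
```

The same mechanics extend to the milestone-to-milestone SERs (SRR-PDR, PDR-CDR, etc.) by swapping in the appropriate duration as the dependent variable.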


18_ Reserve Strategy Using Results & Data from JACS Authors: Rey Carpio and Antonio Rippe

Abstract: This presentation is about how to determine a data-driven cost and schedule reserve recommendation to the Project Manager using Joint Confidence Level (JCL) data. For example, the Project Manager is asking what the cost and schedule reserves should be to establish the project Agency Baseline Commitment (ABC). Where do you start and how do you develop the reserve recommendation to the Project Manager from your JCL data?

This presentation will review the challenges in making practical recommendations on reserve postures (cost & schedule) and provide a concrete example in developing a cost reserve and schedule reserve that the Project Manager can take forward to the stakeholders in Washington, e.g., OCFO/SID. It will include:

• Defining the overall process to determine how the cost and schedule reserves are developed

• Detailing the JCL data needed to develop the reserve postures

• Highlighting the importance of data-driven analysis and recommendations

• Presenting examples of results and insights generated from this approach
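The core arithmetic behind the bullets above can be sketched with simulated stand-in data (actual JACS outputs are not reproduced here; the baseline values, distributions, and 70th-percentile target are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in for exported JCL iteration results: one (cost $M, finish months)
# outcome per Monte Carlo trial. Real numbers would come from the JACS model.
cost = rng.lognormal(mean=np.log(500.0), sigma=0.15, size=10_000)
months = rng.lognormal(mean=np.log(48.0), sigma=0.10, size=10_000)

baseline_cost, baseline_months = 500.0, 48.0  # hypothetical project baseline
target = 70  # e.g. set the ABC at the 70th percentile of the JCL results

# Reserve = funding/duration at the target confidence level, minus the baseline.
cost_reserve = np.percentile(cost, target) - baseline_cost
schedule_reserve = np.percentile(months, target) - baseline_months
print(f"Cost reserve:     {cost_reserve:6.1f} $M")
print(f"Schedule reserve: {schedule_reserve:6.1f} months")
```

The percentile chosen, and whether cost and schedule reserves are read off jointly or marginally, are exactly the kinds of judgment calls the presentation discusses.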

19_NASA Flight Software Estimation Model: A Web-Based Cost Analysis Tool3 Authors: Dr. Jairus Hihn, Elinor Huntington, Alex Lumnah, Michael Saing, Tom Youmans, and James K Johnson

Abstract: After several years of scrubbing data and refining algorithms, we are finally ready to make ASCoT, the NASA Software Analogy Costing Tool, available throughout NASA via the ONCE (One NASA Cost Engineering) portal. ASCoT provides a suite of estimation tools to support early-lifecycle NASA flight software analysis. ASCoT employs advanced statistical methods such as cluster analysis to provide an analogy-based estimate of software delivered lines of code and development effort, a regression-based CER model that estimates cost (dollars), a KNN-based estimate, and a COCOMO II-based estimate. The ASCoT algorithms are designed to work primarily with system-level inputs such as mission type (earth orbiter vs. planetary vs. rover), the number of instruments, and total mission cost. This allows the user to supply a minimal number of mission-level parameters, which are better understood early in the life cycle, rather than a large number of complex inputs.

ASCoT is a fully web-based application with an associated database. This demonstrates NASA's capability to move advanced statistical models to the online environment, which we hope to build on in the future by providing more cost tools via the web. The web-based implementation has allowed ASCoT to expand its capabilities more quickly. For example, the use of data visualization has been greatly increased, and ASCoT can now run on both PCs and Macs.

3 There will also be a Special Sessions demo for ASCoT – check your agenda

A number of major and important changes have been made to ASCoT in preparation for the official release. Most importantly, a new clustering algorithm has been derived that significantly reduces estimation variance and increases cluster stability. The input variables have been refined; for example, a software size parameter is no longer required. Some new mission data has been added. Finally, an additional estimation method, Nearest Neighbor, has been added.
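As a sketch of what nearest-neighbor analogy estimation looks like when driven by system-level inputs (the mission records, feature encoding, and effort values below are invented for illustration and are not ASCoT's data or algorithm):

```python
import numpy as np

# Invented analogy database: (mission-type code, instrument count, mission cost $M,
# flight software effort in work-months). Not real mission data.
missions = np.array([
    # type, instruments, cost, effort
    [0, 3,  450.0, 180.0],   # 0 = earth orbiter
    [0, 5,  700.0, 260.0],
    [1, 4,  900.0, 340.0],   # 1 = planetary
    [1, 7, 1500.0, 520.0],
    [2, 2, 2500.0, 610.0],   # 2 = rover
])

def knn_effort(mission_type, n_instruments, cost, k=2):
    """Analogy estimate: mean effort of the k nearest historical missions
    in a scale-normalized feature space (a sketch of KNN-based estimation)."""
    X = missions[:, :3]
    q = np.array([mission_type, n_instruments, cost], dtype=float)
    scale = X.std(axis=0)                          # put features on comparable scales
    d = np.linalg.norm((X - q) / scale, axis=1)    # normalized Euclidean distance
    nearest = np.argsort(d)[:k]
    return float(missions[nearest, 3].mean())

print(knn_effort(1, 5, 1000.0))  # -> 430.0 with these toy records
```

Feature scaling matters here: without it, the cost column (hundreds of $M) would dominate the distance and the mission-type and instrument-count inputs would be ignored.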

20_ Concept to Cost to Feasibility & Assessment: Cluster Based NASA Mission Cost Models Authors: Thomas Youmans, Alexander Lumnah, and Jairus Hihn

Abstract: NASA costing has done very well for many years using high-quality parametric cost models. Recently, advances in computing power and statistical methods have allowed new techniques to be added to NASA's costing arsenal in order to produce more detailed cost estimates, and to estimate costs with fewer input parameters.

A promising new method for cost analysis is cluster-based modeling. The NASA Software Cost Tool (ASCoT) and the JPL Analogy Model (JAM) are examples of cluster models currently in use within the NASA community. These techniques can be tremendously helpful for mission design and mission cost assessment. Tools that employ these algorithms can quickly show what similar missions cost in the past, what a new mission may cost, and whether or not a proposed mission is within family. Various clustering methods can provide this valuable insight – from pure heuristic groupings to principal component clustering and other iterative, machine learning techniques.

By understanding how missions group together, a mission designer or proposal reviewer can quickly assess feasibility and cost-risk using key differences across mission groupings and key similarities within mission groups.

Presented is an example of going from mission concept, to first pass cost estimate, to multiple-option-feasibility assessment, using cluster based models in an interactive web-based environment. Also presented is a technical analysis of multiple clustering methods applied to small datasets for NASA mission software cost modeling.
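To make the idea of grouping missions into families concrete, here is a minimal clustering sketch (pure-NumPy k-means over invented, normalized mission features; the actual models described above use richer features and more careful methods):

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented, normalized mission features (e.g. scaled cost and complexity)
# forming two loose "families" of missions.
family_a = rng.normal([0.2, 0.3], 0.05, size=(8, 2))
family_b = rng.normal([0.8, 0.7], 0.05, size=(8, 2))
X = np.vstack([family_a, family_b])

def kmeans(X, k=2, iters=50, seed=0):
    """Minimal k-means: the kind of grouping step behind cluster-based models."""
    r = np.random.default_rng(seed)
    centers = X[r.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each mission to its nearest center, then re-center each group.
        labels = np.argmin(np.linalg.norm(X[:, None, :] - centers, axis=2), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

labels = kmeans(X)
# With well-separated families, each family lands in a single cluster, so a
# new mission can be compared against "its" family's cost history.
print(labels)
```

Once a proposed mission is assigned to a cluster, "within family" checks reduce to comparing its parameters and cost against the spread of that cluster's historical members.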

22_Trust, but Verify: An Improved Estimating Technique Using the Integrated Master Schedule (IMS) Authors: Eric M. Lofgren

Abstract: It has long been the wonder of management why the Integrated Master Schedule (IMS) fails to give advanced warning of impending schedule delays. An estimator may follow authoritative guidance in analysis of schedule health using key metrics, supposing that such checks authenticate schedule realism. Why, then, do practitioners find themselves caught off guard by slips when their IMS appears in good health?


The inadequacy of status quo metrics stems from the perceived continuing irrelevancy of previous IMS submissions. Without a point of reference to ensure logical evolution, the current IMS can only say so much. It is important to understand, for example, what baseline changes have occurred over time and how actual performance has measured to near-term plans. While schedules are living documents, the initial baseline stands as the only available point of reference. This baseline is valid for three major reasons. First, planners tend to know the major activities involved in the execution of a project. All systems can be said to have historical analogies, even those considered revolutionary. Second, contractors generally have well-defined processes for developing these systems. Third, the IMS undergoes an Integrated Baseline Review (IBR) after which both the contractor and client agree to the plan and its efficacy. Thus, the IMS from its outset may be viewed by estimators primarily as a tool for measuring the scheduler’s ability to plan.

For many Major Defense Acquisition Program (MDAP) contracts, the IMS has a poor history of accurately reflecting schedule risk. In an analysis of 12 MDAP contracts, schedule slips were not registered until late in the project. In Figure 1 below, one can see that the IMS largely does not indicate any delays until after the half-way point of the original schedule duration. In fact, as a project approaches its expected end date, further delays emerge, creating a tail chase. This situation gives decision-makers minimal leeway to make tradeoffs and implement well-informed strategies.

Figure 1: Predicted schedule slip reported by IMS submission

This paper argues that an activity’s baseline from the initial IMS is relevant through subsequent submissions. Planners generally do a good job of laying out major activities, and so early performance on near-term activities should be a good indicator of total schedule realism. Practitioners have often heard anecdotally that early schedule loss cannot be made up despite managerial tactics. This paper will explore that notion and whether it finds support in the data. It will be shown that by tracing near-term activities through subsequent IMSs and comparing them to their original baseline, as opposed to the current baseline, one may extrapolate a more realistic contract finish date far earlier in the project. Part I will lay the groundwork for the method’s rationale by illustrating how real IMS MDAP data behaves over time. Part II will formally present the new method, as well as provide examples and show results from actual contracts.
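The core of the extrapolation idea can be shown in a toy form (all numbers invented; the paper's actual method, developed in Part II, is more sophisticated than this single ratio):

```python
# A sketch of the "trust, but verify" idea: compare completed near-term
# activities against their ORIGINAL baseline (not the current, re-baselined
# plan) and extrapolate the observed slip to the full contract duration.

# (activity, original baseline duration, actual duration) in working days
near_term = [
    ("Requirements review prep", 40, 52),
    ("Subsystem design",         65, 78),
    ("Test bench build-up",      30, 33),
]

baseline_total_days = 900  # original contract duration from the initial IMS

# Slip factor: how much longer near-term work took than originally planned.
slip_factor = sum(a for _, _, a in near_term) / sum(b for _, b, _ in near_term)
forecast_days = baseline_total_days * slip_factor
print(f"Slip factor {slip_factor:.2f} -> forecast {forecast_days:.0f} days "
      f"(+{forecast_days - baseline_total_days:.0f} vs. baseline)")
```

The key design choice is the denominator: measuring against the initial baseline preserves the reference point that repeated re-baselining erases.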

24_Showcasing the Product-Based Structure to Manage Project Deliveries Authors: Danelle Fogle and Jeffrey Stone


Abstract: When faced with challenging schedule deadlines, two different projects applied a product-based structure to achieve project deliveries. This presentation will showcase these two projects as case studies for successful implementation of a product-based structure.

First, the challenges: Both projects faced aggressive and tight deadlines for delivery. One of them (a flight project) had a launch deadline; the other project (communication hardware) faced a hard deadline for funding. Missing these deliveries would have had serious consequences.

Second, the solution and its implementation: In each case, the schedule initially was structured around financial disciplines. Also, in each case, to help manage project deliverables, the schedule was reoriented around a product-based structure.

Using a product-based structure provides the best of both worlds: It maintains the value that comes from grouping work around financial disciplines and it also creates a tool for managing project deliveries. The approach for implementing this new structure included schedule summits, clear team communication, collaborative breakout meetings and the background preservation of the financial disciplines.

Third, the output: Implementing this approach brought to light missing work and gaps in logic that would not have been evident otherwise. These discoveries allowed the Teams to create a workflow (including parallel activities where applicable) that maximized the limited amount of time. These discoveries also allowed the Teams to determine deadlines that were reliable and realistic. This product-based structure also improved the Teams' capacity to manage progress and delivery.

Fourth, the results: You will hear the respective Teams confirm the value of this approach. One of the best confirmations, of course, is delivery. In the end, each project successfully completed its deliveries on time.

25_ Integrated Schedule Risk Analysis using JACS Authors: Antonio Rippe

Abstract: What do you do when you have three independent programs that aren’t independent? Integrating these programs to obtain a comprehensive picture of the total risk profile is a difficult achievement, to say the least. Yet this is what’s required in order to maximize the probability of success in meeting the program’s schedule goals.

This presentation will review the challenges in accomplishing the integrated analysis and demonstrate the process using notional data. Modeling was performed using the Joint Analysis of Cost and Schedule (JACS) tool from the ACEIT suite. The author will detail the required steps in preparing and executing the analysis, including:

• Defining a process for integrating the three programs

• Outlining the data needed to accomplish the integrated risk analysis

• Highlighting the importance of an Analysis Schedule in the process

• Explaining the rationale for selecting JACS for the analysis

• Presenting results and insights generated from the analysis

26_Calcomo Authors: Cullen Balinski


Abstract: Widely known in the cost estimation world is Cocomo, a software cost, effort, and schedule tool developed by Dr. Barry Boehm in 1981 at the University of Southern California. Cocomo takes 24 qualitative parameters, as well as a few quantitative inputs (SLOC, etc.), and returns a point estimate of what it will take to develop the software being estimated.

When looking for resources on how to estimate a software change request to the European Service Module, we looked to Cocomo, as many estimators at NASA do. What turned our team away from using it was the point-estimate output it provided. As we have learned from previous experience, management likes S-curves, and a point estimate would not support our argument. Rather than throwing it in the trash, we looked deeper into the working mechanisms of how Cocomo works (which are readily available on its website).

We found that Cocomo's manual provides the derived formula and a ranking for how each of the 24 parameters feeds into the algorithm. This allowed us to reproduce the Cocomo process in our own calculators and replicate its results exactly. Because we could do so on our own platform, we added @RISK to build uncertainty onto each parameter, run a Monte Carlo simulation, and produce a man-hour estimate with a distribution and S-curve.

Building on this further, we were able to shape it into an easy-to-use function for technical experts (ranking items on a 1-10 scale and mapping that proportionally to the Cocomo ranking) and get results that match independent estimates with high precision.
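The @RISK-on-Cocomo step described above can be sketched generically. In the sketch below, the coefficients, the KSLOC value, and the triangular ranges are placeholders, not the calibrated Cocomo values or the team's actual expert elicitations:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 20_000

# Cocomo-style effort formula: PM = A * KSLOC**E * product(effort multipliers).
# A and E below are illustrative placeholders, not the calibrated Cocomo values.
A, E = 2.94, 1.10
ksloc = 15.0

# Instead of fixed multiplier ratings, draw each from a triangular distribution
# (low, mode, high) elicited from technical experts -- the @RISK-style step.
multiplier_ranges = [
    (0.90, 1.00, 1.15),  # e.g. product complexity (hypothetical range)
    (0.85, 1.00, 1.10),  # e.g. analyst capability (hypothetical range)
    (0.95, 1.05, 1.20),  # e.g. platform volatility (hypothetical range)
]
ems = np.prod([rng.triangular(lo, mode, hi, size=N)
               for lo, mode, hi in multiplier_ranges], axis=0)
effort = A * ksloc**E * ems   # person-months, one value per Monte Carlo trial

# The S-curve management asks for is just the empirical CDF of these trials.
p50, p70 = np.percentile(effort, [50, 70])
print(f"P50 = {p50:.0f} PM, P70 = {p70:.0f} PM")
```

Sorting `effort` and plotting cumulative probability against person-months yields the S-curve directly, which is what distinguishes this from the single point estimate the stock tool returns.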

While Calcomo was fun to make and generates great value in use, the real lesson learned was that, given a little time to do our homework, the tools are available to build custom tools and help solve the problems our discipline faces every day.

27_ Beyond RIFT: Improved Metrics to Manage Cost and Schedule Authors: Nicholas DeTore

Abstract: “Risk-Informed Finish Threshold” (RIFT) presented an innovative solution to a problem inherent in schedules: risk analysis results (time) cannot be allocated the same way as in cost models (dollars). Developing RIFT validation methods inspired an exploration into analyzing simulation data more meticulously. The methods described here provide unique insight into cost and schedule uncertainty results while introducing powerful new techniques to improve a project's potential to complete on time and on budget.

28_ EVM: Truth or Fiction? Authors: Jerald Kerby

Abstract: What is EVM? There seem to be a lot of experts, but few really understand it. This presentation will provide insight into the real intent of EVM while exploring some of the myths. EVM is a government requirement for projects and contracts that reach certain thresholds. The presentation will focus on the what, when, where, and why of EVM. We'll also explore who the real audience for EVM is and what benefits can be derived from it, and discuss lessons learned and best practices. Finally, we'll discuss NASA's implementation approach and the steps the agency has taken to improve EVM implementation over the past few years.


29_ The Undoing Project: A Year That Changed My Mind About Cost Estimation Authors: Susan Bertsch

Abstract: With a nod to Michael Lewis, this paper shamelessly co-opts the title of his latest bestseller to frame a rethinking of cost estimation in the spirit of the classic works of Kahneman and Tversky.

Since the early 2000s, NASA has worked diligently to improve its cost estimation capability. Key among these initiatives has been a dedicated effort toward data gathering and normalization, via CADRes and the ONCE database. This has vastly improved our cost estimates and our ability to provide probabilistic assessments to our project and program stakeholders, viewpoints seemingly free of irrational biases.

However, as any long-suffering Rockets fan knows, statistical modeling is not all that there is to a winning formula. Moreover, the aerospace world is rapidly evolving to include a number of different service providers and new technologies whose technical and performance data lie well outside the historical dataset. This is challenging much of what we know as well as our ability to provide meaningful cost analysis.

This paper examines several recent projects that have demanded “outside of the box” thinking and approaches to cost estimating. While not entirely changing the author’s mind on the subject, this work has nevertheless provided new insights into cost estimation for both traditional and non-traditional projects.

31_The Silent S in NICM - NICM Schedule Capabilities Authors: Joe Mrozinski

Abstract: The NASA Instrument Cost Model (NICM) estimates cost, but it also estimates schedule. This capability was introduced in January 2012 as part of NICM V. The schedule estimating relationships (SERs) have remained constant since that time, but a major update to this capability will be included in the release of NICM VIII and will be presented today. The new relationship takes advantage of many new data points, allowing different families of equations to address different mission types and destinations.

32_ Programmatic Analysis Capability: Five Essential Development Activities Authors: Steve Wilson

Abstract: The Strategic Business and Partnerships Office at JSC’s Exploration Integration and Science Directorate (EISD), performing cost, schedule, and risk programmatic analysis work for most of NASA’s current human space flight programs, has devised a framework for sustaining its capability as the exploration landscape changes and enhancing its analytical power as new challenges arise. Its continuing RD&S (Research, Development, and Sustainability) campaign entails five indispensable, interdependent activities. The content of this paper will adhere to the following outline of these activities and discuss examples of each:

1. Experience Capture of analytical work from both past and present programs and projects.


a. Example: Our office’s Schedule Analysis Study (SAS), a 16+ hour series of interviews tantamount to a survey of schedule analysis methods from Orion, ISS, European Service Module, Commercial Crew Program, a JSC space suit glove project, and other human space flight efforts.

2. Best Practice and Solution Space determination of what we do well in our realm, a distillation of the Experience Capture activity that establishes time-tested strategies for successful programmatic analysis.

a. Example: An exposure to our office’s growing best practice compendium, including cost/schedule/risk/supporting process guidelines.

3. Training Program creation, the structure, content, and philosophy of which extend the Best Practice and Solution Space, intended to be promulgated throughout our organization and instilled in our junior analysts from the start of their careers.

a. Example: A trace from truths inherent in the SAS (see above) and relevant programmatic handbooks (e.g. Risk Management, PP&C, Schedule, and Cost Estimating) to our Training Program framework.

4. Gap Analysis of analytical areas our organization can improve upon, an inversion of our Best Practice and Solution Space.

a. Example: Implications from the SAS and other references, teased out by our Programmatic Analysis Taxonomy, that highlight and prioritize areas of desired evolution.

5. Research, the actualized effort to expand our capability (and otherwise address our Gap Analysis) through new data, methods, and analytical frameworks.

a. Example: Our current Orion schedule data capture, the first of its type in human space flight to track component-level IMS detail over time.

b. Example: The new ISS data capture, led by Susan Bertsch and Eric Plumer.

c. Example: Our current strategy to create Bayesian- and SME-driven cost estimating models in preparation for upcoming exploration developments (like the Deep Space Transport (DST) spacecraft that will transfer humans to Mars).

d. Example: New analysis schedule creation and SRA methods that support the temporal characterization of HEO’s Human Research Program’s (HRP) path to risk reduction.


33_Integrating the Cost, Schedule and Risk processes in the JCL analysis Authors: Barney Roberts, Michael Copeland, Richard Greathouse, and Wallace Willard

Abstract: The authors have been the analysis team for several NASA programs and projects for more than 12 years. Over that time, the exigencies of integrating a complex analysis within a constrained time period, along with the need to examine many options to determine sensitivities and test risk remediation recommendations, have driven the team to develop a very efficient and effective analysis process. The challenge was exacerbated by changes in the various data drops and by probing questions from the SRB requiring quick updates. This led to several innovations for configuration control of the input and analysis data. The continuity of the team over such a long period allowed for the testing and continued improvement of the process. With this paper, the authors wish to share this approach with the community, in the hope that it will make your job easier, and welcome any recommendations for improvement.

An overview of the process is shown in the figure below.

Figure 1. The Integrated Cost and Schedule Analysis Process

Details of the process will be presented, along with several Excel-based interface tools that the team developed to facilitate it. The team also developed post-processing tools that supported the configuration control process and enabled quick turnaround on questions, as well as rapid report updates using hyperlinked images. In the beginning, executing many options and controlling the data was a nightmare: questions were often asked about how case A compared to case B or C, or a new case would be requested, with much vacillating back and forth. The authors finally settled on a specific approach that has served them well. The configuration control approach is discussed in this paper.

34_Using Business Intelligence to Enhance JCL Analysis Authors: Dan Friedrich and Darren Elliott

Abstract: Business Intelligence (BI) is a fast-moving and growing field that focuses on analyzing and intuitively visualizing data from multiple sources to enable informed decision making as well as to support problem identification and resolution. This presentation discusses how BI analytics can apply to our programmatic work products, like a JCL analysis, to provide an enriched environment of model exploration, driver indication, scenario analysis, and result communication. It will showcase how available COTS analytic modeling and dashboard tools, like Microsoft Power BI and Tableau, can be used to analyze JCL models and results to generate in minutes effective analysis and visualizations that would normally take hours or days. This environment provides new insights into model results that are not typically provided in the canned reports that our cost, schedule, and risk tools generate (and we will present some never-before-seen visualizations). By incorporating BI techniques into our overall analysis framework, analysts can harvest the rich underlying data environment to quickly generate analytics that identify and intuitively communicate key insights, bound only by our imagination.

35_How to Assess a Joint Confidence Level (JCL) Model Authors: Brian Berry

Abstract: How do you know if your Joint Confidence Level (JCL) model, inputs, and outputs are sound? This presentation is about how to analyze a JCL model required under NASA Procedural Requirements (NPR) 7120.5E. The key to a JCL model is the underlying schedule structure and the underlying cost, risk, and uncertainty inputs. This briefing steps through the logical steps an analyst uses for JCL model assessment, including:

• Identify and track model and input changes over time

• Validate calculations

• Evaluate Active Risks

• Evaluate deterministic critical path and probabilistic critical path and tasks

• Quickly access/understand model results

This presentation uses NASA JCL modeling lessons learned and model outputs to provide practical, real-world guidance on accomplishing these critical steps in developing credible JCL models.

36_ Agency Schedule Initiative Overview

Authors: Michele T. King and Robin K. Smith

Abstract: The Schedule Initiative (SI) is an Agency-wide initiative aimed at strengthening and growing the scheduling and schedule analysis/assessment capabilities across the Agency. Its primary goals are to ensure Agency policy and guidance documents reflect consistent information on how to meet scheduling and schedule assessment requirements; to reinforce existing and establish new NASA-specific best practices; to strengthen the scheduling capability through knowledge sharing of methodologies/tools and training of personnel; to provide reach-back support for programs and projects with respect to scheduling and schedule assessment expertise; and to work with other programmatic disciplines to ensure an integrated approach to advancing the scheduling and schedule assessment capabilities. The Scheduling Community of Practice (SCoPe), with membership representation across all NASA Centers, is a mechanism used to help fulfill the SI goals. This presentation will provide a summary of the Schedule Initiative efforts to date, including an overview of the SI Road Show, a status on the Schedule Management Handbook revision, an update on integrated programmatic endeavors, and a summary of SCoPe actions.


37_ Concurrently Verifying and Validating the Critical Path and Margin Allocation Using Probabilistic Analysis Authors: Michele T. King and Robin K. Smith

Abstract: Government and industry space communities generally use scheduling software tools that can automatically determine the critical path and verify its calculation; however, these tools do not generally take risks and uncertainties into consideration when calculating the critical path. In addition, schedule margin is often allocated throughout the schedule without consideration of when or where these specific risks and uncertainties may occur, or how they might affect the project’s critical path. Thus, the issue associated with not maintaining a risk-informed schedule is multifaceted: the project may not be managing to the most likely critical path and the allocation of margin may not be appropriate to resolve or mitigate the risks and uncertainties most likely to cause schedule delays. This presentation will explore a process in which probabilistic schedule risk analysis can be performed to provide an understanding of the potential individual and cumulative impacts of uncertainties and risks to various paths in the schedule, thereby identifying the most likely paths to be critical, as well as to inform the establishment, allocation, and management of margin throughout the project life cycle.
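The contrast the abstract draws between a deterministic critical path and a risk-informed one can be shown with a toy two-path network (all durations invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000

# Toy two-path network (hypothetical durations, in days):
#   Path A: task1 -> task2     Path B: task3
# Deterministically (using the modes), Path A (30 + 25 = 55) beats Path B (50),
# so a CPM tool would call A critical. Uncertainty can tell a different story.
t1 = rng.triangular(25, 30, 40, N)
t2 = rng.triangular(20, 25, 35, N)
t3 = rng.triangular(40, 50, 90, N)   # Path B has a long risk tail

path_a = t1 + t2
finish = np.maximum(path_a, t3)      # project finishes when both paths finish

crit_b = np.mean(t3 >= path_a)       # criticality index of Path B
print(f"P(Path B critical) = {crit_b:.0%}, P70 finish = {np.percentile(finish, 70):.0f} days")
```

Here the nominally non-critical path drives the finish date in a large share of the trials, which is exactly the argument for allocating margin where the probabilistic analysis, not the deterministic network, says the risk lives.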

39_ Data Unification at Goddard Space Flight Center Authors: Cabin Samuels, Jeff Brown

Abstract: Parametric cost estimation of space missions and instruments incorporating new technology with limited heritage is difficult and can be further constrained by data insufficiency. The Cost Estimating, Modeling, and Analysis (CEMA) Office at Goddard Space Flight Center (GSFC) provides parametric cost estimating support to GSFC flight project concept formulation studies and proposals. CEMA engineering cost analysts utilize the parametric tools outlined in the NASA Cost Estimating Handbook – some of which can be quite detailed and used to estimate hardware at the subsystem and component level.

The One NASA Cost Engineering (ONCE) database and Cost Analysis Data Requirement (CADRe) documents provide cost, schedule, and technical information which can be leveraged to drive high-level cost models and spacecraft estimation. However, the granularity of the data captured in ONCE can be insufficient to facilitate space science instrument estimation beyond the system level. CADRe data is populated by NASA flight programs, but projects may capture more detailed data which goes unrecorded. Additionally, other data sources such as contracts, engineering documents, and financial documents could benefit parametric cost modelers. In an effort to supplement the project data gathered by CADRes in ONCE, the CEMA Office has begun searching for and soliciting cost, schedule, technical, and programmatic data from a wide variety of sources, including but not limited to:

• Contractor Financial Management (533) Reports

• GSFC Office of the Chief Financial Officer

o Regional Finance Office

o Program Analysis Office


• GSFC Flight Projects Division

• GSFC Office of the Chief Knowledge Officer

• GSFC Applied Engineering and Technology Directorate

• GSFC New Opportunities Office

• GSFC Procurement Operations Division

• GSFC Safety & Mission Assurance Directorate

The aim of this effort is to exhaustively search for data sources which may benefit the cost estimation efforts of the CEMA Office. These sources may exist in relative obscurity and/or be difficult to obtain, but they would be an invaluable resource when pulled together in a single location. This presentation will document our data gathering processes, struggles, and findings, as well as discuss the potential utility of each source to cost estimating. We will also present our initial approach to creating a relational database to house the data and effectively capture and link the various data sources and types. Using one or two recent GSFC missions as a “proof of concept”, the analysis will include an anecdotal discussion of the findings and what insights may be found in the data. Finally, we'll discuss the current state of the project as well as future plans and goals.

40_ SEER for Space Systems Prototype Authors: Joe Hamaker

Abstract: This document provides a software description that serves Galorath as an internal requirements document for the design and implementation of SEER for Space Systems.

SEER for Space Systems is a new SEER family model under development by Galorath Inc. It is intended for estimating the cost and schedule of automated spacecraft flight projects, including the spacecraft bus and individual instruments as well as the “wrap costs” of other NASA WBS elements. This paper is focused on an initial Excel prototype implementation of SEER for Space Systems; the final model will be developed in a SEER GUI.

The word “systems” is a very deliberate part of the name of this model. SEER for Space Systems will estimate the cost of integrated space systems, specifically:

• Spacecraft buses, either the total integrated bus or by major subsystems

• Individual spacecraft instruments and/or instrument suites (i.e. the entire multi-instrument complement on a spacecraft mission)

• The other 9 elements of the 11-element NASA WBS (Exhibit 1)

• Mission Operations and Data Analysis (MO&DA)

Exhibit 1: The NASA WBS

SEER for Space Systems complements (does not compete with) other SEER models historically used to estimate the cost of space projects. SEER for Space Systems can be used where a more detailed hardware/software estimate is either not feasible (due to lack of information) or not needed/desired (i.e., where getting the cost at the 11-element NASA WBS level is “good enough” and lower-level details are not needed).

41_The Burden of Government Oversight Activities on Contractors Involved in Space Systems Acquisitions Authors: Samantha Brainard and Dr. Zoe Szajnfarber

Abstract: Government oversight-related monitoring activities exist to provide the government with the information it needs to evaluate the cost, schedule, and performance of programs. These activities, while necessary, add costs to a program. Stakeholders involved in space system acquisition debate the extent of these costs, with estimates ranging from 2-5% of a program’s cost to factors of 3-5 times the cost of commercially available alternative products. This range of estimates leads us to ask whether people are measuring poorly, measuring different phenomena, or both. With regard to measurement problems, many estimates of the extent of oversight’s burden stem from anecdotes or are based on memories. These measurements, as a result, can be biased toward overestimating the costs of oversight-related work. Additionally, previous measurements of the burden of oversight-related work implicitly use their own definitions of what constitutes oversight-related work. This makes it difficult to compare different measurements to each other.

In order to determine the real-time burden of government oversight activities at the contractor level, we conducted a 6-month time allocation study of engineers working for a major US aerospace company using the experience/work sampling method. Using a short, non-invasive, multi-question survey, we were able to capture what engineers were doing at the moment they received a survey prompt, mitigating recall bias. Through the combination of answers to multiple survey questions, we were able to tease out the time spent on oversight-related activities. This data provides an empirically valid estimate of the time spent on government oversight-related work using more granular, traditional categories of work, allowing us to reconcile differences between previously reported estimates of oversight. Moreover, this work demonstrated the usefulness of this method for measuring the burden of government oversight-related work.

Measuring the burden of oversight-related work using traditional categories of activities, however, did not fully capture all of the ways that oversight can influence work. In order to better understand the rich nuances associated with oversight compliance, we conducted a qualitative, interview-based study with contractor engineers to understand the ways that oversight influences work. This study identified those activities that contractor engineers considered burdensome oversight-related work and connected those activities to types of added program costs.

Using insights from our qualitative work, we propose a new time allocation study of oversight-related work at the contractor level. This study would ask more in-depth questions about the activities performed in order to capture the types of burdens identified in our qualitative work. Moreover, it would enable us to use this method to determine the added costs of oversight-related work across stakeholders involved in space systems acquisitions.

43_The Signal and the Noise in Cost Estimating Authors: Christian Smart

Abstract: We seek to extract signal and eliminate noise when building models with historical data to predict future costs. However, there are many pitfalls in this process that can lead you to confuse signal with noise. Overfitting is a common problem that interferes with the attempt to develop accurate predictions. There is a tendency to want to use all the available data for modeling and to include many parameters. This is appealing since it allows you to account for many different factors in a model, which gives you the feeling that your predictions will be more accurate because you can account for a variety of influences: scope, technical parameters, programmatic parameters, heritage, etc. The addition of these parameters is also appealing because it makes the model easier to sell to decision makers, and it makes the model more appealing to consumers of canned models. The more inputs you include, the more the end users feel that they have control of the prediction, and they often feel more comfortable with such a model.

However, actual data, like life, is often messy. The final outcome of an event, such as the actual cost of a historic program, is subject to influences that will repeat themselves in a foreseeable way in the future, but it is also subject to a great deal of noise that will not repeat itself in a predictable manner. For example, the Space Shuttle Challenger disaster in the 1980s increased the cost of several satellite programs, since some programs had to find other means of launch to space. Occasionally labor strikes occur at prime contractor facilities. And some of the noise in the data is pure error: reported actuals are sometimes wrong, either at the total level or in lower-level elements that are mis-allocated. In collecting historical costs, estimators are often like forensic investigators, trying to solve a mystery and put together a story that makes sense. This involves much guesswork and requires assumptions that lead to some amount of distortion of the true historical cost. A recent paper discusses some of the challenges of collecting and validating contractor cost data. All these events are embedded in the cost of these programs, but they are not a part of the cost that can be accurately forecasted going forward.

Despite this, there is a tendency to try to explain all the variation in historical data, including the noise. This leads to too many independent variables. The famous mathematician, physicist, and computer scientist John von Neumann once said, "with four parameters I can fit an elephant, and with five I can make him wiggle his trunk."

This also leads to trying too many different types of equations and other approaches to estimating, all of which reduces the number of degrees of freedom. Small data sets only exacerbate this issue. In small data sets, you can find patterns where none reliably exist.

We present three solutions for avoiding overfitting: keeping the number of independent variables in your models small relative to the number of data points, splitting the data set into training and validation subsets, and cross-validation.
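The second of these safeguards can be sketched in a few lines. The data, the one-variable model, and the 75/25 split below are our own invented illustration; the paper does not prescribe an implementation:

```python
# Illustrative sketch: guarding against overfitting with a train/validation
# split. Synthetic "historical program" data: one real cost driver plus noise.
import random

random.seed(1)

x = [float(i) for i in range(20)]
y = [2.0 + 3.0 * xi + random.gauss(0.0, 1.0) for xi in x]

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(xs, ys)) / sum(
        (xi - mx) ** 2 for xi in xs)
    return my - b * mx, b

def mse(a, b, xs, ys):
    """Mean squared prediction error of the fitted line."""
    return sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(xs, ys)) / len(xs)

# Hold out 25% of the points for validation instead of fitting on everything.
idx = list(range(len(x)))
random.shuffle(idx)
cut = int(0.75 * len(idx))
train, valid = idx[:cut], idx[cut:]

a, b = fit_line([x[i] for i in train], [y[i] for i in train])
val_error = mse(a, b, [x[i] for i in valid], [y[i] for i in valid])
```

A model that overfit the training noise would show a validation error well above the training error; here the single-driver model recovers a slope near the true 3.0.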

Another common problem with confusing the signal and the noise in cost estimating is normalization, which is necessary when making comparisons but can inject noise when used in modeling. We discuss this issue, and provide an example of how it can lead to a degradation in the quality of a model.

We also propose kernel smoothing and distribution fitting as ways to avoid overfitting errors when fitting distributions to small data sets.
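As a rough illustration of the kernel smoothing idea, a Gaussian kernel density estimate replaces a many-parameter fit with a single smoothing choice. The sample values and the Silverman bandwidth rule below are our own, not from the paper:

```python
# Minimal Gaussian kernel density sketch for a small sample.
import math

sample = [10.2, 11.5, 9.8, 13.1, 12.4, 10.9, 11.1]  # invented cost observations

def kde(x, data, bandwidth=None):
    """Gaussian kernel density estimate at x; bandwidth defaults to
    Silverman's rule of thumb (1.06 * sd * n^(-1/5))."""
    n = len(data)
    if bandwidth is None:
        mean = sum(data) / n
        sd = math.sqrt(sum((d - mean) ** 2 for d in data) / (n - 1))
        bandwidth = 1.06 * sd * n ** (-1 / 5)
    return sum(
        math.exp(-0.5 * ((x - d) / bandwidth) ** 2)
        / (bandwidth * math.sqrt(2 * math.pi))
        for d in data
    ) / n

density_near_center = kde(11.0, sample)
```

The estimate integrates to one by construction and borrows strength across the whole sample rather than chasing individual points.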

44_Cost Analysis Data Requirement (CADRe) Authors: Eric Plumer

Abstract: The presentation will provide an overview of the Cost Analysis Data Requirement (CADRe) as well as cover updates and progress of the CADRe initiative.

47_Estimating Planetary Instrument Schedule Authors: Cindy Fryer, Paul Guill, Monica Gorman

Abstract: Predicting schedule duration accurately is crucial for preventing schedule slips as well as the cost overruns that frequently follow. Missions to other planets (or their moons) have particularly low tolerance for schedule slips, because their success depends on meeting relatively narrow and infrequent launch windows. Because of this, they prioritize their schedule goals more highly than other missions, sometimes even accepting cost overruns rather than missing their launch windows. Models that do not take this into account may overestimate planetary schedule duration. However, maintaining instrument function over long interplanetary cruises requires high reliability and therefore stringent testing, so schedules cannot be arbitrarily compressed.

The combination of these opposing factors means that merely including a planetary/non-planetary indicator variable does not adequately account for planetary missions’ different approach to schedule planning. This presentation shows that planetary instrument schedule is best predicted using specialized models based on only planetary instrument data, using a different set of drivers from more general instrument schedule models. Additionally, it shows that the correlation between cost and schedule is much weaker among planetary instruments than among other instruments, which reduces the usefulness of schedule duration as a predictor of cost.

48_Data Science Applications to Scheduling Authors: Gideon Bass, Jung Byun, Eric Boulware

Abstract: As part of project management for any large project, a great deal of useful scheduling information is carefully collected. Typically, the analysis is then performed by human schedule experts using basic cost-estimation tools. However, this wealth of data also offers the possibility of applying advanced data science and analytics techniques. This type of analysis can extract useful information that may be overlooked by a human analyst. Here, we present work using machine learning to extract information from the schedule data for human exploration missions. We analyzed this data with a wide range of tools, including time-series analysis, visualization techniques, linear regression, association rule mining, k-means clustering, and Bayesian networks, and produced several interesting results. We identify clusters of high- and low-performance milestones, flag key groups of worrying milestones, and predict which milestones are likely to slip in the future.
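For readers unfamiliar with one of the techniques named above, a toy k-means clustering of milestone features might look like the following. The milestone data and the two features (slip days, remaining float) are invented, and the deterministic initialization is for reproducibility only:

```python
# Toy k-means (Lloyd's algorithm) separating "high" and "low" performance
# milestones by invented (slip days, remaining float) features.
milestones = [(2, 30), (1, 28), (3, 25), (15, 4), (18, 2), (14, 5), (0, 33)]

def kmeans(points, init, iters=20):
    """Plain Lloyd's algorithm with caller-supplied initial centers."""
    centers = list(init)
    k = len(centers)
    clusters = []
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest center (squared distance).
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # Recompute each center as the mean of its cluster.
        centers = [
            tuple(sum(vals) / len(cl) for vals in zip(*cl)) if cl else centers[j]
            for j, cl in enumerate(clusters)
        ]
    return centers, clusters

# Deterministic initial centers (one from each apparent group) for reproducibility;
# real applications would randomize and restart.
centers, clusters = kmeans(milestones, init=[milestones[0], milestones[3]])
```

On this well-separated toy data the algorithm recovers a low-slip/high-float cluster and a high-slip/low-float cluster.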

49_Techniques for Assessing a Project’s Cost and Schedule Performance Authors: Jonathan Drexler, Tom Parkey, Chris Blake

Abstract: Assessing project performance is now a critical role in our community. This is especially true as our community becomes more involved in independent assessments and Agency standing review boards. To this end, the authors will describe some tools and techniques used at our center to assess the performance of mature projects. The presentation will cover three main topics: conducting a performance-based Schedule Risk Analysis, EVM analysis and reporting on a separate project, and calculating and reporting Earned Schedule for both project examples. The presentation will start with an example of conducting a performance-based Schedule Risk Analysis using the Duration Ratio Method. Next, the authors will demonstrate an Excel-based EVM analysis tool used at GRC to provide project managers with insight into their project’s performance. The tool features a quick way to incorporate monthly status reports and to report key performance metrics and project trending using project EVM report formats not available in tools like Empower. Finally, the authors will describe the calculation and reporting of Earned Schedule for both examples and show how these results compare to the traditional approaches.
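The Earned Schedule calculation the authors mention follows a standard formula: the whole periods in which cumulative planned value has been earned, plus a linearly interpolated fraction of the next period. A minimal sketch, with invented monthly planned- and earned-value figures:

```python
# Invented cumulative planned value (PV) by month, and earned value (EV) at month 4.
pv = [10, 25, 45, 70, 90, 100]
ev_now = 52.0

def earned_schedule(pv_cum, ev):
    """ES = C + fraction of the next period earned, where C is the last whole
    period whose cumulative PV does not exceed EV (standard Earned Schedule)."""
    c = 0
    while c < len(pv_cum) and pv_cum[c] <= ev:
        c += 1
    if c == 0:
        return ev / pv_cum[0]
    if c == len(pv_cum):
        return float(c)
    return c + (ev - pv_cum[c - 1]) / (pv_cum[c] - pv_cum[c - 1])

es = earned_schedule(pv, ev_now)  # months of schedule actually earned
spi_t = es / 4                    # time-based SPI against 4 elapsed months
```

Here 52 of planned value earned by month 4 corresponds to 3.28 earned months, a time-based SPI of 0.82, signaling schedule slippage that cost-based SPI can mask late in a project.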

50_Modeling Schedule and Cost Stochastically Authors: Nick Lanham, Kevin Joy

Abstract: Over the past three years, Nick Lanham and Kevin Joy have developed numerous Cost Estimating Productivity Improvement Tools (CEPIT) and Cost and Schedule Estimating Suite (CaSES) tools while supporting the Naval Center for Cost Analysis (NCCA). These tools were designed to add significant capability to the cost estimating community and to provide a free method of efficiently implementing cost and schedule estimating in accordance with the latest Joint Agency Cost Schedule Risk and Uncertainty Handbook (JA CSRUH). In addition, these tools include dynamic time-phasing functions that can be used within any MS Excel-based cost model or workbook and include Weibull, Beta, Rayleigh, Uniform, and other custom distributions for increased modeling flexibility. The CaSES time-phasing functions also allow cost analysts to use any MS Excel-based risk and uncertainty software (e.g., Crystal Ball, @Risk) and do not require the purchase of any other third-party software (e.g., MS Project, JACS, ACEIT, Polaris). While all of these tools provide substantial capability, the CaSES time-phasing functions were designed to provide the cost estimating community with a free alternative if needed.
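Weibull-based time-phasing of the kind described can be sketched simply: spread a total cost across periods in proportion to increments of a Weibull CDF. This is our own minimal illustration, not the CaSES implementation, and the shape/scale values are arbitrary:

```python
# Spread a total cost over n periods using increments of a Weibull CDF
# evaluated over the normalized effort interval (0, 1].
import math

def weibull_cdf(t, shape, scale):
    return 1.0 - math.exp(-((t / scale) ** shape))

def phase_cost(total, n_periods, shape=2.0, scale=0.6):
    """Allocate `total` across n_periods; increments are normalized so the
    phased amounts sum exactly to the total."""
    cdf = [weibull_cdf((i + 1) / n_periods, shape, scale)
           for i in range(n_periods)]
    top = cdf[-1]
    out, prev = [], 0.0
    for c in cdf:
        out.append(total * (c - prev) / top)
        prev = c
    return out

phased = phase_cost(1000.0, 5)  # e.g., $1000K across 5 fiscal periods
```

With a shape parameter above 1 the profile ramps up before tapering, the familiar bell-like spend curve; shape and scale would be calibrated to program data in practice.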

51_ Schedule Optimization Analysis: Seeing beyond the results Authors: James Taylor, Jim Maury, Ed Smetak

Abstract: The process of performing a Schedule Risk Analysis (SRA) is relatively new when compared to the Independent Cost Estimate (ICE) process. While the process for structuring and executing an SRA is fairly well understood, we are continuously learning how to better utilize SRAs as part of a Program Manager’s (PM) toolbox. Currently, SRAs are used to understand how much a schedule is impacted by risks and/or uncertainty simply by reviewing the analysis results and potential schedule drivers. Usually, once analysis results are gathered, an assessment is deemed “completed”, but what if there were potential for more? What if there were a way to utilize the SRA to improve a program’s schedule by optimizing workflow?

SRAs differ greatly from ICEs not just in the assessment process, but in the interpretation of results. The reason SRAs and ICEs are significantly different comes down to one simple fact: task logic. For an ICE, it is understood that if one applies a 10% factor to a cost element, it will adjust by that factor accordingly. Apply the same factor to a schedule element, however, and the results can vary widely. The reason is that as risk and/or uncertainty factors are applied to schedule elements, the task logic can carry a rippling effect to other areas of the schedule. This rippling effect has the potential for exponential impacts, not linear ones as in an ICE, because of how logic within all schedules flows from one element to the next. This paper presents a real-life business case of a NASA effort in which an SRA was utilized to help optimize a schedule to overcome external constraints.
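The rippling effect is easy to see in a toy forward-pass critical-path computation. The four-task network below is invented: growing a non-critical task by 10% leaves the finish date unchanged, while the same factor on a critical task shifts it fully:

```python
# Toy forward-pass critical-path computation on an invented network.
tasks = {
    "A": (10, []),          # (duration, predecessors)
    "B": (20, ["A"]),       # critical branch
    "C": (5,  ["A"]),       # parallel branch with float
    "D": (8,  ["B", "C"]),
}

def finish(network):
    """Project finish = max early finish over all tasks (recursive forward pass)."""
    memo = {}
    def early_finish(name):
        if name not in memo:
            dur, preds = network[name]
            memo[name] = dur + max((early_finish(p) for p in preds), default=0)
        return memo[name]
    return max(early_finish(t) for t in network)

base = finish(tasks)  # critical path A-B-D: 10 + 20 + 8

# Grow the non-critical task C by 10%: its float absorbs the growth.
bump_c = dict(tasks)
bump_c["C"] = (5 * 1.1, ["A"])

# Grow the critical task B by 10%: the finish date ripples by the full amount.
bump_b = dict(tasks)
bump_b["B"] = (20 * 1.1, ["A"])
```

A uniform 10% cost factor moves an ICE by exactly 10%; here the same factor moves the schedule by anywhere from zero to the full task growth, depending on where it lands in the logic.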

In 2017, Predictive Analytics (PA) was tasked with helping an aerospace company develop a Joint Confidence Level (JCL) assessment for a pre-KDP-A review. This effort was a blend of a review and a proposal process: while the company was competing for the work, we developed products typical of a program going through KDP-A. Part of the requirement was to deliver a JCL with fully implemented risks and uncertainty distributions.

At the start of the effort, we were given an assumed start date of October 2017 (FY2018) with a mission requirement of flight by CY2021. As PA started to build cost/schedule/risk profiles, a real-life constraint materialized in the form of a FY2018 budget cut to the NASA directorate responsible for this effort. The initial JCL was constructed ignoring this constraint, assuming that the program would receive funding as needed. After the initial results were compiled, we asked the question, “How can we achieve mission success in CY2021 while facing a budget cut that would impact funding profiles?”

The team went back to the drawing board to realistically assess whether, if a small amount of technology development funds were utilized earlier than FY2018, portions of the program could be shifted to FY2019 while still achieving mission success in CY2021. Changes to the schedule logic and funding profile were made to account for shifting a small portion of effort to the left and significant portions to the right. Once the changes were incorporated and deemed realistic, we began compiling JCL results. Initial results showed a 0% chance of mission success in CY2021. We then examined the schedule drivers and critical path (CP) logic. As we did, certain anomalies came to light within the schedule logic that only appeared under risk/uncertainty. We noticed that certain review milestones, LOE tasks, and non-essential task logic were driving factors on the CP. As we reviewed these items, it was evident that the logic either did not reflect real-life workflow or could realistically be restructured to start earlier. One thing the team agreed upon is that no logic changes would be incorporated simply to produce an answer NASA wanted; the goal was to establish an executable plan.

Over a 4-day period, our team ran 400-500 analysis simulations, using the results to identify areas where the workflow could be realistically restructured and the schedule optimized. In the end, we utilized the SRA to construct a new, streamlined schedule with a realistic plan for meeting mission success in CY2021.

54_An Insight into being an Effective Scheduler in the Project Chain of Command Authors: Paul R. McMasters

Abstract: When schedulers are responsible for a multitude of functions throughout the agency, how does a scheduler stay focused on meeting the customer’s needs? The role of the scheduler is primarily filled by contractors throughout the agency. The scheduler needs to have a voice that is trusted by civil service management and must feel free to speak up to make sure proper NASA scheduling practices are being applied.

Knowing the schedule and its inherent logic flow is crucial to your success as a scheduler. With programs and projects crossing a multitude of management levels, knowing the scheduler’s role and function in keeping a project on time and on schedule is the key to success. Understanding the analysis tools available to the scheduler affects health checks, risk performance, schedule summaries, critical path analysis, and comparisons of the baseline versus current status.

With all these tools available to you, how do you interpret all the data? How do you choose meaningful metrics that support your recommendations and help management make key decisions? In this presentation we will focus on how to use the tools and data collection to your advantage and how you can apply these tools to your own project. Tools to be discussed include task stoplight criteria; easy look-ahead schedules to keep the team focused on the tasks at hand; and linking financial data (i.e., actual hours) to schedules, as is done in manufacturing. In our discussion we will focus on techniques for determining what it takes to status a schedule, how to effectively run a schedule status meeting, how to go beyond data gathering to ask intelligent questions without being afraid to be wrong, and the use of historical data such as durations, logic, and resources.

We will explore the NASA Ares I-X project (Upper Stage build), NASA Glenn ISS research experiments, and NASA Glenn manufacturing division scheduling as illustrations of ways to project confidence in your schedule data and your experience in guiding the team to a successful outcome. In this role you will be a more effective scheduler and provide support that management will learn to rely on.

55_Multidimensional Risk Analysis (MRISK) Authors: Raymond McCollum, Douglas Brown, Sarah Beth O'Shea, William Reith, Jennifer Rabulan, Paul Terwilliger

Abstract: Risk assessment is essential for informed decision making and effective risk management, particularly in project management. Historically, risk has been assessed by defining risk as probability (the likelihood of an event occurring) times consequence (the impact to the program if the event occurs). However, traditional approaches to estimating risk do not account for the multidimensional properties of consequence, which can include a schedule dimension, a safety dimension, etc. Attempts to adjust for this multidimensional nature have included taking a weighted average of the consequence scores (the Averaging method), using the highest consequence score (the Maximization method), or taking the Euclidean distance to combine the consequence dimensions (the Euclidean method). There are drawbacks to all three of these techniques. The Averaging method does not account for the covariance of the dimensions, and it has disproportionate tendencies toward the middle of the scale, possibly deflating higher risks and inflating lower ones. The Maximization method, on the other hand, assumes absolute correlation between all dimensions, creating a disproportionate tendency toward a single extreme and resulting in excessively conservative measures. The Euclidean method does apply well in the multidimensional space, but it assumes all dimensions occupy the same plane. These three methods can work reasonably well for assessments requiring only one or two dimensions, but risk management has evolved to require multiple dimensions over the last several decades. Specifically, NASA now includes cost, schedule, technical, and, most recently, uncertainty in a standard risk assessment. Additional factors that projects have used include current capability, future capability, safety, and mission complexity. Risk is steadily evolving to be truly multidimensional with varying degrees of correlation. The three previously described methods do not perform well under these requirements and continue to become less desirable as the number of consequence dimensions grows.

This presentation will describe Multidimensional Risk Analysis (MRISK), a multivariate risk metric that accounts for the correlation of the consequence dimensions. It uses the same data used in the previously described legacy methods, but it allows the user to understand the risks from a multidimensional perspective. MRISK is highly scalable in that it can adequately account for any number of consequence dimensions. Additionally, MRISK allows for a single metric without artificially forcing values to either an extreme value or the center.
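To make the contrast concrete, here is an illustrative two-dimensional comparison of the three legacy methods with a Mahalanobis-style distance that reflects correlation between dimensions. This is our own sketch, not the published MRISK formulation; the scores and the correlation value are invented:

```python
# Legacy consequence-combination methods vs. a correlation-aware distance.
import math

def averaging(c):
    """Equal-weight average of consequence scores."""
    return sum(c) / len(c)

def maximization(c):
    """Highest single consequence score."""
    return max(c)

def euclidean(c):
    """Euclidean length of the consequence vector."""
    return math.sqrt(sum(ci ** 2 for ci in c))

def correlated_distance(c, rho):
    """Mahalanobis-style length of a 2-D consequence vector under the
    correlation matrix [[1, rho], [rho, 1]] (unit variances assumed)."""
    x, y = c
    return math.sqrt((x * x - 2 * rho * x * y + y * y) / (1.0 - rho ** 2))

cons = [3.0, 4.0]  # e.g., cost score 3, schedule score 4 on a 1-5 scale
legacy = (averaging(cons), maximization(cons), euclidean(cons))
combined = correlated_distance(cons, rho=0.5)  # correlation-aware metric
```

With zero correlation the correlation-aware metric reduces to the Euclidean method; with positive correlation it discounts consequence vectors aligned with that correlation, which is exactly the adjustment the legacy methods cannot make.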

57_Bridging the Schedule Gap-How NASA Glenn and its Contractor Team Set the Standard for ISS Research Projects Authors: Jamie L. Nezbeth, Paul R. McMasters

Abstract: At NASA Glenn we have bridged the scheduling problems of building space flight hardware through a synergy of cooperative and collaborative efforts between the prime contractor and NASA Glenn’s ISS and Human Health Office. We will show how creating a template based on the NASA ISS Research Lifecycle has made a difference in the life of a project at ZIN Technologies and NASA Glenn.

Using the NASA Glenn ISS and Human Health Office as an example, we will show the development and use of the ISS Research Life Cycle schedule template developed by ZIN. Previously, there was no standard format for schedules under the program office, so the template was developed through a collaboration between ZIN Technologies and NASA Glenn; its origin was NASA Procedural Requirements (NPR) 7120.5.

We will show how the template has worked to make each phase of the project successful and how it highlights issues early in the project lifecycle, before they arise. Before the template existed, no structure was followed: Project Managers were scheduling and reporting scattered information, or had no schedule at all. Following the template created from NPR 7120.5 has added more functionality and insight to the projects themselves, and has ultimately yielded greater success.

We will show how the schedule template fulfills the contract deliverables, along with the analytic approach and metrics that yield meaningful schedules, analyses, and reports that provide accurate information on each individual project and promote critical path analysis. The template was built around the flow of the hardware/software builds and their successful completion, not just documentation for gateway reviews.

Our presentation will show how the development of the template has made project status reporting more in-depth and accurate. Slack erosion, one-page summary schedules, the Schedule Health Check Wizard (which includes health checks), and the SASR (Schedule Analyst Summary Report) are some of the reports ZIN Technologies and NASA Glenn schedulers use to report project status.

The template provides a level of data that NASA Glenn uses to build higher-level schedules and reports that support decision making, building on a process that is successful from the beginning. It also supports NASA Headquarters, which monitors the schedules from all the centers, in making decisions about the risk, cost, and budget of each project.

59_Technology Development Cost and Schedule Modeling Authors: Chuck Alexander

Abstract: A tangible need exists in the scientific, technology, and financial communities for economic forecast models that improve estimating for new or early life-cycle technology development. Industry models, research, technology datasets, modeling approaches, and key predictor variables are first examined. Analysis is then presented that leverages a robust industry project dataset and applies technology- and system-related parameters to deliver several high-performing parametric cost and schedule models.

60_So Your Project Is Falling Behind? A Polaris Schedule Risk Analysis Case Study Authors: Laura Emerick Krepel

Abstract: For months the contractor had been missing milestones and underperforming; however, they still insisted they could meet their original project deadline by running items in parallel and overlapping work. The $100M overseas laboratory construction project was in jeopardy. As trusted agents to the government owner, Booz Allen was asked to assist. Could the project complete on time with corrective actions? If not, what was a realistic finish date? Booz Allen Hamilton completed a schedule assessment in 2012 that included logic and critical reviews, an on-site visit, a performance assessment, and a risk review. This work culminated in a Monte Carlo schedule risk analysis showing that the risk-adjusted schedule had a 6% chance of meeting the original project deadline. The schedule risk analysis was completed using Polaris, a Booz Allen Hamilton-developed tool for joint cost and schedule risk analysis.

In this case study, we’ll review the steps of the schedule assessment, share the findings, and conclude with indicators that your project may need a schedule assessment. Attendees will leave with a better understanding of GAO’s Scheduling Best Practice 8: Conducting a Schedule Risk Analysis.
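The core of a Monte Carlo schedule risk analysis can be sketched in a few lines. This is not the Polaris implementation; the sequential activity chain and triangular duration estimates are invented for illustration:

```python
# Minimal Monte Carlo schedule risk sketch: three sequential activities with
# triangular duration uncertainty, and the probability of meeting a deadline.
import random

random.seed(42)

# (optimistic, most likely, pessimistic) durations in weeks
activities = [(4, 6, 10), (8, 10, 16), (5, 7, 12)]
DEADLINE = 26

def simulate(n=20_000):
    """Fraction of trials in which the total duration meets the deadline."""
    hits = 0
    for _ in range(n):
        total = sum(random.triangular(lo, hi, mode)
                    for lo, mode, hi in activities)
        hits += total <= DEADLINE
    return hits / n

p_on_time = simulate()
```

A real assessment replaces the simple sum with the full network logic (and correlated risks), which is where low confidence figures like the 6% above come from even when point estimates look achievable.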

61_ Validation of Allocation of Cost by Top Line (ACTL) 25% Rule Authors: Tim Gehringer and Dr. Will Jarvis

Abstract: A need was recognized by the Heliophysics Line of Business at NASA/GSFC to provide guidance to mission Principal Investigators (PIs) early in mission formulation to determine a “first cut” and “reasonable” Work Breakdown Structure (WBS) allocation for a fixed-cost-cap competed space flight mission Announcement of Opportunity (AO). The major costs of any unmanned free-flying space mission are the spacecraft bus and instrument(s). From a couple of decades of experience with proposals and building mission concepts, it appeared that the bus and the instrument complement of a flight mission were roughly equivalent in cost. Further, it appeared that the two costs together generally approached approximately half of the cost cap. The additional significant requirement for a NASA AO is a minimum 25% reserve for Phase B-D development. The remaining 25% of the cost cap is therefore allocated for the remaining costs associated with the development effort. A hypothesis was generated from this observation that a NASA mission AO has a natural quarter partition of costs within the total mission cost: 25% to the science payload (instrument(s)), 25% to the production of the spacecraft bus, 25% to a general reserve budget that can be used anywhere in the project that it is needed, and 25% to the supporting “non-hardware” functions of the mission development. This support function includes all other mission costs such as management, travel, systems engineering, science, Safety and Mission Assurance, ground system development, flight operations, and systems I&T. The purpose of this paper is to use actual flight mission historical cost data to determine whether this 25% rule is an accurate account of cost allocation within a scientific flight mission. To accomplish this investigation, confidence and prediction limits for allocations by WBS element are determined from a sample of NASA missions from the CADRe database. For every sampled mission and for each WBS element, we compare the point estimate of the population mean to the distribution based on the 25% rule. Confidence intervals for the mean and prediction intervals for the distribution of allocation percentages are provided. A method for correcting for small-sample bias is also provided.
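The confidence-interval step can be illustrated with a small-sample t-interval on the mean allocation percentage. The percentages below are invented, not CADRe data, and the t critical value for 7 degrees of freedom is hardcoded:

```python
# t-interval on the mean spacecraft-bus allocation (% of cost cap) from an
# invented 8-mission sample, then a check against the 25% rule.
import math
import statistics

bus_share = [27.0, 22.5, 24.0, 28.5, 23.0, 26.0, 25.5, 24.5]  # % of cost cap

def t_interval(data, t_crit):
    """Two-sided confidence interval for the mean using the sample stdev."""
    n = len(data)
    mean = statistics.mean(data)
    half = t_crit * statistics.stdev(data) / math.sqrt(n)
    return mean - half, mean + half

lo, hi = t_interval(bus_share, t_crit=2.365)  # t(0.975, df=7)
covers_25 = lo <= 25.0 <= hi  # is the 25% rule consistent with this sample?
```

If the hypothesized 25% falls inside the interval, the sample does not contradict the rule for that WBS element; the paper's actual test additionally builds prediction intervals and corrects for small-sample bias.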

62_‘All in One’ Programmatic Analysis? Assessing How Different Abstractions of the Same Variables Affect Analysis Results Authors: Zachary Pirtle, Jay Odenbaugh, Zoe Szajnfarber

Abstract: Project managers depend on strong programmatic (cost, schedule, risk) analysis functions to help them manage a project. While there are some requirements from NASA policy on what types of programmatic analysis must be done, there is significant discretion for a project to choose what analysis approaches it implements. Our paper reflects on the nature of programmatic analysis so as to give advice to program managers as they choose among different program analysis approaches. Drawing on literature that examines the nature of modeling (Weisberg 2012, Morgan 2012), we will provide an overview of how programmatic analysis approaches abstract and represent complex development projects.

We especially seek to frame decisions between different models that assess the same variables. For example, a reading of NASA Procedural Requirements (NPR) 7120.5E would imply that a parametric model should be used early in the development life cycle at Key Decision Point (KDP) B, a Joint Cost and Schedule Confidence Level (JCL) model should be performed at KDP-C, and Earned Value Management (EVM) should be performed after KDP-C as a project moves into implementation. Parametric models typically include cost or schedule as a modeling variable, but not both at the same time. Both a JCL model and EVM include cost, schedule, and risk in their analysis, but they include them in very different ways. Should the inclusion of cost and schedule together in JCL or EVM be seen as an improvement over parametric approaches? And what is lost in moving from JCL to EVM, given that both include all three programmatic variables?

This paper will provide a breakdown of how different types of programmatic analysis approaches idealize and represent development projects. We will advise managers to reflect carefully on their analysis goals, because two modeling approaches that cover the same variables do not necessarily provide the same insight into a system. We will explore what it means for two models to examine the same phenomena through a technical and qualitative contrast of two specific modeling approaches that both include cost, schedule, and risk.

65_Mission Operations Cost Estimation Tool (MOCET) FY17 Update Authors: Marc Hayhurst, Brian Wood

Abstract: The Mission Operations Cost Estimation Tool (MOCET) team will present a summary of developments in FY17, including the version 1.2 update and the external release on software.nasa.gov. The primary focus of the version 1.2 update is the incorporation of newly accumulated data from the planetary missions Juno, MAVEN, and Dawn. Release of the tool on software.nasa.gov also makes MOCET available to the wider proposer community, including contractors and universities. Additionally, an overview of the state of the user community will also be discussed. MOCET is a model developed by The Aerospace Corporation in partnership with NASA’s Science Office for Mission Assessments (SOMA) that provides the capability to generate cost estimates for the operational, or Phase E, portion of NASA science missions. MOCET comprises Cost Estimating Relationships (CERs) derived from historical data for Planetary, Earth Science, and Explorer missions. The resulting CERs and accompanying documentation have been implemented as a standalone Excel-based tool, now available via the One NASA Cost Engineering (ONCE) model portal and software.nasa.gov.

66_ Cost Effects of Destination on Space Mission Cost with Focus on L1, L2 Orbits Authors: Joe Hamaker, Mitch Lasky

Abstract: Principal Investigators (PIs) are increasingly proposing Lagrange point 1 (L1) and Lagrange point 2 (L2) orbits for their science missions; however, typical cost models allow users to select only “earth-orbiting” or “planetary” for the destination or environment input. While space vehicles destined for L1 or L2 orbits may require technical attributes that are atypical of earth-orbiting missions, is it justifiable to model and analyze L1/L2 missions as “planetary”? This presentation will examine historical L1/L2 spacecraft and payload costs, compare costs derived from parametric models of earth-orbiting and planetary spacecraft and payloads, examine the characteristics of the earth-orbiting, L1/L2, and planetary mission trade space, and provide recommendations for cost analysis of L1/L2 missions.