Approved for unlimited release as SAND 2010-3325 C
Verification Practices for Code Development Teams
Greg Weirs
Computational Shock and Multiphysics Department
Sandia National Laboratories
25 May 2010
Sandia is a multiprogram laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Company, for the United States Department of Energy’s National Nuclear Security
Administration under contract DE-AC04-94AL85000.
My Experience
• Computational Fluid Dynamics (CFD) of reacting flows – aerospace engineering
• U. of Chicago FLASH code – astrophysics
• Sandia ALEGRA code – Solid Dynamics + Magnetohydrodynamics (MHD). Most examples I will show are from ALEGRA.
Your mileage may vary.
Expectation of Quality

Expectations of the accuracy of scientific simulations vary. Who are you trying to convince?
• The public
• The customer
• Analysts
• Code developers

• I’d bet X on the result; X =
  – My house
  – My job
  – The company
  – Your house
  – Some money
• Uncertainty Quantification
• Error bars on simulation results
• Result converges with refinement
• Mesh refinement
• Eyeball norm
• Trends are reasonable
• Result is plausible
• Result is not ridiculous
• Code returns a result
Context
What makes engineering physics modeling and simulation software different?
Our simulations provide approximate solutions to problems for which we do not know the exact solution.
This leads to two more questions:• How good are the approximations?• How do you test the software?
What is code verification?
How good is your code?
Has the algorithm been correctly implemented?
How can the algorithm be improved?
[Diagram: Governing Equations (IDEs) → Discrete Equations → Numerical Solutions. Algorithms (FEM, ALE, AMG, etc.) connect the governing equations to the discrete equations; the Implementation (C++, Linux, MPI, etc.) connects the discrete equations to the numerical solutions. Code Verification and SQE span these layers.]
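The idea in the diagram – checking that an algorithm has been correctly implemented by comparing numerical solutions against an exact solution – can be illustrated with a toy problem. The sketch below is a minimal Python stand-in, not ALEGRA's machinery: it integrates u' = -u with forward Euler and measures the error against the exact solution exp(-t).

```python
import math

def forward_euler_decay(dt, t_end=1.0):
    """Integrate u' = -u, u(0) = 1 to t_end with forward Euler."""
    u = 1.0
    for _ in range(round(t_end / dt)):
        u -= dt * u
    return u

def error(dt):
    """Pointwise error against the exact solution u(1) = exp(-1)."""
    return abs(forward_euler_decay(dt) - math.exp(-1.0))

# A correct first-order implementation should roughly halve the error
# when the step size is halved; a bug typically breaks this.
e_coarse, e_fine = error(0.01), error(0.005)
observed_order = math.log(e_coarse / e_fine) / math.log(2.0)
print(f"errors {e_coarse:.2e} -> {e_fine:.2e}, order ~ {observed_order:.2f}")
```

The eyeball norm would accept either answer; only the comparison against the exact solution reveals whether the scheme converges at its formal order.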
What else do I need?
[Diagram: the same Governing Equations (IDEs) → Discrete Equations → Numerical Solutions chain, annotated with the assessments and the questions they answer: Validation and SA/UQ (are these equations adequate?), Solution Verification (how good is your simulation? how large is the numerical error?), and Code Verification with SQE (how good is your code?).]
Uncertain Outputs from Uncertain Inputs
[Diagram: Governing Equations (PDEs) → Discrete Equations → Numerical Solutions. Inputs: parameters to governing equations, algorithms, and discrete equations. Outputs: metrics of interest.]

Uncertainty Quantification and Code Verification are nearly orthogonal.
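To make the orthogonality concrete: UQ propagates uncertain inputs through a fixed code to get distributions on the output metrics, while code verification asks whether the code solves its equations correctly. A hedged sketch of the UQ side, using a hypothetical toy model and an assumed input distribution (real studies would use a production code and far more careful sampling):

```python
import math
import random
import statistics

def metric(k):
    """Toy stand-in for an expensive simulation: the value of u(1)
    for u' = -k*u, u(0) = 1, which is exactly exp(-k)."""
    return math.exp(-k)

random.seed(0)
# Uncertain input: decay rate k ~ Normal(1.0, 0.1) -- an assumed distribution.
samples = [metric(random.gauss(1.0, 0.1)) for _ in range(10_000)]
mean_out = statistics.mean(samples)
std_out = statistics.stdev(samples)
print(f"output metric: mean {mean_out:.3f}, std {std_out:.3f}")
```

None of this says anything about whether `metric` is implemented correctly; that question belongs to code verification.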
Informal Definitions
• Software Quality Engineering (SQE): manage software complexity
• Code Verification: assess algorithms and their implementation vs. exact solutions
• Solution Verification: estimate discretization error
• Validation: assess physics models vs. experimental data
• SA/UQ: assess sensitivity of answer to input parameters
An ingredients list for predictive simulation, not a menu.
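The solution-verification ingredient – estimating discretization error when no exact solution exists – is often done by Richardson-style extrapolation from two resolutions, assuming the method's formal order and an asymptotic range. A minimal sketch under those assumptions (the order p = 1 is taken as known here):

```python
import math

def euler_decay(dt):
    """Forward Euler for u' = -u, u(0) = 1, integrated to t = 1."""
    u = 1.0
    for _ in range(round(1.0 / dt)):
        u -= dt * u
    return u

# Richardson-style estimate of the discretization error in the fine
# solution, from two resolutions and the method's formal order p = 1.
# No exact solution is needed -- that is the point of the technique.
coarse, fine = euler_decay(0.02), euler_decay(0.01)
p = 1
est_error = (fine - coarse) / (2**p - 1)

# For this toy problem we happen to know the exact answer, so we can
# check how good the estimate is.
true_error = math.exp(-1.0) - fine
print(f"estimated error {est_error:.2e}, actual error {true_error:.2e}")
```

In production the exact answer is unavailable, which is exactly why the two-resolution estimate matters.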
ALEGRA Test Suite
[Diagram: test suite categories spanning from SQE to applications: Unit tests, Regression, Verification, Performance (Scaling), Prototypes.]
The Gauntlet
“Prototype” simulations contributed by ALEGRA users:
• Have no exact solution
• User-defined success criteria for each case
• Might represent a capability we want to maintain
• Might agitate our current capabilities
The Usual Suspects…
The Production Line

Over time, the ALEGRA code team has developed a software infrastructure to support testing:
• Ability to run a large number of simulations repeatedly
• Ability to compare results with a baseline
• Ability to report pass/diff/fail
• Run on platforms of interest at regular intervals
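The compare-with-baseline and pass/diff/fail steps can be sketched as a small classifier. The function name and tolerances below are illustrative only; a real suite tunes tolerances per test and per platform.

```python
def compare_to_baseline(result, baseline, rtol=1e-8, diff_tol=1e-3):
    """Classify a run against a stored baseline: 'pass' if every value
    matches within rtol, 'diff' if within the looser diff_tol, and
    'fail' otherwise. Tolerances here are hypothetical defaults."""
    worst = max(abs(r - b) / max(abs(b), 1e-30)
                for r, b in zip(result, baseline))
    if worst <= rtol:
        return "pass"
    return "diff" if worst <= diff_tol else "fail"

print(compare_to_baseline([1.0, 2.0], [1.0, 2.0]))        # pass
print(compare_to_baseline([1.0, 2.00001], [1.0, 2.0]))    # diff
print(compare_to_baseline([1.0, 2.5], [1.0, 2.0]))        # fail
```

The three-way verdict matters: a "diff" flags platform or roundoff drift worth a look without blocking development the way a "fail" does.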
The Production Line
Code verification tests further require:
• Ability to handle different types of reference solutions
• Ability to compute error norms and convergence rates
• Flexibility for computing specialized metrics for specific test problems
(Some of these capabilities are also helpful for solution verification.)
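As an illustration of the error-norm and convergence-rate machinery, the sketch below (pure Python, not ALEGRA's tooling) solves a 1-D Poisson problem with a second-order central difference against a known reference solution, then computes L2 error norms and the observed rate across three meshes.

```python
import math

def solve_poisson(n):
    """Second-order central-difference solve of -u'' = f on (0, 1) with
    u(0) = u(1) = 0 and f = pi^2 sin(pi x), so the exact (reference)
    solution is u = sin(pi x). The tridiagonal system is solved with
    the Thomas algorithm."""
    h = 1.0 / n
    x = [i * h for i in range(1, n)]                      # interior nodes
    d = [h * h * math.pi**2 * math.sin(math.pi * xi) for xi in x]
    a, b, c = -1.0, 2.0, -1.0                             # constant stencil
    cp, dp = [0.0] * (n - 1), [0.0] * (n - 1)
    cp[0], dp[0] = c / b, d[0] / b
    for i in range(1, n - 1):                             # forward sweep
        m = b - a * cp[i - 1]
        cp[i] = c / m
        dp[i] = (d[i] - a * dp[i - 1]) / m
    u = [0.0] * (n - 1)
    u[-1] = dp[-1]
    for i in range(n - 3, -1, -1):                        # back substitution
        u[i] = dp[i] - cp[i] * u[i + 1]
    return x, u

def l2_error(n):
    """Discrete L2 norm of the error against the reference solution."""
    x, u = solve_poisson(n)
    h = 1.0 / n
    return math.sqrt(h * sum((ui - math.sin(math.pi * xi)) ** 2
                             for ui, xi in zip(u, x)))

errors = {n: l2_error(n) for n in (20, 40, 80)}
# Halving h twice should cut the error by ~16x for a second-order scheme.
rate = math.log(errors[20] / errors[80]) / math.log(4.0)
print({n: f"{e:.2e}" for n, e in errors.items()}, f"rate ~ {rate:.2f}")
```

A production harness generalizes exactly these two numbers – the norm and the rate – over many problems and reference-solution types.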
Verification Is Not Free
Principal costs:
• Infrastructure development
• Test development

Recurring costs – a tax on development:
• Maintenance of existing tests
• Code development becomes a very deliberate process
Verification As A Continuous Process
• Setting up a verification problem for the first time takes significant effort – the learning curve is steep and the infrastructure is not yet in place
• Re-running a verification analysis you have maintained takes minimal work
• Without regular, automated verification testing, verification results go stale quickly – they no longer reflect the current state of the code
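One way to keep results from going stale is to wrap a convergence check itself as an automated test that runs with the rest of the suite on every commit. A minimal sketch on a toy quadrature problem (names and thresholds are illustrative, not ALEGRA's harness):

```python
import math

def trapezoid(n):
    """Composite trapezoid rule for the integral of sin on [0, pi]
    (exact value 2); endpoint terms vanish since sin(0) = sin(pi) = 0."""
    h = math.pi / n
    return h * sum(math.sin(i * h) for i in range(1, n))

def observed_order(errs):
    """Convergence order from errors at n and 4n (spacing quartered)."""
    return math.log(errs[0] / errs[1]) / math.log(4.0)

def test_trapezoid_stays_second_order():
    """Runs automatically with the suite: fails loudly if a code change
    degrades the method's convergence order."""
    errs = [abs(trapezoid(n) - 2.0) for n in (16, 64)]
    p = observed_order(errs)
    assert 1.9 < p < 2.1, f"convergence order regressed: p = {p:.2f}"

test_trapezoid_stays_second_order()
print("verification regression test passed")
```

Because the check is an ordinary test, the verification result is re-established every time the suite runs, never quietly drifting out of date.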
Code Verification Identifies Algorithmic Weaknesses
One purpose of code verification is to find bugs.
• Code verification often finds bugs that are subtle and otherwise difficult to identify.
• The eyeball norm finds most obvious bugs quickly.

Perhaps a better use of code verification is to guide code development.
• Some bugs are algorithmic and conceptual.
• Code verification identifies algorithmic weaknesses.
• Large errors are a weakness.
Code Developers Do Code Verification
• Code developers best understand the numerical methods they are using
• Code developers are best able to use the results of code verification (and other forms of assessment) to improve the algorithms they use
• Code verification as an accreditation exercise has no lasting impact on code quality
Verification Testing Must Be a Team Ethic
• Discipline is required to keep a “clean” test suite – to keep all tests passing; “stop-the-line” mentality
• If only part of the team values verification, that part is always carrying the other developers
• Maintaining an automated verification test suite is probably necessary but definitely not sufficient
• Developers should be using verification tests interactively
Costs In Context
• Code developers retain practices that justify their expense.
• Consider version control software (CVS, SVN, git…):
  1. Do you remember a time before you used it?
  2. Would you consider going back?
Verification should be as critical