A. Alfonsi, C. Rabiti
06/28/2017
• Verification has been performed (the code is bug-free)
• All uncertain parameters are accessible (eases sampling needs)
• All uncertain parameter distributions are known (otherwise a Bayesian approach is required)
Assumptions
• Perform uncertainty propagation for all experiments (UQ)
• Compare simulation with experiments (Validation versus
experiments)
• Determine the uncertainties in the “target” prediction (extrapolation)
RAVEN's current capabilities cover:
• Uncertainty Quantification (mature)
• Validation vs. experiments (initial)
• Extrapolation (planned)
Validation Process
UQ
• Goal: determine the Probability Distribution Function (PDF) of the Figures of Merit (FOMs)
• Select the right sampler based on:
– Number of variables
– Nonlinearity of the model
• Sample the model according to the chosen sampler and distributions
• Analyze the FOM dispersion (mean, sigma, etc.)
UQ Process
Statistical post processing
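The sample-and-analyze loop above can be sketched in a few lines of Python. This is a minimal illustration, not RAVEN's implementation; the model and its input distributions are hypothetical:

```python
import random
import statistics

def model(x, y):
    # Hypothetical nonlinear model standing in for the physics code.
    return x ** 2 + 0.5 * y

rng = random.Random(42)
# Sample the uncertain inputs from their (assumed known) distributions.
fom = [model(rng.gauss(1.0, 0.1), rng.uniform(0.0, 1.0))
       for _ in range(10000)]

# Analyze the FOM dispersion.
mean = statistics.fmean(fom)
sigma = statistics.stdev(fom)
```

In a real study the calls to `model` are the expensive part, which is why the choice of sampler matters.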
RAVEN supports many forward samplers:
• Monte Carlo
• Grids:
– Equal-spaced in probability
– Equal-spaced in value
– Mixed (probability, custom, value)
– Custom (user-provided values/probabilities)
• Stratified (LHS type):
– Equal-spaced in probability
– Equal-spaced in value
– Mixed (probability, custom, value)
– Custom (user-provided values/probabilities)
• Generalized stochastic collocation polynomial chaos
A different sampling strategy can be associated with each variable separately.
Sampling strategies
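A minimal sketch of the stratified (LHS-type, equal-spaced in probability) idea, assuming samples are drawn in the unit CDF hypercube. This is a hypothetical helper, not RAVEN code:

```python
import random

def latin_hypercube(n_samples, n_vars, seed=0):
    # One equal-probability stratum per sample for each variable;
    # strata are paired at random across variables.
    rng = random.Random(seed)
    columns = []
    for _ in range(n_vars):
        # One point inside each of the n_samples probability strata.
        pts = [(k + rng.random()) / n_samples for k in range(n_samples)]
        rng.shuffle(pts)
        columns.append(pts)
    # Rows are samples of CDF values, to be mapped through each
    # variable's inverse CDF.
    return list(zip(*columns))
```

Each variable's marginal is guaranteed to be covered with one point per probability stratum, unlike plain Monte Carlo.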
• Response Surface Designs:
– Box-Behnken
– Central Composite
Probability Distribution Function
Truncated Form Available
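A truncated form can be emulated with simple rejection sampling. The sketch below is illustrative only, not the CROW implementation:

```python
import random

def truncated_gauss(mu, sigma, lo, hi, rng):
    # Rejection sampling: redraw from the full normal until the
    # draw falls inside the truncation bounds.
    while True:
        x = rng.gauss(mu, sigma)
        if lo <= x <= hi:
            return x

rng = random.Random(1)
samples = [truncated_gauss(0.0, 1.0, -1.0, 2.0, rng) for _ in range(5000)]
```

Rejection is inefficient for narrow bounds; production codes instead invert the renormalized CDF.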
N-Dimensional Distributions (CROW)
• Micro Sphere
• Inverse Weight
• N-Dimensional spline
• Gaussian process
• Polynomial (stochastic and not)
• Linear regressors
• Many more (raven.inl.gov)
• Ensemble models
Available Surrogate Models
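As an illustration of the simplest of these surrogates, an inverse-distance-weighting (“Inverse Weight”) predictor can be sketched as follows. This is a toy version, not RAVEN's implementation:

```python
import math

def idw_surrogate(train_x, train_y, power=2.0):
    # Build a predictor that averages training outputs with
    # weights proportional to 1 / distance**power.
    def predict(x):
        num, den = 0.0, 0.0
        for xi, yi in zip(train_x, train_y):
            d = math.dist(x, xi)
            if d == 0.0:
                return yi  # exact hit reproduces the training point
            w = d ** -power
            num += w * yi
            den += w
        return num / den
    return predict
```

Trained on the corners of the unit square for f(x, y) = x + y, the predictor reproduces the training points exactly and interpolates between them.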
A Road Map for Collocation Methods
“Usually weighted by the probability”
• Full SCgPC (~10) – A priori knowledge of the degree of the function is imposed
• Full generalized Sobolev decomposition (~10) – A priori knowledge of the degree of the function is imposed
• Sparse grid (~10) – Known separability is required
• Adaptive SCgPC (~100) – Separability and near-linearity improve performance
• Adaptive generalized Sobolev decomposition (~100) – Separability and near-linearity improve performance
Stochastic Collocation Generalized Polynomial Chaos SCgPC and
Sobolev Indexes Based
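The SCgPC idea in one dimension: expand the response on polynomials orthogonal to the input distribution and estimate the expansion coefficients by projection. A sketch with probabilists' Hermite polynomials, where Monte Carlo estimation of the projection integrals stands in for the collocation quadrature used in practice:

```python
import random

# Probabilists' Hermite polynomials He_0..He_2, orthogonal under
# the standard normal weight, with norms E[He_k(x)^2].
HERMITE = [lambda x: 1.0, lambda x: x, lambda x: x * x - 1.0]
NORMS = [1.0, 1.0, 2.0]

def pce_coefficients(f, n_samples=200000, seed=0):
    # c_k = E[f(x) He_k(x)] / E[He_k(x)^2]; the expectations are
    # estimated by Monte Carlo here (a quadrature rule in practice).
    rng = random.Random(seed)
    sums = [0.0] * len(HERMITE)
    for _ in range(n_samples):
        x = rng.gauss(0.0, 1.0)
        fx = f(x)
        for k, he in enumerate(HERMITE):
            sums[k] += fx * he(x)
    return [s / (n_samples * NORMS[k]) for k, s in enumerate(sums)]

# For f(x) = x^2 the exact expansion is 1*He_0 + 0*He_1 + 1*He_2.
coeffs = pce_coefficients(lambda x: x * x)
```

Once the coefficients are known, the mean and variance of the response follow directly from them, with no further model runs.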
Grid Filling
Figures: “Power Spike” and “Power History” cases (y-axis: % change in the answer).
J. Cogliati, J. Chen, J. Patel, D. Mandelli, D. Maljovec, A. Alfonsi, P. Talbot, C. Wang, C. Rabiti, “Time-Dependent Data Mining in RAVEN,” INL/EXT-16-39860
Example: Bison + RAVEN
Sampled input variables (unit, min, max):
• … : dimensionless, 0.9, 1.1
• Grain_diffCoeff: dimensionless, 0.6, 1.4
• Clad_creeprate: dimensionless, 0.9, 1.1
• Grain_res_param: Pa
Figures of Merit (unit):
• max_hoop_stress (Pa)
• max_vonmises_stress (Pa)
• midplane_vonmises_stress (Pa)
• max_clad_temp (K)
• fis_gas_released (MWd/kgU)
• 7000 MC runs
• 128 BISON simulations run simultaneously, each using 16 MPI processes (2048 cores in simultaneous use)
Uncertainty On FOM
Figure: probability distribution of the input space (input variables) and probability distribution of the experimental readings (Figures of Merit, FOMs).
A Probabilistic Reading of Experimental Data
Comparative metrics
The Process
• The goal is to achieve a numerical representation of the probability density function of a set of points in the output space
• The least distorting representation is generated by binning (histogram)
• The number of bins and their boundaries should be chosen to regularize the function without altering its meaning
• Binning algorithms:
– Square-root rule
– Sturges' formula
Reconstruction of the FOM Distribution
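The two binning rules can be stated in a few lines; a minimal sketch, with an equal-width histogram builder alongside:

```python
import math

def n_bins_sqrt(n):
    # Square-root rule: k = ceil(sqrt(n)).
    return math.ceil(math.sqrt(n))

def n_bins_sturges(n):
    # Sturges' formula: k = ceil(log2(n)) + 1.
    return math.ceil(math.log2(n)) + 1

def histogram(samples, k):
    # Equal-width binning between the sample min and max.
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / k
    counts = [0] * k
    for s in samples:
        counts[min(int((s - lo) / width), k - 1)] += 1
    return counts
```

For a 7000-run study, the square-root rule gives 84 bins while Sturges' formula gives 14, close to the 15-bin table reported below.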
How do we compare the simulation to the experiment?
• Mean and sigma are not enough to compare the model output distributions to the experimental reading distributions
• The metric should be more extensive and consider the whole PDF:
– Minkowski L1 metric
– Distance Probability Distribution Function
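A sketch of the Minkowski L1 metric on two densities binned on common edges (an illustrative helper, not RAVEN's implementation):

```python
def l1_distance(pdf_a, pdf_b, bin_width):
    # Minkowski L1 metric between two densities binned on the same
    # edges: a discrete integral of |f_a - f_b| over the support.
    return sum(abs(a - b) for a, b in zip(pdf_a, pdf_b)) * bin_width
```

For normalized densities the value ranges from 0 (identical distributions) to 2 (disjoint supports), which makes it easy to interpret as a validation score.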
• FOMs:
– Mass flow rate
– Temperature, cold leg
– Temperature, hot leg
• Code: RELAP-7 (2014)
Validation Objectives (Figures of Merit)
• The uncertainties on the figures of merit are connected to the type of measurement and are not specific to a particular detector or location
• ±0.1 MPa (primary pressure)
Mass Flow
Fluid Temperature
Bin Midpoint | Bin Count
0.187921 | 1
0.188897 | 4
0.189874 | 17
0.19085 | 63
0.191826 | 205
0.192802 | 518
0.193778 | 938
0.194755 | 1419
0.195731 | 1638
0.196707 | 1629
0.197683 | 1042
0.198659 | 541
0.199636 | 213
0.200612 | 64
0.201588 | 13
Binning of the Data Generated
Figure: empirical CDF and PDF of the binned data.
µ(d) = 0.0662, σ(d) = 0.0325
Figure: PDF of z [kg/s].
space and different weight-point associations
• The Voronoi tessellation is a common statistical representation
Voronoi Tessellation
• For each point, a weight is computed in the probability space
• These weights are used to construct the variate distribution in the response space and, if applied to the input space, can be used to compute the statistical moments without ad-hoc strategies dependent on the sampling methodology
Response Space Tessellation of the Response Space
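In one dimension the tessellation reduces to midpoint intervals, which makes the weight computation easy to sketch. RAVEN works with N-dimensional cells in the CDF space; this toy version only illustrates the idea:

```python
def voronoi_weights_1d(points):
    # Each sample's Voronoi cell is bounded by the midpoints to its
    # neighbours; normalized cell lengths give reliability weights.
    order = sorted(range(len(points)), key=lambda i: points[i])
    xs = [points[i] for i in order]
    edges = [xs[0]] + [(a + b) / 2 for a, b in zip(xs, xs[1:])] + [xs[-1]]
    lengths = [hi - lo for lo, hi in zip(edges, edges[1:])]
    total = sum(lengths)
    weights = [0.0] * len(points)
    for i, length in zip(order, lengths):
        weights[i] = length / total
    return weights
```

Isolated samples receive larger weights than clustered ones, which is what makes the resulting moments independent of the sampling strategy.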
• The methodology has been applied to generalize the validation methodologies previously implemented in RAVEN, demonstrating its validity
• The generality of the approach provides the following advantages:
– The validation metrics do not depend on the employed sampling strategy
– If the reliability weights are computed in the CDF space, there is no need to create ad-hoc weight-generation strategies for future sampling methods
– The approach can be used for the approximate computation of joint probability functions (crucial for computing correlation and covariance matrices when the targeted variables are weighted differently)
Voronoi tessellation advantages
• Distribution modeling and sampling strategies have a good degree of maturity
• Static comparison between model output and experimental distributions is feasible but at an early stage
• Developments are needed in:
– Time-dependent comparisons
– Accounting for correlations between input/output experimental data
• Extrapolation is a new field which will hopefully see growing capabilities in the coming years
Conclusions
…then the Comparison
Next Step….
Uncertainties on Readings
Minkowski L1 Metric
Distance Probability Distribution Function
Use of Voronoi Tessellation