


On the Assessment of a Bayesian Validation Methodology for Data Reduction Models Relevant to Shock Tube Experiments

by M. Panesi, K. Miki, S. Prudhomme, and A. Brandis

ICES REPORT 11-36

November 2011

The Institute for Computational Engineering and Sciences
The University of Texas at Austin
Austin, Texas 78712

Reference: M. Panesi, K. Miki, S. Prudhomme, and A. Brandis, "On the Assessment of a Bayesian Validation Methodology for Data Reduction Models Relevant to Shock Tube Experiments", ICES REPORT 11-36, The Institute for Computational Engineering and Sciences, The University of Texas at Austin, November 2011.


On the Assessment of a Bayesian Validation Methodology for Data Reduction Models Relevant to Shock Tube Experiments

M. Panesi∗,a, K. Mikia, S. Prudhommea, and A. Brandisb

aCenter for Predictive Engineering and Computational Sciences (PECOS), Institute for Computational Engineering and Sciences (ICES), The University of Texas at Austin

bCenter for Turbulence Research, Stanford University

Abstract

Experimental raw data provided by measuring instruments often need to be converted into meaningful physical quantities through data reduction modeling processes in order to be useful for comparison with outputs of computer simulations. These processes usually employ mathematical models that have to be properly calibrated and rigorously validated so that their reliability can be clearly assessed. A validation procedure based on a Bayesian approach is applied here to a data reduction model used in shock tube experiments. In these experiments, the raw data, given in terms of photon counts received by an ICCD camera, are post-processed into radiative intensities. Simple mathematical models describing the nonlinear behavior associated with very short opening times (gate widths) of the camera are developed, calibrated, and either invalidated or not invalidated in this study. The main objective here is to determine the ability of the methodology to precisely quantify the uncertainties emanating from the raw data and from the choice of the reduction model. Shortcomings of the methodology, suggested improvements, and future research areas are also highlighted. Experimental data collected at the Electric Arc Shock Tube (EAST) facility at the NASA Ames Research Center (ARC) are employed to illustrate the validation procedure.

Key words: Uncertainty quantification, Bayesian analysis, Model calibration, Parameter identification

1. Introduction

Traditional procedures for validation of models in computational sciences and engineering are often related to data fitting, in the sense that the closer the output quantities computed by models are to experimental observations, the better the models are deemed to be. The Center for Predictive Engineering and Computational Sciences (PECOS) has recently proposed a novel methodology allowing for the calibration of model parameters, the validation of models, and the quantification of uncertainties in specific model predictions. This new approach is currently being evaluated for simulating and predicting the multi-physics heating environment encountered during atmospheric entry. However, it is clear that, for such processes, experimental data play a critical role, not only for the calibration of the model parameters (e.g. reaction rate constants, spectroscopic constants, etc.), but also for the validation of the model itself. Unfortunately, it is not uncommon that legacy data, available for example in the literature, lack a complete description of the associated uncertainties, and that the data reduction procedures used to convert the raw data into meaningful physical quantities are reported with minimal information. It thus becomes virtually impossible to account for the systematic errors due to experimental manipulations and, more importantly, to estimate the influence and validity of the data reduction models.

∗Corresponding author. E-mail address: [email protected]. Address: ICES, The University of Texas at Austin, 1 University Station, C0200, Austin, TX 78712, USA.

Preprint submitted to CMAME, November 5, 2011

Data collected from experiments are in general only indirectly related to physical observations. Raw data, usually in the form of electric signals (e.g. voltage) from electronic instruments or pixel counts in digital images, are mapped into quantities such as stress components or radiative intensity by using data reduction models. These models, like the physico-mathematical models that describe physical phenomena, sometimes provide only crude approximations of the operating functions of the instruments. They may also contain model parameters that need to be calibrated and whose uncertainties have to be quantified. In certain situations, the reduction models need to be validated, as the instruments may be used in regimes outside of the calibration range, and uncertainties should then be correctly propagated to the reduced data. Figure 1 illustrates this concept and shows the strong interaction between the validation of physical models and the validation of data reduction models. The left box of the diagram describes the calibration and validation processes for the data reduction models, while the actual calibration and validation processes of the physical models of interest are described in the right box of the diagram. The output of the first box consists of experimental reduced data with associated uncertainties that are later utilized in the second box for the calibration and validation of the physical model parameters.

The present work is devoted to the first stage of this two-step problem. Recently, experimentalists at the NASA Ames Research Center (ARC) have embarked on an extensive experimental campaign aiming at characterizing the radiative properties of high-temperature shock-heated gases in the context of the CEV Aerosciences Project (CAP) [10, 8]. Data collected from the Electric Arc Shock Tube (EAST), using imaging spectrographs, have been utilized for the calibration of kinetic and radiative model parameters as discussed in [13]. Results for the forward predictions of the radiative intensity are summarized in Fig. 2. The plot shows excellent agreement between the predicted mean intensity and the experimental observations, but also indicates that the uncertainties, characterized by the 2.5th and 97.5th percentile curves, are very large. At that stage of the investigation, it was unclear whether the sources of uncertainty originated from errors in the experimental procedures, in the data reduction models, or in the physical models. The objectives of the present work are to revisit the data reduction models describing the functional elements of one intensified charge-coupled device (ICCD) camera used in the shock tube experiments at the NASA Ames Research Center and to submit these models to the validation process proposed at PECOS. During this process, a close cooperation between modelers and experimentalists was developed in order to better understand the functionality of the instruments and of the modeling capabilities.

Figure 1: The validation of physical models requires the calibration and the validation of the data reduction models.

Figure 2: Comparison of numerically predicted integrated intensities and experimental observations as presented in [13]. The mean intensity (posterior mean) is in good agreement with the measured data (EAST exp. data, Shot #47). However, the 2.5th and 97.5th percentile curves indicate large uncertainties in the computed intensity.

Measuring tools or instruments should ideally be operated in their linear regime. In the case of the EAST experiments, however, this assumption cannot be satisfied: the acquisition time must be kept extremely short to minimize the smearing of the data on the spectrometers, as the flow travels at very high speed (about 10 km/s) in the shock tube. In consequence, the ICCD camera exhibits a non-linear response for such small gate widths (the time during which the "shutter" of the camera is open), as the time required to open the camera slit is comparable to, or sometimes even greater than, the nominal time the gate is open during the experiment. One possibility for characterizing the non-linear response of the camera is to make a series of measurements at small gate widths using calibrated sources of radiation. Unfortunately, the intensity of the lamps used for calibration is considerably smaller than that of the radiative source during actual experiments. Therefore, the correction factors accounting for the non-linear response at small gate widths, although representative of the response of the camera at those calibration conditions, may become inadequate when applied to the actual experimental data. Furthermore, at small gate widths, the number of photon counts is drastically reduced when using calibration lamps, which results in increased scattering and noise that would be far too large for the actual experimental data. This issue has been the motivation for constructing a mathematical model of the camera shutter that can predict the response of the instrument at low gate widths. Simple models describing the electronic circuit responsible for the gating of the photo-cathode in cameras have thus been developed in this work following earlier works [26, 1, 19], and have been calibrated and validated using the validation process suggested in [3, 17, 18], based on a Bayesian statistics framework [4, 6]. We emphasize here that our motivation has been to construct the simplest physical models that would reasonably quantify the uncertainty in the reduced experimental data.

The paper is organized as follows. Section 2 describes the EAST experiments and the experimental set-up. Possible data reduction models that convert photon counts into radiative intensities are described in Section 3. The Bayesian framework and the validation methodology used throughout this investigation are detailed in Section 4. Results are presented and discussed in Section 5, while concluding remarks are provided in Section 6.


2. Brief description of experimental apparatus

The Electric Arc Shock Tube (EAST) facility at the NASA Ames Research Center is used to generate high-enthalpy gas tests for the simulation of high-speed atmospheric entries. The facility is composed of a long tube and a chamber. In the chamber, also referred to as the driver, an electromagnetic discharge is set off, giving rise to a sudden increase of the gas temperature and pressure. This high-pressure gas then bursts the diaphragm that initially separates the driver gas from the test gas, forming a shock wave that travels at high speed down the tube toward the test section, where spectral measurements are performed. As the shock propagates downstream, the shock-heated gas radiates energy. The radiative signature contains useful information concerning the thermo-chemical and radiative properties of the medium, which can be inferred using emission spectroscopy. For this purpose, the spectrally and spatially resolved shock-layer radiance is analyzed by taking a snapshot of the shock wave as it passes in front of the optical access window.

The experimental apparatus is composed of the collecting optics, spectrometers, and intensified cameras. The light radiated by the high-temperature gas in the shock tube is directed toward the spectrometers by the collecting optics, as described in [8]. The incoming light is then decomposed into its spectral components by the spectrographs. Finally, the spectrally and spatially resolved radiation is recorded by means of the ICCD cameras.

2.1. Emission spectroscopy

Emission spectroscopy consists of the analysis of the radiation emitted by a source through inspection of its spectral signature. The spectral decomposition of the incident radiation is recorded in images taken by ICCD cameras, whose pixel counts are proportional to the intensity of the radiation. As the absolute radiative intensity is the primary quantity of interest, the cameras need to be calibrated. Calibration is generally performed by measuring the radiation emitted from a well-characterized source. The choice of the calibration source depends on the spectral interval to be observed, which in the present investigation ranges from 500 to 900 nm. Furthermore, at the nominal shock speed of 10 km/s, the shock travels 1 cm each microsecond, which requires the use of high-speed gating cameras to avoid substantial smearing effects in the measurement. Image intensifiers coupled with CCD cameras are thus used. Besides enhancing the sensitivity in low-light conditions, ICCD cameras can capture phenomena on extremely short time scales thanks to their high-speed gating capabilities.


Figure 3: Schematic of the intensifier in an ICCD camera, showing the incoming photons, the photocathode, the microchannel plate (MCP) with its applied voltages, the photo-electrons, the phosphor screen, and the fiber optic output window.

Figure 4: Image obtained using an ICCD camera in a shock tube experiment (Shot B38 from EAST), with wavelength [nm] on the horizontal axis and distance [cm] on the vertical axis; the shock front, atomic line radiation, and contact discontinuity are visible.

2.2. Intensified charge-coupled device cameras

The intensifier is composed of 1) a photo-cathode, 2) a multi-channel plate (MCP), and 3) a phosphor screen (see Fig. 3). Gating of the intensifier is made possible by controlling the voltage across the photo-cathode: if the photo-cathode is biased more positively than the MCP, electrons are repelled and the intensifier is gated off. The dynamic response of the gating system can be represented by a resistor-capacitor (RC) circuit, as discussed in greater detail in the next section. It is important to observe that even when the photo-cathode is gated off, the current across the intensifier is not zero. Indeed, even in this case a fraction of the photons not absorbed by the photo-cathode may strike the MCP, releasing photoelectrons. Furthermore, another source for the build-up of charges on the CCD comes from photoelectrons released by the photo-cathode due to thermal excitation. A reduction of the flux of photoelectrons can be achieved by simultaneously gating the photo-cathode and the MCP.

Figure 4 shows an example of the spectrally resolved radiation output from an EAST experiment. The colors indicate the photon counts recorded at each pixel of the image. The region between the shock front and the contact discontinuity constitutes the area of interest for the measurements of the radiative properties of the gas. The horizontal axis represents the wavelength in nm while the vertical axis represents the distance in cm. The mapping from the raw counts to the radiative intensity is obtained by defining the ratio using the counts collected with a source of known radiation (i.e. tungsten lamps). The objective in the remainder of the paper is to develop a model of this ratio, to calibrate the model parameters, and to invalidate (or not) the model for low gate widths.

3. Data reduction modeling

In this section, we derive simple data reduction models to be used to convert the photon counts into radiative intensity. We focus in particular on modeling the correction factor, given as a function of the gate width ∆t of the camera, which quantifies the deviation from a linear behavior at small gate widths. We also list some of the hypotheses that we introduce to construct the models.

3.1. Modeling and hypotheses

The number of photon counts N_∆t(i, j) for a given ∆t is collected at each pixel (i, j) of the image, 1 ≤ i, j ≤ 512, and can easily be mapped into photon counts N_∆t(z, λ) at each position z and wavelength λ, where z varies roughly between 0 and 12 cm and λ between 400 and 950 nm.

Ideally, we would like to have a linear relationship between the intensity I(z, λ) and the photon counts per unit time, i.e.

    I(z, λ) = K N_∆t(z, λ) / ∆t    (1)

with K constant. Unfortunately, the ICCD camera does not have a perfectly linear response, and the coefficient K should be corrected as follows:

    K = κ × γ(G) × ξ_e(λ) × ρ(∆t)    (2)

where:

1. κ is a new constant coefficient.

2. γ(G) is a correction factor that depends on the amplification gain G of the ICCD camera, which controls the power of the signal. For large gains, we may observe some saturation effects in the number of counts, which would lead to a non-linear response.

3. ξ_e(λ) is a correction factor that depends on the quantum efficiency of the camera, defined as the ratio between the output flux of electrons through the photo-cathode and the incoming flux of photons.

4. ρ(∆t) is the correction factor of interest (also referred to as the "non-reciprocity"), which departs from unity at very low gate widths ∆t.


In the following study, the gain is fixed so that the correction factor γ(G) can be ignored. In fact, due to the noise in the number of photon counts from one pixel to the next, we consider an averaged value of N_∆t over the region A = ∆z × ∆λ around a given position z and wavelength λ. Therefore, introducing κ̄ = κ γ(G) ξ_e(λ), we have

    Ī(z, λ) = (1/|A|) ∫∫_A I(z, λ) dz dλ = (1/|A|) ∫∫_A κ̄ ρ(∆t) N_∆t(z, λ)/∆t dz dλ    (3)

where Ī denotes the averaged intensity and |A| the area of A = ∆z × ∆λ. Using the fact that κ̄ is roughly constant in A, it follows that

    Ī(z, λ) = κ̄ (ρ(∆t)/∆t) ((1/|A|) ∫∫_A N_∆t(z, λ) dz dλ) = κ̄ ρ(∆t) N̄_∆t(z, λ)/∆t    (4)

where N̄_∆t denotes the averaged photon count captured at z and λ during the time period ∆t.

For a large gate width ∆t_L, the correction factor ρ should be unity, i.e. ρ(∆t_L) ≈ 1. Therefore, using the same calibration lamp, the same gain G, and considering the same λ and z, we would get for a small gate width ∆t,

    Ī(z, λ) = κ̄ ρ(∆t_L) N̄_∆t_L(z, λ)/∆t_L = κ̄ ρ(∆t) N̄_∆t(z, λ)/∆t    (5)

that is,

    ρ(∆t) ≈ (N̄_∆t_L/∆t_L) (N̄_∆t/∆t)^(−1)    (6)

where we have dropped the dependency on (z, λ). This result shows that the non-reciprocity can be evaluated by defining a model for the number of counts with respect to the gate width. Note that it could also be determined using the known intensity I(z, λ). However, for small gate widths and dim radiative sources such as tungsten lamps, the number of photons collected by the camera might be too low, which would yield large sources of uncertainty. These uncertainties would later be incorrectly propagated to the actual measurements, as the radiation from the shock-heated gas is much brighter. The objective is thus to develop a model that can be calibrated using data obtained at large gate widths and that can predict the non-reciprocity at ∆t = 0.1 µs.
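To make the procedure concrete, Eq. (6) can be evaluated directly from averaged counts. The following is a minimal sketch, with the function name and numbers invented purely for illustration:

```python
def non_reciprocity(n_small, dt_small, n_large, dt_large):
    """Estimate rho(dt) from Eq. (6): the count rate at a large
    (linear-regime) gate width divided by the count rate at the small
    gate width of interest."""
    return (n_large / dt_large) / (n_small / dt_small)

# Hypothetical averaged counts: at dt_L = 10 us the response is linear
# (1000 counts per us); at dt = 0.1 us only 60 counts are recorded
# instead of the 100 a linear response would give, so rho > 1.
rho = non_reciprocity(n_small=60.0, dt_small=0.1,
                      n_large=10_000.0, dt_large=10.0)
print(rho)  # 1.666..., i.e. the camera under-counts at dt = 0.1 us
```

The quantity to be modeled in the rest of the section is precisely the ∆t-dependence of this ratio.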

Figure 5 gives an interpretation of the fundamental issues associated with the calibration of the camera. Calibration using the tungsten lamp needs to be performed with a large gate width (∆t ≥ 10 µs) in order to capture an adequate number of photon counts from the lamp. On the other hand, actual experiments are performed at much shorter gate widths (∆t ∼ 0.1 µs) to minimize the spatial smearing due to the very high shock speed. It is thus important to consider and validate a data reduction model during calibration of the instrumentation. Figure 6 shows the total number of photon counts N_∆t corresponding to the set of data reported in Table 2. We indeed observe that the deviation from a purely linear response estimated using ∆t_L = 10 µs (dashed line) becomes large as the gate width approaches zero (see inset).

Figure 5: Schematic interpretation of the data reduction model. Calibration of the camera can reliably be performed at small intensity (tungsten lamp), small counts, and large gate widths. Actual experiments, on the other hand, are performed with large intensity, large photon counts, and small gate widths.

3.2. Models for the non-reciprocity factor

From our previous discussion, and in particular Eq. (6), the correction factor ρ(∆t) can be estimated by modeling the number of photons N_∆t collected by the camera. We assume that this number should be roughly proportional to the total integrated time the camera remains open and to the manner in which the camera opens and closes. Gating of the camera is essentially controlled by the photo-cathode. Following [26], we propose to model the number of counts based on the linear RC circuit shown in Fig. 7, composed of one capacitor and two resistors. In this model, the capacitance C_1 takes into account the surface charge between the outer and the inner faces of the cathode, and the resistor R_1 models the combination of all loads related to the system except that of the photo-cathode. The voltage V_in(τ), a step function, is introduced as the input signal mimicking the switching "on" and "off" of the photo-cathode, and the output signal is the current I_2(τ) through the resistor R_2, or simply the voltage V(τ) = R_2 I_2(τ) (Ohm's law). Here, τ represents the time measured from the instant at which the photo-cathode is turned on.

Figure 6: Total number of photon counts N_∆t collected in calibration experiments with respect to the gate width ∆t (solid line) and comparison with a purely linear response estimated using ∆t_L = 10 µs (dashed line).

The main hypothesis of the models presented below is that the photon counts are related to the integrated value of the voltage V(τ) over time, i.e.

    N_∆t ∝ ∫_0^∞ V(τ) dτ    (7)

The voltage V(τ) is governed by the differential equation

    dV/dτ + [(R_1 + R_2)/(R_1 C_1 R_2)] V = [1/(R_1 C_1)] V_in(τ)    (8)

or simply

    dV/dτ + α V = α β V_in    (9)

where we have used the notation

    α = (R_1 + R_2)/(R_1 C_1 R_2),  β = R_2/(R_1 + R_2).    (10)

Since the input step function V_in(τ) is piecewise constant, i.e.

    V_in(τ) = 1 if 0 < τ < ∆t,  V_in(τ) = 0 if τ > ∆t    (11)

∆t being the gate width, the general solution of the differential equation is given by

    V(τ) = β V_in(τ) + C e^(−ατ)    (12)

Figure 7: A linear RC circuit model for the photo-cathode (see e.g. [26]): input voltage V_in, resistor R_1, condenser C_1, and resistor R_2.

with C a constant that depends on the initial condition. Prescribing V(0) = 0, the solution of the problem is given by

    V(τ) = β (1 − e^(−ατ)) if 0 ≤ τ ≤ ∆t,
    V(τ) = β (1 − e^(−α∆t)) e^(−α(τ−∆t)) if τ > ∆t    (13)
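Equation (13) can be checked against a direct numerical integration of Eq. (9); the sketch below uses a forward-Euler scheme with arbitrary parameter values chosen only for the test:

```python
import numpy as np

alpha, beta, dt_gate = 4.0, 1.5, 1.0   # illustrative values, not calibrated ones
h = 1e-5                               # Euler step size
taus = np.arange(0.0, 3.0, h)
V_num = np.zeros_like(taus)
for k in range(1, taus.size):
    Vin = 1.0 if taus[k - 1] < dt_gate else 0.0   # step input of Eq. (11)
    # Forward-Euler step of Eq. (9): dV/dtau = alpha * (beta * Vin - V)
    V_num[k] = V_num[k - 1] + h * alpha * (beta * Vin - V_num[k - 1])

# Analytic solution of Eq. (13)
V_exact = np.where(taus <= dt_gate,
                   beta * (1.0 - np.exp(-alpha * taus)),
                   beta * (1.0 - np.exp(-alpha * dt_gate))
                        * np.exp(-alpha * (taus - dt_gate)))
print(np.max(np.abs(V_num - V_exact)) < 1e-3)  # True
```

The agreement confirms the exponential rise while the gate is open and the exponential decay after it closes.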

In the above definition of the voltage, we assumed that the same RC circuit controls the opening and closing of the camera. We could imagine that two different circuits are responsible for the two actions, with respective characteristic times 1/α_1 and 1/α_2. In that case, the solution for the voltage reads:

    V(τ) = β (1 − e^(−α_1 τ)) if 0 ≤ τ ≤ ∆t,
    V(τ) = β (1 − e^(−α_1 ∆t)) e^(−α_2 (τ−∆t)) if τ > ∆t    (14)

Inserting the solution (14) for the voltage V into (7), the integral is given by:

    ∫_0^∞ V(τ) dτ = ∫_0^∆t V(τ) dτ + ∫_∆t^∞ V(τ) dτ
                  = ∫_0^∆t β (1 − e^(−α_1 τ)) dτ + ∫_∆t^∞ β (1 − e^(−α_1 ∆t)) e^(−α_2 (τ−∆t)) dτ
                  = β (∆t − (1 − e^(−α_1 ∆t))/α_1) + β (1 − e^(−α_1 ∆t))/α_2
                  = β [∆t − (α_2 − α_1)(1 − e^(−α_1 ∆t))/(α_2 α_1)]    (15)

We observe that the second term in the expression for N_∆t represents a correction with respect to the linear regime, denoted

    Λ(∆t, α_1, α_2, β) = β (α_2 − α_1)(1 − e^(−α_1 ∆t))/(α_2 α_1)    (16)

so that

    N_∆t ∼ β ∆t − Λ(∆t, α_1, α_2, β)    (17)
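The closed-form result (15)-(17) is easy to sanity-check numerically. A sketch with arbitrary parameter values (not calibrated values from the experiments):

```python
import numpy as np

def V(tau, dt, a1, a2, beta):
    # Piecewise voltage of Eq. (14): rise while the gate is open, decay after
    rise = beta * (1.0 - np.exp(-a1 * np.minimum(tau, dt)))
    decay = np.exp(-a2 * np.maximum(tau - dt, 0.0))
    return rise * decay

def counts_closed_form(dt, a1, a2, beta):
    # Eq. (17): beta*dt minus the correction Lambda of Eq. (16)
    return beta * dt - beta * (a2 - a1) * (1.0 - np.exp(-a1 * dt)) / (a2 * a1)

dt, a1, a2, beta = 0.5, 3.0, 7.0, 2.0
tau = np.linspace(0.0, dt + 30.0 / a2, 400_001)  # decay negligible beyond ~30/a2
v = V(tau, dt, a1, a2, beta)
numeric = np.sum(0.5 * (v[1:] + v[:-1]) * np.diff(tau))  # trapezoidal rule
print(abs(numeric - counts_closed_form(dt, a1, a2, beta)) < 1e-5)  # True
```

The quadrature of the piecewise voltage reproduces the analytical value of the integral, which gives some confidence in the algebra leading to Eq. (16).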

We are now in a position to define several models for the photon counts N_∆t:


Model M1: In the first and simplest model, we suppose that the rising and decaying rates are the same, i.e. α_1 = α_2 = α. In this case, the correction Λ(∆t, α, α, β) vanishes and the photon counts are straightforwardly given by:

    N_∆t = β ∆t    (18)

We expect this model to fail to provide accurate predictions, but it will still be considered in order to test the reliability of the validation process proposed in the next section.

Model M2: In the second model, we relax the previous assumption and suppose that the rising and decaying rates are different. The photon count is thus given by (17). Note that for ∆t = 0, the correction Λ vanishes, while for large gate widths, Λ tends to the constant value β(α_2 − α_1)/(α_1 α_2).

Model M3: In this model, we assume that the input parameter ∆t is not precisely controlled by the camera. We thus introduce a new parameter δ, which represents a correction to the gate width and which will be referred to as the time delay in the sequel. The model for the photon counts reads:

    N_∆t = β (∆t + δ) − β (α_2 − α_1)(1 − e^(−α_1 (∆t+δ)))/(α_2 α_1)    (19)

Note that δ can take positive or negative values. In this model, the value of N_∆t may be nonzero as ∆t approaches zero.

Model M4: An alternative to the time delay model is to consider a white-noise type of uncertainty (e.g., anode dark current due to thermionic emission [11]) generated by the environment. We suspect that the noise may become significant at small gate widths and that its magnitude may vary according to the time period during which the experiments are made, which constitutes an extra source of uncertainty. The noise level is supposed here to be uniform and equal to ν during the entire observation period, so that the new model is given by:

    N_∆t = (β + ν) ∆t − β (α_2 − α_1)(1 − e^(−α_1 ∆t))/(α_2 α_1)    (20)

Model M5: In the last model, we suppose that the camera is subjected to a dark noise as in Model M4 and that there is a time delay as in Model M3. The model reads:

    N_∆t = (β + ν)(∆t + δ) − β (α_2 − α_1)(1 − e^(−α_1 (∆t+δ)))/(α_2 α_1)    (21)


Table 1: Proposed physical models and corresponding model parameters. The symbols ✓ and ✗ indicate that the given parameter is or is not part of the model, respectively.

Model | α_1 | α_2 | β | δ | ν | Photon counts N_∆t
M1    |  ✗  |  ✗  | ✓ | ✗ | ✗ | β ∆t
M2    |  ✓  |  ✓  | ✓ | ✗ | ✗ | β ∆t − Λ(∆t, α_1, α_2, β)
M3    |  ✓  |  ✓  | ✓ | ✓ | ✗ | β (∆t + δ) − Λ(∆t + δ, α_1, α_2, β)
M4    |  ✓  |  ✓  | ✓ | ✗ | ✓ | (β + ν) ∆t − Λ(∆t, α_1, α_2, β)
M5    |  ✓  |  ✓  | ✓ | ✓ | ✓ | (β + ν)(∆t + δ) − Λ(∆t + δ, α_1, α_2, β)
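The formulas of Table 1 are simple enough to be collected in a single function. A hedged sketch (the names are ours, not from the paper) in which M1 corresponds to the defaults a1 == a2, so that Λ vanishes:

```python
import numpy as np

def Lam(dt, a1, a2, beta):
    """Correction Lambda(dt, alpha1, alpha2, beta) of Eq. (16)."""
    return beta * (a2 - a1) * (1.0 - np.exp(-a1 * dt)) / (a2 * a1)

def counts(dt, beta, a1=1.0, a2=1.0, delta=0.0, nu=0.0):
    """Photon counts N_dt for models M1-M5 of Table 1.

    M1: a1 == a2 (Lambda vanishes); M2: a1 != a2; M3/M5 add the time
    delay delta; M4/M5 add the uniform noise level nu."""
    return (beta + nu) * (dt + delta) - Lam(dt + delta, a1, a2, beta)

# M1 reduces to the linear response beta * dt:
print(counts(0.1, beta=2.0))                    # 0.2
# M2 deviates from linearity at small gate widths:
print(counts(0.1, beta=2.0, a1=5.0, a2=20.0))
```

Note that Λ in models M4 and M5 keeps the factor β (not β + ν), consistently with Eqs. (20) and (21).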

Figure 8: Solution of (14) in the cases where α_1 = α_2 (left) and α_1 < α_2 (right). In both cases, we show the solutions for a short gate width ∆t and a long gate width ∆t_L. For the short gate width, we observe that not enough time is left for the camera to fully open. The shaded areas correspond to the value of the integral in (7). Note that the contributions to the integral from the opening and closing processes become negligible as the gate widths become large.

Remark 3.1. Additional models could be considered if the electronic system is treated as a more complicated circuit consisting of resistors, inductors, and capacitors (RLC circuit) [19]. Comparison of all the different data reduction models will be the subject of future work.

The five models with their parameters are summarized in Table 1. The number of physical parameters varies between 1 and 5. Finally, we show in Figure 8 examples of the voltage described by (14).


4. Validation process

The contemporary view of predictive computational modeling for physical events presumes that observational data are acquired for the calibration and validation of models in simple scenarios of the theory. In addition, one has to consider the prediction scenario, denoted S_p, for which one hopes to predict the quantity of interest using a model that survives the validation process. We briefly describe below the calibration, validation, and prediction processes in an abstract manner, as illustrated in Fig. 9 and based on the work presented in [2, 16, 17, 18]. Other approaches to model validation in scientific computing have been proposed in [25, 21, 15] and in the references therein.

4.1. Brief description of the proposed validation process

For the purpose of the description, we introduce the following abstract model problem that

consists in finding the solution u such that:

A(θ, S, u(θ, S)) = 0 (22)

where A is an operator representing the model, θ the model parameters, and S some possible

scenario for which the model has been determined. Here, some of the model parameters are assumed

unknown and need to be identified through calibration with respect to given data.

Calibration. Simple calibration scenarios Sc are first considered for which experiments can be run

that provide observables represented by data Dc. One then needs:

1) to introduce the likelihood density function π(Dc|θ) for the calibration scenario based on the

theory represented by (22),

2) to provide the prior probability density function πc(θ) for the parameters and probability

density function π(Dc) for the data,

3) to solve the inverse problem for the posterior pdf σc(θ|Dc) using Bayes’ Theorem (see e.g. [22]),

i.e.

σc(θ|Dc) = πc(θ) π(Dc|θ) / π(Dc)    (23)

using sampling methods such as those described in [24, 23, 12, 5, 20]. For simplicity in the

notation, we will write σc(θ) rather than σc(θ|Dc) in what follows.

4) to solve the forward problem (22) for u(σc(θ), Sp), using the prediction scenario Sp, such that

A(σc(θ), Sp, u(σc(θ), Sp)) = 0 (24)



Figure 9: The flow of the calibration, validation, and prediction processes. Prior pdf's of model parameters and pdf

of observational data π(Dc) are provided to calibrate the model for given calibration scenarios Sc. Inverse analysis

yields the posterior pdf σc(θ). This can serve as the prior for the more elaborate validation process which involves new

scenarios Sv and validation data with pdf π(Dv). Inverse analysis yields the posterior σv(θ). The stochastic forward

problems are solved using calibrated and validated data and the corresponding prediction QoI’s are compared. If

they meet a preset tolerance, the model is not invalidated and the calibrated model is used to make the predictions.

Otherwise the model is declared invalid.

and to predict the probability density function πc(Qp) associated with the quantity of interest

Qp estimated using u(σc(θ), Sp).
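The calibration steps above can be sketched on a toy problem. Everything in this sketch (the linear model u(θ, S) = θS, the noise level, the sampler settings) is a hypothetical stand-in for illustration; it is not one of the data reduction models of the previous section.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the abstract model A(theta, S, u) = 0:
# here the model output is simply u(theta, S) = theta * S.
def model(theta, scenario):
    return theta * scenario

# Calibration scenario S_c and synthetic data D_c with additive Gaussian noise
S_c = np.linspace(1.0, 5.0, 10)
theta_true, sigma = 2.0, 0.1
D_c = model(theta_true, S_c) + rng.normal(0.0, sigma, S_c.size)

def log_prior(theta):
    # step 2: uniform prior on [0, 10]
    return 0.0 if 0.0 <= theta <= 10.0 else -np.inf

def log_likelihood(theta):
    # step 1: Gaussian likelihood pi(D_c | theta)
    r = D_c - model(theta, S_c)
    return -0.5 * np.sum(r ** 2) / sigma ** 2

# step 3: random-walk Metropolis sampling of the posterior sigma_c(theta | D_c)
theta, chain = 1.0, []
lp = log_prior(theta) + log_likelihood(theta)
for _ in range(20000):
    prop = theta + 0.05 * rng.normal()
    lp_prop = log_prior(prop) + log_likelihood(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
posterior = np.array(chain[5000:])  # discard burn-in

# step 4: push posterior samples through a prediction scenario S_p
# to obtain samples of the quantity of interest Q_p
S_p = 10.0
Q_p_samples = model(posterior, S_p)
```

In the actual study a more elaborate MCMC method is used, but the structure (prior, likelihood, posterior sampling, forward propagation) is the same.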

Validation. Validation scenarios Sv are now selected, usually with the objective of checking one or

more hypotheses of the theory, to produce validation observations Dv. One may also hope that these

observations will reflect the ability of the model to deliver acceptable predictions of the quantity of

interest Qp. In the first stage of the validation, the model parameters are re-calibrated by repeating

the same steps as in the calibration process, i.e. one has:

1) to introduce the likelihood function for the validation process, π(Dv|θ),

2) to define a prior pdf for the validation process πv(θ) (one may use the posterior pdf σc(θ))

and a probability density function π(Dv) for the data,


3) to solve the inverse problem for the posterior pdf σv(θ) = σv(θ|Dv), i.e.

σv(θ) = πv(θ) π(Dv|θ) / π(Dv)

4) to solve for the solution u(σv(θ), Sp) of the forward problem on the prediction scenario,

A(σv(θ), Sp, u(σv(θ), Sp)) = 0 (25)

and to compute the probability density function πv(Qp) associated with the quantity of in-

terest Qp obtained using u(σv(θ), Sp).

Note that the mathematical model chosen for the prediction can never be validated; it can,

at best, be not invalidated for the specific validation experiments performed. The determination

of a criterion for accepting a model as “not-invalidated” is a subjective decision that requires the

acceptance of a metric to compare the predicted quantities produced by the calibration and the

validation processes, and a tolerance that we establish as an acceptable measure of the predictability

of the model. Thus, our validation process involves comparing πc(Qp) and πv(Qp). Let D denote

a metric and let γtol denote a preset tolerance. We will declare the model as not invalid if

D(πc(Qp), πv(Qp)) < γtol (26)

Several metrics could be thought of to define D in (26). For example, computing the cumulative

probability density functions

cdfc(Qp) = ∫_{−∞}^{Qp} πc(Q) dQ   and   cdfv(Qp) = ∫_{−∞}^{Qp} πv(Q) dQ    (27)

a comparison of πc(Qp) and πv(Qp) is then afforded by

D(πc(Qp), πv(Qp)) = sup_{p∈[0.1,0.9]} |cdfc^{−1}(p) − cdfv^{−1}(p)|    (28)

Such a metric is illustrated in Fig. 10 and was suggested in [9]. However, the difference between the

cdf’s is evaluated here only in the range p ∈ [0.1, 0.9] in order to exclude the content of the tails due

to possible approximation errors in the estimation of the pdf’s. An immediate consequence is that

the quantity thus computed will necessarily be equal to or smaller than the quantity one would evaluate using the whole unit interval, so that one may be less conservative in accepting models.

However, this could be easily offset by the choice of the tolerance in the rejection criterion.
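When πc(Qp) and πv(Qp) are represented by posterior samples, the distance (28) can be evaluated directly from empirical quantiles. A minimal sketch follows; the two Gaussian sample sets are hypothetical placeholders, not results from the paper.

```python
import numpy as np

def cdf_distance(samples_c, samples_v, p_lo=0.1, p_hi=0.9, n=81):
    # sup over p in [p_lo, p_hi] of |cdf_c^{-1}(p) - cdf_v^{-1}(p)|, as in (28);
    # np.quantile evaluates the empirical inverse cdf at the probabilities p
    p = np.linspace(p_lo, p_hi, n)
    return np.max(np.abs(np.quantile(samples_c, p) - np.quantile(samples_v, p)))

# hypothetical posterior samples of the QoI from calibration and validation
rng = np.random.default_rng(1)
q_c = rng.normal(2.0, 0.30, 5000)
q_v = rng.normal(2.1, 0.35, 5000)

D = cdf_distance(q_c, q_v)
gamma_tol = 0.25
not_invalidated = D < gamma_tol  # acceptance criterion (26)
```

Restricting p to [0.1, 0.9] discards the tails, exactly as discussed above.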



Figure 10: Illustration of the metric D defined in (28).

4.2. Application of the validation process to the non-reciprocity models

As mentioned above, calibration and validation processes rely on data. In our problem, data sets consist of 512 × 512 matrices representing the photon counts as a function of wavelength and

position. An example of a digital image obtained during calibration of the instrumentation is shown

in Fig. 11 and was acquired using a tungsten lamp. Since the time response of the camera is the same

regardless of the position and wavelength considered, we can restrict ourselves to the particular area

of the CCD array where the camera is most sensitive. Referring to (3)–(4), observable quantities

consist of averaged photon counts N∆t over the area shown in Fig. 11 to reduce the effect of noise

in the measurement, the effect of input radiation and wavelength-dependent quantum efficiency, and

position dependence. We also see in Fig. 11 that the measurements are fairly reproducible.

We show in Fig. 12 (left) the numbers of counts N∆t averaged using either the wavelength

range 600–700 nm or 700–800 nm. We observe that the choice of the wavelength range has little

effect on the observations. We also show in Fig. 12 (right) the value of the correction factor (non-

reciprocity) using data obtained from experiments performed at different dates. The non-linear

behavior is observable only for gate widths smaller than 1.5 µs and the results are in fair agreement

with each other for values larger than 1 µs. However, we also observe severe discrepancies (as high

as 30 percent) for small values of the gate width. These large variations are attributed to the small

number of counts at such gate widths and reinforce here the idea that a model is needed to predict

the correction factor at ∆t = 0.1 µs. We provide in Table 2 one set of data that we shall use for



Figure 11: Example of data used in the calibration of the data reduction models. (Left) Digital image obtained when

exposing the camera to a tungsten lamp. (Right) Reproducibility of experiments: the number of counts N∆t is shown

versus the wavelength λ at the distance z indicated on the right image in the case of several calibration shots.

the validation of our data reduction models.

The likelihood density functions are constructed as follows. Let Di be the observable data for

a given gate width ∆ti and let X(∆ti) be the model output (i.e. the number of counts N∆ti) for

the same gate width, i = 1, . . . , ND (ND = NC for the calibration data-set and ND = NV for the

validation data-set). Assume that the data points are statistically independent from each other

and consider a multiplicative error e between Di and X(∆ti) so that the latter is always of the

same sign as that of the former (positive), i.e.

Di = X(∆ti) exp(e), or, simply, e = ln Di − ln X(∆ti)    (29)

The error e is chosen here to be Gaussian with zero mean and variance σ2 and includes the ex-

perimental and modeling errors. The choice of the error model is based on the assumption that

the errors are proportional to the values of the observations and that the errors at the data points

are all statistically independent of each other. Results of the calibration process may indeed de-

pend on the choice of the error model (see e.g. [14]). Nevertheless, the value of σ2 should provide

here some useful information about the model adequacy. Let σexp and σmod refer to the standard

deviations associated with the experimental errors and the modeling errors, respectively. These

can be viewed as unknown parameters of the error model and can be identified along with the

other model parameters during the calibration process. In this case, the prior pdf’s of the stan-

dard deviation (equivalently the variance) need to be provided: for σexp, it is based on some prior


Table 2: Data set used in the calibration and validation of the data reduction models.

Gate width ∆t (µs)    Values of photon counts N∆t (× 10^5)    Coefficient of variation (%)

0.05 0.25773 0.25878 0.25562 0.25573 0.523

0.10 0.44993 0.45040 0.45401 0.44889 0.427

0.20 0.78555 0.78846 0.78787 0.78460 0.202

0.30 1.18851 1.19294 1.18762 1.19091 0.175

0.40 1.65956 1.65786 1.66407 1.65570 0.185

0.50 2.15651 2.16631 2.16610 2.15334 0.266

0.60 2.70125 2.70294 2.69529 2.69464 0.112

0.70 3.30536 3.30940 3.29062 3.30003 0.213

0.80 3.93567 3.94550 3.94070 3.95361 0.167

0.90 4.64449 4.63364 4.61162 4.62549 0.259

1.00 5.34160 5.33976 5.33305 5.34784 0.098

2.00 12.98473 12.99352 12.87610 12.96557 0.360

3.00 20.57746 20.61848 20.48911 20.56103 0.227

4.00 28.20695 28.28952 28.23362 28.27682 0.117

5.00 35.88943 36.03104 35.78025 35.86519 0.251

10.00 74.72930 74.72930 0.000
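As a check on the last column of Table 2, the tabulated coefficient of variation appears to be the population standard deviation of the repeated counts expressed as a percentage of their mean; this convention is inferred from the numbers, not stated in the text. For the ∆t = 1.00 µs row:

```python
import numpy as np

# photon counts N_dt (x 1e5) for gate width dt = 1.00 us, from Table 2
counts = np.array([5.34160, 5.33976, 5.33305, 5.34784])

# coefficient of variation in percent; ddof=0 (population standard deviation)
# reproduces the tabulated value 0.098 -- an inferred convention
cv = 100.0 * counts.std(ddof=0) / counts.mean()
print(round(cv, 3))  # prints 0.098
```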

Table 3: Prior probability density functions for the calibration of the model parameters.

Parameters α1 α2 β δ ν log(σ2)

Uniform pdf’s π [0.3,100.0] [0.3,100.0] [5.0,7.0] [0.0,1.0] [0.0,5.0] [-8.0,-1.0]


[Figure 12 legends. Left: λ = 600–700 nm, λ = 700–800 nm, linear fit in time, fitting parabola (τ = 0.293 µs). Right: Pecos data (Nov, Dec, Mar), EAST data, parabola least-squares fit.]

Figure 12: (Left) Numbers of counts N∆t versus ∆t, averaged using either the wavelength range 600–700 nm or 700–

800 nm. (Right) Value of the correction factor (non-reciprocity) using data obtained from experiments performed at

different dates.

knowledge of the experimental uncertainty; for σmod, the pdf should be chosen as large as possible.

However, in practice, it is in general difficult to distinguish one from the other, especially if σexp

is small. Consequently, the two components are usually combined together into the total variance

σ2 = σ2exp + σ2mod. In the present study, the value of σ2 obtained from the calibration process will provide an indication of the value of σ2mod, i.e. the modeling error, since the same experimental

data will be used for all models. Based on these assumptions, the likelihood function reads

π(D|θ) = (√(2π) σ)^{−ND} exp[ −(1/(2σ2)) Σ_{i=1}^{ND} (D̄i − X̄i)2 ]    (30)

where D̄i and X̄i stand for ln Di and ln X(∆ti), respectively. In the numerical experiments shown

below, the posterior probability density functions are obtained by sampling the parameter spaces

using the Markov Chain Monte Carlo method proposed in [7]. For the calibration process, we use

up to 1,000,000 samples to sweep the parameter spaces and for the forward problem, we consider

5,000 samples generated from the parameter posterior distributions to evaluate the pdf’s and cdf’s

of the quantity of interest. Note that the whole process takes here less than ten minutes on a recent

serial Linux machine thanks to the simplicity of the models. In addition, extensive statistical

convergence tests using different sample sizes were performed in order to ensure that the pdf’s of

the parameters and of the quantities of interest are accurately approximated.
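The log of the likelihood (30) under the multiplicative error model (29) can be sketched as follows; the count values fed in are illustrative, not the calibration data.

```python
import numpy as np

def log_likelihood(D, X, sigma):
    # log of (30): Gaussian in the log-counts, since the multiplicative
    # error model (29) gives residuals e_i = ln D_i - ln X(dt_i)
    r = np.log(D) - np.log(X)
    n = D.size
    return (-n * np.log(np.sqrt(2.0 * np.pi) * sigma)
            - np.sum(r ** 2) / (2.0 * sigma ** 2))

# illustrative counts: a model output X equal to the data gives zero
# residuals, so for fixed sigma the likelihood is maximal there
D = np.array([0.45, 0.79, 1.19])
best = log_likelihood(D, D, 0.1)
worse = log_likelihood(D, 1.1 * D, 0.1)
```

In the calibration itself, σ (or equivalently log(σ2)) is sampled along with the physical parameters, so the misfit and the inferred error level are traded off automatically.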

Depending on the models Mj , j = 1, . . . , 5, the set of model parameters θ will consist of subsets

of (α1, α2, β, δ, ν) and of the variance σ2 for the error. The prior probability density functions for



Figure 13: Manufactured data using Model M5.


Figure 14: Marginal posterior pdf’s of α1 and δ obtained using Model M5 and manufactured data.

the model parameters are in general chosen based on existing information. In the absence of such information, we choose uniform prior pdf's for the different parameters as indicated in Table 3.

4.3. Verification of the validation process with manufactured data

Before using the actual experimental data, we propose to check that the validation process

can provide adequate results. With that goal in mind, we use one of the mathematical models

presented above, namely M5, to produce observable quantities to which noise is added to generate

the calibration and validation data (see Fig. 13) similar to those from the actual experiments (see

Fig. 6(a)). These data sets will be referred to as “manufactured data” in the following. The nominal

values of the model parameters are chosen as follows: α1 = 2.0, α2 = 50.0, log(β) = 6.0, δ = 0.1

and log(ν) = 5.0. Additional multiplicative noise based on a Gaussian distribution with zero mean


[Figure 15 panel annotations: maximum cdf distances of 22% (a), 81% (b), 21% (c), 91% (d), and 16% (e).]

Figure 15: Marginal posterior cumulative density functions of the non-reciprocity at ∆t = 0.1 using (a) M1, (b) M2,

(c) M3, (d) M4, and (e) M5.


and standard deviation σ = 0.01 is considered here. In the inset of Fig. 13, we observe that the

solid line obtained from fitting the data in the linear region diverges from the data computed at

small gate widths due to the non-linear behavior of the model. The posterior pdf’s of parameters

α1 and δ are shown in Fig. 14 as an illustration of the calibration process. The prior pdf's were

chosen uniform, as in Table 3, but with smaller ranges since we know here the nominal values of

the parameters. The dashed lines were obtained using the data in the range [0.9,10.0] while the

solid lines were obtained considering the data in the whole range [0.5,10.0]. We observe that the

nominal values α1 and δ are recovered once a sufficient amount of data is used.
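The manufactured-data construction can be sketched as below. The response function here is a hypothetical stand-in with the qualitative shape of Fig. 13 (zero output before a time delay δ, then a smooth approach to a linear regime), not the actual model M5; only the noise step mirrors the text (multiplicative Gaussian, zero mean, σ = 0.01).

```python
import numpy as np

rng = np.random.default_rng(7)

def response(dt, alpha1=2.0, delta=0.1):
    # hypothetical stand-in for M5: no counts before the delay delta,
    # then an exponential approach to the linear (reciprocal) regime
    dt_eff = np.maximum(dt - delta, 0.0)
    return dt_eff + (np.exp(-alpha1 * dt_eff) - 1.0) / alpha1

gate_widths = np.array([0.2, 0.5, 1.0, 2.0, 5.0, 10.0])
clean = response(gate_widths)

# multiplicative Gaussian noise with zero mean and sigma = 0.01, as in the text
noise = rng.normal(0.0, 0.01, gate_widths.size)
manufactured = clean * np.exp(noise)
```

The multiplicative form keeps the manufactured counts positive, consistent with the error model (29) used later in the calibration.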

We show in Fig. 15 the cdf’s of the non-reciprocity computed at ∆t = 0.1 using the calibrated

models. Recall that Model M1 is a linear model and as such should be rejected. Although the

posterior cdf’s from the two data sets remain close to each other, we do observe that the variance

of the corresponding pdf’s slightly diverges when more data is used in the calibration process.

Therefore, the model can be rejected based on the latter criterion. In the case of both M2 and M4,

the cdf’s using the two data sets are far apart. These models should be invalidated. Finally, in the

case of M3 and M5, the maximum distance between the cdf’s (quantified as 21% for M3 and 16% for

M5) remains relatively small. We also see that the variance decreases as the amount of data used in

the calibration increases. Therefore, we may conclude that these two models cannot be invalidated.

Such a result is actually expected in the case of M5 since the manufactured data was generated

using that model. However, it is interesting to see that Model M3 also provides acceptable results,

in the sense that the model cannot be invalidated by the proposed methodology. We will observe

similar results in the next section using the actual data. Moreover, this study reveals that the

introduction of the white-noise into the model through parameter ν is not as important as the time

delay δ. We may conclude from this study that it is actually unclear how ∆t is measured with

respect to the opening and closing times of the camera.

5. Numerical results

The main objective in the following experiments is to assess the validation process proposed

in Section 4 and to analyze the various outcomes and possible shortcomings of the process using

actual data. Our view is that this process involves many a priori choices and that the effects of

these choices should be carefully examined before drawing any conclusions. We recall that the

quantity of interest for assessing the validity of the models Mj , j = 1, . . . , 5, is the correction factor

(non-reciprocity) at ∆t = 0.1 µs, which accounts for the nonlinear behavior with respect to the


gate width of the camera. Perhaps the most important choice is the distribution of the available data into a calibration data set and a validation data set. In this study, we select

for the calibration stage the data points corresponding to ∆ti = 0.9, . . . , 10.0 µs while we use for

the validation stage the data points ∆ti = 0.5, . . . , 10.0 µs, as specified in Table 2. Presently,

it is not clear what the best criterion is for determining whether experiments can be prescribed

to be either calibration or validation data. Determining this criterion is a topic requiring further

research. However, for the purpose of this paper, several variations on the choice of calibration and

validation experiments are used in order to assess the proposed Bayesian validation methodology.

Furthermore, it should also be highlighted that the calibration and validation scenarios are similar

in this study, the only difference being in the choice of the input parameter ∆ti. However, at this

point of the study, we assume these are the only available data, a situation one is often faced with since experimental data are usually scarce. Finally, the prior distributions for the various model parameters are provided in Table 3 and the form of the likelihood functions is as given

in (30).

5.1. Results of the validation process

In this section, the validation process is exercised exactly as described above using the cumu-

lative density function of the non-reciprocity at ∆t = 0.1 µs to assess the validity of the models

according to the criterion (28). The tolerance is arbitrarily set here to 25 percent (just for the

purpose of the discussion here, i.e. not based on particular decision-making). The estimated cdf’s

for the five models are shown in the middle column of Fig. 16. We also display in the left column

of the same figure the mean and 95 percent confidence interval (CI) of the non-reciprocity with

respect to the gate width. Although each model Mj , j = 1, . . . , 4, can be viewed as a submodel of

M5, we observe that the predictions from the validation process are very different from one model

to the other.

Model M1. In this case, the predicted non-reciprocity as a function of the gate width is simply

constant, as expected, due to the linear nature of the model; the relative distance between the cdf’s

is estimated at 20 percent. Such a result could be viewed as acceptable; however, the issue here is

that the mean value of parameter σ is fairly large, as observed in the right column of Fig. 16 and

that the variance of the posterior pdf clearly increases as the amount of data used in the calibration

of the model increases.



Figure 16: (Left) Posterior mean and 95% CI for the non-reciprocity, (middle) marginal posterior cumulative density

functions of the non-reciprocity at ∆t = 0.1 µs, and (right) posterior pdf of parameter σ obtained for models M1 to

M5 (top to bottom).



Figure 17: Marginal posterior cdf’s for α1 and α2 for M2.

Model M2. We observe that the non-reciprocity monotonically increases as the gate width tends to

zero, using either the data set [0.9, 10.0] or [0.5, 10.0]. However, the predicted reciprocities obtained

using the calibrated and re-calibrated models diverge from each other at small ∆t as the former

provides a non-reciprocity that tends to infinity while the latter predicts an almost constant non-

reciprocity as a function of the gate width. The large difference in the two results is mainly due to

the fact that the cdf of α2 significantly changes from using the first data set to using the second one

(see Fig. 17); hence the large discrepancy between the two cdf’s profiles, as seen in Fig. 16. This

could have been expected using a simple expansion analysis around ∆t = 0. In this case, we easily

derive an estimate for N∆t as β∆t α2/α1 which would allow us to conclude that the non-reciprocity

at ∆t = 0 should approximately converge to α2/α1. In other words, when α2 becomes much larger

than α1, the non-reciprocity should tend to ∞.

Model M3. The introduction of the time delay δ yields a new feature in the non-reciprocity, in that

the correction factor now vanishes at ∆t = 0. Nevertheless, we still observe a large difference in the

cdf of the non-reciprocity, perhaps due to the fact that the mean value of δ becomes far from being

negligible when using the validation data set as shown in Fig. 18. We also show in the figure the

cdf’s of the calibrated parameters α1 and α2. We observe that the mean of α1 is small and that of

α2 is large, suggesting that the camera opens at a slow rate and quickly closes.

Model M4. The results obtained for M4 seem to be a combination of the ones obtained with models

M3 and M2. In Fig. 19, we show the posterior cdf’s of ν for M4, which significantly differ with

respect to the chosen data sets. For small values of ν, M4 behaves more as M2, for large values, it

behaves as M3. However, the distance between the cdf’s of the non-reciprocity obtained using the



Figure 18: Marginal posterior cdf of δ (left) and cdf’s of α1 and α2 (right) in the case of M3.


Figure 19: Marginal posterior cumulative density functions of ν in the case of M4 (left) and M5 (right).

calibration and validation data sets is still large for this model.

Model M5. Finally, in the case of M5, we observe slightly better results than those obtained with

the other models, as shown in Fig. 16. However, it is not clear that the addition of the background noise

was necessary to improve the model. In fact, we see that the non-reciprocity as a function of ∆t

has similar profiles in this case as those obtained using model M3. We also show in Fig. 19 the

behavior of ν for M5. We observe in this case that the two cdf’s remain close to each other when

using the calibration or validation sets.

5.2. Discussion

With the criterion D(πc(Qp), πv(Qp)) < γtol and γtol = 25 percent, we would have to conclude

that M1 is the only valid model among the five. Such a conclusion would contradict our assumptions, as M1 is a linear model whereas the non-reciprocity clearly has a nonlinear


behavior with respect to the gate width. Moreover, none of the other models Mi, i = 2, . . . , 5,

satisfies the proposed acceptance criterion as the distance between the cdf’s of the non-reciprocity

is in all cases too large.

At this stage of the analysis, the main question that we should ask is why the validation process

yields such a deceptive conclusion. For model M1, the answer is relatively straightforward. The

modeling error, embodied by the standard deviation σ, is large, meaning that the model is able to

reproduce the calibration data simply by increasing the modeling error. In other words, the misfit

between the observations and the quantities predicted by the model is too large for the model to be

acceptable. In the case of the other models, the modeling error remains small indicating that the

models are able to predict relatively well the non-reciprocity at large gate widths, i.e. ∆t ≥ 0.5.

If we look closely at the values of σ obtained using the data at ∆ti = 0.5, . . . , 10.0 µs for each

model Mi, i = 2, . . . , 5, we observe that these are slightly smaller for M3 and M5 than for M2 and

M4. This clearly indicates that the introduction of the time delay δ into the model reduces the

modeling error, unlike the white-noise ν. This observation is consistent with the results obtained

using the manufactured data. The fact that the difference between the cdf’s is large suggests that

the solution of the inverse problem is still changing a lot when going from the calibration data to

the validation data. The question here is whether the calibration data set includes enough data

points to properly identify the parameters of the models. We observe for instance in Fig. 12 that

the data points corresponding to ∆t = 0.9, . . . , 10 µs only describe the beginning of the nonlinear

behavior of the correction factor. We show in Fig. 20 the evolution of the cdf of the non-reciprocity

when the parameters of models M3 and M5 are calibrated by adding one data point at a time

from the initial calibration set. From the results in the figure, it is clear that the conclusions from

the validation would be different if we had chosen different data sets such as [0.5, 10.0] and

[0.6, 10.0], or even [0.7, 10.0] in the case of M5. It follows that another issue is to decide how to

split the available data into calibration and validation data sets. It appears that this choice may

be crucial and it would be extremely useful to devise methods that would help confirm that the

choice made is actually an appropriate choice.

The present application of the validation process is a particular case; indeed, the calibration

and validation scenarios are similar and the observed data and final quantity of interest are simply

the non-reciprocity evaluated at different gate widths. Nevertheless, if one considers the proposed

validation process as a general procedure, it should also work for the current validation of the data

reduction model. Examining the results shown above just reinforces the idea that many systematic



Figure 20: Evolution of the cumulative distribution functions of the non-reciprocity predicted using M3 (left) and M5 (right), calibrated by adding one data point at a time from the initial calibration set.

sanity checks need to be developed in order to design a robust validation process. This is work in progress and a topic of future research, as model validation for scientific computing is by no means a solved problem.

5.3. Comparisons with actual experimental data

When calibrating data reduction models, it is generally preferable that calibration data be collected in the same regimes as those of the actual experiments. In that case, it would be unnecessary to validate the models since they would already be calibrated for the regime of interest.

However, it may happen, as is the case here, that the conditions of the actual experiments are too difficult to reproduce by other means; e.g., the tungsten lamp is much dimmer than the radiation

produced in the shock tube. We nevertheless choose here to compare the numerical predictions

of the photon counts using our models with calibration measurements obtained for ∆t ≤ 0.5

µs, although we are aware that the accuracy of the measuring instruments at small gate widths

is questionable (see Fig. 12) and that these data were deliberately left aside in the calibration

process of the models. The results are shown in Fig. 21 where all data points corresponding to

∆t = 0.5, . . . , 10 µs are used for the calibration of the models.

We observe that the data lie within the 97.5 percent CI only in the case of model M1. This was

actually expected as we already know that the model exhibits large uncertainties. Uncertainties

from the other models are in general much smaller and the data points at small gate widths do

not always lie within the 97.5 percent CI. It is nevertheless noticeable that models M3 and M5

predict more accurately the number of photon counts at ∆t = 0.1 µs than models M2 and M4 do.
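The coverage check described above can be sketched as follows, assuming posterior-predictive samples of the photon counts are available at each gate width. We interpret the 97.5 percent bound as the upper end of a central 95 percent credible interval; that interpretation, like the function and parameter names, is our assumption rather than a detail stated in the text.

```python
import numpy as np

def within_credible_interval(pred_samples, observed, level=0.95):
    """Flag which observations fall inside the central credible interval
    of the posterior-predictive samples at each gate width.

    pred_samples: array of shape (n_samples, n_gatewidths)
    observed: array of shape (n_gatewidths,)
    Returns a boolean array of shape (n_gatewidths,).
    """
    alpha = (1.0 - level) / 2.0
    # Central interval: e.g. the 2.5th and 97.5th percentiles for level=0.95.
    lo = np.percentile(pred_samples, 100.0 * alpha, axis=0)
    hi = np.percentile(pred_samples, 100.0 * (1.0 - alpha), axis=0)
    return (observed >= lo) & (observed <= hi)
```

Applied to the five calibrated models, such a check would reproduce the observation above: a wide predictive interval (as for M1) covers the small-gate-width data, while tighter intervals (M2, M4) may fail to.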


Figure 21: Predicted number of photon counts N∆t using the calibrated models Mi, i = 1, . . . , 5.



Figure 22: Comparison of the photon flux obtained from experiments involving a pulsing lamp and that predicted

using model M3.

We may conclude that parameter δ (the time delay) plays a fundamental role in these models, in

other words, that the input parameter ∆t is not necessarily well controlled for small openings of

the camera. However, it is still an open question to decide which of the two models M3 and M5 is

the most appropriate.

Since the tungsten lamp is rather dim and does not allow one to understand how the camera opens and closes, a new calibration setting has been considered during the course of the research project. The main idea was to use a lamp pulsing repetitively on a very small time scale to record the photon flux with respect to time and observe the opening and closing of the spectrometer slit. Preliminary

results from one measurement at ∆t = 0.5 µs are summarized in Fig. 22. This new set of data

constitutes an invaluable source of information: 1) the gate of the camera seems to open slowly and to shut quickly; 2) the input parameter ∆t does not seem to be precisely controlled at small

gate widths; 3) there seems to be a long period during which the camera collects photons before

the gate fully opens, a phenomenon that is not directly described by our models. Our plan in the

near future is to improve this new experimental setting and to use the data as a new validation

data set for the validation of our models. By then, it may become necessary to explore new models of the camera, based for instance on the concept of an RLC circuit, should new evidence show that the current models fail to correctly predict the non-reciprocity at the gate width of interest.
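As an illustration of the RLC idea, the gate transmission could be modeled as the step response of a second-order system. The parameterization below (natural frequency omega0, damping ratio zeta) is a hypothetical sketch of that concept, not a model proposed in this work, and any values would have to be calibrated against the pulsing-lamp data.

```python
import numpy as np

def rlc_gate_response(t, omega0=1.0, zeta=0.5):
    """Illustrative step response of an underdamped second-order (RLC-like)
    system, a candidate shape for the camera-gate opening transient:

        g(t) = 1 - exp(-zeta*omega0*t) * (cos(wd*t) + (zeta*omega0/wd)*sin(wd*t)),

    with wd = omega0*sqrt(1 - zeta**2), for 0 <= zeta < 1.
    g(0) = 0 (gate closed) and g(t) -> 1 (gate fully open) as t grows.
    """
    t = np.asarray(t, dtype=float)
    wd = omega0 * np.sqrt(1.0 - zeta**2)
    g = 1.0 - np.exp(-zeta * omega0 * t) * (
        np.cos(wd * t) + (zeta * omega0 / wd) * np.sin(wd * t)
    )
    # The gate is closed before the trigger.
    return np.where(t >= 0.0, g, 0.0)
```

The effective exposure over a gate of width ∆t would then be the integral of g over [0, ∆t], which naturally produces a nonlinear correction at small gate widths.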


6. Conclusions

This paper deals with the validation of data reduction models used in the acquisition of radiative

intensities by cameras during shock tube experiments. The objectives were to show that the

validity of data reduction models should be assessed in a manner similar to that used for the

validation of mathematical models of physical phenomena, to develop simple data reduction models

to convert photon counts into radiative intensities, to apply the validation process described in

Section 4 to these models, and to analyze our confidence level in the models as the result of this

process. A clear advantage of the proposed validation process is that it allows one to quantify directly the uncertainties in predicted quantities of interest, namely the correction factor with respect to the

number of counts at small gate widths in this study. We have developed in this work five different

models of various complexity that are supposed to describe the opening and closing of intensified

charge-coupled device cameras and analyzed their capability to predict the correction factor. The

preliminary results for the assessment of this validation methodology have highlighted the need for

a robust criterion to distinguish between calibration and validation experimental data. Therefore,

a definitive conclusion cannot yet be provided to accept or reject the various prediction models.

Furthermore, the assessment has shown that rigorous physical checks of results are required. This

was clearly highlighted with the validation of the simple linear model. Despite knowledge that this

model lacks sufficient physics to offer accurate predictions over the full domain of gate widths, the

model passed the set criterion.

This study has allowed us to identify and put forth a number of issues that would need to be

addressed to develop a robust validation process. The current process involves the calibration of the

model parameters using two different data sets. One issue is thus to decide whether the available

data provide sufficient information to properly calibrate the parameters and when the solution of

the calibration problem is sufficiently accurate. Another issue is to assess whether the data is

correctly split into calibration data and validation data, in other words, whether the selection of

scenarios is adequate to decisively compare the cumulative distribution functions of the quantity of

interest so as to validate or invalidate one of the hypotheses about the models. These theoretical

issues will be the subject of future work.

Acknowledgments. The support of this work by the Department of Energy under Award Number

DE-FC52-08NA28615 is gratefully acknowledged. In addition, the authors would like to thank

Drs. Bogdanoff, Clemens, Cruden, Jagodzinski, and Varghese for their insight during many fruitful


discussions. They are also sincerely grateful to Mr. Martinez at NASA for his help with the

experimental calibration procedure. In fact, the modelers were fortunate to be given access to the

raw data, to be allowed to visit the EAST facility, and to interact with the personnel there, as

it constituted an invaluable opportunity, if not a requirement, to develop and validate the data

reduction models.
