
Safety prediction in nuclear power


The practical operational safety of nuclear objects is of fundamental importance for assessing the future prospects under discussion and selecting a strategy for the development of nuclear power. It is shown that the methods currently being used for making safety predictions do not contain an analysis of the unavoidable errors and uncertainties of the models used or of the initial and boundary conditions under which the physical processes that develop into serious accidents arise and develop. It is proposed that the method of quantile estimates of the uncertainties, which is free of the drawbacks of the Monte Carlo method and which increases the reliability of safety predictions in nuclear power, be used.

Considering life expectancy and the threat of serious accidents, the safety of operating nuclear objects is of fundamental importance for making assessments of the prospects and selecting a strategy for the development of nuclear power [1]. The methods which are currently used to make safety predictions are based on stationary and dynamic deterministic models which generalize past experience and, as a rule, do not contain an analysis of the unavoidable errors and uncertainties in the models or in the initial and boundary conditions under which the physical processes that develop into serious accidents arise and develop. In contrast to airplane or rocket building, the development of full-scale prototypes to investigate the safety of nuclear systems experimentally and to eliminate the inaccuracies in the theory and computational models is still considered too expensive. For this reason, safety predictions in nuclear power must be based on a generalization of the limited past experience in building and operating its objects.

To increase their reliability, safety predictions must explicitly include an analysis of the possible uncertainties of the models used and of the external events which influence the risk associated with this field of human activity. However, analysis of the uncertainties became a subject of systematic study only relatively recently, in attempts to find validated methods for explicitly taking into account the incompleteness of our knowledge in predictive assessments of the safety of objects in nuclear power.

The objective of the present article is to analyze some acceptable methods for predicting the safety of objects in nuclear power and to demonstrate the need to improve the existing approaches to simulating the physical processes so that they explicitly take account of the incompleteness of our knowledge of the initial and boundary conditions under which processes that turn into serious accidents can arise and develop. The analysis is based on results obtained using the method of quantile estimates of the uncertainties [2–4]. This method encompasses the range from almost exact to sparse knowledge, allowing the ratio of the width of the 90% confidence interval to the median of the distribution to vary over several orders of magnitude. The method operates with high-entropy logarithmic distributions, ranging from log-uniform to log-normal. It does not contain the methodological errors that are inherent to the Monte Carlo method and is applicable in analytical and finite-difference models.
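As a rough illustration of the underlying idea (this is a sketch, not the published algorithm of [2–4]): for a quantity that is a product of independent log-normally distributed factors, the variances of the logarithms add, so a 90% confidence interval propagates analytically, without any random sampling. The numerical values below are chosen for illustration only.

```python
import math

def ci90_lognormal(median: float, sigma_ln: float) -> tuple:
    """90% confidence interval of a log-normal quantity.

    The half-width in log space is 1.645 standard deviations
    (the 5% and 95% quantiles of a normal law).
    """
    half = 1.645 * sigma_ln
    return median * math.exp(-half), median * math.exp(half)

def propagate_product(medians, sigmas_ln):
    """Median and log-sigma of a product of independent log-normal
    factors: medians multiply, log-variances add."""
    median = math.prod(medians)
    sigma = math.sqrt(sum(s * s for s in sigmas_ln))
    return median, sigma

# Hypothetical example: two factors known to within log-sigmas 0.03 and 0.05.
m, s = propagate_product([870.0, 1.0], [0.03, 0.05])
lo, hi = ci90_lognormal(m, s)
```

No pseudorandom numbers are involved, which is the sense in which such an analytic propagation avoids the sampling noise discussed later in the article.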

Possibilities of Predicting the Characteristics of Nuclear Reactors. Any analysis of the safety of an object of nuclear power must include an analysis of the uncertainties associated with the expected characteristics of the cores of, first and foremost, conceptually new nuclear reactors, since experimental information about them is either limited or nonexistent.

Atomic Energy, Vol. 102, No. 2, 2007. UDC 621.039.58. A. N. Rumyantsev (Russian Science Center Kurchatov Institute). Translated from Atomnaya Énergiya, Vol. 102, No. 2, pp. 80–86, February 2007. Original article submitted March 23, 2006. 1063-4258/07/10202-0094 © 2007 Springer Science+Business Media, Inc.

It is shown in [5] that for the fast BN-1600 reactor the sources of the constant (cross-section) component of the errors in calculating Keff, the reactivity ∆K, and the production rate of new fuel (KV-1) are the uncertainties in the capture and inelastic-scattering cross sections of ²³⁸U and the capture and fission cross sections of ²³⁹Pu and ²⁴¹Pu. The total errors are estimated to reach 2.4% for Keff, up to 21% for the reactivity ∆K (or 0.5% for ∆K/Keff), up to 33% for the production rate of new fuel (KV-1), and up to 18% at 450 days.

The errors present in the nuclear data place substantial restrictions on the prediction of the dynamical characteristics of known and new types of nuclear reactors. Even for ²³⁵U, the nuclide studied in greatest detail, the total relative yield of delayed neutrons is β(0) = (6.87 ± 0.3)·10⁻³ in the best-understood energy range, thermal neutrons with energy close to zero [6], i.e., the possible error is approximately ±4.5%. Here and below, the errors and uncertainties are presented as values of the half-width of the 90% confidence interval. The errors in the delayed-neutron fractions over groups, Ai = βi/β, are approximately ±10% and do not depend on the group number i. The error in the decay constant λi of the delayed-neutron groups is approximately ±2.5%; it increases to ±(3–4)% for the second, third, and fourth groups. In [6], it is estimated that, overall, the possible range of uncertainty in the near-critical range of the reactivity is approximately ±8%.

The possibilities of making computational-theoretical predictions of Keff and other parameters of the cores of existing and conceptually new nuclear reactors are limited not only by the errors in the constants and models but also by the unavoidable errors in determining the initial fuel composition, estimated to be at least ±1%. The computational-theoretical models, and the computer program systems which implement them, for predicting the characteristics of the spatially distributed neutron-physical and thermohydraulic processes, including processes which are important for safety, require verification against all accessible experimental data. The results of calculations of the local power changes occurring during the motion of a manual control rod, and the corresponding experiment in the No. 2 unit of the Smolensk nuclear power plant with an RBMK-1000 reactor [7], show that the maximum absolute error in the relative power of one cell (channel) reaches ±8%. It is also concluded in [7] that in order to test models and computer programs, aside from an extensive set of experimental data, an analysis of the sensitivity of the computational results to the errors in the initial data is needed for different types of transient processes, so as to find the most significant parameters, i.e., those whose specification errors can result in the largest deviations from experiment.

On this basis, it can be concluded that an analysis of the uncertainties must be included in the existing and newly developed computational-theoretical models for predicting the characteristics of nuclear reactors, and that it is necessary to switch from deterministic to probabilistic models, which more fully reflect the reality of the physical world. An effective method for analyzing small and large uncertainties is the method of quantile estimates [2–4], which, as an approximate, calibrated, analytical method, makes it possible to estimate the 90% confidence intervals of the uncertainty in the results of predictions and to reveal the most important parameters, i.e., those responsible for the largest uncertainties and requiring additional experimental study.

Prediction of the Development of Nonstationary Processes. Any analysis of nonstationary processes, including analysis of the development of initial events into an accident, is based on a study of the characteristics of nonstationary models of such processes. The sources of the uncertainties are the sparseness of our knowledge about the initial and boundary conditions under which a nonstationary process is initiated, the incompleteness of our knowledge of the mechanisms of the processes themselves, and the limited nature of the experimental information on such processes. The method of quantile estimates makes it possible to analyze the confidence intervals of the results of nonstationary processes, including accidents. Knowledge of the variance and confidence intervals of the results at each moment of time analyzed can be used to determine the time after which further simulation of the nonstationary process becomes physically pointless because the error in the results is too large. The same knowledge can be used to determine the parameters and processes which contribute the largest uncertainty and on which experimenters must focus their attention.

An example illustrating the need to take into account the errors in the initial information, even for strictly deterministic nonstationary problems of classical mechanics, is given in [8]. This example has a direct bearing on an unobservable process in the development of an accident with a loss of control by the personnel of a nuclear power plant, which occurs, for example, when heat removal from the core is disrupted. The problem is to predict, as a function of time, the coordinate of a sphere moving without friction, between two rigid walls, in a channel of a certain length and undergoing ideal reflections from the end walls. When the unavoidable errors in determining the initial coordinate of the sphere and its initial momentum are taken into account, after some time has elapsed the error in determining the coordinate of the sphere becomes equal to the length of the channel. Subsequently, one can only say that the sphere is located somewhere in the channel. The conclusion is obvious: any prediction problem must take account of the errors in the initial and boundary conditions even when the nonstationary process is strictly deterministic.
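This effect is easy to reproduce numerically. The sketch below is an illustration, not the treatment in [8]; the channel length, initial velocity spread, and ensemble size are chosen arbitrarily. An ensemble of spheres whose initial velocities differ by ±0.1% is propagated exactly (the reflections are folded with a triangle wave); the early-time spread of coordinates is tiny, but after enough bounces the ensemble covers the whole channel.

```python
import math

L = 1.0  # channel length (arbitrary units)

def position(x0: float, v: float, t: float) -> float:
    """Coordinate of a frictionless sphere bouncing elastically between
    walls at 0 and L: fold the free motion x0 + v*t with a triangle wave
    of period 2L."""
    y = math.fmod(x0 + v * t, 2.0 * L)
    if y < 0.0:
        y += 2.0 * L
    return y if y <= L else 2.0 * L - y

# Ensemble with a +/-0.1% spread in the initial velocity (illustrative).
n = 1001
velocities = [1.0 + 2e-3 * (i / (n - 1) - 0.5) for i in range(n)]

def spread(t: float) -> float:
    """Width of the ensemble of predicted coordinates at time t."""
    xs = [position(0.5, v, t) for v in velocities]
    return max(xs) - min(xs)

early, late = spread(1.0), spread(1.0e4)
```

After the velocity spread times the elapsed time exceeds the channel length, the predicted coordinate is effectively uniform over the channel: the deterministic model has lost all predictive power.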

Prediction of the Temperature of Fuel-Element Cladding with Partial Rupture of the Circulation Loop in RBMK. This example is based on quantitative estimates obtained using the analytical model described in [9]. That work examines the case of the adiabatic redistribution of accumulated heat along a fuel element with complete loss of heat removal as a result of an accident with a leak in the circulation loop. Estimates of the temperature of the fuel-element cladding, which can reach 1400 K, are presented, and it is concluded that a steam-zirconium reaction followed by unsealing and destruction of a fuel element is unavoidable. The uncertainties in the prediction of the cladding temperature are not analyzed in [9].

The method of quantile estimates was used to calculate the range of uncertainties of the maximum temperature of the fuel-element cladding on the basis of the model adopted in [9], using the same numerical values but supplemented with moderate estimates of the possible ranges of their uncertainties. The final estimate of the cladding temperature with a 90% confidence interval is T = 1400 (+236, −213) K. The full width of the 90% confidence interval of the cladding temperature is 1636 − 1187 = 449 K. Since the rate of the steam-zirconium interaction is a nonlinear function of the temperature and starts to increase rapidly at T > 1200 K, taking the confidence interval into account it can be concluded that, under their model assumptions, the authors of [9] are correct and a steam-zirconium reaction is unavoidable.

However, when estimating the temperature of fuel-element cladding it is necessary to deal with the unavoidable uncertainties in our knowledge of the coefficient of heat transfer through the gas gap between the cladding and a fuel pellet. As indicated in [9], this coefficient depends strongly on the burnup and on the actual state of the fuel pellet (cracked or retaining its shape). In addition, the uncertainties in our knowledge of the ranges of the other parameters in the analytical model must be dealt with. When the intervals of the possible values of the parameters of the analytical model are widened up to physically validated limits, which preserve the physical meaning of the unobserved process being simulated, the final estimate of the cladding temperature is T = 1400 (+800, −600) K [4]. The full width of the 90% confidence interval is 2200 − 800 = 1400 K, which is comparable to T = 1400 K itself. With such large uncertainties, it is hardly possible to assert anything definite (yes or no) about the possibility of the development of a steam-zirconium reaction. It is only possible to conjecture that a steam-zirconium reaction will start with probability P = (1400 + 800 − 1200)/1400 ≈ 0.7 and will not occur with probability 0.3. This can be interpreted as a branching point of the process. When this point is reached, subsequent computational-theoretical simulation must explicitly take account of the possible paths which the process can take.
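The branching probability quoted above follows from treating the temperature as spread over its 90% confidence interval; a minimal sketch, assuming (as the arithmetic in the text implies) a uniform spread over that interval:

```python
def exceedance_probability(lo: float, hi: float, threshold: float) -> float:
    """Probability that a quantity uniformly spread over [lo, hi]
    exceeds a given threshold."""
    if threshold <= lo:
        return 1.0
    if threshold >= hi:
        return 0.0
    return (hi - threshold) / (hi - lo)

# 90% confidence interval 800-2200 K; steam-zirconium onset near 1200 K.
p_reaction = exceedance_probability(800.0, 2200.0, 1200.0)
```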

The estimates presented can be extended to the case of large leaks in VVÉR-1000 vessel reactors, for which steaming of the core also becomes possible with adiabatic equalization of the fuel-element temperature as a result of the redistribution of the accumulated heat, including that due to the residual power release.

It can be concluded from this analysis that it is necessary to take into account explicitly the uncertainties in the initial data and in the models used to predict scenarios for the development and consequences of an accident. If such uncertainties are ignored, any predictions based on a deterministic approach can be helpful, but they will be inadequate to the realities of the physical world and could be physically baseless. This conclusion extends to all known computer codes used for predicting emergency situations, irrespective of how well the results obtained agree, whether randomly or systematically, with the experimental results. In fact, all existing codes employ a deterministic approach without analyzing the uncertainties; simple examples have demonstrated the fallacy of this approach.

Statistical Analysis of the Uncertainties in Thermohydraulic Calculations Performed with Deterministic Codes. This example of the usefulness of the method of quantile estimates is based on the arguments and results presented in [10] and previously analyzed in [4]. Reference [10] examines the influence of the uncertainties in the initial parameters on the simulation of nonstationary thermohydraulic processes. The analysis is performed using statistical methods, with the initial uncertainties set as random quantities with a known distribution law (GRS, Germany; IPSN, France; ENUSA, Spain). The authors of [10] implemented the procedure for performing a statistical analysis of the uncertainties in the thermohydraulic calculations on the basis of the GRS method [11], upgraded the RELAP5/MOD3.2 code [12], and analyzed the uncertainties for certain problems. The objective of the calculations was to estimate the confidence interval of a random variable x (with no information on its distribution function), obtained with a prescribed reliability γ and confidence probability β.


Citing [13], the authors concluded that in practice approximately 100 calculations must be performed in order to assert with probability 95% that at least 95% of all possible realizations of x fall within the range from Xmin to Xmax. Since the probability of finding x in this interval is 0.95 × 0.95 ≈ 0.9, the interval obtained can be identified with the 90% confidence interval. The authors present, in particular, the results of calculations of the maximum temperature of fuel-element cladding in experiments on the introduction of excess reactivity in a model RBMK-1000 fuel assembly.

However, these results cast well-founded doubt on the technique which the authors chose for the statistical analysis of the uncertainties. Specifically, the description of the investigations of the model of the RBMK-1000 fuel assembly indicates that in the basic calculation, with the uncertainties held constant, the maximum temperature of the fuel-element cladding was T = 870 K. In the deterministic model used, this value must correspond to the mathematical expectation T0 of the random variable T, which is taken to be the maximum temperature of the fuel-element cladding. However, the computed 90% confidence interval of the temperature is estimated to be 827–891 K. This interval corresponds to the mathematical expectation T0 = 859 K, with an error of not more than 1 K for normal, uniform, and log-uniform distributions of the maximum cladding temperature on the indicated interval. The value computed with the uncertainties held constant is thereby 11 K higher than the value following from the results obtained with variation of the uncertainties. This difference demonstrates the unsoundness of the entire approach of analyzing the errors of a prediction with deterministic models by varying the uncertainties.

An arbitrary statistical sample can claim some reliability only if the average value of T which it gives is close to the mathematical expectation T0, i.e., if their difference ∆T is much smaller than the standard deviation of the sample σ(T). For the temperature interval indicated by the authors, σ(T) ≈ 19 K, and the difference of the mathematical expectations is ∆T = 11 K, which clearly does not satisfy the requirement ∆T << σ(T). This can be taken as evidence of the statistical untenability of the sample obtained from 100 calculations.
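The figures above are easy to check; a quick sketch, assuming the published 827–891 K interval is the 90% confidence interval of a normal law:

```python
lo, hi = 827.0, 891.0       # 90% confidence interval from [10], K
t_deterministic = 870.0     # basic calculation with uncertainties fixed, K

t0 = 0.5 * (lo + hi)        # midpoint = expectation for symmetric laws
sigma = (hi - lo) / (2 * 1.645)  # 90% width of a normal law is 2*1.645*sigma
delta = t_deterministic - t0
```

The difference of 11 K is of the same order as the 19 K standard deviation, not much smaller than it, which is the statistical-untenability argument made in the text.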

The statistical untenability of the sample obtained is probably due to the unavoidable intrinsic "noise" of the pseudorandom number generators used to vary the uncertainties. The relation given in [13] and used by the authors of [10] to determine the required number n of calculations, nβ^(n−1) − (n − 1)β^n = 1 − γ, assumes that each value of x determined by a computational experiment is an independent random variable. This means that each such quantity should be simulated by its own pseudorandom number generator and that each generator must be "ideal," i.e., it should have no intrinsic noise and should reproduce exactly a uniform distribution of the random variable on the interval [0, 1]. When generator noise is present, the statistically correct size n of the sample must be determined from the condition that the effect of the noise on the resulting sample is minimal. Let M(R) be the average value of a series of R generated pseudorandom numbers. Then the intrinsic noise of the generator can be determined as the deviation σ(x) of M(R) from the mathematical expectation of a random quantity uniformly distributed on the interval [0, 1], which equals exactly 0.5:

σ²(x) = (M(R) − 0.5)².

As shown in [2, 3], the intrinsic noise of a pseudorandom number generator using the best-known generation algorithm [14] is at least 2.8% in a series of 100 such numbers. Other known pseudorandom number generators, for example algorithm No. 133 [15], produce even more noise. This noise decreases to 0.8% if the series contains at least 1000 pseudorandom numbers and to 0.25% if the series contains at least 10,000 numbers. The absolute standard error of the sample obtained in [10] was σ(T) ≈ 19 K, which corresponds to the relative standard error σ(T)/T ≈ 19/859 ≈ 0.022, or 2%. This error is comparable to the intrinsic noise of pseudorandom number generators. The noise can be reduced to statistically negligible levels only if the number of pseudorandom numbers in a series is increased to 10,000 or more. However, even then, all known pseudorandom number generators introduce unavoidable bias errors into the mathematical expectations; these errors were analyzed in detail in [16].
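The Wilks-type relation from [13] can be checked directly. The sketch below finds the smallest sample size n for which n·β^(n−1) − (n − 1)·β^n ≤ 1 − γ; for β = γ = 0.95 it gives n = 93, the "approximately 100 calculations" cited above:

```python
def wilks_sample_size(beta: float, gamma: float) -> int:
    """Smallest n such that, with probability gamma, at least a fraction
    beta of all realizations lies between the sample minimum and maximum
    (two-sided distribution-free tolerance interval)."""
    n = 2
    while n * beta ** (n - 1) - (n - 1) * beta ** n > 1.0 - gamma:
        n += 1
    return n

n_required = wilks_sample_size(0.95, 0.95)
```

Note that this n is exact only under the independence and ideal-uniformity assumptions criticized in the text; generator noise and reuse of a single generator invalidate it.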

97

Page 5: Safety prediction in nuclear power

However, aside from the noise problem, there is another problem in simulating sequences of random numbers. As shown in [10], the total number of closure relations (empirical relations) used in the RELAP5/MOD3.2 code reaches 28. This means that 28 independent pseudorandom number generators must be used in each calculation. The actual number of generators used in [10] is not indicated. If one pseudorandom number generator is used, as is usually the case in practice, the principle of independence of the values of the random variables, on which the method of variation of the uncertainties is based [13], is violated. The errors which are then introduced into the resulting sample are unavoidable systematic errors. According to the data in [2, 3], using a single pseudorandom number generator to simulate only two independent random variables with the same distributions and mathematical expectations ranging from 10⁻⁴ to 10³ results in an unavoidable relative error ranging from 1.5 to 6.6% in the mathematical expectation of their product. This range depends on the values of the mathematical expectations of the variables, the variances of their initial distributions, and the length of the series of pseudorandom numbers, which reached 62,000 in the calibration calculations. The difference between the sample average and the mathematical expectation obtained in [10], ∆T = 11 K, corresponds to a relative difference ∆T/T0 = 11/870 ≈ 1.3%, which falls within the indicated range of unavoidable systematic errors arising when a single pseudorandom number generator is used, and correlates with the results of [16].

The range of possible values of the maximum temperature of the fuel-element cladding obtained using the deterministic model implemented in the RELAP5/MOD3.2 code can be determined by the method of quantile estimates without performing any calculations in which the uncertainties are varied. Since the value T0 = 870 K is reached with transcritical heat emission to a vapor film, the error in determining this temperature is mainly due to the errors in the empirical relations for the heat-emission coefficient. We set approximately T ≈ Ts + q/α, where Ts is the water saturation temperature, α is the transcritical heat-emission coefficient, and q is the effective heat flux. Neglecting the uncertainties in the saturation temperature and in the nonstationary process, and using the method of quantile estimates, the variance of the cladding temperature can be set equal to the variance of the transcritical heat-emission coefficient: V(lnT) ≈ V(lnα). Using the relation given in [12] for calculating the coefficient α of transcritical heat emission in the RELAP5/MOD3.2 code, together with other relations from [17], we obtain ±5% for the estimated half-width of the 90% confidence interval of its error; this corresponds to a standard deviation of the logarithm of this coefficient σ(lnα) ≈ 0.03. The half-width of the 90% confidence interval of the cladding temperature is then ±29 K, with limits from 841 to 897 K, instead of the sample limits from 827 to 891 K obtained in [10].
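The conversion between a 90% half-width and a log-sigma used here can be sketched as follows; the 1.645 factor is the 95% quantile of a normal law, equivalent to the rule (used later in the text) that the full 90% width is 3.2σ to within 5%:

```python
# Half-width of the 90% confidence interval of alpha, as estimated
# from the closure relations of [12, 17]: +/-5%.
half_width_rel = 0.05

# The 90% interval of a normal (or log-normal) law is +/-1.645 sigma,
# so the corresponding sigma of ln(alpha) is:
sigma_ln_alpha = half_width_rel / 1.645

# Cross-check against the 3.2-sigma rule for the full 90% width.
ratio = (2.0 * half_width_rel) / sigma_ln_alpha
```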

In a subsequent work by essentially the same group of authors [18], the final result of the computational investigations of the maximum cladding temperature for the two accident processes studied consists of estimates of the confidence intervals of the temperature, whose mathematical expectation lies in the range 888–920 K. The relative width of the computed 90% confidence intervals (interval width divided by the mathematical expectation) does not exceed 0.033. Since for a wide class of high-entropy probability density distributions the total width of the 90% confidence interval is 3.2σ, with an error not exceeding 5%, where σ is the relative standard error [2, 3], the estimated relative value of σ for all the calculations performed does not exceed 0.01, or 1%. However, the coefficient of transcritical heat transfer α, just like the other closure relations, including that for the critical heat flux, has an estimated half-width of the 90% confidence interval of the error of at least ±5%, which corresponds to σ ≈ 0.03. The absolute error in the relative power of one cell (channel), determined experimentally for the No. 2 unit of the Smolensk nuclear power plant with an RBMK-1000 reactor [7], reaches ±8%, which corresponds to σ ≈ 0.05. To sum up, the lower estimate of σ, even neglecting other factors affecting the uncertainty of the cladding temperature, is σ ≈ √(0.03² + 0.05²) ≈ 0.058, almost six times greater than the value obtained computationally in [18]. Since the physically predetermined width of the confidence interval is almost six times greater than the computed values, the computational experiment could not show what it was supposed to show: it could not give a prediction of the physically validated uncertainties of the maximum temperature of the fuel-element cladding.
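The combination of the two error sources above is a root-sum-of-squares of independent relative contributions; a quick check:

```python
import math

sigma_alpha = 0.03   # transcritical heat-transfer closure relation, relative
sigma_power = 0.05   # channel power, from the RBMK-1000 experiment [7]

# Independent relative errors combine in quadrature.
sigma_total = math.sqrt(sigma_alpha ** 2 + sigma_power ** 2)

# Compare with the <=1% relative sigma obtained computationally in [18].
understatement = sigma_total / 0.01
```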

The estimates presented above show that the entire approach of analyzing the errors of a prediction with deterministic models by varying the uncertainties in the initial data using pseudorandom number generators is untenable in practice. One possible solution to the problem of analyzing uncertainties with the RELAP5/MOD3.2 code is to upgrade the code by explicitly including in its computational algorithm the approximate analytical method of quantile estimates, which has neither the drawbacks of the Monte Carlo method nor the demonstrated drawbacks of the approaches to analyzing the uncertainties based on the GRS method [11] and similar methods.


REFERENCES

1. "Strategy for the development of nuclear power in Russia in the first half of the 21st century," Byull. Tsentra Obshch. Inf. At. Énerg., No. 6, 4–17 (2000).
2. A. N. Rumyantsev and Yu. A. Ostroumov, "Method of quantile estimates of the uncertainties in the analysis of frequencies, development, and consequences of rare and improbable accidents," Report No. 210.06-04/01, I. V. Kurchatov Institute of Atomic Energy (1993).
3. A. N. Rumyantsev and Yu. A. Ostroumov, "Method of quantile estimates of the uncertainties in the analysis of frequencies, development, and consequences of rare and improbable accidents," Preprint IAÉ-6295/15 (2003).
4. A. N. Rumyantsev, "Prediction of the development of nuclear power and analysis of the uncertainties in predictive estimates," Preprint IAÉ-6296/15 (2003).
5. V. B. Glebov, G. G. Kulikov, V. V. Khromov, et al., "Calculation of the sensitivity of reactor characteristics to the constants taking account of the change in the nuclide composition of the reactor during a run," Vopr. At. Nauk. Tekh., Ser. Fiz. Yad. Reakt., No. 1, 11–17 (1991).
6. A. Yu. Gagarinskii and L. S. Tsygankov, "On the influence of the uncertainty in the nuclear data on the results of an analysis of kinetic measurements in reactors with ²³⁵U operating on thermal neutrons," ibid., No. 9(46), 65–69 (1984).
7. A. G. Markitan, V. M. Panin, and L. N. Podlazov, "Verification of complex program systems for calculating the dynamics of power reactors based on experimental data," ibid., No. 5, 22–25 (1991).
8. L. Brillouin, Scientific Uncertainty and Information [Russian translation], Mir, Moscow (1966).
9. A. I. Dostov and A. Ya. Kramerov, "Investigation of the safety of RBMK reactors during accidents initiated by partial ruptures of the circulation loop," At. Énerg., 92, No. 1, 23–30 (2002).
10. D. A. Afremov, Yu. V. Zhuravleva, Yu. V. Mironov, and V. E. Radkevich, "Method for performing a statistical analysis of the uncertainties of thermohydraulic calculations," At. Énerg., 93, No. 2, 101–109 (2002).
11. H. Glaeser, E. Hofer, M. Kloss, and T. Skorek, "Uncertainty and sensitivity analysis of post-experiment calculation in thermal hydraulics," Reliability Eng. System Safety, No. 45, 19–33 (1994).
12. RELAP5/MOD3 Code Manual, NUREG/CR-5535 (1995).
13. S. Wilks, Mathematical Statistics [Russian translation], Izd. Inostr. Lit., Moscow (1967).
14. A. D. Frank-Kamenetskii, Simulation of Neutron Trajectories in the Monte Carlo Calculation of Reactors, Atomizdat, Moscow (1978).
15. M. I. Ageev, L. S. Krivonos, and Yu. I. Markov, Algorithms (101–150), Vychislit. Tsentr Akad. Nauk SSSR, Moscow (1966).
16. L. V. Maiorov, "Estimate of the bias in the results of Monte Carlo calculations of reactors and storage sites for nuclear fuel," At. Énerg., 97, No. 4, 243–256 (2004).
17. M. E. Deich and G. A. Filippov, Gas Dynamics of Two-Phase Media, Énergiya, Moscow (1968).
18. D. A. Afremov, Yu. V. Zhuravleva, Yu. V. Mironov, et al., "Analysis of the uncertainties of calculations of loss-of-coolant accidents for the No. 1 unit of the Kursk nuclear power plant," At. Énerg., 98, No. 6, 422–428 (2005).
