
An Application of Simulation In Software Reliability Prediction


Case Study

By Vojo Bubevski

Bubevski, Vojo, “An Application of Simulation in Software Reliability Prediction”, Case Study, 2008 Palisade™ Risk & Decision Analysis Conference, New York City, November 13 th & 14 th , 2008

Abstract

The quantitative approach to Software Quality Management is a standard requirement for all software development projects compliant with Capability Maturity Model (CMM™) Level 4. Software Reliability is also one of the main aspects of Software Quality. Thus, achieving the software reliability goals is a major objective for software development organisations, as it is a critical constraint on their projects. For example, the software will not be released to the customer for operation until the reliability goals have been achieved.

Software Reliability is the probability of failure-free software operation for a specified time period. Predicting the software system reliability at some point in the future, based on data already available, is one of the important challenges of software projects. The implicit objective of management is to achieve the software system reliability goals with minimal project cost and schedule. Therefore, prediction in this sense is very useful in supporting software project management to achieve this objective.

This paper presents an approach to applying simulation in software reliability prediction using Palisade™ @RISK®. The purpose of the paper is to demonstrate the practical aspects of software reliability simulation, so the theory is only referenced, not discussed.

A proof of the concept is established by experimenting on the reliability simulation of a real system. A unique data transformation method is elaborated and applied for this purpose. The method transforms the raw unusable failure-count data into data usable for simulation, without affecting the reliability principles. The objective of the initial experiments is to select the most suitable model for this specific system. The selected simulation model is then used for reliability prediction of the real system. The simulation results, compared with the actual data, are satisfactory, thus proving the concept.

Also, a prediction of the reliability of a hypothetical financial software system is elaborated. The simulation experiment uses the data of the supposed current release in order to predict the reliability of the supposed next release. Important feasibility assumptions are discussed. Compared with the supposed actual data of the next release, the experiment results are satisfactory. This model is simple but could be upgraded for complex simulations.

In addition, some important recommendations for future work are provided for supporting software projects in achieving reliability goals with minimal cost and schedule, i.e. to develop optimization models for this purpose.

Introduction

Software reliability is defined as the probability of failure-free software operation for a specified period of time (American National Standards Institute – ANSI). It quantifies the failures of software systems and is the key factor in software quality [1]. It is also a major subject of Software Reliability Engineering (SRE) – a discipline which quantitatively studies the operational behavior of software systems with respect to the reliability requirements of the user.

The quantitative study of software systems concerning reliability involves software reliability measurement. Measurement of software reliability includes two activities: software reliability estimation and software reliability prediction. Software reliability estimation determines current software reliability based on the failure data obtained in the past. Its main purpose is to assess the current reliability of the software system. Software reliability prediction, however, determines the future reliability of software systems based upon software metrics data available now.

The software code size is measured in source Lines of Code (LOC), and KLOC is one thousand LOC. The term defect is used generically in the paper, referring to either a fault (i.e. the cause of a failure) or a failure (i.e. the effect of a fault) [1]. The Cumulative Failure Function is defined as the mean cumulative failures associated with each point of time [1]. The Failure Intensity Function represents the rate of change of the cumulative failure function [1]. Mean Time to Failure (MTTF) is defined as the expected time until the next failure occurs [1].
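For reference, these definitions are related as follows (a standard formulation, consistent with [1]; the MTTF expression uses the uniform-distribution simplification applied later in this paper). If µ(t) denotes the Cumulative Failure Function, then:

    λ(t) = dµ(t)/dt   (Failure Intensity Function)
    MTTF = T/N        (for a period of T time units in which N failures are observed)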

The classical approach to software reliability prediction is based on analytic models, using statistical analysis of the past failure data in order to predict future reliability. These models have been available in the literature since the early 1970s. The major software reliability analytic models are very well reviewed by Lyu [1]. The main characteristic of the analytic models is that unrealistic and oversimplified assumptions are required to obtain a simple analytic solution [1, 2].

The need for a modern approach to software reliability was recognized in 1993 by Von Mayrhauser et al. [2]: “Software reliability engineering must develop beyond statistical analysis of data and analytic models which frequently require unrealistic assumptions. We must develop a viable discipline of simulation to aid experimental and industrial application of software reliability engineering.” It seems that with this pioneering work, the application of simulation in software reliability engineering was initiated.

Since 1993, the application of simulation in software reliability engineering has emerged and substantial work has been published. Some examples are the articles by Tausworthe, Lyu, Gokhale, and Trivedi [3, 4, 5, 6]. It should be highlighted that results from these works indicated that “the simulation technique potentially may lead to more accurate tracking and more timely prediction of software reliability than obtainable from analytic modeling techniques” [3]. Also, the simulation models appeared to be subject to only a few fundamental assumptions, such as the independence of the causes of failures [1, 7].

Software reliability models are classified by the type of the distribution of the number of failures experienced by time t [1]. The most important types of models within this classification are the Poisson and Binomial models [1].

A very interesting piece of work on software reliability simulation was published by Tausworthe and Lyu [7] as a chapter in a handbook of software reliability engineering [1]. This work elaborates the application of simulation techniques to typical software reliability processes, eliminating the simplifying assumptions needed for analytic models [7]. Special-purpose simulation tools for software reliability were designed, built and used in simulation experiments on a real-world project, i.e. the Galileo project at the Jet Propulsion Laboratory [1]. The simulation results were very close to the real system's data. Also, the simulation results were compared with the prediction results obtained from analytic models, which demonstrated that the analytic models do not seem to adequately predict the reliability of the system [7].

In contrast, this paper presents an application of a general-purpose simulation tool – Palisade™ @RISK® – in software reliability prediction. Monte Carlo simulation is used with the Poisson distribution – a very important distribution used in practice [1].

The purpose of the paper is to demonstrate the practical aspect of software reliability simulation. Therefore, the theory of software reliability simulation is not discussed. The reader should refer to the cited references for theory discussions [1, 7].

Firstly, a proof of the concept is demonstrated by experimenting on the reliability simulation of a real system. The published data of the Galileo project at the Jet Propulsion Laboratory [1] is used for this purpose (i.e. the same data that was used by Tausworthe and Lyu in their work [7]). A unique data transformation method is elaborated and applied in practice. The method transforms the raw unusable failure-count data into data usable for simulation, without affecting the reliability principles. This method is used to transform the Galileo raw failure-count data and simulate the reliability of the testing. The results of the simulations are compared with the actual data and discussed. It should be emphasised that the experimental results are satisfactory. Therefore, the concept is proven.

Secondly, the prediction of the reliability of a hypothetical financial software system (Project TRPC) is elaborated. Two sets of data are available for this system, i.e. the data from two subsequent releases of the system – Release (i) and Release (i+1). The experiment is to use the data of Release (i) (i.e. the current release) in order to predict the reliability of Release (i+1) (i.e. the next release). Important feasibility assumptions relating to this specific experiment are discussed. The experiment results are also discussed. Compared with the actual data of Release (i+1), the results are satisfactory. Thus, the experiment is successful. This simulation experiment is relatively simple, including only the testing and operation failure data. The model could be expanded to consider the analysis and design failures if data is available. It should be noted that this simulation model is for illustration purposes.

Finally, recommendations for future work are given on how to achieve the software project reliability goals with minimal cost and schedule. For example, it would be very useful to develop specific optimization models for this purpose.

In conclusion, the presented approach to software reliability prediction, including the unique method for transforming the raw failure-count data to be usable for simulation, is generic and applicable to any software project compliant with CMM™ Level 4. The experiments have demonstrated that Palisade™ @RISK® (a general-purpose simulation tool) can be used to predict software reliability. The obtained experimental results are satisfactory and acceptable. Using Palisade™ @RISK® is much easier than using the special-purpose software reliability tools. Also, the Palisade™ @RISK® tools provide for comprehensive data presentation and analysis of the simulation results, which is not the case with the special-purpose tools. The demonstrated simulation models are simple. However, the models could easily be upgraded to provide for more complex reliability prediction if data is available.

Proof of Concept

In order to prove the concept of this paper, we experiment with real system data to simulate software reliability using the Palisade™ @RISK® tools. For this purpose, the data of the Galileo project at the Jet Propulsion Laboratory [1] is used. The following outlines how to prove the concept.

1. Present and analyse the actual (raw) Galileo data;
2. Transform the Galileo data for simulation, as the raw data is unusable;
3. Simulate the reliability using two different simulation models;
4. Compare the two simulation results with the actual data in order to select the better simulation model for future Galileo simulations;
5. Use the selected model to show how the reliability at the end of testing can be predicted, supposing that we are in the middle of the testing stage.

The Proof-of-Concept approach is described in the following sections.

Galileo Actual Data

The Galileo failure-count data was collected over a testing period of 41 weeks. The data is given in Appendices, Table 1. Figure 1 shows the Galileo project's actual failure intensity function.

Figure 1: Galileo project actual failure intensity function

The total number of defects detected and removed during the 41 weeks of testing is 351. The reliability is measured by Mean Time to Failure (MTTF), which is calculated assuming a uniform distribution of the defects during a specific time period. Thus, we simply use calendar time in weeks to calculate the MTTF for the Galileo project in testing as follows: MTTF = 41/351 = 0.1168 Weeks.

A simple analysis of the data shown in Figure 1 is as follows. The numbers of failures detected in each time interval are independent Poisson random variables [1], so the numbers of failures detected in different weeks cannot be correlated. The failure intensity function exhibits a strong-zigzag decreasing behavior. Consequently, the data is raw and the failure intensity function is not practical for simulation.

However, we can transform the raw data to be usable for simulation without changing the principal reliability values (i.e. the number of failures detected in each time interval, the time period, and the total number of failures detected). The following section explains how the raw data is transformed to be usable for simulation.

Method to Transform Raw Data for Simulation

The method to transform the data without changing the principal reliability values is as follows.

Considering the fact that the numbers of failures detected in each time interval are independent Poisson random variables [1], we can reorder the time intervals while preserving: a) the number of failures detected in each interval; b) the time period; and c) the total number of failures detected during the time period. The criterion for reordering the intervals is that the numbers of failures detected in the intervals must be put in descending order. This transforms the failure intensity function from a strong-zigzag decreasing type to a smooth decreasing type, which is usable for simulation.

For example, the Galileo reliability measure, i.e. MTTF, is not changed by sorting the data in descending order, as we have changed neither the numbers of failures detected in each week, nor the 41-week period, nor the total of 351 defects. That is, the MTTF of the raw Galileo data is equal to the MTTF of the sorted Galileo data (i.e. MTTF = 41/351 = 0.1168).
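As an illustration of the method, the following is a minimal Python sketch using a short, hypothetical sample of weekly counts (the actual Galileo data is in Table 1). Sorting the counts in descending order changes neither the total, nor the period, nor, therefore, the MTTF.

    # Hypothetical weekly failure counts, for illustration only.
    raw = [4, 12, 15, 9, 28, 29, 8, 7]

    # Reorder the intervals by descending failure count. The count within each
    # interval, the number of intervals, and the grand total are all preserved.
    sorted_counts = sorted(raw, reverse=True)

    weeks, total = len(raw), sum(raw)
    assert sum(sorted_counts) == total          # total failures unchanged
    assert len(sorted_counts) == weeks          # time period unchanged
    print(f"MTTF = {weeks / total:.4f} weeks")  # identical before and after sorting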

The sorted data, as well as the raw data, is presented in Appendices, Table 1. The intensity function of the sorted data is shown in Figure 2.

Figure 2: Galileo sorted failure intensity function

The sorted failure intensity function presented in Figure 2 is now usable for software reliability simulation of the Galileo project testing over the 41-week period. This is possible because the reliability measure MTTF has not changed.

Galileo Reliability Simulation 1: Exponential Failure Intensity

This simulation model uses the Poisson distribution with an exponential failure intensity function. The exponential approximation of the Galileo failure intensity function is presented in Figure 3.

Figure 3: Exponential approximation of Galileo failure intensity function (fitted trend: y = 34.74*EXP(-0.0884*x), R² = 0.9295)

The approximation of the failure intensity y (i.e. failures per week) as a function of time x (i.e. week) is y = 34.74*EXP(-0.0884*x). Therefore, we simulate the reliability using the Poisson distribution. The mean of the Poisson distribution is equal to the value of the failure intensity function for time t. The simulation results and statistics are shown in Appendices, Table 2 and Table 3. The simulation distribution is shown in Figure 4.
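The essence of this simulation can be sketched in a few lines of Python with NumPy, standing in for the @RISK worksheet (the helper name, seed and trial count are ours): each trial draws the failure count of week t from a Poisson distribution whose mean is the fitted intensity at week t, and the 41 weekly draws are then totalled.

    import numpy as np

    rng = np.random.default_rng(1)

    def simulate_total_defects(intensity, weeks=41, trials=10_000):
        """Total defects over the testing period; one total per Monte Carlo trial."""
        t = np.arange(1, weeks + 1)
        means = np.clip(intensity(t), 0.0, None)  # a Poisson mean must be non-negative
        return rng.poisson(means, size=(trials, weeks)).sum(axis=1)

    # Exponential failure intensity fitted to the sorted Galileo data.
    totals = simulate_total_defects(lambda t: 34.74 * np.exp(-0.0884 * t))
    print(f"Predicted total defects: {totals.mean():.0f} (std {totals.std():.0f})")

Note that the @RISK worksheet in Table 2 additionally sets the intensity to zero for weeks 38 to 41, so this sketch's total runs a few defects higher.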

Figure 4: Distribution of the Galileo simulation with exponential failure intensity

From the results, we can see that the predicted total number of defects is 361, which is quite close to the actual total of 351, with a Standard Deviation of 19 (i.e. 5.3%).

Galileo Reliability Simulation 2: Logarithmic Failure Intensity

We use the Poisson distribution with a logarithmic failure intensity function in this simulation. The logarithmic failure intensity function is shown in Figure 5.

Figure 5: Logarithmic approximation of Galileo failure intensity function (fitted trend: y = -8.6744*Ln(x) + 32.687, R² = 0.9819)

The approximation of the failure intensity y (i.e. failures per week) as a function of time x (i.e. week) is y = -8.6744*Ln(x) + 32.687. Again, we simulate the reliability using the Poisson distribution. The mean of the Poisson distribution is equal to the value of the failure intensity function for time t. The simulation results and statistics are given in Appendices, Table 4 and Table 5. The simulation distribution is shown in Figure 6.
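With respect to the previous sketch, only the fitted intensity changes; assuming the simulate_total_defects helper defined above, the logarithmic run would be:

    # Logarithmic failure intensity fitted to the sorted Galileo data. The clip
    # in the helper guards against negative fitted values (which occur only with
    # the shorter-horizon fits, e.g. the 21-week fit of Simulation 3 below).
    totals = simulate_total_defects(lambda t: -8.6744 * np.log(t) + 32.687)
    print(f"Predicted total defects: {totals.mean():.0f} (std {totals.std():.0f})")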

Figure 6: Distribution of the Galileo simulation with logarithmic failure intensity

For this simulation, the predicted total number of defects is 352 with a Standard Deviation of 19 (i.e. 5.4%). This result is almost equal to the actual total of 351.

Comparing Results and Selecting Simulation Model for Galileo

The simulation results are as follows:

1. Simulation 1: The predicted total number of defects is 361 with a Standard Deviation of 19 (i.e. 5.3%).
2. Simulation 2: The predicted total number of defects is 352 with a Standard Deviation of 19 (i.e. 5.4%).

Comparing the actual number of defects, i.e. 351, with the results of Simulation 1 and Simulation 2, it is obvious that the Simulation 2 result is much better. Thus, the Simulation 2 model with the logarithmic failure intensity function is selected for future simulations of Galileo.

Simulation 3: Predicting 41 Weeks Reliability after 21 Weeks

Supposing that we are at the end of week 21, we can predict the reliability to the end of the testing stage, i.e. 41 weeks. This means that we only have data available for the first 21 weeks. Thus, we use the Poisson distribution with a logarithmic failure intensity function fitted to the 21 weeks of available data. The total number of defects for 21 weeks is 272. Thus, we can calculate the reliability measure as MTTF = 21/272 = 0.0772 Weeks.

The actual data from the first 21 weeks of testing is shown in Figure 7.

Figure 7: Actual Galileo data from 21 weeks testing

We now need to prepare the raw data for simulation by sorting it in descending order. Note that this sort does not change the reliability measure, i.e. MTTF = 21/272 = 0.0772 Weeks.

After sorting the data, the logarithmic failure intensity function is shown in Figure 8.

Figure 8: Logarithmic failure intensity function for 21 weeks (fitted trend: y = -8.8835*Ln(x) + 32.149, R² = 0.9653)

The approximation of the failure intensity y (i.e. failures per week) as a function of time x (i.e. week) is y = -8.8835*Ln(x) + 32.149. Again, we simulate the reliability using the Poisson distribution. The mean of the Poisson distribution is equal to the value of the failure intensity function for time t. The simulation results and statistics are given in Appendices, Table 6 and Table 7. The simulation distribution is shown in Figure 9.

Figure 9: Distribution of the Galileo simulation based on 21 weeks data

For this simulation, the predicted total number of defects is 310 with a Standard Deviation of 17 (i.e. 5.5%). This result is not very good compared with the actual total of 351 defects, but it is the best we can get from the data available. The prediction would have been better with more data. We show this in the next simulation.

Simulation 4: Predicting 41 Weeks Reliability after 23 Weeks

Supposing that we are now at the end of week 23, we can predict the reliability to the end of the testing stage, i.e. 41 weeks. This means that we only have data available for the first 23 weeks. Thus, we use the Poisson distribution with a logarithmic failure intensity function fitted to the 23 weeks of available data. The total number of defects for 23 weeks is 301. Thus, we can calculate the reliability measure as MTTF = 23/301 = 0.0764 Weeks.

The actual data from the first 23 weeks of testing is shown in Figure 10.

Figure 10: Actual Galileo data from 23 weeks testing

We now need to prepare the raw data for simulation by sorting it in descending order. Note that this sort does not change the reliability measure, i.e. MTTF = 23/301 = 0.0764 Weeks.

After sorting the data, the logarithmic failure intensity function is shown in Figure 11.

Figure 11: Logarithmic failure intensity function for 23 weeks (fitted trend: y = -8.5579*Ln(x) + 32.289, R² = 0.9672)

The approximation of the failure intensity y (i.e. failures per week) as a function of time x (i.e. week) is y = -8.5579*Ln(x) + 32.289. Again, we simulate the reliability using the Poisson distribution. The mean of the Poisson distribution is equal to the value of the failure intensity function for time t. The simulation results and statistics are given in Appendices, Table 8 and Table 9. The simulation distribution is shown in Figure 12.

Figure 12: Distribution of the Galileo simulation based on 23 weeks data

For this simulation, the predicted total number of defects is 346 with a Standard Deviation of 19 (i.e. 5.49%). This result is very good compared with the actual total of 351 defects. Thus, the prediction with 23 weeks of data proves the concept.

Proof of Concept Summary

The actual Galileo failure-count data is presented and analysed. The analysis shows that the data is raw and the failure intensity function exhibits a strong-zigzag decreasing behavior, so it is unusable for simulation. However, the raw data can be transformed to provide for simulation without changing the principal reliability values.

The method to transform the raw data is to reorder the time intervals in descending order of the number of failures, preserving: a) the number of failures detected in each interval; b) the time period; and c) the total number of failures detected. This method is feasible because the numbers of failures detected in each time interval are independent Poisson random variables [1]. The method transforms the failure intensity function from a strong-zigzag decreasing type to a smooth decreasing type, which is usable for simulation.

The reliability is simulated using the Poisson distribution with two different approximations of the failure intensity function: a) exponential; and b) logarithmic. Comparing the results of the two simulations with the actual data shows that the logarithmic model is the better simulation model for Galileo. This model is used in two predictions of the reliability at the end of testing, supposing that we are in the middle of the testing stage. The 41-week testing reliability predictions for Galileo are as follows.

The first prediction is at the end of week 21, so the simulation is based on 21 weeks of data. The predicted total number of defects is 310 with a Standard Deviation of 17 (i.e. 5.5%).

The second simulation is based on 23 weeks of data, as the prediction is at the end of week 23. The predicted total number of defects is 346 with a Standard Deviation of 19 (i.e. 5.49%).

Compared with the actual total of 351 defects, the first result of 310 defects is not bad (i.e. -11.68% error) whereas the second result of 346 defects is very good (i.e. -1.42% error). The second prediction is better than the first simply because the simulation was carried out on data taken over a longer time period, i.e. it is based on more available data. In conclusion, the experimental results are satisfactory. Therefore the concept is proven.

TRPC Next Release Simulation – Hypothetical Experiment

In this reliability simulation experiment, we use the current release data to simulate the reliability of the next release of a hypothetical software system. It is supposed that we have collected data for two subsequent releases of the hypothetical financial software system – Project TRPC. It should be noted that this simulation experiment is hypothetical, so it is for illustration purposes only. The following is an outline of this section.

1. Feasibility assumptions are discussed;
2. The data of the two TRPC Project releases are presented;
3. A simple simulation model is demonstrated;
4. The simulation results and the actual data are compared.

Even though this experiment is hypothetical, it illustrates how we can simulate the next release of a software system, using the data of the current release.

Feasibility Assumptions for Software Reliability Simulation

The software reliability simulation uses failure data collected in the past. However, having the collected data alone is not sufficient for the software reliability simulation to be feasible. Other, much more complex criteria must be met in addition to the data.

For example, if an organisation collects and has a history of data for its software projects, but does not meet the other criteria, software reliability simulation is not feasible. The data make it possible to run a simulation, but any reliability prediction initiative is unrealistic and even dangerous, because the simulation results are inconsistent and any decision based on these results is very risky.

The following defines the fundamental assumption for the feasibility of software reliability simulation. That is, software reliability simulation and prediction are feasible only if the software organisation and the software project are compliant with Capability Maturity Model (CMM™) Level 4.

CMM™ Level 4 requires quantitative management of software processes and products within an organisation. The criteria are as follows: “Detailed measures of the software process and product quality are collected. Both the software process and products are quantitatively understood and controlled.” [8]. Some aspects of software reliability prediction relating to CMM™ Level 4 are discussed by Lakey [9].

TRPC Project Data

We suppose that we have collected data for two subsequent releases of the TRPC Project, i.e. “Current TRPC Release” and “Next TRPC Release” Project data. The “Current TRPC Release” Project data is presented in Appendices, Table 10. The “Next TRPC Release” Project data is given in Appendices, Table 11. We will use the “Current TRPC Release” Project data to simulate the “Next TRPC Release” Project reliability.

TRPC Project Next Release Simulation

In our simulation experiment, we will use the “Current TRPC Release” Project data to simulate the “Next TRPC Release” Project reliability (i.e. all parameters of the simulation model are calculated from the current release). The simulation model is as follows; a code sketch of the complete model is given after the parameter blocks below.

1. We expect that the size of the new code will be around 40 KLOC (i.e. 40,000 source lines of code). Thus, we calculate the size of the new code using the Normal distribution with a mean value equal to 40 and Standard Deviation of 2 (5%).

1. New Code Size Prediction (Normal Distribution)
   New Code Predicted Size KLOC: 40.00
   Mean Value (µ): 40.00
   Standard Deviation (σ): 2.00

2. We expect that the size of the changed code will be around 25 KLOC. We use the Normal distribution with a mean value equal to 25 and a Standard Deviation of 1.25 (5%) to calculate the changed code size.

2. Changed Code Size Prediction (Normal Distribution)
   Changed Code Predicted Size KLOC: 25.00
   Mean Value (µ): 25.00
   Standard Deviation (σ): 1.25

3. We assume that the Defect Injection Rate (DIR) for the new code will be equal to the current release rate, i.e. 47.69 Defects/KLOC. To calculate this parameter we use the Normal distribution with a mean value equal to 47.69 and a 5% Standard Deviation (i.e. 2.38).

3. New Code Defect Injection Rate (DIR) Prediction Based on Current Release Data (Normal Distribution)
   Current Release DIR per KLOC: 47.69
   New Code Predicted DIR per KLOC: 47.69
   Mean Value (µ): 47.69
   Standard Deviation (σ): 2.38

4. For the changed code, we assume the Defect Injection Rate (DIR) will be equal to the current release rate, i.e. 29.44 Defects/KLOC. To calculate this parameter we use the Normal distribution with a mean value equal to 29.44 and a Standard Deviation of 1.47 (5%).

4. Changed Code Defect Injection Rate (DIR) Prediction Based on Current Release Data (Normal Distribution)
   Current Release DIR per KLOC: 29.44
   Changed Code Predicted DIR per KLOC: 29.44
   Mean Value (µ): 29.44
   Standard Deviation (σ): 1.47

5. We assume that for the new code the effort required to test and fix defects during testing is equal to the current release effort, i.e. 64.14 Man-Days/KLOC. To calculate this parameter we use the Normal distribution with a mean value equal to 64.14 and a Standard Deviation of 3.21 (5%).

5. Effort for Testing & Fixing New Code Prediction Based on Current Release Data (Normal Distribution)
   Current Release Man-Days per KLOC: 64.14
   Test & Fix Predicted Man-Days per KLOC: 64.14
   Mean Value (µ): 64.14
   Standard Deviation (σ): 3.21

6. Similarly, for the changed code, we assume that the effort required to test and fix defects during testing is equal to the current release effort, i.e. 59.69 Man-Days/KLOC. To calculate this parameter we use the Normal distribution with a mean value equal to 59.69 and a Standard Deviation of 2.98 (5%).

6. Effort for Testing & Fixing Changed Code Prediction Based on Current Release Data (Normal Distribution)
   Current Release Man-Days per KLOC: 59.69
   Test & Fix Predicted Man-Days per KLOC: 59.69
   Mean Value (µ): 59.69
   Standard Deviation (σ): 2.98

7. We expect that the new code Defect Removal Rate (DRR) will be the same as the current rate, i.e. 1.79 Man-Days/Defect. Thus, we calculate this parameter using the Normal distribution with a mean value equal to 1.79 and a Standard Deviation of 0.09 (5%).

7. New Code Defect Removal Rate (DRR) Prediction Based on Current Release Data (Normal Distribution)
   Current Release DRR Man-Days per Defect: 1.79
   New Code Predicted DRR Man-Days per Defect: 1.79
   Mean Value (µ): 1.79
   Standard Deviation (σ): 0.09

8. Similarly, the changed code Defect Removal Rate (DRR) will be the same as the current rate, i.e. 2.69 Man-Days/Defect. Thus, we calculate this parameter using the Normal distribution with a mean value equal to 2.69 and a Standard Deviation of 0.13 (5%).

8. Changed Code Defect Removal Rate (DRR) Prediction Based on Current Release Data (Normal Distribution)
   Current Release DRR Man-Days per Defect: 2.69
   Changed Code Predicted DRR Man-Days per Defect: 2.69
   Mean Value (µ): 2.69
   Standard Deviation (σ): 0.13

9. Using the parameters above, we calculate the Defect Injection and Defect Removal Intensities for the new and changed code, as given below. We then simulate the numbers of injected and removed defects using the Poisson distribution with the mean value equal to the associated defect intensity. The total number of defects in operation is the difference between the injected and removed defects.

9. Next Release Defect Totals Prediction (Poisson Distribution)
   New Code Defect Injection Intensity: 1902.60
   Changed Code Defect Injection Intensity: 735.94
   New Code Defect Removal Intensity: 1435.90
   Changed Code Defect Removal Intensity: 554.72
   Predicted New Code Defects Injected: 1908.00
   Predicted Changed Code Defects Injected: 736.00
   Predicted New Code Defects Removed: 1436.00
   Predicted Changed Code Defects Removed: 555.00
   Predicted Total Defects in Operation: 648.00

The simulation distribution is given in Figure 13.

Figure 13: Distribution of the TRPC Project simulation

For this simulation, the predicted total number of defects is 648 with a Standard Deviation of 164 (i.e. 25.31%). The high Standard Deviation of 25.31% is caused by the large number of random variables used in the model.

TRPC Project Simulation Results Vs Actual Data

The comparison of the simulation results with the actual hypothetical data is given below.

Simulation Results vs Actual Data

                                        Actual   Simulation   Error %
New Code Size KLOC:                       38         40         5.27
Changed Code Size KLOC:                   26         25        -3.85
Total Code Size KLOC:                     64         65         1.56
Total Number of Defects in Operation:    690        648        -6.09

The results are quite good (-6.09% error for the total number of defects), so our hypothetical experiment is successful.

TRPC Next Release Simulation Summary

This simulation experiment is hypothetical; however, it demonstrates how we can simulate the next release of a software project using the data of the current release. The simulation model is simple, considering only the testing and operational phases of the software project. The experiment is for illustration purposes only. The model can easily be expanded to involve the analysis and design phases of the project if the failure data is available.

The assumptions are discussed first in order to establish the criteria under which the reliability simulation is feasible. Then, the TRPC Project data of the “current release” and “next release” are presented. Also, the simulation model is demonstrated, which predicts the reliability of the “next release” using the “current release” data. Finally, the simulation results are compared with the actual “next release” data, confirming that the experiment is successful.

Recommendations for Future Work

The management objective is to achieve the software system reliability goals with minimal project cost and schedule. Software reliability simulation is very useful in supporting software project management to achieve this objective. Thus, as future work, it is recommended to develop optimization models for this purpose. For example, suppose that management wants to employ extra resources on the project, but the resources are limited. An appropriate optimization model can provide an optimal solution to this problem, i.e. maximize the reliability improvement through optimal utilization of the limited resources.

Conclusion

The paper presents experiments of software reliability simulation using Palisade™ @RISK® (a general­purpose simulation tool). The simulation models use Monte Carlo sampling with the Poisson distribution. The purpose of the paper is to demonstrate the practical aspect of simulation, so the theory is not discussed.

A proof of concept is demonstrated with simulations of a real system, i.e. the Galileo project at the Jet Propulsion Laboratory [1]. A unique method is elaborated and applied in practice to transform raw unusable failure-count data into data usable for simulation. The method does not affect the software reliability principles. It should be emphasised that the method is generic and applicable to any software reliability simulation. The Galileo project testing is simulated in the experiments and the results are compared with the actual data. The experimental results are satisfactory, hence proving the concept.

In addition, a simulation of a hypothetical system (Project TRPC) is elaborated. The purpose of the experiment is to use the data of the current release in order to simulate the next release. Important feasibility assumptions are discussed. The simulation model is presented and the experiment results are compared with the actual data, confirming that the experiment is successful. The simulation model is relatively simple, including only the testing and operation phases. The model can be expanded to consider the analysis and design failures if data is available. It should be noted that this simulation model is for illustration purposes only, as hypothetical data is used.

As future work, it is recommended to develop optimization models in order to support management in achieving the software system reliability goals with minimal costs.

The following are the major conclusions to emphasise:

Firstly, the presented approach to software reliability prediction is generic and applicable to any CMM™ Level 4 software project.

Secondly, the method for transforming the unusable raw failure­count data into data usable for simulation is unique and generic.

Thirdly, general-purpose simulation tools such as Palisade™ @RISK® can be used for software reliability simulation. The experimental results presented in this paper are satisfactory. Using Palisade™ @RISK® is much easier than using the special-purpose tools. Also, the Palisade™ @RISK® tools provide for comprehensive data presentation and analysis, which is not the case with the special-purpose tools.

Finally, the presented simulation models are simple. However, the models can be upgraded to provide for more complex reliability simulation if data is available.

Appendices

The project data discussed in this paper are presented in this section, along with the simulation results and statistics. Some rows and columns are hidden for practical purposes.

Galileo CDS Raw Data / Galileo CDS Sorted Data
Data Format: Failure-count data; Time Unit: Weeks

Calendar Week   No of Failures per Week   |   Time - Week   Failures per Week
 1               4                        |    1            29
 2              12                        |    2            28
 3              15                        |    3            23
 4               9                        |    4            22
 5              28                        |    5            19
 6              29                        |    6            19
 7               8                        |    7            15
 8               7                        |    8            14
 9               4                        |    9            13
10               8                        |   10            12
11               9                        |   11            12
12              12                        |   12            12
13               8                        |   13            12
14               4                        |   14            10
15              14                        |   15             9
16              19                        |   16             9
17              23                        |   17             9
18              12                        |   18             8
19              22                        |   19             8
20              12                        |   20             8
21              13                        |   21             7
22              19                        |   22             7
23              10                        |   23             7
24               5                        |   24             5
25               5                        |   25             5
26               5                        |   26             5
27               7                        |   27             4
28               7                        |   28             4
29               1                        |   29             4
30               3                        |   30             3
31               1                        |   31             2
32               2                        |   32             2
33               0                        |   33             1
34               2                        |   34             1
35               9                        |   35             1
36               1                        |   36             1
37               0                        |   37             1
38               0                        |   38             0
39               0                        |   39             0
40               1                        |   40             0
41               1                        |   41             0
Total          351                        |   Total        351

Table 1: Raw and sorted Galileo failure­count data

Galileo CDS Monte Carlo Simulation with Exponential Failure Intensity Function - 41 Weeks
Predicted System's Data (41 Weeks)
Data Format: Failure-count data (time unit in weeks)
Column 1: Time Interval
Column 2: Failure Intensity y = 34.74*EXP(-0.0884*x)
Column 3: Predicted Failures/Week using the Poisson Distribution with mean equal to the Failure Intensity value at time x
Column 4: Actual Failures/Week

Time Week   Failure Intensity   Predicted Failures/Week   Actual Failures/Week
 1          31.80080999         32                        29
 2          29.11029119         29                        28
 3          26.64740469         27                        23
 4          24.39289157         24                        22
 5          22.32912234         22                        19
 6          20.43995903         20                        19
...         (weeks 7-33 hidden)
34           1.719944049         2                         1
35           1.574427573         2                         1
36           1.441222571         1                         1
37           1.319287424         1                         1
38           0                   0                         0
39           0                   0                         0
40           0                   0                         0
41           0                   0                         0
Total:                          361                       351

Table 2: Simulation 1 Results

@RISK Detailed Statistics MCC2
Performed By: BUBEVSKI
Date: 06 August 2008 20:41:25

Name                Total: / 3. Predicted Failures/Week   1 / 3. Predicted Failures/Week   37 / 3. Predicted Failures/Week
Description         Output                                RiskPoisson($B$12)               RiskPoisson($B$48)
Cell                MC EXP Simulation!C53                 MC Simulation C2!C12             MC Simulation C2!C48
Minimum             306                                   17                               0
Maximum             427                                   50                               6
Mean                361.795                               31.725                           1.362
Std Deviation       18.89736                              5.729765                         1.145543
Variance            357.1101                              32.8302                          1.312268
Skewness            -0.02712095                           0.1931779                        0.8212588
Kurtosis            3.164034                              2.922069                         3.572076
Target #1 (Value)   351
Target #1 (Perc%)   0.285

Table 3: Simulation 1 Statistics

Galileo CDS Monte Carlo Simulation with Logarithmic Failure Intensity Function - 41 Weeks
Predicted System's Data (41 Weeks)
Data Format: Failure-count data (time unit in weeks)
Column 1: Time Interval
Column 2: Failure Intensity y = -8.6744*Ln(x) + 32.687
Column 3: Predicted Number of Failures (Poisson Distribution)
Column 4: Actual Failures/Week

Time Week   Failure Intensity   Predicted Failures/Week   Actual Failures/Week
 1          32.687              33                        29
 2          26.6743641          27                        28
 3          23.15719756         23                        23
 4          20.66172819         21                        22
 5          18.72609177         19                        19
 6          17.14456166         17                        19
...         (weeks 7-33 hidden)
34           2.097938265         2                         1
35           1.846488775         2                         1
36           1.60212332          2                         1
37           1.364453659         1                         1
38           1.133122616         1                         0
39           0.907800857         1                         0
40           0.688184063         1                         0
41           0.473990465         0                         0
Total:                          352                       351

Table 4: Simulation 2 Results

@RISK Detailed Statistics
Performed By: BUBEVSKI
Date: 06 August 2008 20:13:22

Name                Total: / 3. Predicted Failures/Week   1 / 3. Predicted Failures/Week   41 / 3. Predicted Failures/Week
Description         Output                                RiskPoisson($B$12)               RiskPoisson($B$52)
Cell                MC LOG Simulation!C53                 MC Simulation C1!C12             MC Simulation C1!C52
Minimum             290                                   17                               0
Maximum             416                                   54                               4
Mean                351.747                               32.946                           0.492
Std Deviation       19.18116                              5.743873                         0.6902264
Variance            367.9169                              32.99208                         0.4764124
Skewness            0.1393759                             0.1426186                        1.447088
Kurtosis            3.035162                              3.122335                         5.290806
Target #1 (Value)   351
Target #1 (Perc%)   0.514

Table 5: Simulation 2 Statistics

Galileo CDS Monte Carlo Simulation with Logarithmic Failure Intensity Function Based on 21 Weeks Data
Predicted System's Data (41 Weeks) using the Poisson Distribution
Note: The mean of the Poisson Distribution is equal to the Failure Intensity Function value
Column 1: Time Interval
Column 2: Failure Intensity Function: y = -8.8835*Ln(x) + 32.149
Column 3: Predicted Number of Failures (Poisson Distribution)
Column 4: Real Sorted Failures/Week

Time Week   Failure Intensity   Predicted Failures/Week   Real Sorted Failures/Week
 1          32.149              32                        29
 2          25.99142702         26                        28
 3          22.38947773         22                        23
 4          19.83385404         20                        22
 5          17.8515583          18                        19
 6          16.23190476         16                        19
...         (weeks 7-30 hidden)
31           1.643174669         2                         2
32           1.361135107         1                         2
33           1.087775078         1                         1
34           0.82257628          1                         1
35           0.565065496         1                         1
36           0.31480951          0                         1
37           0.071410723         0                         1
38           0                   0                         0
39           0                   0                         0
40           0                   0                         0
41           0                   0                         0
Total:      309.8860796         310                       351

Table 6: Galileo Simulation 3 results

@RISK Detailed Statistics
Performed By: BUBEVSKI
Date: 06 August 2008 16:38:50

Name                Total: / 3. Predicted Failures/Week   1 / 3. Predicted Failures/Week   41 / 3. Predicted Failures/Week
Description         Output                                RiskPoisson(ABS(B12))            RiskPoisson(ABS(B52))
Cell                MCLog21W Simulation!C53               MCLog21W Simulation!C12          MCLog21W Simulation!C52
Minimum             265                                   17                               0
Maximum             369                                   52                               6
Mean                310.015                               32.3                             0.84
Std Deviation       17.20543                              5.743953                         0.9355615
Variance            296.0268                              32.99299                         0.8752753
Skewness            0.1575875                             0.3057922                        1.116838
Kurtosis            3.058967                              3.138245                         4.476752
Target #1 (Value)   351
Target #1 (Perc%)   0.988

Table 7: Statistics for Galileo Simulation 3

Galileo CDS Monte Carlo Simulation with Logarithmic Failure Intensity Function Based on 23 Weeks Data
Predicted System's Data (41 Weeks) using the Poisson Distribution
Note: The mean of the Poisson Distribution is equal to the Failure Intensity Function value
Column 1: Time Interval
Column 2: Failure Intensity Function: y = -8.5579*Ln(x) + 32.289
Column 3: Predicted Number of Failures
Column 4: Real Sorted Failures/Week

Time Week   Failure Intensity   Predicted Failures/Week   Real Sorted Failures/Week
 1          32.289              32                        29
 2          26.35711574         26                        28
 3          22.88718589         23                        23
 4          20.42523149         20                        22
 5          18.51559129         19                        19
...         (weeks 6-34 hidden)
35           1.862686825         2                         1
36           1.621603277         2                         1
37           1.387125595         1                         1
38           1.158901404         1                         0
39           0.936605789         1                         0
40           0.71993852          1                         0
41           0.50862161          1                         0
Total:      347.955619          346                       351

Table 8: Galileo Simulation 4 results

@RISK Detailed Statistics
Performed By: BUBEVSKI
Date: 06 August 2008 16:56:35

Name                Total: / 3. Predicted Failures/Week   1 / 3. Predicted Failures/Week   41 / 3. Predicted Failures/Week
Description         Output                                RiskPoisson(B12)                 RiskPoisson(B52)
Cell                MCLog23W Simulation!C53               MCLog23W Simulation!C12          MCLog23W Simulation!C52
Minimum             275                                   14                               0
Maximum             414                                   51                               3
Mean                346.367                               32.052                           0.478
Std Deviation       18.60474                              5.541842                         0.6928174
Variance            346.1364                              30.71201                         0.479996
Skewness            -0.0413433                            0.1567768                        1.390907
Kurtosis            3.174928                              3.09322                          4.522202
Target #1 (Value)   351
Target #1 (Perc%)   0.603

Table 9: Statistics for Galileo Simulation 4

Project TRPC Current Release (i) Data

1. New Code
New Code Size KLOC: 29
New Code Operation Defects in 40 Weeks: 342
New Code Test & Fix Effort Man-Days: 1860
New Code Defects Found in Test & Fixed: 1041
Total New Code Defects: 1383
Total New Code Defects per 1 KLOC: 47.68965517

New Code Testing Phase Defect Profile
Testing Phase                Defects Removed   Effort   Effort per Defect
Component Test               271               250      0.922509225
Component Integration Test   185               280      1.513513514
System Integration Test      462               870      1.883116883
User Acceptance Test         123               460      3.739837398
Total:                       1041              1860     1.786743516

2. Changed Code
Changed Code Size KLOC: 16
Changed Code Operation Defects in 40 Weeks: 116
Changed Code Test & Fix Effort Man-Days: 955
Changed Code Defects Found in Test & Fixed: 355
Total Changed Code Defects: 471
Total Changed Code Defects per 1 KLOC: 29.4375

Changed Code Testing Phase Defect Profile
Testing Phase                Defects Removed   Effort   Effort per Defect
Component Test               91                130      1.428571429
Component Integration Test   59                145      2.457627119
System Integration Test      151               450      2.98013245
User Acceptance Test         54                230      4.259259259
Total:                       355               955      2.690140845

3. Release Quality in Operation
Operation Hours per Day: 24
Operation Days per Week: 7
Operation Time Period (Weeks): 40
Operation Time Period (Hours): 6720
Total Number of Defects in Operation: 458
Quality: Mean Time To Failure (MTTF) in Hours: 14.67248908

Table 10: Current Release TRPC Project Data

Project TRPC Next Release (i+1) Data

1. New Code
New Code Size KLOC: 38
New Code Operation Defects in 40 Weeks: 489
New Code Test & Fix Effort Man-Days: 2460
New Code Defects Found in Test & Fixed: 1365
Total New Code Defects: 1854
Total New Code Defects per 1 KLOC: 48.78947368

New Code Testing Phase Defect Profile
Testing Phase                Defects Removed   Effort   Effort per Defect
Component Test               365               330      0.904109589
Component Integration Test   246               370      1.504065041
System Integration Test      612               1180     1.928104575
User Acceptance Test         142               580      4.084507042
Total:                       1365              2460     1.802197802

2. Changed Code
Changed Code Size KLOC: 26
Changed Code Operation Defects in 40 Weeks: 201
Changed Code Test & Fix Effort Man-Days: 1580
Changed Code Defects Found in Test & Fixed: 571
Total Changed Code Defects: 772
Total Changed Code Defects per 1 KLOC: 29.69230769

Changed Code Testing Phase Defect Profile
Testing Phase                Defects Removed   Effort   Effort per Defect
Component Test               151               210      1.390728477
Component Integration Test   105               240      2.285714286
System Integration Test      248               745      3.004032258
User Acceptance Test         67                385      5.746268657
Total:                       571               1580     2.767075306

3. Release Quality in Operation
Operation Hours per Day: 24
Operation Days per Week: 7
Operation Time Period (Weeks): 40
Operation Time Period (Hours): 6720
Total Number of Defects in Operation: 690
Quality: Mean Time To Failure (MTTF) in Hours: 9.739130435

Table 11: Next Release TRPC Project Data

References

[1] Lyu, Michael R., "Handbook of Software Reliability Engineering", IEEE Computer Society Press, 1996.

[2] Von Mayrhauser, A., et al., "On the need for simulation for better characterization of software reliability", Proceedings, Fourth International Symposium on Software Reliability Engineering, 1993.

[3] Tausworthe, Robert C., Lyu, Michael R., "A Generalized Software Reliability Process Simulation Technique and Tool".

[4] Gokhale, Swapna S., Lyu, Michael R., "A Simulation Approach to Structure-Based Software Reliability Analysis".

[5] Gokhale, Swapna S., Lyu, Michael R., Trivedi, Kishor S., "Reliability Simulation of Fault-Tolerant Software and Systems", Proc. of Pacific Rim International Symposium on Fault-Tolerant Systems, 1997.

[6] Gokhale, Swapna S., Lyu, Michael R., Trivedi, Kishor S., "Reliability Simulation of Component-Based Software Systems", Proceedings of Ninth International Symposium on Software Reliability Engineering, 1998.

[7] Tausworthe, Robert C., Lyu, Michael R., "Software Reliability Simulation", Chapter 16, Handbook of Software Reliability Engineering, IEEE Computer Society Press, 1996.

[8] Software Engineering Institute, "The Capability Maturity Model for Software", Version 1.1, Carnegie Mellon University, 1998.

[9] Lakey, Peter B., "Software Reliability Prediction is not a Science… Yet", Cognitive Concepts, St. Louis, 2002.
