European Journal of Scientific Research

ISSN 1450-216X Vol.49 No.3 (2011), pp.332-353 © EuroJournals Publishing, Inc. 2011 http://www.eurojournals.com/ejsr.htm

Regression Test Suite Minimization Using Dynamic Interaction

Patterns with Improved FDE

S. Selvakumar

Corresponding Author Department of Information Technology

Thiagarajar College of Engineering, Madurai, India

E-mail: [email protected], [email protected] Tel: +91-9789916648

N. Ramaraj

Ph.D, Department of Computer Science &Engineering, G.K.M college of Engineering, Chennai, India

E-mail: [email protected]

Abstract

An EFSM (Extended Finite State Machine) model-based regression test suite minimization method based on dynamic dependence analysis with improved fault detection effectiveness (FDE) is proposed. Given an EFSM representing the requirements of a system under test and a set of elementary modifications (EMs) on the EFSM, various interaction patterns are identified for each type of EM, i.e., adding and deleting transitions in the EFSM. The proposed method automatically identifies the difference between the original model and the modified model as a set of elementary model modifications. For each EM, regression test minimization strategies are used to reduce the regression test suite based on the EFSM dependence analysis. The method reduces the size of a given regression test suite (RTS) by examining the various interaction patterns covered by each test case in the RTS. Whenever a software system undergoes modification, regression testing is needed, and during regression testing many tests turn out to be redundant. Our approach, Regression Test Suite Minimization Using Dynamic Interaction Patterns, identifies those redundant test cases and removes them. We also improve the fault detection ability by applying dynamic dependencies in place of static dependencies.

Keywords: Extended Finite State Machine, Control Dependence, Data Dependence, Regression Test Suite Reduction, Regression Test Suite Minimization.

1. Introduction

Testing is a process of technical investigation that is intended to reveal quality-related information about a product with respect to the context in which it is intended to operate. This includes the process of executing a program or application with the intent of finding errors. A study by the National Institute of Standards & Technology [11] found that "the national annual costs of an inadequate infrastructure for software testing is estimated to range from $22.2 to $59.5 billion" (p. ES-3), or about 0.6 percent of the US gross domestic product. This number does not include costs associated with catastrophic failures of mission-critical software (such as the $165 million Mars Polar

Lander shutdown in 1999). According to another report, the U.S. Department of Defense alone loses over four billion dollars a year due to software failures [10]. Testing activity consumes about 50% of software development resources; thus, any technique aimed at reducing software testing costs is likely to produce positive effects on cost reduction.

To ensure correctness, developers write unit tests for particular sections of the code. Each time new functionality is added to the project, the new tests are run in addition to the old ones to check for regressions in the project. As software systems grow in size, the size of the test suite also increases. Regression testing is the process of validating modified software to increase our confidence that the changed parts of the software behave as intended and that the unchanged parts have not been adversely affected by the modifications. It is very important to the correctness of the software, but a developer cannot afford to wait long for a test suite to run.

There exist two types of regression testing: code-based and specification-based regression testing. It has been shown that code-based testing and specification-based testing complement each other. Most regression testing techniques are code-based, i.e., they select test cases using the source code of the original and modified programs. There exists limited research on specification-based regression testing techniques, and most of these techniques select regression tests using only the modified system specification. A greater degree of attention has been paid to regression test selection, where code-based techniques have been effective at unit-level testing. Model-based testing is one of the techniques that can be used at the system level. When a system model is changed, one can apply model-based testing techniques to the modified model and partially test the system under test with respect to chosen requirements. However, the size of these test suites may be very large even for relatively small systems. Also, model-based testing techniques fulfill some coverage criterion for constructing a test suite.

In this paper, we present a novel approach to model-based regression test minimization, based on a Dynamic Dependence Graph, that uses EFSM model dependence analysis to reduce a given regression test suite. This approach has better fault detection ability than the Static Dependence Graph approach because it considers all the interaction patterns instead of ignoring repetitions of the same dependencies between transitions that occur during the iterative traversal of the model. Our initial experience shows that this approach may significantly reduce the size of regression test suites and also improve the fault detection capability.

In the next section, we present related work. Section 3 presents test suite minimization using Dynamic Interaction Patterns and motivates the need for the method. Section 4 presents the procedure for the EFSM model specification and discusses the dynamic interaction patterns that identify the reduced test cases, which preserve FDE at a minimized cost of executing the deliverables. Section 5 presents the experimental study and the empirical results that evaluate the proposed method. Finally, Section 6 presents conclusions and discusses some future work.

2. Related Work

In the literature, almost all approaches to test case generation consider how to avoid generating redundant test cases [3, 18]. On the other hand, much effort has also been put into research on how to reduce the size of a previously acquired test suite while maintaining its effectiveness. Typical test suite minimization (also called test suite reduction) techniques include heuristic approaches [14, 16], a genetic algorithm based approach [12], and approaches based on integer linear programming (ILP) [6].

In [17], the reduction of requirement-based test suites is done using EFSM dependence analysis. Requirement-based automated test case generation is a model-based technique for generating test suites related to individual requirements. These techniques may significantly reduce the number of test cases with respect to a requirement under test, as opposed to complete system testing. However, the number of test cases may still be very large, especially for large systems. Different types of dependencies are identified between the elements of the EFSM system model. These dependencies capture potential

interactions between the elements of the model and are used to determine parts of the model that affect a requirement under test. This information is used to reduce the test suite by identifying repetitive tests, i.e., tests that exhibit the same pattern of interactions with respect to the requirement under test. The work presented in [7] uses the EFSM model dependence analysis to reduce the regression test suites for the model-based regression testing. Model based testing is a system testing technique used to test software systems modeled by formal description languages, e.g., an Extended Finite State Machine (EFSM). System models are frequently changed because of specification changes. Selective test generation techniques are used to test the modified parts of the model. However, the size of regression test suites may still be very large. This approach automatically identifies the difference between the original model and the modified model as a set of elementary model modifications. For each EM, regression test minimization strategies are used to reduce the regression test suite based on the EFSM dependence analysis.

In [19], given a set of EMs on the EFSM, interaction patterns are identified for each type of EM, i.e., adding, deleting, and changing transitions in the EFSM. These interaction patterns capture the effects of the model on the EMs, the effects of the EMs on the model, and the side-effects of the EMs. The method proposed in [20] considers an SDL model representing the requirements of a system under test and a set of modifications on this model, applies dependence analysis to identify interaction patterns related to each type of modification, i.e., adding, deleting, and changing transitions in the SDL model, and reduces the size of a given regression test suite by examining the interaction patterns covered by each test case in the test suite. The work in [13] presents tool support and an evaluation of a state-based selective regression testing methodology for evolving state-based systems. START is an Eclipse-based tool for state-based regression testing compliant with UML 2.1 semantics. START deals with dependencies of state machines on class diagrams to cater for change propagation.

3. Test Suite Minimization Using Dynamic Interaction Patterns

Our proposed method reduces the size of a given RTS by examining the interaction patterns covered by each test case in the given RTS. Our work differs from the work of Vaysburg et al. [17], Korel et al. [7] and Yanping et al. [19] in terms of its fault detection effectiveness. Our proposed approach to test suite minimization using dynamic interaction patterns identifies the reduced test cases that provide the best coverage of the requirements at a minimized cost of executing the deliverables. We use the concept of EFSM dynamic dependence: the data and control dependencies are identified for each test case, the Dynamic Dependence Graph is constructed, and the test cases are reduced by eliminating the redundancies. The test cases are then prioritized based on the efficient order of their execution.

3.1. Motivational Example

Initially we experimented with programs developed as academic projects by students, such as the global banking system. We present this example program (i.e., the global banking system) as a running example throughout this paper to motivate our idea of test suite minimization using dynamic interaction patterns. This identifies the reduced test cases, with improved FDE, that provide a minimized cost for executing the deliverables.

3.1.1. Modular Design

Modular software design refers to a design strategy in which a system is composed of relatively small and autonomous routines that fit together. The basic idea underlying modular design is to organize a complex system as a set of distinct components that can be developed independently and then plugged together. The proposed system is composed of separate components that can be connected together. Each and every module is developed separately and then combined. In our system there are four Modules. The first module is Application Specification Selection. The second module is EFSM Model

Generation. The third is Test Case Identification. The fourth module is Test Case Reduction using the dynamic dependency approach of the EFSM model.

(i). Application Specification Selection Module

The desired application specification is selected for which test cases are to be generated. From the selected application specification the properties are extracted and positioned in the property list. Considering the global banking application, there are two modules whose properties and specifications are listed as follows.

Administrator module:
• View pending A/C opening applications
• Viewing details of a particular account
• Edit press releases
• Edit jobs listing
• Change user name and password

Account Holder module:
• View balance
• Transferring funds
• Previous transaction details
• Change user name and password
• Request for help

(ii). EFSM Model Generation

In the EFSM Model Generation module, an EFSM model is generated for the application requirements and specifications listed in the above module.

(iii). Test Case Identification

In Test Case Identification, various possible test cases are identified from the application by traversing the model in all the possible paths.

(iv). Test Case Reduction Using Dynamic Dependencies

The test suite obtained might contain redundancies. This module helps to remove the redundant test cases by applying the Dynamic Dependency approach of the EFSM model.

4. Procedure for EFSM Model Specification

Fig. 1 shows the procedure for test suite minimization by interaction pattern identification; an illustrative code sketch of these steps follows the step list.

Figure 1: Procedure for Test suite minimization by interaction pattern identification.

Step 1: Identify the Test cases

Step 2: Find Data and Control Dependencies for the Test cases from the EFSM model

Step 3: Construct the Dynamic Dependency Graphs

Step 4: Identify the Interaction Patterns

Step 5: Compare all the Interaction Patterns and remove the Redundant Patterns

Step 6: Retain the remaining Interaction Patterns to form a Reduced Test Suite
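As an illustration only, the six steps of Fig. 1 can be sketched as follows. Here `pattern_of` stands in for the dependence analysis of Sections 4.1-4.2, and all names are hypothetical rather than the authors' implementation.

```python
# A minimal sketch of the Fig. 1 procedure: keep one representative test case per
# distinct interaction pattern. `pattern_of` is assumed to return a hashable
# (e.g. frozenset) encoding of a test's dynamic interaction pattern.
from typing import Callable, Dict, List, Sequence

def minimize_suite(tests: List[Sequence[str]],
                   pattern_of: Callable[[Sequence[str]], frozenset]) -> List[Sequence[str]]:
    representatives: Dict[frozenset, Sequence[str]] = {}
    for test in tests:                        # Step 1: the identified test cases
        pattern = pattern_of(test)            # Steps 2-4: dependencies -> DDG -> pattern
        if pattern not in representatives:    # Step 5: drop tests whose pattern repeats
            representatives[pattern] = test
    return list(representatives.values())     # Step 6: the reduced test suite
```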

4.1. The EFSM Model

The model based regression testing techniques [7] use only a modified system model in which the modified elements (states and transitions) are tested using selective test generation techniques, i.e., each regression test case contains a modified model element. In this section we present an approach of model-based regression testing that can be used for any modification of the EFSM system model. An EFSM is a 5-tuple <S, I, O, V, T> where

• S is a nonempty finite set of states with two states designated as Start and Exit states of the EFSM.

• I is a nonempty finite set of input interactions, each with a (possibly empty) set of input interaction parameters.

• O is a nonempty finite set of output interactions, each with a (possibly empty) set of output interaction parameters.

• V is the nonempty finite set of all variables which is the union of set of all local variables and set of all interaction parameters.

• T is a nonempty finite set of transitions.

An EFSM model consists of states and the transitions between them. There are start (Initial) and exit (Final) nodes, and every other node is reachable from the start node and can reach the exit node. A transition is triggered when an event occurs and the condition associated with the event is satisfied. When a transition is triggered, actions may be performed that may read input, manipulate the variables, or produce outputs. EFSM models are graphically represented with states as nodes and transitions as directed edges between states. A transition has the following elements (an illustrative sketch of one possible encoding follows the list), namely

1) an event, 2) a condition, and 3) a sequence of actions.
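For illustration, the 5-tuple and the event/condition/actions of a transition could be represented as below; this is only a sketch of one possible encoding, not the authors' tooling, and all names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Set

Context = Dict[str, object]        # current values of the variables in V

@dataclass
class Transition:
    name: str                      # e.g. "t2"
    source: str                    # source state
    target: str                    # target state
    event: str                     # triggering input interaction
    condition: Callable[[Context], bool] = lambda ctx: True   # enabling predicate over V
    actions: List[Callable[[Context], None]] = field(default_factory=list)

@dataclass
class EFSM:
    states: Set[str]               # S, including the designated Start and Exit states
    inputs: Set[str]               # I, input interactions
    outputs: Set[str]              # O, output interactions
    variables: Context             # V, local variables and interaction parameters
    transitions: List[Transition]  # T
```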

Figure 2: EFSM Transition

A simplified EFSM model of a global banking system is shown in fig. 3. For detailed information on the generation of the EFSM, figs. 3.A, 3.B, and 3.C in the Appendix may be referred to.

Figure 3: The EFSM Model for a Global Banking System

The banking system supports two types of login, namely administrator and account holder. The user must choose the type of login and then enter the user name and password, which are verified against those stored in the bank database. The user is allowed a maximum of three attempts to enter a valid user name and password. Once users log into the system, they can perform the operations shown in fig. 3. For example, transition t2 is triggered if the user enters an invalid user name or password, and the value of the variable att is then incremented by one. Similarly, various transitions are invoked as the users perform various operations. Since transitions represent the active elements of the

EFSM model, we concentrate on modifications to transitions rather than on modifications to states. In an EFSM model, data and control dependences may exist between transitions. These dependencies are identified using dependence analysis.

• Data Dependence: A data dependence [17] captures the notion that one transition defines a value for a variable and the same or some other transition may potentially use this value. For example, in fig. 4 there exists a data dependence between transitions t1 and t2 because transition t1 assigns a value to the variable att and transition t2 uses that variable. (A sketch of how such dependencies can be identified is given after this list.)

• Control Dependence: The concept of control dependence in the EFSM [17] exists between transitions and captures the notion that one transition may affect the traversal of another transition. Control dependence between transitions can be defined in terms of the concept of post-dominance. Let Y and Z be two states (nodes) and t be an outgoing transition (arc) from Y. State Z post-dominates state Y iff Z is on every path from Y to the exit state. State Z post-dominates transition t iff Z is on every path from Y to the exit state through transition t. For example, in fig. 4, transition t3 has a control dependence on transition t5 because state S2 does not post-dominate state S1 and state S2 post-dominates transition t4.

Figure 4: Data and Control Dependence

• Static Dependence Graph: The Static Dependence Graph (SDG) graphically represents the Data Dependencies (DDs) and Control Dependencies (CDs) in an EFSM (Fig. 5.). Here the nodes represent the transitions and the directed arcs represent the Data and Control Dependencies.
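As an illustration of how data dependencies might be collected, assuming each transition is annotated with the variables it defines and uses (all names hypothetical; definition-clear-path checks are omitted for brevity):

```python
from typing import Dict, Iterable, Set, Tuple

def data_dependences(defines: Dict[str, Set[str]],
                     uses: Dict[str, Set[str]],
                     reachable_pairs: Iterable[Tuple[str, str]]) -> Set[Tuple[str, str, str]]:
    """Collect (tj, tk, v): tj defines variable v and tk may later use that value.

    `reachable_pairs` are transition pairs connected by some path in the model.
    """
    dds: Set[Tuple[str, str, str]] = set()
    for tj, tk in reachable_pairs:
        for v in defines.get(tj, set()) & uses.get(tk, set()):
            dds.add((tj, tk, v))
    return dds
```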

Figure 5: Static Dependence Graph Figure 6: Dynamic Dependence Graph

Test minimization using Static Interaction Patterns is appropriate in the initial stages of testing, when a relatively small number of "high quality" tests are supposed to be used. However, test minimization using Static Interaction Patterns ignores repetitions of the same dependencies (interactions) between transitions. Therefore, test minimization using more sophisticated interaction

patterns that take repetition of the same interactions into account is presented. This leads our approach to Dynamic Interaction Patterns.

4.2. Dynamic EFSM Dependencies

Our approach is, in principle, similar to the approach described in the previous section, except that during the traversal of a test (sequence) each traversed transition is represented as a separate node in the dependence graph. We refer to this graph as the Dynamic EFSM Dependence Graph. The approach works as follows. Given a test (a sequence of transitions), during traversal each traversed transition is represented as a node in the dynamic EFSM dependence graph, and each identified data or control dependence is represented by an arc between the corresponding transitions. This process results in a dynamic EFSM dependence graph. In the next step, all the dependencies in the dynamic EFSM dependence graph that influence the transition(s) under test are identified by traversing backwards from the transition(s) under test and marking all traversed dependencies. All unmarked dependencies are removed from the dynamic dependence graph. The resulting dynamic EFSM dependence sub-graph is referred to as a Dynamic Interaction Pattern, where data and control dependencies represent the interactions between transitions. An example of a Dynamic Dependence Graph is shown in Fig. 6.
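A minimal sketch of this construction follows, assuming each transition carries def/use variable sets and that static control dependencies between transition pairs are available. The simplification of linking every earlier occurrence of a controlling transition, and the canonical pattern encoding as interaction counts, are our illustrative choices, not the paper's.

```python
from collections import Counter, defaultdict
from typing import Dict, List, Set, Tuple

def dynamic_interaction_pattern(test: List[str],
                                defines: Dict[str, Set[str]],
                                uses: Dict[str, Set[str]],
                                control_deps: Set[Tuple[str, str]],
                                under_test: Set[str]) -> frozenset:
    """Build a dynamic dependence graph for one test and keep only the dependencies
    that (transitively) influence the transition(s) under test."""
    edges: Set[Tuple[int, int]] = set()
    last_def: Dict[str, int] = {}               # variable -> position of its latest definition
    for i, t in enumerate(test):                # each traversed transition is its own node
        for v in uses.get(t, set()):
            if v in last_def:
                edges.add((last_def[v], i))     # dynamic data dependence
        for j in range(i):                      # simplified dynamic control dependence:
            if (test[j], t) in control_deps:    # link every earlier occurrence of a
                edges.add((j, i))               # controlling transition to this one
        for v in defines.get(t, set()):
            last_def[v] = i
    # mark everything that reaches an occurrence of a transition under test
    preds = defaultdict(set)
    for a, b in edges:
        preds[b].add(a)
    worklist = [i for i, t in enumerate(test) if t in under_test]
    marked: Set[int] = set(worklist)
    while worklist:
        n = worklist.pop()
        for p in preds[n] - marked:
            marked.add(p)
            worklist.append(p)
    kept = [(test[a], test[b]) for (a, b) in edges if a in marked and b in marked]
    # canonical form: each labelled interaction together with how often it repeats
    return frozenset(Counter(kept).items())
```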

Figure 7: Static Dependence Graph for the above tests

Figure 8: Dynamic Dependence Graph for the above tests

Test 1: t1 t2 t2 t2 t3 t5 t14 t6 t14
Test 2: t1 t2 t2 t2 t2 t3 t5 t14 t6 t14

Fig. 7 shows the static dependence graph for the above tests, Test 1 and Test 2. Both tests yield the same interaction pattern even though the tests are different, so one of them is removed in the case of the Static Dependence Graph (SDG). Fig. 8 shows the dynamic dependence graph for the same tests. Here the interaction patterns differ from each other and hence neither test should be ignored. This is the shortcoming of the SDG, and it is overcome by the DDG. Thus the dynamic dependence graph provides improved fault detection ability.

4.2.1. Test Case Minimization Using Dynamic Dependencies

In total, 209 test cases were generated for the global banking application. Some test cases from the original test suite for the global banking system are considered as examples to show how we reduce them (Fig. 9). The full test suite for this application is shown in the tables in the Appendix. Fig. 10 and Fig. 11 show the dynamic dependence graphs for the corresponding tests.

Figure 9: The representative test cases for the Global banking system

Test case A: t1 t2 t2 t4 t10 t14a
Test case B: t1 t2 t2 t4 t10 t14a t11
Test case C: t1 t2 t4 t12 t14a
Test case D: t1 t2 t4 t12 t14a t17
Test case E: t1 t2 t4 t11 t14a
Test case F: t1 t2 t3 t5 t14
Test case G: t1 t2 t3 t5 t14 t16
Test case H: t1 t2 t2 t3 t5 t14 t6 t14
Test case I: t1 t2 t2 t3 t5 t14 t6 t14 t16

Figure 10: Dynamic Dependence Graph for the above tests

Figure 11: Dynamic Dependence Graph for the above tests

REDUCED TESTS: A C E F H

REDUNDANT TESTS: B D G I

The redundant test cases are removed and the reduced test cases are retained. Likewise, Dynamic Dependence Graphs were constructed for the entire test suite of 209 test cases.

4.2.2. Test Case Prioritization

Test case prioritization techniques schedule test cases for execution in an order that attempts to maximize some objective function. A variety of objective functions are applicable; one such function involves the rate of fault detection, a measure of how quickly faults are detected within the testing process. An improved rate of fault detection during regression testing can provide faster feedback on a system under regression test and let debuggers begin their work earlier than might otherwise be possible.

Test case prioritization can address a wide variety of objectives, including the following:
1) Testers may wish to increase the rate of fault detection, that is, the likelihood of revealing faults earlier in a run of regression tests.
2) Testers may wish to increase the rate of detection of high-risk faults, locating those faults earlier in the testing process.
3) Testers may wish to increase the likelihood of revealing regression errors related to specific code changes earlier in the regression testing process.
4) Testers may wish to increase their coverage of coverable code in the system under test at a faster rate.
5) Testers may wish to increase their confidence in the reliability of the system under test.

Optimal prioritization: To measure the effects of prioritization techniques on the rate of fault detection, our empirical study utilizes programs that contain known faults. It can be determined, for any test suite, which test cases expose which faults, and thus an optimal ordering of the test cases for maximizing that suite's rate of fault detection can be obtained (a greedy sketch of such an ordering is given below). Table 1 shows the reduced prioritized test suite.
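For illustration, when the fault-exposing information of each test case is known, an ordering that maximizes the rate of fault detection can be approximated greedily. The map `faults_of` below is a hypothetical name for the test-to-faults information, not something defined in the paper.

```python
from typing import Dict, List, Set

def prioritize_by_fault_detection(test_ids: List[str],
                                  faults_of: Dict[str, Set[str]]) -> List[str]:
    """Greedy ordering: each next test exposes the most faults not yet exposed."""
    ordering: List[str] = []
    exposed: Set[str] = set()
    remaining = list(test_ids)
    while remaining:
        best = max(remaining, key=lambda t: len(faults_of.get(t, set()) - exposed))
        ordering.append(best)
        exposed |= faults_of.get(best, set())
        remaining.remove(best)
    return ordering
```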

Table 1: Reduced prioritized test suite

admin module    admin module iteration    account holder module    account holder module iteration

t1 t3 t5 t14 t16 t1 t3 t5 t14 t5 t14 t16 t1 t4 t9 t14a t17 t1 t4 t9 t14a t9 t14a t17 t1 t3 t6 t14 t16 t1 t3 t5 t14 t6 t14 t16 t1 t4 t10 t14a t17 t1 t4 t9 t14a t10 t14a t17 t1 t3 t7 t14 t16 t1 t3 t5 t14 t7 t14 t16 t1 t4 t11 t14a t17 t1 t4 t9 t14a t11 t14a t17 t1 t3 t8 t14 t16 t1 t3 t5 t14 t8 t14 t16 t1 t4 t12 t14a t17 t1 t4 t9 t14a t12 t14a t17 t1 t2 t2 t2 t3 t6 t14 t16 t1 t3 t6 t14 t5 t14 t16 t1 t4 t13 t14a t17 t1 t4 t9 t14a t13 t14a t17 t1 t2 t2 t2 t3 t7 t14 t16 t1 t3 t8 t14 t7 t14 t16 t1 t2 t4 t9 t14a t17 t1 t4 t10 t14a t9 t14a t17 t1 t2 t2 t2 t3 t8 t14 t16 t1 t3 t8 t14 t8 t14 t16 t1 t2 t2 t4 t10 t14a t17 t1 t4 t10 t14a t10 t14a t17

exit without any operation t1 t2 t3 t5 t14 t5 t14 t16 t1 t2 t2 t4 t11 t14a t17 t1 t4 t10 t14a t11 t14a t17

t1 t15 t1 t2 t3 t5 t14 t6 t14 t16 t1 t2 t2 t4 t12 t14a t17 t1 t4 t10 t14a t12 t14a t17 t1 t3 t16 t1 t2 t3 t5 t14 t7 t14 t16 t1 t2 t2 t4 t13 t14a t17 t1 t4 t10 t14a t13 t14a t17 t1 t4 t17 t1 t2 t3 t5 t14 t8 t14 t16 t1 t2 t2 t2 t4 t9 t14a t17 t1 t4 t13 t14a t9 t14a t17 t1 t2 t3 t16 t1 t2 t3 t6 t14 t5 t14 t16 t1 t2 t2 t2 t4 t10 t14a t17 t1 t4 t13 t14a t10 t14a t17 t1 t2 t4 t17 t1 t2 t3 t6 t14 t6 t14 t16 t1 t2 t2 t2 t4 t12 t14a t17 t1 t2 t4 t10 t14a t13 t14a t17 t1 t2 t2 t3 t16 t1 t2 t3 t6 t14 t7 t14 t16 t1 t2 t2 t2 t4 t13 t14a t17 t1 t2 t4 t11 t14a t9 t14a t17 t1 t2 t2 t4 t17 t1 t2 t3 t8 t14 t7 t14 t16 t1 t2 t4 t12 t14a t13 t14a t17 t1 t2 t2 t2 t3 t16 t1 t2 t3 t8 t14 t8 t14 t16 t1 t2 t4 t13 t14a t9 t14a t17 t1 t2 t2 t2 t4 t17 t1 t2 t2 t3 t5 t14 t5 t14 t16 t1 t2 t4 t13 t14a t10 t14a t17 t1 t2 t2 t3 t5 t14 t6 t14 t16 t1 t2 t4 t13 t14a t11 t14a t17 t1 t2 t2 t3 t5 t14 t7 t14 t16 t1 t2 t4 t13 t14a t12 t14a t17 t1 t2 t2 t3 t5 t14 t8 t14 t16 t1 t2 t4 t13 t14a t13 t14a t17 t1 t2 t2 t3 t6 t14 t5 t14 t16 t1 t2 t2 t4 t10 t14a t9 t14a t17 t1 t2 t2 t3 t6 t14 t6 t14 t16 t1 t2 t2 t4 t10 t14a t10 t14a t17 t1 t2 t2 t3 t6 t14 t7 t14 t16 t1 t2 t2 t4 t10 t14a t11 t14a t17 t1 t2 t2 t3 t6 t14 t8 t14 t16 t1 t2 t2 t4 t10 t14a t12 t14a t17 t1 t2 t2 t3 t7 t14 t5 t14 t16 t1 t2 t2 t4 t10 t14a t13 t14a t17 t1 t2 t2 t3 t7 t14 t6 t14 t16 t1 t2 t2 t4 t11 t14a t10 t14a t17 t1 t2 t2 t2 t3 t8 t14 t5 t14 t16 t1 t2 t2 t4 t11 t14a t11 t14a t17 t1 t2 t2 t4 t11 t14a t12 t14a t17 t1 t2 t2 t4 t11 t14a t13 t14a t17 t1 t2 t2 t4 t12 t14a t9 t14a t17 t1 t2 t2 t4 t12 t14a t10 t14a t17 t1 t2 t2 t4 t12 t14a t11 t14a t17 t1 t2 t2 t2 t4 t12 t14a t9 t14a t17 t1 t2 t2 t2 t4 t12 t14a t10 t14a

t17

5. Experimental Study

5.1. Subject Programs, Faulty Versions, and Test Case Pools

An experimental set-up similar to that used by Rothermel et al. [4] and Jeffrey [2] was followed. The Siemens programs described in Table 2 were used as subject programs. All programs, faulty versions, and test pools used in our experiments were assembled by researchers from Siemens Corporation [8, 9, 1]. These programs, their faulty versions, and the associated test pools were obtained from [1]. The types of errors introduced in the faulty versions of each subject program were examined, and six distinct categories of seeded errors were identified: (1) changing the operator in an expression, (2) changing an operand in an expression, (3) changing the value of a constant, (4) removing code, (5) adding code, and (6) changing the logical behavior of the code (usually involving a few of the other categories of error types simultaneously in one faulty version). The programs developed as academic projects by students (Table 3) and other objects retrieved from the software infrastructure repository [1] (Table 4) were also used as subject programs.

Environment setting: Pentium IV 2.8 GHz, 512 MB RAM, Windows XP operating system.

Table 2: Siemens suite subject programs

Name Lines of code Faulty version count Test pool size Program Description

tcas 162 41 1608 Altitude separation

totinfo 346 23 1052 Information measure
schedule 299 9 2650 Priority scheduler
schedule2 287 10 2710 Priority scheduler
printtokens 378 7 4130 Lexical analyzer
printtokens2 366 10 4115 Lexical analyzer
replace 514 32 5542 Pattern replacement
Space 9127 38 13,585 Array definition language interpreter

Table 3: Academic subject programs

Name Lines of code No. of Classes

Triangle 123 2
Sample 66 1
Average 131 1
Greatest number 186 1
Gcd 142 2

Table 4: Objects from Software Infrastructure Repository [1]

Name Lines of code No. of Classes

Binary-Search-Tree 130 3
Array-Partition 13 1
Doubly-Linked-List 277 1
Sorting 130 1
Vector 254 1
Binary-Heap 72 2
Disjoint-Set 35 1
Stack 114 5
Elevator 934 12
OrdSet 229 2
deadlock 24 4
accountsubtype 89 6
account 66 3
Producer-consumer 99 8
Alarm-clock 125 6
linkedlist 121 5

5.2. Measures

In this paper, the following criteria are used to judge the performance of the proposed approach. (i) The percentage of suite size reduction (SSR) is defined as

SSR = (|T| - |Tred|) / |T| × 100%

where |T| = number of test cases in the original suite and |Tred| = number of test cases in the minimized/reduced suite. A higher SSR means a better reduction capability.

(ii) The percentage of fault detection effectiveness loss (FDE Loss) is defined as

FDE Loss = (|F| - |Fred|) / |F| × 100%

where |F| = number of distinct faults exposed by the original suite and

|Fred| = Number of distinct faults exposed by the minimized/ reduced suite. For the subject programs, the fault-exposing information of each test case is provided. Some

test cases of a test suite may expose the same faults, but a fault exposed by different test cases of a suite will be counted only once. The closer the FDE Loss is to zero, the better the fault-revealing capability.
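A direct transcription of these two measures, following the definitions as reconstructed above (a sketch only):

```python
def suite_size_reduction(original_size: int, reduced_size: int) -> float:
    """SSR = (|T| - |Tred|) / |T| * 100; higher means better reduction."""
    return (original_size - reduced_size) / original_size * 100.0

def fde_loss(original_faults: int, reduced_faults: int) -> float:
    """FDE Loss = (|F| - |Fred|) / |F| * 100; closer to zero means less fault loss."""
    return (original_faults - reduced_faults) / original_faults * 100.0
```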

5.3. Experiment SDG versus DDG

The results for this experiment are shown in the columns labeled SDG and DDG in Table 5.

Table 5: Experimental Results for Global Banking Application

Size of the original test suite   Reduced suite size (SDG)   Reduced suite size (DDG)   Fault detection ability % (SDG)   Fault detection ability % (DDG)
10 6 6 60 60
20 11 11 55 55
40 23 24 62 64
60 31 34 65 68
80 38 43 49 62
100 49 56 53 57
120 53 62 56 63
140 59 69 68 74
160 64 74 73 86
180 71 82 81 89
200 80 93 74 77
209 85 97 71 84

Figure 12: Test Suite Minimization for Global Banking Application

(Graph 1: Test Suite Reduction; size of the test suite versus iterations for the original suite and the reduced suites under SDG and DDG.)

Figure 13: Fault Detection Ability for Global Banking Application

(Graph 2: Fault Detection Ability; fault detection versus iterations for the original suite size and the fault detection ability (%) under SDG and DDG.)

The results in Fig. 12 and Fig. 13 show that the size of the reduced test suite in the DDG approach is slightly larger than in the SDG approach, with an improved fault detection ability.

5.4. Comparison with Random, Greedy, Heuristic, Delgreedy and 2-Optimal, SDA

This section discusses the gist of some of the test suite minimization methods, viz., Random, Greedy, Heuristic, Delgreedy, 2-Optimal, SDA, and DDA. Fig. 14 represents the size of the representative sets for these algorithms.

Random

• All the tests to satisfy the whole requirement are applied.

• From these, the reduced test suite is chosen randomly such that it satisfies more requirements.

Greedy Algorithm

• Tests covering more requirements than other tests are desired.

• Choose the tests that cover the most requirements.

Heuristic Algorithm

• Every requirement must be covered in order to maintain coverage.

• For requirements that are covered by the fewest tests, those tests have a high probability of being chosen.

• Selection proceeds first from the tests that are more likely to be chosen, and then from the tests that are less likely.

Delgreedy Algorithm

• A test whose set of covered requirements is a subset of another test’s set of covered requirements does not need to be considered.

• A requirement whose set of covering tests is a superset of another requirement’s set of covering tests does not need to be considered.

• If a requirement is only covered by one test, that test is to be chosen.

• If there are no requirements covered by only one test, a test is chosen greedily.

2-Optimal Algorithm

• 2-Optimal is a step towards brute force search.

• Compare every pair to every other pair of tests.

• Generalizes to K-way comparison.

Static Dependence Analysis

Nodes represent EFSM transitions and directed edges represent DDs and CDs. Let D and C be the sets of all DDs and CDs in an EFSM, respectively. That is, D = {(tj, tk, v) | (tj, tk, v) is a DD from tj to tk w.r.t. v} and C = {(tj, tk) | (tj, tk) is a CD from tj to tk}. The SDG of a given EFSM is constructed as a directed graph G(N, E) as follows. Let tj, tk ∈ T and v ∈ V of the EFSM.

E ← ∅; N ← {ni | ni for each ti ∈ T}
For each (tj, tk) ∈ C, E ← E ∪ {a dashed edge from tj to tk}.
For each (tj, tk, v) ∈ D, E ← E ∪ {a solid edge from tj to tk}.
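This construction can be transcribed directly, for example as the following sketch, where Python sets stand in for the graph notation and the names are illustrative:

```python
from typing import Set, Tuple

def build_sdg(transitions: Set[str],
              dds: Set[Tuple[str, str, str]],   # (tj, tk, v) data dependencies
              cds: Set[Tuple[str, str]]):       # (tj, tk) control dependencies
    """Return the SDG as (nodes, solid DD edges, dashed CD edges)."""
    nodes = set(transitions)                    # one node per transition
    solid = {(tj, tk) for (tj, tk, _v) in dds}  # solid edges for data dependencies
    dashed = set(cds)                           # dashed edges for control dependencies
    return nodes, solid, dashed
```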

Figure 14: Size of representative sets for Random, Greedy, Heuristic, Delgreedy and 2-Optimal, SDA, DDA.

(Bar chart comparing, for each of Random, Greedy, Heuristic, Delgreedy, 2-optimal, SDA, and DDA, the original test suite size, the reduced test suite size, and the redundancy.)

5.5. Experiments Using DDA to Reduce Test Suites Generated from Specifications of SIR Programs

5.5.1. Experimental Results, Analysis, and Discussion

For each experimental program, branch-coverage-adequate test suites were created for six different suite ranges named Br, Br+0.1, Br+0.2, Br+0.3, Br+0.4 and Br+0.5. For each suite range, initially X * LOC test cases were selected randomly from the test pool and added to the test suite, where X is 0, 0.1, 0.2, 0.3, 0.4 and 0.5 respectively and LOC is the number of lines of code of the program. Then, further randomly selected test cases were added to the test suite only if each test case increased the cumulative branch coverage of the suite, until the test suite became adequate with respect to branch coverage. In this way, the developed test suites have varying types and levels of redundancy. For each program, 1000 such branch-coverage-adequate test suites were created in each suite size range. In order to gather branch coverage information for the test cases, all programs were hand-instrumented. Both the SDA and the DDA were applied to the generated suites with respect to branch coverage as the testing criterion. The results of this experiment are shown in the columns labeled SDA and DDA in Table 6. The values in each row of the table are average values over the 1000 suites in each range. In this table, |T| indicates the original suite size, |F| the number of faults exposed by the original suite, |Tred| the reduced suite size, |Fred| the number of faults exposed by the reduced suite, %Size Reduction the percentage suite size reduction, and %Fault Loss the percentage fault detection loss.
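A sketch of this suite-construction procedure, assuming a hypothetical `branches_of` map giving the branches covered by each test in the pool (names are illustrative, not the authors' scripts):

```python
import random
from typing import Dict, List, Set

def build_branch_adequate_suite(pool: List[str],
                                branches_of: Dict[str, Set[str]],
                                all_branches: Set[str],
                                x: float, loc: int) -> List[str]:
    """X * LOC random tests, then add tests only while they increase branch coverage."""
    suite = random.sample(pool, min(len(pool), int(x * loc)))
    covered: Set[str] = set()
    for t in suite:
        covered |= branches_of.get(t, set())
    for t in random.sample(pool, len(pool)):      # remaining pool in random order
        if covered >= all_branches:
            break                                 # suite is branch-coverage adequate
        extra = branches_of.get(t, set()) - covered
        if extra:
            suite.append(t)
            covered |= extra
    return suite
```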

Table 6: Experimental Results for Experiments SDA and DDA

Program / suite size range   |T|   |F|   |Tred| (SDA, DDA)   |Fred| (SDA, DDA)   %Size Reduction (SDA, DDA)   %Fault Loss (SDA, DDA)
tcas Br 5.71 7.47 5.37 5.36 6.81 6.92 12.06 9.91 7.83 6.39
tcas Br+0.1 9.56 9.15 5.72 6.41 6.97 7.46 35.55 30.18 20.82 16.53
tcas Br+0.2 15.2 11.73 6.73 7.04 7 7.83 50.9 45.54 34.97 28.56
tcas Br+0.3 21.39 14.02 6.86 7.86 7.11 8.25 60.34 55.23 42.96 35.62
tcas Br+0.4 29.07 16.29 6.94 7.87 7.21 8.56 67.47 62.95 49.53 41.79
tcas Br+0.5 35.63 17.76 7.83 7.92 7.05 8.59 71.74 67.57 54.06 46.13
totinfo Br 7.3 12.49 6.41 6.38 11.83 11.87 24.7 23.06 5.08 4.77
totinfo Br+0.1 18.68 14.62 6.63 5.61 12.43 12.63 63.26 60.04 14.13 12.85
totinfo Br+0.2 35.61 16.73 6.23 6.22 12.79 13.11 76.71 73.54 22.35 20.48
totinfo Br+0.3 52.07 17.7 6.21 6.19 13.01 13.19 81.99 79.21 25.09 24.05

totinfo Br+0.4 69.62 18.55 6.02 5.86 13.2 13.27 86.15 83.82 27.51 27.07
totinfo Br+0.5 87.73 19.16 6.73 5.54 13.18 13.15 88.67 86.62 30.04 30.15
sched Br 7.31 3.38 6.64 6.42 3.09 3.09 22.9 21.99 8.02 7.76
sched Br+0.1 18.44 4.58 6.81 7.21 3.21 3.25 62.8 60.77 28.21 27.16
sched Br+0.2 32.09 5.18 6.84 7.44 3.16 3.23 74.39 72.57 38.22 36.79
sched Br+0.3 47.91 5.61 6.92 7.88 3.21 3.33 80.66 79.12 42.01 39.81
sched Br+0.4 58.83 5.77 6.93 7.91 3.24 3.37 82.65 81.28 42.88 40.62
sched Br+0.5 74.94 5.96 6.97 7.96 3.16 3.27 85.79 84.51 45.93 44.46
sched2 Br 8.01 2.21 5.73 5.79 1.98 1.98 27.04 26.38 8.65 8.46
sched2 Br+0.1 18.61 2.57 5.77 6.12 2.05 2.08 62.62 60.8 16.99 15.99
sched2 Br+0.2 33.19 3.23 5.75 6.23 2.05 2.13 75.02 73.53 32.17 30.37
sched2 Br+0.3 47.44 3.77 5.77 6.38 2.08 2.15 81.11 79.74 39.55 38.27
sched2 Br+0.4 61.6 4.35 5.84 6.54 2.28 2.42 84.04 82.8 43.14 40.05
sched2 Br+0.5 76.34 4.73 5.86 6.71 2.25 2.44 86.6 85.36 46.67 43.15
ptok Br 15.76 3.38 7.51 7.63 2.99 3.03 51.15 50.39 9.9 9.19
ptok Br+0.1 27.64 3.64 7.56 7.76 3.05 3.06 69.34 68.62 14.5 14.21
ptok Br+0.2 46.03 3.96 7.44 7.75 3.06 3.11 78.95 78.26 20.62 19.53
ptok Br+0.3 63.84 4.28 7.36 7.76 3.09 3.15 82.77 82.16 25.16 24.07
ptok Br+0.4 83.44 4.54 7.32 7.8 3.12 3.19 85.89 85.27 28.65 27.36
ptok Br+0.5 101.87 4.75 7.23 7.73 3.15 3.22 87.91 87.38 30.73 29.46
ptok2 Br 11.77 7.36 8.78 9.04 7.25 7.25 23.96 21.96 1.49 1.45
ptok2 Br+0.1 27.56 7.8 10.05 11.79 7.45 7.49 55.54 50.02 4.24 3.82
ptok2 Br+0.2 49.74 8.17 10.05 12.76 7.63 7.63 70.35 65.06 6.38 6.34
ptok2 Br+0.3 75.01 8.45 9.92 13.22 7.79 7.86 78.56 73.68 7.58 6.78
ptok2 Br+0.4 100.34 8.58 9.9 13.41 7.84 7.89 82.59 78.57 8.4 7.82
ptok2 Br+0.5 121.73 8.6 9.89 13.51 7.85 7.94 84.43 80.71 8.52 7.52
replace Br 18.63 11.13 14.53 14.92 10.32 10.42 21.5 19.43 7.11 6.2
replace Br+0.1 34.59 14.1 15.86 17.49 11.61 12 48.83 44.46 16.61 13.97
replace Br+0.2 56.67 16.8 16.31 19.13 12.52 13.12 63.14 58.45 23.9 20.49
replace Br+0.3 82.49 19.01 16.7 20.54 12.98 13.82 71.45 66.84 29.6 25.54
replace Br+0.4 105.06 19.96 16.8 21.27 13.28 14.11 74.96 70.63 31.27 27.34
replace Br+0.5 134.59 21.43 16.95 22.39 13.52 14.53 80.48 76.1 34.97 30.38
Space Br 154.75 31.12 126.41 127.12 30.88 30.89 18.27 17.81 0.77 0.74
Space Br+0.1 363.32 31.68 128.28 132.89 31.77 31.14 57.64 56.49 1.79 1.69
Space Br+0.2 650.35 32.09 127.25 135.84 31.07 31.1 72.58 71.33 3.15 3.03
Space Br+0.3 959.4 32.48 126.41 137.58 31.1 31.2 78.49 77.31 4.19 3.89
Space Br+0.4 1243.22 32.77 125.92 138.46 31.12 31.24 82.66 81.57 4.97 4.61
Space Br+0.5 1559.31 32.93 125.16 138.84 31.21 31.26 84.9 83.89 5.15 5

Fig. 15 and Fig. 16 depict the sizes of the representative sets generated by the test suite reduction techniques for the academic subject programs and for the programs retrieved from the software infrastructure repository, respectively. In each figure the horizontal axis denotes the subject programs and the vertical axis denotes the size of the representative set generated by the dependence graphs. From Fig. 15 and Fig. 16, it can be seen that the DDA method can significantly reduce the sizes of test suites. Fig. 17 shows the fault detection loss of the reduced suites for experiments SDA and DDA. For most of the ranges, the SDA experiment resulted in a higher percentage fault detection loss on average than the DDA experiment.

Figure 15: Sizes of the reduced test suite versus the sizes of the original test suite for Academic projects.

Page 16: My Academic project work

Regression Test Suite Minimization Using Dynamic Interaction Patterns with Improved FDE 347

Figure 16: Sizes of the reduced test suite versus the sizes of the original test suite for SIR objects.

Figure 17: Number of faults detected by the reduced test suites versus range X

5.5.2. Experiment SDA versus DDA

Both the SDA technique and the DDA technique were applied to the generated suites with respect to branch coverage as the testing criterion. The results show that both the SDA and the DDA reduce the suites to a certain extent, which indicates the effectiveness of the proposed system in determining redundant test cases. The suite size reduction increases for larger suites, because the larger number of test cases provides more opportunities for the DDA to select among test cases. The results also show that the average fault detection loss is improved except for printtokens2. In addition, the amount of fault loss for the tcas and printtokens2 programs is relatively high compared with the other programs. For tcas, this may be due to the simplicity of the program: test cases that satisfy the same branches are treated as redundant, but these test cases exercise unique

execution paths with respect to some other testing criteria. Hence, using different or finer-grained criteria would result in significant improvements in fault loss. For printtokens2, the fault loss is high simply due to the low number of faults.

To determine whether the change in fault detection loss observed for the DDA relative to the SDA is statistically significant, a hypothesis test for the difference of the two means was conducted [5]. The samples are the numbers of distinct faults exposed by each of the 1000 reduced test suites for suite size range Br+0.5 using the SDA technique and the proposed technique. The null hypothesis was that there is no difference in the mean number of faults exposed by the two techniques. Table 7 shows the resulting p-values computed for the hypothesis test along with the percentage confidence with which we may reject the null hypothesis. It is evident that the difference in the mean number of faults exposed by the SDA and the DDA is statistically significant.
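One way to carry out such a test for the difference of two means is a two-sample t-test, for example with SciPy; this is our illustration, while the paper itself cites [5] for the statistical background.

```python
from typing import Sequence
from scipy import stats  # assumed available for this sketch

def compare_fault_detection(faults_sda: Sequence[int],
                            faults_dda: Sequence[int]):
    """Two-sample t-test on per-suite fault counts; returns (statistic, p-value)."""
    return stats.ttest_ind(faults_sda, faults_dda, equal_var=False)
```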

Table 7: Computed p-value and the corresponding percentage of confidence for rejecting the null hypothesis for each program

Program Name   Computed p-Value   Percentage of confidence for rejecting the null hypothesis
tcas 1.32 >84.99%
totinfo 7.12 >99.99%
schedule 2.34 >97.8%
schedule2 0.89 >80.23%
printtokens 2.36 >99.99%
printtokens2 0.23 >70.82%
replace 3.11 >99.99%
Space 6.46 >98.5%

The observations made from the analysis are as follows. In all subject programs except printtokens2 and schedule2, there are far more suites with increased fault detection than with decreased fault detection when going from the SDA technique to the DDA technique. This shows that the DDA technique, while not always improving the fault detection of suites, has a much greater likelihood of increasing fault detection effectiveness than of decreasing it. A large number of suites remained unaltered in the reduction of size in programs tcas, totinfo, schedule, schedule2, and printtokens, for both SDA and DDA. For printtokens2 about half of the suites, and for replace about one third of the suites, remained unchanged or were reduced more by SDA than by DDA. Programs schedule, schedule2, printtokens, and printtokens2 have a relatively larger number of suites in which the fault detection effectiveness remained unchanged for the SDA and DDA techniques. This is most likely due to the fact that these four programs have the fewest faulty versions available, so there are fewer opportunities for detecting new distinct faults with these four programs.

5.5.3. Threats to Validity

Q1: Does DDA test suite minimization perform differently between different types of applications?

The subject programs utilized by us are of moderate size; larger programs might have different characteristics. They were chosen because they are well understood from previous research and because we had no access to other programs with human-generated tests and faulty versions. It is suspected that these programs differ from large programs less than the machine-generated tests and seeded faults differ from real ones.

Q2: How do the size and fault detection effectiveness of the DDA test suites compare to those of suites reduced on the basis of existing minimization techniques?

Specifically, it was intended to directly compare the DDA technique to different types of test suite minimization techniques. The measurement for the percentage of fault loss assumes a simple cost model that treats all faults as equally severe, but in practice fault severity covers a wide range of severity indices.

Q3: How does the fault detection effectiveness of the DDA reduced test suites compare to suites of the same size created using other approaches?

From the experimental investigation, it is concluded that reduced suites created using a given technique may have better fault detection effectiveness simply because the technique selects more test cases on average than other techniques. Here, we wished to investigate whether test suites created by the DDA preserved more fault-detecting ability than other reduced suites of the same size, including heuristic-based and greedy reduced suites augmented to the same size.

6. Conclusion and Future Work

A new approach to Regression Test Suite Minimization using a Dynamic Dependency Graph with improved FDE for the accurate performance evaluation of components has been presented in this work. RTS minimization using a Static Dependence Graph had the problem of ignoring necessary test cases and their interaction patterns during the iterative traversal of the system. By using dynamic dependencies, all possible test cases were considered together with their interaction patterns. Each test case was then analyzed, and test cases whose interaction patterns were the same as those of another test case were removed. Finally, a Reduced Test Suite was formed from the remaining test cases, and the various application objects were tested with our Reduced Test Suite. The performance of the new approach has also been experimentally evaluated; our results indicate that the proposed method has higher fault detection ability than the previous methods.

There are several interesting directions for future work. The first is to apply the approach to additional sets of components and evaluate different non-functional attributes to improve the validation of the approach. We also plan to enhance our prioritization methods to obtain the prioritized test suite in a cost-effective and efficient manner. Furthermore, our methodology can be extended by identifying the test suites in an automated manner and building a test suite consisting of efficient test cases. The test suites obtained will be less redundant than those produced by the previous methods.

Acknowledgments

We thank Dr. Gregg Rothermel, Dept. of Computer Science, University of Nebraska, for providing the Siemens suite of programs and other experimental objects.

References
[1] http://www.cse.unl.edu/~galileo/sir
[2] D. Jeffrey, N. Gupta, 2007. "Improving Fault Detection Capability by Selectively Retaining Test Cases during Test Suite Reduction", IEEE Transactions on Software Engineering 33, pp. 108-123.
[3] E. Gizdarski, H. Fujiwara, 2000. "Spirit: Satisfiability problem implementation for redundancy identification and test generation", Proceedings of the Ninth Asian Test Symposium, Taiwan, IEEE Computer Society, p. 171.
[4] G. Rothermel, M. J. Harrold, J. Ostrin, and C. Hong, 1998. "An empirical study of the effects of minimization on the fault detection capabilities of test suites", International Conference on Software Maintenance, pp. 34-43.
[5] James E. Gentle, 2010. A Companion for Mathematical Statistics, http://mason.gmu.edu/~jgentle/books/books_index.htm
[6] J. Black, E. Melachrinoudis, D. Kaeli, 2004. "Bi-criteria models for all-uses test suite reduction", Proceedings of the 26th International Conference on Software Engineering, IEEE Computer Society, Washington, DC, USA, pp. 106-115.
[7] Korel, B., Tahat, L.H., and Vaysburg, B., 2002. "Model-based regression test reduction using dependence analysis", Proc. of IEEE ICSM'02.
[8] M. Balcer, W. Hasling, and T. Ostrand, 1989. "Automatic generation of test scripts from formal test specifications", Proceedings of the 3rd Symposium on Software Testing, Analysis, and Verification, pp. 210-218.
[9] M. Hutchins, H. Foster, T. Goradia, and T. Ostrand, May 1994. "Experiments on the Effectiveness of Dataflow- and Controlflow-based Test Adequacy Criteria", 16th International Conference on Software Engineering, pp. 191-200.
[10] Mark Last, Menahem Friedman, Abraham Kandel, 2003. "The data mining approach to automated software testing", Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, August 24-27, Washington, D.C.
[11] National Institute of Standards & Technology, May 2002. "The Economic Impacts of Inadequate Infrastructure for Software Testing", Planning Report 02-3.
[12] N. Mansour, K. El-Fakih, 1999. "Simulated annealing and genetic algorithms for optimal regression testing", Journal of Software Maintenance 11 (1), pp. 19-34.
[13] Qurat-ul-ann Farooq, Muhammad Zohaib Z. Iqbal, Zafar I. Malik, Matthias Riebisch, 2010. "A Model-Based Regression Testing Approach for Evolving Software Systems with Flexible Tool Support", IEEE International Conference on the Engineering of Computer-Based Systems, pp. 41-49.
[14] S. Sampath, V. Mihaylov, A. Souter, and L. Pollock, 2005. "An Empirical Comparison of Test Suite Reduction Techniques for User-Session-Based Testing of Web Applications", Proceedings of the 21st IEEE International Conference on Software Maintenance.
[15] T. Ostrand and M. Balcer, June 1988. "The category-partition method for specifying and generating functional tests", Communications of the ACM, 31(6), pp. 676-686.
[16] T.Y. Chen, M. Lau, 1998. "A new heuristic for test suite reduction", Information and Software Technology 40 (5-6), pp. 347-354.
[17] Vaysburg, B., Tahat, L., Korel, B., 2002. "Dependence Analysis in Reduction of Requirement Based Test Suites", Proceedings of the IEEE International Symposium on Software Testing and Analysis (ISSTA).
[18] V. Rusu, L. du Bousquet, T. Jeron, 2000. "An approach to symbolic test generation", Proceedings of the Second International Conference on Integrated Formal Methods, Springer-Verlag, London, UK, pp. 338-357.
[19] Yanping Chen, Robert L. Probert, Hasan Ural, 2007. "Regression test suite reduction using extended dependence analysis", Fourth International Workshop on Software Quality Assurance, pp. 62-69.
[20] Yanping Chen, Robert L. Probert, Hasan Ural, 2009. "Regression test suite reduction based on SDL models of system requirements", Journal of Software Maintenance and Evolution: Research and Practice, John Wiley & Sons, Inc., Vol. 21, no. 6, pp. 379-405, Nov.

Appendix

Figure 3.A: EFSM model of a Global Banking System

(Diagram: states start, S1, S2, S3, S4, exit; transitions t1-t17 and t14a with events, guards, and actions such as Check(uname,pw,Ltype), att=0, disp login page, (Ltype=Admin)/display menu, (Ltype=accholder)/display menu, Retry()/att=att+1, att>3, Vpendin acc(), Vparticular acc(), Vbalance(), Vprev trans(), Transferfund(), Chuname(), Chpassword(), Continue/disp menu, logoff().)

Figure 3.B: EFSM model of the Banking System with added transition

(Diagram: the same model as Fig. 3.A with an added transition t18, balance/disp().)

Figure 3.C: EFSM model of the Banking System with a deleted transition

(Diagram: the banking system EFSM with a transition deleted; states, remaining transitions, and actions otherwise as in Fig. 3.B.)

Table 8: Original test suite for the Global banking system

admin module    admin module iteration    account holder module    account holder module iteration

t1 t3 t5 t14 t16 t1 t3 t5 t14 t5 t14 t16 t1 t4 t9 t14a t17 t1 t4 t9 t14a t9 t14a t17 t1 t3 t6 t14 t16 t1 t3 t5 t14 t6 t14 t16 t1 t4 t10 t14a t17 t1 t4 t9 t14a t10 t14a t17 t1 t3 t7 t14 t16 t1 t3 t5 t14 t7 t14 t16 t1 t4 t11 t14a t17 t1 t4 t9 t14a t11 t14a t17 t1 t3 t8 t14 t16 t1 t3 t5 t14 t8 t14 t16 t1 t4 t12 t14a t17 t1 t4 t9 t14a t12 t14a t17 t1 t2 t3 t5 t14 t16 t1 t3 t6 t14 t5 t14 t16 t1 t4 t13 t14a t17 t1 t4 t9 t14a t13 t14a t17 t1 t2 t3 t6 t14 t16 t1 t3 t6 t14 t6 t14 t16 t1 t2 t4 t9 t14a t17 t1 t4 t10 t14a t9 t14a t17 t1 t2 t3 t7 t14 t16 t1 t3 t6 t14 t7 t14 t16 t1 t2 t4 t10 t14a t17 t1 t4 t10 t14a t10 t14a t17 t1 t2 t3 t8 t14 t16 t1 t3 t6 t14 t8 t14 t16 t1 t2 t4 t11 t14a t17 t1 t4 t10 t14a t11 t14a t17 t1 t2 t2 t3 t5 t14 t16 t1 t3 t7 t14 t5 t14 t16 t1 t2 t4 t12 t14a t17 t1 t4 t10 t14a t12 t14a t17 t1 t2 t2 t3 t6 t14 t16 t1 t3 t7 t14 t6 t14 t16 t1 t2 t4 t13 t14a t17 t1 t4 t10 t14a t13 t14a t17 t1 t2 t2 t3 t7 t14 t16 t1 t3 t7 t14 t7 t14 t16 t1 t2 t2 t4 t9 t14a t17 t1 t4 t11 t14a t9 t14a t17 t1 t2 t2 t3 t8 t14 t16 t1 t3 t7 t14 t8 t14 t16 t1 t2 t2 t4 t10 t14a t17 t1 t4 t11 t14a t10 t14a t17 t1 t2 t2 t2 t3 t5 t14 t16 t1 t3 t8 t14 t5 t14 t16 t1 t2 t2 t4 t11 t14a t17 t1 t4 t11 t14a t11 t14a t17 t1 t2 t2 t2 t3 t6 t14 t16 t1 t3 t8 t14 t6 t14 t16 t1 t2 t2 t4 t12 t14a t17 t1 t4 t11 t14a t12 t14a t17 t1 t2 t2 t2 t3 t7 t14 t16 t1 t3 t8 t14 t7 t14 t16 t1 t2 t2 t4 t13 t14a t17 t1 t4 t11 t14a t13 t14a t17 t1 t2 t2 t2 t3 t8 t14 t16 t1 t3 t8 t14 t8 t14 t16 t1 t2 t2 t2 t4 t9 t14a t17 t1 t4 t12 t14a t9 t14a t17

exit without any operation t1 t2 t3 t5 t14 t5 t14 t16 t1 t2 t2 t2 t4 t10 t14a t17 t1 t4 t12 t14a t10 t14a t17

t1 t15 t1 t2 t3 t5 t14 t6 t14 t16 t1 t2 t2 t2 t4 t11 t14a t17 t1 t4 t12 t14a t11 t14a t17 t1 t3 t16 t1 t2 t3 t5 t14 t7 t14 t16 t1 t2 t2 t2 t4 t12 t14a t17 t1 t4 t12 t14a t12 t14a t17 t1 t4 t17 t1 t2 t3 t5 t14 t8 t14 t16 t1 t2 t2 t2 t4 t13 t14a t17 t1 t4 t12 t14a t13 t14a t17 t1 t2 t3 t16 t1 t2 t3 t6 t14 t5 t14 t16 t1 t4 t13 t14a t9 t14a t17 t1 t2 t4 t17 t1 t2 t3 t6 t14 t6 t14 t16 t1 t4 t13 t14a t10 t14a t17 t1 t2 t2 t3 t16 t1 t2 t3 t6 t14 t7 t14 t16 t1 t4 t13 t14a t11 t14a t17 t1 t2 t2 t4 t17 t1 t2 t3 t6 t14 t8 t14 t16 t1 t4 t13 t14a t12 t14a t17 t1 t2 t2 t2 t3 t16 t1 t2 t3 t7 t14 t5 t14 t16 t1 t4 t13 t14a t13 t14a t17 t1 t2 t2 t2 t4 t1 t1 t2 t3 t7 t14 t6 t14 t16 t1 t2 t4 t9 t14a t9 t14a t17 t1 t2 t3 t7 t14 t7 t14 t16 t1 t2 t4 t9 t14a t10 t14a t17 t1 t2 t3 t7 t14 t8 t14 t16 t1 t2 t4 t9 t14a t11 t14a t17 t1 t2 t3 t8 t14 t5 t14 t16 t1 t2 t4 t9 t14a t12 t14a t17 t1 t2 t3 t8 t14 t6 t14 t16 t1 t2 t4 t9 t14a t13 t14a t17 t1 t2 t3 t8 t14 t7 t14 t16 t1 t2 t4 t10 t14a t9 t14a t17 t1 t2 t3 t8 t14 t8 t14 t16 t1 t2 t4 t10 t14a t10 t14a t17 t1 t2 t2 t3 t5 t14 t5 t14 t16 t1 t2 t4 t10 t14a t11 t14a t17 t1 t2 t2 t3 t5 t14 t6 t14 t16 t1 t2 t4 t10 t14a t12 t14a t17 t1 t2 t2 t3 t5 t14 t7 t14 t16 t1 t2 t4 t10 t14a t13 t14a t17 t1 t2 t2 t3 t5 t14 t8 t14 t16 t1 t2 t4 t11 t14a t9 t14a t17

Table 8: Original test suite for the Global banking system (Continued)

t1 t2 t2 t3 t6 t14 t5 t14 t16 t1 t2 t4 t11 t14a t10 t14a t17 t1 t2 t2 t3 t6 t14 t6 t14 t16 t1 t2 t4 t11 t14a t11 t14a t17 t1 t2 t2 t3 t6 t14 t7 t14 t16 t1 t2 t4 t11 t14a t12 t14a t17 t1 t2 t2 t3 t6 t14 t8 t14 t16 t1 t2 t4 t11 t14a t13 t14a t17 t1 t2 t2 t3 t7 t14 t5 t14 t16 t1 t2 t4 t12 t14a t9 t14a t17 t1 t2 t2 t3 t7 t14 t6 t14 t16 t1 t2 t4 t12 t14a t10 t14a t17 t1 t2 t2 t3 t7 t14 t7 t14 t16 t1 t2 t4 t12 t14a t11 t14a t17 t1 t2 t2 t3 t7 t14 t8 t14 t16 t1 t2 t4 t12 t14a t12 t14a t17 t1 t2 t2 t3 t8 t14 t5 t14 t16 t1 t2 t4 t12 t14a t13 t14a t17 t1 t2 t2 t3 t8 t14 t6 t14 t16 t1 t2 t4 t13 t14a t9 t14a t17 t1 t2 t2 t3 t8 t14 t7 t14 t16 t1 t2 t4 t13 t14a t10 t14a t17 t1 t2 t2 t3 t8 t14 t8 t14 t16 t1 t2 t4 t13 t14a t11 t14a t17 t1 t2 t2 t2 t3 t5 t14 t5 t14 t16 t1 t2 t4 t13 t14a t12 t14a t17 t1 t2 t2 t2 t3 t5 t14 t6 t14 t16 t1 t2 t4 t13 t14a t13 t14a t17 t1 t2 t2 t2 t3 t5 t14 t7 t14 t16 t1 t2 t2 t4 t9 t14a t9 t14a t17 t1 t2 t2 t2 t3 t5 t14 t8 t14 t16 t1 t2 t2 t4 t9 t14a t10 t14a t17 t1 t2 t2 t2 t3 t6 t14 t5 t14 t16 t1 t2 t2 t4 t9 t14a t11 t14a t17 t1 t2 t2 t2 t3 t6 t14 t6 t14 t16 t1 t2 t2 t4 t9 t14a t12 t14a t17 t1 t2 t2 t2 t3 t6 t14 t7 t14 t16 t1 t2 t2 t4 t9 t14a t13 t14a t17 t1 t2 t2 t2 t3 t6 t14 t8 t14 t16 t1 t2 t2 t4 t10 t14a t9 t14a t17 t1 t2 t2 t2 t3 t7 t14 t5 t14 t16 t1 t2 t2 t4 t10 t14a t10 t14a t17 t1 t2 t2 t2 t3 t7 t14 t6 t14 t16 t1 t2 t2 t4 t10 t14a t11 t14a t17 t1 t2 t2 t2 t3 t7 t14 t7 t14 t16 t1 t2 t2 t4 t10 t14a t12 t14a t17 t1 t2 t2 t2 t3 t7 t14 t8 t14 t16 t1 t2 t2 t4 t10 t14a t13 t14a t17 t1 t2 t2 t2 t3 t8 t14 t5 t14 t16 t1 t2 t2 t4 t11 t14a t9 t14a t17 t1 t2 t2 t2 t3 t8 t14 t6 t14 t16 t1 t2 t2 t4 t11 t14a t10 t14a t17 t1 t2 t2 t2 t3 t8 t14 t7 t14 t16 t1 t2 t2 t4 t11 t14a t11 t14a t17 t1 t2 t2 t2 t3 t8 t14 t8 t14 t16 t1 t2 t2 t4 t11 t14a t12 t14a t17 t1 t2 t2 t4 t11 t14a t13 t14a t17 t1 t2 t2 t4 t12 t14a t9 t14a t17 t1 t2 t2 t4 t12 t14a t10 t14a t17 t1 t2 t2 t4 t12 t14a t11 t14a t17 t1 t2 t2 t4 t12 t14a t12 t14a t17 t1 t2 t2 t4 t12 t14a t13 t14a t17 t1 t2 t2 t4 t13 t14a t9 t14a t17 t1 t2 t2 t4 t13 t14a t10 t14a t17 t1 t2 t2 t4 t13 t14a t11 t14a t17 t1 t2 t2 t4 t13 t14a t12 t14a t17 t1 t2 t2 t4 t13 t14a t13 t14a t17 t1 t2 t2 t4 t9 t14a t9 t14a t17 t1 t2 t2 t2 t4 t9 t14a t10 t14a t17 t1 t2 t2 t2 t4 t9 t14a t11 t14a t17 t1 t2 t2 t2 t4 t9 t14a t12 t14a t17 t1 t2 t2 t2 t4 t9 t14a t13 t14a t17 t1 t2 t2 t2 t4 t10 t14a t9 t14a t17 t1 t2 t2 t2 t4 t10 t14a t10 t14a t17 t1 t2 t2 t2 t4 t10 t14a t11 t14a t17 t1 t2 t2 t2 t4 t10 t14a t12 t14a t17 t1 t2 t2 t2 t4 t10 t14a t13 t14a t17 t1 t2 t2 t2 t4 t11 t14a t9 t14a t17 t1 t2 t2 t2 t4 t11 t14a t10 t14a t17 t1 t2 t2 t2 t4 t11 t14a t11 t14a t17 t1 t2 t2 t2 t4 t11 t14a t12 t14a t17 t1 t2 t2 t2 t4 t11 t14a t13 t14a t17 t1 t2 t2 t2 t4 t12 t14a t9 t14a t17 t1 t2 t2 t2 t4 t12 t14a t10 t14a t17 t1 t2 t2 t2 t4 t12 t14a t11 t14a t17 t1 t2 t2 t2 t4 t12 t14a t12 t14a t17 t1 t2 t2 t2 t4 t12 t14a t13 t14a t17 t1 t2 t2 t2 t4 t13 t14a t9 t14a t17 t1 t2 t2 t2 t4 t13 t14a t10 t14a t17 t1 t2 t2 t2 t4 t13 t14a t11 t14a t17 t1 t2 t2 t2 t4 t13 t14a t12 t14a t17 t1 t2 t2 t2 t4 t13 t14a t13 t14a t17