Chapter Complete1


  • 7/30/2019 Chapter Complete1


    Chapter-1

    INTRODUCTION

Defects can exist in software because it is developed by human beings, who can make mistakes during development. However, it is the primary duty of a software vendor to ensure that the delivered software does not have defects and that the customer's day-to-day operations are not affected. This can be achieved by rigorously testing the software.

    Testing the product means adding value to it, which means raising the quality or reliability of

    the program. Raising the reliability of the product means finding and removing errors. Hence

    one should not test a product to show that it works; rather, one should start with the

    assumption that the program contains errors and then test the program to find as many errors

    as possible. Thus a more appropriate definition is: Testing is the process of executing a

    program with the intent of finding errors [1].

Different testing approaches exist, such as black box testing, white box testing and grey box testing, and various techniques are followed for static and dynamic testing. System testing is also one of the major types. Under it, load testing [2], stress testing and volume testing are used for checking performance, recovery testing for checking recovery points, configuration testing for checking various configurations of the system, regression testing [3, 4, 5] for revalidating new versions of existing software, and many more.

Regression testing [6], as the name suggests, is used to check the effect of changes made in the code. Ideally, regression testing executes the existing test cases as well as the additional test cases introduced by the changes. However, it is impractical to execute all the test cases because of the time and cost constraints of the project. Most of the time the testing team is asked to check last-minute changes in the code just before making a release to the client; in this situation the testing team needs to check only the affected areas. In short, for regression testing the testing team should get input from the development team about the nature and amount of change in the fix, so that the testing team can first check the fix and then the side effects of the fix.

It is impractical and inefficient to re-execute every test for every program function once a change has occurred. Regression tests should focus on critical module functions. To make


regression testing easier, software engineers typically reuse the test suites of the original program, but new test cases may also be required to test new functionalities of the new version. The focus here is on the reuse of test cases, as most ideas about costs and benefits come from test suite granularity. Four methodologies [6] that reuse the test suites of the original version of the software are considered here: retest-all, regression test selection (RTS) [7, 8, 9, 10], test suite reduction (TSR) [11, 12] and test case prioritization (TCP).

Retest-all reruns every test case of the test suite. It is not a feasible approach, as the time period to complete the work is fixed. RTS, on the other hand, selects some of the test cases from the test suite on a temporary basis, whereas TSR permanently removes test cases from the test suite depending upon the modifications to the existing project. Selection and removal of test cases from the test suite can be problematic in some situations. Hence a further methodology exists, namely test case prioritization.

    Test case prioritization techniques arrange test cases so that those test cases that are better at

    achieving the testing objectives are run earlier in the regression testing cycle [13]. For

    instance, software engineers might want to schedule test cases so that code coverage is

    achieved as quickly as possible or increase the possibility of fault detection [14, 15] early on

    in the testing. The improved rates of fault detection can provide early feedback on the

    software being tested. This reduces regression testing costs by allowing the software

    engineers to tackle the discovered faults and begin debugging earlier in testing than what

    might otherwise be possible.

Software testing [16] is a strenuous and expensive process. Research has shown that at least 50% of the total software cost is comprised of testing activities. Companies are often faced with a lack of time and resources, which limits their ability to complete testing efforts effectively. Prioritizing the order of execution of the test cases in a test suite can be beneficial.

Test case prioritization (TCP) [17] involves explicit prior planning of the execution order of test cases to increase the effectiveness of software testing [18] activities by improving the rate of fault detection earlier in the software process. To date, TCP has primarily been applied to improve regression testing [19] efforts on white box, code-level test cases. Currently, regression TCP techniques use structural coverage criteria to select test cases. Structural coverage techniques, such as statement or branch coverage, are applicable at the code level.


1.1 MOTIVATION

1.1.1 Design phase of SDLC is ignored while prioritizing test cases.

Test case prioritization methods concerning statement-level coverage, branch-level coverage and function-level coverage consider an individual class, and test cases are prioritized for that particular class accordingly. Architectural design and detailed design are important for getting the outline of a project having different classes. Using coupling it is possible to identify relations among classes, and ignoring the relationships among classes can generate new problems.

1.1.2 Interdependence of classes on the basis of coupling after a change in one class was not considered earlier.

Suppose there is a change in one class during the maintenance phase or integration testing [20]. That changed class would be linked to some other classes as either a caller or a callee. It is not clear which class will be most affected by that change; it is also possible that not all the classes would be affected. So, in this case, it is better first to check the control flow among the classes, and then to prioritize the most affected class and its test cases.

1.1.3 Every test case detects some faults; those faults are either new or detected earlier, and if new, how critical are they?

Consider all the test cases of a class and calculate the number of faults detected per unit time, then select the first test case. Next, calculate the new faults per unit time of each remaining test case and select the best one, where a new fault is one not discovered by the already selected test cases. Keep repeating this process until a hundred percent of the faults are detected.

    1.2 OBJECTIVES

    In the light of above discussion, the objectives of the thesis are:

1. To utilize the limited resources (viz. cost, time, test tools [21, 22], manpower, etc.) in an efficient manner.

2. To perform two-level regression test case prioritization.

The 1st level is class prioritization: the most badly affected class is prioritized on the basis of the non-zero entries of the first-order dependence matrix.


The 2nd level is test case prioritization: the test cases corresponding to the badly affected class are prioritized on the basis of fault coverage per unit time. Keeping in view the above objectives, an algorithm has been designed for prioritizing the test cases of the affected module to which the effect of a module change has propagated.

    1.3 THESIS ORGANIZATION

    The thesis has been organized in the following chapters:

    Chapter 2: Literature Review

    This chapter explores details related to regression testing, test case prioritization, ripple

    effect, coupling, cohesion and bug classification.

    Chapter 3: Proposed Work

    In this chapter an idea of work flow has been mentioned. Coupling and cohesion concepts are

    used to find the relation among classes. The fault coverage per unit time is explained for test

    case prioritization.

    Chapter 4: Experimental Evaluation and Analysis of Proposed Work

    Experimental evaluation of the proposed work is presented in this chapter by considering a

case study having 3 classes. Results and analysis of the proposed work are presented using the average percentage of fault detection (APFD) metric.

    Chapter 5: Conclusion and Future Work

    The conclusion and future work of the proposed work has been mentioned here.


    CHAPTER-2

    LITERATURE REVIEW

This chapter reviews topics related to the work to be carried out. Test case prioritization is one of the approaches to regression testing. Test cases are ordered so that maintenance cost can be reduced and resources can be utilized properly. Statement-level and branch-level prioritization are concerned with a single class only. To consider dependence between classes, the concept of ripple effect together with coupling and cohesion is used. For prioritization of test cases, fault coverage per unit time is considered.

    2.1 REGRESSION TESTING

Regression testing is the execution of an ensemble of test cases on a program in order to ensure that its revision does not produce unintended faults and does not "regress", that is, become less effective than it has been in the past. Given a program P, its modified version P', and a test set T that was previously used to test P, find a way to utilize T to gain sufficient confidence in the correctness of P' [23].

    2.1.1 Regression Testing Process

1. Identify and modify/remove the obsolete test cases from T if specifications have changed (the test case revalidation problem).

2. Select T', a subset of T, to execute on P' (the regression test selection problem).

3. Test P' with T', establishing the correctness of P' with respect to T' (the test suite execution problem).

4. If necessary, create T'', a set of new functional or structural test cases for P' (the coverage identification problem).

5. Test P' with T'', establishing the correctness of P' with respect to T''


(the test suite execution problem).

6. Create T''', a new test suite and test execution profile for P', from T, T', and T'' (the test suite maintenance problem).

    2.1.2 Methodologies

Regression testing is the process of validating modified software to detect whether new errors [24] have been introduced into previously tested code and to provide confidence that modifications are correct. As developers maintain a software system, they periodically regression test it, hoping to find errors caused by their changes and gain confidence that their modifications are correct. To support this process, developers often create an initial test suite and then reuse it for regression testing. The simplest regression testing strategy, retest-all, reruns every test case in the initial test suite. This approach, however, can be prohibitively expensive: rerunning all test cases in the test suite may require an unacceptable amount of time.

1) Regression test case selection techniques - Screen
2) Test case prioritization techniques - Order
3) Test suite reduction techniques - Remove (reduce testing costs [25] by permanently eliminating from the test suite those test cases that are redundant in terms of the code or functionalities exercised)

2.1.2.1 Regression Test Selection (RTS)

Select T', a subset of T, and use this test suite for regression testing.

Regression test case selection techniques [26, 27]:-

(A) Minimisation Techniques. Minimization-based regression test selection techniques attempt to select minimal sets of test cases from T that yield coverage of modified or affected portions of P.

(B) Dataflow Techniques. Dataflow-coverage-based regression test selection techniques select test cases that exercise data interactions that have been affected by modifications.

(C) Safe Techniques. Most regression test selection techniques, minimization and dataflow techniques among them, are not designed to be safe. Techniques that are not safe can fail to select a test case that would have revealed a fault in the modified program. In contrast, when


an explicit set of safety conditions can be satisfied, safe regression test selection techniques guarantee that the selected subset contains all test cases in the original test suite T that can reveal faults.

(D) Ad Hoc/Random Techniques. When time constraints prohibit the use of a retest-all approach, but no test selection tool is available, developers often select test cases based on hunches or loose associations of test cases with functionality. Another simple approach is to randomly select a predetermined number of test cases from T.

(E) Retest-All Technique. The retest-all technique simply reuses all existing test cases. To test the modified program, the technique effectively selects all test cases in T.

    Pros:

1. Test fixed bugs immediately.
2. Can check side effects of the fix (depending on how broad the test is).
3. Reduce the time of rerunning the tests.

    Cons:

1. Requires development time.

    2.1.2.2 Test Suite Reduction (TSR)

This technique uses information about the program and test suite to remove test cases that have become redundant over time, as new functionality is added. It differs from regression test selection in that the latter does not permanently remove test cases but selects those that are required. The advantage of this technique is that it reduces the cost of validating, executing and managing test suites over future releases of the software; the downside is that it might reduce fault detection capability along with the test suite size [28].

    2.1.2.3 Test Case Prioritization (TCP)

    Test case prioritization techniques arrange test cases so that those test cases that are better at

    achieving the testing objectives are run earlier in the regression testing cycle. For instance,

    software engineers might want to schedule test cases so that code coverage is achieved as

    quickly as possible or increase the possibility of fault detection early on in the testing. Studies

    have shown that some simple test case prioritization methods can remarkably improve testing

    performance [29], especially the rates at which test suites detect faults. The improved rates of

    fault detection can provide early feedback on the software being tested. This reduces

    regression testing costs by allowing the software engineers to tackle the discovered faults and

    begin debugging earlier in testing than what might otherwise be possible.


    Test Case Prioritization Types:-

2.1.2.3.1 General and Version-Specific Test Case Prioritization

The general test case prioritization technique [30] aims to find an order of test cases that will also be useful for the following software versions. The version-specific test case prioritization technique tries to find the most useful order for a specific version of the software.

2.1.2.3.2 Statement-Level and Branch-Covering Test Case Prioritization

Elbaum et al. and Rothermel et al. [6] introduce altogether 20 different test case prioritization techniques, of which six are briefly covered here as an illustration. Both Elbaum et al. and Rothermel et al. introduce statement-level techniques of fine granularity: T1 - T4. Rothermel et al. also introduce two branch-covering techniques: M5 and M6.

Code   Mnemonic       Description
T1     st-total       prioritize on coverage of statements
T2     st-addtl       prioritize on coverage of statements not yet covered
T3     st-fep-total   prioritize on probability of exposing faults
T4     st-fep-addtl   prioritize on probability of exposing faults, adjusted to consider previous test cases
M5     branch-total   prioritize in order of coverage of branches
M6     branch-addtl   prioritize in order of coverage of branches not yet covered

T1: Total statement coverage prioritization. It is possible to measure the coverage of statements in a program by its test cases. The test cases can then be prioritized in terms of the total number of statements they cover, by sorting them in descending order of coverage achieved.

T2: Additional statement coverage prioritization. This is like T1, but it relies on feedback about the coverage achieved so far in testing, focusing on statements not yet covered. As an illustration, in Program 2.1 both T1 and T2 first choose test case 3, as it covers the most statements of procedure P. T1 then selects test case 2, which covers more statements than test case 1; T2 also selects test case 2, since it covers statement 4, which is not yet covered. Finally T1 selects test case 1, whereas T2 finishes, because every statement covered by test case 1 has already been covered. (Refer to Table 2.1.)


Program 2.1

Begin
1     s1
2     while (c1) do
3         if (c2) then
4             exit
          else
5             s3
          endif
6         s4
      endwhile
7     s5
8     s6
9     s7
End

TABLE 2.1 Statement Coverage

Statement No    Test Case 1    Test Case 2    Test Case 3
1               *              *              *
2               *              *              *
3                              *              *
4                              *
5                                             *
6                                             *
7               *              *              *
8               *              *              *
9               *              *              *

M5: Total branch coverage prioritization. M5 works in the same way as T1 except that, instead of measuring statement coverage, M5 measures branch coverage. Branch coverage is obtained when every possible overall outcome of a condition is covered, e.g. if or while statements must be tested with both true and false values.

M6: Additional branch coverage prioritization. M6 works with feedback in the same way as T2, but, like M5, measures branch coverage instead of statement coverage. To illustrate M5


and M6 using Program 2.1: M5 first chooses test case 3, then test case 2, then test case 1, in descending order of the number of branches they cover. M6 chooses the same order as M5, because full branch coverage is achieved only by traversing all possible paths.

TABLE 2.2 Branch Coverage

Branch       Test Case 1    Test Case 2    Test Case 3
Entry        *              *              *
2-True                      *              *
2-False      *                             *
3-True                      *
3-False                                    *

    2.2 INCREMENTAL REGRESSION TESTING

    The purpose of regression testing is to ensure that bug fixes and new functionality introduced

    in a new version of software do not adversely affect the correct functionality inherited from

the previous version [50]. The different techniques under incremental regression testing [30] are based upon some assumptions, given below:

    1. If a statement is not executed under a test case, it cannot affect the program output

    for that test case.

    2. Not all statements in the program are executed under all test cases.

    3. Even if a statement is executed under a test case, it does not necessarily affect the

    program output for that test case.

    4. Every statement does not necessarily affect every part of the program output.

    Conditions:

    1. No existing edges are deleted and no new ones are introduced.

    2. No changes are made to the left-hand-sides of assignment statements in the

    program.


2.3 RIPPLE EFFECT:-

The ripple effect [32] of a change to the source code of a software system is defined as the propagation of its impacts on the rest of the system. Haney first mentioned the ripple effect in software engineering in 1972 [33]. He used a probabilistic connection matrix, which subjectively models the dependencies between modules, to determine how a change in one module necessitates changes in the others. Yau and Collofello introduced a stability measure [34]; they define the stability of a system as its resistance to the potential ripple effect the system would have when it is modified. Black reformulated Yau and Collofello's algorithm to improve the computation process [35]. Li and Offutt defined potential changes in object-oriented software, then presented five algorithms to calculate the impact of these changes and their ripple effect.

Fig. 2.1 Ripple Effect

The concept of the ripple effect is described in Fig 2.1. Initially, Class_A is changed, and the impacted classes are calculated with the change impact model; Class_B is one of the impacted classes. The ripple effect is computed in two steps. In the first step, identify the nature of the change or changes in the directly impacted class (Class_B in Fig 2.1). Once this is achieved, the original change impact model is applied to Class_B, based on the change or changes identified in the first step. In Class_B, change identification is done by reasoning on the type of the link between Class_A and Class_B.



    2.4 COUPLING

Coupling refers to the degree [36] of interdependence among the components of a software system. Good software design should obey the principle of low coupling [51, 52]. Strong coupling makes a system more complex; highly inter-related modules are harder to understand, change or correct. By minimizing coupling, one can avoid propagating errors across modules.

    2.4.1 Coupling Type:-

It refers to the type of interactions between classes (or their elements): it may be a Class-Attribute interaction [36], a Class-Method interaction, or a Method-Method interaction. In the following, when we discuss attributes and methods of a class C, we mean only newly defined or overriding methods and attributes of C (not ones inherited from C's ancestors in the inheritance hierarchy).

Program 2.2

class B {
public:
    A* public_ab1;
    A public_ab2;
private:
    int i;
    float r;
    void mb1(A &);
    A mb2(void (A::*)());   // takes a pointer to a member function of A
};

void B::mb1(A &a1)
{
    A a2;
    ----
    a2 = mb2(&A::ma1);
}

class A {
public:
    int aa;


    void ma1();
};

    2.4.1.1 Class-Attribute (CA) interaction:-

From Program 2.2, notice the Class-Attribute interaction between class A and class B through the attributes public_ab1 and public_ab2. Clearly, if class A is changed, class B is impacted via the two public attributes of class B that depend on class A (more precisely, on its data type, identified by the class name). By definition, there is no Class-Attribute interaction between class B and class A directly, but only between elements of class B and class A.

    2.4.1.2 Class-Method (CM) interaction:-

Consider the declarations of class A and class B presented in Program 2.2; there is a Class-Method interaction between class A and mb1, a method of class B.

    2.4.1.3 Method-Method (MM) interaction:-

Consider the declarations of class A and class B in Program 2.2. There is a Method-Method interaction between class A and class B through the methods mb2 and ma1, since ma1 is used as a parameter by the method mb2. This kind of interaction occurs frequently in the design of graphical user interfaces [37], e.g., call-back procedures; such low-level design interactions also occur frequently in the design patterns literature (e.g., Bridge, Adapter, Observer, etc.). Indeed, the OMT-based notation ([38], [39], [40]) used for design patterns indicates these Method-Method interactions.

    2.4.2 Designing Loosely Coupled Modules

    There are different types of interfaces that can be used to communicate between modules.

    Coupling between modules can be achieved by passing parameters, using global data areas,

    or by referencing the internal contents of another module. Coupling is a measure of the

    quality of a design. The objective is to minimize coupling between modules, i.e., make

    modules as independent as possible. In structured design, loosely coupled modules are

desirable. Loose coupling between modules is an indication that the system is well partitioned.

    Loose coupling is achieved by:

1. eliminating unnecessary relationships,
2. minimizing the number of necessary relationships,
3. reducing the complexity of necessary relationships.


    Advantages of Loose coupling

1. The fewer the connections between classes, the less the chance of a defect in one causing a defect in another.

2. The risk of having to change other classes as a result of changing one module is reduced.

3. The need to know about the internals of other classes while maintaining a class is reduced.

    2.4.3 Levels of Traditional Coupling

    GOOD COUPLING

1. Data coupling (BEST)
2. Stamp coupling
3. Control coupling
4. Type use coupling

    BAD COUPLING

1. Common coupling
2. Content coupling (WORST)
3. Routine call coupling
4. Inclusion or import coupling
5. External coupling

    2.4.3.1 Data Coupling

Data coupling occurs between two modules when data are passed by parameters using a simple argument list and every item in the list is used [41]. Modules communicate by parameters; each parameter is an elementary piece of data, and each parameter is necessary for the communication to occur. As the bandwidth of communication between classes and components grows, the complexity of the interface increases.

Data coupling is when modules share data through, for example, parameters. Each datum is an elementary piece, and these are the only data which are shared (e.g. passing two integers to a function which computes their sum and difference).

In Program 2.3, class1 inherits from class2, so a member function of class2 can return a value to a member function of class1, and a member function of class1 can call a member function of class2 by passing a simple list of arguments.


Program 2.3

void class1::func()
{
    int n1, n2, result, result1;
    cin >> n1 >> n2;
    result = addnum(n1, n2);
    result1 = diffnum(n1, n2);
    cout << result << " " << result1;
}

  • 7/30/2019 Chapter Complete1

    16/96

    16

    2.4.3.2 Stamp Coupling

    Stamp coupling occurs when CLASS B is declared as a type for an argument of an operation

    of CLASS A. because CLASS B is now a part of definition of CLASS A, modifying the

    system becomes more complex [41].

    Stamp coupling is when modules share a composite data structure and use only a part of it,

    possibly a different part (e.g. passing a whole record to a function which only needs one field

    of it). This may lead to changing the way a module reads a record because a field, which the

    module doesn't need, has been modified[42].

Program 2.4

class a
{
public:
    struct student
    {
        int roll_number;
        int fee;
        char name[10];
    };
    student s1;
    void getinfo();
};

void a::getinfo()
{
    cout << s1.roll_number << s1.fee << s1.name;
}

class b : public a
{
public:
    void modify(student &s);   // whole record passed, only fee is used
    void final_info();
};


void b::modify(student &s)
{
    getinfo();
    s.fee = 6000;
}

void b::final_info()
{
    modify(s1);
    cout << s1.roll_number << " " << s1.fee << " " << s1.name;
}


Program 2.5

class a
{
public:
    int m1, m2;
    int grade;   // computed grade, later used as a control flag
    void calculate_grade();
};

void a::calculate_grade()
{
    cin >> m1 >> m2;
    grade = ((m1 + m2) / 2) / 20;
}

class b : public a
{
public:
    void print_grade()
    {
        calculate_grade();
        switch (grade)
        {
        case 1: cout


    void main()

    {

    b obj;

    obj.print_grade();

    getch();

    }

    Strength of Control Coupling:-

    1. It is a moderate form of coupling.

    Weaknesses of Control Coupling:-

1. Control flags passed down the hierarchy require the calling program to know about the internals of the called program (i.e., the called program is not a black box).

2. Control flags passed up the hierarchy cause a subordinate module to influence the flow of control in the parent module (i.e., the subordinate module has an effect on a module it does not control).

    2.4.3.4 Common Coupling or Global Coupling

    Common coupling occurs when a number of components all make use of a global variable

    [41]. Although this is sometimes necessary (e.g., for establishing default values that are

    applicable throughout an application), common coupling can lead to uncontrolled error

    propagation and unforeseen side effects when changes are made.

    Common coupling is when two modules share the same global data (e.g. a global variable).

    Changing the shared resource implies changing all the classes using it.

Program 2.6

#include <iostream.h>
#include <conio.h>

void factorial();

/* Global variable declarations */
int i, fact = 1, sum = 0, n = 5;

class a
{
public:


    void factorial()
    {
        for (i = n; i >= 1; i--)
        {
            fact = fact * i;   // every module touching fact shares global state
        }
        cout << fact;


2. Modules referencing global data are tied to specific data names, unlike modules that reference data through parameters. This greatly limits the modules' re-usability.

3. It may take considerable time for a maintenance programmer to determine which module updated the global data. The more global data used, the more difficult it becomes to determine how modules obtain their data.

4. It is more difficult to track the modules that access global data when a piece of global data is changed (e.g., a field size is changed).

    2.4.3.5 Content Coupling or Pathological Coupling

Content coupling occurs when one component surreptitiously modifies data that is internal to another component; this violates information hiding, a basic design concept [41]. Content coupling is when one class modifies or relies on the internal workings of another class (e.g. accessing local data of another class).

Program 2.7

class b
{
public:
    int a, b2;
    int diffnum()
    {
        int diff;
        cin >> a >> b2;
        diff = a - b2;
        return diff;
    }
};

class c : public b
{
public:
    void addnum()
    {
        int c;


        c = a + 5;   // directly uses b's internal data member a
        int sum, result1;
        sum = a + c;
        result1 = diffnum();
        cout << sum << " " << result1;
    }
};


    2.4.3.7 Type Definition Use Coupling:-

Type use coupling occurs when component A uses a data type defined in component B (e.g., this occurs whenever a class declares an instance variable or a local variable as having another class for its type) [41]. If the type definition changes, every component that uses the definition must also change.

    Program 2.9

class time
{
public:
    int hours;
    int minutes;
    void gettime(int h, int m)
    {
        // (the source also derives time from a class study and range-checks h
        // here; that code is lost to the page break)
        hours = h;
        minutes = m;
    }
};

// usetime declares parameters of type time: type definition use coupling
class usetime : public time
{
public:
    void sum(time t1, time t2)
    {
        minutes = t1.minutes + t2.minutes;
        hours = minutes / 60;
        minutes = minutes % 60;
        hours = hours + t1.hours + t2.hours;
    }
};

void main()
{
    time t1, t2;
    usetime t3;
    t1.gettime(2, 45);  // get t1
    t2.gettime(3, 30);  // get t2
    t3.sum(t1, t2);     // t3 = t1 + t2
    cout << t3.hours << ":" << t3.minutes;
}


    2.4.3.9 External Coupling:-

    External coupling occurs when a component communicates or collaborates with

    infrastructure components (e.g., operating system functions, database capability, and

    telecommunication functions). Although this type of coupling is necessary, it should be

    limited to a small number of components or classes within a system. Software must

    communicate internally and externally [41].

For example, graphics in the C language are not supported on Windows 7, because Windows 7 does not provide the DOS environment that they depend on.

    2.5 COHESION

    Cohesion, as the name suggests, is the adherence of the code statements within a class [43]. It

    is a measure of how tightly the statements are related to each other in a class. Structures that

    tend to group highly related elements from the point of view of the problem tend to be more

modular. Cohesion is a measure of the amount of such grouping. Cohesion is the degree to which a class serves a single purpose, and it is a natural extension of the information-hiding concept. A cohesive class performs a single task within a software procedure, requiring little interaction with procedures performed in other parts of a program. Cohesion should always be high. Cohesive classes should interact with and manage the functions of a limited number of lower-level classes. The various types of cohesion are described below:

    2.5.1 Functional Cohesion

    A functionally cohesive class contains elements that all contribute to the execution of one and

    only one problem-related task. Examples of functionally cohesive classes are

1. COMPUTE COSINE OF ANGLE
2. CALCULATE NET EMPLOYEE SALARY

    Notice that each of these classes has a strong, single-minded purpose. When its boss calls it,

    it carries out just one job to completion without getting involved in any extracurricular

    activity.

    2.5.2 Sequential Cohesion

    A sequentially cohesive class is one whose elements are involved in activities such that

    output data from one activity serves as input data to the next. To illustrate this definition,


imagine that you're about to repaint your bright orange Corvette a classy silver color. The sequence of steps might be something like this:

1. CLEAN CAR BODY
2. FILL IN HOLES IN CAR
3. SAND CAR BODY
4. APPLY PRIMER

    2.5.3 Communicational Cohesion

    A communicational cohesive class is one whose elements contribute to activities that use the

    same input or output data.

1. FIND TITLE OF BOOK
2. FIND PRICE OF BOOK
3. FIND PUBLISHER OF BOOK
4. FIND AUTHOR OF BOOK

    These four activities are related because they all work on the same input data, the book,

    which makes the class communicationally cohesive.

    2.5.4 Procedural Cohesion

    A procedurally cohesive class is one whose elements are involved in different and possibly

    unrelated activities in which control flows from each activity to the next. (Remember that in a

    sequentially cohesive class data, not control, flows from one activity to the next.) Here is a

    list of steps in an imaginary procedurally cohesive class.

1. CLEAN UTENSILS FROM PREVIOUS MEAL
2. PREPARE TURKEY FOR ROASTING
3. MAKE PHONE CALL
4. TAKE SHOWER
5. CHOP VEGETABLES
6. SET TABLE

    2.5.5 Temporal Cohesion

    A temporally cohesive class is one whose elements are involved in activities that are related

    in time. Picture this late-evening scene:

1. PUT OUT MILK BOTTLES
2. PUT OUT CAT


3. TURN OFF TV
4. BRUSH TEETH

These activities are unrelated to one another except that they're carried out at a particular time. They're all part of an end-of-day routine. A temporally cohesive class also has some of the same difficulties as a procedurally cohesive one. The programmer is tempted to share code among activities related only by time, and the class is difficult to reuse, either in this system or in others.

Elements are grouped into a class because they are all processed within the same limited time period.

    Common example:

    "Initialization" modules that provide default values for objects

    "End of Job" modules that clean up

    procedure initializeData()

    {

    font = "times";

    windowSize = "200,400";

    foo.name = "Not Set";

    foo.size = 12;

    foo.location = "/usr/local/lib/java";

    }

    Cure: Each object should have a constructor and destructor

class foo
{
    public foo()
    {
        name = "Not Set";
        size = 12;
        location = "/usr/local/lib/java";
    }
}


    2.5.6 Logical Cohesion

    A logically cohesive class is one whose elements contribute to activities of the same

    general category in which the activity or activities to be executed are selected from outside

    the class. Keeping this definition in mind, consider the following example. Someone

    contemplating a journey might compile the following list:

1. GO BY CAR
2. GO BY TRAIN
3. GO BY BOAT
4. GO BY PLANE

What relates these activities? They're all means of transport, of course. But a crucial point is that for any journey, a person must choose a specific subset of these modes of transport. It's unlikely anyone would use them all on any particular journey.

    A logically cohesive class contains a number of activities of the same general kind. Thus, a

    logically cohesive class is a grab bag of activities. The activities, although different, are

    forced to share the one and only interface to the class. The meaning of each parameter

    depends on which activity is being used; for certain activities, some of the parameters will

    even be left blank (although the calling class still needs to use them and to know their

    specific types).

Such a class performs a set of related functions, one of which is selected via a function parameter when the class is called (similar to control coupling).

    Cure:

    Isolate each function into separate operations

    public void sample( int flag )

    {

    switch ( flag )

    {

    case ON:

    // bunch of on stuff

    break;

    case OFF:

    // bunch of off stuff

    break;


    case CLOSE:

    // bunch of close stuff

    break;

    case COLOR:

    // bunch of color stuff

    break;

    }

    }
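The cure can be sketched as follows (an illustrative example; the Device class and its operation names are hypothetical, not from the thesis). Each branch of the switch above becomes its own operation, so a caller invokes the action it needs directly instead of passing a control flag:

```cpp
#include <string>
#include <cassert>

// Each former switch branch is now a separate, self-describing operation.
// The state member exists only to make the example observable.
class Device
{
public:
    void on()    { state = "on"; }
    void off()   { state = "off"; }
    void close() { state = "closed"; }
    void color(const std::string& c) { state = "color:" + c; }
    std::string state;
};
```

A caller now writes d.on() or d.close(); the flag parameter, and the control coupling it created between caller and callee, disappears.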

    2.5.7 Coincidental Cohesion

There is little or no constructive relationship among the elements of the class. A common occurrence is a collection of commonly used source code grouped into a class and inherited via multiple inheritance:

class Rous
{
    public static int findPattern( String text, String pattern ) { /* ... */ }

    public static int average( Vector numbers ) { /* ... */ }

    public static OutputStream openFile( String fileName ) { /* ... */ }
};

    Coincidental cohesion is when parts of a class are grouped arbitrarily (at random); the parts

    have no significant relationship (e.g. a class of frequently used functions). A coincidentally

    cohesive class is one whose elements contribute to activities with no meaningful relationship

    to one another; as in

1. FIX CAR
2. BAKE CAKE
3. WALK DOG
4. FILL OUT ASTRONAUT-APPLICATION FORM
5. HAVE A BEER

A coincidentally cohesive class is similar to a logically cohesive one. Its activities are related neither by flow of data nor by flow of control [45, 46]. (One glance at the above list should


convince you that you'd be unlikely to carry out all of these activities at one time!) However, the activities in a logically cohesive class are at least in the same category; in a coincidentally cohesive class, even that is not true.

    2.6 AVERAGE PERCENTAGE OF FAULTS DETECTED (APFD) METRIC:-

To quantify the goal of increasing a test suite's rate of fault detection, Elbaum et al. [47, 48, 49] developed a metric called APFD that measures the average rate of fault detection per percentage of test suite execution. The APFD is calculated by taking the weighted average of the number of faults detected during the run of the test suite, using the following notation:

Let T be the test suite under evaluation,
m the number of faults contained in the program under test P,
n the total number of test cases, and
TFi the position of the first test in T that exposes fault i.

APFD = 1 - (TF1 + TF2 + ... + TFm)/(n*m) + 1/(2n)

As the formula shows, calculating APFD is only possible when prior knowledge of the faults is available; APFD calculations are therefore used only for evaluation.
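As a minimal illustration (a sketch, not part of the thesis), the APFD formula above can be computed directly from the TF positions:

```cpp
#include <vector>
#include <cassert>
#include <cmath>

// APFD = 1 - (sum of TFi)/(n*m) + 1/(2n), where tf[i] is the 1-based
// position of the first test that exposes fault i, n the number of test
// cases, and m = tf.size() the number of faults.
double apfd(const std::vector<int>& tf, int n)
{
    int m = tf.size();
    int sum = 0;
    for (int pos : tf) sum += pos;  // TF1 + TF2 + ... + TFm
    return 1.0 - (double)sum / (n * m) + 1.0 / (2.0 * n);
}
```

With the TF values of the worked example in Section 3.2.4 (TF1=1, TF2=4, ..., TF10=1) and n = 10 test cases, apfd returns 0.72, matching the 72% computed there.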

    2.7 BUG CLASSIFICATION

Bugs can become visible in any phase of the SDLC, so they can be classified by the SDLC phase in which they appear, as described in [23]. Bugs that go undetected in the early stages of software development can have a more severe effect in later stages and are more difficult to identify, which increases the cost of testing.

Software quality factors are affected by bugs, and the impact on quality depends on the criticality of each bug. Bugs are therefore also classified by their impact on the quality of the software. This categorization can be used for the prioritization of bugs, since it is not possible to resolve all bugs in one release and not all bugs are equally severe.

The following are the categories of bugs based on criticality:


    2.7.1 Critical Bugs:-

Critical bugs have the most severe effect: they stop or hang the normal functionality of the software, and the system needs to be restarted.

    2.7.2 Major Bugs:-

This category of bugs leaves the system functionality working normally, but the software does not fulfill the desired requirements; for example, a wrong output is produced.

    2.7.3 Medium Bugs:-

The effect is less severe than that of the above categories; for example, redundant output is produced. A bug that gives a wrong value is more critical than a bug that gives no value.

    2.7.4 Minor Bugs:-

These bugs are negligible because they affect neither the expected behavior nor the functionality of the software.


    Chapter-3

    PROPOSED WORK

    In this work, regression test case prioritization for object oriented programs is proposed.

    Whenever a change is made in software, regression testing is needed to validate the modified

version of the software. Software written in an object oriented language consists of classes that are interconnected with each other, and the more interconnected the classes are, the higher the probability of error propagation. Therefore a technique is proposed here that orders the classes on the basis of the interconnection between them. There are test cases to check the functionality of each class, and every test case detects some faults. Faults are assigned weights on the basis of criticality, and a fault can be new or previously detected, so test cases that detect critical and new faults (those not detected earlier) are prioritized over other test cases.

When a change occurs in any class of the software, many other classes can be affected. Ripple effect analysis computes the impacted area to which the change propagates, and coupling and cohesion are combined with the ripple effect in the proposed technique to measure the nature of the change in the rest of the system. Testing should be done for the changed class and for the affected classes, but it is impractical to run every test case of the program. A technique is therefore proposed to order the affected classes so as to find faults quickly. Affected classes are linked with the changed class through coupling, and on the basis of this coupling it is decided which class is affected most severely. Classes affected through high coupling are prioritized first: every type of coupling that exists between classes is checked, and each class is given a weight on the basis of its coupling.

After calculating the weights of all classes, the classes are prioritized. The second level of prioritization orders the test cases of each selected class, using the technique of fault coverage per unit time. Every test case is designed when the program is developed and is stored together with the time it takes and the number of faults it detects; each fault is assigned a weight on the basis of its criticality.


    3.1 FIRST LEVEL PRIORITIZATION

First level prioritization is a technique to prioritize the classes of software written in any object oriented language on the basis of coupling.

    3.1.1 Block Diagram of First Level Prioritization:-

The 1st level is concerned with class prioritization using first order dependence matrix values; the block diagram of first level prioritization is shown below in Fig. 3.1. Cohesion and coupling weights are used to calculate the first order dependence matrix.

Fig. 3.1 Block Diagram of 1st Level Prioritization

    3.1.1.1 Coupling and Cohesion Identifier

    This component is helpful for the purpose of identifying dependency (type of coupling)

    among the classes and cohesion type in the class. Coupling indicates the degree of

[Fig. 3.1 components: Program → Coupling and Cohesion Identifier → First Order Dependence Matrix Calculator → design graph (classes are vertices; non-zero entries in the D matrix are edges) → calculate weight of all classes → Prioritized Classes]


    interdependence between two classes whereas cohesion is the measure of the strength of

    functional relatedness of elements within a class. Here, element can be an instruction, a group

    of instructions, a data definition, a call to another function within the class or outside the

    class.

    3.1.1.2 First Order Dependence Matrix Calculator

    This component performs four major functions which are described below.

1. Creation of coupling matrix C.
2. Creation of cohesion matrix S.
3. On the basis of C and S, creation of dependency matrix D.
4. Making a graph; edges are assigned weights on the basis of the first order dependence matrix.

    3.1.1.3 Graph of classes

After designing the first order dependence matrix, a graph is designed in which classes are vertices and all non-zero entries are edges. The weight of each class is calculated by adding the weights of the edges incident on it, and on the basis of these weights the classes are prioritized.

    3.1.1.4 Class Level Prioritizer

The output of this component completes the 1st phase of the block diagram: it prioritizes the classes of the program. Highly affected classes are prioritized over less affected classes, since the probability of finding an error in a highly affected class is greater. Thus the test cases of highly affected classes execute before those of less affected classes.

    3.1.2 Algorithm of First Level Prioritization

The C and S matrices are designed on the basis of the coupling and cohesion weights, and then the D matrix is prepared from C and S. After designing the D matrix, a graph is prepared in which classes are vertices and non-zero entries in the D matrix are edges. The weight of every class is calculated by adding the weights of all edges incident on that class, and on the basis of these weights the classes are prioritized. The algorithm is shown in Algorithm 3.1.


    Algorithm 3.1 First_Level_Prioritization

    Module dependence is primarily affected by coupling and cohesion. A quantitative

    measure of the dependence of modules will be useful to find out the stability of the design.

    Different values [44] are assigned for various types of class coupling and cohesion as shown

    in Table 3.1 and Table 3.2.

    3.1.3 Proposed Coupling and Cohesion Table

    Each type of coupling and cohesion is assigned a weight in Table 3.1 and Table 3.2

    respectively. On the basis of weights defined in Table 3.1 and Table 3.2, first order

    dependence matrix is prepared.

    Table 3.1 Coupling

    Coupling ValueContent 0.95

    Common 0.70

    External 0.65

    Control 0.50

    Type definition 0.35

    Stamp 0.35

    Import 0.30

    Routine call 0.20

    Data 0.20

    FIRST_LEVEL_PRIORITIZATION (P, n)

    Where P is the complete program and n is number of classes

    Begin

    1. Identify type of coupling between classes and create coupling matrix C using coupling

    values.

    2. Identify type of cohesion in the individual class and create cohesion matrix S using

    cohesion values.

    3. Using C and S Matrix construct first order dependence matrix D.

4. Make a graph of classes, where all the classes become vertices and non-zero entries in D are edges.

5. Calculate the weight of each class by adding the weights of all edges incident on that class (the class with the highest weight gets priority over the others)

    End
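Step 5 of Algorithm 3.1 can be sketched as follows (an illustrative implementation, not part of the thesis, assuming the D matrix is held as a 2-D array whose diagonal is ignored):

```cpp
#include <vector>
#include <cassert>
#include <cmath>

// The weight of class i is the sum of the weights of all edges incident on
// it in the dependence graph, i.e. the sum of the off-diagonal entries of
// row i of the first order dependence matrix d.
double classWeight(const std::vector<std::vector<double>>& d, int i)
{
    double w = 0.0;
    for (int j = 0; j < (int)d.size(); ++j)
        if (j != i) w += d[i][j];  // add each incident edge weight
    return w;
}
```

For example, with row 1 of Table 3.5 (edges 0.2, 0.2 and 0.23 incident on class 1), the sum is 0.63, the weight listed for node 1 in Table 3.6.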


    Table 3.2 Cohesion

    Cohesion Value

    Coincidental 0.95

    Logical 0.40

    Temporal 0.60

    Procedural 0.40

    Communicational 0.25

    Sequential 0.20

    Functional 0.20

    3.1.4 First Order Dependence Matrix

A matrix can be obtained by using these two tables, which gives the dependence among all the classes in a program. This dependence matrix describes the probability of having to change class i, given that class j must be changed.

    The first order dependence matrix defines all first order dependencies among all the classes in

    the program and it is the first step in deriving the complete dependence matrix. First order

    dependence matrix is derived using the following three steps.

STEP 1. Determine the coupling among all of the classes in the program. Construct an m*m coupling matrix, where m is the number of classes in the program. Using Table 3.1, fill each element in the matrix C. Element Cij represents the coupling between class i and class j.

The matrix is symmetric, i.e.

Cij = Cji for all i and j.

Also, the elements on the diagonal are all 1 (Cii = 1 for all i).

STEP 2. Determine the strength of each class in the program. Using Table 3.2, record the corresponding numerical values of cohesion in vector S (a vector of m cohesion elements).

STEP 3. Construct the first order dependence matrix D by the formula:

Dij = 0.15 (Si + Sj) + 0.7 Cij, where Cij != 0
Dij = 0, where Cij = 0
Dii = 0, for all i.
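The STEP 3 formula can be written directly in code (an illustrative sketch, not from the thesis):

```cpp
#include <cassert>
#include <cmath>

// First order dependence between classes i and j, from their cohesion
// values si, sj (Table 3.2) and their coupling value cij (Table 3.1).
double dependence(double si, double sj, double cij)
{
    if (cij == 0.0) return 0.0;           // Dij = 0 where Cij = 0
    return 0.15 * (si + sj) + 0.7 * cij;  // Dij = 0.15(Si + Sj) + 0.7 Cij
}
```

With the values of the example below, classes 1 (sequential, 0.2) and 4 (logical, 0.4) with data coupling 0.2 give 0.15(0.2 + 0.4) + 0.7(0.2) = 0.23, the entry D14 of Table 3.5.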


    3.1.5 Example:-

To illustrate the algorithm, consider a program with 10 classes having the following characteristics.

1. Classes 2, 3, 9 have functional cohesion.
2. Classes 1, 5, 6, 7 have sequential cohesion.
3. Class 8 has coincidental cohesion.
4. Classes 4 and 10 have logical cohesion.
5. Classes 7, 8 have external coupling.
6. Classes (1,2), (1,3), (1,4), (2,5), (2,6), (3,7), (4,8), (8,9), (8,10) have data coupling.

    The coupling matrix C (STEP 1) is shown below in Table 3.3, the strength vector S (STEP 2)

    is shown below in Table 3.4 and the resultant first order dependence matrix D (STEP 3) is

    shown in Table 3.5.

    This matrix can be represented by an undirected graph (Fig. 3.2), where the classes are

    represented by nodes and non zero matrix elements are represented by edges, and weight of

    each node is shown in Table 3.6

Table 3.3 Coupling Matrix

C =
0.0 0.2 0.2 0.2 0.0 0.0 0.0 0.0 0.0 0.0
0.2 0.0 0.0 0.0 0.2 0.2 0.0 0.0 0.0 0.0
0.2 0.0 0.0 0.0 0.0 0.0 0.2 0.0 0.0 0.0
0.2 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.0 0.0
0.0 0.2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.2 0.0 0.0 0.0 0.0 0.6 0.0 0.0
0.0 0.0 0.0 0.2 0.0 0.0 0.6 0.0 0.2 0.2
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.0 0.0

Table 3.4 Cohesion Matrix

S = 0.2 0.2 0.2 0.4 0.2 0.2 0.2 0.95 0.2 0.4


Table 3.5 First Order Dependence Matrix

D =
1.0  0.2  0.2  0.23 0.0 0.0 0.2  0.0  0.0  0.0
0.2  1.0  0.0  0.0  0.2 0.2 0.0  0.0  0.0  0.0
0.2  0.0  1.0  0.0  0.0 0.0 0.2  0.0  0.0  0.0
0.23 0.0  0.0  1.0  0.0 0.0 0.0  0.34 0.0  0.0
0.0  0.2  0.0  0.0  1.0 0.0 0.0  0.0  0.0  0.0
0.0  0.2  0.0  0.0  0.0 1.0 0.0  0.0  0.0  0.0
0.0  0.0  0.2  0.0  0.0 0.0 1.0  0.59 0.0  0.0
0.0  0.0  0.0  0.34 0.0 0.0 0.59 1.0  0.31 0.34
0.0  0.0  0.0  0.0  0.0 0.0 0.0  0.31 1.0  0.0
0.0  0.0  0.0  0.0  0.0 0.0 0.0  0.34 0.0  1.0

This matrix can be represented by an undirected graph (Fig. 3.2), where the classes are represented by nodes and non-zero matrix elements are represented by edges.

[Fig. 3.2 Graph: nodes 1-10, with edge weights 1-2: 0.2, 1-3: 0.2, 1-4: 0.23, 2-5: 0.2, 2-6: 0.2, 3-7: 0.2, 4-8: 0.34, 7-8: 0.59, 8-9: 0.31, 8-10: 0.34]

Table 3.6 Weight of Each Node

Node   1    2    3    4    5    6    7    8    9    10
Weight 0.63 0.6  0.4  0.57 0.2  0.2  0.79 0.99 0.31 0.34

Prioritized classes after first level prioritization are: 8, 7, 1, 2, 4, 3, 10, 9, 5, and 6


    3.2 SECOND LEVEL PRIORITIZATION:-

Second level prioritization is a technique to prioritize test cases on the basis of fault coverage per unit time.

    3.2.1 Block diagram of Second Level Prioritization:-

Second level prioritization is shown below in Fig. 3.3. Fault category and the time taken by each test case are the inputs to this prioritization technique.

[Fig. 3.3 flow: test suite T → identify the weight of every fault → calculate the fault coverage per unit time of each test case → remove the best one from T and add it to T′ → until T is empty: calculate the new-fault coverage weight per unit time of each test case in T, remove from T those whose weight is 0, remove the best one and add it to T′]

Fig. 3.3 Block Diagram of Second Level Prioritization


    3.2.2 Algorithm of Second Level Prioritization

A technique is proposed here that orders the test cases on the basis of fault coverage. Every test case detects some faults. Faults are assigned weights on the basis of criticality, and a fault can be new or previously detected, so test cases that detect critical and new faults (those not detected earlier) are prioritized over other test cases. Algorithm 3.2 is proposed for 2nd level prioritization.

    Algorithm 3.2 Second_Level_Prioritization

    3.2.3 Proposed Fault Table

Faults can be categorized on the basis of severity and are assigned a weight on the basis of criticality; the weights of the faults are shown in Table 3.7.

Table 3.7 Fault Weight

Category Weight

    Critical 4

    Major 3

    Medium 2

    Minor 1

PRIORITIZATION OF TEST CASES OF PRIORITIZED CLASS

// there are M test cases and N faults, and each fault is assigned some weight.

Begin

1. T is the original test suite, T′ is the prioritized test suite, time_taken[M], FAULT[N]

2. Calculate the fault weight per unit time of each test case.

3. Arrange them in decreasing order.

4. Remove the best one from T and add it to T′.

5. Repeat steps 6 and 7 until T is empty.

6. Calculate the weight of new faults detected per unit time of each test case.

// New faults are those faults which are not detected by any test case in T′.

7. Remove the best one from T and add it to T′.

8. Go to step 5.

9. Return T′.

    End


    3.2.4 Example

To illustrate the proposed algorithm, an example is taken with 10 test cases and 10 faults; all faults are minor and so have weight 1. The test cases and faults are shown in Table 3.8.

    Table 3.8 Random Test Suite

    Fault

    1

    * *

    Fault

    2

    * * * *

    Fault

    3

    * * * *

    Fault

    4

    *

    Fault

    5

    * * *

    Fault

    6

    *

    Fault

    7

    * *

    Fault

    8

    * *

    Fault

    9

    *

    Fault

    10

    * *

    No. of

    fault

    2 3 1 3 2 3 2 2 2 2

    Time

    taken

    5 7 11 4 10 12 6 15 8 9

    T1 T2 T3 T4 T5 T6 T7 T8 T9 T10


    APFD Result of Test Suite before Prioritization:-

TFi = the position of the test case that first detects fault i
n = total number of test cases
m = total number of faults

APFD = 1 - (TF1 + TF2 + ... + TFm)/(n*m) + 1/(2n)

TF1=1  TF2=4
TF3=2  TF4=7
TF5=2  TF6=4
TF7=4  TF8=2
TF9=6  TF10=1

APFD = 1 - (1+4+2+7+2+4+4+2+6+1)/(10*10) + 1/(2*10)

    APFD= 1-33/100+1/20

    APFD=1-.33+.05

    APFD= .72

    APFD= 72%

    Prioritization of test suite based on proposed algorithm

    STEP1: VTI=FAULT/TIME (RATE OF FAULT DETECTION)

    VT1=2/5=0.40 VT2=3/7=0.42

    VT3=1/11=0.09 VT4=3/4=0.75

    VT5=2/10=0.2 VT6=3/12=0.25

    VT7=2/6=0.33 VT8=2/15=0.133

    VT9=2/8=0.25 VT10=2/9=0.22

    STEP2: SORTING OF VTI

    T4 T2 T1 T7 T6 T9 T10 T5 T8 T3

STEP3: REMOVE T4 FROM T AND ADD T4 TO T′


Now T′ contains: T4.

T contains: T1, T2, T3, T5, T6, T7, T8, T9, T10

    STEP 4: UNTIL T IS NOT EMPTY

    STEP5: NEW FAULT COVERAGE OF TEST CASES PER UNIT TIME

    VT1=2/5=0.4 VT2=3/7=0.42

    VT3=1/11=0.09 VT5=1/10=0.1

    VT6=3/12=.25 VT7=1/6=0.16

    VT8=1/15=0.06 VT9=1/8=.12

    VT10=2/9=0.22

STEP6: REMOVE T2 FROM T AND ADD IT TO T′

T contains: T1, T3, T5, T6, T7, T8, T9, T10.

T′ contains: T4, T2.

    STEP7: GO TO STEP 4

    STEP4: UNTIL T IS NOT EMPTY

    STEP5: NEW FAULT COVERAGE OF TEST CASES PER UNIT TIME

    VT1=2/5=0.4 VT3=0/11=0.00

    VT5=0/10=0.00 VT6=2/12=.16

    VT7=1/6=0.16 VT8=0/15=0.00

    VT9=0/8=0.00 VT10=1/9=0.11

STEP6: REMOVE T1 FROM T AND ADD IT TO T′

T contains: T6, T7, T10, T3, T5, T8, T9.

T′ contains: T4, T2, T1.

    STEP7: GO TO STEP 4

    STEP4: UNTIL T IS NOT EMPTY


    STEP5: NEW FAULT COVERAGE OF TEST CASES PER UNIT TIME

    VT3=0/11=0.00 VT5=0/10=0.00

    VT6=1/12=0.08 VT7=1/6=0.16

    VT8=0/15=0.00 VT9=0/8=0.00

    VT10=0/9=0.00

STEP6: REMOVE T7 FROM T AND ADD IT TO T′

T contains: T6, T3, T5, T8, T9, T10.

T′ contains: T4, T2, T1, T7.

    STEP7: GO TO STEP 4

    STEP4: UNTIL T IS NOT EMPTY

    STEP5: NEW FAULT COVERAGE OF TEST CASES PER UNIT TIME

    VT6=1/12=.08 VT3=0/11=0.00

    VT5=0/10=0.00 VT8=0/15=0.00

    VT9=0/8=0.00 VT10=0/9=0.00

STEP6: REMOVE T6 FROM T AND ADD IT TO T′

T contains: T3, T5, T8, T9, T10

T′ contains: T4, T2, T1, T7, T6.

SIMILARLY, THE REMAINING TEST CASES ARE ALSO ADDED INTO T′

STEP8: RETURN T′, WHICH IS THE PRIORITIZED TEST SUITE.

    Prioritized test suite is shown in Table 3.9.

    Table 3.9 Prioritized Test Suite

T1(T4) T2(T2) T3(T1) T4(T7) T5(T6) T6(T3) T7(T5) T8(T8) T9(T9) T10(T10)

    FAULT

    1

    * *

    FAULT

    2

    * * * *


    APFD=1-.24+.05

    APFD= .81

    APFD= 81%

    3.2.5 Comparison of Prioritized and Non Prioritized Test Suite

The prioritized test suite is better than the random test suite. The comparison using the APFD metric is shown in Table 3.10:

    Table 3.10 APFD Metric

TEST SUITE APFD (%)

    RANDOM 72

    PRIORITIZED 81


    CHAPTER-4

    EXPERIMENTAL EVALUATION AND ANALYSIS OF PROPOSED WORK

    In this chapter the proposed technique has been verified and analyzed by taking a case study.

The case study consists of three classes: study, time and usetime. In class study a function schedule is defined; schedule calls a function that is defined in class time. In class usetime a function is defined that takes objects of class time as arguments.

    4.1 CASE STUDY

#include <iostream.h>
#include <conio.h>

class study
{
    schedule(int a, int b)
    {
        gettime(a, b);
        if (hour


    Case 4:

    Cout


    }

    };

// ----------- there is type definition coupling between class 2 and class 3 -----------
class usetime : public time
{
public:
    // definition with objects of class time as arguments
    // ------------------------- procedural cohesion in class 3 -------------------------
    void sum(time t1, time t2)
    {
        minutes = t1.minutes + t2.minutes;
        hours = minutes / 60;
        minutes = minutes % 60;
        hours = hours + t1.hours + t2.hours;
    }
};

void main()
{
    time t1, t2;
    usetime t3;
    t1.gettime(2, 45);  // get t1
    t2.gettime(3, 30);  // get t2
    t3.sum(t1, t2);     // t3 = t1 + t2
    cout << t3.hours << ":" << t3.minutes;
}


coupling is good for software, so control coupling is considered between class 1 and class 2. There is temporal cohesion in class 1, sequential cohesion in class 2 and procedural cohesion in class 3. The coupling and cohesion in the case study are defined in Table 4.1 and Table 4.2 respectively.

Table 4.1 Coupling

Class pair      Coupling
Class1-Class2   Data coupling
Class1-Class2   Control coupling
Class2-Class3   Type definition use coupling

Table 4.2 Cohesion

Class    Cohesion
Class1   Temporal
Class2   Sequential
Class3   Procedural

    4.1.1.1 Implementation of Case Study

    Screen 1


    Screen 2


This screen shows the prioritized classes on the basis of the first order dependence matrix.

    Screen 3

First level prioritization of the case study is implemented by the code in Appendix A.

    4.1.2 Second Level Prioritization:-

A test case prioritization technique based on fault coverage per unit time is proposed. Test cases are generated on the basis of branch coverage, and an example is explained in which the APFD metric is used to analyze the proposed technique.


6 if (hour


    4.1.2.2.1 Flow graph of function schedule of class 1

The flow graph of the function schedule of the case study is shown below in Fig 4.2.

    Fig 4.2 Flow Graph of Class 1



    8 ABCDFHIMTXY

    9 ABCDEGINUXY

    10 ABCDFHINUXY

    11 ABCDEGIOVXY

    12 ABCDFHIOVXY

13 ABCDEGIPWXY

14 ABCDFHIPWXY

Faults that can be detected in class 1:

Fault 1: at node C, in the definition of the function

Fault 2: at node D, checking condition

Fault 3: at node I, switch statement

Fault 4: at node J

Fault 5: at node K

Fault 6: at node L

Fault 7: at node M

Fault 8: at node N

Fault 9: at node O

Fault 10: at node P

    Each fault is assigned a weight as shown below in Table 4.4.

    Test cases and fault of class 1 are shown below in Table 4.5.

Table 4.4 Fault Weight

Fault Number    Fault Weight

    1 2

    2 2

    3 2

    4 1

    5 1

    6 1

    7 1

    8 1

    9 1

    10 1

    Table 4.5 Random Test Suite and Fault Table

    T1 T2 T3 T4 T5 T6 T7 T8 T9 T10 T11 T12 T13 T14

    F1 * * * * * * * * * * * * * *

    F2 * * * * * * * * * * * * * *

    F3 * * * * * * * * * * * * * *

    F4 * *

    F5 * *

    F6 * *

F7 * *

F8 * *


    Screen 8


    Screen 9

Fault detection corresponding to each test case of the random and prioritized test suites is shown in Table 4.6. The fault percentage detected by each test case of the random and prioritized test suites is

    shown in Fig 4.4 and Fig 4.5 respectively.

    Table 4.6 Test Suite and Fault % detected

    Random test suite Fault % detected Prioritized test suite Fault % detected

    1 40 1 40

    2 40 2 50

    3 50 3 60

    4 50 4 70

5 60 5 80

6 60 6 90


    7 70 7 100

    8 70 8 100

    9 80 9 100

    10 80 10 100

    11 90 11 100

12 90 12 100

13 100 13 100

    14 100 14 100

    Screen 10


    Fig 4.4 Analysis i

    Screen 11


    Fig 4.5 Analysis ii

    Prioritized order of test suite of class 1:-

    14, 12, 10, 8, 6, 4, 2, 1, 3, 5, 7, 9, 11, 13

4.1.2.2.5 Comparison of prioritized and random test suites

In the previous section, APFD is calculated for the prioritized and non-prioritized test suites; the comparison is shown in Table 4.7.

Table 4.7 Comparison of test suites

    TEST SUITE APFD

    Random 53

    Prioritized 74

    4.1.2.3 Test Case Prioritization of Class 2:-

1 class time : public study

2 {

3 int hours;

4 int minutes;

5 public:

6 void gettime(int h, int m)


    7 {

8 if(h


    Fig 4.6 Flow Graph of Class 2




Prioritized order of test cases of class 2:-

TC2, TC1, TC3

This is because the fault detection per unit time of test case 2 is higher than that of test cases 1 and 3.

    4.1.2.4 Test Case Prioritization of Class 3:-

class usetime : public time

1 {

2 public:

// definition with object of class time as argument

//------------------------- procedural cohesion in class 3 -------------------------------

3 void sum(time t1, time t2)

4 {

5 minutes = t1.minutes + t2.minutes;

6 hours = minutes / 60;

7 minutes = minutes % 60;

8 hours = hours + t1.hours + t2.hours;

9 } };

Test cases are designed here on the basis of code coverage. There is only one function in class 3, and a single test case is enough to cover all statements of function sum, so there is no need to prioritize the test cases of class 3.

    4.1.3 Prioritized Classes with Their Prioritized Test Cases:-

Classes of the case study are prioritized in the first level of prioritization and the test cases of each class are prioritized in the second level of prioritization; the result is shown in Table 4.9.

    Table 4.9 Result of Proposed Technique

Class2    TC2 TC1 TC3

Class1    TC14 TC12 TC10 TC8 TC6 TC4 TC2 TC1 TC3 TC5 TC7 TC9 TC11 TC13

Class3    TC1

The analysis shows that the proposed technique is better than the random test case prioritization approach.


    Chapter 5

    CONCLUSION AND FUTURE WORK

The proposed technique for regression test case prioritization in object oriented programs using coupling information and fault coverage is beneficial for saving resources such as time and cost. In this technique, badly affected classes are prioritized over less affected classes. The technique proves effective because it uses two phases: identification of the badly affected classes using the first order dependence matrix, and prioritization of the test cases of the affected classes using the fault coverage per unit time approach. This prioritization is better because test cases that detect new faults (faults not yet detected by any test case) and critical faults in less time are considered first. In this thesis, two components (the first order dependence matrix calculator and the class level prioritizer) have been automated in the C language. An experimental evaluation has also been performed using an example program written in C++. To illustrate the effectiveness of the proposed prioritization technique, the Average Percentage of Faults Detected (APFD) metric has been used. The analysis shows that the proposed technique is better than the random test case prioritization approach.

The first level prioritization can be made more efficient by adding the scope of variables. The first order dependence matrix alone cannot be final for prioritizing the classes of a program: the scope of the affected variables in the classes should also be considered and a final matrix prepared. Additional weight is assigned to a class by calculating how many affected variables propagate within the class and how many affected variables the class passes to other classes. Adding the scope of variables in the classes to the coupling information will thus make the prioritization technique more general.


[13] Fischer, F. Raji, and A. Chruscicki. A methodology for retesting modified software. In Proceedings of the National Telecommunications Conference, pp. B6.3.1-B6.3.6. IEEE Computer Society Press, 1981.

[14] G. Rothermel, M. J. Harrold, J. Ostrin, and C. Hong. An empirical study of the effects of minimization on the fault detection capabilities of test suites. In Proceedings of the International Conference on Software Maintenance, pages 34-43, November 1998.

[15] M. C. Thompson, D. J. Richardson, and L. A. Clarke. An information flow model of fault detection. In Proceedings of the ACM International Symposium on Software Testing and Analysis, pages 182-192, June 1993.

[16] R. G. Hamlet. Testing programs with the aid of a compiler. IEEE Transactions on Software Engineering, SE-3(4):279-290, July 1977.

[17] G. Rothermel, R. H. Untch, C. Chu, and M. J. Harrold. Prioritizing test cases for regression testing. IEEE Transactions on Software Engineering, vol. 27, no. 10, pp. 929-948, October 2001.

[18] D. Hoffman and C. Brealey. Module test case generation. In 3rd Workshop on Software Testing, Analysis, and Verification, pages 97-102, December 1989.

[19] R. Gupta, M. J. Harrold, and M. L. Soffa. An approach to regression testing using slicing. In Proceedings of the Conference on Software Maintenance, pp. 299-308, Orlando, FL, November 1992.

[20] H. K. N. Leung and L. J. White. A study of integration testing and software regression at the integration level. In Proceedings of the Conference on Software Maintenance, pages 290-300, November 1990.

[21] M. Balcer, W. Hasling, and T. Ostrand. Automatic generation of test scripts from formal test specifications. In Proceedings of the 3rd Symposium on Software Testing, Analysis, and Verification, pages 210-218, December 1989.

[22] M. E. Delamaro and J. C. Maldonado. Proteum - a tool for the assessment of test adequacy for C programs. In Proceedings of the Conference on Performability in Computing Systems (PCS 96), pages 79-95, New Brunswick, NJ, July 25-26, 1996.

[23] Naresh Chauhan. Software Testing: Principles and Practices. Oxford University Press India, 2010.

[24] T. Goradia. Dynamic impact analysis: a cost-effective technique to enforce error propagation. In Proceedings of the ACM International Symposium on Software Testing and Analysis, pages 171-181, June 1993.


[25] M. J. Harrold, R. Gupta, and M. L. Soffa. A methodology for controlling the size of a test suite. ACM Transactions on Software Engineering and Methodology, 2(3):270-285, July 1993.

[26] Y. F. Chen, D. S. Rosenblum, and K. P. Vo. TestTube: a system for selective regression testing. In Proceedings of the 16th International Conference on Software Engineering, pages 211-222, May 1994.

[27] R. A. DeMillo, R. J. Lipton, and F. G. Sayward. Hints on test data selection: help for the practicing programmer. Computer, 11(4):34-41, April 1978.

[28] Praveen Ranjan Srivastava. Test case prioritization. Journal of Theoretical and Applied Information Technology, 2005-2008 JATIT.

[29] B. Beizer. Software Testing Techniques. Van Nostrand Reinhold, New York, NY, 1990.

D. Binkley. Semantics guided regression test cost reduction. IEEE Transactions on Software Engineering, 23(8):498-516, August 1997.

[30] H. Agrawal, J. R. Horgan, E. W. Krauser, and S. A. London. Incremental regression testing. In Proceedings of the IEEE Conference on Software Maintenance, Montreal, Canada, 1993.

[31] Zengkai Ma and Jianjun Zhao. Test case prioritization based on analysis of program structure. Pages 471-478, 2008.

[32] Hind Kabaili, Rudolf K. Keller, and François Lustman. A change impact model encompassing ripple effect and regression testing.

[33] F. M. Haney. Module connection analysis - a tool for scheduling software debugging activities. In Proceedings of the AFIPS Joint Computer Conference, pages 173-179, December 1972.

[34] Stephen S. Yau and James S. Collofello. Some stability measures for software maintenance. IEEE Transactions on Software Engineering, 6(6):545-552, November 1980.

[35] Sue Black. Measuring ripple effect for software maintenance. In Proceedings of the International Conference on Software Maintenance, Oxford, England, August 1999. IEEE.

[36] Lionel Briand (Germany), Prem Devanbu (USA), and Walcelio Melo (Canada). An investigation into coupling measures for C++.

[37] D. A. Young. Object-Oriented Programming with C++ and OSF/Motif. Prentice-Hall, 1992.


[38] W. Li and S. Henry. Object-oriented metrics that predict maintainability. JSS, 23(2):111-122, 1993.

[39] Selby and A. Porter. Learning from examples: generation and evaluation of decision trees for software resource analysis. IEEE TSE, 14(2):1743-1747, 1988.

[40] E. Gamma, R. Helm, R. Johnson, and J. Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, October 1994.

[41] Pressman. Software Engineering.

[42] Johann Eder, Gerti Kappel, and Michael Schrefl. Coupling and cohesion in object-oriented systems.

[43] D. W. Embley and S. N. Woodfield. Cohesion and coupling for abstract data types. In 6th International Phoenix Conference on Computers and Communications, pp. 292-234, IEEE Computer Society Press, Arizona, 1987.

[43] H. K. N. Leung and L. White. Insights into regression testing. In Proceedings of the Conference on Software Maintenance, pages 60-69, October 1989.

[44] K. K. Aggarwal. Software Engineering. 3rd edition, New Age International Publishers.

[45] M. Hutchins, H. Foster, T. Goradia, and T. Ostrand. Experiments on the effectiveness of data flow- and control flow-based test adequacy criteria. In Proceedings of the 16th International Conference on Software Engineering, pages 191-200, May 1994.

[46] S. Rapps and E. J. Weyuker. Selecting software test data using data flow information. IEEE Transactions on Software Engineering, 11(4):367-375, April 1985.

[47] Alexey G. Malishevsky, Joseph R. Ruthruff, Gregg Rothermel, and Sebastian Elbaum. Cost-cognizant test case prioritization. 2006.

[48] S. Elbaum, A. Malishevsky, and G. Rothermel. Test case prioritization: a family of empirical studies. IEEE Transactions on Software Engineering, February 2002.

[49] Sebastian Elbaum, Gregg Rothermel, Satya Kanduri, and Alexey G. Malishevsky. Selecting a cost-effective test case prioritization technique. April 20, 2004.

[50] K. Onoma, W-T. Tsai, M. Poonawala, and H. Suganuma. Regression testing in an industrial environment.

[51] P. Coad and E. Yourdon. Object-Oriented Analysis. Prentice-Hall, Englewood Cliffs, NJ, 1991.

[52] P. Coad and E. Yourdon. Object-Oriented Analysis. Prentice-Hall, Englewood Cliffs, NJ, 2nd edition, 1991.


APPENDIX A

#include <stdio.h> /* header names truncated in the source; stdio.h assumed for printf/scanf */

#include <conio.h> /* assumed */

void main()

    {

    int

    totalmodule,i,j,k,content,common,external,type_def,control,stamp,import,routine,data,sn[15],

    row,coloumn;

float coupling[10],cohesion[8],fodm[15][15],cpmt[15][15],ch[15],weight[15],temp;

    coupling[1]=0.95;

    coupling[2]=0.70;

    coupling[3]=0.65;

    coupling[4]=0.60;

    coupling[5]=0.50;

    coupling[6]=0.35;

    coupling[7]=0.30;

    coupling[8]=0.20;

    coupling[9]=0.20;

    // defining cohesion values

    cohesion[0]=0.00;

    cohesion[1]=0.95;

    cohesion[2]=0.60;

    cohesion[3]=0.40;

    cohesion[4]=0.40;

    cohesion[5]=0.25;

    cohesion[6]=0.20;

    cohesion[7]=0.20;

    printf("\nenter the number of classes in the system\n");

    scanf("%d",&totalmodule);

    printf("we need to create a %d * %d first order dependence

    matrix\n\n",totalmodule,totalmodule);

  • 7/30/2019 Chapter Complete1

    74/96

    74

    printf("\n\n\n enter the cohesion type \n\n\n");

    printf("\ncohesion 0 is no cohesion\n\n");

    printf("\ncohesion 1 is coincidental cohesion\n\n");

    printf("\ncohesion 2 is temporal cohesion\n\n");

    printf("\ncohesion 3 is logical cohesion\n\n");

    printf("\ncohesion 4 is procedural cohesion\n\n");

    printf("\ncohesion 5 is communicational cohesion\n\n");

    printf("\ncohesion 6 is sequential cohesion\n\n");

    printf("\ncohesion 7 is functional cohesion\n\n");

    for(i=1;i


    printf("\nfor how many class pair there is content coupling\t");

    scanf("%d",&content);

    if(content==0)

    {

    }

    else

    {

    for(i=1;i


    else

    {

    for(i=1;i


    {

    printf("\nenter first component\t");

    scanf("%d",&j);

    printf("\n enter second component\t");

    scanf("%d",&k);

    cpmt[j][k]=0.65;

    cpmt[k][j]=0.65;

    }

    printf("\n\n");

    // printing the coupling matrix after defining external coupling

    for(i=1;i


    {

    for(j=1;j


    printf("\n\n");

    }

    }

// defining the routine call coupling

    printf("\nfor how many class pair there is routine call coupling\t");

    scanf("%d",&routine);

    if(routine==0)

    {

    }

    else

    {

    for(i=1;i


    scanf("%d",&data);

    if(data==0)

    {

    }

    else

    {

    for(i=1;i


    fodm[i][j]=0;

    }

    else

    {

    fodm[i][j]=0.15*(ch[i]+ch[j])+0.7*(cpmt[i][j]);

    }

    }

    }

printf("\n\n\t\t\t\tfirst order dependence matrix\n\n\n");

    for(i=1;i


    }

    for(i=1;i


    APPENDIX B

    // PROGRAM FOR TEST CASE PRIORITIZATION

#include <stdio.h> /* header names truncated in the source; stdio.h assumed for printf/fscanf */

#include <conio.h> /* assumed for getch */

#include

    void testcase();

    //void newtech();

    //void prioritization();

    int test[15][15],tf[15],tf1[15],original[15][15],op[15][15];

    //int numof_fault[10];

    float apfd,apfd1,time_taken[15],tt[15],numof_fault[15],temp[10],temp1[10],temp2,swptt[15];

    int

    l,tempgraph[15],sum[15],oldcount=0,count=0,original_graph[15],sum_weight[15],temp3,tota

    ltest,totalfault ;

    int ch,testnew_graph[15],fault_weight[10]/*[11]={1,1,1,1,1,1,1,1,1,1,1},*/

    ,m,n,o,completed,sn[15];

    void main()

    { int i;

    getch();

    testcase();

    //newtech();

    //prioritization();

    getch();

    }

    void testcase()

    {

    int i,j,k,n;

    float vd[10];

    FILE *f;

printf("which file do you want to prioritize");

    printf("\n\n class\t1\tstudy\n\nclass\t2\ttime\n\nclass\t3\tusetime\n\n");

scanf("%d",&ch);

printf("\n\n");


    {

    fscanf(f,"%d",&test[i][j]);

    } } }

    for(j=1;j


    printf("\n\n");

    printf("time taken ");

    for(i=1;i


    apfd=apfd*100;

    printf("%5.2f",apfd);

    getch();

    printf("\n\n");

printf("\n--------prioritized test suite on the basis of fault coverage per unit time-------");

    for(i=1;i


    original[k][j]=temp[k];

    }

    } } }

    ////////////take first test case as prioritized first test case//////////////

    for(i=1;i


    }

    }

    }

    }

    o=completed+1;

    for(i=o;i


    printf("%d ",op[i][j]);

    }

    printf("\n\n");

    }

    printf("\n\n\n\t\t\t\t prioritized order of test cases \n\n\n\n\n");

    for(i=1;i


    {

    for(j=1;j1)

    {

    for(l=1;l


    printf("%d\t",testnew_graph[i]);

    }

    // graph data for original test suite

    count=0;

    oldcount=0;

    printf("\n\n\n new fault detected per test case of original test suite for graphical

    representation\n\n\n");

    for(i=1;i