
Nov. 17, 2005 1

Verification and Validation

Verification: checks that the program conforms to its specification.
– Are we building the product right?

Validation: checks that the program as implemented meets the expectations of the user.
– Are we building the right product?

Nov. 17, 2005 2

The Test Plan

A ``living document''. It is born with the system and evolves as the system evolves. It is what the key decision makers use to evaluate the system.

User objectives
System Description and Traceability Matrices
Special Risk Elements
Required Characteristics – operational and technical
Critical Test Issues – operational and technical

Nov. 17, 2005 3

Management Plan

Integrated Schedule
Roles and Responsibilities
Resources and Sharing

Nov. 17, 2005 4

Verification Outline

Verification to Date
Previous Results
Testing Planned
– Unresolved Issues
– Issues arising during this phase
– Scope of Planned tests
– Test Objectives
Special Resources
Test Articles

Nov. 17, 2005 5

Validation Outline

Validation to Date
Previous Results
Testing Planned
– Unresolved Issues
– Issues arising during this phase
– Scope of Planned tests
– Test Objectives
Special Resources
Test Articles

Nov. 17, 2005 6

Test Results and Traceability

Test Procedures
Test Reporting
Development Folders

Nov. 17, 2005 7

Static Verification

Program inspection
Formal methods

Nov. 17, 2005 8

Code Walkthroughs and Inspections

Code reviews lead to rapid and thorough fault detection
– Up to 95 percent reduction in maintenance costs

Nov. 17, 2005 9

Verification and Proofs of Correctness

Formally specify the desired functionality, then verify that the program is a correct implementation of the specification.

Nov. 17, 2005 10

Hoare's Rules

Program fragments and assertions are composed into triples {P} S {Q}
– where P is the precondition assertion, Q is the postcondition assertion, and S is a program statement or sequence of statements.
– Interpretation: if P is true before S is executed, then when S terminates, Q is satisfied.

Nov. 17, 2005 11

Proofs of Correctness

Partial correctness: if the precondition is true, and the program terminates, then the postcondition is satisfied.

Total correctness: partial correctness plus a proof of termination.

Nov. 17, 2005 12

The Assignment Rule

{P} x := f {Q}
where P is the same as Q, except that all occurrences of x in Q have been replaced by f (backward substitution).

{x = 5} x := x + 1 {x = 6}
{z > y + 50} x := z – 43 {x > y + 7}

Nov. 17, 2005 13

Rule for Sequencing Statements

{F1} S1 {F2},   {F2} S2 {F3}

{F1} S1; S2 {F3}
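
A small worked example (the specific numbers are chosen here only for illustration): to show {x = 5} x := x + 1; y := 2 * x {y = 12}, apply the assignment rule backwards through y := 2 * x to obtain the intermediate assertion {2 * x = 12}, i.e. {x = 6}, and again through x := x + 1 to obtain {x + 1 = 6}, i.e. {x = 5}. The sequencing rule then combines
{x = 5} x := x + 1 {x = 6}  and  {x = 6} y := 2 * x {y = 12}
into {x = 5} x := x + 1; y := 2 * x {y = 12}.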

Nov. 17, 2005 14

Rule for Conditions and Loops

{P & C} S1 {Q}, {P & ~C} S2 {Q}

{P} if C then S1 else S2 endif {Q}

{I & C} S { I }

{I} while C do S od {I & ~C}
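
As an illustration of the loop rule (a minimal example, not taken from the slides): for
{n >= 0} i := 0; while i < n do i := i + 1 od {i = n}
take the invariant I: i <= n. Initialization: {n >= 0} i := 0 {i <= n}. Preservation: {i <= n & i < n} i := i + 1 {i <= n}, since i < n implies i + 1 <= n. On exit the loop rule gives I & ~C, i.e. i <= n and not (i < n), which implies i = n. A separate termination argument (n - i decreases on every iteration) upgrades this partial-correctness proof to total correctness.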

Nov. 17, 2005 15

Cleanroom (contd)

Prototype automated documentation system for the U.S. Naval Underwater Systems Center

1820 lines of FoxBASE
– 18 faults were detected by “functional verification”

– Informal proofs were used

– 19 faults were detected in walkthroughs before compilation

– There were NO compilation errors

– There were NO execution-time failures

Nov. 17, 2005 16

Software Testing

Software Requirements Specification
– Describes the expected runtime behaviors of the software.
A Test Plan
– Describes how to test each behavior.
The software (source code or executable)

Nov. 17, 2005 17

Testing

Failure:
– The departure of program operation from user requirements.
Fault:
– A defect in a program that may cause a failure.
Error:
– Human action that results in software containing a fault.

Nov. 17, 2005 18

Types of Faults

algorithmic faults
computation and precision faults
documentation faults
stress or overload faults
capacity or boundary faults
timing or coordination faults
throughput or performance faults

Nov. 17, 2005 19

IBM Orthogonal Defect Classification

Function: fault that affects capability, end-user interfaces, product interface with hardware architecture, or global data structure.
Interface: fault in interfacing with other components or drivers via calls, macros, control blocks, or parameter lists.
Checking: fault in program logic that fails to validate data and values properly before they are used.
Assignment: fault in data structure or code block initialization.
Timing/serialization: fault that involves timing of shared and real-time resources.
Build/package/merge: fault that occurs because of problems in repositories, management of changes, or version control.
Documentation: fault that affects publications and maintenance notes.
Algorithm: fault involving efficiency or correctness of an algorithm or data structure, but not design.

Nov. 17, 2005 20

The Testing Process

Unit testing
Component testing
Integration testing
System testing
Acceptance testing

Nov. 17, 2005 21

Testing Strategies

Top-down testing
Bottom-up testing
Thread testing
Stress testing
Back-to-back testing

Nov. 17, 2005 22

Traditional Software Testing Techniques

Black box testing
– program specifications: functional testing, graph-based testing
– operational profile: random testing, partition testing
– boundary value analysis
White box testing
– statement coverage
– branch coverage
– data flow coverage
– path coverage
Others
– stress testing
– back-to-back testing

Nov. 17, 2005 23

Graph-Based Testing

Transaction flow modeling
Finite state modeling
Data flow modeling
Timing modeling

Nov. 17, 2005 24

Partition Testing

If an input condition specifies a range, one valid and two invalid equivalence classes are defined.

If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.

If an input condition specifies a member of a set, one valid and one invalid class are defined.

If an input condition is Boolean, one valid and one invalid class are defined.

Nov. 17, 2005 25

Boundary Value Analysis

Select test cases on or just to one side of the boundary of equivalence classes.
– If an input condition specifies a range bounded by values a and b, test cases should be designed with values a and b, and with values just above and just below a and b.
– If an input condition specifies a number of values, test cases should be developed that exercise the minimum and maximum numbers. Values just above and below the minimum and maximum are also tested.
– Apply guidelines 1 and 2 to output conditions.
– If internal program data structures have prescribed boundaries, be certain to design a test case to exercise the data structure at its boundary.

Nov. 17, 2005 26

Partition/Equivalence Testing

Any one member of an equivalence class is as good a test case as any other member of the equivalence class

Range (1..16,383) defines three different equivalence classes:
– Equivalence Class 1: Fewer than 1 record
– Equivalence Class 2: Between 1 and 16,383 records
– Equivalence Class 3: More than 16,383 records

Nov. 17, 2005 27

Database Example (contd)

Test case 1: 0 records – member of equivalence class 1 and adjacent to boundary value
Test case 2: 1 record – boundary value
Test case 3: 2 records – adjacent to boundary value
Test case 4: 723 records – member of equivalence class 2

Nov. 17, 2005 28

Database Example (contd)

Test case 5: 16,382 records – adjacent to boundary value
Test case 6: 16,383 records – boundary value
Test case 7: 16,384 records – member of equivalence class 3 and adjacent to boundary value
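
The seven test cases above can be driven by a short harness. A minimal C++ sketch follows (isValidRecordCount is a made-up stand-in for the database's real input check, which the slides do not show; the expected verdicts come from the equivalence classes above):

#include <iostream>
#include <vector>
#include <utility>

// Hypothetical predicate under test: the specification accepts
// between 1 and 16,383 records (equivalence class 2).
bool isValidRecordCount(long n) { return n >= 1 && n <= 16383; }

int main() {
    // Test cases 1-7: boundary values plus one interior member of each
    // equivalence class, paired with the expected verdict.
    std::vector<std::pair<long, bool>> cases = {
        {0, false}, {1, true}, {2, true}, {723, true},
        {16382, true}, {16383, true}, {16384, false}
    };
    for (const auto& c : cases)
        std::cout << c.first
                  << (isValidRecordCount(c.first) == c.second ? " ok" : " FAIL")
                  << "\n";
}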

Nov. 17, 2005 29

Equivalence Testing of Output Specifications

We also need to perform equivalence testing of the output specifications

Example: In 2003, the minimum Social Security (OASDI) deduction from any one paycheck was $0.00, and the maximum was $5,394.00

– Test cases must include input data that should result in deductions of exactly $0.00 and exactly $5,394.00

– Also, test data that might result in deductions of less than $0.00 or more than $5,394.00

Nov. 17, 2005 30

Overall Strategy

Equivalence classes together with boundary value analysis to test both input specifications and output specifications
– This approach generates a small set of test data with the potential of uncovering a large number of faults

Nov. 17, 2005 31

White-Box Testing

Statement coverage criterion
Branch coverage criterion
Data flow coverage criterion
Path coverage criterion

Nov. 17, 2005 32

Statement Coverage

while (…) {
    …
    while (…) {
        …
        break;
    }
    …
    break;
}

Nov. 17, 2005 33

Branch Coverage

if (StdRec != null)
    StdRec.name = arg[1];
…
write (StdRec.name)   // fails when StdRec is null; branch coverage forces a test of the false branch
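
A runnable sketch of the same situation (Student and setAndPrint are invented names that mirror the fragment above; they are not part of the original slides):

#include <iostream>
#include <string>

struct Student { std::string name; };

// A single test with rec != nullptr executes every statement (statement
// coverage), but only branch coverage forces a test with rec == nullptr,
// which exposes the failure in the last line.
void setAndPrint(Student* rec, const std::string& arg) {
    if (rec != nullptr)
        rec->name = arg;
    std::cout << rec->name << "\n";   // null dereference when rec == nullptr
}

int main() {
    Student s;
    setAndPrint(&s, "alice");     // enough for statement coverage; misses the fault
    setAndPrint(nullptr, "bob");  // required for branch coverage; reveals the fault
}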

Nov. 17, 2005 34

Path Coverage

read (x, y);
if x > 0
    z = 1;
else
    z = 0;
if y < 0
    write (z)
else
    write (y / z);

x = 1, y = 1 -> 1

x = -1, y = -1 -> 0

x = 1, y = -1 -> 1

x = -1, y = 1 -> failure
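
The same fragment as a runnable C++ sketch (a direct transcription of the pseudocode above; the four inputs exercise all four paths, and x = -1, y = 1 reaches the division by zero):

#include <iostream>

int main() {
    int x, y, z;
    std::cin >> x >> y;              // read (x, y)
    if (x > 0) z = 1; else z = 0;
    if (y < 0)
        std::cout << z << "\n";
    else
        std::cout << y / z << "\n";  // fails when z == 0, i.e. x <= 0 and y >= 0
}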


Nov. 17, 2005 35

Data Flow

Data-flow graph: defined on the control flow graph by defining sets of variables DEF(n), C-USE(n), and P-USE(n) for each node n.
Variable x is in DEF(n) if
– n is the start node and x is a global, parameter, or static local variable,
– x is declared in basic block n with an initializer, or
– x is assigned in basic block n with the =, op=, ++, or -- operator.

Nov. 17, 2005 36

Data flow Testing

read (x, y);
if x > 0
    z = 1;
else
    z = 0;
if y < 0
    write (z)
else
    write (y / z);

Control-flow graph nodes: n1: read(x, y); n2: if x > 0; n3: z = 1; n4: z = 0; n5: if y < 0; n6: write(z); n7: write(y / z).

DEF(n1) = {x, y}
P-USE(n2) = {x}
DEF(n3) = {z}   DEF(n4) = {z}
P-USE(n5) = {y}
C-USE(n6) = {z}   C-USE(n7) = {y, z}

Definition-use pairs:
x: (n1, n2)
y: (n1, n5), (n1, n7)
z: (n3, n6), (n3, n7), (n4, n6), (n4, n7)

Nov. 17, 2005 37

C-Use

Variable x is in C-USE(n) if x occurs as a C-use expression in basic block n, i.e. as
– a procedure argument,
– an initializer in a declaration,
– a return value in a return statement,
– the second operand of =,
– either operand of op=,
– the operand of ++, --, or * (dereference), or
– the first operand of . or ->.

Nov. 17, 2005 38

P-Use

Variable x is in P-USE(n) if x occurs as a P-use expression in basic block n, i.e.
– as the conditional expression in an if, for, while, do, or switch statement, or
– as the first operand of the conditional expression operator (?:), the logical AND operator (&&), or the logical OR operator (||).

Nov. 17, 2005 39

C-Use Coverage

A C-Use is a variable x and the set of all paths in the data-flow graph from node na to node nb such that
– x is in DEF(na),
– x is not in DEF(ni) for any other node ni on the paths (definition-clear paths), and
– x is in C-USE(nb).
A C-Use is covered by a set of tests if at least one of the paths in the C-Use is executed when the test is run.

Nov. 17, 2005 40

P-Use Coverage

A P-Use is a variable x and the set of all paths in the data-flow graph from node na to node nb such that
– x is in DEF(na),
– x is not in DEF(ni) for any other node ni on the paths (definition-clear paths), and
– x is in P-USE(nb).
A P-Use is covered by a set of tests if at least one of the paths in the P-Use is executed when the test is run.
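
Relating these criteria to the example on the earlier data-flow slide: the du-pairs of z are (n3, n6), (n3, n7), (n4, n6), (n4, n7), and the test inputs (1, -1), (1, 1), (-1, -1), (-1, 1) for (x, y) cover them in that order. The same four tests also cover the p-uses of x at n2 and of y at n5 and the c-use of y at n7, so this small test set achieves all-uses coverage for the fragment (and the last input additionally exposes the division-by-zero failure).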

Nov. 17, 2005 41

Subsumption hierarchy of the coverage criteria:

all-paths → all-du-paths → all-uses
all-uses → all-c-uses/some-p-uses → all-c-uses
all-uses → all-p-uses/some-c-uses → all-defs
all-p-uses/some-c-uses → all-p-uses → branch → statement

(A → B means criterion A subsumes criterion B.)

Nov. 17, 2005 42

Complexity of White-Box Testing

Branch coverage:
– McCabe’s cyclomatic complexity: CC(G) = #E - #V + 2P, where G = (V, E) is the flow graph and P is the number of connected components.
All-defs: M + I * V
All-p-uses, all-c-uses/some-p-uses, all-p-uses/some-c-uses, all-uses: N^2
All-du-paths, all-paths: 2^N, where N is the number of conditional statements.
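
As a quick check of the cyclomatic-complexity formula on the earlier read/write example: its flow graph has the seven nodes n1..n7 plus a single exit node and nine edges, so CC = 9 - 8 + 2(1) = 3, which matches the usual shortcut of “number of predicates + 1” (the two predicates are x > 0 and y < 0). At most three basis paths are therefore needed to achieve branch coverage of that fragment.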

Nov. 17, 2005 43

14.15 Comparison of Unit-Testing Techniques

Experiments comparing
– Black-box testing
– Glass-box testing
– Reviews
[Myers, 1978]: 59 highly experienced programmers
– All three methods were equally effective in finding faults
– Code inspections were less cost-effective
[Hwang, 1981]
– All three methods were equally effective

Nov. 17, 2005 44

Comparison of Unit-Testing Techniques (contd)

[Basili and Selby, 1987] 42 advanced students in two groups, 32 professional programmers

Advanced students, group 1– No significant difference between the three methods

Advanced students, group 2– Code reading and black-box testing were equally good– Both outperformed glass-box testing

Professional programmers– Code reading detected more faults– Code reading had a faster fault detection rate

Nov. 17, 2005 45

Mutation Testing

A program under test is seeded with a single error to produce a ``mutant'' program.

A test covers a mutant if the output of the mutant and the program under test differ for that test input.

The mutation coverage measure for a test set is the ratio of mutants covered to total mutants.
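
A minimal sketch of the idea (max2 and its mutant are made up for illustration; they are not taken from the slides):

#include <iostream>

int max2(int a, int b)        { return a > b ? a : b; }   // program under test
int max2_mutant(int a, int b) { return a < b ? a : b; }   // seeded fault: > replaced by <

int main() {
    // (4, 4): original and mutant both print 4 -> this test does not cover the mutant.
    std::cout << max2(4, 4) << " " << max2_mutant(4, 4) << "\n";
    // (3, 5): original prints 5, mutant prints 3 -> this test covers (kills) the mutant.
    std::cout << max2(3, 5) << " " << max2_mutant(3, 5) << "\n";
}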

Nov. 17, 2005 46

Debugging: Program Slicing Approach

Static slicing: decomposes a program by statically analyzing the data flow and control flow of the program.
– A static program slice for a given variable at a given statement contains all the executable statements that could influence the value of that variable at the given statement.
– The exact execution path for a given input is a subset of the static program slice with respect to the output variables at the given checkpoint.
– Focus is an automatic debugging tool based on static program slicing to locate bugs.
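
A small illustration of a static slice (this fragment is invented for the example; it is not output of the Focus tool): the slice with respect to the variable area at the final output statement contains only the statements that could influence area's value there.

#include <iostream>

int main() {
    int w, h;
    std::cin >> w >> h;          // in the slice: area depends on w and h
    int p = 2 * (w + h);         // not in the slice: p never flows into area
    int area = w * h;            // in the slice
    std::cout << p << "\n";      // not in the slice
    std::cout << area << "\n";   // slicing criterion: (this statement, area)
}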

Nov. 17, 2005 47

Dynamic Slicing

Dynamic data slice: a dynamic data slice with respect to a given expression, location, and test case is the set of all assignments whose computations have propagated into the current value of the given expression at the given location.

Dynamic control slice: a dynamic control slice with respect to a given location and test case is the set of all predicates that enclose the given location.

Nov. 17, 2005 48

Object-Oriented Testing Process

Unit testing - class
Integration testing - cluster
System testing - program

Nov. 17, 2005 49

Class Testing

Inheritance
Polymorphism
Sequence

Nov. 17, 2005 50

Class Testing Techniques

Testing inheritance
Testing polymorphism
State-oriented testing
Data flow testing
Function dependence class testing

Nov. 17, 2005 51

Inheritance

A subclass may re-define its inherited functions and other functions may be affected by the re-defined functions.

When this subclass is tested, which functions need to be re-tested?

class foo {
    int local_var;
    ...
    virtual int f1() { return 1; }   // virtual, so the redefinition below takes effect
    int f2() { return 1 / f1(); }
};

class foo_child : public foo {       // child class of foo
    int f1() { return 0; }           // redefined: the inherited f2 now computes 1/0
};

Nov. 17, 2005 52

Testing Inheritance

``Incremental testing of object-oriented class structures.'' Harrold et al., (1992)

New methods: complete testing Recursive methods: limited testing Redefined methods: reuse test scripts

Nov. 17, 2005 53

Polymorphism

An object may be bound to different classes during the run time.

Is it necessary to test all the possible bindings?

// beginning of function foo
...
P1 p;
P2 c;
...
return (c.f1() / p.f1());
// end of function foo

Nov. 17, 2005 54

Testing Polymorphism

``Testing the Polymorphic Interactions between Classes.” McDaniel and McGregor (1994) Clemson University

“Coupling-based Testing of OO Programs.” Roger Alexander and Jeff Offutt, Springer's Journal of Universal Computer Science: Special Issue on Breakthroughs and Challenges in Software Engineering (invited), 10(4):391-427, April 2004.

“ Fragment Class Analysis for Testing of Polymorphism in Java Software.” Atanas Rountev, Ana Milanova, and Barbara G. Ryder, IEEE TRANSACTIONS ON SOFTWARE ENGINEERING, VOL. 30, NO. 6, JUNE 2004

Nov. 17, 2005 55

State-Oriented Testing

``The state-based testing of object-oriented programs,'' 1992, C. D. Turner and D. J. Robson

``On Object State Testing,'' 1993, Kung et al.

``The testgraph methodology: Automated testing of collection classes,'' 1995, Hoffman and Strooper

The FREE approach: Binder http://www.rbsc.com/pages/Free.html

Nov. 17, 2005 56

Data Flow Testing

``Performing Data Flow Testing on Class,'' 1994, Harrold and Rothermel.

``Object-oriented data flow testing,'' 1995, Kung et al.

Nov. 17, 2005 57

Function Dependence Relationship

A function uses a variable x if the value of x is referenced in a computation expression or used to decide a predicate.

A function defines a variable x if the value of x is assigned when the function is invoked.

A variable x uses a variable y if the value of x is obtained from the value of y (and possibly others); x is affected when the value of y is changed.

Nov. 17, 2005 58

Function Dependence Relationship

f1 depends on f2 if (see the sketch below):
– f1 uses a variable x that is defined in f2,
– f1 calls f2 and uses the return value of f2,
– f1 is called by f2 and uses a parameter p that is defined in f2, or
– f1 uses a variable x, and x uses a variable y which is defined in f2.
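
A minimal sketch of the first two cases (Account, deposit, fee, and net are invented names used only for illustration):

class Account {
    double balance = 0;
    double fee() { return 1.0; }
public:
    void deposit(double amount) { balance += amount; }   // f2: defines balance
    double net() { return balance - fee(); }             // f1: uses balance (defined in deposit)
                                                          //     and the return value of fee()
};

int main() {
    Account a;
    a.deposit(10.0);
    return a.net() == 9.0 ? 0 : 1;   // net() depends on deposit() and fee():
                                     // if either changes, net() must be re-tested
}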

Nov. 17, 2005 59

Object-Oriented Pitfalls

Type I faults– inheritance/polymorphism faults

Type II faults– object management faults

Type III faults– traditional faults

Nov. 17, 2005 60

Object-Oriented Pitfalls - Type I

class foo {
    int local_var;
    ...
    virtual int f1() { return 1; }   // virtual, so the redefinition below takes effect
    int f2() { return 1 / f1(); }
};

class foo_derived : public foo {
    int f1() { return 0; }           // redefined: the inherited f2 now computes 1/0
};

main() {
    int x, y;
    foo *Obj;
    cin >> x >> y;
    if ( x > 0 )
        Obj = new foo();
    else
        Obj = new foo_derived();
    if ( y > 0 )
        cout << Obj->f2();           // fails (division by zero) when Obj is a foo_derived
    else
        cout << Obj->f1();
}

Nov. 17, 2005 61

Object-Oriented Pitfalls - Type II

class foo {
    int *m_data;
    foo() {
        m_data = new int;
        *m_data = 0;
    }
    ~foo() { delete m_data; }
    print() { cout << *m_data; }
    inc() { (*m_data)++; }
};

main() {
L1: foo *obj1 = new foo();
L2: obj1->inc();
L3: foo *obj2 = new foo();
L4: *obj2 = *obj1;          // default assignment copies m_data: obj1 and obj2 now share it
    if (P1)
L5:     obj2->inc();
L6: obj2->print();
    if (P2)
L7:     obj2->~foo();       // frees the shared m_data
    if (P3)
L8:     obj1->print();      // uses freed memory
L9: obj1->~foo();           // deletes m_data a second time
}

Nov. 17, 2005 62

Empirical Study - Applications

System A: GUI
System B: Data logging system
System C: Network communication program

Nov. 17, 2005 63

Fault Summary

System     System A   System B   System C
LOC        5.6k       21.3k      16.0k
NOF        35         80         85
Type I     5          15         10
Type II    6          13         7
Type III   24         52         68
OOF (%)    31%        35%        20%

Nov. 17, 2005 64

Functional Testing

System     System A   System B   System C
NOT        100        383        326
Type I     1(4)       4(11)      5(5)
Type II    1(5)       6(7)       3(4)
Type III   15(9)      32(20)     44(24)
OOF (%)    2(18%)     10(35%)    8(47%)
NOF        17(48%)    42(52%)    52(61%)

Nov. 17, 2005 65

Statement Testing

System     System A   System B   System C
NOT        46         55         52
Type I     0(4)       1(10)      0(5)
Type II    1(4)       1(6)       1(3)
Type III   5(4)       6(14)      4(20)
OOF (%)    1(11%)     2(11%)     1(11%)
NOF        6(33%)     8(21%)     5(15%)

Nov. 17, 2005 66

Branch Testing

System     System A   System B   System C
NOT        23         41         33
Type I     0(4)       1(9)       1(4)
Type II    1(3)       1(5)       0(3)
Type III   1(3)       6(8)       5(15)
OOF (%)    1(11%)     2(11%)     1(11%)
NOF        2(11%)     8(21%)     6(18%)

Nov. 17, 2005 67

Code-Based Testing

System     System A   System B   System C
NOT        69         96         85
OO faults  2(22%)     2(22%)     2(22%)
NOF        8(44%)     12(42%)    13(33%)

Nov. 17, 2005 68

All-States

System     System A   System B   System C
NOT        21         37         38
Type I     1(3)       3(7)       2(3)
Type II    0(5)       2(4)       1(2)
Type III   4(5)       8(6)       10(10)
OOF (%)    1(11%)     5(28%)     3(33%)
NOF        5(28%)     13(34%)    13(40%)

Nov. 17, 2005 69

All-Transitions

System     System A   System B   System C
NOT        43         79         75
Type I     1(2)       2(5)       0(3)
Type II    2(3)       1(3)       1(1)
Type III   4(1)       6(0)       8(2)
OOF (%)    3(33%)     3(17%)     1(11%)
NOF        7(39%)     9(24%)     9(27%)

Nov. 17, 2005 70

State-Based Testing

System     System A   System B   System C
NOT        64         116        113
OOF (%)    4(44%)     8(44%)     4(44%)
NOF        12(66%)    22(58%)    22(67%)

Nov. 17, 2005 71

Object-Flow Based Testing

Object
– an object is an instance of a class
Define
– an object is defined if its state is initiated or changed
Use
– an object is used if one of its data members is referenced

Nov. 17, 2005 72

Object-Flow Coverage Criteria

All-du-pairs
– At least one definition-clear path from every definition of every object to every use of that definition must be exercised under some test.
All-bindings
– Every possible binding of every object must be exercised at least once when the object is defined or used.

Nov. 17, 2005 73

Weak Object-Flow Testing

An object is defined when
– the constructor of the object is invoked;
– a data member is defined; or
– a method that initiates/modifies the data member(s) of the object is invoked.

Nov. 17, 2005 74

Object-Oriented Pitfalls - Type I

class foo {
    int local_var;
    ...
    virtual int f1() { return 1; }   // virtual, so the redefinition below takes effect
    int f2() { return 1 / f1(); }
};

class foo_derived : public foo {
    int f1() { return 0; }           // redefined: the inherited f2 now computes 1/0
};

main() {
    int x, y;
    foo *Obj;
    cin >> x >> y;
    if ( x > 0 )
        Obj = new foo();
    else
        Obj = new foo_derived();
    if ( y > 0 )
        cout << Obj->f2();           // fails (division by zero) when Obj is a foo_derived
    else
        cout << Obj->f1();
}

Nov. 17, 2005 75

Object-Oriented Pitfalls - Type II

class foo {
    int *m_data;
    foo() {
        m_data = new int;
        *m_data = 0;
    }
    ~foo() { delete m_data; }
    print() { cout << *m_data; }
    inc() { (*m_data)++; }
};

main() {
L1: foo *obj1 = new foo();
L2: obj1->inc();
L3: foo *obj2 = new foo();
L4: *obj2 = *obj1;          // default assignment copies m_data: obj1 and obj2 now share it
    if (P1)
L5:     obj2->inc();
    if (P2)
L6:     obj2->~foo();       // frees the shared m_data
    if (P3)
L7:     obj1->print();      // uses freed memory
    obj1->~foo();           // deletes m_data a second time
}

Nov. 17, 2005 76

All-DU-Pairs

System     System A   System B   System C
NOT        54         89         65
Type I     2(2)       6(5)       1(4)
Type II    4(1)       6(1)       4(0)
Type III   3(6)       7(13)      3(21)
OOF (%)    6(67%)     12(67%)    5(56%)
NOF        9(50%)     19(50%)    8(24%)

Nov. 17, 2005 77

All-Bindings

System     System A   System B   System C
NOT        21         37         32
Type I     1(1)       3(2)       3(1)
Type II    1(0)       1(0)       0(0)
Type III   2(4)       3(10)      4(17)
OOF (%)    2(22%)     4(22%)     3(33%)
NOF        4(22%)     7(18%)     7(21%)

Nov. 17, 2005 78

Object-Flow Based Testing I

System     System A   System B   System C
NOT        75         126        97
OOF        8(1)       16(2)      8(1)
NOF        13(5)      26(12)     15(18)

Nov. 17, 2005 79

Object-Flow Based Testing I’

System     System A   System B   System C
NOT        115        155        145
OOF        8(1)       16(2)      8(1)
NOF        16(2)      31(7)      21(12)

Nov. 17, 2005 80

Object-Flow Based Testing II

System     System A   System B   System C
NOT        187        295        247
OOF (%)    9(0)       18(0)      9(0)
NOF        17(1)      37(1)      28(5)

Nov. 17, 2005 81

Object-Flow Based Testing II’

System     System A   System B   System C
NOT        264        391        332
OOF        9(0)       18(0)      9(0)
NOF        17(1)      37(1)      29(4)

Nov. 17, 2005 82

Integrated Testing

Functional testing
Code-based testing
Object-flow based testing I
State-based testing

Nov. 17, 2005 83

Integrated Approach

System     System A   System B   System C
NOT        126        131        167
OOF        9(0)       18(0)      9(0)
NOF        17(1)      36(2)      29(4)