Model Based Software Testing Preliminaries
Aditya P. Mathur, Purdue University, Fall 2005
Last update: August 18, 2005

Page 1:

Model Based Software Testing Preliminaries

Aditya P. Mathur, Purdue University, Fall 2005

Last update: August 18, 2005

Page 2:

Learning Objectives: This course

Methods for test generation
Methods for test assessment
The coverage principle and the saturation effect
Software test process

Tools:

AETG: Test generation
xSUDS: Test assessment, enhancement, minimization, debugging
CodeTest: Test assessment, performance monitoring
VisualTest: GUI testing
Test RealTime: Test assessment, performance monitoring
Ballista: Robustness testing

Page 3:

Learning Objectives

How and why does testing improve our confidence in program correctness?

What is coverage and what role does it play in testing?

What are the different types of testing?

What is model-based testing? How does it differ from (formal) verification?

What are the formalisms for specification and design used as source for test and oracle generation?

Page 4:

Testing: Preliminaries

What is testing?

The act of checking if a part or a product performs as expected.

Why test?

Gain confidence in the correctness of a part or a product.

Check if there are any errors in a part or a product.

Page 5:

What to test? During the software lifecycle, several products are generated.

Examples:

Requirements document

Design document

Software subsystems

Software system

Page 6:

Test all! Each of these products needs testing.

Methods for testing various products are different.

Examples:

Test a requirements document using scenario construction and simulation.

Test a design document using simulation. Test a subsystem using functional testing.

Page 7:

What is our focus? We focus on testing programs using formal models.

Programs may be subsystems or complete systems.

These are written in a formal programming language.

There is a large collection of techniques and tools to test programs.

Page 8:

An Abstraction of the MBT Process

Sources of tests: raw requirements, formal specifications, finite state machines, state charts, sequence diagrams, code, etc.

(Diagram) The process forms a cycle: develop/add tests from the source documents; run the tests and observe the behavior; debug and remove defects (modifying the documents as needed); then ask whether the test set is adequate. If no, develop more tests; if yes, proceed to the next step.

Page 9:

A Few Terms

Program:

A collection of functions, as in C, or a collection of classes, as in Java.

Specification:

Description of requirements for a program. This might be formal or informal.

Page 10:

Few Terms (contd.)

Test case or test input

A set of values of input variables of a program. Values of environment variables are also included.

Test set

Set of test inputs

Program execution

Execution of a program on a test input.

Page 11:

Few Terms (contd.)

Oracle

A function that determines whether or not the result of executing a program under test is as per the program's specification.

Verification

Human examination of a product, such as a design document, code, a user manual, etc., to check for correctness. Inspections and walkthroughs are the generally used methods for verification.

Validation

The process of evaluating a system or a subsystem to determine whether or not it satisfies the specified requirements.

Page 12:

Correctness

Let P be a program (say, an integer sort program).

Let S denote the specification for P.

For sort, let S be the specification given on the next slide.

Page 13:

Sample Specification

P takes as input an integer N>0 and a sequence of N integers, called elements of the sequence.

Let K denote any element of this sequence; 0 <= K <= (e-1) for some e.

P sorts the input sequence in descending order and prints the sorted sequence.

Page 14:

Correctness again

P is considered correct with respect to a specification S if and only if:

For each valid input the output of P is in accordance with the specification S.

Page 15:

Errors, defects, faults

Error: A mistake made by a programmer

Example: Misunderstood the requirements.

Defect/fault: Manifestation of an error in a program.

Example: Incorrect code: if (a<b) {foo(a,b);}
Correct code: if (a>b) {foo(a,b);}

Page 16:

Failure

Incorrect program behavior due to a fault in the program.

Failure can be determined only with respect to a set of requirement specifications.

A necessary condition for a failure to occur is that execution of the program forces the erroneous portion of the program to be executed. What is the sufficiency condition?
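To make the distinction concrete, here is a minimal sketch (hypothetical code, not from the course) in which the faulty statement executes on every input but a failure is observed only on some inputs:

    # Fault: "<" is used where the specification requires ">".
    def max_of_two(a, b):          # specification: return the larger of a and b
        if a < b:                  # faulty condition; should be a > b
            return a
        return b

    print(max_of_two(2, 2))   # prints 2: the fault executes, yet the output is correct (no failure)
    print(max_of_two(3, 5))   # prints 3 instead of 5: the fault executes and a failure is observed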

Page 17:

Errors and failure

(Diagram) Inputs flow into the Program, which produces Outputs. Error-revealing inputs cause failure; erroneous outputs indicate failure.

Page 18:

Debugging

Suppose that a failure is detected during the testing of P.

The process of finding and removing the cause of this failure is known as debugging.

The word bug is slang for fault.

Testing usually leads to debugging.

Testing and debugging usually happen in a cycle.

Page 19:

Test-debug Cycle

(Diagram) Test the program. If a failure is observed, debug and test again. If no failure is observed, ask whether testing is complete: if yes, done; if no, continue testing.

Page 20:

Testing and Code Inspection

Code inspection is a technique whereby the source code is inspected for possible errors.

Code inspection is generally considered complementary to testing. Neither is more important than the other.

One is not likely to replace testing by code inspection or by verification.

Page 21:

Testing for correctness?

Identify the input domain of P.

Execute P against each element of the input domain.

For each execution of P, check if P generates the correct output as per its specification S.
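As an illustration only (the course does not prescribe any particular harness), the sketch below exhaustively tests a stand-in sort program against an oracle derived from S, for a deliberately tiny input domain; program_under_test and oracle are hypothetical names:

    from itertools import product

    def program_under_test(seq):
        return sorted(seq, reverse=True)      # stand-in for the program P

    def oracle(seq, output):                  # encodes the specification S
        return output == sorted(seq, reverse=True)

    e, max_n = 3, 2                           # elements in 0..e-1, sequence lengths 0..max_n
    for n in range(max_n + 1):
        for seq in product(range(e), repeat=n):
            out = program_under_test(list(seq))
            assert oracle(list(seq), out), f"failure on input {list(seq)}"
    print("exhaustive testing over", sum(e**i for i in range(max_n + 1)), "inputs passed")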

Page 22:

What is an input domain ? Input domain of a program P is the set of all valid inputs

that P can expect.

The size of an input domain is the number of elements in it.

An input domain could be finite or infinite.

Finite input domains might be very large!

Page 23:

Identifying the input domain

For the sort program:

N: size of the sequence, K: each element of the sequence.

Example: For N<3, e=3, some sequences in the input domain are:

[0]: A sequence of size 1 (N=1)

[2 1]: A sequence of size 2 (N=2).

[ ]: An empty sequence (N=0).

Page 24:

Size of an input domain

Suppose that 0 <= N <= 10^6 and 0 <= K <= (e-1) for some e.

The size of the input domain is the number of all sequences of size 0, 1, 2, and so on. It can be computed as:

Size = sum of e^i for i = 0 to 10^6

Can you derive this formula?
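A one-line computation of this sum, shown here for small bounds so that it actually runs (the function name is ours, not the slides'):

    def domain_size(e, n_max):
        # number of sequences of length 0 .. n_max over an alphabet of e values
        return sum(e**i for i in range(n_max + 1))

    print(domain_size(3, 2))    # 1 + 3 + 9 = 13, the example used on a later slide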

Page 25:

Testing for correctness? Sorry!

To test for correctness, P needs to be executed on all inputs.

For our example, it would take an exorbitant amount of time to execute the sort program on all inputs, even on the most powerful computers of today!

Page 26:

Exhaustive Testing

This form of testing is also known as exhaustive testing as we execute P on all elements of the input domain.

For most programs exhaustive testing is not feasible.

What is the alternative?

Page 27:

Formal Verification

Formal verification (for correctness) is different from testing for correctness.

There are techniques for formal verification of programs that we do not plan to discuss.

Page 28:

Partition Testing

In this form of testing the input domain is partitioned into a finite number of sub-domains.

P is then executed on a few elements of each sub-domain.

Let us return to the sort program.

Page 29:

Sub-domains

Suppose that 0<=N<=2 and e=3. The size of the input domain is:

sum of 3^i for i = 0 to 2, that is, 3^0 + 3^1 + 3^2 = 1 + 3 + 9 = 13.

We can divide the input domain into three sub-domains: N=0 (1 sequence), N=1 (3 sequences), and N=2 (9 sequences).

Page 30:

Fewer test inputs

Now sort can be tested on one element selected from each sub-domain.

For example, one set of three inputs is:
[ ]    The empty sequence, from sub-domain 1.
[2]    A sequence of size 1, from sub-domain 2.
[2 0]  A sequence of size 2, from sub-domain 3.

We have thus reduced the number of inputs used for testing from 13 to 3!
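A minimal sketch of this selection, assuming a hypothetical sort_descending implementation and using the three representatives named above:

    def sort_descending(seq):                 # hypothetical program under test
        return sorted(seq, reverse=True)

    sub_domains = {
        "N=0": [[]],                          # the empty sequence
        "N=1": [[2]],
        "N=2": [[2, 0]],
    }
    for name, tests in sub_domains.items():
        for t in tests:
            out = sort_descending(t)
            assert out == sorted(t, reverse=True), f"failure in sub-domain {name} on {t}"
    print("3 representative inputs passed (instead of all 13)")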

Page 31:

Confidence

Confidence is a measure of one’s belief in the correctness of the program.

Correctness is often not measured in binary terms: a correct or an incorrect program.

Instead, it is measured as the probability of correct operation of a program when used in various scenarios.

Page 32:

Measures of Confidence

Reliability: Probability that a program will function correctly in a given environment over a certain number of executions.

Test completeness: The extent to which a program has been tested and errors found have been removed.

Page 33:

Example: Increase in Confidence

We consider a non-programming example to illustrate what is meant by “increase in confidence.”

Example: A rectangular field has been prepared to certain specifications.

One item in the specifications is:

“There should be no stones remaining in the field.”

Page 34:

Rectangular Field

(Figure) Search for stones inside a rectangular field of length L (along the X axis) and width W (along the Y axis).

Page 35:

Testing the Rectangular Field

The field has been prepared and our task is to test it to make sure that it has no stones.

How should we organize our search?

Page 36:

Partitioning the field

We divide the entire field into smaller search rectangles.

The length and breadth of each search rectangle are one half the length and breadth, respectively, of the smallest stone one expects to find in the field.

Page 37:

Partitioning into search rectangles

(Figure) The field is drawn as a grid of search rectangles, with Length along the X axis and Width along the Y axis. The figure marks a stone, another stone, two stones inside one rectangle, and a tiny stone.

Page 38:

Input Domain

Input domain is the set of all possible valid inputs to the search process.

In our example this is the set of all points in the field. Thus, the input domain is infinite!

To reduce the size of the input domain we partition the field into finite size rectangles.

Page 39:

Rectangle size

The length and breadth of each search rectangle are one half those of the smallest stone.

This is an attempt to ensure that each stone covers at least one rectangle. (Is this always true?)

Page 40:

Constraints

Testing must be completed in less than H hours.

Any stone found during testing is removed.

Upon completion of testing the probability of finding a stone must be less than p.

Page 41:

Number of search rectangles

Let
L: length of the field
W: width of the field
l: expected length of the smallest stone
w: expected width of the smallest stone

Size of each rectangle: l/2 x w/2

Number of search rectangles: R = (L/l) * (W/w) * 4

Assume that L/l and W/w are integers.

Page 42:

Time to Test

Let t be the time to peek inside one search rectangle. No rectangle is examined more than once.

Let o be the overhead incurred in moving from one search rectangle to another.

Total time to search T=R*t+(R-1)*o

Testing with R rectangles is feasible only if T<H.
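The two formulas can be turned into a few lines of code; the numeric values below are made up purely for illustration:

    def num_rectangles(L, W, l, w):
        assert L % l == 0 and W % w == 0          # slide's assumption: L/l and W/w are integers
        return (L // l) * (W // w) * 4

    def time_to_test(R, t, o):
        return R * t + (R - 1) * o

    R = num_rectangles(L=100, W=60, l=5, w=3)     # 20 * 20 * 4 = 1600 rectangles
    T = time_to_test(R, t=2, o=1)                 # say, seconds per peek and per move
    H = 3600                                      # made-up time budget
    print(R, T, "feasible" if T < H else "infeasible (T >= H)")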

Page 43:

Partitioning the input domain

This set consists of all search rectangles (R).

Number of partitions of the input domain is finite (=R).

However, if T>H then the number of partitions is too large and scanning each rectangle once is infeasible.

What should we do in such a situation?

Page 44:

Option 1: Do a limited search

Of the R search rectangles we examine only r where r is such that (t*r+o*(r-1)) < H.

This limited search will satisfy the time constraint.

Will it satisfy the probability constraint?

Question: What do the probability and time constraints correspond to in a commercial test?

Page 45:

Distribution of Stones

To satisfy the probability constraint we must scan enough search rectangles so that the probability of finding a stone, after testing, remains less than p.

Let us assume that there are si stones remaining after i test cycles, with si < Ri.

Page 46:

Distribution of Stones

There are Ri search rectangles remaining after i test cycles.

Stones are distributed uniformly over the field.

An estimate of the probability of finding a stone in a randomly selected remaining search rectangle is pi = si / Ri .

Page 47:

Probability Constraint

We will stop looking into rectangles if pi <= p.

Can we really apply this test method in practice?
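The following simulation sketch (ours, not the course's) ties the last few slides together: it scans randomly chosen rectangles, removes any stone found, and stops when either the estimated probability pi = si/Ri drops to p or the time budget H would be exceeded. Note that the simulation cheats by knowing the true number of stones, which, as the next slide points out, is unknown in practice.

    import random

    def scan_field(R=1600, stones=12, p=0.002, t=2, o=1, H=3600):
        field = [False] * R
        for idx in random.sample(range(R), stones):   # one stone per chosen rectangle
            field[idx] = True
        remaining, found, time_used = list(range(R)), 0, 0
        random.shuffle(remaining)
        while remaining:
            s_i = stones - found                      # stones still in the field
            if s_i / len(remaining) <= p:             # probability constraint satisfied
                break
            if time_used + t > H:                     # time constraint would be violated
                break
            r = remaining.pop()
            time_used += t + (o if remaining else 0)
            if field[r]:
                found += 1                            # stone found and removed
        return found, len(remaining), time_used

    print(scan_field())   # (stones found, rectangles left unscanned, time used)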

Page 48:

Confidence

Number of stones in the field is not known in advance.

Hence we cannot compute the probability of finding a stone after a certain number of rectangles have been examined.

The best we can do is to scan as many rectangles as we can and remove the stones found.

Page 49:

Coverage

Suppose that r rectangles have been scanned from a total of R. Then we say that the (rectangle) coverage is r/R.

After a rectangle has been scanned for a stone and any stone found has been removed, we say that the rectangle has been covered.

Page 50:

Coverage and Confidence

What happens when coverage increases?

As coverage increases (and stones found are removed) so does our confidence in a “stone-free” field.

In this example, when the coverage reaches 100%, (almost) all stones have been found and removed. Can you think of situations when this might not be true?

Page 51:

Option 2: Reduce number of partitions

If the number of rectangles to scan is too large, we can increase the size of a rectangle. This reduces the number of rectangles.

Increasing the size of a rectangle also implies that there might be more than one stone within a rectangle.

Is this good for a tester?

Page 52:

Rectangle Size

As a stone may now be smaller than a rectangle, detecting a stone inside a rectangle is not guaranteed.

Despite this fact our confidence in a “stone-free” field increases with coverage.

However, when the coverage reaches 100% we cannot guarantee a “stone-free” field.

Page 53:

Coverage vs. Confidence

(Graph) Confidence is plotted against coverage, both on a 0-to-1 scale. Confidence grows toward 1 as coverage approaches 1 (=100%), but reaching 100% coverage does not imply that the field is "stone-free".

Page 54:

Rectangle Size (again!)

p = probability of detecting a stone inside a rectangle, given that the stone is there.
t = time to complete a test.

(Graph) t and p are plotted against rectangle size: as rectangles go from small to large, both t and p decrease.

Page 55:

Analogy

Field: Program
Stone: Error
Scan a rectangle: Test program on one input
Remove stone: Remove error
Partition: Subset of input domain
Size of stone: Size of an error
Rectangle size: Size of a partition

Page 56:

Analogy (contd.)

The size of an error is the number of inputs in the input domain each of which will cause a failure due to that error.

(Figure) Within the input domain, the set of inputs that cause failure due to Error 1 is larger than the set that causes failure due to Error 2; Error 1 is larger than Error 2.

Does this imply that failures due to Error 1 will occur more frequently than those due to Error 2?

Page 57:

Confidence and Probability

Increase in coverage increases our confidence in a "stone-free" field.

It might not increase the probability that the field is "stone-free".

Important: The increase in confidence is NOT justified if detected stones are not guaranteed to be removed!

Page 58:

Types of Testing

(Diagram) Two bases for classification: the object under test, and the source of clues for test-input construction. All of the methods classified by source of clues can be applied to any object under test.

Page 59:

Testing: Based on Source of Test Inputs

Functional testing/specification testing/black-box testing/conformance testing: Clues for test input generation come from requirements.

White-box testing/coverage testing/code-based testing: Clues come from program text.

Page 60:

Testing: Based on Source of Test Inputs

Stress testing

Clues come from "load" requirements. For example, a telephone system must be able to handle 1000 calls over any 1-minute interval. What happens when the system is loaded or overloaded?

Page 61:

Testing: Based on Source of Test Inputs

Performance testing

Clues come from performance requirements. For example, each call must be processed in less than 5 seconds. Does the system process each call in less than 5 seconds?

Fault- or error-based testing

Clues come from the faults that are injected into the program text or are hypothesized to be in the program.

Page 62:

Testing: Based on Source of Test Inputs

Random testing

Clues come from requirements. Tests are generated randomly using these clues.

Robustness testing

Robustness is the degree to which a software component functions correctly in the presence of exceptional inputs or stressful environmental conditions. Clues come from requirements; the goal is to test a program under scenarios not stipulated in the requirements.

Page 63:

Testing: Based on Source of Test Inputs

OO testing Clues come from the requirements and the design of

an OO-program.

Protocol testing

Clues come from the specification of a protocol, as, for example, when testing a communication protocol.

Page 64:

Testing: Based on Item Under Test

Unit testing

Testing of a program unit. A unit is the smallest testable piece of a program. One or more units form a subsystem.

Subsystem testing

Testing of a subsystem. A subsystem is a collection of units that cooperate to provide a part of the system functionality.

Page 65:

Testing: Based on Item Under Test

Integration testing Testing of subsystems that are being integrated to

form a larger subsystem or a complete system.

System testing Testing of a complete system.

Page 66:

Testing: Based on Item Under Test

Regression testing

Test a subsystem or a system on a subset of the set of existing test inputs to check if it continues to function correctly after changes have been made to an older version.

And the list goes on and on!

Page 67:

Test input construction and objects under test

(Diagram) A matrix with the test object (unit, subsystem, system) along one axis and the source of clues for test inputs (requirements, code) along the other.

Page 68:

Combinatorial Design

Context: A telephone switch

Problem: Determine what inputs to use to test the switch.

Parameters and sample values:

Call Type Billing Access Status
Local     Caller  Loop   Available
Long Dist Collect ISDN   Busy
Intl.     800     PBX    Blocked

Total parameters: 4. Values for each parameter: 3.

Total number of scenarios: 3^4 = 81

Page 69:

Reducing the Input Space

Suppose that 81 tests are too many for the telephone switch under test.

An alternative is to select a default value for each parameter and then vary each parameter until all values are covered.

Page 70:

Test Plan with Default Parameter Values

Call Type Billing Access Status

Local Caller Loop Available

Long Dist Caller Loop Available

Intl. Caller Loop Available

Local Collect Loop Available

Local 800 Loop Available

Local Caller ISDN Available

Local Caller PBX Available

Local Caller Loop Busy

Local Caller Loop Blocked

Total inputs: 9

Coverage: 30 of the 54 pairwise interactions.

Page 71:

Another Test Plan

Call Type Billing Access Status

Local Collect PBX Busy
Long Dist 800 Loop Busy
Intl. Caller ISDN Busy
Local 800 ISDN Blocked
Long Dist Caller PBX Blocked
Intl. Collect Loop Blocked
Local Caller Loop Available
Long Dist Collect ISDN Available
Intl. 800 PBX Available

Total inputs: 9

Coverage: All pairwise interactions covered.
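A short script (ours) that enumerates the parameter pairs covered by each of the two plans reproduces the quoted coverage figures: 30 of 54 for the default-value plan and all 54 for this plan. Value names are abbreviated.

    from itertools import combinations

    default_plan = [
        ("Local", "Caller", "Loop", "Avail"), ("LongD", "Caller", "Loop", "Avail"),
        ("Intl", "Caller", "Loop", "Avail"),  ("Local", "Collect", "Loop", "Avail"),
        ("Local", "800", "Loop", "Avail"),    ("Local", "Caller", "ISDN", "Avail"),
        ("Local", "Caller", "PBX", "Avail"),  ("Local", "Caller", "Loop", "Busy"),
        ("Local", "Caller", "Loop", "Blocked"),
    ]
    pairwise_plan = [
        ("Local", "Collect", "PBX", "Busy"),   ("LongD", "800", "Loop", "Busy"),
        ("Intl", "Caller", "ISDN", "Busy"),    ("Local", "800", "ISDN", "Blocked"),
        ("LongD", "Caller", "PBX", "Blocked"), ("Intl", "Collect", "Loop", "Blocked"),
        ("Local", "Caller", "Loop", "Avail"),  ("LongD", "Collect", "ISDN", "Avail"),
        ("Intl", "800", "PBX", "Avail"),
    ]

    def covered_pairs(plan):
        # collect every (parameter-position pair, value pair) that some test exercises
        pairs = set()
        for test in plan:
            for (i, j) in combinations(range(4), 2):
                pairs.add((i, j, test[i], test[j]))
        return len(pairs)

    print(covered_pairs(default_plan), covered_pairs(pairwise_plan))   # 30 54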

Page 72:

Combinatorial Explosion

What if the program under test had 10 parameters, each with 3 values?

Total parameter combinations = 3^10

Number of tests using the default value method= ?

Number of pair-wise combinations = ?

Number of pair-wise combinations covered = ?

Page 73:

Answers to Questions

For k parameters, each with n possible values:

Tests with the default-value method = n + (n-1) x (k-1)

Pairwise combinations = (k x (k-1)/2) x n^2

Pairwise combinations covered = (k-1) + n x (k-1) + (n-1) x (k-1) x (k-1)

Later we shall discuss how to handle the combinatorial explosion.

Page 74:

Finite State Machines (FSMs)

A state machine is an abstract representation of actions taken by a program or anything else that functions!

It is specified as a quintuple (A, Q, q0, T, F):

A: a finite input alphabet
Q: a finite set of states
q0: the initial state, a member of Q

Page 75:

FSMs (contd.)

T: the state transitions, a mapping Q x A -> Q
F: a finite set of final states, F a subset of Q

Example: Here is a finite state machine that recognizes integers ending with a carriage return character.

A = {0,1,2,3,4,5,6,7,8,9, CR}
Q = {q0, q1, q2}
q0: initial state

Page 76:

FSMs (contd.)

T: {((q0,d),q1), ((q1,d),q1), ((q1,CR),q2)}
F: {q2}

A state diagram is an easier-to-understand specification of a state machine. For the above machine, the state diagram appears on the next slide.

Page 77:

State diagram

(Figure) q0 --d--> q1; q1 --d--> q1; q1 --CR--> q2. Here d denotes a digit. States are indicated by circles; the final state (q2) is indicated by concentric circles. State transitions are indicated by labeled arrows from one state to another (which could be the same state). Each label must be from the alphabet; a label is also known as an event.

Page 78:

State Diagram-Actions

(Figure) Transitions are now labeled x/y, where x is an element of the alphabet and y is an action:
q0 --d / set i to d--> q1
q1 --d / add 10*d to i--> q1
q1 --CR / output i--> q2

i is initialized to d when the machine moves from state q0 to q1. i is incremented by 10*d when the machine moves from q1 to q1. The current value of i is output when a CR is encountered.

Can you describe what this machine computes? Can you construct a regular expression that describes all strings recognized by this state machine?

Page 79:

State Machine: Languages

Each state machine recognizes a language.

The language recognized by a state machine is the set S of all strings such that: when any string s in S is input to the state machine

the machine goes through a sequence of transitions and ends up in the final state after having scanned all elements of s.

Testing state machines? Later!
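For concreteness, here is a minimal table-driven sketch (not from the slides) of the machine above, used as a recognizer for the language of digit strings followed by a carriage return:

    DIGITS = set("0123456789")

    def transition(state, symbol):
        # the mapping T from the earlier slide
        if state == "q0" and symbol in DIGITS:
            return "q1"
        if state == "q1" and symbol in DIGITS:
            return "q1"
        if state == "q1" and symbol == "\r":      # CR
            return "q2"
        return None                               # no transition defined: reject

    def accepts(string, q0="q0", final={"q2"}):
        state = q0
        for symbol in string:
            state = transition(state, symbol)
            if state is None:
                return False
        return state in final

    print(accepts("123\r"), accepts("\r"), accepts("12"))   # True False False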

Page 80:

The Unified Modeling Language

Unified Modeling Language (UML) is a notation to express requirements and designs of software systems.

Requirements are represented using:

a collection of use cases, each use case being a representative of a collection of scenarios.

a collection of system sequence diagrams that explain the interaction between a user and the application for each use case.

Page 81:

UML: Design Representation

Design of an application is represented in UML by a collection of diagrams of the following types (not an exhaustive list):

Sequence (or collaboration) diagrams depict the sequence of actions initiated due to an external event. This sequence is depicted in terms of messages sent from one object to another.

Statecharts depict the relationship amongst various states of an object.

Class diagrams capture the relationships amongst classes.

Page 82:

Use Case Diagram (partial): ECG Monitor

(Figure) Actors: Physician, Remote Display, Service Personnel. Use cases: Display waveforms, Capture waveform, Process alarms, Calibrate sensors. A <<uses>> relationship appears between two of the use cases.

Page 83:

A Sequence Diagram (partial)

(Figure) Participants: Passenger 1, Elevator Controller, Passenger 2. Passenger 1 is on floor 6 and Passenger 2 on floor 2. Messages shown include: Passenger 1 requests an UP elevator; the controller lights the UP indicator; Passenger 2 requests a DN elevator; the controller queues the request and lights the DN indicator; the door closes, the elevator moves and passes floor 2; it arrives at floor 6 and the door opens; Passenger 1 requests floor 8; the controller lights the floor button.

One sequence diagram is drawn for each use case.

Page 84:

What else can one indicate on a sequence diagram?

Broadcast messages sent by one object and received by more than one.

Timing marks to show timing constraints between two events.

Event identifiers can be attached to an event; an ID can be referenced in other parts of the diagram.

State marks are placed on the object timing line to indicate state changes for that object.

Page 85:

A State Diagram: Behavior of a Message Transaction Object

(Figure) States: Idle, Transmitting, Waiting. Transition labels shown: Message Ready / Trans-count = 0; Done / Start timer, Trans-count++; tm(wait-time); Invalid ACK; [Trans-count <= limit]; [else] / inform sender of failure; ACK.

Page 86:

UML State Charts

Similar to state diagrams.

States can be nested within states. Inner states are known as substates.

The history connector allows the specification of the default initial state in a superstate.

Page 87:

UML State Charts

(Figure) An example state chart with nested states (substates SS1 and SS2 inside a superstate), entry/exit actions and activities (entry: f1(), do: f2(), exit: f3(); entry: g1(), do: g2(), exit: g3()), guarded transitions ([G1]), and a history connector (H).

History connector: SS2 is the default initial state in the absence of history; otherwise, the last active state is the default. Each state may have entry and exit actions as well as activities.

Entry (exit) actions are executed in the (reverse) order of nesting.

Page 88:

Transitions in UML Statecharts

event-name(parameters) [guard] / action-list ^ event-list

event name: name of the event triggering the transition

parameters: List of parameters passed with the event signal.

guard: Boolean expression that must evaluate to true for the transition to take place.

action list: List of actions to be executed when the transition is taken.

event list: List of events generated, and propagated to other state machines, when the transition is taken.
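As a sketch only (UML tools implement this differently), the transition notation can be mimicked by a small dispatcher that fires a transition when its guard holds, executes its actions, and propagates the listed events; the retry transition below loosely mirrors the message-transaction chart, with made-up names:

    class Transition:
        def __init__(self, event, guard, actions, emitted, source, target):
            self.event, self.guard = event, guard
            self.actions, self.emitted = actions, emitted
            self.source, self.target = source, target

    def step(state, event, ctx, transitions):
        for tr in transitions:
            if tr.source == state and tr.event == event and tr.guard(ctx):
                for action in tr.actions:
                    action(ctx)
                return tr.target, tr.emitted      # new state plus propagated events
        return state, []                          # no enabled transition: event ignored

    # Hypothetical example: retry while Trans-count <= limit.
    ctx = {"count": 0, "limit": 3}
    retry = Transition("timeout", lambda c: c["count"] <= c["limit"],
                       [lambda c: c.update(count=c["count"] + 1)], ["resend"],
                       source="Waiting", target="Transmitting")
    print(step("Waiting", "timeout", ctx, [retry]))   # ('Transmitting', ['resend'])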

Page 89:

Summary

Testing and debugging

Specification

Correctness versus confidence

Input domain

Exhaustive testing and combinatorial explosion

UML artifacts: Use cases, FSM, State Charts, Sequence diagrams

Page 90:

Summary: Terms

Reliability
Coverage
Error, defect, fault, failure
Debugging, test-debug cycle
Types of testing, basis for classification

Page 91:

Summary: Questions

What is the effect of reducing the partition size on probability of finding errors?

How does coverage affect our confidence in program correctness?

Does 100% coverage imply that a program is fault-free?

What decides the type of testing?