
Testing Course - Birgitta




Software Testing

A Crash Course in SW testing Techniques and Concepts


Overview

- Introduction to testing concepts
- Levels of testing
- General test strategies
- Testing concurrent systems
- Testing of RT systems


Testing is necessary

- Needed to gain a sufficient level of confidence in the system: risk information, bug information, process information
- A perfect development process is infeasible; building without faults implies early testing
- Formal methods are not sufficient: they can only prove conformance to a model, and perfect requirements are cognitively infeasible and error prone


Testing for quality assurance

- Traditionally, testing focuses on functional attributes, e.g. correct calculations
- Non-functional attributes are equally important, e.g. reliability, availability, timeliness


How much shall we test?

- Testing usually takes about half of the development resources
- Stopping testing is a business decision: there is always something more to test
- It is a risk-based decision; the tester provides the risk estimation


When do we test?

- The earlier a fault is found, the less expensive it is to correct
- Testing does not only concern code: documents and models should also be subject to testing
- As soon as a document is produced, testing can start


Levels of Testing

- Component/unit testing
- Integration testing
- System testing
- Acceptance testing
- Regression testing


Component Testing (1/2)

- Requires knowledge of the code
- High level of detail
- Delivers thoroughly tested components to integration
- Stopping criteria: code coverage, quality


Component Testing (2/2)

- Test case: input, expected outcome, and purpose; selected according to a strategy, e.g. branch coverage (a minimal example follows below)
- Outcome: pass/fail result, plus a log, i.e. a chronological list of events from the execution
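A minimal sketch of such a component-level test case in Python; the function under test, its values, and the expected outcomes are hypothetical:

```python
import unittest

def price_with_discount(price, is_member):
    """Hypothetical unit under test: members get 10% off."""
    return price * 0.9 if is_member else price

class TestPriceWithDiscount(unittest.TestCase):
    # Purpose: exercise both branches of the discount decision.
    def test_member_gets_discount(self):
        # Input: price=100, is_member=True; expected outcome: 90.0
        self.assertAlmostEqual(price_with_discount(100, True), 90.0)

    def test_non_member_pays_full_price(self):
        # Input: price=100, is_member=False; expected outcome: 100
        self.assertEqual(price_with_discount(100, False), 100)

if __name__ == "__main__":
    unittest.main()  # the runner reports pass/fail and logs the executed tests
```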


Integration Testing (1/2)

- Tests assembled components; these must have been tested and accepted previously
- Focus on interfaces: there might be interface problems even though the components work when tested in isolation
- It might be possible to perform new tests at this level


Integration Testing (2/2)

- Strategies:
  - Bottom-up: start from the bottom and add one component at a time
  - Top-down: start from the top and add one component at a time
  - Big-bang: everything at once
  - Functional: order based on execution
- Simulation of other components (see the sketch below):
  - Stubs receive output from the test objects
  - Drivers generate input to the test objects
  - Note that stubs and drivers are also software, i.e. they need testing, etc.
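A minimal sketch, assuming a hypothetical OrderService that depends on a payment component: the stub stands in for the missing payment component, and the driver generates the input and checks the outcome:

```python
class PaymentStub:
    """Stub: stands in for the real payment component and records the calls it receives."""
    def __init__(self):
        self.charged = []

    def charge(self, amount):
        self.charged.append(amount)
        return True  # canned answer instead of real payment logic

class OrderService:
    """Hypothetical component under integration test."""
    def __init__(self, payment):
        self.payment = payment

    def place_order(self, amount):
        return "accepted" if self.payment.charge(amount) else "rejected"

def driver():
    """Driver: generates input to the test object and checks the outcome."""
    stub = PaymentStub()
    service = OrderService(payment=stub)
    assert service.place_order(42) == "accepted"
    assert stub.charged == [42]  # the stub lets us observe the interface traffic

if __name__ == "__main__":
    driver()
    print("integration sketch passed")
```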


System Testing (1/2)

- Functional testing: test end-to-end functionality
- Requirement focus: test cases derived from the specification
- Use-case focus: test selection based on the user profile


System Testing (2/2)

- Non-functional testing targets quality attributes:
  - Performance: can the system handle the required throughput?
  - Reliability: obtain confidence that the system is reliable
  - Timeliness: test whether the individual tasks meet their specified deadlines
  - etc.


Acceptance Testing

- The user (or customer) is involved
- Environment as close to field use as possible
- Focus on: building confidence, and compliance with the acceptance criteria defined in the contract


Re-Test and Regression Testing

- Conducted after a change
- Re-testing aims to verify that a fault has been removed: re-run the test that revealed the fault
- Regression testing aims to verify that no new faults have been introduced: re-run all tests; should preferably be automated


Strategies

- Code coverage strategies, e.g. decision coverage, data-flow testing (defines -> uses)
- Specification-based testing, e.g. equivalence partitioning, boundary-value analysis, combination strategies
- State-based testing


Code Coverage (1/2)

- Statement coverage: each statement should be executed by at least one test case
- A minimum requirement


Code Coverage (2/2)

- Branch/decision coverage: every decision is tested with both a true and a false value
- Subsumes statement coverage
- MC/DC: for each variable x in a boolean condition B, let x decide the value of B, and test with both a true and a false value of x
- Used for safety-critical applications
- Example: if (x1 and x2) { S }

  x1      x2      x1 and x2
  true    true    true
  false   true    false
  true    false   false

  The (true, true) case pairs with (false, true) to show that x1 decides the outcome, and with (true, false) to show that x2 does. (A code sketch follows.)
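A minimal sketch of the MC/DC test set for the decision above, written as executable Python; the guarded function and its return values are illustrative:

```python
def guarded_action(x1, x2):
    """Decision under test: the body S runs only when x1 and x2 are both true."""
    if x1 and x2:
        return "S executed"
    return "S skipped"

# MC/DC test set for (x1 and x2): three cases suffice.
# (True, True) vs (False, True): only x1 changes and the decision flips -> x1 shown independent.
# (True, True) vs (True, False): only x2 changes and the decision flips -> x2 shown independent.
mcdc_cases = [
    ((True, True), "S executed"),
    ((False, True), "S skipped"),
    ((True, False), "S skipped"),
]

for (x1, x2), expected in mcdc_cases:
    assert guarded_action(x1, x2) == expected
print("MC/DC cases pass")
```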


Mutation testing

- Create a number of mutants, i.e. faulty versions of the program; each mutant contains one fault, created by applying a mutation operator
- Run the tests (random or selected) on the mutants; when a test case reveals the fault, save the test case and remove the mutant from the set, i.e. it is killed
- Continue until all mutants are killed
- Results in a set of test cases with high quality
- Needs automation (a hand-rolled sketch of the idea follows)
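A minimal hand-rolled sketch of the idea, assuming a trivial function and a single relational-operator mutant; real mutation tools generate and manage mutants automatically:

```python
def is_adult(age):
    """Original program."""
    return age >= 18

def is_adult_mutant(age):
    """Mutant: the mutation operator replaced >= with >."""
    return age > 18

tests = [17, 18, 21]  # candidate test inputs

def kills(test_input):
    # A test kills the mutant if original and mutant disagree on its outcome.
    return is_adult(test_input) != is_adult_mutant(test_input)

killing_tests = [t for t in tests if kills(t)]
print("tests that kill the mutant:", killing_tests)  # only age == 18 does
```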


Specification-based testing (1/2)

- Test cases are derived from the specification
- Equivalence partitioning:
  - Identify sets of input from the specification
  - Assumption: if one input from set s leads to a failure, then all inputs from set s lead to the same failure
  - Choose a representative value from each set
  - Form test cases with the chosen values (see the sketch below)
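A minimal sketch, assuming a hypothetical specification that says valid ages are 18 to 65; one representative value is chosen per partition:

```python
def accept_age(age):
    """Hypothetical unit under test: the assumed spec says valid ages are 18-65 inclusive."""
    return 18 <= age <= 65

# One representative value per equivalence class derived from the (assumed) spec.
partitions = {
    "below valid range": (10, False),
    "inside valid range": (30, True),
    "above valid range": (80, False),
}

for name, (value, expected) in partitions.items():
    assert accept_age(value) == expected, name
print("equivalence partition representatives pass")
```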


Specification-based testing (2/2)

- Boundary value analysis:
  - Identify boundaries in the input and output
  - For each boundary, select one value from each side of the boundary (as close as possible)
  - Form test cases with the chosen values (see the sketch below)
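A minimal sketch continuing the same assumed 18-to-65 specification; one value is picked on each side of each boundary:

```python
def accept_age(age):
    """Same hypothetical 18-65 spec as above."""
    return 18 <= age <= 65

# For each boundary, one value on each side, as close as possible.
boundary_cases = [
    (17, False), (18, True),   # lower boundary
    (65, True), (66, False),   # upper boundary
]

for value, expected in boundary_cases:
    assert accept_age(value) == expected
print("boundary value cases pass")
```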


Combination Strategies (1/5)

- Equivalence partitioning and boundary value analysis give representative parameter values
- A test case often contains more than one input parameter
- How do we form efficient test suites, i.e. how do we combine parameter values?


Combination Strategies (2/5)

- Each choice: each chosen parameter value occurs in at least one test case
- Assume 3 parameters A, B, and C with 2, 2, and 3 values respectively:

  Test case   A    B    C
  TC1         1    1    1
  TC2         2    2    2
  TC3         any  any  3


Combination Strategies (3/5)

- Pair-wise combinations: each pair of chosen values occurs in at least one test case
- Efficiently generated by Latin squares or a heuristic algorithm
- Covers failures due to pairs of input values (a coverage check is sketched below)

  Latin square:
  1 2 3
  3 1 2
  2 3 1

  Test case   A    B    C
  TC1         1    1    1
  TC2         1    2    3
  TC3         1    any  2
  TC4         2    1    2
  TC5         2    2    1
  TC6         2    any  3
  TC7         any  1    3
  TC8         any  2    2
  TC9         any  any  1
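A minimal sketch that checks pair-wise coverage of a candidate suite, using the parameter domains from the example; the concrete suite below is one way of resolving the 'any' entries in the table and is illustrative, not the only solution:

```python
from itertools import combinations, product

domains = {"A": [1, 2], "B": [1, 2], "C": [1, 2, 3]}
names = list(domains)

# Candidate suite: each row assigns a value to A, B, C (the 'any' slots resolved to concrete values).
suite = [
    (1, 1, 1), (1, 2, 3), (1, 1, 2),
    (2, 1, 2), (2, 2, 1), (2, 2, 3),
    (1, 1, 3), (2, 2, 2), (2, 1, 1),
]

def uncovered_pairs(suite):
    """Return every pair of (parameter, value) choices not hit by any test case."""
    missing = []
    for (i, p), (j, q) in combinations(enumerate(names), 2):
        for v, w in product(domains[p], domains[q]):
            if not any(tc[i] == v and tc[j] == w for tc in suite):
                missing.append(((p, v), (q, w)))
    return missing

print("uncovered pairs:", uncovered_pairs(suite))  # [] means the suite is pair-wise complete
```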


Combination Strategies (4/5)

- All combinations, i.e. n-wise: each combination of chosen values occurs in at least one test case
- Very expensive: 2 x 2 x 3 = 12 test cases already in this small example (see the sketch below)

  Test case   A  B  C
  TC1         1  1  1
  TC2         1  1  2
  TC3         1  1  3
  TC4         1  2  1
  TC5         1  2  2
  TC6         1  2  3
  TC7         2  1  1
  TC8         2  1  2
  TC9         2  1  3
  TC10        2  2  1
  TC11        2  2  2
  TC12        2  2  3
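Enumerating all combinations is a one-liner with itertools; the domains follow the example above:

```python
from itertools import product

domains = {"A": [1, 2], "B": [1, 2], "C": [1, 2, 3]}

# Every combination of chosen values: 2 * 2 * 3 = 12 test cases.
all_combinations = list(product(*domains.values()))
for tc_number, values in enumerate(all_combinations, start=1):
    print(f"TC{tc_number}:", dict(zip(domains, values)))
```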


Combination Strategies (5/5)

- Base choice: for each parameter, define a base choice, i.e. the most likely value
- Let this be the base test case and vary one value at a time (a generator is sketched below)
- In the example, let A=1, B=2, and C=2 be the base choices:

  Test case   A  B  C
  TC1         1  2  2
  TC2         2  2  2
  TC3         1  1  2
  TC4         1  2  1
  TC5         1  2  3
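A minimal sketch of generating a base-choice suite for the example; the base values are the ones assumed on the slide:

```python
domains = {"A": [1, 2], "B": [1, 2], "C": [1, 2, 3]}
base = {"A": 1, "B": 2, "C": 2}  # most likely value per parameter (assumed)

def base_choice_suite(domains, base):
    """Base test case first, then vary one parameter at a time away from its base value."""
    suite = [dict(base)]
    for param, values in domains.items():
        for value in values:
            if value != base[param]:
                tc = dict(base)
                tc[param] = value
                suite.append(tc)
    return suite

for tc_number, tc in enumerate(base_choice_suite(domains, base), start=1):
    print(f"TC{tc_number}:", tc)  # reproduces TC1-TC5 from the table above
```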


State-Based Testing

- Model the functional behavior as a state machine
- Select test cases to cover the graph: each node, each transition, each pair of transitions, each chain of transitions of length n (a transition-coverage sketch follows)
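A minimal sketch, assuming a hypothetical three-state door model; the single test walk below covers every transition at least once:

```python
# Hypothetical state machine: (state, event) -> next state.
transitions = {
    ("closed", "open"): "opened",
    ("opened", "close"): "closed",
    ("closed", "lock"): "locked",
    ("locked", "unlock"): "closed",
}

def run(start, events):
    """Execute a walk through the model, recording which transitions were exercised."""
    state, covered = start, set()
    for event in events:
        covered.add((state, event))
        state = transitions[(state, event)]
    return state, covered

end, covered = run("closed", ["open", "close", "lock", "unlock"])
assert end == "closed"
assert covered == set(transitions), "the walk covers every transition"
print("all transitions covered by the walk")
```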


Concurrency Problems

- (Logical) parallelism often leads to non-determinism in the actual execution:
  - E.g. synchronization errors may occur only in some execution orders
  - The order may influence the arithmetic results or the timing
- Explosion in the number of required tests: we need confidence for all execution orders that might occur


Silly example of Race Conditions with shared data

Two tasks share the variables A and B (both initially 0). Task 1 executes W(A,1) and then R(B); Task 2 executes W(B,1) and then R(A). Depending on the execution order, the two reads observe (B=1, A=1), (B=1, A=0), or (B=0, A=1). A code sketch of the same race follows.
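A minimal sketch of the same race with Python threads; note that on a real machine many repeated runs may still show only one or two of the possible outcomes:

```python
import threading

def one_run():
    shared = {"A": 0, "B": 0}
    result = {}

    def task1():
        shared["A"] = 1            # W(A,1)
        result["B"] = shared["B"]  # R(B,?)

    def task2():
        shared["B"] = 1            # W(B,1)
        result["A"] = shared["A"]  # R(A,?)

    t1 = threading.Thread(target=task1)
    t2 = threading.Thread(target=task2)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return result["A"], result["B"]

# Repeated runs may observe (1, 1), (0, 1) or (1, 0) depending on the interleaving.
print({one_run() for _ in range(1000)})
```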


Observability Issues

- Probe effect (Gait, 1985): "Heisenberg's principle" for computer systems
- Common "solutions": compensate, leave the probes in the system, or ignore the effect
- We must observe the execution orders to gain coverage


Controllability Issues

- To be able to test the correctness of a particular execution order we need control over:
  - Input data to all tasks
  - Initial state of shared data/buffers
  - Scheduling decisions
  - Order of synchronization/communication between tasks


Few testing criteria exist for concurrent systems

- The number of execution orders grows exponentially with the number of synchronization primitives in the tasks; testing criteria are needed to bound and select the subset of execution orders for testing
- E.g. branch/statement coverage is not sufficient for concurrent software; it is still useful on serializations, but execution paths may require specific behavior from other tasks
- Data-flow based testing criteria have been adapted, e.g. define-use pairs


Summary: Determinism vs. Non-Determinism

- Deterministic systems:
  - Controllability is high; an input (sequence) suffices
  - Coverage can be claimed after a single test execution with the inputs
  - E.g. filters, pure "table-driven" real-time systems
- Non-deterministic systems:
  - Controllability is generally low
  - Statistical methods are needed in combination with input coverage
  - E.g. systems that use random heuristics, or whose behavior depends on execution times / race conditions


Test execution in concurrent systems

- Non-deterministic testing: "run, run, run and pray"
- Deterministic testing: select a particular execution order and force it, e.g. by instrumenting with extra synchronization primitives (the absence of timing constraints makes this possible; see the sketch below)
- Prefix-based testing (and replay): deterministically run the system to a specific (prefix) point, then start non-deterministic testing at that point
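A minimal sketch of deterministic testing applied to the earlier race example: an extra synchronization primitive (a threading.Event) is inserted only for the test, forcing the order in which Task 2 reads A before Task 1 writes it:

```python
import threading

def forced_run():
    shared = {"A": 0, "B": 0}
    result = {}
    a_read = threading.Event()  # extra synchronization inserted only for the test

    def task1():
        a_read.wait()              # forced to wait until task2 has read A
        shared["A"] = 1            # W(A,1)
        result["B"] = shared["B"]  # R(B,?)

    def task2():
        shared["B"] = 1            # W(B,1)
        result["A"] = shared["A"]  # R(A,?)
        a_read.set()               # release task1

    t1 = threading.Thread(target=task1)
    t2 = threading.Thread(target=task2)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return result["A"], result["B"]

assert forced_run() == (0, 1)  # this particular execution order is now reproduced every run
print("forced interleaving reproduced")
```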


Real-time systems testing

- Inherits the issues from concurrent systems; the problems become harder due to time constraints:
  - More sensitive to probe effects
  - Timing/order of inputs becomes more significant
- Adds new potential problems:
  - New failure types, e.g. missed deadlines, too-early responses, etc.
  - Test inputs include execution times
  - Faults in real-time scheduling: algorithm implementation errors, wrong assumptions about the system


Real-time systems testing

- Pure time-triggered systems: deterministic; test methods for sequential software usually apply
- Fixed-priority scheduling: non-deterministic, but a limited set of possible execution orders; the worst case w.r.t. timeliness can be found from analysis
- Dynamically (online) scheduled systems: non-deterministic, with a large set of possible execution orders; timeliness needs to be tested


Testing Timeliness

- Aim: verification of the specified deadlines for individual tasks
- Test whether the assumptions about the system hold, e.g. worst-case execution time estimates, overheads, context-switch times, hardware acceleration efficiency, I/O latency, blocking times, dependency assumptions
- Test the system's temporal behavior under stress, e.g. unexpected job requests, overload management, component failure, admission control scheme
- Identification of potential worst-case execution orders
- Controllability is needed to test worst-case situations efficiently (a deadline-check sketch follows)
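A minimal sketch of a deadline check for a single task, assuming a hypothetical task body and a 5 ms deadline; wall-clock measurement on a desktop OS only stands in for a proper measurement on the target platform:

```python
import time

DEADLINE_S = 0.005  # assumed 5 ms deadline for the task under test

def task_under_test():
    """Hypothetical task body; replace with the real task."""
    sum(range(10_000))

def response_times(runs=100):
    """Measure the response time of each activation (wall clock, illustrative only)."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        task_under_test()
        samples.append(time.perf_counter() - start)
    return samples

samples = response_times()
misses = [s for s in samples if s > DEADLINE_S]
print(f"worst observed: {max(samples) * 1000:.3f} ms, deadline misses: {len(misses)}/{len(samples)}")
```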


Testing Embedded Systems

- System-level testing differs: it is performed on the target platform to keep the timing
- Closed-loop testing: test cases consist of parameters sent to an environment simulator that is connected to the real-time (control) system
- Open-loop testing: test cases contain sequences of events that the system should be able to handle, fed directly to the real-time (control) system


Approach of TETReS

- Real-time database background: dynamic scheduling, flexibility requirements, mixed load, etc.
- Two general approaches:
  - Investigate how architectural constraints impact testability: keep the flexibility while limiting/avoiding testing problems, e.g. the database environment provides some nice properties
  - Investigate methods for supporting automated testing of timeliness: testing criteria for timeliness, automatic generation of tests, automated prefix-based test execution


Approach of TETReS: test generation and execution

- Open-loop, system-level testing of timeliness
- Model-based test-case generation from:
  - Architecture information, e.g. locking policies, scheduling, etc.
  - Timing assumptions/constraints of the tasks
  - Assumptions about the normal environment behavior
- Mutation-based testing criteria
- Prefix-based test-case execution


Summary

- Motivation
- Test methods: examples of different test strategies have been presented
- Concurrency and real-time issues
- The TETReS approach


[Overview diagram of the TETReS tool chain. Components: Mutant Generator, Model Checker, Test Specs Generator, Test Coordinator, State Enforcer, Test Driver, and the real-time system (RTS) with task unit testing. Artifacts exchanged between them: testing criteria, formal models, the specification and mutated specifications, counter-example traces, test specs, input data to tasks, parameterized event sequences, and a pre-state description.]