All Testing


  • 7/29/2019 All Testing

    1/103

    Model-Based Testing: FSM-Based Testing

    Instructor: Rachida Dssouli
    Email: [email protected]

    Office: EV 007.648

    URL: http://www.ciise.concordia.ca/~dssouli

    October, 2007

    2/103

    Outline

    Protocol testing: concepts, fault models, related definitions, general approach

    Methodologies based on FSM

    T-Method (Transition Tour method)

    D-Method (Distinguishing sequences)

    W-Method (Characterizing sequences)

    U-Method (Unique input/output sequences)

    3/103

    Introduction and motivation

    Testing in the software development cycle

    The software development cycle

    Development of test cases (starting during analysis phase)

    Analysis of test results, the oracle, diagnostics

    What results can we expect from testing?

    Testing vs. verification

    Finite test suite vs. infinite behavior

    Definition of test suite

    Conformance relations: what is a correct implementation? The coverage problem and the fault models

    Defining the correct behavior: modeling and specification languages


    5/103

    Why do we test?

    for detecting errors in the implementation / debugging

    for demonstrating conformance to a specification or to users' needs, e.g. protocol conformance testing

    for proving the correctness!!

    6/103

    Against what are we testing?

    Specifications:

    Users' needs (Requirements)

    Objectives (specific)

    Informal specification

    Formal specification

    System Under Test ?

    The answer will help the test team establish a clear relationship between the system under test, the specification, and the objective to satisfy.

    7/103

    Correctness and how to achieve it

    How do we achieve the correctness of a given system?

    What is the impact of this process on the final software product?

    program proving (using a theorem prover)

    exhaustive testing

    testing with coverage

    The choice among these alternatives is based on:

    cost (a function of parameters such as time, resources, human expertise, ...)

    feasibility of proof or exhaustive testing

    the target quality

    8/103

    Models of Specification and Implementation

    Conformance testing

    Figure: the precise specification S and the implementation I are each related, via assumptions / test hypotheses, to an abstract model of S and an abstract model of I; a conformance relation links S to I, and likewise links their abstract models.

    9/103

    Distinguishing the non-conforming implementations

    Figure: within the universe of all possible implementations of a given system, implementations are either conforming or non-conforming; the non-conforming ones are either detected (fail TS) or non-detected (pass TS). The fault model delimits a subset of all possible implementations.

    Question: how to choose a small (finite) test suite TS and obtain the maximum power of error detection?

    10/103

    Protocol Conformance Testing

    To confirm that an implementation conforms to its standard, an external tester applies a sequence of inputs to the IUT and verifies its behavior

    Issue 1: preparation of conformance tests that cover all aspects of the IUT

    Issue 2: the time required to run the tests should not be unacceptably long

    Two main limitations

    Controllability: the IUT cannot be directly put into a desired state, usually requiring several additional state transitions

    Observability: prevents the external tester from directly observing the state of the IUT, which is critical for a test to detect errors

    Formal conformance testing techniques based on FSM

    Generate a set of input sequences that will force the FSM implementation to undergo all specified transitions

    Black box approach: only the outputs generated by the IUT (upon receipt of inputs) are observable to the external tester

    11/103

    Fault Models

    A fault model is a hypothetical model of what types of faults may occur in an implementation. Most fault models are "structural", i.e. the model is a refinement of the specification formalism (or of an implementation model)

    E.g. mutations of the specification or of a correct implementation

    It may be used to construct the fault domain used for defining what "complete test coverage" means

    E.g. single fault hypothesis (or multiple faults)

    A fault model is useful for the following problems:

    Test suite development for a given coverage objective

    Formalization of "test purpose"

    For an existing test suite: coverage evaluation and optimization

    Diagnostics

    12/103

    Fault Model for FSM

    Output fault: the machine provides an output different from the one specified by the output function

    Transfer fault: the machine enters a different state than that specified by the transfer function

    Transfer faults with additional states: the number of states of the system is increased by the presence of faults; the additional states are used to model certain types of errors

    Additional or missing transitions: one basic assumption is that the FSM is deterministic and completely defined (fully specified), so these faults occur when it turns out to be non-deterministic and/or incompletely (partially) specified

    13/103

    Fault Models for FIFO Queue and Petri Nets

    FSM with several FIFO input queues

    Ordering fault: FIFO ordering is not preserved, or, in the case of multiple input queues, some input event enters a wrong input queue

    Maximum length fault: the maximum length implemented is less than the one specified, or an input event gets lost while the queue is not overflowing

    Flow control fault: errors of ordering or of loss occur when the number of submitted input events overflows the maximum queue length specified

    Petri Nets

    Input or output arc fault: one of the input or output arcs is connected to the wrong place, missing, or exists in addition to those specified

    Missing or additional transition: the number of transitions is not the same as in the specification

    14/103

    FSM Related Definitions (1/2)

    Directed graph G = (V, E) representing FSM M

    Set of vertices V = {v1, v2, ..., vn} represents the set of states S in M

    Directed edge (vi, vj) ∈ E represents a transition from state si to state sj in M

    An edge in G is represented by a triple (vi, vj, L), where L = ai/oi is the input/output operation corresponding to the transition from si to sj in M

    Some other definitions & assumptions

    Deterministic FSM: predictable behavior in a given state for a given input

    Strongly connected: for each state pair (si, sj) there is a transition path going from si to sj, i.e. each state can be reached from any other state

    Fully specified: from each state there is a transition for each input symbol; otherwise, partially specified

    Minimal: the number of states of M is less than or equal to the number of states of any equivalent machine
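Once an FSM is encoded as a data structure, the assumptions above (deterministic, fully specified, strongly connected) are mechanical to check. A minimal sketch, assuming a hypothetical dict-based encoding (state, input) -> (next state, output), under which determinism is automatic since each pair has exactly one entry:

```python
from collections import deque
from itertools import product

# Hypothetical machine: (state, input) -> (next state, output).
FSM = {
    ("S0", "a"): ("S0", "0"), ("S0", "b"): ("S2", "1"),
    ("S1", "a"): ("S1", "1"), ("S1", "b"): ("S0", "0"),
    ("S2", "a"): ("S2", "0"), ("S2", "b"): ("S1", "0"),
}

def states_inputs(fsm):
    states = {s for (s, _) in fsm} | {nxt for (nxt, _) in fsm.values()}
    inputs = {x for (_, x) in fsm}
    return states, inputs

def fully_specified(fsm):
    """Each state has a transition for each input symbol."""
    states, inputs = states_inputs(fsm)
    return all((s, x) in fsm for s, x in product(states, inputs))

def strongly_connected(fsm):
    """Every state is reachable from every other state (BFS from each)."""
    states, _ = states_inputs(fsm)
    def reachable(src):
        seen, queue = {src}, deque([src])
        while queue:
            s = queue.popleft()
            for (st, _), (nxt, _) in fsm.items():
                if st == s and nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return seen
    return all(reachable(s) == states for s in states)

print(fully_specified(FSM), strongly_connected(FSM))  # True True
```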

    15/103

    FSM Related Definitions (2/2)

    Start state s0 ∈ S, usually the state at power-up

    Often, there is a special input taking M to state s0 from any other state with a single transition. In this case, M is said to have the reset capability, and the input which performs the reset is denoted by "r"

    Sequences for testing

    A test subsequence of M is a sequence of input symbols for testing either a state or a transition of M

    A β-sequence for M is a concatenation of test subsequences for testing all transitions of M

    A test sequence for M is a sequence of input symbols which can be used in testing conformance of implementations of M against the specification of M

    An optimized test sequence is a test sequence such that no subsequence of it is completely contained in any other subsequence

    So, the problem is how to obtain an "optimized test sequence" for M

    16/103

    Transition Level Approach

    The methods for protocol conformance test sequence generation produce a test sequence which checks the correctness of each transition of the FSM implementation

    By no means exhaustive, i.e. there is no guarantee of correct behavior given every possible input sequence. The intent is to design a test sequence which provides a guarantee "beyond a reasonable doubt"

    Three basic steps for checking a transition (si, sj; L), L = ak/oi

    Step 1: The FSM implementation is put into state si (e.g. reset + transfer)

    Difficulty in realizing this is due to the limited controllability of the implementation

    Step 2: Input ak is applied and the output is checked to verify that it is oi, as expected

    Step 3: The new state of the FSM implementation is checked to verify that it is sj, as expected

    Difficulty in verifying this is due to the limited observability of the implementation

    17/103

    Testing based on Finite State Models

    The finite state machine (FSM) model

    An infinite fault model

    Conformance relations: based on I/O sequences

    Testing based on FSM specifications

    Fault model

    Test derivation methods Transition Tour

    State identification methods

    Fault coverage guarantees

    Overview and assumptions

    Testing based on partially specified behavior

    Testing against non-deterministic specifications

    Testing non-deterministic FSMs with input queuing

    Coverage analysis

    18/103

    FSM

    Figure: a four-state machine with states S1, S2, S3, S4 and transitions t1: 1/1, t2: 2/2, t3: 1/1, t4: 2/2, t5: 1/2, t6: 2/2, t7: 1/2, t8: 2/2.

    S1 is the initial state.

    t1: 1/1 is a transition: it has a starting state S1 and an ending state S2; its label is t1; the input is 1 and the output is 1; "/" separates the input from the output.

    19/103

    An FSM Example: Mealy Machine

    M = <S, S1, X, Y, Ds, δ, λ>

    S: state set; S1: initial state; X: input set; Y: output set; Ds: specification domain, Ds ⊆ S × X; δ: Ds → S: transfer function; λ: Ds → Y: output function

    The machine is partially defined (specified), deterministic, initialized:

    S = {S1, S2, S3, S4}, X = {1, 2}, Y = {1, 2}, Ds = S × X − {…}

    Figure: the four-state machine with transitions t1: 1/1, t2: 2/2, t3: 1/1, t4: 2/2, t5: 1/?, t6: 2/2, t7: 1/2, t8: 2/2; the output of t5 is unspecified (shown as "?").

    20/103

    Fault Model for Finite State Machine (FSM)

    1) Output fault: point a in the FSM fault model.

    2) Transfer fault: point b in the FSM fault model.

    3) Transfer fault with additional states: point c in the FSM fault model.

    4) Additional or missing transitions: point d in the FSM fault model.

    5) Additional or missing states

    21/103

    Output fault on transition t1

    Figure: the specification (left) and the implementation under test, IUT (right). In the IUT, transition t1 is 1/2 instead of 1/1; all other transitions (t2–t8) are unchanged.

    22/103

    Transfer fault on t2

    Figure: the specification (left) and the IUT (right). In the IUT, the ending state of t2 is now S3.

    23/103

    Transfer fault on t5 with additional state

    Figure: the specification (left, with the output of t5 unspecified, t5: 1/?) and the IUT (right, with t5: 1/2), illustrating a transfer fault on t5 involving an additional state.

    24/103

    Example of implementation with additional state

    Figure: a specification over inputs {a, b, c} and outputs {e, f} with states S0, S1, S2, and two implementations: Impl. 1 with states I0, I1, I2, and Impl. 2 with states I0, I1, I2 plus an additional state I3.

    25/103

    Example of a test suite

    Figure: the four-state machine with transitions t1: 1/1, t2: 2/2, t3: 1/1, t4: 2/2, t6: 2/2, t7: 1/2, t8: 2/2.

    TS = { r.1.1.2.1, r.2.2.1.2.2 }

    A test suite is a set of input sequences starting from the initial state of the machine.

    Test case      MS (expected)   MI (conforming)   MI (non-conforming)
    r.1.1.2.1      1.1.2.2         1.1.2.2           1.1.2.2
    r.2.2.1.2.2    2.2.1.2.2       2.2.1.2.2         2.2.2.2.2

    The conforming implementation passes TS; the non-conforming one fails to pass TS.

    26/103

    Possible changes made by a developer

    Type 1: change the tail state of a transition
    Type 2: change the output of a transition
    Type 3: add a transition
    Type 4: add an extra state

    Figure: the four-state machine S1–S4, with t5: 1/? left unspecified.

    No limitation on the number of such changes allows for an infinite set of possible implementations!!!

    27/103

    Fault model for FSM specifications

    For a given transition: change the output (output fault); change the next state (transfer fault); if a new state can be added, then assume an upper bound on the number of states in implementations.

    For the example above, there are (|S|×|O|)^(|S|×|I|) = (4×7)^(4×5) = 28^20 mutants with up to 4 states. Among them, 36 mutants represent single (output or transfer) faults, as only 9 transitions are specified.

    An example of a very specific fault domain: only the transitions related to data transfer may be faulty. These are 4 transitions, which results in only 28^4 mutants (faulty implementations).

    Figure: a four-state protocol machine (states s1–s4) with transitions CR/ICONind, ICONresp/CC, DT0/IDATind,AK0, DT1/IDATind,AK1, DT0/AK0, DT1/AK1, IDISreq/DR; the mutations apply to the data-transfer transitions.

    Example of fault detection by the TS

    28/103

    Figure: three variants of the four-state machine: the specification and two implementations; one implementation has an output fault on t6 (2/1 instead of 2/2).

    29/103

    Test Derivation Methods

    30/103

    T-Method: Transition Tour Method

    [Nait 81] For a given FSM S, a transition tour is a sequence which takes the FSM S from the initial state s0, traverses every transition at least once, and returns to the initial state s0.

    Straightforward and simple scheme

    The new state of the FSM is not checked

    Fault detection power

    Detects all output errors

    There is no guarantee that all transfer errors can be detected

    The problem of generating a minimum-cost test sequence using the transition tour method is equivalent to the so-called "Chinese Postman" problem in graph theory

    First studied by the Chinese mathematician Kuan Mei-Ko in 1962
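One simple (not minimum-cost) way to build a transition tour is a greedy walk: repeatedly move, via breadth-first search, to the nearest state with an uncovered outgoing transition, fire one such transition, and finally return to the initial state. A sketch under a dict-based encoding, using the three-state a/b machine from the later exercise slides; a true minimum-cost tour would require solving the Chinese Postman problem instead:

```python
from collections import deque

# The three-state example machine from the later exercise slides:
# (state, input) -> (next state, output).
SPEC = {
    ("S0", "a"): ("S0", "0"), ("S0", "b"): ("S2", "1"),
    ("S1", "a"): ("S1", "1"), ("S1", "b"): ("S0", "0"),
    ("S2", "a"): ("S2", "0"), ("S2", "b"): ("S1", "0"),
}

def nearest(fsm, src, targets):
    """Shortest input sequence from src to any state in targets (BFS)."""
    seen, queue = {src}, deque([(src, [])])
    while queue:
        state, path = queue.popleft()
        if state in targets:
            return state, path
        for (s, inp), (nxt, _) in sorted(fsm.items()):
            if s == state and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [inp]))
    raise ValueError("target unreachable")

def transition_tour(fsm, start):
    """Greedy tour: cover every transition at least once, then return
    to the start state.  Not necessarily minimum-cost."""
    uncovered, tour, state = set(fsm), [], start
    while uncovered:
        goal, path = nearest(fsm, state, {s for (s, _) in uncovered})
        inp = next(x for (s, x) in sorted(uncovered) if s == goal)
        for step in path + [inp]:
            uncovered.discard((state, step))
            state = fsm[(state, step)][0]
            tour.append(step)
    _, back = nearest(fsm, state, {start})  # walk back to the initial state
    for step in back:
        state = fsm[(state, step)][0]
        tour.append(step)
    return "".join(tour)

print(transition_tour(SPEC, "S0"))
```

On this machine the greedy walk happens to produce a.b.a.b.a.b, the same tour used in the later exercise slides.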

    31/103

    T-Method Example -1

    The specification S.

    A transition tour is a,a,a,b,b,b

    The implementation I1 contains an output error. Our transition tour will detect it.

    The implementation I2 contains a transition error. Our transition tour will not detect it.

    32/103

    DS-method [Gonenc 70]

    An input sequence is a distinguishing sequence (DS) for an FSM S if the output produced by the FSM S is different when the input sequence is applied in each different state. A DS is used as a state identification sequence.

    Detects all output errors; detects all transfer errors; a DS may not be found for a given FSM.
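Whether a candidate input sequence is a DS follows directly from the definition: apply it in every state and require pairwise-distinct output sequences. A minimal sketch, using the a/b exercise machine from the later slides (for which a.b is a DS and a.a is not):

```python
# (state, input) -> (next state, output): the a/b machine from the
# later exercise slides.
SPEC = {
    ("S0", "a"): ("S0", "0"), ("S0", "b"): ("S2", "1"),
    ("S1", "a"): ("S1", "1"), ("S1", "b"): ("S0", "0"),
    ("S2", "a"): ("S2", "0"), ("S2", "b"): ("S1", "0"),
}

def run_outputs(fsm, state, seq):
    """Output string produced when seq is applied starting in `state`."""
    out = []
    for inp in seq:
        state, o = fsm[(state, inp)]
        out.append(o)
    return ".".join(out)

def is_distinguishing(fsm, seq):
    """True iff every state yields a different output sequence for seq."""
    states = sorted({s for (s, _) in fsm})
    outs = [run_outputs(fsm, s, seq) for s in states]
    return len(set(outs)) == len(outs)

print(is_distinguishing(SPEC, "ab"))  # True: outputs 0.1, 1.0, 0.0
print(is_distinguishing(SPEC, "aa"))  # False: S0 and S2 both give 0.0
```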

    33/103

    DS method Example

    Figure: the specification S, a three-state machine (states 1, 2, 3) over inputs {a, b} and outputs {x, y}, and an implementation I2 with a transfer error.

    A distinguishing sequence is: b.b

    If we apply it from: state 1 we obtain y.y; state 2 we obtain y.x; state 3 we obtain x.y

    A test case which allows the detection of the transfer error is: a.b.b.b

    If we apply it from the initial state of: the specification, we obtain x.x.y.y; the implementation, we obtain x.x.x.x

    34/103

    DS method

    Figure: the specification S, the same three-state machine over {a, b}.

    Phase 1: identification of all states / state cover

    From state 1, we can reach state 2 with b/y and state 3 with a/x

    We assume that the reset exists: Q = {ε, a, b}

    DS = b.b

    Test suite = {r.b.b, r.a.b.b, r.b.b.b}

    Phase 2: cover all transitions, for output faults and transfer faults

    P = {ε, a, b, a.b, a.a, b.b, b.a}

    Test suite: {r.b.b, r.a.b.b, r.b.b.b, r.a.b.b.b, r.a.a.b.b, r.b.b.b.b, r.b.a.b.b}

    35/103

    General methodology for state identification based methods

    A) Test generation based on the specification

    A-1) Find the Q set, or state cover: minimal inputs that reach each state from the initial one

    A-2) Find the P set, or transition cover: covers all remaining transitions

    Generate test suites using the Q and P sets

    B) Fault detection

    B-1) Apply the generated test suites to the specification to obtain expected outputs

    B-2) Apply the generated test suites to the implementation to obtain observed outputs

    Compare the expected and observed outputs (test results)

    If they are different, the verdict is fail; otherwise it is a pass for the applied test suites.
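Phase B is a small harness: derive expected outputs from the specification, observed outputs from the implementation, and compare. A sketch, assuming deterministic machines encoded as dicts; the leading r of a test case is modeled by simply restarting each run from the initial state (the two machines shown are hypothetical):

```python
def run_outputs(fsm, state, seq):
    """Outputs produced when input sequence seq is applied from `state`."""
    out = []
    for inp in seq:
        state, o = fsm[(state, inp)]
        out.append(o)
    return ".".join(out)

def verdict(spec, impl, start, test_suite):
    """Compare expected (spec) vs. observed (impl) outputs per test case.
    Returns ("fail", failing test case) or ("pass", None)."""
    for tc in test_suite:
        if run_outputs(spec, start, tc) != run_outputs(impl, start, tc):
            return "fail", tc
    return "pass", None

# Hypothetical example: a two-state toggle and a mutant with an output fault.
SPEC = {("A", "x"): ("B", "0"), ("B", "x"): ("A", "1")}
MUTANT = {("A", "x"): ("B", "0"), ("B", "x"): ("A", "0")}  # output fault
print(verdict(SPEC, MUTANT, "A", ["x", "xx"]))  # ('fail', 'xx')
```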

    36/103

    UIO-Method [Sabnani 88] and UIOv-Method [Vuong 89]

    The UIO-method can be applied if, for each state of the specification, there is an input sequence such that the output produced by the machine, when it is initially in the given state, is different from that of all other states.

    The UIOv-method is a variant of the UIO-method. It checks the uniqueness of the applied identification sequences on the implementation, meaning that each identification sequence must be applied in each state of the implementation and the outputs are compared with those expected from the specification.
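For small machines, a UIO sequence for a state can be found by brute force: enumerate input sequences of increasing length and keep the first whose output from that state differs from the output produced from every other state. A sketch on the a/b exercise machine from the later slides:

```python
from itertools import product

# The a/b exercise machine: (state, input) -> (next state, output).
SPEC = {
    ("S0", "a"): ("S0", "0"), ("S0", "b"): ("S2", "1"),
    ("S1", "a"): ("S1", "1"), ("S1", "b"): ("S0", "0"),
    ("S2", "a"): ("S2", "0"), ("S2", "b"): ("S1", "0"),
}

def run_outputs(fsm, state, seq):
    out = []
    for inp in seq:
        state, o = fsm[(state, inp)]
        out.append(o)
    return "".join(out)

def uio(fsm, state, max_len=4):
    """Shortest input sequence whose output from `state` differs from the
    output produced from every other state; None if none up to max_len."""
    states = sorted({s for (s, _) in fsm})
    inputs = sorted({x for (_, x) in fsm})
    for n in range(1, max_len + 1):
        for seq in product(inputs, repeat=n):
            word = "".join(seq)
            mine = run_outputs(fsm, state, word)
            if all(run_outputs(fsm, s, word) != mine
                   for s in states if s != state):
                return word
    return None

print({s: uio(SPEC, s) for s in ("S0", "S1", "S2")})
# S0 -> "b" (only S0 outputs 1 on b), S1 -> "a", S2 -> "ab"
```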

    37/103

    UIO Example

    Figure: the specification S, the three-state machine over inputs {a, b} and outputs {x, y}.

    UIO sequences are: state 1: a.b; state 2: a.a; state 3: a

    We assume the existence of a reset transition with no output (r/-) leading to the initial state from every state of S

    A transition cover set is: P = {ε, a, a.b, a.a, b, b.a, b.b}

    The test sequences generated by the UIO-method are: r.a.b, r.a.a, r.a.b.a.b, r.a.a.a.a, r.b.a.a, r.b.a.a.b, r.b.b.a

    38/103

    Method W [Chow 78]

    The W-method involves two sets of input sequences:

    The W-set is a characterization set of the minimal FSM, and consists of input sequences that can distinguish between the behaviors of every pair of states

    The P-set is a set of input sequences such that, for each transition from state A to state B on input x, there are input sequences p and p.x in P such that p takes the FSM from the initial state into state A
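Given a characterization set W, the W-method suite is P.W, where the transition cover P = Q ∪ Q·X is built from a breadth-first state cover Q. A sketch on the a/b exercise machine from the later slides (for which W = {a, b} works, and Q and P come out as {ε, b, b.b} and {ε, a, b, b.a, b.b, b.b.a, b.b.b}, matching the later DS-method slides):

```python
from collections import deque

# The a/b exercise machine: (state, input) -> (next state, output).
SPEC = {
    ("S0", "a"): ("S0", "0"), ("S0", "b"): ("S2", "1"),
    ("S1", "a"): ("S1", "1"), ("S1", "b"): ("S0", "0"),
    ("S2", "a"): ("S2", "0"), ("S2", "b"): ("S1", "0"),
}

def state_cover(fsm, start):
    """Q set: a shortest input word reaching each state (BFS from start)."""
    cover, queue = {start: ""}, deque([start])
    while queue:
        s = queue.popleft()
        for (src, inp), (nxt, _) in sorted(fsm.items()):
            if src == s and nxt not in cover:
                cover[nxt] = cover[s] + inp
                queue.append(nxt)
    return cover

def w_method_suite(fsm, start, W):
    """Test suite P.W with P = Q ∪ Q·X (transition cover)."""
    inputs = sorted({x for (_, x) in fsm})
    Q = set(state_cover(fsm, start).values())
    P = Q | {q + x for q in Q for x in inputs}
    return sorted({p + w for p in P for w in W})

suite = w_method_suite(SPEC, "S0", ["a", "b"])
print(len(suite), suite)  # 14 sequences
```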

    39/103

    W method Example

    Figure: the specification S, a three-state machine (states 1, 2, 3) over inputs {a, b, c} and outputs {e, f}.

    We assume the existence of a reset transition with no output (r/-) leading to the initial state from every state of S

    A characterization set is W = {a, b}: W1 for state 1: a/e; W2 for state 2: a/f, b/f; W3 for state 3: b/e; W = union of all Wi

    A transition cover set for the specification S is: P = {ε, a, b, c, b.a, b.b, b.c, c.a, c.b, c.c}. The P set is not unique: you may select b as preamble instead of a

    The W-method generates the following test sequences (P.W): r.a, r.b, r.a.a, r.a.b, r.b.a, r.b.b, r.c.a, r.c.b, r.b.a.a, r.b.a.b, r.b.b.a, r.b.b.b, r.b.c.a, r.b.c.b, r.c.a.a, r.c.a.b, r.c.b.a, r.c.b.b, r.c.c.a, r.c.c.b

    40/103

    Wp method [Fujiwara 90]

    This method is a generalization of the UIOv-method which is always applicable. It is at the same time an optimization of the W-method. The main advantage of the Wp-method over the W-method is to reduce the length of the test suite. Instead of using the whole set W to check each reached state si, only a subset of W is used in certain cases. This subset Wi depends on the reached state si, and is called an identification set for the state si.

    41/103

    Example of Wp method (1/3)

    Figure: the specification S, the three-state machine over {a, b, c} and {e, f}.

    We assume the existence of a reset transition with no output (r/-) leading to the initial state from every state of S

    Derivation of W (output of each input applied in each state):

    state   1   2   3
    a       e   f   f
    b       f   f   e
    c       e   f   e

    The identification sets are:

    W1 = {a}: for state 1, a/e distinguishes state 1 from all other states

    W2 = {c}: for state 2, c/f distinguishes state 2 from all other states

    W3 = {b}: for state 3, b/e distinguishes state 3 from all other states

    42/103

    Example of Wp method (2/3)

    W1: {a/e}, W2: {c/f}, W3: {b/e}

    A state cover set for the specification S is: Q = {ε, b, c}

    A transition cover set for the specification S is: P = {ε, a, b, b.c, b.a, b.b, c, c.a, c.c, c.b}

    P − Q = {a, b.c, b.a, b.b, c.a, c.c, c.b}

    Based on these sets, the Wp-method yields the following test sequences:

    Phase 1: Q.Wi = {r.a1, r.b.c2, r.c.b3} (the ending state, whose Wi is applied, is given in subscript)

    Phase 2: (P−Q).Wi = {r.a.c2, r.b.c.c2, r.b.a.a1, r.b.b.b3, r.c.a.b3, r.c.c.c2, r.c.b.a1}

    43/103

    Example of Wp method (3/3)

    Figure: a faulty implementation I; it contains a transfer error 2 -a/f-> 1 (fat arrow) instead of 2 -a/f-> 2 as defined in the specification S.

    The application of the test sequences obtained in Phase 2 leads to the following sequences of outputs:

       r.a.c2   r.b.c.c2   r.b.a.a1   r.b.b.b3   r.c.a.b3   r.c.c.c2   r.c.b.a1
    S: -.e.f    -.f.f.f    -.f.f.e    -.f.f.e    -.e.f.e    -.e.e.f    -.e.e.e
    I: -.e.f    -.f.f.f    -.f.f.e    -.f.f.e    -.e.f.f    -.e.e.f    -.e.e.e

    The output of I on r.c.a.b3 (-.e.f.f) is different from the one expected according to the specification (-.e.f.e). Therefore, the transfer error in the implementation is detected by this test sequence.

    44/103

    Wrong Example and trade-off for Wp method

    Figure: the specification S, the three-state machine over {a, b, c} and {e, f}.

    We assume the existence of a reset transition with no output (r/-) leading to the initial state from every state of S

    Derivation of W:

    state   1   2   3
    a       e   f   f
    b       f   f   e

    A characterization set is W = {a, b}, as for the W-method: for state 1: a/e; for state 2: {a/f, b/f} — this would increase the size of the test suite, which is why c/f should be selected as the identification sequence for state 2; for state 3: b/e

    The identification sets are:

    W1 = {a}, distinguishes state 1 from all other states

    W2 = {a, b}, distinguishes state 2 from all other states, but it is not optimized

    W3 = {b}, distinguishes state 3 from all other states

    45/103

    Examples

    46/103

    All state identification methods

    Distinguishing Sequence, UIO, W

    Test hypotheses:

    H1) strongly connected machine

    H2) contains no equivalent states

    H3) deterministic

    H4) completely specified machine

    H5) failures which increase the number of states do not occur

    The method is applied in two phases from the initial state:

    phase 1) an α-sequence to check that each state defined by the specification also exists in the implementation

    phase 2) a β-sequence to check all the individual transitions in the specification for correct output and transfer in the implementation

    47/103

    DS Method

    Figure: a machine S with states S0, S1, S2, S4 over inputs {a, b} and outputs {0, 1}.

    Assume that a reset transition r/- exists.

    Q1) Verify whether a.a is a DS for S, and explain why.

    Q2) Find a DS different from a.a, of length 2, for S.

    48/103

    W method

    Figure: a machine S with states S0, S1, S2 over inputs {a, b} and outputs {0, 1}.

    Assume that the reset exists and brings the machine from any state to the initial state.

    a) Find a characterization set W and generate the set of test cases for the specification S using the W method.

    b) Does S have a DS sequence? If not, explain why.

    49/103

    W method

    Figure: the machine S with states S0, S1, S2 over {a, b}.

    S0: b/1; S1: a/1; S2: a/0, b/0

    W = ∪ Wi; W = {a, b}

    Q = {ε, a, b} (state cover)

    P = {ε, a, b, a.b, a.a, b.a, b.b} (transition cover)

    P − Q = {a.b, a.a, b.a, b.b}

    P − Q is used for the two steps with alpha and beta sequences, to avoid redundancy.

    Phase 1: Q.W = {r.a, r.b, r.a.a, r.a.b, r.b.a, r.b.b}

    Phase 2: (P−Q).W = {r.a.b.a, r.a.b.b, r.a.a.a, r.a.a.b, r.b.a.a, r.b.a.b, r.b.b.a, r.b.b.b}

    50/103

    Examples Suite

    Figure: the specification S with states S0, S1, S2 over inputs {a, b}; input a loops on each state.

    Outputs per state:

    state   S0    S1    S2
    a       0     1     0
    b       1     0     0
    a.b     0.1   1.0   0.0

    Derive a DS of length up to 2 for S: a.b is a DS for S.

    Comment: input a at each state loops on that state, so a sequence a.a... cannot be a DS; the output would be 0.0... or 1.1...

    Transition tour:

    Input:  a.b.a.b.a.b
    Output: 0.1.0.0.1.0

    Q set: permits reaching each state from the initial state.

    51/103

    Q = {ε, b, b.b}

    The first b reaches the state S2; b.b reaches the state S1.

    The P set is the transition cover: it permits executing each transition at least once, starting from the initial state.

    Figure: the execution tree of the machine from S0, used to enumerate paths.

    How to derive the P set: find all paths, of length 1 and up, so that each transition is traversed at least once.

    P = {ε, a, b, b.a, b.b, b.b.a, b.b.b}

    More than one P set may exist; this depends on the alternative paths that the automaton may have.

    52/103

    The goal of Phase 1 is identification of the states in the implementation.

    DS = a.b, Q = {ε, b, b.b}, P = {ε, a, b, b.a, b.b, b.b.a, b.b.b}

    Phase 1

    Q.DS = {r.a.b, r.b.a.b, r.b.b.a.b}

    Expected output of phase 1 is: {-.0.1, -.1.0.0, -.1.0.1.0}

    Phase 2 (the DS is appended to each preamble in P)

    P.DS = {r.a.b, r.a.a.b, r.b.a.b, r.b.a.a.b, r.b.b.a.b, r.b.b.a.a.b, r.b.b.b.a.b}

    Expected output: {-.0.1, -.0.0.1, -.1.0.0, -.1.0.0.0, -.1.0.1.0, -.1.0.1.1.0, -.1.0.0.0.1}

    Figure: the specification S (states S0, S1, S2 over {a, b}).

    Note that the test suites for phases 1 and 2 should be derived from the specification and applied to the implementation to check it for output and transfer faults.

    53/103

    Figure: the specification S (left) and the implementation I (right); I contains a transfer fault (one transition with a wrong ending state).

    Apply the transition tour to the implementation I and comment.

    Transition tour applied to S:

    Input:        a.b.a.b.a.b
    Output of S:  0.1.0.0.1.0
    Output of I:  0.1.0.0.1.0

    The implementation I has a transfer fault; the fault is not detected by the transition tour.

    The transition tour detects all output faults but does not guarantee the detection of transfer faults.

    54/103

    Implementation I

    The goal of Phase 1 is identification of the states in the implementation.

    DS = a.b, Q = {ε, b, b.b}, P = {ε, a, b, b.a, b.b, b.b.a, b.b.b}

    Phase 1

    Q.DS = {r.a.b, r.b.a.b, r.b.b.a.b}

    Expected output of phase 1 is: {-.0.1, -.1.0.0, -.1.0.1.0}

    {-.0.1, -.1.0.0, -.1.0.1.0} observed outputs from I

    Phase 2

    P.DS = {r.a.b, r.a.a.b, r.b.a.b, r.b.a.a.b, r.b.b.a.b, r.b.b.a.a.b, r.b.b.b.a.b}

    {-.0.1, -.0.0.1, -.1.0.0, -.1.0.0.0, -.1.0.1.0, -.1.0.1.1.0, -.1.0.0.0.1} expected output

    {-.0.1, -.0.0.1, -.1.0.0, -.1.0.0.0, -.1.0.1.0, -.1.0.1.1.0, -.1.0.0.0.0} observed output from I: the transfer fault is detected (last test case)
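The two phases can be replayed in code. A sketch: the specification and the implementation are dicts, and the transfer fault consistent with the observed outputs on this slide is S1 -b/0-> S2 instead of S1 -b/0-> S0. Applying P.DS to both machines, only the last test case disagrees, while the transition tour a.b.a.b.a.b produces identical outputs and misses the fault:

```python
SPEC = {
    ("S0", "a"): ("S0", "0"), ("S0", "b"): ("S2", "1"),
    ("S1", "a"): ("S1", "1"), ("S1", "b"): ("S0", "0"),
    ("S2", "a"): ("S2", "0"), ("S2", "b"): ("S1", "0"),
}
# Transfer fault: S1 --b/0--> S2 instead of S0 (output unchanged).
IMPL = dict(SPEC)
IMPL[("S1", "b")] = ("S2", "0")

def run_outputs(fsm, state, seq):
    out = []
    for inp in seq:
        state, o = fsm[(state, inp)]
        out.append(o)
    return ".".join(out)

DS = "ab"
P = ["", "a", "b", "ba", "bb", "bba", "bbb"]   # transition cover
phase2 = [p + DS for p in P]                   # P.DS

failures = [t for t in phase2
            if run_outputs(SPEC, "S0", t) != run_outputs(IMPL, "S0", t)]
print(failures)  # only the last test case, "bbbab"

tour = "ababab"  # transition tour of the specification
print(run_outputs(SPEC, "S0", tour) == run_outputs(IMPL, "S0", tour))
# True: identical outputs, so the tour misses the transfer fault
```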

    55/103

    Specification S

    Figure: a machine with states S0, S1, S2 over inputs {a, b, c} and outputs {0, 1}.

    Outputs per state:

    state   S0    S1    S2
    a       0     0     1
    b       0     0     0
    c       1     0     0
    a.c     0.1   0.0   1.1

    Derive a UIO sequence for S:

    UIO for state S0 = c/1

    UIO for state S2 = a/1

    UIO for state S1 = a/0.c/0

    Transition tour for S:

    Input:  a.b.a.b.c.a.c.b.c
    Output: 0.0.0.0.0.1.1.0.0

    56/103

    U-Method: Unique Input/Output Sequences

    In DS and CS, the requirement of state identification is too strong: they answer the question "what is the current state of the implementation?"

    For testing, it is sufficient to know that an error has been detected

    UIO sequence of a state of an FSM: an I/O behavior that is not exhibited by any other state of the FSM

    It answers the question "is the implementation currently in state x?"

    Advantages over DS & CS

    Cost is never more than DS, and in practice is usually much less (shorter)

    Nearly all FSMs have UIO sequences for each state

    A DS is the same for all states; a UIO sequence is normally different for each state

    To check state s using the UIO sequence of s: apply the input part of the UIO and compare the output sequence with the expected one

    If they are the same, then the FSM is in state s; otherwise, it is not in state s

    If not in state s, there is no information about the identity of the actual state s'


    Analysis

    Fault Testing Coverage

    Fault coverage of the D-, W-, and U-methods is better than that of the T-method

    Fault coverage of the D-, W-, and U-methods is the same

    Summary

    All four methods assume a minimal, strongly connected and fully specified Mealy FSM model of protocol entities

    On average, the T-method produces the shortest test sequence, the W-method the longest. The D- and U-methods generate test sequences of comparable lengths

    T-method test sequences are able to detect output faults but not transfer faults

    D-, W-, and U-methods are capable of detecting all kinds of faults and give the same performance

    The U-method attracts more and more attention, and there are several approaches based on its basic idea with some improvements


    Examples


    General methodology for state identification based methods

    A) Test generation based on the specification

    A-1) Find the Q set or state cover: minimal input sequences that reach each state from the initial one

    A-2) Find the P set or transition cover: input sequences that cover all remaining transitions

    Generate test suites using the Q and P sets

    B) Fault detection

    B-1) Apply the generated test suites to the specification to obtain the expected outputs

    B-2) Apply the generated test suites to the implementation to obtain the observed outputs

    Compare the expected and observed outputs (test results). If they are different, then the verdict is fail; otherwise it is a pass for the applied test suites.
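The B-1/B-2 steps above can be sketched as a small runner that applies each test case to the specification and to the implementation, both modeled as Mealy machines, and compares the outputs. The two-state machines below are hypothetical examples, not the slides' FSMs.

```python
# Minimal sketch of the fault-detection step: run each test case on the
# specification and on the implementation, compare output sequences.
# Both FSMs map (state, input) -> (next_state, output).

def run(fsm, state, inputs):
    """Apply an input sequence to a Mealy FSM; return the output sequence."""
    outputs = []
    for x in inputs:
        state, out = fsm[(state, x)]
        outputs.append(out)
    return outputs

def verdict(spec, impl, init, test_suite):
    """'pass' iff the observed outputs match the expected outputs for all cases."""
    for tc in test_suite:
        if run(spec, init, tc) != run(impl, init, tc):
            return "fail"
    return "pass"

# Toy example (hypothetical): impl has an output fault on (s1, 'b').
SPEC = {("s0", "a"): ("s1", 0), ("s1", "a"): ("s0", 1),
        ("s0", "b"): ("s0", 1), ("s1", "b"): ("s1", 0)}
IMPL = dict(SPEC)
IMPL[("s1", "b")] = ("s1", 1)   # the injected output fault

print(verdict(SPEC, IMPL, "s0", [["a", "b"], ["b", "a"]]))   # fail
```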


    DS method Example

    a/x

    b/x

    b/y

    a/x

    a/y

    b/y

    1

    2

    3

    The specification S

    S

    A distinguishing sequence is: b.b

    If we apply it from:

    state 1 we obtain y.y

    state 2 we obtain y.x

    state 3 we obtain x.y

    a/x

    b/x

    b/y

    a/x

    a/y

    b/y

    1

    2

    3

    I2

    A test case which allows the detection of the transfer error is:

    a.b.b.b

    If we apply it from the initial state of:

    the specification we obtain x.x.y.y

    the implementation we obtain x.x.x.x

    Impl.
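The DS property above can be checked mechanically: a sequence is distinguishing iff applying it from every state yields pairwise-distinct outputs. The machine below reproduces the slide's outputs (b.b gives y.y, y.x, x.y from states 1, 2, 3; a from state 1 gives x and leads to state 3, per the a.b.b.b test case); arrow targets not recoverable from the text are assumed.

```python
# Sketch of a distinguishing-sequence check on a Mealy machine
# (state, input) -> (next_state, output). Some transitions are assumed.

def outputs_from(fsm, state, inputs):
    outs = []
    for x in inputs:
        state, o = fsm[(state, x)]
        outs.append(o)
    return "".join(outs)

def is_distinguishing(fsm, states, seq):
    """True iff seq produces a different output sequence from every state."""
    produced = [outputs_from(fsm, s, seq) for s in states]
    return len(set(produced)) == len(states)

S = {(1, "a"): (3, "x"), (1, "b"): (1, "y"),
     (2, "a"): (1, "x"), (2, "b"): (3, "y"),   # a-target assumed
     (3, "a"): (2, "y"), (3, "b"): (1, "x")}   # a-target assumed

print(is_distinguishing(S, [1, 2, 3], "bb"))   # True: outputs yy, yx, xy
print(is_distinguishing(S, [1, 2, 3], "a"))    # False: states 1 and 2 both give x
```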


    DS method

    a/x

    b/x

    b/y

    a/x

    a/y

    b/y

    1

    2

    3

    Phase 1: Identification of all states/ State cover

    From state 1, we can reach state 2 with b/y

    and state 3 with a/x

    We assume that the reset exists,

    Q = {ε, a, b}

    DS = b.b

    Test suite = {r.b.b, r.a.b.b, r.b.b.b}

    Phase 2: to cover all transitions for output faults

    and transfer faults

    P = {ε, a, b, a.b, a.a, b.b, b.a}

    Test suite: {r.b.b, r.a.b.b, r.b.b.b, r.a.b.b.b, r.a.a.b.b, r.b.b.b.b, r.b.a.b.b}
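The two phases above are pure concatenation: reset, then a prefix from Q (phase 1) or P (phase 2), then the DS. A minimal sketch, using dot-separated strings with "" as the empty sequence and r as the reset, as on the slide:

```python
# Assemble DS-method test suites by concatenating prefixes with the DS.

def suite(prefixes, ds):
    """Prefix each sequence with the reset r and append the DS."""
    return ["r" + ("." + p if p else "") + "." + ds for p in prefixes]

DS = "b.b"
Q = ["", "a", "b"]                                   # state cover
P = ["", "a", "b", "a.b", "a.a", "b.b", "b.a"]       # transition cover

print(suite(Q, DS))   # ['r.b.b', 'r.a.b.b', 'r.b.b.b']
print(suite(P, DS))   # matches the slide's phase-2 suite
```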

    DS method Example


    The test cases are:

    state 1: a.b.b, b.b.b

    state 3: a.a.b.b, a.b.b.b

    state 2: b.a.b.b, b.b.b.b

    Test case structure:

    preamble.tested transition.state identification


    Transition Tour example

    Transition tour

    TT: t1, t4, t3, t9, t2, t3, t6, t7, t8

    TT (input/expected output): a/1.b/2.a/1.a/2.b/2.a/1.b/2.a/2.b/2

    S1 S2

    S4 S3

    t1: a/1

    t2: b/2

    t4: b/2

    t3: a/1

    t6: b/2

    t7: a/2

    t8: b/2 t9: a/2

    Test hypothesis: Initially connected machine
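A transition tour can be checked mechanically: run the input sequence, collect outputs, and verify every (state, input) pair was exercised at least once. The sketch below uses the deck's later three-state machine (a loops on every state, b cycles S0 → S2 → S1 → S0) rather than the four-state example above, whose full topology is not given in the text.

```python
# Execute a transition tour on a Mealy FSM and report coverage.
# FSM maps (state, input) -> (next_state, output).

FSM = {("S0", "a"): ("S0", "0"), ("S0", "b"): ("S2", "1"),
       ("S2", "a"): ("S2", "0"), ("S2", "b"): ("S1", "0"),
       ("S1", "a"): ("S1", "1"), ("S1", "b"): ("S0", "0")}

def tour(fsm, start, inputs):
    """Return (output sequence, True iff all transitions were covered)."""
    state, outs, covered = start, [], set()
    for x in inputs:
        covered.add((state, x))
        state, o = fsm[(state, x)]
        outs.append(o)
    return ".".join(outs), covered == set(fsm)

print(tour(FSM, "S0", "ababab"))   # ('0.1.0.0.1.0', True): all 6 transitions covered
```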


    All state identification Methods

    Distinguishing Sequence, UIO, W

    Test hypotheses:

    H1) Strongly connected machine

    H2) Contains no equivalent states

    H3) Deterministic

    H4) Completely specified machine

    H5) A failure which increases the number of states does not occur

    The method is applied in two phases from the initial state:

    phase 1) α-sequence to check that each state defined by the specification also exists in the implementation.

    phase 2) β-sequence to check all the individual transitions in the specification for correct output and transfer in the implementation.

  • 7/29/2019 All Testing

    65/103

    DS Method

    S0 S1

    S4 S2

    a/0

    b/1

    a/1

    a/0

    b/1

    b/0

    Assume that the reset transition r/- exists.

    Q1) Verify whether a.a is a DS for S and explain why.

    Q2) Find a DS different from a.a, of length 2, for S.

    b/0

    a/0

  • 7/29/2019 All Testing

    66/103

    W method

    Assume that the reset exists and that it brings the machine from any state to the initial state.

    a) Find the characterization set W and generate the set of test cases for the specification S using the W-method.

    b) Does S have a DS? If not, explain why.

    a/0

    b/0

    b/1

    a/1

    a/0

    S0

    S2

    S1

    b/0

  • 7/29/2019 All Testing

    67/103

    W method

    S0: b/1

    S1: a/1

    S2: a/0, b/0

    W = ∪ Wi

    W = {a, b}

    Q = {ε, a, b} state cover

    P = {ε, a, b, a.b, a.a, b.a, b.b} transition cover

    P-Q = {a.b, a.a, b.a, b.b}

    Phase 1: Q.W = {r.a, r.b, r.a.a, r.a.b, r.b.a, r.b.b}

    Phase 2: (P-Q).W = {r.a.b.a, r.a.b.b, r.a.a.a, r.a.a.b, r.b.a.a, r.b.a.b, r.b.b.a, r.b.b.b}

    a/0

    b/0

    b/1

    a/1

    a/0

    S0

    S2

    S1

    b/0
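The two W-method phases above are, like the DS method, plain concatenation: reset, a prefix from Q (phase 1) or P-Q (phase 2), then each element of the characterization set W. A minimal sketch with dot-separated strings ("" is the empty sequence, r the reset):

```python
# Assemble W-method test suites: every prefix is combined with every
# element of the characterization set W.

def concat(prefixes, w_set):
    return ["r" + ("." + p if p else "") + "." + w
            for p in prefixes for w in w_set]

W = ["a", "b"]
Q = ["", "a", "b"]                                   # state cover
P = ["", "a", "b", "a.b", "a.a", "b.a", "b.b"]       # transition cover
PQ = [p for p in P if p not in Q]                    # P - Q

print(concat(Q, W))    # ['r.a', 'r.b', 'r.a.a', 'r.a.b', 'r.b.a', 'r.b.b']
print(concat(PQ, W))   # the slide's 8 phase-2 cases
```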

    Examples Suite


    S1

    S0

    S2

    a/0

    b/1

    a/0

    b/0

    a/1

    b/0

    Input \ State    S0    S1    S2

    a                0     1     0

    b                1     0     0

    a.b              0.1   1.0   0.0

    Derive a DS of length up to 2 for S

    a.b is a DS for S

    Specification S

    Comment: input a loops on every state, so the sequence a.a cannot be a DS; its output will be 0.0 or 1.1, and S0 and S2 both give 0.0.

    Transition tour:

    Input

    Output

    a.b.a.b.a.b

    0.1.0.0.1.0

    Q set: permits to reach each state

    from the initial state


    Q = {ε, b, b.b}

    The first b reaches the state S2

    b.b reaches the state S1

    The P set is the transition cover; it permits executing each transition at least once, starting from the initial state

    S1

    S0

    S2

    a/0

    b/1

    a/0

    b/0

    a/1

    b/0

    (Figure: execution tree rooted at S0, with branches labeled a and b, from which P is derived.)

    How to derive the P set: find all paths of length 1 and up, such that each transition is traversed at least once

    P = {ε, a, b, b.a, b.b, b.b.a, b.b.b}

    More than one P set may exist; this depends on the alternative paths that the automaton may have.


    The goal of phase 1 is the identification of the states in the implementation

    DS = a.b, Q = {ε, b, b.b}, P = {ε, a, b, b.a, b.b, b.b.a, b.b.b}

    Phase 1

    Q.DS = {r.a.b, r.b.a.b, r.b.b.a.b}. Expected output of phase 1 is:

    {-.0.1, -.1.0.0, -.1.0.1.0}

    Phase 2 (DS in bold)

    P.DS = {r.a.b, r.a.a.b, r.b.a.b, r.b.a.a.b, r.b.b.a.b, r.b.b.a.a.b, r.b.b.b.a.b}

    {-.0.1, -.0.0.1, -.1.0.0, -.1.0.0.0, -.1.0.1.0, -.1.0.1.1.0, -.1.0.0.0.1}

    S1

    S0

    S2

    a/0

    b/1

    a/0

    b/0

    a/1

    b/0

    Note that the test suites for phases 1 and 2 should be derived from the specification and applied to the implementation to check it for output and transfer faults.

    Implementation I


    S1

    S0

    S2

    a/0

    b/1

    a/0

    b/0

    a/1

    b/0

    S1

    S0

    S2

    a/0

    b/1

    a/0

    b/0

    a/1

    b/0

    Specification S

    Implementation I

    Apply the transition tour to the implementation I and comment

    Transition tour applied to S

    Input

    Output of S

    Output of I

    a.b.a.b.a.b

    0.1.0.0.1.0

    0.1.0.0.1.0

    The implementation I has a transfer fault; the fault is not detected by the transition tour.

    The transition tour detects all output faults but does not guarantee the detection of transfer faults.
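The contrast above can be reproduced in a few lines: I differs from S only in the target of S1's b-transition (S2 instead of S0, a transfer fault). The transition tour produces identical outputs on both machines, while a DS-based test case (b.b.b followed by the DS a.b) exposes the fault.

```python
# Transition tour vs. DS-based testing on the slide's machines S and I.
# FSMs map (state, input) -> (next_state, output).

def run(fsm, state, inputs):
    outs = []
    for x in inputs:
        state, o = fsm[(state, x)]
        outs.append(o)
    return ".".join(outs)

S = {("S0", "a"): ("S0", "0"), ("S0", "b"): ("S2", "1"),
     ("S2", "a"): ("S2", "0"), ("S2", "b"): ("S1", "0"),
     ("S1", "a"): ("S1", "1"), ("S1", "b"): ("S0", "0")}
I = dict(S)
I[("S1", "b")] = ("S2", "0")                 # the transfer fault: same output, wrong target

tt = "ababab"                                # the transition tour
print(run(S, "S0", tt) == run(I, "S0", tt))  # True: the tour cannot tell them apart

ds_case = "bbbab"                            # r.b.b.b.a.b without the reset symbol
print(run(S, "S0", ds_case))                 # 1.0.0.0.1  (expected)
print(run(I, "S0", ds_case))                 # 1.0.0.0.0  (observed: fault detected)
```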


    S1

    S0

    S2

    a/0

    b/1

    a/0

    b/0

    a/1

    b/0

    The goal of phase 1 is the identification of the states in the implementation

    DS = a.b, Q = {ε, b, b.b}, P = {ε, a, b, b.a, b.b, b.b.a, b.b.b}

    Phase 1

    Q.DS = {r.a.b, r.b.a.b, r.b.b.a.b}. Expected output of phase 1 is:

    {-.0.1, -.1.0.0, -.1.0.1.0}

    {-.0.1, -.1.0.0, -.1.0.1.0} observed outputs from I

    Phase 2 (DS in bold)

    P.DS = {r.a.b, r.a.a.b, r.b.a.b, r.b.a.a.b, r.b.b.a.b, r.b.b.a.a.b, r.b.b.b.a.b}

    {-.0.1, -.0.0.1, -.1.0.0, -.1.0.0.0, -.1.0.1.0, -.1.0.1.1.0, -.1.0.0.0.1} expected output

    {-.0.1, -.0.0.1, -.1.0.0, -.1.0.0.0, -.1.0.1.0, -.1.0.1.1.0, -.1.0.0.0.0} observed output from I, transfer fault detected

    Implementation I


    S2

    S0

    S1

    a/0

    b/0

    a/0

    b/0

    c/0

    a/1

    c/0

    b/0

    c/1

    Specification S

    Input \ State    S0    S1    S2

    a                0     0     1

    b                0     0     0

    c                1     0     0

    a.c              0.1   0.0   1.1

    Derive a UIO sequence for S

    UIO state S0 = c/1

    UIO state S2 = a/1

    UIO state S1 = a/0.c/0

    Transition tour for S

    a.b.a.b.c.a.c.b.c

    0.0.0.0.0.1.1.0.0
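The UIO derivation above can be checked with a small predicate: a sequence is a UIO for state s iff its output from s differs from its output from every other state. The outputs below follow the slide's a/b/c table; next states that the text does not fix are assumed.

```python
# Sketch of a UIO-sequence check on a Mealy machine
# (state, input) -> (next_state, output). Some next states are assumed.

def out_seq(fsm, state, inputs):
    outs = []
    for x in inputs:
        state, o = fsm[(state, x)]
        outs.append(o)
    return ".".join(outs)

def is_uio(fsm, states, s, inputs):
    """True iff `inputs` yields an output from s that no other state yields."""
    mine = out_seq(fsm, s, inputs)
    return all(out_seq(fsm, t, inputs) != mine for t in states if t != s)

M = {("S0", "a"): ("S0", "0"), ("S0", "b"): ("S1", "0"), ("S0", "c"): ("S0", "1"),
     ("S1", "a"): ("S2", "0"), ("S1", "b"): ("S2", "0"), ("S1", "c"): ("S0", "0"),
     ("S2", "a"): ("S0", "1"), ("S2", "b"): ("S0", "0"), ("S2", "c"): ("S1", "0")}

STATES = ["S0", "S1", "S2"]
print(is_uio(M, STATES, "S0", ["c"]))        # True: only S0 answers c with 1
print(is_uio(M, STATES, "S2", ["a"]))        # True: only S2 answers a with 1
print(is_uio(M, STATES, "S1", ["a", "c"]))   # True: 0.0 vs 0.1 (S0) and 1.1 (S2)
print(is_uio(M, STATES, "S1", ["b"]))        # False: b outputs 0 everywhere
```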


    Testing Assumptions and Hypotheses

    Objective: to reduce the set of test cases

    Assumptions about specifications


    completeness: completely specified or partially specified

    connectedness: strongly connected or initially connected

    reducibility: reduced or non-reduced

    determinism: deterministic or non-deterministic


    Assumptions about implementations


    t7: 1/2

    S1 S2

    S4 S3

    t1: 1/1

    t2: 2/2

    t4: 2/2

    t3: 1/1

    t6: 2/2

    t8: 2/2

    r/-

    r/-

    r/-

    r/-

    Deterministic

    Completely defined

    react to any input

    Limited extra states

    Reliable reset

    not necessary


    Regularity, a testing assumption


    This type of assumption allows testing to be limited to a finite set of behaviors in the case of systems that exhibit infinite behaviors. Examples are:

    programs (or specifications) with loops and integer input and output parameters

    finite state machines

    reactive systems, in general

    Principle: assume that the implementation has a regular behavior, which means that the number of control states of the implementation is limited.

    If the number of states is not bigger than the corresponding number of states of the specification, then all loops (of the specification) have to be tested only once.

    This is the idea behind the FSM fault model, where the number of implementation states is limited to n, or to some number m > n.

    This is also the idea behind certain approaches for testing program loops and for testing with respect to specifications in the form of abstract data types.

    Independency, a testing assumption


    Principle:

    The different submodules of the system under test are independent, and faults in one module do not affect the possibility of detecting the faults in the other modules.

    This is a controversial assumption:

    In most complex systems, modules or components are dependent.

    The reasons are: they share resources (e.g. memory)

    they have explicit interactions

    Example:

    several connections supported by a protocol entity

    test only one connection in detail (it is independent of the others)

    the others need not be tested, since they are all equal (uniformity assumption, see below)


    Independency (continued)

    The independency relation is a reasonable assumption in certain cases.

    Example:

    (Figure — equipment to test: several Entity N instances above Entity N+1, each with its own SAP.)

    Uniformity, a testing assumption


    Uniformity assumption / Congruence

    Origin: Partition Testing [Weyuker 91]

    Principle

    There exist similar behaviors. If they are grouped under an equivalence relation, then it is sufficient to test one behavior of each equivalence class for conformance testing.

    Special cases:

    Principle of partition testing: apply a test for at least one representative of each partition of the input domain (software testing, EFSM testing)

    Equivalent actions for EFSM

    Equivalent states for FSM

    Fairness in respect to non-determinism


    Many systems have a non-deterministic nature. In particular, the parallelism of distributed systems introduces many possible interleavings of individual actions within the different system components.

    The assumption is that all the execution paths effectively realized during testing cover all paths that are pertinent for detecting the possible implementation faults.

    a/1

    a/2

    a/4

    s1

    s2

    s3

    s4

    non-determinism

    Partially defined FSMs


    Non-specified transitions need not be tested. However, different interpretations of undefinedness have an impact on testing:

    completeness assumption

    a non-specified transition is implicitly defined, e.g. stay in the same state (as in SDL), or go to an error state

    methods for completely defined FSMs may be applied; however, the test will rely on implied transitions

    don't care

    no specific behavior is specified

    non-specified transitions must be avoided by test cases

    robustness tests may be applied to check the reaction of the implementation in non-specified situations

    forbidden

    it is not possible to invoke non-specified transitions
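The completeness-assumption reading above can be sketched as a preprocessing step: every non-specified (state, input) pair is implicitly completed, e.g. SDL-style as a self-loop with a null output, after which methods for completely defined FSMs apply. The partial machine below is hypothetical.

```python
# Complete a partial Mealy FSM under the completeness assumption:
# missing (state, input) pairs become self-loops with a null output.

def complete(fsm, states, inputs, null="-"):
    full = dict(fsm)
    for s in states:
        for x in inputs:
            full.setdefault((s, x), (s, null))   # implied transition
    return full

PARTIAL = {("S0", "a"): ("S1", "1")}             # only one transition specified
print(complete(PARTIAL, ["S0", "S1"], ["a", "b"]))
```

An alternative interpretation, going to a designated error state instead of looping, is a one-line change to the default value.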


    Fault Coverage Evaluation


    Methods for Fault Coverage Evaluation

    The definition of fault coverage always depends on the fault model!

    Exhaustive mutation analysis

    Monte-Carlo simulation method

    Deciding completeness

    minimize an FSM which is given in the form of the TS; if its minimal form is equivalent to the given FSM, then the TS is complete (the maximal number of states is assumed), otherwise it is not complete [see Yao]

    Structural analysis

    evaluates the fault coverage of a given test suite by directly analyzing the test suite against the given FSM: count the number of states distinguished and transitions checked by the test suite. A numeric measure that is easy to evaluate (linear complexity) [see Yao]

    Different possible measures

    compare the number of implementations (common approach)

    compare the log of the number of implementations (corresponds to counting transitions covered) [called order coverage by Yao]
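The transition-counting part of structural analysis can be sketched in a few lines: replay each test case from the initial state (a reset before each case is assumed) and record which transitions of the given FSM it exercises. The machine and suite below are illustrative.

```python
# Sketch of a structural, linear-time coverage measure: the fraction of
# FSM transitions exercised by a test suite. Reset before each case assumed.

FSM = {("S0", "a"): ("S0", "0"), ("S0", "b"): ("S2", "1"),
       ("S2", "a"): ("S2", "0"), ("S2", "b"): ("S1", "0"),
       ("S1", "a"): ("S1", "1"), ("S1", "b"): ("S0", "0")}

def transition_coverage(fsm, start, suite):
    covered = set()
    for case in suite:
        state = start                      # reset before each test case
        for x in case:
            covered.add((state, x))
            state, _ = fsm[(state, x)]
    return len(covered) / len(fsm)

print(transition_coverage(FSM, "S0", ["ab", "ba"]))   # 0.5: 3 of 6 transitions
```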


    Test Architectures

    How do we stimulate protocol entities for testing purposes?


    OSI Terminology


    Conformance Testing Terminology

    The PCO has two FIFO queues:

    Send (from tester to IUT)

    Receive (by tester from IUT)

    ASP: Abstract Service Primitive

    PCO: Point of Control and Observation

    IUT: Implementation Under Test

    PDU: Protocol Data Unit

    A PCO maps to a SAP (Service Access Point) in the OSI reference model


    Conceptual Test Architecture (1/3)

    There can be several LTs and UTs used simultaneously


    Conceptual Test Architecture (2/3)

    Testing contexts

    Single-party testing is for testing an IUT which communicates with exactly one real open system, represented by a single lower tester

    Multi-party testing is for testing an IUT communicating with multiple real open systems, represented by more than one lower tester

    The configuration of IUT components can be homogeneous or heterogeneous

    Lower tester (LT) controls and observes the IUT's lower service boundary, indirectly, via the underlying service provider

    In single-party testing, it behaves as the peer entity to the IUT

    In multi-party testing, the LTs act as peer entities working in parallel

    Lower tester control function (LTCF) coordinates all LTs

    Assigns the test case verdicts

    Mandatory in the multi-party context, inapplicable in the single-party context


    Conceptual Test Architecture (3/3)

    Upper tester (UT) controls and observes the IUT's upper service boundary, by operator access, API, or hardware interface

    In the single-party context, the UT behaves as a user of the IUT

    In the multi-party context, UTs working in parallel act as users of the IUT

    Test coordination procedures (TCPs) are used to ensure cooperation between the UTs and LTs

    How a tester shall respond

    Passing (preliminary) results

    Synchronisation

    TCP is NOT Transmission Control Protocol, as in TCP/IP


    ATM Classification

    ATMs for multi-party testing

    Several parallel upper and lower testers

    In complex situations an upper tester control function (UTCF) is needed

    Special cases include only one upper tester, or even no upper tester at all

    ATMs for single-party testing

    Local Test Method (L)

    Upper Tester and Lower Tester in the Test System

    Distributed Test Method (D)

    Upper Tester in SUT, Lower Tester in Test System

    Co-ordinated Test Method (C)

    As above, uses Test Management Protocol

    Remote Test Method (R)

    Lower Tester in Test System, no Upper Tester


    Test Case

    Recall the service primitives:

    Request

    Indication

    Response

    Confirm


    Local Test Method

    Upper Tester is located in Test System

    Requires an upper interface on the IUT

    The IUT is built into the tester

    No ATSs for this method

    Good for the testing of a hardware component

    Example: Ethernet driver



    Distributed Test Method

    UT in the SUT, LT remote

    Requires synchronization

    Suitable for upper-layer protocols / protocols offering an API

    Example: socket communication



    Co-ordinated Test Method

    UT in the SUT but not accessible, LT remote

    No assumption of an upper interface to the IUT

    Uses only one PCO, below the LT

    Uses a Test Management Protocol (TMP) embedded in ASPs

    Suitable for mid-layer protocols



    Remote Test Method

    No Upper Tester

    The Upper Tester can be a native application or a user-accessible interface

    Manual co-ordination

    Limited, but easy to use



    ATMs Put Together


    References (1/2)

    C. E. Chow. Introduction to protocol engineering. 2004. cs.uccs.edu/~cs522/pe/

    G.O. Chistokhvalov. Communication software and architecture, lecture notes. 2002. www.it.lut.fi/kurssit/02-03/010607000/index_eng.html

    G.J. Holzmann. Design and validation of computer protocols. Chapters 8-11. Prentice-Hall. 1991. ISBN 0-13-539925-4. spinroot.com/spin/Doc/Book91.html

    A. Petrenko. Introduction to the theory of experiments on finite state machines, lecture notes. 2003. www.bretagne.ens-cachan.fr/DIT/People/Claude.Jard/sem_13_05_2003_petrenko_trans.pdf

    Igor Potapov. Protocol engineering, lecture notes. 2004. www.csc.liv.ac.uk/~igor/COMP201/

    Chris Ling. The Petri Net method, lecture notes. 2001. www.csse.monash.edu.au/courseware/cse5510/Lectures/lecture2b.ppt

    Gabriel Eirea. Petri nets, lecture notes, UC Berkeley, 2002. www.cs.unc.edu/~montek/teaching/spring-04/petrinets.ppt

    T.-Y. Cheung. Petri nets for protocol engineering. Elsevier Computer Communications, 19, 1996: 1250-1257

    R. Zurawski and M.C. Zhou. Petri Nets and industrial applications: a tutorial. IEEE Trans. Industrial Electronics, vol. 41, no. 6, 1994: 567-583


    References (2/2)

    T. Murata. Petri nets: properties, analysis and applications. Proceedings of the IEEE, vol. 77, no. 4, 1989: 541-580

    G.V. Bochmann and R. Gotzhein. Deriving protocol specifications from service specifications. ACM Trans. on Computer Systems, vol. 8, no. 4, 1990: 255-283

    R.L. Probert and K. Saleh. Synthesis of communication protocols: survey and assessment. IEEE Trans. Computers, vol. 40, no. 4, 1991: 468-476

    Mark Claypool. Modeling and performance evaluation of network and computer systems, lecture notes, 2004. www.cs.wpi.edu/~claypool/courses/533-S04/

    R. Dssouli and F. Khendek. Test development for distributed systems, 2000. www.ece.concordia.ca/~dssouli/Testing.pdf

    R. Lai. A survey of communication protocol testing. Elsevier Journal of Systems and Software, 62, 2002: 21-46

    G.V. Bochmann and A. Petrenko. Protocol testing: review of methods and relevance for software testing. Proc. ACM ISSTA, Seattle, Washington, USA, 1994: 109-124

    A.T. Dahbura, K.K. Sabnani, and M.U. Uyar. Formal methods for generating protocol conformance test sequences. Proceedings of the IEEE, vol. 78, no. 8, 1990: 1317-1326

    D.P. Sidhu and T.-K. Leung. Formal methods for protocol testing: a detailed study. IEEE Trans. Software Engineering, vol. 15, no. 4, 1989: 413-426