Testing Object-Oriented Software – Testing Models
ECEN5033, Testing OO Software Part Two
Software Engineering of Standalone Programs, University of Colorado
January 20, 2002


Adapting Inspections to Models

1. Specify scope – a body of material or a set of use cases (for a small project, the scope may be the entire model)
2. Specify depth – the level of detail to be covered
3. Identify the basis from which the model under test (MUT) was created – the set of models from the previous phase
4. Develop test cases for each of the evaluation criteria to be applied, using the contents of the basis model as input – scenarios from the use case model are a good starting point for test cases for many models (see the sketch after this list)
5. Establish criteria for measuring test coverage – e.g., sufficient use cases to touch every class in a class diagram
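To make steps 4 and 5 concrete, here is a minimal sketch of how a model-level test case might be recorded so that coverage can later be measured against it. The class name, fields, and the example use case are invented for illustration; they are not part of the course material.

    // Hypothetical structure for one guided-inspection test case; all names are illustrative.
    import java.util.Set;

    public class ModelTestCase {
        private final String useCaseId;            // scenario source in the basis model (step 4)
        private final String scenario;             // stimulus described in domain terms
        private final String expectedResult;       // expected outcome, stated against the basis model
        private final Set<String> elementsTouched; // MUT elements the scenario exercises (step 5)

        public ModelTestCase(String useCaseId, String scenario,
                             String expectedResult, Set<String> elementsTouched) {
            this.useCaseId = useCaseId;
            this.scenario = scenario;
            this.expectedResult = expectedResult;
            this.elementsTouched = elementsTouched;
        }

        public Set<String> elementsTouched() { return elementsTouched; }
    }

    // Example: a scenario taken from use case UC-03 that touches two classes in the class diagram.
    // new ModelTestCase("UC-03", "Customer withdraws cash", "Balance is reduced",
    //                   Set.of("Account", "Customer"));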


Adapting Inspections to Models -- continued

6. Perform the static analysis with an appropriate checklist; ensure consistency between the MUT and the basis model
7. “Execute” the test cases
8. Evaluate the effectiveness of the tests using the coverage measurement; calculate the coverage percentage
   • Testing of analysis models and design models is so high-level that 100% coverage is necessary to achieve good results
9. If coverage is insufficient, either expand the test suite and apply the additional test cases, or terminate the testing if additional test cases (e.g., use cases) need to be written


Coverage in Models

• Model elements – class, relationship, object, message
• A test case “covers” an element if it uses that element as part of the test case
  – A single test case using a particular element probably does not exhaust all possible values of the attributes of that element
  – e.g., using an object from a class to receive a single message does not test the other methods of the same class
• Farther along in the development cycle, model detail increases and coverage detail increases with it


Levels of coverage

• For a domain model, creating a single object from a class is enough to consider that class “covered”
  – Coverage at this level is a percentage of the classes and relationships covered
• At the design level, use every method in an interface before saying a class is covered
  – Coverage may be stated by counting all of the methods in the model rather than all of the classes
• The more abstract the classes, the higher the level of coverage required (see the coverage sketch below)
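A minimal sketch of how the coverage percentage at these two levels could be computed, assuming each test case records the model elements it touches. The model contents and test cases shown are invented; only the counting rule comes from the slides.

    // A minimal coverage-measurement sketch; the model and test-case contents are hypothetical.
    import java.util.*;

    public class ModelCoverage {
        /** Percentage of model elements (classes, methods, ...) touched by at least one test case. */
        static double coverage(Set<String> modelElements, List<Set<String>> elementsPerTestCase) {
            Set<String> touched = new HashSet<>();
            for (Set<String> used : elementsPerTestCase) {
                touched.addAll(used);
            }
            touched.retainAll(modelElements);             // ignore anything outside the model under test
            return 100.0 * touched.size() / modelElements.size();
        }

        public static void main(String[] args) {
            // Domain-model level: an element is a class; one object created from it counts as "covered".
            Set<String> classes = Set.of("Customer", "Account", "Transaction");
            List<Set<String>> byTestCase = List.of(
                    Set.of("Customer", "Account"),        // scenario 1 touches two classes
                    Set.of("Account"));                   // scenario 2 touches one class
            System.out.printf("Class coverage: %.0f%%%n", coverage(classes, byTestCase));   // 67%

            // Design level: count every method in every interface instead of just the classes.
            Set<String> methods = Set.of("Account.open", "Account.deposit", "Account.close");
            List<Set<String>> methodsUsed = List.of(Set.of("Account.open", "Account.deposit"));
            System.out.printf("Method coverage: %.0f%%%n", coverage(methods, methodsUsed)); // 67%
        }
    }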


Selecting Test Cases

• Usually many test cases can be developed from a specific use case
• Test case selectors:
  – Equivalence classes (see the sketch below)
  – Logical paths – use case paths
  – Orthogonal Defect Classification triggers
  – Use profiles – see the operational profiles notes
  – Risk is appropriate as a test case selector
    • during development, when actively searching for defects
    • not after development, when looking for a level of reliability – then the weight shifts to use profiles
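As one illustration of the first selector, here is a sketch of equivalence-class selection for a single invented input (a withdrawal amount): partition the input domain, then take one representative test case per class. The classes and boundary values are assumptions made for the example, not taken from the notes.

    // Equivalence classes as a test-case selector, sketched for an invented "withdrawal amount" input.
    import java.util.*;
    import java.util.function.Predicate;

    public class EquivalenceClassSelector {
        /** One equivalence class: a name, a membership rule, and one representative value to test. */
        record EqClass(String name, Predicate<Integer> member, int representative) {}

        public static void main(String[] args) {
            int balance = 500;
            List<EqClass> classes = List.of(
                    new EqClass("negative amount (reject)",     a -> a < 0,                 -10),
                    new EqClass("zero amount (reject)",         a -> a == 0,                  0),
                    new EqClass("valid amount (accept)",        a -> a > 0 && a <= balance, 250),
                    new EqClass("amount over balance (reject)", a -> a > balance,           900));

            // One test case per class is usually enough at the model level.
            for (EqClass ec : classes) {
                assert ec.member().test(ec.representative());  // representative really belongs to its class
                System.out.println("Test case: withdraw " + ec.representative() + "  [" + ec.name() + "]");
            }
        }
    }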


ODC

• Orthogonal Defect Classification was developed at IBM
  – “Triggers” are activities that cause a defect to be detected
  – IBM analyzed a large amount of defect data and identified the triggers
  – They grouped these triggers by when they occurred, such as during reviews and inspections
  – If the guided inspection is structured to encounter as many triggers as possible, the tests are more likely to expose as many failures as possible


Orthogonal Defect Classification “review & inspection” triggers

• Design conformance – comparison of the basis and current models; comparison of the current model to the requirements
• Operation semantics – tracing the logic
• Concurrency – examining synchronization between threads/processes
• Backward compatibility – comparison to previous products
• Lateral compatibility – compatibility with the interfaces that use this one
• Rare situation – examining unspecified system behavior
• Side effects – behavior outside the scope of the current product
• Document consistency/completeness
• Language dependencies – examining for language-specific details
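A small sketch of one way to check how many of these triggers a planned set of guided-inspection test cases would exercise, so that test cases can be added until the important triggers are covered. The test-case names and the trigger assignments are invented for illustration.

    // Checking how many ODC review/inspection triggers a planned inspection exercises.
    import java.util.*;

    public class TriggerCoverage {
        enum Trigger {
            DESIGN_CONFORMANCE, OPERATION_SEMANTICS, CONCURRENCY, BACKWARD_COMPATIBILITY,
            LATERAL_COMPATIBILITY, RARE_SITUATION, SIDE_EFFECTS, DOCUMENT_CONSISTENCY, LANGUAGE_DEPENDENCIES
        }

        public static void main(String[] args) {
            // Which triggers each planned test case is expected to exercise (illustrative only).
            Map<String, Set<Trigger>> plan = Map.of(
                    "TC-1 trace withdrawal logic", Set.of(Trigger.OPERATION_SEMANTICS, Trigger.DESIGN_CONFORMANCE),
                    "TC-2 concurrent deposits",    Set.of(Trigger.CONCURRENCY, Trigger.RARE_SITUATION));

            EnumSet<Trigger> hit = EnumSet.noneOf(Trigger.class);
            plan.values().forEach(hit::addAll);

            EnumSet<Trigger> missed = EnumSet.complementOf(hit);
            System.out.println("Triggers exercised:     " + hit);
            System.out.println("Triggers still to cover: " + missed);  // add test cases until this is empty
        }
    }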


“Executing test cases”

• If a prototype exists, create executable test cases (see the sketch below)
• If not, hold an interactive session with testers and developers
  – perform a symbolic execution to simulate the processing
  – walk the testers through the scenarios provided by the test cases while using the documents available – state diagrams, sequence diagrams, class diagrams, etc.
• Resist the urge to let this switch into another design session
  – As problems are uncovered, developers want to change the MUT – that stops testing and becomes debugging instead
  – It diverts attention from finding other defects
  – Record the problems instead
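If a prototype does exist, a model-level scenario can be turned into an executable test. A minimal sketch using JUnit 4 is shown below; the Account class stands in for a prototype class and is invented for the example.

    // One model-level scenario expressed as an executable JUnit 4 test against a prototype.
    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class WithdrawalScenarioTest {
        /** Minimal stand-in for the prototype class exercised by the scenario (illustrative only). */
        static class Account {
            private int balance;
            Account(int opening) { balance = opening; }
            void withdraw(int amount) { balance -= amount; }
            int balance() { return balance; }
        }

        @Test
        public void validWithdrawalReducesBalance() {
            Account account = new Account(500);    // precondition stated in the test case
            account.withdraw(250);                 // the scenario's single stimulus
            assertEquals(250, account.balance());  // expected result from the model-level test case
        }
    }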


Criteria for a Requirements Inspection

• Completeness in this model means
  – The use cases represent all of the functionality needed for a satisfactory product, and no use case is included that is not required functionality
  – If possible, this is judged by an independent group of domain experts and product definition people
• Correctness – each use case accurately represents a requirement
  – The act of writing test cases for a guided inspection identifies many requirements that are not precise enough to result in a test case
• Consistency – any piece of system functionality is specified in the same manner everywhere it is described
  – End-to-end scenarios help locate inconsistencies


Testing the requirements

1. Rank the use cases
2. Determine the total number of test cases that can be constructed given the resources available
3. Ration the tests based on the ranking (see the rationing sketch below)
4. Write scenarios based only on the knowledge of a domain expert (not a developer, and not those who wrote the use cases, if possible)
5. The inspection – see the next slide
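A sketch of steps 2 and 3: given a test-case budget that the available resources allow, ration the cases across the ranked use cases in proportion to their rank. The use cases, weights, and the proportional rule are illustrative assumptions.

    // Rationing a fixed test-case budget across ranked use cases (all values are invented).
    import java.util.*;

    public class TestRationing {
        public static void main(String[] args) {
            int budget = 20;                                  // total test cases the resources allow (step 2)
            // Higher weight = higher-ranked (riskier / more important) use case (step 1).
            Map<String, Integer> rankWeight = new LinkedHashMap<>();
            rankWeight.put("UC-1 Withdraw cash", 5);
            rankWeight.put("UC-2 Deposit funds", 3);
            rankWeight.put("UC-3 Print statement", 2);

            int totalWeight = rankWeight.values().stream().mapToInt(Integer::intValue).sum();
            for (Map.Entry<String, Integer> uc : rankWeight.entrySet()) {
                // At least one test case per use case, the rest in proportion to rank (step 3).
                int share = Math.max(1, Math.round(budget * uc.getValue() / (float) totalWeight));
                System.out.println(uc.getKey() + " -> " + share + " test cases");
            }
        }
    }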


Testing requirements

5. The inspection
   – The writer presents a scenario
   – The requirements modelers identify the use cases that contain the test scenario as a main scenario, extension, exception, or alternate path
   – If there is no match, identify this as an incompleteness defect
   – If the scenario could be represented by two or more use cases at the same level of abstraction, identify this as an inconsistency defect
   – In either case, ask whether a use case is incorrect in a limited way such that, if corrected, it would handle the scenario (see the classification sketch below)
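The matching step above amounts to a small decision rule. The sketch below classifies a scenario by how many use cases (at one level of abstraction) were found to contain it; the names and the list-based representation are assumptions made for the example.

    // Classifying one inspection scenario by the number of use cases that claim it.
    import java.util.List;

    public class ScenarioClassifier {
        enum Finding { COVERED, INCOMPLETENESS_DEFECT, INCONSISTENCY_DEFECT }

        /** matches = use cases (at one level of abstraction) containing the scenario as a main
         *  scenario, extension, exception, or alternate path. */
        static Finding classify(List<String> matches) {
            if (matches.isEmpty()) return Finding.INCOMPLETENESS_DEFECT; // no use case handles the scenario
            if (matches.size() > 1) return Finding.INCONSISTENCY_DEFECT; // same behavior specified twice
            return Finding.COVERED;
            // In either defect case, also ask whether a use case is incorrect in a limited,
            // fixable way that would let it handle the scenario.
        }

        public static void main(String[] args) {
            System.out.println(classify(List.of()));               // INCOMPLETENESS_DEFECT
            System.out.println(classify(List.of("UC-4", "UC-7"))); // INCONSISTENCY_DEFECT
            System.out.println(classify(List.of("UC-4")));         // COVERED
        }
    }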


Test Reuse

• Reusable assets
  – The ranking of use cases
  – The constructed test cases
• The requirements model will be the basis for testing several other models


Domain Analysis Model

• It is most helpful if the domain model can be created by one group of domain experts and tested by a different group of domain experts


Criteria for domain model inspection

• Completeness – the concepts are sufficient to cover the scope of the content specified, and sufficient detail is given to describe the concepts to the required depth.

• Correctness – The descriptions of domain concepts are accurate; the algorithms will produce the expected results

• Consistency – Model elements should be consistent with the company’s definitions and meanings.

• Note: A test case at this level only states details to the level of the domain concepts.


Detailed Class Design Model

• Detailed class design populates the architectural model with the classes that will implement the interfaces defined in the architecture
  – A set of class diagrams
  – Pre- and postconditions for every method of every class
  – State diagrams for each class
  – Activity diagrams are suggested for significant algorithms
• The focus is on compliance with the architecture


Criteria for a class design model inspection

• Completeness – Classes are defined for each interface in the architecture. The preconditions for each method specify sufficient information so that the user can safely use the method. The postconditions for a method show error conditions as well as the normally expected result. (See the contract sketch below.)
• Correctness – Each class accurately implements the semantics of an interface. For those classes that correspond to interfaces in the architecture, the class’s specification must correspond to that interface.
• Consistency – The behaviors in the interface of each class provide either a single way to accomplish a task or, if there are multiple ways to accomplish a task, the same behavior with different preconditions.
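A sketch of what the completeness criterion asks for on a single method: a precondition that tells the caller how to use the method safely, and postconditions that cover both the error condition and the normal result. The Account class and its rules are invented for illustration.

    // One method specification from a detailed class design, with pre- and postconditions
    // documented and checked at run time (the class and its rules are illustrative only).
    public class Account {
        private int balance;

        public Account(int openingBalance) {
            if (openingBalance < 0) throw new IllegalArgumentException("opening balance must be >= 0");
            this.balance = openingBalance;
        }

        /**
         * Precondition:  amount > 0 and amount <= balance (the caller can check getBalance() first).
         * Postcondition (normal): balance is reduced by amount and the new balance is returned.
         * Postcondition (error):  IllegalArgumentException is thrown and the balance is unchanged.
         */
        public int withdraw(int amount) {
            if (amount <= 0 || amount > balance) {
                throw new IllegalArgumentException("violated precondition: 0 < amount <= balance");
            }
            int before = balance;
            balance -= amount;
            assert balance == before - amount : "postcondition violated";
            return balance;
        }

        public int getBalance() { return balance; }
    }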


Guided inspection of design model

• Test execution in this context is an interactive session
  – Construct a message-sequence diagram that includes the preconditions for a test case
• Verification of results
  – When the output from the tests is in the form of diagrams, the resulting diagrams must be verified by domain experts
• Evaluating quality attributes
  – The test cases used for the basic inspection can be used to analyze the expected performance – more on this later


Incremental, iterative development

• If the product is being developed in multiple iterations per increment and multiple increments:
  – Tests must be repeatable
  – Write down the test cases used in the various inspections
  – On successive iterations, reapply (see the sketch below)
    • all tests that failed the last time
    • some tests that passed
    • new or enhanced tests that cover the new features
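A sketch of that reapplication rule for the next iteration: keep every test that failed last time, every test that covers a new feature, and a sample of the tests that passed. The record fields and the sampling rate are assumptions for the example.

    // Choosing which recorded inspection tests to reapply in the next iteration.
    import java.util.*;

    public class RegressionSelection {
        record RecordedTest(String id, boolean failedLastTime, boolean coversNewFeature) {}

        static List<RecordedTest> selectForNextIteration(List<RecordedTest> recorded, double passSampleRate) {
            List<RecordedTest> next = new ArrayList<>();
            Random sample = new Random(42);              // fixed seed keeps the selection repeatable
            for (RecordedTest t : recorded) {
                if (t.failedLastTime() || t.coversNewFeature()) {
                    next.add(t);                         // always reapply these
                } else if (sample.nextDouble() < passSampleRate) {
                    next.add(t);                         // reapply only a sample of the passing tests
                }
            }
            return next;
        }

        public static void main(String[] args) {
            List<RecordedTest> recorded = List.of(
                    new RecordedTest("TC-1", true,  false),
                    new RecordedTest("TC-2", false, false),
                    new RecordedTest("TC-3", false, true));
            System.out.println(selectForNextIteration(recorded, 0.25));
        }
    }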


Special Inspections for Extensibility, etc.

• The “charter” may be to achieve more aggressive goals such as developing an extensible design, a reusable framework, etc.
• The products of the analysis and design phases are the most critical for achieving these types of objectives
• Test scenarios are developer actions, not user actions
• The question is, “How must the classes of the system be changed to provide the newly required behavior?”
• Maintain these as change cases – a change case is a use case that is not a requirement but an anticipated change


Testing special objectives

• Explicitly state the objective
• Construct a “change case” including a specific scenario that illustrates the objective
• Create test cases by sampling from the range permitted by the change case
• Enumerate the work needed to achieve the objective by specifying the differences in state and behavior required for the new objective; this can be accomplished by identifying the new subclasses that must be defined (see the sketch below)
• Evaluate the current design relative to the design required to achieve the objective
• Repeat with additional test scenarios until all the proposed changes have been examined
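A sketch of how enumerating the work for a change case might look in code terms: if the anticipated change can be met by defining one new subclass against the existing abstraction, with no edits to the existing classes, the current design scores well against that objective. All class names here are invented.

    // Evaluating a change case ("add a money market account") against an existing abstraction.
    public class ChangeCaseSketch {
        /** Existing design: the abstraction the architecture already provides. */
        abstract static class Account {
            protected int balance;
            Account(int opening) { balance = opening; }
            abstract int monthlyInterest();        // behavior that varies by account type
            int balance() { return balance; }
        }

        /** Existing concrete class; it does not need to change for the change case. */
        static class SavingsAccount extends Account {
            SavingsAccount(int opening) { super(opening); }
            int monthlyInterest() { return balance * 2 / 100; }
        }

        /** The change case: only this new subclass must be written. */
        static class MoneyMarketAccount extends Account {
            MoneyMarketAccount(int opening) { super(opening); }
            int monthlyInterest() { return balance * 4 / 100; }
        }

        public static void main(String[] args) {
            Account anticipated = new MoneyMarketAccount(1000);
            System.out.println("New behavior via one new subclass: " + anticipated.monthlyInterest());
        }
    }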


Test Process

• In Chapter 3, McGregor and Sykes deal extensively with the testing process and test planning
  – Five dimensions determine a testing process:
    • Who performs the testing?
    • Which pieces will be tested?
    • When will testing be performed?
    • How will testing be performed?
    • How much testing is adequate?
  – Use a risk analysis to rank the importance of the use cases to the development effort


Test Planning

• Test planning activities
  – Scheduling testing activities
  – Estimating based on a use-case unit
  – Selecting an organization model for the testing staff
  – Test plan templates for the project test plan, component test plans, integration test plans, use case test plans, and the system test plan
  – The IEEE 829 Standard Test Plan


IEEE 829 Standard Test Plan Outline

1. Introduction
2. Test Items
3. Tested Features
4. Features Not Tested (per test cycle)
5. Testing Strategy and Approach
   1. Syntax
   2. Description of functionality
   3. Arguments for tests
   4. Expected output
   5. Specific exclusions
   6. Dependencies
   7. Test case success/failure criteria


IEEE 829 Standard Test Plan Outline -- continued

6. Pass/Fail Criteria for the Complete Test Cycle
7. Entrance Criteria/Exit Criteria
8. Test-Suspension Criteria and Resumption Requirements
9. Test Deliverables/Status Communications Vehicles
10. Testing Tasks
11. Hardware and Software Requirements
12. Problem Determination and Correction Responsibilities
13. Staffing and Training Needs/Assignments
14. Test Schedules
15. Risks and Contingencies
16. Approvals


Topics in this section

• Testing models
• Coverage
• Selecting test cases
• Execution of test cases
• Criteria for and testing of
  – Requirements
  – Domain model
  – Detailed class design model
• Incremental, iterative development of test cases
• Extensibility inspections
• Special objective inspections
• Test process, test planning, and the IEEE standard test plan