
Page 1

Software Testing

©Dr. David A. Workman

School of EE and Computer Science

March 19, 2007

Page 2


Software Testing

• Reference: Software Engineering: Theory and Practice, by Shari Lawrence Pfleeger, Prentice Hall, © 2001, ISBN 0-13-029049-1.

Page 3


Software Faults & Failures

Why Does Software Fail?
– Useful software systems are large and complex and require complicated processes to build; complexity increases the likelihood that errors will be made in development and that faults will be introduced.

– The customer and users of a system may not be clear on what they want or need, or may simply change their minds. Uncertainty and confusion increase the likelihood that errors will be made and faults introduced. Changes to the requirements, design, and code increase the opportunities for errors and the introduction of faults.

What do we mean by "Software Failure"?
We usually mean that the software does not do what the requirements describe.

Page 4


Software Faults & Failures

Reasons for Software Failures:
– The specification may be wrong or misleading. It may not reflect the actual customer or user needs.

– The specification may contain a requirement that is impossible to meet given prescribed hardware and/or software.

– The system design may contain a fault.

– The software design may contain a fault.

– The program code may incorrectly implement a requirement.

Purpose of Software Testing:
– Fault Identification: the process of determining what fault or faults caused an observed failure.

– Fault Correction & Removal: the process of making changes to the software to remove identified faults.

Page 5


Software Faults & Failures

Types of Faults
– Algorithm Faults (logic errors): the algorithm does not give the correct output when presented with a given input. Examples (a minimal sketch follows this list):
• Branching in the wrong place

• Testing the wrong condition

• Forgetting to initialize variables

• Forgetting to check for data and parameters outside design limits

• Comparing values of incompatible types.

– Computation & Precision Faults: the implementation of a formula is wrong or does not compute the result with sufficient accuracy; e.g. truncation, rounding, use of real data when integers are called for.

– Documentation Faults: comments do not describe what the code is doing or should be doing; requirements are poorly or ambiguously stated, or perhaps even wrong.

– Stress & Overload Faults: data structures are filled beyond their capacity; e.g. array index out of bounds.

– Capacity or Boundary Faults: system performance deteriorates when design limits are approached.
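A small, hypothetical illustration (mine, not from the slides) of algorithm faults from the list above: a routine that is supposed to average scores in a design range of 0..100, but omits the checks the specification calls for.

```python
# Hypothetical illustration of algorithm faults: missing checks for data
# outside design limits, and no handling of an empty input.
def average_scores_faulty(scores):
    total = 0
    for s in scores:
        total += s              # fault: never checks that 0 <= s <= 100
    return total / len(scores)  # fault: fails (ZeroDivisionError) when empty

def average_scores(scores):
    """Corrected version: enforces the design limits and the non-empty case."""
    if not scores:
        raise ValueError("scores must be non-empty")
    total = 0
    for s in scores:
        if not 0 <= s <= 100:   # check data against the design limits
            raise ValueError(f"score out of range: {s}")
        total += s
    return total / len(scores)
```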

Page 6


Software Faults & Failures

Types of Faults (continued)
– Timing & Coordination Faults: failures to correctly synchronize processing steps and/or events in time (see the sketch after this list).

– Throughput & Performance Faults: the system does not perform at the speed specified by the requirements – insufficient work accomplished per unit time.

– Recovery Faults: the system does not behave as prescribed by requirements when execution failures are detected; e.g., Word does not correctly recover when power goes out.

– Hardware & System Software Faults: third-party hardware or software components (reusable components, COTS (Commercial Off-The-Shelf), or GOTS (Government Off-The-Shelf) products) do not actually work according to documented operating conditions and procedures.

– Standards and Procedure Faults: failing to follow prescribed standards may foster an environment where faults are more likely.
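A hypothetical sketch (mine, not from the slides) of a timing & coordination fault: two threads perform an unsynchronized read-modify-write on a shared counter, so increments can be lost; guarding the update with a threading.Lock would remove the fault.

```python
# Hypothetical illustration of a timing & coordination fault: two threads
# perform an unsynchronized read-modify-write on a shared counter.
import threading
import time

counter = 0

def worker(iterations):
    global counter
    for _ in range(iterations):
        current = counter      # read the shared value
        time.sleep(0)          # yield the CPU: the other thread may run now
        counter = current + 1  # write back, possibly overwriting its update

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Expected 2000, but lost updates typically leave the counter well short of
# that; wrapping the read-modify-write in a threading.Lock fixes the fault.
print(counter)
```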

Page 7


Software Faults and Failures

Orthogonal Defect Classification
IBM & Hewlett-Packard (and others) capture, classify, and track various types of software faults.

Historical information about faults can help predict what faults are likely to occur.

This information helps focus testing efforts and makes the overall testing process more efficient and effective.

Fault patterns and fault frequencies may indicate deficiencies in the development process.

Hewlett-Packard fault classification model (Grady 1997):

ORIGIN – Specification/Requirements, Design, Code, Environment/Support, Documentation, Other

TYPE (by origin):
• Specification/Requirements: Requirements, Specifications, Functionality, HW Interface, SW Interface, User Interface, Functional Description
• Design: Inter-process Communication, Data Definition, Module Design, Logic Description, Error Checking, Standards
• Code: Logic, Computation, Data Handling, Module Interface/Implementation, Standards
• Environment/Support: Test HW, Test SW, Integration SW, Development Tools

MODE – Missing, Unclear, Wrong, Changed, Better-Way

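A hypothetical sketch (field values follow the HP model above; the code and names are my own, not from Grady or the slides) of how a project might record faults under an origin/type/mode classification so that fault patterns can later be counted and analyzed:

```python
# Hypothetical sketch of recording faults under an ORIGIN / TYPE / MODE
# classification (field values follow the HP model summarized above).
from collections import Counter
from dataclasses import dataclass

@dataclass
class FaultRecord:
    origin: str   # e.g. "Code", "Design", "Specification/Requirements"
    type: str     # e.g. "Logic", "Data Definition", "User Interface"
    mode: str     # "Missing", "Unclear", "Wrong", "Changed", or "Better-Way"
    description: str

log = [
    FaultRecord("Code", "Logic", "Wrong", "branch taken on <= instead of <"),
    FaultRecord("Design", "Data Definition", "Missing", "no max size for queue"),
    FaultRecord("Code", "Computation", "Wrong", "integer division truncates"),
]

# Fault frequencies by origin can point at weak spots in the development process.
print(Counter(rec.origin for rec in log))   # Counter({'Code': 2, 'Design': 1})
```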

Page 8


Software Faults and Failures

Testing Issues
– Test Organization & Stages

1. Module (Component/Unit) Testing – testing the smallest software building blocks in a controlled environment to verify that they meet their functional requirements.

2. Integration Testing – testing component aggregates to ensure interface requirements are met and that inter-module communication works according to design.

3. Function Testing – testing system functionality (use cases) to ensure it meets system requirements specifications.

4. Performance Testing – testing speed and capacity of the system to verify the system meets non-functional execution requirements and constraints. (Validation)

5. Acceptance Testing – customer testing to ensure that end-users are satisfied that the system meets quality requirements. (Validation)

6. Installation Testing – making sure the system runs in the target environment.

• Testing Goals
• Black Box Testing – treat the test object as a "black box" with inputs and outputs; internal structure and logic pathways are not a consideration in designing tests (a short sketch contrasting the two goals follows this list).

• Clear or White Box Testing – design tests to exercise internal components and execution pathways. (cf. McCabe's Metric)
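As a hypothetical illustration (my own example, not from the slides): the same function can be tested from a black-box view, using only its specification, or from a white-box view, choosing inputs to cover both branches of its internal logic.

```python
# Hypothetical example contrasting black-box and white-box test design
# for a function specified as: "return the larger of two integers".
import unittest

def larger(a, b):
    if a >= b:
        return a
    return b

class BlackBoxTests(unittest.TestCase):
    # Derived from the specification only: typical and equal-value inputs.
    def test_typical(self):
        self.assertEqual(larger(3, 7), 7)
    def test_equal(self):
        self.assertEqual(larger(5, 5), 5)

class WhiteBoxTests(unittest.TestCase):
    # Derived from the code's structure: one case per branch of the 'if'.
    def test_true_branch(self):
        self.assertEqual(larger(9, 2), 9)   # exercises a >= b
    def test_false_branch(self):
        self.assertEqual(larger(2, 9), 9)   # exercises a < b

if __name__ == "__main__":
    unittest.main()
```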

Page 9


Software Faults and Failures

1. Unit Testing
Basic steps:
(1) Write code from design or a Unit specification.
(2) Manually review code to make sure it agrees with the specification. (Verification)
(3) Compile code to eliminate syntax and some semantic errors.
(4) Design test cases to achieve unit testing goals.
(5) Run test cases.
(6) Remove faults and repeat (5) until testing goals are met.
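A minimal, hypothetical sketch (mine, not from the slides) of what steps (4) and (5) can look like for a small unit: each test case pairs an input with a predictable expected output, and failures feed step (6).

```python
# Hypothetical sketch of steps (4)-(5): design test cases for a small unit
# and run them, reporting any failures for step (6).
def absolute(x):          # the unit under test (illustrative only)
    return x if x >= 0 else -x

# Step (4): each test case is an input with a predictable expected output.
test_cases = [(5, 5), (-5, 5), (0, 0)]

# Step (5): run the cases and record any failures.
failures = [(inp, exp, absolute(inp))
            for inp, exp in test_cases if absolute(inp) != exp]
print("all cases passed" if not failures else f"failures: {failures}")
```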

Manual Code Reviews: form a team consisting of the author and three technical experts – people in the developer's organization who are technically qualified to conduct the review.

– Code Walkthroughs: code + documentation presented to the review team who comments on correctness; author presents the code and supporting documentation; the process is informal; the focus is on finding faults, not fixing them; the discovery of faults should not reflect on the author's competence.

Page 10


Software Faults and Failures

1. Unit Testing (continued)
Manual Code Reviews: form a team consisting of the author and three technical experts – people in the developer's organization who are technically qualified to conduct the review.

– Code Inspections: more formal than Walkthroughs; typically performed in three steps:
(1) The author presents the code and documentation to the Inspection team, much like a Walkthrough, except the focus is more on informing the Inspection team and making them familiar with the code and documentation.
(2) Inspection team members individually scrutinize the inspection materials and form two lists: a list of discovered faults, and a list of concerns or points of confusion that may or may not be faults – items that need clarification.
(3) The author meets with the Inspection team to discuss the lists of actual and potential faults. A scribe documents the identified faults for future tracking. The author and Inspection team sign off on the outcome of the meeting – that is, the meeting and the findings of the Inspection team are formally recorded and tracked.

Page 11


Software Faults and Failures

1. Unit Testing (continued)

Development Artifact      | Preparation Time | Meeting Time
Requirements Document     | 25 pages/hr      | 12 pages/hr
Functional Specification  | 45 pages/hr      | 15 pages/hr
Logic Specification       | 50 pages/hr      | 20 pages/hr
Source Code               | 150 SLOC/hr      | 75 SLOC/hr
User Documents            | 35 pages/hr      | 20 pages/hr

Inspection Preparation and Meeting Times (Capers Jones 1991)
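For example (my own calculation from the rates above, not from Jones): inspecting a 3,000-SLOC module would require roughly 3000 / 150 = 20 hours of preparation and 3000 / 75 = 40 hours of meeting time per inspector.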

Discovery Activity   | Faults/KSLOC
Requirements review  | 2.5
Design review        | 5.0
Code Inspections     | 10.0
Integration Tests    | 3.0
Acceptance Tests     | 2.0

Faults Found During Discovery (Capers Jones 1991)
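For example (my own arithmetic from the rates above): for a 20-KSLOC product, these rates predict about 20 × 10 = 200 faults found by code inspections and 20 × 3 = 60 found during integration tests.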

Page 12


Software Faults and Failures

Unit Testing Strategies
Definitions:
Test Case – a particular choice of input data that has a predictable output.
Test Objective – a well-defined outcome that demonstrates the presence or absence of a particular type of fault.
Test – a collection of Test Cases relating to a single Test Objective.

[Figure: Relative Strengths of Test Strategies (Beizer 1990) – a hierarchy of structural coverage criteria in which stronger criteria subsume weaker ones. Criteria shown: All Paths, All Def-Use Paths, All Use Paths, All Defs, All Computational & Some Predicate Uses, All Computational Uses, All Predicate & Some Computational Uses, All Predicate Uses, All Branches, All Statements. All Paths is the strongest criterion and All Statements the weakest.]
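A small, hypothetical illustration (mine, not from the slides) of why All Branches is stronger than All Statements: a single test case can execute every statement of the function below while still leaving the false branch of the 'if' untested.

```python
# Hypothetical example: statement coverage vs. branch coverage.
def withdraw(balance, amount):
    if amount > balance:       # the false branch has no statements of its own
        amount = balance       # cap the withdrawal at the available balance
    return balance - amount

# One test case reaches every statement (All Statements is satisfied) ...
assert withdraw(50, 80) == 0

# ... but All Branches additionally requires a case where the 'if' is false.
assert withdraw(100, 30) == 70
```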

Page 13


Software Faults and Failures

Integration Testing
– Bottom-up Integration: components at the lowest level of the system call hierarchy are tested individually first, using specially written test drivers; then the components that make immediate calls to the already-tested components are tested, for which test drivers may again have to be written.
Useful when: many low-level routines are general-purpose and are called often by others, when the design is OO, or when the system integrates a large number of standalone reusable components.
Disadvantages: the most important modules are tested last; faults at the top levels may indicate design flaws – these should be detected sooner rather than later.
Advantages: most appropriate for OO designs.

– Top-Down Integration: components at the top of the call hierarchy are tested first, replacing any modules they call with temporary stubs; then modules at the next call level are integrated, replacing their stubs but including stubs for the modules they call, and so on (a sketch of a stub appears below).
Advantages: special test drivers need not be written – the modules themselves are the test drivers; the highest-level modules tend to be more control-oriented and less data-oriented, so design flaws in system-level processing and timing are detected early; development and testing can focus on delivering complete use cases, typically sooner than with the bottom-up approach.
Disadvantages: many stubs have to be written, and these may not be trivial to write.
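A minimal, hypothetical sketch of a top-down stub (the module and function names are my own): the top-level ReportGenerator is exercised first, with the not-yet-integrated data-access layer replaced by a stub that returns canned values.

```python
# Hypothetical top-down integration sketch: the top-level module is tested
# first, with a stub standing in for the lower-level data-access module.
class DataAccessStub:
    """Temporary stand-in for the real data-access component."""
    def fetch_totals(self, region):
        # Canned, predictable data so the caller's logic can be exercised.
        return {"north": 120, "south": 80}.get(region, 0)

class ReportGenerator:
    def __init__(self, data_access):
        self.data_access = data_access   # real component or a stub

    def summary(self, regions):
        return {r: self.data_access.fetch_totals(r) for r in regions}

# Integration test of the top-level module against the stub.
report = ReportGenerator(DataAccessStub()).summary(["north", "east"])
assert report == {"north": 120, "east": 0}
```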