
LITTLE BOOK OF TESTING

VOLUME II

IMPLEMENTATION TECHNIQUES

JUNE 1998

For additional information please contact the

Software Program Managers Network
(703) 521-5231 • Fax (703) 521-2603
E-Mail: [email protected]
http://www.spmn.com

AIRLIE SOFTWARE COUNCIL


THE AIRLIE SOFTWARE COUNCIL

This guidebook is one of a series of guidebooks published by the Software Program Managers Network (SPMN). Our purpose is to identify best management and technical practices for software development and maintenance from the commercial software sector, and to convey these practices to busy program managers and practitioners. Our goal is to improve the bottom-line drivers of software development and maintenance: cost, productivity, schedule, quality, predictability, and user satisfaction.

The Airlie Software Council was convened by a Department of the Navy contractor in 1994 as a focus group of software industry gurus supporting the SPMN and its challenge of improving software across the many large-scale, software-intensive systems within the Army, Navy, Marine Corps, and Air Force. Council members have identified principal best practices that are essential to managing large-scale software development and maintenance projects. The Council, which meets quarterly in Airlie, Virginia, is comprised of some 20 of the nation's leading software experts. These little guidebooks are written, reviewed, generally approved, and, if needed, updated by Council members. Your suggestions regarding this guidebook, or others that you think should exist, would be much appreciated.

This publication was prepared for the

Software Program Managers Network
4600 North Fairfax Drive, Suite 302
Arlington, VA 22203

The ideas and findings in this publication should not be construed as an official DoD position. It is published in the interest of scientific and technical information exchange.

Norm Brown
Director, Software Program Managers Network

Copyright © 1998 by Computers & Concepts Associates

This work was created by Computers & Concepts Associates in the performance of Space and Naval Warfare Systems Command (SPAWAR) Contract Number N00039-94-C-0153 for the operation of the Software Program Managers Network (SPMN).


THE PURPOSE OF THIS LITTLE BOOK

The Software Program Managers Network (SPMN) provides resources and best practices for software managers. The SPMN offers practical, cost-effective solutions that have proved successful on projects in government and industry around the world.

This second volume in the testing series answers the question: "How can I test software so that the end product performs better, is reliable, and allows me, as a manager, to control cost and schedule?" Implementing the basic testing concepts and ten testing rules contained in this booklet will significantly improve your testing process and result in better products.

It is not easy to change the way systems and software are tested. Change requires discipline, patience, and a firm commitment to follow a predetermined plan. Management must understand that so-called silver bullets during testing often increase risk, take longer, and result in a higher-cost and lower-quality product. Often what appears to be the longest testing process is actually the shortest, since products reach predetermined levels of design stability before being demonstrated.

Managers can implement the concepts and rules in this guidebook to help them follow a low-risk, effective test process. An evaluation of how you test software against these guidelines will help you identify project risks and areas needing improvement. By using the principles described in this guide, you can structure a test program that is consistent with basic cost, schedule, quality, and user satisfaction requirements.

Norm Brown
Executive Director


INTRODUCTION


Testing by software execution is the end of the software development process. When projects are discovered to be behind schedule, a frequent worst-practice way out is to take shortcuts with test. This is a worst practice because effective test by software execution is essential to delivering quality software products that satisfy users' requirements, needs, and expectations. When testing is done poorly, defects that should have been found in test are found during operation, with the result that maintenance costs are excessive and the user-customer is dissatisfied. When a security or safety defect is first found in operation, the consequences can be much more severe than excessive cost and user dissatisfaction.

Although metrics show that the defect removal efficiency of formal, structured peer reviews is typically much higher than that of execution test, there are certain types of errors that software-execution test is most effective in finding. Full-path-coverage unit test with a range of path-initiating variable values will find errors that slip through code inspections. Tests of the integrated system against operational scenarios and with high stress loads find errors that are missed with formal, structured peer reviews that focus on small components of the system. This book describes testing steps, activities, and controls that we have observed on projects.

Michael W. Evans
President
Integrated Computer Engineering, Inc.

Frank J. McGrath, Ph.D.
Technical Director
Software Program Managers Network


BASIC TESTING CONCEPTS

This section presents concepts that are basic to understanding the testing process. From the start it is critical that managers, developers, and users understand the difference between debugging and testing.

• Debugging is the process of isolating and identifying the cause of a software problem, and then modifying the software to correct the problem. This guidebook does not address debugging principles.

• Testing is the process of finding defects in relation to a set of predetermined criteria or specifications. The purpose of testing is to prove a system, software, or software configuration doesn't work, not that it does.

There are two forms of testing: white box testing and black box testing. An ideal test environment alternates white box and black box test activities, first stabilizing the design, then demonstrating that it performs the required functionality in a reliable manner consistent with performance, user, and operational constraints.

White box testing is conducted on code components which may be software units, computer software components (CSCs), or computer software configuration items (CSCIs). These tests exercise the internal structure of a code component, and include:

• Execution of each statement in a code component at least once

• Execution of each conditional branch in the code component

• Execution of paths with boundary and out-of-bounds input values

• Verification of the integrity of internal interfaces

• Verification of architecture integrity across a range of conditions

• Verification of database design and structure

White box tests verify that the software design is valid and that it was built according to the specified design. White box testing traces to configuration management (CM)-controlled design and internal interface specifications. These specifications have been identified as an integral part of the configuration control process.
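To make the bullets above concrete, here is a minimal white box unit test. This is a sketch only: the function under test, classify_speed, is invented for illustration. The driver executes every statement, takes both sides of each conditional branch, and probes boundary and out-of-bounds input values.

```python
# Hypothetical unit under test: classify a speed reading in knots (0..500 valid).
def classify_speed(knots):
    if knots < 0 or knots > 500:        # out-of-bounds branch
        raise ValueError("reading out of range")
    if knots <= 30:                     # branch boundary at 30
        return "slow"
    return "fast"

# White box driver: every statement and branch executes at least once, with
# boundary values (0, 30, 31, 500) and out-of-bounds values (-1, 501).
def test_classify_speed():
    assert classify_speed(0) == "slow"      # lower boundary
    assert classify_speed(30) == "slow"     # branch boundary
    assert classify_speed(31) == "fast"     # just past the boundary
    assert classify_speed(500) == "fast"    # upper boundary
    for bad in (-1, 501):                   # out-of-bounds paths
        try:
            classify_speed(bad)
            assert False, "expected ValueError"
        except ValueError:
            pass

test_classify_speed()
print("all white box cases passed")
```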

Black box testing is conducted on integrated, functional components whose design integrity has been verified through completion of traceable white box tests. As with white box testing, these components include software units, CSCs, or CSCIs.

Black box testing traces to requirements focusing on system externals. It validates that the software meets requirements without regard to the paths of execution taken to meet each requirement. It is the type of test conducted on software that is an integration of code units.

The black box testing process includes:

• Validation of functional integrity in relation to external stimuli

• Validation of all external interfaces (including human) across a range of nominal and anomalous conditions

• Validation of the ability of the system, software, or hardware to recover from or minimize the effect of unexpected or anomalous external or environmental conditions

• Validation of the system's ability to address out-of-bound input, error recovery, communication, and stress conditions

Black box tests validate that an integrated software configuration satisfies the requirements contained in a CM-controlled requirement or external interface specification.

Ideally, each black box test should be preceded by a white box test that stabilizes the design. You never want to try to isolate a software or system problem by executing a test case designed to demonstrate system or software externals.
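For contrast, here is a minimal black box sketch, again with an invented component and an invented requirement: the cases trace to the stated requirement and its predetermined success criteria, not to any internal path.

```python
# Hypothetical CM-controlled requirement: "The system shall accept messages of
# 1..128 printable characters and reject all others."
def process_message(msg):
    # Internal structure is irrelevant to the black box tester.
    if 1 <= len(msg) <= 128 and msg.isprintable():
        return "ACCEPT"
    return "REJECT"

# Black box cases: nominal, boundary, and anomalous external stimuli, each
# with a predetermined expected result traced to the requirement.
cases = [
    ("hello",        "ACCEPT"),   # nominal
    ("x",            "ACCEPT"),   # lower boundary (1 character)
    ("a" * 128,      "ACCEPT"),   # upper boundary
    ("",             "REJECT"),   # below bounds
    ("a" * 129,      "REJECT"),   # above bounds
    ("bad\x00input", "REJECT"),   # anomalous (non-printable) input
]
for stimulus, expected in cases:
    actual = process_message(stimulus)
    assert actual == expected, f"{stimulus!r}: got {actual}, expected {expected}"
print("all black box cases passed")
```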


Figure 1 illustrates the testing process. Testing is a continuum of verification and validation activities conducted continuously from the point where a product is defined until it is finally validated in an installed system configuration.

Testing need not be performed on a computer. Projects that regularly apply structured inspections not only test products against criteria at the point where they are produced, but also reduce the rework rate from 44 percent to 8 percent. Use of inspections as a form of testing can reduce the cost of correction up to 100 times. A complete test program consists of the following nine discrete testing levels, each feeding the next:

Figure 1. Test Levels

Test Level | Test Activity | Test Type | Documentation Basis for Testing | Test Responsibility | Test Focus
Level 0 | Structured Inspections | Non-Computer-Based Test | Various | Inspection Team | Various
Level 1 | Computer Software Unit Testing | White Box Testing | SDF | Developer | Software Unit Design
Level 2 | CSCI Integration Testing | White Box Testing | SWDD | Independent Test | CSCI Design/Architecture
Level 3 | CSCI Qualification Testing | Black Box Testing | SRS | Independent Test | CSCI Requirements
Level 4 | CSCI/HWCI Integration Testing | White Box Testing | SSDD | Independent Test | System Design/Architecture
Level 5 | System Testing | Black Box Testing | SSS | Independent Test | System Requirements
Level 6 | DT&E Testing | Black Box Testing | User Manuals | Acquirer Test Group | User Manual Compliance
Level 7 | OT&E Testing | Black Box Testing | ORD or User Requirements Document | Operational Test | Operational Requirements
Level 8 | Site Testing | Black Box Testing | Transition Plan (Site Configuration) | Site Installation Team | Site Requirements

Key to Terms:

DT&E – Development Test and Evaluation
HWCI – Hardware Configuration Item
ORD – Operational Requirements Document
OT&E – Operational Test and Evaluation
SDF – Software Design File
SRS – Software Requirements Specification
SSDD – System Subsystem Design Document
SSS – System Segment Specification
SWDD – Software Design Document


Level 0—These tests consist of a set of structured inspections tied to each product placed under configuration management. The purpose of Level 0 tests is to remove defects at the point where they occur, and before they affect any other product.

Level 1—These white box tests qualify the code against standards and unit design specification. Level 1 tests trace to the Software Design File (SDF) and are usually executed using test harnesses or drivers (a minimal driver sketch follows the Level 8 description below). This is the only test level that focuses on code.

Level 2—These white box tests integrate qualified CSCs into an executable CSCI configuration. Level 2 tests trace to the Software Design Document (SWDD). The focus of these tests is the inter-CSC interfaces.

Level 3—These black box tests execute integrated CSCIs to assure that requirements of the Software Requirements Specification (SRS) have been implemented and that the CSCI executes in an acceptable manner. The results of Level 3 tests are reviewed and approved by the acquirer of the product.

Level 4—These white box tests trace to the System Subsystem Design Document (SSDD). Level 4 tests integrate qualified CSCIs into an executable system configuration by interfacing independent CSCIs and then integrating the executable software configuration with the target hardware.

Level 5—These black box tests qualify an executable system configuration to assure that the requirements of the system have been met and that the basic concept of the system has been satisfied. Level 5 tests trace to the System Segment Specification (SSS). This test level usually results in acceptance or at least approval of the system for customer-based testing.

Level 6—These black box tests, planned and conducted by the acquirer, trace to the user and operator manuals and external interface specifications. Level 6 tests integrate the qualified system into the operational environment.

Level 7—These independent black box tests trace to operational requirements and specifications. Level 7 tests are conducted by an agent of the user to assure that critical operational, safety, security, and other environmental requirements have been satisfied and the system is ready for deployment.

Level 8—These black box tests are conducted by the installation team to assure the system works correctly when installed and performs correctly when connected to live site interfaces. Level 8 tests trace to installation manuals and use diagnostic hardware and software.
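The Level 1 description above mentions test harnesses and drivers. The following sketch, with an invented unit and dependency, shows the idea: a driver exercises the unit in isolation while a stub stands in for a component that is not yet integrated.

```python
# Hypothetical unit under test: it depends on a speed source that is not yet
# integrated, so the harness injects a stub in its place.
def compute_eta(distance_nm, speed_source):
    """Estimated time en route, in hours, from a speed provider."""
    speed = speed_source()               # dependency call
    if speed <= 0:
        raise ValueError("speed must be positive")
    return distance_nm / speed

# Stub standing in for the real (unavailable) speed sensor component.
def stub_speed_ten_knots():
    return 10.0

# Driver: exercises the unit in isolation, including the error path.
assert compute_eta(25.0, stub_speed_ten_knots) == 2.5
try:
    compute_eta(25.0, lambda: 0.0)       # stub forcing the failure branch
    assert False, "expected ValueError"
except ValueError:
    pass
print("Level 1 driver: unit behaves per its design specification")
```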


TESTING RISKS AND SOLUTIONS

The process of testing must not be compromised by the occurrence of preventable problems. In most cases, the effects of problems can be minimized by the early execution of a preplanned mitigation strategy. Risk management requires a management belief that risk is a problem that has not yet occurred. Through risk management, the full effect of a problem can be minimized or avoided. This level of risk management requires that common testing risks be identified early, a mitigation strategy identified, and resources reserved to implement the strategy if the need arises. Figure 2 examines impact and planning factors for risk areas commonly encountered during testing.

Figure 2. Predictable Resource and Test & Integration Problems

1. Risk area: Schedule difficulty due to extended development and code time
   Impact: Insufficient time to test software, resulting in schedule slip
   Planning: Phase software testing into build testing

2. Risk area: An inability to develop executable test cases from procedures
   Impact: Test scripts not adequate representation of software execution characteristics
   Planning: Phase development personnel into test planning activities

3. Risk area: Insufficient interface definition and inability to duplicate exact test conditions
   Impact: Software systems with known problems delivered to customer
   Planning: Hard code test scripts readable through simulator

4. Risk area: Ambiguous user specifications or system operator instructions
   Impact: Procedures do not match system support requirements
   Planning: Complete high-level test plan early and fill in details as system or software specifications become available

5. Risk area: Schedule delay between system releases when new software configurations stabilize
   Impact: Significant schedule impacts due to poor initial performance of software releases
   Planning: Schedule informal software burn-in period with each release

6. Risk area: Delays associated with correction of problems
   Impact: Repeated slips due to problem correction delays
   Planning: Plan for software configuration management and overlap system releases

7. Risk area: Baselines lost during development and corrections lost during test
   Impact: System releases not controlled or of unknown content
   Planning: Plan, implement, and integrate strong, effective configuration management

8. Risk area: Inadequate hardware availability or reliability
   Impact: Significant hardware-caused delays on the software schedule
   Planning: Find alternate hardware source or simulation facility at beginning of project

9a. Risk area: Insufficient manpower available
    Impact: Excessive overtime required
    Planning: Provide for manpower pool, sharing resources between test, development, and support

9b. Risk area: Loss of key individual
    Impact: Project delay, poor productivity, inadequate support
    Planning: Find and train key man backups early in project

10. Risk area: Diversion of key resource
    Impact: Scheduling impacts and project delays
    Planning: Back up all planned resources


TESTING QUESTIONS THAT MANAGERS SHOULD BE ABLE TO ANSWER:

1. Have I defined, and do I understand, my overall strategy for software testing?

2. Are the test planning requirements clearly defined, consistent, assigned for development, and supportable by the staff responsible for their implementation?

3. Are the test methods and techniques defined, are they consistent with the strategy, and can they be implemented in the software environment?

4. Is each test activity traceable to a controlled set of requirements and/or design data?

5. Are the configuration management, control, and quality disciplines adequate to support the test strategies, methods, and techniques, and are they really in place?

6. Do I know when to quit?

THE TEN COMMANDMENTS OF TESTING

1. Black box tests that validate requirements must trace to approved specifications and be executed by an independent organization.

2. Test levels must be integrated into a consistent structure.

3. Don't skip levels to save time or resources; test less at each level.

4. All test products (configurations, tools, data, and so forth) need CM during tests.

5. Always test to procedures.

6. Change must be managed.

7. Testing must be scheduled, monitored, and controlled.

8. All test cases have to trace to something that's under CM.

9. You can't run out of resources.

10. Always specify and test to criteria.


RULE 1: Tests Must Be Planned, Scheduled, and Controlled if the Process Is to Be Effective.

• Test planning is an essential management function that defines the strategies and methods to be used to integrate, test, and validate the software product.

• Test planning defines how the process is managed, monitored, and controlled.

• Planning of the software test environment is the bottom of a complex planning tree that structures and controls the flow of testing. The planning tree shows how products move from the developer, through the software organization, to the system organizations, and finally to the acquirer and the operational test organizations.

• Test planning is "preparing for the future," while Test Plan implementation is "making sure we have what we want when we get there."

• The test planning process must be consistent from one level to the next. The Test Plan must be consistent with the CM Plan, which must be consistent with the standards, and so on.

• The control discipline must be a complete representation of the Test Plan as adapted to the specific needs of the project to which it is applied.

THE TEST PLANNING TREE

• Program Plan defines the program requirements, obligations, strategies, constraints, and delivery requirements and commitment. It identifies test requirements to be completed prior to delivery.

• Test and Evaluation Management Plan usually traces to user specifications. It sets the entire testing strategy and practices.

• System Engineering Management Plan (SEMP) identifies engineering plans and methods, engineering standards, management processes, and systems for integration and test requirements.

• System Test Plan defines the system's testing plan, strategies, controls, and testing processes. Test case requirements are discussed, and criteria for success or failure are identified.

• Software Development Plan describes what the software project must accomplish, how these software process and product standards must be met, how the process must be managed and controlled, and what the criteria are for overall integration, test, and completion.


• Software Test Plan defines software integration and test plans, strategies, software test controls and processes, test case requirements, and test case completion criteria.

• Software Test Description details the requirements for individual test cases. This description serves as the test case design requirement.

• Software Test Procedure takes each test case and provides a step-by-step definition of test execution requirements traceable to a requirement specification.

• Software Test Case Definition details the test case design and specifies criteria for successful and unsuccessful execution of a test step.

• Software Test Scenario identifies specific test data, data sources, interface address or location, critical test timing, expected response, and test step to take in response to test condition.

• Test Case Implementation is the actual test case ready for execution.

These plans may be rolled up into fewer individual documents, or expanded based on project size, number of organizations participating, size of risk and contract, and customer and technical requirements of the test program.
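The original Figure 3 drew this planning tree graphically, and the drawing is not recoverable from the text. As a rough stand-in, the sketch below simply chains the documents in the order listed above; the exact branching is an assumption.

```python
# Illustrative only: the plan documents in the order listed above. The exact
# parent/child edges of Figure 3 are not recoverable from the text, so this
# sketch treats each document as tracing to the one printed before it.
plans = [
    "Program Plan",
    "Test and Evaluation Management Plan",
    "System Engineering Management Plan (SEMP)",
    "System Test Plan",
    "Software Development Plan",
    "Software Test Plan",
    "Software Test Description",
    "Software Test Procedure",
    "Software Test Case Definition",
    "Software Test Scenario",
    "Test Case Implementation",
]
for depth, name in enumerate(plans):
    print("  " * depth + name)    # indentation shows the assumed chain
```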



The test structure as documented in these plans must be consistent with the overall strategy used: grand design, incremental development, or evolutionary acquisition.

• Test planning is the joint responsibility of the organization receiving the capability (either the higher life-cycle organization or the user organization that will ultimately apply the product) and the engineering organization that will build or modify the product.

• The test planning process starts with a definition by the ultimate user of the system of the level of risk that is acceptable when the product is delivered. The integration of the testing structure disciplines helps assure the acceptability of the product when it reaches the operational testing levels.

Key guidelines for planning a test environment are:

• Improve the probability that the delivered products will meet the documented and perceived requirements, operational needs, and expectations of the user of the system.

• Increase schedule and budget predictability by minimizing rework and redundant effort.

• Maintain traceability and operational integrity of engineering products as they are produced and changed, and maintain the relationship between the formal documentation and the underlying engineering information.

• Assure that software is not debugged during operational testing because of test shortcuts or inadequacies at lower test levels.


RULE 2: Execution Test Begins with White Box Test of Code Units and Progresses through a Series of Black Box Tests of a Progressively Integrated System in Accordance with a Predefined Integration Test Build Plan.

• Software test flow must be documented in plans and procedures.

• Traceability between test cases at each level must be maintained, allowing tests at one level to isolate problems identified at another.

• The integrity of the planned white-box, black-box test sequence must not be violated to meet budget or schedule.

• Software cannot be adequately tested if there is no formal, controlled traceability of requirements from the system level down to individual code units and test cases (see the sketch after this list).

• Test levels (see Figure 1) must be defined so that they support partitioning that is consistent with the project's incremental or evolutionary acquisition strategy.

• In the event of a failure at any test level, it must be possible to back up a level to isolate the cause.

• All test levels must trace to CM-controlled engineering information, as well as to baselined documentation that has been evaluated through a structured inspection.

• A test level is not complete until all test cases are run and a predetermined quality target is reached.
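A minimal sketch of the two-way traceability check implied above; the requirement and test-case identifiers are invented for illustration.

```python
# Illustrative traceability check: every requirement must be covered by at
# least one test case, and every test case must trace to a real requirement.
requirements = {"SRS-001", "SRS-002", "SRS-003"}
test_traces = {                      # test case -> requirement it validates
    "TC-101": "SRS-001",
    "TC-102": "SRS-002",
    "TC-103": "SRS-002",
}

covered = set(test_traces.values())
untested = requirements - covered                                  # no test
orphans = {tc for tc, req in test_traces.items() if req not in requirements}

print("requirements with no test case:", sorted(untested))   # ['SRS-003']
print("test cases tracing to nothing:", sorted(orphans))     # []
```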


RULE 3: All Components of a Test Must Be under CM Control after They Have Been Verified by Structured Peer Reviews.

During test, CM must provide:

• A way to identify the information approved for use in the project, the owners of the information, the way it was approved for CM control, and the most recent approved release

• A means to assure that, before a change is made to any item under configuration control, the change has been approved by the CM change approval process, which should include a requirement that all impacts are known before a change is approved

• An accessible and current record of the status of each controlled piece of information that is planned for use by test

• A build of each release, from code units to integration test

• A record of the exact content, status, and version of each item accepted by CM, and the version history of the item

• A means to support testing of operational software versions by exactly reproducing software systems for tests with a known configuration and a controlled set of test results (a minimal sketch of such a record follows this list)
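One way to automate the exact-content record mentioned in the last two bullets is to hash every controlled item into a manifest. This is only a sketch under invented file names, not a prescribed mechanism.

```python
import hashlib

# Sketch: record the exact content of each controlled item in a manifest at
# CM acceptance, then verify a rebuilt test configuration against it.
def manifest(paths):
    """Map each controlled file to the SHA-256 hash of its content."""
    result = {}
    for path in paths:
        with open(path, "rb") as f:
            result[path] = hashlib.sha256(f.read()).hexdigest()
    return result

def changed_items(recorded, paths):
    """Return the items whose current content no longer matches the record."""
    current = manifest(paths)
    return [p for p in paths if current.get(p) != recorded.get(p)]

# Usage sketch (file names invented):
# items = ["nav_unit.c", "nav_unit.h", "test_nav.py"]   # one release's items
# recorded = manifest(items)                # taken when CM accepts the release
# assert not changed_items(recorded, items) # checked before each test session
```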

CM IMPLEMENTATION RULES DURING TESTING

• CM should be an independent, centralized activity, not buried within an engineering or assurance organization.

• CM must never be a bottleneck in the test schedule; if it is, the process will not be followed.

• CM can never be eliminated for any reason, including the need to meet budget or schedule.

• CM change control must receive reports of all suspected and confirmed problems during testing, and all problems must only be fixed in accordance with a change control process managed by CM.

• CM must assure that changes to configuration-controlled items that do not follow the CM process cannot be made to systems under test, including patches to binary code.

• CM must not accept changes made to software until all related documentation and other configuration-controlled items affected by the change have also been updated.

• CM must have control of and responsibility for all test builds from Level 2 and above.


• CM must ensure that all releases of integrated software from CM during test are accompanied by a current, complete, and accurate Software Version Description (SVD).

• CM must verify that all new releases of test tools, including compilers, are regression-tested prior to release for use during testing.

• CM must process every suspected or observed problem, no matter how small.

• CM must ensure that all tests trace to controlled information that has been approved through a binary quality gate.

• CM staff during integration and test must have the technical skills necessary to generate each integration build delivered from CM for integration test.

RULE 4: Testing Is Designed to Prove a System Doesn't Work, Not to Prove It Does.

• A large, integrated software system cannot be exhaustively tested to execute each path over the full range of path-initiating conditions. Therefore, testing must consist of an integrated set of test levels, as described in Figure 1. These testing levels maximize the coverage of testing against the cost across all design and requirements.

• Criteria must be used to evaluate all test activities.

• All tests executed that trace to a requirement must be executed in three ways:

  – Using a nominal data load and valid information

  – Using excessive data input rates in a controlled environment

  – Using a preplanned combination of nominal and anomalous data

• The ideal test environment is capable of driving a system to destruction in a controlled fashion. For example, the data loads and data combinations must be varied until the system no longer executes in an acceptable manner. The point at which system support becomes unacceptable must be identified and documented.

• Tests must be run at all levels to test not only nominal conditions but also all potential test failure conditions documented in controlled specifications.

  – This is critical even though the test environment may be difficult to establish.

• No observed testing problem, however small, must ever be ignored or closed without resolution.

• All test case definitions must include success and failure criteria.

• Tests must be made against scenarios (timed sequences of events) designed to cover a broad range of real-world operational possibilities as well as against static system requirements.

• System test must be made on the operational hardware/operating system (OS) configuration.

RULE 5: All Software Must Be Tested Using Frequent Builds of Controlled Software, with Each Build Documented in SVDs and Change Histories Documented in Header Comments for Each Code Unit.

• Build selection and definition must be based on availability of key test resources, software unit build and test schedules, the need to integrate related architectural and data components into a single test configuration, availability of required testing environments and tools, and the requirement to verify related areas of functionality and design.

• Builds must be ordered to minimize the need to develop throw-away, test-driver software.

• Builds must not be driven by a need to demonstrate a capability by a certain date (a form of political build scheduling).

• Build definition must consider technical, schedule, and resource factors prior to the scheduling of a test session.

• Test cases used during frequent builds must be planned (structured in a Test Plan, documented in a procedure), traceable to controlled requirements or design, and executed using

controlled configurations, just as with builds that are not frequent.

• Test support functions, such as CM integration build and change approvals, must be staffed to a sufficient level and must follow processes so that they do not delay frequent integration builds and tests.

• Build planning must be dynamic and capable of being quickly redefined to address testing and product realities. For example, a key piece of software may not be delivered as scheduled; a planned external interface may not be available as planned; a hardware failure may occur in a network server.

• Frequent builds find integration problems early, but they require substantial automation in CM, integration, and regression test.

RULE 6: Test Tools Must Only Be Built or Acquired If They Provide a Capability Not Available Elsewhere, or If They Provide Control, Reproducibility, or Efficiency That Cannot Be Achieved through Operational or Live Test Sources.

• Test tools must be used prior to the point at which the system is considered stable and executing in a manner consistent with specifications.

  – This requirement reflects the need to control test conditions and to provide a controlled external environment in which predictability and reproducibility can be achieved.

• Some test functions are best provided by automated tools and can substantially lower the cost of regression test during maintenance.

  – For example, automating key build inputs reduces operator requirements and the cost of providing them during test sessions.


CLASSES OF TEST TOOLS THAT EVERY PROGRAM SHOULD CONSIDER

• Simulation When Test Is Done Outside the Operational Environment—Drives the system from outside using controlled information and environments. For example, it can be used to exchange messages with the system under test, or to drive a client/server application as though there were many simultaneous users on many different client workstations (see the sketch after this list).

• Emulation When Hardware Interfaces Are Not Available—Software to perform exactly as missing components in system configurations, such as an external interface.

• Instrumentation—Software tools that automatically insert code in software, and then monitor specific characteristics of the software execution, such as test coverage.

• Test Case Generator—Identifies test cases to test requirements and to achieve white box test coverage.

• Test Data Generator—Defines test information automatically, based on the test case or scenario definition.

• Test Record and Playback—Records keystrokes and mouse commands from the human tester, and saves corresponding software output for later playback and comparison with saved output in regression test.

• Test Analysis—Evaluates test results, reduces data, and generates test reports.
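As an illustration of the first class above, here is a toy simulation driver. The system_under_test function is an invented stand-in for a real external interface; the driver issues controlled, scripted inputs from many simulated client workstations at once and records every response for later checking.

```python
import threading

# Invented stand-in for the system under test (normally reached over an
# external interface such as a message bus or socket).
def system_under_test(request):
    return f"ack:{request}"

# Simulation driver: n simulated client workstations issue scripted inputs
# concurrently; every response is recorded for comparison against criteria.
def simulate_clients(n_clients, requests_per_client):
    responses = []
    lock = threading.Lock()

    def client(cid):
        for i in range(requests_per_client):
            r = system_under_test(f"client{cid}-msg{i}")
            with lock:
                responses.append(r)

    threads = [threading.Thread(target=client, args=(c,))
               for c in range(n_clients)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return responses

out = simulate_clients(n_clients=20, requests_per_client=5)
assert len(out) == 100 and all(r.startswith("ack:") for r in out)
print("simulated 20 workstations, 100 controlled inputs, all acknowledged")
```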

RULES FOR TEST TOOLS

• Test tools must only be acquired and used if they directly trace to a need in the Test Plan and/or procedures, or if they will significantly reduce the cost of regression test during the maintenance phase.

• All tools used and the data they act upon must be under CM control.

• Test tools must not be developed if there is a quality COTS test tool that meets needs.

• Plan test tool development, acquisition, and deployment for completion well in advance of any need for tool use.

• Test tools that certify critical components must be certified using predefined criteria before being used.

• Always try to simulate all external data inputs, including operator inputs, to reduce test cost and increase test reproducibility.


QUESTIONS TO ASK WHEN DEFINING A TESTING ENVIRONMENT AND SELECTING TEST METHODS OR TOOLS

1. Have the proposed testing techniques and tools been proven in applications of similar size and complexity, with similar environments and characteristics?

2. Does the test environment include the hardware/OS configuration on which the software will be deployed, and will the test tools execute on this operational hardware/OS?

3. Do the proposed test methods and tools give the customer the visibility into software quality that is required by the contract?

4. Is there two-way traceability from system requirements through design, to code units and test cases?

5. Have the organizational standard test processes, methods, and use of test tools been tailored to the characteristics of the project?

RULE 7: When Software Is Developed by Multiple Organizations, One Organization Must Be Responsible for All Integration Test of the Software Following Successful White Box Testing by the Developing Organization.

• Development tests—those that test units and CSCs prior to integration—must be the responsibility of the developers.

• When development tests are complete, inspections must be conducted to ensure the process was followed; the requirements of the Test Plan were successfully completed; the software is documented adequately; the source code matches the documentation; and the software component is adequate to support subsequent test activities.

• All software builds that support test activities subsequent to development testing must be built under control of CM.

• Testing subsequent to development testing must be executed by an independent testing organization. After development testing ends, any changes to controlled configurations must be evaluated and approved prior to incorporation in the test configuration.


RULES FOR ORGANIZING TEST ACTIVITIES

• A testing organization must be an integral part of the program and software project plans from the beginning, with staffing and resources allocated to initiate early work.

• While some testing will be done internally by the engineering organizations, the white box and black box test activities documented in Test Plans and procedures must be executed by an independent test organization.

• Test organizations must be motivated to find problems rather than to prove the products under test work.

• Test organizations must have adequate guaranteed budget and schedule to accomplish all test activities without any unplanned shortcuts.

• A tester must never make on-the-fly changes to any test configuration. A tester's role is to find and report, not to fix. During testing, the role of the tester is to execute the test case. All fixes should be made by an organization responsible for maintaining the software, after processing in accordance with the change control process of the program.

• Test organizations may delegate tasks such as test planning, test development, tool development, or tool acquisition to engineering, but responsibility, control, and accountability must reside with the test organization.

• The relationship between testing and other program and software organizations must be clear, agreed to, documented, and enabled through CM.


RULE 8: Software Testing Is Not Finished until the System Has Been Stressed above Its Rated Capacity and Critical Factors—Closure of Safety Hazards, Security Controls, Operational Factors—Have Been Demonstrated.

• Software functional testing and system testing must have a step that demonstrates anticipated realistic, not artificial, system environments. In this step, the system must be loaded to the maximum possible level as constrained by external data services. Also, potential failure conditions must be exercised in stressed environments.

• All safety hazard control features and security control features must be exercised in nominal and stressed environments to ensure that the controls they provide remain when the system is overloaded.

• The test data rates must be increased until the system is no longer operationally valid (a load-stepping sketch follows this rule's bullets).

• Test scenarios that are used in stress testing must define nominal system conditions. The specifics of the scenarios must then be varied, increasing in an orderly manner operator positions, database loads, external inputs, external outputs, and any other factors that would affect loading and, indirectly, response time.

• During execution of these tests, the available bandwidth must be flooded with low-priority traffic to ascertain if this traffic precludes high-priority inputs.

• Specific tests must exist to assure the capability of a fully loaded or overloaded system to gracefully degrade without the failure of some hardware component. Specific shutdown options of the system must be explicitly exercised in a stressed environment to assure their viability.

• On certain systems that have potential external input sources, the ability of an unauthorized user to gain access in periods of system stress must be tested. These penetration tests must be consistent with the security policy of the program.

• Satisfaction of predefined quality goals must be the basis for completion of all test activities, including individual test step execution, cases described through a test procedure, test level completion, and completion and satisfaction of a Test Plan. These quality goals must include stress and critical factor components.

• All software that impacts safety, or software that is part of the Trusted Computing Base segment of a secure application, must be tested to assure that security and hazard controls perform as required. All testing of critical components must be executed in nominal and stressed environments to assure that controls remain in place in periods of system stress.

• Free play testing must only be used when the design has been verified through Level 2 or 4 testing or when requirements have been validated through Level 3 or 6 testing. Free play tests of unstable design or nonvalidated requirements are not always productive.

• Black box tests of requirements (Levels 3, 5, 6, 7, or 8) must be defined, or at least validated, by someone with operational or domain experience so that the tests reflect the actual use requirements of the capability. Traceability between test cases and requirements must be maintained.

• All test procedures and cases must be defined to identify what the software does, what it doesn't do, and what it can't do.

• All critical tests, especially those that relate to critical operational safety or security requirements, must be executed at 50 percent of the rated system capacity, 100 percent of the rated capacity, and 150 percent of the rated capacity.

• Documentation of failures must include sufficient information to evaluate and correct the problem. Knowledge of the stress load the system was under can be used to identify points at which operational risk is excessive due to system loading.
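A sketch of the load-stepping idea in this rule (and in Rule 4's drive-to-destruction bullet). The system model, rated capacity, and acceptance threshold are all invented; the point is the controlled ramp through 50, 100, and 150 percent of rated capacity and the documentation of where service degrades.

```python
# Invented stand-in for the system under stress: response time grows as the
# offered load approaches an internal saturation point.
RATED_CAPACITY = 1000          # messages per second (invented rating)

def response_time_ms(load):
    saturation = 1400          # hidden failure point the test must discover
    if load >= saturation:
        return float("inf")    # system no longer operationally valid
    return 50.0 / (1.0 - load / saturation)

MAX_ACCEPTABLE_MS = 500        # predetermined success/failure criterion

# Controlled ramp: 50%, 100%, 150% of rated capacity and beyond, documenting
# for the test report the point where service becomes unacceptable.
for pct in range(50, 201, 25):
    load = RATED_CAPACITY * pct // 100
    rt = response_time_ms(load)
    status = "OK" if rt <= MAX_ACCEPTABLE_MS else "DEGRADED"
    print(f"{pct:3d}% of rated capacity ({load:4d} msg/s): {rt:8.1f} ms  {status}")
```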


RULE 9: A Robust Testing Program Reduces Program Acquisition Risks.

• System acquisition is becoming increasingly complex. More comprehensive testing strategies that can identify defects earlier in the development process are required.

ACTIVITIES THAT CAN SIGNIFICANTLY REDUCE PROGRAM RISKS WHEN PROPERLY TIED TO THE TEST PLAN:

• Ensure that the Program Plan addresses the testing strategy for the entire acquisition life cycle. It must consider schedule, cost, and performance objectives.

• Ensure that requirements traceability and the test strategy build confidence in the program development process during each phase.

• Develop an exhaustive Test Plan that retains requirements traceability from requirements documentation through product delivery. The Test Plan should define a testing strategy from cradle to grave while bolstering the credibility of program performance capabilities. This Plan is an essential element of any software development project.

• Establish a Configuration Management Plan. The CM Plan is key to a successful testing strategy. It helps to ensure that all requirements are incorporated, tracked, tested, and verified at each development stage.

• Establish a Quality Assurance Program. The QA Program verifies that requirements and configuration management processes support testing plan objectives, and that system architecture and engineering requirements are being achieved.

• Ensure visibility and monitoring of these independent but highly integrated activities to judiciously reduce programmatic risks.

ISSUES AND CONCERNS THAT MUST BE RAISED BY PROGRAM MANAGERS, RISK OFFICERS, AND TEST PERSONNEL:

• Has a CM Program been established to track documentation and data that is shared across the multilevel organization?

• Have requirements documents been incorporated into the CM Program?

• Has a requirements traceability matrix been developed?

• Has a testing strategy been developed that defines test processes from program conception to system delivery?


• Does the testing strategy incorporate the use of commercial best practices into the Test Plan?

• Does the Test Plan define quality gates for each development step that requires either formal or informal testing?

• Does the Test Plan identify testability criteria for each requirement?

• Are processes in place to monitor and control requirements changes and incorporation of those changes into the Test Plan?

• Are all activities integrated throughout the development process to facilitate continuous feedback of testing results and consideration of those results in determining program status?

• How does the consumption of program reserves during testing factor into program schedule and cost updates? How is the consumption of program reserves during testing recognized?

• Has Quality Assurance verified that all development processes and procedures comply with required activities?

RULE 10: All New Releases Used in a Test Configuration Must Be Regression-Tested Prior to Use.

• Regression testing is the revalidation of previously proven capabilities to assure their continued integrity when other components have been modified.

• When a group of tests is run, records must be kept to show which software has been executed. These records provide the basis for scheduling regression tests to revalidate capabilities prior to continuing new testing (see the sketch after this list).

• Every failure in a regression test must be addressed and resolved prior to the execution of the new test period.

• Regression test environments must duplicate to the maximum extent possible the environment used during the original test execution period.

• Regression tests must be a mix of white box and black box tests to verify known design areas as well as to validate proven capability.

• All regression tests must use components that are controlled by configuration management.

• Regression testing must be a normal part of all testing activities. It must be scheduled, planned, budgeted for, and generally supported by the testing organization. "Let's just run it to see if it works" is not a valid regression-testing strategy.
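A sketch of the record keeping described above, with an invented component: outputs recorded when a capability was first proven are kept as a CM-controlled baseline, and each new release is replayed against them before new testing begins.

```python
# Invented component under regression test (a later release of the software).
def format_position(lat, lon):
    return f"{lat:.4f},{lon:.4f}"

# Baseline: inputs and outputs recorded when the capability was first proven,
# kept under CM along with the software version that produced them.
baseline = {
    (38.8977, -77.0365):  "38.8977,-77.0365",
    (0.0, 0.0):           "0.0000,0.0000",
    (-33.8688, 151.2093): "-33.8688,151.2093",
}

# Regression run: every previously proven case must still pass before the new
# test period begins; any failure is resolved first (see the bullets above).
failures = {inp: format_position(*inp)
            for inp, expected in baseline.items()
            if format_position(*inp) != expected}
assert not failures, f"regression failures: {failures}"
print(f"regression: {len(baseline)} proven cases revalidated")
```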


SUMMARY

Testing is no longer considered a stand-alone and end-of-the-process evolution to be completed simply as an acquisition milestone. Rather, it has become a highly integral process that complements and supports other program activities while offering a means to significantly reduce programmatic risks. Early defect identification is possible through comprehensive testing and monitoring. Effective solutions and mitigation strategies emerge from proactive program management practices once risks have been identified.

Successful programs have demonstrated that integrated testing activities are key to program success with minimal associated risks.


THE AIRLIE SOFTWARE COUNCIL

Victor Basili, University of Maryland
Grady Booch, Rational
Norm Brown, Software Program Managers Network
Peter Chen, Chen & Associates, Inc.
Christine Davis, Texas Instruments
Tom DeMarco, The Atlantic Systems Guild
Mike Dyer, Lockheed Martin Corporation
Mike Evans, Integrated Computer Engineering, Inc.
Bill Hetzel, Qware
Capers Jones, Software Productivity Research, Inc.
Tim Lister, The Atlantic Systems Guild
John Manzo, 3Com
Lou Mazzucchelli, Gerard Klauer Mattison & Co., Inc.
Tom McCabe, McCabe & Associates
Frank McGrath, Software Focus, Inc.
Roger Pressman, R.S. Pressman & Associates, Inc.
Larry Putnam, Quantitative Software Management
Howard Rubin, Hunter College, CUNY
Ed Yourdon, American Programmer
