Software Engineering 2007/2008 Chapter 8 Testing the Programs


  • Learning Objectives
    After studying this chapter, you should be able to: define different types of faults and how to classify them; define the purpose of testing; describe unit testing and integration testing and understand the differences between them; describe several different testing strategies and understand their differences; describe the purpose of test planning; and apply several techniques for determining when to stop testing.

  • 8.1 SOFTWARE FAULTS AND FAILURES
    Fault identification is the process of determining what fault or faults caused the failure, and fault correction or removal is the process of making changes to the system so that the faults are removed.

  • 8.1 SOFTWARE FAULTS AND FAILURES
    Types of Faults. Algorithmic faults include: branching too soon; branching too late; testing for the wrong condition; forgetting to initialize variables or set loop invariants; forgetting to test for a particular condition (such as when division by zero might occur); and comparing variables of inappropriate data types.

  • 8.1 SOFTWARE FAULTS AND FAILURES
    Computation and precision faults occur when a formula's implementation is wrong or does not compute the result to the required degree of accuracy. For instance, combining integer and fixed- or floating-point variables in an expression may produce unexpected results.
    Documentation faults occur when the documentation does not match what the program actually does.
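As a small illustration (not from the slides), the following sketch shows a typical precision fault: a result that is not computed to the required degree of accuracy because of binary floating-point representation.

```python
# A precision fault: ten payments of 0.10 should total exactly 1.00,
# but 0.1 cannot be represented exactly in binary floating point.
total = 0.0
for _ in range(10):
    total += 0.1

print(total)          # 0.9999999999999999, not 1.0
print(total == 1.0)   # False -- an exact-equality check here would be a fault

# One remedy: compare against a tolerance, or use decimal.Decimal for money.
from decimal import Decimal
exact = sum(Decimal("0.1") for _ in range(10))
print(exact == Decimal("1.0"))   # True
```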

  • 8.1 SOFTWARE FAULTS AND FAILURES
    Overload faults occur when data structures (such as queues, buffers, and tables) are filled past their specified capacity.
    Capacity or boundary faults occur when the system's performance becomes unacceptable as system activity reaches its specified limit.
    Timing or coordination faults occur when the code coordinating real-time events is inadequate.

  • 8.1 SOFTWARE FAULTS AND FAILURES
    Throughput or performance faults occur when the system does not perform at the speed prescribed by the requirements.
    Recovery faults occur when a failure is encountered and the system does not behave as the designers desire or as the customer requires.
    Hardware and system software faults arise when the supplied hardware and system software do not actually work according to the documented operating conditions and procedures.

  • 8.2 Test organization
    Module (component or unit) testing verifies that the component functions properly with the types of input expected from studying the component's design.
    Integration testing is the process of verifying that the system components work together as described in the system and program design specifications.

  • 8.2 Test organization
    Function testing evaluates the system to determine if the functions described by the requirements specification are actually performed by the integrated system.
    Performance testing compares the system with the remainder of the software and hardware requirements.

  • 8.2 Test organization
    Acceptance testing: confer with the customer to make certain that the system works according to customer expectations.
    Installation testing: make sure that the system still functions as it should after it is installed at the customer's site.


  • 8.3 UNIT TESTING
    Examining the Code; Proving Code Correct; Testing Program Components; open- and closed-box testing.
    Closed-box testing suffers from uncertainty about whether the test cases selected will uncover a particular fault. Open-box testing always admits the danger of paying too much attention to the code's internal processing; we may end up testing what the program does instead of what it should do.

  • 8.3 UNIT TESTING
    By considering the program as a closed box, we can use the program's external specifications to generate initial test cases. These cases should incorporate not only the expected input data, but also boundary conditions for the input and output, as well as several cases of invalid data. For example, we may include a test case for each of the following (a short test sketch follows the list):

  • 8.3 UNIT TESTING
    a very large positive integer; a positive integer; a positive, fixed-point decimal; a number greater than 0 but less than 1; zero; a negative number; a nonnumeric character.
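A minimal closed-box test sketch for these input classes, assuming a hypothetical component parse_count that should accept only a positive integer supplied as text (the name and behavior are illustrative, not from the slides):

```python
import unittest

def parse_count(text: str) -> int:
    """Hypothetical component under test: accept only a positive integer given as text."""
    value = int(text)                 # raises ValueError for nonnumeric or decimal input
    if value <= 0:
        raise ValueError("a positive integer is required")
    return value

class ClosedBoxTests(unittest.TestCase):
    def test_very_large_positive_integer(self):
        self.assertEqual(parse_count("99999999999"), 99999999999)

    def test_positive_integer(self):
        self.assertEqual(parse_count("42"), 42)

    def test_positive_fixed_point_decimal(self):
        with self.assertRaises(ValueError):
            parse_count("3.14")

    def test_number_between_zero_and_one(self):
        with self.assertRaises(ValueError):
            parse_count("0.5")

    def test_zero(self):
        with self.assertRaises(ValueError):
            parse_count("0")

    def test_negative_number(self):
        with self.assertRaises(ValueError):
            parse_count("-7")

    def test_nonnumeric_character(self):
        with self.assertRaises(ValueError):
            parse_count("x")

if __name__ == "__main__":
    unittest.main()
```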

  • 8.3 UNIT TESTING
    Test Thoroughness
    Statement testing: every statement in the component is executed at least once in some test.
    Branch testing: for every decision point in the code, each branch is chosen at least once in some test.
    Path testing: every distinct path through the code is executed at least once in some test.
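The difference between these criteria can be seen on a tiny function (an illustrative sketch, not taken from the textbook):

```python
def classify(x: int, y: int) -> str:
    result = "neither"
    if x > 0:                  # decision point 1
        result = "x positive"
    if y > 0:                  # decision point 2
        result = "y positive"
    return result

# Statement testing: classify(1, 1) alone executes every statement.
# Branch testing: the false branch of each decision must also be taken,
#   e.g. add classify(1, -1) and classify(-1, 1).
# Path testing: all four true/false combinations are needed:
#   classify(1, 1), classify(1, -1), classify(-1, 1), classify(-1, -1).
```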

  • 8.3 UNIT TESTING
    Definition-use path testing: every path from every definition of every variable to every use of that definition is exercised in some test.
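For instance, in the following sketch (illustrative only), each definition of total must reach each of its uses in at least one test:

```python
def running_total(values, cap):
    total = 0                     # definition d1 of total
    for v in values:
        total = total + v         # use of total, then definition d2
        if total > cap:           # use of total
            return cap
    return total                  # use of total

# Definition-use coverage asks for tests such as:
#   running_total([], 10)      -> d1 flows directly to the final return
#   running_total([3, 4], 10)  -> d2 flows to the addition, the comparison, and the return
#   running_total([20], 10)    -> d2 flows to the comparison that triggers 'return cap'
```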

  • 8.4 INTEGRATION TESTING
    External Documentation
    Describing the problem: in the first section of the code's documentation, you should explain what problem is being addressed by the component.
    Describing the algorithms: you should explain each algorithm used by the component, including formulas, boundary or special conditions, and even its derivation or a reference to the book or paper from which it is derived.

  • 8.4 INTEGRATION TESTING
    Bottom-up integration: each component at the lowest level of the system hierarchy is tested individually first. Then, the next components to be tested are those that call the previously tested ones. This approach is followed repeatedly until all components are included in the testing.
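A minimal bottom-up sketch (component names are assumed for illustration): the lowest-level component is tested on its own, and then the component that calls it is tested using the already verified version.

```python
# Step 1: test the lowest-level component by itself.
def apply_discount(price: float, rate: float) -> float:
    return round(price * (1 - rate), 2)

assert apply_discount(100.0, 0.5) == 50.0     # low-level component verified first

# Step 2: test the component that calls it, reusing the tested apply_discount.
def invoice_total(prices, rate: float) -> float:
    return round(sum(apply_discount(p, rate) for p in prices), 2)

assert invoice_total([100.0, 50.0], 0.5) == 75.0
```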

  • 8.4 INTEGRATION TESTING
    A component being tested may call another that is not yet tested, so we write a stub, a special-purpose program to simulate the activity of the missing component. The stub answers the calling sequence and passes back output data that lets the testing process continue.
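A stub can be as simple as a function that returns canned data in place of the missing component (a sketch; generate_report and fetch_sales_data are assumed names, not from the slides):

```python
def fetch_sales_data_stub(region: str):
    """Stub: stands in for the not-yet-written data-access component
    and passes back fixed, plausible output data."""
    return [("widget", 3), ("gadget", 5)]

def generate_report(region: str, fetch=fetch_sales_data_stub) -> str:
    """Higher-level component under test; it calls the (stubbed) lower-level one."""
    records = fetch(region)
    total = sum(qty for _, qty in records)
    return f"{region}: {total} units sold"

# The calling component can be tested before the real fetch_sales_data exists;
# later, the stub is replaced by the real component.
assert generate_report("north") == "north: 8 units sold"
```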

  • 8.4 INTEGRATION TESTING
    Top-down integration: a disadvantage of top-down testing is the possibility that a very large number of stubs may be required.

  • 8.5 TESTING OBJECT-ORIENTED SYSTEMS
    Testing the Code. Rumbaugh et al. (1991) propose that you begin testing object-oriented systems by asking several questions:
    When your code expects a unique value, is there a path that generates a unique result?
    When there are many possible values, is there a way to select a unique result?
    Are there useful cases that are not handled?

  • 8.5 TESTING OBJECT-ORIENTED SYSTEMS
    Possible signs of design problems include:
    you find asymmetric associations or generalizations
    you find disparate attributes and operations on a class
    one class is playing two or more roles
    an operation has no good target class
    you find two associations with the same name and purpose

  • 8.6 TEST PLANNING
    We must plan each of these test steps:
    1. establishing test objectives
    2. designing test cases
    3. writing test cases
    4. testing test cases
    5. executing tests
    6. evaluating test results

  • 8.7 AUTOMATED TESTING TOOLS
    Code Analysis Tools
    1. Code analyzer: the components are evaluated automatically for proper syntax. Statements can be highlighted if the syntax is wrong, if a construction is fault-prone, or if an item has not been defined.
    2. Structure checker: this tool generates a graph from the components submitted as input. The graph depicts the logic flow, and the tool checks for structural flaws.
    3. Data analyzer: the tool reviews the data structures, data declarations, and component interfaces, and then notes improper linkage among components, conflicting data definitions, and illegal data usage.
    4. Sequence checker: the tool checks sequences of events; if events are coded in the wrong sequence, they are highlighted.