
Software Testing

Testing is a set of activities that can be planned in advance and conducted systematically to find errors in a software product.

A strategy for software testing must accommodate low-level tests that are necessary to verify that a small source code segment has been correctly implemented, as well as high-level tests that validate major system functions against customer requirements.

Verification and Validation-

Verification- Verification refers to the set of activities that ensure that software correctly implements a specified function, i.e. it refers to the code ("Are we building the product right?").

Validation- Validation refers to a different set of activities that ensure that the software that has been built is traceable to customer requirements ("Are we building the right product?").

Testing provides the last bastion from which quality can be assessed and errors can be uncovered. However, quality cannot be tested in only at the end; it must be incorporated at every step of the software process.

Stakeholders in the testing process- The software developer is always responsible for testing the individual units of the program, ensuring that each performs the function or exhibits the behavior for which it was designed.

The developer also conducts integration testing- a testing step that leads to the construction of the complete software architecture.

The role of the independent test group (ITG) is to remove the inherent problems associated with letting the builder test what has been built. The developer and the ITG work closely throughout the software project to ensure that thorough tests will be conducted.

Software testing can also be viewed as a spiral that can be divided into the following phases-

a) Unit Testing- Unit testing begins at the vertex of the spiral and concentrates on each unit (i.e. component) of the software as implemented in source code.

b) Integration Testing- The focus is on design and construction of the software architecture.

c) Validation Testing- Requests established as part of the software requirements analysis are validated against the software that has been constructed.

d) System Testing- The software and other system elements are tested as a whole.

e) Object-Oriented Testing- It involves testing the classes first; then, as classes are integrated into an object-oriented architecture, a series of regression tests are run to uncover errors due to connections and collaborations between classes and side effects caused by the addition of new classes.


Details of Testing

Unit Testing- Unit testing focuses verification effort on the smallest unit of software design- the software component or module.

Using the component-level design description as a guide, important control paths are tested to uncover errors within the boundary of the module. The unit test focuses on the internal processing logic and data structures within the boundaries of the component.

Local data structures are examined to ensure that data stored temporarily maintains its integrity during all steps in an algorithm’s execution. All independent paths through the control structure are exercised to ensure that all statements in a module have been executed at least once.

Among the common errors uncovered in unit testing are-
1) Incorrect arithmetic precedence
2) Mixed mode operations
3) Incorrect initialization
4) Precision inaccuracy
5) Boundary value failures

For testing a unit, a driver program must be developed. A driver is nothing but a main program that accepts test data, passes such data to the component and prints the relevant results. Stubs serve to replace modules that are subordinate to (called by) the component to be tested.
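The driver-and-stub idea above can be sketched in a few lines of Python. Everything here is illustrative: `compute_discount` is a hypothetical component under test, and its subordinate module is replaced by a stub that returns a canned answer.

```python
def fetch_customer_tier_stub(customer_id):
    # Stub: stands in for the real subordinate module with a canned answer.
    return "gold"

def compute_discount(amount, customer_id, tier_lookup=fetch_customer_tier_stub):
    # Component under test: applies a discount based on customer tier.
    tier = tier_lookup(customer_id)
    return amount * (0.9 if tier == "gold" else 1.0)

def driver():
    # Driver: accepts test data, passes it to the component, prints results.
    cases = [(100.0, "c1", 90.0), (50.0, "c2", 45.0)]
    for amount, cid, expected in cases:
        result = compute_discount(amount, cid)
        verdict = "PASS" if abs(result - expected) < 1e-9 else "FAIL"
        print(f"input={amount} expected={expected} got={result} {verdict}")

driver()
```

In a real project the stub would later be replaced by the tested subordinate module during integration.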

Integration Testing-

Integration testing is a systematic technique for constructing the software architecture while at the same time conducting tests to uncover errors associated with interfacing. The objective is to take unit-tested components and build a program structure that has been dictated by design.

Types of Integration-

1. Incremental Testing- Incremental testing is the antithesis of the big bang approach. The program is constructed and tested in small increments where errors are easier to isolate and correct.

2. Top-down Integration Testing- It is an incremental approach to construction of the software architecture. Modules are integrated by moving downward through the control hierarchy, beginning with the main control module. It is again of two types-

a) Depth-first integration- It integrates all components on a major control path of the program structure. Selection of a major path is somewhat arbitrary and depends on application-specific characteristics.


b) Breadth-first integration- It incorporates all components directly subordinate at each level, moving across the structure horizontally.

3. Bottom-up integration testing- Bottom-up integration testing begins construction and testing with atomic modules (i.e. components at the lowest levels) of the program. Because components are integrated from the bottom up, the processing required for components subordinate to a given level is always available, and the need for stubs is eliminated.

4. Regression Testing- Each time a new module is added as part of integration testing, the software changes: new data flow paths are established, new I/O may occur and new control logic is invoked. As functions are added there is a need to re-test to ensure the changes have no side effects on the rest of the software. Regression testing may be done manually by re-executing a subset of test cases, or it may use an automated software tool.
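A minimal sketch of the automated flavor of regression testing: a registry of test cases is re-executed after each change. All function names below are illustrative, not from the original text.

```python
regression_suite = []

def regression_test(fn):
    # Decorator: registers a test case in the regression suite.
    regression_suite.append(fn)
    return fn

def add(a, b):          # previously tested, existing function
    return a + b

def multiply(a, b):     # newly added function
    return a * b

@regression_test
def test_add():
    assert add(2, 3) == 5

@regression_test
def test_multiply():
    assert multiply(2, 3) == 6

def run_regression():
    # Re-execute the whole subset; any failure signals a side effect.
    for test in regression_suite:
        test()
    return len(regression_suite)

print(run_regression(), "regression tests passed")
```

Frameworks such as pytest automate exactly this collect-and-rerun cycle.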

5. Smoke testing- It is an integration testing approach that is mainly used when software products are being developed. It can be applied to complex, time-critical software products. The following steps are involved-

a) All software components are integrated into a build. A build includes all data files, libraries, reusable modules and engineered components.

b) A series of tests is designed to expose errors that will keep the build from properly performing its function.

c) The build is integrated with other builds and the entire product is smoke tested.

6. Sandwich Testing- Selection of integration testing strategy depends on the software characteristics and the project schedule. A combined approach called sandwich testing uses top-down integration tests for top levels and bottom-up approach for bottom levels.

Integration testing must always identify critical modules which have high level of control, definite performance requirements, are complex and are error-prone.

7. Object-Oriented software testing strategies (Unit and Integration Testing)

i) Unit testing- An encapsulated class is the focus of unit testing. Operations within the class are the smallest testable units. Also, an operation should be tested in each of the classes within which it is used.

ii) Integration Testing- There are three different approaches for integration testing an OO system.

a. Thread based testing- It integrates the set of classes required to respond to one input or event of the system.


b. Use-based testing- First the independent classes (the server classes) are tested; these are followed by the dependent classes that use them.

c. Cluster testing- A cluster of collaborating classes is exercised by designing test cases that attempt to uncover errors in the collaboration.

Validation Testing (for system requirements)

Validation testing checks that the software functions in a manner that can be reasonably expected by the customer. These expectations are defined in the software requirements specification, which contains the visible attributes of the software.

A test-plan outlines the classes of tests to be conducted and a test procedure defines specific test cases. Both the plan and procedure are designed to ensure that all performance requirements are attained, documentation is correct and usability requirements are met.

An important element of the validation process is a configuration review. The purpose of the review is to ensure that all elements of the software configuration have been properly developed and have the necessary details to support the maintenance phase.

Alpha and beta testing-

Acceptance testing- When custom software is built for a customer, a series of acceptance tests is conducted to enable the customer to validate all requirements. An acceptance test can range from an informal test drive to a planned and systematically executed series of tests.

Alpha testing- Alpha testing is conducted at developer’s site by end users.

Beta testing- Beta testing is conducted at end users' sites. It is a live application of the software in an environment that cannot be controlled by the developer.

System Testing-

It involves testing everything including hardware, software, people and information. System testing is actually a series of different tests whose primary purpose is to fully exercise the computer-based system. Although each test has a different purpose, all work to verify that system elements have been properly integrated and perform allotted functions. It involves the following steps-

a) Recovery testing- Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed. Automatic recovery, reinitialization, checkpointing mechanisms, data recovery and restart are evaluated for correctness. If recovery requires human intervention, the mean time to repair (MTTR) is evaluated to determine whether it is within acceptable limits.

b) Security testing- Security testing verifies that protection mechanisms built into the system will in fact protect it. The tester may attempt to acquire passwords through external means, may attack the system with custom software designed to break down any defenses that have been constructed, may overwhelm the system to deny service to others, may purposely cause system errors, or may browse through insecure data.

c) Stress testing- Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency or volume. For example, special tests may be designed that generate ten interrupts per second, input data rates may be increased, maximum memory utilization tests may be done, and excessive disk-read tests may be done.

d) Performance testing- Performance testing is done to test the run-time performance of the software within the context of the integrated system. It is important to monitor execution intervals, log events and sample machine states on a regular basis.

e) Debugging- Debugging occurs as a consequence of successful testing: when a test case uncovers an error, debugging is the action that results in the removal of the error. The debugging process begins with the execution of a test case. Debugging strategies include brute force (using memory dumps and output statements), backtracking (tracing the source code backward from the point where the symptom appears), and cause elimination (formulating a hypothesis about the cause and testing it).

Testing Tactics

Definition- Software testability is simply how easily a computer program can be tested. Characteristics that lead to a testable system-

a) Operability- If a system is designed and implemented with quality in mind, relatively few bugs will block the execution of tests, allowing testing to progress without fits and starts.

b) Observability- Inputs provided as part of testing produce distinct outputs.

c) Controllability- The better we can control the software, the more testing can be automated and optimized. Controlling the software and hardware variables is important.

d) Decomposability- The software system is built from independent modules that can be tested independently.

e) Simplicity- There should be functional simplicity, structural simplicity and code simplicity.

f) Stability- Changes to the software are infrequent, controlled when they occur, and do not invalidate existing tests.

g) Understandability- The architectural design and the dependencies between internal, external and shared components are well understood.

Black Box and White Box testing-

A black-box test examines some fundamental aspect of a system with little regard for the internal logical structure of the software.

White-box testing involves close examination of procedural detail: logical paths through the software and collaborations between components are tested by providing test cases that exercise specific sets of conditions and/or loops.

1. White-Box Testing- White-box testing, sometimes called glass-box testing, is a test-case design philosophy that uses the control structure described as part of component-level design to derive test cases. Using white-box testing methods, the software engineer can derive test cases that exercise all independent paths, exercise logical decisions, execute loops, and exercise internal data structures.

White-box testing methodologies

a) Basis-path testing- Basis path testing is a white-box testing technique proposed by Tom McCabe. For basis path testing we use-

Flow Graph- It depicts the logical control flow. A flow graph can be derived from a flow chart: each statement (or sequence of statements) maps to a node, and the control structures determine the edges between nodes.

b) Independent Path- An independent path is any path through the program that introduces at least one new set of processing statements. It can be derived from a flow graph, for example: Path 1: 1-11; Path 2: 1-2-3-4-5-10-11.

c) Cyclomatic complexity- It is a software metric that provides a quantitative measure of the logical complexity of a program. Cyclomatic complexity defines the number of independent paths in the basis set of a program and provides an upper bound on the number of tests that must be conducted. It can be computed as-

V(G) = number of regions of the flow graph, or

V(G) = E - N + 2, where E is the number of edges and N is the number of nodes.
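The E - N + 2 formula can be computed mechanically once the flow graph is in hand. A small sketch, using an illustrative flow graph for a function with a single if/else:

```python
def cyclomatic_complexity(graph):
    # graph: adjacency list mapping each node to its successor nodes.
    nodes = len(graph)
    edges = sum(len(successors) for successors in graph.values())
    return edges - nodes + 2          # V(G) = E - N + 2

# Flow graph of one if/else: 1 -> 2, 1 -> 3, 2 -> 4, 3 -> 4
flow_graph = {1: [2, 3], 2: [4], 3: [4], 4: []}
print(cyclomatic_complexity(flow_graph))  # 4 edges - 4 nodes + 2 = 2
```

V(G) = 2 here matches intuition: one decision yields two independent paths, so two test cases bound the basis set.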

d) Deriving test cases- It involves steps like drawing a flow graph from a flow chart, determining the cyclomatic complexity, determining a set of linearly independent paths, and preparing test cases that force execution of each path.


e) Graph Matrices- To build a software tool that assists in basis path testing, the flow graph can be stored as a graph matrix, from which the independent paths can be computed to derive test cases.

f) Control structure testing- It involves condition testing, in which a test case is designed to check the logical conditions contained in a program module.

It involves data flow testing, a method that selects test paths of a program according to the location of definitions and uses of variables.

A definition-use (DU) chain of a variable X is [X, S, S'], where the definition of variable X in statement S is live at statement S', i.e. there exists a path from S to S' along which X is not redefined. Every DU chain should be covered at least once.
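A tiny illustrative fragment annotated with its DU chains (the function and statement labels are hypothetical, chosen only to make the [X, S, S'] notation concrete):

```python
def scale(values, factor):
    total = 0                  # S1: definition of total
    for v in values:           # S2: definition of v
        total = total + v      # S3: use of total [total, S1, S3]; use of v [v, S2, S3]
    return total * factor      # S4: use of total defined at S3 [total, S3, S4]

print(scale([1, 2, 3], 2))     # 12
```

A data-flow-adequate test suite would exercise each of these chains at least once, e.g. with both empty and non-empty `values`.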

It also involves loop testing, which focuses exclusively on the validity of loop constructs, covering different types of loops: simple loops, nested loops, concatenated loops and unstructured loops.

2. Black Box Testing-

Black Box testing also called behavioral testing focuses on the functional requirements of the software. Black box testing attempts to find errors in the following categories- incorrect or missing function, interface errors , errors in data structure, performance or behavior errors, initialization or termination errors.

a) Graph-based testing methods- A graph is created as a collection of nodes that represent objects, links that represent relations between objects, node weights that describe the properties of a node, and link weights that describe some characteristic of a link. A link can be a directed link, symmetric link, bidirectional link or parallel link. Objects to be tested can be a document window, document text, etc.

b) Transaction flow modeling- It involves finite state modeling, data flow modeling and timing modeling.

c) Equivalence Partitioning- Equivalence partitioning is a black-box testing technique that divides the input domain of a program into classes of data from which test cases can be derived. Equivalence partitioning strives to define a test case that uncovers a class of errors, thereby reducing the total number of test cases that must be developed.
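A minimal sketch of equivalence partitioning for a hypothetical age-validation rule (the function and the 18-65 range are illustrative): the input domain splits into three classes, and one representative value is drawn from each.

```python
def is_valid_age(age):
    # Illustrative business rule: ages 18..65 inclusive are accepted.
    return 18 <= age <= 65

# One representative per equivalence class: below range, in range, above range.
partitions = {"below": 10, "valid": 40, "above": 80}
expected = {"below": False, "valid": True, "above": False}

for name, value in partitions.items():
    assert is_valid_age(value) == expected[name]
print("all equivalence classes covered")
```

Three test cases stand in for the whole input domain, because any other value in a class is expected to behave like its representative.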

d) Boundary value analysis- A great number of errors occur at the boundaries of the input domain. BVA leads to a selection of test cases that exercise boundary values. It complements equivalence partitioning because it leads to the selection of test cases at the edges of a class.
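For a range check, the classic BVA selection takes values at and immediately around each boundary. A sketch, reusing the illustrative 18-65 range:

```python
def boundary_values(low, high):
    # Classic BVA selection for a closed range [low, high]:
    # just below, at, and just above each boundary.
    return [low - 1, low, low + 1, high - 1, high, high + 1]

print(boundary_values(18, 65))  # [17, 18, 19, 64, 65, 66]
```

These six cases target the off-by-one errors that equivalence-class representatives alone would miss.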


e) Orthogonal Array Testing- Orthogonal array testing can be applied to problems in which the input domain is relatively small but too large to accommodate exhaustive testing. If there are three inputs X, Y and Z, each taking three values, then instead of varying one input at a time it is possible to consider all the different combinations, i.e. 3^3 = 27. An orthogonal array samples this space in a balanced way; it is like probing all faces of a cube rather than every interior point.
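The 3^3 = 27 figure is just the size of the full cross product, which is easy to verify; an L9-style orthogonal array would cover all pairwise value combinations of these three-level factors in only 9 balanced runs. A sketch of the exhaustive count:

```python
from itertools import product

# Three inputs, each with three possible (illustrative) values.
levels = [1, 2, 3]
exhaustive = list(product(levels, repeat=3))
print(len(exhaustive))  # 3^3 = 27 test cases
```

Orthogonal array testing trades this exhaustive set for a small subset in which every pair of factor values still appears together.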

3. Object-Oriented Testing Methods- Object-oriented testing is used for testing object-oriented software. It involves the following methods-

a) Fault-Based Testing- The objective of fault-based testing within an OO system is to design tests that have a high likelihood of uncovering plausible faults. To determine whether faults exist, test cases are designed to exercise the design or code.

b) Integration Testing- Integration testing looks for plausible faults in operation calls or message connections. Integration testing applies to attributes as well as operations. Testing should exercise the attributes to determine whether proper values occur for distinct types of object behavior.

c) Testing class hierarchy- Tests have to be designed for both the base and derived classes.

d) Scenario-based testing- Scenario-based testing concentrates on what the user does. This means capturing the tasks the user has to perform, then applying them and their variants as tests. Scenarios uncover interaction errors, but to accomplish this, test cases must be more complex and more realistic than fault-based tests.

e) Testing surface and deep structure- Surface structure refers to the externally observable structure of an OO program, i.e. the structure that is visible to the end user. Deep structure refers to the internal technical details of an OO program, i.e. the structure that is understood by examining the design and the code.

f) Testing at class level-

Random Testing- If a class contains a number of operations, a variety of operation sequences may be generated randomly, e.g. for a banking application, test case r1: open-setup-deposit-close.

Partition Testing- Partition testing reduces the number of test cases required to exercise the class by partitioning the input domain. Inputs and outputs are categorized and test cases are designed for each category.

g) Inter-class test case design- Test case design becomes more complicated as integration of the object-oriented system begins. It is at this stage that testing of collaboration between classes must begin.

Multiple-class testing-
i) For each client class, use the list of class operations to generate a series of random test sequences. The operations will send messages to other server classes.
ii) For each message that is generated, determine the collaborator class and the corresponding operation in the server class.
iii) For each operation in the server object, determine the messages that it transmits.
iv) For each of these messages, determine the next level of operations that are invoked, and incorporate them into the test sequence.

h) Tests from behavior models- The state diagram for a class can be used to help derive a sequence of tests that will exercise the dynamic behavior of the class and those classes that collaborate with it. The tests designed should achieve coverage of all states.

4. Testing for specialized environments- Unique guidelines and approaches are required to test specialized environments.

a) Testing GUIs- Finite state modeling graphs may be used to derive a series of tests that address specific data and program objects relevant to the GUI.

b) Testing client-server architectures- Testing of client-server software occurs at three levels: i) individual client applications are tested in a disconnected mode; ii) client applications and associated server software are tested together; iii) the complete client/server architecture, including network operation and performance, is tested. Other tests include application function tests, server tests, database tests, transaction tests and network connection tests.

c) Testing documentation and help facilities- Documentation testing can be approached in two steps: i) review and inspection examines the document for editorial clarity; ii) live test uses the documentation in conjunction with the actual program.

d) Testing for real-time systems- Comprehensive test case design methods for real-time systems continue to evolve. The steps involved are- i) task testing tests each task independently; ii) behavioral testing simulates the behavior of the real-time system and examines its behavior as a consequence of external events; iii) inter-task testing tests asynchronous tasks that are known to communicate with one another, using different data rates and processing loads to determine whether inter-task synchronization errors occur; iv) software and hardware are integrated and a full range of system tests is conducted in an attempt to uncover errors at the software/hardware interface.

Product Metrics

1. Software quality- Software quality is conformance to explicitly stated functional and performance requirements, explicitly documented development standards, and implicit characteristics that are expected of all professionally developed software. A few important points should be kept in mind-

a) Software requirements are the foundation from which quality is measured. Lack of conformance to requirements is lack of quality.


b) Specified standards define a set of development criteria that guide the manner in which software is engineered.

c) There is a set of implicit requirements that often goes unmentioned, e.g. ease of use.

2. McCall's quality factors-
a) Correctness- The extent to which a program satisfies its specification and fulfills the customer's mission objectives.
b) Reliability- The extent to which a program can be expected to perform its intended function.
c) Efficiency- The amount of computing resources and code required by a program to perform its function.
d) Integrity- The extent to which access to software or data by unauthorized persons can be controlled.
e) Usability- The effort required to learn, operate, prepare input for, and interpret output of a program.
f) Maintainability- The effort required to locate and fix an error in a program.
g) Flexibility- The effort required to modify an operational program.
h) Testability- The effort required to test a program to ensure that it performs its intended function.
i) Portability- The effort required to transfer a program from one hardware/software environment to another.
j) Reusability- The extent to which a program can be reused in another application.
k) Interoperability- The effort required to couple one system to another.

3. Quantitative View-

a) Measure- A measure provides a quantitative indication of the extent, amount, dimension, capacity or size of some attribute of a product or process.

b) Metric- A quantitative measure of the degree to which a system, component or process possesses a given attribute.

c) Indicator- An indicator is a metric that provides insight into the software process, a software project or the product itself.

d) Characteristics- Measurement can be characterized by five activities-
i. Formulation- The derivation of software measures and metrics appropriate for the representation of the software being considered.
ii. Collection- The mechanism used to accumulate the data required to derive the formulated metrics.
iii. Analysis- The computation of metrics and the application of mathematical tools.
iv. Interpretation- The evaluation of metrics in an effort to gain insight into quality.
v. Feedback- Recommendations derived from the interpretation of the metrics are transmitted to the software team.


4. Goal-oriented software measurement- GQM emphasizes the need to-
a) Establish an explicit measurement goal.
b) Define a set of questions that must be answered to achieve the goal.
c) Identify the metrics that help to answer these questions and achieve the goal.

5. Attributes of effective software metrics-
a) Simple and computable
b) Empirically and intuitively persuasive
c) Consistent and objective
d) Consistent in the use of units and dimensions
e) Programming language independent
f) An effective mechanism for high-quality feedback

6. Metrics for the analysis model

a) Function point metric- It is used for measuring the functionality delivered by a system. The metric can be used to-
i) Estimate the cost or effort required for design and testing
ii) Predict the number of errors
iii) Forecast the number of components

It depends on five information domain values-
i) Number of external inputs (EI)
ii) Number of external outputs (EO)
iii) Number of external inquiries (EQ)
iv) Number of internal logical files (ILF)
v) Number of external interface files (EIF)

FP = count total × [0.65 + 0.01 × ΣFi]

where count total is the sum of the weighted information domain values and the Fi (i = 1 to 14) are value adjustment factors based on other system characteristics.
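The FP computation above can be sketched directly. The weights below (EI=4, EO=5, EQ=4, ILF=10, EIF=7) are the standard average-complexity weights; the specific counts and ΣFi value in the example call are purely illustrative.

```python
def function_points(ei, eo, eq, ilf, eif, f_sum):
    # Weighted count total using average-complexity weights.
    count_total = 4 * ei + 5 * eo + 4 * eq + 10 * ilf + 7 * eif
    # Value adjustment: FP = count_total * [0.65 + 0.01 * sum(Fi)]
    return count_total * (0.65 + 0.01 * f_sum)

# Illustrative system: 20 inputs, 12 outputs, 16 inquiries, 4 ILFs, 2 EIFs,
# and value adjustment factors summing to 46.
print(function_points(ei=20, eo=12, eq=16, ilf=4, eif=2, f_sum=46))
```

Here count total is 258 and the value adjustment factor is 1.11, giving roughly 286 function points.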

b) Metrics for specification quality- The analysis model can also be assessed for completeness, correctness, understandability, verifiability, internal and external consistency, achievability, conciseness, traceability, modifiability, precision and reusability.

nr = nf + nnf
where nr = total number of requirements, nf = number of functional requirements and nnf = number of non-functional requirements.

Specificity (lack of ambiguity): Q1 = nui / nr
where nui = number of requirements for which all reviewers had identical (unambiguous) interpretations.

Completeness: Q2 = nu / (ni × ns)
where nu = number of unique functional requirements, ni = number of input stimuli and ns = number of states specified.


7. Metrics for the design model-

a) Architectural design metrics- These are also called black-box metrics because they do not require knowledge of the inner workings of any software component. The design metrics are-

i) Structural complexity- S(i) = fout(i)^2, where fout(i) is the fan-out of module i.

ii) Data complexity- D(i) = v(i) / [fout(i) + 1], where v(i) is the number of input and output variables passed to and from module i.

iii) System complexity- C(i) = S(i) + D(i), the sum of structural and data complexity.
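The three formulas chain together mechanically; a sketch with illustrative fan-out and variable counts for a single module:

```python
def structural_complexity(fan_out):
    return fan_out ** 2                      # S(i) = fout(i)^2

def data_complexity(num_vars, fan_out):
    return num_vars / (fan_out + 1)          # D(i) = v(i) / (fout(i) + 1)

def system_complexity(num_vars, fan_out):
    # C(i) = S(i) + D(i)
    return structural_complexity(fan_out) + data_complexity(num_vars, fan_out)

# A module with fan-out 3 that passes 6 input/output variables.
print(system_complexity(num_vars=6, fan_out=3))  # 9 + 6/4 = 10.5
```

Modules with high C(i) are candidates for restructuring, since both their control coupling and their data traffic are heavy.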

b) Metrics based on a tree diagram of module structure (DSQI)-

Size = n + a, where n = number of nodes (modules) and a = number of arcs (lines of control); the arc-to-node ratio is r = a/n.

The other measures are-
S1 = total number of modules
S2 = number of modules whose correct function depends on the source of data input
S3 = number of modules whose correct function depends on prior processing
S4 = number of database items
S5 = total number of unique database items
S6 = number of database segments
S7 = number of modules with single entry and single exit

Program structure: D1 = 1 if the architectural design was developed using a distinct method, else D1 = 0
Module independence: D2 = 1 - (S2/S1)
Modules not dependent on prior processing: D3 = 1 - (S3/S1)
Database size: D4 = 1 - (S5/S4)
Database compartmentalization: D5 = 1 - (S6/S4)
Module entrance/exit characteristic: D6 = 1 - (S7/S1)

The design structure quality index is DSQI = Σ wi Di, where the wi are weights that sum to 1.

c) Metrics for object-oriented design- Size, Complexity, Coupling, Sufficiency, Completeness, Cohesion, Primitiveness, Similarity, Volatility.

8. Class-oriented metrics-

a) CK metrics suite (Chidamber and Kemerer)-

i) Weighted methods per class (WMC)- Assume that n methods of complexity c1, c2, ..., cn are defined for class C. The specific complexity metric that is chosen should be normalized so that nominal complexity for a method takes on a value of 1.0. Then WMC = Σ ci, for i = 1 to n.

ii) Depth of inheritance tree (DIT)- This metric is the maximum length from the node to the root of the tree. The greater the depth of the class hierarchy, the greater the complexity.
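WMC is just the sum of the per-method complexities once those are measured. A minimal sketch, with illustrative complexity values (e.g. cyclomatic complexity per method, normalized so nominal = 1.0):

```python
def wmc(method_complexities):
    # Weighted methods per class: WMC = sum of c_i over the n methods.
    return sum(method_complexities)

# A hypothetical class with four methods of complexity 1.0, 2.0, 1.0, 3.0.
print(wmc([1.0, 2.0, 1.0, 3.0]))  # 7.0
```

If every method is assigned the nominal complexity 1.0, WMC degenerates into a simple method count, which is why the choice of the underlying complexity metric matters.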

iii) Number of children (NOC)- The subclasses that are immediately subordinate to a class are considered its children. As NOC increases, complexity will increase.

iv) Coupling between object classes (CBO)- CBO is the number of collaborations listed for a class on its CRC index card. As CBO increases, the reusability of a class decreases while its complexity increases, so CBO should be kept low.

v) Response for a class (RFC)- It is the set of methods that can potentially be executed in response to a message received by an object of that class. As RFC increases, so does design complexity.

vi) Lack of cohesion in methods (LCOM)- LCOM is the number of methods that access one or more of the same attributes. If LCOM is high, methods may be coupled to one another via the class's attributes, increasing the complexity of the class design. Cohesion should be high, whereas LCOM should be low.

b) MOOD metrics suite- Metrics for OO design that are quantitative in nature.

i) Method inheritance factor (MIF)- It measures the degree to which inheritance is used:

MIF = Σ Mi(Cj) / Σ Ma(Cj), summed over all classes Cj

where Ma(Cj) = Md(Cj) + Mi(Cj); Ma(Cj) = the number of methods that can be invoked in association with Cj, Md(Cj) = the number of methods declared in class Cj, and Mi(Cj) = the number of methods inherited in Cj.

ii) Coupling Factor (CF)- Coupling can be defined in the following way:

CF = Σi Σj is_client(Ci, Cj) / (Tc^2 - Tc), for i = 1 to Tc and j = 1 to Tc

where is_client = 1 if a relationship exists between class Ci and class Cj, and 0 otherwise; Tc is the total number of classes.

c) OO metrics by Lorenz and Kidd-

i) Class size- The number of operations and the number of attributes (NOA) of the class.

ii) Component-level design metrics- Cohesion at the component level involves five concepts and measures: data slice (data values), data tokens (variables), glue tokens (tokens attached to some data slice), superglue tokens (tokens attached to every data slice) and stickiness (the number of data slices bound by a glue token).

iii) Coupling metrics- Dhama proposed a metric for module coupling that encompasses data and control flow coupling, global coupling and environmental coupling. The formula is-

M = Di + (a × Ci) + Do + (b × Co) + Gd + (c × Gc) + W + R


where
Di = number of input data parameters
Ci = number of input control parameters
Do = number of output data parameters
Co = number of output control parameters
Gd = number of global variables used as data
Gc = number of global variables used as control
W = number of modules called (fan-out)
R = number of modules calling the module under consideration (fan-in)
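Dhama's M is a straight weighted sum of the counts defined above. A sketch, taking the proportionality constants a, b, c as 1 and using illustrative counts for a single module:

```python
def coupling_measure(di, ci, do, co, gd, gc, w, r, a=1, b=1, c=1):
    # M = Di + a*Ci + Do + b*Co + Gd + c*Gc + W + R
    return di + a * ci + do + b * co + gd + c * gc + w + r

# A module with 2 data inputs, 1 control input, 1 data output,
# no globals, 1 call out and 1 caller.
print(coupling_measure(di=2, ci=1, do=1, co=0, gd=0, gc=0, w=1, r=1))  # 6
```

A larger M means more ways for the module to be affected by its surroundings, so lower values are preferable.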

d) Complexity metrics- The most important complexity metric is cyclomatic complexity. Operation-oriented metrics that have been proposed include-

i) Average operation size (OSavg)- Although lines of code could be used as a measure of operation size, the LOC measure suffers from a set of problems.

ii) Operation complexity (OC)- The complexity of an operation can be computed using any of the complexity metrics proposed for conventional software.

iii) Average number of parameters per operation (NPavg)- The larger the number of operation parameters, the more complex the collaboration between objects.

iv) User interface design metrics- A typical GUI uses layout entities: graphic icons, text, menus, windows, etc. Useful metrics are the time required to achieve a specific operation, the number of operations required, and the number of data or content objects.

9. Metrics for source code- Halstead's metrics for source code are based on-

n1 = number of distinct operators
n2 = number of distinct operands
N1 = total number of operator occurrences
N2 = total number of operand occurrences

Estimated program length: N = n1 log2 n1 + n2 log2 n2
Program volume: V = N log2 (n1 + n2)

10. Metrics for testing-

Program level: PL = 1 / [(n1/2) × (N2/n2)]
Effort: e = V / PL

Metrics for object-oriented testing include lack of cohesion in methods (LCOM), percentage of public and protected members (PAP), public access to data members (PAD), number of root classes (NOR), fan-in (FIN), number of children (NOC) and depth of the inheritance tree (DIT).
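The Halstead-style quantities above chain together once the operator/operand counts are tallied. A sketch with illustrative counts (n1, n2 distinct; N1, N2 total occurrences), computing volume, program level and effort:

```python
import math

def program_volume(n1, n2, N1, N2):
    N = N1 + N2                              # actual program length
    return N * math.log2(n1 + n2)            # V = N log2(n1 + n2)

def program_level(n1, n2, N2):
    return 1.0 / ((n1 / 2) * (N2 / n2))      # PL = 1 / [(n1/2) * (N2/n2)]

# Illustrative counts for a small program.
n1, n2, N1, N2 = 10, 16, 40, 50
v = program_volume(n1, n2, N1, N2)
pl = program_level(n1, n2, N2)
effort = v / pl                              # e = V / PL
print(round(v, 2), round(pl, 4), round(effort, 2))
```

A low program level (far from 1) indicates a verbose implementation, which drives the effort estimate e up.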
