
Testing Effectiveness



divine QA Testing

4 Evaluating Testing Effectiveness

4.1 Overview

The measurement of testing effectiveness serves two purposes. First, it evaluates the performance of the testing project; second, and perhaps more important, the results of the evaluation can be used to modify the test process: identifying the ineffective aspects of testing identifies the areas for improvement.

Evaluation of testing effectiveness includes coverage analysis and defect analysis.

Coverage analysis provides insight into how much testing has been done for which parts of the system (based on code coverage, user interface path coverage, use case path coverage, acceptance criteria coverage, etc.). Defect analysis provides insight in terms of distribution of defects (by functionality, by use case path, by user interface path, etc.).

This information can be used to evaluate testing effectiveness, specifically to:

• Analyze variance from the test plan and identify test weaknesses and problems in the test process.

• Assess comprehensiveness of "real user usage coverage" and coverage of risks.

• Assess bug information, with special attention when no bugs or too many bugs have been raised in a particular part of the software.

• Analyze testing effort information (by functionality, by use case path, by user interface path).

• Identify lessons learned (e.g., we wasted a lot of time understanding test cases that were not documented properly).
• Identify test practices used (e.g., reproducibility of defects was verified before posting in the defect tracker).

• Identify the need for new test tools.

Typical inputs used for evaluation of test effectiveness include (a short sketch after the list shows how a few of them combine into summary measures):

• Number of tests planned, conducted, blocked.

• Resources expended in testing – build wise.

• Defects – found, closed, open, rejected.

• Defect distribution – category wise, severity wise, functionality wise (use case path, user interface path, user requirement wise).

• Size of software tested.

• Days to correct defects.

• Delivered defects (defects found by users after testing).

• Test productivity information – test case design, testing.
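As a minimal sketch of how a few of these inputs can be combined into summary measures (all numbers are made up, and defect removal efficiency is a common industry measure rather than one defined in this text):

    # Illustrative test-effectiveness summary from the inputs listed above.
    # All figures are hypothetical.
    tests_planned = 420
    tests_conducted = 388
    tests_blocked = 12
    defects_found_in_testing = 173   # defects found by the test team
    delivered_defects = 9            # defects found by users after testing
    size_kloc = 46.0                 # size of software tested, in KLOC

    # Defect removal efficiency: share of known defects caught before release.
    dre = defects_found_in_testing / (defects_found_in_testing + delivered_defects)
    defect_density = defects_found_in_testing / size_kloc
    execution_rate = tests_conducted / tests_planned

    print(f"Defect removal efficiency: {dre:.1%}")
    print(f"Defect density: {defect_density:.2f} defects/KLOC")
    print(f"Test execution: {execution_rate:.1%} of planned ({tests_blocked} blocked)")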


4.2 Coverage Analysis

There are many types of coverage analysis techniques used in the software industry. Some of the standard techniques are listed below (a small example contrasting the first two follows the list):

• Statement coverage (code coverage)

• Path coverage (code path, data path)

• Branch coverage

• Quality risk coverage

• Configuration coverage

• Functionality coverage (use case path, user interface path, user requirement)
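To make the difference between the first two techniques concrete, here is a minimal sketch (the function and tests are invented for illustration):

    # Statement coverage vs. branch coverage on a toy function.
    def absolute(n):
        if n < 0:        # this 'if' has two branches: taken and not taken
            n = -n
        return n

    # This single test executes every statement (100% statement coverage),
    # yet the false branch of the 'if' (n >= 0) is never taken, so branch
    # coverage is only 50%: statement coverage can overstate thoroughness.
    assert absolute(-3) == 3

    # A second, non-negative input covers the remaining branch.
    assert absolute(4) == 4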

4.2.1 Code Coverage Analysis

The following are the three main stages in the process of code coverage / code path coverage analysis:

First Coverage Sweep

Perform a first coverage sweep on a relatively stable "build" of a product to establish a baseline coverage number for the product's existing test suites / test cases. The main steps involved in this process are as follows (a sketch of the resulting baseline computation appears after the list):

1. Instrument your system with a coverage tool, compiling and linking to generate a fully instrumented executable system.

2. Run all existing test suites / test cases against the instrumented system – usually instrumentation causes some degradation in performance, so more time should be allocated for coverage runs than for normal test runs.

3. Categorize all modules and routines into hit and not-hit categories based on the results from step 2.

4. Determine which of the not-hit modules / routines / code paths fall into the category of "not feasible to cover".

5. Subtract all the "not feasible to cover" modules / routines / code paths from the not-hit set, so the baseline reflects only what can realistically be covered.
6. Create / update a database to store gathered and derived information on hit and not-hit modules and routines. Also store cross-reference information linking hit and not-hit code paths to actual source lines.
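Steps 3 to 5 amount to a small set computation. A minimal sketch, assuming module-level hit data has already been exported from a coverage tool (the module names are hypothetical):

    # Baseline coverage number for the first sweep (steps 3-5 above).
    all_modules = {"parser", "scheduler", "ui", "report", "legacy_io", "platform_stub"}
    hit_modules = {"parser", "scheduler", "ui"}   # step 3: hit category
    not_feasible = {"platform_stub"}              # step 4: not feasible to cover

    not_hit = all_modules - hit_modules           # step 3: not-hit category
    coverable = all_modules - not_feasible        # step 5: drop infeasible items
    baseline = len(hit_modules & coverable) / len(coverable)

    print("Feasible but not hit:", sorted(not_hit - not_feasible))
    print(f"Baseline coverage: {baseline:.0%}")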

Subsequent Test Improvement

For each release of the product, follow these steps (a sketch of the overall loop appears after the list):

1. Modify / update the existing product black-box tests to hit modules / routines that were not hit, or create new tests if necessary; the rule of thumb here is to get the greatest return per unit of effort, and black-box testing tends to be cheaper than the white-box methods – hence try to cover as much as possible with black-box tests.
2. Re-compile the system without the coverage instrumentation.

3. Instrument / compile the target "feasible" modules that were not hit.

4. Link the instrumented objects with the rest of the "regular" objects of the system to generate a partially instrumented system.

5. Rerun the system with the modified, updated and / or newly created test suites.

6. Update the hit / not-hit database.
7. Determine the modules / routines / code paths still not hit and repeat steps 1 to 6 until further coverage is not possible with the current test suites / test cases, or until the coverage criteria have been satisfied.

8. If you cannot get the required coverage from steps 1 to 7, devise test cases for the rest of the not-hit modules / routines / code paths and repeat steps 1 to 6 with these tests.
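The loop in steps 1 to 8 can be pictured as follows; suggest_new_tests stands in for the human work of writing black-box tests, and the data and coverage target are invented:

    # Sketch of the subsequent-test-improvement loop (steps 1-8 above).
    COVERAGE_TARGET = 0.90

    def run_suite(tests, coverable):
        # Pretend instrumented run: union of the modules each test hits.
        hit = set()
        for test in tests:
            hit |= test["hits"] & coverable
        return hit

    def improvement_loop(tests, coverable, suggest_new_tests):
        while True:
            hit = run_suite(tests, coverable)       # steps 2-5: rerun the system
            coverage = len(hit) / len(coverable)    # step 6: update hit/not-hit data
            not_hit = coverable - hit               # step 7: what is still not hit
            if coverage >= COVERAGE_TARGET or not not_hit:
                return coverage
            new_tests = suggest_new_tests(not_hit)  # steps 1 and 8: add/update tests
            if not new_tests:                       # no further coverage is possible
                return coverage
            tests.extend(new_tests)

    modules = {"a", "b", "c", "d"}
    tests = [{"hits": {"a", "b"}}]
    print(improvement_loop(tests, modules,
                           lambda not_hit: [{"hits": {m}} for m in sorted(not_hit)[:1]]))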

Repeat Regression Testing and Coverage Update

Once the initial coverage sweep and subsequent improvements have been made to get optimal coverage, keep up with changes in the code to make sure the percentage of coverage does not slide down over time. Add new test suites to cover the introduction of new features into the system; it is also important to make sure that these new features do not cause a reduction of coverage with existing code and its test suites.

1. Devise a test suite for a new feature or an update.

2. Instrument the source files that contain new features or bug fixes.
3. Link the instrumented objects into the regular system.

4. Run the partially instrumented system with all the test suites, including the new ones.

5. Update the coverage database.
6. Generate a cross-reference source list with code paths that contain not-hit modules / routines to guide test suite modification.
7. Make improvements to the test suite for better coverage.

8. Repeat from step 2 until optimal coverage is achieved with this type of test suite (black-box test suites).

9. Devise test cases for the remaining not-hit areas.

10. Repeat the steps until acceptable coverage is achieved.

4.3 Defect Analysis

We can analyze detected defects in a product to reveal useful information about its current quality.

The detection of many defects in a particular area of the software product indicates a potential bottleneck in fixing defects. Similarly, if the defect detection rates are consistently higher than the resolution rates, more resources need to be applied to defect fixing to clear the backlog. There may be serious flaws in the product itself if this difference in detection / resolution rates continues over a long time.

While there are a number of ways to analyze defect information, the following are the most useful ones:


• Total number of open defects

• Defect detection rate

• Defect resolution rate

• Total number of defects found over time

• Root cause chart

• Distribution of defects – subsystem / requirement / use case path / user interface path wise
• Priority wise defect detection

• Priority wise defect resolution

To interpret defect rates, you also need to know the effort applied to achieve them. Another important aspect in defect analysis is defect priority.

The process of closing defects in the defect database lets you record information about

which code modules required changes to implement the fix, and a reason code identifying what type of defect it was.

Over time this data can be analyzed to identify inadequately tested areas of code and places where more code review is required. Defect taxonomy information can also indicate generic areas of defects that are more common than others: porting exceptions, memory leaks, etc. A feedback process could incorporate this information into an improved development process.
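A minimal sketch of that kind of analysis, assuming each closed defect record carries a module name and a reason code (the record layout is invented for illustration):

    # Tally closed defects by module and by reason code to spot
    # inadequately tested areas and recurring defect types.
    from collections import Counter

    closed_defects = [
        {"module": "billing", "reason": "memory leak"},
        {"module": "billing", "reason": "boundary value"},
        {"module": "billing", "reason": "memory leak"},
        {"module": "reports", "reason": "porting exception"},
    ]

    by_module = Counter(d["module"] for d in closed_defects)
    by_reason = Counter(d["reason"] for d in closed_defects)

    print("Fix hotspots:", by_module.most_common(2))        # candidates for review
    print("Common defect types:", by_reason.most_common(2))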

Some of the ways of defect analysis are described below:

Total Number of Open Defects

The total number of defects in an open state at any given time is an indication of the current stability of the product. Open defects are plotted cumulatively on a daily / weekly basis (a small counting sketch follows the bullets below).

• In the initial software builds the focus is on feature addition, and the overall frequency of code change is relatively high. The number of open defects could be lower, because many features may not be available or stable. Testing effort is also lower during this period.

• Once the software build is stabilized with a lower rate of feature addition, testing effort increases and defect detection would increase rapidly.

• During this period, if defect detection rates are higher than the defect resolution rates, then the open defects would continue to increase. If both rates were equal, then the open defects would be stable. If more defects were being fixed than being detected, then open defects would show a healthy decreasing trend and may become zero.
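A minimal sketch of the cumulative open-defect count, computed from hypothetical open / close dates simplified to week numbers:

    # Open-defect count per week from (week opened, week closed) pairs;
    # None means the defect is still open. Data is hypothetical.
    defects = [(1, 3), (1, None), (2, 5), (3, 4), (4, None)]

    for week in range(1, 6):
        open_now = sum(1 for opened, closed in defects
                       if opened <= week and (closed is None or closed > week))
        print(f"week {week}: {open_now} open")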

Defect Detection Rate

 

The defect detection rate is an indication of the amount of testing effort being expended on a product and also an indicator of product stability. Once the testing effort becomes stable, the weekly defect detection rate should level off and begin to decrease gradually. Defect detection rates are calculated and plotted on a daily or weekly basis.

When a stable testing effort is applied, defect detection rates:

• Will be lower for the initial software builds

• Would gradually increase and peak when the product stabilizes

• Would gradually decrease as defects are closed, and would ultimately become zero.

Defect Resolution Rate

The number of defects that move from open to closed status per week indicates how effectively the defects are being fixed, as well as how the product is stabilizing.

It is a common practice to plot the defect detection rate, defect resolution rate and net defect rate (defects resolved minus defects detected) on a weekly basis. The shape of the curve may show potential problems in advance. If the net defect curve is consistently negative and decreasing, more and more bugs are arriving in the product, but either insufficient effort is going into fixing them, or they are serious bugs taking longer than normal to resolve. Either way, it is not good.

If the net defect curve is consistently positive and the defect resolution rate is high

relative to the defect detection rate, this is usually a good sign. The defects are being

resolved at least as quickly as they are being detected, and progress is being made.

The time needed to resolve defects can be affected by the accuracy and detail of the

defect report and its reproducibility. Also, resolution of a defect may simply take time; in such cases, the defect resolution rate may be time shifted relative to the detection rate (a small weekly-rate sketch follows).
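A minimal sketch of the weekly detection / resolution / net plot described above (the counts are hypothetical; actual charting is left to whatever plotting tool is at hand):

    # Weekly defect detection, resolution, and net defect rate
    # (defects resolved minus defects detected).
    detected = [12, 18, 25, 22, 14, 9]   # defects detected per week
    resolved = [5, 10, 20, 24, 18, 13]   # defects resolved per week

    for week, (d, r) in enumerate(zip(detected, resolved), start=1):
        net = r - d                      # negative => backlog is growing
        trend = "backlog growing" if net < 0 else "backlog shrinking or stable"
        print(f"week {week}: detected={d:2d} resolved={r:2d} net={net:+3d}  {trend}")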

Total Number of Defects Found Over Time

This measure is a cumulative running total of all defects detected against the product,

irrespective of their current status (resolved defects are not subtracted).

Testing Effort

To validate defect data, testing effort must also be considered and plotted along with the defect detection rate, defect resolution rate and net defect rate.

A reduced testing effort may actually result in reduced defect detection, which in turn may show a better net defect rate. This may lead to unnecessary confusion when testing effort is not plotted along with the defect measures (see the sketch below).
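A minimal sketch of why that matters: the same falling detection counts read very differently once normalized by effort (all numbers hypothetical):

    # Defect detection normalized by testing effort.
    detected = [25, 14, 6]         # defects found per week
    effort_hours = [120, 60, 20]   # testing hours spent per week

    for week, (d, h) in enumerate(zip(detected, effort_hours), start=1):
        print(f"week {week}: {d} defects in {h}h -> {d / h:.2f} defects/hour")
    # Raw detection falls week over week, which looks like stabilization,
    # but defects found per hour of testing actually rises: the drop is
    # explained by reduced effort, not by improved quality.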
