
  • 7/28/2019 Test_Vol_1

    1/24

    LITTLE

    BOOK

    OF

    TESTING

    VOLUME I

OVERVIEW AND BEST PRACTICES

    For additional information please contact the

    Software Program Managers Network

    (703) 521-5231 Fax (703) 521-2603

    E-Mail: [email protected]

    http://www.spmn.com

    APRIL 1998

AIRLIE SOFTWARE COUNCIL


This guidebook is one of a series of guidebooks published by the Software Program Managers Network (SPMN). Our purpose is to identify best management and technical practices for software development and maintenance from the commercial software sector, and

to convey these practices to busy program managers and practitioners. Our goal is to improve the bottom-line drivers of software development and maintenance: cost, productivity, schedule, quality, predictability, and user satisfaction.

The Airlie Software Council was convened in 1994 as a focus group of software industry gurus supporting the SPMN and its challenge of improving software across the many large-scale, software-intensive systems within the Army, Navy, Marine Corps, and Air Force. Council members have identified principal best practices that are essential to managing large-scale software development and maintenance projects. The Council,

which meets quarterly in Airlie, Virginia, is comprised of some 20 of the nation's leading software experts.

These little guidebooks are written, reviewed, generally approved, and, if needed, updated by Council members. Your suggestions regarding this guidebook, or others that you think should exist, would be much appreciated.

    THE AIRLIE SOFTWARE COUNCIL

    This publication was prepared for the

Software Program Managers Network

4600 North Fairfax Drive, Suite 302

Arlington, VA 22203

The ideas and findings in this publication should not be construed as an official DoD position. It is published in the interest of scientific and technical information exchange.

Norm Brown

Director, Software Program Managers Network

    Copyright 1998 by Computers & Concepts Associates

This work was created by Computers & Concepts Associates in the performance of Space and Naval Warfare Systems Command (SPAWAR) Contract Number N00039-94-C-0153 for the operation of the Software Program Managers Network (SPMN).


As the significance of software in defense systems has increased, so has the demand for effective software testing and the prudent management of software-related risks. The costs of software testing and the time required for it have also grown; consequently, the quality of systems in most organizations is often below standard. Knowledge of the testing practices outlined in this guide can help managers face the challenges of software testing and risk management.

Collected here is wisdom on software testing and evaluation (T&E) gathered from successful software managers and consultants in government and industry around the world. This first volume of the Software Program Managers Network testing series offers strategies for effective testing of large-scale software programs.

With practical, real-world advice on how to manage testing and evaluation programs, this guidebook demonstrates that good testing practices are a key element of all modern software programs, leading toward the development and implementation of significant quality improvements.

    Norm Brown

    Executive Director

    THE PURPOSE OF THIS

    LITTLE BOOK


    WHY IS IT IMPORTANT?

Five important reasons why testing should be ranked as a top concern, especially at the start of a project:

1. A poor testing program can cause mission failure, can significantly impact operational performance and reliability, and can double or triple field support and maintenance costs.

2. A good testing program is a major project cost. Complex programs can spend more than half their total program effort on T&E activities. To make testing effective, you have to take the time up front to plan and organize it properly.

3. A good testing program will help significantly as you define your early requirements and design work. That help is critical to getting the project started right, and it can have a major influence on overall project success.

4. A good testing program forces you to face and deal with problems as the work is done, when the cost of rework and fixes is much lower.

5. A good testing program can't totally make up for a poor software project, but it does help prevent many ills and will at least let you know you are in trouble early. Testing is insurance, and many project managers have learned the hard way that most need big policies!

    WHAT IS SOFTWARE TESTING?

Testing plays many roles in the development of software.

1. Testing is exercising or simulating a system or program operation.

2. Testing is establishing confidence that a program does what it is supposed to, and doesn't do what it isn't supposed to.

3. Testing is analyzing a program with the intent of finding problems and errors.

4. Testing is measuring system functionality and quality.

5. Testing is evaluating the attributes and capabilities of programs and project work products, and assessing whether they achieve required or acceptable results.

6. Testing includes inspections and structured peer reviews of requirements and design, as well as execution testing of code.

Q: So what is testing, really?

    A: All of the above!


    DEMONSTRATION

1. Gain confidence that systems can be used with acceptable risk.

2. Try out features and functions under unusual conditions and situations.

3. Assure that a work product is completed and ready for use or integration.

    DETECTION

1. Discover defects, errors, and system deficiencies.

2. Define system capabilities and limitations.

3. Provide information on the quality of components, systems, and work products.

    PREVENTION

    1. Clarify system specifications and performance.

2. Provide information that prevents or reduces the likelihood of errors being made.

    3. Detect errors earlier in the process.

4. Identify risks and problems and ways to avoid them in the future.

    TESTING OBJECTIVES

    To help clearly describe system behavior.

To find defects in requirements, design, documentation, and code as early as possible.

Historically, testing has focused on exercising software to obtain confidence in its readiness for use and to demonstrate that it is working satisfactorily. This leads to an emphasis on the detection and removal of defects. Modern testing continues to emphasize demonstration and detection, but also recognizes that many, maybe even most, major defects are rooted in requirements and design misunderstandings, omissions, and inconsistencies. Early, structured peer reviews are now commonly used to help prevent problems from ever being coded. Demonstration, detection, and prevention have all become important objectives and benefits of good testing.

    PURPOSES AND BENEFITS OF

    GOOD TESTING


COMPLETE TESTING ISN'T POSSIBLE

No matter how much you test, it is impossible to achieve total confidence.

The only exhaustive test is one that leaves the tester exhausted!

TEST WORK IS CREATIVE AND DIFFICULT

You must understand and probe what the system is supposed to do.

You must understand and stress the limitations and constraints.

You must understand the domain and application in depth.

    TESTING IS RISK-BASED

We can't identify all risks of failure.

Risk assessments indicate how much to test and what to focus on.

    ANALYSIS, PLANNING, AND DESIGN ARE IMPORTANT

    Test objectives must be identified and understood.

    Tests must be planned and designed systematically.

Without a road map, you will get lost.

    MOTIVATION IS IMPORTANT

You cannot be effective if you don't care about the job.

You must want to find problems and enjoy trying to break the system.

    TIME AND RESOURCES ARE IMPORTANT

You can't be effective if you don't have the time or resources to do the job.

TIMING OF TEST PREPARATION MATTERS A LOT

Early test preparation leads to an understanding of project requirements and design.

Early test preparation uncovers and prevents problems.

Early tests improve the effectiveness of subsequent reviews and inspections.

MEASURING AND TRACKING COVERAGE IS ESSENTIAL

You need to know what requirements, design, and code have and have not been covered.

Complex software is too difficult to cover without systematic measurement.

    PRINCIPLES OF GOOD TESTING


MAKE EVALUATION EVERYONE'S JOB: A TEAM RESPONSIBILITY

Insist on making T&E an integral element of all professional work. Cultivate the attitude and responsibility to take it seriously, and cooperate as a project team to produce well-tested, high-quality work products.

Most of us consider testing and evaluation as an integral part of any work we do. We want to test our own work carefully and thoroughly, and we want our team to deliver the highest-quality results possible.

We inherently understand that good testing is everyone's job. We know it is part of the process, and in most instances we even know what needs to be done to achieve it.

However, the reality is that under the pressures of time and the heat of battle, T&E tasks are often the first to be delayed, trimmed, or dropped altogether. Why is this, and what should we be doing to get everyone on the team to meet their testing responsibilities better?

    Much of the problem is psychological:

Testing when we know there are many more problems to find is discouraging.

Testing when we feel fairly sure everything already works is boring.

Weak testing lets us pretend we have achieved more progress than we have.

To developers, testing exposes more problems and means more work or rework.

To managers, testing looks like overhead that can be cut when in trouble.

Users and customers willingly accept low quality in trade for more features.

The solution isn't a big mystery. Management has to cultivate the right attitudes and insist that T&E responsibilities be taken seriously by everyone on the team, not just testers. This takes leadership and sometimes means making tough decisions, but effective modern testing is a team activity that requires direction and discipline.

Best Practice #1 is both the easiest to achieve and the hardest to maintain. It is built on the organizational culture and the determination as a project team to collectively produce high-quality work products all the time.

    BEST TESTING PRACTICE #1


ESTABLISH AN EARLY INTEGRATED MASTER TESTING AND EVALUATION PLAN

Provide and maintain a Master Testing and Evaluation Management Plan (MTP) covering all anticipated T&E activities and deliverables. Integrate the test plans with the overall project and program plans, and ensure that resources and responsibilities are understood and assigned as early in the project as possible.

Most projects fail to address T&E issues early enough. The MTP helps overcome this problem and makes the testing effort and strategy visible to and understood by everyone on the project. At a minimum, the MTP should encompass the total T&E effort and should assign responsibilities for all major tasks and deliverables at all levels of evaluation. The intent is to provide the big picture and to help coordinate the overall T&E effort.

The MTP is intended for all members of the team, including users, customers, and managers. Activities for everyone involved in evaluation are described, including those assigned to developers, such as unit review and testing. Program managers and those outside the project will find the MTP helpful in relating the testing process to the overall project and its associated risks. The MTP supplements the Project Plan and addresses management and technical issues associated with evaluation. As the project proceeds, the MTP is revised and updated to reflect current expectations, deliverables, and assignments.

Creating the MTP does not require much effort, and it need not be an especially lengthy or weighty document. Much of the content can be developed effectively in a RAD-like brainstorming session early in the project.

    BEST TESTING PRACTICE #2


MAKE PREVENTIVE TESTING PART OF ALL SPECIFICATION WORK

Establish systematic evaluation and preliminary test design as a sizable part of all system engineering and specification work. Design test scenarios early, and use them in reviews and inspections to help ensure you are building the right features and to better understand acceptable and required system behaviors.

The central idea here is to exploit the power of early test definition and the proven benefits of systematic evaluation (such as analysis, reviews, and inspections) to ensure that the right system and requirements are specified, and that design problems are discovered as early as possible. In commercial industry this is sometimes referred to as preventive testing. It is built on the observation that one of the most effective ways of specifying something is to describe in detail your criteria for accepting (or testing) a product under development.

Key Elements of Preventive Testing

    Tests are used as requirements and usage models.

Testware design is closely linked with software design.

Software and testware specifications are reviewed and inspected together.

    Testers and developers work together.

Most defects are detected before the software is built and tests are run.

With preventive testing, requirements and tests are closely linked. The hardest job with many tests is understanding what results are required or acceptable. By detailing specific test cases and conditions ahead of time, preventive testers can interpret and clarify required system behavior, and many defects can be detected before a single test is run.

Another challenge testers face is the temptation to build, test, and rework designs and features that aren't needed or shouldn't be used. Early T&E emphasizes the right work and stresses simplicity as well as defect detection. Perhaps the worst waste of project resources is to test a complicated system thoroughly with the wrong features and design.

Preventive testing boils down to one lesson: It's not about bug counts or density; it's about being smarter!
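As a concrete illustration of the idea, the sketch below (in Python, with a hypothetical username requirement; none of the names come from this guidebook) shows how writing test cases before the code pins down what the specification must say:

```python
# Hypothetical requirement draft: "Usernames are 3-20 characters,
# letters and digits only." Writing the cases first forces open
# questions (mixed case? spaces?) to be answered in the spec.
CASES = [
    ("abc",       True),    # minimum length
    ("a" * 20,    True),    # maximum length
    ("ab",        False),   # too short: the spec must say so explicitly
    ("user name", False),   # space: "letters and digits only"
    ("User1",     True),    # mixed case: the spec must state it is allowed
]

def is_valid_username(name):
    return 3 <= len(name) <= 20 and name.isalnum()

assert all(is_valid_username(n) == expected for n, expected in CASES)
print("all", len(CASES), "specification cases pass")
```

The point is not the validator itself but that each case doubles as a clarified requirement, reviewable before any production code exists.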

    BEST TESTING PRACTICE #3


USE TESTS AS PROGRESS AND MILESTONE GATES

Require the use of tests to verify that all project deliverables and components are complete, and to demonstrate and track true project progress. Avoid delay in addressing problems by making test completion a gate to be achieved before work product sign-off.

Many projects seem to be in good shape until they enter integration and system test, when suddenly even simple tests don't run. You can avoid surprise failures like this by using tests as gates to control and sign off project work products. When you insist on running tests to demonstrate that a feature or object is complete, you ensure that the design and code are truly complete. Without a program test gate, developers under pressure to show interim progress may be tempted to turn over designs or code on schedule but with key internal design decisions unmade and significant (even fatal) open bugs and issues. Testing becomes unmanageable when it reveals that previous development phases have postponed key decisions and design issues. Using tests as gates ensures that major design issues are closed when they need to be.

Use of test gates also significantly improves project tracking and schedule prediction accuracy. It is much easier to estimate during development coding how long it will take to complete a small number of features than it is to estimate during system test how long it will take to close bugs that range across all or most of the system functionality. Completing features incrementally also means that if you get behind, you can consider shipping without the unfinished features.

The philosophy is simple: If a design, unit, object, or build doesn't pass its tests with suitable coverage, it isn't done and can't be reported as coded or completed.
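A test gate can be as simple as a scripted check that refuses sign-off while failures or low coverage remain. The following Python sketch is illustrative only; the function name and the 90% coverage threshold are assumptions, not from this guidebook:

```python
# Hypothetical sketch of a test gate for work product sign-off.
def gate_passed(results, coverage, min_coverage=0.90):
    """results: list of (test_name, passed) pairs; coverage: fraction 0..1.

    A work product may be reported as complete only when this returns True.
    """
    failures = [name for name, ok in results if not ok]
    if failures:
        return False, "open test failures: " + ", ".join(failures)
    if coverage < min_coverage:
        return False, f"coverage {coverage:.0%} is below the {min_coverage:.0%} gate"
    return True, "gate passed: work product may be signed off"

ok, reason = gate_passed([("t_login", True), ("t_retry", False)], 0.95)
print(ok, reason)   # False open test failures: t_retry
```

Wired into a build script, a check like this makes "coded" and "completed" claims verifiable rather than self-reported.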

    BEST TESTING PRACTICE #4


    BEST TESTING PRACTICE #5

INVENTORY TEST OBJECTIVES AND DESIGN FOR TESTABILITY

Develop and maintain a risk-prioritized list of test requirements and objectives: requirements-based, design-based, and code- or implementation-based. Use the inventory to help ensure work products are designed for testability, to guide test design and development, and to trace or cross-reference what each test covers.

A key modern testing practice is to create inventory lists of objectives to be covered before tests are designed and built. The lists detail what needs to be tested, and are used to guide and drive test design and development work. Prioritization by risk (usually into low, medium, and high) helps concentrate testing on the highest-risk areas.

Guidelines for Obtaining Good Test Inventories

Start early and cover any requirements or functional design specifications.

Use brainstorming and informal meetings to create worry lists.

    Evaluate the inventories during reviews.

Group the inventories into three major increments: requirements, design, and code.

Break each increment into a small number of logical groups.

Since inventory lists can get quite long, it helps to group them into related categories or classes. Missing objectives are easier to identify from these groups.

As each test is developed, related inventory objectives and groups are recorded as part of the test description. With some simple test management tooling support, you can report on the test coverage of requirements and objectives, especially of objectives not yet covered by any tests.
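A minimal inventory like this can be kept even in a small script before adopting full test management tooling. The sketch below (Python; all identifiers and objective names are hypothetical) records risk-prioritized objectives, which tests cover them, and reports objectives not yet covered by any test, highest risk first:

```python
# Hypothetical sketch of a risk-prioritized test inventory.
from dataclasses import dataclass, field

RISK_ORDER = {"high": 0, "medium": 1, "low": 2}

@dataclass
class Objective:
    ident: str         # e.g. "REQ-1" (requirements-based) or "DES-1" (design-based)
    description: str
    risk: str          # "high", "medium", or "low"

@dataclass
class Test:
    name: str
    covers: set = field(default_factory=set)  # idents of objectives this test covers

def uncovered(objectives, tests):
    """Objectives no test covers yet, highest risk first."""
    covered = set()
    for t in tests:
        covered |= t.covers
    missing = [o for o in objectives if o.ident not in covered]
    return sorted(missing, key=lambda o: RISK_ORDER[o.risk])

objectives = [
    Objective("REQ-1", "Login accepts valid credentials", "high"),
    Objective("REQ-2", "Report export formats", "medium"),
    Objective("DES-1", "Retry logic on timeout", "high"),
]
tests = [Test("test_login_ok", covers={"REQ-1"})]

for o in uncovered(objectives, tests):
    print(o.ident, o.risk)   # DES-1 high, then REQ-2 medium
```

Recording the `covers` set as each test is written is exactly the cross-referencing the practice calls for; the uncovered-objectives report then falls out for free.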


    BEST TESTING PRACTICE #6

    TEST EARLY, TEST OFTEN

Use tests as early and as often as possible to provide developer feedback and get problems found and fixed as they occur. Emphasize testware engineering as a concurrent life cycle process tightly coupled with software design and development.

An important philosophy of modern testing is to get as much feedback as early and as often as possible. This implies that tests have been prepared early and are shared with the development teams for use in review walkthroughs, unit testing, prototype evaluation, early simulations, and so forth. The aim of early testing is to uncover any surprises and bad news as early as possible, and to help developers produce higher-quality and better-tested units.

This approach assumes that the problems and bugs found in the early testing will be addressed and fixed.

Many project managers postpone fixes until developers complete the feature design and code.

This is analogous to remodeling every room in your house at once while leaving all the little problems you run into to be fixed later. It is generally much better to complete one room or feature at a time and make sure it is really done properly. Run as many tests during development as possible, and fix the problems and bugs as soon as you can.

    Reasons to Fix Problems Now, Not Later

Changes made weeks or months later are highly error-prone.

Design decisions and subtle code limitations and conditions are easily forgotten.

Time that would be required to reanalyze your designs and code is saved.

Early feedback on problems helps prevent similar errors from being made.

Developer testing is more thorough when all code is tested together.

Defect- and problem-tracking overhead is significantly reduced.

Rework of the design and coding is feasible, if necessary.


    BEST TESTING PRACTICE #7

DESIGN AND DEVELOP TESTWARE AS DELIVERABLE COMPONENTS

Design and develop major testware components and procedures with the same discipline and care reserved for software components. This includes planning, analysis, design, review, configuration management, and change control and management reporting.

It is important to recognize that testware (such as test suites, simulators and drivers, special evaluation programs, and utilities) is also software and needs to be managed and engineered just like other software components in your application. The testware is really a specialized application system whose main purpose is to test and evaluate some other application or system. If the application is critical, the testware that tests it is critical and must be built with good engineering discipline, including proper testing and evaluation.

Integrated Testware Characteristics

    Recognized as true project deliverable(s)

Engineered and developed in parallel with software deliverables

    Changed and updated as the software changes

    Used for retesting the software as it evolves

The life cycle process for engineering testware is essentially the same as for software. All the concepts and principles of good software engineering apply equally to good testware engineering. The basic phases of analysis, design, development, and implementation in staged increments apply just as they would for any other software component.

The key to effective testware engineering is to get the timing right. If you build your testware late, after most of the software components have been designed and coded, you don't get the prevention and positive feedback or test-gating benefits. If you try to build it too early, before the software designs and requirements have stabilized, you can't develop your test requirements and objectives effectively, and you face a lot of rework as the software changes.


    BEST TESTING PRACTICE #8


PROVIDE INTEGRATED TEST TOOLING AND INFRASTRUCTURE SUPPORT

Support and enable project team testing and evaluation with a T&E database or repository and an integrated test management system for describing, documenting, and tracking tests; also supply support tools, such as simulators, coverage analyzers, capture playback tools, test drivers, and utilities.

Large-scale testing, like large-scale software, needs significant automated support to be performed and managed effectively. This is particularly true when we recognize evaluation as everyone's job (Best Testing Practice #1) and coordinate workflow across the entire project team.

Types of Tools and Applications

    T&E asset tracking and management

Test plans, objectives, cases, procedures, and scripts

Analysis, review, and test results and reports

    Work product analyzers

Test drivers, simulators, and playback systems

    Test coverage analyzers

    Defect and problem tracking

    Process-effectiveness analysis and metrics

A key automation support element is a central project repository for all T&E deliverables and work products. This is referred to as a T&E management system, and includes facilities for saving, describing, documenting, and tracking tests, and recording and tracking review and test observations and results. The best tools make this information easily accessible to the project team, and provide substantial workflow support to simplify and track the software development process. These tools feature templates for all document types, built-in analyzers and checkers for conformance to standards and the T&E process, and templates for periodic tracking reports and metrics.

Other important tool support includes test beds and simulator tools to support various test environments; static analyzers and comparators to provide before- and after-change analysis and assessments of work product risk and complexity; test driver and capture playback tools to run and rerun automated tests; and dynamic analyzers to measure and report test results and coverage.


    BEST TESTING PRACTICE #9

MEASURE TEST COSTS, COVERAGE, RESULTS, AND EFFECTIVENESS

Collect information to monitor T&E costs, results, and benefits. Measure test coverage of requirements, design, and code at all levels and for all major builds and test environments. Measure what tests are run on what software versions and what each test level or major activity finds or misses. Use the information on an ongoing basis to assess and evaluate T&E effectiveness.

You can't manage what you can't see or understand, and to understand T&E you have to collect and track data about the test process and its effectiveness.

What Should Be Measured?

    Effort and costs spent

Test coverage against requirements and objectives inventories

Test coverage against code (requires dynamic analyzer tools)

Status of the test effort: what has been done, what hasn't

    Defects and problems found

Defects and problems missed (collected during operational use)

Quality of the product: required vs. expected end-user perceptions

Quality of the testing effort: prevented and found vs. impact of what is missed

Think about your last completed program or project. Did you track what tests were run and what they covered? Did you report what was spent at each test level (reviews, unit, integration, system, and acceptance testing)? Did you analyze and assess the effectiveness of that effort and expenditure? Do you know what each test level missed or failed to discover? Can you relate these measures to other projects and past test efforts?

If you can't answer yes to these questions, you don't have the information you need to manage effectively. At a minimum, you should have visibility of the effort and costs spent on T&E activities, the deliverables and work products produced, and the overall results and effectiveness of the test coverage.
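Even simple bookkeeping answers several of the questions above. The sketch below (Python, with made-up numbers) tallies defects found at each test level against defects that escaped to operational use, which gives a rough measure of each level's contribution and of what testing missed:

```python
# Illustrative T&E effectiveness tally; all numbers are made-up examples.
found = {                # defects found at each evaluation level
    "reviews": 40,
    "unit": 25,
    "integration": 15,
    "system": 10,
}
escaped = 10             # defects later reported from operational use

total = sum(found.values()) + escaped
for level, n in found.items():
    print(f"{level:12s}{n:4d}  ({n / total:.0%} of all known defects)")
print(f"{'escaped':12s}{escaped:4d}  ({escaped / total:.0%})  <- what all testing missed")
```

Tracked per project, ratios like these let you compare test levels and past efforts, exactly the comparison the questions above ask for.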


    BEST TESTING PRACTICE #10

COACH AND MANAGE THE TEAM

Provide ongoing T&E leadership and management support so that everyone on the team knows what is expected and takes testing seriously. Understand your testing activities and process, and make measurement-driven policy and process changes as needed to steadily improve over time.

Testing in most organizations is more mismanaged than managed. Many software managers act more like fans and cheerleaders than quarterbacks. Any really professional team needs good coaching, and software T&E is no exception. Modern testing requires managers to get off the sidelines and lead the team, to shape priorities and set direction.

Managers should get more personally involved in T&E policy and direction. T&E issues seldom involve purely technical or simple yes/no decisions. There are almost always tradeoffs to consider and risks to judge and balance. Management's job is to provide timely, appropriate guidance on everything from scoping the T&E effort to deciding when to stop or what to do when quality targets and schedules are missed.

The principles for managing testing and evaluation are fundamentally the same as those for managing anything else. There are no easy solutions. You've got to assign responsibilities and accountabilities, establish a solid, integrated process with strong automation support, involve good people who know what they are doing, and consistently support improvement over time. The basics of good planning, risk management, measurement, tools, organization, and, above all, professionalism apply to T&E just as they do to any other project area.


    AUTOMATED TESTING

Many automated tools are now available to support software testing. Integrated tools are just as important to testing and evaluation as they are to software development and management, and use of these tools should enable the following major benefits:

    1. TEST REPEATABILITY AND CONSISTENCY

Automated tests should provide the same inputs and test conditions no matter how many times they are run. Human testers can make errors and lose effectiveness, especially when they are fairly convinced the test won't find anything new.

2. EXPANDED AND PRACTICAL TEST REUSE

Automated tests provide expanded leverage due to the negligible cost of executing the same test multiple times in different environments and configurations, or of running slightly modified tests using different input records or input variables, which may cover conditions and paths that are functionally quite different.

    3. PRACTICAL BASELINE SUITES

Automated tests make it feasible to run a fairly comprehensive suite of tests as an acceptance or baseline suite to help ensure that small changes have not broken or adversely impacted previously working features and functionality. As tests are built, they are saved, maintained, and accumulated. Automation makes it practical to run the tests again for regression-testing even small modifications.
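A baseline suite can be sketched with Python's standard unittest module. The feature under test here (a discount calculation) is a hypothetical stand-in; the point is that the suite supplies the same inputs every run and can be rerun cheaply after each small change:

```python
# Minimal baseline (regression) suite; apply_discount is a hypothetical feature.
import unittest

def apply_discount(price, percent):
    """Stand-in application feature: apply a percentage discount."""
    return round(price * (1 - percent / 100), 2)

class BaselineSuite(unittest.TestCase):
    # Identical inputs and expected results on every run, so any small
    # change to apply_discount() is checked against known-good behavior.
    def test_typical(self):
        self.assertEqual(apply_discount(100.0, 15), 85.0)

    def test_boundaries(self):
        self.assertEqual(apply_discount(100.0, 0), 100.0)
        self.assertEqual(apply_discount(100.0, 100), 0.0)

if __name__ == "__main__":
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(BaselineSuite)
    unittest.TextTestRunner(verbosity=2).run(suite)
```

As tests accumulate, new cases join the suite rather than replacing old ones, which is what makes regression-testing even small modifications practical.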

NOT THE SILVER (EVEN TIN OR IRON) BULLET

These benefits are not easy to realize. The tools themselves are fairly easy to use, but ease of use doesn't mean success. A hammer and saw are easy to use, but good cabinetwork takes great skill and much more than just good tools. Similarly, a new test automation tool, by itself, will do little for your test program. Most new owners of testing tools tend to overpromise and underdeliver, at least initially, so be cautious about raising expectations too high, too soon.


    REQUIREMENTS FOR TEST

    AUTOMATION SUCCESS

Successful test automation generally goes hand in hand with two critical ingredients:

1. A WELL-UNDERSTOOD AND STABLE APPLICATION BEHAVIOR

You can't automate something that is not well defined. Applications that are still undergoing significant requirements and design changes make poor candidates for test automation. Even fairly stable applications may be problematic if the testers responsible for automation don't understand an application's behavior or the relevant domain-specific issues and needs. To automate successfully, you must understand and be able to predict application behavior, and then design the test in such a way that your test plan stays with the tool.

2. A DEDICATED AND SKILLED TEAM WITH SUFFICIENT TIME AND RESOURCES ALLOCATED

Remember, testware is software. Test automation should be managed like a separate support project with its own budget and resources. You will generally fail if you give your whole project team a new automation tool and ask everyone to do a little automation in their spare time. (Imagine trying to automate your accounting or payroll process by giving everyone in the finance department a spreadsheet and asking them to work on it when they have some free time.)

    HOW NOT TO SUCCEED WITH TEST AUTOMATION

    Buy a tool and give it to everyone on the team.

Presume automation will save project time or money.

    Expect a lot of bugs to be found.

Test functions and features you don't understand.

Test functions and features that are changing significantly.

Automate tests you don't understand how to run manually.

Avoid treating automation as a project.

    Avoid any dedicated resources.

    Eliminate training and support.

    Ignore synchronization.

    Ignore test documentation.

    Forget about environments and configurations.

    Ignore test management.
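One item in the list above, synchronization, deserves a concrete illustration. Automated tests that assume the application is instantly ready tend to fail intermittently. A common remedy, sketched below with an illustrative `poll_until` helper (an assumption for this example, not something from this booklet), is to poll for a readiness condition with a timeout rather than sleeping for a fixed interval.

```python
import time

def poll_until(condition, timeout=5.0, interval=0.1):
    """Wait for condition() to become true, polling at a fixed
    interval, instead of relying on a hard-coded sleep.
    Returns True if the condition was met before the timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Example: wait for a (simulated) application to finish starting up.
state = {"ready": False}

def start_app():
    state["ready"] = True  # stands in for a slow startup sequence

start_app()
assert poll_until(lambda: state["ready"], timeout=1.0)
```

Fixed sleeps either waste time or, worse, occasionally expire too early and report phantom failures; polling against an explicit condition keeps the automated test honest about what it is waiting for.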


    TESTING PARADOXES

Following are some common paradoxes faced by those involved in testing. Contemplate these for your benefit, and at your own risk!

1. The best way to test and verify requirements is to figure out how to test and verify requirements.

2. The best testing process is one that forces developers to build the software right and test it as they go, which in turn makes it largely unnecessary to have a best testing process.

3. The best way to build confidence is to try to destroy it.

4. Testing is looking for what you don't want to find. A successful test is one that fails, and a failure is one that succeeds.

5. Managers who fail to spend appropriately on testing end up with failed projects because they can't meet their deadlines if they do spend appropriately on testing.

6. Unsystematic testing can and should be very systematic.

7. Heavy use of automated tests eliminates the need for heavy use of automated tests.

8. A big improvement in testing can ruin a project by giving it more problems than it has time to deal with and making the software appear much worse than it is.

9. Quality is free if you pay for it.

    WHEN IS IT OK NOT TO TEST?

When the responsibility for failure can be shifted to someone else

When the impact or significance of any problem is insignificant

    When the software does not have to work

    When no one is or will be using the system

    When the project has already failed


TESTING IS NOT A PHASE!

A central principle of modern testing is that good T&E proceeds in parallel with software development.

Testing should not be viewed as just execution or just the phase that happens after coding. For maximum benefit and leverage, test development and preparation need to be completed before, rather than after, coding.

Testware engineering is more significant and integral to the analysis and design stages than it is to the coding and implementation stages. This is analogous to testing in school: The final may be important for demonstrating knowledge and graduating, but it is the testing and evaluation during the semester that really helps students gauge their progress.
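The test-before-code principle can be shown in miniature. In the sketch below, a check for a hypothetical password-strength rule is written during design, before any implementation exists; the `is_acceptable` name and the rule itself are illustrative assumptions, not taken from this booklet.

```python
# Test written first, during design: it records the agreed behavior
# before any code exists. The (hypothetical) rule: a password is
# acceptable only if it has at least 8 characters and contains a digit.
def test_password_rule():
    assert is_acceptable("abc123xyz")         # long enough, has a digit
    assert not is_acceptable("short1")        # too short
    assert not is_acceptable("longenoughpw")  # no digit

# The implementation comes second, written to satisfy the test.
def is_acceptable(password):
    return len(password) >= 8 and any(c.isdigit() for c in password)
```

Writing the test first forced the rule to be stated precisely at design time, which is exactly the early T&E leverage the booklet describes.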

WHAT SHOULD YOU SPEND ON TESTING?

FUNDING FOR THE TESTING PROCESS DEPENDS ON:

The RISK

The application complexity and change

The talents and skills on the team

The tooling and infrastructure support

The quality of the software effort

The availability of existing tests

The process and methods

The degree of retesting required

Plan for a minimum of 25 percent of total program costs.

High-risk components (such as software modules, units, and configuration items) as well as system features tied to mission-critical or safety concerns need more thorough testing (more insurance) and may well drive up the proportionate cost of testing and evaluation to 50 percent or more of the program budget.


KEYS TO TESTING AND EVALUATION IMPROVEMENT

MANAGE T&E AS AGGRESSIVELY AS YOU MANAGE DEVELOPMENT

    Take testing seriously.

Foster a quality culture that wants to find and prevent problems.

    Set clear directions and expectations.

Delegate responsibility and accountability to good people.

    Manage the process and the project.

    Emphasize early planning and preparation.

    Make testware a real deliverable.

    Insist on tests to demonstrate progress.

    Invest in team tooling and support infrastructure.

    Measure effectiveness.

    Reward team contributions.
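One way to act on "measure effectiveness" is the defect detection percentage: the share of all known defects that testing caught before release. The small sketch below computes it; the function name is an assumption for this example, though the metric itself is a widely used testing measure.

```python
def defect_detection_percentage(found_in_test, found_after_release):
    """Defect Detection Percentage (DDP): of all defects known so
    far, the percentage that testing caught before release."""
    total = found_in_test + found_after_release
    if total == 0:
        return 0.0  # no defects known yet, nothing to measure
    return 100.0 * found_in_test / total

# If testing found 90 defects and users later reported 10,
# the test process caught 90% of the known defects.
assert defect_detection_percentage(90, 10) == 90.0
```

A trend line of this number across releases gives management a concrete way to tell whether investments in the testing process are paying off.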

    CAUTION

Reengineering the T&E process is known to be a tough, even wicked, problem because managers, developers, and testers are often highly resistant to change. Cantitude will flourish, and attacking it may be harmful to your well-being.

Bear in mind this advice to mountaineers: It is very dangerous to leap chasms in two or more bounds.

ADDITIONAL INFORMATION

There are many excellent publications, training courses, and conferences that address software testing and modern best testing practices. Useful resources include:

BOOKS

Boris Beizer, Software Testing Techniques and Black-Box Testing, Van Nostrand Reinhold

Tom Gilb and Dot Graham, Software Inspection, Addison-Wesley

Bill Hetzel, The Complete Guide to Software Testing, John Wiley & Sons

Cem Kaner et al., Testing Computer Software, Course Technology

Ed Kit, Software Testing in the Real World, Addison-Wesley

Brian Marick, The Craft of Software Testing, Prentice Hall

William E. Perry and Randall W. Rice, Surviving the Top Ten Challenges of Software Testing, Dorset House

WEB SITES

Client Server Software Testing Technologies: www.csst-technologies.com

Software Quality Engineering: www.sqe.com

Software Testing Institute: www.ondaweb.com/sti

Software Testing Laboratories: www.stlabs.com


    ACKNOWLEDGMENTS

Much of the material in this booklet has been drawn from my consulting and teaching experience at Software Quality Engineering and from several of my own publications, especially The Complete Guide to Software Testing.

The importance of effective test and evaluation practices for managers cannot be stressed enough. It is my hope that this booklet adequately addresses the need to support this all-too-often-neglected process.

Special acknowledgment and appreciation are due to Ed Kit, Dot Graham, Dave Gelperin, Boris Beizer, Linda Hayes, Tom DeMarco, and Steve Maguire for their own publications and insights that are directly reflected in the assembled material.

    Bill Hetzel



    THE AIRLIE SOFTWARE COUNCIL

    Victor Basili University of Maryland

    Grady Booch Rational

    Norm Brown Software Program Managers Network

    Peter Chen Chen & Associates, Inc.

    Christine Davis Texas Instruments

    Tom DeMarco The Atlantic Systems Guild

Mike Dyer Lockheed Martin

Mike Evans Integrated Computer Engineering, Inc.

    Bill Hetzel Qware

    Capers Jones SPR Inc.

    Tim Lister The Atlantic Systems Guild

    John Manzo 3Com

    Lou Mazzucchelli Gerard Klauer Mattison & Co., LLC

    Tom McCabe McCabe & Associates

    Frank McGrath Software Focus, Inc.

    Roger Pressman R.S. Pressman & Associates, Inc.

    Larry Putnam Quantitative Software Management

    Howard Rubin Hunter College, CUNY

    Ed Yourdon American Programmer
