A Road Map for Testing
April 10, 2007 - Version 0.1
Contents
Purpose & Scope
Testing throughout the life cycle
Model(s) for Testing
V-Model
Waterfall model
Spiral model
Test Levels
2.2.1 Unit Testing
2.2.2 Integration Testing
2.2.3 System Testing
2.2.4 Acceptance Testing
Test Types
Functional Testing
Non-Functional Testing
Structural Testing
Change-related Testing
Maintenance Testing
Software Testing
WHAT is software testing?
Testing (Validation) Principles
Test Data and Data Verification
Test (Validation) Design, Methods, & Techniques
Verification Types & Techniques
Test Attack Summary - User Interface Attacks
Test Attack Summary - System Interface Attacks
Levels / Categories of Testing
Web Site Specific Testing
WHY do software testing?
WHO does the testing?
Key Players and Their Roles
Test Plan, Test Design & Cases, Test Report/Results
Test Plan Content
Test Design & Cases Content
Test Procedures Content
Test Report
Risk Analysis
Purpose & Scope
This document provides a comprehensive, high-level Roadmap summary of testing
technical guidelines, checklist items, and brief documentation templates. The testing
technical data included herein is meant to be a menu of items to select from based on the
unique specifics of a given testing trip (project). It is not intended to be applicable in
totality to any given project. However, it is recommended that all included items should at
least be considered/analyzed for applicability based on the specifics of a given project.
There should be some sound, technically defensible logic or reason for not including an
item. It should be noted that project risk may increase in some proportion to the items not
selected: you may get lost somewhere along the way of your testing trip.
This Roadmap is meant to be useful for all sorts of testing trips, from a simple, eXtremely short walk-around-the-block testing trip to a Biblical 40-year testing trek through the desert constrained by stone-tablet regulations.
Examples:
1. For eXtreme testing walk-around-the-block trips (eXtreme developments and small, simple projects/systems), the recommended, cost-effective, and risk-appropriate testing activities may be narrow in scope and described by a single paragraph or page. As an eXtreme minimum example, Dr. James Whittaker recommends little or no formal test documentation at all, but instead recommends developing a fault model of your application under test. Then, brain on and eyes open: TEST! using the various attacks listed below.
2. For Biblical 40-year testing treks through the desert constrained by stone-tablet regulations (large, complex projects, essentially the opposite end of the spectrum from eXtreme testing trips), one can take a Biblical and Regulatory approach to test activities and documentation:

Biblical - Test everything. A worthy spiritual goal, but in general not possible in this lifetime; one can, however, focus on progress, not perfection.

Regulatory - If it isn't documented, it didn't happen. All test activities are completely documented. Each of the technical testing activities may be separate or multiple documents/volumes (e.g., a test master plan with subordinate multiple testing plans, designs, cases, procedures, and reports).
Experience Note:
As effort intensive as it may seem, this Biblical and Regulatory approach may be somewhat appropriate for high-risk systems such as weapons/combat systems, biomedical systems, nuclear reactor control systems, and spacecraft control systems.
Generally, however, most testing trips are of some middle length/size, complexity,
and risk, requiring a somewhat less than Biblical and Regulatory approach. That is, the
scope of activities and extent of formal documentation should be proportional to the
project / product size, complexity, and risks.
An attempt has been made to include a broad scope of summarized, current testing technology. More detail is provided in the included references. That is, this document is applicable to projects/products of all sizes, including initial/new development testing as well as the testing of subsequent computer system software revisions/releases. For a completely new release, this document may be analyzed for applicability to all life cycle phases of development; or, in part, for minor releases.
Experience Notes:
1. You never know what is enough [testing] unless you know what is more than enough.
2. To err is human. To really foul things up requires a computer.
3. Great engineering [testing] is simple engineering.
4. Myth #1: The computer only does what you tell it.
5. There's always one more bug.
6. The definition of an upgrade: take old bugs out, put new ones in.
7. Computers make very fast, very accurate mistakes.
8. It did what? Well, it's not supposed to do that.
9. My software never has bugs. It just develops random features.
10. If debugging is the process of removing bugs, then programming must be the process of putting them in.
11. The sooner a bug is found and fixed, the cheaper.
Testing throughout the life cycle
In any life cycle model, there are several characteristics of good testing:
For every development activity there is a corresponding testing activity
Each test level has test objectives specific to that level
The analysis and design of tests for a given test level should begin during the corresponding development activity
Testers should be involved in reviewing documents as soon as drafts are available in the development life cycle
Model(s) for Testing
V-Model
Integrates development and testing activities
Shows development activities on the left-hand side and the corresponding test activities on the right-hand side
Testing is not seen as a phase that happens at the end of development
For every stage of development, an equal stage of testing needs to occur
Testing is based on documents; errors are found early and are less expensive to fix; quality is built in
Correct tests are created
Testing viewed as throughout the life cycle
Waterfall model
The waterfall model is a sequential software development model (a process for the
creation of software) in which development is seen as flowing steadily downwards (like a
waterfall) through the phases of requirements analysis, design, implementation, testing
(validation), integration, and maintenance. The origin of the term "waterfall" is often cited
to be an article published in 1970 by Winston W. Royce although Royce did not use the
term "waterfall" in this article. Ironically, Royce was presenting this model as an example of
a flawed, non-working model.
Spiral model
The spiral model is a software development process combining elements of both
design and prototyping-in-stages, in an effort to combine advantages of top-down and
bottom-up concepts. Also known as the spiral lifecycle model, it is a systems development
method (SDM) used in information technology (IT). This model of development combines
the features of the prototyping model and the waterfall model. The spiral model is
intended for large, expensive and complicated projects.
The steps in the spiral model can be generalized as follows:
1. The new system requirements are defined in as much detail as possible. This usually involves interviewing a number of users representing all the external or internal users and other aspects of the existing system.
2. A preliminary design is created for the new system.
3. A first prototype of the new system is constructed from the preliminary design. This is usually a scaled-down system, and represents an approximation of the characteristics of the final product.
4. A second prototype is evolved by a fourfold procedure:
   1. evaluating the first prototype in terms of its strengths, weaknesses, and risks;
   2. defining the requirements of the second prototype;
   3. planning and designing the second prototype;
   4. constructing and testing the second prototype.
5. At the customer's option, the entire project can be aborted if the risk is deemed too great. Risk factors might involve development cost overruns, operating-cost miscalculation, or any other factor that could, in the customer's judgment, result in a less-than-satisfactory final product.
6. The existing prototype is evaluated in the same manner as was the previous prototype, and, if necessary, another prototype is developed from it according to the fourfold procedure outlined above.
7. The preceding steps are iterated until the customer is satisfied that the refined prototype represents the final product desired.
8. The final system is constructed, based on the refined prototype.
9. The final system is thoroughly evaluated and tested. Routine maintenance is carried out on a continuing basis to prevent large-scale failures and to minimize downtime.
Test Levels
2.2.1 Unit Testing
"A fault discovered and corrected in the unit testing phase is more than a hundred times cheaper than if it is done after delivery to the customer." - Barry Boehm
Testing of individual software components (BS 7925-1)
Also known as component testing, module testing, or program testing
Searches for defects in, and verifies the functioning of, software components (programs, objects, classes, etc.)
Simulators may be used
May include testing of functionality and specific non-functional characteristics, such as memory leaks
Also includes structural testing (e.g., branch coverage) with access to the code
Usually done by the programmer who wrote the code
Defects are typically fixed as soon as they are found, without formally recording incidents
Prepare and automate test cases, e.g., with JUnit (see the sketch below)
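A minimal sketch of the last bullet, assuming JUnit 5 on the classpath (the Discount class is a hypothetical unit under test, invented for illustration):

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.*;

    // Hypothetical unit under test: computes a discounted price.
    class Discount {
        static double apply(double price, double percent) {
            if (percent < 0 || percent > 100) {
                throw new IllegalArgumentException("percent out of range");
            }
            return price * (100 - percent) / 100.0;
        }
    }

    class DiscountTest {
        @Test
        void appliesNormalDiscount() { // typical value
            assertEquals(90.0, Discount.apply(100.0, 10.0), 0.0001);
        }

        @Test
        void rejectsOutOfRangePercent() { // error-handling path
            assertThrows(IllegalArgumentException.class,
                         () -> Discount.apply(100.0, 150.0));
        }
    }

Note that the test exercises both the normal computation and the error-handling path; automated cases like these are what make defects cheap to fix at this level.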
Sample Entry and Exit Criteria for Unit Testing
Entry Criteria
Business Requirements are at least 80% complete and have been approved to date
Technical Design has been finalized and approved
Development environment has been established and is stable
Code development for the module is complete
All the test cases are documented
Exit Criteria
Code has version control in place
No known major or critical defects prevent any modules from moving to System Testing
A testing transition meeting has been held and the developers have signed off
Project Manager's approval has been received
All test cases completed successfully
No outstanding critical defects
All test results have been documented
2.2.2 Integration Testing
Tests interfaces between components
Interactions with different parts of a system, such as the operating system, file system, and hardware
Tests interfaces between systems
Testers concentrate only on the integration itself (e.g., if they are integrating module A with module B, they are interested in testing the communication between the modules, not the functionality of either module)
Both functional and structural approaches may be used
More than one level of integration:
Tests the interactions between software components and is done after component testing
Tests the interactions between different systems and may be done after system testing. In this case, the developing organization may control only one side of the interface, and this leads to a high risk.
Ideally, testers should understand the architecture and influence integration planning. If
integration tests are planned before components or systems are built, they can be built in
the order required for most efficient testing.
Sample Entry and Exit Criteria for Integration Testing
Entry Criteria
System testing has been completed
Outstanding issues and defects have been identified and documented
Test scripts and schedule are ready
The integration testing environment is established
Exit Criteria
All systems involved passed integration testing and meet agreed-upon functionality and performance requirements
Outstanding defects have been identified, documented, and presented to the business sponsor
Stress, performance, and load tests have been satisfactorily conducted
The implementation plan is in final draft stage
A testing transition meeting has been held and everyone has signed off
2.2.3 System Testing
Testing the system based on what it is supposed to do (functional) and how well it is doing it (non-functional)
Two Types
o Functional
o Non-functional, including: security, reliability, usability, recovery, interoperability, etc.
Sample Entry and Exit Criteria for System Testing
Entry Criteria
Unit Testing for each module has been completed and approved
Each module is under version control
An incident tracking plan has been approved
A system testing environment has been established, corresponding to the final target or production environment as much as possible in order to minimize the risk of environment-specific failures
The system testing schedule is approved and in place
Exit Criteria
Application meets all documented business and functional requirements
No known critical defects prevent moving to Integration Testing
All appropriate parties have approved the completed tests
A testing transition meeting has been held and the developers have signed off
2.2.4 Acceptance Testing
To determine whether a system satisfies its acceptance criteria or not
To enable the customer to determine whether to accept the system or not
To test the software in the "real world" by the intended audience
Purpose of acceptance testing
To verify the system or changes according to the original needs
Procedures for conducting the acceptance testing:
Define the acceptance criteria
o Functionality requirements
o Performance requirements
o Interface quality requirements
o Overall software quality requirements
Develop an acceptance plan
o Project description
o User responsibilities
o Acceptance description
Execute the acceptance test plan
Sample Entry and Exit Criteria for Acceptance Testing
Entry Criteria
Integration testing has been completed
Business requirements have been met or renegotiated with the Business Sponsor or representative
UAT test scripts are ready for execution
The testing environment is established
Security requirements have been documented and necessary user access obtained
Exit Criteria
UAT has been completed and approved by the user community in a transition meeting
Change control is managing requested modifications and enhancements
Business sponsor agrees that known defects do not impact a production release
No remaining defects are rated
Test Types
Functional Testing
The functions are what the system does
Considers the external behavior of the software (black-box testing)
Based on functions and features (described in documents) and their interoperability with specific systems
May be performed at all test levels
Validates that an application or Web site conforms to its specifications and correctly performs all its required functions
Non-Functional Testing
It is the testing of how the system works
Non-functional testing includes, but is not limited to, performance testing, load testing, stress testing, usability testing, maintainability testing, reliability testing, and portability testing
May be performed at all test levels
The term non-functional testing describes the tests required to measure characteristics of systems and software that can be quantified on a varying scale, such as response times for performance testing
Structural Testing
Structural (white-box) testing may be performed at all test levels
Structural techniques are best used after specification-based techniques, in order to help measure the thoroughness of testing through assessment of coverage of a type of structure
Coverage is the extent that a structure has been exercised by a test suite,
expressed as a percentage of the items being covered. If coverage is not 100%, then more tests may be designed to test those items that were missed and, therefore, increase coverage (see the sketch after this list)
At all test levels, but especially in component testing and component integration testing
Structural testing may be based on the architecture of the system
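As a rough illustration of branch coverage (a hedged sketch; the Shipping class and its numbers are invented): each decision contributes two branch outcomes, and coverage is the outcomes exercised divided by the total, expressed as a percentage.

    // Hypothetical unit under test: two decisions, hence four branch outcomes.
    public class Shipping {
        static double cost(double weightKg, boolean express) {
            double base = (weightKg > 10.0) ? 12.0 : 5.0; // outcomes 1 (true), 2 (false)
            return express ? base * 2.0 : base;           // outcomes 3 (true), 4 (false)
        }

        public static void main(String[] args) {
            // cost(12.0, true) exercises outcomes 1 and 3: 2/4 = 50% branch coverage.
            // Adding cost(2.0, false) exercises outcomes 2 and 4: 4/4 = 100%.
            System.out.println(cost(12.0, true));  // 24.0
            System.out.println(cost(2.0, false));  // 5.0
        }
    }

A suite that stops at 50% would then get two more tests targeting the missed outcomes, exactly the "design more tests to increase coverage" step described above.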
Change-related Testing
Retesting
Retesting or confirmation testing: after a defect is detected and fixed, the software should be retested to confirm that the original defect has been successfully removed
Regression
The repeated testing of an already tested program, after modification, todiscover any defects introduced or uncovered as a result of the change(s).
It is performed when the software, or its environment, is changed
May be performed at all test levels
Applies to functional, non-functional and structural testing
The scope is related to the risk of the change, the size of the existing
system and to the size of the change
Regression testing is a strong candidate for automation (a sketch follows below)
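A minimal automation sketch (JUnit 5 and Maven Surefire assumed; the Checkout class, tag name, and figures are invented): tag regression tests so the whole set can be re-run mechanically after every change.

    import org.junit.jupiter.api.Tag;
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    // Hypothetical unit under test.
    class Checkout {
        static double totalWithTax(double subtotal, double rate) {
            return subtotal * (1 + rate);
        }
    }

    // Tests tagged "regression" can be re-run as a suite after every change,
    // e.g. with Maven Surefire: mvn test -Dgroups=regression
    class CheckoutRegressionTest {
        @Test
        @Tag("regression")
        void totalIncludesTaxAsPreviouslyVerified() {
            // Pins down previously verified behavior so a change elsewhere
            // cannot silently alter it.
            assertEquals(107.0, Checkout.totalWithTax(100.0, 0.07), 0.0001);
        }
    }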
Maintenance Testing
Done on an existing operational system, and triggered by modifications, migration, or retirement of the software or system
Modifications include planned changes (e.g., release-based), corrective and emergency changes, and changes of environment, such as planned operating system or database upgrades
Maintenance testing for migration (e.g., from one platform to another) includes operational tests of the new environment, as well as of the changed software
Maintenance testing for the retirement of a system may include the testing of data migration or archiving
Includes extensive regression testing of parts of the system that have not been changed
May be done at any or all test levels and for any or all test types
Determining how the existing system may be affected by changes is called impact analysis, and is used to help decide how much regression testing to do.
Maintenance testing can be difficult if specifications are out of date or missing.
Software Testing
WHAT is software testing?
Software Testing is a process of verifying and validating that a software application or
program:
Meets the business and technical requirements that guided its design and development
Works as expected
Verification and validation must be applied at each stage in the software process
Software Testing also identifies defects or errors in the application code that must be
fixed.
Software testing can reveal the presence of defects, NOT their absence. A defect is a variance between the expected and actual result. The defect's source may be traced to a fault introduced in the specification, design, or development (coding) phases.
Testing (Validation) Principles
1. The primary role of software testing is two-fold: determine if the system meets specifications (developer view); determine whether the system meets operational business and user needs (sponsor/customer/user view). In addition, there are several secondary roles for testing in an organization that include: raising issues; instilling confidence in the system; providing insight into the software delivery process; continuously improving the test process.
2. Testing is a detection activity. It can only demonstrate the presence of errors. Testing cannot prove the absence of errors. Thus, testing is the process of analyzing a software item to detect the differences between required outputs (or conditions) and actual outputs (or conditions). You can't test quality into software; testing will not improve bad coding techniques or poor coding structure.
3. Present test results as observations, not judgements. This approach goes a long way in developing congenial, constructive communication with the developers.
4. As a general testing principle, all requirements should be validated by one of the following methods: testing, inspection, demonstration, or analysis. This document focuses primarily on the testing method of validation. Some requirements may not be able to be cost-effectively validated by testing. In such cases, requirements should be validated by some other method than testing.
5. Project interim deliverables/documentation may be verified to provide in-progress quality assurance of successful system/software validation. See examples below.
6. Test planning begins as soon as requirements are established, and must be modified whenever the requirements are modified.
7. The testing effort is planned under the assumption that errors will be found.
8. The testing process is documented (plans, design, cases, procedures, and reports) and approved.
9. Every test case/procedure/result should be repeatable.
10. All hardware or software changes should be tested prior to releasing the system to the production environment. This includes regression testing of all software components that could be impacted by the change(s).
11. Testing standards should be established to address: documentation, deliverables, tools & techniques, and quality requirements.
12. Software developers are not the only ones to test the software. Independent testers/users (independent of the engineer responsible for development) may be used to conduct system, acceptance, and other types of testing.
13. Testers never assume that the system/software is correct. The software testing effort provides that evidence.
14. Start early. Even though you might not have all of the details at hand, you can complete a great deal of the planning effort by starting on the general and working toward the specific. By starting early, you can also identify resource needs and plan for them before other areas of the project subsume them.
15. Keep the test plan flexible. Make it easy to add test cases, test data, and so on. The test plan itself should be changeable, but subject to change control.
16. Frequently review the test plan. Other people's observations and input greatly facilitate achieving a comprehensive test plan. The test plan should be subject to quality control just like any other project deliverable.
17. Keep the test plan concise and readable. The test plan does not need to be large and complicated. In fact, the more concise and readable it is, the more useful it will be. Remember that the test plan is intended to be a communication document. The details should be kept in a separate reference document.
18. Calculate the planning effort. You can count on roughly one-third of the testing effort being spent on each of the following test activities: planning, execution, and evaluation.
19. Spend the time to do a complete test plan. The better the test plan, the easier it will be to execute the tests.
20. Always write down what you do and what happens when you do exploratory testing. The primary objectives of testing during the requirements phase are to:
    Determine that the requirements fairly represent what is needed
    Determine that the needs have been defined and documented
    Verify that a reasonable and valid cost/benefit study has been performed
    Determine that the business problem has been solved
    Verify that the control requirements have been specified
    Verify that a reasonable process was followed in developing the business solution
    Verify that a reasonable alternative was selected among the most probable alternative solutions
21. eXtreme Testing - The process of testing software in a dynamic environment whereby the program is executing while testing is underway, to identify defects and enhance quality and productivity. The methodology combines the best practices of traditional software testing methods and processes with the ability to apply these time-tested methods in a rapid development paradigm. It focuses on collaboration between developer, customer, and tester; testing is accomplished in shorter timeframes with an emphasis on developing the software correctly the first time, thereby eliminating the need for lengthy test cycles and redundant tests. This is a highly customizable approach to testing software and is adaptable to any software development environment. The best success is achieved in organizations that already have mature development and testing processes and have attained a level of capability beyond simplistic means.
Test Data and Data Verification
It is important to consider the verification, validity, accuracy, and completeness of
any test data. Queries or utilities may be constructed to independently verify the contents
of a test data set. Data sets and data verification queries, utilities, etc., should be
controlled as configuration items and form part of the system baseline.
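For example, a small utility can independently verify a test data set before it is used (a minimal sketch; the file name, expected row count, and field count are invented assumptions, not values from this document):

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;

    // Independently verifies a CSV test data set before a test run:
    // checks the row count and that every record has the expected field count.
    public class TestDataCheck {
        public static void main(String[] args) throws Exception {
            List<String> rows = Files.readAllLines(Path.of("accounts_testdata.csv"));
            int expectedRows = 250;    // assumed baseline for this data set
            int expectedFields = 5;    // assumed record layout

            if (rows.size() != expectedRows) {
                throw new IllegalStateException(
                        "Row count " + rows.size() + " != expected " + expectedRows);
            }
            for (String row : rows) {
                if (row.split(",", -1).length != expectedFields) {
                    throw new IllegalStateException("Bad record: " + row);
                }
            }
            System.out.println("Test data set verified OK");
        }
    }

Such a checker should itself be controlled as a configuration item alongside the data set, since both form part of the system baseline.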
Data sets intended for testing use in general may include:
1. snapshots of production data
2. data constructed of specific data entries designed to match particular test procedures; specific expected results
3. messages
4. transactions
5. records
6. files
7. scripts
8. automated tool generated data
9. ad hoc key strokes
10. DBMS views
11. stored procedures
12. queries
VERIFICATION vs. VALIDATION

Verification is a static testing procedure; validation is a dynamic testing procedure.

Verification involves reviewing the requirements, detailed design documents, and test plans through walkthroughs and inspections of the various documents produced during the development and testing process. Validation involves actual testing of the product per the test plan (unit test, integration test, system test, acceptance test, etc.).

Verification is a preventive procedure; validation is a corrective procedure.

Verification asks: are we building the product RIGHT? Validation asks: are we building the RIGHT product?

Verification involves two or more persons and is a group activity. Validation involves the testers and sometimes the users.

Verification is also called human testing, since it involves finding errors by persons participating in a review or walkthrough. Validation is also called computer testing, since errors are found by testing the software on a computer.

Verification occurs on requirements, design, and code. Validation occurs only on code and the executable application.

Verification is performed on both executable and non-executable forms of a work product. Validation is performed only on executable forms of a work product.

Verification finds errors early in the requirements and design phases, and hence reduces the cost of errors. Validation finds errors only during the testing stage, so the cost saving from the errors it finds is less than with verification.

An effective tool for verification is a checklist. Various manual and automated test tools are available for validation.

Verification requires cooperation and the scheduling of meetings and discussions. Validation checks that the product satisfies the requirements and is accepted by the user.

Verification tasks include: (1) planning; (2) execution. Validation tasks include: (1) planning; (2) testware development; (3) test execution; (4) testware maintenance.

Verification activities include: (1) requirements verification; (2) functional design verification; (3) internal design verification; (4) code verification. Validation activities include: (1) unit testing; (2) usability testing; (3) function testing; (4) system testing; (5) acceptance testing.

Verification deliverables (work products) are: (1) verification test plan; (2) inspection report; (3) verification test report. Validation deliverables are: (1) test plan; (2) test design specification; (3) test case specification; (4) test procedure specification; (5) test log; (6) test incident report.
Test (Validation) Design, Methods, & Techniques
Evaluating a system to determine if it satisfies specified requirements. For a specific project, an analysis should be accomplished to determine all appropriate/applicable test design methods and techniques. Those selected should be documented along with rationale/justification. The following is a partial list of test design methods and techniques that, as a minimum, should be considered for applicability to a specific project or product (a boundary value sketch follows the list):
requirements-based functional testing
domain analysis testing
data flow testing
control flow testing
orthogonal array testing
waterfall testing
boundary value testing
risk-based testing
error guessing
test automation
string testing
eXtreme testing
test attacks (see summaries below)
gray-box testing
white-box / structural / glass-box testing
black-box / behavioral testing
transaction testing
state transition testing
all-pairs / pairwise testing
decision table testing
equivalence class testing
translucent-box testing
operation testing
recovery testing
compliance (to process) testing
execution testing (desired level of proficiency)
path testing
error handling / messages testing
business rules decision tables
inter-systems/interface testing (data passing)
manual support (people-computer interaction) testing
fault model
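To make one of these techniques concrete, here is a boundary value / equivalence class sketch (hedged; JUnit 5 is assumed, and the age limits and AgeValidator class are invented for illustration). For a field accepting ages 18 through 65, the equivalence classes are below-range, in-range, and above-range, and boundary value analysis tests the values 17, 18, 65, and 66:

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.*;

    // Hypothetical validator: accepts ages in the inclusive range 18..65.
    class AgeValidator {
        static boolean isValid(int age) {
            return age >= 18 && age <= 65;
        }
    }

    class AgeBoundaryTest {
        @Test
        void boundaryValues() {
            assertFalse(AgeValidator.isValid(17)); // just below the lower bound
            assertTrue(AgeValidator.isValid(18));  // lower bound
            assertTrue(AgeValidator.isValid(65));  // upper bound
            assertFalse(AgeValidator.isValid(66)); // just above the upper bound
        }
    }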
Verification Types & Techniques
Verification methods are used to ensure that interim life cycle items (e.g., software, hardware, documentation):
1. comply with standards, procedures, guidelines, and processes
2. satisfy the conditions imposed at the start of that [life cycle] phase
3. are consistent with items from a previous life cycle phase.
Experience Note:
Review early and review often.
Management reviews
Technical reviews
Inspections
Walkthroughs
Audits
Concept of operations review/walkthrough
Requirements review/walkthrough
Design review/walkthrough
Code inspection/walkthrough
Test readiness review/walkthrough
Requirements tracing (e.g., throughout the life cycle; requirements versus design; requirements versus tests)
Phase-end reviews
Physical audit (correct versions of hardware & software components) (Experience Note: Before you release the program, you MUST check that the release media contain the most recent version of every file.)
In-progress reviews
Post-implementation reviews
Peer reviews
Test Attack Summary - User Interface Attacks

1. Apply inputs that force all error messages (see the sketch after this list).
2. Apply inputs that force the software to establish default values.
3. Explore allowable character sets and data types.
4. Overflow input buffers.
5. Find inputs that may interact, and test combinations of their values.
6. Repeat the same inputs numerous times.
7. Force different outputs to be generated for each input.
8. Force invalid outputs to be generated.
9. Force properties of an output to change.
10. Force the screen to refresh.
11. Apply inputs using a variety of initial conditions.
12. Force a data structure to store too many or too few values.
13. Investigate alternate ways to modify internal data constraints.
14. Experiment with invalid operand and operator combinations.
15. Force a function to call itself recursively.
16. Force computation results to be too large or too small.
17. Find features that share data or interact poorly.
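As an illustration of attack 1 (a hedged sketch; the quantity parser and its messages are invented, and JUnit 5 is assumed): feed inputs that should trigger each documented error message and assert that the right message actually appears.

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class QuantityFieldTest {
        // Hypothetical parser for a quantity input field (1..999 allowed).
        static String parse(String input) {
            try {
                int qty = Integer.parseInt(input.trim());
                if (qty < 1 || qty > 999) return "ERROR: quantity out of range";
                return "OK";
            } catch (NumberFormatException e) {
                return "ERROR: quantity must be numeric";
            }
        }

        @Test
        void forcesEachErrorMessage() {
            assertEquals("ERROR: quantity must be numeric", parse("abc"));
            assertEquals("ERROR: quantity out of range", parse("0"));
            assertEquals("ERROR: quantity out of range", parse("1000"));
            assertEquals("OK", parse("42"));
        }
    }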
Test Attack Summary - System Interface Attacks

1. Fill the system to capacity.
2. Force the media to be busy or unavailable.
3. Damage the media.
4. Assign an invalid file name.
5. Vary access permissions.
6. Vary or corrupt file contents.
Levels / Categories of Testing
For a specific project, an analysis should be accomplished to determine all
appropriate/applicable/cost-effective levels of tests. Those selected should be
documented along with rationale/justification. The following is a partial list of important
levels of validation/testing that, as a minimum, should be considered for applicability to a
specific project or product:
1. Functional testing (Generally required for all products. The purpose of functional testing is to reveal defects related to the product's/component's functionality and conformance to documented functional requirement specifications.)
2. Unit testing - usually accomplished by developers; Computer Software Unit testing
3. Structural testing
4. Exploratory testing (Always write down what you do and what happens when you run exploratory tests.)
5. Component/sub-component testing; Computer Software Component testing
6. Walkthroughs, inspections, desk-checking
7. Verification (e.g., reviews, examinations, walkthroughs, desk-checking, or inspection of interim work products such as requirements, design, specifications, documentation, prototypes, code; early detection of errors is highly cost effective.)
8. Developmental integration testing
9. Developmental system testing
10. User acceptance testing (Generally required for all products. The purpose of acceptance testing is convincing the user that the product fulfills expected user needs.)
11. Performance/load/stress testing
12. Security/access testing
13. Usability testing
14. Operational procedure documentation verification
15. Regression testing (reliability)
16. Alpha & beta testing
17. Smoke test - establish that the system is stable and all major functionality is present and works under normal conditions.
18. Pilot testing
19. Recovery testing - can involve the manual functions of an application, loss of input capability, loss of communication lines, hardware or operating system failure, loss of database integrity, operator error, or application system failure.
20. Operations testing / usability testing (ease of operations)
21. Compliance testing - verifies that the application was developed in accordance with information technology standards, procedures, and guidelines.
22. Manual-support testing - Systems commence when transactions originate and conclude with the use of the results of processing. The manual part of the system requires the same attention to testing as does the automated segment.
23. Intersystem [interface] testing
24. Parallel testing (e.g., matching test results between the current live system and the new system.)
25. Compliance testing (authorization) - Testing should verify that the authorization rules have been properly implemented and evaluate compliance with them. Test conditions should include unauthorized transactions or processes to ensure that they are rejected, as well as ensuring authorized transactions are accepted.
Web Site Specific Testing
Testing specific to Web sites should address (a response-time sketch follows the list):

Visual appeal, usability, readability, accessibility
Site navigation and hyperlinks
Accurate, reliable, and consistent information
Correct data capture
Transaction completion
Gateway software (Common Gateway Interface)
HTML validation / W3C compliance
Alternative graphics text
Shopping cart, price, tax, and shipping calculations
Server load testing
Database / file / table validation, integrity, and authenticity of facts provided
Query response time
Multiple browser compatibility
User/page response time
Recovery after failure/error; fail-over; reliability; availability
Network security
Payment transaction security
Printer-friendly pages
Style guides/sheets
Client versus server validation & functional partitioning
Search engine visibility / meta-tags
Site map
Pop-ups
Audio/video streaming content
Database middleware software
Cookies
Screen size and pixel resolution
Scalability/capacity growth
Lists
Searchable documents
Forms
Frames
Image weight (load time)
Information architecture - chunking
Usability
Data & access security
Site user impact
Backup / recovery
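Several of these items, such as query and user/page response time, lend themselves to simple automated spot checks. A minimal sketch, assuming Java 11+ and an invented placeholder URL and 2-second budget:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Spot-checks page response time against an assumed budget.
    public class PageResponseCheck {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://example.com/")) // placeholder URL
                    .build();

            long start = System.nanoTime();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;

            System.out.println("Status: " + response.statusCode()
                    + ", elapsed: " + elapsedMs + " ms");
            if (elapsedMs > 2000) { // assumed response-time budget
                System.out.println("WARNING: exceeds assumed 2-second budget");
            }
        }
    }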
WHY do software testing?
"A clever person solves a problem. A wise person avoids it." - Albert Einstein
Software testing answers questions that development testing and code reviews can't:
Does it really work as expected?
Does it meet the users' requirements?
Is it what the users expect?
Do the users like it?
Is it compatible with our other systems?
How does it perform?
How does it scale when more users are added?
Which areas need more work?
Is it ready for release?
What can we do with the answers to these questions?
Save time and money by identifying defects early
Avoid or reduce development downtime
Provide better customer service by building a better application
Know that we've satisfied our users' requirements
Build a list of desired modifications and enhancements for later versions
Identify and catalog reusable modules and components
Identify areas where programmers and developers need training
WHO does the testing?

Software testing is not a one-person job. It takes a team - larger or smaller depending on
the size and complexity of the application being tested.
The programmer(s) who wrote the application should have a reduced role in the testing if
possible. The concern here is that they're already so intimately involved with the product
and "know" that it works that they may not be able to take an unbiased look at the results
of their labors.
Testers must be cautious, curious, critical but non-judgmental, and good communicators.
One part of their job is to ask questions that the developers might not be able to ask
themselves, or that are awkward, irritating, insulting, or even threatening to the developers:
How well does it work?
What does it mean to you that it works?
How do you know it works? What evidence do you have?
In what ways could it seem to work but still have something wrong?
In what ways could it seem to not work but really be working?
What might cause it not to work well?
Key Players and Their Roles
Business sponsor(s) and partners
Provides funding
Specifies requirements and deliverables
Approves changes and some test results
Project manager
Plans and manages the project
Software developer(s)
Designs, codes, and builds the application
Participates in code reviews and testing
Fixes bugs, defects, and shortcomings
Testing Coordinator(s)
Creates test plans and test specifications based on the requirements and the functional and technical documents
Tester(s)
Executes the tests and documents results
Test Plan, Test Design & Cases, Test Report/Results
The recommended basic components described in this section should be included in all documents, unless there is sufficient and acceptable justification for any of them, in whole or in part, to be not applicable. These basic components may have different names, groupings, or sequences in different projects, and it may be necessary to determine if there is an equivalent or suitable substitute. Additional information may be added as required.
A practical note from some experts: A test plan [documentation] is a valuable tool
to the extent that it helps you manage your testing project and find bugs. Beyond that it is
a diversion of resources.
Experience Note: Lots of documents do not make good documentation.
Test Plan Content
1. Experience Notes:
   To fail to plan is to plan to fail.
   A good plan today is better than a perfect plan tomorrow. (Wag the Dog, movie dialog)
   No plan survives the departure point; keep the plan current.
2. Introduction: testing policies; testing standards & conventions; purpose of project tests; scope; project testing objectives; system description; references; definitions/terms; assumptions; constraints; testing documentation approach.
3. Project Testing Communication Approach: communication mechanisms, e.g., formal and informal meetings; working sessions; processes (such as defect tracking); tools (issue and defect tracking, electronic bulletin boards, notes databases, Intranet sites); bulletin boards; email; status reports; in-progress review meetings.
4. Testability: Is necessary and sufficient documentation available to plan testing? Is the documentation available/provided written with sufficient clarity, accuracy, and detail to allow the system/product/software to be unambiguously tested?
5. Identification of Configuration Item(s) Under Test: e.g., hardware (e.g., specialty/test equipment, communication server, terminal, applications server, computer - nomenclature, version, date); software (e.g., modules, units, systems, sub-systems, CSCIs, CSCs, CSUs, data structures - name, version/revision, and date); user operational procedures (document name, version/revision, and date).
6. Risks & Contingency Planning*: Identify both project-oriented testing activity risks as well as application-oriented risks. Both types of risks may potentially affect the success of the testing. (*If not already included in project management planning documentation.)
7. Functions/Features/Design to be Tested
8. Functions/Features/Design Not to be Tested
9. Traceability: All requirements/features should be traceable to at least one test procedure; conversely, all test procedures should be traceable to at least one requirement/feature. This traceability is best documented in a traceability matrix.
10. Test Technical Approach: project test strategy (e.g., risk reduction); testing levels; test design methods/techniques; test tools; test metrics; test automation strategy.
11. Test Data: creation, verification, and use.
12. Test Results Recording, Data Collection, Log, & Reports: e.g., instructions for performing tests, documenting results, reporting and resolving deviations, documentation/log of errors discovered, error corrections, re-testing, sign-off/approval provisions.
13. Test Execution: e.g., important factors that affect test execution; test cycles; test resources.
14. General Pass/Fail Criteria
15. Acceptance Criteria
16. Suspension/Resumption Requirements/Criteria: e.g., corrective action taken when problems or errors are encountered may include correcting documentation, re-running tests, re-coding, re-designing, and regression testing.
17. Pre-Test Needs/Test Environment: e.g., hardware/test equipment & documentation; test support software & documentation; test data files; test tools & documentation (test driver software may need to be developed or procured to test some applications; test design documentation should address this situation); description of the test data design/validation; installation instructions; multiple environment migrations (e.g., development -> test -> live).
18. Organizational Roles and Responsibilities*: e.g., groups and individuals executing test tasks, groups and individuals providing test support functions. (*If not already included in project management planning documentation.)
19. Miscellaneous: test deliverables* (e.g., plans, design, cases, procedures, reports, logs); tasks*; schedule*; milestones*; staffing needs*; training needs*; pre-requisite tasks* (e.g., completion of interfacing systems). (*If not already included in project management planning documentation.)
Test Design & Cases Content
1. Introduction: e.g., same as test plan or note exceptions or differences.
2. Test Approach Refinements: e.g., coverage (operational scenarios, use cases, requirements, design, data domain, source code, executable code, logic paths)
3. Test Automation: e.g., automated test regime/environment, techniques, selected test tools, applicability of test automation for specific project testing activities, division between manual and automated tests.
4. Test Case Selection, Rationale, Justification, Traceability: e.g., with respect to features or specifications
5. Test Case Identification (for each test case)
6. Specific Pass/Fail Criteria for Each Feature/Function/Scenario
Positive Testing = (not showing an error when not supposed to) + (showing an error when supposed to). If either of the situations in parentheses happens, you have a positive test in terms of its result - not what the test was hoping to find. The application did what it was supposed to do. Here the user tends to enter valid values according to the requirements.

Negative Testing = (showing an error when not supposed to) + (not showing an error when supposed to). (Usually these situations crop up during boundary testing or cause-effect testing.) Here, if either of the situations in parentheses happens, you have a negative test in terms of its result - again, not what the test was hoping to find. The application did what it was not supposed to do. The user tends to enter invalid values, which may crash the application.
Example

Positive test case: In a registration form, for the Name field, the user should be allowed to enter only alphabetic characters. For positive testing, the tester enters only alphabetic characters, and the application should run properly and accept them.

Negative test case: In the same field, the user tries to enter numbers and special characters; if the case is executed successfully (the input is rejected), negative testing is successful.
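A hedged sketch of the example above as automated tests (JUnit 5 assumed; the NameField validator is invented to mirror the alphabets-only rule described):

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.*;

    // Hypothetical validator mirroring the rule above: alphabetic characters only.
    class NameField {
        static boolean accepts(String input) {
            return input != null && !input.isEmpty() && input.matches("[A-Za-z]+");
        }
    }

    class NameFieldTest {
        @Test
        void positiveCase_acceptsAlphabets() {
            assertTrue(NameField.accepts("Anita"));
        }

        @Test
        void negativeCase_rejectsDigitsAndSpecialCharacters() {
            assertFalse(NameField.accepts("An1ta")); // digits
            assertFalse(NameField.accepts("A@#"));   // special characters
            assertFalse(NameField.accepts(""));      // empty input
        }
    }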
Test Procedures Content
1. Introduction: e.g., same as test plan or note exceptions or differences.
2. Test Preparations: e.g., test case, procedure, script, and test data names/identification; pre-test/set-up procedures; hardware preparation; software preparation; other pre-test preparations; system/software initialization; test inputs/outputs; criteria for evaluating results; assumptions, constraints, dependencies
3. Test Procedure(s) Description: e.g., detailed steps, expected results, actual results/outcome/log; deviations from test procedures; variances and problems encountered; test assessment
4. Errors/Enhancements: e.g., bug/incident tracking/documentation/prioritization (errors are inevitable); enhancement/change tracking/documentation/prioritization (changes are inevitable).
Test Report
1. Experience note: Record observations of system behavior. Avoid making judgements about products or the development staff producing those products.
2. Introduction: e.g., same as test plan or note exceptions or differences.
3. Expected results versus actual results: Define any problems; identify the work and input sequences leading to a suspected problem; identify the test case/procedure producing the suspected problem; identify the system elements involved; identify the flow of work and/or data input leading to the condition; identify the tester(s); ensure the problem is repeatable.
4. Detailed, specific verification artifacts/evidence/results should be captured that can be compared against equally detailed expected results, rather than a simple pass/fail format. As an example, these artifacts may be generated reports, screen prints, and test database contents. All artifacts should be unambiguously labeled with the test case/procedure number, step number, date, and tester identification. Actual results: deviations from test procedures, variances, and problems encountered.
5. Errors/Enhancements: e.g., bug/incident reporting; enhancement/change reporting.
6. Summary of Test Activities and Actual Results
7. Comprehensive Test Evaluation, Notes, and Recommendations
Risk Analysis
Risk analysis is a technique to identify and assess factors that may jeopardize the success of a project or the achievement of a goal. The technique also helps define preventive measures to reduce the probability of these factors occurring, and identifies countermeasures to deal successfully with these constraints when they develop, to avert possible negative effects on the competitiveness of the company.
One of the more popular methods of performing a risk analysis in the computer field is called the Facilitated Risk Analysis Process (FRAP).
Reference: Risk Management Guide for Information Technology Systems (PDF).
An introduction to the risk management process: identifying risks, analyzing them, mitigating them, contingency planning, and monitoring; and how to use a risk-based approach to prioritize testing tasks and plan the appropriate level of resources to apply to testing, given that you can't test everything (a prioritization sketch follows the outline below).
a. Risk Management Process
   i. Identify
   ii. Analyze
   iii. Plan
   iv. Mitigate
   v. Track
b. Risk-Based Testing
   i. Using a Risk Analysis to Plan Testing
   ii. Quality Criteria
   iii. Risk Catalogs
   iv. Risk-Based Release Decisions
   v. When to Use Alternate Methods to Mitigate Risk
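To connect the outline to practice, here is a minimal risk-based prioritization sketch (the feature areas and 1-5 ratings are invented for illustration; exposure here is simply likelihood times impact):

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    // Ranks areas to test by risk exposure = likelihood x impact (ratings 1..5).
    public class RiskBasedPrioritization {
        record Risk(String area, int likelihood, int impact) {
            int exposure() { return likelihood * impact; }
        }

        public static void main(String[] args) {
            List<Risk> risks = new ArrayList<>(List.of(
                    new Risk("Payment processing", 4, 5),
                    new Risk("Report layout", 3, 2),
                    new Risk("User login", 2, 5)));

            // Highest-exposure areas are tested first and given more resources.
            risks.sort(Comparator.comparingInt(Risk::exposure).reversed());
            risks.forEach(r -> System.out.println(r.exposure() + "  " + r.area()));
        }
    }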