Software Testing Concepts and Terminology



<ul><li><p>8/8/2019 Software Testing Concepts and Terminology</p><p> 1/17</p><p>divine QA Testing</p><p>1 TESTING CONCEPTS AND TERMINOLOGY</p><p>1.1 Software Testing Strategy</p><p>Testing is a set of activities that can be planned in advance and conducted systematically. A software testing strategy has the following characteristics:</p><p>• Testing begins at the component (unit) level and works "outward" towards the integration of the entire software system.</p><p>• Different testing techniques are appropriate at different points in time.</p><p>• Testing is conducted by the developer of the software and by an independent test group (the QA testing group).</p><p>• Testing and debugging are different activities, but debugging must be accommodated in any testing strategy.</p><p>A strategy for software testing must accommodate the low-level tests necessary to verify that a small source code segment has been correctly implemented, as well as the high-level tests that validate major system functions against customer requirements.</p><p>Test granularity refers to the fineness or coarseness of a test's focus. A highly granular test allows the tester to check low-level details, often internal to the system; a less granular test provides the tester with information about general system behavior. Test granularity can be thought of as running along a spectrum ranging from structural (white-box) to behavioral (black-box and live).</p><p>Structural tests (also known as white-box tests) find bugs in low-level operations, such as those that occur at the level of individual lines of code or objects. 
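</p><p>A structural test at this level can be illustrated with a small, self-contained sketch: a hypothetical function and a unit test that exercises every branch of its control structure (the function and all names are invented for illustration):</p>

```python
def classify_triangle(a, b, c):
    """Hypothetical unit under test: classify a triangle by its side lengths."""
    if a <= 0 or b <= 0 or c <= 0:
        raise ValueError("sides must be positive")
    if a + b <= c or b + c <= a or a + c <= b:
        raise ValueError("sides violate the triangle inequality")
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

def test_classify_triangle_branches():
    """Structural coverage: one test case per branch of the function."""
    assert classify_triangle(2, 2, 2) == "equilateral"
    assert classify_triangle(2, 2, 3) == "isosceles"
    assert classify_triangle(3, 4, 5) == "scalene"
    for bad_sides in [(0, 1, 1), (1, 1, 3)]:
        try:
            classify_triangle(*bad_sides)
            raise AssertionError("expected ValueError for %r" % (bad_sides,))
        except ValueError:
            pass  # each invalid input must be rejected

test_classify_triangle_branches()
```

<p>Writing one case per branch is what makes the test granular: a coverage tool would report every line of the function as exercised.</p><p>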
Structural testing involves detailed knowledge of the system.</p><p>Behavioral tests (also known as black-box tests) are often used to find bugs in high-level operations, at the level of features, operational profiles and customer scenarios.</p><p>A testing strategy for a software system should encompass both structural and behavioral testing.</p><p>1.2 Testing Phases</p><p>A sequential view of software development and testing is shown in Fig. 1.</p><p>Considering the process from a procedural point of view, testing within the context of software engineering is actually a series of four steps that are implemented sequentially. Initially, testing focuses on each component individually, ensuring that it functions properly as a unit. This phase of testing is known as unit testing. Unit testing makes heavy use of white-box testing techniques, exercising specific paths in a module's control structure to ensure complete coverage and maximum error detection.</p><p>Fig. 1: Sequential view of software development and testing</p><p>Components and classes are packaged and integrated to form the complete software system. Integration testing addresses the issues associated with the dual problems of verification and program construction. 
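</p><p>At the integration level the emphasis shifts to the seams between components. A minimal sketch, using two hypothetical components exercised only through their public interfaces:</p>

```python
class PriceCalculator:
    """Hypothetical component A: totals a list of (quantity, unit_price) pairs."""
    def total(self, items):
        return sum(qty * price for qty, price in items)

class Cart:
    """Hypothetical component B: collects items and delegates pricing to A."""
    def __init__(self, calculator):
        self.calculator = calculator
        self.items = []
    def add(self, qty, price):
        self.items.append((qty, price))
    def checkout(self):
        return self.calculator.total(self.items)

def test_cart_and_calculator_integrate():
    # Only public methods are driven; what is verified is the flow of data
    # between the two components, not their internals.
    cart = Cart(PriceCalculator())
    cart.add(2, 10.0)
    cart.add(1, 5.0)
    assert cart.checkout() == 25.0

test_cart_and_calculator_integrate()
```

<p>Because only visible behavior is asserted, the test would survive an internal rewrite of either component, which is what makes it useful during integration.</p><p>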
Black-box test case design techniques are the most prevalent during integration, although a limited amount of white-box testing may be used to ensure coverage of major control paths.</p><p>[Fig. 1: development activities (Requirements Design, High-level Design, Implementation Design, Code) paired with the corresponding test-design activities (Acceptance, System, Integration and Unit Test Design), followed by the execution phases Unit, Integration, System and Acceptance Testing.]</p><p>Acceptance and validation criteria (established during requirements analysis) must be tested. Validation testing provides the final assurance that the software meets all functional, behavioral and performance requirements. Black-box testing techniques are used exclusively during validation. Validation testing can be combined with integration or system testing.</p><p>Software, once validated, must be combined with other system elements (hardware, network, databases, etc.). System testing verifies that all elements mesh properly and that overall system function/performance is achieved.</p><p>Acceptance testing is conducted by the customer to validate requirements. It normally covers functionality and other aspects of the software system, verifying and validating them against client requirements.</p><p>During the various phases of testing, different types of tests can be conducted to examine a specific view or aspect of the system. 
The following tests can be conducted during unit, integration, system and acceptance testing.</p><p>Unit Testing: Functionality, User interface, Regression</p><p>Integration Testing: Functionality, User interface, Dependency (API testing), Performance, Smoke, Regression, Forward/backward compatibility, Capacity and volume, Error/Disaster handling and recovery, Concurrent execution</p><p>System and Acceptance Testing: Functionality, User interface, Capacity and volume, Error/Disaster handling and recovery, Stress, Performance, Localization, Networks and distributed environments, Standards conformance, Security, Installation and upgrade testing, Multiple configuration testing, User documentation testing, Beta, Alpha</p><p>1.3 Testing Types</p><p>Functionality Testing</p><p>During unit testing, each component or object is tested for completeness and correctness of implementation of a specific feature or functionality. In integration testing we focus on functionality that requires the correct operation of two or more components, or a flow of data between them.</p><p>During system testing we should consider functionality in terms of a whole sequence of end-user operations or an entire area of functionality. In functionality testing we should consider "right action" and "wrong side effect" risks, and also test with both valid and invalid data sets.</p><p>User Interface Testing</p><p>The user interface is the first thing a user encounters in a product. 
While the product may perform its intended function correctly, if the UI behaves incorrectly, does not refresh properly, or inadvertently overwrites meaningful text on the screen, the user can be seriously annoyed.</p><p>Testing the user interface of a product usually consists of the operation or simulation of keyboard typing and mouse clicking and movement to perform menu selection, field entry, field motion, control operation, request confirmation and so on.</p><p>User interface testing during the unit testing phase often involves only a select few UI objects or a specific screen or page. During integration testing we need to focus on navigation/page flow across several screens/pages. In system testing we need to test the complete navigation required to meet client requirements. In addition to testing the normal navigation with reference to various types of users, it is also necessary to test for negative behavior.</p><p>Concurrent Execution Testing</p><p>Although a product may appear to function properly when tested by a single user, undesirable results may occur when multiple instances of the product run concurrently: they may show up in the application or in the operating system. A product that can have many instances of itself running in parallel should undergo these concurrent execution tests:</p><p>• Simple usage (two users)</p><p>• Standard usage (many users)</p><p>• Boundary situations (maximum number of users plus or minus one), if such a limit exists</p><p>Multiple Configuration Testing</p><p>A product is usually tested under a default configuration, but it is important to test the non-default configurations as well. Sometimes new tests may have to be written to test under a different configuration. It is possible that there could be an effectively unlimited number of configurations that can be set up. 
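</p><p>A small sketch makes the multiplicative growth of the configuration space concrete; the dimensions and values below are purely illustrative:</p>

```python
from itertools import product

# Illustrative configuration dimensions; a real product defines its own.
operating_systems = ["Windows", "Linux", "macOS"]
databases = ["PostgreSQL", "MySQL"]
locales = ["en_US", "de_DE", "ja_JP"]

# Exhaustive testing means the full cross product, which grows multiplicatively.
all_configs = list(product(operating_systems, databases, locales))
assert len(all_configs) == 3 * 2 * 3  # already 18 runs for three small dimensions

# In practice, a handful of key configurations is chosen from customer data.
key_configs = [
    ("Windows", "PostgreSQL", "en_US"),
    ("Linux", "MySQL", "ja_JP"),
]
assert all(cfg in all_configs for cfg in key_configs)
```

<p>Even three small dimensions already demand 18 runs per test cycle, which is why a selected set of key configurations is tested rather than the full enumeration.</p><p>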
It is necessary to identify the key configurations that are important for the customer's operation of the product, and to test the product under these configurations.</p><p>Dependency Testing</p><p>Dependency testing is recommended to test any API calls made to other products and ensure that these calls act as promised. Such tests also provide a good regression mechanism when the product being developed is integrated with a new version of the products it depends on. We need to test all interactions between products, including error cases. In general, anywhere data or control is transferred from one component to another component (or components), either immediately or in a delayed fashion, an interface exists that can cause trouble.</p><p>Forward/Backward Compatibility Testing</p><p>The product must be compatible (to the published extent) with any previous release and supported dependencies. Backward compatibility testing ensures that no problem will arise from a user trying to use the new version of a product along with an old version, and verifies integration between products.</p><p>Installation and Upgrade Testing</p><p>If a user cannot install or upgrade the product, whether through an inability to understand the documentation, a failure of the installation/upgrade program, or a lack of resources, the product has failed the user.</p><p>We need to check the installation/upgrade documentation for accuracy and follow all instructions from the point of view of a naïve user. Test the entire installation and upgrade procedure, with both correct and incorrect responses to all prompts. If verification scripts are provided to test the installation/upgrade, run them as an additional check. 
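</p><p>In its simplest form, such a verification script can just confirm that the expected artifacts are present after installation; the file names and version numbers below are hypothetical:</p>

```python
import os
import tempfile

# Hypothetical names of files the installer is expected to lay down.
EXPECTED_FILES = ["app.cfg", "main.py"]

def verify_install(install_dir, installed_version, min_version=(2, 0)):
    """Return a list of problems found; an empty list means the install looks sane."""
    problems = []
    for name in EXPECTED_FILES:
        if not os.path.exists(os.path.join(install_dir, name)):
            problems.append("missing file: " + name)
    if installed_version < min_version:
        problems.append("installed version %s older than %s"
                        % (installed_version, min_version))
    return problems

# Simulated run against an empty directory: both expected files are reported missing.
with tempfile.TemporaryDirectory() as empty_dir:
    issues = verify_install(empty_dir, installed_version=(2, 1))

print(issues)
```

<p>A real script would also check registry entries, permissions, services and the like; the point is that the checks are mechanical and repeatable.</p><p>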
Generate possible system errors to check the response of the installation/upgrade program.</p><p>User Documentation Testing</p><p>The user documentation is part of the product. If a new user is unable to use the software because of poor documentation, the product as a whole is unusable. All sections of the user documentation should be tested for accuracy against the reality of the product. Ideally, the user documentation should be held under source control and made part of the regression testing of the product. Verify as correct and complete all demos, tutorials and exercises described in the user documentation.</p><p>Standards Conformance Testing</p><p>Conformance to a specific set of industry standards has to be tested. Tests need to be planned and conducted to detect bugs in standards conformance that could seriously affect the prospects of the product in the market.</p><p>Capacity and Volume Testing</p><p>During unit testing, from a structural test perspective, every buffer, occurrence, storage resource, processor, bus, and I/O channel in the system has to be tested. In integration testing, aspects related to the capacity of the network to handle traffic, and its behavior under various volumes, are tested. Similarly, the data storage capability is tested. During system testing, the capacity and volume limitations are tested from a user's point of view.</p><p>Localization</p><p>Localization typically refers to problems associated with different languages. Languages other than English may require different character sets in which data can be represented. Testing character-set conversion for data exchanged between different localizations is a very important consideration when planning localization testing. 
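</p><p>A basic check along these lines is a character-set round trip: encode sample strings in the target character set, decode them back, and require the result to be identical. The locale/string pairs below are illustrative:</p>

```python
# Round-trip check: every sample string must survive encode/decode in the
# target character set without loss.
samples = {
    "de_DE": "Größenänderung",
    "fr_FR": "déjà vu",
    "ja_JP": "テスト",
}

for locale, text in samples.items():
    assert text.encode("utf-8").decode("utf-8") == text, locale

# A legacy single-byte character set cannot represent the Japanese sample;
# surfacing exactly this kind of failure is the point of a localization test.
try:
    samples["ja_JP"].encode("latin-1")
    lossless = True
except UnicodeEncodeError:
    lossless = False

assert lossless is False
```

<p>The same round-trip idea extends to files, databases and network boundaries, wherever data crosses between character encodings.</p><p>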
Another important item is the provision of different message files in various languages for error messages and other system information. Testing for localization may require setting up special hardware and software.</p><p>Security Testing</p><p>Security testing attempts to verify that the protection mechanisms built into a system will, in fact, protect it from improper penetration. During security testing the tester plays the role of an individual who desires to penetrate the system. The tester would attempt to acquire passwords, access the system through some direct or indirect means, overwhelm the system so that it is not available to others, cause system errors, and so on.</p><p>Performance Testing</p><p>Any product, no matter how well constructed and reliable, is worthless if its performance does not meet the demands of its users. Testing must be done in a realistic setting to ensure adequate performance. However, we have to be cost-effective.</p><p>We need to test how the product performs during the operations users normally execute. If performance goals are set for various operations of the product, it is necessary to verify that these goals are met. Performance is not only "how many per second" but also "how long".</p><p>Smoke Testing</p><p>Smoke testing is an integration testing approach that is commonly used when "shrink-wrapped" software products are being developed. It is designed as a pacing mechanism for time-critical projects, allowing the software team to assess the project on a frequent basis.</p><p>Software components produced are integrated into a "build". A series of tests is designed to expose errors that will keep the build from properly performing its function. 
The intent is to uncover "show-stopper" errors that have the highest likelihood of throwing the software project behind schedule. Sometimes daily builds are made and subjected to smoke testing.</p><p>The smoke test should exercise the entire system from end to end. It does not have to be exhaustive, but it should be capable of exposing major problems. The smoke test should be thorough enough that if the build passes, you can assume that it is stable enough to be tested more thoroughly.</p><p>Regression Testing</p><p>Regression testing is the re-execution of some subset (or all) of the tests that have been previously conducted, to ensure that changes have not propagated unintended side effects. Regression testing is the activity that helps to ensure that changes do not introduce unintended behavior or additional errors. When subsets of tests are selected for regression testing, they should contain:</p><p>• A representative sample of tests that will exercise all software functions</p><p>• Additional tests that focus on software functions that are likely to be affected by the change</p><p>• Tests that focus on the software components that have been changed.</p><p>Networks and Distributed Environments</p><p>If a product works in a network or distributed environment, we need to test for:</p><p>• Date and time zone handling</p><p>• Communication and telephony standards</p><p>Environmental factors (temperature, pressure, etc.) and their impact on the product should also be tested if required.</p><p>Error/Disaster Handling and Recovery</p><p>A software system must recover from faults and resume processing within a prescribed time. In some cases, a system must be fault-tolerant: that is, processing faults must not cause overall system function to cease. 
In other cases, a system failure must be corrected within a specified period of time.</p><p>Recovery testing forces the software to fail in a variety of ways and verifies that recovery is properly performed. If recovery is automatic, then re-initialization, checkpointing mechanisms, data recovery and restart are evaluated for correctness. If recovery requires human intervention, the mean time to repair (MTTR) is evaluated to determine whether it is within acceptable limits.</p><p>Quality risks in this area include unacceptable failure rates, unacceptable recovery times and...</p></li></ul>