Page 1: Software Testing Concepts and Terminology

8/8/2019 Software Testing Concepts and Terminology

http://slidepdf.com/reader/full/software-testing-concepts-and-terminology 1/17

divine QA Testing

1 TESTING CONCEPTS AND TERMINOLOGY

1.1 Software Testing Strategy

Testing is a set of activities that can be planned in advance and conducted systematically. A software testing strategy has the following characteristics:

• Testing begins at the component (unit) level and works "outward" towards the

integration of the entire software system.

• Different testing techniques are appropriate at different points in time.

• Testing is conducted by the developer of the software and by an independent test group (QA testing group).

• Testing and debugging are different activities, but debugging must be

accommodated in any testing strategy.

A strategy for software testing must accommodate low-level tests that are necessary to verify that a small source code segment has been correctly implemented, as well as high-level tests that validate major system functions against customer requirements.

Test granularity refers to the fineness or coarseness of a test's focus. A highly granular test allows the tester to check low-level details, often internal to the system; a less granular test provides the tester with information about general system behavior. Test granularity can be thought of as running along a spectrum ranging from structural (white box) to behavioral (black box and live).

Structural tests (also known as white-box tests) find bugs in low-level operations, such as those that occur at the level of a line of code or an object. Structural testing involves a detailed knowledge of the system.

Behavioral tests (also known as black-box tests) are often used to find bugs in high-level operations, at the level of features, operational profiles and customer scenarios.

A testing strategy for a software system should encompass both structural and behavioral testing.

1.2 Testing Phases

A sequential view of software development and testing is shown in Fig. 1. Considering the process from a procedural point of view, testing within the context of software engineering is actually a series of four steps that are implemented sequentially. Initially, the test focus is on each component individually, ensuring that it functions properly as a unit. This phase of testing is known as unit testing. Unit testing makes heavy use of white-box testing techniques, exercising


specific paths in a module's control structure to ensure complete coverage and maximum error detection.
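As an illustration of unit-level white-box testing, a test can exercise every path in a module's control structure. The function and its test below are a hypothetical sketch, not taken from this document:

```python
# Hypothetical unit under test: it has three control paths, and the
# test below exercises all of them for full branch coverage.
def classify_age(age):
    if age < 0:
        raise ValueError("age cannot be negative")
    if age < 18:
        return "minor"
    return "adult"

def test_classify_age():
    assert classify_age(5) == "minor"       # path 1: under 18
    assert classify_age(30) == "adult"      # path 2: 18 or over
    try:                                    # path 3: the error path
        classify_age(-1)
        assert False, "expected ValueError"
    except ValueError:
        pass

test_classify_age()
```

Behavioral (black-box) tests for the same unit would use only its documented inputs and outputs, without reference to the branches inside.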

 

Fig. 1: Sequential view of software development and testing (the figure pairs Requirements Design with Acceptance Test Design, High level Design with System Test Design, Implementation Design with Integration Test Design, and Code with Unit Test Design; the tests then execute in order as Unit Testing, Integration Testing, System Testing and Acceptance Testing)

Components and classes are packaged and integrated to form the complete software system. Integration testing addresses the issues associated with the dual problems of verification and program construction. Black-box test case design techniques

are the most prevalent during integration, although a limited amount of white-box testing may be used to ensure coverage of major control paths.

Acceptance and validation criteria (established during requirements analysis) must be tested. Validation testing provides the final assurance that the software meets all functional, behavioral and performance requirements. Black-box testing techniques are used exclusively during validation. Validation testing can be combined with integration or system testing.

Software, once validated, must be combined with other system elements (hardware, network, databases, etc.). System testing verifies that all elements mesh properly and that overall system function and performance are achieved.

Acceptance testing is conducted by the customer to validate requirements.

Acceptance testing normally covers functionality and other aspects of the

software system to verify and validate against client requirements.

During the various phases of testing, different types of tests can be conducted to examine a specific view or aspect of the system. The following tests can be conducted during unit, integration, system and acceptance testing.

Unit Testing: Functionality, User interface, Regression

Integration Testing: Functionality, User interface, Dependency (API testing), Performance, Smoke, Regression, Forward/backward compatibility, Capacity and volume, Error/Disaster handling and recovery, Concurrent execution

System and Acceptance Testing: Functionality, User interface, Capacity and volume, Error/Disaster handling and recovery, Stress, Performance, Localization, Networks and distributed environments, Standards conformance, Security, Installation and upgrade testing, Multiple configuration testing, User documentation testing, Beta, Alpha

1.3 Testing Types

Functionality Testing:

During unit testing, each component or object is tested for completeness and correctness of the implementation of a specific feature or functionality. In integration testing, we focus on functionality that requires the correct operation of two or more components or a flow of data between them.


During system testing we should consider functionality in terms of a whole sequence of end-user operations or an entire area of functionality. In functionality testing we should consider "right action" and "wrong side effect" risks, and also test with both valid and invalid data sets.

User Interface Testing

The user interface is the first thing a user encounters in a product. While the product may perform its intended function correctly, if the UI behaves incorrectly, does not refresh properly, or inadvertently overwrites meaningful text on the screen, the user can be seriously annoyed.

Testing the user interface of a product usually consists of operating or simulating the keyboard (typing) and mouse (clicking and movement) to perform menu selection, field entry, field motion, control operation, request confirmation and so on.

User interface testing during the unit testing phase often involves only a select few UI objects or a specific screen or page. During integration testing we need to focus on navigation/page flow across several screens/pages. In system testing we need to test the complete navigation required to meet client requirements. In addition to testing the normal navigation with reference to various types of users, it is also necessary to test for negative behavior.

Concurrent Execution Testing

Although a product may appear to function properly when tested by a single user, undesirable results may occur when multiple instances of the product run concurrently: they may show up in the application or in the operating system. A product that can have many instances running at the same time in parallel should undergo these concurrent execution tests:

• Simple usage (two users)

• Standard usage (many users)

• Boundary situations (maximum number of users plus or minus one), if this limit exists
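The three checks above can be sketched against a toy product that caps its concurrent instances. The Product class and its limit of four users are hypothetical, for illustration only:

```python
import threading

MAX_USERS = 4  # hypothetical configured limit on concurrent instances

class Product:
    """Toy stand-in for a product that limits concurrent instances."""
    _lock = threading.Lock()
    _active = 0

    @classmethod
    def start(cls):
        with cls._lock:
            if cls._active >= MAX_USERS:
                raise RuntimeError("too many concurrent users")
            cls._active += 1

    @classmethod
    def stop(cls):
        with cls._lock:
            cls._active -= 1

def can_start(n):
    """Try to run n instances at once; release them all afterwards."""
    started = 0
    try:
        for _ in range(n):
            Product.start()
            started += 1
        return True
    except RuntimeError:
        return False
    finally:
        for _ in range(started):
            Product.stop()

assert can_start(2)                  # simple usage (two users)
assert can_start(MAX_USERS)          # boundary: maximum number of users
assert not can_start(MAX_USERS + 1)  # boundary: maximum plus one
```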

Multiple Configuration Testing

While a product is usually tested under a default configuration, it is important to test the non-default configurations as well. Sometimes new tests may have to be written to test under a different configuration. There could be an infinite number of possible configurations. It is necessary to identify the key configurations that are important for the customer's operation of the product and to test the product under these configurations.


Dependency Testing

Dependency testing is recommended to test any API calls made to other products and to ensure that these calls act as promised. Such tests also provide a good regression mechanism when the product being developed is integrated with a new version of the products it depends on. We need to test all interactions between products, including error cases. In general, anywhere data or control is transferred from one component to another component (or components), either immediately or in delayed fashion, an interface exists that can cause trouble.

Forward/Backward Compatibility Testing:

The product must be compatible (to the published extent) with any previous release and supported dependencies. Backward compatibility testing ensures that no problem will arise from a user trying to use the new version of a product along with an old version, and verifies integration between products.

Installation and Upgrade Testing

If a user cannot install or upgrade the product, whether through an inability to understand the documentation, a failure of the installation/upgrade program, or a lack of resources, the product has failed the user.

We need to check the installation/upgrade documentation for accuracy and follow all instructions from the point of view of a naïve user. Test the entire installation and upgrade procedure, with both correct and incorrect responses to all prompts. If verification scripts are provided to test the installation/upgrade, run the scripts as an additional check. Generate possible system errors to check the response of the installation/upgrade program.

User Documentation Testing

The user documentation is part of the product. If a new user is unable to use the software because of poor documentation, the product as a whole is unusable. All sections of the user documentation should be tested for accuracy against the reality of the product. Ideally, user documentation should be held under source control and made part of regression testing of the product. Verify as correct and complete all demos, tutorials and exercises described in the user documentation.

Standards Conformance Testing

Conformance to a specific set of industry standards has to be tested. Tests need to be planned and conducted to detect bugs in standards conformance that could seriously affect the prospects of the product in the market.


Capacity and Volume Testing

During unit testing, from a structural test perspective, every buffer, occurrence, storage resource, processor, bus, and I/O channel in the system has to be tested. In integration testing, aspects related to the capacity of the network to handle traffic and its behavior under various volumes are tested. Similarly, the data storage capability is tested. During system testing, the capacity and volume limitations are tested from a user's point of view.

Localization

Localization typically refers to problems associated with different languages. Languages other than English may require different character sets in which data can be represented. Testing character set conversion for data moving between different localizations is a very important consideration when planning localization testing. Another important item is the provision of message files in various languages for error messages and other system information. Testing for localization may require setting up special hardware and software.

Security Testing

Security testing attempts to verify that protection mechanisms built into a system will, in fact, protect it from improper penetration. During security testing the tester plays the role of an individual who desires to penetrate the system. The tester would attempt to acquire passwords, access the system through some direct or indirect means, overwhelm the system so that it is not available to others, cause system errors, and so on.

Performance Testing

Any product, no matter how well constructed and reliable, is worthless if its performance does not meet the demands of its users. Testing must be done in a realistic setting to ensure adequate performance. However, we have to be cost effective.

We need to test how the product performs during the operations normally executed by users. If performance goals are set for various operations of the product, it is necessary to verify that these goals are met. Performance is not only "how many per second" but also "how long".

Smoke Testing

Smoke testing is an integration testing approach that is commonly used when "shrink wrapped" software products are being developed. It is designed as a


pacing mechanism for time-critical projects, allowing the software team to assess its project on a frequent basis.

The software components produced are integrated into a "build". A series of tests is designed to expose errors that will keep the build from properly performing its function. The intent is to uncover "show stopper" errors that have the highest likelihood of throwing the software project behind schedule. Sometimes daily builds are made and subjected to smoke testing.

The smoke test should exercise the entire system from end to end. It does not have to be exhaustive, but it should be capable of exposing major problems. The smoke test should be thorough enough that, if the build passes, you can assume it is stable enough to be tested more thoroughly.
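A minimal smoke-test driver is sketched below, under the assumption that each major function of a build can be invoked as a callable; the build contents are hypothetical:

```python
def run_smoke_test(build):
    """Exercise every major function of a build end to end and
    collect any "show stopper" failures."""
    failures = []
    for name, check in build.items():
        try:
            check()
        except Exception as exc:
            failures.append((name, exc))
    return failures

# A build whose major functions all respond passes the smoke test
# and is stable enough for more thorough testing.
good_build = {
    "startup":  lambda: None,
    "login":    lambda: None,
    "checkout": lambda: None,
}
assert run_smoke_test(good_build) == []
```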

Regression Testing

Regression testing is the re-execution of some subset or all of the tests that have been previously conducted, to ensure that changes have not propagated unintended side effects. Regression testing helps to ensure that changes do not introduce unintended behavior or additional errors. When subsets of tests are selected for regression testing, they should contain:

• A representative sample of tests that will exercise all software functions

• Additional tests that focus on software functions that are likely to be affected by the change

• Tests that focus on the software components that have been changed.
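One way to realize such a subset, sketched with a hypothetical catalogue that tags each test with the components it exercises (names and tags are illustrative):

```python
# Hypothetical test catalogue: each test is tagged with the
# components it exercises.
TESTS = {
    "test_login":    {"auth", "ui"},
    "test_checkout": {"cart", "payment"},
    "test_search":   {"catalog"},
    "test_profile":  {"auth", "profile"},
}

def select_regression_subset(changed_components):
    """Pick tests touching the changed components, plus a small
    representative sample covering the remaining components."""
    focused = {name for name, comps in TESTS.items()
               if comps & changed_components}
    sample, covered = set(), set()
    for name, comps in sorted(TESTS.items()):
        if name not in focused and not comps <= covered:
            sample.add(name)
            covered |= comps
    return focused | sample

subset = select_regression_subset({"auth"})
assert {"test_login", "test_profile"} <= subset
```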

Networks and Distributed Environments

If a product works in a network or distributed environment we need to test for:

• Date and time zone handling

• Communication and telephony standards

Environmental factors (temperature, pressure, etc.) and their impact on the product should also be tested if required.

Error / Disaster Handling and Recovery

A software system must recover from faults and resume processing within a prescribed time. In some cases, a system must be fault tolerant: that is, processing faults must not cause overall system function to cease. In other cases, a system failure must be corrected within a specific period of time.

Recovery testing forces the software to fail in a variety of ways and verifies that recovery is properly performed. If recovery is automatic, then re-initialization, checkpointing mechanisms, data recovery and restart are evaluated for correctness. If recovery requires human intervention, the mean time to repair (MTTR) is evaluated to determine whether it is within acceptable limits.


Quality risks in this area include unacceptable failure rates, unacceptable recovery

times and the inability of the system to function under legitimate conditions

without failure.

 

Alpha and Beta Testing

If software is developed as a product to be used by many customers, it is impractical to perform formal acceptance tests with each one. Most software product builders use a process called alpha and beta testing to uncover errors that only the end users seem able to find.

A customer conducts the alpha test at the developer's site. The software is used in a natural setting, with the developer recording the errors and usage problems. Alpha tests are conducted in a controlled environment.

The beta test is conducted at one or more customer sites by the end users of the software; unlike alpha testing, the developer is not present. The beta test is conducted at live sites not controlled by the developer. The customer records all problems encountered during beta testing and reports them to the developer.

 

Stress Testing

Stress testing is the search for load-dependent bugs that occur only when the system is stressed, and the specification of stress tests is a key activity. Stress testing increases the overall quality and reliability of software. Stress-related bugs are among the most difficult to find and root out.

In stress testing, the system is subjected to unrealistically harsh inputs or loads with inadequate resources, with the intention of breaking it. We test to find out whether the system can operate reliably at the limit of its resources.

Stress testing is run with a high background load, to a point where one or more or all resources are simultaneously saturated. It is to system testing what destructive testing is to physical objects.

Stress testing often finds bugs for the following reasons:

• It forces race conditions. (Race conditions reflect temporal boundaries: normally event A occurs before event B; if by chance event B occurs before A, the system may fail, giving a race condition bug.)

• It totally distorts the normal order of processing, especially processing that occurs at different priority levels.

• It forces the exercise of all system limits, thresholds, or other controls designed to deal with overload situations.


• It greatly increases the number of simultaneous actions.

• It depletes resource pools in extraordinary and unexpected sequences.
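The race-condition point can be made concrete with a shared counter: its read-modify-write sequence is exactly the A-before-B assumption that stress can violate. The sketch below is hypothetical, not from this document, and shows the interlock that prevents lost updates:

```python
import threading

def stressed_counter(n_threads=8, n_increments=10_000):
    """Increment a shared counter from many threads at once."""
    count = 0
    lock = threading.Lock()

    def worker():
        nonlocal count
        for _ in range(n_increments):
            with lock:      # without this lock, two threads can read the
                count += 1  # same value before writing, losing an update

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return count

# With the interlock in place, no increment is lost even under stress.
assert stressed_counter() == 8 * 10_000
```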

Some of the reasons why stress-related bugs are encountered in systems are:

• Assumptions that there will be no interrupts.

• Failure to block or unblock interrupts.

• Assumptions that code is reentrant or not reentrant.

• Bypassing data interlocks.

• Failure to close or open an interlock.

• Assumptions that a called routine is resident or non-resident.

• Assumptions that the calling program is resident or non-resident.

• Assumptions that registers or memory were initialized or not initialized.

• Assumptions that register or memory location contents did not change.

• Local setting of a global parameter.

• Global setting of a local parameter.

1.4 Testing Web Applications

The various types of tests that one can apply to web application testing are:

• Unit Testing – testing of the individual modules and pages that make up the application.

• Page Flow Testing – ensures that pages that have a preceding-page requirement cannot be visited without having first visited the preceding page.

• Usability Testing – ensures that all pages present cohesive information to the user. It is recommended that both end users and UI designers perform this testing, and that it be done very early in the development cycle so as to gain early user acceptance of the user interfaces and/or visual semantics presented.

• Performance Testing – checks that both the hardware (including firewalls, routers, etc.) and software scale to meet varying demands in user load.

• Data Volume Testing – moving large amounts of data through the application, either directly via input/output to a web page, or as input/output from a database that is referenced by a web page.

• Security Testing – ensuring that sensitive information is not presented in clear text form, unauthorized access is not permitted, etc.

• Regression Testing – re-testing specific parts or all of the application to ensure that the addition of new features has not affected other parts of the application.

• External Testing – deals with checking the effect of external factors on the application. Examples of external factors are the web server, the database server, the browser, network connectivity issues, etc.


• Business Logic Testing – checks that the application's business functions work accurately and perform according to constraints detailed in the requirements document.

Some of the categories of testing suggested might not be applicable for certain types of web-enabled applications; the development/test team(s) should make a determination as to whether a test is or is not strictly required. If not, then the test should be defined but simply marked as optional or not required. The rationale for stating that a test is not required is simply to provide a record of the fact that the condition requiring the test was considered, but deemed inappropriate for this application.

Unit Testing

Unit testing involves testing the individual modules and pages that make up the application. Typical tests that fall into this category are input tests (good input, bad input, out-of-range input, excessive input, multiple clicks on buttons, reload of pages prior to results being returned, etc.). The deliberate attempt to confuse the application can also be referred to as edge testing.

Some of the types of checking would include:

• Invalid input (missing input, out-of-bound input, entering an integer when a float is expected and vice versa, control characters in strings, etc.)

• Alternate input format (e.g., 0 instead of 0.0, 0.00000001 instead of 0, etc.)

• Button click testing, e.g., multiple clicking with and without pauses between clicks.

• Immediate reload after a button click, prior to the response having been received.

• Multiple reloads in the same manner as above.

In general, unit tests check the behavior of a given page (i.e., does the application behave correctly and consistently given either good or bad input). Random input and random click testing also fall under the domain of unit testing. This testing involves a user randomly pressing buttons (including multiple clicks on "hrefs") and randomly picking checkboxes and selecting them. There are two forms of output screen expected:

• An error page indicating the type of error encountered.

• A normal page showing either the results of the operation or the normal next page where more options may be selected.

In no event should a catastrophic error occur.
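A unit-level input test for a single page can be sketched as below. The page handler and its quantity field are hypothetical, but the principle is the one above: good or bad input must yield either a result page or an error page, never a crash:

```python
# Hypothetical page handler for a quantity field (valid range 1-99).
def quantity_page(raw_value):
    try:
        qty = int(raw_value)
    except (TypeError, ValueError):
        return {"page": "error", "message": "quantity must be an integer"}
    if not 1 <= qty <= 99:
        return {"page": "error", "message": "quantity out of range"}
    return {"page": "result", "quantity": qty}

assert quantity_page("5")["page"] == "result"
assert quantity_page("abc")["page"] == "error"  # invalid input
assert quantity_page("5.0")["page"] == "error"  # float where int expected
assert quantity_page("100")["page"] == "error"  # out-of-bound input
```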

Page Flow Testing


Page flow testing deals with ensuring that jumping to random pages does not confuse the application. Each page should typically check that it can only be viewed via specific previous pages; if the referring page was not one of that set, then an error page should be displayed. A page flow diagram is a very useful aid for the tester when checking for correct page flow within the application.
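Such a check can be sketched directly from a page flow diagram; the page names and allowed-predecessor table below are hypothetical:

```python
# Hypothetical page-flow rules: each page lists the referring pages
# allowed to precede it, mirroring the page flow diagram.
ALLOWED_PREDECESSORS = {
    "login":    {"home"},
    "cart":     {"catalog", "cart"},
    "checkout": {"cart"},
    "receipt":  {"checkout"},
}

def visit(page, referrer):
    """Serve the page only if the referring page is permitted."""
    if referrer not in ALLOWED_PREDECESSORS.get(page, set()):
        return "error-page"
    return page

# Jumping straight to the receipt must yield an error page.
assert visit("receipt", "home") == "error-page"
assert visit("receipt", "checkout") == "receipt"
```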

Other aspects of page flow testing cross into other areas of testing, such as security. One simple check to consider is forcing the application to move along an unnatural path; the application must resist and display an appropriate error message. Page flow testing involves logging into the system and then attempting to jump to any page in any order once a session has been established. Security testing encompasses several areas discussed later in this section, as well as:

• Resilience to external attacks (e.g., denial of service)

• Resilience to accidental corrupt user input

• Resilience to deliberate corrupt user input

The use of bookmarks, temporary web pages set up to redirect into the middle of an application using faked session information, etc., are all valid ways of testing page flow and session security.

Usability Testing

Usability testing ensures that all pages present a cohesive look to the user, including spelling, graphics, page size, response time, etc. This testing also encompasses performance testing to some degree, as do load and regression testing. Examples of usability testing include:

• Spelling checks

• Graphics checks (colors, dithering, aliasing, size, etc.)

• Meaningful error messages

• Accuracy of data displayed

• Accuracy of data in the database as a result of user input

• Accuracy of data in the database as a result of external factors (e.g. imported data)

• Meaningful help pages, including context-sensitive help

Load and Performance Testing

Load testing the application involves generating varying loads against not only the web server but also the databases supporting the web server and the middleware/application server logic connecting those pages to the databases. Load


testing also includes verification of data integrity on the web pages and within the back-end database, and also load ramping or surges in activity against the application. In particular, attention should be paid to pages that include large amounts of data, and to what happens if multiple users hit these pages concurrently. Some of the questions to be asked are: "Does the site scale?", "Is the site's response time deterministic?", etc. Examples of load testing would include:

• Sustained low-load test (50 users for around 48 hours).

• Sustained high-load test (300+ users for 12 hours).

• Surge test (e.g. run 50 users, then surge to 500 users and then return to 50; no memory leaks, lost users, orphaned processes, etc. should be seen). The system should continue running with multiple surges at various times during the day. This test should run for 48 hours.
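The surge pattern can be sketched with a thread pool driving a stand-in request handler. The handler, user counts and success criterion are illustrative assumptions, not a real load tool:

```python
import concurrent.futures
import time

def handle_request():
    """Stand-in for one request against the application."""
    time.sleep(0.001)
    return "ok"

def run_load(n_users):
    """Fire n_users concurrent requests; report successes and failures."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=n_users) as pool:
        results = list(pool.map(lambda _: handle_request(), range(n_users)))
    ok = results.count("ok")
    return ok, n_users - ok

# Surge pattern: low load, surge, then back to low load; no request
# should be lost at any stage.
for load in (50, 500, 50):
    ok, failed = run_load(load)
    assert ok == load and failed == 0
```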

Another very important facet of load testing is to discover at what load the application will fail and where the saturation points are. All applications eventually degrade given sufficient load. The collection of this degradation-point data is crucial, since it may be used as a control set for monitoring data during live use of the application. Predictive analysis of live monitored data can provide indications of when application stress points may appear on the horizon, so that proactive steps may be taken to ensure that the application and/or the hardware on which it runs scale up to meet new demand.

If major architectural changes occur in the application (for example, a total face lift that makes it more usable), then the load tests must be re-run and a new control set of performance data gathered, since the original control set becomes invalid after the architectural changes.

The table below illustrates loading estimates. This table is important in that it gives the tester an indication of what is considered low, medium and high load. The test scripts to generate these loads can then be constructed accordingly. The table should accurately reflect the real load requirements; for example, the high-load number could be as high as 5000 concurrent users or as low as only a dozen or so concurrent users.

Server Load    Description
Low            10 - 100 concurrent users
Medium         100 - 500 concurrent users
High           500 - 5000 concurrent users

Data Volume Testing

Data volume testing involves testing the application under data load, where large quantities of data are passed through the system (e.g. a large number of items in dropdown/combo boxes, large amounts of data in text boxes). Performance of the


application should be monitored during this testing, since a slow database could significantly affect response time.

A key point is that one should also monitor all the systems involved in data movement. The data collected from the monitoring is a valuable indicator of system performance when coupled with the test load being delivered. This data can be used as a control set for contrasting monitoring data from a live system and providing predictive information indicating when major application stress points may be encountered.

No errors should be seen on application pages or in error logs for pages that are data intensive.

Security Testing

Security testing involves verifying whether both the servers and the application are managing security correctly. All these tests must pass with no exceptions; a partial pass must be recorded as a "Fail", since unauthorized access to data is a major breach of security. Other forms of security testing involve checking not only electronic access security but also the physical security of the servers. Some of the items that may be checked from a server perspective during this testing are:

• Attempt to penetrate system security both internally and externally, to ensure the system that houses the application is secure from both internal and external attacks.

• Attempt to cause things like a buffer overflow to result in root access being given accidentally (such code does exist, but explaining it is beyond the scope of this document).

• Attempt to cause the application to crash by giving it false or random information.

• Ensure that the server OS is at the correct patch levels from a security viewpoint.

• Ensure that the server is physically secure.

Application-level security testing involves testing some or all of the following:

• Unauthenticated access to the application

• Unauthorized access to the application

• Unencrypted data passing

• Protection of the data

• Log files should be checked to ensure they do not contain sensitive information

• Faked sessions. Session information must be valid and secure (e.g. a URL containing a session identifier cannot be copied from one system to another and the application then continued from the different system without being detected).

• Multiple login testing by a single user from several clients.


• Automatic logout after N minutes of inactivity, with positive feedback to the user.

• Attempt to break into the application by running username/password checks using a password-cracking program.

• Security audit, e.g. examine log files, etc.; no sensitive information should be left in raw text/human-readable form in any log file.
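The automatic-logout item can be sketched as a session store that expires idle sessions; the 15-minute limit and the Session class below are hypothetical:

```python
IDLE_LIMIT_MINUTES = 15  # hypothetical inactivity limit

class Session:
    """Toy session that logs the user out after N idle minutes."""
    def __init__(self, now):
        self.last_activity = now

    def request(self, now):
        """Return True if still logged in; expire idle sessions so the
        caller can show positive logout feedback to the user."""
        if now - self.last_activity > IDLE_LIMIT_MINUTES * 60:
            return False
        self.last_activity = now
        return True

s = Session(now=0)
assert s.request(now=60)                # active within the window
assert not s.request(now=60 + 16 * 60)  # idle past the limit: logged out
```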

Regression Testing

Regression testing ensures that during the lifetime of the application, any fixes do not break other parts of the application. This type of testing typically involves running all the other tests, or a relevant subset of those tests. A multidisciplinary team consisting of developers, administrators and load testers would normally perform this test. If UI changes were made, then UI designers and end users need to be involved. The regression tests must also be kept up to date with planned changes in the application. As the application evolves, so must the tests.

External and Connectivity Testing

External testing deals with checking the effect of external factors on the application. Examples of external factors are the web server, the database server, the browser, network connectivity issues, etc. Examples of external testing are:

• Database unavailability test (e.g., is login or further access to the application permitted should the database go into a scheduled maintenance window?).

• Database error detection and recovery test (e.g., simulate loss of database connectivity; the application should detect this and report an error accordingly). The application should be able to recover without human intervention when the database returns.

• Database authentication test (check access privileges to the database).

• Connection pooling test (ensure that database connections are used sparingly and will not run out under load).

• Web page authentication test.

• Browser compatibility tests – for example, does the application behave the same way on multiple browsers, does the JavaScript work the same way, etc.?
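The database error detection and recovery test implies retry logic of roughly the following shape inside the application. This is a minimal sketch under assumptions: `DatabaseUnavailable` is a hypothetical exception standing in for whatever the data layer raises on connection loss:

```python
import time

class DatabaseUnavailable(Exception):
    """Hypothetical exception raised by the data layer when the database
    cannot be reached."""

def with_db_retry(operation, retries=3, delay=0.1):
    """Run a database operation, retrying while the database is unavailable.
    The application recovers without human intervention once the database
    returns; an error is reported only if every retry fails."""
    last_error = None
    for _ in range(retries):
        try:
            return operation()
        except DatabaseUnavailable as exc:
            last_error = exc
            time.sleep(delay)   # back off before the next attempt
    raise last_error
```

The test then simulates a database outage (e.g., by dropping connectivity) and checks that the application reports the error, and that once connectivity is restored, the next attempts succeed with no operator action.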

Connectivity testing involves determining if the servers and clients behave appropriately under varying circumstances. This testing is difficult to accomplish from a server perspective, since it is expected that the servers will be operating with standby power supplies as well as being in a highly available configuration. Thus the server tests need not be run using a power-off scenario; simply removing the network connection to the PC may be sufficient. There are two aspects of connectivity testing: the voluntary, where a user actively interacts with the system in an unexpected way, and the involuntary, where the system acts in an unpredictable manner.


Examples of involuntary connectivity checking include the following:

• Forcing the browser to terminate prematurely during a page load, using a task manager to kill the browser, or hitting the ESC key and reloading or revisiting the same page via a bookmark. The testing should cover both a small delay (< 10 seconds) in reinstating the browser and a long delay (> 10 minutes). In the latter case the user should not be able to connect back to the application without being redirected to the login page.

• Simulation of hub failure between the PC and the web server. This can be simulated by removing the network cable from the PC, attempting to visit a page, aborting the visit, and then reconnecting the cable. The test should use two time delays: the first should be under 15 seconds, and the second around 15 minutes before reconnecting. After reconnecting, attempt to reload the previous page. The user should be able to continue with the session unless a specified timeout has occurred, in which case the user should be redirected to a login page.

• Web server on/off test. Shut down the web server, then restart it. The user should be able to connect back to the application without being redirected to the login page; this will prove the statelessness of individual pages. Note that the shutdown is only for the web server. Do not attempt this with an application server, as that is a separate test.

• Database server on/off test. Shut down the database server and restart it. The user should be able to connect back to the application without being redirected to the login page. It may be that a single transaction needs to be redone; the application should detect this and react accordingly.

• Application server on/off test. Shut down the application server and restart it. There are two possible outcomes, depending on how session management is implemented. The first outcome is that the application redirects to an error page indicating loss of connectivity, and the user is requested to log in and retry. The second outcome is that the application continues normally, since no session information was lost because it was held in a persistent state that transcends application server restarts.

Examples of voluntary connectivity testing include:

• Quit from session without the user saving state.

• Quit from session with the user saving state.

• Server-forced quit from session due to inactivity.

• Server-forced quit from session due to a server problem.

• Client-forced quit from session due to visiting another site in the middle of a session for a brief period of time.

• Client-forced quit from session due to visiting another site/application for an extended period of time.

• Client-forced quit due to the browser crashing.


Browser compatibility also falls under the external or connectivity-testing domain. The number of potential customers who may experience viewing problems because of an incompatibility in browsers could be quite high. In general, large-audience web applications should work consistently with the following browsers unless specifically excluded by the application requirements:

• Netscape Navigator 4.x

• Internet Explorer 4.x / 5.x

• AOL

The items that should be checked carefully are layout differences, JavaScript behavior, image behavior, etc.

Extended Session Testing

Extended session testing should involve checking some or all of the following items:

• Remaining in a session for an extended period of time while clicking items to navigate the screen. The session must not be terminated by the server except in the case of a deliberate logout initiated by the user.

• Remaining on a single page for an extended length of time. The session should be automatically terminated, and the next click by the user should take the user to a page indicating why the session was terminated; the option to log back into the system should be present. The page may have a timed redirect associated with it, and if so, a page indicating a timed-out session should be displayed. The following must be tested:

• The user's session should have been saved and may optionally be restored on re-login.

• The user's state must reflect the last complete action the user performed.

• Leaving the application pages to visit another site or application and then returning to the original application via a bookmark or the back button should result in a restoration of state, and the application should continue as if the person had not left.
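The automatic-termination behavior above rests on a server-side idle clock. A minimal sketch of that logic, where the `Session` class and the 30-minute default are illustrative assumptions rather than any particular server's implementation:

```python
import time

# Illustrative default: seconds of inactivity before the server ends a session.
SESSION_TIMEOUT = 30 * 60

class Session:
    """Minimal server-side session with an inactivity timeout."""

    def __init__(self, now=None):
        self.last_activity = now if now is not None else time.time()

    def touch(self, now=None):
        """Record user activity (a click), resetting the idle clock."""
        self.last_activity = now if now is not None else time.time()

    def expired(self, now=None, timeout=SESSION_TIMEOUT):
        """True once the user has been idle longer than the timeout; the next
        request should then redirect to a page explaining the termination."""
        now = now if now is not None else time.time()
        return (now - self.last_activity) > timeout
```

Injecting the clock (`now`) rather than always calling `time.time()` is what makes the timeout behavior testable without actually waiting 30 minutes, which is exactly what an extended session test would otherwise require.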

Power Hit/Reboot/Other Cycle Testing

Power hit/cycle testing involves determining if the servers and clients act appropriately during the recovery process. The testing will be difficult to perform from a server perspective, since it is expected that the servers will be operating with standby power supplies as well as being in a highly available configuration (i.e., hot spares, RAID, application failover, etc.). However, the clients can be tested this way by removing power from the PCs. Below are the tests performed:

• Client power off/on test.


• Client hub power off/on test.

• Client network connection removal/reinsertion test.

• Server power off/on test.

• Server hub power off/on test.

• Server network connection removal/reinsertion test.

Business Logic Testing

All the tests listed previously may pass with flying colors, but if the business logic is not encapsulated by the application, then the application has not met its requirements. Business requirements vary in nature and are thus difficult to generalize.

The best way to quantify the business is to start with an existing business process model and extrapolate. Specific requirements for the web application can be derived from the model, and may or may not involve significant back-end mainframe or minicomputer technologies. The key is that the web application and middleware logic meet the overall business requirements. Representatives from the business team must participate in defining these requirements.

Part of business logic testing is validating that certain sequences do in fact generate the correct outputs back to the application user, as well as to any back-end users, be they human or machine. This is the black-box approach, and it works very well for models that follow strict input/output process models. For those models in which multiple outputs occur at various points in the application, a more sophisticated approach must be used. Part of that approach is to use tools to validate data at various intermediate points during the application run.

Electronic equipment such as televisions, cell phones, etc., has test points to which probes may be attached and measurements taken to determine if the equipment is behaving within specific tolerance levels. The same approach may be taken for software, and it will be a valuable aid in determining application success. This is where the majority of the test team's time will be taken: automated tools may be purchased to augment load testing and the like, but the tools to test specific points in the application must be written, or the data examined by hand.
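The test-point analogy can be carried into code directly: named probes record intermediate values during a run so they can later be checked against expected tolerances. A minimal sketch; the `probe` helper and the example order calculation are hypothetical illustrations, not part of any real application:

```python
# Recorded intermediate values, keyed by test-point name.
PROBES = {}

def probe(name, value):
    """Record an intermediate value at a named test point and pass it through."""
    PROBES.setdefault(name, []).append(value)
    return value

def process_order(quantity, unit_price, tax_rate):
    """Example business calculation instrumented with test points."""
    subtotal = probe("subtotal", quantity * unit_price)
    tax = probe("tax", subtotal * tax_rate)
    return probe("total", subtotal + tax)
```

Because `probe` simply passes the value through, the instrumentation does not change the calculation; a test can run the whole sequence black-box style and then inspect `PROBES` to verify each intermediate point, which is the "multiple outputs at various points" case the black-box approach alone cannot cover.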

The test team is responsible for constructing a valid set of tests, scripts, and tools to facilitate testing. These will become the core tools for performing regression testing as the application's capabilities increase over time.
