
function test automation methodology


8/8/2019 function test automation methodology

http://slidepdf.com/reader/full/function-test-automation-methodology 1/34

1.  Responsible Parties

Prepared By    Company/Group    Contact Information
Greg Annen                      [email protected]

2.  Document Control

Version  Date          Description                                 Approval and Date
1.0      Dec 2005      Draft of Methodology document               GJA | 12/28/05
1.0      Jan 6, 2006   Content updates, all sections               GJA | 01/06/06
1.0      Jan 13, 2006  Content changes to sections: TEST LEVELS,   GJA | 01/13/06
                       DEVELOPING A FUNCTIONAL TEST STRATEGY,
                       CREATING A FUNCTIONAL TEST PLAN,
                       PERFORMING APPLICATION REQUIREMENTS
                       ANALYSIS, WRITING TEST CASES FOR
                       AUTOMATION, WRITING TEST SCRIPTS,
                       TOOL-SPECIFIC FRAMEWORKS, and FTA
                       METHODOLOGY DELIVERY PROCESS
1.1      May 1, 2008   Highlighted some key points                 GJA | 05/01/08
1.2      June 3, 2008  Added to Terms and Definitions              GJA | 06/03/08


3.  Purpose

This document provides an overview of the concepts, processes, and terms encountered in developing a comprehensive methodology for functional test automation. It is a living document, structured to allow collaborative input as knowledge is gathered in the field and refined by test solution architects.

4.  Objectives of Functional Test Automation

Functional testing is a process to ensure that applications work as they should -- that they do what knowledgeable users expect them to do. Functional tests:

- capture user requirements for business processes in a meaningful way
- give both users and developers confidence that business processes meet those requirements
- enable QA teams to verify that the software enabling those processes is ready for release

Simply stated, functional tests tell whether the completed application is doing the right things.

Today's enterprises must conduct thorough functional testing of their applications to ensure that all business processes are fully available to users. Rigorous functional testing is also critical to successful application development and deployment. This climate presents a challenge for developers, QA teams, and IT managers: speed up testing processes and increase accuracy and completeness, without exceeding already tight budgets.

Why automate? Manual testing processes take too long to execute, provide incomplete functional test coverage, and introduce higher risk of manual errors and results that can't be reproduced. In practice, automated testing means programming the current manual testing process to run on its own. At a minimum, such a process includes:

- detailed test cases, including predictable, expected results, developed from business process functional specifications and application design documentation
- a standalone test environment, including a test database that can be restored to a known state, so that all test cases can be repeated each time modifications are made to the application

Automation is the key to improving the speed, accuracy, and flexibility of the software testing process, enabling companies to find and fix more defects earlier in the SDLC. By automating key elements of functional testing, companies can meet aggressive release schedules, test more thoroughly and reliably, verify that business processes function as required, and generate increased revenue and customer satisfaction.

5.  Introduction to Functional Test Automation


Automation efforts must focus on critical business processes, complex applications, and the use cases that describe their functionality. Automated testing has a positive impact when an application:

- requires multiple or frequent builds/patches/fixes
- needs to be tested on numerous hardware or software configurations
- deals with large or complex sets of data
- supports many concurrent users

In addition, if repetitive tasks such as data loading and system configuration are involved, or if the application needs to meet a specific service-level agreement (SLA), automation makes economic sense.

A functional testing tool is developed or purchased to support the automation effort. The typical use of a test tool is to automate regression tests: a database of detailed, repeatable test cases that are run each time there is a change to the application under test, to ensure that the change does not produce unintended consequences. Within the tool, test steps are captured in the form of scripts. These can be individual scripts that test specific aspects of application functionality, or they can be functions that are reused as callable test steps.

Like the application under test, an automated test script is a program. Test automation can be thought of as writing software to test other software. Automated testing tools are actually development environments specialized for creating testing programs. Thus, to be effective, all automated test script development must be subject to the same rules and standards that apply to every software development project. Making effective use of any automated test tool requires at least one trained, technical person -- in other words, a developer. Using record-and-playback techniques to generate scripts is not effective for creating repeatable, maintainable tests; such techniques are often just an easy way to create throwaway test suites.

5.1.  Terms and Definitions

5.1.1.  Test Case

Wikipedia defines a test case as "... a set of conditions or variables under which a tester will determine if a requirement or use case upon an application is partially or fully satisfied. In order to fully test that all the requirements of an application are met, there must be at least one test case for each requirement. ... Written test cases should include a description of the functionality to be tested, and the preparation required to ensure that the test can be conducted ... there is a known input and an expected output, which is worked out before the test is executed."
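The shape of such a test case can be sketched as a simple data structure. This is a tool-agnostic illustration in Python, not the document's QTP/WinRunner format; the tax-calculation requirement and all names here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """A written test case: description, preparation, known input, expected output."""
    case_id: str
    description: str   # the functionality to be tested
    preparation: list  # setup required before the test can be conducted
    input_data: dict   # the known input
    expected: dict     # the expected output, worked out before execution

def execute(case, function_under_test):
    """Run the function under test and compare actual output to expected output."""
    actual = function_under_test(case.input_data)
    return actual == case.expected

# One test case per requirement; this requirement (10% tax) is invented for the example.
case = TestCase(
    case_id="TC-001",
    description="Order total includes 10% tax",
    preparation=["load product catalog"],
    input_data={"subtotal": 100.0},
    expected={"total": 110.0},
)

result = execute(case, lambda d: {"total": round(d["subtotal"] * 1.10, 2)})
```

The point of the structure is that the expected output exists before the test runs, so pass/fail is a mechanical comparison rather than a judgment call.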

5.1.2.  Test Script


Test automation efforts can fail by trying to do too much. Every automation tool has its learning curve and specific usage requirements, so it pays to start simple and build on each small success. Build acceptance tests, for example, are excellent candidates for initial automation efforts: they are run frequently and their aim is breadth of functionality, not depth. First, get one test to run to completion. Then, use this test as a model to build up your test suite. Finally, verify that all tests in the suite run to completion within the test execution framework.

One critical goal of automation is to develop robust test suites in which all test steps in a test case can be executed without tester intervention, while detailed information is captured about errors encountered in the application under test.

5.3.1.  Developing Test Cases for Automation

For any level of testing, it is first necessary to define the requirements and objectives of a test before writing any test plans, test cases, or test scripts. The next step is to define the actions and application components to be included in the test cases and, ultimately, in the automated test script. Developing test cases for automation is a discipline: the automation framework, test scripts, and test data typically reside in separate repositories, which are linked together by a framework during execution of the test steps. The test case itself deals with application objects (windows and controls), not specific data; it can also include conditional steps if required by the test objectives.

5.3.2.  Handling Application Errors and Exceptions

A common problem that prevents truly unattended testing is the occurrence of cascading failures. When one test fails, the application is left in an unexpected state: for example, an unexpected dialog window pops up displaying an error message, and subsequent test steps can't be run while the error dialog is present. An error recovery system is the solution to this problem. It automatically records the error and restores the application and test environment to a known "base state", allowing successive tests to run reliably. Cascading failures are avoided and unattended testing executes to completion. After each test case, the recovery system verifies that the application is in the expected base state; if not, it resets it.
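A minimal sketch of such a recovery system, assuming a hypothetical application object with `at_base_state`/`reset_to_base_state` methods (commercial tools provide their own recovery-scenario mechanisms; this only illustrates the control flow):

```python
class RecoverySystem:
    """After each test case: record any error, then restore the known base state."""
    def __init__(self, app):
        self.app = app
        self.errors = []  # detailed information captured about application errors

    def run(self, name, test_fn):
        try:
            test_fn(self.app)
            return "pass"
        except Exception as exc:
            self.errors.append((name, repr(exc)))  # record the error for diagnosis
            return "fail"
        finally:
            # verify the expected base state; if not there, reset it so the
            # next test is not blocked by the failed one (no cascading failures)
            if not self.app.at_base_state():
                self.app.reset_to_base_state()

class FakeApp:
    """Stand-in for the application under test."""
    def __init__(self): self.state = "base"
    def at_base_state(self): return self.state == "base"
    def reset_to_base_state(self): self.state = "base"

app = FakeApp()
rec = RecoverySystem(app)

def bad_test(a):
    a.state = "error-dialog"        # simulate an unexpected dialog window
    raise RuntimeError("unexpected dialog")

def good_test(a):
    pass                            # runs cleanly, leaves the base state intact

r1 = rec.run("tc1", bad_test)
r2 = rec.run("tc2", good_test)      # runs reliably because tc1 was recovered
```

Without the `finally` block, `tc2` would start in the `error-dialog` state left behind by `tc1`, which is exactly the cascading failure described above.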

5.4.  Roles and Responsibilities

Matrix of typical QA tasks and owners.


6.  Developing A Functional Test Strategy

An effective functional testing strategy optimizes the QA effort to minimize risk. Note that no matter how much testing you invest, some risk always remains; the decision to release the software is therefore directly tied to the acceptable level of risk.

To implement a risk-based strategy, determine the minimum testing effort that must be invested in order to maximize risk reduction. The basic methodology can be described in the following steps.

1.  Identify business-critical functionalities that could prevent a user from using the software if a defect were encountered. Such a defect would be of high severity: for example, a login page for a Web application that does not work. Efficient ways to gather this list of functionalities include surveying the user community, asking a business domain expert, and assembling statistics from logs of a previous version of the application. Since risk increases with the frequency of use, the most used features will be the riskiest ones.

2.  Design and then assign test cases to each of the functionalities listed in Step 1.

3.  Size (in hours or minutes) the QA effort required to run the test cases identified in Step 2.

4.  Sort the test cases in ascending order of effort, so the test case with the minimum effort comes first.

5.  Start executing test cases in the order established in Step 4 until you run out of time.

Ideally, you always want to lower the risk in the shortest period of time in order to release versions more aggressively. One way to shorten your QA cycle yet retain the same confidence level in the software is to automate the minimum QA effort with functional testing tools. Let's say I want to implement two new features in an application. I have the choice of implementing the two features in the same version, or implementing each feature in two successive versions. From a QA standpoint, having two successive versions with the same confidence level has a huge impact on the workload unless you automate the test cases listed in Step 4 of the above methodology.


10.  Managing Test Data

An important goal of functional testing is to allow a test to be repeated with the same result, yet varied to allow problem diagnosis. Without this, it is hard to communicate problems to coders, and it can become difficult to have confidence in the QA team's results. Good data allows detailed diagnosis, effective reporting, and repeatable test steps. It fosters confidence in the results obtained from test execution and iteration.

10.1.1.  Classification of Data Types

In the process of testing a system, many references are made to "The Data" or "Data Problems". Although it is perhaps simpler to discuss data in these terms, it is useful to be able to classify the data according to the way it is used. The following broad categories allow data to be handled and discussed more easily.

- Environmental data: tells the system about its technical environment. It includes communications addresses, directory trees and paths, and environment variables. For example, the current date and time can be seen as environmental data.

- Setup data: communicates the business rules. Typically, setup data causes different functionality to apply to otherwise similar data.

- Input data: the information entered during daily operation of business functions. Accounts, products, orders, actions, and documents can all be input data. For the purposes of testing, this category can itself be split into two types:

  - Fixed input data is available before the start of the test and forms a major component of the test conditions.
  - Consumable input data represents the test input.

It is also revealing to categorize the data used in a business process as:

- Transitional data: exists only within an application, during processing of input data. Transitional data is not seen outside the system, but its state can be inferred from actions that the system has taken. Typically held in internal system variables, it is temporary and is lost at the end of processing.

- Output data: the end result of processing input data and events. It generally has a correspondence with the input data, and includes not only files, transmissions, reports, and database updates, but can also include test measurements. A subset of the output data is generally compared with the expected results at the end of test execution. As such, it does not directly influence the quality of the tests but is used to evaluate pass/fail criteria.

11.  Evaluating Test Tools


What should be "under the hood"?

- "Scriptless" representation of automated tests: testers should be able to visualize each step in the business process, and view and edit test cases intuitively.

- Integrated data tables: testers should have the ability to pump large volumes of data through the system quickly, manipulate the data sets, perform calculations, and quickly create hundreds of test iterations and permutations with minimal effort.

- Clear, concise reporting: reports should provide specifics about where application failures occurred and what test data was used; provide application screen shots for every step to highlight any discrepancies; and provide detailed explanations of each verification point's passes and failures.

- Integration with requirements coverage and defect management tools.

12.  Writing Test Scripts

Design test scripts for automation to be modular. Instead of using one test script to perform multiple functions, break the tests into separate functions. This can help focus on the business process expressed by the functionality being tested.

Design test scripts to be generic in terms of process and repeatable in terms of data. Read test data from a separate source: keep the scripts free of test data so that when you do have to change the data, you only have to maintain the data, not the scripts.

13.  Developing Automation Frameworks

Test automation has undergone several stages of evolution, both in the development of marketable test tool technologies and in the development of test automation processes and frameworks within individual QA organizations. The typical path is described below.

- Record and Playback: monitoring an active user session, recording user inputs related to objects encountered in the user interface, and storing all steps and input data in a procedural script. This method is useful for learning how to use a test tool, but the scripts produced are difficult to maintain after the application under test changes, and they do not produce reliable, consistent results.

- Test Script Modularity: creating small, independent scripts that represent modules, sections, and functions of the application under test, then combining them in a hierarchical fashion to construct larger tests. This represents the first step toward creating reusable test assets.

- Test Library Architecture: dividing the application under test into procedures and functions -- also known as objects and methods, depending on your implementation language -- instead of a series of unique scripts. This requires the creation of library files that represent modules, sections, and functions of the application under test. These files, often referred to as function libraries, are then called directly from within test case scripts. Thus, as elements of the


application change, only the common library components that reference them must be changed, not multiple test scripts with hard-coded references, which might be difficult to locate and validate.

- Data-Driven Testing: reading input and output values from data files or tables into the variables used in recorded or manually coded test scripts. These scripts include navigation through the application and logging of test status. This abstraction of data from the test script logic allows testers with limited knowledge of the test tool to focus on developing, executing, and maintaining larger and more complex sets of test data. This increase in organizational efficiency fosters enhanced test coverage with shorter test cycles.

- Keyword-Driven Testing: including test step functionality in the data-driven process by using data tables and keywords to trigger test events. Test steps are expressed as Object, Action, and Expected Result. The difference between data-driven and keyword-driven testing is that each line of data in a keyword script includes a reference that tells the framework what to do with the test data on that line. The keyword attached to the test step generally maps to a call to a library function using parameters read in from the data file or table. One major benefit is the improved maintainability of the test scripts: by fully modularizing the automation of each step, it's easier to accommodate any user interface changes in the application under test.

As noted earlier, one of the challenges facing test automation is to speed up testing processes while increasing the accuracy and completeness of tests. The evolution of test automation frameworks has been driven by accepting this challenge.

13.1.  Data-Driven Frameworks

This type of functional test automation framework abstracts the data layer from the test script logic. Ideally, only the data used as inputs to test objects and outputs from test events would need to change from one iteration to the next. The types of test scripts used in this architecture are described below.

13.1.1.  Driver Script

- Performs initialization of the test environment (as required)
- Calls each Test Case Script in the order specified by the Test Plan
- Controls the flow of test set execution

13.1.2.  Test Case Script

- Executes application test case logic using Business Component Function scripts
- Loads test data inputs (function parameters) from data files and tables


- Evaluates the actual result based on the expected result loaded from data files and tables

13.1.3.  Business Component Function Script

- Exercises specific business process functions within an application
- Issues a return code to indicate the result or exception
- Uses parameter (input) data derived from data files and tables

13.1.4.  Common Subroutine Function Script

- Performs application-specific tasks required by two or more business component functions
- Issues a return code to indicate the result or exception
- Uses parameter (input) data derived from data files and tables

13.1.5.  User-Defined Function Script

- Contains logic for generic, application-specific, and screen-access functions
- Can include code for test environment initialization, debugging, and results logging

In this architectural model, the "Business Component" and "Common Subroutine" function scripts invoke "User-Defined Functions" to perform navigation. The "Test Case" script calls these two scripts, and the "Driver" script calls the "Test Case" script as many times as required to execute test cases of this kind. In each case, the only change between iterations is in the data contained in the files that are read and processed by the "Business Function" and "Subroutine" scripts.

13.2.  Keyword-driven Frameworks

This type of framework builds on the data-driven framework by including business component functionality in the data tables. Keywords are used within each test step to trigger specific actions performed on application objects.

13.2.1.  Driver Script

- Governs test execution workflow
- Performs initialization of the test environment (as required)


- Calls the application-specific Action ("Controller") Script, passing to it the names of the business process test cases. These test cases can be stored in spreadsheets, delimited text files, or database records.

13.2.2.  Action Script

- Acts as the "controller" for test case execution
- Reads and processes the business process test case name received from the Driver Script
- Matches on keywords contained in the input dataset
- Builds a list of parameters from values included with the test data record
- Calls "Utility" scripts associated with the keywords, passing the created list of parameters

13.2.3.  Utility Scripts

- Process the list of input parameters received from the Action Script
- Perform specific tasks (e.g., press a key or button, enter data, verify data), calling "User-Defined Functions" as required
- Record any errors encountered during test case execution to a Test Report (e.g., data sheet, table, test tool UI)
- Return to the Action Script, passing a result code for processing status (e.g., pass, fail, incomplete, error)

13.2.4.  User-Defined Function Libraries

- Contain code for general and application-specific functions
- May be called by any of the above script types in order to perform specific tasks
- Can contain business rules

13.3.  Tool-specific Frameworks

13.3.1.  Mercury Quality Center with TestDirector and QuickTest Professional

Business Process Testing uses a role-based model, allowing collaboration between non-technical Subject Matter Experts (SMEs) and QA Engineers versed in QuickTest Pro.


Business process tests are composed of business components. The information in the business component's outer layer -- the description, status, and implementation requirements, together with the steps that make up the component -- is defined in Quality Center by the SME, who then runs and analyzes the associated tests and test sets. A QuickTest Engineer populates a shared repository with the different objects in the application being tested and encapsulates all activities and scripted steps into operations, essentially using function libraries in a keyword-based automation framework.

When QuickTest Professional is connected to a Quality Center project with Business Process Testing support, the objects defined by the QuickTest Engineer in the object repository are available for use by the SME. In addition, all the business component information is visible in QuickTest. This integration and visibility between the two applications enables the SME to implement the testing steps for the business components that are defined in business process tests, and also enables the QuickTest Engineer to effectively maintain the set of objects in the object repository and the operations in the function libraries.

In addition to creating and maintaining the object repository, the QuickTest Engineer defines a set of elements that comprise an Application Area, created in QuickTest Professional and containing all of the settings and resources required to create the content of a business component. These include all the objects from the application under test contained in the shared object repository, and the user-defined operations contained in function library files.

Each business component can be associated with a specific Application Area, or can share an Application Area with other components. Application Area settings are automatically inherited by the business components that are based on that Application Area.

An Application Area includes:

- Resources: resource settings include associated library files and the shared object repository.

- Add-Ins: the add-ins associated with the first business component in a business process test (inherited from the Application Area used by the component) are automatically loaded in QuickTest Professional when Quality Center runs the test. Quality Center assumes that the add-ins are required for all the business components in the same business process test.


- Windows-Based Applications: if you are creating a business component to test a Windows-based application, you must specify the application on which the business component can run. Other environments are supported by the appropriate QuickTest Add-In.

- Recovery Scenarios: activated during execution of a business component test when an unexpected event occurs.

The figure below illustrates the workflow (Roles and Activities) encountered in Business Process Testing with Mercury Quality Center integrated with the QuickTest Professional automated functional testing tool.

[Figure: Business Process Testing workflow, showing Roles and Activities across Quality Center and QuickTest Professional]

13.3.2.  Desktop Certification Automation

In an application certification run, the steps for dealing with the application under test include:

1. Install Application
2. Reboot (Optional)
3. Post Install Step
4. Analyze Workstation
5. Test and Leave Application
6. Perform Tests (Go To Top)
7. Close Application
8. Scan
9. Uninstall Application
10. Analyze Workstation
11. Perform Interoperability Tests
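A certification run like this is just an ordered pipeline of steps, one of which is optional. A minimal sketch, with no-op callables standing in for the real install/scan/analyze tooling:

```python
def run_certification(steps, reboot=False):
    """Execute certification steps in order; the Reboot step is optional."""
    executed = []
    for name, fn in steps:
        if name == "Reboot" and not reboot:
            continue  # skip the optional reboot when not requested
        fn()          # each step is a callable wrapping the real tooling
        executed.append(name)
    return executed

noop = lambda: None  # placeholder for real step implementations
steps = [
    ("Install Application", noop),
    ("Reboot", noop),
    ("Post Install Step", noop),
    ("Analyze Workstation", noop),
    ("Test and Leave Application", noop),
    ("Perform Tests", noop),
    ("Close Application", noop),
    ("Scan", noop),
    ("Uninstall Application", noop),
    ("Analyze Workstation", noop),
    ("Perform Interoperability Tests", noop),
]
executed = run_certification(steps, reboot=False)
```

Modeling the run as data (an ordered list of named callables) makes it easy to log, resume, or reorder the certification sequence without touching step implementations.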

13.4.  Further Evolution

Implementing test automation is most often an evolutionary process, making it easier for a QA organization to assimilate the necessary learning curve. Some of the types of automation development, current and future, are described below.

Ad-Hoc

- Scripting developed in reactionary mode to test a single issue or fix
- Test case steps are part of each Action script: high maintenance, low reusability
- Contains some of the necessary data inputs stored in the QTP script's datasheet, but not a full data-driven implementation

Data-Driven

- Scripts are an assembly of function calls
- Data for test cases is read in from an external source (e.g., an Excel spreadsheet)
- Results can be captured externally per script execution (e.g., spreadsheet, database, TestDirector)

Keyword-Driven


- Test cases are expressed as a sequence of keyword-prompted actions
- A Driver script runs Actions, which call functions as prompted by keywords
- No scripting knowledge is necessary for developing and maintaining test cases (unless new functionality is required)

Model-Driven

- Descriptive programming is used to respond to dynamic applications (e.g., websites)
- This is actually a method which can be used within other solution types
- Regular expressions are used to define objects
- Custom functions are used to enhance workflow capabilities

3rd-Party: Quality Center Integrated with QuickTest Pro and Business Process Testing

- Similar to keyword-driven, but controlled using the Mercury QC database
- Begins with high-level test requirements:
  - Business Requirements defined
  - Application Areas (shared resources) defined
  - Business Components defined and grouped under Application Areas
  - Test steps defined
- Tests can be defined as Scripted Components (QTP scripts with Expert Mode)
- Business Process Tests and Scripted Components are cataloged under Test Plan
- Test Runs are organized from Test Plan components and executed from Test Lab
- Test Runs can be scheduled and/or executed on remote test machines (with QTP)
- Defects can be generated automatically or entered manually per incident


- Dashboard available for interactive status monitoring

Intelligent Query-Driven

- Agile
- Object-oriented
- Constructed as a layered framework
- Test data is compiled as required using data-mining techniques

Each type of framework has its own unique advantages and disadvantages.


Comparison of Automation Types: Test Coverage and Maintenance Level

Test Coverage: in functional testing, a measurement of the extent to which the business requirements of an application are verified during test execution.

Maintenance Level: the amount of effort (time and staff) required to keep test assets up to date with changes and additions contained in releases of the applications under test. It includes tasks such as creating and updating test cases, test scripts, function libraries, and object repositories, and debugging test code.


Not every type is required in this progression. The implementation path depends on such factors as project timelines, resource allocation, tool selection, and QA organization maturity level.

The most significant ROI is provided by the automation development model which has the greatest degree of test coverage with the least amount of maintenance.


Functional Test Execution

13.5.  Business Components

13.6.  Test Plans

13.7.  Test Sets (Test Lab)

14.  Functional Test Results Analysis

15.  Defect Tracking and Resolution

16.  Status Reporting and Test Metrics


17.  FTA Methodology Delivery Process

Discover 

Conduct Discovery Session(s)

Establish Functional Test Goals

Define Application(s) Under Test

Review Requirements, Design Specifications, and Manual Functional Tests

Identify Business Processes to Automate

Identify Test Resources (Tools, Staff, Skills, Environments)

Create Test Plan

Develop Detailed Project Plan

Develop 

Exercise AUT

Build Test Data

Create Business Component Tests

Define Test Plan Components

Customize Test Scripts and Function Libraries

Create Test Sets and Parameters

Dry-Run Test Sets (Test Lab)

Execute 

Verify Test Readiness (Build Complete?)

Validate Test Data


Execute Test Cycles

Review Functional Test Results

Identify Defects

Analyze 

Analyze Defects

Submit Defects to Development for Resolution

Retest Cycle for Closure

Validate Results with Stakeholders

Report 

Perform Test Coverage Analysis

Compile Test Execution Metrics

Present Report(s) to Stakeholders

Transition 

Knowledge Transfer 

Framework Support and Maintenance (Ongoing)

-  Each section includes Key Terms, Roles & Responsibilities, and Deliverables
-  Effort (number of tasks) diminishes as the method progresses
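The six phases above can be represented as ordered task lists, which also makes the "effort diminishes" observation checkable. A sketch with task names abbreviated from the outline above:

```python
# FTA delivery phases and their tasks, abbreviated from the outline above.
fta_phases = {
    "Discover": [
        "Conduct discovery session(s)", "Establish functional test goals",
        "Define application(s) under test", "Review requirements and manual tests",
        "Identify business processes to automate", "Identify test resources",
        "Create test plan", "Develop detailed project plan",
    ],
    "Develop": [
        "Exercise AUT", "Build test data", "Create business component tests",
        "Define test plan components", "Customize scripts and function libraries",
        "Create test sets and parameters", "Dry-run test sets",
    ],
    "Execute": [
        "Verify test readiness", "Validate test data", "Execute test cycles",
        "Review functional test results", "Identify defects",
    ],
    "Analyze": [
        "Analyze defects", "Submit defects for resolution",
        "Retest cycle for closure", "Validate results with stakeholders",
    ],
    "Report": [
        "Perform test coverage analysis", "Compile test execution metrics",
        "Present report(s) to stakeholders",
    ],
    "Transition": [
        "Knowledge transfer", "Framework support and maintenance",
    ],
}

# Effort (number of tasks) diminishes as the method progresses.
task_counts = [len(tasks) for tasks in fta_phases.values()]
assert task_counts == sorted(task_counts, reverse=True)
```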


 

18.  Resources


18.1.  Function Libraries

18.1.1.  Mercury Quality Center with TestDirector 

18.1.2.  Mercury TestDirector

18.1.3.  Windows Script Host VBScript


18.2.  Templates

18.2.1.  Test case


18.2.2.  WinRunner script (note: structure applies to all coding, e.g., QTP)

Generic script template

Every script begins with a standard header that defines what the script does, any required parameters, special notes, and the return value, plus a change log. Each change to the script should be logged in the script header:

    #########################################################
    # Script
    # --------------------------------------------------
    # Word_File_Close
    #
    # Description
    # --------------------------------------------------
    # This script verifies that Word can close a file
    #
    # Parameters
    # --------------------------------------------------
    # No Parameters
    #
    # Notes
    # --------------------------------------------------
    # Microsoft Office scripts are accompanied by specific
    # application files containing application macros
    # invoked by WinRunner scripts.
    #
    # Return Value
    # --------------------------------------------------
    # Returns Status Code
    #
    # Author         Date      Change
    # --------------------------------------------------
    # Lars Nargren   8/29/01   Creation
    #########################################################

The body of the script then follows a fixed sequence:

1. Assign the test_id variable, which the results-reporting functions use to identify the test case.

2. Define any variables to be used by the script. This allows easy maintenance by keeping all script data in one place.

3. Start the log entry in the detail results file for this test case.

4. Perform the action. In this example a new document is opened using the appropriate Word macro. The newly opened document is then closed (the objective of the test in this case is to verify that Word can close a document).

5. Perform the verification. First, the verification variables (standard across all test scripts) are initialized. Then Microsoft Word is checked to see if there are any open documents. If there are, the test status is set to FAIL, and the actual-result message is specified to describe the failure:

    rc = set_window ("Microsoft Word - No Open Documents", 10);
    if (rc != E_OK)
    {
        status = FAIL;
        act_res = "Document was not closed";
    }

6. Write the test case results to the detail and summary files, return the application to its initial state, and return the test status:

    # Log results to log files
    test_case_result_log (test_id, status, act_res);

    # Log results to WR/TD report
    tl_step (test_id, status, act_res);

    # Return application to initial state
    word_macro_run (WORD_QUIT);

    # Return test status
    treturn (status);
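The same skeleton carries over to any tool. Below is a hedged Python analogue of the template's flow; the Word-automation calls are stubs standing in for the real WinRunner macro hooks, and the function names are illustrative only:

```python
PASS, FAIL = "PASS", "FAIL"

def word_macro_run(macro):
    """Stub standing in for the Word macro hook used by the real scripts."""
    return True

def word_has_open_documents():
    """Stub for the verification probe; the real check is a window lookup."""
    return False

def run_word_file_close_test():
    # 1. Identify the test case for the results-reporting functions.
    test_id = "Word_File_Close"

    # 2. Define script data in one place for easy maintenance.
    status, act_res = PASS, "Document closed as expected"

    # 3-4. Perform the action: open a document, then close it.
    word_macro_run("WORD_FILE_NEW")
    word_macro_run("WORD_FILE_CLOSE")

    # 5. Verify: fail if any document remains open.
    if word_has_open_documents():
        status = FAIL
        act_res = "Document was not closed"

    # 6. Log results, restore the initial state, and return the status.
    print(test_id, status, act_res)
    word_macro_run("WORD_QUIT")
    return status
```

Keeping every script to this identify / act / verify / log / restore shape is what makes large suites maintainable regardless of the tool.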


18.2.3.  DDT Template

Sample 2: operations, functions, and parameters used with the DDT Template


 

18.2.4.  Keyword (framework) Reference Guide examples

Function Name / Keywords

BackPage – Moves back a page in the browser using the keyboard "backspace" key.

    Parameters    Parameter Values    Description
    ---           ---                 ---

ClickImage – Clicks on the specified image.

    Parameters    Parameter Values    Description
    Param1        ImageName           The name recorded in the Object Repository
                                      for the image.

CloseAllBrowsers – Closes all active browsers on the workstation.

    Parameters    Parameter Values    Description
    ---           ---                 ---

CloseWindow – Closes the specified browser window.

    Parameters    Parameter Values    Description
    Param1        BrowserObj          Identifies the browser that should be closed.
                                      Ex.: Browser("ChildBrowser")


EndState – Closes the main APPL browser window.

    Parameters    Parameter Values    Description
    ---           ---                 ---
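A keyword reference like this maps directly to a dispatch table in the framework driver. The minimal sketch below uses handler names that mirror the keywords above; the browser actions themselves are stand-ins, not real tool calls:

```python
# Minimal keyword-driven dispatcher. Each keyword maps to a handler that
# accepts the Param1 value (or None) taken from a test-case row.
executed = []

def back_page(param=None):
    executed.append("BackPage")

def click_image(image_name=None):
    executed.append(f"ClickImage:{image_name}")

def close_all_browsers(param=None):
    executed.append("CloseAllBrowsers")

KEYWORDS = {
    "BackPage": back_page,
    "ClickImage": click_image,
    "CloseAllBrowsers": close_all_browsers,
}

def run_test_row(keyword, param1=None):
    """Look up the keyword and invoke its handler, as the driver would."""
    KEYWORDS[keyword](param1)

# A tiny test case expressed as keyword rows:
rows = [("ClickImage", "LoginButton"), ("BackPage", None), ("CloseAllBrowsers", None)]
for kw, p1 in rows:
    run_test_row(kw, p1)
```

Because non-programmers author tests as keyword rows while the handlers live in one dispatch table, adding a keyword means adding one function and one table entry.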


18.3.  Links to Other Resources

Software Quality Engineering's test-related articles and info:

http://www.stickyminds.com