SETLabs Briefings Advisory Board

Aveejeet Palit
Principal Solutions Manager, System Integration Practice

Gaurav Rastogi
Associate Vice President, Global Sales Effectiveness

George Eby Mathew
Senior Principal, Infosys Australia

Kochikar V P PhD
Associate Vice President, Education & Research Unit

Raj Joshi
Managing Director, Infosys Consulting Inc.

Rajiv Narvekar PhD
Manager, R&D Strategy, Software Engineering & Technology Labs

Ranganath M
Vice President & Head, Domain Competency Group

Srinivas Uppaluri
Vice President & Global Head, Marketing

Subu Goparaju
Vice President & Head, Software Engineering & Technology Labs
Testing Times: Coping with Test Challenges
A recent Gartner report mentions that a major re-haul in software architecture is in the offing, what with major upgrades being made to key business packages. This is likely to aggravate the already existing concerns on software quality. Defective software is always a huge drain on an enterprise's resources. Testing for software quality has thus grown from a software lifecycle stage into a full-blown industry.
In this issue we touch upon different facets of software validation and the challenges that they bring to an enterprise.
Business intelligence decisions are largely dependent on data contained in data warehouse systems. But how does one appropriate flawless gains from such systems that manage a gargantuan amount of data? We have a paper that dwells on this idea and proposes an approach to test data warehouse systems.
While validation supposedly removes all glitches inherent in faulty software, the big question is whether an enterprise should engage its own experts or a third party in the validation process. One of our papers sheds light on this question by proposing a decision matrix that can help enterprises assess the need for independent validation.
Nip the problem in the bud, assert most validation experts. One of our researchers takes a step back and advocates preventing the problem from arising in the first place. In a very simple and nonchalant way, he accentuates the need for testing the unwritten code through a proper deployment of design metrics.
Manual interventions in the testing process are still prevalent. Automated testing has, of late, taken centre stage though. We have three papers around this concept. While two of them draw from live cases, the third one proposes an automation strategy, especially for package implementation scenarios.
SOA is becoming a ubiquitous architectural norm. While its advantages are many, it is not devoid of challenges, especially in a testing scenario. Testing SOA applications can be a technology nightmare. In yet another interesting paper, we take you through the testing challenges that an SOA environment poses to test planners.
In the spotlight is a paper on accessibility testing. Authored by a differently-abled researcher, the paper sets about explaining the need for accessibility in a world where differently-abled customers have become a major force to reckon with.
I am sure you will enjoy the collection of papers that we have put together for you.
Wishing you a Merry Christmas and a very Happy New Year ahead!
Praveen B. Malla [email protected]
SETLabs Briefings, VOL 6 NO 1, 2008
Tutorial: The STV Approach to Redefining Data Warehouse System Testing
By Girish Viswanathan
Do you want to maximize your gains from your existing data warehouse? In this paper, the author takes you through the STV methodology to test your data warehouse system and appropriate DW benefits.

Insight: Self-Assessment of Need for Independent Validation
By Manikandan M and Anuradha Goyal
Enterprise package implementations are cost intensive. Independent validations, if undertaken, add to these costs. The authors probe the need for independent validation and propose a decision matrix to self-assess that need.

Viewpoint: Earlier = Cheaper: Test Code Before it is Written
By Vinoth Michael Pro
Prevention is better than cure. The author draws from his consulting experience and propounds the need for pre-testing the unwritten code. This, he views, can be done through proper modeling of design metrics.

Framework: Test Automation Strategy for ERP/CRM Business Scenarios
By Ashwin Anjankar and Sriram Sridharan
Multiple business processes need to be tested during package implementations. Test automation not only shortens delivery cycles but also considerably reduces the overall cost of quality.

Methodology: Model-Based Automated Test Case Generation
By Ravi Gorthi PhD and Kailash K P Chanduka PhD
Test planning is still largely subject to manual intervention. This is both effort intensive and error prone. The authors propose a methodology that automates test case generation, which they contend overcomes the shortcomings of the widely employed path analysis techniques.

Perspective: Testing Challenges in SOA Applications
By Sandeep K Singhdeo, Lokesh Chawla and Satish K Balasubramanian
SOA is characterized by flexibility and interoperability. These features, the authors opine, can be a technology nightmare for effective testing. The paper explains the different testing challenges that one has to confront in an SOA environment.

Case Study: How to Select Regression Tests to Validate Applications upon Deployment of Upgrades?
By Anjaneyulu Pasala PhD, Yannick Lew and Ravi Gorthi PhD
The conventional model of executing the entire test suite to validate applications upon deployment of upgrades is expensive and time consuming. The authors propose a regression test strategy that captures and analyzes the runtime behavior of the application to address validation related issues.

Spotlight: Reach Out to the Differently-Abled Users: Be Accessible
By Shrirang Sahasrabudhe
Differently-abled customers have, of late, become a potent force and are now more involved in making buy-decisions than ever. The author draws from his personal experience and expounds the need for being accessible to this new class of customers.

Index
"Behavioral slicing is a very effective method to discover ambiguities and incompleteness of Software Requirements Specifications early in the lifecycle of software development."

Ravi Prakash Gorthi
Principal Researcher, Artificial Intelligence and Test Automation Labs
SETLabs, Infosys Technologies Limited
"Ensuring product quality does not begin with testing and tracking defects. Such a narrow view can eliminate opportunities for early problem detection during the software development lifecycle."

Vinoth Michael Pro
Program Manager, Quality Group
Infosys Technologies Limited
SETLabs Briefings, VOL 6 NO 1, 2008
The STV Approach to Redefining Data Warehouse System Testing
By Girish Viswanathan

Strategize, Test and Validate your DW system to maximize gains from your existing data warehouse
The data warehouse (DW) is "a subject-oriented, integrated, time variant (temporal) and non-volatile collection of data, used to support the strategic decision making process for the enterprise or business intelligence" (Claudia Imhoff [1]).
Data warehouse system testing gains its importance from the fact that data from the DW is extensively used for business planning and reporting. The data is analyzed from different dimensions and is interpreted differently across business groups. This paper presents a holistic approach to testing a data warehouse system, which comprises core components like source systems, ETL, the data warehouse, data marts and OLAP. A methodology for effectively testing a data warehouse system is discussed, taking both functional and non-functional testing into consideration. This is a three-phase approach where a test strategy is formulated after analyzing the system, testing is carried out in a staged manner and, in the post-test phase, test results are analyzed.
WHY DATA WAREHOUSE SYSTEM TESTING IS DIFFERENT
As data warehousing relies fully on the accuracy of data, end-to-end testing is necessary to verify and validate the correctness of data across various systems. At each step of the way, e.g., extracting from each source, merging with data from other sources, translating and processing, loading into the warehouse, and querying/retrieving/reporting from the warehouse, expected inputs and outputs are to be verified. A data warehouse has billions of records, which are transformed as per business requirements that are highly complex. There is a high possibility that a minute change in the value of one field can affect thousands of records. The factors that make data warehouse testing stand apart from usual application testing are discussed below.
Mode of Testing: Unlike normal testing, data warehouse testing is divided into two parts viz., back-end testing, where the source systems' data is compared to the end-result data in the loaded area (system triggered through scripts), and front-end testing, where the user checks the data by comparing his/her information system with the data displayed by end-user tools like OLAP.
Batch v/s Online Gratification: While a transaction system provides instant or at least overnight gratification to users when they enter a transaction, which is processed either online or as an overnight batch, in the case of data warehouses most of the action happens at the back-end, so users have to trace individual transactions.
Volume of Test Data: The test data in a transaction system, even in the extreme scenario, is a very small sample of the overall production data. A data warehouse typically has large test data, as one tries to fill up the maximum possible permutations and combinations of dimensions and facts. A DW can be tested with a gradual scaling of the volume of data, starting from limited test data, expanding to limited production data and finally ending with full production data.
Possible Scenarios/Test Cases: While the number of test scenarios in transaction systems is limited, for a DW system they could be unlimited, given that the core objective of a DW is to allow all possible views of data. Therefore, one has to be creative in designing test scenarios to gain a high level of confidence.
Programming for Testing Challenge: DW data quality testing and extraction, transformation and loading (ETL) testing are done by running separate stand-alone scripts, say, a comparison of aggregates between a pre-transformation script and a post-transformation script; this surfaces pilferages, which can pose a big challenge to the tester.
Validations: In data warehouse testing there are two types of validations that need to be performed viz., functional validation, where one checks the functionality of the application through standard (date, number) validation and business (lookup, data integrity, data transformation) validation, and non-functional validation, where checks are made for load testing, performance testing, volume testing and end-to-end testing.
End-to-end testing is a must to verify the authenticity of data residing in the data warehouse system
Apart from the factors mentioned above, some other challenges that drive the importance of DW testing are:

- Environmental set up, like setting up of source systems, stage, DWH and ETL mappings
- System constraints, like test DB size, performance degradation and automation issues.
THE STV (STRATEGIZE, TEST, VALIDATE) APPROACH TO DW SYSTEM TESTING
Testing of the data warehouse and the evolving needs of the business will both drive continuous change to the data warehouse schema and the data being loaded into it, especially after each area of the data warehouse moves into production. Therefore, the testing processes need to be defined within a bigger framework with clearly defined testing and production steps. The framework should have a well-defined business process enabled by pervasive metadata and impact analysis, and should also have strong alignment between development, operations and business.

The STV testing framework comprises three different phases: Strategize, Test the data warehouse application, and Validate.
Phase 1 - Strategize: In this phase, a primary assessment of the application is done, a test strategy is formulated, a test results repository is designed and a test plan is arrived at. Creative design of the test scenarios is required to gain a high level of confidence among end users. The test strategy mainly focuses on functional automation and performance automation. Functional automation checks for data integrity at various stages of DW testing, while performance automation focuses on sub-system response under load.
Phase 2 - Testing the DW application: This is a very crucial phase that involves testing extraction and transformation logic, data loading and analytical processing. The test results arrived at during all the stages are stored in a dedicated test results repository that acts as a single source of truth for all the functional testing that is carried out.
Extraction Testing: Verifies whether correct data is being extracted from the source, as efficiently as possible. Data extraction from multiple sources is verified in the extraction phase. This is achieved by checking whether:

- The required fields are being extracted
- Extraction logic for each source system is working
- Extraction scripts are granted security access to the source systems
- The extraction audit log is updated and time stamping materialized
- Source-to-extraction-destination movement is working in terms of completeness and accuracy
- Extraction is getting completed within the expected window.
The STV testing framework is characterized by a robust alignment between development, operations and business
Data validation is done for each of the test cases formulated, and the results, along with relevant metadata, are stored in the test results repository.
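As an illustration, a back-end extraction check of this kind can be scripted by comparing row counts and a key aggregate between a source table and the staging area. The sketch below is a minimal example in Python, assuming hypothetical table names (src_orders, stg_orders_extract) and a generic DB-API connection; it is not part of the STV toolset itself.

    import sqlite3  # any DB-API compliant driver works the same way

    def check_extraction(conn, source_table, staging_table, amount_col):
        """Compare row counts and an aggregate between source and staging."""
        cur = conn.cursor()
        cur.execute(f"SELECT COUNT(*), SUM({amount_col}) FROM {source_table}")
        src_count, src_sum = cur.fetchone()
        cur.execute(f"SELECT COUNT(*), SUM({amount_col}) FROM {staging_table}")
        stg_count, stg_sum = cur.fetchone()
        # Completeness: every source row should reach the staging area
        assert src_count == stg_count, f"row count mismatch: {src_count} vs {stg_count}"
        # Accuracy: a key aggregate should survive extraction untouched
        assert src_sum == stg_sum, f"aggregate mismatch: {src_sum} vs {stg_sum}"
        return {"rows": src_count, "amount_total": src_sum}  # to be logged in the test results repository

    # Example usage against the hypothetical tables:
    # conn = sqlite3.connect("dw_test.db")
    # result = check_extraction(conn, "src_orders", "stg_orders_extract", "order_amount")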
Transformation Testing: This testing is carried out in stages to validate the transformation logic of each source separately. This is often facilitated by a debugger in the ETL tool, which enables profiling and what-if testing as the job executes. Transformation testing checks whether:

- Transformation scripts are transforming the data as per the expected logic
- The one-time transformation for historical snapshots is working
- Detailed and aggregated data sets are created and are matching
- The transformation audit log is updated and time stamping materialized
- There is no pilferage of data during the transformation process
- Transformation is getting completed within the expected window.
The test results, along with relevant metadata, are stored in the test results repository.
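The pre- versus post-transformation aggregate comparison mentioned earlier can be sketched as follows. Purely for illustration, it assumes both stages expose per-dimension totals as Python dicts produced by the stand-alone scripts; a non-empty result signals pilferage.

    def compare_aggregates(pre, post, tolerance=0.0):
        """Flag pilferage: dimension keys whose totals changed across transformation."""
        pilferage = {}
        for key in set(pre) | set(post):
            before = pre.get(key, 0)
            after = post.get(key, 0)
            if abs(before - after) > tolerance:
                pilferage[key] = (before, after)
        return pilferage  # empty dict means nothing leaked during transformation

    # Example: totals per region before and after the transformation jobs
    pre_totals = {"APAC": 120_450.00, "EMEA": 98_310.50}
    post_totals = {"APAC": 120_450.00, "EMEA": 98_210.50}  # EMEA lost 100.00
    print(compare_aggregates(pre_totals, post_totals))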
Loading Testing: Involves testing the data loaded into each target field to ensure correct execution of business rules such as valid-value constraints, cleansing and calculations. Valid-value and cleansing operations can be tested with a data profiling tool and can be enforced via domain constraints in the target schema or in the ETL processes. For business transformations, a controlled source test data set is recommended to facilitate checking expected against actual results.

Loading testing verifies whether:

- There is no pilferage during the loading process
- Transformations during the loading process are working
- Data movement from staging to the loading destination is working
- One-time historical snapshots are working
- Both incremental and total refresh are working
- Data is extracted from the data warehouse and updated in the downstream systems/data marts
- Loading is happening within the expected window.
Independent data marts, i.e., data collected directly from sources and not derived from the data warehouse, if any, are also tested in this phase. Data quality reports that are generated are stored in the test results repository.
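A valid-value check on a loaded target field can be scripted along the following lines. This is a minimal sketch with a hypothetical target record set and domain list; in practice a data profiling tool or schema-level domain constraints would do the same job.

    def check_valid_values(rows, column, allowed):
        """Return rows whose value in `column` falls outside the allowed domain."""
        return [r for r in rows if r[column] not in allowed]

    # Hypothetical loaded records and a valid-value constraint on order status
    loaded = [
        {"order_id": 1, "status": "OPEN"},
        {"order_id": 2, "status": "SHIPPED"},
        {"order_id": 3, "status": "UNKNWN"},  # cleansing failure
    ]
    violations = check_valid_values(loaded, "status", {"OPEN", "SHIPPED", "CLOSED"})
    print(violations)  # offending rows, to be logged in the test results repository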
End User Browsing and OLAP Testing: Involves testing the accuracy of data from the business point of view and evaluating various business scenarios with the end users; generating test cases as per the business scenarios; scripting; execution; and test configuration. This is done in an iterative manner to cover all the major test scenarios.

Storing multiple functional test results in a single repository can facilitate easy retrieval during the validation phase

OLAP testing is carried out by checking whether:

- Ad-hoc query creation is as per the expected functionalities
- Ad-hoc query output response time is as expected
- Business views and dashboards are displaying data as expected
- The scheduled reports are accurate and complete
- The scheduled reports and other batch operations like view refresh are happening within the expected window
- Analysis functions and data analysis are working
- There is no pilferage of data between the source systems and the views.
The test results of all business scenarios
are stored in the test results repository.
Error Handling: Here, the testing involves the alerts and rollback activity that take place on certain error states, such as referential integrity constraint violations and lost connectivity, at all phases. Error handling logic can be defined once and reused for the entire DW process to simplify testing.
Phase 3 - Validate: In this phase the focus is on data integrity and application monitoring. The test results repository is browsed for testing data integrity. Application monitoring is done at the client, server and network levels.
Data Integrity Check: Test results from the various activities carried out during the test execution phase are retrieved from the test results repository and are evaluated by the end users for data integrity and accuracy. In addition to this, end-to-end integrated testing may be carried out to check the accuracy of data from the source system to the downstream system.
Application Monitoring

- Client monitoring provides a picture of the client experience from the perspective of application performance
- Server monitoring gives a check on the internal temperature of data warehouse servers, including CPU/memory usage, log file analysis, database/SQL statistics, service/process availability, and connection up/down status
- Network analysis provides a diagnostic window on how network communications contribute to application performance.
ROI

High user participation and carefully developed test cases and test data during the strategy phase of the STV methodology provide flexibility in the time and effort spent during testing, leading to less rework due to missing or rejected data and hence increased confidence amongst users.

Validating the test results is crucial to ensure an error-free data warehousing application

Focus on testing with highly developed processes in the test execution phase helps identify defects early on, providing complete coverage to testing and resulting in significant cost reduction and higher DW quality. A robust and well defined testing framework improves test efficiency and provides a full guarantee of correct data for decision making when the application moves into production.
CONCLUSION

Data warehousing applications are subjected to changing requirements. It may well be the case that a requirement tested at the start of the test cycle has changed by the time the testing cycle ends. Changing source systems, ad hoc reporting and the volume of test data all make DW testing stand apart from testing other online applications.

To perform DW testing correctly and thoroughly takes time and effort. The best test cases come from detailed requirement documents created during the planning and analysis phases. Each and every requirement that is documented must be measurable, achievable and testable. Each requirement must be assigned to someone who wants it, who will implement it and who will test and measure it.

The STV methodology blends the conventional approach to testing with the flavor of DW auditing integrated from the initial stages, thereby reducing costs to the organization and also providing a high quality data warehousing application.
REFERENCES
1. Claudia Imhoff, The Corporate Information Factory, DM Review, December 1999
2. W H Inmon, Building the Data Warehouse: Getting Started, 2000. Available at http://www.inmoncif.com/registration/whitepapers/ttbuild-1.pdf. Accessed March 2007
3. Nate Skinner and Paul Down, Avoiding Performance Downtime and System Outages Through Goals-based Testing, August 2003. Available at www.embarcadero.com/resources/tech_papers/Goals_Based_Testing.pdf. Accessed March 2007
4. Rob Levy, What is Data Warehouse Testing?, DM Review, December 2003. Available at http://www.datawarehouse.com/article/?articleid=3275. Accessed March 2007.
SETLabs Briefings, VOL 6 NO 1, 2008
Self-Assessment of Need for Independent Validation
By Manikandan M and Anuradha Goyal

A litmus test to understand when independent validation adds most value during enterprise package implementations
Over the years, ERPs and other enterprise-wide applications like CRM, BI, SCM and the like have evolved as the backbone of the organizations they support. While consolidating all the functional requirements of the enterprise into a few packaged applications has its advantages, it also shifts most business dependencies onto these applications. There is enough evidence that with the increase in the number of users, the complexity of the solution increases, which directly impacts the investments that go into the solution. This necessitates that enterprises ensure the robustness of the solution being implemented. The solution should take care of all the business requirements of the organization, while ensuring flexibility for anticipated enhancements and scalability for future growth.

The dilemma that most IT decision makers face is how to figure out if independent validation (IV) is required in their enterprise application scenario. Investing additional funds in independent validation to secure funds already invested in enterprise applications can be a tough decision to make. Ascertaining the need for independent validation of a solution being implemented can be tricky. There are multiple factors that need to be considered to arrive at this decision. In this paper, we identify some of them, which help in deciding an IV strategy.
WHY INDEPENDENT VALIDATION?

Independent validation is the process of using an independent team to concurrently test the solution being delivered. These teams work independent of the implementation team and can have representatives from the organization's business users as well.

Complex tasks that they are, large package implementations come with inherent risks and complexities. Organizations that have realized this believe that independent validation is a necessary step in safeguarding their investments in these implementations.
The key reasons for this trend are:

Package Implementations are Getting Riskier: A Gartner study states that the industry average success rate for ERP implementations is only 30% when measured by on-budget and on-schedule parameters [1]. However, if an implementation's failure to meet stated functionality is also measured, the success rate will most likely be less than this.
Packaged applications today pack in more functionality than they did a few years ago. These applications, being enterprise-wide in nature, have to co-exist with multiple applications and have to interact with them. This means that the channels through which existing applications and the new packaged application(s) interact need to be tested extensively and thoroughly. Apart from this, organizations wanting to leverage features such as localization and language support will have to face increased complexity in working with configurations and customizations. This drives up the complexity of the implementation, resulting in dramatically higher risk.

There are other sources of complexity as well. Most large organizations today have more than one ERP installed. According to an Aberdeen benchmarking report of 2006, 71% of large organizations have two or more ERPs and 25% of them have at least four installed across the enterprise [2]. Unlike the latest ERPs, their predecessors were not designed with interoperability in mind. This limitation of yesteryears' ERPs can prove a nightmare for companies planning to integrate internal functions and consolidate reporting.
Apart from technical challenges, people and change related challenges are substantial. A large package implementation can mean an extensive change in the fundamental process structure of the enterprise. It is usually recommended that the processes of the organization be modified to reflect the processes supported by the package. Such changes are usually large scale and involve high risks that are all too well known to organizations.
Package Implementations Involve High Investments: Large implementations demand more investments. An ERP TCO survey conducted by Aberdeen has shown that the average cost of software and services spent on SAP was as high as $5,995 per business user [2]. The costs for other ERPs surveyed were also in this range. This means that for a 5,000 member organization, the cost of implementation of SAP ERP can be as high as $29.98 million [2].

Independent validation helps firms take a call on whether huge investments in risk intensive large package implementations can be committed
Some of the factors that influence the cost of implementation include hardware, software licenses, professional services for implementation, customization, integration, data conversion, testing and training.
The huge benefits of large packaged applications on the one hand and the high risks of their implementation on the other have led organizations to look for ways of mitigating risks, and independent validation of packages is fast emerging as a popular alternative.
IS INDEPENDENT VALIDATION ALWAYS REQUIRED?
While independent validation adds to the predictability of the implementation, it also increases the costs and can potentially increase the implementation time. Independent validation teams consist of business users and testing experts who specialize in certain areas of testing and have a good understanding of testing tools and processes. Employing/hiring people with specialized skill sets adds to the cost. Furthermore, co-ordination between the independent testing team and the implementation team can run into rough weather if adequate and apt processes are not set up.
Given the fact that cost saving is a key business metric, it is important to ensure that the return on independent validation is justified for a given implementation. More importantly, organizations must be able to justify this return before spending on the project. A framework which can quickly determine if independent validation can add value to the implementation can be a good tool in the kit of key decision makers.

Later in this paper, we propose a framework aimed at helping IT decision makers understand when independent validation adds maximum value and assess whether it is needed for their implementation. Risks and investments are to be identified, assessed and categorized on a Low-Medium-High scale and then plotted onto the proposed framework to understand the value added by independent validation.
FACTORS AFFECTING THE INDEPENDENT VALIDATION DECISION

It is important to explore all the factors that affect independent validation while assessing the value-add it brings to the table. All factors have been clubbed under two broad headings viz., Risk of Implementation and Investments in Independent Validation, for discussion.

Justifying spends made on independent validation is as much a concern as investing huge amounts in package implementations

The factors listed under the headings below are intended to serve as an initial checklist, and organizations must add and delete factors that are specific to their organization/industry.
I. Investments in Independent Validation

Independent validation costs vary with vendor selection. While most organizations today offer independent validation services, some have more mature processes than others. So careful consideration must be given to selecting vendors for this task.

The cost of independent validation services for packaged applications like ERP and CRM can range roughly from 8% to 12% of the total implementation size. Organizations are required to assess and rate their investments in independent validation on a High-Med-Low scale to be plotted on the graph.
II. Risk Assessment

Risks of implementation vary widely between organizations and depend on a large number of factors. The list of factors mentioned here acts as a checklist and, as with investments in IV, organizations are required to use these factors to arrive at a High-Med-Low score for risk.

Risk during ERP implementations, as discussed here, is a function of two major elements: investments in implementation and complexity of implementation. Because high investments and complexity drive up risks, it is important to ascertain the impact of each of these on the overall risk of implementation.
A. Investments in Implementation: Investments can broadly be of two types: direct and indirect investments.

(i) Direct Investments: Most organizations have an established way of tracking these investments. In a typical packaged implementation, the major cost elements are:
(a) Costs in Hardware and Software: Large package implementations generally require additional hardware capacity beyond what is available in the organization. A detailed sizing exercise needs to be undertaken to estimate the hardware requirements before a purchase/upgrade decision.

License costs comprise a significant part of package implementations, vary widely between vendors and depend on the package being chosen. Supporting software such as operating systems will also need to be upgraded to support the installations and will add to the total cost.
(b) Costs in Software Services: A typical business organization will not have the required skill sets to perform an ERP implementation and hence a professional services organization is usually engaged to do the implementation. These organizations specialize in assessing, implementing, upgrading, integrating and maintaining large package applications. This is a crucial investment because it influences the quality of the solution delivered. Investment here is dictated by the price quoted by the vendors and forms a significant part of investments. It is not unusual for these costs to be as high as 100% of license costs.
(c) Other Internal Costs: People at all levels in the organization are involved in activities like collecting and providing requirements, re-engineering processes, project management, change management and training during implementation. In addition to this, large package implementations demand constant senior management attention in activities like planning, budgeting, reviews and ROI assessments. These are time investments made by employees during the implementation.

A large package implementation exercise can bring changes in systems and processes in the day-to-day operations of people. This requires that people be trained on the new systems and processes to operate smoothly in the new environment. Training expenses can form a significant part of the entire expenses. 33% of respondents in a report by the Aberdeen Group said that training during ERP implementations was a challenge for them [3].
According to another survey, by TechRepublic Inc., end-user adoption of the ERP package being implemented was the greatest concern among IT professionals [4]. Managing people and their expectations becomes important in programs as large as package implementations, and many failed ERP implementations in the past owe their failure to poor change management. Workshops and meetings are conducted regularly to set the right expectations and identify any people related problems that might surface.

Overhead costs like travel, welfare and hiring expenses are also covered under this head. Most organizations have established ways of tracking these investments. Usually a separate investment project is created and people are assigned to it, to track the investments. Though no standard benchmark of this cost is available, we believe this cost to be a significant portion of the total costs.
(ii) Indirect Investments: There are other investments apart from the ones mentioned above, which may not be quantifiable or tracked directly. For instance, people not directly assigned to the project also spend time and effort in making a given project successful. Key employees have to be assigned to the implementation task because of its criticality, which means they essentially have to be taken off their current tasks, and this might cause some business disruption. Such costs are tough to quantify.

It is important to realize that efforts that may not be quantifiable will be incurred during implementations, and not including them in calculating total investment costs can be misleading.

Organizations need to consider both financial investments and investments in time and arrive at a Low/Medium/High score for the total investments. This score must then be combined with the Complexity Score, discussed in the following section, to arrive at a final Risk Score.
B. Complexity Factors

Large package implementations come with inherent complexities. Further modifications and enhancements increase complexity, and as complexity increases, risk follows. While a lot of factors can contribute to complexity and risk, the factors discussed below must always be reckoned with while evaluating complexity. These factors are generic and can be applied to any large package implementation. Individual organizations using this model must evaluate if these factors suffice to arrive at a risk score for their implementations. The model is flexible enough to accommodate more factors.

The more complex the package is, the more are the risks and the higher their accompanying costs
(a) Lifecycle Stage of the Package: The package lifecycle has three distinct phases, viz., implementation, upgrade and maintenance, and the risks associated with these phases vary. Implementations are the most complex of all three. Package implementation involves feasibility study, process documentation and re-modelling, package evaluation, vendor selection and other intensive activities, and needs extensive time and support from the organization.
Post implementation, packaged applications need regular upgrades. Reasons for upgrade can include statutory updates, older releases no longer being supported by vendors, technology stack changes, etc. A report by AMR states that 55% of upgrades are business-driven expansions, consolidations or are triggered by implementation of new functionality [5]. While upgrades are not as complex as implementations, they also involve activities like infrastructure upgrade and training that need careful planning.

Once the system is stable after an upgrade or an implementation, minor updates in the form of bug fixes, minor customizations and patches are applied to the package to keep the system up and running. This is referred to as maintenance of the system and is usually anchored by a professional services organization. Gartner, in its 2006 report on testing, recommended that even seemingly small changes must be tested in a controlled environment that imitates production operations, to ensure that critical business operations are not interrupted when a change is implemented [1]. Having a strong testing team to back up post-production support is important because individual business units cannot make changes without affecting other lines of business involved.

As we proceed from implementation to upgrade to maintenance, there is a significant decrease in the overall activity level and the corresponding complexity levels.
(b) Level of Customization: Though most packages are comprehensive in functionality, they do not meet the requirements of all organizations in the world, and hence packages are customized to take care of organization specific requirements. According to one estimate, 20% of the processes in an organization cannot be modelled in an ERP system without customization [6].

Extensive customization, however, adds to complexity. In a report by Aberdeen on ERP implementation, customization related challenges topped the list of challenges [3]. Customizations introduce more challenges, technical, project management and organization related, which drive up the complexity levels. The higher the customization, the higher is the complexity.
(c) Level of Integration: Enterprise applications have to co-exist with other applications in the organization, and some of these applications could be mission critical. Interfaces are to be developed between the enterprise applications and the existing applications to ensure smooth exchange of data. While interfaces are essential, organizations must ensure that the number of applications that need to be integrated is kept to a minimum and that the package application being implemented is leveraged fully to replace any smaller applications. In an Aberdeen survey, 37% of respondents quoted integration issues as the motivation behind replacement of their ERPs, and 26% of them said that high integration costs were one of their implementation challenges [3]. A large number of integration points will drive both the cost and complexity of implementation northwards.
(d) Size in terms of Number of Users: The number of users impacted depends on the number of business units that are impacted. Large package implementations can affect a majority of people spanning various geographies. The more the number of users, the higher is the complexity of implementation.

As discussed, the impact of the complexity factors above varies widely between organizations and can be quantified by individual organizations using Table 1.
Steps in Computing Complexity Scores

- Ensure completeness of the complexity factors. Fill in any organization/industry specific complexity factors.
- Assign weights to each of these complexity factors. The combined score of the weights should be equal to 1.
- Calculate each Group Complexity Score by multiplying the individual score of each complexity factor with its weight.
- Add all Group Complexity Scores to arrive at a Total Complexity Score.
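As an illustration of these steps, the weighted-sum computation can be sketched as below. The factor names and ratings are hypothetical, and mapping High/Med/Low to 3/2/1 is an illustrative choice, since the paper leaves the numeric scale to the organization.

    # Hypothetical complexity factors, each rated Low/Med/High and weighted;
    # the weights across all factors must sum to 1, as the steps above require.
    RATING = {"Low": 1, "Med": 2, "High": 3}

    factors = [
        # (factor, weight, rating)
        ("Lifecycle stage of the package", 0.30, "High"),
        ("Level of customization",         0.30, "Med"),
        ("Level of integration",           0.25, "High"),
        ("Size in terms of users",         0.15, "Low"),
    ]

    assert abs(sum(w for _, w, _ in factors) - 1.0) < 1e-9, "weights must sum to 1"

    # Group score = factor rating x weight; total = sum of group scores
    group_scores = {name: RATING[r] * w for name, w, r in factors}
    total = sum(group_scores.values())
    print(group_scores)
    print(f"Total Complexity Score: {total:.2f}")  # band into Low/Med/High thresholds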
C. Other Risk Factors

In addition to the complexity factors discussed above, the following risks, along with any organization/industry specific risks, must also be considered to arrive at a total risk score on a High-Med-Low scale.
(a) Big Bang vs. Phased Approach: The approach to implementing packages can also introduce risks during implementation. In a big bang implementation, all the modules being implemented go live at one time. Heavy co-ordination and work, on both the organization's part and the implementation team's part, increase the risk in a big bang implementation.

A phased approach, on the other hand, is far less complex and less risky to implement. As one proceeds from a big bang to a phased approach, the risk of implementation goes down.
(b) Duration of the Project: The longer the duration of the project, the higher is the risk involved in its completion. Unforeseen circumstances like economic recession and organizational change can affect project progress adversely, and hence it is recommended that the project duration be as short as possible.
Table 1: Complexity Score. Source: Infosys Research
INTERPRETING THE FRAMEWORK

Organizations are required to arrive at a High-Med-Low score for both the decision factors mentioned above: incremental investment in independent validation, and risk. Plotting these scores will place organizations in one of the four quadrants depicted in Figure 1.

The model makes use of a simple 2x2 graph plotted with investments in independent validation on the X-axis and risk of implementation on the Y-axis. Organizations will map to one of the quadrants in this model, based on their investment and risk scores. The interpretation of the value added by independent validation for organizations in each of these quadrants (Q) is listed below:
- Independent validation is most effective for organizations falling in Q2, since the cost of IV is low while the risk is high.
- While organizations in Q1 will benefit from independent validation, the cost of IV is high, so individual organizations need to take a call on whether they need it.
- Organizations in Q3 also need to exercise individual judgment on whether they need independent validation, because of the relatively low risk and investments involved.
- Independent validation for organizations in Q4 is not justified, since the cost of investment is high and the risk levels are relatively low. However, organizations with less appetite for risk may still opt for independent validation here.
Figure 1: Decision Matrix. Source: Infosys Research
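The quadrant interpretation above can be expressed as a small helper function. This is a minimal sketch, assuming the ratings have already been reduced to strings; treating Med as "not High" is an illustrative simplification, not part of the paper's model.

    def iv_recommendation(investment, risk):
        """Map (investment in IV, implementation risk) ratings to a quadrant verdict."""
        high_inv = investment == "High"
        high_risk = risk == "High"
        if high_risk and not high_inv:
            return "Q2: IV most effective, low cost and high risk"
        if high_risk and high_inv:
            return "Q1: IV beneficial but costly, organization's call"
        if not high_risk and not high_inv:
            return "Q3: low risk and low cost, individual judgment"
        return "Q4: IV not justified, high cost and relatively low risk"

    print(iv_recommendation("Low", "High"))   # Q2
    print(iv_recommendation("High", "Low"))   # Q4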
CONCLUSION

Organizations need to take a balanced and objective view of de-risking the huge investments made in implementing large enterprise applications. By considering the various factors mentioned in this paper, they can take an informed decision on the best way to mitigate the risks associated with running the business on enterprise applications. The weights of each of the factors mentioned can be decided depending on various organizational, industry and environmental factors. Investment in independent validation can help organizations deal with potential risks; at the same time, they need to evaluate if this investment is really required.
REFERENCES
1. Pat Phelan, The Role of Testing in Business Application Support, Gartner Research ID No. G00142376, September 2006
2. Cindy Jutras, The ERP in Manufacturing Benchmark Report, Aberdeen Group Benchmark Report, August 2006
3. The Total Cost of ERP Ownership, Aberdeen Group Report, October 2006
4. Donald Burleson, Four Factors that Shape the Cost of ERP, TechRepublic, August 2001. Also available at http://www.dba-oracle.com/art_erp_factors.htm
5. Bill Swanton, Dineli Samaraweera and Eric Klein, Minimizing ERP Upgrade Costs Requires Synchronizing With Business Improvement Projects, AMR Research, July 2004
6. J E Scott and L Kaindl, Enhancing Functionality in an Enterprise Package, Information and Management, Vol. 37, No. 3, April 2000, pp. 111-122
7. www.dgs.virginia.gov/PPEA_EAP/IBM_EADP/03-Vol_I_Executive_Summary.pdf
SETLabs Briefings, VOL 6 NO 1, 2008
Earlier = Cheaper: Test Code Before it is Written
By Vinoth Michael Pro

Employ design metrics to assess systems prior to implementation and mitigate losses due to software defects
Typically, quality checks in software do not begin until the testing phase, by which time many months have passed and many lines of code have been written. Today we test compiled code as we get it. Testing before code is even written may seem illogical, but some of the defects in an application can be traced back to poorly defined requirements and design flaws in the software.
One of the revolutionary improvements in manufacturing has been the advent of three-dimensional modeling that detects conflicting parts, so that part designs can be changed well before they are manufactured. In systems and software engineering, such an attempt would take the form of design specifications with capabilities to detect flaws and mismatched interfaces.
Visual UML design models can provide an automated way for software practitioners to assess the quality of their software prior to coding. This approach involves processing design specifications/UML diagrams to calculate design metrics on classes before they have been implemented. Design metrics focus on design-level parameters and reveal characteristics such as coupling, cohesion and complexity. These design metrics have been shown to correlate with aspects of software quality such as fault-proneness and changeability.

Since design metrics do not rely on the syntax of code, they can be computed from design specifications before the code has been written. This allows project managers and developers an early insight into the quality of the design and the potential quality of the eventual software. For instance, if the values of design metrics show that a module is very complex or lacks cohesion, the project manager could order a redesign of the module. This paper focuses on employing design metrics to assess systems that have not yet been implemented.
COST OF FIXING DEFECTS

Software quality can be described as the convergence of complete requirements, flawless design and correct code that align to meet business goals.
When an organization emphasizes only testing, it typically focuses on product quality in terms of defects that are to be tracked and reworked in time to meet schedule commitments. This narrow view of quality eliminates opportunities for early problem detection throughout the software development lifecycle. Between 40-50% of the defects in a software application can be traced back to poorly defined requirements and design flaws in the software [Table 1] [1].

Table 1: Defect Potential in Software. Source: Economics of Software Process Improvement, Capers Jones, 2005
When organizations consider quality as an afterthought, they are in effect increasing their costs and decreasing their efficiencies. Figure 1 illustrates how significantly the estimated cost per defect can increase in the latter stages of the software development lifecycle. By taking a more proactive approach to quality management, organizations can detect and remove defects earlier, during design, and ultimately spend less time on rework and testing.
WHAT ARE DESIGN METRICS?

Obviously, design review is a good practice in systems and software engineering for detecting and removing defects before progressing to the next phase of development. But review still remains an inspection-based approach to quality control. Rather, measuring for code error proneness factors in the design and eliminating those factors would be much more cost effective. This realization has led to the concept of design metrics.
Metrics to predict software quality do exist and can be used starting in the design phase. Fundamentally, design metrics involve counting or measuring some of the design parameters that correlate with error proneness. To find out how much quality you have in your system, you can find how many of these metrics show abnormal measures at the design level. Thus, design metrics play an important role in helping developers understand the design aspects of software and, hence, improve software quality.
Object-oriented design measures are believed to be indicators of complexity, and one of the main reasons for defects is complexity. This means that object-oriented classes are easy to understand as long as their complexity is below a threshold. Above that threshold their understandability decreases rapidly, leading to an increased probability of a fault. There are, however, a number of object-oriented design principles and practices that help contain complexity, or at least identify areas that may become overly complex so that actions can be taken judiciously. For e.g.:

- Interfacing: The level of interfacing quantifies the extent to which the different parts of the software are connected with each other. The higher this connection, the higher is the possibility of an error propagating from one part of the software to another.
- Cohesiveness: Refers to the inherent similarity of the activities carried out by a component/module/class. If a class/module is not cohesive it will possibly
This means that object-oriented classes are easy to understand as long as their complexity is below a threshold. Above that threshold their understandability decreases rapidly, leading to an increased probability of a fault. This occurs, according to the theory, due to an overflow of short-term human memory.

Object-oriented programming has matured to the point where it is commonly accepted as the most employed paradigm for developing software. With the shift in the way software is developed comes a new suite of design metrics aimed at validating the quality of object-oriented designs. The goal is to gather data that may later be analyzed to identify flaws. The reverse may also be stated, in which case metrics may provide some assurance that the code that results from the design is without serious flaw.

Once design level metrics have been collected for several systems, they can be compared with defect data for the implemented software to develop models for predicting aspects of software quality such as changeability or fault-proneness. An increasing rigor is being applied to this area of research, with the hope that someday software quality will be mathematically proven. Until that day, it is important to understand a bit of the theory behind these metrics and how to best put to use what they show.
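To make the idea concrete, here is a minimal sketch of computing two such design metrics, fan-out coupling and an LCOM-style cohesion proxy, from a toy, hand-built representation of a class design. No real UML parser is assumed; the class description format and names are invented for illustration.

    from itertools import combinations

    # Toy design model: each class lists the types it depends on and,
    # per method, the attributes that method touches (known at design time).
    design = {
        "OrderService": {
            "depends_on": ["OrderRepo", "PricingEngine", "AuditLog"],
            "methods": {
                "create_order": {"items", "customer"},
                "price_order": {"items"},
                "audit": {"audit_trail"},
            },
        },
    }

    def fan_out(cls):
        """Coupling: number of other classes this class depends on."""
        return len(design[cls]["depends_on"])

    def lcom_proxy(cls):
        """Cohesion (LCOM-style): method pairs sharing no attributes minus pairs sharing some."""
        methods = list(design[cls]["methods"].values())
        disjoint = shared = 0
        for a, b in combinations(methods, 2):
            if a & b:
                shared += 1
            else:
                disjoint += 1
        return max(disjoint - shared, 0)  # higher value suggests lower cohesion

    print(fan_out("OrderService"), lcom_proxy("OrderService"))  # -> 3 1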
CONCLUSION

Design metrics computed for design models can indicate the relative cohesion, complexity and coupling of a system's modules. Computing design metrics in the design phase, before the code is ever written, can give the development team very valuable insight. Managers can use this information in the area of project planning. They can also take preventive measures in modules that are complex or lacking in cohesion, by redesigning them or by including activities such as code inspections. Cost savings, better delivery date estimation and code with better design and fewer errors are the potential results of such an initiative.
Quantitatively proving the quality of software can be an insurmountable task. Nevertheless, researchers are working with renewed vigor to prove it mathematically
REFERENCES
1. Capers Jones, Economics of Software Process Improvement, 2005
2. A Lakshminarayana et al., Automatic Extraction and Visualization of Object-Oriented Software Design Metrics, Proceedings of SPIE, The International Society for Optical Engineering, Vol. 3960, 2000, pp. 218-225
3. L Briand et al., Predicting Fault-Prone Classes with Design Measures in Object-Oriented Systems, Proceedings of the 9th International Symposium on Software Reliability Engineering, 1998, pp. 334-343
4. V Basili, L Briand and W Melo, A Validation of Object-Oriented Design Metrics as Quality Indicators, IEEE Transactions on Software Engineering, Vol. 22, No. 10, 1996, pp. 751-761
5. J Bansiya and C Davis, A Hierarchical Model for Object-Oriented Design Quality Assessment, IEEE Transactions on Software Engineering, Vol. 28, No. 1, 2002, pp. 4-17
6. R Subramanyam and M S Krishnan, Empirical Analysis of CK Metrics for Object-Oriented Design Complexity: Implications for Software Defects, IEEE Transactions on Software Engineering, Vol. 29, No. 4, 2003, pp. 297-310.
SETLabs Briefings, VOL 6 NO 1, 2008
Test Automation Strategy for ERP/CRM Business Scenarios
By Ashwin Anjankar and Sriram Sridharan

Efficient test management compresses delivery cycles and reduces the overall cost of quality
Any package implementation involves a number of business processes that need to be tested. These business processes can be seeded or customized. The extent of customization will depend on the gaps with respect to customer requirements. These business processes can correspond to specific modules or can be cross-functional, spanning multiple modules. The number of business scenarios can vary depending on the scale of the ERP/CRM package implementation.

Though all the functionalities that need to be tested can be independent of each other, they are related to the business in some way or the other. Thus each of these functionalities can be seen in the context of a business process or a business scenario.
Test automation that considers only specific user interfaces and limited functionality may not see much usage. Hence the objective is to create generic test scripts that have wider usage and easier maintainability. There can be different ways to automate the testing of these business processes.
In the simplest form, a main test script can be created which invokes different sub-scripts. Each sub-script can cater to some of the functionalities under test [Fig. 1]. The different sub-scripts can be executed sequentially, taking the user inputs from a user input Excel sheet, as the sketch below illustrates.
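A minimal sketch of such a driver, assuming a hypothetical inputs.xlsx read with openpyxl and sub-scripts implemented as plain Python modules; a real implementation would typically live inside a commercial test tool rather than hand-rolled scripts.

    import importlib
    from openpyxl import load_workbook  # assumed available; any Excel reader works

    def run_main_script(workbook_path):
        """Main test script: run each sub-script listed in the Excel sheet, in order."""
        sheet = load_workbook(workbook_path).active
        # Each row: sub-script module name followed by its input values
        for row in sheet.iter_rows(min_row=2, values_only=True):  # skip header row
            subscript_name, *inputs = row
            module = importlib.import_module(subscript_name)  # e.g., "create_order_test"
            module.run(*inputs)  # each sub-script exposes a run() entry point

    # run_main_script("inputs.xlsx")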
Such functionality can be extended further, to create scripts that are independent of each other and that can be executed independently after passing the required inputs. In this case each script exists as a stand-alone component. These components can either be executed independently or together with other components. Each component can be configured to accept inputs from other components [Fig. 2]. This is component based design and development.

Figures 1 and 2 depict test automation without and with the component based development approach.
This paper looks at test automation from the perspective of business processes, using component based design and development. It also discusses strategies to address some of the issues with test automation and to make test automation more effective.
Figure 1: Test Automation Without Using Components. Source: Infosys Research
Figure 2: Test Automation Using Components. Source: Infosys Research
In the subsequent sections we detail the various constituents that go into making a robust test automation strategy.
GATHERING TEST REQUIREMENTS

Requirement gathering is one of the most important areas in any project. Test requirements should be collected as early as possible in the software development life cycle. In order to make test automation more effective, test requirements should encompass the following:

- Traceability, which should be maintained between business requirements and test requirements
- Identification of stakeholders and their reporting/communication needs regarding test progress
- Well laid out business, functional and unit test scenarios.

Use-case based requirements collection is ideal for maintaining test requirement traceability. Some of the modern test management tools have built-in support for achieving this.

The traceability between business requirements and test requirements (established at the time of requirements gathering) can be further extended to the design/development phase in the form of a test coverage matrix.
TEST COVERAGE MATRIX

Knowledge of the automated test components repository among business/functional testers is essential to achieve reuse and maximum utilization of test assets. A test coverage matrix showing details of the business scenario, the related functional scenario, the corresponding automated script components and the coverage of the test cases within each script component will help assess the test automation coverage very early in the process [Table 1]. A test coverage matrix can pose the following challenges:

- Maintaining the test coverage matrix with the latest details
- Building awareness of the matrix itself
- Establishing a process to assess test automation coverage using the matrix.

Table 1: Example of Test Automation Coverage Matrix. Source: Infosys Research
DESIGNING SCRIPT REPOSITORY
Usually, all the test scripts are stored on a central server for easy access. During script development or modification, these test scripts are downloaded to the client machine. Over a period of time, the number of scripts in the repository grows. The size and number of the test scripts can affect the download time and thus prolong development time.
The solution is to restrict the number of scripts that get downloaded to the client. Ideally, only the required scripts should be downloaded. This can be achieved by effectively designing the way these scripts are stored on the server [Fig. 3]. Scripts can be stored separately according to modules or functionality. This reduces download time, achieves modularity and ensures ease of maintenance.
Figure 3: Script Repository Design (Source: Infosys Research)
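A minimal sketch of this idea follows; the module and script names are hypothetical, and a real repository would live on a test management server rather than in an in-memory dictionary:

```python
# Module-wise script repository: scripts are grouped by module so a
# client fetches only the folder it needs, not the whole repository.
REPOSITORY = {
    "order_management": ["create_order_script", "book_order_script"],
    "shipping": ["ship_order_script"],
    "invoicing": ["create_invoice_script"],
}

def scripts_to_download(module: str) -> list[str]:
    """Return only the scripts belonging to the requested module."""
    return REPOSITORY.get(module, [])

print(scripts_to_download("shipping"))  # ['ship_order_script']
```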
TEST SCRIPTS DESIGN AND
DEVELOPMENT
Thoughtful design of test scripts is essential to achieve maximum reuse and easy portability of scripts for future application upgrades. Newer releases of test management products support business process testing. Using this feature, each business process can be subdivided into smaller parts that are developed as test components. These test components can be sequenced together to cover the entire business process. Component-based design gives a lot of flexibility in terms of development and reuse.
Component Definition: A component is the fundamental building block of a business process that can be reused across business processes. The design of a component affects its reuse. For example, a component encompassing order-to-shipping as a whole is less likely to be reused because of its rigidity and tight coupling. On the other hand, separate generic components for the individual order creation steps are more likely to be reused.
Business Process Design: Each business process owner owns his business process components and is responsible for testing his business processes [Table 2]. End-to-end testing of applications like ERP/CRM involves testing several business processes that are cross-functional in nature. Different business owners coordinate with each other in such end-to-end testing.
Table 2: Component Definition (Source: Infosys Research)
Component-based design should allow business owners to selectively test a business process either fully or partly.
One of the fundamental aspects of componentized test development is to achieve loose coupling between the components. Each component should take multiple input values and provide multiple output values that subsequent components accept as inputs. These inputs can also be supplemented by user inputs. Such components become part of the component repository and can be easily reused during the creation of new test scenarios, as the sketch below illustrates.
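The sketch is a minimal Python illustration of such loosely coupled components; the component and field names are hypothetical, and in practice each function body would drive the application under test through an automation tool:

```python
# Each component takes a dict of inputs and returns a dict of outputs
# that the next component in the sequence consumes.
def create_order(inputs: dict) -> dict:
    # ... drive the UI/API to create an order ...
    return {"order_id": "ORD-001", **inputs}

def credit_check(inputs: dict) -> dict:
    # ... verify credit for the order passed in ...
    return {"credit_ok": True, **inputs}

def book_order(inputs: dict) -> dict:
    # ... book the order only if the credit check passed ...
    return {"booked": inputs["credit_ok"], **inputs}

def run_scenario(components, user_inputs: dict) -> dict:
    """Chain components: outputs of one become inputs of the next."""
    data = dict(user_inputs)
    for component in components:
        data = component(data)
    return data

result = run_scenario([create_order, credit_check, book_order],
                      {"customer": "ACME"})
print(result)
```

Because each component only reads from and writes to the shared input/output dictionary, re-sequencing the same components to build a different scenario requires no change to the components themselves.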
Business Scenario Design: A business scenario consists of one or many business processes. For example, order-to-cash is a single business scenario [Fig. 4] that contains multiple business processes like order creation, order shipment, invoicing, etc.
Figure 4: Order-to-Cash End-to-End Business Scenario (Source: Infosys Research)
A test script can be designed based on the need to focus on a single business process or on multiple business processes. Test script design should give the business owner the flexibility to test her business process either as a single scenario or as a part of different scenarios.
Design Considering the Big Picture: While designing the test scripts for business components, consider the other related business scenarios. This facilitates more reuse, quicker development and reduced maintenance effort. For example, Figure 5 shows three different scenarios, each using three components: one to create/query an order, one for the credit check and one to book the order. The create order component can be made generic so that it can be used in multiple scenarios, thus facilitating reuse.
Figure 5: Design of Related Scenarios (Source: Infosys Research)
TESTING OF TEST SCRIPTS
Once the test scripts are developed, they need to be tested. Reduced testing time contributes towards compressing the overall delivery time. The testing time of the test scripts can be reduced by considering the following options.
Provide Restart Ability to the Scripts: Provide the ability to restart from the point where the execution failed earlier. This saves time on account of (a) repetitive testing of components that have already been tested successfully, (b) consumption of data by components already tested successfully, and (c) the execution time of the test script. One way to implement this is sketched after Figure 6.

Figure 6: Testing of Related Scenarios (Source: Infosys Research)
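A minimal sketch of such a restart mechanism follows; the checkpoint file format and the component names are assumptions made for illustration:

```python
# Restartable execution: the name of each successfully executed
# component is persisted, so a re-run skips what is already tested.
import json
from pathlib import Path

CHECKPOINT = Path("checkpoint.json")

def run_with_restart(components):
    done = (json.loads(CHECKPOINT.read_text())["completed"]
            if CHECKPOINT.exists() else [])
    for name, run in components:
        if name in done:
            continue  # already tested successfully; do not re-execute
        run()  # may raise; the checkpoint then marks the failure point
        done.append(name)
        CHECKPOINT.write_text(json.dumps({"completed": done}))

run_with_restart([("create_order", lambda: None),
                  ("ship_order", lambda: None)])
```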
Test the Related Scenarios Together: Business scenarios having common business processes can be tested together. For example, Figure 6 shows three different scenarios involving the order management and shipping processes. Here, the order management business process need be tested only once. Instead of retesting the order management process, data generation scripts can be used to provide the required data to the subsequent business processes.
Selection Criteria for Automation - What to Automate First?: The sequence in which the business scenarios are automated decides the extent to which previously developed components are reused. A practical approach to selecting the automation sequence for business processes can be set in terms of a minimum reuse percentage of components [Fig. 7]. For example:

- The scenario selected for automation should produce at least 10% of components that can be reused during later automation
- Alternatively, the scenario selected for automation should have at least 10% of components that have been previously developed.

(The number 10% is an indicative figure.)

Figure 7: Scenario Selection Based on Reuse Criteria (Source: Infosys Research)
Table 3: Reuse Decision Matrix (Source: Infosys Research)
There can be different permutations and combinations with respect to the sequence in which the components are automated. For each sequence, calculate the total reuse percentage and select the sequence that provides the maximum reuse.
This criterion may not always be satisfied, but it holds good in most cases, as there will always be a few components that can be reused. We believe such goals will increase the reuse of components, thereby reducing delivery time and accelerating the ROI on the automation.
Table 3 provides the different permutations for automating three business scenarios having equal customer preference. Here, the automation sequence should be B-A-C as it provides maximum reuse. The data in the table is indicative and is provided to clarify the proposed strategy; the sketch below shows how the computation can be carried out.
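In the sketch, the component sets are illustrative (not Table 3's data), and since the paper does not pin down the exact formula, "total reuse percentage" is formalized here, as one plausible reading, as the average per-scenario percentage of components already built by earlier scenarios:

```python
from itertools import permutations

# Hypothetical component sets per business scenario.
scenarios = {
    "A": {"create_order", "credit_check", "book_order"},
    "B": {"create_order", "query_order"},
    "C": {"create_order", "ship_order", "invoice", "receive_payment"},
}

def sequence_reuse(sequence):
    """Average percentage of each scenario's components already built."""
    built, percentages = set(), []
    for name in sequence:
        comps = scenarios[name]
        percentages.append(100.0 * len(comps & built) / len(comps))
        built |= comps
    return sum(percentages) / len(percentages)

# Try every automation sequence and keep the one with maximum reuse.
best = max(permutations(scenarios), key=sequence_reuse)
print(best, f"{sequence_reuse(best):.1f}% average reuse")
```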
TEST SCRIPTS MAINTENANCE AND
PORTING METHODOLOGY
One of the challenges for test management is to identify the impact on test scripts of changes to the underlying forms and to keep the test scripts updated.
The porting methodology provides the ability to identify changes to the underlying GUI [Fig. 8]. It also facilitates measuring the impact of these changes on the test scripts/components.
Why is Porting Methodology Required?
To understand the need for a porting methodology, let us consider a large scale ERP/CRM package implementation as an example. Test automation can be applicable to seeded as well as custom user interfaces. As these ERP/CRM packages are upgraded from time to time, the seeded user interface might change. Similarly, the custom forms can also undergo changes regularly due to process enhancements. Any change to the underlying seeded/custom user interface (whether because of a patch or a code change) can make the test scripts unusable. The challenge here is to identify the change in the underlying form and then to measure the impact on the test scripts.
An effective porting methodology helps identify the impact on the test scripts, thus facilitating script updates and thereby ensuring ready availability of the scripts as and when required. A simplified sketch of such an impact analysis follows.
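Here, the element inventories and script references are hypothetical; a real implementation would extract them from the automation tool's object repository:

```python
# Compare the UI element map of the old and upgraded form, then flag
# the scripts that reference any element that changed.
old_form = {"btn_submit": "Button", "txt_amount": "TextField"}
new_form = {"btn_submit": "Button", "txt_amt": "TextField"}  # renamed field

script_refs = {"withdraw_cash_script": ["btn_submit", "txt_amount"]}

changed = set(old_form) ^ set(new_form)   # elements added or removed
impacted = [s for s, refs in script_refs.items() if changed & set(refs)]
print("Changed elements:", changed)
print("Impacted scripts:", impacted)
```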
TESTING STRATEGIES AND REALIZATION
OF BENEFITS
Some of the challenges for test automation are the availability of scripts, achieving more coverage, faster delivery, reduced maintenance effort, and identification of impact after a new patch is introduced or the underlying user interfaces are enhanced.
The strategies discussed in this paper help address these issues and can make test automation more effective. The correlation between these strategies and benefits can be seen in Figure 9.
Figure 8: Porting Process (Source: Infosys Research)
DIRECTIONS IN THE TEST AUTOMATION
SPACE
Test automation is gaining momentum for package implementations. Considering the frequent product upgrades rolled out by package vendors, automation of regression testing is becoming more and more important.
Test management products are also maturing over the years. They are no longer limited to test automation but address the complete system development life cycle, right from maintaining traceability (with respect to requirements) to management reporting. Vendors that support business process testing can leverage these features to provide maximum benefits to their customers.
CONCLUSION
Testing should be viewed as a value added service, the offering of which adds to the delivery capabilities of the organization. Test automation is picking up fast and testing products are maturing faster than ever. Most organizations confront testing in a piecemeal manner rather than through a comprehensive, strategy-based framework. As is evident, current day enterprises need a strategic viewpoint that looks at the testing function holistically within the context of the system development life cycle. The return on investment (ROI) of the test management function can be measured in terms of the usage and coverage of the test scripts. It can also be measured indirectly by the reduction in the cost of quality achieved by ensuring that test procedures and activities are followed at each phase of the system life cycle and that most of the defects are captured during that phase itself. Effective testing strategies can go hand in hand with advanced product features in providing maximum benefits to the customers.

Figure 9: Realization of Benefits on Adoption of Testing Strategies (Source: Infosys Research)
SETLabs Briefings, Vol 6, No 1, 2008
Model-Based Automated Test Case Generation
By Ravi Gorthi, PhD and Kailash K. P. Chanduka, PhD

Automating the test planning activity is the key to enhancing the productivity and quality of testing services
Test planning and execution are two critical phases of software testing services. In the last decade, the test execution phase has witnessed an increased degree of automation, resulting in enhanced productivity and quality gains. However, test planning activities are still largely manual. One of the main constituents of the test planning phase is the creation and maintenance of test cases, data and scripts. Typically, even today, in many IT organizations, test cases, data and scripts are manually created and maintained from Software Requirements Specifications (SRS) and UML analysis and design models. For medium to large software applications, this manual process is effort intensive and prone to human errors. An NIST study estimates that ineffective software testing costs the U.S. economy as much as $56 billion per year [1]. These indications call for effective and efficient methodologies for the creation and maintenance of test cases, data and scripts.
We present here a methodology to automatically generate test cases from text-based SRS and UML Use-Case Activity Diagrams (UCAD) / InFlux task-flow diagrams. (InFlux is a tool, designed and developed by Infosys Technologies Limited, that facilitates the development of UML analysis models such as use-case activity diagrams, named task-flow diagrams.) Automation of test case generation requires a machine-readable format of the SRS. Hence, the methodology uses a novel concept called behavioral slicing to structure a given SRS into a machine-readable format. This structured format is used to generate the UCAD and test cases automatically. A prototype tool based on the proposed methodology has been designed and developed that offers the following functionality:

- A facility for a business analyst to convert a given SRS into a structured one, using behavioral slicing, and save it as an XML file
- A facility to automatically generate UCAD from the structured SRS
- A facility to automatically generate test cases from the structured SRS or UCAD.
Proof-of-concept (PoC) experiments were carried out on three live projects using the tool. The analysis of results indicated the benefits listed below:

- The process of structuring the SRS using behavioral slicing enables a business analyst to discover ambiguities and incompleteness in the SRS early in the software development life cycle. Such defects, if undetected at an early stage, are more expensive to fix later.
- The facility to automatically generate UCAD from the structured SRS considerably enhances the productivity of the software analysis phase.
- The facility to automatically generate test cases from the structured SRS or UCAD not only improves the productivity of the test planning phase but also enhances the coverage quality of the test cases by systematically traversing all the paths in the UCAD.
In the subsequent sections, the state of the art and the details of the methodology are discussed and illustrated using a case study.
STATE OF THE ART
There are a few approaches in the literature on the generation of test cases from requirements specifications expressed in a natural language. These can mainly be categorized into two groups. One set of approaches uses formal specification languages such as SCR [2] and Z [3] to manually transform the requirements expressed in a natural language into a more formal model and generates the test cases from the formal model. The other set of approaches is based on UML analysis models (e.g., UCAD and state diagrams) and discusses methods to derive test cases from these models [4, 5]. Most of these latter approaches use path analysis techniques to generate the test cases. These approaches have the following main shortcomings in the IT industry context: (a) the use of formal specification languages has been found unsuitable for expressing the requirements of medium to large complex software systems; even for small systems, the use of formal languages proved cumbersome and effort intensive; (b) manual generation of test cases from use-cases largely depends upon the experience and domain skills of the tester and is effort intensive and error prone; and (c) there is a lack of a well-defined structure for expressing requirements from which test cases can be generated automatically.
A methodology is presented here to address the above shortcomings.
OVERVIEW OF THE METHODOLOGY
The methodology serves the following two objectives, based on whether a given IT project has readily available UCAD models or not. If the project has no UCAD models, it takes an unstructured SRS as input, facilitates a business analyst in structuring the SRS and then automatically generates UCAD and test cases. If the project has UCAD models (developed using UCAD support tools like InFlux or IBM Rational Rose), it automatically processes the UCAD models and generates test cases. Accordingly, the methodology consists of two major phases:

Phase-1 - Structuring an SRS: A given SRS is decomposed into a set of use-cases and each use-case is structured using the concept and process of behavioral slicing. The outcome of this phase is the structured SRS.

Phase-2 - Automatic Generation of UCAD and Test Cases: This phase takes the structured SRS as input and generates UCADs. It then processes the UCADs, enumerates all the paths in each UCAD and generates test cases from each path.
STRUCTURING AN SRS
The SRS is the document that describes the expected behavior of the software system to be developed. In the IT industry, many projects use either an unstructured SRS or, at most, a partially structured SRS expressed in proprietary formats, from which test cases are manually generated. For medium to large software applications, the SRS runs into hundreds of pages. It is often difficult to generate test cases from such an SRS, as the lack of structure gives rise to ambiguities. To address these problems, the methodology proposes a process to structure the SRS using behavioral slicing.
Behavioral Slicing: A given SRS is first decomposed into a set of use-cases using OOAD best practices. Then each use-case is sliced into units of behavior. The behavior model of Gerrard [6] is used to define a unit of behavior as a tuple <UI, SP, C, SO>, where:

UI: user inputs
SP: system process
C: a set of conditions on the state of the system
SO: system output
To illustrate this idea, let us consider the use-case Withdraw Cash from the SRS of an Automatic Teller Machine (ATM) system. One of the units of behavior in this case, also captured in the sketch that follows, is:

UI: the user inputs the details of his/her credit or debit card by swiping it
SP: the system processes the card number and checks whether the given card is valid
C: if the card is not valid, then
SO: the system displays an error message.
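For illustration, the unit of behavior can be captured as a simple typed record; this is merely a sketch of the <UI, SP, C, SO> tuple defined above, populated with the ATM example:

```python
from typing import NamedTuple

class UnitOfBehavior(NamedTuple):
    ui: str  # UI: user inputs
    sp: str  # SP: system process
    c: str   # C: conditions on the state of the system
    so: str  # SO: system output

invalid_card = UnitOfBehavior(
    ui="User swipes the credit/debit card",
    sp="System checks whether the card number is valid",
    c="The card is not valid",
    so="System displays an error message",
)
print(invalid_card.so)
```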
Given that the purpose of an SRS is to express the composite behavior of a complex, to-be-developed system, our methodology facilitates a business user to (a) decompose the complex behavior into a set of use-cases, and (b) slice the behavior contained in each use-case into a sequence of units of behavior. Cockburn's use-case template [7] is adapted and modified to slice a use-case's behavior into units of behavior, as shown in Table 1.
Table 1: Use-case Template for Structuring the SRS (Source: Infosys Research)
Most of the fields described in this use-case template are common to other use-case templates followed in practice, except for the steps in User Interactions. The User Interactions steps describe the interactions between the actor and the system to meet the goal of the use-case. These interactions are structured into one or more steps, expressed in a natural language. A step has the form <Type>.
One can note from Table 1 that the types and sequence of interactions exactly correspond to the unit of behavior defined above. In other words, the User Interactions steps of the use-case template facilitate the slicing of the composite behavior of each use-case into units of behavior.
A prototype tool has been developed to assist in structuring a given SRS. The tool offers a GUI with two windows and allows one to import the given SRS into the left window. In the right window, it displays the use-case template of Table 1. The tool facilitates drag-and-drop and editing of slices of units of behavior of each use-case from the left window into the structured template in the right window.
The thus structured SRS is stored in an XML format using the following key tags:

- Start of a Use-case
- User Inputs
- System Process
- Conditions
- System Output
- Goto
- End of a Use-case
It can be observed that the resulting structure is very uniform and offers great benefits like the automatic generation of use-case activity diagrams and test cases. A sketch of how one such unit might be serialized with these tags follows.
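In the sketch, the exact tag spellings and attributes are assumptions (XML tags cannot contain spaces, so the tags listed above are rendered in CamelCase); the unit of behavior is the ATM example discussed earlier:

```python
import xml.etree.ElementTree as ET

# Serialize one unit of behavior of the "Withdraw Cash" use-case
# using the key tags described in the text.
use_case = ET.Element("UseCase", name="Withdraw Cash")
ET.SubElement(use_case, "UserInputs").text = "User swipes the credit/debit card"
ET.SubElement(use_case, "SystemProcess").text = "System checks card validity"
ET.SubElement(use_case, "Conditions").text = "Card is not valid"
ET.SubElement(use_case, "SystemOutput").text = "System displays an error message"

print(ET.tostring(use_case, encoding="unicode"))
```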
AUTOMATIC GENERATION OF UCAD AND
TEST CASES
In this phase, the methodology carries out the following two main steps:

i) Process the structured SRS and generate UCAD models:
a. For each use-case, construct a UCAD as a cyclic di-graph, where the nodes correspond to the tags User Inputs, System Process, Conditions and System Output
b. Save the resulting UCAD model.

ii) Process UCAD models and generate test cases:
a. Traverse the UCAD using a depth-first strategy (DFS) and enumerate all the paths (note: cyclic sub-paths are traversed only once)
b. Each path corresponds to a test scenario; slice each path into units of behavior and print each unit as a test case.
A tool has been designed and developed to automatically generate test cases using the methodology discussed above. A sketch of the path enumeration step appears below.
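The graph encoding and node names in this sketch are illustrative, loosely based on the ATM use-case; the rule that cyclic sub-paths are traversed only once is enforced by never revisiting a node already on the current path:

```python
# DFS-based path enumeration over a UCAD modeled as a directed graph.
def enumerate_paths(graph, node, path=None):
    path = (path or []) + [node]
    successors = [n for n in graph.get(node, []) if n not in path]
    if not successors:            # end of a path => one test scenario
        yield path
    for nxt in successors:
        yield from enumerate_paths(graph, nxt, path)

# UCAD fragment: validate card -> (error | enter PIN), with a retry loop.
ucad = {
    "swipe_card": ["validate_card"],
    "validate_card": ["show_error", "enter_pin"],
    "show_error": [],
    "enter_pin": ["validate_pin"],
    "validate_pin": ["enter_pin", "prompt_transaction"],  # PIN retry cycle
    "prompt_transaction": [],
}

for scenario in enumerate_paths(ucad, "swipe_card"):
    print(" -> ".join(scenario))
```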
CASE STUDY
To demonstrate the efficacy of the proposed methodology in generating test cases, a case study was carried out using the following SRS of an ATM system.
A customer uses the ATM to withdraw cash from her account, query the balance of her account, or transfer funds from one account into another. Assuming the card is recognized, the system validates the ATM card to determine that the card has not expired, that the user has entered the correct PIN (personal identification number), and that the card was not reported as lost or stolen. The customer is allowed three attempts to enter the correct PIN; the card is confiscated if the third attempt fails. Cards that have been reported lost or stolen are also confiscated. If the PIN is validated satisfactorily, the customer is prompted for a withdrawal, query, or transfer transaction. For a withdrawal transaction, the system validates that the user has entered the amount figure correctly. Before the withdrawal transaction is approved, the system determines that sufficient funds exist in the requested account, that the maximum daily limit is not exceeded and that the ATM has sufficient funds in its cash dispenser. If the ATM does not have sufficient funds, then an apology message is displayed and the card is returned. If the transaction is approved, the requested amount of cash is dispensed, and the