8/12/2019 Principles of Automated Testing
Principles of automated testing Quality Assurance
Principles of automated testing
Automated testing is the use of software to control the execution of tests, the
comparison of actual outcomes to predicted outcomes, the setting up of test
preconditions, and other test control and test reporting functions. Commonly, test
automation involves automating an existing manual process that uses a
formalized testing procedure.
In general, Automated Testing can be divided into two groups:
Functional testing
Performance testing
Functional testing is the process of writing a computer program to perform testing
that would otherwise need to be done manually. Once the testing has been automated,
a large number of test cases can be validated quickly. This is most cost-effective for
software products with a long shelf life, because even minor patches over
the lifetime of the application can break features that were working at an
earlier point in time.
There are two ways to design the tests:
Black box testing. The test developer has no knowledge of the inner workings of
the program. The tests cover the cases an end user would run into; their
completeness depends on the test developer's expertise with the
application.
White box testing. The test developer has full knowledge of the inner workings of
the program. The tests ensure each pathway through the source code has been
exercised and is working properly.
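The contrast between the two designs can be sketched in code. The function below and both tests are hypothetical, chosen only to illustrate the distinction: the black-box test is written from the specification alone, while the white-box test is written to exercise each code path.

```python
# Hypothetical function under test: clamp(x, lo, hi) limits x to [lo, hi].
def clamp(x, lo, hi):
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x

# Black-box test: checks observable behaviour only, from the specification.
def test_black_box():
    assert clamp(5, 0, 10) == 5      # value inside the range is unchanged
    assert clamp(-3, 0, 10) == 0     # value below the range is raised to lo
    assert clamp(99, 0, 10) == 10    # value above the range is lowered to hi

# White-box test: written with knowledge of the source, so each of the
# three return statements (code paths) is exercised at least once.
def test_white_box():
    assert clamp(-1, 0, 10) == 0     # path 1: x < lo
    assert clamp(11, 0, 10) == 10    # path 2: x > hi
    assert clamp(5, 0, 10) == 5      # path 3: fall-through

test_black_box()
test_white_box()
print("all tests passed")
```

Note that both tests may pass on the same function; the difference lies in what guided their construction, not in how they are run.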
There are two general approaches to test automation:
Graphical user interface testing. A testing framework generates user interface
events such as keystrokes and mouse clicks, and observes the changes that result in
the user interface, to validate that the observable behavior of the program is
correct.
Code-driven testing. The public (usually) interface to classes, modules, or libraries
are tested with a variety of input arguments to validate that the results that are
returned are correct.
It makes sense to use graphical user interface testing for regression testing of
functionality that is already developed. For example, once a software product has
been released to production, each new release of the software could cause existing
features to fail. To prevent this, it is wise to create a set of regression test cases
that are run with each new release. Many test automation tools provide record-and-
playback features that let users record their actions interactively and replay them
any number of times, comparing actual results to those expected. However,
this approach can cause reliability and maintainability problems. That's why almost
all automated testing tools also support manual writing of automated test cases in
various scripting languages (e.g. JavaScript, Perl, Python, or VBScript).
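The record-and-playback idea can be reduced to a minimal sketch: a recorded session is a list of steps replayed against the application, with the final state compared to the result captured at recording time. Everything below is illustrative; the "application" is a toy text field, whereas real tools drive a live GUI.

```python
# Minimal record-and-playback sketch: replay recorded (action, argument)
# steps and compare the actual outcome to the outcome recorded earlier.

def replay(steps, expected):
    field = ""                        # toy UI state: a single text field
    for action, arg in steps:
        if action == "type":
            field += arg              # simulate keystrokes
        elif action == "clear":
            field = ""                # simulate clearing the field
        else:
            raise ValueError("unknown action: " + action)
    # pass/fail decision: does the replayed state match the recording?
    return field == expected

recorded = [("type", "hello "), ("type", "world")]
print(replay(recorded, "hello world"))   # True: playback matches recording
```

The maintainability problem mentioned above shows up even here: if the application changes (say, the field trims whitespace), every recorded session embeds the old behaviour and must be re-recorded, which is why hand-written scripts usually age better.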
Automated testing can, to some extent, be compared to software development.
Code-driven testing can be performed from the beginning of the project, with the
help of tools such as JUnit, which let developers write unit tests
that determine whether various sections of the code behave as expected under
various circumstances. Test cases describe the tests that need to be run on the program
to verify that it runs as expected.
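The text names JUnit, which is a Java tool; as a sketch of the same style, here is a hypothetical unit under test exercised with Python's analogous unittest module. The discount function and its test cases are invented for illustration.

```python
import unittest

# Hypothetical unit under test: a simple discount calculation.
def discounted_price(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class DiscountTest(unittest.TestCase):
    # Each test method checks one expectation, JUnit-style.
    def test_no_discount(self):
        self.assertEqual(discounted_price(50.0, 0), 50.0)

    def test_half_price(self):
        self.assertEqual(discounted_price(80.0, 50), 40.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            discounted_price(10.0, 150)

if __name__ == "__main__":
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTest)
    unittest.TextTestRunner(verbosity=2).run(suite)
```

Because such tests run in seconds, they can be executed on every build, which is what makes code-driven testing viable from day one of a project.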
Automated testing tool selection
Testing tools can help automate tasks such as product installation, test data
creation, GUI interaction, problem detection (consider parsing or polling agents
equipped with oracles), defect logging, etc.
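A "parsing agent equipped with an oracle" can be as simple as a script that scans captured log output and decides pass/fail from known failure patterns. The patterns below are illustrative, not drawn from any particular tool.

```python
import re

# Sketch of a parsing oracle: scan captured log output and flag lines
# that match failure patterns. The patterns are illustrative examples.
FAILURE_PATTERNS = [
    re.compile(r"\bERROR\b"),
    re.compile(r"\bTraceback\b"),
    re.compile(r"assertion failed", re.IGNORECASE),
]

def oracle_verdict(log_text):
    """Return the log lines that match any failure pattern."""
    defects = []
    for line in log_text.splitlines():
        if any(p.search(line) for p in FAILURE_PATTERNS):
            defects.append(line)
    return defects

log = "INFO service started\nERROR connection refused\nINFO retrying"
print(oracle_verdict(log))   # ['ERROR connection refused']
```

In a real pipeline the flagged lines would feed the defect-logging step rather than just being printed.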
Be aware of the following criteria when selecting an automated testing tool:
Platform and OS independence
Data driven capability
Customizable Reporting
Email Notifications
Easy debugging and logging
Version control friendly
Extensible and customizable
Common Driver (Ant or Maven)
Headless execution for unattended runs
Support for a distributed execution environment
Framework approach in automated testing
An automated testing framework is an integrated system that sets the rules of
automation for a specific product. The system integrates function libraries, test
data sources, object details, and various reusable modules. These components act as
small building blocks that are assembled in a regular fashion to represent a
business process.
Major automated testing frameworks approaches are:
Data-driven testing, where test scripts are executed and verified based on data
values stored in one or more central data sources or databases. These sources
can range from datapools and ODBC sources to CSV files, Excel files, DAO objects,
and ADO objects. Data-driven testing combines several interacting test
scripts with their related data sources in a single framework. In this framework,
variables are used both for input values and for output
verification values; navigation through the program, reading of the data sources,
and logging of test status and information are all coded in the test script. Thus, the
logic executed in the script also depends on the data values.
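A minimal data-driven sketch: the inputs and expected outputs live in a data source, and one generic script iterates over the rows. A CSV string stands in for the data source here, and the `add` function is a hypothetical unit under test; Excel sheets or databases work the same way.

```python
import csv
import io

def add(a, b):                # hypothetical function under test
    return a + b

# Data source: each row is one test case (inputs plus expected output).
CSV_DATA = """a,b,expected
1,2,3
10,-4,6
0,0,0
"""

def run_data_driven(csv_text):
    """Run every row; return the rows whose actual result was wrong."""
    failures = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        actual = add(int(row["a"]), int(row["b"]))
        if actual != int(row["expected"]):
            failures.append((dict(row), actual))
    return failures

print(run_data_driven(CSV_DATA))   # []  (empty list: every row passed)
```

The payoff is that adding a test case means adding a data row, not writing new script logic.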
Keyword-driven testing where test cases are written based on the keywords defined
by automated testing engineers. The test is executed using a driver that reads the
keywords and executes the corresponding code.
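The keyword-driven driver can be sketched as a dispatch table: test cases are rows of (keyword, argument), and the driver maps each keyword to the code that implements it. The keywords and the toy cart application below are illustrative.

```python
# Keyword-driven sketch: a driver reads keywords and executes the
# corresponding code. Keywords and the toy Cart app are illustrative.

class Cart:
    def __init__(self):
        self.items = []

def add_item(cart, name):
    cart.items.append(name)

def check_count(cart, expected):
    assert len(cart.items) == int(expected), "unexpected item count"

# The keyword vocabulary, maintained by the automation engineers.
KEYWORDS = {"add_item": add_item, "check_count": check_count}

def run_test(steps):
    cart = Cart()
    for keyword, arg in steps:          # the driver reads the keywords...
        KEYWORDS[keyword](cart, arg)    # ...and executes the matching code

test_case = [("add_item", "book"), ("add_item", "pen"), ("check_count", "2")]
run_test(test_case)
print("keyword-driven test passed")
```

Because the test cases are plain data, they can be authored in spreadsheets by people who never touch the driver code.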
Modularity-driven testing that requires the creation of small, independent scripts
that represent modules, sections, and functions of the application-under-test. These
small scripts are then used in a hierarchical fashion to construct larger tests,
realizing a particular test case. It is a well-known programming strategy to build an
abstraction layer in front of a component to hide the component from the rest of the
application. This insulates the application from modifications in the component and
provides modularity in the application design. The test script modularity framework
applies this principle of abstraction or encapsulation in order to improve the
maintainability and scalability of the automated test suite.
Performance testing
Performance testing is testing that is performed, from one perspective, to determine
how fast some aspect of a system performs under a particular workload. It can also
serve to validate and verify other quality attributes of the system, such as
scalability, reliability and resource usage.
Performance testing can serve different purposes. It can demonstrate that the
system meets performance criteria. It can compare two systems to find which
performs better. Or it can measure what parts of the system or workload cause the
system to perform badly. In the diagnostic case, software engineers use tools such
as profilers to measure what parts of a device or software contribute most to the
poor performance, or to establish throughput levels (and thresholds) required to
maintain an acceptable response time. The later a performance defect is detected, the higher the
cost of remediation.
There are several types of performance testing:
load testing
stress testing
endurance testing
A load test is usually conducted to understand the behavior of the application under
a specific expected load. This load can be the expected number of concurrent users
performing a specific number of transactions within a set
duration. The test reports the response times of all the important,
business-critical transactions. If the database, application server, etc. are also monitored,
this simple test can itself point towards bottlenecks in the application.
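A load test can be sketched with a thread pool: N concurrent "users" each perform one transaction while response times are collected. The transaction below is simulated with a sleep; a real load test would call the system under test over the network, and the user count and timings are illustrative.

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def transaction():
    """One simulated user transaction; returns its response time."""
    start = time.perf_counter()
    time.sleep(0.01)                 # stand-in for real server work
    return time.perf_counter() - start

def load_test(concurrent_users):
    # Run one transaction per simulated user, all concurrently.
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        times = list(pool.map(lambda _: transaction(),
                              range(concurrent_users)))
    return {
        "users": concurrent_users,
        "avg_s": round(statistics.mean(times), 4),
        "max_s": round(max(times), 4),
    }

print(load_test(20))
```

Re-running `load_test` with increasing user counts and watching where `avg_s` starts to climb is exactly the "point towards the bottleneck" step described above.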
Stress testing is normally used to break the application. The number of
users is repeatedly doubled and the test is run again until the application
breaks down. This kind of test determines the application's robustness under
extreme load and helps application administrators determine whether the
application will perform adequately if the current load rises well above the expected
load.

Endurance testing is usually done to determine whether the application can sustain the
continuous expected load. Generally, this test is used to determine whether there are any
memory leaks in the application.
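The memory-leak check behind an endurance test can be sketched with Python's tracemalloc: run the same operation many times and compare memory before and after. The leaky and clean operations below are deliberately artificial examples.

```python
import tracemalloc

leaked = []        # the leaky version keeps appending here and never frees
CHUNK = 1024

def leaky_operation():
    leaked.append("y" * CHUNK)    # retains ~1 KB per call: a leak

def clean_operation():
    data = "y" * CHUNK            # released when the function returns
    return len(data)

def memory_growth(op, iterations=1000):
    """Bytes of traced memory retained after running op repeatedly."""
    tracemalloc.start()
    before, _ = tracemalloc.get_traced_memory()
    for _ in range(iterations):
        op()
    after, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return after - before

grew = memory_growth(leaky_operation)
stable = memory_growth(clean_operation)
print(grew > stable)   # True: the leaky operation retains memory
```

In a real endurance run the load would be sustained for hours or days; the principle is the same, with steadily growing memory use as the symptom.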
It is always advisable to keep the test automation team separate from the test execution team, as the two require entirely different skills.
The choice of testing tool may be based on whether the product to be tested has a GUI, a command-line interface, or a web interface, and on whether network-related testing is required, among
other considerations.
The result directory for the test suite should also contain the complete result logs for all the tests that failed. These logs help in analyzing the failures and debugging the test scripts, if required. To make this analysis easy and smooth, it is important that the test developer provide meaningful return and error messages in the tests.
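A meaningful failure message reports what was expected, what actually happened, and in which step, so a failed run can be analyzed from the result log alone. The helper and the `normalize` function below are hypothetical examples.

```python
# Sketch of meaningful failure messages in test scripts.

def check(step, actual, expected):
    """Assert equality with a message naming the step and both values."""
    assert actual == expected, (
        f"step '{step}': expected {expected!r}, got {actual!r}"
    )

def normalize(name):               # hypothetical function under test
    return name.strip().lower()

check("trim spaces", normalize("  Alice "), "alice")   # passes silently

try:
    check("bad case", normalize("Bob"), "BOB")
except AssertionError as err:
    print(err)    # step 'bad case': expected 'BOB', got 'bob'
```

Compare this with a bare `assert actual == expected`, which fails without saying which step broke or what the wrong value was.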
Manual vs. automated testing

Automated and manual tests are not mutually exclusive. They must coexist to improve overall testing productivity, and the testing process benefits most from an optimal mixture of automated and manual tests. The automated tests should usually be those that cover the most important features of the product and are likely to be executed in every regression run. It will never be possible to automate all the test suites. Some tests cannot be automated because the tool or testing framework does not support automation; for example, with a console-testing tool, the automation of GUI tests is not possible. Other tests cannot be automated because the product under test requires manual hardware intervention to execute them.
Apart from these, there can be tests whose automation is not very cost-effective. They
might require enormous amounts of developer time for automation and maintenance
while testing a small, relatively unimportant feature of the product under test.