PERFORMANCE TESTING MANUAL
Submitted By
Table of contents
Introduction
What is Performance Testing?
Purpose of Performance Testing
Performance Testing Sub-Genres
Load Testing
Stress Testing
Volume Testing
Capacity Testing
Reliability/Recovery Testing
Endurance /Soak Testing
Spike testing/Scalability
Smoke Testing
Difference between Functional Testing and Performance Testing
Attributes, Components and Goals of Performance Testing
Attributes of Performance Testing
Identification of Components for Testing
Functional, business critical features
Components that process high volumes of data
Components which are commonly used
Components interfacing with one or more application systems
Performance Testing Goals
Performance Testing Workflow
Key Activities in Performance Testing
Requirement Analysis/Gathering
POC/Tool selection
Performance Test Plan & Design
Performance Test Development
Performance Test Modeling
Test Execution
Test Results Analysis
Report
Tools for Performance Testing
Tools to Perform Test
Apache JMeter
LoadRunner
WebLOAD
Appvance
NeoLoad
LoadUI
WAPT
Loadster
LoadImpact
Rational Performance Tester
Testing Anywhere
OpenSTA
QEngine (ManageEngine)
Loadstorm
CloudTest
Httperf
How to Perform Test?
Load Testing with Apache Jmeter
Building a Basic Test Plan
Add a Thread Group
Add an HTTP Request Defaults
Add an HTTP Cookie Manager
Add an HTTP Request Sampler
Add a View Results in Table Listener
Summary
Testing with Xcode
Introducing the Test Navigator
Create a Test Target
Run the Test and See the Results
Edit the Test and Run It Again
Use the setUp() and tearDown() Methods for Common Code
Summary
Testing with WAPT
Analyzing the test report
How to conclude performance tests?
How to Record tests
Performance Testing using Selenium
Conversion of recorded Web performance test into a coded Web performance test
Creating a Coded Web Performance Test
Adding Code to a Web Performance Test
To verify the Web performance test
Risk Addressed by Performance Testing
Speed-Related Risks
Stability-Related Risks
Scalability-Related Risks
Career in Performance Testing
Introduction
What is Performance Testing?
Performance testing is a subset of performance engineering; it is non-functional testing used to
determine how fast aspects of a system perform under a specific workload. It also serves an
emerging computer science practice that strives to build performance into the design and
architecture of a system, prior to the onset of actual coding effort.
Purpose of Performance Testing
No one wants to put up with a slow, unreliable site when purchasing, taking an online test,
paying bills, etc. With the internet so widely available, the alternatives are immense. It is
easier to lose clientele than to gain them, and performance is a key game changer.
It is a comprehensive and detailed stage that determines whether the performance of a site or an
application meets the need. Overall, the purpose of this test is to understand the performance of
the application under load, particularly user load.
Performance Testing Sub-Genres
Load Testing
Load testing is the simplest form of performance testing: it measures the application's
performance under normal and peak usage. Important business-critical transactions are measured,
and the load on the database, application server, etc. is also monitored.
Stress Testing
Stress testing looks for ways to break the system; it also gives an idea of the maximum load the
system can hold. It is generally an incremental approach where the load is increased gradually.
The test starts with a load for which the application has already been tested. Then more load is
slowly added to stress the system, and the point at which the servers stop responding to requests
is considered the break point.
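The incremental approach can be sketched as follows. This is a minimal illustration, not a real stress test: the send_request function and its server_capacity parameter are invented stand-ins for an actual server under load.

```python
import concurrent.futures

def send_request(server_capacity, concurrent_users):
    """Stub for a real HTTP call: succeeds only while the server keeps up."""
    return concurrent_users <= server_capacity

def find_break_point(server_capacity, start_load=10, step=10, max_load=500):
    """Increase the load gradually; the first load level at which requests
    start failing is reported as the break point."""
    load = start_load
    while load <= max_load:
        with concurrent.futures.ThreadPoolExecutor(max_workers=load) as pool:
            results = list(pool.map(
                lambda _: send_request(server_capacity, load), range(load)))
        if not all(results):
            return load  # servers stopped responding at this load
        load += step
    return None  # no break point found within max_load

print(find_break_point(server_capacity=75))  # prints 80
```

The break point reported is the first tested step past the server's real capacity, which is why load step size matters in a real test.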
Volume Testing
Volume testing verifies that the application's performance is not affected by the volume of data
it handles. Generally, with usage, the database size grows, so it is necessary to test the
application against a heavy database.
Capacity Testing
Capacity testing is generally done with future prospects in mind. It determines how many users
and/or transactions a given web application will support while still meeting performance goals.
During this testing, resources such as processor capacity, network bandwidth, memory usage, and
disk capacity are considered and altered to meet the goal.
Online Banking is a perfect example of where capacity testing could play a major part.
Reliability/Recovery Testing
Reliability or Recovery Testing verifies whether the application can return to its normal state
after a failure or abnormal behavior, and also how long it takes to do so (in other words, a time
estimation).
Endurance /Soak Testing
Soak testing, or endurance testing, is performed to determine the system's parameters under
continuous expected load. During soak tests, parameters such as memory utilization are monitored
to detect memory leaks or other performance issues. The main aim is to discover the system's
performance under sustained use.
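As an illustration of watching memory utilization during a soak test, the sketch below uses Python's tracemalloc against a deliberately leaky stub handler. A real soak test would monitor the server process itself over hours, not an in-process stub.

```python
import tracemalloc

leaked = []  # simulates a cache that is never cleared

def handle_request(payload):
    leaked.append(payload * 100)  # deliberate leak for demonstration
    return len(payload)

def soak(iterations=10_000):
    """Run the workload continuously and report memory growth in bytes."""
    tracemalloc.start()
    before, _ = tracemalloc.get_traced_memory()
    for _ in range(iterations):
        handle_request("x")
    after, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return after - before  # bytes still retained after sustained use

growth = soak()
print(f"retained after soak: {growth} bytes")
```

Steadily growing retained memory across a long run, rather than a plateau, is the signature of the leaks a soak test is meant to catch.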
Spike testing/Scalability
Spike testing is performed by increasing the number of users suddenly by a very large amount
and measuring the performance of the system. The main aim is to determine whether the system
will be able to sustain the workload.
Smoke Testing
Smoke testing is a shallow and wide approach in which all areas of the application are tested
without going too deep. It is scripted, using either a written set of tests or an automated test.
It is conducted to check whether the most crucial functions of a program work, without bothering
with finer details.
Difference between Functional Testing and Performance Testing
Attributes, Components and Goals of Performance Testing
Attributes of Performance Testing
Speed
Scalability
Stability
Reliability
Identification of Components for Testing
In an ideal scenario, all components should be performance tested. However, due to time & other
business constraints that may not be possible. Hence, the identification of components for testing
happens to be one of the most important tasks in performance testing.
The following components must be included in performance testing:
#1. Functional, business critical features
Components that have a Customer Service Level Agreement or those having complex business
logic (and are critical for the business’s success) should be included.
Example: Checkout and Payment for an E-commerce site like eBay.
#2. Components that process high volumes of data
Components, especially background jobs are to be included for sure. Example: Upload and
download feature on a file sharing website.
#3. Components which are commonly used
A component that is frequently used by end-users, jobs scheduled multiple times in a day, etc.
Example: Login and Logout.
#4. Components interfacing with one or more application systems
In a system where multiple applications interact with one another, all the interface components
must be deemed critical for performance testing.
Example: E-commerce sites interface with online banking sites for payments, which are external
third-party applications. These should definitely be part of performance testing.
Performance Testing Goals
It is conducted to accomplish the following goals:
Verify the application's readiness to go live.
Verify that the desired performance criteria are met.
Compare performance characteristics/configurations of the application against a standard.
Identify performance bottlenecks.
Facilitate performance tuning.
Performance Testing Workflow
Key Activities in Performance Testing
#1. Requirement Analysis/Gathering
The performance team interacts with the client to identify and gather requirements, both technical
and business. This includes getting information on the application's architecture, the
technologies and database used, intended users, functionality, application usage, test
requirements, hardware & software requirements, etc.
#2. POC/Tool selection
Once the key functionality is identified, POC (proof of concept – which is a sort of
demonstration of the real time activity but in a limited sense) is done with the available tools.
#3. Performance Test Plan & Design
During this activity, a Performance Test Plan is created. This serves as an agreement before
moving ahead and as a road map for the entire activity. Once created, this document is shared
with the client to establish transparency on the type of the application, test objectives,
prerequisites, deliverables, entry and exit criteria, acceptance criteria, etc.
#4. Performance Test Development
Once approved, script development starts with recording the steps in the use cases with the
performance test tool selected during the POC. Scripts are then enhanced with correlation (for
handling dynamic values), parameterization (value substitution), and custom functions as the
situation requires.
#5. Performance Test Modeling
Performance Load Model is created for the test execution. The main aim of this step is to
validate whether the given Performance metrics (provided by clients) are achieved during the test
or not.
#6. Test Execution
Test execution is done incrementally. For example, if the maximum number of users is 100, the
scenario is first run with 10, 25, and 50 users, eventually moving on to 100 users.
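The incremental execution can be sketched like this, using a stub transaction whose latency is assumed to grow with load; the latency model is invented purely for illustration.

```python
import concurrent.futures
import time

def transaction(active_users):
    """Stub transaction: response time degrades as load grows (assumed model)."""
    simulated_latency = 0.001 * active_users
    time.sleep(simulated_latency)
    return simulated_latency

def run_incrementally(max_users=100, steps=(10, 25, 50)):
    """Run the scenario at increasing user counts, ending at max_users,
    and record the average response time at each step."""
    results = {}
    for users in list(steps) + [max_users]:
        with concurrent.futures.ThreadPoolExecutor(max_workers=users) as pool:
            latencies = list(pool.map(transaction, [users] * users))
        results[users] = sum(latencies) / len(latencies)
    return results

for users, avg in run_incrementally().items():
    print(f"{users:>3} users: avg response {avg:.3f}s")
```

Comparing the per-step averages shows where response time starts to climb, which is the point worth investigating before pushing on to the maximum user count.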
#7. Test Results Analysis
Test results are the most important deliverable for the performance tester. This is where we can
prove the ROI (Return on Investment) and productivity that a performance testing effort can
provide.
#8. Report
Test results should be simplified so that the conclusion is clear and needs no further derivation.
The development team needs more information on the analysis, a comparison of results, and details
of how the results were obtained.
The final report to be shared with the client has the following information:
Execution Summary
System Under Test
Testing Strategy
Summary of Test Results
Problems Identified
Recommendations
Along with the final report, all the deliverables as per the test plan should be shared with the client.
Tools for Performance Testing
To obtain fast and reliable results with limited resources, tools are often used for this
process. There is a variety of tools available in the market: licensed, freeware, and open
source.
1. Apache JMeter
Description: Open source load testing tool: it is a Java platform application. This tool can load
a server or network in order to check its performance and analyze its behavior under different
conditions. It is of great use in testing the performance of resources such as Servlets, Perl
scripts, and Java objects. It needs JVM 1.4 or higher to run.
Apache JMeter System Requirements: It works under Unix and Windows OS.
2. LoadRunner
Description: This can be bought as an HP product from its HP Software division. It is very useful
for understanding and determining the performance and outcome of the system under actual load.
One of the key attractive features of this testing tool is that it can create and handle
thousands of users at the same time. LoadRunner comprises several tools, namely the Virtual User
Generator, Controller, Load Generator, and Analysis.
LoadRunner System Requirements: Microsoft Windows and Linux are the supported operating systems
for this tool.
3. WebLOAD
Description: WebLOAD lets you perform load and stress testing on any internet application
using Ajax, Adobe Flex, .NET, Oracle Forms, HTML5 and many more technologies. You can
generate load from the cloud and on-premises machines. WebLOAD’s strengths are its ease of
use with features like DOM-based recording/playback, automatic correlation and JavaScript
scripting language.
WebLOAD System Requirements: Windows, Linux.
4. Appvance
Description: Appvance Performance Cloud is a broad-scale platform targeted at enterprise clients
that can fully exercise apps from beginning (at the UX) to end. The platform is used to surface
deep app and site issues, from functional to load, stress, and performance to APM. It is
compatible with Selenium, JMeter, Dynatrace, Oracle Forms, Flash/Flex, and more, and can test up
to 10M concurrent users.
Appvance System Requirements: Runs in private or public cloud.
5. NeoLoad
Description: Load and performance testing software: this is a tool for measuring and analyzing
the performance of websites. It analyzes the performance of a web application by increasing the
traffic to the website, so that performance under heavy load can be determined. You get to know
the capacity of the application and the number of users it can handle at the same time. It is
available in two languages: English and French.
NeoLoad System Requirements: This tool is compatible with operating systems like Microsoft
Windows, Linux, and Solaris.
6. LoadUI
Description: Open source stress testing tool: LoadUI is another open source load testing tool
used for measuring the performance of web applications. It works effectively when integrated with
the functional testing tool soapUI. LoadUI is among the most flexible and interactive testing
tools: it allows you to create, configure, and update your tests while the application is being
tested. It also gives the user a visual aid with a drag-and-drop experience. You need not restart
LoadUI each time you modify the application; it is automatically updated in the interface.
System Requirements: Cross platform.
7. WAPT
Description: Performance testing tool for websites and intranet applications: WAPT stands for Web
Application Performance Tool. It is a tool for measuring the performance and output of a web
application or web-related interface. With this tool you can test web application performance in
various environments and under different load conditions. WAPT provides its users with detailed
information about the virtual users and their output during load testing. WAPT can also test a
web application's compatibility with browsers and operating systems, and in certain cases its
compatibility with Windows applications.
WAPT System Requirement: Windows OS is required for this testing tool.
8. Loadster
Description: Loadster is an advanced desktop-based HTTP load testing tool. Scripts are easy to
record and use: the web browser can be used to record them, and through the GUI you can extend
the basic script with dynamic variables and response validation. With control over network
bandwidth you can simulate a large virtual user base for your application's stress tests. After a
test is executed, an HTML report is generated for analysis. This tool is well suited to
identifying performance bottlenecks in your application.
Loadster System Requirements: Windows 7/Vista/XP.
9. LoadImpact
Description: LoadImpact is a load testing tool mainly used for cloud-based testing. It also helps
with website optimization and improving the working of any web application. LoadImpact comprises
two main parts: the load testing tool and the page analyzer. Load tests come in three types:
Fixed, Ramp-up, and Timeout. The page analyzer works like a browser and gives information
regarding the workings and statistics of the website.
System Requirement: This works well on Windows OS and Linux.
10. Rational Performance Tester
Description: Rational Performance Tester is an automated performance testing tool that can be
used for web applications or server-based applications where a process of input and output is
involved. The tool creates a demo of the original transaction process between the user and the
web service. At the end, all the statistical information is gathered and analyzed to increase
efficiency. Any leakage in the website or the server can be identified and rectified immediately
with the help of this tool.
Rational Performance Tester System Requirements: Microsoft Windows and Linux/AIX are good enough
for this performance testing tool.
11. Testing Anywhere
Description: Testing Anywhere is an automated testing tool that can be employed for testing the
performance of websites, web applications, or other objects. It is a powerful tool that can test
any application automatically. The tool involves five simple steps to create a test: the object
recorder, the advanced web recorder, the SMART test recorder, image recognition, and an editor
with 385+ commands.
Testing Anywhere System Requirement: This tool is compatible with all versions of Windows
OS.
12. OpenSTA
Description: Open source HTTP performance test tool: OpenSTA stands for Open System Testing
Architecture. It is a GUI-based performance tool used by application developers for load testing
and analysis. It has proven capability, and the current toolset can perform heavy load tests and
analyses for scripted HTTP and HTTPS. Testing is carried out using recordings and simple scripts,
and the data and results can later be exported to report-creation software. It is a free testing
tool distributed under the GNU GPL, and it will remain free forever.
OpenSTA System Requirements: OpenSTA runs only on the Windows operating system.
13. QEngine (ManageEngine)
Description: QEngine (ManageEngine) is a common and easy-to-use automated testing tool that helps
with performance testing and load testing of web applications. The key feature of this testing
tool is its ability to perform remote testing of web services from any geographical location.
Beyond that, QEngine (ManageEngine) also offers various other testing options such as
functionality testing, compatibility testing, stress testing, load testing, and regression
testing.
QEngine System Requirement: This tool works with the Microsoft Windows and Linux.
14. Loadstorm
Description: Cloud load testing for web applications: Loadstorm is among the cheapest available
performance and load testing tools. You have the option of creating your own test plans, testing
criteria, and testing scenarios. You can generate up to 50,000 concurrent users of traffic to
your website and then carry out the testing. The tool uses cloud infrastructure, which enables
you to send a huge number of requests per second. No scripting knowledge is needed to use this
tool. You are provided with many graphs and reports that measure performance in various metrics
such as error rates, average response time, and number of users.
Loadstorm System Requirement: Windows OS.
15. CloudTest
Description: SOASTA CloudTest is a performance testing tool for the cloud. CloudTest can enable a
large number of users to use a website at the same time. It is not a free service; the price
differs according to the number of load-injector machines you require per hour.
SOASTA CloudTest System Requirement: It runs on Windows, Linux and Mac OS.
Plus one more:
16. Httperf
Description: Httperf is a high-performance testing tool for measuring and analyzing the
performance of web services and web applications. It is mainly used to test HTTP servers and
their performance. The main objective of this testing tool is to count the number of responses
generated by a particular server. It generates HTTP GET requests to the server, which helps in
summarizing the server's overall performance. The ability to sustain server overload, support for
the HTTP/1.1 protocol, and compatibility with new workloads are the three key features of this
performance testing tool.
Httperf System Requirement: Windows and Linux.
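In miniature, what httperf measures can be illustrated with Python's standard library: start a local server, issue GET requests, and count the responses. This is a toy stand-in for illustration only, not httperf itself.

```python
import http.server
import threading
import urllib.request

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):  # silence per-request logging
        pass

def count_responses(num_requests=20):
    """Issue GET requests against a local server and count the 200 replies."""
    server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    url = f"http://127.0.0.1:{server.server_address[1]}/"
    ok = 0
    for _ in range(num_requests):
        with urllib.request.urlopen(url) as resp:
            if resp.status == 200:
                ok += 1
    server.shutdown()
    return ok

print(count_responses())  # prints 20
```

httperf does this at scale, sustaining a fixed request rate against the target server rather than a simple sequential loop.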
How to Perform Test?
Load Testing with Apache Jmeter
JMeter is an open source desktop Java application that is designed to load test and measure
performance. It can be used to simulate loads of various scenarios and output performance data
in several ways, including CSV and XML files, and graphs. Because it is 100% Java, it is
available on every OS that supports Java 6 or later.
Here is a list of the software, with links to archives, required to run JMeter:
Oracle Java or OpenJDK (6 or later)
Apache JMeter
Building a Basic Test Plan
After starting JMeter, you should see the graphical user interface with an empty Test Plan:
A test plan is composed of a sequence of test components that determine how the load test will
be simulated. We will explain how some of these components can be used as we add them to our
test plan.
Add a Thread Group
First, add a Thread Group to Test Plan:
1. Right-click on Test Plan
2. Mouse over Add >
3. Mouse over Threads (Users) >
4. Click on Thread Group
Add an HTTP Request Defaults
The HTTP Request Defaults Config Element is used to set default values for HTTP Requests in
our test plan. This is particularly useful if we want to send multiple HTTP requests to the same
server as part of our test. Now let's add HTTP Request Defaults to Thread Group:
1. Select Thread Group, then Right-click it
2. Mouse over Add >
3. Mouse over Config Element >
4. Click on HTTP Request Defaults
In HTTP Request Defaults, under the Web Server section, fill in the Server Name or IP field
with the name or IP address of the web server you want to test. Setting the server here makes it
the default server for the rest of the items in this thread group.
Add an HTTP Cookie Manager
If your web server uses cookies, you can add support for cookies by adding an HTTP Cookie
Manager to the Thread Group:
1. Select Thread Group, then Right-click it
2. Mouse over Add >
3. Mouse over Config Element >
4. Click on HTTP Cookie Manager
Add an HTTP Request Sampler
Now you will want to add an HTTP Request sampler to Thread Group, which represents a page
request that each thread (user) will access:
1. Select Thread Group, then Right-click it
2. Mouse over Add >
3. Mouse over Sampler >
4. Click on HTTP Request
Add a View Results in Table Listener
In JMeter, listeners are used to output the results of a load test. A variety of listeners are
available, and others can be added by installing plugins. We will use the View Results in Table
listener because it is easy to read.
1. Select Thread Group, then Right-click it
2. Mouse over Add >
3. Mouse over Listener >
4. Click on View Results in Table
Summary
JMeter can be a very valuable tool for determining how your web application server setup should
be improved, to reduce bottlenecks and increase performance. Now that you are familiar with the
basic usage of JMeter, feel free to create new test plans to measure the performance of your
servers in various scenarios.
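Since JMeter can write results to CSV, a short script can summarize a run. The sketch below assumes JMeter's default CSV (.jtl) column headers (timeStamp, elapsed, responseCode, success); the sample rows are invented for illustration.

```python
import csv
import io

# Sample rows in JMeter's default CSV (.jtl) format; a real run produces
# this via a listener's "Write results to file" option or the -l flag.
JTL = """timeStamp,elapsed,label,responseCode,success
1000,120,HTTP Request,200,true
1100,250,HTTP Request,200,true
1200,900,HTTP Request,500,false
"""

def summarize(jtl_text):
    """Compute basic load-test statistics from JTL CSV text."""
    rows = list(csv.DictReader(io.StringIO(jtl_text)))
    elapsed = [int(r["elapsed"]) for r in rows]
    failures = sum(1 for r in rows if r["success"] != "true")
    return {
        "samples": len(rows),
        "avg_ms": sum(elapsed) / len(elapsed),
        "max_ms": max(elapsed),
        "error_rate": failures / len(rows),
    }

print(summarize(JTL))
```

For larger runs, the same idea applies per label, so each sampler's response times and error rates can be compared across scenarios.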
Testing with Xcode
Xcode provides you with capabilities for extensive software testing. Testing your projects
enhances robustness, reduces bugs, and speeds the acceptance of your products for distribution
and sale. Well-tested apps that perform as expected improve user satisfaction. Testing can also
help you develop your apps faster and further, with less wasted effort, and can be used to help
multiperson development efforts stay coordinated.
Introducing the Test Navigator
You’ll use the Xcode test navigator often when you are working with tests.
The test navigator is the part of the workspace designed to ease your ability to create, manage,
run, and review tests. You access it by clicking its icon in the navigator selector bar, located
between the issue navigator and the debug navigator. When you have a project with a suite of
tests defined, you see a navigator view similar to the one shown here.
The test navigator shown above displays a hierarchical list of the test bundles, classes, and
methods included in a sample project. This particular project is a sample calculator app. The
calculator engine is implemented as a framework. You can see at the top level of the hierarchy
the SampleCalcTests test bundle, for testing the code in the application.
Create a Test Target
With the test navigator open, click the Add button (+) in the bottom-left corner and choose New
Test Target from the menu.
In the new target assistant that appears, edit the Product Name and other parameters to your
preferences and needs.
Run the Test and See the Results
Now that you’ve added testing to your project, you want to develop the tests to do something
useful. But first, hold the pointer over the SampleCalcTests test class in the test navigator and
click the Run button to run all the test methods in the class. A test failure is indicated in the test
navigator. Click the testExample method to see the following view of the test results, source
code, and highlighted error condition:
The test failed because the test class template contains a default test method that calls
XCTFail(), an assertion that forces an unconditional failure.
Edit the Test and Run It Again
Because this sample project is a calculator app, you might decide first to check whether it
performs addition correctly. Because tests are built in the app project, you can add all the context
and other information needed to perform tests at whatever level of complexity makes sense for
your needs.
Insert the following #import and instance variable declarations into the SampleCalcTests.m file:
#import <XCTest/XCTest.h>
//
// Import the application specific header files
#import "CalcViewController.h"
#import "CalcAppDelegate.h"
@interface CalcTests : XCTestCase {
// add instance variables to the CalcTests class
@private
NSApplication *app;
CalcAppDelegate *appDelegate;
CalcViewController *calcViewController;
NSView *calcView;
}
@end
Then give the test method a descriptive name, such as testAddition, and add the implementation
source for the method.
- (void) testAddition
{
// obtain the app variables for test access
app = [NSApplication sharedApplication];
calcViewController = (CalcViewController*)[[NSApplication sharedApplication] delegate];
calcView = calcViewController.view;
// perform two addition tests
[calcViewController press:[calcView viewWithTag: 6]]; // 6
[calcViewController press:[calcView viewWithTag:13]]; // +
[calcViewController press:[calcView viewWithTag: 2]]; // 2
[calcViewController press:[calcView viewWithTag:12]]; // =
XCTAssertEqualObjects([calcViewController.displayField stringValue], @"8", @"Part 1 failed.");
[calcViewController press:[calcView viewWithTag:13]]; // +
[calcViewController press:[calcView viewWithTag: 2]]; // 2
[calcViewController press:[calcView viewWithTag:12]]; // =
XCTAssertEqualObjects([calcViewController.displayField stringValue], @"10", @"Part 2 failed.");
}
Use the setUp() and tearDown() Methods for Common Code
Using setUp and tearDown is simple. From the testAddition source in Mac_Calc_Tests.m, cut the
four lines starting with // obtain the app variables for test access and paste them into the
default setUp instance method provided by the template.
- (void)setUp
{
[super setUp];
// Put setup code here. This method is called before the invocation of each test method in the
class.
// obtain the app variables for test access
app = [NSApplication sharedApplication];
calcViewController = (CalcViewController*)[[NSApplication sharedApplication] delegate];
calcView = calcViewController.view;
}
Summary
Xcode sets up most of the basic testing configuration. When you add a test target, Xcode creates
the test bundle files for the project, adds the test bundle to the test navigator, adds the XCTest
framework to the project, and provides templates for test classes and test methods.
Testing with WAPT
WAPT, a website performance tool, performs tests by emulating the activity of many virtual users.
Each virtual user can have its own profile settings. You can have thousands of virtual users
acting simultaneously on your website, performing any activity, like reading from or writing to
your web server. Once you set the number of virtual users acting on your website, you have the
option to run your tests for a specified time or a specified number of user sessions.
Analyzing the test report
Test results consist of charts, updated in real time, which you can monitor while your tests are
running. A final comprehensive report is provided at the end of the tests.
Here are the important parameters to monitor in the test report:
Error Rate: The failure rate against the total number of tests run. Errors may be due to high
load on the server or to network problems and timeouts.
Response Time: Obviously a great parameter to check when you run tests for website performance.
The response time indicates the time the server requires to provide a correct reply to a request.
Number of pages per second: The number of page requests successfully completed by the server per
second.
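These three parameters can be computed directly from raw samples. The sketch below is a generic illustration, not WAPT's own code, and the sample numbers are invented.

```python
def report(samples, duration_seconds):
    """samples: list of (response_time_seconds, succeeded) tuples
    collected during a load test run."""
    total = len(samples)
    errors = sum(1 for _, ok in samples if not ok)
    completed = total - errors
    return {
        "error_rate": errors / total,
        "avg_response_time": sum(t for t, _ in samples) / total,
        "pages_per_second": completed / duration_seconds,
    }

samples = [(0.2, True), (0.4, True), (1.5, False), (0.3, True)]
print(report(samples, duration_seconds=2.0))
```

Note that only successful requests count toward pages per second, while all requests count toward the error rate and average response time.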
How to conclude performance tests?
These performance criteria change with each test pass under different load conditions. You need
to conclude what your acceptable load limit is and whether your server is able to serve that
load.
E.g., if you expect your server to handle 100 requests per second successfully, then anything
below this is a failure of your server that needs to be tackled.
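That pass/fail decision (the 100-requests-per-second figure in the example) is a simple threshold check; the numbers below are invented for illustration.

```python
def server_meets_sla(successful_requests, duration_seconds, required_rps=100):
    """Conclude the test: pass only if measured throughput meets the
    expected requests-per-second target."""
    measured_rps = successful_requests / duration_seconds
    return measured_rps >= required_rps

print(server_meets_sla(successful_requests=5800, duration_seconds=60))  # about 96.7 rps, prints False
```

The same check can be repeated per test pass, so each load condition gets its own verdict against the agreed limit.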
How to Record tests
WAPT works like any other record-and-playback tool, but its real strength is its
parameterization, which lets you configure any parameter, from the website URL to the user
session, so that the test acts like a real user.
Testing with WAPT in 5 simple steps: Record -> Configure -> Verify -> Execute -> Analyze
WAPT is available in two versions: the Standard version (latest: WAPT 7.1) and the Professional
version (latest: WAPT Pro 2.1).
Performance Testing using Selenium
Performance testing using Selenium ensures a web application's functions operate within an
acceptable amount of time at various levels of concurrent user load. A load and performance test
identifies the scalability index, ensuring the application achieves its Service Level Agreement
(SLA) as more and more users operate the application concurrently.
While Selenium does well at testing Web applications for proper functioning, there is an easy
way to repurpose Selenium functional tests to be load and performance tests.
Selenium scripts implement a workflow. For example, a functional Selenium test has a set-up
method, followed by a test that interacts with the application under test, and finally a teardown
method. Set-up connects to a data source and prepares the first batch of operational test data.
Run this as a functional test by running a single thread that operates the test use case once
from top to bottom. The teardown method terminates the data source connection.
The results show the pass/fail status of each Selenium command in the use case and the time each
Selenium command took to operate.
Load and performance testing surfaces the root causes of performance bottlenecks in a web
application by running the test at various levels of concurrent virtual users. The results show
how close the application under test gets to linear scalability.
In the above chart, the red bar columns show the measured Transactions Per Second (TPS) at 1, 2,
and 4 virtual users. In this example a transaction is the same as a test use case. The ghosted
area behind the red bar columns shows where the application performance should be if the
application under test scaled linearly. The gap indicates that the time it takes to process test
use cases increases at larger numbers of virtual users.
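The TPS-versus-virtual-users measurement can be sketched without Selenium or TestMaker by substituting a placeholder function for the recorded use case; the 50 ms workflow duration is an invented stand-in.

```python
import concurrent.futures
import time

def use_case():
    """Placeholder for a Selenium test use case (set up, interact, tear down)."""
    time.sleep(0.05)  # pretend the recorded workflow takes 50 ms

def measure_tps(virtual_users, transactions_per_user=10):
    """Run the use case concurrently and report transactions per second."""
    start = time.perf_counter()
    with concurrent.futures.ThreadPoolExecutor(max_workers=virtual_users) as pool:
        futures = [pool.submit(use_case)
                   for _ in range(virtual_users * transactions_per_user)]
        concurrent.futures.wait(futures)
    elapsed = time.perf_counter() - start
    return (virtual_users * transactions_per_user) / elapsed

for users in (1, 2, 4):
    print(f"{users} virtual user(s): {measure_tps(users):.1f} TPS")
```

If the application scaled linearly, TPS would double each time the virtual user count doubles; a shortfall at higher counts is the scalability gap the chart describes.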
Selenium tests require a browser. Internet Explorer (IE) is not appropriate because of its size and
complexity. HtmlUnit is a light-weight headless Web browser with Ajax compatibility.
TestMaker runs Selenium tests in the HtmlUnit browser.
Conversion of recorded Web performance test into a coded Web performance test
This walkthrough steps you through converting an existing, recorded Web performance test into a
coded Web performance test. A recorded Web performance test begins as a list of URLs that
represent Web requests; it can be converted to a code-based script. After a Web performance test
has been converted to its coded format, looping and branching constructs can be added, and you
can edit the code like any other source code.
To complete this walkthrough, you need the following:
Visual Studio Ultimate
The Web application that you created.
Web performance test that you created.
Creating a Coded Web Performance Test
To convert an existing Web performance test to a coded Web performance test
1. Choose the Generate Code button on the toolbar in the Web Performance Test Editor.
2. Accept the default name in the dialog box and choose OK.
3. On the Build menu, choose Build Solution.
Adding Code to a Web Performance Test
To add code to a Web performance test
1. Locate the Run() method if your test is in Visual Basic or the GetRequestEnumerator()
method if your test is in C#. You will see code that corresponds to each Web request in
the test.
2. Scroll down to the end of the method, and after the code for the last Web request, add the
following code:
3. On the Build menu, choose Build Solution.
To verify the Web performance test
1. With the coded Web performance test selected in the code editor, open the shortcut menu
and choose Run Coded Web Performance Test.
2. The coded Web performance test runs and the results begin to appear in the Web
Performance Test Results Viewer.
3. In the Web Performance Test Results Viewer, you can run your coded Web performance test
again by choosing the Click here to run again link on the embedded status bar.
Risk Addressed by Performance Testing
Performance testing is indispensable for managing certain significant business risks. For
example, if your website cannot handle the volume of traffic it receives, your customers will
shop somewhere else. Beyond identifying the obvious risks, performance testing can be a useful
way of detecting many other potential problems. While performance testing does not replace
other types of testing, it can reveal information relevant to usability, functionality, security, and
corporate image that is difficult to obtain in other ways.
Many businesses and performance testers find it valuable to think of the risks that performance
testing can address in terms of three categories: speed, scalability, and stability.
Speed-Related Risks
Speed-related risks are not confined to end-user satisfaction, although that is what most people
think of first. Speed is also a factor in certain business and data related risks. Some of the most
common speed-related risks that performance testing can address include:
Is the application fast enough to satisfy end users?
Is the business able to process and utilize data collected by the application before that
data becomes outdated?
Is the application capable of presenting the most current information to its users?
Is a Web Service responding within the maximum expected response time before an error
is thrown?
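The last question, whether responses stay within a maximum expected time, is usually answered with a percentile check against the SLA rather than a simple average. A minimal sketch, assuming a 95th-percentile target (the function name meets_sla and the default percentile are illustrative assumptions, not from any standard):

```python
def meets_sla(response_times_ms, sla_ms, percentile=95):
    """Return True if at least `percentile` percent of the observed
    response times finish within the SLA threshold `sla_ms`.

    An empty sample is treated as passing, since there are no
    violations to report."""
    if not response_times_ms:
        return True
    within = sum(1 for t in response_times_ms if t <= sla_ms)
    return 100.0 * within / len(response_times_ms) >= percentile
```

A percentile check is preferred because a handful of very slow responses can hide behind an acceptable average while still breaking the user experience.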
Stability-Related Risks
Stability is a blanket term that encompasses such areas as reliability, uptime, and recoverability.
Stability risks are commonly addressed with high-load, endurance, and stress tests.
Some common stability risks addressed by means of performance testing include:
Can the application run for long periods of time without data corruption, slowdown, or
servers needing to be rebooted?
If the application does go down unexpectedly, what happens to partially completed
transactions?
When the application comes back online after scheduled or unscheduled downtime, will
users still be able to see/do everything they expect?
Can the system be patched or updated without taking it down?
Scalability-Related Risks
Scalability risks concern not only the number of users an application can support, but also the
volume of data the application can contain and process, as well as the ability to identify when an
application is approaching capacity. Common scalability risks that can be addressed via
performance testing include:
Can the application provide consistent and acceptable response times for the entire user
base?
Can the application store all of the data that will be collected over the life of the
application?
Are there warning signs to indicate that the application is approaching peak capacity?
Will the application still be secure under heavy usage?
Career in Performance Testing
Performance testing is easy to learn but takes a great deal of dedication to master. Like a
mathematics subject, it is built on concepts: once the concepts are solid, they can be applied
to most tools, even though the scripting languages differ, the logic is not always straightforward,
and the look and feel of each tool varies. I would highly recommend this growing field, and
encourage you to enhance your skills by learning it.
Thanks