REAL-TIME PUSH NOTIFICATIONS USING HYBRID CLOUD
Aneesh Pulukkul, Associate Principal Engineer, EMC
2015 EMC Proven Professional Knowledge Sharing 2
Table of Contents
Introduction to Business Model .................................................................................................. 3
What are Push Notifications? ..................................................................................................... 4
Hybrid Cloud Primer ................................................................................................... 5
Hybrid Cloud in Action......................................................................................................... 7
Solution Overview ...................................................................................................................... 8
High Level Flow – How does it work? ..................................................................................... 8
Scalability – How does it support millions of devices? ...............................................................12
Reliability – Is your solution fault tolerant? ................................................................................14
Vendor-supplied features (built-in for platform) ......................................................................14
Design for failure in solution ..................................................................................................14
Data storage – How does Flash Memory Storage help? ...........................................................15
Performance ..........................................................................................................................15
Capacity ................................................................................................................................16
Data Protection and Availability .................................................................................................16
Deployment Model ....................................................................................................................17
Conclusion ................................................................................................................................18
Appendix – List of terms ............................................................................................................19
References ...............................................................................................................................21
Footnotes ..................................................................................................................................22
Disclaimer: The views, processes or methodologies published in this article are those of the
author. They do not necessarily reflect EMC Corporation’s views, processes or methodologies.
Introduction to Business Model
To understand the model better, consider a couple of scenarios applicable in our day-to-day life:
- A soccer game is being played between your favorite teams and you do not want to miss watching the match. However, at the same time, an urgent work matter needs your attention. As the game progresses, you receive score updates sent as notifications to your mobile device. You feel lucky, don’t you?
- You are a smartphone techie who uses a mobile app to get updates on the current events happening in your city.
Now imagine that you are starting a business (bitten by entrepreneur bug!) that creates
interesting mobile applications for the aforementioned scenarios. As you may have inferred from
the scenarios above, a key requirement of the applications is that they need to receive
notifications for a range of purposes.
Once you are ready with the business plan, you decide to develop a platform using technical
resources to achieve this business functionality. At that point, you realize that it is a time-
consuming and costly affair to set up a platform that can support the functionality of sending
millions of notifications. This is where a custom notification service on hybrid cloud becomes
important. It saves time and money by eliminating steps such as procuring and configuring
hardware and software resources, hiring a team to address non-functional business
requirements, and setting up a service team to maintain infrastructure. Sounds promising,
doesn’t it?
Here is the set of features the custom notification service platform offers:
- Comprehensive Notification Service as a Platform as a Service (PaaS1) to send notification messages to mobile applications.
- Web interface to perform administrative tasks and configuration activities.
- Multi-tenant model to support many vendors.
- Secured REST (Representational State Transfer) API (Application Programming Interface) endpoints that can be used by the vendor’s application development team.
- High-performance computing and data storage to provide a close to real-time experience.
- Data protection and availability features.
Ever wondered how to design such solutions that send notifications to millions of users? Is
cloud computing a good choice for such solutions? How can market leaders like EMC and
Microsoft help you build such solutions with industry-leading technologies?
This Knowledge Sharing article answers these questions by covering the basics of push
notifications, hybrid cloud, and a design approach for building the notification service. An
overview of technology products from market leaders is provided that will enable you to address
the challenges and solve problems you may face during the design phase.
What are Push Notifications?
Have you ever wondered how messages and application updates are received on your mobile?
Push Notification Services (PNS) are hosted services through which notifications are sent to
target devices. The benefit of push notifications is that applications in the device need not
constantly pull for updates or messages. There are different implementations of PNS hosted by
various technology firms.
For example, consider Microsoft Push Notification Service (MPNS)2. MPNS enables sending
notifications to Windows Phone devices after an initial registration process. This is achieved by
creating a device-specific Uniform Resource Identifier (URI) and storing it in the MPNS
database. The URI could be time-bound and can be renewed when it expires. Anyone who
wants to send notifications can make use of this device URI once the device shares it. Refer to
Figure 1 for a better understanding of the push notification flow.
Note: Security for cloud and data storage is a vast topic and is outside the scope of this
article.
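To make the registration-and-send flow concrete, here is a minimal sketch of how a sender might construct the HTTP request for an MPNS toast notification once it holds a device URI. The device URI, title, and body are placeholders; the headers and XML payload follow the MPNS toast format, and the request is only built, never sent, in this sketch:

```python
# Sketch: build (but do not send) an MPNS toast notification request.
# The device URI is a placeholder for the value a device would have
# obtained from MPNS during registration.
from urllib import request

def build_toast_request(device_uri: str, title: str, body: str) -> request.Request:
    payload = (
        '<?xml version="1.0" encoding="utf-8"?>'
        '<wp:Notification xmlns:wp="WPNotification">'
        '<wp:Toast>'
        f'<wp:Text1>{title}</wp:Text1>'
        f'<wp:Text2>{body}</wp:Text2>'
        '</wp:Toast>'
        '</wp:Notification>'
    )
    headers = {
        "Content-Type": "text/xml",
        "X-WindowsPhone-Target": "toast",   # toast-style notification
        "X-NotificationClass": "2",         # deliver immediately
    }
    return request.Request(device_uri, data=payload.encode("utf-8"),
                           headers=headers, method="POST")

req = build_toast_request("https://mpns.example/device-123",
                          "Score Update", "Team A 2 - 1 Team B")
```

Actually sending the request would be a `urllib.request.urlopen(req)` call against a real device URI; MPNS replies with status headers describing the delivery outcome.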
Hybrid Cloud Primer
Now that you know about push notifications, the next concept you need to understand before moving on to the solution overview is the hybrid cloud. The cloud paradigm has been
around for a few years and is already implemented by technology companies like Microsoft,
EMC, Amazon, and Google. Whenever you hear the buzzword cloud, it’s likely that you will hear
other buzzwords like public cloud, private cloud, and hybrid cloud. Have you ever thought about
the difference between these three and how they matter? To keep things simple, here is a short
explanation of these three cloud models3:
Public Cloud: In this model, the compute, network, and storage services are available for use
over the Internet. Two examples of public cloud are Amazon Web Services and Microsoft Azure.
Here, the ownership of infrastructure is taken up by cloud vendors.
Private Cloud: In this model, the enterprise takes ownership since the compute, network, and
storage services are within a firewall of an on-premises data center, meaning that these are not
available on a public network such as the Internet.
Figure 1: Push Notifications Flow
Hybrid Cloud: In this model, services of the public cloud and private cloud are used in
conjunction.
Here is an example that is helpful for understanding hybrid cloud:
Microsoft Azure is a public cloud that runs across Microsoft managed data centers across
different regions around the globe. Azure Services provides you with Virtual Machines (VM)
to host a website that captures data from users and to host a worker process that performs
batch processing – essentially, your compute resources are on public cloud. Meanwhile,
you’re also using EMC products for data storage which are set up at your enterprise’s on-
premises data center – meaning your data storage is on private cloud.
Now that you have one of the services – VMs – on the public cloud and the other – data storage – on the private cloud, all that remains is to establish a secure connection between the public cloud and the private cloud to integrate these components. Microsoft Azure
provides features such as ExpressRoute1 and Virtual Network1 to achieve this. While
ExpressRoute uses dedicated carrier partners, Virtual Network is designed based on the
concept of Virtual Private Network (VPN). Both features have the same goal of enabling a
secure link between data centers in the public and private cloud using industry-standard
protocols. Once connectivity is established, applications hosted on hybrid cloud are ready to
go! Refer to Figure 2 for the hybrid cloud model example.
Hybrid Cloud in Action
Figure 2: An example of the hybrid cloud model
Note: Not all hybrid cloud architectures follow the compute-on-public-cloud and storage-on-private-cloud pattern. Some hybrid cloud architectures have both compute and storage resources on the public cloud as well as the private cloud.
Solution Overview
High Level Flow – How does it work?
The custom push notification service is designed to run on a hybrid cloud, with Microsoft Azure as the public cloud and an on-premises private cloud built on EMC products.
Refer to Figure 3 for a high level view of the solution.
Figure 3: High Level Flow
The interaction of application vendor(s) and notification service hosted on hybrid cloud is
summarized below:
1. Vendor registers application with the service – This step stores the vendor details
into service database.
2. Vendor sends notifications to service.
3. Service validates the source based on the credentials created during registration
process.
4. Service sends a message to the endpoint specified by the device URI – This is
received by the MPNS server.
5. MPNS forwards the notification to the appropriate device.
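The five steps above can be sketched with an in-memory stand-in for the service. The class and method names are invented for illustration, a dictionary plays the role of the service database, and an outbox list stands in for the HTTP hand-off to MPNS (steps 4-5):

```python
# Sketch of the five-step vendor flow, with in-memory stand-ins for the
# service database and the MPNS hand-off. Names are illustrative.
import secrets

class NotificationService:
    def __init__(self):
        self.vendors = {}   # vendor_id -> secret key (step 1)
        self.outbox = []    # messages handed to MPNS  (steps 4-5)

    def register_vendor(self, vendor_id: str) -> str:
        key = secrets.token_hex(16)     # credential created at registration
        self.vendors[vendor_id] = key
        return key

    def send_notification(self, vendor_id: str, key: str,
                          device_uri: str, message: str) -> bool:
        if self.vendors.get(vendor_id) != key:      # step 3: validate source
            return False
        self.outbox.append((device_uri, message))   # step 4: POST to device URI
        return True

service = NotificationService()
key = service.register_vendor("acme-sports")              # step 1
ok = service.send_notification("acme-sports", key,
                               "https://mpns.example/device-42",
                               "Goal! 1-0")               # steps 2-4
```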
The interaction between mobile application(s) and the push notification service hosted on hybrid cloud is summarized below:
1. Mobile application establishes a channel with MPNS.
2. MPNS returns a URI (unique per device).
3. Application sends the URI along with the device details to the service in order to register the mobile device with the service. In this step, the service stores the device details – including the URI – in the database.
At this point, you may be wondering about the internal workings of the solution. How does it accept requests from vendors? How does it perform registration and delivery of notifications? Figure 4 shows a low-level solution architecture that addresses these questions by bringing clarity about the components working under the hood.
Figure 4: Solution Architecture at low level
The solution is designed primarily using four major components:
1. a REST service
2. a worker process Registration Processor
3. a worker process Notification Dispatcher
4. data storage
The solution also makes use of a queue structure to hold the requests received through the REST service. This is a conscious decision made to avoid a heavy load on the REST service, considering a scenario where millions of devices and vendors are registering with the service and millions of notifications have to be sent to the devices. Let’s address questions that typically arise when you view the architecture diagram.
How does the REST service help the Application Vendor and Mobile Apps?
The REST service provides the following API for Application Vendors:
Register – This is consumed by a Vendor Registration web interface that helps vendors complete a self-registration process. The registration is approved by a service administrator, after which an approval notification is sent to the vendor. If the vendor wishes to stop using the service, this endpoint can be used to deregister from the service database by passing the required details. Vendor-specific secret keys are generated during registration approval, and the vendor can view them from the web interface.
Send Notification - Application vendors can consume this REST endpoint using a
desktop application or web application, or even a mobile application to send a
notification to mobile apps. The REST service validates the incoming requests from
application vendors using the secret key generated during the registration approval
process.
As for mobile devices, there is a REST endpoint to register the device with the service. Typically, when a user downloads and installs the app on a mobile device, the app asks the user to complete the registration process. During registration, in the background, the app establishes a connection with MPNS and receives the device-specific URI. On completion of device registration with the custom push notification service, the service database will have the details of the device, including the device URI sent from MPNS.
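The device-registration step can be sketched as follows, with an in-memory SQLite table standing in for the service database; the table layout and identifiers are assumptions for illustration. The upsert also covers the URI-renewal case mentioned earlier, where a device re-registers after its URI expires:

```python
# Sketch: core logic of the device-registration endpoint, with an
# in-memory SQLite table standing in for the service database.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE devices (device_id TEXT PRIMARY KEY, uri TEXT)")

def register_device(device_id: str, mpns_uri: str) -> None:
    # Upsert so a renewed (expired-and-refreshed) URI replaces the old one.
    db.execute("INSERT OR REPLACE INTO devices VALUES (?, ?)",
               (device_id, mpns_uri))
    db.commit()

register_device("device-42", "https://mpns.example/old-uri")
register_device("device-42", "https://mpns.example/new-uri")  # URI renewal
uri = db.execute("SELECT uri FROM devices WHERE device_id = ?",
                 ("device-42",)).fetchone()[0]
```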
What is the role of the Registration Processor? Why are Registration Processor
and Notification Dispatcher running on different machines?
Registration Processor is implemented as a background worker process that continuously polls
the Registration Queue which holds the registration requests from devices and application
vendors. The processing logic for registration requests is segregated from the web server that hosts the REST API. This keeps the REST API lightweight and offloads the heavy processing workload to the worker process.
Notification Dispatcher, implemented as a worker process, needs to poll Notification Queue at
regular intervals, and for each notification it needs to fetch details of devices from data storage.
For each device, it needs to dispatch the notification to MPNS using the device URI.
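A minimal sketch of the Notification Dispatcher's polling loop, with a standard in-process queue standing in for the cloud-hosted Notification Queue and a callback standing in for the per-device MPNS dispatch:

```python
# Sketch of the Notification Dispatcher: drain the Notification Queue and
# fan each notification out to every registered device of the vendor.
# queue.Queue stands in for the cloud queue; `send` stands in for the
# per-device HTTP POST to MPNS.
import queue

def dispatch_pending(notification_queue, device_store, send):
    """Drain the queue; fan each notification out to the vendor's devices."""
    delivered = 0
    while True:
        try:
            vendor_id, message = notification_queue.get_nowait()
        except queue.Empty:
            return delivered
        for device_uri in device_store.get(vendor_id, []):
            send(device_uri, message)       # per-device MPNS dispatch
            delivered += 1

notification_queue = queue.Queue()
notification_queue.put(("acme-sports", "Kick-off!"))
device_store = {"acme-sports": ["uri-1", "uri-2"]}
sent = []
count = dispatch_pending(notification_queue, device_store,
                         lambda uri, msg: sent.append((uri, msg)))
```

In the real solution this loop would run continuously at a polling interval rather than draining once and returning.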
Is it required to build the queue data structure or is it available in the cloud
platform?
Most cloud vendors provide a queue as a built-in feature of their cloud platform. Microsoft Azure provides built-in queues as part of a component named Service Bus4. Service Bus queues offer features such as ordered delivery, which guarantees that messages are delivered in sequence; batch delivery, which helps process multiple items in a batch; and certain advanced features that are outside the scope of this article.
You may wonder why there has been no mention of cloud computing resources in our solution design discussion so far. All the virtual machines you see in the architecture are running on the cloud, and the beauty of this approach is the flexibility to increase or decrease the number of virtual machines based on the workload. This is further explained in the scalability section of this article.
Note: In the Introduction to Business Model section, there is a mention of the web interface
that helps perform administrative tasks and configuration activities. That web application
can be hosted on the same VM where REST service is hosted (details on the web interface
design are not covered in this article). Since the solution follows a PaaS service model, the
web interface is a standard functionality, and therefore mentioned in the feature set.
Scalability – How does it support millions of devices?
What is the most interesting aspect of cloud computing? Consider a scenario where your
solution needs to use hundreds of virtual machines for peak hours, suppose two hours per day.
The on-premises approach requires that you buy hundreds of machines and keep them with
you. Does that sound profitable? Not really. It consumes a lot of real estate and incurs
maintenance charges.
With cloud, you just need to address a large scale workload by increasing the number of virtual
machines either manually or in an automated fashion, referred to as scaling-out. Cloud vendors
expose Service Management API5 for scaling purposes. This feature is referred to as dynamic-
scaling or auto-scaling and is often closely associated with the term, elasticity. The important
point to note is that you just need to pay for the number of hours you use the virtual machine.
Scale out vs. Scale up
Scaling-out (also called horizontal-scaling) means you increase the number of virtual
machines with same configuration; scaling-up (also called vertical-scaling) means you
upgrade the virtual machine with a higher configuration.
Consider a traffic controller solution that, under normal workload, uses 4 VMs, each having 2 Central Processing Unit (CPU) cores and 4 GB Random Access Memory (RAM), to operate. To accommodate the workload during rush hours:
- The scale-out approach increases the VM count by 4, meaning that there are a total of 8 VMs to handle the workload. The newly added VMs have the same configuration as the existing VMs.
- The scale-up approach replaces each VM with a higher configuration having 4 CPU cores per machine and 6 GB of RAM.
A calculation of cost for hardware configuration makes it clear that the scale-out approach
would be cost-effective for hardware resources. When you use VMs provided by platforms
such as Microsoft Azure, you do not need to purchase the license for Windows Server
Operating System – you just pay for the number of hours the VMs are in use.
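The capacity arithmetic behind the comparison can be checked in a few lines, using the VM figures from the example (4 baseline VMs; 4 more identical VMs for scale-out; 4 upgraded VMs for scale-up):

```python
# Capacity under each approach, as (total CPU cores, total RAM in GB).
def total_capacity(vm_count, cores_per_vm, ram_gb_per_vm):
    return vm_count * cores_per_vm, vm_count * ram_gb_per_vm

baseline  = total_capacity(4, 2, 4)   # normal workload: 4 VMs, 2 cores, 4 GB each
scale_out = total_capacity(8, 2, 4)   # 4 baseline VMs + 4 added identical VMs
scale_up  = total_capacity(4, 4, 6)   # same 4 VMs, upgraded to 4 cores, 6 GB
```

Both approaches double the core count here, but scale-out also doubles the RAM, and the pool of identical VMs is what lets the platform add and remove instances freely.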
How do we decide whether to increase the number of virtual machines for unpredictable and predictable workloads?
For unpredictable workloads, this is easily achieved by using the Performance Counters feature
in the Windows Operating System. Performance counters help you capture certain operating
parameters such as CPU use, memory use, request per seconds received by web server, etc.
You can choose any of the pre-existing performance counters that provide information on
processor use, memory use, or even add custom performance counters that provide metrics on
pending items in the queue. Data captured by performance counters can be used to decide when to increase the number of virtual machines. Just as important as increasing the number of VMs is decreasing the number of VMs once the workload comes down. Remember, in the cloud you pay for the allocated machines regardless of whether they are running or idle. To automate the scaling process, you need to incorporate a rule engine that works based on pre-defined rules.
Examples of rules for unpredictable workloads:
- Increase VM count for REST Service by 2 if CPU usage exceeds 75%.
- Increase VM count for Notification Dispatcher by 1 if the number of pending items in the Notification Queue exceeds 10K.
The approach above addresses unpredictable workloads. If you know the rate at which the workload increases at certain time periods, i.e. predictable workloads, the rules could be framed as:
- Increase VM count for REST Service by 10 between 10AM and 12PM every day.
- Increase VM count for Notification Dispatcher by 20 between 10AM and 12PM every day.
Now that you have an understanding of the particulars of scaling, the next question is how to test dynamic scaling. Unlike unit testing (testing individual components) or integration testing (testing integrated components), performance testing (also called load testing) has certain challenges. You need to generate a large number of requests to the REST service hosted on the cloud to mimic a scenario where thousands of application vendors are simultaneously sending notifications. A plethora of commercial tools are available for load testing, but you can use a free and simple performance testing tool from Microsoft, Web Capacity Analysis Tool (WCAT)6. WCAT can simulate scenarios where thousands of concurrent users send requests to the REST service.
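Rules of this kind can be captured by a small rule engine: each rule pairs a metric condition with a VM-count delta, and the engine reports the net adjustment per role. The metric names and the scale-in rule are illustrative additions:

```python
# Sketch of a scaling rule engine. Each rule is (role, condition, delta);
# the engine returns the net VM adjustment per role for the current metrics.
def evaluate_rules(rules, metrics):
    """Return {role: vm_delta} for every rule whose condition holds."""
    deltas = {}
    for role, condition, delta in rules:
        if condition(metrics):
            deltas[role] = deltas.get(role, 0) + delta
    return deltas

rules = [
    ("rest-service", lambda m: m["cpu_pct"] > 75,          +2),
    ("dispatcher",   lambda m: m["queue_pending"] > 10_000, +1),
    # Symmetric scale-in rule: release VMs when the workload drops.
    ("rest-service", lambda m: m["cpu_pct"] < 25,          -2),
]

deltas = evaluate_rules(rules, {"cpu_pct": 80, "queue_pending": 12_500})
```

In practice the metric values would come from the performance counters described above, and the deltas would be applied through the Service Management API.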
Reliability – Is your solution fault tolerant?
When building a real-time notification solution, reliability is a critical feature that impacts the
business significantly. Imagine what could happen when VMs that deliver notifications go down
for some reason. Users will not receive important notifications and, consequently, will stop using your service, resulting in a huge dip in revenue. Therefore, the solution should use fault-tolerant features provided by vendors along with any additional mechanisms built by the service development team.
Here are some approaches:
Vendor-supplied features (built-in for platform)
Failover Instances
Microsoft Azure provides a Service Level Agreement (SLA)7 of at least 99.95% availability of
virtual machines if the solution is deployed on two or more virtual machine instances. This means that when a machine goes down, Azure redirects all requests to the second instance so that downtime is avoided. The placement of these two instances is chosen by Azure in such a way that both do not fall under the same point or region of failure.
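As a quick sanity check on what a 99.95% monthly availability SLA permits, assuming a 30-day month:

```python
# Allowed downtime per month under a 99.95% availability SLA
# (assuming a 30-day month).
minutes_per_month = 30 * 24 * 60                    # 43,200 minutes
allowed_downtime = minutes_per_month * (1 - 0.9995)
print(round(allowed_downtime, 1))                   # about 21.6 minutes
```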
Auto-recovery on failure
Microsoft Azure is equipped with agents that continuously monitor each VM allocated for
consumers, sending signals to the VM and using a heartbeat check to confirm whether the VM
is running in a healthy state. When a VM goes down, the agent loses the heartbeat signal from
the VM and takes the necessary steps to restart the VM.
Design for failure in solution
Cloud-based systems have higher complexity due to the large number of interconnected components and a resource pool that is shared by millions of users and applications. This could cause issues in your solution that is hosted on the cloud. Though the cloud platform provides features such as failover and auto-recovery, failures and crashes that occur in your solution need to be addressed by an approach called Design for Failure8. This approach dictates that the solution should be designed to anticipate failures and account for these failures, meaning that it should handle the failures gracefully.
“If anything can go wrong, it will.” – Murphy’s Law
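One common Design for Failure technique is retrying transient failures with exponential backoff instead of letting a single blip crash a worker. A minimal sketch, with a deliberately flaky operation standing in for a network call to MPNS:

```python
# Sketch of "design for failure": retry a transient operation with
# exponential backoff, surfacing the error only when attempts are exhausted.
import time

def with_retries(operation, attempts=4, base_delay=0.01):
    for attempt in range(attempts):
        try:
            return operation()
        except OSError:                     # transient failure class
            if attempt == attempts - 1:
                raise                       # exhausted: surface the failure
            time.sleep(base_delay * 2 ** attempt)   # 10ms, 20ms, 40ms, ...

calls = {"n": 0}
def flaky_send():
    # Stand-in for a network call that fails twice, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise OSError("simulated network blip")
    return "delivered"

result = with_retries(flaky_send)
```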
Logging and tracing
The concept of logging and tracing is not new. However, in a cloud-based system it gains importance, since troubleshooting and debugging are relatively more complex in cloud environments than in traditional systems. A fully-fledged logging and tracing system needs to be implemented in the application in order to capture all issues that can directly or indirectly lead to system failure. It is recommended to store log data in a storage area separate from the solution’s data. This helps prevent the performance impact on the main applications that can be caused by a huge amount of log data.
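The separation recommended above can be sketched with Python's standard logging module: application logs are routed to a dedicated handler rather than mixed with solution data. Here an in-memory buffer stands in for the separate log store:

```python
# Sketch: route service logs to their own handler. MemoryHandler stands
# in for a dedicated log store kept apart from the solution's data.
import logging
from logging.handlers import MemoryHandler

log_store = MemoryHandler(capacity=10_000)   # stand-in for separate storage
logger = logging.getLogger("notification-service")
logger.setLevel(logging.INFO)
logger.addHandler(log_store)

logger.info("dispatch failed for device %s, will retry", "device-42")
logger.warning("Notification Queue depth above threshold: %d", 12_500)
```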
Alert Mechanism
Certain areas are critical to your application, and these need to be placed under a monitoring procedure that identifies issues as they occur and instantly sends alerts to a team of administrators. This proactive approach needs to be implemented in order to address issues that cannot be resolved by the solution itself. These days, many applications are designed with self-healing capabilities as well as an alert mechanism for issues that require attention and manual action.
Data storage – How does Flash Memory Storage help?
What are the considerations for choosing data storage? The most important are performance
and capacity.
Performance
As you may have inferred from the title of this article, notifications need to be delivered in real time. Thus, latency plays a big role here, as it negatively impacts performance. Suppose you’ve chosen Microsoft SQL Server9 as your database for storing details of vendors, devices, and notification history. SQL Server’s latency depends heavily on the underlying storage system. Traditional storage systems with spinning hard disk arrays have certain inherent latency issues caused by mechanical motion. This is where storage leaders like EMC help with all-flash array10-based storage systems such as XtremIO®11. Latency rates for Input/Output (I/O) operations are considerably lower and best suited for storing and retrieving data for real-time applications. Flash arrays do not have the aforementioned latency issues suffered by rotational storage devices such as hard disk arrays.
Capacity
Another important consideration for data storage is capacity. Imagine storing details about
millions of devices and the details about the notifications sent to them. Over time, it could reach
a scale of terabytes (TB) or even petabytes (PB). With this amount of data, you cannot go very
far without considering a storage system designed to support massive storage scale.
XtremIO is built with a scale-out design that provides not only higher performance but also large capacity by forming clusters of multiple storage units called X-Bricks12. The clusters offer better performance and scalability (easily scaling to petabytes) than other arrays and do not compromise availability.
Data Protection and Availability
Choosing XtremIO as a storage system is a wise decision. Once you have sorted out the design
concerns about data storage, the next challenge is how to protect your data. Will you be able to
afford data loss for millions of devices and thousands of application vendors? Of course not!
As for compute and network resources, you have seen that Microsoft Azure offers an SLA and availability features with the platform. As part of a business continuity plan, we need to make sure that similar availability features are planned and implemented for data storage in the event of the failure of a site or an individual storage array. EMC offers industry-leading data protection and availability technologies: VPLEX®13 and RecoverPoint14. While VPLEX is designed to address availability, RecoverPoint is built for continuous data protection by replicating data. VPLEX, in conjunction with RecoverPoint, can be configured to ensure a zero recovery time objective (RTO) and a near-zero recovery point objective (RPO). XtremIO is fully integrated with VPLEX and RecoverPoint, addressing application data protection and availability concerns.
Deployment Model
When the solution is targeted at global users, the best deployment model is multi-site deployment. In this model, the solution is deployed at multiple sites across the globe. The advantages of this model are:
Low Latency and High Performance
The solution serves users from the closest location and thus avoids latency during communication. One key component that comes to the rescue here is Microsoft Azure Traffic Manager15. It helps you configure a single, global web address for accessing the deployed solution and internally redirects to the deployment hosted at the closest location.
High Availability and Disaster Recovery
The sites can be synchronized with one another, greatly contributing to high availability and disaster recovery. For example, if the solution is deployed on data centers across the USA, Europe, and Asia, one of the sites is given the primary role and the other two become secondary. When one of the sites is down due to a disaster and cannot be brought back without impacting business, one of the secondary sites can be made primary. This approach needs to be implemented at two levels:
1. At the public cloud end, where the REST service is hosted. For implementing such a failover to a secondary site, Microsoft Azure Traffic Manager is available to help us again. In addition to the performance policy that identifies the closest deployment, it has a failover policy that detects the health of the primary and fails over to a secondary when issues or downtime are detected for the primary.
2. At the private cloud end where storage systems reside. To ensure storage
system availability, we can make use of VPLEX which is designed to provide
benefits of a multi-site deployment model. It offers a deployment model – VPLEX
Geo – which is built based on the multi-site deployment concept. The VPLEX
Geo model along with RecoverPoint helps enable asynchronous replication and
data movement between storage arrays hosted in data centers over large
distances.
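The endpoint-selection behaviour in point 1 can be sketched as follows: prefer the lowest-latency (closest) deployment, but skip unhealthy sites so traffic fails over to the next best one. The site names and latencies are made up:

```python
# Sketch of Traffic Manager's combined performance + failover behaviour:
# pick the closest (lowest-latency) site among the healthy ones.
def pick_endpoint(sites):
    """sites: list of (name, region, latency_ms_from_user, healthy) tuples."""
    healthy = [s for s in sites if s[3]]
    if not healthy:
        raise RuntimeError("no healthy deployment available")
    # Performance policy: lowest latency wins among healthy sites.
    return min(healthy, key=lambda s: s[2])[0]

sites = [
    ("us-east", "USA",    120, True),
    ("eu-west", "Europe",  30, False),   # closest site, but currently down
    ("asia-se", "Asia",    90, True),
]
chosen = pick_endpoint(sites)   # fails over past eu-west
```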
Conclusion
This article discussed the right ingredients for a real-time and scalable push notification service
and attempted to convey that effective implementation of such a service for mobile devices and
application vendors is not a complex task once we understand concepts and address the design
concerns. We gained an understanding of how push notifications work by an example of MPNS.
We discussed how a hybrid cloud model can take the best of both public and private worlds. We
examined how the hybrid cloud model undoubtedly helps us optimize business by reducing
CapEx for infrastructure such as compute, network, and storage and eliminating development-
cost of features such as fault tolerance and auto-scaling of resources.
The solution overview helped in understanding high level and low level components along with
non-functional requirements such as the scalability and reliability. We walked through important
considerations such as performance and capacity when choosing a data storage system and
learned why a flash array-based storage system such as XtremIO is the right fit for a high
performing real-time application. We discussed the importance of data protection and availability
and got to know options such as VPLEX and RecoverPoint. Finally, we saw the multi-site deployment model and how components like Azure Traffic Manager and VPLEX help us address availability and disaster recovery challenges. Now that you have seen the advanced
technologies on cloud computing and data storage, let us conclude the article with a quote by
sci-fi writer and futurist Sir Arthur C. Clarke.
Any sufficiently advanced technology is indistinguishable from magic.
Appendix – List of terms
API Application Programming Interface. It specifies the functionalities
provided by a software application and how to make use of these
functionalities while programming. API documentation is supplied to
software application developers who consume routines provided by
the application.
Multi-tenancy A design-approach where a single instance of a software solution
addresses multiple tenants. The tenant could be an individual user or
customer who shares the software component.
Performance Counter Performance counters in the Windows Operating System help capture and provide data related to the performance of the operating system or applications. They are an effective way to fine-tune the operating system and to dynamically add or remove resources to address performance bottlenecks.
Queue A First In, First Out (FIFO) data structure used to hold data elements
in an ordered manner. This data structure is built on the real-world
concept of queues where the first person entered in a queue will be
served first.
Recovery Point
Objective
Represents an amount of data loss in terms of time period that is
acceptable to business.
Recovery Time
Objective
The time duration allowed to recover from a failure and restore the system to an operational state.
Reliability An application is said to be reliable when it works as expected for a given amount of time without failure. Reliability is often used synonymously with the term fault tolerance.
REST Representational State Transfer. An architectural style in which
resources are read or written using a stateless protocol such as
Hypertext Transfer Protocol (HTTP). This style is widely used to
expose web-based services in networked applications.
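A minimal sketch of the REST style described above: each request carries everything needed (method plus resource URI) and the server keeps no per-client session state. The resource names and handler are hypothetical, not part of any real framework:

```python
# In-memory resource store keyed by URI path (illustrative only).
resources = {"/devices/1": {"name": "phone-a"}}

def handle(method, uri, body=None):
    """Dispatch an HTTP-style method against a resource URI, statelessly."""
    if method == "GET":
        return (200, resources[uri]) if uri in resources else (404, None)
    if method == "PUT":
        resources[uri] = body        # create or replace the resource
        return 200, body
    if method == "DELETE":
        return (204, None) if resources.pop(uri, None) is not None else (404, None)
    return 405, None                 # method not allowed

status, data = handle("GET", "/devices/1")   # 200, {'name': 'phone-a'}
```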
Scalability Scalability is a quantified measure of the load an application can
support. If an application serves 10 million users without compromising
performance or availability, it is scalable to 10 million users.
If the application becomes unavailable when more than 10 million users
access it, 10 million is termed the scalability limit of the
application.
Storage Array A storage array is a data storage system that contains multiple disk
drives and a cache memory. Source:
http://www.techopedia.com/definition/1009/disk-array
Uniform Resource
Identifier (URI)
A textual identifier that names a resource and provides a way to
locate it on a network such as the Internet. The textual format
comprises a scheme (protocol), a host address, and resource
details.
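The component structure described above can be seen by decomposing a URI with Python's standard `urllib.parse`; the example address is made up for illustration:

```python
from urllib.parse import urlparse

# Split a URI into scheme (protocol), network location, path, and query.
uri = "https://example.com:443/topics/news?format=json"
parts = urlparse(uri)
# parts.scheme == "https"           -- protocol
# parts.netloc == "example.com:443" -- host address and port
# parts.path   == "/topics/news"    -- resource details
# parts.query  == "format=json"     -- optional parameters
```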
Virtual Machine (VM) A virtual machine is a logical machine created on top of a physical
machine using a hypervisor, a virtualization software component.
The hypervisor enables a VM to use the hardware resources of the
physical machine on which it runs.
Virtual Private Network A Virtual Private Network (VPN) is a logical network that enables
connectivity to a private network over a public network such as the
Internet. Tools and systems that implement VPNs provide security and
encryption features to authorize the users connecting to the private
network.
References
1. Enabling Hybrid Cloud Today with Microsoft Technologies Whitepaper
http://www.microsoft.com/en-us/download/details.aspx?id=39052
2. Building a Trusted Cloud: Deployment Strategies for Private and Hybrid Clouds
http://www.emc.com/collateral/emc-perspective/h8558-cloud-trust-ep.pdf
3. How to use Azure Service Bus Queues
http://azure.microsoft.com/en-in/documentation/articles/service-bus-dotnet-how-to-use-queues/
4. Microsoft Azure ExpressRoute Technical Overview
http://msdn.microsoft.com/en-us/library/azure/dn606309.aspx
5. Microsoft Azure Virtual Network Overview
http://msdn.microsoft.com/en-us/library/azure/jj156007.aspx
6. Introduction to EMC XtremIO storage array
http://www.emc.com/collateral/white-papers/h11752-intro-to-XtremIO-array-wp.pdf
7. EMC XtremIO for Microsoft SQL Server
http://www.emc.com/collateral/white-papers/h13163-xtremio-microsoft-sql-server-wp.pdf
8. EMC RecoverPoint Specification-sheet
http://www.emc.com/collateral/software/specification-sheet/h2770-recoverpoint-ss.pdf
9. Configure Failover Load Balancing using Microsoft Azure Traffic Manager
http://msdn.microsoft.com/en-us/library/azure/hh744832.aspx
10. EMC VPLEX Geo Specification Sheet
http://india.emc.com/collateral/hardware/specification-sheet/h8690-vplex-geo-ss.pdf
Footnotes
1 http://www.microsoft.com/industry/government/guides/cloud_computing/5-PaaS.aspx
2 http://msdn.microsoft.com/en-us/library/windows/apps/ff402558(v=vs.105).aspx
3 http://www.cloud-competence-center.com/understanding/cloud-computing-deployment-models/
4 http://azure.microsoft.com/en-in/documentation/articles/fundamentals-service-bus-hybrid-solutions/
5 http://msdn.microsoft.com/library/azure/ee460799.aspx
6 http://www.galcho.com/articles/StressTestingWCAT.aspx
7 http://azure.microsoft.com/en-in/support/legal/sla/
8 http://www.asp.net/aspnet/overview/developing-apps-with-windows-azure/building-real-world-cloud-apps-with-windows-azure/design-to-survive-failures
9 http://en.wikipedia.org/wiki/Microsoft_SQL_Server
10 http://searchsolidstatestorage.techtarget.com/definition/Flash-array
11 https://store.emc.com/in/Product-Family/EMC-XtremIO-Products/EMC-XtremIO-All-Flash-Scale-Out-Array/p/EMC-XtremIO-Flash-Scale-Out
12 http://www.xtremio.com/x-brick
13 http://www.emc.com/storage/vplex/vplex.htm
14 http://www.emc.com/storage/recoverpoint/recoverpoint.htm
15 http://msdn.microsoft.com/en-us/library/azure/hh744833.aspx
EMC believes the information in this publication is accurate as of its publication date. The
information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION
MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO
THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED
WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an
applicable software license.