
CLOUD COMPUTING UNIT-5 NOTES



UNIT – 5

Market Based Management of Cloud

Unit-05/Lecture-01

Market Based Management of Clouds

As consumers rely on Cloud providers to supply all their computing needs, they will require specific QoS to be maintained by their providers in order to meet their objectives

and sustain their operations. Cloud providers will need to consider and meet different QoS parameters of each individual consumer as negotiated in specific SLAs. To achieve

this, Cloud providers can no longer continue to deploy traditional system-centric resource management architectures that do not provide incentives for them to share their resources and still regard all service requests as being of equal importance. Instead, market-

oriented resource management is necessary to regulate the supply and demand of Cloud resources at market equilibrium, provide feedback in terms of economic incentives for

both Cloud consumers and providers, and promote QoS-based resource allocation mechanisms that differentiate service requests based on their utility. Figure shows the

high-level architecture for supporting market-oriented resource allocation in Data Centers and Clouds.

There are basically four main entities involved: •Users/Brokers: Users or brokers acting on their behalf submit service requests from anywhere in the world to the Data Center and Cloud to be processed.

•SLA Resource Allocator: The SLA Resource Allocator acts as the interface between the


Data Center/Cloud service provider and external users/brokers. It requires the interaction of the following mechanisms to support SLA-oriented resource management:

Service Request Examiner and Admission Control: When a service request is first submitted, the Service Request Examiner and Admission Control mechanism interprets the submitted request for QoS requirements before determining whether to accept or reject it. It thus ensures that resources are not overloaded, a situation in which many service requests could not be fulfilled successfully because of the limited resources available. It also needs the latest status information regarding resource availability (from the VM Monitor mechanism) and workload processing (from the Service Request Monitor mechanism) in order to make resource allocation decisions effectively. It then assigns requests to VMs and determines resource entitlements for the allocated VMs.

Pricing: The Pricing mechanism decides how service requests are charged. For instance, requests can be charged based on submission time (peak/off-peak), pricing rates (fixed/changing), or availability of resources (supply/demand). Pricing serves as a basis for managing the supply and demand of computing resources within the Data Center and facilitates prioritizing resource allocations effectively.
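A minimal sketch of how such a Pricing mechanism might be expressed in Python is given below; the peak window, the base rate, and the demand surcharge are illustrative assumptions made for the example, not values taken from these notes.

from datetime import datetime

# Hypothetical pricing rule: peak/off-peak surcharge plus an optional
# supply/demand adjustment. All constants below are assumptions.
PEAK_HOURS = range(9, 18)          # assumed peak window: 09:00-17:59
BASE_RATE_PER_HOUR = 0.10          # assumed base rate in currency units per hour

def price_request(submitted_at: datetime, hours: float,
                  free_capacity_ratio: float, fixed_rate: bool = True) -> float:
    """Charge a request based on submission time and resource availability."""
    rate = BASE_RATE_PER_HOUR
    if submitted_at.hour in PEAK_HOURS:
        rate *= 1.5                                # peak-time surcharge
    if not fixed_rate:
        rate *= 1.0 + (1.0 - free_capacity_ratio)  # scarcer capacity, higher price
    return round(rate * hours, 4)

print(price_request(datetime(2014, 3, 3, 11), hours=4.0,
                    free_capacity_ratio=0.25, fixed_rate=False))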

Accounting: The Accounting mechanism maintains the actual usage of resources

by requests so that the final cost can be computed and charged to the users. In addition, the maintained historical usage information can be utilized by the Service Request Examiner and Admission Control mechanism to improve resource allocation decisions.

VM Monitor: The VM Monitor mechanism keeps track of the availability of VMs and their resource entitlements.

Dispatcher: The Dispatcher mechanism starts the execution of accepted service requests on allocated VMs.

Service Request Monitor: The Service Request Monitor mechanism keeps track of the execution progress of service requests.
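The sketch below, a hypothetical Python illustration rather than any real middleware API, shows how these mechanisms could cooperate: the VM Monitor supplies capacity data, the Service Request Examiner and Admission Control accepts or rejects a request, and the Dispatcher starts it on the chosen VM. Class names, fields, and capacities are assumptions.

from dataclasses import dataclass, field

@dataclass
class VM:
    vm_id: str
    free_cores: int                               # reported by the VM Monitor

@dataclass
class Request:
    req_id: str
    cores: int                                    # QoS requirement taken from the SLA

@dataclass
class SLAResourceAllocator:
    vms: list = field(default_factory=list)       # fed by the VM Monitor
    running: dict = field(default_factory=dict)   # fed by the Service Request Monitor

    def admit(self, req: Request):
        """Examiner and Admission Control: accept only if some VM has spare
        capacity; otherwise reject to avoid overloading resources."""
        for vm in self.vms:
            if vm.free_cores >= req.cores:
                vm.free_cores -= req.cores        # resource entitlement for the VM
                self.dispatch(req, vm)
                return vm
        return None                               # rejected

    def dispatch(self, req: Request, vm: VM):
        """Dispatcher: start execution of the accepted request on the VM."""
        self.running[req.req_id] = vm.vm_id

allocator = SLAResourceAllocator(vms=[VM("vm-1", 4), VM("vm-2", 2)])
print(allocator.admit(Request("r-42", cores=3)))   # accepted on vm-1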

•VMs: Multiple VMs can be started and stopped dynamically on a single physical machine to meet accepted service requests, hence providing maximum flexibility to configure various partitions of resources on the same physical machine according to the specific requirements of service requests. In addition, multiple VMs can concurrently run applications based on different operating system environments on a single physical machine, since each VM is completely isolated from the others.

•Physical Machines: The Data Center comprises multiple computing servers that provide

resources to meet service demands.

Commercial offerings of market-oriented Clouds must be able to:

• support customer-driven service management based on customer profiles and requested service requirements,

• define computational risk management tactics to identify, assess, and manage risks involved in the execution of applications with regard to service requirements and customer needs,

• derive appropriate market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain SLA-oriented resource allocation,

• incorporate autonomic resource management models that effectively self-manage changes in service requirements to satisfy both new service demands and existing service obligations, and

• leverage VM technology to dynamically assign resource shares according to service requirements.

S.NO RGPV QUESTIONS Year Marks


Unit-05/Lecture-02

Federated Clouds/Inter Cloud The terms cloud federation and InterCloud, often used interchangeably, convey the general meaning of an aggregation of cloud computing providers that have separate administrative

domains. It is important to clarify what these two terms mean and how they apply to cloud computing.

The term federation implies the creation of an organization that supersedes the decisional and

administrative power of the single entities and that acts as a whole. Within a cloud computing context, the word federation does not have such a strong connotation but implies that there are agreements between the various cloud providers, allowing them to leverage each other's services in a privileged manner. A definition of the term cloud federation was given by Reuven Cohen, founder and CTO of Enomaly Inc.: Cloud federation manages consistency and access controls when two or more independent geographically distinct Clouds share either authentication, files, computing resources, command and control or access to storage resources. InterCloud is a term that is often used interchangeably to express the concept of

cloud federation. It was introduced by Cisco to express a composition of clouds that are interconnected by means of open standards to provide a universal environment that leverages cloud computing services. By mimicking the Internet, often referred to as the “network of networks,” InterCloud represents a “Cloud of Clouds” and therefore expresses the same concept of federating together clouds that belong to different administrative organizations. The term InterCloud refers mostly to a global vision in which interoperability among different cloud providers is governed by standards, thus creating an open platform where applications can shift workloads and freely compose services from different sources. On the other hand, the concept of a cloud federation is more general and includes ad hoc aggregations between cloud providers on the basis of private agreements and proprietary interfaces.

S.NO RGPV QUESTIONS Year Marks

Q.1

Q.2

Q.3


Unit-05/Lecture-03

Cloud Federation Stack

Creating a cloud federation involves research and development at different levels: conceptual,

logical and operational, and infrastructural. Figure 11.7 provides a comprehensive view of the

challenges faced in designing and implementing an organizational structure that coordinates

together cloud services that belong to different administrative domains and makes them operate

within the context of a single unified service middleware. Each cloud federation level presents

different challenges and operates at a different layer of the IT stack. It therefore requires the use of

different approaches and technologies. Taken together, the solutions to the challenges faced at

each of these levels constitute a reference model for a cloud federation.

The conceptual level addresses the challenges in presenting a cloud federation as a favorable

solution with respect to the use of services leased by single cloud providers. In this level it is

important to clearly identify the advantages for either service providers or service consumers in

joining a federation and to delineate the new opportunities that a federated environment creates

with respect to the single-provider solution.

Elements of concern at this level are:

• Motivations for cloud providers to join a federation

• Motivations for service consumers to leverage a federation

• Advantages for providers in leasing their services to other providers

• Obligations of providers once they have joined the federation

• Trust agreements between providers

• Transparency versus consumers

The logical and operational level of a federated cloud identifies and addresses the challenges in

devising a framework that enables the aggregation of providers that belong to different

administrative domains within a context of a single overlay infrastructure, which is the cloud

federation. At this level, policies and rules for interoperation are defined. Moreover, this is the

layer at which decisions are made as to how and when to lease a service to another provider, or to leverage a service from one. The logical component defines a context in which agreements

among providers are settled and services are negotiated, whereas the operational component

characterizes and shapes the dynamic behavior of the federation as a result of the single

providers’ choices. This is the level where market-oriented cloud computing (MOCC) is implemented and realized.
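As a purely illustrative view of this logical/operational level, the Python fragment below sketches one possible policy for deciding when to lease capacity to, or borrow capacity from, the federation; the thresholds and the price comparison are assumptions made for the example, not rules prescribed by the text.

def federation_decision(local_utilization: float, local_price: float,
                        best_federation_price: float,
                        lease_threshold: float = 0.3,
                        borrow_threshold: float = 0.9) -> str:
    """Decide whether to lease to, borrow from, or ignore the federation."""
    if local_utilization < lease_threshold:
        return "lease spare capacity to other providers"
    if local_utilization > borrow_threshold and best_federation_price <= local_price:
        return "borrow capacity from the cheapest federated provider"
    return "serve the request locally"

print(federation_decision(0.95, local_price=0.12, best_federation_price=0.10))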

The infrastructural level addresses the technical challenges involved in enabling heterogeneous

cloud computing systems to interoperate seamlessly. It deals with the technology barriers that keep cloud computing systems belonging to different administrative domains separate. By having

standardized protocols and interfaces, these barriers can be overcome. In other words, this level

for the federation is what the TCP/IP stack is for the Internet: a model and a reference

implementation of the technologies enabling the interoperation of systems. The infrastructural

level lays its foundations in the IaaS and PaaS layers of the Cloud Computing Reference Model.

Services for interoperation and interfacing may also be implemented at the SaaS level, especially for the realization of negotiations within federated clouds.
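To make the role of standardized interfaces concrete, the sketch below defines a hypothetical common provider interface in Python; it is not an existing standard (such as OCCI), only an illustration of how heterogeneous providers could expose the same operations to the federation.

from abc import ABC, abstractmethod

class FederatedProvider(ABC):
    """Assumed common interface every federated provider would implement."""

    @abstractmethod
    def provision_vm(self, cores: int, memory_gb: int) -> str:
        """Start a VM and return an opaque identifier."""

    @abstractmethod
    def release_vm(self, vm_id: str) -> None:
        """Terminate a previously provisioned VM."""

class GenericProvider(FederatedProvider):
    """Toy implementation standing in for a concrete cloud provider."""

    def __init__(self, name: str):
        self.name, self._counter = name, 0

    def provision_vm(self, cores: int, memory_gb: int) -> str:
        self._counter += 1
        return f"{self.name}-vm-{self._counter}"

    def release_vm(self, vm_id: str) -> None:
        print(f"released {vm_id}")

vm_id = GenericProvider("provider-a").provision_vm(cores=2, memory_gb=4)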

S.NO RGPV QUESTIONS Year Marks


Unit-05/Lecture-04

Third Party Cloud Services

One of the key elements of cloud computing is the possibility of composing services that belong to different vendors or integrating them into existing software systems. The service-oriented model, which is the basis of cloud computing, facilitates such an approach and provides the opportunity for developing a new class of services that can be called third-party cloud services. These are the result of adding value to preexisting cloud computing services, thus providing customers with a different and more sophisticated service. Added value can be created either by smartly coordinating existing services or by implementing additional features on top of an existing basic service. Besides this general definition, there is no specific feature that characterizes this class of service. Therefore, in this section, we describe some examples of third-party services.

MetaCDN [158] provides users with a Content Delivery Network (CDN) [159] service by leveraging and harnessing together heterogeneous storage clouds. It implements a software overlay that coordinates the service offerings of different cloud storage vendors and uses them as distributed elastic storage on which the user content is stored. MetaCDN provides users with the high-level services of a CDN for content distribution and interacts with the low-level interfaces of storage clouds to optimally place the user content in accordance with the expected geography of its demand. By leveraging the cloud as a storage back-end, it makes a complex, and generally expensive, content delivery service available to small enterprises.

SpotCloud has already been introduced as an example of a virtual marketplace. By acting as an intermediary for trading compute and storage between consumers and service providers, it provides the two parties with added value. For service consumers, it acts as a market directory where they can browse and compare different IaaS service offerings and select the most appropriate solution for them. For service providers, it constitutes an opportunity for advertising their offerings. In addition, it allows users with available computing capacity to easily turn themselves into service providers by deploying the runtime environment required by SpotCloud on their infrastructure. SpotCloud is not only an enabler for IaaS providers and resellers; its intermediary role also includes complete bookkeeping of the transactions associated with the use of resources. Users deposit credit on their SpotCloud account, and capacity sellers are paid following the usual pay-per-use model. SpotCloud retains a percentage of the amount billed to the user. Moreover, by leveraging a uniform runtime environment and virtual machine management layer, it provides users with a vendor lock-in-free solution, which might be strategic for specific applications.

The two previously presented examples give an idea of how different in nature third-party services can be: MetaCDN provides end users with a different service from the simple cloud storage offerings; SpotCloud does not change the type of service that is finally offered to end users, but it enriches it with additional features that result in more effective use of it. These are just two examples of the market segment that is now developing as a result of the


consolidation of cloud computing as an approach to a more intelligent use of IT resources.
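To give a flavour of the MetaCDN idea, the following Python sketch picks, for each content item, the storage cloud whose region best matches the expected geography of demand; the provider table and the selection rule are invented for the example and are far simpler than the real overlay.

# Hypothetical storage-cloud catalogue; names, regions, and prices are made up.
STORAGE_CLOUDS = {
    "provider-us": {"region": "us", "price_per_gb": 0.09},
    "provider-eu": {"region": "eu", "price_per_gb": 0.11},
    "provider-ap": {"region": "ap", "price_per_gb": 0.10},
}

def place_content(expected_demand: dict) -> str:
    """Choose the provider in the region with the most expected demand,
    breaking ties by price; fall back to the cheapest provider overall."""
    top_region = max(expected_demand, key=expected_demand.get)
    candidates = [(meta["price_per_gb"], name)
                  for name, meta in STORAGE_CLOUDS.items()
                  if meta["region"] == top_region]
    if candidates:
        return min(candidates)[1]
    return min(STORAGE_CLOUDS, key=lambda n: STORAGE_CLOUDS[n]["price_per_gb"])

print(place_content({"us": 0.2, "eu": 0.7, "ap": 0.1}))   # -> provider-eu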

S.NO RGPV QUESTIONS Year Marks


Unit-05/Lecture-05

Google App Engine

Google AppEngine is a PaaS implementation that provides services for developing and hosting

scalable Web applications. AppEngine is essentially a distributed and scalable runtime

environment that leverages Google’s distributed infrastructure to scale out applications

facing a large number of requests by allocating more computing resources to them and

balancing the load among them. The runtime is completed by a collection of services that

allow developers to design and implement applications that naturally scale on AppEngine.

Developers can write applications in Java, Python, and Go, a new programming language

developed by Google to simplify the development of Web applications. Application usage of

Google resources and services is metered by AppEngine, which bills users once their applications exceed their free quotas.

Infrastructure

AppEngine hosts Web applications, and its primary function is to serve user requests efficiently. To do so, AppEngine’s infrastructure takes advantage of many servers available

within Google datacenters. For each HTTP request, AppEngine locates the servers hosting the

application that processes the request, evaluates their load, and, if necessary, allocates

additional resources (i.e., servers) or redirects the request to an existing server. The

particular design of applications, which does not expect any state information to be

implicitly maintained between requests to the same application, simplifies the work of the

infrastructure, which can redirect each of the requests to any of the servers hosting the

target application or even allocate a new one.

The infrastructure is also responsible for monitoring application performance and collecting

statistics on which the billing is calculated.
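A minimal sketch of the behaviour just described is given below: because requests carry no implicit state, any server hosting the application can take the next request, and a new server is allocated once the existing ones are saturated. The load limit and server names are assumptions made for the example, not AppEngine internals.

MAX_ACTIVE_REQUESTS = 10            # assumed per-server load limit
servers = {"srv-1": 0}              # server name -> active requests

def route() -> str:
    """Send the next request to the least-loaded server, allocating a new
    server when every existing one is at capacity."""
    target = min(servers, key=servers.get)
    if servers[target] >= MAX_ACTIVE_REQUESTS:
        target = f"srv-{len(servers) + 1}"   # allocate an additional server
        servers[target] = 0
    servers[target] += 1
    return target

print([route() for _ in range(25)])  # requests fill srv-1, then spill to srv-2 and srv-3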


S.NO RGPV QUESTIONS Year Marks


UNIT 5/LECTURE 6

Microsoft Azure

AppEngine, a framework for developing scalable Web applications, leverages Google’s infrastructure. The core components of the service are a scalable and sandboxed runtime environment for executing applications and a collection of services that implement most of the common features required for Web development and that help developers build applications that are easy to scale. One of the characteristic elements of AppEngine is the use of simple interfaces that allow applications to perform specific operations that are optimized and designed to scale. Building on top of these blocks, developers can build applications and let AppEngine scale them out when needed.

The Windows Azure platform is made up of a foundation layer and a set of developer services that can be used to build scalable applications. These services cover compute, storage, networking, and identity management, which are tied together by middleware called AppFabric. This scalable computing environment is hosted within Microsoft datacenters and is accessible through the Windows Azure Management Portal. Alternatively, developers can recreate a Windows Azure environment (with limited capabilities) on their own machines for development and testing purposes. In this section, we provide an overview of the Azure middleware and its services.


S.NO RGPV QUESTION YEAR MARKS


UNIT 5/LECTURE 7

Apache Hadoop

Apache Hadoop is an open-source software framework for storage and large-scale processing of data-sets on clusters of commodity hardware. Hadoop is an Apache top-level project being built and used by a global community of contributors and users.[2] It is licensed under the Apache License 2.0.

The Apache Hadoop framework is composed of the following modules:

• Hadoop Common – contains libraries and utilities needed by other Hadoop modules

• Hadoop Distributed File System (HDFS) – a distributed file system that stores data on commodity machines, providing very high aggregate bandwidth across the cluster

• Hadoop YARN – a resource-management platform responsible for managing compute resources in clusters and using them for scheduling of users' applications

• Hadoop MapReduce – a programming model for large-scale data processing

All the modules in Hadoop are designed with a fundamental assumption that hardware failures (of individual machines, or racks of machines) are common and thus should be automatically handled in software by the framework. Apache Hadoop's MapReduce and HDFS components

were originally derived from Google's MapReduce and Google File System (GFS) papers, respectively.

Beyond HDFS, YARN and MapReduce, the entire Apache Hadoop "platform" is now commonly considered to consist of a number of related projects as well – Apache Pig, Apache Hive, Apache HBase, Apache Spark, and others.[3]

For the end-users, though MapReduce Java code is common, any programming language can be used with "Hadoop Streaming" to implement the "map" and "reduce" parts of the user's

program.[4] Apache Pig, Apache Hive, and Apache Spark, among other related projects, expose higher-level user interfaces such as Pig Latin and a SQL variant, respectively. The Hadoop framework itself is

mostly written in the Java programming language, with some native code in C and command line utilities written as shell-scripts.
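As an illustration of Hadoop Streaming, the two small Python scripts below implement the classic word count: the mapper emits (word, 1) pairs and the reducer sums them, relying on the fact that Streaming delivers mapper output sorted by key. The file names and HDFS paths are placeholders, and the exact location of the streaming jar varies between Hadoop versions.

# mapper.py - reads text from stdin and emits one "word<TAB>1" line per word.
import sys

for line in sys.stdin:
    for word in line.split():
        print(f"{word}\t1")

# reducer.py - sums the counts for each word; equal keys arrive consecutively.
import sys

current, count = None, 0
for line in sys.stdin:
    word, value = line.rstrip("\n").split("\t")
    if word != current:
        if current is not None:
            print(f"{current}\t{count}")
        current, count = word, 0
    count += int(value)
if current is not None:
    print(f"{current}\t{count}")

With both scripts marked executable, the job would be launched roughly as follows (the streaming jar path depends on the installation): hadoop jar hadoop-streaming.jar -files mapper.py,reducer.py -mapper mapper.py -reducer reducer.py -input /user/notes/input -output /user/notes/output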

Apache Hadoop is a registered trademark of the Apache Software Foundation.

S.NO RGPV QUESTION YEAR MARKS


UNIT 5/LECTURE 8

Amazon Web Services (AWS) is a platform that allows the development of flexible applications by providing solutions for elastic infrastructure scalability, messaging, and data storage.

The platform is accessible through SOAP or RESTful Web service interfaces and provides a Web-based console where users can handle administration and monitoring of the resources required, as well as their expenses computed on a pay-as-you-go basis.

At the base of the solution stack are services that provide raw compute and raw storage: Amazon Elastic Compute (EC2) and Amazon Simple Storage Service (S3). These are the two most popular services, which are generally complemented with other offerings for building a complete system. At the higher level, Elastic MapReduce and Auto Scaling provide additional capabilities for building smarter and more elastic computing systems. On the data side, Elastic Block Store (EBS), Amazon SimpleDB, Amazon RDS, and Amazon ElastiCache provide solutions for reliable data snapshots and the management of structured and semistructured data. Communication needs are covered at the networking level by Amazon Virtual Private Cloud (VPC), Elastic Load Balancing, Amazon Route 53, and Amazon Direct Connect. More advanced services for connecting applications are Amazon Simple Queue Service (SQS), Amazon Simple Notification Service (SNS), and Amazon Simple E-mail Service (SES).

Other services include:

• Amazon CloudFront – content delivery network solution

• Amazon CloudWatch – monitoring solution for several Amazon services

• Amazon Elastic Beanstalk and CloudFormation – flexible application packaging and deployment

As shown, AWS comprises a wide set of services. We discuss the most important services by examining the solutions proposed by AWS regarding compute, storage, communication, and complementary services.
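A brief sketch of driving two of these services (EC2 and S3) from Python with the boto3 SDK follows; the region, AMI identifier, bucket name, and object key are placeholders, boto3 must be installed, and valid AWS credentials are assumed to be configured.

import boto3

# Launch a single small EC2 instance (the AMI id below is a placeholder).
ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.run_instances(ImageId="ami-xxxxxxxx",
                  InstanceType="t2.micro",
                  MinCount=1, MaxCount=1)

# Store a small object in S3 (bucket and key are placeholders).
s3 = boto3.client("s3", region_name="us-east-1")
s3.put_object(Bucket="example-bucket",
              Key="notes/unit5.txt",
              Body=b"hello from AWS")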

S.NO RGPV QUESTION YEAR MARKS

UNIT 5/LECTURE 9

Aneka

Aneka is an Application Platform-as-a-Service (Aneka PaaS) for Cloud Computing. It acts as a

framework for building customized applications and deploying them on either public or private Clouds. One of the key features of Aneka is its support for provisioning resources on different

public Cloud providers such as Amazon EC2, Windows Azure and GoGrid. In this chapter, we will present the Aneka platform and its integration with one of the public Cloud infrastructures,

Windows Azure, which enables the usage of Windows Azure Compute Service as a resource provider of Aneka PaaS. The integration of the two platforms will allow users to leverage the


power of the Windows Azure Platform for Aneka Cloud Computing, employing a large number of compute instances to run their applications in parallel. Furthermore, customers of the Windows Azure platform can benefit from the integration with Aneka PaaS by embracing the advanced features of Aneka in terms of multiple programming models, scheduling and management services, application execution services, accounting and pricing services, and dynamic provisioning services. Finally, in addition to the Windows Azure Platform, we will illustrate in this chapter the integration of Aneka PaaS with other public Cloud platforms such as Amazon EC2 and GoGrid, and virtual machine management platforms such as Xen Server. The new support of provisioning resources on Windows Azure once again proves the adaptability, extensibility

and flexibility of Aneka.
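The dynamic provisioning idea can be illustrated with the small Python sketch below, which decides how many extra public-cloud instances (for example on Windows Azure or EC2) to request when the local Aneka cloud cannot absorb the pending workload; this is illustrative pseudologic only, not the Aneka API, which is a .NET framework.

def provision_if_needed(pending_tasks: int, local_workers: int,
                        tasks_per_worker: int = 10) -> int:
    """Return how many public-cloud instances to request (0 if none needed).
    The tasks-per-worker capacity is an assumed tuning parameter."""
    needed_workers = -(-pending_tasks // tasks_per_worker)   # ceiling division
    return max(0, needed_workers - local_workers)

print(provision_if_needed(pending_tasks=250, local_workers=12))   # -> 13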

S.NO RGPV QUESTION YEAR MARKS


UNIT 5/LECTURE 10/ADDITIONAL TOPICS

REFERENCE

BOOK  AUTHOR  PRIORITY

Mastering Cloud Computing  Buyya, Selvi  1

Cloud Computing  Kumar Saurabh  2


