Proceedings of IEEE CCIS2011

EFFICIENT RESOURCE ARBITRATION AND ALLOCATION STRATEGIES IN CLOUD COMPUTING THROUGH VIRTUALIZATION

T.R. Gopalakrishnan Nair, Vaidehi M

Advanced Networking Research Interest Group, Research and Industry Incubation Center
(Recognized by Ministry of Science and Technology, India)
Dayananda Sagar Institutions, Bangalore, India
[email protected], [email protected]

Abstract

Resource arbitration and allocation are among the most critical management issues in cloud computing, because IT services are provisioned on subscription, based on the consumers' computing requirements. Enabling optimal utilization of the resources requested by the consumers is a challenge, and any failure here can lead to serious performance degradation of the cloud system. Here, we focus on the dynamic distribution and optimal utilization of the resources in the cloud architecture within a specific time period. We propose a Rule Based Resource Allocation Model (RBRAM) along with a supply-demand analysis of the resources in a time marching paradigm. Within the framework of this approach, the analysis shows an improved performance of the system, achieved by efficiently allocating resources to the jobs submitted to the cloud.

Keywords: virtual machines, tightly coupled, runtime, rule based allocation, processor pool, business priority, task priority.

1 Introduction

Cloud computing provides a promising advantage for companies and institutions that have to rely on large scale IT operations in a cost effective way. It enables hiring of IT utilities like infrastructure, software, or platform applications. The preliminary cloud based models have paved the way for users to access more computing power and more applications at attractive prices [5]. Cloud computing has helped enterprises improve the creation and delivery of IT solutions in a cost effective and flexible manner [5]. Cloud architectures are designed in such a way that multiple services like IaaS, PaaS, and SaaS are provided to a large set of consumers concurrently. In cloud systems, the jobs initiated by the consumers are allocated a set of virtual machines (VMs) which run in the datacenters. These VMs are available in different types with varying features like the number of processors (CPUs), different ranges of main memory, and different storage capacities. The resource allocation process is one of the contemporary fields of interest, and it is used for estimating the efficiency of cloud operations. Inept resource allocation approaches can drive the cloud network towards operational failure. Some of the IaaS providers are Amazon EC2, GoGrid, Rackspace, Sun Grid, VMware, and XenServer. Many existing cloud systems use earlier versions of resource management approaches which were developed ab initio. The need to support as many consumers as possible in accessing and utilizing the application services has led to enhancements in the development of architectures that handle issues like resource allocation, security, monitoring, and fault tolerance [5]. In this paper we discuss resource allocation strategies, the complexity of allocation, and the state transition panorama of resource allocation. Here we assume a resource allocation manager taking care of the incoming queue of consumer requests. This paper is structured as follows. Section 2 presents the related work, and Section 3 the challenge considered and the problem definition. Section 4 describes the architecture of the Rule Based Resource Allocation Model (RBRAM). The performance analysis of the model is presented in Section 5, and Section 6 presents the conclusion and future work.

2 Related work

Much of the research in this field is at its initial stages with respect to the realization of effective allocation of resources for optimal utilization in cloud computing. Yagiz Onat Yazir et al. [1] have proposed a new approach for dynamic autonomous resource management in computing clouds. It is a two-fold work. First, they have developed a distributed architecture in which resource management is decomposed into independent tasks carried out by autonomous node agents that are lightly coupled with the physical machines in the data center. Second, the autonomous node agents carry out configuration in parallel through multiple criteria decision analysis. Their approach is reported to have a positive impact on scalability, feasibility, and flexibility.

Daniel Warneke and Odej Kao [2] have designed a data processing framework, Nephele, and have compared it with the Hadoop data processing framework. Their proposed work explicitly exploits the dynamic resource allocation offered by IaaS clouds for task scheduling and execution. Jiayin Li et al. [3] have proposed an adaptive resource allocation algorithm for cloud systems with preemptable tasks. Their algorithms adjust the resource allocation adaptively based on updates of the actual task execution. An ad hoc cloud model has been designed by Graham Kirby et al. [4]. The paper states that the proposed model is self-managing in terms of resilience of performance and the balancing of potentially conflicting goals. The authors say that their ad hoc cloud model allows complex cloud-style applications to exploit untapped resources on dedicated hardware.

3 Problem definition

It is generally accepted that the performance of a non-virtualized system will be lower than that of a virtualized one. The primary purpose of the cloud system is that its clients utilize the resources effectively so as to gain an economic benefit. A resource allocation management process is required to avoid underutilization or overutilization of the resources, either of which may affect the service of the cloud. Some jobs may be rejected due to overcrowding of the virtual machines by the jobs currently in the cloud system; such rejections result in business loss. Cases may also arise where a job in the queue with the highest priority, requiring few resources, gets allocated a higher capacity machine; this kind of scenario leads to underutilization of the resources and a deterioration in the performance of the cloud system.

4 Rule Based Resource Allocation Model (RBRAM)

Here, we use a queuing system in which the jobs tend to generate requests for resources from the cloud in a random fashion. The following algorithms govern the rules relevant to the operations. Let

λ be the rate of resource requests from all the subscribers, and

μ be the rate at which resources are allocated to the subscribers.

For stable operation, μ > λ. However, small intervals in which the resource request rate exceeds the resource allocation rate are acceptable, provided that the mean request rate is lower than the allocation rate and reserves are not fully depleted. A sample cloud system is shown in Figure 1. The jobs requesting resources for processing are submitted to the cloud. Based on Algorithm 1, the cloud priority manager evaluates the task priority. In the next phase, resource arbitration is done on the basis of the job's size and the binding time of the job with the VMs. Once the allocation is done, the virtualization system manager captures the job, enables its execution, and releases the results. In the next phase, the released results from the VMs are submitted to the delivery system, which is the interface between the cloud and the consumers.
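As a quick illustration of this stability condition, the minimal sketch below (our own illustrative code, not part of the paper's model) checks that the mean request rate over a period stays below the allocation rate, while tolerating short bursts.

    # Minimal sketch (assumed helper, not from the paper): verify the queueing
    # stability condition that the mean request rate stays below mu.

    def is_stable(request_rates, allocation_rate):
        """Return True if the mean of the observed request rates (lambda) is
        below the allocation rate (mu); short bursts above mu are tolerated."""
        mean_lambda = sum(request_rates) / len(request_rates)
        return mean_lambda < allocation_rate

    # Example: bursty demand (peaks of 12 and 15 requests per unit time)
    # against an allocation rate mu = 10; the mean is 9, so the system is stable.
    print(is_stable([6, 12, 7, 15, 5], allocation_rate=10))  # True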

Figure 1 Block diagram of the cloud architecture with the submission and delivery systems

Let Tp and Bp be two factors defined as follows:

Tp = task priority, i.e., the priority given to a job based on its criticality.

Bp = business priority, i.e., the priority estimated by the priority manager based on the customer relationship factors and the cost of the current job.

Algorithm 1: Estimate the effective priority of a job in order to allocate the resources.

1. At the beginning of the cycle or time window under consideration, as the requests for resources by the jobs arise, the scheduler puts them into the FIFO queue.

2. The Job Priority Manager (JPM) assigns the priority for allocating resources to the jobs based on their criticality, i.e., the task priority Tp, and the business priority Bp; this approach also improves the throughput. The jobs with the highest criticality and the highest business value get the highest priority out of Tp + Bp [11] and hence resources, i.e., it may lead to the earliest realization of the virtual machines (VMs).

The cloud system under this scheme dynamically constitutes multiple arrays of virtual machines (VM1, VM2, ..., VMn).
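To make Algorithm 1 concrete, the sketch below (illustrative code with hypothetical field names; Tp and Bp are assumed to be numeric scores) orders the FIFO queue by the effective priority Tp + Bp.

    from collections import namedtuple

    # Hypothetical job record; tp and bp correspond to Tp and Bp in Algorithm 1.
    Job = namedtuple("Job", ["job_id", "tp", "bp"])

    def effective_priority(job):
        # Effective priority as used by the JPM: Tp + Bp.
        return job.tp + job.bp

    def order_by_priority(fifo_queue):
        """Return the queued jobs ordered so the highest Tp + Bp is served first."""
        return sorted(fifo_queue, key=effective_priority, reverse=True)

    fifo = [Job("J1", tp=3, bp=5), Job("J2", tp=7, bp=4), Job("J3", tp=2, bp=2)]
    for job in order_by_priority(fifo):
        print(job.job_id, effective_priority(job))  # J2 (11), J1 (8), J3 (4)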

Allocation Process and Virtual Machine (VM) Design

At any time window, the jobs received from the Priority Manager carry a composite encoding of priority. Any such job arriving at the allocation system is assigned a VM to proceed further in the processing. A VM in the routine sense emphasizes a single CPU and a hypervisor to arbitrate the resources across one or many types of operating systems, whereas here we suggest a threading of the resources available for each one and enable the distribution of jobs, on a partial basis or as a whole, in the individual CPU environment. It may be worthwhile to call the resultant instances of threaded CPUs Super VMs, or SVMs. We continue to treat them as VMs, since their dimensions are in any case virtual.

The VMs are integrated from a processor pool, such as a rack of CPUs in the machine room, or a collection of such entities geographically distributed, which can be dynamically allocated to jobs on demand.

The space of a VM can be shown as in Figure 2:

VM = (Mi, Pi, Si)

Figure 2 Virtual machine from memory, processor, and storage (Trio Space)

It consists of memory (Mi) with a capacity in gigabytes, processors (Pi) generally operating at gigahertz speeds, and storage (Si), all distributed with enough connectivity to realize a meaningful string of processor beads and storage units.

There are a few hypotheses with which the resource allocation for the arriving tasks is framed, with respect to the available processors, memories, and storage disks. The formation of virtual machines is the key to the progress.

Formation of Virtual Machines – The whole cloud is viewed with the Trio Space approach as discussed earlier. The VMs are formed from a set of processors P (P1, ..., Pn), memories M (M1, ..., Mn), and storage units S (S1, ..., Sn). The combination of the three forms the M-P-S matrix, which is constructed based on Hypotheses 1 and 2.

Hypothesis 1:

The processor (Pmn) is tightly coupled to the corresponding memory (Mmn) at runtime. These memories are partitioned and provisioned on demand.

Hypothesis 2:

Communication between the processors and the systems utilizes the maximum available bandwidth. The storage systems and the network associated with the cloud unit under consideration are fairly coupled, so that sharing and reattachment are easily possible.
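The data-structure sketch below (our own illustration; the class and field names are assumptions, not the paper's) captures the Trio Space view: a VM is a triple of memory, processors, and storage carved on demand from the M-P-S pool, subject to the coupling hypotheses above.

    from dataclasses import dataclass

    @dataclass
    class VM:
        # Trio Space representation: VM = (Mi, Pi, Si)
        memory_gb: int    # Mi, main memory in gigabytes
        processors: int   # Pi, number of processors
        storage_gb: int   # Si, storage in gigabytes

    @dataclass
    class MPSPool:
        """Free resources of the cloud from which VMs are integrated on demand."""
        memory_gb: int
        processors: int
        storage_gb: int

        def carve_vm(self, memory_gb, processors, storage_gb):
            # Form a VM only if all three dimensions are simultaneously available.
            if (memory_gb <= self.memory_gb and processors <= self.processors
                    and storage_gb <= self.storage_gb):
                self.memory_gb -= memory_gb
                self.processors -= processors
                self.storage_gb -= storage_gb
                return VM(memory_gb, processors, storage_gb)
            return None  # not enough free resources in this time window

    pool = MPSPool(memory_gb=512, processors=128, storage_gb=4096)
    vm = pool.carve_vm(memory_gb=16, processors=4, storage_gb=100)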

Migration Processors Platoon

The availability of processors in a VM at a time instance “t” is estimated with respect to time instances “t-1” and “t”.

Here the entirety of processors Pe at time t is

Pe = Pr(t-1) + P(t)

where Pr(t-1) is the total number of processors released at time t-1. This entirety constitutes the free migrating processors, Pe = FPm(t).

The five different cases of the free migrating processors FPm (t) are

Case 1: Underutilization of the available free processors FPm(t)

Here the available FPm(t) would be more than that required for the jobs Jn:

FPm(t) >> Pr(t-1) - Σ_Jn NP(Jn) + Σ_Jn Pr(Jn)

Resources are in abundance, and the submitted job will be executed within the stipulated time period, which is an advantage to the consumer but not to the service provider or the cloud system, since the resources are idling. Here NP(Jn) is the characteristic population value of the cloud, i.e., the processor demand of job Jn, and Pr(Jn) is the number of processors released by job Jn.

Case 2: Near optimal utilization of free processors FPm(t)

FPm(t) > Pr(t-1) - Σ_Jn NP(Jn)

Compared with the first case, fewer resources will be idling; hence there would be comparatively less business loss to the system.

Case 3: Optimal utilization of free processors FPm (t)

FPm(t) = Pr(t-1) - Σ_Jn NP(Jn)


This is the optimal condition, wherein the available resources, i.e., the processors, are utilized to the maximum.

Case 4: Scarcity of processors FPm(t)

FPm(t) < Pr(t-1) - Σ_Jn NP(Jn)

In this case some of the resources have still not been released by the jobs submitted earlier. This causes overutilization of the processors at time instance t-1. Owing to the scarcity, there is a possibility that a submitted job is allocated a processor only after a finite delay. In this case the performance of the cloud system may slow down.

Case 5: Job abort due to non-availability of processors FPm(t)

FPm(t) << Pr(t-1) - Σ_Jn NP(Jn)

In this case the processors are still held by the jobs submitted at time instance t-1. A job arriving at time t will be starved, so the jobs submitted will be aborted, causing business loss.

The stability condition of the cloud for optimal operation is obtained by comparing the total number of processors migrating from the previous time frame with the total processor demand swept across all the jobs accepted by the allocation unit on the basis of the priority manager.

Cloud Efficiency η = (No. of processors on supply - No. of processors on demand) / (Mean free occurrence value of processors of the cloud)
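The short sketch below (our own illustration; the margin used to separate the ">>"/"<<" cases from the ">"/"<" cases is an assumption, and the released-processor correction of Case 1 is omitted for brevity) classifies a time window into the five cases and evaluates the efficiency η.

    def classify_window(fpm_t, released_prev, demand, slack=0.2):
        """Classify FPm(t) against Pr(t-1) - sum of NP(Jn) into the five cases.

        `slack` is a hypothetical margin (a fraction of the comparison term)
        used to distinguish 'much greater/less than' from 'greater/less than'."""
        need = released_prev - demand                  # Pr(t-1) - sum NP(Jn)
        margin = slack * abs(need) if need else slack
        if fpm_t > need + margin:
            return "Case 1: underutilization"
        if fpm_t > need:
            return "Case 2: near optimal utilization"
        if fpm_t == need:
            return "Case 3: optimal utilization"
        if fpm_t >= need - margin:
            return "Case 4: scarcity, delayed allocation"
        return "Case 5: acute scarcity, job abort"

    def cloud_efficiency(supply, demand, mean_free_occurrence):
        # eta = (processors on supply - processors on demand) / mean free occurrence value
        return (supply - demand) / mean_free_occurrence

    print(classify_window(fpm_t=140, released_prev=150, demand=50))   # Case 1
    print(cloud_efficiency(supply=120, demand=100, mean_free_occurrence=100))  # 0.2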

Algorithm 2

1. A job arrives at the cloud.

2. Jobs are put into the FIFO queue.

3. Check the job priority, based on Algorithm 1.

4. Estimate the time required for processing the job J1, i.e., the total binding time Tbt, as per equation 1.

5. Check for VM availability: if a VM matching the requirement is available, allot it; else form the VMs from a combination of the tightly coupled (processor and memory) and the loosely coupled (processor and storage disk) components of the VMs.

6. Compare the time estimated in step 4 with the current time: if the current time ≥ the estimated time, the job is completed; deallocate the resources. Else, if the current time < the estimated time, the job is still in process; repeat step 6.
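A condensed sketch of Algorithm 2 is given below; it is our own illustrative code, and the helper callables (estimate_binding_time, find_vm, form_vm, release_vm) are assumed interfaces rather than parts of the paper.

    import time

    def run_job(job, pool, estimate_binding_time, find_vm, form_vm, release_vm,
                poll_interval=1.0):
        """Steps 4-6 of Algorithm 2 for a single job taken from the FIFO queue."""
        t_bt = estimate_binding_time(job)      # step 4: total binding time Tbt
        vm = find_vm(pool, job)                # step 5: is a suitable VM available?
        if vm is None:
            # else branch: form a VM from tightly coupled (processor, memory) and
            # loosely coupled (processor, storage disk) components of the pool
            vm = form_vm(pool, job)
        deadline = time.time() + t_bt
        while time.time() < deadline:          # step 6: current time < estimated time
            time.sleep(poll_interval)          # job still in process
        release_vm(vm, pool)                   # job completed: deallocate resources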

5 Performance analysis of the model

Figure 3 shows the performance results of the model. Around 80-90 percent of the jobs submitted to the cloud system are allocated the required resources at a time instance t. The figure depicts the efficiency of the cloud system with respect to the advancement of the time window; as the time window advances, the efficiency saturates. Table 2 depicts the various levels of utilization of the resources in the cloud system.

Simulation Setup

Table 1 Simulation setup of the cloud system

  Parameter                                   Value
  Number of tasks                             1800
  Number of virtual machines in the cloud     1000-2500
  Configuration time                          20 units
  Allocation time                             5 units
  Deallocation time                           5 units

Table 2 Cloud efficiency based on allocation success

  From    To      Performance                        Efficiency class
  0.6     0.8     Underutilization                   Less demand
  0.2     0.6     Near to optimal utilization        A just demand
  -0.2    0.2     Optimal utilization                Matching demand
  -0.4    -0.2    Scarcity of the resources          Over demand
  -0.6    -0.4    Acute scarcity, job rejections     Larger demand
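The Table 2 mapping can also be expressed as a small lookup; the sketch below is our own, and it assumes the range boundaries as reconstructed in the table above.

    # Hypothetical lookup for Table 2: (lower bound, upper bound, performance, class).
    EFFICIENCY_CLASSES = [
        (0.6, 0.8, "Underutilization", "Less demand"),
        (0.2, 0.6, "Near to optimal utilization", "A just demand"),
        (-0.2, 0.2, "Optimal utilization", "Matching demand"),
        (-0.4, -0.2, "Scarcity of the resources", "Over demand"),
        (-0.6, -0.4, "Acute scarcity, job rejections", "Larger demand"),
    ]

    def efficiency_class(eta):
        """Map a cloud efficiency value eta to its Table 2 performance class."""
        for low, high, performance, demand_class in EFFICIENCY_CLASSES:
            if low <= eta <= high:
                return performance, demand_class
        return None  # eta falls outside the tabulated range

    print(efficiency_class(0.1))  # ('Optimal utilization', 'Matching demand')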


Figure 3 Cloud resource allocation efficiency (x-axis: windows W1-W7 in the time marching; y-axis: allocation success, from negative to positive performance)

6 Conclusions

The emerging cloud computing scenario looks forward to a strategic repositioning in the approaches used to engage consumers with a wide variety of services, from conventional infrastructure as a service to a powerful user application service paradigm. It calls for a much more sophisticated job management and resource allocation strategy which can maintain the cloud efficiency at optimal levels of utilization of the processors and other resources. In this paper, we have presented a model which deals with the efficient allocation of the resources in an M-P-S matrix model. We also formulated a strategy to estimate the Cloud Resource Allocation Efficiency, and certain strategies to analyze the performance level of the cloud system through the Cloud Efficiency factor are presented. Future work can extend this research to other QoS parameters of a cloud system.

References

[1] Yagiz Onat Yazir, Chris Matthews, Roozbeh Farahbod, "Dynamic Resource Allocation in Computing Clouds using Distributed Multiple Criteria Decision Analysis", IEEE Third International Conference on Cloud Computing, pp. 91-98, 2010.

[2] Daniel Warneke and Odej Kao, "Exploiting Dynamic Resource Allocation for Efficient Parallel Data Processing in the Cloud", IEEE Transactions on Parallel and Distributed Systems, pp. 1-14, 2011.

[3] Jiayin Li, Meikang Qiu, Jian-Wei Niu, Yu Chen, Zhong Ming, "Adaptive Resource Allocation for Preemptable Jobs in Cloud Systems", IEEE International Conference on Intelligent Systems Design and Applications, pp. 31-36, 2010.

[4] Graham Kirby, Alan Dearle, Angus Macdonald, Alvaro Fernandes, "An Approach to Ad hoc Cloud Computing", arXiv, pp. 1-6, 2010.

[5] Rajkumar Buyya and Karthik Sukumar, "Platforms for Building and Deploying Applications for Cloud Computing", CSI Communications, pp. 6-11, 2011.

[6] T. R. Gopalakrishnan Nair, P. Jayarekha, "Strategic Prefetching of VoD Programs Based on ART2 driven Request Clustering", International Journal of Computer Science, Engineering and Applications (IJCSEA), May 2011, ISSN: 2230-9616 (online), 2231-0088.

[7] T. R. Gopalakrishnan Nair, P. Jayarekha, "Pre-allocation Strategies of Computational Resources in Cloud Computing using Adaptive Resonance Theory-2", International Journal on Cloud Computing: Services and Architecture (IJCCSA), April 2011, ISSN: 2231-5853 (online), 2231-6663.

[8] Donald Kossmann, Tim Kraska, Simon Loesing, "An Evaluation of Alternative Architectures for Transaction Processing in the Cloud", SIGMOD '10, 2010.

[9] T. R. Gopalakrishnan Nair, P. Jayarekha, "Prediction based Prefetching of Computational Resources in Cloud Computing using Adaptive Resonance Theory-2", Technical Report, Advanced Networking, Research and Industry Incubation Centre (RIIC), TR/AN/55, 2011.

[10] T. R. Gopalakrishnan Nair, Vivek Sharma, "A Hybrid Scheduling Approach for Creating Optimal Priority of Jobs with Business Values in Cloud Computing", Technical Report, Advanced Networking, RIIC, TR/AN/56, 2011.
