International Journal on Recent and Innovation Trends in Computing and Communication ISSN: 2321-8169
Volume: 3 Issue: 9 5500 - 5504
______________________________________________________________________________________
5500
IJRITCC | September 2015, Available @ http://www.ijritcc.org
_______________________________________________________________________________________
Improve Energy Efficiency Model for Cloud Computing
Dhanraj Meena¹, Dr. R. K. Gupta², Mahesh Verma³
¹M.Tech Scholar, Gyan Vihar University, Jaipur, Rajasthan, India
²Professor, E.C.E Dept., Gyan Vihar University, Jaipur, Rajasthan, India
³M.Tech Scholar, Gyan Vihar University, Jaipur, Rajasthan, India
Abstract: Cloud computing is an "evolving paradigm" that has redefined the way Information Technology based services are offered. It has changed the model of storing and managing data for scalable, real-time, Internet-based applications and resources satisfying end users' needs. More and more remote host machines are being built for cloud services, causing greater power dissipation and energy consumption. Over the decades, power consumption has become an important cost factor for computing resources. In this paper we investigate the areas of a typical cloud infrastructure that are responsible for a substantial amount of energy consumption, and we address the methodologies by which power utilization can be decreased without compromising Quality of Service (QoS) and overall performance. We also define the scope for further research based on the findings of this work. We use energy-aware rate-monotonic scheduling to improve performance with respect to packet loss; the proposed algorithm reduces packet loss.
Keywords: Cloud computing, energy efficiency, scheduling, cluster.
__________________________________________________*****_________________________________________________
1. INTRODUCTION
The latest innovations in cloud computing are making our
business applications even more mobile and collaborative,
similar to popular consumer apps like Facebook and Twitter.
As consumers, we now expect that the information we care
about will be pushed to us in real time, and business
applications in the cloud are heading in that direction as
well.
Cloud computing models are shifting. In the cloud/client architecture, the client is a rich application running on an Internet-connected device, and the server is a set of application services hosted on an increasingly elastic, scalable cloud computing platform. The cloud is the control point and system of record, and applications can span multiple client devices. The client environment may be a native application or browser-based; the increasing power of the browser is available to many client devices, mobile and desktop alike.
Robust capabilities in many mobile devices, the increased demand on networks, the cost of networks and the need to manage bandwidth use create incentives, in some cases, to minimize the computing and storage footprint of the cloud application and to exploit the intelligence and storage of the client device. However, the increasingly complex demands of mobile users will drive apps to require increasing amounts of server-side computing and storage capacity.
1.1 CLOUD COMPUTING AN OVERVIEW
Cloud computing is a computing paradigm, where a large
pool of systems are connected in private or public networks,
to provide dynamically scalable infrastructure for
application, data and file storage. With the advent of this
technology, the cost of computation, application hosting,
content storage and delivery is reduced significantly.
Cloud computing is a practical approach to experience direct
cost benefits and it has the potential to transform a data
center from a capital-intensive set up to a variable priced
environment.
The idea of cloud computing is based on a very fundamental principle of 'reusability of IT capabilities'. The difference that cloud computing brings compared to traditional concepts of "grid computing", "distributed computing", "utility computing" or "autonomic computing" is that it broadens horizons across organizational boundaries.
Forrester defines cloud computing as:
“A pool of abstracted, highly scalable, and managed
compute infrastructure capable of hosting end-customer
applications and billed by consumption.”
Fig 1: Conceptual View of Cloud Computing
1.2 CLOUD COMPUTING MODELS
1. Software as a Service (SaaS): In this model, a complete application is offered to the customer as a service on demand. A single instance of the service runs on the cloud and multiple end users are serviced. On the customers' side, there is no need for upfront investment in servers or software licenses, while for the provider, the costs are lowered, since only a single application needs to be hosted and maintained. Today SaaS is offered by companies such as Google, Salesforce, Microsoft, Zoho, etc.
2. Platform as a Service (PaaS): Here, a layer of software or a development environment is encapsulated and offered as a service, upon which higher levels of service can be built. The customer has the freedom to build his own applications, which run on the provider's infrastructure. To meet the manageability and scalability requirements of the applications, PaaS providers offer a predefined combination of OS and application servers, such as the LAMP platform (Linux, Apache, MySQL and PHP), restricted J2EE, Ruby, etc. Google's App Engine, Force.com, etc. are some of the popular PaaS examples.
3. Infrastructure as a Service (IaaS): IaaS provides basic storage and computing capabilities as standardized services over the network. Servers, storage systems, networking equipment, data centre space, etc. are pooled and made available to handle workloads. The customer would typically deploy his own software on the infrastructure. Some common examples are Amazon, GoGrid, 3Tera, etc.
Fig 2: Cloud Model
1.3 UNDERSTANDING PUBLIC AND PRIVATE CLOUDS
Enterprises can choose to deploy applications on Public,
Private or Hybrid clouds. Cloud Integrators can play a vital
part in determining the right cloud path for each
organization.
Public Cloud
Public clouds are owned and operated by third parties; they
deliver superior economies of scale to customers, as the
infrastructure costs are spread among a mix of users, giving
each individual client an attractive low-cost, “Pay-as-you-
go” model. All customers share the same infrastructure pool
with limited configuration, security protections, and
availability variances. These are managed and supported by
the cloud provider. One of the advantages of a public cloud is that it may be larger than an enterprise's cloud, thus providing the ability to scale seamlessly, on demand.
Private Cloud
Private clouds are built exclusively for a single enterprise.
They aim to address concerns on data security and offer
greater control, which is typically lacking in a public cloud.
There are two variations to a private cloud.
On-premise Private Cloud
On-premise private clouds, also known as internal clouds, are hosted within one's own data center. This model
provides a more standardized process and protection, but is
limited in aspects of size and scalability. IT departments
would also need to incur the capital and operational costs for
the physical resources. This is best suited for applications
which require complete control and configurability of the
infrastructure and security.
Externally hosted Private Cloud
This type of private cloud is hosted externally with a cloud
provider, where the provider facilitates an exclusive cloud
environment with a full guarantee of privacy. This is best suited for enterprises that don't prefer a public cloud due to the sharing of physical resources.
Hybrid Cloud
Hybrid Clouds combine both public and private cloud
models. With a hybrid cloud, service providers can utilize third-party cloud providers in a full or partial manner, thus increasing the flexibility of computing. The hybrid cloud
environment is capable of providing on-demand, externally
provisioned scale. The ability to augment a private cloud
with the resources of a public cloud can be used to manage
any unexpected surges in workload.
2. PROBLEM STATEMENT
Cloud computing has emerged as a new consumption and virtualization model for high-cost computing infrastructures and web-based IT solutions. The cloud provides convenient, on-demand service, elasticity, broad network access, resource pooling and measured service [1] in a highly customizable manner with minimal management effort. The application of low-cost computing devices, high-performance network resources, huge storage capacity, semantic web technology, SOA (Service Oriented Architecture), usage of APIs (Application Programming Interfaces), etc., has helped in the swift growth of cloud
technology. A cloud infrastructure generally encapsulates all
those existing technologies in a web service based model to
offer business agility, improved scalability and on demand
availability. The rapid deployment model, low start-up investment, pay-as-you-go scheme and multi-tenant sharing of resources are all added attributes of cloud technology, due to which major industries turn to virtualization for their enterprise applications [2].
Cloud applications are deployed in remote data centers
(DCs) where high capacity servers and storage systems are
located. A fast growth of demand for cloud-based services results in the establishment of enormous data centers consuming large amounts of electrical power. An energy-efficient model is required for the complete infrastructure to reduce operational costs while maintaining vital Quality of Service (QoS). Energy optimization can be achieved by consolidating resources as per current utilization, efficient virtual network topologies, and the thermal status of computing hardware and nodes. On the other hand, the primary
motivation of cloud computing is related to its flexibility of
resources. As more and more mobile devices are considered major consumption points for remote users in mainstream business, power management has become a bottleneck for the proper functioning of services at the user's end. A trade-off between the energy consumed in computation and that consumed in communication is also a critical aspect to be considered for mobile clients.
In this paper we plan to consolidate all the plausible aspects of an energy-efficient infrastructure model for cloud data centers while considering the performance bottlenecks involved.
2.1. Energy Consumption Analysis
To calculate the amount of energy consumed by data
centers, two metrics were established by Green Grid, an
international consortium [10]. The metrics are Power Usage
Effectiveness (PUE) and Data Centre Infrastructure
Efficiency (DCiE) as defined below:
PUE = Total Facility Power / IT Equipment Power
DCiE = 1/PUE = (IT Equipment Power / Total Facility Power) × 100%
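These two metrics can be computed directly from measured power draw. A minimal sketch follows; the kW figures are illustrative assumptions, not measurements from the paper:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over IT equipment power."""
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Data Centre infrastructure Efficiency: 1/PUE expressed as a percentage."""
    return it_equipment_kw / total_facility_kw * 100.0

# Illustrative figures: 1200 kW drawn by the whole facility,
# of which 750 kW reaches the IT equipment.
print(pue(1200.0, 750.0))   # 1.6
print(dcie(1200.0, 750.0))  # 62.5
```

A PUE close to 1.0 (DCiE close to 100%) means nearly all facility power reaches the IT load; higher PUE values indicate overhead in cooling, power distribution and other support systems.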
The IT equipment power is the load delivered to all
computing hardware resources, while the total facility power
includes other energy facilities, specifically, the energy
consumed by everything that supports IT equipment load.
In cloud infrastructure, a node refers to general multicore
server along with its parallel processing units, network
topology, power supply unit and storage capacity. The
overall energy consumption of a cloud environment can be
classified as follows [9]:
ECloud = ENode + ESwitch + EStorage + EOthers
Consumption of energy in a cloud environment having n
number of nodes and m number of switching elements can
be expressed as:
ECloud = n(ECPU + EMemory + EDisk + EMainboard + ENIC) + m(EChassis + ELinecards + EPorts) + (ENASServer + EStorageController + EDiskArray) + EOthers
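The decomposition above maps naturally onto a small accounting helper. In the sketch below every per-component wattage is an illustrative assumption made for demonstration, not a figure from the paper:

```python
# Sketch of the ECloud decomposition above; all wattages are
# illustrative assumptions, not measurements.
NODE_W = {"cpu": 95.0, "memory": 30.0, "disk": 10.0,
          "mainboard": 40.0, "nic": 5.0}                 # per node
SWITCH_W = {"chassis": 50.0, "linecards": 35.0, "ports": 15.0}  # per switch
STORAGE_W = {"nas_server": 200.0, "controller": 120.0, "disk_array": 300.0}
OTHERS_W = 500.0  # everything else supporting the IT load

def e_cloud(n_nodes: int, m_switches: int) -> float:
    """Total draw: n*E_Node + m*E_Switch + E_Storage + E_Others (watts)."""
    return (n_nodes * sum(NODE_W.values())
            + m_switches * sum(SWITCH_W.values())
            + sum(STORAGE_W.values())
            + OTHERS_W)

# 100 nodes at 180 W each, 4 switches at 100 W each,
# plus 620 W storage and 500 W overhead.
print(e_cloud(100, 4))  # 19520.0
```

Separating the terms this way makes it easy to see which component dominates as the node count n grows: the n*E_Node term quickly outweighs the fixed storage and overhead terms.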
2.2. ENERGY EFFICIENCY IN CLOUD
INFRASTRUCTURES
Building an energy-efficient cloud model does not mean having only energy-efficient host machines. The other components of a complete cloud infrastructure should also be considered for energy-aware operation. Several research works have been carried out to build energy-efficient cloud components individually. In this section we investigate the areas of a typical cloud setup that are responsible for a considerable amount of power dissipation, and we consolidate the possible approaches to address these issues, considering energy consumption as part of the cost functions to be applied.
2.3. ENERGY EFFICIENT HARDWARE
One of the best approaches to reducing power consumption at the data centre and virtual machine level is the use of energy-efficient hardware on the host side. International standards bodies such as the European TCO Certification [3] and US Energy Star [4] rate energy-efficient consumer products. The rating is essential to measure the environmental impact and carbon footprint of computer products and peripherals.
2.4 MEMORY-AWARE SCHEDULING IN
MULTIPROCESSOR SYSTEMS
The main issues with memory-aware scheduling are high packet loss and low residual energy. In present multi-core systems, cores on a chip share resources such as caches, DRAM, etc. Tasks running on one core may harmfully affect the performance of tasks on other cores, and hence may even maliciously create a Denial of Service (DoS) attack on the same chip [17]. Task assignment should be optimized by co-scheduling tasks on the processor cores considering memory contention and frequency selection.
Memory-aware task scheduling is based on run-queue sorting followed by frequency selection [16]. Run-queue sorting is a time-slice-based multiprocessor scheduling
algorithm, which is a specific form of gang scheduling. For further avoidance of memory contention, frequency selection can be used, which allows the processor to switch to a suitable frequency for each task without causing any significant performance overhead.
First, all tasks are sorted in descending order based on their execution time. Tasks with lower execution times are more flexible when it comes to their scheduling, as their impact on the critical path is not as great as that of tasks with long execution times. We then proceed to build the schedule one task at a time. We take the next available CPU p and the next task t from the ordered task list and check whether all of t's dependencies have finished executing. If they have not, we check whether there are early-execution edges for t. For every early-execution edge going to t, we use the edge's information to determine whether it is possible to execute task t at the current time without waiting for its dependency to complete. We verify that all dependencies meet the early-execution criteria by looking at their current loop's iterator/iteration pair. If these match the iterator/iteration pair in the early-execution edge, we can assume that the execution of t can start. Before mapping task t, we look at the data currently placed in p's SPM and search for a task alt which depends on this data. If we find such a task, we map it to p; otherwise, we map task t to p at the cost of a DMA transfer. This helps keep the number of DMA transfers to a minimum.
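The core of the mapping loop described above can be sketched as follows. This is a simplified illustration only: the Task structure and the round-robin CPU choice are assumptions, and the early-execution-edge and SPM/DMA checks are omitted (the paper cites [16] for the full algorithm):

```python
# Simplified sketch of the greedy mapping loop described above.
# The Task structure and round-robin CPU choice are illustrative
# assumptions; early-execution edges and SPM/DMA logic are omitted.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    exec_time: float
    deps: list = field(default_factory=list)  # names of prerequisite tasks

def build_schedule(tasks, num_cpus):
    """Sort tasks by execution time (longest first), then greedily map
    each task whose dependencies are done onto the next CPU."""
    order = sorted(tasks, key=lambda t: t.exec_time, reverse=True)
    done, schedule = set(), {cpu: [] for cpu in range(num_cpus)}
    cpu, pending = 0, list(order)
    while pending:
        for t in pending:
            if all(d in done for d in t.deps):  # all dependencies finished?
                schedule[cpu].append(t.name)
                done.add(t.name)
                pending.remove(t)
                cpu = (cpu + 1) % num_cpus      # advance to next CPU
                break
        else:
            break  # every remaining task is blocked
    return schedule

tasks = [Task("a", 5.0), Task("b", 3.0, ["a"]), Task("c", 2.0)]
print(build_schedule(tasks, 2))  # {0: ['a', 'c'], 1: ['b']}
```

Sorting by execution time before mapping mirrors the text above: long tasks are placed first because they constrain the critical path, while short tasks fill the remaining slots.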
3. PROPOSED METHODOLOGY
We introduce energy-aware rate-monotonic scheduling to reduce packet losses during data uploading and downloading. Under the existing algorithm (memory-aware scheduling), the packet losses are too high.
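Rate-monotonic scheduling assigns static priorities inversely proportional to task periods (shorter period means higher priority), and the classic Liu-Layland utilization bound gives a quick sufficient schedulability check. A sketch follows; the task set is an illustrative assumption, and the paper's energy-aware extension is understood to build on this static-priority rule:

```python
# Rate-monotonic priority assignment plus the Liu-Layland
# schedulability test. The task set below is illustrative.

def rm_priorities(tasks):
    """tasks: list of (name, period, wcet). Shorter period = higher priority."""
    return sorted(tasks, key=lambda t: t[1])  # highest priority first

def liu_layland_schedulable(tasks):
    """Sufficient test: total utilization <= n * (2^(1/n) - 1)."""
    n = len(tasks)
    utilization = sum(wcet / period for _, period, wcet in tasks)
    return utilization <= n * (2 ** (1 / n) - 1)

tasks = [("net_rx", 10.0, 2.0), ("net_tx", 20.0, 4.0), ("log", 50.0, 5.0)]
print([name for name, _, _ in rm_priorities(tasks)])  # ['net_rx', 'net_tx', 'log']
print(liu_layland_schedulable(tasks))  # True (U = 0.5 <= ~0.780)
```

Because the bound is only sufficient, a task set that fails it may still be schedulable; an exact response-time analysis would then be needed.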
4. RESULTS
Figure 3 shows the images to be uploaded and downloaded.
Fig 3: Image for uploading
Figure 4 compares the energy efficiency of memory-aware scheduling and energy-aware rate-monotonic scheduling.
Fig 4: Energy efficiency vs. number of nodes
Figure 5 shows the packet loss against the number of nodes for data upload and download.
Fig 5: Packet loss vs. number of nodes
Figure 6 shows the packet loss over time for data upload and download.
Fig 6: Packet loss vs. time
CONCLUSION
In this paper we have investigated the need for power management and energy efficiency in the cloud computing model. It has been shown that a few major components of the cloud architecture are responsible for a high amount of power dissipation in the cloud. The possible ways to address each sector when designing an energy-efficiency model have also been studied. Finally, we have indicated the future research direction and the continuation of this work towards the next level of implementation.
References
[1] P. Mell and T. Grance, “The NIST definition of
cloud computing”, National Institute of Standards
and Technology, vol. 53, no. 6, (2009).
[2] M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R.
Katz, A. Konwinski, G. Lee, D. Patterson, A.
Rabkin, I. Stoica and M. Zaharia, “Above the
Clouds: A Berkeley View of Cloud Computing”,
Tech. rep., (2009) February, UC Berkeley Reliable
Adaptive Distributed Systems Laboratory.
[3] European TCO Certification,
http://www.tcodevelopment.com.
[4] Energy Star, http://www.energystar.gov,
http://www.euenergystar.org.
[5] Intel whitepaper “Wireless Intel SpeedStep Power
Manager: optimizing power consumption for the
Intel PXA27x processor family”.
[6] “AMD PowerNow!™ Technology: dynamically
manages power and performance”, Informational
white paper.