

Int. J. of Thermal & Environmental Engineering Volume 13, No. 1 (2016) 37-48

* Corresponding author. Tel.: +971508311059 E-mail: [email protected] © 2016 International Association for Sharing Knowledge and Sustainability DOI: 10.5383/ijtee.13.01.007


Energy-Aware Task Scheduling (EATS) Framework for Efficient Energy in Smart Cities Cloud Computing Infrastructures

Leila Ismail a,*, Abbas A. Fardoun b

a College of Information Technology, UAE University, Al-Ain, United Arab Emirates b College of Engineering, Electrical Engineering, Phoenicia University, District of Zahrani, Lebanon

Abstract

Cloud computing is an emerging technology with important potential for future Smart Cities' information technology infrastructure. Cloud computing employs a heterogeneous infrastructure and a middleware aiming to provide services to users in Smart Cities. The energy consumption of the Clouds' underlying data centers becomes a crucial issue as Clouds become necessary components of a heavily used smart digital ecosystem. In this paper, we propose an energy-aware task scheduling (EATS) framework, which is responsible for scheduling users' tasks in the Cloud while optimizing the energy consumption of the underlying infrastructure. This paper describes our framework and its implementation, and reports on energy consumption under different workload conditions. The results show that servers in steady state consume 54% of the energy of servers at peak usage, and that the power-off and startup of servers account for 54% and 68%, respectively, of the energy consumption at servers' peak usage in our experimental environment, suggesting that strategies based on powering servers off and on should be avoided. The results in this paper are promising directions for saving energy in cloud providers' data centers.

Keywords: Cloud Computing, Green Computing, Energy Efficiency, Data Centers, Scheduling Algorithms, Cloud Computing Services and Middleware

I. Introduction

Smart cities use Information and Communication Technologies (ICT) and digital ecosystems to deliver services, increase resource efficiency and engage more efficiently with their citizens. Cloud computing plays a major role as a consolidated ICT software and hardware infrastructure, communicating with a large number of digital devices and participating in a digital ecosystem that includes network and software communications. Citizens use digital devices to obtain services. Digital devices can be wearable health devices that send real-time health data to medical staff for an immediate or planned intervention, potentially saving lives. Health data can be sent to the Cloud regularly for real-time processing, predictions, and notifications to the medical staff. Citizens of a Smart City can use communication devices connected to Cloud applications, for instance, to have uninterrupted access to mailboxes, medical records, real-time analysis of stock options portfolios, comparative prices of airline tickets, smart-learning applications, and other applications from which citizens, businesses and public entities need answers.

Furthermore, with the emerging vision of the Internet of Things and many of its applications, such as sensor-equipped communicating vehicles, autonomous smart energy-consumption monitors, smart grids, and smart homes, where physical entities are connected, real-time access to information about the connected entities and the objects in them is very valuable, as it provides means to increase system efficiency and productivity. The Cloud computing ICT infrastructure is of high importance for real-time communication, big-data processing, and elastic scaling up and down according to applications' requirements; it enables ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources, for instance,


networks, servers, storage, applications and services. These resources can be rapidly provisioned and released with minimal management effort or service provider interaction [1]. Cloud computing is thus an essential building block of the digital ecosystems of a Smart City. There are three Cloud service models: Cloud Software as a Service (SaaS), Cloud Platform as a Service (PaaS), and Cloud Infrastructure as a Service (IaaS). Cloud SaaS provides users with applications that they can run to obtain results. Cloud PaaS provides users with the ability to deploy applications onto the cloud. Cloud IaaS provides users with the capability to provision processing, storage, and networks for running their applications. Cloud computing is also used by companies to greatly reduce the costs associated with operating and administering their own infrastructure to run applications.

With their potential heavy use, the Clouds, running middleware and software services, including high performance computing and big data processing applications, will be highly solicited. This makes the energy consumption of Cloud data centers an important issue to address, both for the sake of the environment and to reduce electricity costs. According to the Natural Resources Defense Council (NRDC) in the USA [2], data centers used about 91 billion kilowatt-hours of electrical energy in 2013, equivalent to the output of 34 large coal-fired plants. This is estimated to grow to 140 billion kilowatt-hours by 2020, a drastic 53% increase, with an expected electricity cost of $13 billion and about 100 million metric tons of carbon pollution emitted every year. Electricity consumed in data centers, including enterprise servers, ICT equipment, cooling equipment and power equipment, is expected to contribute substantially to the electricity consumed in the European Union (EU): Western European data-center electricity consumption was estimated at 56 TWh per year in 2007 and is projected to increase to 104 TWh per year by 2020, with the corresponding CO2 emissions [3]. Therefore, it is crucial to introduce models that reduce the energy consumption of data centers and allow them to run energy-efficiently. In addition to energy cost savings, energy efficiency is seen as a solution to the problem of reducing greenhouse gas emissions [4].

In the last few years, researchers have conducted several studies and experiments showing that the power consumption of clusters and data centers is considerable and that action is required. However, to our knowledge, very little research has been done on scheduling algorithms that reduce the power consumption of clouds. The energy consumption in data centers can be decreased by improving the hardware component architecture, deploying energy-efficient resource scheduling, using efficient power supply options [5], designing measures for efficient air handling [6] and cooling ([5]-[7]), as well as by changing the software or firmware properties of the computing cloud. In this work, we focus on scheduling to improve data center energy efficiency. Some works explore the efficient migration of virtual machines in clouds [8] [9], load-balancing and unbalancing algorithms for resource consolidation and selection [10] [11], initiating component sleep states [12], and turning off idle servers [11].

In our initial work [13], we introduced the EATS model. In this work, we introduce the EATS framework and implement an experimental test-bed to understand the impact of application types (CPU-bound versus I/O-bound) and load on energy consumption. The results show that CPU-bound applications consume more energy than I/O-bound applications; the CPU load must therefore be considered as part of our scheduling framework. The results of our empirical studies on server shutdowns and startups suggest that scheduling algorithms should avoid frequent shutdowns and startups in their scheduling strategies, as they account for 54% and 68%, respectively, of the energy consumption at peak server performance.

The rest of the paper is structured as follows. Section II overviews related works. Section III describes the system model. Our scheduling algorithm and the power-consumption monitoring tool are described in Section IV. Section V describes the implemented testbed and its design. Section VI describes the conducted experiments and the experimental results. Section VII concludes the work and states future work.

II. Related Works

In the last few years there have been several research efforts to introduce scheduling models that enhance energy efficiency in data centers, clusters and, recently, Clouds. Works on scheduling models for energy saving fall into two main categories: 1) load-balancing energy-aware scheduling models, and 2) optimization-based scheduling models. References [8] [9] [10] [11] and [13] achieve energy saving through load balancing. References [12] [14] and [15] introduce optimization techniques. Reference [13] introduces two task consolidation heuristics, Energy Conscious Task Consolidation (ECTC) and Maximum Rate Utilization (Max-Util), to maximize resource utilization and assign tasks to the resources where energy consumption will be minimal. A comparative analysis against a random algorithm in simulation concludes that the proposed heuristics are more energy efficient by 18% and 13%, respectively. Reference [10] tackles the placement problem by considering the availability of CPU resources and memory constraints. A memory compression technique is used to prevent node under-utilization, and energy reduction through request discrimination is also considered. Reference [11] proposes an algorithm based on load balancing and unbalancing decisions, considering the tradeoff between power and two types of performance (throughput and execution time), freeing nodes when possible and turning them off; a 20% energy saving was reported on a homogeneous simulated cluster. Reference [8] presents and implements a resource management architecture called MUSE that controls server allocation and the routing of requests via a reconfigurable switching infrastructure. The system continuously monitors the load and plans resource allotments by estimating the value of their effects on service performance. Reference [9] proposes a power-aware load balancing algorithm, Bee-MMT (artificial bee colony algorithm with minimal migration time). The proposed algorithm detects overloaded hosts with the Artificial Bee Colony (ABC) algorithm and migrates their VMs using the Minimum Migration Time policy; underutilized hosts are then detected and put to sleep. In simulation, the proposed algorithm was compared to Local Regression-Minimum Migration Time (LR-MMT), Dynamic Voltage Frequency Scaling (DVFS), Interquartile Range-MMT (IQR-MMT), Median Absolute Deviation (MAD-MMT) and a non-power-aware policy. The results showed Bee-MMT to be more effective by 26.46% than LR-MMT, 24.87% than MAD-MMT, 26.68% than IQR-MMT, 89.32% than DVFS and 96.44% than the non-power-aware policy. Reference [14] implements Bee Colony and Ant Colony Optimization to detect idle machines and put them to sleep.

In reference [15], CPU and disk usage were analyzed to find an optimal combination point where energy per utilization is minimized. The heuristic maximizes the sum of the Euclidean distances of the current allocations to these optimal points of each server for task consolidation. In reference [12], energy saving is achieved by putting network interfaces, routers and switches to sleep while maintaining network connectivity. Though putting network components to sleep is a feasible strategy to reduce energy consumption, it requires changes to the current protocol specifications.

In this paper, we introduce an energy-aware framework based on our empirical experiments and on the observation that CPU load is an important factor in the increase or decrease of the energy consumption of the underlying server. The experiments are conducted using different workloads and the Phoronix benchmark.

III. Cloud Monitoring System Model

Figure 1: Energy-Aware Task Scheduling (EATS) System Model

Fig.1 shows a cloud computing system model, where users can access cloud resources and services. The system includes a master worker and multiple computing workers, which are independently connected to the master. Each computing worker operates in an independent heterogeneous mode, as the workers differ in computing capacity, memory limitations and communication protocols for shared resources. Energy monitors, connected to the physical computing workers, compute the respective energy consumption and store it in a database. An energy monitor is an electrical monitoring system augmented with IT solutions for data acquisition and storage in a database. The average power consumption of each computing worker is measured in real time, in Watts, every second. The EATS software runs on the master worker to assign a user's task to a particular computing worker based on a scheduling algorithm which chooses the server for which the energy consumption would be the minimum among all servers. To account for heterogeneity, the scheduler considers the computing capacity of each computing worker. It also considers the load size of the task and the current energy consumption at the time of scheduling.

The cloud consists of N heterogeneous computing workers. Each computing worker i, i ∈ {1, 2, ..., N}, has a computing capacity μi. The total energy consumption of a task increases with the time taken to complete the task. Consequently, we aim to schedule users' tasks in such a way that tasks are completed at the earliest (minimum time) in an energy-efficient way.

The time taken to process an application j by a computing worker i depends on the following:

(i) The computing power of the computing worker (μi).

(ii) The load size (ψj) of the application.

A load size consists of units of load. A unit could be one byte or several bytes and is application dependent. The time taken (TPij) for a computing worker i to process an application j is given by Eq. 1:

TPij = Өi + ψj / μi     (1)

where μi is the computational speed of the computing worker i in units of load per second, ψj is the load size of the application j in units of load, and Өi is the fixed latency in seconds for starting the computation at the computing worker i.
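Reading Өi as an additive startup latency, per its definition, Eq. 1 can be sketched as a small helper; the numeric values below are hypothetical:

```python
def processing_time(theta_i, psi_j, mu_i):
    """Eq. 1: time for computing worker i to process application j.

    theta_i: fixed latency for starting the computation (seconds)
    psi_j:   load size of the application (units of load)
    mu_i:    computational speed of the worker (units of load per second)
    """
    return theta_i + psi_j / mu_i

# Hypothetical worker: 0.5 s startup latency, 200 units/s speed,
# processing an application of 1000 units of load.
print(processing_time(0.5, 1000, 200))  # 5.5 seconds
```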

IV. Energy-Aware Task Scheduling Algorithm

Our model uses the results of the simulation work by Young Choon et al. [14], confirmed by our empirical study in this work, that there exists a linear relationship between energy consumption and the load processed. Since the energy required for processing a task increases with the completion time, we also added execution time as a constraint in developing the energy function (eij) used by EATS. To process the user's task j on the computing worker i, eij is defined as follows:

eij = (1 − ϕ) Eimax TPij ψj + EiReal     (2)

where Eimax is the maximum energy consumed by the computing worker i at its peak load, i.e., the energy at 100% CPU utilization. EiReal is the real-time energy consumption of the computing worker i, measured by the energy meter connected to the physical node of the individual computing worker. ϕ is a constant power factor whose value is the percentage of energy consumption of servers at steady state compared to the servers' energy consumption at 100% CPU load. The main goal of EATS is to schedule n applications among N computing workers. The EATS framework requires the following values:

• The dynamic computing capacity μi of each computing worker i.

• The load size ψj of each application j, for the n applications. ψj is measured in units of load.

• The maximum energy consumption, Eimax of each computing worker i.
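Given these values, a hedged sketch of the energy function, using Eq. 2 as printed and hypothetical numeric values, is:

```python
def energy_function(phi, e_i_max, tp_ij, psi_j, e_i_real):
    """EATS energy function e_ij, following Eq. 2 as printed.

    phi:      constant power factor (steady-state fraction of peak)
    e_i_max:  maximum energy of worker i at peak (100% CPU) load
    tp_ij:    processing time of application j on worker i (Eq. 1)
    psi_j:    load size of application j (units of load)
    e_i_real: real-time energy consumption of worker i
    """
    return (1 - phi) * e_i_max * tp_ij * psi_j + e_i_real

# Hypothetical values: phi = 0.54 (the steady-state fraction measured
# later in the paper), 244 W peak, TPij = 5.5 s, 1000 load units, 132 W now.
print(energy_function(0.54, 244, 5.5, 1000, 132))
```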

Before scheduling, the computing workers are sorted in decreasing order of their computing capacity, and the applications are sorted in decreasing order of their load size. A computing worker with a high computing capacity can process a load faster than one with a lower computing capacity. Sorting the applications in decreasing order of load size and the computing workers in decreasing order of computing capacity before scheduling ensures that the applications with higher load sizes are assigned to the faster machines.

We avoid fully utilizing the servers by keeping an upper utilization threshold of 90% of CPU load. It is highly undesirable to dispatch an application to a fully utilized server, as this affects overall reliability and may result in the queuing of applications. If a server's utilization is found to be above our upper threshold, it is considered an unsuitable host. Each application is then tested against all the suitable computing workers on the basis of the EATS energy function defined in Eq. 2. The computing worker with the lowest energy function is considered the most energy-efficient host for that application. Below is the algorithm for EATS scheduling.

/* n applications; application j has a load size ψj.
   N computing workers; worker i has computing capacity μi and CPU load Li. */
SortDecreasingComputingCapacity(i, μi)
SortDecreasingLoadSize(j, ψj)
for (j = 1; j <= n; j++)
    emin = ∞
    for (i = 1; i <= N; i++)
        if (Li <= 90%)                              // skip over-utilized hosts
            eij = (1 − ϕ) Eimax TPij ψj + EiReal    // energy function, Eq. 2
            if (eij < emin)
                emin = eij; Ij = i
    assign application j to computing worker Ij
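A runnable sketch of this loop follows; the worker and application values (ϕ, capacities, energies, loads) are illustrative, and Eq. 1 and Eq. 2 are used as the per-worker time and energy functions:

```python
PHI = 0.54  # steady-state power fraction (illustrative value)

def eats_schedule(workers, apps):
    """EATS scheduling sketch.

    workers: list of dicts with keys mu (capacity, units of load/s),
             theta (startup latency, s), e_max (peak energy),
             e_real (current energy), load (CPU load, 0..1).
    apps:    list of dicts with key psi (load size, units of load).
    Returns a list of (application index, worker index) assignments.
    """
    # Sort workers by decreasing capacity and apps by decreasing load
    # size, keeping the original indices.
    workers = sorted(enumerate(workers), key=lambda w: -w[1]["mu"])
    apps = sorted(enumerate(apps), key=lambda a: -a[1]["psi"])
    assignments = []
    for j, app in apps:
        best_i, best_e = None, float("inf")
        for i, w in workers:
            if w["load"] > 0.90:  # above the upper threshold: unsuitable host
                continue
            tp = w["theta"] + app["psi"] / w["mu"]                      # Eq. 1
            e = (1 - PHI) * w["e_max"] * tp * app["psi"] + w["e_real"]  # Eq. 2
            if e < best_e:
                best_i, best_e = i, e
        assignments.append((j, best_i))
    return assignments

workers = [
    {"mu": 200, "theta": 0.5, "e_max": 244, "e_real": 132, "load": 0.30},
    {"mu": 100, "theta": 0.5, "e_max": 200, "e_real": 110, "load": 0.50},
]
apps = [{"psi": 1000}, {"psi": 500}]
print(eats_schedule(workers, apps))  # both apps go to the lower-energy worker 0
```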

V. Energy Cloud Monitor Software Design

Fig.2 shows the block diagram of the implemented energy cloud monitoring tool.

Figure 2: Block Diagram for the Cloud Energy Monitor


Legend:

• Initialize Virtual Instrument [VI]
• Initiate Virtual Instrument [VI]
• Auto-Setup Virtual Instrument [VI]
• Configure Continuous Acquisition Virtual Instrument [VI]
• Controls
• Indicators
• Serial Configuration
• VISA resource name: specifies the resource to be opened
• Time Out
• Multiply Function
• Fetch Waveform Virtual Instrument [VI]
• Basic Averaged DC-RMS Virtual Instrument [VI]
• RMS Block
• Waveform Graph Virtual Instrument [VI]
• Write To Measurement File Virtual Instrument [VI]
• Merge Signal Function


Figure 3: Cloud Monitoring Tool Front Panel

We used the virtual instruments (VIs) of LabVIEW to implement the Block Diagram. Fig.3 is the Front Panel of our implementation. The Block Diagram (Fig.2) shows the signal acquisition and the computation of the power consumption of the server, measured every second. The Block Diagram shows our design in 6 different blocks. Data propagation between objects (such as terminals, subVIs, functions, constants, and structures [17]) is represented by the connecting wires between them.

Block 1 is an internal automatic setting in LabVIEW for signal acquisition from the electrical tool, which is an oscilloscope in our experimental environment. It acquires the voltage and current signals from channel 1 and channel 2 of the oscilloscope, respectively, into the device's onboard memory. The Virtual Instrument Software Architecture [VISA] resource name specifies the resource to be opened in LabVIEW. It is selected in instrument descriptor format "USB0::0x0699::0x0367::C057998::INSTR", as shown in Fig.3, representing (USB[board]::manufacturer ID::model code::serial number[::USB interface number]::INSTR). Time Out specifies the time, in milliseconds, for read and write operations; its value is set at 10000 milliseconds. For proper communication with the oscilloscope, the serial configuration defining the Baud Rate, Flow Control, Stop Bits, and Parity is also configured to be compliant with the oscilloscope specification. Baud Rate is the rate of data transmission, set at 9600. Flow Control sets the type of control used by the transfer mechanism; set as RTS/CTS, it uses the Request to Send [RTS] output signal and Clear to Send [CTS] input signal to perform flow control. Stop Bits specifies the number of stop bits used to indicate the end of a frame, set at 10 (the VISA encoding for one stop bit). Parity specifies the parity used for every frame to be transmitted or received, set to the default value of 0 (no parity). The Initialize Virtual Instrument [VI] [17] in the Block Diagram configures the communications interface to the instrument. The Auto-Setup VI senses the input signal and automatically chooses the instrument settings. The Configure Continuous Acquisition VI makes the signal acquisition continuous. The Initiate VI starts the signal acquisition and enters a running state.
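For reference, the acquisition parameters above can be grouped in one place. This is a pure-Python sketch; the numeric stop-bit and parity encodings follow the VISA conventions mentioned in the text and should be treated as assumptions:

```python
# VISA instrument descriptor:
# USB[board]::manufacturer ID::model code::serial number::INSTR
RESOURCE = "USB0::0x0699::0x0367::C057998::INSTR"

SERIAL_CONFIG = {
    "timeout_ms": 10000,     # read/write timeout
    "baud_rate": 9600,       # data transmission rate
    "flow_control": "RTS/CTS",
    "stop_bits": 10,         # VISA encoding: 10 == one stop bit (assumption)
    "parity": 0,             # 0 == no parity (assumption)
}

def descriptor_fields(resource):
    """Split a USB INSTR descriptor into its '::'-separated fields."""
    return resource.split("::")

print(descriptor_fields(RESOURCE))
# ['USB0', '0x0699', '0x0367', 'C057998', 'INSTR']
```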

The signals acquired by Block 1 are fetched by the Fetch Waveform VI in Block 2, which transfers the acquired data from the onboard memory to the server collecting power-consumption data. The voltage waveform from channel 1, the current waveform from channel 2 and the respective timeout in milliseconds are fetched. The voltage and current signal data are sent to a multiplier, where they are multiplied by the respective probe attenuation with a value of 1.

Block 3 calculates the root-mean-square (rms) voltage [18], while Block 4 calculates the rms current. The output voltage signal from Block 2 is input to a multiplier in Block 3, whose function is to multiply the input data by a constant value of 20 to correct for the probe attenuation [19] and oscilloscope attenuation [19]. This corrected voltage signal is input to the rms block, where the root-mean-square value of the voltage is calculated and displayed via an indicator in Fig.2. Similarly, the current signal from Block 2 is input to a multiplier in Block 4, where it is first corrected for probe and oscilloscope attenuation before being fed to the rms block to calculate the root-mean-square value of the server current. This is also displayed via an indicator in Fig.3.

Block 5 produces the waveform display in Fig.3. The voltage and current signals output from Block 2 are fed as inputs to a multiplier to obtain the power signal. These three signals, voltage, current and power, are inputs to the Merge Signal function. The output is then fed to a waveform graph VI for a real-time dynamic graph display of all the signals in Fig.3.

Block 6 outputs the average DC RMS power. The voltage and current signals output from Block 2 are input to a multiplier to obtain the power signal. The power signal thus obtained is input to the Basic Averaged DC-RMS VI to get the root-mean-squared value of the power, averaged over N inputs over time. The output from this VI is multiplied by the constant 20 to correct for the probe and oscilloscope attenuation of the voltage and current input signals. The average DC RMS power is displayed via an indicator in Fig.3 and also exported to a file in our experimental test-bed.
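The arithmetic performed by the rms blocks and Block 6 can be sketched in plain Python; the attenuation constant of 20 comes from the text, while the sample values are hypothetical:

```python
import math

ATTENUATION = 20  # combined probe/oscilloscope attenuation correction

def rms(samples):
    """Root mean square of a sequence of samples (as in the rms blocks)."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def average_power(voltage, current):
    """Average of the instantaneous power samples v(t) * i(t),
    corrected once by the attenuation constant, as in Block 6."""
    power = [v * i for v, i in zip(voltage, current)]
    return ATTENUATION * sum(power) / len(power)

# Illustrative raw samples, before attenuation correction:
v = [1.0, -1.0, 1.0, -1.0]
i = [0.5, -0.5, 0.5, -0.5]
print(rms([ATTENUATION * s for s in v]))  # corrected RMS voltage: 20.0
print(average_power(v, i))                # average power: 10.0
```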

VI. Experiments and Results Analysis

A. Experimental Environment

The energy-consumption benchmarking experiments are conducted on a dual-CPU server with Intel Xeon 2.8 GHz quad-core processors, a 64KB BIOS, and the Linux operating system, CentOS release 6.3. The power supply for the server is 220V, 50Hz from a group 9, C13-style power outlet [20]. To measure the power consumption of the server in real time, we used a Tektronix TDS2012B digital oscilloscope [21]. The two-channel oscilloscope has a sampling rate of 1 GSample/s. A high-voltage differential probe [22] is used to measure the voltage: one end of the probe is connected to channel 1 of the digital oscilloscope and the other end to the power cord. A current probe is connected between channel 2 of the oscilloscope and the power cord to measure the current signal. We implemented data acquisition of the power consumption of the server using the LabVIEW software [17], by implementing the Block Diagram described previously. The computed power consumption is then stored in a database every second.

B. Applications/Benchmarks and Experiments

In order to assess the impact of different application types (CPU-bound, I/O-bound, and memory-bound) and of load size on energy consumption in our experimental environment, we conducted an energy performance benchmark with the following applications:

1) Mencoder: Mencoder is a mostly CPU-bound application. It is a command-line transcoding tool included in the MPlayer project. All video formats supported by MPlayer can be compressed or uncompressed using Mencoder. We used the Mencoder source code version 1.2.0, included in MPlayer project version SVN-r31628-4.4.6 [23], with the AVI video format [24] as video input. Power consumption is analyzed over a workload range from 1 MB to 500 MB.

2) I/O Stress, 7ZIP, and RAMSpeed SMP: We used the AIO-Stress test profile version pts/aio-stress-1.1.1 and 7ZIP. Both are included in the Phoronix test suite version 2.1.0 [25], an open-source comprehensive testing and benchmarking software available for Linux. AIO-Stress is a basic workload generator, an asynchronous I/O disk benchmark created by SuSE and included in the Phoronix Test Suite. AIO-Stress performs random writes to a 2048 MB test file using a 64 KB record size. To evaluate the energy consumption with varying I/O load sizes, we used file sizes varying from 2 GB to 640 GB with corresponding record sizes varying from 128 KB to 40 GB. 7ZIP is a standard compression tool. RAMSpeed is a free open-source command-line utility to measure the cache and memory performance of computer systems. It is a synthetic benchmark that tests RAM speed, returning a throughput score in megabytes per second; the higher the number, the better the performance. The result is an average of 3 runs of 4 different benchmarks, each representing one type of memory operation: Add, Copy, Scale and Triad. We used the RAMSpeed SMP test profile version pts/aio-stress-1.4.0 available in the Phoronix Test Suite.

3) Startup and Shutdown Experiments: Startups and shutdowns are recurrent administrative tasks in data centers. To assess their impact on energy consumption, we measured power consumption during the startup and shutdown of the server and compared it to the power consumption of the server at maximum utilization.

4) Maximum Utilization of CPU: The server is loaded with multiple parallel running tasks so that CPU utilization reaches 100%. Fig.5 shows the power consumption during maximum CPU utilization of the server. An average power consumption of 244 Watts was found. The power consumption at full CPU load serves as the reference for peak power consumption in our experimental environment.
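A minimal way to drive a multi-core server to 100% CPU utilization, as done for the peak-power reference, is to pin one busy worker per core. This sketch is an assumption about the load generator; the paper does not specify how the parallel tasks were produced.

```python
import multiprocessing as mp
import time

def spin(stop_at):
    """Busy-loop until the deadline to keep one core at ~100% usage."""
    x = 0
    while time.time() < stop_at:
        x += 1  # cheap arithmetic keeps the core busy
    return x

def load_all_cores(seconds):
    """Run one busy worker per core so total CPU utilization nears 100%."""
    deadline = time.time() + seconds
    with mp.Pool(mp.cpu_count()) as pool:
        pool.map(spin, [deadline] * mp.cpu_count())

if __name__ == "__main__":
    load_all_cores(0.5)
```

In the actual experiment the load would be held long enough for the power meter to record a stable plateau (Fig.5).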

C. Results Analysis

Startup and Steady State: Fig.4 shows the power consumption during server startup and steady state. The average power consumption during the startup period is 166 Watts, which is 68% of the average power consumption at maximum CPU utilization (100% CPU load) shown in Fig.5; the startup power consumption reaches up to 203 Watts, which is 83% of the average power consumption at maximum CPU utilization. The steady-state power consumption (Fig.4) is constant at 54% of the average power consumption at maximum CPU utilization. This shows that even when the server is idle, a considerable amount of energy is still consumed by the underlying hardware. Hence, servers should rather be used at their maximum utilization to balance data center use against power consumption cost.

Figure 4: Power Consumption over Time at Startup


Ismail & Fardoun / Int. J. of Thermal & Environmental Engineering, 13 (2016) 37-48


Figure 5: Power-Consumption over Time at 100% CPU

Utilization

Fig.6 shows the power consumption during the server shutdown period. The average power consumption is 54% of that at maximum CPU utilization, and the peak value is 75% of the average power consumption at maximum CPU utilization.

The results obtained at startup and shutdown suggest that data centers should avoid frequent startups and shutdowns of the servers.

Figure 6: Power-Consumption over Time at Shutdown

Figure 7: Power-Consumption over Load for Mencoder Application


Figure 8: Power-Consumption over Time for AIO_Stress: 2 GB File Size and 64 KB Record Size 

Figure 9: Power-Consumption over Load for AIO_Stress


Figure 10: Power-Consumption over Time for 7ZIP

Figure 11: Power-Consumption over Time for RAMSpeed SMP

Fig.7 and Fig.9 show the power consumption of Mencoder and AIO_Stress, respectively, with varying load size. The experiments show a linear relationship between the load size and the energy consumption of the underlying server. We can also see that the Mencoder application consumed more power than AIO_Stress. This reveals that CPU-bound applications are more energy-intensive than I/O-bound applications. Fig.8 shows an average consumption value of 137.5 Watts for the AIO_Stress test profile. The 7ZIP application, however, consumes 161 Watts on average, as shown in Fig.10. 7ZIP is a standard compression tool, with considerable observed overhead on CPU and memory. During the normal compression of a large file in 7ZIP, CPU usage remains constant at maximum utilization, whereas RAM usage fluctuates. The average power consumption while the 7ZIP application is running is 161 Watts with a standard deviation of 6.5. 7ZIP has nearly the same energy consumption as RAMSpeed: Fig.11 shows the power consumption over time for the RAMSpeed SMP benchmark, with a mean value of 159.2 Watts and a standard deviation of 4.015.
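The linear load-to-energy relationship observed in Fig.7 and Fig.9 can be quantified with an ordinary least-squares fit; the sample points below are invented for illustration and are not read off the figures.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept for y ≈ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Illustrative points: load size (MB) vs measured energy (joules),
# generated from an exactly linear model E = 30*load + 90.
loads = [1, 50, 100, 250, 500]
energy = [120, 1590, 3090, 7590, 15090]
slope, intercept = linear_fit(loads, energy)
print(f"slope={slope:.1f} J/MB, intercept={intercept:.1f} J")
```

A strong fit (a near-constant joules-per-megabyte slope) is what makes load size a usable predictor in a power-aware scheduler.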

VII. Conclusion

Cloud computing is an emerging technology offering services to individuals and companies. Thanks to its great advantage of providing real-time access to a dynamically configured pool of hardware, software and services, Cloud computing is an essential component of the digital ICT ecosystem in future Smart Cities: for IoT applications, big-data processing, storage, and connectivity with ubiquitous smart devices. Due to the growing adoption of cloud technologies, the power consumption of the underlying data centers becomes a crucial issue to address. In addition to the environmental threats, the electricity bills of data centers become considerable. In this work, a power-aware scheduling framework to reduce the power consumption in Cloud data centers has been proposed. We conducted empirical studies, and the results revealed that CPU load accounts for a large part of the power consumption of servers; therefore, it must be considered in any power-aware scheduling algorithm. Our experiments show that the average power consumption of the shutdown and startup procedures accounts for 54% and 68%, respectively, of the power consumption at maximum CPU utilization, suggesting that scheduling strategies based on powering servers off and on must be avoided. Our work continues by extending the EATS framework to a Cloud computing environment of hundreds of servers and measuring its energy performance compared to other scheduling algorithms in the literature.

Acknowledgments

The authors would like to thank the United Arab Emirates University (UAEU) for funding this work, Fund number: 31R056.

References

[1] P. Mell and T. Grance. National Institute of Standards and Technology (NIST) Cloud Computing Definition. NIST Special Publication 800-145, September 2011

[2] Energy Star (EPA): https://www.energystar.gov/about, last accessed on 28 June 2016

[3] Paolo Bertoldi. Code of Conduct on Data Centers Energy Efficiency. Version 1.0, 30 October 2008. http://ec.europa.eu/information_society/activities/sustainable_growth/docs/datacenter_code-conduct.pdf, last accessed on 29 June 2016

[4] Natural Resources Defense Council (NRDC). Scaling Up Energy Efficiency Across the Data Center Industry: Evaluating Key Drivers and Barriers. IP:14-08-a, August 2014, https://www.nrdc.org/sites/default/files/data-center-efficiency-assessment-IP.pdf, last accessed on 28 June 2016

[5] Berkeley Lab Data Center Energy Efficiency Research: http://eetd.lbl.gov/l2m2/datacenter.html, last accessed on 28 June 2016

[6] Steve Greenberg, Evan Mills, Bill Tschudi, and Bruce Myatt. Best Practices for Data Centers: Lessons Learned from Benchmarking 22 Data Centers. 2006 ACEEE Summer Study on Energy Efficiency in Buildings, January 2006

[7] Tom Brey, Pamela Lembke, Joe Prisco, and Ken Abbot, Emerson. Case Study: The ROI of Cooling System Energy Efficiency Upgrades. White Paper #39, The Green Grid: energystar.gov

[8] R. Raghavendra, P. Ranganathan, V. Talwar, Z. Wang, and X. Zhu. No "Power" Struggles: Coordinated Multi-level Power Management for the Data Center. In Proceedings of the 13th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), March 2008

[9] Seyed Mohssen Ghafari, Mahdi Fazeli, Ahmad Patooghy, and Leila Rikhtechi. BEE-MMT: A Load Balancing Method for Power Consumption Management in Cloud Computing. In Proceedings of the Sixth International Conference on Contemporary Computing (IC3), IEEE, August 2013

[10] Torres J, Carrera D, Hogan K, Gavalda R, Beltran V, Poggi N. Reducing wasted resources to help achieve green data centers. In Proceedings of the 4th workshop on High-Performance Power-Aware Computing (HPPAC’08), 2008

[11] Eduardo Pinheiro, Ricardo Bianchini, Enrique V. Carrera, and Taliver Heath. Load Balancing and Unbalancing for Power and Performance in Cluster-Based Systems. Technical Report DCS-TR-440, Rutgers University, May 2001

[12] Maruti Gupta and Suresh Singh. Greening of the Internet. In Proceedings of the 2003 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications (ACM SIGCOMM), 2003

[13] Leila Ismail, and Abbas A. Fardoun, EATS: Energy-Aware Tasks Scheduling in Cloud Computing. International Conference on Sustainable Energy Information Technology (SEIT 2016), May 2016

[14] Young Choon Lee and Albert Y. Zomaya. Energy efficient utilization of resources in cloud computing systems. 19 March 2010. © Springer Science+Business Media, LLC 2010

[15] Poulami Dalapati and G. Sahoo. Green Solution for Cloud Computing with Load Balancing and Power Consumption Management. International Journal of Emerging Technology and Advanced Engineering, www.ijetae.com (ISSN 2250-2459, ISO 9001:2008 Certified Journal), Volume 3, Issue 3, March 2013

[16] Shekhar Srikantaiah, Aman Kansal, and Feng Zhao. Energy Aware Consolidation for Cloud Computing. In Proceedings of the USENIX Workshop on Power Aware Computing and Systems (HotPower '08), in conjunction with OSDI, 2008

[17] LabVIEW System Design Software, National Instruments: http://www.ni.com/labview/, last accessed on 28 June 2016

[18] Louis E. Frenzel. Electronics Explained: The New Systems Approach to Learning Electronics. Elsevier Inc., 2010. ISBN 978-1-85617-700-9

[19] Uday Bakshi and Ajay Bakshi. Electronics Measurement and Instrumentation. First edition, Technical Publications, 2009. ISBN 81-89411-24-1

[20] Group9, C13 power outlet : http://www.redbooks.ibm.com/redbooks/pdfs/sg247780.pdf, last accessed on 28 June 2016

[21] Tektronix-TDS2012B : http://www2.tek.com/cmswpt/psdetails.lotr?cs=psu&ci=13295&lc=ES-MX, last accessed on 28 June 2016

[22] B.D. Wedlock and James Kerr Roberge. High Voltage Differential Probe: Electronic Components and Measurements (Electrical Engineering). Prentice Hall, February 1970

[23] MPlayer 1.3.0, February 2016: https://www.mplayerhq.hu/design7/dload.html, last accessed on 28 June 2016

[24] AVI video, ©2016 DivX, LLC: http://www.divx.com/en/software/technologies/avi, last accessed on 28 June 2016

[25] Phoronix Test Suite 6.4, 02 June 2016, GNU GPLv3. http://www.phoronix-test-suite.com/?k=downloads, last accessed on 28 June 2016