Transcript


An Optimal Control Policy in a Mobile Cloud Computing System Based on Stochastic Data
Xue Lin, Yanzhi Wang, Massoud Pedram
University of Southern California

Cloud Computing and Mobile Devices
- Cloud computing paradigm: cloud service provider and clients
- Mobile device computing platform: compactness, portability, and functionality, but weak computing and storage and a short battery life
- Mobile Cloud Computing (MCC) paradigm: extend capabilities, improve performance, reduce energy consumption

[Figure: the mobile cloud computing (MCC) paradigm]

Mobile Devices in the MCC Paradigm
- Service request: processed locally, or processed remotely by offloading to the cloud
- Control decision: whether to offload a service request; which CPU operating frequency to use for local processing
- Performance and power consumption: higher performance means higher power; a trade-off is desirable

[Figure: a mobile device making use of MCC]

Outline
- Motivation
- System model: MCC system, battery model, expected performance sum
- Optimal control policy: problem formulation, dynamic programming algorithm
- Experimental results
- Conclusion

Motivation
- Inter-Charging Interval (ICI): a short ICI calls for a high-performance mode, a long ICI calls for a low-power mode, but only stochastic data about the ICI length is available
- Power and performance trade-off: the performance sum is the sum of the performance values of all requests processed during an ICI
- Objective: maximize the expected performance sum

System Model: MCC System
A mobile device in the MCC system is characterized by:
- the service request generation rate (Poisson)
- the local processing probability
- the local request rate (Poisson)
- the remote request rate (Poisson)
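The symbols on this slide are lost in the transcript. Under assumed (not original) notation, with \lambda for the request generation rate and p for the local processing probability, the two Poisson streams split as

    \lambda_{l} = p\,\lambda \quad\text{(local)}, \qquad \lambda_{r} = (1-p)\,\lambda \quad\text{(remote)}.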

System Model: Response Time
- Average response time of local processing
- Average response time of remote processing
- Overall average response time
Notation used on this slide:
- avg. request processing rate of the CPU
- avg. request sending rate in the RF
- avg. request sending time
- avg. round trip time
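The formulas themselves are not preserved in the transcript. A plausible reconstruction, assuming a standard M/M/1 queueing treatment and the assumed symbols \mu_{\mathrm{cpu}} (CPU processing rate), \mu_{\mathrm{rf}} (RF sending rate, with 1/\mu_{\mathrm{rf}} the average sending time), t_{\mathrm{rtt}} (round-trip time), and \lambda, p from above:

\begin{align}
T_{\mathrm{local}}  &= \frac{1}{\mu_{\mathrm{cpu}} - p\lambda}, \\
T_{\mathrm{remote}} &= \frac{1}{\mu_{\mathrm{rf}} - (1-p)\lambda} + t_{\mathrm{rtt}}, \\
\bar{T}             &= p\, T_{\mathrm{local}} + (1-p)\, T_{\mathrm{remote}}.
\end{align}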

System Model: Power Consumption
Mobile device power consumption:
- CPU: dynamic and static power
- RF: dynamic and static power
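The power equations are likewise not preserved. A conventional decomposition, with every symbol below assumed for illustration rather than taken from the slide:

\begin{align}
P_{\mathrm{cpu}} &= P_{\mathrm{cpu,dyn}} + P_{\mathrm{cpu,sta}}, \qquad P_{\mathrm{cpu,dyn}} \propto C_{\mathrm{eff}}\, V_{dd}^{2}\, f, \\
P_{\mathrm{rf}}  &= P_{\mathrm{rf,dyn}} + P_{\mathrm{rf,sta}},
\end{align}

where the dynamic components presumably grow with the local load p\lambda and the offloaded load (1-p)\lambda, respectively.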

System Model: Battery
- The battery is the power source of the mobile device during an ICI
- The length of an ICI is a random variable with a given probability density function (p.d.f.)
- The ICI is divided into consecutive time intervals, indexed by i

The remaining energy in the battery

The operating time of the mobile device, assuming an infinitely long ICI
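The battery equations are also missing from the transcript. Under assumed notation E_{\mathrm{full}} for the initial battery energy, \tau for the interval length, and P_{\mathrm{cpu},j}, P_{\mathrm{rf},j} for the power drawn in interval j, the two quantities on this slide would take a form like

\begin{align}
E_{\mathrm{rem}}(i) &= E_{\mathrm{full}} - \sum_{j=1}^{i} \big(P_{\mathrm{cpu},j} + P_{\mathrm{rf},j}\big)\,\tau, \\
N &= \max\{\, i \;:\; E_{\mathrm{rem}}(i) \ge 0 \,\}.
\end{align}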

System Model: Expected Performance Sum
- The performance metric for processing a single request
- The expected value of the performance sum over an ICI, expressed using an indicator function (see the sketch below)
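A hedged reconstruction of the objective, with assumed symbols v_i for the per-request performance in interval i, L for the ICI length with p.d.f. f_L, \mathbf{1}(\cdot) for the indicator function, and \lambda\tau for the expected number of requests generated in one interval, shows how the expectation of the indicator turns into a tail probability of the ICI length:

\begin{align}
\mathrm{E}[\mathrm{PS}]
  = \mathrm{E}\!\left[\sum_{i=1}^{N} \lambda\tau\, v_i\, \mathbf{1}(L \ge i\tau)\right]
  = \sum_{i=1}^{N} \lambda\tau\, v_i \int_{i\tau}^{\infty} f_L(x)\,dx.
\end{align}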

Problem Formulation
- Derive the optimal control policy for the mobile device, based on the stochastic data of the ICI length, so as to maximize the expected performance sum
- That is, derive the optimal local-processing probability and CPU request processing rate for every time interval
- The request processing rate can only assume values from a finite set whose elements are the request processing rates corresponding to the K available CPU frequency values
- The stochastic data about the ICI length are given in the form of its p.d.f.

Optimal Control Policy Problem
Given: (i) the number of discharging time intervals, and (ii) the amount of battery energy remaining after the discharging process.
Find: the local-processing probability and CPU request processing rate values for each of these time intervals.
Maximize: the expected performance sum accumulated over these intervals.
Subject to: the energy budget implied by the given remaining battery energy.
This is a general problem, parameterized by the pair (number of intervals, remaining energy). When the number of intervals equals the device's full operating time in intervals and no battery energy remains, the general problem becomes the Optimal Control Policy (OCP) problem.
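Combining the pieces above, the general problem can be written, still under the same assumed notation (with n the given number of intervals and E the given remaining energy), roughly as

\begin{align}
\max_{\{p_i,\,\mu_i\}}\ & \sum_{i=1}^{n} \lambda\tau\, v(p_i,\mu_i)\, \Pr(L \ge i\tau) \\
\text{s.t.}\ & \sum_{i=1}^{n} \big(P_{\mathrm{cpu}}(p_i,\mu_i) + P_{\mathrm{rf}}(p_i)\big)\,\tau = E_{\mathrm{full}} - E, \\
& \mu_i \in \{\mu^{(1)},\dots,\mu^{(K)}\}, \qquad 0 \le p_i \le 1.
\end{align}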

Dynamic Programming Algorithm
The optimal substructure property: suppose that a problem instance has been optimally solved, and consider the energy stored in the battery at the end of some earlier time interval in that optimal solution. The prefix of the solution up to that interval corresponds to a smaller instance of the general problem, and the optimal solution of the larger problem contains within it the optimal solution of this smaller problem. The algorithm therefore builds the optimal solution of each instance from the optimal solutions of the smaller instances, which are stored in matrix elements indexed by the interval count and the remaining energy.

Dynamic Programming Algorithm
Maximize the expected performance of the mobile device during a single time interval.
Given: the battery energy at the beginning of the interval and the battery energy at the end of the interval.
Find: the local-processing probability and CPU request processing rate for that interval.
Maximize: the expected performance during the interval.
Subject to: the energy consumed in the interval matching the given drop in battery energy.
The maximum expected performance achievable in the interval, as a function of the two energy levels, is the per-interval value used in the recursion.

Dynamic Programming Algorithm
The optimal value of each problem instance is then calculated by maximizing, over the possible battery energy levels at the preceding interval boundary, the sum of the optimal value of the corresponding smaller instance and the maximum per-interval expected performance.

The optimal decision is stored in the corresponding matrix entry.

The OCP problem is solved once the entry corresponding to the full problem has been calculated.
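Since the exact recurrence and the matrix indexing are not preserved in the transcript, the following is only a minimal Python sketch of the tabulation described above; the energy discretization, the interval_reward callback, and all names are assumptions, not the authors' code.

def optimal_control_policy(num_intervals, num_energy_levels, interval_reward):
    """Tabulate V[i][e]: the best expected performance over the first i
    intervals when the battery has been drawn down to discrete energy level e.

    interval_reward(i, e_prev, e_next) -> (reward, decision) returns the
    maximum expected performance in interval i when the energy drops from
    level e_prev to level e_next, together with the (p, mu) decision that
    achieves it, or (None, None) if that energy drop is infeasible.
    """
    NEG_INF = float("-inf")
    full = num_energy_levels - 1  # index of a full battery

    # V[i][e]: best value after i intervals ending at energy level e.
    V = [[NEG_INF] * num_energy_levels for _ in range(num_intervals + 1)]
    choice = [[None] * num_energy_levels for _ in range(num_intervals + 1)]
    V[0][full] = 0.0  # the ICI starts with a full battery

    for i in range(1, num_intervals + 1):
        for e_next in range(num_energy_levels):
            # Energy only decreases, so the previous level is >= the next one.
            for e_prev in range(e_next, num_energy_levels):
                if V[i - 1][e_prev] == NEG_INF:
                    continue
                reward, decision = interval_reward(i, e_prev, e_next)
                if reward is None:
                    continue
                cand = V[i - 1][e_prev] + reward
                if cand > V[i][e_next]:
                    V[i][e_next] = cand
                    choice[i][e_next] = (e_prev, decision)

    return V, choice

Backtracking through choice, starting from the entry that corresponds to the full OCP problem, recovers the per-interval (p, mu) decisions.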

Dynamic Programming Algorithm

Dynamic Programming Algorithm
In the optimal solution, the average performance value is a non-increasing function over all the time intervals or, equivalently, the corresponding average response time is a non-decreasing function.
Proof: suppose this were not the case. Then there must exist two consecutive time intervals i and i+1 in which interval i is assigned a lower performance value than interval i+1. Exchanging the control decisions of intervals i and i+1 leaves the total energy consumption unchanged but results in a higher expected value of the performance sum. This is because the probability that the ICI covers interval i is at least as large as the probability that it covers interval i+1, so the higher-performance decision should be paired with the larger probability.
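The final inequality of the exchange argument can be written compactly. Here v_{\mathrm{hi}} \ge v_{\mathrm{lo}} are the two per-interval performance values being swapped, and \Pr(L \ge i\tau) is the probability that the ICI covers interval i, which is non-increasing in i; both are assumed symbols:

(v_{\mathrm{hi}} - v_{\mathrm{lo}})\big(\Pr(L \ge i\tau) - \Pr(L \ge (i+1)\tau)\big) \ \ge\ 0
\;\Longleftrightarrow\;
v_{\mathrm{hi}}\Pr(L \ge i\tau) + v_{\mathrm{lo}}\Pr(L \ge (i+1)\tau) \ \ge\ v_{\mathrm{lo}}\Pr(L \ge i\tau) + v_{\mathrm{hi}}\Pr(L \ge (i+1)\tau).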

Experimental Results
Two baseline control policies:
- Always chooses the highest CPU operating frequency, and chooses the p[i] values to maximize performance.
- Always chooses the lowest CPU operating frequency, and chooses the p[i] values to maximize performance.
Three probability density functions of the ICI length are evaluated.

Experimental Results
The optimal control policy outperforms the two baseline control policies, achieving a higher expected value of the performance sum.

Experimental Results

Conclusion
- The mobile device control decisions should be made according to the ICI length.
- We define the expected performance sum as the objective function; it captures the trade-off between performance and power consumption and accounts for the uncertainty in the ICI length.
- A dynamic programming algorithm is proposed to derive the optimal control policy.

Thank you!

