
1

Scheduling Periodic Real-Time Tasks with Heterogeneous Reward Requirements

I-Hong Hou and P.R. Kumar

2

Problem Overview

Imprecise Computation Model:
Tasks generate jobs periodically, each job with a deadline.
Jobs that miss their deadlines degrade system performance rather than causing a timing fault.
Partially completed jobs are still useful and generate some reward.

Previous work: maximize the total reward of all tasks.
Assumes the rewards of different tasks are equivalent.
May result in serious unfairness.
Does not allow tradeoffs between tasks.

This work: provide guarantees on the reward of each task.

3

Example: Video Streaming

A server serves several video streams.
Each stream periodically generates a group of frames (GOF).
Frames need to be delivered on time, or they are not useful.
Lost frames result in video glitches.
Frames of the same flow are not equally important:
MPEG has three types of frames: I, P, and B.
I-frames are more important than P-frames, which are more important than B-frames.

Goal: provide guarantees on the perceived video quality of each stream.

4

System Model

A system with several tasks (task = video stream).
Each task X generates one job every τX time slots, with deadline τX (job = GOF).
All tasks generate one job at the first time slot.

[Timeline figure: tasks A, B, and C generate jobs with periods τA = 4, τB = 6, τC = 3.]

5

System Model

A frame = the interval between two consecutive time slots at which all tasks generate a job.
Frame length T = least common multiple of τA, τB, …

[Timeline figure: with τA = 4, τB = 6, τC = 3, one frame spans T = 12 time slots.]

6

Model for Rewards

A job can be executed several times before its deadline.
A job of task X obtains a reward of rX^k when it is executed for the kth time, where rX^1 ≥ rX^2 ≥ rX^3 ≥ … (the reward of the ith frame in a GOF is rX^i).

[Timeline figure: tasks A, B, C with periods τA = 4, τB = 6, τC = 3, as on the earlier slide.]

7

Scheduling Example

Reward of A per frame = 3 rA^1 + 2 rA^2 + rA^3

[Figure: an example schedule over one frame (T = 12); A's six executions earn rA^1 three times, rA^2 twice, and rA^3 once.]
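To make the reward bookkeeping concrete, the sketch below tallies one task's per-frame reward from the number of times each of its jobs was executed. It is a minimal illustration only; the function name and reward values are made up, not taken from the paper.

```python
# Minimal sketch of per-frame reward accounting under this model (illustrative names/values).
# rewards[i-1] is r_X^i; exec_counts lists how many times each of X's jobs in the frame ran.
# Executing a job i times earns r_X^1 + ... + r_X^i.
def frame_reward(rewards, exec_counts):
    return sum(sum(rewards[:c]) for c in exec_counts)

# Slide example: A's three jobs in one frame are executed 3, 2, and 1 times,
# so the frame reward is 3*r_A^1 + 2*r_A^2 + 1*r_A^3.
r_A = [5, 3, 1]                      # illustrative values for r_A^1, r_A^2, r_A^3
print(frame_reward(r_A, [3, 2, 1]))  # = 3*5 + 2*3 + 1*1 = 22
```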

8

Scheduling Example

Reward of A per frame = 3 rA^1 + 2 rA^2 + rA^3
Reward of B per frame = 2 rB^1 + rB^2
Reward of C per frame = 3 rC^1

[Figure: the same example schedule over one frame, T = 12.]

9

Reward Requirements

Task X requires an average reward per frame of qX.

Q: How can we evaluate whether [qA, qB, …] is feasible? How can we meet the reward requirement of each task?


10

Extension for Imprecise Computation Models

Imprecise Computation Model: each job may have a mandatory part and an optional part.
The mandatory part needs to be completed, or a timing fault results.
An incomplete optional part only reduces performance.

Our model: set the reward of a mandatory part to M, where M is larger than any finite number.
The reward requirement of a task = aM + b, where a is the number of mandatory parts and b is the requirement on the optional parts.
Fulfilling the reward requirement then ensures that all mandatory parts are completed on time.

11

Feasibility Condition

fX^i := average number of jobs of X that are executed at least i times per frame

Obviously, 0 ≤ fX^i ≤ T/τX
Average reward of X = ∑i fX^i rX^i
Average reward requirement: ∑i fX^i rX^i ≥ qX
Average number of time slots the server spends on X per frame = ∑i fX^i
Hence, ∑X ∑i fX^i ≤ T

12

Admission Control

Theorem: A system is feasible if and only if there exists a vector [fX^i] such that
1. 0 ≤ fX^i ≤ T/τX
2. ∑i fX^i rX^i ≥ qX
3. ∑X ∑i fX^i ≤ T

Feasibility can be checked by linear programming.
The complexity of admission control can be further reduced by noting that rX^1 ≥ rX^2 ≥ rX^3 ≥ …
Theorem: feasibility can be checked in O(∑X τX) time.
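As a concrete illustration, the three conditions above can be checked directly with an off-the-shelf LP solver. The sketch below is a minimal feasibility check assuming SciPy; the task names, reward values, and requirements in the example are illustrative only, and the paper's O(∑X τX) method would be faster than a generic LP.

```python
# Minimal LP-based admission control following conditions 1-3 (Python 3.9+ for math.lcm).
from math import lcm
from scipy.optimize import linprog

def is_feasible(tasks):
    # tasks: name -> (tau, rewards, q); rewards[i-1] = r_X^i (non-increasing), q = per-frame requirement.
    T = lcm(*(tau for tau, _, _ in tasks.values()))           # frame length = LCM of periods
    index, bounds = [], []
    for name, (tau, rewards, _) in tasks.items():
        for i in range(1, tau + 1):                           # a job can be executed at most tau_X times
            index.append((name, i))
            bounds.append((0, T / tau))                       # condition 1: 0 <= f_X^i <= T / tau_X
    A_ub, b_ub = [], []
    for name, (tau, rewards, q) in tasks.items():             # condition 2: sum_i f_X^i r_X^i >= q_X
        row = [-(rewards[i - 1] if i <= len(rewards) else 0.0) if n == name else 0.0
               for n, i in index]
        A_ub.append(row)
        b_ub.append(-q)
    A_ub.append([1.0] * len(index))                           # condition 3: sum_X sum_i f_X^i <= T
    b_ub.append(T)
    res = linprog(c=[0.0] * len(index), A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.status == 0                                    # status 0: a feasible vector [f_X^i] exists

# Example with the periods from the earlier slides and made-up rewards/requirements:
tasks = {"A": (4, [3, 2, 1, 0], 5.0),
         "B": (6, [4, 1, 0, 0, 0, 0], 6.0),
         "C": (3, [2, 0, 0], 7.0)}
print(is_feasible(tasks))
```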

13

Scheduling Policy

Q: Given a feasible system, how do we design a policy that fulfills all reward requirements?

Propose a framework for designing policies

Propose an on-line scheduling policy

Analyze the performance of the on-line scheduling policy

14

A Condition Based on Debts

Let sX(k) be the reward obtained by X in the kth frame.
Debt of task X in the kth frame: dX(k) = [dX(k-1) + qX - sX(k)]^+, where x^+ := max{x, 0}.
The requirement of task X is met if dX(k)/k → 0 as k → ∞.

Theorem: A policy that maximizes ∑X dX(k)sX(k) for every frame fulfills every feasible system

Such a policy is called a feasibility optimal policy
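A minimal sketch of the debt recursion above; the function and variable names are illustrative, not from the paper's implementation.

```python
# Per-frame debt update d_X(k) = [d_X(k-1) + q_X - s_X(k)]^+, with [x]^+ = max{x, 0}.
def debt_sequence(q, frame_rewards):
    """q: per-frame requirement q_X; frame_rewards: s_X(1), s_X(2), ... obtained by the scheduler."""
    d, debts = 0.0, []
    for s in frame_rewards:
        d = max(d + q - s, 0.0)   # debt grows when a frame under-delivers, shrinks otherwise
        debts.append(d)
    return debts

# The requirement is met when d_X(k)/k -> 0, i.e. debts[-1] / len(debts) vanishes as k grows.
```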

15

Approximation Policy

The computational overhead of a feasibility optimal policy may be high.
Study performance guarantees of suboptimal policies.

Theorem: If a policy's resulting ∑X dX(k)sX(k) is at least 1/p of the ∑X dX(k)sX(k) achieved by an optimal policy, then this policy achieves reward requirements [qX] whenever the requirements [p qX] are feasible.

Such a policy is called a p-approximation policy

16

An On-Line Scheduling Policy

At a given time slot, let (jX - 1) be the number of times the server has already worked on the current job of X.
If the server schedules X in this time slot, X obtains a reward of rX^jX.

Greedy Maximizer: in every time slot, schedule the task X that maximizes rX^jX dX(k).
The Greedy Maximizer can be implemented efficiently.
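A minimal per-slot sketch of the Greedy Maximizer as described above; the data layout and names are assumptions for illustration, not the authors' implementation.

```python
# One scheduling decision of the Greedy Maximizer: pick the task X maximizing r_X^{j_X} * d_X(k).
def greedy_maximizer_step(tasks):
    """tasks: name -> {'served': j_X - 1 executions already given to the current job,
                       'rewards': [r_X^1, r_X^2, ...],
                       'debt': d_X(k)}"""
    best, best_val = None, 0.0
    for name, t in tasks.items():
        j = t["served"] + 1                                   # next execution index for the current job
        r = t["rewards"][j - 1] if j <= len(t["rewards"]) else 0.0
        if r * t["debt"] > best_val:                          # debt-weighted marginal reward
            best, best_val = name, r * t["debt"]
    return best                                               # task to serve this slot (None: stay idle)
```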

17

Performance of Greedy Maximizer

The Greedy Maximizer is feasibility optimal when all tasks have the same period length: τA = τB = …

However, when tasks have different period lengths, the Greedy Maximizer is not feasibility optimal

18

Example of Suboptimality

A system with two tasks:
Task A: τA = 6, rA^1 = rA^2 = rA^3 = rA^4 = 100, rA^5 = rA^6 = 1
Task B: τB = 3, rB^1 = 10, rB^2 = rB^3 = 0
Suppose dA(k) = dB(k) = 1.

dA(k)sA(k) + dB(k)sB(k) under the Greedy Maximizer = 411

[Figure: the Greedy Maximizer's schedule over one six-slot frame.]

19

Example of Suboptimality

A system with two tasks:
Task A: τA = 6, rA^1 = rA^2 = rA^3 = rA^4 = 100, rA^5 = rA^6 = 1
Task B: τB = 3, rB^1 = 10, rB^2 = rB^3 = 0
Suppose dA(k) = dB(k) = 1.

dA(k)sA(k) + dB(k)sB(k) under the Greedy Maximizer = 411
dA(k)sA(k) + dB(k)sB(k) under an optimal policy = 420

[Figure: an optimal schedule over one six-slot frame.]
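To see where these numbers come from: with dA(k) = dB(k) = 1, the Greedy Maximizer compares rA^jA against rB^jB in each of the six slots, so it serves A in slots 1 through 4 (4 × 100 = 400), serves B's second job once (reward 10), and serves A a fifth time (rA^5 = 1), for a total of 411; B's first job misses its deadline. An optimal policy instead gives A its four high-value executions and serves each of B's two jobs once, for 400 + 10 + 10 = 420, trading A's fifth execution (worth 1) for B's first-job execution (worth 10).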

20

Approximation Bound

Analyze the worst-case performance of the Greedy Maximizer.

Show that its resulting ∑X dX(k)sX(k) is at least 1/2 of the ∑X dX(k)sX(k) resulting from any other policy.

Theorem: The Greedy Maximizer is a 2-approximation policy

The Greedy Maximizer achieves reward requirements [qX] as long as requirements [2qX] are feasible

21

Simulation Setup: MPEG Streaming

MPEG: one GOF consists of 1 I-frame, 3 P-frames, and 8 B-frames.

Two groups of tasks, A and B

Tasks in A treat both I-frames and P-frames as mandatory parts, while tasks in B only require I-frames to be mandatory.
B-frames are optional for tasks in A; both P-frames and B-frames are optional for tasks in B.

3 tasks in each group

22

Reward Function for Optional Parts

Each task gains some reward when its optional parts are executed.
Consider three types of optional-part reward functions: exponential, logarithmic, and linear.

Exponential: X obtains a total reward of (5+k)(1 - e^(-i/5)) if its job is executed i times, where k is the index of task X.

Logarithmic: X obtains a total reward of (5+k)log(10i+1) if its job is executed i times

Linear: X obtains a total reward of (5+k)i if its job is executed i times
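For concreteness, here is a small sketch of these three reward families and the per-execution rewards they induce. The natural logarithm is assumed for log, and the function names are illustrative.

```python
import math

# Total optional-part reward R(i) after i executions, for task index k (the slide's three families).
def total_reward(kind, k, i):
    if kind == "exponential":
        return (5 + k) * (1 - math.exp(-i / 5))
    if kind == "logarithmic":
        return (5 + k) * math.log(10 * i + 1)   # natural log assumed
    if kind == "linear":
        return (5 + k) * i
    raise ValueError(kind)

# Per-execution rewards r^i = R(i) - R(i-1); all three families are concave in i,
# so r^1 >= r^2 >= ... holds as the model requires.
def marginal_rewards(kind, k, max_exec):
    return [total_reward(kind, k, i) - total_reward(kind, k, i - 1)
            for i in range(1, max_exec + 1)]
```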

23

Performance Comparison

Assume all tasks in A require an average reward of α, and all tasks in B require an average reward of β.
Plot all pairs (α, β) that are achieved by each policy.

Consider three policies:
Feasible: the feasible region characterized by the feasibility conditions
Greedy Maximizer
MAX: a policy that aims to maximize the total reward in the system

24

Simulation Results: Same Frame Rate

All streams generate one GOF every 30 time slots.

Exponential reward functions: Greedy = Feasible, so the Greedy Maximizer is indeed feasibility optimal here.

Greedy is much better than MAX

25

Simulation Results: Same Frame Rate

All streams generate one GOF every 30 time slots.
Greedy = Feasible, and is always better than MAX.

[Plots: logarithmic and linear reward functions.]

26

Simulation Results: Heterogeneous Frame Rate

Different tasks generate GOFs at different rates; the period length may be 20, 30, or 40 time slots.

The performance of Greedy is close to optimal.
Greedy is much better than MAX.

[Plot: exponential reward functions.]

27

Simulation Results: Heterogeneous Frame Rate

Different tasks generate GOFs at different rates; the period length may be 20, 30, or 40 time slots.

[Plots: logarithmic and linear reward functions.]

28

Conclusions

We propose a model, based on the imprecise computation model, that supports per-task reward guarantees.
This model achieves better fairness and allows fine-grained tradeoffs between tasks.
We derive a sharp condition for feasibility.
We propose an on-line scheduling policy, the Greedy Maximizer.
The Greedy Maximizer is feasibility optimal when all tasks have the same period length; otherwise, it is a 2-approximation policy.