mlps2006ani.ppt, IEEE Infocom 2006, Barcelona, Spain, 23–29 April 2006

Mean Delay Analysis of Multi Level Processor Sharing Disciplines



Page 1: Mean Delay Analysis of  Multi Level Processor Sharing Disciplines


Mean Delay Analysis of Multi Level Processor Sharing Disciplines

Samuli Aalto (TKK), Urtzi Ayesta (before CWI, now LAAS-CNRS)

Page 2: Mean Delay Analysis of  Multi Level Processor Sharing Disciplines

Teletraffic application: scheduling elastic flows

• Consider a bottleneck link in an IP network
  – loaded with elastic flows, such as file transfers using TCP
  – if the RTTs are of the same magnitude, then the bandwidth is shared approximately fairly among the flows

• Intuition says that
  – favouring short flows reduces the total number of flows in the system, and thus also the mean delay at flow level

• How to schedule flows, and how to analyse the result?
  – Guo and Matta (2002), Feng and Misra (2003), Avrachenkov et al. (2004), Aalto et al. (2004, 2005)

Page 3: Mean Delay Analysis of  Multi Level Processor Sharing Disciplines

Queueing model

• Assume that
  – flows arrive according to a Poisson process with rate λ
  – the flow size distribution is of DHR (decreasing hazard rate) type, such as hyperexponential or Pareto

• So, we have an M/G/1 queue at flow level
  – customers in this queue are flows (not packets)
  – service time = the total number of packets to be sent
  – attained service = the number of packets sent so far
  – remaining service = the number of packets left
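The DHR property can be checked directly from the hazard rate h(x) = f(x)/F̄(x). As a quick self-contained illustration (the parameter values below are arbitrary, chosen for this sketch and not taken from the talk), both distribution families named above have decreasing hazard rates:

```python
import math

def pareto_hazard(x, alpha=2.5, k=1.0):
    # Pareto (Lomax) tail: F_bar(x) = (k / (k + x))**alpha,
    # so the hazard rate is h(x) = alpha / (k + x), decreasing in x.
    return alpha / (k + x)

def hyperexp_hazard(x, probs=(0.9, 0.1), rates=(10.0, 0.5)):
    # Two-phase hyperexponential: F_bar(x) = sum_i p_i * exp(-mu_i * x).
    # h(x) = f(x) / F_bar(x) decreases from sum_i p_i*mu_i toward min_i mu_i.
    num = sum(p * m * math.exp(-m * x) for p, m in zip(probs, rates))
    den = sum(p * math.exp(-m * x) for p, m in zip(probs, rates))
    return num / den
```

For example, `hyperexp_hazard(0.0)` equals 0.9·10 + 0.1·0.5 = 9.05 and decays toward the slow phase's rate 0.5, confirming the DHR shape numerically.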

Page 4: Mean Delay Analysis of  Multi Level Processor Sharing Disciplines

Scheduling disciplines at flow level

• FB = Foreground–Background = LAS = Least Attained Service
  – choose a packet from the flow with the least packets sent

• MLPS = Multi Level Processor Sharing
  – choose a packet from a flow with fewer packets sent than a given threshold

• PS = Processor Sharing
  – without any specific scheduling policy at packet level, the elastic flows are assumed to divide the bottleneck link bandwidth evenly

• Reference model: M/G/1/PS
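To make the FB-versus-PS comparison concrete, here is a minimal quantum-based simulation sketch of the M/G/1 queue at flow level (the hyperexponential size distribution and all parameter values are illustrative assumptions of mine, not the setting analysed in the talk). With a DHR size distribution, FB should produce a smaller mean delay than PS:

```python
import random

def sample_hyperexp(rng):
    # Hyperexponential size (DHR): short w.p. 0.9 (rate 10), long w.p. 0.1 (rate 0.5).
    return rng.expovariate(10.0) if rng.random() < 0.9 else rng.expovariate(0.5)

def gen_traffic(lam, horizon, seed):
    # Poisson arrivals on [0, horizon) with i.i.d. hyperexponential sizes.
    rng, t, jobs = random.Random(seed), 0.0, []
    while True:
        t += rng.expovariate(lam)
        if t >= horizon:
            return jobs
        jobs.append((t, sample_hyperexp(rng)))

def mean_delay(policy, traffic, dt=0.01, drain=300.0):
    # In each slot of length dt, unit server capacity is split equally
    # among the jobs the policy selects ("PS": all jobs; "FB": least attained).
    end = traffic[-1][0] + drain
    active, delays, i, t = [], [], 0, 0.0
    while t < end and (i < len(traffic) or active):
        while i < len(traffic) and traffic[i][0] <= t:
            arr, size = traffic[i]
            active.append([arr, size, 0.0])   # [arrival, size, attained]
            i += 1
        if active:
            if policy == "PS":
                served = active
            else:                             # "FB": serve least-attained jobs
                least = min(j[2] for j in active)
                served = [j for j in active if j[2] <= least + 1e-12]
            share = dt / len(served)
            for j in served:
                j[2] += share
            for j in [j for j in active if j[2] >= j[1]]:
                delays.append(t + dt - j[0])  # delay = completion - arrival
                active.remove(j)
        t += dt
    return sum(delays) / len(delays)
```

Typical usage: `traffic = gen_traffic(2.0, 400.0, seed=7)` (load ρ ≈ 0.58 here), then compare `mean_delay("FB", traffic)` against `mean_delay("PS", traffic)` on the same arrival sample.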

Page 5: Mean Delay Analysis of  Multi Level Processor Sharing Disciplines

Optimality results for M/G/1

• Schrage (1968)
  – if the remaining service times are known, then
    • SRPT is optimal, minimizing the mean delay

• Yashkov (1987)
  – if only the attained service times are known, then
    • DHR implies that FB (which gives full priority to the youngest job) is optimal, minimizing the mean delay

• Remark: in this study we consider work-conserving (WC) and non-anticipating (NA) service disciplines, such as FB, MLPS, and PS (but not SRPT), for which only the attained service times are known

Page 6: Mean Delay Analysis of  Multi Level Processor Sharing Disciplines

MLPS disciplines

• Definition: MLPS discipline, Kleinrock, vol. 2 (1976)
  – based on the attained service times
  – N+1 levels defined by N thresholds a_1 < … < a_N
  – between levels, a strict priority is applied
  – within a level, an internal discipline (FB, PS, or FCFS) is applied

[Figure: a two-level example FCFS+PS(a) with a single threshold a — FCFS is applied to jobs with attained service below a, PS above]
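As an illustration of the definition, the following sketch selects the set of jobs served at a given instant under an MLPS discipline with thresholds a_1 < … < a_N and one internal discipline per level (the function name and the data layout are mine, introduced only for this example):

```python
def mlps_served(jobs, thresholds, internals):
    """Return the ids of the jobs served at the current instant (sketch).

    jobs: list of dicts with keys 'id', 'arrival', 'attained'
    thresholds: sorted thresholds a_1 < ... < a_N defining N+1 levels
    internals: internal discipline per level, each 'FB', 'PS', or 'FCFS'
    """
    if not jobs:
        return []
    def level(attained):
        # level = number of thresholds the job's attained service has passed
        return sum(attained >= a for a in thresholds)
    # strict priority between levels: only the lowest occupied level is served
    lowest = min(level(j['attained']) for j in jobs)
    cand = [j for j in jobs if level(j['attained']) == lowest]
    disc = internals[lowest]
    if disc == 'PS':          # processor sharing: every job in the level
        return [j['id'] for j in cand]
    if disc == 'FB':          # foreground-background: least attained first
        m = min(j['attained'] for j in cand)
        return [j['id'] for j in cand if j['attained'] == m]
    return [min(cand, key=lambda j: j['arrival'])['id']]  # FCFS: earliest arrival
```

For instance, with one threshold a_1 = 1.0 and internal disciplines (FB, PS), a job with attained service 0.2 preempts one with 0.5, and both preempt a job that has already passed the threshold.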

Page 7: Mean Delay Analysis of  Multi Level Processor Sharing Disciplines

Earlier results: comparison to PS

• Aalto et al. (2004):
  – two levels, with FB and PS allowed as internal disciplines:

    DHR ⟹ E[T^FB] ≤ E[T^(FB+PS)] ≤ E[T^(PS+PS)] ≤ E[T^PS]

• Aalto et al. (2005):
  – any number of levels, with FB and PS allowed as internal disciplines:

    DHR ⟹ E[T^FB] ≤ E[T^MLPS] ≤ E[T^PS]

Page 8: Mean Delay Analysis of  Multi Level Processor Sharing Disciplines

Idea of the proof

• Definition: U_x = unfinished truncated work with threshold x
  – the sum of the remaining truncated service times min{S, x} of those customers who have attained service less than x

• Kleinrock (4.60):

    E[U_x] = λ ∫_0^x T(y) F̄(y) dy

  – implies that

    (d/dx) E[U_x] = λ T(x) F̄(x)

  – so that (with h the hazard rate, f = h · F̄)

    E[T] = ∫_0^∞ T(x) f(x) dx = (1/λ) ∫_0^∞ h(x) (d/dx) E[U_x] dx

  – hence, E[U_x] ≤ E[U'_x] for all x, together with DHR, implies E[T] ≤ E[T']
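The last step can be spelled out with a standard integration by parts (the vanishing of the boundary terms is assumed here without proof, using U_0 = 0 and decay at infinity):

```latex
\overline{T}' - \overline{T}
  = \frac{1}{\lambda}\int_0^\infty h(x)\,
      \mathrm{d}\bigl(E[U'_x] - E[U_x]\bigr)
  = -\frac{1}{\lambda}\int_0^\infty
      \bigl(E[U'_x] - E[U_x]\bigr)\,\mathrm{d}h(x)
  \;\ge\; 0,
```

since under DHR the hazard rate is nonincreasing, so dh(x) ≤ 0, while E[U'_x] − E[U_x] ≥ 0 for all x by assumption.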

Page 9: Mean Delay Analysis of  Multi Level Processor Sharing Disciplines

Mean unfinished truncated work E[U_x]

[Figure: E[U_x] as a function of x, for a bounded Pareto file size distribution]

Page 10: Mean Delay Analysis of  Multi Level Processor Sharing Disciplines

Question we wanted to answer

• Given two MLPS disciplines, which one is better in the mean delay sense?

Page 11: Mean Delay Analysis of  Multi Level Processor Sharing Disciplines

Locality result

• Proposition 1:
  – for MLPS disciplines, the unfinished truncated work U_x within a level depends on the internal discipline of that level, but not on the internal disciplines of the other levels

[Figure: E[U_x] as a function of x for FCFS+PS(a) and PS+PS(a); the two curves differ only below the threshold a]

Page 12: Mean Delay Analysis of  Multi Level Processor Sharing Disciplines

New results 1: comparison among MLPS disciplines

• Theorem 1:
  – any number of levels, with all internal disciplines allowed
  – MLPS derived from MLPS' by changing an internal discipline from PS to FB (or from FCFS to PS):

    DHR ⟹ E[T^MLPS] ≤ E[T^MLPS']

• Proof based on the locality result and sample path arguments
  – prove that, for all x and t,

    U_x^MLPS(t) ≤ U_x^MLPS'(t)

  – tedious but straightforward

Page 13: Mean Delay Analysis of  Multi Level Processor Sharing Disciplines

New results 2: comparison among MLPS disciplines

• Theorem 2:
  – any number of levels, with all internal disciplines allowed
  – MLPS derived from MLPS' by splitting any FCFS level and copying the internal discipline:

    DHR ⟹ E[T^MLPS] ≤ E[T^MLPS']

• Proof based on the locality result and mean value arguments
  – prove that, for all x,

    E[U_x^MLPS] ≤ E[U_x^MLPS']

  – an easy exercise, since in this case we have explicit formulas for the mean conditional delay

Page 14: Mean Delay Analysis of  Multi Level Processor Sharing Disciplines

New results 3: comparison among MLPS disciplines

• Theorem 3:
  – any number of levels, with all internal disciplines allowed
  – the internal discipline of the lowest level is PS
  – MLPS derived from MLPS' by splitting the lowest level and copying the internal discipline:

    DHR ⟹ E[T^MLPS] ≤ E[T^MLPS']

• Proof based on the locality result and mean value arguments
  – prove that, for all x,

    E[U_x^MLPS] ≤ E[U_x^MLPS']

  – in this case, we do not have explicit formulas for the mean conditional delay
  – however, by truncating the service times, this simplifies to a comparison between PS+PS and PS

Page 15: Mean Delay Analysis of  Multi Level Processor Sharing Disciplines

Note related to Theorem 3

• Splitting any higher PS level is still an open problem (contrary to what we thought in an earlier phase)!

• Conjecture 1:
  – let π = PS+PS(a) and π' = PS+PS(a') with a ≤ a'
  – for any x ≥ a',

    (d/dx) T^π(x) ≤ (d/dx) T^π'(x)

• This would imply the result of Theorem 3 for any PS level

Page 16: Mean Delay Analysis of  Multi Level Processor Sharing Disciplines

Recent results

• For IMRL (Increasing Mean Residual Lifetime) service times:
  – FB is not necessarily optimal with respect to the mean delay
    • there is a counterexample where FCFS+FB and PS+FB are better than FB
    • in fact, there is a class of service time distributions for which FCFS+FB is optimal
  – if only FB and PS are allowed as internal disciplines, then
    • any such MLPS discipline is better than PS

• Note: DHR ⟹ IMRL (the DHR distributions form a strict subclass of the IMRL distributions)