Packet Scheduling and Buffer Management in Switches

Based on S. Keshav, "An Engineering Approach to Computer Networking"

Original Design Goals

• Deliverability

• Survivability

• Speed

Statistical Gain

[Figure: two switches, each fed by n inputs at rates r0, r1, …, rn-1, where ri is the rate of the i-th input. Without statistical gain, the output link must run at R = Σ ri; with statistical multiplexing, an output rate R' << Σ ri suffices.]

Internet Today: Best Effort

• Packet scheduling: FCFS

• Queue management: Drop Tail

– Advantage: simplicity
– Disadvantage: flat service, no guarantees!

From Best Effort to Guaranteed Service?

[Figure: the network (links and switches) as a shared resource.]

Zoom on a Router

[Figure: internal view of a router (switch).]

Scheduling Functions

• 1) Packet scheduling: select the next packet that will use the link (allocates queuing delays)

• 2) Queue management: manage the shortage of storage for awaiting packets (allocates loss rates)

Scheduling Is Necessary If

• The network has high statistical fluctuations (more so for packet switching than for circuit switching),

• guarantees are needed,

• or fairness is needed.

A Limit: The Law of Conservation

• In words: you cannot decrease the mean delay of one flow without increasing the mean delay of some other flow.

• Formally (Kleinrock's conservation law), for each flow fi:
– λi = mean arrival rate
– xi = mean service time
– qi = mean waiting time

• Then Σi λi · xi · qi = constant, whatever the scheduling policy.

Example

• Consider ATM virtual circuits A and B, with arrival rates 10 Mbps and 25 Mbps, sharing an OC-3 (155 Mbps) link. The packet size is PS.

– Under FCFS, the mean queuing delays of both A and B are 0.5 ms.

– A researcher claims to have designed a new scheduling policy in which A's mean delay is reduced by 0.4 ms and B's mean delay is reduced by 0.2 ms. Is this possible?

Example (Cont’d)

[Figure: switch with two inputs, 10 Mbps (A) and 25 Mbps (B), and one 155 Mbps output link.]

Example (Cont’d)

• FCFS
– λA = 10 Mbps, xA = PS/155 Mbps, qA = 0.5 ms
– λB = 25 Mbps, xB = PS/155 Mbps, qB = 0.5 ms

• New scheduling policy
– λA = 10 Mbps, xA = PS/155 Mbps, qA = 0.1 ms
– λB = 25 Mbps, xB = PS/155 Mbps, qB = 0.3 ms

• Conservation check: since xA = xB, Σ λi · xi · qi is proportional to 10 × 0.5 + 25 × 0.5 = 17.5 under FCFS, but to 10 × 0.1 + 25 × 0.3 = 8.5 under the claimed policy. The sum is not conserved, so the claim is impossible.
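A quick numeric check makes the answer concrete. This sketch is illustrative: since both flows share the same packet size PS, the common factor PS cancels when comparing the two policies, so only the link utilizations λi/155 Mbps matter.

```python
# Check Kleinrock's conservation law for the example above.
# rho_i = lambda_i * x_i; with equal packet sizes the PS factor cancels,
# so we compare sum((lambda_i / LINK) * q_i) across the two policies.

LINK = 155e6  # OC-3 link rate, bits per second

def conservation_sum(flows):
    """flows: list of (arrival_rate_bps, mean_queuing_delay_s)."""
    return sum((rate / LINK) * delay for rate, delay in flows)

fcfs    = [(10e6, 0.5e-3), (25e6, 0.5e-3)]
claimed = [(10e6, 0.1e-3), (25e6, 0.3e-3)]

print(conservation_sum(fcfs))     # ~1.13e-4
print(conservation_sum(claimed))  # ~5.48e-5: smaller, so the claim
                                  # violates the conservation law
```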

Fairness

• Max-min fair share:
– Resources are allocated in order of increasing demand.
– No source gets a resource share larger than its demand.
– Sources with unsatisfied demands get an equal share of the resource.

• Example: compute the max-min fair allocation for a set of four sources with demands 2, 2.6, 4, and 5 sharing a resource of capacity 10. (Answer: 2, 2.6, 2.7, 2.7; see the sketch below.)
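A minimal Python sketch of the computation, using the standard progressive-filling idea (function and variable names are mine, not from the slides):

```python
def max_min_fair(demands, capacity):
    """Max-min fair allocation by progressive filling.

    Visit sources in order of increasing demand; each gets either its
    full demand or an equal share of whatever capacity remains.
    """
    n = len(demands)
    alloc = [0.0] * n
    remaining = capacity
    for k, i in enumerate(sorted(range(n), key=lambda j: demands[j])):
        share = remaining / (n - k)        # equal split of what is left
        alloc[i] = min(demands[i], share)  # never exceed the demand
        remaining -= alloc[i]
    return alloc

print(max_min_fair([2, 2.6, 4, 5], 10))    # [2, 2.6, 2.7, 2.7]
```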

Fair Scheduling

• Implement max-min fair share.

• Is it possible?

• Generalized Processor Sharing (GPS, the ideal): serve an infinitesimal amount from each "connection" in turn.

Approximations of GPS

• Round Robin: serve one packet at a time instead of an infinitesimal quantity.

• Weighted Round Robin (WRR): round robin, but using a weight for each connection.

• Example: suppose connections A, B, and C have the same packet size and weights 0.5, 0.75, and 1.0. How many packets from each connection should a round-robin scheduler serve in each round? (Normalizing the weights to the smallest integers in the same ratio gives 2, 3, and 4; see the sketch below.)
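One way to answer the example mechanically is to rescale the weights to the smallest integers with the same ratio. This helper is illustrative, not part of the slides:

```python
import math
from fractions import Fraction

def packets_per_round(weights):
    """Rescale the weights to the smallest integers with the same ratio."""
    fracs = [Fraction(w).limit_denominator() for w in weights]
    # Clear the denominators, then remove any common factor.
    denom_lcm = math.lcm(*(f.denominator for f in fracs))
    counts = [int(f * denom_lcm) for f in fracs]
    g = math.gcd(*counts)
    return [c // g for c in counts]

print(packets_per_round([0.5, 0.75, 1.0]))  # [2, 3, 4]
```

So in each round the scheduler serves 2 packets from A, 3 from B, and 4 from C.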

Approximations of GPS (2)

• Problem: how do we handle variable packet sizes, especially if we do not know the mean packet size?

• Answer: Deficit Round Robin (DRR), sketched below.
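A minimal sketch of Deficit Round Robin (Shreedhar and Varghese, 1995); the quantum value and data structures here are illustrative:

```python
from collections import deque

class DeficitRoundRobin:
    """Each flow receives a quantum of bytes per round; unused credit
    (the 'deficit') carries over while the flow stays backlogged, which
    handles variable packet sizes without knowing the mean size."""

    def __init__(self, quantum=500):
        self.quantum = quantum   # bytes of credit added per round
        self.queues = {}         # flow id -> deque of packet sizes
        self.deficit = {}        # flow id -> accumulated credit

    def enqueue(self, flow, size):
        self.queues.setdefault(flow, deque()).append(size)
        self.deficit.setdefault(flow, 0)

    def serve_round(self):
        """Visit each backlogged flow once; return the flows served."""
        sent = []
        for flow, q in self.queues.items():
            if not q:
                continue
            self.deficit[flow] += self.quantum
            # Send packets while the head packet fits in the credit.
            while q and q[0] <= self.deficit[flow]:
                self.deficit[flow] -= q.popleft()
                sent.append(flow)
            if not q:                 # idle flows lose leftover credit
                self.deficit[flow] = 0
        return sent
```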

Bad News

• Fair queueing must keep, per flow:
– state variables
– a queue

• Current hardware may handle around 5000 simultaneously active flows, but does this scale?

• Complex buffer management (variable packet sizes)

• Changing traffic patterns

Good News

• Close to the user, the number of flows is smaller.

• Core routers are "overprovisioned".

• There is only a small number of "bad guys".

Architecture

[Figure: edge routers with 45 Mbps access links feeding core routers with OC-3 to OC-192 links.]

Two Options

• Implement fair queueing (or a variant such as DRR) on all routers.

• Approximate fair queueing.

Approximate Fair Queueing

• Keep it simple in the core routers:
– Use FIFO to allocate queuing delays.
– Differentiate service through differentiated buffer management (allocate loss rates).

• Push complexity to the edge of the network:
– Keep per-flow state.
– Label packets for differentiated service.

Core-Stateless Fair Queueing

• At the edge router:
– Estimate the rate of each flow (requires keeping per-flow state).
– Label each packet with its flow's estimated rate ri.

• At the core router:
– Estimate the fair share rate.
– Compute each packet's dropping probability from ri. (No per-flow state needed: the flow is characterized entirely by the label ri.)
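A sketch of the core-router drop decision from the CSFQ paper (Stoica, Shenker, and Zhang, 1998): a packet labeled with rate ri is dropped with probability max(0, 1 - α/ri), where α is the estimated fair share. The rate- and fair-share-estimation machinery is omitted here.

```python
import random

def csfq_admit(label_rate, fair_share):
    """Decide whether a core router forwards a packet.

    label_rate: flow rate r_i written into the header by the edge router
    fair_share: the core router's current fair-share estimate (alpha)
    """
    # Flows at or below the fair share are never dropped; faster flows
    # are thinned just enough to bring them down to the fair share.
    p_drop = max(0.0, 1.0 - fair_share / label_rate)
    return random.random() >= p_drop    # True -> forward, False -> drop
```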

Active Research Area

• Labeling techniques

• Corresponding packet drop techniques (Active Queue Management)

• End-to-end performance of the two techniques above

Buffer Management Techniques (Allocating Loss Rates)

• Drop Tail

• Drop Front

• RED: Random Early Detection

• CHOKe: stateless fair queue management

• Stochastic Fair Blue

RED Routers (Random Early Detection)

• Objectives:
– Keep the queue length small.
– Keep link utilization high.
– Warn senders early, to avoid massive losses and backoffs.

• How?
– Detect incipient congestion (not a temporary burst).

• Read the Floyd and Jacobson paper (1993).

RED Implementation

• Maintain an average queue length AQL (different from the current queue length).

• When a packet arrives at the queue:
– if AQL < Tmin, forward the packet;
– if Tmin <= AQL <= Tmax, "mark" the packet with probability p(AQL);
– if AQL > Tmax, "mark" every packet.

[Figure: marking probability p(AQL), rising from 0 at Tmin to its maximum at Tmax.]
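A minimal sketch of the mechanism above. Parameter values are illustrative, and the count-based probability correction from the 1993 paper is omitted:

```python
import random

class RedQueue:
    """RED: mark based on an EWMA of queue length, not its instant value."""

    def __init__(self, t_min=5, t_max=15, p_max=0.1, weight=0.002):
        self.t_min, self.t_max, self.p_max = t_min, t_max, p_max
        self.weight = weight   # small EWMA gain ignores short bursts
        self.aql = 0.0         # average queue length

    def should_mark(self, current_len):
        """Called on each arrival; True means mark (or drop) the packet."""
        self.aql += self.weight * (current_len - self.aql)
        if self.aql < self.t_min:
            return False                       # forward
        if self.aql > self.t_max:
            return True                        # mark everything
        # Linear ramp from 0 at Tmin to p_max at Tmax.
        p = self.p_max * (self.aql - self.t_min) / (self.t_max - self.t_min)
        return random.random() < p
```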

RED Performance

• Keeps delay low (short queue).

• "Marking" packets (e.g., setting an ECN bit instead of dropping) gives the same performance.

• RED is implemented by many router manufacturers (e.g., Cisco).

• Shortcomings:
– Does not enforce fairness.
– Tuning the thresholds and other algorithm parameters is done by trial and error (there is work on adaptive RED).

Problems with RED

• Parameter tuning:
– the thresholds
– the slope of the probability function

• Provider thinking: why drop a packet that I can handle?

WRED Routers

• How would you design WRED (Weighted RED)? One possible answer is sketched below.
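One plausible design: run the RED curve with different thresholds per drop-precedence class, so low-priority traffic is dropped earlier and more aggressively. The class names and numbers below are illustrative, not any vendor's defaults.

```python
import random

# Hypothetical per-class RED profiles: higher-priority classes get
# higher thresholds and a gentler maximum drop probability.
WRED_PROFILES = {
    "gold":   {"t_min": 12, "t_max": 20, "p_max": 0.02},
    "silver": {"t_min": 8,  "t_max": 16, "p_max": 0.05},
    "bronze": {"t_min": 4,  "t_max": 12, "p_max": 0.10},
}

def wred_should_mark(aql, drop_class):
    """Apply the RED curve of the packet's class to the shared AQL."""
    prof = WRED_PROFILES[drop_class]
    if aql < prof["t_min"]:
        return False
    if aql > prof["t_max"]:
        return True
    p = prof["p_max"] * (aql - prof["t_min"]) / (prof["t_max"] - prof["t_min"])
    return random.random() < p
```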

CHOKe: A Stateless Queue Management

[Flowchart, for each new packet NP:]

1. If AQL <= Tmin: admit NP.
2. Otherwise, draw a packet RP at random from the queue.
3. If NP and RP belong to the same flow: drop both.
4. Otherwise, if AQL <= Tmax: admit NP with probability p; else drop NP.
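A sketch of the flowchart in code; the packet representation and the admission probability p are illustrative:

```python
import random

def choke_on_arrival(queue, new_pkt, aql, t_min, t_max, p_admit):
    """CHOKe arrival logic. queue is a list of packets, each a dict
    with a 'flow' key; aql is the RED-style average queue length."""
    if aql <= t_min:
        queue.append(new_pkt)              # light load: always admit
        return "admitted"
    if queue:                              # draw a random packet RP
        idx = random.randrange(len(queue))
        if queue[idx]["flow"] == new_pkt["flow"]:
            del queue[idx]                 # NP and RP match: drop both
            return "dropped both"
    if aql <= t_max and random.random() < p_admit:
        queue.append(new_pkt)
        return "admitted"
    return "dropped new packet"
```

Because a heavy flow occupies more buffer slots, it is more likely to match the random sample and be penalized; this is what lets CHOKe approximate fairness without keeping any per-flow state.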

Blue Algorithm

• Maintain a probability pm with which to mark or drop packets.

• Upon packet loss:
– increase pm (while ensuring some minimum delay between increases).

• Upon idle link:
– decrease pm.

Recommended Reading