CSIT5600 by M. Hamdi
QoS in The Internet: Scheduling Algorithms and Active Queue Management
Principles for QoS Guarantees
• Consider a phone application at 1 Mbps and an FTP application sharing a 1.5 Mbps link.
– Bursts of FTP traffic can congest the router and cause audio packets to be dropped.
– We want to give priority to audio over FTP.
• PRINCIPLE 1: Marking of packets is needed for the router to distinguish between different classes, and a new router policy to treat packets accordingly.
Principles for QoS Guarantees (more)
• Applications misbehave (e.g., audio sends packets at a rate higher than the 1 Mbps assumed above).
• PRINCIPLE 2: Provide protection (isolation) for one class from other classes (fairness).
[Figure: the path as perceived by a packet travelling from A to B; the perceived bandwidth and delay vary along each hop.]
QoS Metrics: What are we trying to control?
• Four metrics are used to describe a packet's transmission through a network: bandwidth, delay, jitter, and loss.
• Using a pipe analogy, for each packet:
– Bandwidth is the perceived width of the pipe.
– Delay is the perceived length of the pipe.
– Jitter is the perceived variation in the length of the pipe.
– Loss is the perceived leakiness of the pipe.
Internet QoS Overview
• Integrated Services
• Differentiated Services
• MPLS
• Traffic Engineering
QoS: State Information
• No state vs. soft state vs. hard state
[Diagram: a spectrum of per-flow state. No state: packet-switched IP and DiffServ (no state inside the network, flow information only at the edges). Soft state: IntServ/RSVP. Hard state: ATM and dedicated circuits (circuit-switched).]
QoS Router
[Diagram: at each input, a classifier feeds per-flow policers; packets then enter per-flow queues under queue management, a scheduler serves the queues, and a shaper smooths the output.]
Queuing Disciplines
[Diagram: first-come-first-served queuing contrasted with class-based scheduling, where a classifier sorts flows 1..n into class queues (Class 1 to Class 4), buffer management controls the queues, and a scheduler serves them.]
DiffServ
[Diagram: a DiffServ domain offering Premium, Gold, Silver, and Bronze classes; classification and conditioning happen at the edge, and each core router applies a PHB such as LLQ/WRED.]
Functionality at DiffServ Routers
Differentiated Service (DS) Field
[Diagram: IPv4 header with bit positions 0, 4, 8, 16, 19, 31: Version, HLen, TOS, Length; Identification, Flags, Fragment offset; TTL, Protocol, Header checksum; Source address; Destination address; Data. The DS field occupies bits 0-5 of the former TOS byte.]
• The DS field reuses the first 6 bits of the former Type of Service (TOS) byte to determine the PHB.
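The bit layout above can be checked in a few lines. A minimal sketch (the function name and the EF example are illustrative, not from the slides):

```python
def dscp_from_tos(tos_byte: int) -> int:
    """Extract the 6-bit DS field (DSCP) from the former TOS byte.

    The DS field occupies the upper six bits of the byte; the low
    two bits are not part of the DS field.
    """
    return (tos_byte >> 2) & 0x3F

# Example: a TOS byte of 0xB8 (0b1011_1000) carries DSCP 46,
# the well-known Expedited Forwarding code point.
print(dscp_from_tos(0xB8))  # -> 46
```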
Integrated Services: RSVP and Traffic Flow Example
[Diagram: sender A reaches receiver B through routers R1, R2, R3 (with R4 off-path); PATH messages travel hop by hop toward B, recording the previous hop (Phop = A, R1, R2, ...); RESV messages return along the exact reverse path, reserving buffer and bandwidth at each router before data flows.]
• The PATH message leaves the IP address of the previous-hop node in each router. It contains the Sender Tspec, Sender Template, and Adspec.
• Admission/policy control determines whether the node has sufficient available resources to handle the request. If the request is granted, bandwidth and buffer space are allocated.
• A RESV message containing a flowspec and a filterspec must be sent along the exact reverse path. The flowspec (Tspec/Rspec) defines the QoS and the traffic characteristics being requested.
• RSVP maintains soft-state information (DstAddr, Protocol, DstPort) in the routers. All packets get multi-field (MF) classification treatment and are put in the appropriate queue; the scheduler then serves these queues.
IntServ Mechanisms (per flow)
• Per-flow classification
• Per-flow buffer management
• Per-flow scheduling
[Diagrams: each function applied at a router on the path from sender to receiver.]
Round Robin (RR)
• RR avoids starvation.
• All sessions have the same weight and the same packet length.
[Diagram: queues A, B, C served one packet each per round: round #1, round #2, ...]
RR with Variable Packet Length
[Diagram: queues A, B, C with different packet sizes; across rounds #1 and #2 the flow with longer packets receives more bandwidth, even though the weights are equal!]
Solution...
[Diagram: queues A, B, C served across rounds #1, #2, #3, #4, ... so that equal weights yield equal bandwidth despite variable packet lengths.]
Weighted Round Robin (WRR)
WA = 3, WB = 1, WC = 4
[Diagram: in each round A sends 3 packets, B sends 1, and C sends 4; round length = 8.]
WRR with Non-Integer Weights
WA = 1.4, WB = 0.2, WC = 0.8; normalize to integers: WA = 7, WB = 1, WC = 4
[Diagram: round length = 7 + 1 + 4 = 12.]
Weighted Round Robin
• Serve a packet from each non-empty queue in turn.
– Can provide protection against starvation.
– Easy to implement in hardware.
• Unfair if packets are of different lengths or weights are not equal.
• What is the solution?
• Different weights, fixed packet size: serve more than one packet per visit, after normalizing to obtain integer weights.
Problems with Weighted Round Robin
• Different weights, variable-size packets: normalize weights by mean packet size.
– e.g., weights {0.5, 0.75, 1.0}, mean packet sizes {50, 500, 1500}
– normalized weights: {0.5/50, 0.75/500, 1.0/1500} = {0.01, 0.0015, 0.000667}; normalized again to integers: {60, 9, 4}
• With variable-size packets, the mean packet size must be known in advance.
• Fairness is only provided at time scales larger than the schedule.
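The two-step normalization in the example above can be sketched in Python; `wrr_packets_per_round` is a hypothetical helper name, and exact rational arithmetic makes the integer scaling deterministic:

```python
import math
from fractions import Fraction

def wrr_packets_per_round(weights, mean_sizes):
    """Turn rate weights into integer packets-per-round, normalizing
    each weight by the flow's mean packet size (slide example)."""
    per_packet = [Fraction(w).limit_denominator(10**6) / s
                  for w, s in zip(weights, mean_sizes)]
    # scale so every entry becomes an integer ...
    scale = math.lcm(*(f.denominator for f in per_packet))
    counts = [int(f * scale) for f in per_packet]
    # ... and reduce to the smallest equivalent integers
    g = math.gcd(*counts)
    return [c // g for c in counts]

print(wrr_packets_per_round([0.5, 0.75, 1.0], [50, 500, 1500]))  # -> [60, 9, 4]
```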
Max-Min Fairness
• An allocation is fair if it satisfies max-min fairness:
– each connection gets no more than what it wants;
– the excess, if any, is equally shared.
[Diagram: half of the excess is repeatedly transferred to the flows with unsatisfied demand.]
Max-Min Fairness: a common way to allocate flows
N flows share a link of rate C. Flow f wishes to send at rate W(f) and is allocated rate R(f).
1. Pick the flow f with the smallest requested rate.
2. If W(f) ≤ C/N, then set R(f) = W(f).
3. If W(f) > C/N, then set R(f) = C/N.
4. Set N = N - 1 and C = C - R(f).
5. If N > 0, go to step 1.
Max-Min Fairness: an example
Four flows share a link of rate C = 1: W(f1) = 0.1, W(f2) = 0.5, W(f3) = 10, W(f4) = 5.
Round 1: Set R(f1) = 0.1
Round 2: Set R(f2) = 0.9/3 = 0.3
Round 3: Set R(f4) = 0.6/2 = 0.3
Round 4: Set R(f3) = 0.3/1 = 0.3
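The five-step algorithm and the example can be expressed directly; this is a sketch, with `max_min_allocation` as an illustrative name:

```python
def max_min_allocation(demands, capacity):
    """Max-min fair allocation: repeatedly give the smallest demand
    its request or the current fair share C/N, whichever is smaller."""
    alloc = [0.0] * len(demands)
    order = sorted(range(len(demands)), key=lambda i: demands[i])
    remaining, n = capacity, len(demands)
    for i in order:
        share = remaining / n              # current fair share C/N
        alloc[i] = min(demands[i], share)  # never more than requested
        remaining -= alloc[i]              # C = C - R(f)
        n -= 1                             # N = N - 1
    return alloc

# Slide example: C = 1, demands W = (0.1, 0.5, 10, 5)
print([round(r, 6) for r in max_min_allocation([0.1, 0.5, 10, 5], 1.0)])
# -> [0.1, 0.3, 0.3, 0.3]
```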
Fair Queueing
1. Packets belonging to a flow are placed in a FIFO. This is called "per-flow queueing".
2. FIFOs are scheduled one bit at a time, in round-robin fashion.
3. This is called bit-by-bit fair queueing.
[Diagram: arriving packets are classified into queues Flow 1 ... Flow N, then scheduled bit-by-bit round robin.]
Weighted Bit-by-Bit Fair Queueing
• Likewise, flows can be allocated different rates by servicing a different number of bits for each flow during each round.
• Example rates: R(f1) = 0.1, R(f2) = 0.3, R(f3) = 0.3, R(f4) = 0.3; order of service for the four queues: ... f1, f2, f2, f2, f3, f3, f3, f4, f4, f4, f1, ...
• Also called "Generalized Processor Sharing (GPS)".
Understanding Bit-by-Bit WFQ
4 queues sharing 4 bits/sec of bandwidth, weights 3:2:2:1.
Packets present at time 0 (lengths in bits): queue A (weight 3): A1 = 4, A2 = 2; queue B (weight 2): B1 = 3; queue C (weight 2): C1 = 1, C2 = 1, C3 = 2; queue D (weight 1): D1 = 1, D2 = 2.
[Diagram: each round serves 3 bits of A, 2 of B, 2 of C, and 1 of D. D1, C1, and C2 depart at R = 1; B1, A1, and A2 depart at R = 2; C3 departs at R = 2 and D2 at R = 3.]
Departure order for packet-by-packet WFQ: sort the packets by their finish times.
Packetized Weighted Fair Queueing (WFQ)
Problem: we need to serve a whole packet at a time.
Solution:
1. Determine the time at which a packet p would complete if the flows were served bit-by-bit. Call this the packet's finishing time, Fp.
2. Serve packets in order of increasing finishing time.
Also called "Packetized Generalized Processor Sharing (PGPS)".
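A simplified sketch of these two steps, assuming every flow is backlogged from time 0 (as in the slide example), which sidesteps the virtual-time bookkeeping a real WFQ implementation needs:

```python
import heapq

def wfq_order(flows, weights):
    """Serve packets in order of increasing bit-by-bit finishing number.

    With all flows backlogged from round 0, a packet's finishing
    number is simply F = F_prev_in_flow + length / weight.
    flows: dict flow_id -> packet lengths in arrival order.
    weights: dict flow_id -> weight.
    """
    heap = []
    for fid, lengths in flows.items():
        finish = 0.0
        for seq, length in enumerate(lengths):
            finish += length / weights[fid]   # cumulative finish number
            heapq.heappush(heap, (finish, fid, seq))
    return [(fid, seq) for _, fid, seq in
            (heapq.heappop(heap) for _ in range(len(heap)))]

# Slide example: A1=4, A2=2 (w=3); B1=3 (w=2); C1=1, C2=1, C3=2 (w=2); D1=1, D2=2 (w=1)
order = wfq_order({'A': [4, 2], 'B': [3], 'C': [1, 1, 2], 'D': [1, 2]},
                  {'A': 3, 'B': 2, 'C': 2, 'D': 1})
print(order)  # C1, C2, D1, A1, B1, A2, C3, D2 (sequence numbers 0-indexed)
```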
WFQ is complex
• There may be hundreds to millions of flows; the linecard needs to maintain a FIFO queue per flow.
• The finishing time must be calculated for each arriving packet.
• Packets must be sorted by their departure time.
• Most of the effort in QoS scheduling goes into practical algorithms that approximate WFQ.
[Diagram: packets arriving at the egress linecard are classified into queues 1 ... N; the linecard calculates Fp for each packet and always departs the packet with the smallest Fp.]
When Can We Guarantee Delays?
• Theorem: if flows are leaky-bucket constrained and all nodes employ GPS (WFQ), then the network can guarantee worst-case delay bounds to sessions.
Traffic Managers: Active Queue Management Algorithms
Queuing Disciplines
• Each router must implement some queuing discipline.
• Queuing allocates both bandwidth and buffer space:
– Bandwidth: which packet to serve (transmit) next; this is scheduling.
– Buffer space: which packet to drop next (when required); this is buffer management.
• Queuing affects the delay of a packet (QoS).
Queuing Disciplines
[Diagram: traffic sources are classified into traffic classes A, B, and C; buffer management decides which packets to drop, and scheduling decides which packet to send next.]
Active Queue Management
[Diagram sequence: TCP sources send over an inbound link through a router queue and outbound link to sinks, with ACKs returning. With a passive queue, packets are dropped only when the queue overflows. With AQM, the router detects congestion early and issues congestion notifications (drops or marks) before the queue overflows.]
Advantages:
• Reduce packet losses (due to queue overflow)
• Reduce queuing delay
Packet Drop Dimensions
• Aggregation: from per-connection state to a single class (with class-based queuing in between).
• Drop position: head, tail, or a random location.
• Timing: early drop vs. overflow drop.
Typical Internet Queuing: FIFO + Drop-Tail
• The simplest choice, used widely in the Internet.
• FIFO (first-in-first-out): implies a single class of traffic.
• Drop-tail: arriving packets are dropped when the queue is full, regardless of flow or importance.
• Important distinction:
– FIFO: scheduling discipline
– Drop-tail: drop policy (buffer management)
FIFO + Drop-Tail Problems
• FIFO issues (irrespective of the aggregation level):
– No isolation between flows: the full burden falls on end-to-end control (e.g., TCP).
– No policing: send more packets, get more service.
• Drop-tail issues:
– Routers are forced to have large queues to maintain high utilization.
– Larger buffers lead to larger steady-state queues and delays.
– Synchronization: end hosts react to the same events, because packets tend to be lost in bursts.
– Lock-out: as a side effect of burstiness and synchronization, a few flows can monopolize queue space.
Synchronization Problem
• Caused by congestion avoidance in TCP.
[Diagram: cwnd vs. time. Slow start doubles cwnd each RTT (1, 2, 4, ...) up to W*; congestion avoidance then grows cwnd linearly (W, W + 1, ...); on loss, cwnd falls back to W*/2.]
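The sawtooth above can be reproduced with a toy per-RTT model; the function and its parameters are illustrative, not a faithful TCP implementation:

```python
def cwnd_trace(rounds, ssthresh, loss_at):
    """Toy per-RTT congestion window: slow start doubles cwnd up to
    ssthresh, congestion avoidance adds one segment per RTT, and a
    loss halves the window (the W* -> W*/2 drop on the slide)."""
    cwnd, trace = 1, []
    for rtt in range(rounds):
        trace.append(cwnd)
        if rtt in loss_at:
            cwnd = max(1, cwnd // 2)   # multiplicative decrease on loss
        elif cwnd < ssthresh:
            cwnd *= 2                  # slow start: 1, 2, 4, ...
        else:
            cwnd += 1                  # congestion avoidance: W, W+1, ...
    return trace

print(cwnd_trace(8, ssthresh=8, loss_at={6}))  # -> [1, 2, 4, 8, 9, 10, 11, 5]
```

When many connections hit the same loss event, all their sawtooths line up, which is exactly the global synchronization discussed next.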
Synchronization Problem
[Diagram: total queue size oscillating over time.]
• All TCP connections reduce their transmission rate when the maximum queue size is exceeded.
• The connections then increase their transmission rates again through slow start and congestion avoidance.
• The connections reduce their rates again, and so on: the network traffic fluctuates.
Global Synchronization Problem
• Can result in very low throughput during periods of congestion.
[Diagram: queue length repeatedly climbing to the maximum queue length and collapsing.]
Global Synchronization Problem
• TCP congestion-control synchronization leads to bandwidth under-utilization.
• Persistently full queues lead to large queueing delays.
• Drop-tail cannot provide (weighted) fairness to traffic flows; it inherently assumes responsive flows.
[Diagram: the rates of flow 1 and flow 2 oscillate in phase, so the aggregate load swings around the bottleneck rate.]
Lock-Out Problem
• Lock-out: in some situations, tail drop allows a single connection or a few flows (misbehaving flows, e.g. UDP) to monopolize queue space, preventing other connections from getting room in the queue. This "lock-out" phenomenon is often the result of synchronization.
[Diagram: a queue held at the maximum queue length by a few flows.]
Bias Against Bursty Traffic
• During dropping, bursty traffic is dropped in bunches, which is unfair to bursty connections.
[Diagram: a burst arriving at a queue near the maximum queue length loses many consecutive packets.]
Active Queue Management: Goals
• Solve the lock-out and full-queue problems:
– No lock-out behavior
– No global synchronization
– No bias against bursty flows
• Provide better QoS at a router:
– Low steady-state delay
– Less packet dropping
RED (Random Early Detection)
• FIFO scheduling
• Buffer management:
– Probabilistically discard packets.
– The discard probability is computed as a function of the average queue length.
[Diagram: discard probability is 0 below min_th, rises as the average queue length grows from min_th to max_th, and is 1 beyond max_th.]
RED Operation
[Diagram: a queue with min and max thresholds; the drop probability P(drop) rises linearly from 0 at minthresh to MaxP at maxthresh, then jumps to 1.0 when the average queue length exceeds maxthresh.]
RED: Two Threshold Values
• RED makes use of the average queue length.
• Case 1: average queue length < min. threshold value: admit the new packet.
[Diagram: a queue with min and max thresholds.]
RED (cont'd)
• Case 2: average queue length between the min. and max. threshold values: compute the drop probability p, drop the new packet with probability p, and admit it with probability 1 - p.
[Diagram: a queue with min and max thresholds.]
Random Early Detection Algorithm
• ave = (1 - wq)·ave + wq·q
• p = max_p·(ave - min_th)/(max_th - min_th)

for each packet arrival:
    calculate the average queue size ave
    if ave ≤ min_th:
        do nothing
    else if min_th < ave ≤ max_th:
        calculate the drop probability p
        drop the arriving packet with probability p
    else:  (ave > max_th)
        drop the arriving packet
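The pseudocode maps almost line-for-line onto Python. A sketch (the threshold values used for checking are arbitrary):

```python
import random

class RED:
    """Random Early Detection drop decision, per the slide pseudocode."""

    def __init__(self, min_th, max_th, max_p, wq):
        self.min_th, self.max_th = min_th, max_th
        self.max_p, self.wq = max_p, wq
        self.avg = 0.0  # EWMA of the instantaneous queue length

    def on_arrival(self, queue_len):
        """Return True if the arriving packet should be dropped."""
        # ave = (1 - wq) * ave + wq * q
        self.avg = (1 - self.wq) * self.avg + self.wq * queue_len
        if self.avg <= self.min_th:
            return False                     # no drop
        if self.avg >= self.max_th:
            return True                      # forced drop
        # p = max_p * (ave - min_th) / (max_th - min_th)
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return random.random() < p           # probabilistic early drop
```

With wq = 1 the average tracks the instantaneous queue length exactly, which makes the behavior easy to check by hand.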
RED Packet Drop Regions
[Diagram: average queue length vs. time, between a min threshold and a max threshold below the max queue length: no drop below the min threshold, probabilistic early drop between the thresholds, forced drop above the max threshold; the drop probability rises accordingly.]
Active Queue Management: Random Early Detection (RED)
• The weighted average accommodates bursty traffic.
[Diagram: queue length vs. time with min and max thresholds and the max queue size: no drops, probabilistic drops, forced drops.]
• Probabilistic drops:
– avoid consecutive drops;
– are proportional to bandwidth utilization (the drop rate is equal for all flows).
RED Vulnerable to Misbehaving Flows
[Plot: TCP throughput (KBytes/sec) vs. time (seconds) under FIFO and RED; during a UDP blast, TCP throughput collapses.]
Effectiveness of RED: Lock-Out and Global Synchronization
• Packets are dropped randomly.
• Each flow has the same probability of having a packet discarded.
Effectiveness of RED: Full Queue and Bias Against Bursty Traffic
• Packets are dropped probabilistically in anticipation of congestion, not only when the queue is full.
• The average queue length qavg decides the dropping probability, which allows instantaneous bursts.
What QoS Does RED Provide?
• Lower buffer delay and good interactive service: qavg is controlled to be small.
• With responsive flows, packet dropping is reduced: the early congestion indication allows traffic to throttle back before congestion sets in.
• RED provides small delay, small packet loss, and high throughput, provided the flows are responsive.
Weighted RED (WRED)
• WRED provides separate thresholds and weights for different IP precedences, allowing different quality of service for different traffic.
• Lower-priority traffic may be dropped more frequently than higher-priority traffic during periods of congestion.
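A sketch of the per-precedence lookup; the class names and threshold numbers below are illustrative only:

```python
def wred_drop_prob(avg_qlen, precedence, profiles):
    """WRED = RED with per-precedence (min_th, max_th, max_p) profiles.

    Lower-priority classes get lower thresholds, so their packets are
    dropped earlier as the shared queue builds up.
    """
    min_th, max_th, max_p = profiles[precedence]
    if avg_qlen <= min_th:
        return 0.0
    if avg_qlen >= max_th:
        return 1.0
    return max_p * (avg_qlen - min_th) / (max_th - min_th)

profiles = {'low': (10, 30, 0.2), 'high': (25, 40, 0.1)}  # illustrative values
print(wred_drop_prob(20, 'low', profiles))   # low priority: already dropping
print(wred_drop_prob(20, 'high', profiles))  # high priority: no drops yet -> 0.0
```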
WRED (cont'd)
[Diagram: random dropping applied separately to high-, medium-, and low-priority traffic.]
Congestion Avoidance: Weighted Random Early Detection (WRED)
• Adds per-class queue thresholds for differential treatment.
[Diagram: probability of packet discard vs. average queue depth, with the standard minimum threshold below the premium minimum threshold and a shared maximum threshold. Two classes are shown; any number of classes can be defined.]
Problems with (W)RED – unresponsive flows
Vulnerability to Misbehaving Flows
• TCP performance on a 10 Mbps link under RED in the face of a UDP blast.
Vulnerability to Misbehaving Flows
• Consider the following example network:
[Diagram: TCP sources S(1) ... S(m) and UDP sources S(m+1) ... S(m+n) connect to router R1 over 100 Mbps links; R1 connects to R2 over a 10 Mbps bottleneck; the corresponding sinks connect to R2 over 100 Mbps links.]
Vulnerability to Misbehaving Flows: Throughput Analysis
[Plot: throughput (Mbps) per flow number (0 to 20) under RED vs. the ideal fair share; the misbehaving flow takes far more than its fair share.]
Vulnerability to Misbehaving Flows
• Queue size versus time.
[Plots: current and average queue size (number of packets) vs. time (seconds) for RED and for CHOKe; under CHOKe the delay is bounded and global synchronization is solved.]
Unfairness of RED
[Plot: per-flow throughput (Kbps) vs. the ideal fair share for 32 TCP flows and 1 UDP flow; the unresponsive UDP flow occupies over 95% of the bandwidth.]
Scheduling and Queue Management
• What do routers want to do?
– Isolate unresponsive flows (e.g., UDP).
– Provide quality of service to all users.
• Two ways to do it:
– Scheduling algorithms, e.g. WFQ, WRR.
– Queue management algorithms, e.g. RED, FRED, SRED.
The Setup and the Problem
• A congested network with many users, whose QoS requirements differ.
• Problem: allocate bandwidth fairly.
Approach 1: Network-Centric
• Network node: Weighted Fair Queueing (WFQ).
• User traffic: any type.
• Problem: complex implementation; lots of work per flow.
Approach 2: User-Centric
• Network node: a simple FIFO buffer with active queue management (AQM), e.g. RED.
• User traffic: congestion-aware (e.g., TCP).
• Problem: requires user cooperation.
Current Trend
• Network node: a simple FIFO buffer, with AQM schemes enhanced to provide fairness through preferential packet dropping.
• User traffic: any type.
Packet Dropping Schemes
• Size-based schemes: the drop decision is based on the size of the FIFO queue, e.g. RED.
• Content-based schemes: the drop decision is based on the current content of the FIFO queue, e.g. CHOKe.
• History-based schemes: keep a history of packet arrivals/drops to guide the drop decision, e.g. SRED, RED with penalty box, AFD.
CHOKe (no state information)
Random Sampling from the Queue
• A randomly chosen packet is more likely to belong to an unresponsive flow.
• Unresponsive flows cannot fool the system.
Comparison of Flow ID
• Compare the flow ID of the sampled packet with that of the incoming packet:
– More accurate.
– Reduces the chance of dropping packets from TCP-friendly flows.
Dropping Mechanism
• Drop packets (both the incoming packet and matching samples):
– More arrivals lead to more drops.
– Gives users a disincentive to send more.
CHOKe
• Case 1: average queue length < min. threshold value: admit the new packet.
[Diagram: a queue with min and max thresholds.]
CHOKe (cont'd)
• Case 2: average queue length between the min. and max. threshold values:
– A packet is randomly chosen from the queue and compared with the newly arrived packet.
– If they are from different flows, the same logic as in RED applies.
– If they are from the same flow, both packets are dropped.
CHOKe (cont'd)
• Case 3: average queue length > max. threshold value:
– A random packet is chosen for comparison.
– If they are from different flows, the new packet is dropped.
– If they are from the same flow, both packets are dropped.
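The three cases condense into one decision function. A sketch (flow IDs stand in for packets, and the RED branch reuses the linear drop probability):

```python
import random

def choke_decide(queue, arriving_flow, avg, min_th, max_th, max_p):
    """One CHOKe admission decision.

    queue: list of flow IDs of the buffered packets.
    Returns (possibly shortened queue, True if the arrival is admitted).
    """
    if avg <= min_th or not queue:
        return queue, True                       # Case 1: admit
    victim = random.randrange(len(queue))        # random sample from the queue
    if queue[victim] == arriving_flow:           # same flow: drop both
        return queue[:victim] + queue[victim + 1:], False
    if avg >= max_th:
        return queue, False                      # Case 3: drop the arrival
    # Case 2: different flows, fall back to RED's probabilistic drop
    p = max_p * (avg - min_th) / (max_th - min_th)
    return queue, random.random() >= p
```

A heavy flow fills the queue with its own packets, so its arrivals increasingly match the random sample and get dropped in pairs, which is the disincentive CHOKe relies on.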
Simulation Setup
Network Setup Parameters
• 32 TCP flows, 1 UDP flow
• Maximum TCP window size = 300
• All links have a propagation delay of 1 ms
• FIFO buffer size = 300 packets
• All packet sizes = 1 KByte
• RED: (minth, maxth) = (100, 200) packets
32 TCP, 1 UDP (one sample)
32 TCP, 5 UDP (5 samples)
How Many Samples to Take?
• Take a different number of samples for different values of Qlenavg:
– fewer samples when Qlenavg is close to minth;
– more samples when Qlenavg is close to maxth.
32 TCP, 5 UDP (self-adjusting)
Two Problems of CHOKe
• Problem I: unfairness among UDP flows of different rates.
• Problem II: difficulty in automatically choosing how many samples to take and drop.
SAC (Self-Adjustable CHOKe)
• Tries to solve the two problems mentioned above.
SAC
• Problem 1: unfairness among UDP flows of different rates. For example, with k = 1, UDP flow 31 (6 Mbps) gets one third of the throughput of UDP flow 32 (1 Mbps); with k = 10, the throughput of UDP flow 31 is almost zero.
[Plot: throughput (Kbps) per flow for 30 TCP flows and 2 misbehaving UDP flows under CHOKe with k = 1, CHOKe with k = 10, and the ideal fair share.]
SAC
• Problem 2: difficulty in automatically choosing how many to drop. With k = 4, the UDP flows occupy most of the bandwidth; with k = 10, the sharing is relatively fair; with k = 20, the TCP flows get most of the bandwidth.
[Plot: throughput (Kbps) per flow for 30 TCP flows and 4 misbehaving UDP flows under CHOKe with k = 4, k = 10, and k = 20.]
SAC
• Solutions:
1. Search from the tail of the queue for a packet with the same flow ID and drop that packet instead of dropping at random, because the higher a flow's rate, the more likely its packets are to gather at the rear of the queue. The queue occupancy then becomes more evenly distributed among the flows.
2. Automate the choice of k according to the traffic status (the number of active flows and the number of UDP flows).
SAC
• When an incoming UDP packet is compared with a randomly selected packet, P is updated as follows: if they are of the same flow, P ← (1 - wp)·P + wp; if they are of different flows, P ← (1 - wp)·P.
• If P is small, there are more competing flows, and we should increase the value of k.
• For every incoming packet, R is updated as follows: if it is a UDP packet, R ← (1 - wr)·R + wr; if it is a TCP packet, R ← (1 - wr)·R.
• If R is large, there is a large amount of UDP traffic, and we should increase k to drop more UDP packets.
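The two exponentially weighted estimates are one line each. A sketch, with the wp and wr values chosen arbitrarily for illustration:

```python
def update_match_estimate(P, matched, wp=0.1):
    """P <- (1 - wp)*P + wp on a sample match, else P <- (1 - wp)*P.
    A small P means many competing flows, so k should be increased."""
    return (1 - wp) * P + (wp if matched else 0.0)

def update_udp_share(R, is_udp, wr=0.1):
    """R <- (1 - wr)*R + wr for a UDP arrival, else R <- (1 - wr)*R.
    A large R means a large share of UDP traffic, so k should be increased."""
    return (1 - wr) * R + (wr if is_udp else 0.0)

print(round(update_match_estimate(0.5, True), 3))   # -> 0.55
print(round(update_udp_share(0.5, False), 3))       # -> 0.45
```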
SAC Simulations
• Throughput per flow: 30 TCP flows and 2 UDP flows of different rates.
• Throughput per flow: 30 TCP flows and 4 UDP flows of the same rate.
• Throughput per flow: 20 TCP flows and 4 UDP flows of different rates.
AQM Using "Partial" State Information
Congestion Management and Avoidance: Goal
• Provide fair bandwidth allocation similar to WFQ.
• Be as simple to implement as RED.
[Diagram: fairness vs. simplicity; WFQ is fair but complex, RED is simple but unfair, and the ideal scheme combines both.]
AQM Based on Capture-Recapture
• Objective: achieve fairness close to max-min fairness:
1. If W(f) < C/N, then set R(f) = W(f).
2. If W(f) > C/N, then set R(f) = C/N.
• Formulation:
– Ri: the sending rate of flow i
– Di: the drop probability of flow i
– Ideally, we want Ri·(1 - Di) = Rfair (equal share), i.e. Di = (1 - Rfair/Ri)+ (that is, drop the excess).
AQM Based on Capture-Recapture
[Diagram: incoming packets enter the AQM module, which estimates each flow's sending rate and the fair share and adjusts drops to achieve a fair allocation of bandwidth.]
• The key question is how to estimate the sending rate Ri and the fair share Rfair.
Capture-Recapture Models
• CR models were originally developed for estimating demographic parameters of animal populations (e.g., population size, number of species).
– They are extremely useful where inspecting the whole state space is infeasible or very costly.
– Numerous models have been developed for various situations.
• CR models are used in many diverse fields, ranging from software inspection to epidemiology.
• The method is based on several key ideas: animals are captured randomly, marked, released, and then recaptured randomly from the population.
[Illustrations: a first sample is captured and marked; time is then allowed for the marked individuals to mix with the unmarked individuals; then another sample is captured.]
Capture-Recapture Model
• Unknown number of fish in a lake: catch a sample and mark them, let them loose, recapture a sample, and count the marks to estimate the population size.
• n1 = number in the first sample = 15; n2 = number in the second sample = 10; n12 = number in both samples = 5; N = total population size.
• Assume n1/N = n12/n2; therefore 15/N = 5/10, so N = (10 × 15) / 5 = 30.
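The fish arithmetic is the classic two-sample estimator; a one-line sketch (the function name is illustrative):

```python
def lincoln_petersen(n1, n2, n12):
    """Population estimate from n1/N = n12/n2, i.e. N = n1 * n2 / n12."""
    return n1 * n2 / n12

# Slide example: 15 marked, 10 recaptured, 5 of them marked.
print(lincoln_petersen(15, 10, 5))  # -> 30.0
```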
Capture-Recapture Models
• Simple model: estimate the size N of a homogeneous population of animals:
– n1 animals are captured (and marked),
– n2 animals are recaptured, and
– m2 of the recaptured animals are marked.
• Under this simple capture-recapture model (M0): m2/n2 = n1/N, hence N = n1·n2/m2.
Capture-Recapture Models
• The capture probability is the chance that an individual animal gets caught.
• The M0 model assumes the capture probability is the same for all animals ('0' refers to constant capture probability).
• In the Mh model, capture probabilities vary by animal, for example because of differences in species, sex, or age ('h' refers to heterogeneity).
Capture-Recapture Models
• Estimating N under the Mh model is based on the capture-frequency data f1, f2, ..., ft (over t captures):
– f1 is the number of animals caught exactly once,
– f2 is the number of animals caught exactly twice, and so on.
• The jackknife estimator of N is a linear combination of these capture frequencies:
N = a1·f1 + a2·f2 + ... + at·ft,
where the coefficients ai are functions of t.
AQM Based on Capture-Recapture
• The key question is how to estimate the sending rate Ri and the fair share Rfair.
• We use an arrival buffer that stores recently arrived packet headers (we control how large the buffer is, and it represents the nature of the flows better than the sending buffer):
1. Estimate Ri using the M0 capture-recapture model.
2. Estimate Rfair using the Mh capture-recapture model (by estimating the number of active flows).
AQM Based on Capture-Recapture
• Ri is estimated for every arriving packet (accuracy can be increased with multiple captures, or decreased by capturing packets only periodically).
• If the arrival buffer has size B and the number of captured packets of flow i is Ci, then Ri = R·Ci/B, where R is the aggregate arrival rate.
• Rfair may not change in every time slot; the capturing and the calculation of the number of active flows can therefore be done independently of individual packet arrivals: Rfair = R / (number of active flows).
• The capture-recapture model offers a lot of flexibility in trading accuracy against complexity, and the same captures can be used to calculate both Ri and Rfair.
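Putting the two estimates together with the drop rule Di = (1 - Rfair/Ri)+ gives a compact sketch; `cap_drop_probability` is an illustrative name, and the numbers in the example are made up:

```python
def cap_drop_probability(captured_i, buffer_size, aggregate_rate, n_active_flows):
    """Drop probability for flow i under the capture-recapture AQM.

    Ri    = R * Ci / B        (flow i's share of the arrival buffer)
    Rfair = R / (active flows)
    Di    = max(0, 1 - Rfair / Ri)
    """
    Ri = aggregate_rate * captured_i / buffer_size
    if Ri <= 0:
        return 0.0
    Rfair = aggregate_rate / n_active_flows
    return max(0.0, 1.0 - Rfair / Ri)

# A flow holding 50 of 100 buffered headers among 10 active flows
# is sending at 5x its fair share, so 80% of its packets are dropped.
print(cap_drop_probability(50, 100, aggregate_rate=10.0, n_active_flows=10))  # -> 0.8
```

Flows at or below the fair share see Di = 0, so only the excess traffic of fast flows is penalized.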
AQM Based on Capture-Recapture
[Diagram: incoming packets enter the capture-recapture AQM, which estimates Ri with the M0 model and Rfair with the Mh model, then drops with probability Di = (1 - Rfair/Ri)+ to achieve a fair allocation of bandwidth.]
Performance Evaluation
• This is the classical setup researchers use to evaluate AQM schemes (many parameters can be varied: responsive vs. non-responsive connections, the nature of the responsiveness, link delays, etc.).
[Diagram: TCP and UDP sources S(1) ... S(m+n) connect to router R1 over 100 Mbps links; R1 connects to R2 over a 10 Mbps bottleneck; the corresponding sinks connect to R2 over 100 Mbps links.]
Performance Evaluation
• Estimation of the number of flows.
[Plot: estimated number of flows vs. time for SRED, CAP, and the ideal, as the number of flows varies.]
Performance Evaluation
• Bandwidth allocation comparison between CAP and RED.
[Plot: throughput (Mbit/s) per flow number (0 to 25) for Ideal, RED, and CAP.]
Performance Evaluation
• Bandwidth allocation comparison between CAP and SRED.
[Plot: throughput (Mbit/s) per flow number (0 to 25) for Ideal, SRED, and CAP.]
Performance Evaluation
• Bandwidth allocation comparison between CAP and RED-PD.
[Plot: throughput (Mbit/s) per flow number (0 to 25) for Ideal, RED, RED-PD, and CAP.]
Performance Evaluation
• Bandwidth allocation comparison between CAP and SFB.
[Plot: throughput (Mbit/s) per flow number (0 to 25) for Ideal, SFB, and CAP.]
Normalized Measure of Performance
• A single comparison of fairness using a normalized value ||BW||, where the norm measures the deviation of the bandwidth bj received by each flow from its ideal fair share bi.
• Thus ||BW|| = 0 corresponds to ideal fair sharing.
Normalized Measure of Performance
[Plot: ||BW|| vs. number of flows (25 to 70) for Ideal, RED, SRED, SFB, RED-PD, and CAP; lower is fairer.]
Performance Evaluation: Variable Amount of Unresponsiveness
[Plot: ||BW|| vs. UDP load (1% to 30% of the bottleneck link) for Ideal, RED, RED-PD, SFB, SRED, and CAP.]