CS716
Advanced Computer Networks
By Dr. Amir Qayyum
Lecture No. 17
Virtual Paths with ATM
• Two-level hierarchy of virtual connection: 8-bit VPI and 16-bit VCI
– Switches in the public network use only the 8-bit VPI
– Corporate sites use the full 24-bit address (VPI + VCI)
– Much less connection-state information in public switches
– Virtual path: a fat pipe carrying a bundle of virtual circuits
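The two-level split can be illustrated with a short Python sketch; the function name and the sample identifier are made up for illustration (real switches do this in hardware):

```python
# Hypothetical sketch: splitting ATM's 24-bit connection identifier
# (8-bit VPI + 16-bit VCI), as a public switch vs. an endpoint sees it.

def split_vpi_vci(conn_id: int) -> tuple[int, int]:
    """Split a 24-bit connection identifier into (VPI, VCI)."""
    vpi = (conn_id >> 16) & 0xFF    # top 8 bits: virtual path
    vci = conn_id & 0xFFFF          # low 16 bits: virtual circuit
    return vpi, vci

# A public-network switch keeps state per VPI only (at most 256
# entries), while an endpoint distinguishes all 2**24 connections.
vpi, vci = split_vpi_vci(0x12ABCD)
print(hex(vpi), hex(vci))   # 0x12 0xabcd
```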
[Figure: a virtual path through the public network connecting Network A and Network B]
Physical Layers for ATM
• ATM may run over several physical media
• ATM was assumed to run over SONET, but the two are entirely separable
• ATM frame boundaries must be correctly identified
– Successive 53-byte ATM cells in the SONET payload
– A SONET overhead byte points to the start of the payload
– Another way is to check the CRC carried in the 5th byte of the cell header
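The CRC-based delineation method can be sketched in Python. This assumes the standard HEC rule (a CRC-8 with generator x⁸ + x² + x + 1 over the first four header bytes, XORed with 0x55); the function names and the sample stream are illustrative:

```python
def crc8_atm(data: bytes) -> int:
    """CRC-8 with generator x^8 + x^2 + x + 1, MSB-first, zero init."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def hec(header4: bytes) -> int:
    # The HEC adds the fixed coset 0x55 to the CRC remainder
    return crc8_atm(header4) ^ 0x55

def find_cell_boundary(stream: bytes) -> int:
    """Scan for an offset where byte 5 matches the HEC of bytes 1-4;
    a real receiver would confirm the match over several cells."""
    for off in range(len(stream) - 5):
        if stream[off + 4] == hec(stream[off : off + 4]):
            return off
    return -1

# Illustrative cell: 4 header bytes, the HEC, then a zeroed payload,
# preceded by 5 bytes of junk that the scanner must skip.
hdr = b"\x00\x12\x00\x34"
stream = b"\x00" * 5 + hdr + bytes([hec(hdr)]) + bytes(48)
print(find_cell_boundary(stream))   # 5
```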
ATM and LANs
• ATM grew out of the telephone community and was later used for computer communication
• Switched media offer a significant performance advantage and better scalability over shared media
• No distance limitation in ATM, making it a good choice for a high-performance LAN backbone
• Point-to-point, long-distance Gigabit Ethernet is a competing technology
ATM as a LAN Backbone
• Different from traditional LANs: no native support for broadcast or multicast
[Figure: ATM switches (one with an ATM-attached host) as a backbone interconnecting Ethernet switches E1-E3 and hosts H1-H7 over ATM and Ethernet links]
ATM in a LAN
• How to broadcast to all nodes on an ATM LAN?
– Without knowing all their addresses
– Without setting up a VC to each of them
ATM in a LAN
• Two solutions
– Redesign the protocols, accepting that the LAN is different from what ATM can provide (e.g. ATMARP)
– Make ATM behave like shared media, without losing the performance advantage of switched media (e.g. LANE)
• An ATM address is different from a unique 48-bit MAC address
Shared Ethernet Emulation with LANE
• All hosts think they are on the same Ethernet
[Figure: two sites, each with hosts behind an Ethernet switch and a LANE/Ethernet adaptor card attaching to an ATM switch; all hosts see one emulated Ethernet]
LAN Emulation (LANE) with ATM
• Transparent shared-media emulation over ATM
• Adds (does not change) functionality in ATM switches
• Each device needs a global MAC address, as well as an ATM address to establish a VC
LAN Emulation (LANE) with ATM
• Devices connect as LAN Emulation Clients (LECs)
• LANE provides an Ethernet-like interface to LECs
• Similar solutions for other networks: VPNs on WANs, VLANs on large switched Ethernets
ATM / LANE Protocol Layers
[Figure: protocol stacks — each host runs higher-layer protocols (IP, ARP, ...) over Signalling + LANE, AAL5, ATM and PHY, with an Ethernet-like interface presented above LANE; the switch in between runs only ATM over PHY]
Clients and Servers in LANE
• LAN Emulation Client (LEC)
– Host, bridge, router or switch
• LAN Emulation Server (LES)
– Maintains clients’ MAC and ATM addresses
– Maintains the ATM address of the BUS
Clients and Servers in LANE
• LAN Emulation Configuration Server (LECS)
– High-level network management when a LEC starts up
– Reachable by a preset VC (recall well-known server port numbers)
– Maintains the mapping of ATM address to LANE type
Clients and Servers in LANE
• Broadcast and Unknown Server (BUS)
– Emulates broadcast and multicast; critical to LANE
– Uses a point-to-multipoint VC with all clients
• The servers may be physically located in one or more devices
[Figure: hosts H1 and H2 on an ATM network with the LECS, LES and BUS; point-to-point VCs reach the servers, and the BUS holds a point-to-multipoint VC to the clients]
LANE Registration
1. Client contacts the LECS on a predefined VC, and sends its ATM address to it
2. LECS returns the LAN type, MTU and the ATM address of the LES
3. Client signals a connection to the LES, and registers its MAC and ATM addresses with it
4. LES returns the ATM address of the BUS
5. Client signals a connection to the BUS
6. BUS adds the client to its point-to-multipoint VC
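The six steps above can be sketched as a toy Python model. The class and method names, and the LAN type/MTU values, are illustrative — this is not the real LANE message format:

```python
class Lecs:
    """Configuration server: tells a starting client about the LAN."""
    def __init__(self, les_atm_addr):
        self.les_atm_addr = les_atm_addr
    def configure(self, client_atm_addr):
        # Steps 1-2: reached over the predefined VC; returns LAN type,
        # MTU and the ATM address of the LES (values illustrative)
        return {"lan_type": "ethernet", "mtu": 1516, "les": self.les_atm_addr}

class Les:
    """Emulation server: records each client's MAC/ATM address pair."""
    def __init__(self, bus_atm_addr):
        self.bus_atm_addr = bus_atm_addr
        self.registry = {}                       # MAC -> ATM address
    def register(self, mac, atm_addr):
        # Steps 3-4: register the client, return the BUS's ATM address
        self.registry[mac] = atm_addr
        return self.bus_atm_addr

class Bus:
    """Broadcast and Unknown Server: one point-to-multipoint VC."""
    def __init__(self):
        self.leaves = []
    def add_leaf(self, atm_addr):
        self.leaves.append(atm_addr)             # Step 6

def join_elan(mac, atm_addr, lecs, les, bus):
    cfg = lecs.configure(atm_addr)               # steps 1-2
    les.register(mac, atm_addr)                  # steps 3-4 (signalled VC)
    bus.add_leaf(atm_addr)                       # steps 5-6
    return cfg

lecs, les, bus = Lecs("atm:les-1"), Les("atm:bus-1"), Bus()
cfg = join_elan("02:00:00:00:00:01", "atm:h1", lecs, les, bus)
print(cfg["les"], bus.leaves)   # atm:les-1 ['atm:h1']
```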
[Figure: hosts H1-H3, LECS, LES and BUS on the ATM network during registration]
LANE Circuit Setup
1. Client (H1) knows the destination MAC address of the receiver (H2)
2. Client (H1) sends the first packet to the BUS
3. BUS sends an address resolution request to the LES
4. LES returns H2’s ATM address to the client (H1)
5. Client (H1) signals a connection to H2 for subsequent packets
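A minimal sketch of this data path, with the resolution collapsed into a dictionary lookup. The names and the exact division of labor between client, BUS and LES are simplified for illustration:

```python
def send_frame(dst_mac, frame, vc_table, les_registry, flooded):
    """vc_table: MAC -> direct VC already set up;
    les_registry: the LES's MAC -> ATM address table;
    flooded: a list standing in for the BUS's multipoint VC."""
    if dst_mac in vc_table:                   # steady state: step 5 done
        return ("direct", vc_table[dst_mac])
    flooded.append(frame)                     # step 2: first packet via BUS
    atm_addr = les_registry.get(dst_mac)      # steps 3-4: resolution answer
    if atm_addr is None:
        return ("unknown", None)              # keep using the BUS
    vc_table[dst_mac] = atm_addr              # step 5: set up the direct VC
    return ("resolving", atm_addr)

vc, reg, flooded = {}, {"mac-h2": "atm-h2"}, []
print(send_frame("mac-h2", "pkt1", vc, reg, flooded))  # via BUS, resolves
print(send_frame("mac-h2", "pkt2", vc, reg, flooded))  # now on the direct VC
```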
[Figure: H1 reaching H2 first via the BUS, then over a direct VC, with the LES resolving the address]
Switches: The Intersections
The Intersections
Design the intersection to accommodate the traffic flows
[Figure: road-map analogy — intersections such as Faizabad (with its flyover) and Zero Point connecting Rawalpindi (Saddar, Pir Wadhai, the Airport, Ayub Park) with Islamabad (Rawal Dam)]
Contention in Switches
• Some packets destined for the same output
– One goes first
– Others are delayed or dropped
• Delaying packets requires buffering
– Finite capacity; some packets must still be dropped
– At inputs
• Increases/adds false contention
• Sometimes necessary
– At outputs
– Can also exert “backpressure”
Output Buffering
[Figure: airport analogy for a 1x6 switch — separate standard check-in lines per agent: you trying to check in, Mr. X writing a complaint letter, Mr. A waiting at customer service to claim a refund of Rs. 100]
Input Buffering: Head-of-line Blocking
[Figure: the same airport analogy for a 1x6 switch, but with one FIFO line — you are stuck behind Mr. X writing a complaint letter and Mr. A claiming his Rs. 100 refund, even though other agents are standing by!]
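Head-of-line blocking can be demonstrated with a toy one-cycle model (names and queue contents are illustrative). Each output accepts one packet per cycle; with FIFO input queues only the heads compete, while the non-FIFO “peeking” discipline mentioned later under switch performance lets a blocked input send a packet from deeper in its queue:

```python
from collections import deque

def fifo_cycle(queues):
    """One switch cycle with FIFO inputs: only queue heads may move."""
    claimed, delivered = set(), []
    for q in queues:
        if q and q[0] not in claimed:      # is the head's output free?
            claimed.add(q[0])
            delivered.append(q.popleft())
    return delivered

def peeking_cycle(queues):
    """Same cycle, but each input may send any queued packet."""
    claimed, delivered = set(), []
    for q in queues:
        for pkt in list(q):
            if pkt not in claimed:
                claimed.add(pkt)
                q.remove(pkt)
                delivered.append(pkt)
                break
    return delivered

# Input 1 holds packets for outputs [A, B]; input 2 holds [A].
# FIFO: input 2's A wins, so input 1's B waits behind a loser
# even though output B is idle -- head-of-line blocking.
q1, q2 = deque(["A", "B"]), deque(["A"])
print(fifo_cycle([q2, q1]))      # ['A']
q1, q2 = deque(["A", "B"]), deque(["A"])
print(peeking_cycle([q2, q1]))   # ['A', 'B']
```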
Backpressure
[Figure: a 1x6 switch, with the congested stage signalling “no more, please” back toward the input]
Backpressure
• Propagation delay requires that switch 2 exert backpressure at a high-water mark rather than when its buffer is completely full
• It is thus typically only used in networks with small propagation delays (e.g., switch fabrics)
[Figure: switch 1 feeding switch 2, which signals “no more, please” upstream]
Switching Hardware
• Multi-input, multi-output device, getting packets from inputs to outputs as fast as possible
• Performance of a switch is limited by I/O bus bandwidth (each packet traverses the bus twice)
– A 1 Gbps I/O bus can support ten T3 (45 Mbps) links or three STS-3 (155 Mbps) links, but not even one STS-12 (622 Mbps) link
• Success or failure of a new protocol depends on whether it takes advantage of a switch’s capabilities
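The bus-bandwidth claim can be checked with a few lines of arithmetic (a toy calculation, not a switch model):

```python
# On a shared I/O bus every packet crosses the bus twice (in and out),
# so a 1 Gbps bus sustains at most 500 Mbps of aggregate link traffic.

BUS_MBPS = 1000
effective_mbps = BUS_MBPS / 2        # each packet crosses twice

print(10 * 45 <= effective_mbps)     # ten T3 links: 450 Mbps -> True
print(3 * 155 <= effective_mbps)     # three STS-3: 465 Mbps -> True
print(1 * 622 <= effective_mbps)     # one STS-12: 622 Mbps -> False
```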
Switching Fabric
• Special-purpose (switching) hardware
• General problem
– Connect N inputs to M outputs (an NxM switch)
– Often N = M (bidirectional links)
• Design goals
– High throughput: want aggregate close to MIN(sum of inputs, sum of outputs)
– Avoid contention (fabric faster than ports)
– Good scalability: linear size/cost growth in N/M
Switching Fabric and Ports
[Figure: four input ports and four output ports around a switch fabric; contention is to be avoided inside the fabric]
Switch: Fabric and Ports
• The fabric’s job is to deliver packets to the right output
[Figure: input and output ports around a switch fabric with small internal buffering]
Ports and Fabric
• Ports deal with the complexity of the real world
– Virtual circuit management is handled in the ports
– They determine the output port using forwarding tables
• The input port is the first of the performance bottlenecks
– Header processing and handing the packet to the fabric
Ports and Fabric
• Buffering is required at the ports
– Buffer management has a profound impact on performance
– Internal (in-fabric) or output buffering is normally used
• Fabric: simply moves packets from inputs to outputs
Design Goals - Throughput
• An n x m switch can provide a maximum ideal throughput of S = S1 + S2 + … + Sn
– Only possible if traffic at the inputs is evenly distributed across all outputs
– Sustained throughput higher than the link speed of an output is not possible
Design Goals - Throughput
• Variable-size packets affect performance
– Some operations have a constant overhead per packet
– The switch performs differently for different packet sizes
– The packets-per-second (pps) rate is also important
• Most switches are subject to internal contention
– Determine performance under different traffic loads
Design Goals - Throughput
• Traffic models are important to throughput
– Arrival time, output port, packet length
– Extremely difficult to achieve accurate models
– Traffic modeling was very successful in telephony
• Designers now quote a range of throughputs
– To handle a steady stream of 64-byte packets, a 40 Gbps switch needs a rate of 78 Mpps!
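The 78 Mpps figure follows directly from the arithmetic:

```python
# Back-to-back 64-byte packets on a 40 Gbps link: how many packets
# per second must the switch forward?

link_bps = 40e9
pkt_bits = 64 * 8                # a 64-byte packet is 512 bits
pps = link_bps / pkt_bits
print(round(pps / 1e6, 3))       # 78.125 (million packets per second)
```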
Design Goals - Scalability
• Cost of hardware rises fast with an increasing number of ports n
– Adding ports increases hardware and design complexity
– Scalability in terms of the rate of increase in cost
• Design complexity determines the maximum switch size
– Switch designs run into problems at some maximum number of inputs and outputs
Switch Performance
• Avoid contention with buffering
– Use output buffering when possible
– Apply backpressure through the fabric
– Use input buffering with “peeking” (non-FIFO semantics) to reduce head-of-line blocking
– Drop packets if an input buffer overflows
• Good scalability
– O(N) ports
– Port design complexity O(N) gives O(N²) for the switch
– Port design complexity O(1) gives O(N) for the switch
Crossbar (“Perfect”) Switch
• Problem: hardware scales as O(N²)
Knockout Switch: Pick L from N
• Problem: what if more than L packets arrive?
[Figure: knockout concentrator — inputs 1-4 feed a tree of 2x2 random selectors and delay units (D) forming an 8-to-4 concentrator toward the outputs]
Shared Memory Switch
[Figure: inputs multiplexed into a shared buffer memory under write control, then demultiplexed to the outputs under read control]
Self-Routing Fabrics
• Use source routing on the “network” within the switch
• The input port attaches the output port number as a header
• The fabric routes the packet based on the output port
• Types
– Banyan network
– Batcher-Banyan network
– Sunshine switch
Banyan Network
• Each 2x2 element sends a 0 bit up and a 1 bit down, MSB first
• No contention if the inputs are sorted and unique
Banyan Network
• Sends a 0 bit up, a 1 bit down (MSB first)
[Figure: sorted inputs 001, 011, 110, 111 self-routing through a three-stage banyan to outputs 001, 011, 110, 111 without contention]
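The self-routing rule and the sorted-inputs property can be checked with a toy simulation. This models the fabric as an omega network, one member of the banyan family; the shuffle-and-exchange position update below is specific to that topology, and the function name is made up:

```python
# Self-routing through a banyan-class (omega) network: at stage k each
# 2x2 element sends the packet up for a 0 bit and down for a 1 bit,
# MSB first. Two packets collide if they claim the same element output
# at the same stage.

def route(packets, n):
    """packets: list of (input_port, output_port) pairs, n a power of 2.
    Returns True if the whole set routes without contention."""
    stages = n.bit_length() - 1                 # log2(n) stages
    pos = {src: src for src, _ in packets}      # current port per packet
    for k in range(stages):
        seen = set()
        for src, dst in packets:
            bit = (dst >> (stages - 1 - k)) & 1
            # perfect shuffle, then exchange on the destination bit
            pos[src] = ((pos[src] << 1) | bit) & (n - 1)
            if pos[src] in seen:
                return False        # two packets want one element output
            seen.add(pos[src])
    return True

# Sorted, compact, unique destinations route cleanly...
print(route([(0, 1), (1, 3), (2, 6), (3, 7)], 8))   # True
# ...but an unsorted placement can collide in the first stage.
print(route([(0, 0), (2, 1)], 4))                    # False
```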
Batcher (Merge Sort) Network
• Routing packets through a Batcher network
• Batcher-Banyan network
– Attach the two back-to-back
– Arbitrary unique permutations are routed without contention
[Figure: Batcher example — a column of 2x2 sorting elements followed by two merge stages (Sort, Merge, Merge) turning inputs such as 7,3 and 3,6 into a fully sorted output sequence]
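Batcher’s network can be sketched as a fixed, data-independent list of compare-exchange elements, which is what makes it map directly to hardware. This uses the classic odd-even merge-sort construction; the function names and input values are illustrative:

```python
# Batcher odd-even merge sort: generate the comparator positions for a
# power-of-two-sized input, then run values through them.

def oddeven_merge(lo, hi, r):
    """Comparators merging two sorted runs interleaved in lo..hi."""
    step = r * 2
    if step < hi - lo:
        yield from oddeven_merge(lo, hi, step)
        yield from oddeven_merge(lo + r, hi, step)
        yield from ((i, i + r) for i in range(lo + r, hi - r, step))
    else:
        yield (lo, lo + r)

def oddeven_merge_sort(lo, hi):
    """Comparator list sorting positions lo..hi (power-of-two span)."""
    if hi - lo >= 1:
        mid = lo + (hi - lo) // 2
        yield from oddeven_merge_sort(lo, mid)
        yield from oddeven_merge_sort(mid + 1, hi)
        yield from oddeven_merge(lo, hi, 1)

def run_network(values):
    vals = list(values)
    for i, j in oddeven_merge_sort(0, len(vals) - 1):
        if vals[i] > vals[j]:                 # one 2x2 sorting element
            vals[i], vals[j] = vals[j], vals[i]
    return vals

print(run_network([7, 3, 6, 6, 3, 1, 6, 1]))  # [1, 1, 3, 3, 6, 6, 6, 7]
```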
Batcher-Banyan Network
[Figure: a Batcher sorting network, whose stages contain both kinds of 2x2 elements (1 bit up / 0 bit down and 0 bit up / 1 bit down), feeding a banyan network]
Sunshine Switch
• Like a Knockout switch, except
• It recirculates overflow packets, i.e., when more than L arrive in one cycle
[Figure: n inputs enter a Batcher sorter, then a trap (which marks overflow packets) and a selector feeding l banyans toward the n outputs; k recirculation paths return overflow packets through delay units to the Batcher]