CSIT560: Introduction to High-Performance Internet Switches and Routers


1CSIT560

Introduction to High-Performance Internet Switches and Routers

2CSIT560

Network Architecture

[Figure: campus/residential hosts connect through access switches and access routers (GbE, 10GbE) to metropolitan edge switches and edge routers, which feed core routers on the DWDM long-haul network over 10GbE links. Source: http://www.ust.hk/itsc/network/]

3CSIT560

[Figure: points of presence (POPs) interconnected across the network.]

4CSIT560

How the Internet really is: Current Trend

[Figure: access via modems and DSL; transport over SONET/SDH and DWDM.]

5CSIT560

What is Routing?

[Figure: hosts A-F attached to a network of routers R1-R5.]

6CSIT560

Points of Presence (POPs)

[Figure: hosts A-F attached to points of presence POP1-POP8.]

7CSIT560

Where High Performance Routers are Used

[Figure: backbone of routers R1-R16 interconnected by 10 Gb/s links.]

8CSIT560

Hierarchical arrangement: end hosts (1000s per mux) connect to an access multiplexer, which feeds edge routers and then core routers at a Point of Presence (POP).

POP: Point of Presence. POPs are richly interconnected by a mesh of 10 Gb/s "OC192" long-haul links. Typically: 40 POPs per national network operator; 10-40 core routers per POP.

9CSIT560

Typical POP Configuration

[Figure: backbone routers and aggregation switches/routers (edge switches) inside a POP, connected by 10G router-to-router intra-office links and by 10G WAN transport links through DWDM/SONET terminals to the transport network.]

> 50% of high-speed interfaces are router-to-router (core router) links.

10CSIT560

Today's Network Equipment

Layer 3   Internet Protocol   Routers
Layer 2   FR & ATM            Switches
Layer 1   SONET               SONET
Layer 0   DWDM                DWDM

11CSIT560

Functions in a packet switch

[Diagram: ingress linecard (framing, route lookup, TTL processing, buffering), interconnect (interconnect scheduling), egress linecard (buffering, QoS scheduling, framing), with a control plane above; the data path, control path and scheduling path are shown separately.]

12CSIT560

Functions in a circuit switch

[Diagram: ingress linecard (framing), interconnect (interconnect scheduling), egress linecard (framing), with a control plane; only a data path and a control path, no per-packet buffering or scheduling path.]

13CSIT560

Our emphasis for now is to look at packet switches (IP, ATM, Ethernet, Frame Relay, etc.).

14CSIT560

What a Router Looks Like

Cisco CRS-1 (16-slot single-shelf system): 214 cm x 60 cm x 101 cm, full rack. Capacity: 640 Gb/s. Power: 13.2 kW.

Juniper T1600 (16-slot system): 95 cm x 79 cm x 44 cm, half a rack. Capacity: 1.6 Tb/s. Power: 9.1 kW.

15CSIT560

What a Router Looks Like

Cisco GSR 12416: 6 ft x 19 in x 2 ft. Capacity: 160 Gb/s. Power: 4.2 kW.

Juniper M160: 3 ft x 19 in x 2.5 ft. Capacity: 80 Gb/s. Power: 2.6 kW.

16CSIT560

A Router Chassis

[Photo: router chassis showing linecards and fans/power supplies.]

17CSIT560

Backplane

• A Circuit Board with connectors for line cards

• High speed electrical traces connecting line cards to fabric

• Usually passive

• Typically 30-layer boards

18CSIT560

Line Card Picture

19CSIT560

What do these two have in common?

Cisco CRS-1

Cisco Catalyst 3750G

20CSIT560

What do these two have in common?

CRS-1 linecard

• 20” x (18”+11”) x 1RU

• 40Gbps, 80MPPS

• State-of-the-art 0.13u silicon

• Full IP routing stack including IPv4 and IPv6 support

• Distributed IOS

• Multi-chassis support

Cat 3750G Switch

• 19” x 16” x 1RU

• 52Gbps, 78 MPPS

• State-of-the-art 0.13u silicon

• Full IP routing stack including IPv4 and IPv6 support

• Distributed IOS

• Multi-chassis support

21CSIT560

What is different between them?

Cisco CRS-1

Cisco Catalyst 3750G

22CSIT560

A lot…

CRS-1 linecard

• Up to 1024 linecards

• Fully programmable forwarding

• 2M prefix entries and 512K ACLs

• 46Tbps 3-stage switching fabric

• MPLS support

• H-A non-stop routing protocols

Cat 3750G Switch

• Up to 9 stack members

• Hardwired ASIC forwarding

• 11K prefix entries and 1.5K ACLs

• 32Gbps shared stack ring

• L2 switching support

• Re-startable routing applications

23CSIT560

Other packet switches

Cisco 7500 “edge” routers

Lucent GX550 Core ATM switch

DSL router

24CSIT560

What is Routing?

[Figure: hosts A-F attached to routers R1-R5; a forwarding table maps destinations to next hops.]

Destination   Next Hop
D             R3
E             R3
F             R5

25CSIT560

What is Routing?

[Figure: same topology and forwarding table (D via R3, E via R3, F via R5); the destination the router looks up is carried in the packet's IP header.]

IPv4 header (20 bytes, plus options if any):
Ver | HLen | T.Service | Total Packet Length
Fragment ID | Flags | Fragment Offset
TTL | Protocol | Header Checksum
Source Address
Destination Address
Options (if any)
Data

26CSIT560

What is Routing?

[Figure: hosts A-F attached to routers R1-R5.]

27CSIT560

Basic Architectural Elements of a Router

Control plane ("typically in software"): routing (routing table updates via OSPF, RIP, IS-IS), admission control, congestion control, reservation.

Switch, i.e., per-packet processing ("typically in hardware"): routing lookup, packet classification, switching, arbitration, scheduling.

28CSIT560

Basic Architectural Components
Datapath (per-packet processing):
1. Forwarding decision (forwarding table lookup at each input)
2. Interconnect
3. Output scheduling

29CSIT560

Per-packet processing in a Switch/Router

1. Accept packet arriving on an ingress line.

2. Lookup packet destination address in the forwarding table, to identify outgoing interface(s).

3. Manipulate packet header: e.g., decrement TTL, update header checksum.

4. Send packet to outgoing interface(s).

5. Queue until line is free.

6. Transmit packet onto outgoing line.
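The six steps map naturally onto a small forwarding loop. The sketch below is only illustrative: the FIB contents, field names and queue layout are invented, and longest-prefix match is reduced to a scan over a tiny dictionary.

```python
import ipaddress
from collections import deque

# Hypothetical forwarding table: prefix -> egress interface
FIB = {
    ipaddress.ip_network("10.0.0.0/8"): "eth1",
    ipaddress.ip_network("10.1.0.0/16"): "eth2",
    ipaddress.ip_network("0.0.0.0/0"): "eth0",   # default route
}

egress_queues = {"eth0": deque(), "eth1": deque(), "eth2": deque()}

def lookup(dst):
    """Longest-prefix match over the toy FIB (step 2)."""
    best = max((p for p in FIB if dst in p), key=lambda p: p.prefixlen)
    return FIB[best]

def forward(pkt):
    dst = ipaddress.ip_address(pkt["dst"])      # step 1: packet accepted on ingress
    out_if = lookup(dst)                        # step 2: forwarding table lookup
    pkt["ttl"] -= 1                             # step 3: header manipulation
    if pkt["ttl"] <= 0:
        return                                  # drop (a real router would send ICMP)
    egress_queues[out_if].append(pkt)           # steps 4-5: queue on the egress interface
    # step 6: the line driver later pops the queue and transmits onto the outgoing line

forward({"dst": "10.1.2.3", "ttl": 64})
print({k: len(q) for k, q in egress_queues.items()})
```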

30CSIT560

ATM Switch

• Lookup cell VCI/VPI in VC table.
• Replace old VCI/VPI with new.
• Forward cell to outgoing interface.
• Transmit cell onto link.

31CSIT560

Ethernet Switch

• Lookup frame DA in forwarding table.
– If known, forward to correct port.
– If unknown, broadcast to all ports.
• Learn SA of incoming frame.
• Forward frame to outgoing interface.
• Transmit frame onto link.
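A minimal sketch of this learn-and-forward behaviour; the frame representation and port numbering are made up for illustration.

```python
class LearningSwitch:
    def __init__(self, num_ports):
        self.ports = range(num_ports)
        self.mac_table = {}                       # MAC address -> port

    def receive(self, frame, in_port):
        self.mac_table[frame["sa"]] = in_port     # learn SA of incoming frame
        out = self.mac_table.get(frame["da"])     # lookup DA in forwarding table
        if out is None:                           # unknown: broadcast to all other ports
            return [p for p in self.ports if p != in_port]
        return [out]                              # known: forward to correct port

sw = LearningSwitch(4)
print(sw.receive({"sa": "aa:aa", "da": "bb:bb"}, in_port=0))  # flood: [1, 2, 3]
print(sw.receive({"sa": "bb:bb", "da": "aa:aa"}, in_port=2))  # learned: [0]
```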

32CSIT560

IP Router

• Lookup packet DA in forwarding table.
– If known, forward to correct port.
– If unknown, drop packet.
• Decrement TTL, update header checksum.
• Forward packet to outgoing interface.
• Transmit packet onto link.
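The TTL decrement forces a header-checksum update. The sketch below simply recomputes the standard IPv4 ones'-complement checksum over a 20-byte header after decrementing the TTL; real forwarding paths usually apply the incremental update of RFC 1624 instead, but the resulting checksum is the same. The header bytes are illustrative values.

```python
import struct

def ipv4_checksum(header: bytes) -> int:
    """Ones'-complement sum of 16-bit words, treating the checksum field as zero."""
    words = struct.unpack("!10H", header)
    total = sum(words) - words[5]            # word 5 is the checksum field itself
    while total >> 16:                       # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return (~total) & 0xFFFF

def decrement_ttl(header: bytes) -> bytearray:
    header = bytearray(header)
    header[8] -= 1                                   # TTL is byte 8 of the IPv4 header
    struct.pack_into("!H", header, 10, 0)            # clear the old checksum
    struct.pack_into("!H", header, 10, ipv4_checksum(bytes(header)))
    return header

# A 20-byte header with TTL = 64 and checksum initially zero (illustrative values).
hdr = bytearray.fromhex("4500002800010000" "4006" "0000" "c0a80001" "c0a80002")
hdr = decrement_ttl(hdr)
print(hdr[8], hex(struct.unpack_from("!H", hdr, 10)[0]))   # 63 and the new checksum
```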

33CSIT560

Special per packet/flow processing

• The router can be equipped with additional capabilities to provide special services on a per-packet or per-class basis.
• The router can perform some additional processing on incoming packets:
– Classifying the packet (IPv4, IPv6, MPLS, ...)
– Delivering packets according to a pre-agreed service, absolute or relative (e.g., send a packet within a given deadline, or give one packet better service than another, as in IntServ / DiffServ)
– Filtering packets for security reasons
– Treating multicast packets differently from unicast packets

34CSIT560

Per packet Processing Must be Fast !!!

1. Packet processing must be simple and easy to implement
2. Memory access time is the bottleneck

200Mpps × 2 lookups/pkt = 400 Mlookups/sec → 2.5ns per lookup

Year   Aggregate line-rate   Arriving rate of 40B POS packets (Mpps)
1997   622 Mb/s              1.56
1999   2.5 Gb/s              6.25
2001   10 Gb/s               25
2003   40 Gb/s               100
2006   80 Gb/s               200
2008   …                     …
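The per-lookup budget quoted above is just arithmetic on the line rate and packet size. The small calculation below reproduces the table; the "~50 bytes on the wire per 40-byte packet" is my reading of the numbers (framing overhead), not something stated on the slide.

```python
def lookup_budget(line_rate_bps, wire_bytes_per_pkt=50, lookups_per_pkt=2):
    """Packet rate and per-lookup time budget for minimum-size packets back to back."""
    pkts_per_sec = line_rate_bps / (wire_bytes_per_pkt * 8)
    lookups_per_sec = pkts_per_sec * lookups_per_pkt
    return pkts_per_sec / 1e6, 1e9 / lookups_per_sec        # Mpps, ns per lookup

for year, rate in [(1997, 622e6), (1999, 2.5e9), (2001, 10e9), (2003, 40e9), (2006, 80e9)]:
    mpps, ns = lookup_budget(rate)
    print(f"{year}: {mpps:6.2f} Mpps, {ns:6.2f} ns per lookup")
# 2006: 200.00 Mpps and 2.50 ns per lookup, matching the 2.5 ns figure above.
```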

35CSIT560

First Generation Routers

[Figure: a CPU with route table and buffer memory; line interfaces (MAC) attached over a shared backplane (bus).]

Typically <0.5 Gb/s aggregate capacity.

36CSIT560

Bus-based Router Architectures with Single Processor

• The first generation of IP routers
• Based on software implementations on a single general-purpose CPU
• Limitations:
– Serious processing bottleneck in the central processor
– Memory-intensive operations (e.g., table lookups and data movement) limit the effectiveness of the processor power
– The shared input/output (I/O) bus is a severe limiting factor to overall router throughput

37CSIT560

Second Generation Routers

[Figure: CPU with central route table; each line card has a MAC, buffer memory and a forwarding cache, all on a shared bus.]

Typically <5 Gb/s aggregate capacity.

38CSIT560

Bus-based Router Architectures with Multiple Processors

• Architectures with route caching
– Distribute packet forwarding operations
– Network interface cards carry processors and route caches
– Packets are transmitted once over the shared bus
– Limitations:
» The central routing table is a bottleneck at high speeds
» Traffic-dependent throughput (cache)
» The shared bus is still a bottleneck

39CSIT560

Third Generation Routers

[Figure: each line card has a MAC, local buffer memory and a forwarding table; a CPU card holds the routing table; line cards and the CPU card connect through a switched backplane.]

Typically <50 Gb/s aggregate capacity.

40CSIT560

Switch-based Router Architectures with Fully Distributed Processors

• To avoid bottlenecks:

– Processing power

– Memory bandwidth

– Internal bus bandwidth

• Each network interface is equipped with appropriate processing power and buffer space.

41CSIT560

Fourth Generation Routers/Switches: Optics inside a router for the first time

[Figure: linecards connected to a separate switch core over optical links 100s of metres long.]

0.3 - 10 Tb/s routers in development.

42CSIT560

Examples: Alcatel 7670 RSP, Juniper TX8/T640, Chiaro, Avici TSR.

43CSIT560

Next Gen. Backbone Network Architecture: one backbone, multiple access networks

[Figure: a (G)MPLS-based multi-service intelligent packet backbone network; PE routers at service POPs connect the CE routers of dual-stack IPv4-IPv6 enterprise networks, DSL/FTTH/dial access networks, cable networks, ISPs offering native IPv6 services, an IPv6 IX, GGSN/SGSN mobile gateways, telecommuters and residential users.]

• One backbone network
• Maximizes speed, flexibility and manageability

44CSIT560

Current Generation: Generic Router Architecture

[Figure: header processing looks up the packet's IP address in an address table (~1M prefixes, off-chip DRAM) to get the next hop and updates the header; the packet is then queued in buffer memory (~1M packets, off-chip DRAM).]

45CSIT560

Current Generation: Generic Router Architecture (IQ)

[Figure: N input linecards, each with header processing (address table lookup, header update) and its own packet queues in buffer memory; a scheduler arbitrates transfers across the interconnect to the N outputs.]

46CSIT560

Current Generation: Generic Router Architecture (OQ)

[Figure: N input linecards with header processing; packets cross the interconnect immediately and are queued in buffer memory at each of the N output linecards.]

47CSIT560

Basic Architectural Elements of a Current Router

Typical IP router linecard: physical layer; framing & maintenance; packet processing; buffer management & scheduling, backed by lookup tables and buffer & state memory. Linecards connect over the backplane to a buffered or bufferless fabric (e.g., crossbar, bus) governed by a scheduler.

OC192c linecard: ~10-30M gates, ~2 Gbits of memory, ~2 square feet, >$10k cost (price ~$100K).

48CSIT560

Performance metrics
1. Capacity
– "maximize C, s.t. volume < 2 m³ and power < 5 kW"
2. Throughput
– Operators like to maximize usage of expensive long-haul links.
3. Controllable delay
– Some users would like predictable delay.
– This is feasible with output queueing plus weighted fair queueing (WFQ).

49CSIT560

Why do we Need Faster Routers?

1. To prevent routers from becoming the bottleneck in the Internet.

2. To increase POP capacity, and to reduce cost, size and power.

50CSIT560

Why we Need Faster Routers 1: To prevent routers from being the bottleneck

[Chart: normalized growth since 1980 (log scale, 1980-2005):
– DRAM random access time: 1.1x / 18 months
– Moore's Law: 2x / 18 months
– Router capacity: 2.2x / 18 months
– Line capacity: 2x / 7 months
– User traffic: 2x / 12 months]

51CSIT560

Why we Need Faster Routers 1: To prevent routers from being the bottleneck

[Chart: normalized growth, 2003-2012; traffic grows roughly 5-fold faster than router capacity.]

Disparity between traffic and router growth.

52CSIT560

Why we Need Faster Routers 2: To reduce cost, power & complexity of POPs

[Figure: POP built from many smaller routers vs. POP built from large routers.]

• Interfaces: price >$200k, power >400W
• About 50-60% of interfaces are used for interconnection within the POP
• Industry trend is towards a large, single router per POP
• Big POPs need big routers

53CSIT560

A Case Study: UUNET Internet Backbone Build-Up

1999 view (4Q):
• 8 OC-48 links between POPs (not parallel)

2002 view (4Q):
• 52 OC-48 links between POPs: many parallel links
• 3 OC-192 Super POP links: multiple parallel interfaces between POPs (D.C. - Chicago; NYC - D.C.)

To meet the traffic growth, higher-performance routers with higher port speeds are required.

54CSIT560

Why we Need Faster Routers 2: To reduce cost, power & complexity of POPs

[Figure: DSLAMs, CMTSs and L3/4 switches connect directly to the POP's routers.]

Further reduces CapEx and operational cost; further increases network stability.

55CSIT560

Ideal POP

[Figure: gigabit routers aggregate existing carrier equipment (SONET, ATM, VoIP gateways, cable modem aggregation, Gigabit Ethernet, digital subscriber line aggregation) onto carrier optical transport built from DWDM and optical switches.]

56CSIT560

Why are Fast Routers Difficult to Make?

1. Big disparity between line rates and memory access speed

[Chart: normalized growth rate since 1980 (log scale, 1980-2005), contrasting line rates with memory access speed.]

57CSIT560

Problem: Fast Packet Buffers

Example: 40 Gb/s packet buffer. Size = RTT × BW = 10 Gb; 64-byte packets.

[Figure: buffer manager in front of buffer memory; write rate R = 1 packet every 12.8 ns, read rate R = 1 packet every 12.8 ns.]

Use SRAM? + fast enough random access time, but - too low density to store 10 Gb of data.
Use DRAM? + high density means we can store the data, but - too slow (50 ns random access time).

58CSIT560

Memory Technology (2007)

Technology        Max single-chip density   $/chip ($/MByte)         Access speed   Watts/chip
Networking DRAM   64 MB                     $30-$50 ($0.50-$0.75)    40-80 ns       0.5-2 W
SRAM              8 MB                      $50-$60 ($5-$8)          3-4 ns         2-3 W
TCAM              2 MB                      $200-$250 ($100-$125)    4-8 ns         15-30 W

59CSIT560

How fast a buffer can be made?

[Figure: external line feeding buffer memory over a 64-byte-wide bus; ~5 ns access for SRAM, ~50 ns for DRAM.]

Rough estimate:
– 5/50 ns per memory operation (SRAM/DRAM)
– Two memory operations per packet
– Therefore, maximum ~50/5 Gb/s

Aside: buffers need to be large for TCP to work well, so DRAM is usually required.
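The 12.8 ns figure from the previous slide and the ~50/5 Gb/s ceiling above are both one-line calculations. A quick sketch, assuming 64-byte packets, a 64-byte-wide memory bus, and the ~250 ms RTT implied by "10 Gb at 40 Gb/s":

```python
def packet_buffer_numbers(line_rate_bps, rtt_s, pkt_bytes, bus_bytes, access_ns):
    buffer_bits = line_rate_bps * rtt_s                  # buffer size = RTT x BW
    slot_ns = pkt_bytes * 8 / line_rate_bps * 1e9        # one packet arrives every slot_ns
    # one write + one read per packet, each moving bus_bytes per memory access
    max_rate_gbps = bus_bytes * 8 / (2 * access_ns)      # bits per ns == Gb/s
    return buffer_bits / 1e9, slot_ns, max_rate_gbps

size_gb, slot, sram = packet_buffer_numbers(40e9, 0.25, 64, 64, 5)
_, _, dram = packet_buffer_numbers(40e9, 0.25, 64, 64, 50)
print(f"buffer = {size_gb:.0f} Gb, one 64B packet every {slot:.1f} ns")
print(f"max rate: {sram:.1f} Gb/s with 5 ns SRAM, {dram:.1f} Gb/s with 50 ns DRAM")
```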

60CSIT560

DRAM Buffer Memory with Packet Caches

[Figure: arriving packets are written into a small ingress SRAM cache of FIFO tails; the buffer manager moves them to DRAM buffer memory (and back) b >> 1 packets at a time; departing packets are read from a small SRAM cache of FIFO heads; Q FIFO queues are maintained across the SRAM caches and the DRAM.]
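A minimal sketch of the head/tail caching idea pictured above: per-queue tails collect in SRAM and are flushed to DRAM b packets at a time, and heads are refilled from DRAM in the same b-packet batches. Class and parameter names are invented, and a real design also needs lookahead machinery to guarantee the head cache never underflows.

```python
from collections import deque

class HybridPacketBuffer:
    """Per-queue FIFO split across a small SRAM cache and bulk DRAM."""
    def __init__(self, num_queues, b):
        self.b = b                                     # DRAM batch size (b >> 1)
        self.tail_sram = [deque() for _ in range(num_queues)]
        self.dram = [deque() for _ in range(num_queues)]
        self.head_sram = [deque() for _ in range(num_queues)]

    def write(self, q, pkt):
        self.tail_sram[q].append(pkt)
        if len(self.tail_sram[q]) >= self.b:           # one wide DRAM write of b packets
            for _ in range(self.b):
                self.dram[q].append(self.tail_sram[q].popleft())

    def read(self, q):
        if not self.head_sram[q]:
            if self.dram[q]:                           # one wide DRAM read of up to b packets
                for _ in range(min(self.b, len(self.dram[q]))):
                    self.head_sram[q].append(self.dram[q].popleft())
            elif self.tail_sram[q]:                    # queue still lives entirely in SRAM
                self.head_sram[q].append(self.tail_sram[q].popleft())
        return self.head_sram[q].popleft() if self.head_sram[q] else None

buf = HybridPacketBuffer(num_queues=2, b=4)
for i in range(10):
    buf.write(0, f"pkt{i}")
print([buf.read(0) for _ in range(10)])               # packets come back in FIFO order
```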

61CSIT560

Why are Fast Routers Difficult to Make?

Packet processing gets harder.

[Chart: instructions per arriving byte vs. time: what we'd like (more features: QoS, multicast, security, ...) vs. what will happen.]

62CSIT560

Why are Fast Routers Difficult to Make?

[Chart: clock cycles per minimum-length packet, 1996-2001.]

63CSIT560

Options for packet processing
• General-purpose processor: MIPS, PowerPC, Intel
• Network processor: Intel IXA and IXP processors, IBM Rainier; control-plane processors: SiByte (Broadcom), QED (PMC-Sierra)
• FPGA
• ASIC

64CSIT560

General Observations

• Up until about 2000:
– Low-end packet switches used general-purpose processors.
– Mid-range packet switches used FPGAs for the datapath and general-purpose processors for the control plane.
– High-end packet switches used ASICs for the datapath and general-purpose processors for the control plane.
• More recently:
– Third-party network processors are now used in many low- and mid-range datapaths.
– Home-grown network processors are used in the high end.

65CSIT560

Demand for Router Performance Exceeds Moore’s Law

Growth in capacity of commercial routers (per rack):
– 1992: ~2 Gb/s
– 1995: ~10 Gb/s
– 1998: ~40 Gb/s
– 2001: ~160 Gb/s
– 2003: ~640 Gb/s
– 2007: ~11.5 Tb/s

Average growth rate: 2.2x / 18 months.

Why are Fast Routers Difficult to Make?

66CSIT560

Maximizing the throughput of a router: the engine of the whole router

• Operators increasingly demand throughput guarantees:
– To maximize use of expensive long-haul links
– For predictability and planning
– To serve as many customers as possible
– To increase the lifetime of the equipment
• Despite lots of effort and theory, no commercial router today has a throughput guarantee.

67CSIT560

Maximizing the throughput of a router: the engine of the whole router

[Diagram: the same packet-switch block diagram as before: ingress linecard (framing, route lookup, TTL processing, buffering), interconnect (interconnect scheduling), egress linecard (buffering, QoS scheduling, framing), control plane; data, control and scheduling paths.]

68CSIT560

Maximizing the throughput of a router: the engine of the whole router

• This depends on the switching architecture:
– Input queued
– Output queued
– Shared memory
• It depends on the arbitration/scheduling algorithms within the specific architecture.
• This is key to the overall performance of the router.

69CSIT560

Why are Fast Routers Difficult to Make?

Power: It is exceeding the limit

[Chart: approximate router power (kW) vs. year, 1990-2002.]

70CSIT560

Switching Architectures

71CSIT560

Generic Router Architecture

[Figure: N inputs, each with header processing (address table lookup, header update); packets cross the interconnect into output buffer memory that must be written and read at N times the line rate.]

72CSIT560

Generic Router Architecture

[Figure: the same N-port architecture with per-port packet queues and a scheduler arbitrating access to the interconnect.]

73CSIT560

Interconnects: two basic techniques
• Input queueing: usually a non-blocking switch fabric (e.g., crossbar)
• Output queueing: usually a fast bus

74CSIT560

Simple model of output queued switch

[Figure: 4 links, each at rate R; every link's ingress can write directly into any egress queue, and each egress queue drains onto its link at rate R.]

75CSIT560

Output Queued (OQ) Switch

How an OQ Switch Works

76CSIT560

Characteristics of an output queued (OQ) switch

• Arriving packets are immediately written into the output queue, without intermediate buffering.

• The flow of packets to one output does not affect the flow to another output.

• An OQ switch has the highest throughput, and lowest delay.

• The rate of individual flows, and the delay of packets can be controlled (QoS).

77CSIT560

The shared memory switch

[Figure: N ingress links and N egress links, all at rate R, writing to and reading from a single physical memory device.]

78CSIT560

Characteristics of a shared memory switch

Assume a memory of size M bytes, and let Q_i(t) be the length of the queue for output i at time t.

Static queues: if Q_i(t) <= M/N for all i, then the switch operates the same as the basic output queued switch.

Dynamic queues: if queues can have any length, so long as sum_{i=1..N} Q_i(t) <= M, then the loss rate is lower.

79CSIT560

Memory bandwidth

Basic OQ switch:
• Consider an OQ switch with N different physical memories, and all links operating at rate R bits/s.
• In the worst case, packets may arrive continuously from all inputs, destined to just one output.
• Maximum memory bandwidth requirement for each memory is (N+1)R bits/s.

Shared memory switch:
• Maximum memory bandwidth requirement for the memory is 2NR bits/s.
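Plugging numbers into the two bounds above, for an illustrative 32-port switch with 10 Gb/s links (my choice of configuration, not one from the slides):

```python
N, R = 32, 10e9                  # ports and line rate in bits/s

oq_per_memory = (N + 1) * R      # OQ, one memory per output: N writes + 1 read per cell time
shared_memory = 2 * N * R        # one shared memory: N writes + N reads per cell time

print(f"OQ, per-output memory: {oq_per_memory / 1e9:.0f} Gb/s")   # 330 Gb/s
print(f"shared memory:         {shared_memory / 1e9:.0f} Gb/s")   # 640 Gb/s
```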

80CSIT560

How fast can we make a centralized shared memory switch?

[Figure: N ports sharing one memory over a 200-byte-wide bus of 5 ns SRAM.]

• 5 ns per memory operation
• Two memory operations per packet
• Therefore, up to 160 Gb/s (200 bytes × 8 bits / 10 ns)
• In practice, closer to 80 Gb/s

81CSIT560

Output Queueing: the "ideal"

[Figure: cells labelled with their output port arrive and are placed directly into the corresponding output queues, even when several arrive for the same output in one cell time.]

82CSIT560

How to Solve the Memory Bandwidth Problem?

Use input queued switches:
• In the worst case, one packet is written to and one packet is read from an input buffer.
• Maximum memory bandwidth requirement for each memory is 2R bits/s.
• However, using FIFO input queues can result in what is called "Head-of-Line (HoL)" blocking.

83CSIT560

Input Queueing: Head-of-Line Blocking

[Chart: delay vs. load for a FIFO input queued switch; delay blows up as the load approaches 58.6%, well short of 100%.]
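The 58.6% figure is the classic 2 - sqrt(2) saturation throughput of a FIFO input-queued switch under uniform traffic. A small Monte-Carlo sketch of the saturated case reproduces it approximately; for finite N the simulated value sits a little above 0.586.

```python
import random

def hol_saturation_throughput(N=32, slots=20000, seed=1):
    """Saturated FIFO input-queued switch: every input always has a head-of-line cell."""
    rng = random.Random(seed)
    hol = [rng.randrange(N) for _ in range(N)]   # destination of each input's HOL cell
    served = 0
    for _ in range(slots):
        contenders = {}
        for inp, out in enumerate(hol):
            contenders.setdefault(out, []).append(inp)
        for out, inputs in contenders.items():   # each output grants one contender at random
            winner = rng.choice(inputs)
            hol[winner] = rng.randrange(N)       # winner departs; a new HOL cell appears
            served += 1
    return served / (N * slots)

print(f"simulated throughput ~ {hol_saturation_throughput():.3f} (tends to 0.586 as N grows)")
```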

84CSIT560

Head of Line Blocking

85CSIT560

86CSIT560

87CSIT560

Virtual Output Queues (VoQ)

• Virtual output queues:
– At each input port there are N queues, each associated with an output port
– Only one packet can go from an input port at a time
– Only one packet can be received by an output port at a time
• It retains the scalability of FIFO input-queued switches
• It eliminates the HoL problem of FIFO input queues
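A sketch of the VOQ structure with a deliberately simple greedy matching per time slot (at most one packet per input and per output, as in the bullets above). Real schedulers such as iSLIP iterate a request/grant/accept process instead; that is not shown here, and the class and method names are invented.

```python
from collections import deque

class VOQSwitch:
    def __init__(self, N):
        self.N = N
        self.voq = [[deque() for _ in range(N)] for _ in range(N)]   # voq[input][output]

    def enqueue(self, inp, out, pkt):
        self.voq[inp][out].append(pkt)           # no HoL blocking: one queue per output

    def schedule_slot(self):
        """Greedy maximal matching: at most one packet per input and per output."""
        used_out, delivered = set(), []
        for inp in range(self.N):
            for out in range(self.N):
                if out not in used_out and self.voq[inp][out]:
                    delivered.append((inp, out, self.voq[inp][out].popleft()))
                    used_out.add(out)
                    break                        # this input is matched for the slot
        return delivered

sw = VOQSwitch(3)
sw.enqueue(0, 2, "a"); sw.enqueue(1, 2, "b"); sw.enqueue(1, 0, "c")
print(sw.schedule_slot())   # input 0 gets output 2; input 1 falls back to output 0
```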

88CSIT560

Input Queueing: Virtual Output Queues

89CSIT560

Input Queues with Virtual Output Queues

[Chart: delay vs. load; with VOQs and a good scheduler, the load can approach 100%.]

90CSIT560

Input Queueing (VoQ)

[Figure: VOQs at each input (memory b/w = 2R); a scheduler, which can be quite complex, matches inputs to outputs each cell time.]

91CSIT560

Combined IQ/SQ Architecture: can be a good compromise

[Figure: N input linecards send packets (data) through a routing fabric into N output queues held in one shared memory; flow control runs back from the shared memory to the inputs.]

92CSIT560

A Comparison: memory speeds for a 32x32 switch (cell size = 64 bytes)

             Shared memory                 Input-queued
Line rate    Memory BW   Access time/cell  Memory BW   Access time/cell
100 Mb/s     6.4 Gb/s    80 ns             200 Mb/s    2.56 µs
1 Gb/s       64 Gb/s     8 ns              2 Gb/s      256 ns
2.5 Gb/s     160 Gb/s    3.2 ns            5 Gb/s      102.4 ns
10 Gb/s      640 Gb/s    0.8 ns            20 Gb/s     25.6 ns
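The table follows directly from memory bandwidth = 2NR (shared memory) or 2R (input-queued) and access time = one cell time at that bandwidth; a few lines reproduce it:

```python
N, cell_bits = 32, 64 * 8

def row(line_rate_bps):
    shared_bw = 2 * N * line_rate_bps        # every cell is written once and read once
    iq_bw = 2 * line_rate_bps                # per-input memory
    return (shared_bw, cell_bits / shared_bw * 1e9,   # access time per cell, ns
            iq_bw, cell_bits / iq_bw * 1e9)

for r in (100e6, 1e9, 2.5e9, 10e9):
    sbw, st, ibw, it = row(r)
    print(f"{r/1e9:5.2f} Gb/s | shared {sbw/1e9:6.1f} Gb/s, {st:6.2f} ns"
          f" | IQ {ibw/1e9:5.2f} Gb/s, {it:8.1f} ns")
```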

93CSIT560

Scalability of Switching Fabrics

94CSIT560

Shared Bus
• It is the simplest interconnect possible
• Protocols are very well established
• Multicasting and broadcasting are natural
• It has a scalability problem: we cannot have multiple concurrent transmissions
• Its maximum bandwidth is around 100 Gbps, which limits the maximum number of I/O ports and/or the line rates
• It is typically used for "small" shared-memory switches or output-queued switches; a very good choice for Ethernet switches

95CSIT560

Crossbars
• Becoming the preferred interconnect for high-speed switches
• Very high throughput; support QoS and multicast
• N² crosspoints, but this is no longer the real limitation nowadays

[Figure: crossbar with data in on the rows, data out on the columns, and a configuration input setting the crosspoints.]

96CSIT560

Limiting factors of a crossbar switch:
– N² crosspoints per chip
– It's not obvious how to build a crossbar from multiple chips
– Capacity of the "I/O"s per chip
• State of the art: about 200 pins, each operating at 3.125 Gb/s, i.e. ~600 Gb/s per chip
• About 1/3 to 1/2 of this capacity is available in practice because of overhead and speedup
• Crossbar chips today are limited by the "I/O" capacity

97CSIT560

Limitations to Building Large Crossbar Switches: I/O pins

• Maximum practical bit rate per pin ~ 3.125 Gb/s
• At this speed you need between 2 and 4 pins per single bit
• To achieve a 10 Gb/s (OC-192) line rate, you need around 4 parallel data lines (4-bit parallel transmission)
• For example, consider a 4-bit-parallel 64-input crossbar designed to support OC-192 line rates per port. Each port interface would require 4 x 3 = 12 pins in each direction, so a 64-port crossbar would need 12 x 64 x 2 = 1536 pins just for the I/O data lines
• Hence, the real problem is I/O pin limitations

• How to solve the problem?

98CSIT560

Scaling: Trying to build a crossbar from multiple chips

[Figure: the building block is a 16x16 crossbar chip; when tiled into a larger crossbar, a block nominally serving 4 inputs and 4 outputs ends up requiring eight inputs and eight outputs.]

99CSIT560

How to build a scalable crossbar

1. Use bit slicing (parallel crossbars)
• For example, we can implement the previous example with 4 parallel 1-bit crossbars.
• Each port interface would then require 1 x 3 = 3 pins in each direction, so a 64-port crossbar would need 3 x 64 x 2 = 384 pins for the I/O data lines, which is reasonable (but we need 4 chips here); see the pin-count sketch below.
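The pin budgets on the last two slides are straightforward to recompute. The snippet below uses the same assumptions (3.125 Gb/s per lane, 3 pins per lane, counting data pins only); the function name and rounding behaviour are mine.

```python
def crossbar_pins(ports, line_rate_gbps, slices=1,
                  gbps_per_lane=3.125, pins_per_lane=3):
    """Data-pin count for one chip of a crossbar sliced across `slices` chips."""
    lanes = -(-line_rate_gbps // gbps_per_lane)     # ceil: lanes needed per port
    lanes_per_chip = -(-lanes // slices)            # each slice carries a share of the lanes
    return int(ports * 2 * lanes_per_chip * pins_per_lane)   # in + out directions

print(crossbar_pins(64, 10))             # 1536 pins for a single-chip 64-port OC-192 crossbar
print(crossbar_pins(64, 10, slices=4))   # 384 pins per chip with 4-way bit slicing
```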

100CSIT560

Scaling: Bit-slicing

[Figure: each cell from the linecard is striped across several identical crossbar planes; one scheduler configures them all.]

• Cell is "striped" across multiple identical planes.
• Crossbar-switched "bus".
• Scheduler makes the same decision for all slices.

101CSIT560

Scaling: Time-slicing

[Figure: each cell from the linecard goes over one crossbar plane in turn.]

• Cell goes over one plane; takes N cell times.
• Scheduler is unchanged.
• Scheduler makes a decision for each slice in turn.

102CSIT560

HKUST 10Gb/s 256x256 Crossbar Switch Fabric Design

• Our overall switch fabric is an OC-192 256x256 crossbar switch
• Such a system is composed of 8 256x256 crossbar chips, each running at 2 Gb/s (to compensate for the overhead and to provide a switch speedup)

[Figure: each 10 Gb/s input is deserialized (DES) onto the 8 crossbar planes; a serializer (SER) reassembles the 10 Gb/s output; the scheduler configures the planes over an 8-bit interface.]

• The deserializer (DES) converts the OC-192 10 Gb/s data on the fiber link into 8 low-speed signals, while the serializer (SER) serializes the low-speed signals back onto the fiber link

103CSIT560

Architecture of the Crossbar Chip

• Crossbar switch core: fulfills the switch functions
• Control: configures the crossbar core
• High-speed data links: communicate between this chip and the SER/DES
• PLL: provides a precise on-chip clock

[Figure: 1 GHz 256x256 crossbar switch core surrounded by high-speed data links, the controller and the PLL.]

104CSIT560

Technical Specification of our Core-Crossbar Chip

Full crossbar core:   256x256 (embedded with 2 bit-slices)
Technology:           TSMC 0.25 µm SCN5M Deep (lambda = 0.12 µm)
Layout size:          14 mm x 8 mm
Transistor count:     2000k
Supply voltage:       2.5 V
Clock frequency:      1 GHz
Power:                40 W

105CSIT560

Layout of a 256*256 crossbar switch core

106CSIT560

HKUST Crossbar Chip in the News

"Researchers offer alternative to typical crossbar design"
http://www.eetimes.com/story/OEG20020820S0054
By Ron Wilson, EE Times, August 21, 2002 (10:56 a.m. ET)

PALO ALTO, Calif. — In a technical paper presented at the Hot Chips conference here Monday (Aug. 19), researchers Ting Wu, Chi-Ying Tsui and Mounir Hamdi from Hong Kong University of Science and Technology (China) offered an alternative pipeline approach to crossbar design.

Their approach has yielded a 256-by-256 signal switch with a 2-GHz input bandwidth, simulated in a 0.25-micron, 5-metal process.

The growing importance of crossbar switch matrices, now used for on-chip interconnect as well as for switching fabric in routers, has led to increased study of the best ways to build these parts.

107CSIT560

Scaling a crossbar

• Conclusion: scaling the capacity is relatively straightforward (although the chip count and power may become a problem).

• In each scheme so far, the number of ports stays the same, but the speed of each port is increased.

• What if we want to increase the number of ports?

• Can we build a crossbar-equivalent from multiple stages of smaller crossbars?

• If so, what properties should it have?

108CSIT560

Multi-Stage Switches

109CSIT560

Basic Switch Element

[Figure: a 2x2 switch element with inputs 0, 1 and outputs 0, 1; optional buffering.]

• Two states: cross and through
• This is equivalent to a crosspoint in the crossbar (no longer a good argument)

110CSIT560

Example of Multistage Switch

• It needs N log N internal switches (crosspoints), fewer than the crossbar

[Figure: an 8x8 network of 2x2 elements; inputs 0-7 and outputs 000-111 are connected through log N stages wired by the perfect shuffle (interleave one half of the "deck" with the other half).]

111CSIT560

Packet Routing

The bits of the destination address provide the required routing tags. The digits in the destination address are used to set the state of the stages.

[Figure: self-routing example: packets destined to ports 011 and 101 are steered stage by stage through the shuffle-connected stages; in each stage the corresponding bit of the destination port address controls the switch setting.]
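Self-routing through the log2(N) stages can be written in a few lines: at stage k, the k-th destination bit picks the element's output. This is a behavioural sketch of an 8x8 omega (shuffle-exchange) network, assuming the usual "0 = upper output, 1 = lower output" convention; it is not a model of the exact drawing above.

```python
def rotl(x, n_bits):
    """Rotate an n_bits-wide value left by one: the perfect shuffle on line numbers."""
    return ((x << 1) | (x >> (n_bits - 1))) & ((1 << n_bits) - 1)

def self_route(src, dst, n_bits=3):
    """Trace a packet through an N = 2**n_bits omega network using destination bits."""
    line, path = src, [src]
    for k in range(n_bits):
        line = rotl(line, n_bits)                 # perfect-shuffle wiring before the stage
        bit = (dst >> (n_bits - 1 - k)) & 1       # k-th destination address bit (MSB first)
        line = (line & ~1) | bit                  # 2x2 element: 0 = upper, 1 = lower output
        path.append(line)
    return path

print(self_route(0b000, 0b011))   # ends on line 3 regardless of the input line
print(self_route(0b100, 0b011))
```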

112CSIT560

Internal Blocking

• Internal link blocking, as well as output blocking, can happen in a multistage switch. The following example illustrates internal blocking for the connections from input 0 to output 3 (011) and from input 4 to output 2 (010).

[Figure: both connections need the same internal link between stages, so one of them is blocked.]

113CSIT560

Output Blocking

The following example illustrates output blocking for the connections from input 1 to output 6 and from input 3 to output 6 (110).

[Figure: both packets request output 110 in the same slot, so only one can be delivered.]

114CSIT560

3-stage Clos Network

[Figure: N = n x m inputs; m first-stage switches of size n x k, k middle-stage switches of size m x m, and m third-stage switches of size k x n, with k >= n.]

115CSIT560

Clos-network Properties: Expansion factors

• Strictly nonblocking iff the number of middle-stage switches k >= 2n - 1
• Rearrangeably nonblocking iff k >= n
• The Clos network was the first nonblocking switch discovered with complexity less than O(N²); its complexity is O(N^(3/2)).

116CSIT560

3-stage Fabrics (basic building block: a crossbar): Clos Network

117CSIT560

3-Stage Fabrics: Clos Network

Expansion factor required = 2 - 1/N (but still blocking for multicast)

118CSIT560

4-Port Clos Network: Strictly Non-blocking

[Figure: two 2x3 first-stage switches, three 2x2 middle switches and two 3x2 third-stage switches (k = 2n - 1 = 3).]

119CSIT560

Construction example

• Switch size: 1024x1024
• Construction modules:
– Input stage: thirty-two 32x48 switches
– Central stage: forty-eight 48x48 switches
– Output stage: thirty-two 48x32 switches
– Expansion: 48/32 = 1.5

[Figure: inputs 1-1024 in groups of 32 feed input switches #1-#32; each connects to all 48 central switches #1-#48, which feed output switches #1-#32.]
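Applying the conditions from the "Clos-network Properties" slide to this construction, and counting crosspoints, takes only a few lines. With n = 32 and 48 middle-stage switches the example is rearrangeably nonblocking but, since 48 < 2n - 1 = 63, not strictly nonblocking; module sizes below are the ones given on the slide.

```python
def clos_check(n, k):
    """Nonblocking conditions for a 3-stage Clos: n inputs per first-stage switch,
    k middle-stage switches."""
    return {"rearrangeably nonblocking": k >= n,
            "strictly nonblocking": k >= 2 * n - 1}

# The 1024x1024 construction example, with module sizes as stated on the slide.
modules = [(32, 32 * 48),   # 32 input switches, 32x48 each
           (48, 48 * 48),   # 48 central switches, 48x48 each
           (32, 48 * 32)]   # 32 output switches, 48x32 each
crosspoints = sum(count * size for count, size in modules)

print(clos_check(n=32, k=48))
print(f"{crosspoints} crosspoints vs {1024 * 1024} for a single 1024x1024 crossbar")
```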

120CSIT560

Lucent Architecture

[Figure: multistage switch architecture with buffers.]

121CSIT560

MSM Architecture

122CSIT560

Cisco's 46Tbps Switch System

[Figure: line card chassis LCC(1)-LCC(72) hold line cards LC(1)-LC(1152) and first/third-stage switch chips S1/S3(1)-S1/S3(576), each 18x18; fabric card chassis FCC(1)-FCC(8) hold second-stage chips S2(1)-S2(144), each 72x72; 12.5G links between stages, 40G per line card.]

• Total 80 chassis
• 8 switch planes
• Speedup 2.5
• 1152 LICs
• 1296x1296 switch fabric
• 3-stage Benes switch fabric
• Multicast in the switch
• 1:N fabric redundancy
• 40 Gbps packet processor (188 RISCs)

123CSIT560

Massively Parallel Switches

• Instead of using tightly coupled fabrics like a crossbar or a bus, they use massively parallel interconnects such as hypercube, 2D torus, and 3D torus.

• Few companies use this design architecture for their core routers

• These fabrics are generally scalable

• However:

– It is very difficult to guarantee QoS and to include value-added functionalities (e.g., multicast, fair bandwidth allocation)

– They consume a lot of power

– They are relatively costly

124CSIT560

Massively Parallel Switches

125CSIT560

3D Switching Fabric: Avici

• Three components:
– Topology: 3D torus
– Routing: source routing with randomization
– Flow control: virtual channels and virtual networks
• Maximum configuration: 14 x 8 x 5 = 560
• Channel speed is 10 Gbps

126CSIT560

Packaging
• Uniformly short wires between adjacent nodes
– Can be built on passive backplanes
– Run at high speed

Figures are from Scalable Switching Fabrics for Internet Routers, by W. J. Dally (available at www.avici.com)

127CSIT560

Avici: Velociti™ Switch Fabric

• Toroidal direct-connect fabric (3D torus)
• Scales to 560 active modules
• Each element adds switching & forwarding capacity
• Each module connects to 6 other modules

128CSIT560

Switch fabric chips comparison

http://www.lightreading.com/document.asp?doc_id=47959