
    Copyright 1998 Cisco Systems, Inc. All Rights Reserved.


    WHITE PAPER

Designing High-Performance Campus Intranets with Multilayer Switching

Author: Geoff Haviland

    E-mail: [email protected]

    Synopsis

This paper briefly compares several approaches to designing campus intranets using multilayer switching. Then it describes the hierarchical approach called multilayer campus network design in greater detail. The multilayer design approach makes optimal use of multilayer switching to build a campus intranet that is scalable, fault tolerant, and manageable.

Whether implemented with an Ethernet backbone or an Asynchronous Transfer Mode (ATM) backbone, the multilayer model has many advantages. A multilayer campus intranet is highly deterministic, which makes it easy to troubleshoot as it scales. The multilayer design is modular, so bandwidth scales as building blocks are added. Intelligent Layer 3 services keep broadcasts off the backbone. Intelligent Layer 3 routing protocols such as Open Shortest Path First (OSPF) and Enhanced Interior Gateway Routing Protocol (Enhanced IGRP) handle load balancing and fast convergence across the backbone. The multilayer model makes migration easier, because it preserves existing addressing. Redundancy and fast convergence are provided by UplinkFast and Hot Standby Router Protocol (HSRP). Bandwidth scales from Fast Ethernet to Fast EtherChannel, and from Gigabit Ethernet to Gigabit EtherChannel. The model supports all common campus protocols.

The ideas expressed in this paper reflect experience with many large campus intranets. Detailed configuration examples are provided in the appendix to enable readers to implement the multilayer model with either a switched Ethernet backbone or an ATM LAN Emulation (LANE) backbone.

    Contents

    Campus Network Design Considerations

    Flat Bridged Networks

    Routing and Scalability

    Layer 2 Switching

    Layer 3 Switching

    Layer 4 Switching

    Virtual LANs and Emulated LANs

    Comparing Campus Network Design Models

    Hub and Router Model

Campus-Wide VLAN Model

    Multiprotocol over ATM

    The Multilayer Model

    The New 80/20 Rule

    Components of the Multilayer Model

    Redundancy and Load Balancing

    Scaling Bandwidth

    Policy in the Core

    Positioning Servers

    ATM/LANE Backbone

    IP Multicast

    Scaling Considerations

    Migration Strategies

    Security Considerations

    Bridging in the Multilayer Model

Benefits of the Multilayer Model

Appendix A: Implementing the Multilayer Model

    Ethernet Backbone

    Server Farm

    ATM LANE Backbone


Campus Network Design Considerations

Flat Bridged Networks

Originally, campus networks consisted of a single local-area network (LAN) to which new users were added. This LAN was a logical or physical cable into which the network devices tapped. In the case of Ethernet, the half-duplex 10 Mbps available was shared by all the devices. The LAN can be considered a collision domain, because all packets are visible to all devices on the LAN and are therefore free to collide, given the carrier sense multiaccess with collision detection (CSMA/CD) scheme used by Ethernet.

When the collision domain of the LAN became congested, a bridge was inserted. A LAN bridge is a store-and-forward packet switch. The bridge segments the LAN into several collision domains, and therefore increases the available network throughput per device. Bridges flood broadcasts, multicasts, and unknown unicasts to all segments. Therefore, all the bridged segments in the campus together form a single broadcast domain. The Spanning Tree Protocol (STP) was developed to prevent loops in the network and to route around failed elements.

    The following are characteristics of the STP

    broadcast domain:

    Redundant links are blocked and carry no data traffic.

    Suboptimal paths exist between different points.

STP convergence typically takes 40 to 50 seconds.

    Broadcast traffic within the Layer 2 domain interrupts

    every host.

    Broadcast storms within the Layer 2 domain affect the

    whole domain.

    Isolating problems can be time consuming.

    Network security within the Layer 2 domain is limited.

In theory, the amount of broadcast traffic sets a practical limit to the size of the broadcast domain. In practice, managing and troubleshooting a bridged campus becomes increasingly difficult as the number of users increases. One misconfigured or malfunctioning workstation can disable an entire broadcast domain for an extended period of time.

When designing a bridged campus, each bridged segment corresponds to a workgroup. The workgroup server is placed in the same segment as the clients, allowing most of the traffic to be contained. This design principle is referred to as the 80/20 rule and refers to the goal of keeping at least 80 percent of the traffic contained within the local segment.

    Routing and Scalability

A router is a packet switch that is used to create an internetwork or internet, thereby providing connectivity between broadcast domains. Routers forward packets based on network addresses rather than Media Access Control (MAC) addresses. Internets are more scalable than flat bridged networks, because routers summarize reachability by network number. Routers use protocols such as OSPF and Enhanced IGRP to exchange network reachability information.

Compared with STP, routing protocols have the following characteristics:

Load balancing across many equal-cost paths (in the Cisco implementation)

Optimal or lowest-cost paths between networks

Fast convergence when changes occur

Summarized (and therefore scalable) reachability information

    In addition to controlling broadcasts, Cisco routers provide

    a wide range of value-added features that improve the

    manageability and scalability of campus internets. These

    features are characteristics of the Cisco IOS software and

    are common to Cisco routers and multilayer switches. The

    IOS software has features specific to each protocol typically

    found in the campus, including the following:

    TCP/IP

    AppleTalk

DECnet

Novell IPX

    IBM Systems Network Architecture (SNA), data-link

    switching (DLSw), and Advanced Peer-to-Peer

    Networking (APPN)

When routers are used in a campus, the number of router hops from edge to edge is called the diameter. It is considered good practice to design for a consistent diameter within a campus. This is achieved with a hierarchical design model. Figure 1 shows a typical hierarchical model that combines routers and hubs. The diameter is always two router hops from an end station in one building to an end station in another building. The distance from an end station to a server on the backbone Fiber Distributed Data Interface (FDDI) is always one hop.


Layer 2 Switching

Layer 2 switching is hardware-based bridging. In particular, the frame forwarding is handled by specialized hardware, usually application-specific integrated circuits (ASICs). Layer 2 switches are replacing hubs at the wiring closet in campus network designs.

The performance advantage of a Layer 2 switch compared with a shared hub is dramatic. Consider a workgroup of 100 users in a subnet sharing a single half-duplex Ethernet segment. The average available throughput per user is 10 Mbps divided by 100, or just 100 kbps. Replace the hub with a full-duplex Ethernet switch, and the average available throughput per user is 10 Mbps times two, or 20 Mbps. The amount of network capacity available to the switched workgroup is 200 times greater than to the shared workgroup. The limiting factor now becomes the workgroup server, which is a 10-Mbps bottleneck. The high performance of Layer 2 switching has led to some network designs that increase the number of hosts per subnet. Increasing the hosts leads to a flatter design with fewer subnets or logical networks in the campus.

However, for all its advantages, Layer 2 switching has all the same characteristics and limitations as bridging. Broadcast domains built with Layer 2 switches still experience the same scaling and performance issues as the large bridged networks of the past. The broadcast radiation increases with the number of hosts, and broadcasts interrupt all the end stations. The STP limitations of slow convergence and blocked links still apply.

    Layer 3 Switching

Layer 3 switching is hardware-based routing. In particular, the packet forwarding is handled by specialized hardware, usually ASICs. Depending on the protocols, interfaces, and features supported, Layer 3 switches can be used in place of routers in a campus design. Layer 3 switches that support standards-based packet header rewrite and time-to-live (TTL) decrement are called packet-by-packet Layer 3 switches.

High-performance packet-by-packet Layer 3 switching is achieved in different ways. The Cisco 12000 Gigabit Switch Router (GSR) achieves wire-speed Layer 3 switching with a crossbar switch matrix. The Catalyst family of multilayer switches performs Layer 3 switching with ASICs developed for the Supervisor Engine. Regardless of the underlying technology, Cisco's packet-by-packet Layer 3 switching implementations are standards-compliant and operate as a fast router to external devices.

Figure 1 Traditional Router and Hub Campus

[Figure: hubs in the access layer of Buildings A, B, and C, each with a workgroup server; routers in the distribution layer; and an FDDI dual-ring backbone in the core with dual-homed enterprise servers.]


Cisco's Layer 3 switching implementation on the Catalyst family of switches combines the full multiprotocol routing support of the Cisco IOS software with hardware-based Layer 3 switching. The Route Switch Module (RSM) is an IOS-based router with the same Reduced Instruction Set Computing (RISC) processor as the RSP2 engine in the high-end Cisco 7500 router family. The hardware-based Layer 3 switching is achieved with ASICs on the NetFlow feature card. The NetFlow feature card is a daughter-card upgrade to the Supervisor Engine on a Catalyst 5000 family multilayer switch.

Layer 4 Switching

    Layer 4 switching refers to hardware-based routing

    that considers the application. In Transmission Control

    Protocol (TCP) or User Datagram Protocol (UDP) flows,

    the application is encoded as a port number in the packet

header. Cisco routers have the ability to control traffic based on Layer 4 information using extended access lists and

    to provide granular Layer 4 accounting of flows using

    NetFlow switching.
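As a simple illustration of both capabilities (the interface name, list number, and addresses below are placeholders, not taken from the appendix), an IOS router could permit only Web traffic toward a server subnet and account for the flows on the same interface:

    ! Extended access list matching on the Layer 4 port (TCP port 80, "www")
    access-list 101 permit tcp any 131.108.1.0 0.0.0.255 eq www
    access-list 101 deny ip any 131.108.1.0 0.0.0.255
    !
    interface Vlan100
     ip address 131.108.1.1 255.255.255.0
     ip access-group 101 out
     ! Enable NetFlow switching so per-flow Layer 4 accounting is collected
     ip route-cache flow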

Multilayer switching on the Catalyst family of switches can optionally be configured to operate as a Layer 3 switch or a Layer 4 switch. When operating as a Layer 3 switch, the NetFlow feature card caches flows based on destination IP address. When operating as a Layer 4 switch, the card caches flows based on source address, destination address, source port, and destination port. Because the NetFlow feature card performs Layer 3 or Layer 4 switching in hardware, there is no performance difference between the two modes of operation. Choose Layer 4 switching if your policy dictates granular control of traffic by application or if you require granular accounting of traffic by application.

    Virtual LANs and Emulated LANs

One of the technologies developed to enable Layer 2 switching across the campus is Virtual LANs (VLANs). A VLAN is a way to establish an extended logical network independent of the physical network layout. Each VLAN functions as a separate broadcast domain and has characteristics similar to an extended bridged network. STP normally operates between the switches in a VLAN.

Figure 2 Virtual LAN (VLAN) Technologies

[Figure: Catalyst switches and an ATM switch carrying the pink (131.108.2.0), purple (131.108.3.0), and green (131.108.4.0) workgroup VLANs; clients and workgroup servers in the pink and green VLANs; an ISL-attached enterprise server X; Catalyst 5000 switches B and C with LANE cards acting as LECs; and an ATM-attached server D with LECs for the pink, purple, and green ELANs.]


    Figure 2 shows three VLANs labeled pink, purple, and

    green. Each color corresponds to a workgroup, which is also

    a logical subnet:

    Pink = 131.108.2.0

    Purple = 131.108.3.0

    Green = 131.108.4.0

One of the technologies developed to enable campus-wide VLANs is VLAN trunking. A VLAN trunk between two Layer 2 switches allows traffic from several logical networks to be multiplexed. A VLAN trunk between a Layer 2 switch and a router allows the router to connect to several logical networks over a single physical interface. In Figure 2, a VLAN trunk allows server X to talk to all the VLANs simultaneously. The yellow lines in Figure 2 are Inter-Switch Link (ISL) trunks that carry the pink, purple, and green VLANs.

ISL, 802.10, and 802.1Q are VLAN tagging protocols that were developed to allow VLAN trunking. The VLAN tag is an integer incorporated into the header of frames passing between two devices. The tag value allows the data from multiple VLANs to be multiplexed and demultiplexed.
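For example, a router (or RSM) attaching to the pink, purple, and green VLANs over a single ISL trunk might be configured with one subinterface per VLAN along these lines; the interface numbering and VLAN numbers are only illustrative:

    interface FastEthernet1/0
     no ip address
    !
    ! One subinterface per VLAN; the ISL tag matches the VLAN number
    interface FastEthernet1/0.2
     encapsulation isl 2
     ip address 131.108.2.1 255.255.255.0
    !
    interface FastEthernet1/0.3
     encapsulation isl 3
     ip address 131.108.3.1 255.255.255.0
    !
    interface FastEthernet1/0.4
     encapsulation isl 4
     ip address 131.108.4.1 255.255.255.0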

ATM LANE permits multiple logical LANs to exist over a single switched ATM infrastructure. ATM Emulated LANs (ELANs) use a similar integer index, like ISL, 802.10, and 802.1Q, and are compatible with Ethernet VLANs from end to end. In Figure 2, LANE cards in Catalyst switches B and C act as LANE clients (LECs) that connect the Ethernet VLANs pink, purple, and green across the ATM backbone. The server D is ATM attached and has LECs for the pink, purple, and green ELANs. Thus server D can talk directly to hosts in the pink, purple, and green VLANs.

ATM LANE emulates the Ethernet broadcast protocol over connection-oriented ATM. Not shown in Figure 2 are the LANE Configuration Server (LECS), LANE Server (LES), and Broadcast and Unknown Server (BUS) that are required to make ATM work like Ethernet. The LECS and LES/BUS functions are supported by the Cisco IOS software and can reside in a Cisco LightStream 1010 switch, a Cisco Catalyst 5000 family switch with a LANE card, or a Cisco router with an ATM interface.

Ethernet-attached hosts and servers in one VLAN cannot talk to Ethernet-attached hosts and servers in a different VLAN. In Figure 2, client Z in the green VLAN cannot talk to server Y in the pink VLAN. That is because there is no router to connect pink to green.

Comparing Campus Network Design Models

The Hub and Router Model

Figure 1 shows a campus with the traditional router and hub design. The access-layer devices are hubs that act as Layer 1 repeaters. The distribution layer consists of routers. The core layer contains FDDI concentrators or other hubs that act as Layer 1 repeaters. Routers in the distribution layer provide broadcast control and segmentation. Each wiring closet hub corresponds to a logical network or subnet and homes to a router port. Alternatively, several hubs can be cascaded or bridged together to form one logical subnet or network.

The hub and router model is scalable because of the advantages of intelligent routing protocols such as OSPF and Enhanced IGRP. The distribution layer is the demarcation between networks in the access layer and networks in the core. Distribution-layer routers provide segmentation and terminate collision domains as well as broadcast domains. The model is consistent and deterministic, which simplifies troubleshooting and administration. This model also maps well to all the network protocols such as Novell IPX, AppleTalk, DECnet, and TCP/IP.

The hub and router model is straightforward to configure and maintain because of its modularity. Each router within the distribution layer is programmed with the same features. Common configuration elements can be cut and pasted across the layer. Because each router is programmed the same way, its behavior is predictable, which makes troubleshooting easier. Layer 3 packet switching load and middleware services are shared among all the routers in the distribution layer.

    The traditional hub and router campus model can be

    upgraded as performance demands increase. The shared

media in the access layer and core can be upgraded to Layer 2 switching, and the distribution layer can be upgraded to

    Layer 3 switching with multilayer switching. Upgrading

    shared Layer 1 media to switched Layer 2 media does not

    change the network addressing, the logical design, or the

    programming of the routers.


The Campus-Wide VLAN Model

Figure 3 shows a conventional campus-wide VLAN design. Layer 2 switching is used in the access, distribution, and core layers. Four workgroups represented by the colors blue, pink, purple, and green are distributed across several access-layer switches. Connectivity between workgroups is provided by Router X, which connects to all four VLANs. Layer 3 switching and services are concentrated at Router X. Enterprise servers are shown behind the router on different logical networks indicated by the black lines.

    The various VLAN connections to Router X could be

    replaced by an ISL trunk. In either case, Router X is typically

    referred to as a router on a stick or a one-armed router.

More routers can be used to distribute the load, and each

    router attaches to several or all VLANs. Traffic between

    workgroups must traverse the campus in the source VLAN

    to a port on the gateway router, then back out into the

    destination VLAN.

Figure 3 Traditional Campus-Wide VLAN Design

[Figure: four workgroup VLANs (Blue 131.108.1.0, Pink 131.108.2.0, Purple 131.108.3.0, Green 131.108.4.0) spanning access switches in Buildings A, B, and C; Router X in the core connects the VLANs, with enterprise servers, an ISL-attached enterprise server, and a green workgroup server shown.]


Figure 4 shows an updated version of the campus-wide VLAN model that takes advantage of multilayer switching. The switch marked X is a Catalyst 5000 family multilayer switch. The one-armed router is replaced by an RSM and the hardware-based Layer 3 switching of the NetFlow feature card. Enterprise servers in the server farm may be attached by Fast Ethernet at 100 Mbps, or by Fast EtherChannel to increase the bandwidth to 200 Mbps FDX or 400 Mbps FDX.

    The campus-wide VLAN model is highly dependent

    upon the 80/20 rule. If 80 percent of the traffic is within a

    workgroup, then 80 percent of the packets are switched at

    Layer 2 from client to server. However, if 90 percent of the

    traffic goes to the enterprise servers in the server farm, then

90 percent of the packets are switched by the one-armed

    router. The scalability and performance of the VLAN model

    are limited by the characteristics of STP. Each VLAN is

    equivalent to a flat bridged network.

Figure 4 Campus-Wide VLANs with Multilayer Switching

[Figure: the same four workgroup VLANs across Buildings A, B, and C, with Catalyst 5000 multilayer switch X in the core, an FEC/ISL server distribution switch, and Fast EtherChannel-attached enterprise servers.]


The campus-wide VLAN model provides the flexibility to have statically configured end stations move to a different floor or building within the campus. Cisco's VLAN Membership Policy Server (VMPS) and the VLAN Trunking Protocol (VTP) make this possible. A mobile user plugs a laptop PC into a LAN port in another building. The local Catalyst switch sends a query to the VMPS to determine the access policy and VLAN membership for the user. Then the Catalyst switch adds the user's port to the appropriate VLAN.

Multiprotocol over ATM

Multiprotocol over ATM (MPOA) adds Layer 3 cut-through switching to ATM LANE. The ATM infrastructure is the same as in ATM LANE. The LECS and the LES/BUS for each ELAN are configured the usual way. Figure 5 shows the elements of a small MPOA campus design.

    Figure 5 MPOA Campus Design

With MPOA, the new elements are the multiprotocol client (MPC) hardware and software on the access switches as well as the multiprotocol server (MPS), which is implemented in software on Router X. When the client in the pink VLAN talks to an enterprise server in the server farm, the first packet goes from the MPC in the access switch to the MPS using LANE. The MPS forwards the packet to the destination MPC using LANE. Then the MPS tells the two MPCs to establish a direct switched virtual circuit (SVC) path between the green subnet and the server farm subnet.

With MPOA, IP unicast packets take the cut-through SVC as indicated. Multicast packets, however, are sent to the BUS to be flooded in the originating ELAN. Then Router X copies the multicast to the BUS in every ELAN that needs to receive the packet as determined by multicast routing. In turn, each BUS floods the packet again within each destination ELAN.

Packets of protocols other than IP always proceed LANE to router to LANE without establishing a direct cut-through SVC. MPOA design must consider the amount of broadcast, multicast, and non-IP traffic in relation to the performance of the router. MPOA should be considered for networks with predominately IP unicast traffic and ATM trunks to the wiring closet switch.

The Multilayer Model

    The New 80/20 Rule

The conventional wisdom of the 80/20 rule underlies the traditional design models discussed in the preceding section. With the campus-wide VLAN model, the logical workgroup is dispersed across the campus, but still organized such that 80 percent of traffic is contained within the VLAN. The remaining 20 percent of traffic leaves the network or subnet through a router.

The traditional 80/20 traffic model arose because each department or workgroup had a local server on the LAN. The local server was used as file server, logon server, and application server for the workgroup. The 80/20 traffic pattern has been changing rapidly with the rise of corporate intranets and applications that rely on distributed IP services.



    Many new and existing applications are moving to

distributed World Wide Web (WWW)-based data storage

    and retrieval. The traffic pattern is moving toward what

    is now referred to as the 20/80 model. In the 20/80 model,

    only 20 percent of traffic is local to the workgroup LAN

    and 80 percent of the traffic leaves.

Components of the Multilayer Model

The performance of multilayer switching matches the requirements of the new 20/80 traffic model. The two components of multilayer switching on the Catalyst 5000 family are the RSM and the NetFlow feature card. The RSM is a Cisco IOS-based multiprotocol router on a card. It has performance and features similar to a Cisco 7500 router. The NetFlow feature card is a daughter-card upgrade to the Supervisor Engine of the Catalyst 5000 family switches. It performs both Layer 3 and Layer 2 switching in hardware with specialized ASICs. It is important to note that there is no performance penalty associated with Layer 3 switching versus Layer 2 switching with the NetFlow feature card.

Figure 6 illustrates a simple multilayer campus network design. The campus consists of three buildings, A, B, and C, connected by a backbone called the core. The distribution layer consists of Catalyst 5000 family multilayer switches. The multilayer design takes advantage of the Layer 2 switching performance and features of the Catalyst family switches in the access layer and backbone and uses multilayer switching in the distribution layer. The multilayer model preserves the existing logical network design and addressing as in the traditional hub and router model. Access-layer subnets terminate at the distribution layer. From the other side, backbone subnets also terminate at the distribution layer. So the multilayer model does not consist of campus-wide VLANs, but does take advantage of VLAN trunking as we shall see.

Figure 6 Multilayer Campus Design with Multilayer Switching

[Figure: Catalyst 5000 Layer 2 switches in the access layer of Buildings A, B, and C, with an ISL-attached building server and a Fast EtherChannel-attached workstation; Catalyst 5000 multilayer switches in the distribution layer; and a Layer 2 switched core with a Fast EtherChannel-attached enterprise server.]


Because Layer 3 switching is used in the distribution layer of the multilayer model, this is where many of the characteristic advantages of routing apply. The distribution layer forms a broadcast boundary so that broadcasts don't pass from a building to the backbone or vice-versa. Value-added features of the Cisco IOS software apply at the distribution layer. For example, the distribution-layer switches cache information about Novell servers and respond to Get Nearest Server queries from Novell clients in the building. Another example is forwarding Dynamic Host Configuration Protocol (DHCP) messages from mobile IP workstations to a DHCP server.
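For example, DHCP relay is enabled per access-layer subnet with the ip helper-address command on the distribution-layer switch's routed VLAN interface (the addresses shown are placeholders):

    interface Vlan10
     ip address 131.108.10.2 255.255.255.0
     ! Forward DHCP/BOOTP broadcasts from this subnet to the central DHCP server
     ip helper-address 131.108.1.10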

Another Cisco IOS feature that is implemented at the multilayer switches in the distribution layer is called Local Area Mobility (LAM). LAM is valuable for campus intranets that have not deployed DHCP services and permits workstations with statically configured IP addresses and gateways to move throughout the campus. LAM works by propagating the address of the mobile hosts out into the Layer 3 routing table.

There are actually hundreds of valuable Cisco IOS

    features that improve the stability, scalability, and

    manageability of enterprise networks. These features apply

    to all the protocols found in the campus, including DECnet,

AppleTalk, IBM SNA, Novell IPX, TCP/IP, and many others.

    One characteristic shared by most of these features is that

    they are out of the box. Out-of-the-box features apply

    to the functioning of the network as a whole. They are in

    contrast with inside-the-box features, such as port density

    or performance, that apply to a single box rather than to the

    network as a whole. Inside-the-box features have little to

    do with the stability, scalability, or manageability of

    enterprise networks.

The greatest strengths of the multilayer model arise from its hierarchical and modular nature. It is hierarchical because the layers are clearly defined and specialized. It is modular because every part within a layer performs the same logical function. One key advantage of modular design is that different technologies can be deployed with no impact on the logical structure of the model. For example, Token Ring can be replaced by Ethernet. FDDI can be replaced by switched Fast Ethernet. Hubs can be replaced by Layer 2 switches. Fast Ethernet can be substituted with ATM LANE. ATM LANE can be substituted with Gigabit Ethernet, and so on. So modularity makes both migration and integration of legacy technologies much easier.

Another key advantage of modular design is that each

    device within a layer is programmed the same way and

    performs the same job, making configuration much easier.

    Troubleshooting is also easier, because the whole design

    is highly deterministic in terms of performance, path

    determination, and failure recovery.

In the access layer a subnet corresponds to a VLAN. A VLAN may map to a single Layer 2 switch, or it may appear at several switches. Conversely, one or more VLANs may appear at a given Layer 2 switch. If Catalyst 5000 family switches are used in the access layer, VLAN trunking provides flexible allocation of networks and subnets across more than one switch. In our later examples we will show two VLANs per switch in order to illustrate how to use VLAN trunking to achieve load balancing and fast failure recovery between the access layer and the distribution layer.

    In its simplest form, the core layer is a single logical

    network or VLAN. In our examples, we show the core layer

    as a simple switched Layer 2 infrastructure with no loops.

    It is advantageous to avoid spanning tree loops in the core.

    Instead we will take advantage of the load balancing and fast

    convergence of Layer 3 routing protocols such as OSPF and

    Enhanced IGRP to handle path determination and failure

    recovery across the backbone. So all the path determination

and failure recovery is handled at the distribution layer in the

    multilayer model.

Redundancy and Load Balancing

    A distribution-layer switch in Figure 6 represents a point of

    failure at the building level. One thousand users in Building

    A could lose their connections to the backbone in the event

of a power failure. If a link from a wiring closet switch to the

    distribution-layer switch is disconnected, 100 users on a floor

    could lose their connections to the backbone. Figure 7 shows

    a multilayer design that addresses these issues.

    Multilayer switches A and B provide redundant

    connectivity to domain North. Redundant links from each

    access-layer switch connect to distribution-layer switches A

    and B. Redundancy in the backbone is achieved by installing

two or more Catalyst switches in the core. Redundant links

    from the distribution layer provide failover as well as load

    balancing over multiple paths across the backbone.


Redundant links connect access-layer switches to a pair of Catalyst multilayer switches in the distribution layer. Fast failover at Layer 3 is achieved with Cisco's Hot Standby Router Protocol. The two distribution-layer switches cooperate to provide HSRP gateway routers for all the IP hosts in the building. Fast failover at Layer 2 is achieved by Cisco's UplinkFast feature. UplinkFast is a convergence algorithm that achieves link failover from the forwarding link to the backup link in about three seconds.

Load balancing across the core is achieved by intelligent Layer 3 routing protocols implemented in the Cisco IOS software. In this picture there are four equal-cost paths between any two buildings. In Figure 7, the four paths from domain North to domain West are AXC, AXYD, BYD, and BYXC. These four Layer 2 paths are considered equal by Layer 3 routing protocols. Note that all paths from domains North, West, and South to the backbone are single, logical hops. The Cisco IOS software supports load balancing over up to six equal-cost paths for IP, and over many paths for other protocols.
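A minimal sketch of the routing configuration on a distribution-layer switch, with a placeholder process number and network statement, that allows up to six equal-cost routes to be installed:

    router ospf 100
     network 131.108.0.0 0.0.255.255 area 0
     ! Install up to six equal-cost paths (the IOS default for OSPF is four)
     maximum-paths 6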

Figure 7 Redundant Multilayer Campus Design

[Figure: domains North, West, and South, each with redundant access-layer uplinks to a pair of distribution-layer multilayer switches (A and B for North, C and D for West); core switches X and Y; ISL-attached building servers; and Fast EtherChannel-attached enterprise servers.]


Figure 9 shows HSRP operating between two distribution-layer switches. Host systems connect at a switch port in the access layer. The even-numbered subnets map to even-numbered VLANs, and the odd-numbered subnets map to odd-numbered VLANs. The HSRP primary for the even-numbered subnets is distribution-layer Switch X, and the HSRP primary for the odd-numbered subnets is Switch Y. The HSRP backup for even-numbered subnets is Switch Y, and the HSRP backup for odd-numbered subnets is Switch X. The convention followed here is that every HSRP gateway router always has host address 100, so the HSRP gateway for subnet 15.0 is 15.100. If the gateway at 15.100 loses power or is disconnected, Switch X assumes the address 15.100 as well as the HSRP MAC address within about two seconds as measured in the configuration shown in Appendix A.
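A sketch of the corresponding RSM configuration on Switch X follows; Switch Y mirrors it with the priorities reversed. The VLAN numbers follow the figure, but the full subnet addresses are only illustrative:

    ! Switch X: HSRP primary for even VLAN 10, backup for odd VLAN 11
    interface Vlan10
     ip address 131.108.10.2 255.255.255.0
     standby 10 ip 131.108.10.100
     standby 10 priority 110
     standby 10 preempt
    !
    interface Vlan11
     ip address 131.108.11.2 255.255.255.0
     standby 11 ip 131.108.11.100
     standby 11 priority 90
     standby 11 preempt

The shared standby address ends in .100 in each subnet, following the gateway convention described above.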

    Figure 9 Redundancy with HSRP

Figure 10 shows load balancing between the access layer and the distribution layer using Cisco's ISL VLAN trunking protocol. We have allocated VLANs 10 and 11 to access-layer Switch A, and VLANs 12 and 13 to Switch B. Each access-layer switch has two trunks to the distribution layer. The STP puts redundant links in blocking mode as shown. Load distribution is achieved by making one trunk the active forwarding path for even-numbered VLANs and the other trunk the active forwarding path for odd-numbered VLANs.

    Figure 10 VLAN Trunking for Load Balancing

On Switch A, the left-hand trunk is labeled F10, which means it is the forwarding path for VLAN 10. The right-hand trunk is labeled F11, which means it is the forwarding path for VLAN 11. The left-hand trunk is also labeled B11, which means it is the blocking path for VLAN 11, and the right-hand trunk is B10, which means blocking for VLAN 10. This is accomplished by making X the root for even VLANs and Y the root for odd VLANs. See Appendix A for the configuration commands required.
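In outline, the CatOS commands involved are as follows; this is only a sketch, the bridge priority value is illustrative, and Appendix A contains the complete configurations.

    On distribution Switch X, lower the bridge priority for each even VLAN (10, 12, 14, 16):
        set spantree priority 100 10
    On distribution Switch Y, lower the bridge priority for each odd VLAN (11, 13, 15, 17):
        set spantree priority 100 11
    On each access-layer switch, enable fast uplink failover:
        set spantree uplinkfast enable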

Figure 11 shows Figure 10 after a link failure, which is indicated by the big X. UplinkFast changes the left-hand trunk on Switch A to be the active forwarding path for VLAN 11. Traffic is switched across Fast EtherChannel trunk Z if required. Trunk Z is the Layer 2 backup path for all VLANs in the domain, and also carries some of the return traffic that is load-balanced between Switch X and Switch Y. With conventional STP, convergence would take 40 to 50 seconds. With UplinkFast, failover takes about three seconds as measured in the configuration shown in Appendix A.

Figure 11 VLAN Trunking with UplinkFast Failover

    Scaling Bandwidth

    Ethernet trunk capacity in the multilayer model can be scaled

    in several ways. Ethernet can be migrated to Fast Ethernet.

    Fast Ethernet can be migrated to Fast EtherChannel or

Gigabit Ethernet or Gigabit EtherChannel. Access-layer switches can be partitioned into multiple VLANs with

    multiple trunks. VLAN multiplexing with ISL can be used

    in combination with the different trunks.

Fast EtherChannel combines two or four Fast Ethernet links together into a single high-capacity trunk. Fast EtherChannel is supported by the Cisco 7500 family routers with IOS Release 11.1.14CA and above. It is supported on Catalyst 5000 switches with the Fast EtherChannel line card or on Supervisor II or III. Fast EtherChannel support has been announced by several partners, including Adaptec, Auspex, Compaq, Hewlett-Packard, Intel, Sun Microsystems, and Znyx. With Fast EtherChannel trunking, a high-capacity server can be connected to the core backbone at 400 Mbps FDX for 800 Mbps total throughput.

Figure 12 shows three ways to scale bandwidth between an access-layer switch and a distribution-layer switch. On the configuration labeled "A - Best", all VLANs are combined over Fast EtherChannel with ISL. In the middle configuration labeled "B - Good", a combination of segmentation and ISL trunking is used. On the configuration labeled "C - OK", simple segmentation is used.

    Figure 12 Scaling Ethernet Trunk Bandwidth

You should use model A if possible, because Fast EtherChannel provides more efficient bandwidth utilization by multiplexing traffic from multiple VLANs over one trunk. If a Fast EtherChannel line card is not available, use model B if possible. If neither Fast EtherChannel nor ISL trunking are possible, use model C. With simple segmentation, each VLAN uses one trunk, so one can be congested while another is unused. More ports will be required to get the same performance.
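On a Catalyst 5000, model A amounts to bundling the uplink ports into a channel and then trunking all VLANs over the bundle. A rough CatOS sketch, with placeholder port numbers:

    Bundle two Fast Ethernet ports into a Fast EtherChannel:
        set port channel 2/1-2 on
    Run ISL trunking over the channel so all VLANs are multiplexed onto it:
        set trunk 2/1 on

The channel then carries VLANs 1 through 6 as one logical ISL trunk, so the aggregate bandwidth is shared by all VLANs rather than dedicated per VLAN.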



Scale bandwidth within ATM backbones by adding more OC-3 or OC-12 trunks as required. The intelligent routing provided by Private Network-to-Network Interface (PNNI) handles load balancing and fast failover.

Policy in the Core

With Layer 3 switching in the distribution layer, it is possible to implement the backbone as a single logical network or

    multiple logical networks as required. VLAN technology can

    be used to create separate logical networks that can be used

    for different purposes. One IP core VLAN could be created

    for management traffic and another for enterprise servers. A

    different policy could be implemented for each core VLAN.

    Policy is applied with access lists at the distribution layer. In

    this way, access to management traffic and management

    ports on network devices is carefully controlled.
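As an illustration, if the management VLAN in the core were numbered 100 with subnet 131.108.99.0 (both placeholders), a distribution-layer access list could limit reachability to a single management station:

    ! Permit only the management station into the management subnet
    access-list 102 permit ip host 131.108.10.50 131.108.99.0 0.0.0.255
    access-list 102 deny ip any 131.108.99.0 0.0.0.255
    !
    interface Vlan100
     ip access-group 102 out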

Another way to logically partition the core is by protocol. Create one VLAN for enterprise IP servers and another for enterprise IPX or DECnet servers. The logical partition can be extended to become complete physical separation on multiple core switches if dictated by security policies. Figure 13 shows the core separated physically into two switches. VLAN 100 on Switch V corresponds to IP subnet 131.108.1.0 where the World Wide Web (WWW) server farm attaches. VLAN 200 on Switch W corresponds to IPX network BEEF0001 where the Novell server farm attaches.

    Figure 13 Logical or Physical Partitioning of the Core

    Of course the simpler the backbone topology, the better.

    A small number of VLANs or ELANs is preferred. A

    discussion of the scaling issues related to large numbers of

    Layer 3 switches peered across many networks appears later

    in this paper in Scaling Considerations.

Positioning Servers

It is very common for an enterprise to centralize servers. In some cases, services are consolidated into a single server. In other cases, servers are grouped at a data center for physical security or easier administration. At the same time, it is increasingly common for workgroups or individuals to publish a Web page locally and make it accessible to the enterprise.

With centralized servers directly attached to the backbone, all client/server traffic crosses one hop from a subnet in the access layer to a subnet in the core. Policy-based control of access to enterprise servers is implemented by access lists applied at the distribution layer. In Figure 14, server W is Fast Ethernet-attached to the core subnet. Server X is Fast EtherChannel-attached to the core subnet. As mentioned, servers attached directly to the core must use proxy ARP, IRDP, GDP, or RIP snooping to populate their routing tables. HSRP would not be used within core subnets, because switches in the distribution layer all connect to different parts of the campus.



    Enterprise servers Y and Z are placed in a server farm

    by implementing multilayer switching in a server

    distribution building block. Server Y

    is Fast Ethernet-attached, and server Z is Fast

    EtherChannel-attached. Policy controlling access

    to these servers is implemented with access lists on the core

    switches. Another big advantage of the server distribution

    model is that HSRP can be used to provide redundancy with

    fast failover. The server distribution model also keeps all

server-to-server traffic off the backbone. See Appendix A

    for a sample configuration that shows how to implement

    a server farm.

Server M is within workgroup D, which corresponds to one VLAN. Server M is Fast Ethernet-attached at a port on an access-layer switch, because most of the traffic to the server is local to the workgroup. This follows the conventional 80/20 rule. Server M could be hidden from the enterprise with an access list at the distribution layer switch H if required.

Server N attaches to the distribution layer at switch H. Server N is a building-level server that communicates with clients in VLANs A, B, C, and D. A direct Layer 2 switched path between server N and clients in VLANs A, B, C, and D can be achieved in two ways. With four network interface cards (NICs), it can be directly attached to each VLAN. With an ISL NIC, server N can talk directly to all four VLANs over a VLAN trunk. Server N can be selectively hidden from the rest of the enterprise with an access list on distribution layer switch H if required.

ATM/LANE Backbone

Figure 15 shows the multilayer campus model with ATM LANE in the backbone. For customers that require guaranteed quality of service (QoS), ATM is a good alternative. Real-time voice and video applications may mandate ATM features like per-flow queuing, which provides granular control of delay and jitter.

    Figure 14 Server Attachment in the Multilayer Model



Each Catalyst 5000 multilayer switch in the distribution layer is equipped with a LANE card. The LANE card acts as a LEC so that the distribution-layer switches can communicate across the backbone. The LANE card has a redundant ATM OC-3 physical interface called dual-PHY. In Figure 15, the solid lines represent the active links and the dotted lines represent the hot-standby links.

Two LightStream 1010 switches form the ATM core. Routers and servers with native ATM interfaces attach directly to ATM ports in the backbone. Enterprise servers in the server farm attach to multilayer Catalyst 5000 switches X and Y. Servers may be Fast Ethernet- or Fast EtherChannel-attached. These Catalyst 5000 switches are also equipped with LANE cards and act as LECs that connect Ethernet-based enterprise servers to the ATM ELAN in the core.

    The trunks between the two LightStream 1010 core

switches can be OC-3 or OC-12 as required. The PNNI

    protocol handles load balancing and intelligent routing

    between the ATM switches. Intelligent routing is increasingly

    important as the core scales up from two switches to many

    switches. STP is not used in the backbone. Intelligent Layer 3

    routing protocols such as OSPF and Enhanced IGRP manage

    path determination and load balancing between

    distribution-layer switches.

Cisco has implemented the Simple Server Redundancy Protocol (SSRP) to provide redundancy of the LECS and the LES/BUS. SSRP is available on Cisco 7500 routers, Catalyst 5000 family switches, and LightStream 1010 switches and is compatible with all LANE 1.0 standard LECs.
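In outline, and only as a sketch (the ELAN name, VLAN number, database name, and ATM address below are placeholders; Appendix A contains complete examples), a LANE card hosting the LES/BUS and a client for a backbone ELAN, together with a LECS database elsewhere, are configured like this:

    ! On the LANE card that hosts the LES/BUS and a client for the backbone ELAN
    interface ATM0.1 multipoint
     lane server-bus ethernet backbone1
     lane client ethernet 100 backbone1
    !
    ! On the device that hosts the LECS (the NSAP address is a placeholder)
    lane database campus
     name backbone1 server-atm-address 47.0091810000000061705B7701.00E0F75D4401.01
    !
    interface ATM0
     lane config auto-config-atm-address
     lane config database campus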

Figure 15 Multilayer Model with ATM LANE Core

[Figure: distribution-layer Catalyst 5000 multilayer switches for Buildings A, B, and C with dual-PHY LANE uplinks; two LightStream 1010 ATM switches in the core, joined by OC-3 or OC-12 links, each acting as a LECS backup; a Cisco 7500 router as primary LECS; and server-distribution switches X (LES/BUS primary) and Y (LES/BUS backup) with Fast Ethernet- and Fast EtherChannel-attached enterprise servers.]


The LANE card for the Catalyst 5000 family is an efficient BUS with broadcast performance of 120 kpps. This is enough capacity for the largest campus networks. In Figure 15 we place the primary LES/BUS on Switch X and the backup LES/BUS on Switch Y. For a small campus, SSRP LES/BUS failover takes only a few seconds. For a very large campus, LES/BUS failover can take several minutes. In large campus designs, dual ELAN backbones are frequently used to provide fast convergence in the event of a LES/BUS failure.

As an example, two ELANs, Red and Blue, are created in the backbone. If the LES/BUS for ELAN Red is disconnected, traffic is quickly rerouted over ELAN Blue until ELAN Red recovers. After ELAN Red recovers, the multilayer switches in the distribution layer reestablish contact across ELAN Red and start load balancing between Red and Blue again. This process applies to routed protocols but not bridged protocols.

The primary and backup LECS database is configured on the LightStream 1010 ATM switches because of their central position. When the ELAN is operating in steady state, there is no overhead CPU utilization on the LECS. The LECS is only contacted when a new LEC joins an ELAN. For this reason, there are few performance considerations associated with placing the primary and backup LECS. A good choice for a primary LECS would be a Cisco 7500 router with direct ATM attachment to the backbone, because it would not be affected by ATM signaling traffic in the event of a LES/BUS failover.

Figure 16 shows an alternative implementation of the LANE core using the Catalyst 5500 switch. Here the Catalyst 5500 operates as an ATM switch with the addition of the ATM Switch Processor (ASP) card. It is configured as a LEC with the addition of the OC-12 LANE/MPOA card. It is configured as an Ethernet frame switch with the addition of the appropriate Ethernet or Fast Ethernet line cards. The server farm is implemented with the addition of multilayer switching. The Catalyst 5500 combines the functionality of the LightStream 1010 and the Catalyst 5000 in a single chassis. See Appendix A for an example of configuring an ATM backbone with Catalyst 5500 multilayer switches.

Figure 16 ATM LANE Core with Catalyst 5500 Switches

[Figure: the same three building domains, with two Catalyst 5500 switches forming the ATM LANE core (LES/BUS primary and backup) over OC-3 or OC-12 uplinks, and a server distribution block with enterprise servers and a Fast EtherChannel-attached enterprise server.]


IP Multicast

Applications based on IP multicast represent a small but rapidly growing component of corporate intranets. Applications such as IPTV, Microsoft NetShow, and NetMeeting are being tried and deployed. There are several aspects to handling multicasts effectively:

    Multicast routing, Protocol Independent Multicast (PIM)

    dense mode and sparse mode

    Clients and servers join multicast groups with Internet

    Group Management Protocol (IGMP)

    Pruning multicast trees with Cisco Group Multicast

    Protocol (CGMP) or IGMP snooping

    Switch and router multicast performance

    Multicast policy

The preferred routing protocol for multicast is PIM. PIM sparse mode is described in RFC 2117, and PIM dense mode is on the standards track. PIM is being widely deployed in the Internet as well as in corporate intranets. As its name suggests, PIM works with various unicast routing protocols such as OSPF and Enhanced IGRP. PIM routers may also be required to interact with the Distance Vector Multicast Routing Protocol (DVMRP). DVMRP is a legacy multicast routing protocol deployed in the Internet multicast backbone (MBONE). Currently 50 percent of the MBONE has converted to PIM, and it is expected that PIM will replace DVMRP over time.

PIM can operate in dense mode or in sparse mode. Dense-mode operation is used for an application like IPTV where there is a multicast server with many clients throughout the campus. Sparse-mode operation is used for workgroup applications like NetMeeting. In either case, PIM builds efficient multicast trees that minimize the amount of traffic on the network. This is particularly important for high-bandwidth applications such as real-time video. In most environments, PIM is configured as sparse-dense and automatically uses either sparse mode or dense mode as required.
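A minimal sketch of the corresponding IOS configuration on a distribution-layer switch (the interface and the rendezvous point address are placeholders):

    ip multicast-routing
    !
    interface Vlan10
     ip pim sparse-dense-mode
    !
    ! Rendezvous point used by sparse-mode groups
    ip pim rp-address 131.108.1.2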

IGMP is used by multicast clients and servers to join or advertise multicast groups. The local gateway router makes a multicast available on subnets with active listeners, but blocks the traffic if no listeners are present. CGMP extends multicast pruning down to the Catalyst switch. A Cisco router sends out a CGMP message to advertise all the host MAC addresses that belong to a multicast group. Catalyst switches receive the CGMP message and forward multicast traffic only to ports with the specific MAC address in the forwarding table. This blocks multicast packets from all switch ports that don't have group members downstream.
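Enabling this behavior, sketched here with placeholder interface numbers, takes one command on each side. On the router (or RSM) interface facing the switched VLAN:

    interface Vlan10
     ip pim sparse-dense-mode
     ! Generate CGMP messages toward the Layer 2 switches
     ip cgmp

and on the Catalyst switch:

    set cgmp enable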

The Catalyst 5000 family of switches has an architecture that forwards multicast streams to one port, many ports, or all ports with no performance penalty. Catalyst switches will support one or many multicast groups operating at wire speed concurrently.

    One way to implement multicast policy is to place multicast servers in a server farm behind multilayer Catalyst Switch X as shown in Figure 17. Switch X acts as a multicast firewall that enforces rate limiting and controls access to multicast sessions. To further isolate multicast traffic, create a separate multicast VLAN/subnet in the core. The multicast VLAN in the core could be a logical partition of existing core switches or a dedicated switch if traffic is very high. Switch X is a logical place to implement the PIM rendezvous point. The rendezvous point is like the root of the multicast tree.
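    For example, assuming Switch X's routing engine has the purely illustrative address 131.108.1.10 on the multicast VLAN, the other Layer 3 switches could point to it as the rendezvous point with a static RP statement:

    ip pim rp-address 131.108.1.10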

    Figure 17 Multicast Firewall and Backbone

    [Figure 17 shows a unicast server farm and a multicast server farm at the server distribution layer, with Switch X in front of the multicast servers. The multicast servers sit on multicast VLAN 100 (IP subnet 131.108.1.0) and the unicast servers on unicast VLAN 200 (IP subnet 131.108.2.0). Client groups elsewhere on the campus subscribe only to multicast A, B, or C respectively.]


    Scaling Considerations

    The multilayer design model is inherently scalable. Layer 3 switching performance scales because it is distributed. Backbone performance scales as you add more links or more switches. The individual switch domains or buildings scale to over 1000 client devices with two distribution-layer switches in a typical redundant configuration. More building blocks or server blocks can be added to the campus without changing the design model. Because the multilayer design model is highly structured and deterministic, it is also scalable from a management and administration perspective.

    In all the multilayer designs discussed, we have avoided STP loops in the backbone. STP takes 40 to 50 seconds to converge and does not support load balancing across multiple paths. Within Ethernet backbones, no loops are configured. For ATM backbones, PNNI handles load balancing. In all cases, intelligent Layer 3 routing protocols such as OSPF and Enhanced IGRP handle path determination and load balancing over multiple paths in the backbone.

    OSPF overhead in the backbone rises linearly as the number of distribution-layer switches rises. This is because OSPF elects one designated router and one backup designated router to peer with all the other Layer 3 switches in the distribution layer. If two VLANs or ELANs are created in the backbone, a designated router and a backup are elected for each. So the OSPF routing traffic and CPU overhead increase as the number of backbone VLANs or ELANs increases. For this reason, it is recommended to keep the number of VLANs or ELANs in the backbone small. For large ATM/LANE backbones, it is recommended to create two ELANs in the backbone, as was discussed in the ATM/LANE Backbone section earlier in this paper.

    Another important consideration for OSPF scalability is summarization. For a large campus, make each building an OSPF area and make the distribution-layer switches area border routers (ABRs). Pick all the subnets within the building from a contiguous block of addresses and summarize with a single summary advertisement at the ABRs. This reduces the amount of routing information throughout the campus and increases the stability of the routing table. Enhanced IGRP can be configured for summarization in the same way.
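    A minimal sketch of such an ABR configuration on a distribution-layer switch, assuming (for illustration only) that the building's subnets are drawn from the contiguous block 131.108.8.0 through 131.108.15.0; the OSPF process number follows the Appendix A examples:

    router ospf 777
    network 131.108.99.0 0.0.0.255 area 0 (backbone VLAN)
    network 131.108.8.0 0.0.7.255 area 1 (all building subnets placed in area 1)
    area 1 range 131.108.8.0 255.255.248.0 (one summary advertised toward the backbone)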

    Not all routing protocols are created equal, however. AppleTalk Routing Table Maintenance Protocol (RTMP), Novell Service Advertisement Protocol (SAP), and Novell Routing Information Protocol (RIP) are protocols with overhead that increases as the square of the number of peers. For example, say there are 12 distribution-layer switches attached to the backbone and running Novell SAP. If there are 100 SAP services being advertised throughout the campus, and each SAP packet carries up to seven service entries, each distribution switch injects 100/7 = 15 SAP packets into the backbone every 60 seconds. All 12 distribution-layer switches receive and process 12 * 15 = 180 SAP packets every 60 seconds. The Cisco IOS software provides features such as SAP filtering to contain SAP advertisements from local servers where appropriate. A total of 180 packets every 60 seconds is reasonable, but consider what happens with 100 distribution-layer switches advertising 1000 SAP services.
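    A hedged sketch of such a SAP output filter on a distribution-layer switch; the IPX network numbers, server address, service type, and access-list number are illustrative assumptions rather than values from this design:

    ipx routing
    access-list 1001 permit 2A.0000.0000.0001 4 (advertise only this local file server, SAP type 4)
    access-list 1001 deny -1 (suppress all other SAP entries toward the backbone)
    !
    interface vlan 99
    ipx network 99
    ipx output-sap-filter 1001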

    Figure 18 shows a design for a large hierarchical, redundant ATM campus backbone. The ATM core designated B consists of eight LightStream 1010 switches with a partial mesh of OC-12 trunks. Domain C consists of three pairs of LightStream 1010 switches. Domain C can be configured with an ATM prefix address that is summarized where it connects to the core B. On this scale, manual ATM address summarization would have little benefit. The default summarization would have just 26 routing entries corresponding to the 26 switches in Figure 18. In domain A, pairs of distribution-layer switches attach to the ATM fabric with OC-3 LANE. A server farm behind Catalyst switches X and Y attaches directly to the core with OC-12 LANE/MPOA cards.


    Migration Strategies

    The multilayer design model describes the logical structure of the campus. The addressing and Layer 3 design are independent of the choice of media. The logical design principles are the same whether implemented with Ethernet, Token Ring, FDDI, or ATM. This is not always true in the case of bridged protocols such as NetBIOS and Systems Network Architecture (SNA), which are media dependent. In particular, Token Ring applications with frame sizes larger than the 1500 bytes allowed by Ethernet need to be considered.

    Figure 19 shows a multilayer campus with a parallel FDDI backbone. The FDDI backbone could be bridged to the switched Fast Ethernet backbone with translational bridging implemented at the distribution layer. Alternatively, the FDDI backbone could be configured as a separate logical network. There are several possible reasons for keeping an existing FDDI backbone in place. FDDI supports 4500-byte frames, while Ethernet frames can be no larger than 1500 bytes. This is important for bridged protocols that originate on Token Ring end systems that generate 4500-byte frames. Another reason to maintain an FDDI backbone is for enterprise servers that have FDDI network interface cards.

    Figure 18 Hierarchical Redundant ATM Campus Backbone

    [Figure 18 shows the hierarchical redundant ATM campus backbone: core B as a partial mesh of LightStream 1010 switches with ATM OC-12 trunks, domains A and C attached over ATM OC-3, and a server distribution block behind Catalyst switches X and Y connected to the core over OC-12, with Gigabit Ethernet to the servers.]


    Data-link switching plus (DLSw+) is Cisco's implementation of standard DLSw. SNA frames from native SNA client B are encapsulated in TCP/IP by a router or a distribution-layer switch in the multilayer model. A distribution switch de-encapsulates the SNA traffic out to a Token Ring-attached front-end processor (FEP) at a data center. Multilayer switches can be attached to Token Ring with the Versatile Interface Processor (VIP) card and the Token Ring port adapter (PA).
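    A minimal DLSw+ sketch for the two peers follows; the peer IP addresses, ring and bridge numbers, and Token Ring interface are illustrative assumptions, and the bridging of the SNA client's Ethernet VLAN into DLSw is omitted for brevity:

    (Data-center switch with the Token Ring-attached FEP)
    source-bridge ring-group 100
    dlsw local-peer peer-id 131.108.50.1
    dlsw remote-peer 0 tcp 131.108.11.151
    interface tokenring 3/0/0
    source-bridge 10 1 100

    (Distribution-layer switch near SNA client B)
    dlsw local-peer peer-id 131.108.11.151
    dlsw remote-peer 0 tcp 131.108.50.1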

    Security in the Multilayer Model

    Access control lists are supported by multilayer switching with no performance degradation. Because all traffic passes through the distribution layer, this is the best place to implement policy with access control lists. These lists can also be used in the control plane of the network to restrict access to the switches themselves. In addition, the TACACS+ and RADIUS protocols provide centralized access control to switches. The Cisco IOS software also provides multiple levels of authorization with password encryption. Network managers can be assigned to a particular level at which a specific set of commands is enabled.
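    As a rough sketch on a distribution-layer RSM, with illustrative addresses, an assumed TACACS+ server in the server farm, and 11.2-era AAA syntax:

    access-list 101 permit tcp 131.108.10.0 0.0.0.255 131.108.100.0 0.0.0.255 eq www (clients reach the server farm on web ports only)
    access-list 101 deny ip any 131.108.100.0 0.0.0.255
    access-list 101 permit ip any any
    interface vlan 10
    ip access-group 101 in
    !
    aaa new-model
    tacacs-server host 131.108.100.210
    tacacs-server key s0mekey
    aaa authentication login default tacacs+ enable
    line vty 0 4
    login authentication default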

    Figure 19 FDDI and Token Ring Migration

    [Figure 19 shows the migration topology: access-layer Token Rings with NetBIOS client A and SNA client B, distribution-layer switches bridging to a dual-homed FDDI backbone that carries NetBIOS servers, a switched Ethernet backbone in parallel, and TIC-attached IBM SNA FEPs at the server distribution layer.]


    Implementing Layer 2 switching at the access layer and in the server farm has immediate security benefits. With shared media, all packets are visible to all users on the logical network. It is possible for a user to capture clear-text passwords or files. On a switched network, conversations are only visible to the sender and receiver. And within a server farm, all server-to-server traffic is kept off the campus backbone.

    WAN security is implemented in firewalls. A firewall consists of one or more routers and bastion host systems on a special network called a demilitarized zone (DMZ). Specialized Web caching servers and other firewall devices may attach to the DMZ. The inner firewall routers connect to the campus backbone in what can be considered a WAN distribution layer. Figure 20 shows a WAN distribution building block with firewall components.

    Bridging in the Multilayer Model

    For nonrouted protocols, bridging is configured. Bridging between access-layer VLANs and the backbone is handled by the RSM. Because each access-layer VLAN is running IEEE spanning tree, the RSM must not be configured with an IEEE bridge group. The effect of running IEEE bridging on the RSM is to collapse all the spanning trees of all the VLANs into a single spanning tree with a single root bridge. Configure the RSM with a DEC STP bridge group to keep all the IEEE spanning trees separate.
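    A minimal sketch of such a DEC bridge group on an RSM, bridging a nonroutable protocol between an access VLAN and the backbone VLAN; the VLAN numbers follow the Appendix A example and are otherwise illustrative:

    bridge 1 protocol dec
    !
    interface vlan 10
    bridge-group 1
    interface vlan 99
    bridge-group 1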

    For a redundant bridging configuration as shown in Figure 7, run IOS Release 11.2(13)P or higher on all RSMs. IOS Release 11.2(13)P has a feature that allows the DEC bridge protocol data units (BPDUs) to pass between RSMs through the Catalyst 5000 switches. With older versions of the Cisco IOS software, the DEC bridges will not see each other and will not block redundant links in the topology. If running an older version of IOS software on the RSMs, ensure that only RSM A bridges between the backbone and even-numbered VLANs, and that only RSM B bridges between the backbone and odd-numbered VLANs.

    Figure 20 WAN Distribution to the Internet

    [Figure 20 shows a WAN distribution building block attached to the core layer: multilayer switches acting as inner firewall routers, bastion hosts, Web servers, and other firewall devices on the DMZ, and outer firewall routers connecting to Internet service providers (ISPs).]


    Advantages of the Multilayer Model

    We have discussed several variations of the multilayer campus design model. Whether implemented with frame-switched Ethernet backbones or cell-switched ATM backbones, all share the same basic advantages. The model is highly deterministic, which makes it easy to troubleshoot as it scales. The modular building-block approach scales easily as new buildings or server farms are added to the campus. Intelligent Layer 3 routing protocols such as OSPF and Enhanced IGRP handle load balancing and fast convergence across the backbone. The logical structure and addressing of the hub and router model are preserved, which makes migration much easier. Many value-added services of the Cisco IOS software, such as server proxy, tunneling, and summarization, are implemented in the Catalyst multilayer switches at the distribution layer. Policy is also implemented with access lists at the distribution layer or at the server distribution switches.

    Redundancy and fast convergence are provided by features such as UplinkFast and HSRP. Bandwidth scales from Fast Ethernet to Fast EtherChannel to Gigabit Ethernet without changing addressing or policy configuration. With the features of the Cisco IOS software, the multilayer model supports all common campus protocols including TCP/IP, AppleTalk, Novell IPX, DECnet, IBM SNA, NetBIOS, and many more. Many of the largest and most successful campus intranets are built with the multilayer model. It avoids all the scaling problems associated with flat bridged or switched designs. And lastly, the multilayer model with multilayer switching handles Layer 3 switching in hardware with no performance penalty compared with Layer 2 switching.

    Appendix A: Implementing the Multilayer Model

    Ethernet Backbone

    This section shows how to configure the multilayer model with an Ethernet backbone. Figure 21 shows a small campus intranet. Two buildings are represented, corresponding to VTP domains North and South. The backbone is VTP domain Backbone. Within each VTP domain, at least one switch is configured as the VTP server. The VTP server keeps track of all the VLANs configured in a domain. Switch d1a is the VTP server for domain North. Switch d2a is the VTP server for domain South. Both ca and cb are VTP servers for domain Backbone. Both core switches are VTP servers, because we are not trunking VLAN 1 in the core. In fact there are no ISL trunks in the core. An access switch such as a1a is not a good choice for the VTP server, because not all VLANs in domain North appear on this switch. Switches not configured as VTP servers are configured in VTP transparent mode. Using transparent mode on access Switch a1a allows us to restrict the set of VLANs known to the switch.

    Figure 21 Implementing the Multilayer Model with Ethernet Backbone

    Figure 22 shows VLAN 10 in detail. The VLAN trunks that carry VLAN 10 form a triangle. Switch d1a at the lower left is the root switch for VLAN 10. On Switch a1a, trunk 2/1 is forwarding with respect to VLAN 10, and trunk 2/2 is blocking. The blocking trunk is shown in purple. UplinkFast is enabled on Switch a1a. In addition to the three trunks, three ports attach to VLAN 10. PC A with IP 131.108.10.1 attaches to Port 2/11 on Switch a1a. The two RSM modules r1a and r1b are depicted as routers that attach logically to the VLAN. RSM r1a is attached to Port 3/1 of Switch d1a, and RSM r1b is attached to Port 3/1 of Switch d1b. RSM r1a has IP address 131.108.10.151 on interface VLAN 10, but also acts as primary HSRP default gateway 131.108.10.100.

    [Figure 21 shows the example campus: VTP domain North (VLANs 1, 10, 11, 12, 13) with access switches a1a and a1b and distribution switches d1a/r1a and d1b/r1b; VTP domain South (VLANs 1, 20, 21, 22, 23) with access switches a2a and a2b and distribution switches d2a/r2a and d2b/r2b; and VTP domain Backbone (VLAN 99) with core switches ca and cb. Example hosts include 131.108.10.1 on VLAN 10, 131.108.11.1 on VLAN 11, 131.108.20.1 on VLAN 20, and 131.108.21.1 on VLAN 21.]


    Figure 22 VLAN 10 Logical Topology within Multilayer Model

    Figure 23 shows VLAN 11 in detail. Note that Switch d1b is the root for odd-numbered VLANs. On Switch a1a, trunk 2/1 is in blocking mode and trunk 2/2 is in forwarding mode. RSM r1b acts as the HSRP primary gateway 131.108.11.100 for VLAN 11, and r1a is the backup gateway.

    Figure 23 VLAN 11 Logical Topology within Multilayer Model

    Note that we have used a simple naming convention for switches. The first letter "a" in a1a refers to access; the letter "d" in d1a refers to distribution; and the letter "c" in ca refers to core. The first RSM is r1a within Switch d1a. Low IP addresses such as 131.108.10.1 represent hosts or clients, 131.108.10.20x addresses represent servers, and 131.108.10.10x addresses represent HSRP gateway routers. RSM host addresses other than for HSRP are the same for every VLAN, as follows:

    RSM    Host Address
    r1a    x.x.x.151
    r1b    x.x.x.152
    r2a    x.x.x.153
    r2b    x.x.x.154
    rca    x.x.x.155
    rcb    x.x.x.156
    rwan   x.x.x.157  (Cisco 7500 WAN router attached to the backbone)

    Domain North has four subnets: 131.108.10.0, 131.108.11.0, 131.108.12.0, and 131.108.13.0. These correspond to VLANs 10, 11, 12, and 13. In addition, a management subnet 131.108.1.0 corresponds to VLAN 1 within the domain. VLAN 1 does not extend beyond the distribution-layer switches. Within domain South, VLAN 1 is also used, but it is a different subnet, 131.108.2.0. The management port SC0 of each switch is in VLAN 1 and is configured with an address on subnet 131.108.1.0 as follows:

    Device  IP Address       Gateway Address
    a1a     131.108.1.1      131.108.1.100
    a1b     131.108.1.2      131.108.1.101
    d1a     131.108.1.3      131.108.1.100
    d1b     131.108.1.4      131.108.1.101
    r1a     131.108.1.151    N/A (HSRP primary for 131.108.1.100)
    r1b     131.108.1.152    N/A (HSRP backup for 131.108.1.100)

    [Figures 22 and 23 show the VLAN 10 and VLAN 11 logical topologies in VTP domain North. For VLAN 10, d1a is the root, trunk 2/1 on a1a is forwarding and trunk 2/2 is blocking, PC A (131.108.10.1) attaches to port 2/11, and RSM r1a (131.108.10.151) is HSRP primary for 131.108.10.100 with RSM r1b (131.108.10.152) as backup. For VLAN 11, d1b is the root, trunk 2/2 on a1a is forwarding and trunk 2/1 is blocking, PC B (131.108.11.1) attaches to port 2/12, and RSM r1b (131.108.11.152) is HSRP primary for 131.108.11.100 with r1a (131.108.11.151) as backup. UplinkFast is enabled on a1a, and each RSM attaches through port 3/1 of its distribution switch.]



    Domain South has four subnets: 131.108.20.0, 131.108.21.0, 131.108.22.0, and 131.108.23.0. These correspond to VLANs 20, 21, 22, and 23. In addition, a management subnet 131.108.2.0 corresponds to VLAN 1 within the domain. The management port SC0 of each switch is configured with an address on subnet 131.108.2.0 as follows:

    Device  IP Address       Gateway Address
    a2a     131.108.2.1      131.108.2.100
    a2b     131.108.2.2      131.108.2.101
    d2a     131.108.2.3      131.108.2.100
    d2b     131.108.2.4      131.108.2.101
    r2a     131.108.2.153    N/A (HSRP primary for 131.108.2.100)
    r2b     131.108.2.154    N/A (HSRP backup for 131.108.2.100)

    Domain Backbone has the subnet 131.108.99.0, which is VLAN 99. HSRP is not configured on subnet 99.0, because configuring standby on a VLAN interface disables Internet Control Message Protocol (ICMP) redirects. The gateway for ca and cb is configured to their own addresses with a default class B mask, so ca and cb will use proxy ARP to route to any other networks.

    Device  IP Address        Gateway Address
    ca      131.108.99.1      Proxy ARP (gateway 131.108.99.1, mask 255.255.0.0)
    cb      131.108.99.2      Proxy ARP (gateway 131.108.99.2, mask 255.255.0.0)
    r1a     131.108.99.151    N/A
    r1b     131.108.99.152    N/A
    r2a     131.108.99.153    N/A
    r2b     131.108.99.154    N/A

    The configuration for Switch a1a is shown below. Slot 2 has a 10/100 card, and Ports 2/1 and 2/2 are used to connect to Switches d1a and d1b respectively. The last command, set spantree uplinkfast enable, enables the fast STP failover feature.

    set prompt a1a
    set vtp mode transparent
    set vtp domain North
    set interface sc0 1 131.108.1.1 255.255.255.0
    set ip route default 131.108.1.100 0
    set trunk 2/1 on
    set trunk 2/2 on
    set vlan 10
    set vlan 10 2/11 (assigns one host port in VLAN 10)
    set vlan 11
    set vlan 11 2/12 (assigns one host port in VLAN 11)
    set spantree uplinkfast enable

    The configuration for Switch d1a follows. Slot 2 has a 10/100 card, and Ports 2/1, 2/2, and 2/3 are used to connect to Switches a1a, a1b, and d1b respectively. This switch is the STP root for even VLANs 10 and 12. We remove VLANs 12, 13, and 99 from trunk 2/1 to Switch a1a. We remove VLANs 10, 11, and 99 from trunk 2/2 to Switch a1b. We remove VLAN 99 from trunk 2/3 to Switch d1b to eliminate spanning tree loops in the core. VLAN 1 cannot be removed from the trunks within a VTP domain. Switch d1a is the VTP server for domain North.

    set prompt d1a

    set vtp domain North

    set vtp mode server
    set interface sc0 1 131.108.1.3 255.255.255.0

    set ip route default 131.108.1.100 0

    set trunk 2/1 on

    set trunk 2/2 on

    set trunk 2/3 on

    set vlan 10, 11, 12, 13, 99

    set spantree root 10, 12

    clear trunk 2/1 12,13,99

    clear trunk 2/2 10,11,99

    clear trunk 2/3 99



    The configuration for RSM r1a follows. This switch acts as HSRP primary for 131.108.1.100. This switch is also the HSRP primary gateway for even-numbered subnets 131.108.10.0 and 131.108.12.0 and the HSRP backup gateway for odd-numbered subnets 131.108.11.0 and 131.108.13.0.

    hostname r1a

    interface vlan 1

    ip address 131.108.1.151 255.255.255.0

    standby 1 ip 131.108.1.100

    standby 1 priority 100

    standby 1 preempt

    interface vlan 10

    ip address 131.108.10.151 255.255.255.0

    standby 1 ip 131.108.10.100

    standby 1 priority 100

    standby 1 preempt

    interface vlan 11

    ip address 131.108.11.151 255.255.255.0

    standby 1 ip 131.108.11.100

    standby 1 priority 50
    interface vlan 12

    ip address 131.108.12.151 255.255.255.0

    standby 1 ip 131.108.12.100

    standby 1 priority 100

    standby 1 preempt

    interface vlan 13

    ip address 131.108.13.151 255.255.255.0

    standby 1 ip 131.108.13.100

    standby 1 priority 50

    interface vlan 99

    ip address 131.108.99.151 255.255.255.0

    router ospf 777

    network 131.108.0.0 0.0.255.255 area 0
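    For reference, a sketch of the corresponding configuration for RSM r1b, inferred from the HSRP roles and addressing described above (r1b is primary for the odd-numbered subnets and backup for the even-numbered subnets); it is not part of the original example:

    hostname r1b
    interface vlan 1
    ip address 131.108.1.152 255.255.255.0
    standby 1 ip 131.108.1.100
    standby 1 priority 50
    interface vlan 10
    ip address 131.108.10.152 255.255.255.0
    standby 1 ip 131.108.10.100
    standby 1 priority 50
    interface vlan 11
    ip address 131.108.11.152 255.255.255.0
    standby 1 ip 131.108.11.100
    standby 1 priority 100
    standby 1 preempt
    interface vlan 12
    ip address 131.108.12.152 255.255.255.0
    standby 1 ip 131.108.12.100
    standby 1 priority 50
    interface vlan 13
    ip address 131.108.13.152 255.255.255.0
    standby 1 ip 131.108.13.100
    standby 1 priority 100
    standby 1 preempt
    interface vlan 99
    ip address 131.108.99.152 255.255.255.0
    router ospf 777
    network 131.108.0.0 0.0.255.255 area 0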

    Implementing a Server Farm

    Figure 24 shows enterprise servers attached to VLAN 100. We have added RSMs rca and rcb to core switches ca and cb. RSM rca is HSRP primary gateway 131.108.100.100, and RSM rcb is HSRP backup gateway for 131.108.100.100. RSM rcb is HSRP primary gateway for 131.108.100.101, and RSM rca is HSRP backup gateway for 131.108.100.101. Enterprise server 131.108.100.200 uses default gateway 131.108.100.100, and enterprise server 131.108.100.201 uses default gateway 131.108.100.101. This provides for load distribution of outbound packets from the server farm to the backbone.

    Figure 24 Creating a Server Farm

    Figure 25 shows core Switches ca and cb in more detail. A Fast EtherChannel VLAN 100 link connects ca and cb, providing a redundant Layer 2 path from enterprise servers to the HSRP primary gateways and backup gateways. This link also carries all server-to-server traffic.

    Figure 25 Server Farm Detail

    [Figures 24 and 25 show the server farm: core switches ca/rca and cb/rcb carry backbone VLAN 99 and server VLAN 100. Enterprise servers 131.108.100.200 and 131.108.100.201 attach on Ethernet or Fast Ethernet ports (port 2/12 on each core switch), and Fast EtherChannel links between ca and cb carry VLAN 99 (ports 2/1-2) and VLAN 100 (ports 2/3-4). RSM rca is shown as HSRP primary and rcb as HSRP backup for the server-farm gateway.]


    The configuration for Switch ca follows:

    set prompt ca
    set vtp domain Backbone
    set vtp mode server
    set interface sc0 99 131.108.99.1 255.255.255.0
    set ip route default 131.108.99.100 0
    set vlan 99 name 131.108.99.0
    set vlan 100 name 131.108.100.0
    set port channel 2/1-2 on (Fast EtherChannel VLAN 99)
    set port channel 2/3-4 on (Fast EtherChannel VLAN 100)
    set vlan 99 2/1-2
    set vlan 100 2/12
    set trunk 1/1 off
    set trunk 1/2 off
    set trunk 2/1 off (FEC is not an ISL trunk)
    set trunk 2/3 off (FEC is not an ISL trunk)

    The configuration for RSM rca follows:

    hostname rca

    interface vlan 99

    ip address 131.108.99.155 255.255.255.0

    interface vlan 100

    ip address 131.108.100.155 255.255.255.0

    standby 1 ip 131.108.100.100

    standby 1 priority 100

    standby 1 preempt

    standby 2 ip 131.108.100.101

    standby 2 priority 50

    router ospf 777

    network 131.108.0.0 0.0.255.255 area 0
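    A sketch of the matching configuration for RSM rcb, with the HSRP priorities mirrored so that rcb is primary for 131.108.100.101 and backup for 131.108.100.100; the host address .156 follows the RSM addressing table earlier in this appendix, and the rest is inferred rather than taken from the original:

    hostname rcb
    interface vlan 99
    ip address 131.108.99.156 255.255.255.0
    interface vlan 100
    ip address 131.108.100.156 255.255.255.0
    standby 1 ip 131.108.100.100
    standby 1 priority 50
    standby 2 ip 131.108.100.101
    standby 2 priority 100
    standby 2 preempt
    router ospf 777
    network 131.108.0.0 0.0.255.255 area 0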

    ATM LANE Backbone

    Figure 26 shows the multilayer model with an ATM/LANE core. Catalyst 5500 Switches ca and cb are used to provide ATM switching in the core and Ethernet switching in the server farm distribution. On all distribution-layer and core-layer switches, a LANE card is used to connect Ethernet VLAN 98 to ATM ELAN atmbackbone. The LANE card on Switch ca is the LES/BUS primary for atmbackbone, and the LANE card on Switch cb is the LES/BUS backup for atmbackbone.

    Figure 26 Implementing the Multilayer ModelATM/LANE Core

    Only one ELAN, atmbackbone, which is subnet 131.108.98.0, is provisioned in the core. This simplifies the core and reduces the number of virtual circuits required. In a large ATM campus backbone, two ELANs would be used for redundancy. Nine LANE clients attach to atmbackbone in Figure 27. Each LANE card on switches d1a, d1b, d2a, d2b, ca, and cb has one LEC associated with VLAN 98. Each ATM switch has a LEC associated with the management port. Router rwan has a native ATM interface, and therefore has a LEC.

    [Figure 26 shows the multilayer model with an ATM/LANE core: VTP domains North and South with their access and distribution switches, VTP domain Backbone on VLAN 98 mapped to ELAN atmbackbone (IP subnet 98.0), Catalyst 5500 ca as LES/BUS primary and Catalyst 5500 cb as LES/BUS backup, and enterprise servers, including a Fast EtherChannel-attached enterprise server, at the server distribution layer.]


    The number of virtual circuits in the backbone is given by the formula:

    6n ≤ x ≤ 6n + n(n-1)/2

    where n is the number of LECs on atmbackbone. With no traffic, each LEC has exactly six VCs. For atmbackbone with nine LECs, we get:

    54 ≤ x ≤ 90

    With no traffic, ELAN atmbackbone has a total of 54 VCs. If every LEC has open connections to every other LEC, atmbackbone has 90 VCs. This number is still relatively small, and the LES/BUS for atmbackbone will not be stressed. In the event that the primary LES/BUS is disconnected, SSRP failover is just a few seconds. To observe the number of open VCs on a LEC, use the command show atm vc.

    Rwan#show atm vc
                                        AAL /            Peak     Avg.    Burst
    Interface  VCD  VPI  VCI  Type      Encapsulation    Kbps     Kbps    Cells   Status
    ATM0/0     1    0    5    PVC       AAL5-SAAL        155000   155000  96      ACTIVE
    ATM0/0     2    0    16   PVC       AAL5-ILMI        155000   155000  96      ACTIVE
    ATM0/0.2   4    0    33   SVC       LANE-LEC         155000   155000  32      ACTIVE
    ATM0/0.2   5    0    34   MSVC      LANE-LEC         155000   155000  32      ACTIVE
    ATM0/0.2   6    0    35   SVC       LANE-LEC         155000   155000  32      ACTIVE
    ATM0/0.2   7    0    36   MSVC      LANE-LEC         155000   155000  32      ACTIVE
    ATM0/0.2   79   0    127  SVC       LANE-DATA        155000   155000  32      ACTIVE

    Figure 27 Management Subnet in the ATM/LANE Core

    [Figure 27 shows the management subnet in the ATM/LANE core (ELAN atmbackbone, IP subnet 131.108.98.0). LANE clients include lane1a (r1a, 98.151), lane1b (r1b, 98.152), lane2a (r2a, 98.153), lane2b (r2b, 98.154), and rwan (98.157). Switch ca hosts the primary LES/BUS (leneca) and primary LECS (aspca, 98.171); switch cb hosts the backup LES/BUS (lenecb) and backup LECS (aspcb, 98.172). Enterprise servers, including a Fast EtherChannel-attached enterprise server, attach at the server distribution layer.]


    This output shows the six default VCs and one open data VC, which is due to the current Telnet session to rwan.

    Permanent Virtual Circuit (P