A Dell EMC Deployment and Configuration Guide

Deploying Dell Networking MXL and PowerEdge I/O Aggregator in a Cisco Nexus Environment

Dell Networking Solutions Engineering
November 2014



Revisions

Date            Description                                         Authors
November 2014   Version 1.3: fixed duplicate images                 Ed Blazek, Kevin Locklear, Curtis Bunch, Mike Matthews
October 2014    Version 1.2: added vPC/VLT switch configuration;    Ed Blazek, Kevin Locklear, Curtis Bunch, Mike Matthews
                updated existing configurations
November 2013   Version 1.0: release                                Ed Blazek, Kevin Locklear

Copyright © 2014-2016 Dell Inc. or its subsidiaries. All Rights Reserved.

Except as stated below, no part of this document may be reproduced, distributed or transmitted in any form or by any means, without express permission of Dell.

You may distribute this document within your company or organization only, without alteration of its contents.

THIS DOCUMENT IS PROVIDED "AS-IS", AND WITHOUT ANY WARRANTY, EXPRESS OR IMPLIED. IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE SPECIFICALLY DISCLAIMED. PRODUCT WARRANTIES APPLICABLE TO THE DELL PRODUCTS DESCRIBED IN THIS DOCUMENT MAY BE FOUND AT: http://www.dell.com/learn/us/en/vn/terms-of-sale-commercial-and-public-sector-warranties

Performance of network reference architectures discussed in this document may vary with differing deployment conditions, network loads, and the like. Third-party products may be included in reference architectures for the convenience of the reader. Inclusion of such third-party products does not necessarily constitute Dell's recommendation of those products. Please consult your Dell representative for additional information.

Trademarks used in this text: Dell™, the Dell logo, Dell Boomi™, PowerEdge™, PowerVault™, PowerConnect™, OpenManage™, EqualLogic™, Compellent™, KACE™, FlexAddress™, Force10™ and Vostro™ are trademarks of Dell Inc. EMC VNX® and EMC Unisphere® are registered trademarks of Dell. Other Dell trademarks may be used in this document. Cisco Nexus®, Cisco MDS®, Cisco NX-OS® and Cisco Catalyst® are registered trademarks of Cisco Systems Inc. Intel®, Pentium®, Xeon®, Core® and Celeron® are registered trademarks of Intel Corporation in the U.S. and other countries. AMD® is a registered trademark and AMD Opteron™, AMD Phenom™ and AMD Sempron™ are trademarks of Advanced Micro Devices, Inc. Microsoft®, Windows®, Windows Server®, Internet Explorer®, MS-DOS®, Windows Vista® and Active Directory® are either trademarks or registered trademarks of Microsoft Corporation in the United States and/or other countries. Red Hat® and Red Hat® Enterprise Linux® are registered trademarks of Red Hat, Inc. in the United States and/or other countries. Novell® and SUSE® are registered trademarks of Novell Inc. in the United States and other countries. Oracle® is a registered trademark of Oracle Corporation and/or its affiliates. VMware®, Virtual SMP®, vMotion®, vCenter® and vSphere® are registered trademarks or trademarks of VMware, Inc. in the United States or other countries. IBM® is a registered trademark of International Business Machines Corporation. Broadcom® and NetXtreme® are registered trademarks of Broadcom Corporation. QLogic® is a registered trademark of QLogic Corporation. Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and/or names or their products and are the property of their respective owners. Dell disclaims proprietary interest in the marks and names of others.


Table of Contents

Revisions ................................................................................ 2
1 Introduction ........................................................................... 5
1.1 Configuration Overviews .............................................................. 5
1.1.1 Overview of Configuration One ...................................................... 6
1.1.2 Overview of Configuration Two ...................................................... 7
1.1.3 Overview of Configuration Three .................................................... 8
2 Technology used in this Deployment Guide ............................................... 9
2.1 Fibre Channel over Ethernet .......................................................... 9
2.2 Data Center Bridging ................................................................ 10
2.3 N_Port ID Virtualization and N_Port Virtualization .................................. 10
2.4 Cisco vPC and Dell Networking FTOS Multi-chassis EtherChannel Technology ........... 10
2.5 Multi-Path I/O ...................................................................... 10
3 Hardware Used in this Deployment Guide ................................................ 11
3.1 Dell PowerEdge M1000e Blade Enclosure Overview ...................................... 11
3.2 Server – PowerEdge M620 Blade Server ................................................ 12
3.3 M1000e I/O Modules .................................................................. 12
3.3.1 Dell Networking MXL Overview ...................................................... 13
3.3.2 Dell PowerEdge M I/O Aggregator Overview .......................................... 13
3.3.3 FlexIO Expansion Modules .......................................................... 14
3.4 Cisco Nexus 5548UP Overview ......................................................... 16
3.5 EMC VNX 5300 Overview ............................................................... 16
4 Preparation ........................................................................... 17
4.1.1 WWN/MAC Addresses ................................................................. 17
4.1.2 Virtual SAN (VSAN) and Virtual Fibre Channel (VFC) ................................ 18
4.1.3 Configuration Table ............................................................... 18
4.1.4 Component Information ............................................................. 19
5 Configuration One – Dell MXL or IOAs in Nexus Fabric Mode ............................. 20
5.1 Cisco Nexus 5548UP Setup ............................................................ 21
5.2 Dell Networking MXL Setup ........................................................... 26
6 Configuration Two – Dell MXL or IOA in Nexus NPV Mode with Cisco MDS 9148 ............ 28
6.1 Cisco Nexus 5548UP Setup ............................................................ 29
6.2 Dell Networking MXL Setup ........................................................... 34
6.3 Cisco MDS 9148 Setup ................................................................ 36
7 Configuration Three – Nexus Fabric Mode with Brand Varied MC-LAG Architecture ........ 37
7.1 Cisco Nexus 5548UP Setup ............................................................ 38
7.2 Dell Networking IOA Setup ........................................................... 43
8 Configuration and Troubleshooting ..................................................... 48
8.1 Dell PowerEdge MXL or M I/O Aggregator .............................................. 48
8.2 Cisco Nexus 5548UP and MDS 9148 Validation .......................................... 58
A Basic Terminology ..................................................................... 64
B References ............................................................................ 67
C Attachments ........................................................................... 69
Support and Feedback .................................................................... 69


1 Introduction

This deployment guide covers configuring two Dell M1000e blade server chassis I/O Modules (IOMs) in a Fibre Channel over Ethernet (FCoE) single-hop topology with the blade IOMs in FIP Snooping Bridge (FSB) mode. FSB capabilities allow the bridge (the switch, in this case) to snoop the packets coming across its ports, process the FCoE packets appropriately and send them to the intended Fibre Channel Forwarder (FCF). This is a simplified explanation of the process; several other things occur, such as installing Access Control Lists (ACLs) that permit FCoE traffic that has logged in (FLOGI'ed). While some of these more advanced topics are touched on in this document, for the most part the document is purposefully kept at a high level.

This document focuses on a few of the many possible network configurations containing FCoE topologies. Similar products are used in the configurations to reduce the amount of overlapping content while still covering numerous customer environments.

While not covered in this document, additional configuration is necessary before a switch is deployed in a production environment (e.g. security, Inter-Switch Links (ISLs), Virtual Port Channels (vPCs), etc.). In addition, due to the varied nature of storage offerings, configuring the storage is not covered in any detail in this document.

1.1 Configuration Overviews

This section covers the three configurations that are built in this deployment guide.

Note: In a typical production environment, most configurations will include several additional connections between servers, networking and storage devices.


1.1.1 Overview of Configuration One – Dell MXL or IOA in Nexus Fabric Mode

Configuration One consists of a pair of two-port LAG connections configured between two Cisco Nexus 5500s and two Dell Networking MXLs or PowerEdge M I/O Aggregators (IOAs), which act as FSBs. As illustrated in Figure 1, the I/O modules are in slots A1 and A2 of the M1000e chassis. N_Port ID Virtualization (NPIV) is enabled on the Nexus switches and FC-capable storage is attached directly to the Nexus switches.

Figure 1   Configuration One - Dell MXLs or IOAs in Nexus Fabric Mode
(Diagram: Cisco Nexus 7000 Series core, a vPC pair of Cisco Nexus 5500 Series switches, Dell Networking MXL or Dell PowerEdge I/O Aggregator modules in a Dell PowerEdge M1000e Blade Server Chassis, and SAN A/SAN B storage, with FCoE, Ethernet and FC links.)


1.1.2 Overview of Configuration Two – Dell MXL or IOA in Nexus NPV Mode with Cisco MDS 9148

In Configuration Two (Figure 2), a two-port connection is configured between a Cisco 5548UP and either an MXL or an IOA. This is similar to the previous example, but in this configuration the Cisco 5548UP is running in NPV mode with Inter-Switch Links (ISLs) to Cisco MDS devices.

Figure 2   Configuration Two - Dell MXL or IOA in Nexus NPV Mode with Cisco MDS 9148
(Diagram: Cisco Nexus 7000 Series core, a vPC pair of Cisco Nexus 5500 switches in NPV mode, Cisco MDS 9148 Multilayer Fabric Switches in SAN A and SAN B, and Dell Networking MXL or Dell PowerEdge I/O Aggregator modules in the M1000e chassis, with FCoE, Ethernet and FC links.)


1.1.3 Overview of Configuration Three – Dell MXL or IOA in Nexus Fabric Mode with Brand Varied MC-LAG Architecture

In Configuration Three (Figure 3), a two-port connection is configured between a Cisco 5548UP and either an MXL or an IOA. This is similar to Configuration One except the I/O modules are placed in PMUX mode and a VLTi peer link is built connecting the two I/O modules together. For further details on the benefits of this, see the Technology used in this Deployment Guide section.

Figure 3   Configuration Three - Dell MXL or IOA in a Nexus Fabric Mode with Brand Varied MC-LAG Architecture
(Diagram: a vPC pair of Cisco Nexus 5500 Series switches connected to a VLT pair of Dell Networking MXL or Dell PowerEdge I/O Aggregator modules in the M1000e chassis, with SAN A/SAN B storage and FCoE, Ethernet and FC links.)


2 Technology used in this Deployment Guide

2.1 Fibre Channel over Ethernet

Fibre Channel over Ethernet (FCoE) is a networking protocol that encapsulates Fibre Channel frames over Ethernet networks. This allows Fibre Channel to use 10, 40 or even 100 Gigabit Ethernet networks while preserving the Fibre Channel protocol. The FCoE protocol specification replaces the FC0 and FC1 layers of the Fibre Channel stack with Ethernet. By retaining the native Fibre Channel constructs, FCoE can integrate with existing Fibre Channel fabrics and management solutions.

Note: FCoE (referenced as FC-BB_E in the FC-BB-5 specification) achieved standard status in June 2009 and is documented in the T11 publication. You can access this publication at http://www.t11.org/ftp/t11/pub/fc/bb-5/09-056v5.pdf.

FCoE operates directly above Ethernet in the network protocol stack, in contrast to iSCSI, which runs on top of TCP and IP. As a consequence, FCoE cannot be routed across IP networks. In addition, traditional Ethernet has no priority-based flow control, unlike Fibre Channel. As a result, FCoE requires modifications to the Ethernet standard to support priority-based flow control mechanisms (this reduces frame loss from congestion). The IEEE standards body added priorities via Data Center Bridging (DCB). The three primary extensions are:

- Encapsulation of native Fibre Channel frames into Ethernet frames
- Extensions to the Ethernet protocol itself to enable lossless Ethernet links
- Mapping between Fibre Channel N_Port IDs (aka FCIDs) and Ethernet MAC addresses (see the sketch after this list)
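The third extension can be made concrete with a small Python sketch (an illustration, not from the guide): under FC-BB-5, a Fabric Provided MAC Address (FPMA) is formed by joining the fabric's 24-bit FC-MAP prefix (default 0E:FC:00) with the 24-bit FCID assigned at fabric login. The FCID value below is hypothetical.

# Sketch: build an FCoE Fabric Provided MAC Address (FPMA).
# The FPMA is the 24-bit FC-MAP prefix followed by the 24-bit FCID.
def fpma(fcid: str, fc_map: str = "0E:FC:00") -> str:
    """Return the FPMA for a given FCID (e.g. '01:0A:00', a hypothetical value)."""
    prefix, suffix = fc_map.split(":"), fcid.split(":")
    assert len(prefix) == 3 and len(suffix) == 3, "FC-MAP and FCID are 24 bits each"
    return ":".join(prefix + suffix)

print(fpma("01:0A:00"))  # -> 0E:FC:00:01:0A:00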

The primary purpose of the FCoE protocol in the data center is Storage Area Networks (SANs). FCoE enables cable reduction due to converged networking possibilities. To achieve these goals, three hardware components must be in place:

- Converged Network Adapters (CNAs)
- Lossless Ethernet links via DCB extensions
- An FCoE-capable switch, typically referred to as a Fibre Channel Forwarder (FCF)

A FIP Snooping Bridge (FSB) is a fourth, optional, component that can be introduced and still allow full FCoE functionality. In traditional Fibre Channel networks, FC switches are considered trusted. Other FC devices must log directly into the switch before they can communicate with the rest of the fabric. This login process is accomplished through a protocol called the FCoE Initialization Protocol (FIP), which operates at Layer 2 for end-point discovery and fabric association. With FCoE, an Ethernet bridge typically exists between the End Node (ENode) and the FCF; this prevents a FIP session from properly establishing. To allow ENodes to log in to the FCF, FSB is enabled on the Ethernet bridge. By snooping the FIP packets during the discovery and login process, the intermediate bridge can implement data integrity using ACLs that permit valid FCoE traffic between the ENode and FCF.

Note: In this document, both the Dell Networking MXL and the Dell PowerEdge IOA can behave as an FSB if the appropriate features are enabled.


2.2 Data Center Bridging

Data Center Bridging (DCB) is a collection of mechanisms that have been added to the existing Ethernet protocol. These mechanisms allow Ethernet to become lossless, which is a prerequisite for FCoE. The four additions made to the existing Ethernet protocol are:

- Priority-based Flow Control (PFC) (IEEE 802.1Qbb)
- Enhanced Transmission Selection (ETS) (IEEE P802.1Qaz)
- Congestion Notification (CN) (IEEE P802.1Qau)
- Data Center Bridging Capability Exchange Protocol (DCBX)

2.3 N_Port ID Virtualization and N_Port Virtualization

N_Port ID Virtualization (NPIV) allows an N_Port to have multiple World Wide Port Names (WWPNs) associated with it. In traditional FC fabrics, an N_Port is associated with a single WWPN. After the initial FLOGI process, an NPIV-enabled physical N_Port can issue subsequent WWPNs. NPIV is required when dealing with numerous servers that sit behind a single switch, as is found in an M1000e blade enclosure.

The purpose of N_Port Virtualization (NPV) is different from NPIV. NPV provides simplified management and increased interoperability in large SAN deployments. Each edge FC switch requires a domain ID, and a SAN or VSAN is limited to 239 domain IDs. This number can be kept manageable by having some of the edge devices act as N_Port proxies, that is, run in NPV mode.

NPV introduces a new Fibre Channel port type, the NP_Port. This port connects to an F_Port and acts as a proxy for the N_Ports behind the NPV-enabled switch. The NPV-enabled switch then registers WWPNs upstream via NPIV.

2.4 Cisco vPC and Dell Networking FTOS Multi-chassis EtherChannel Technology

Cisco vPC and Dell Networking FTOS VLT are separate but similar Layer 2 solutions. vPC and VLT are virtualization technologies that present a pair of identical switches as a single logical Layer 2 node to access-layer switches and servers. In other words, these technologies allow links that are physically connected to two different switches to appear as a single port channel to a third device. This device can be a switch, a server or any other networking device that supports link aggregation.

The primary benefits of deploying these technologies are the elimination of Spanning Tree Protocol (STP) blocked ports and, because no ports are blocked, the use of all available uplink bandwidth. These two benefits lead to a simplified network design while growing the Layer 2 network in a controlled manner.

2.5 Multi-Path I/O

There are generally two types of multi-path access methods for communicating from a host to an external device. For general networking communications, the preferred method of redundant connections is teaming multiple NICs into a single, virtual network connection entity. For storage, the preferred method is the use of Multi-Path I/O (MPIO).


3 Hardware Used in this Deployment Guide

The following section highlights the hardware used in this document.

3.1 Dell PowerEdge M1000e Blade Enclosure Overview

Powerful management tools

The PowerEdge M1000e blade enclosure allows you to focus more on growing your business or managing your organization and less on managing computing resources by using an array of blade management tools that help make your job easier. These tools include:

- Centralized management controllers that provide redundant and secure access paths for you to manage multiple enclosures and dozens of blades from a single console.
- Dynamic power management that enables you to set high and low power thresholds to help ensure that blades operate efficiently within your power envelope.

Flexible remote management

Manage the blades in the M1000e chassis individually or as groups, in single or multiple enclosures, and within a data center or in remote locations around the world with the Dell Chassis Management Controller (CMC). It provides:

- A single secure interface for inventory and configuration, as well as monitoring and alerting, for the enclosure and all installed components.
- Multi-chassis management from a single, embedded, agentless interface spanning nine enclosures and up to 288 servers.
- Real-time power and thermal monitoring and management, including AC power consumption with resettable peak and minimum values.
- System-level power limiting and slot-based power prioritization.

Outstanding efficiency

The M1000e blade enclosure allows you to take advantage of the thermal design efficiencies of Dell's Energy Smart technology, including:

- Up to six hot-swap ultra-efficient power supplies.
- Nine hot-swap redundant fan modules with dynamic power-efficient fans.
- Optimized airflow design to efficiently cool the enclosure and enable exceptional performance in a low power envelope.


3.2 Server – PowerEdge M620 Blade Server

The Dell PowerEdge M620 blade server (Figure 4) is a feature-rich, 2-socket blade server designed for maximum performance with extreme density.

Figure 4   M620 Blade Server

Designed for taxing workloads, such as email, database and virtual environments, the M620 blade server is an ideal blend of density, performance, efficiency and scalability. The M620 delivers unprecedented memory density and superb performance with no compromise on enterprise-class features.

- Intel Xeon processor E5-2600 and E5-2600 v2 product families, supporting up to twelve cores per processor.
- Memory:
  - Up to 768GB (24 DIMM slots): 2GB/4GB/8GB/16GB/32GB DDR3 up to 1866MT/s.
  - Up to 1.5TB (24 DIMM slots): 64GB DDR3 LRDIMM up to 1600MT/s (with the Intel Xeon processor E5-2600 v2 product family only).
- Support for a failsafe hypervisor. Protect against hardware failure and maximize virtualization uptime by running the hypervisor on an optional SD card and installing a backup copy on the other mirrored SD card.
- The M620 blade server takes advantage of the shared power, cooling and networking infrastructure of the M1000e blade enclosure, coupled with the Dell Chassis Management Controller to manage individual or groups of M620 blade servers.

3.3 M1000e I/O Modules

The Dell I/O Modules used in this document are the Dell Networking MXL and the PowerEdge M I/O Aggregator. Both of these modules were designed with ease of use in mind and support interchangeable FlexIO Expansion Modules.


3.3.1 Dell Networking MXL Overview

The MXL 10/40GbE switch (Figure 5) is a Layer 2/3 blade switch with two fixed 40GbE ports on the base module and support for two optional plug-in modules (FlexIO Expansion Modules). The MXL 10/40GbE switch runs the Dell Networking Operating System, providing switching, bridging and routing functionality for transmitting data, storage and server traffic.

Figure 5   Dell Networking MXL (fixed 40GbE QSFP+ ports, Expansion Slot 0 and Expansion Slot 1)

3.3.2 Dell PowerEdge M I/O Aggregator Overview

The IOA (Figure 6) is a zero-touch blade switch with two fixed 40GbE ports on the base module and support for two optional plug-in modules (FlexIO Expansion Modules). The Aggregator runs the Dell Networking Operating System and has the capability to auto-configure as an unmanaged switch with bridging and multiplexing functionality. In one of these automated modes (SMUX, VLT or Stacking), all VLANs are allowed, as are any DCBx, iSCSI or FCoE settings. In addition, the external ports are all part of the same LAG, which obviates the need for the Spanning Tree Protocol (STP) on the IOA.

Figure 6   Dell PowerEdge M I/O Aggregator (fixed 40GbE QSFP+ ports, Expansion Slot 0 and Expansion Slot 1)


3.3.3 FlexIO Expansion Modules

The Dell Networking MXL and PowerEdge M I/O Aggregator support a combination of FlexIO expansion modules (Figure 7). The four types of FlexIO expansion modules are:

- 4-port 10GBase-T FlexIO module (only one 10GBase-T module can be used)
- 4-port 10Gb SFP+ FlexIO module
- 2-port 40Gb QSFP+ FlexIO module
- 4-port 8Gb Fibre Channel FlexIO module

Figure 7   FlexIO expansion modules

Note: Using the FC FlexIO module that provides 8Gb Fibre Channel interfaces is not covered in this deployment guide.


3.3.3.1 I/O Module Port Mapping

The connections are 10 Gigabit Ethernet connections for basic Ethernet traffic, iSCSI storage traffic or FCoE storage traffic. In a typical M1000e configuration of 16 half-height blade servers, ports 1-16 are used and ports 17-32 are disabled. However, if quad-port adapters or quarter-height blade servers are used, ports 17-32 will be enabled.

Table 1 lists the port mapping for the two expansion slots on the Dell Networking MXL and Dell PowerEdge IOA, as well as the internal 10/1 GbE interfaces of the blade servers installed in the M1000e chassis (a short sketch of these mapping rules follows the table).

Table 1   Dell Networking MXL and Dell PowerEdge M I/O Aggregator – Port Mapping for the M1000e Blade Enclosure

Port bank                      2x40Gb QSFP+   8x10Gb SFP+ (breakout)   4x10Gb SFP+   4x10Gb 10GBase-T   4x8Gb FC
Expansion Slot 1               49, 53         49-56                    49-52         49-52              49-52
Expansion Slot 0               41, 45         41-48                    41-44         41-44              41-44
Fixed QSFP+ ports (base)       33, 37         33-40                    -             -                  -
Internal 10/1 GbE interfaces   1-32           1-32                     1-32          1-32               1-32
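The mapping rules in Table 1 can be restated as a short Python sketch (an illustration, not part of the guide); the module names are shorthand labels, not FTOS syntax.

# Sketch of Table 1: which external ports each FlexIO/QSFP+ mode enables.
BANKS = {"fixed": 33, "slot0": 41, "slot1": 49}  # first port number of each bank

def enabled_ports(bank, module):
    """Return the external port numbers a module enables in a bank."""
    base = BANKS[bank]
    if module == "2x40G-QSFP+":            # two 40GbE ports, four numbers apart
        return [base, base + 4]
    if module == "8x10G-breakout":         # QSFP+ split into eight 10GbE ports
        return list(range(base, base + 8))
    if module in ("4x10G-SFP+", "4x10GBase-T", "4x8G-FC"):
        if bank == "fixed":
            raise ValueError("fixed QSFP+ ports accept only QSFP+ modes")
        return list(range(base, base + 4))
    raise ValueError("unknown module: " + module)

print(enabled_ports("slot1", "2x40G-QSFP+"))  # [49, 53]
print(enabled_ports("slot0", "4x10G-SFP+"))   # [41, 42, 43, 44]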


3.4 Cisco Nexus 5548UP Overview

The Cisco Nexus 5548UP is a 1RU 10 Gigabit Ethernet, Fibre Channel and FCoE capable switch offering up to 48 ports. The switch has 32 unified ports and a single expansion slot. The switch operates in NPIV mode by default, and NPV mode can be enabled if required.

Note: This document utilizes command line interface (CLI) commands to configure the devices. Cisco supplies various graphical interfaces for managing its equipment. These interfaces may make it easier to configure the Cisco switches.

3.5 EMC VNX 5300 Overview

The VNX5300, the introductory model for the VNX unified platform, is designed for the mid-range entry space. This model provides block and file services, file-only services or block-only services, and uses a Disk-Processor Enclosure (DPE).

The VNX5300 uses a 1.6 GHz, four-core Xeon 5600 processor with 8 GB RAM and supports a maximum of 125 drives with the following block-based host connectivity options: FC, FCoE and iSCSI.


4 Preparation

The following sections contain information on gathering and verifying the required FCoE component addresses and numbers. They also contain a list of the firmware versions of the components that were used to validate the configurations.

4.1.1 WWN/MAC Addresses

Obtain the WWPNs of the network adapters in the blade servers and convert them to FIP MAC addresses by performing the following steps.

1. Log in to the Chassis Management Controller (CMC).
2. In the left pane, select Server Overview (Figure 8).

Figure 8   M1000e Chassis Management Controller WWN/MAC Screen (Server in Slot 1)

3. Once the Server Overview page populates, select WWN/MAC in the top pane. The screen shows all of the server's WWN/MAC addresses.
4. Scroll down to Slot 1; in the Filter drop-down, select Fabric B, and in the next drop-down select Fibrechannel.
5. Record the WWPNs (Server-Assigned or Chassis-Assigned). In this example, the first Chassis-Assigned B1 WWPN is 20:01:5C:F9:DD:16:EF:07 and the first B2 WWPN is 20:01:5C:F9:DD:16:F0:10.
6. Next, derive the FIP MAC address from the WWPN by dropping the first two sets of numbers from the WWPN, as shown in the sketch below. For example, for Server 1 the WWPN is 20:01:5C:F9:DD:16:EF:07; the first two sets of numbers (20:01) are dropped, leaving the FIP MAC address 5C:F9:DD:16:EF:07.
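Step 6 is simple enough to automate. A minimal Python sketch of the derivation, using the example addresses from this section:

# Sketch: derive the FIP MAC address from a WWPN by dropping the
# first two octets (the 20:01 prefix), as described in step 6.
def fip_mac_from_wwpn(wwpn: str) -> str:
    octets = wwpn.split(":")
    assert len(octets) == 8, "a WWPN has eight octets"
    return ":".join(octets[2:])

print(fip_mac_from_wwpn("20:01:5C:F9:DD:16:EF:07"))  # 5C:F9:DD:16:EF:07
print(fip_mac_from_wwpn("20:01:5C:F9:DD:16:F0:10"))  # 5C:F9:DD:16:F0:10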


4.1.2 Virtual SAN (VSAN) and Virtual Fibre Channel (VFC)

Once the Fibre Channel related addresses have been gathered, the VSAN and VFC numbering can be planned. For the configurations contained in this paper, VSAN 2 and VFC 101 are assigned to SAN A, and VSAN 3 and VFC 201 are assigned to SAN B. Keep in mind that the VSAN number cannot be the same on SAN A and SAN B, must be between 1 and 4094, and should be chosen to be easy to manage and to facilitate troubleshooting.
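These planning rules are easy to check mechanically. A quick Python sketch (an illustration, not from the guide), using the VSAN/VFC/VLAN plan from this section:

# Sketch: sanity-check the fabric plan. SAN A and SAN B must use
# different VSAN IDs, and every VSAN ID must fall between 1 and 4094.
plan = {
    "SAN A": {"vsan": 2, "vfc": 101, "fcoe_vlan": 1000},
    "SAN B": {"vsan": 3, "vfc": 201, "fcoe_vlan": 1001},
}
vsans = [fabric["vsan"] for fabric in plan.values()]
assert len(set(vsans)) == len(vsans), "VSAN IDs must differ between fabrics"
assert all(1 <= v <= 4094 for v in vsans), "VSAN IDs must be 1-4094"
print("fabric plan OK")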

4.1.3 Configuration Table

The following table (Table 2) shows the configuration information for the devices (servers, switches, network adapters) used in the scenarios covered in this document.

Table 2   Configuration Information

                                                 SAN A                     SAN B
Storage         Storage Processor WWPN           50:06:01:6F:3E:E0:18:70   50:06:01:6F:3E:E0:18:70
                Boot LUN                         0                         0
Server 1        VSAN Number                      2                         3
                FCoE VLAN                        1000                      1001
                VFC Number                       101                       201
                Binding method                   MAC                       MAC
                Physical Port                    1/1-2                     1/1-2
Network Adapter WWPN (20:01 + FIP MAC)           20:01:5C:F9:DD:16:EF:07   20:01:5C:F9:DD:16:F0:10
                FIP MAC                          5C:F9:DD:16:EF:07         5C:F9:DD:16:F0:10
                WWNN (20:00 + FIP MAC)           20:00:5C:F9:DD:16:EF:07   20:00:5C:F9:DD:16:F0:10
Cisco 5548UP    Physical Ports (Fibre Channel)   FC 2/1-2                  FC 2/1-2
                Physical Ports (vPC Ports)       1/17-18                   1/17-18
Cisco MDS 9148  Physical Ports                   FC 1/13-14                FC 1/13-14


4.1.4 Component Information

The following table (Table 3) lists the components and firmware revisions used in the scenarios covered in this document.

Table 3   Component Information

Category          Component                                           Version
Chassis / Server  M1000e Chassis Management Controller                4.45
                  Dell PowerEdge M I/O Aggregator                     9.6
                  Dell Networking MXL                                 9.6
                  Dell PowerEdge M620 Blade Server BIOS               2.4.3
                  Lifecycle Controller                                1.4.2.12
                  Broadcom 10Gb 2P 57810S-k Mezzanine Card            7.10.18
                  QLogic 10Gb 2P QME8262-k Mezzanine Card             02.10.07
                  Intel 10Gb 2P X520-k blade Network Daughter Card    01.03.10
Storage           EMC 3U VNX 5300                                     05.32.000.5.008
Network           Cisco Nexus 7004 (system and kickstart)             6.2(8)
                  Cisco Nexus 5548UP (system and kickstart)           7.0(2)N1(1)
                  Cisco MDS 9148 (system and kickstart)               6.2(9)
Cables            SFP+ Optical Transceivers (SR or LR) with 5-meter fiber cables


5 Configuration One – Dell MXL or IOAs in Nexus Fabric Mode

Figure 9   Configuration One - Dell MXL or IOAs in Nexus Fabric Mode
(Diagram: Cisco Nexus 7000 Series core, a vPC pair of Cisco Nexus 5500 Series switches, Dell Networking MXL or Dell PowerEdge I/O Aggregator modules in the M1000e chassis, and SAN A/SAN B storage, with FCoE, Ethernet and FC links.)

In Configuration One (Figure 9), the Cisco Nexus 5500 Series top-of-rack switch is left in the default fabric mode, which allows the Nexus switch to perform as a fabric services provider and a Fibre Channel switch. For the storage fabric, the following configuration is a default FCoE single-hop configuration with FSBs in a converged network environment. Configurations for both SAN A and SAN B are provided. For upstream Ethernet connectivity to the spine or core, a vPC domain is created, allowing all available bandwidth to be utilized.

5.1 Cisco Nexus 5548UP Setup

In this configuration, the Cisco Nexus 5548UP switch is the primary configuration point for the rest of the solution. The M1000e I/O modular switches will pass DCB information from the Nexus 5548UP switch down to the servers' CNAs. The steps required to configure the Nexus 5548UP switches are shown on the following pages.

Note: The following instructions have been included as an attachment (Fabric_Mode-Config_Sheets.pdf) to this document.

In this first section, the required features are enabled (Figure 10). Then the interfaces are instantiated and, finally, the FIP MAC address is bound to the Virtual Fibre Channel (VFC) interface.

Nexus_5548-1

Enable the LACP, FCoE, NPIV and vPC features.
feature lacp
feature fcoe
feature npiv
feature vpc

Create the VLANs, VSAN and port-channel interfaces used.
vsan database
vsan 2
vlan 20,30-32,88
vlan 1000
fcoe vsan 2
interface port-channel 8
interface port-channel 20

Create the VFC interface, bind the FIP MAC address and bring the VFC interface out of administrative shutdown.
interface vfc101
bind mac-address 5C:F9:DD:16:EF:03
no shutdown

Nexus_5548-2

Enable the LACP, FCoE, NPIV and vPC features.
feature lacp
feature fcoe
feature npiv
feature vpc

Create the VLANs, VSAN and port-channel interfaces used.
vsan database
vsan 3
vlan 21,30-32,88
vlan 1001
fcoe vsan 3
interface port-channel 8
interface port-channel 21

Create the VFC interface, bind the FIP MAC address and bring the VFC interface out of administrative shutdown.
interface vfc201
bind mac-address 5C:F9:DD:16:F0:10
no shutdown

Figure 10   Enable Global Switch Features and Configure Interfaces


Next, the created VSAN is populated with the appropriate interfaces (Figure 11). In production environments, additional VFCs would be created for each server occupying the M1000e enclosure and added to the appropriate VSAN. The port channels are then configured, and the appropriate physical interfaces are added to the corresponding upstream and downstream port-channel groups.

5548-1

Associate the interfaces created earlier with the appropriate VSAN.
vsan database
vsan 2 interface vfc101
vsan 2 interface fc2/1
vsan 2 interface fc2/2

Add downstream and upstream interfaces to the appropriate port channels.
interface ethernet 1/21-22
channel-group 20 mode active
desc FCoE_downlink_to_IOA-MXL
interface ethernet 1/9-10
channel-group 8 mode active
desc Ethernet_uplink_to_7K

Configure the port channels created previously with the applicable settings.
interface port-channel 8
desc port-channel_eth9+10_to_7k
switchport mode trunk
switchport trunk allowed vlan 30-32,88
interface port-channel 20
desc port-channel_eth1+2_to_IOA-MXL
switchport mode trunk
switchport trunk native vlan 20
switchport trunk allowed vlan 20,1000

5548-2

Associate the interfaces created earlier with the appropriate VSAN.
vsan database
vsan 3 interface vfc201
vsan 3 interface fc2/1
vsan 3 interface fc2/2

Add downstream and upstream interfaces to the appropriate port channels.
interface ethernet 1/21-22
channel-group 21 mode active
desc FCoE_downlink_to_IOA-MXL
interface ethernet 1/9-10
channel-group 8 mode active
desc Ethernet_uplink_to_7K

Configure the port channels created previously with the applicable settings.
interface port-channel 8
desc port-channel_eth9+10_to_7k
switchport mode trunk
switchport trunk allowed vlan 30-32,88
interface port-channel 21
desc port-channel_eth1+2_to_IOA-MXL
switchport mode trunk
switchport trunk native vlan 21
switchport trunk allowed vlan 21,1001

Figure 11   Configure VSAN Database and Upstream/Downstream Port Channels


In Figure 12, the Fibre Channel interfaces leading to the storage array are brought out of administrative shutdown and the FC fabric is built and activated.

At this point, the command show flogi database can be run to verify that both the storage array and the servers' CNAs have completed successful Fabric Logins (FLOGI).

5548-1

Bring the Fibre Channel interfaces out of administrative shutdown.
interface fc2/1-2
no shutdown

Create the zone and add all participating members.
zone name zone1SAN_A vsan 2
member pwwn <storage processor WWPN>
member interface fc2/1
member interface fc2/2

Create the zoneset, add the zone and activate it.
zoneset name set1SAN_A vsan 2
member zone1SAN_A
zoneset activate name set1SAN_A vsan 2

5548-2

Bring the Fibre Channel interfaces out of administrative shutdown.
interface fc2/1-2
no shutdown

Create the zone and add all participating members.
zone name zone1SAN_B vsan 3
member pwwn <storage processor WWPN>
member interface fc2/1
member interface fc2/2

Create the zoneset, add the zone and activate it.
zoneset name set1SAN_B vsan 3
member zone1SAN_B
zoneset activate name set1SAN_B vsan 3

Figure 12   Bring Fibre Channel Ports Online and Configure FC Fabric


Next, a vPC peer link is created (Figure 13). First, the vPC feature is enabled on both switches and a management IP is assigned. Next, the vPC domain is configured using a domain ID of 55 and the keepalive address of the peer switch.

5548-1

Configure the management interface.
configure
interface mgmt 0
ip address 172.25.188.60 255.255.0.0
no shutdown
end

Create the vPC domain, assign the role priority and assign the keepalive management IP of 5548-2.
configure
vpc domain 55
role priority 1
peer-keepalive dest 172.25.189.60
end

5548-2

Configure the management interface.
configure
interface mgmt 0
ip address 172.25.189.60 255.255.0.0
no shutdown
end

Create the vPC domain, assign the role priority and assign the keepalive management IP of 5548-1.
configure
vpc domain 55
role priority 65535
peer-keepalive dest 172.25.188.60
end

Figure 13   Configure vPC Domain and Keepalive Address


Finally, a port channel with the same ID as the vPC domain is created (Figure 14). It is important that the VLANs selected for FCoE traffic are NOT allowed to traverse this trunk.

5548-1

Configure the port channel for the vPC peer link: create the port channel, enable switchport mode trunk and assign it as the vPC peer link.
configure
interface port-channel 55
description "vPC Peer-Link"
switchport mode trunk
switchport trunk allowed vlan except 1000-1001
no shutdown
vpc peer-link
end

Assign the member interfaces to the port channel and enable LACP.
configure
interface ethernet 1/16-17
description "vPC Peer-Link"
switchport mode trunk
channel-group 55 mode active
no shutdown
end

5548-2

Configure the port channel for the vPC peer link: create the port channel, enable switchport mode trunk and assign it as the vPC peer link.
configure
interface port-channel 55
description "vPC Peer-Link"
switchport mode trunk
switchport trunk allowed vlan except 1000-1001
no shutdown
vpc peer-link
end

Assign the member interfaces to the port channel and enable LACP.
configure
interface ethernet 1/16-17
description "vPC Peer-Link"
switchport mode trunk
channel-group 55 mode active
no shutdown
end

Figure 14   vPC Port Channel Configuration


5.2 Dell Networking MXL Setup

The steps required to configure the Dell Networking MXL are shown in this section.

First, enable FIP snooping and change the default VLAN. The downstream and upstream interfaces are then configured for DCBx (Figure 15). In this case, all DCBx settings are adopted from the Cisco 5548UP ToR.

MXL_IOA_1

Enable the FIP snooping feature, enable the LLDP protocol, configure service-class dynamic dot1p and set the global default VLAN.
feature fip-snooping
protocol lldp
exit
service-class dynamic dot1p
default vlan-id 20

Configure the downstream, server-facing ports.
interface range te 0/1
portmode hybrid
switchport
protocol lldp
dcbx port-role auto-downstream
exit
no shutdown

Configure the upstream, FCF-switch-facing external ports to be part of a port channel.
interface range te 0/51 - 52
port-channel-protocol LACP
port-channel 1 mode active
exit
protocol lldp
advertise management-tlv system-name
no advertise dcbx-tlv ets-reco
dcbx port-role auto-upstream
exit
no shutdown

MXL_IOA_2

Enable the FIP snooping feature, enable the LLDP protocol, configure service-class dynamic dot1p and set the global default VLAN.
feature fip-snooping
protocol lldp
exit
service-class dynamic dot1p
default vlan-id 21

Configure the downstream, server-facing ports.
interface range te 0/1
portmode hybrid
switchport
protocol lldp
dcbx port-role auto-downstream
exit
no shutdown

Configure the upstream, FCF-switch-facing external ports to be part of a port channel.
interface range te 0/51 - 52
port-channel-protocol LACP
port-channel 1 mode active
exit
protocol lldp
advertise management-tlv system-name
no advertise dcbx-tlv ets-reco
dcbx port-role auto-upstream
exit
no shutdown

Figure 15   Dell Networking MXL Configuration for FIP Snooping


Next, the upstream port channel is configured, and the appropriate FCoE-designated VLAN is set on the corresponding interfaces (Figure 16).

MXL_IOA_1

Configure the upstream port channel, add the interfaces to the FCoE VLAN and enable FIP snooping on the FCoE VLAN.
interface port-channel 1
portmode hybrid
switchport
fip-snooping port-mode fcf
no shutdown
exit
interface vlan 1000
tagged TenGigabitEthernet 0/1
tagged Port-channel 1
fip-snooping enable
no shutdown

MXL_IOA_2

Configure the upstream port channel, add the interfaces to the FCoE VLAN and enable FIP snooping on the FCoE VLAN.
interface port-channel 1
portmode hybrid
switchport
fip-snooping port-mode fcf
no shutdown
exit
interface vlan 1001
tagged TenGigabitEthernet 0/1
tagged Port-channel 1
fip-snooping enable
no shutdown

Figure 16   Dell Networking MXL Enabling Uplinks for FCoE FIP Snooping


6 Configuration Two – Dell MXL or IOA in Nexus NPV Mode with Cisco MDS 9148

Usually the Cisco Nexus 5548UP top-of-rack switch is configured in NPV mode to pass FC traffic out to another terminating switch, in this example the Cisco MDS 9148. The following figure (Figure 17) and examples describe a two-link LAG from an IOA to the Cisco 5548UP ToR switch configured in NPV mode.

Figure 17   Configuration Two – NPV with Cisco MDS
(Diagram: Cisco Nexus 7000 Series core, a vPC pair of Cisco Nexus 5500 switches in NPV mode, Cisco MDS 9148 Multilayer Fabric Switches in SAN A and SAN B, and Dell Networking MXL or Dell PowerEdge I/O Aggregator modules in the M1000e chassis, with FCoE, Ethernet and FC links.)


6.1 Cisco Nexus 5548UP Setup

By default, the Cisco Nexus 5000 series switches operate in NPIV mode. A disadvantage of running in this mode in a large data center with a large number of edge FC switches is the limited number of domain IDs. With the Cisco Nexus configured for NPV mode, the switch will not provide the essential fabric services itself, but it will pass these services from an upstream fabric-services core/aggregation device through to end devices. Typically, in a Cisco environment this upstream device is a Cisco MDS multilayer fabric switch operating in the default fabric mode.

Note: The following instructions have been included as an attachment (NPV_Mode-Config_Sheets.pdf) to this document.

The following pages (Figure 18 through Figure 21) show the steps required to configure the Nexus 5548UP switches.

5548-1

For NPV configurations with FC SAN switches, NPV must be set. Once this command is issued, the switch will reload.
feature npv

Enable the LACP, FCoE, NPIV and vPC features.
feature lacp
feature fcoe
feature npiv
feature vpc

Configure the VLANs, VSAN and port-channel interfaces.
vsan database
vsan 2
vlan 20,30-32,88
vlan 1000
fcoe vsan 2
interface port-channel 8
interface port-channel 20

5548-2

For NPV configurations with FC SAN switches, NPV must be set. Once this command is issued, the switch will reload.
feature npv

Enable the LACP, FCoE, NPIV and vPC features.
feature lacp
feature fcoe
feature npiv
feature vpc

Configure the VLANs, VSAN and port-channel interfaces.
vsan database
vsan 3
vlan 21,30-32,88
vlan 1001
fcoe vsan 3
interface port-channel 8
interface port-channel 21

Figure 18   Enabling Global Switch Features and Interfaces


5548-1

Create the VFC interface, bind the FIP MAC address and bring the VFC interface out of administrative shutdown.
interface vfc101
bind mac-address 5C:F9:DD:16:EF:03
no shutdown

Associate the interfaces created earlier with the appropriate VSAN ID.
vsan database
vsan 2 interface vfc101
vsan 2 interface fc2/1
vsan 2 interface fc2/2

Add downstream and upstream interfaces to the appropriate port channels.
interface ethernet 1/1-2
channel-group 20 mode active
desc FCoE_downlink_to_IOA-MXL
interface ethernet 1/9-10
channel-group 8 mode active
desc Ethernet_uplink_to_7K

5548-2

Create the VFC interface, bind the FIP MAC address and bring the VFC interface out of administrative shutdown.
interface vfc201
bind mac-address 5C:F9:DD:16:F0:10
no shutdown

Associate the interfaces created earlier with the appropriate VSAN ID.
vsan database
vsan 3 interface vfc201
vsan 3 interface fc2/1
vsan 3 interface fc2/2

Add downstream and upstream interfaces to the appropriate port channels.
interface ethernet 1/1-2
channel-group 21 mode active
desc FCoE_downlink_to_IOA-MXL
interface ethernet 1/9-10
channel-group 8 mode active
desc Ethernet_uplink_to_7K

Figure 19   VFC Configuration and VSAN Database Configuration


    Associate interfaces created earlier with the appropriate VSAN.

    5548-1

    interface fc2/1-2

    no shutdown

    Associate interfaces created earlier with the appropriate vsan.

    5548-2

    interface fc2/1-2

    no shutdown

    Configure the port channels created previously with applicable settings.

    interface port-channel 8

    desc port-channel_eth9+10_to_7k

    switchport mode trunk

    switchport trunk allowed vlan 30-32,88

    interface port-channel 20

    desc port-channel_eth1+2_to_IOA-MXL

    switchport mode trunk

    switchport trunk native vlan 20

    switchport trunk allowed vlan 20,1000

    Configure the port channels created previously with applicable settings.

    interface port-channel 8

    desc port-channel_eth9+10_to_7k

    switchport mode trunk

    switchport trunk allowed vlan 30-32,88

    interface port-channel 21

    desc port-channel_eth1+2_to_IOA-MXL

    switchport mode trunk

    switchport trunk native vlan 21

    switchport trunk allowed vlan 21,1001

    Enable Fibre Channel Interfaces and Upstream/Downstream Port Channels
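    A quick status check of the FC interfaces and port channels can be run on each switch before continuing; for example, on 5548-1 (standard NX-OS commands, shown as a sketch):

    show interface brief
    show port-channel summary

    The port channels should show their member ports bundled (P flag) once LACP negotiation with the peer devices completes.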


    Next, a vPC peer link is created (Figure 21). First vPC is enabled on both switches and a management IP is

    assigned. Then the vPC domain is configured using a value of 1 with the keep-alive address of the peer

    switch.

    Enable vPC by configuring the management interface and creating a vPC domain ID.

    5548-1

    configure

    interface mgmt 0

    ip address 172.25.188.60 255.255.0.0

    no shutdown

    end

    Enable vPC by configuring the management interface and creating a vPC domain ID.

    5548-2

    configure

    interface mgmt 0

    ip address 172.25.189.60 255.255.0.0

    no shutdown

    end

    Create a VPC domain. Assign role priority. Assign the keepalive management IP of 5548-1.

    configure

    vpc domain 55

    role priority 65535

    peer-keepalive dest 172.25.188.60

    end

    Create a VPC domain. Assign role priority. Assign the keepalive management IP of 5548-2.

    configure

    vpc domain 55

    role priority 1

    peer-keepalive dest 172.25.189.60

    end

    Configure vPC domain and keep alive address
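    The keepalive link can be verified from either switch once both management interfaces are reachable. A minimal check using a standard NX-OS command:

    show vpc peer-keepalive

    The output should report the peer-keepalive status as alive and show the configured destination address.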


    As a final step, a port channel with the same ID as the vPC domain is created (Figure 22). The designated FCoE VLANs should not be allowed to traverse this vPC peer link.

    Configure port channel and port channel members for the vPC peer-link. Create a port channel. Enable switchport mode trunk. Assign as a vpc peer-link.

    configure

    interface port-channel 55

    description vPC Peer-Link

    switchport mode trunk

    switchport trunk allowed vlan except 1000-1001

    no shutdown

    vpc peer-link

    end

    5548-1

    Assign the interfaces to the port channel and enable LACP.

    configure

    interface ethernet 1/16-17

    description vPC Peer-Link

    switchport mode trunk

    channel-group 55 mode active

    no shutdown

    end

    Configure port channel and port channel members for the vPC peer-link. Create a port channel. Enable switchport mode trunk. Assign as a vpc peer-link.

    configure

    interface port-channel 55

    description vPC Peer-Link

    switchport mode trunk

    switchport trunk allowed vlan except 1000-1001

    no shutdown

    vpc peer-link

    end

    5548-2

    Assign the interfaces to the port channel and enable LACP.

    configure

    interface ethernet 1/16-17

    description vPC Peer-Link

    switchport mode trunk

    channel-group 55 mode active

    no shutdown

    end

    vPC Port Channel Configuration
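    With the peer link in place, the overall vPC state can be confirmed from either 5548 (standard NX-OS commands, shown as a sketch):

    show vpc brief
    show vpc consistency-parameters global

    The peer status should read peer adjacency formed ok, and the global consistency parameters should show no mismatches between the two switches.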


    6.2 Dell Networking MXL Setup

    The following pages show the steps required to configure the Dell Networking MXL (Figure 23 and Figure 24). First, enable FIP snooping and change the default VLAN. The downstream and upstream interfaces are then configured for DCBx. In this case, all DCBx settings are adopted from the Cisco 5548UP ToR switches.

    Enable features, configure all pre-planned VLANs and other commands: enable the FIP snooping feature, enable the LLDP protocol, configure service-class dynamic dot1p and set the global default VLAN.

    feature fip-snooping

    protocol lldp

    exit

    service-class dynamic dot1p

    default vlan-id 20

    MXL_IOA_1 MXL_IOA_2

    Configure the downstream, server facing, ports.

    interface range te 0/1

    portmode hybrid

    switchport

    protocol lldp

    dcbx port-role auto-downstream

    no shutdown

    feature fip-snooping

    protocol lldp

    exit

    service-class dynamic dot1p

    default vlan-id 21

    Configure the downstream, server facing, ports.

    interface range te 0/1

    portmode hybrid

    switchport

    protocol lldp

    dcbx port-role auto-downstream

    no shutdown

    Dell Networking MXL Setup (Pt. 1)
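    To confirm that the server-facing port has negotiated DCBx with the CNA, the interface DCBx state can be inspected. A sketch, assuming the FTOS release in use supports the dcbx detail output:

    show interfaces tengigabitethernet 0/1 dcbx detail

    The output should show the port role as auto-downstream along with the PFC and ETS parameters in effect on the port.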


    Configure the upstream port-channel and then add all interfaces to the FCoE VLAN. Enable fip-snooping on the FCoE VLAN.

    interface port-channel 1

    portmode hybrid

    switchport

    fip-snooping port-mode fcf

    no shutdown

    exit

    interface vlan 1000

    tagged TenGigabitEthernet 0/1

    tagged Port-channel 1

    fip-snooping enable

    no shutdown

    MXL_IOA_1 MXL_IOA_2

    interface port-channel 1

    portmode hybrid

    switchport

    fip-snooping port-mode fcf

    no shutdown

    exit

    interface vlan 1001

    tagged TenGigabitEthernet 0/1

    tagged Port-channel 1

    fip-snooping enable

    no shutdown

    Configure the upstream, FCF-facing external ports to be part of a port channel.

    interface range te 0/51 - 52

    port-channel-protocol LACP

    port-channel 1 mode active

    protocol lldp

    no advertise dcbx-tlv ets-reco

    dcbx port-role auto-upstream

    no shutdown

    Configure the upstream, FCF-facing external ports to be part of a port channel.

    interface range te 0/51 - 52

    port-channel-protocol LACP

    port-channel 1 mode active

    protocol lldp

    no advertise dcbx-tlv ets-reco

    dcbx port-role auto-upstream

    no shutdown

    Dell Networking MXL Setup (Pt. 2)
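    Once the uplinks come up, the FIP snooping state on the MXL can be checked with the following FTOS commands (a verification sketch; counters and column layout vary by release):

    show fip-snooping fcf
    show fip-snooping enode
    show fip-snooping sessions

    The FCF entries should list the upstream 5548, the ENode entries the server CNAs, and the sessions the established FLOGI sessions on the FCoE VLAN.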


    6.3 Cisco MDS 9148 Setup

    The Cisco MDS 9148 is configured in this section. This configuration requires NPIV to allow the necessary number of WWPNs to be assigned through the two downstream ports to the Nexus 5548UP.

    Enable NPIV feature

    feature npiv

    MDS_9000_1 MDS_9000_2

    Create relevant entries in VSAN database

    vsan database

    vsan 2

    vsan 2 interface fc1/1-2

    vsan 2 interface fc1/13-14

    feature npiv

    Create relevant entries in VSAN database

    vsan database

    vsan 3

    vsan 3 interface fc1/1-2

    vsan 3 interface fc1/13-14

    zone name Blade1And2-SAN_A vsan 2

    member interface fc1/1-2

    member interface fc1/13-14

    zoneset name set1-SAN_A vsan 2

    member Blade1And2-SAN_A

    exit

    zoneset activate name set1-SAN_A vsan 2

    zone name Blade1And2-SAN_B vsan 3

    member interface fc1/1-2

    member interface fc1/13-14

    zoneset name set1-SAN_B vsan 3

    member Blade1And2-SAN_B

    exit

    zoneset activate name set1-SAN_B vsan 3

    Create the zone and zoneset, then activate the zoneset.

    Cisco MDS 9148 Configuration Steps
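    Fabric logins and the active zoneset can then be validated on each MDS (use vsan 3 on MDS_9000_2). These are standard MDS NX-OS commands, shown as a sketch:

    show flogi database
    show fcns database vsan 2
    show zoneset active vsan 2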


    7 Configuration Three – Nexus Fabric Mode with Brand Varied MC-LAG Architecture

    The following sections contain the CLI to configure a Dell PowerEdge IOA and Cisco Nexus 5548UP in a configuration that allows a fully functional VLT and vPC Ethernet fabric while using separate FCoE links between the IOAs and the Nexus switches.

    (Topology: Dell Networking MXL or Dell PowerEdge I/O Aggregator modules in a Dell PowerEdge M1000e blade server chassis connect to a pair of Cisco Nexus 5500 Series switches joined by a VLT or vPC link, with FCoE and Ethernet running downstream and Fibre Channel uplinks to SAN A and SAN B.)

    Configuration Three - Dell MXL or IOA in a Nexus Fabric Mode with Brand Varied MC-LAG Architecture


    7.1 Cisco Nexus 5548UP Setup

    All required switch features are enabled, a hostname is specified and a management address is put in place (Figure 27). Finally, a vPC domain is created with the peer switch management IP address. This vPC domain provides vPC heartbeat monitoring to prevent a split-brain situation.

    Enable the required features and management interface for vPC. Enable the FCoE, LACP, vPC and NPIV features.

    feature fcoe

    feature lacp

    feature vpc

    feature npiv

    Configure the hostname and assign an IP to management.

    configure

    hostname 5548-1

    interface mgmt 0

    ip address 172.25.188.60 255.255.0.0

    no shutdown

    end

    Enable the required features and management interface for vPC. Enable the FCoE, LACP, vPC and NPIV features.

    feature fcoe

    feature lacp

    feature vpc

    feature npiv

    5548-2

    Configure the hostname and assign an IP to management.

    configure

    hostname 5548-2

    interface mgmt 0

    ip address 172.25.189.60 255.255.0.0

    no shutdown

    end

    Create a VPC domain. Assign role priority. Assign the keepalive management IP of 5548-1.

    configure

    vpc domain 55

    role priority 65535

    peer-keepalive dest 172.25.188.60

    end

    Create a VPC domain. Assign role priority. Assign the keepalive management IP of 5548-2.

    configure

    vpc domain 55

    role priority 1

    peer-keepalive dest 172.25.189.60

    end

    5548-1

    Initial Nexus 5548 Setup


    Once the vPC domain has been created, a port channel for the switch-to-switch vPC peer-link is created. This

    is a normal trunk and it is considered a best practice to exclude FCoE designated VLANs from traversing the

    trunk (Figure 28).

    Configure port channel and port channel members for the vPC peer-link. Create a port channel. Enable switchport mode trunk. Assign as a vpc peer-link.

    configure

    interface port-channel 55

    description vPC Peer-Link

    switchport mode trunk

    switchport trunk allowed vlan except 1000-1001

    no shutdown

    vpc peer-link

    end

    5548-1

    Assign the interfaces to the port channel and enable LACP.

    configure

    interface ethernet 1/16-17

    description vPC Peer-Link

    switchport mode trunk

    channel-group 55 mode active

    no shutdown

    end

    Configure port channel and port channel members for the vPC peer-link. Create a port channel. Enable switchport mode trunk. Assign as a vpc peer-link.

    configure

    interface port-channel 55

    description vPC Peer-Link

    switchport mode trunk

    switchport trunk allowed vlan except 1000-1001

    no shutdown

    vpc peer-link

    end

    5548-2

    Assign the interfaces to the port channel and enable LACP.

    configure

    interface ethernet 1/16-17

    description vPC Peer-Link

    switchport mode trunk

    channel-group 55 mode active

    no shutdown

    end

    vPC Peer-Link and Port Channel Configuration
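    Before configuring the downstream port channels, the vPC roles and keepalive state can be double-checked from either switch (standard NX-OS commands, shown as a sketch):

    show vpc role
    show vpc peer-keepalive

    One switch should report the primary role (here 5548-1, which was given role priority 1) and the other secondary.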


    Next, the port channel that will only carry Ethernet traffic is created (Figure 29). Note that the vpc command (vpc 1 in this configuration) must be included to ensure that the vPC configuration is aware of both sides of the port channel. It is considered a best practice to set the vPC ID the same as the port channel ID to simplify troubleshooting.

    Configure the port channel and port channel members for IOA connectivity. Create the port channel. Enable switchport mode trunk. Specify the vPC ID.

    5548-1

    Configure the port channel and port channel members for IOA connectivity. Create the port channel. Enable switchport mode trunk. Specify the vPC ID.

    5548-2

    configure

    interface port-channel 1

    description vPC/VLT enabled Eth to IOA

    switchport mode trunk

    switchport trunk allowed vlan 30-32,88

    vpc 1

    no shutdown

    end

    configure

    interface port-channel 1

    description vPC/VLT enabled Eth to IOA

    switchport mode trunk

    switchport trunk allowed vlan 30-32,88

    vpc 1

    no shutdown

    end

    Assign interfaces to the port channel and enable LACP.

    configure

    interface ethernet 1/1-2

    description PO1 Member

    switchport mode trunk

    channel-group 1 mode active

    no shutdown

    end

    configure

    interface ethernet 1/1-2

    description PO1 Member

    switchport mode trunk

    channel-group 1 mode active

    no shutdown

    end

    Configure Downstream vPC Enabled Port Channel for Ethernet Traffic
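    The vPC-enabled port channel can be validated once the IOA side is configured; until both ends are up, the vPC shows as down. A minimal NX-OS check:

    show vpc consistency-parameters vpc 1
    show interface port-channel 1 trunk

    The consistency parameters for vPC 1 must match on both peers, and the trunk output should list VLANs 30-32 and 88 as allowed and forwarding.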


    In the next set of commands (Figure 30), the designated VLAN for FCoE traffic is created and the VSAN database is populated with the corresponding interfaces.


    Create a VLAN for the appropriate VSAN and add the VSAN to the database. Create a VLAN. Add a VSAN ID to the VLAN.

    5548-1

    Create a VLAN for the appropriate VSAN and add the VSAN to the database. Create a VLAN. Add a VSAN ID to the VLAN.

    5548-2

    configure

    vlan 1000

    description VSAN 2 VLAN

    fcoe vsan 2

    no shutdown

    end

    configure

    vlan 1001

    description VSAN 3 VLAN

    fcoe vsan 3

    no shutdown

    end

    Add the VSAN ID to the VSAN database. Create the VFC interface and bind the FCoE FIP-MAC address of the CNA recorded earlier. Enable the designated Fibre Channel interface. Add the VFC and FC interfaces to the VSAN database and bind them to the VSAN ID created earlier.


    configure

    vsan database

    vsan 2

    exit

    interface vfc101

    bind mac-address 5c:f9:dd:16:ef:03

    no shutdown

    exit

    interface fc2/1-2

    no shutdown

    exit

    vsan database

    vsan 2 interface vfc101

    vsan 2 interface fc2/1-2

    end

    configure

    vsan database

    vsan 3

    exit

    interface vfc201

    bind mac-address 5c:f9:dd:16:f0:10

    no shutdown

    exit

    interface fc2/1-2

    no shutdown

    exit

    vsan database

    vsan 3 interface vfc201

    vsan 3 interface fc2/1-2

    end

    Initial Interface and VSAN Configuration
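    The VLAN-to-VSAN mapping and interface membership can then be verified (standard NX-OS commands, shown as a sketch; use vsan 3 and vfc201 on 5548-2):

    show vlan fcoe
    show vsan 2 membership
    show interface vfc101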


    The port channel dedicated to carrying FCoE traffic is then configured (Figure 31). The shutdown lan command prevents any VLAN not associated with a VSAN ID from traversing the trunk to the Dell IOAs.

    Configure the port channel and port channel members for IOA connectivity. Create the port channel. Enable switchport mode trunk. Specify the vPC ID.

    5548-1

    Configure the port channel and port channel members for IOA connectivity. Create the port channel. Enable switchport mode trunk. Specify the vPC ID.

    5548-2

    configure

    interface port-channel 10

    description FCoE enabled Eth to IOA

    switchport mode trunk

    shutdown lan

    no shutdown

    end

    configure

    interface port-channel 20

    description FCoE enabled Eth to IOA

    switchport mode trunk

    shutdown lan

    no shutdown

    end

    Assign interfaces to the port channel and enable LACP.

    configure

    interface ethernet 1/23-24

    description PO10 Member

    switchport mode trunk

    channel-group 10 mode active

    no shutdown

    end

    configure

    interface ethernet 1/23-24

    description PO20 Member

    switchport mode trunk

    channel-group 20 mode active

    no shutdown

    end

    Configure Downstream Port Channel for FCoE Traffic

    Finally, the zone is created, and the related PWWNs and FC interfaces are added to it (Figure 32). The zoneset is then activated.

    configure

    zone name zone1SAN_B vsan 3

    member pwwn 20:01:5c:f9:dd:16:f0:10

    member interface fc2/1-2

    exit

    zoneset name set1SAN_B vsan 3

    member zone1SAN_B

    exit

    zoneset activate name set1SAN_B vsan 3

    end

    configure

    zone name zone1SAN_A vsan 2

    member pwwn 20:01:5c:f9:dd:16:ef:03

    member interface fc2/1-2

    exit

    zoneset name set1SAN_A vsan 2

    member zone1SAN_A

    exit

    zoneset activate name set1SAN_A vsan 2

    end

    Create the zone and add the members. Create a zoneset, place the zone in the zoneset, and activate the zoneset.

    5548-1

    Create the zone and add the members. Create a zoneset, place the zone in the zoneset, and activate the zoneset.

    5548-2

    Configure and Enable Zone Fabric
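    After the server CNAs log in, the zoning can be validated end to end. A verification sketch using standard NX-OS commands (use vsan 3 on 5548-2):

    show flogi database
    show zone status vsan 2
    show zoneset active vsan 2

    The FLOGI database should list the vfc101 login from the CNA plus the fabric logins on fc2/1-2, and the active zoneset output marks members with an asterisk once they are logged in.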


    7.2 Dell Networking IOA Setup

    Before configuring the IOAs, it is strongly suggested that the switches be returned to their factory default settings and then set to programmable MUX (PMUX) mode (Figure 33). This mode allows the IOA to behave very much like the MXL. For additional information on the IOA modes, please see the Dell PowerEdge Configuration Guide for the M I/O Aggregator.

    configure

    stack-unit 0 iom-mode programmable-mux

    end

    reload

    configure

    stack-unit 0 iom-mode programmable-mux

    end

    reload

    In this environment the IOA is used in PMUX mode. Factory default the IOA to place the switch in

    standalone mode.

    restore factory-defaults stack-unit 0

    clear-all

    IOA-1

    In this environment the IOA will be used in PMUX mode. Factory default the IOA to place the switch in

    standalone mode.

    restore factory-defaults stack-unit 0

    clear-all

    IOA-2

    Once the switch is reloaded, configure for PMUX mode.

    Restoring Factory Defaults Before Configuration
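    The resulting mode can be confirmed after the reload. A sketch, assuming the IOA firmware in use supports the iom-mode keyword under show system:

    show system stack-unit 0 iom-mode

    The output should report programmable-mux as the current (and next-boot) IOM mode before any further configuration is applied.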

    http://downloads.dell.com/Manuals/all-products/esuprt_ser_stor_net/esuprt_networking/esuprt_net_blade_intercnts/poweredge-m-io-aggregator_Administrator%20Guide4_en-us.pdf?c=us&l=en&cs=04&s=bsd


    The first step in configuring the IOAs is to configure the out-of-band management interface for the VLT heartbeats (Figure 34), which are used later in the configuration. Once this is done, the FIP snooping feature is enabled globally, and the designated FCoE VLAN ID is created and enabled.

    configure

    feature fip-snooping

    fip-snooping enable

    default vlan-id 20

    end

    configure

    feature fip-snooping

    fip-snooping enable

    default vlan-id 21

    end

    Management Configuration: Set the hostname. Set the IP for management. Set the default route for management. Enable LLDP hostname advertisement globally.

    enable

    configure

    hostname IOA-1

    interface managementethernet 0/0

    ip address 172.25.189.29 /16

    yes

    !

    exit

    management route 0.0.0.0/0 172.25.189.254

    protocol lldp

    advertise management-tlv management-address system-name

    end

    IOA-1

    Create the FCoE VLAN. Enable fip-snooping on the VLAN. Set the FC map to match FCF switch.

    configure

    interface vlan 1000

    fip-snooping enable

    no shut

    end

    Management Configuration: Set the hostname. Set the IP for management. Set the default route for management. Enable LLDP hostname advertisement globally.

    enable

    configure

    hostname IOA-2

    interface managementethernet 0/0

    ip address 172.25.189.30 /16

    yes

    !

    exit

    management route 0.0.0.0/0 172.25.189.254

    protocol lldp

    advertise management-tlv management-address system-name

    end

    IOA-2

    Create the FCoE VLAN. Enable fip-snooping on the VLAN. Set the FC map to match FCF switch.

    configure

    interface vlan 1001

    fip-snooping enable

    no shut

    end

    Turn on the fip-snooping feature. Enable fip-snooping globally.

    Configure global settings and create the FCoE VLAN


    Next, the FCoE specific upstream port channel is created (Figure 35). The interface is also set for FIP

    snooping and DCBx is set to automatically accept DCB settings from the Nexus 5548UP ToR. Finally, the

    internal interface attached to slot 1 is tagged with the designated FCoE VLAN ID.

    Configure the upstream LAG for FCoE. Remove the switchport before enabling the port as hybrid. Tag the interface with the FCoE VLAN. Set the fip-snooping port mode to FCF. Set the DCB port role.

    configure

    interface po10

    no switchport

    portmode hybrid

    switchport

    vlan tagged 1000

    fip-snooping port-mode fcf

    no shutdown

    exit

    interface range te 0/49-50

    description FCoE Po Members to 5548-1

    port-channel-protocol LACP

    port-channel 10 mode active

    no shutdown

    protocol lldp

    advertise management-tlv management-address system-name

    no advertise dcbx-tlv ets-reco

    dcbx port-role auto-upstream

    end

    Configure the upstream LAG for FCoE. Remove the switchport before enabling the port as hybrid. Tag the interface with the FCoE VLAN. Set the fip-snooping port mode to FCF. Set the DCB port role.

    configure

    interface po20

    no switchport

    portmode hybrid

    switchport

    vlan tagged 1001

    fip-snooping port-mode fcf

    no shutdown

    exit

    interface range te 0/49-50

    description FCoE Po Members to 5548-2

    port-channel-protocol LACP

    port-channel 20 mode active

    no shutdown

    protocol lldp

    advertise management-tlv management-address system-name

    no advertise dcbx-tlv ets-reco

    dcbx port-role auto-upstream

    end

    Configure the internal port facing the server. Remove the switchport before enabling the port as hybrid. Tag the interface with the FCoE VLAN. Set the DCB port role.

    configure

    interface te0/1

    no switchport

    portmode hybrid

    switchport

    vlan tagged 1000

    protocol lldp

    advertise management-tlv management-address system-name

    no advertise dcbx-tlv ets-reco

    dcbx port-role auto-downstream

    end

    configure

    interface te0/1

    no switchport

    portmode hybrid

    switchport

    vlan tagged 1001

    protocol lldp

    advertise management-tlv management-address system-name

    no advertise dcbx-tlv ets-reco

    dcbx port-role auto-downstream

    end

    Configure the internal port facing the server. Remove the switchport before enabling the port as hybrid. Tag the interface with the FCoE VLAN. Set the DCB port role.

    IOA-1 IOA-2

    Configure Upstream LAG and Downstream Internal Server Connections
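    At this point, FCF discovery can be verified from each IOA with the same FTOS FIP snooping commands used on the MXL (a sketch; use the VLAN 1001 values on IOA-2):

    show interfaces port-channel brief
    show fip-snooping fcf
    show fip-snooping sessions

    Port channel 10 (or 20) should be up with both members bundled, and the FCF table should list the directly attached 5548.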


    Next, the components and interfaces that comprise the VLTi peer-link, which allows the pair of IOAs to appear as a single switch to the upstream Nexus switches, are created (Figure 36). In this environment, the eight ports (te 0/33-40) that make up the two 40 Gigabit Ethernet ports are selected. For more information, please see the Dell Networking OS Configuration Guide.

    Enable VLT and configure the VLTi peer-link. Create the VLAN for VLT traffic. Create the VLAN for Ethernet traffic. Create a port channel interface for the VLT peer-link. Create the VLT domain and set the back-up destination.

    configure

    interface vlan 55

    no shut

    exit

    interface po55

    no shut

    exit

    vlt domain 55

    peer-link port-channel 55

    back-up destination 172.25.189.30

    unit-id 0

    end

    Enable VLT and configure the VLTi peer-link. Create the VLAN for VLT traffic. Create the VLAN for Ethernet traffic. Create a port channel interface for the VLT peer-link. Create the VLT domain and set the back-up destination.

    configure

    interface vlan 55

    no shut

    exit

    interface po55

    no shut

    exit

    vlt domain 55

    peer-link port-channel 55

    back-up destination 172.25.189.29

    unit-id 1

    end

    Tag the VLTi port channel with the VLT VLAN. Add both 40GbE interfaces to the VLTi port channel.

    configure

    interface po55

    vlan tagged 55

    exit

    interface range te0/33-40

    port-channel-protocol lacp

    port-channel 55 mode active

    no shut

    end

    configure

    interface po55

    vlan tagged 55

    exit

    interface range te0/33-40

    port-channel-protocol lacp

    port-channel 55 mode active

    no shut

    end

    Tag the VLTi port channel with the VLT VLAN. Add both 40GbE interfaces to the VLTi port channel.

    IOA-1 IOA-2

    Create VLT LAG, VLAN and other Ethernet Designated VLANs
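    The VLT domain state can be confirmed from either IOA once both the VLTi port channel and the backup destinations are reachable. Standard FTOS commands, shown as a sketch:

    show vlt brief
    show vlt backup-link

    The brief output should show the ICL link and VLT peer status as up, and the backup link should report the configured destination as reachable.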

    https://www.force10networks.com/CSPortal20/KnowledgeBase/DOCUMENTATION/CLIConfig/FTOS/TCS_Config_8.4.6.0_Apr-21-2014.pdf


    Finally, a typical port channel with the VLT peer LAG enabled is created (Figure 37). The upstream port channels are tagged with all the required Ethernet VLANs (30-32 and 88). Alternatively, all VLANs except the designated FCoE VLANs (1000 and 1001) can be allowed, and the Nexus switches can be used to prune the allowed VLANs. Lastly, the internal server-facing interface is tagged with the Ethernet VLAN allowed to slot 1 of the M1000e enclosure.

    Enable the Ethernet VLT member ports facing upstream. Add ports to PO30. Tag PO30 with the LAN VLANs. Specify the VLT peer LAG.

    configure

    interface range te 0/51-52

    port-channel-protocol lacp

    port-channel 30 mode active

    no shut

    exit

    interface port-channel 30

    portmode hybrid

    switchport

    vlan tagged 30-32,88

    vlt-peer-lag po30

    no shut

    end

    Enable the Ethernet VLT member ports facing upstream. Add ports to PO30. Tag PO30 with the LAN VLANs. Specify the VLT peer LAG.

    configure

    interface range te 0/51-52

    port-channel-protocol lacp

    port-channel 30 mode active

    no shut

    exit

    interface port-channel 30

    portmode hybrid

    switchport

    vlan tagged 30-32,88

    vlt-peer-lag po30

    no shut

    end

    IOA-1 IOA-2

    Tag the server-facing interface with the LAN VLAN. Save the configuration.

    configure

    interface te0/1

    vlan tagged 31

    end

    copy run start

    configure

    interface te0/1

    vlan tagged 31

    end

    copy run start

    Tag the server-facing interface with the LAN VLAN. Save the configuration.

    Configure Upstream Connectivity to Nexus 5548UP Pair


    8 Configuration and Troubleshooting

    The following section contains commands that can be used to validate the configuration covered in this document.

    Note: When using these validation/debug commands keep in mind that both configurations need to