
Connectrix MDS-Series Switch Architecture and Management - 1

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved.

Welcome to Connectrix MDS-Series Switch Architecture and Management.

EMC provides downloadable and printable versions of the student materials for your benefit, which can be accessed from the Supporting Materials Tab.

Copyright © 2010 EMC Corporation. All rights reserved.

These materials may not be copied without EMC's written consent.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC², EMC, EMC ControlCenter, AdvantEdge, AlphaStor, ApplicationXtender, Avamar, Captiva, Catalog Solution, Celerra, Centera, CentraStar, ClaimPack, ClaimsEditor, ClaimsEditor Professional, CLARalert, CLARiiON, ClientPak, CodeLink, Connectrix, Co-StandbyServer, Dantz, Direct Matrix Architecture, DiskXtender, DiskXtender 2000, Document Sciences, Documentum, EmailXaminer, EmailXtender, EmailXtract, enVision, eRoom, Event Explorer, FLARE, FormWare, HighRoad, InputAccel, InputAccel Express, Invista, ISIS, Max Retriever, Navisphere, NetWorker, nLayers, OpenScale, PixTools, Powerlink, PowerPath, Rainfinity, RepliStor, ResourcePak, Retrospect, RSA, RSA Secured, RSA Security, SecurID, SecurWorld, Smarts, SnapShotServer, SnapView/IP, SRDF, Symmetrix, TimeFinder, VisualSAN, VSAM-Assist, WebXtender, where information lives, xPression, xPresso, Xtender, Xtender Solutions; and EMC OnCourse, EMC Proven, EMC Snap, EMC Storage Administrator, Acartus, Access Logix, ArchiveXtender, Authentic Problems, Automated Resource Manager, AutoStart, AutoSwap, AVALONidm, C-Clip, Celerra Replicator, CLARevent, Codebook Correlation Technology, Common Information Model, CopyCross, CopyPoint, DatabaseXtender, Digital Mailroom, Direct Matrix, EDM, E-Lab, eInput, Enginuity, FarPoint, FirstPass, Fortress, Global File Virtualization, Graphic Visualization, InfoMover, Infoscape, MediaStor, MirrorView, Mozy, MozyEnterprise, MozyHome, MozyPro, NetWin, OnAlert, PowerSnap, QuickScan, RepliCare, SafeLine, SAN Advisor, SAN Copy, SAN Manager, SDMS, SnapImage, SnapSure, SnapView, StorageScope, SupportMate, SymmAPI, SymmEnabler, Symmetrix DMX, UltraFlex, UltraPoint, UltraScale, Viewlets, VisualSRM are trademarks of EMC Corporation.

All other trademarks used herein are the property of their respective owners.

Connectrix MDS-Series Switch Architecture and Management - 2

The objectives for this course are shown here. Please take a moment to read them.

Connectrix MDS-Series Switch Architecture and Management - 3

The objectives for this module are shown here. Please take a moment to read them.

Connectrix MDS-Series Switch Architecture and Management - 4

The objectives for this lesson are shown here. Please take a moment to read them.

Connectrix MDS-Series Switch Architecture and Management - 5

The Connectrix family represents an extensive selection of networked storage connectivity products. Connectrix integrates high-speed Fibre Channel connectivity (1 to 10 Gb/s), highly resilient switching technology, and options for intelligent IP storage networking. This wide range of connectivity options allows you to configure Connectrix directors, switches, and routers to meet any business requirement. All of the Connectrix storage-networking devices carry EMC's standard two-year hardware warranty. Combine that with EMC's design, implementation, and support services, and you have everything in one complete package.

Connectrix products provide more than just network connectivity. They offer:

Simple, centralized, automated SAN management
Proven interoperability across your networked storage solution
The highest availability to meet escalating business continuity and service level requirements
Scalability with built-in investment protection

Connectrix MDS switches for intelligent SANs are an integral part of an enterprise data center architecture and provide a better way to access, manage, and protect growing information resources across a consolidated Fibre Channel, Fibre Channel over IP (FCIP), Internet Small Computer System Interface (iSCSI), Gigabit Ethernet, and optical network.

The MDS-9000 series serves as a platform for EMC Invista and RecoverPoint. In addition, MDS provides Storage Media Encryption for tape and virtual-tape environments.

Connectrix MDS-Series Switch Architecture and Management - 6

This slide summarizes the available configuration options of the MDS-9500 series directors.

Connectrix MDS-Series Switch Architecture and Management - 7

The MDS-9513 provides port density with up to 528 1, 2, 4, and 8 Gbps ports, or up to 44 10-Gbps ports, per chassis. This port density allows up to 1584 ports per rack. Redundant crossbar Fabric Modules provide switching capacity, with 96 Gbps available per line card slot. The 6000W AC power supplies provide room to grow for future application modules.

The MDS-9513 provides industry-leading port density in a 14-rack-unit (RU) form factor. Chassis depth of the 9513 is 28 inches, compared to the 9509 chassis depth of 18.8 inches.

The original Fabric Module was included with the first release of the 9513 and supported the 4 Gbps line modules. The Fabric-2 module is required to run either of the 8 Gbps performance modules, the MDS-PBF-24-8G and the MDS-PBF-48-8G, in an MDS-9513 chassis. The Fabric-2 module is backward compatible with SAN-OS 3.x, and can be installed non-disruptively in SAN-OS 3.x and NX-OS 4.x.

Connectrix MDS-Series Switch Architecture and Management - 8

The MDS-9513 chassis components on the front include the Supervisor modules, line card modules, and the line card fan tray. The components on the rear include the Fabric Modules, power supplies, and the fabric card fan tray. Modules are numbered from top to bottom.

Connectrix MDS-Series Switch Architecture and Management - 9

The MDS 9509 Director chassis has nine slots. Two slots, 5 and 6, are reserved for the Supervisors, and the other seven can contain any switching module. Slots are numbered 1 through 9 from top to bottom. The 9509 supports either V1 or V2 Supervisor modules; today's model is shipped with Version 2. The power supplies are redundant by default and can be configured to be combined if desired. The hot-swappable fan module with redundant fans is located on the left side. There are no FRU components on the non-port side.

The components on the port side include:

1. Supervisor-2 modules
2. Console port
3. 10/100 management Ethernet port
4. Slots for optional modules
5. Two power supplies (FRU)
6. Fan module (FRU)

The components on the non-port side include:

7. Clock modules (FRU)
8. Air vent

Connectrix MDS-Series Switch Architecture and Management - 10

The MDS 9506 Director has a 6-slot chassis and supports the following:

Up to two Supervisor-1 modules that provide a switching fabric, with a console port, COM1 port, and a MGMT 10/100 Ethernet port on each module. Slots 5 and 6 are reserved for the supervisor modules.
Four slots for optional modules that can include up to four switching modules or three MSM modules.
Two power supplies located in the back of the chassis. The power supplies are redundant by default and can be configured to be combined if desired.
Two power entry modules (PEMs) in the front of the chassis for easy access to power supply connectors and switches.
One hot-swappable fan module with redundant fans.

Connectrix MDS-Series Switch Architecture and Management - 11

The Supervisor-2 module crossbar is used for switching in first-generation chassis such as the 9506 or the 9509. In the 9513 chassis, the Supervisor-2 module acts as the arbiter, while crossbar fabrics are provided by the MDS-9513 fabric cards.

Connectrix MDS-Series Switch Architecture and Management - 12

The MDS-9222i is a second-generation MDS chassis containing two expansion slots. The MDS-9222i integrates 18 auto-sensing 1/2/4 Gbps Fibre Channel ports, four fixed 1 Gbps Ethernet ports, and an expansion slot in a 3U chassis. Both supervisor and line card functionality is supported in Slot 1. Slot 1 is preconfigured with the 18 1/2/4 Gbps FC ports and four GigE ports, leaving Slot 2 for future expansion. Processing is supplied by a 1.3 GHz 8548 PowerPC processor and 1 GB of internal memory. A 1 GB CompactFlash card is also provided to allow the user to transport data to a remote location. The management connection is provided by a 10/100 Ethernet port. The unit contains dual removable power supplies and a removable fan tray for field replacement.

Connectrix MDS-Series Switch Architecture and Management - 13

The MDS-9134 offers up to thirty-two auto-sensing 1/2/4 Gbps Fibre Channel ports and two 10 Gbps ports in a 1 RU chassis. Each Fibre Channel port is dedicated 4 Gbps of bandwidth. The MDS-9134 has the flexibility to expand from 24 to 32 ports in 8-port increments.

The 10 Gbps ISL ports support a range of optics. They can connect to another 9134 or to a director via the 4-port 10 Gbps blade. The two 10 Gbps ports can be activated independently of the 1/2/4 Gbps Fibre Channel ports.

The MDS-9134 offers non-disruptive software upgrades, dual hot-swappable power supplies with integrated fans for redundancy, VSANs for fault isolation, and PortChannels for Inter-Switch Link (ISL) resiliency.

Connectrix MDS-Series Switch Architecture and Management - 14

The MDS 9124 24-Port Multilayer Fabric Switch features 24 ports that are capable of speeds of 4, 2, and 1 Gbps. It supports the same SAN-OS software that is supported by all MDS-Series products. The MDS 9124 offers flexible, on-demand port activation: through software licensing, ports can be activated in two 8-port increments. The default base switch has eight active ports. The MDS 9124 can support up to 16 VSANs. It does not support IVR.
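On-demand port activation of this kind is handled through license files installed from the switch CLI. As a hedged sketch (the TFTP server address and license filename are illustrative, and exact output varies by release), an 8-port expansion license might be installed and verified like this:

```
switch# copy tftp://10.1.1.1/port_activation.lic bootflash:port_activation.lic
switch# install license bootflash:port_activation.lic
switch# show license usage
```

The show license usage command lists each licensed feature and whether it is in use, which is a quick way to confirm the additional ports became available.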

Connectrix MDS-Series Switch Architecture and Management - 15

The objectives for this lesson are shown here. Please take a moment to read them.

Connectrix MDS-Series Switch Architecture and Management - 16

The MDS-PBF-24-8G line card has 24 auto-sensing 8-Gbps-capable ports. There are 8 port groups, each consisting of 3 ports. Each port group shares approximately 12 Gbps of bandwidth. Ports can be configured in dedicated mode in configurations that do not exceed that capacity. In shared-bandwidth mode, the oversubscription rate is 2:1 if all 3 ports in the group are running at 8 Gbps. Since this is a performance card, it can only be installed in an MDS-9500 series switch.
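Dedicated versus shared bandwidth is set per interface. A minimal sketch of placing one port in a group into dedicated mode at 8 Gbps (the interface and module numbers are illustrative, and syntax may vary slightly between SAN-OS and NX-OS releases):

```
switch# configure terminal
switch(config)# interface fc1/1
switch(config-if)# switchport rate-mode dedicated
switch(config-if)# switchport speed 8000
switch(config-if)# no shutdown
switch(config-if)# end
switch# show port-resources module 1
```

The show port-resources command displays how much of the port group's bandwidth remains for the other members after the dedicated allocation.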

Connectrix MDS-Series Switch Architecture and Management - 17

The MDS-PBF-48-8G line card has 48 auto-sensing 8-Gbps-capable ports. There are 8 port groups, each consisting of 6 ports. Each port group shares approximately 12 Gbps of bandwidth. Ports can be configured in dedicated mode in configurations that do not exceed that capacity. In shared-bandwidth mode, the oversubscription rate is 4:1 if all 6 ports in the group are running at 8 Gbps. Since this is a performance card, it can only be installed in an MDS-9500 series switch.

Connectrix MDS-Series Switch Architecture and Management - 18

The MDS-PBF-44-8G line card has 48 8-Gbps-capable FC ports. These ports are divided into 4 separate port groups, each containing 12 ports. If ports are configured for dedicated-mode bandwidth, as an E_Port for example, there are limitations on how many ports per port group can be configured. Essentially, each port group has 12 Gbps of bandwidth to utilize, and that capacity can be divided in any way that does not exceed 12 Gbps. In shared rate mode, if all ports in a port group are running at 8 Gbps, then the oversubscription rate is roughly 8:1.

This line card is the only 8 Gbps line card that can be installed in an MDS-9222i.

Connectrix MDS-Series Switch Architecture and Management - 19

Following are the basics about third-generation line cards:

Each port group is clearly marked on the line cards with screen-printed borders.
Each port group has 12.8 Gbps of internal bandwidth available.
Any port can be configured to have dedicated bandwidth at 1, 2, 4, or 8 Gbps. All remaining ports in the port group share any remaining unused bandwidth.
Third-generation line cards require NX-OS.

Connectrix MDS-Series Switch Architecture and Management - 20

The Storage Services Module (SSM) is designed with eight application-specific integrated circuits (ASICs). In addition to being a platform for EMC Invista and RecoverPoint, the SSM supports Fibre Channel Write Acceleration.

The 18/4 Multi-Services switching module with NX-OS supports the SANTap feature, so it can be deployed for RecoverPoint. It provides 4 GE ports to be used for iSCSI and FCIP. The 18/4 Multi-Services switching module also supports MDS Storage Media Encryption, which encrypts data-at-rest on tape and virtual tape libraries.

The SSN supports SAN extension with FCIP and Storage Media Encryption (SME). In addition, the SSN delivers SAN extension performance with FCIP acceleration features, including FCIP write acceleration and FCIP tape write and read acceleration. The SSN supports hardware-based encryption (with IP Security (IPsec)) and also supports hardware-based compression.

Connectrix MDS-Series Switch Architecture and Management - 21

The table shown on the slide displays compatibility information for NX-OS 4.1(1). Note that only a few Generation-1 line cards are supported with NX-OS 4, and there will be no new functionality available for those line cards as new versions of NX-OS are released.

Connectrix MDS-Series Switch Architecture and Management - 22

Generation-2 FC modules are end of life, but are still supported for use in EMC SANs.

Connectrix MDS-Series Switch Architecture and Management - 23

Following are the basics about second-generation line cards:

Each port group is clearly marked on the line cards with screen-printed borders.
Each port group has 12.8 Gbps of internal bandwidth available.
Any port can be configured to have dedicated bandwidth of 1, 2, or 4 Gbps. All remaining ports in the port group share any remaining unused bandwidth.
Any port in dedicated bandwidth mode has access to extended buffers.
Any port in shared bandwidth mode has only 16 buffer credits.

Connectrix MDS-Series Switch Architecture and Management - 24

The objectives for this lesson are shown here. Please take a moment to read them.

Connectrix MDS-Series Switch Architecture and Management - 25

VSAN functionality is a feature that leverages the advantages of isolated SAN fabrics while addressing the limitations of isolated SAN islands. VSANs provide a method for allocating ports within a physical fabric to create virtual fabrics. Independent physical SAN islands are virtualized onto a common SAN infrastructure. An analogy is that VSANs on Fibre Channel (FC) networks are like VLANs on Ethernet networks.

Separate fabric services are available on each VSAN, because each VSAN is a virtual fabric, and statistics are gathered on a per-VSAN basis. Each CPU process is common to all VSANs (for example, only one instance of the name server service runs on each switch), but the process uses separate databases for each VSAN.

Virtual Storage Area Networks:

Allocate ports within a physical fabric to create virtual fabrics
SAN islands are virtualized onto a common SAN infrastructure
VSAN on FC is similar to VLAN on Ethernet
Fabric services are per VSAN
Statistics gathered per VSAN
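The points above map directly to a few CLI commands. As a hedged sketch (the VSAN number, name, and interface are illustrative), creating a VSAN and assigning a port to it might look like:

```
switch# configure terminal
switch(config)# vsan database
switch(config-vsan-db)# vsan 10 name Engineering
switch(config-vsan-db)# vsan 10 interface fc1/1
switch(config-vsan-db)# end
switch# show vsan membership
```

Once the port is moved, its device logs into the name server instance of VSAN 10 only, which is the isolation the slide describes.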

Connectrix MDS-Series Switch Architecture and Management - 26

The multiprotocol and multi-transport features of the MDS-Series family are designed for a total storage networking solution.

In addition to supporting iSCSI for midrange storage consolidation and FCIP for SAN extension, the MDS-Series family also supports the FICON protocol to enable the connection of IBM mainframes, like the zSeries or OS/390, to storage subsystems.

This feature is available on the MDS 9500 director switches and MDS 9200 fabric switches. The FICON Control Unit Port (CUP) specification is supported to allow hosts to perform in-band management functions for FICON devices, and switch cascading is possible because the MDS-Series platform supports static domain IDs.
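FICON cascading depends on stable domain IDs, which are configured per VSAN. A minimal sketch (the domain ID and VSAN number are illustrative; changing the domain ID may require restarting the domain in that VSAN):

```
switch# configure terminal
switch(config)# fcdomain domain 10 static vsan 2
switch(config)# end
switch# show fcdomain vsan 2
```

With a static domain ID, the switch refuses to come up in the VSAN if it cannot obtain that exact ID, which is the predictability FICON requires.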

The MDS-Series VSANs feature provides a mechanism for safely intermixing FICON and FC traffic on the same physical infrastructure. Additionally, disaster recovery and business continuance applications can be run through the MDS-Series platform by implementing FICON over FCIP, using the IP Storage Services (IPS) module.

Connectrix MDS-Series Switch Architecture and Management - 27

Intelligent network security coverage is provided at several levels, including management access, control plane, and data plane. Management access security can be provided by direct console or by modem control to the console interface, which is password protected.

Telnet access is secured with the use of Secure Shell (SSH) and Secure File Transfer Protocol (SFTP), while role-based access control provides configurable privilege levels for users. Simple Network Management Protocol version 3 (SNMPv3) is an encrypted management protocol used by Fabric Manager and Device Manager to communicate with the switch.

Advanced Encryption Standard (AES) is implemented to enhance the capabilities of SNMPv3 and SSH by increasing security with larger key sizes of 128, 192, and 256 bits. RADIUS provides AAA services for the MDS-Series. TACACS+ support complements the RADIUS server by providing centralized authentication, authorization, and accounting (AAA) services through a reliable TCP/IP secure connection between the MDS and the TACACS+ server. At the time of this writing, EMC only supports AES for iSCSI.

Security is provided through multiple layers of authentication, zoning, fabric binding, and individual port security functions. At the data plane, security is provided through hard fabric zoning, user-configurable access control lists (ACLs), VSANs, Logical Unit Number (LUN) zoning, read-only zone configurations, and the use of secure switch control protocols (FC-SP). IPsec encryption is supported for IP storage traffic, including FCIP and iSCSI.
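As a hedged sketch of the management-access pieces described above (the server address, shared key, and key length are illustrative; SAN-OS uses ssh server enable where NX-OS uses feature ssh), enabling SSH and pointing login authentication at a RADIUS server might look like:

```
switch# configure terminal
switch(config)# ssh key rsa 1024
switch(config)# feature ssh
switch(config)# radius-server host 10.1.1.20 key MyS3cret
switch(config)# aaa authentication login default group radius
switch(config)# end
```

Generating the RSA key before enabling the SSH service is the usual ordering, since the service needs a host key to accept connections.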

Connectrix MDS-Series Switch Architecture and Management - 28

Following are basics about intelligent network security:

The MDS-Series provides a full suite of intelligent security functions that, when deployed, enable a truly secure SAN environment.
Secure SAN management is achieved via role-based access. It includes customizable roles that apply to the command line interface (CLI), SNMP, and web-based access, along with full accounting support.
Secure management protocols like SSH, SFTP, and SNMPv3 ensure that outside connection attempts to the MDS-Series network are valid and secure.
Secure switch control protocols that leverage IP Security-Encapsulating Security Payload (IPsec-ESP) specifications yield SAN protocol security (FC-SP). Diffie-Hellman Challenge Handshake Authentication Protocol (DH-CHAP) authentication is used between switches and devices.
MDS-Series support of RADIUS and TACACS+ AAA services helps to ensure user, switch, and iSCSI-host authentication for the SAN.
Secure VSANs and hardware-enforced zoning restrictions using port ID and WWN provide layers of device access and isolation security to the SAN.

Connectrix MDS-Series Switch Architecture and Management - 29

Many features in the MDS-Series switches require configuration synchronization across all switches in the fabric. Maintaining configuration synchronization across a fabric is important for fabric consistency. In the absence of a common infrastructure, such synchronization is achieved through manual configuration at each switch in the fabric.

Cisco Fabric Services (CFS) provides a common infrastructure for automatic configuration synchronization in the fabric. It provides the transport function as well as a rich set of common services to the applications. CFS has the ability to discover CFS-capable switches in the fabric and discover the application capabilities in all CFS-capable switches.

The NX-OS and SAN-OS software use the CFS infrastructure to enable efficient database distribution and to provide device flexibility.
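CFS distribution is checked and controlled from the CLI. A hedged sketch (NTP is used here only as an example of a CFS-aware application; each feature has its own distribute command):

```
switch# show cfs status
switch# show cfs application
switch# configure terminal
switch(config)# ntp distribute
switch(config)# end
```

The show cfs application output lists which applications on the switch are registered with CFS and whether distribution is enabled for each.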

Connectrix MDS-Series Switch Architecture and Management - 30

Connectrix MDS supports communication between select devices in different VSANs via Inter-VSAN Routing (IVR).

With Inter-VSAN Routing, resources across VSANs can be accessed without compromising other VSAN benefits. Data traffic can be transported between specific initiators and targets on different VSANs without merging VSANs into a single logical fabric. FC control traffic does not flow between VSANs, nor can initiators access any resources aside from the ones designated with IVR routing. Valuable resources like tape libraries can be easily shared without compromise.

An Enterprise License package is required for IVR.
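A hedged sketch of an IVR configuration (SAN-OS enables the feature with ivr enable where NX-OS uses feature ivr; the VSAN numbers, zone names, and pWWNs below are illustrative):

```
switch# configure terminal
switch(config)# feature ivr
switch(config)# ivr distribute
switch(config)# ivr vsan-topology auto
switch(config)# ivr zone name TapeShare
switch(config-ivr-zone)# member pwwn 10:00:00:00:c9:aa:bb:01 vsan 10
switch(config-ivr-zone)# member pwwn 50:06:01:60:11:22:33:44 vsan 20
switch(config-ivr-zone)# exit
switch(config)# ivr zoneset name IVR_ZS
switch(config-ivr-zoneset)# member TapeShare
switch(config-ivr-zoneset)# exit
switch(config)# ivr zoneset activate name IVR_ZS
```

Only the two members named in the IVR zone can see each other across the VSAN boundary; everything else in VSANs 10 and 20 remains isolated.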

Connectrix MDS-Series Switch Architecture and Management - 31

PortChannels are an aggregation of multiple physical interfaces into one logical interface. PortChannels provide higher aggregated bandwidth, load balancing, and link redundancy. PortChannels can connect to interfaces across switching modules, so a failure of a switching module cannot bring down the PortChannel link.

A PortChannel has the following features:

Provides a point-to-point connection over ISL (E_Ports) or Extended Inter-Switch Link (EISL) (TE_Ports). Multiple links can be combined in a PortChannel.
Increases the aggregate bandwidth on an ISL by distributing traffic among all functional links in the channel.
Load-balances across multiple links and maintains optimum bandwidth utilization. Load balancing is based on the source ID, destination ID, and exchange ID (OX_ID).
Provides high availability on an ISL; if one link fails, traffic previously carried on this link is switched to the remaining links. If a link goes down in a PortChannel, the upper protocol is not aware of it. To the upper protocol, the link is still there, although the bandwidth is diminished. The routing tables are not affected by link failure. PortChannels can contain up to 16 physical links and can span multiple modules for added high availability.

MDS-Series switches support PortChannels with 16 ISLs per PortChannel. On switches with first-generation switching modules, or a combination of first-generation and second-generation switching modules, you can configure a maximum of 128 PortChannels. In switches with only second-generation switching modules, you can configure a maximum of 256 PortChannels.
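A hedged sketch of building a two-link PortChannel across modules (the channel number and interfaces are illustrative; force makes the physical ports inherit the channel interface's settings):

```
switch# configure terminal
switch(config)# interface port-channel 1
switch(config-if)# channel mode active
switch(config-if)# exit
switch(config)# interface fc1/1, fc2/1
switch(config-if)# channel-group 1 force
switch(config-if)# no shutdown
switch(config-if)# end
switch# show port-channel database
```

Placing the members on different modules (fc1/1 and fc2/1) is what gives the module-failure resiliency described above.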

Connectrix MDS-Series Switch Architecture and Management - 32

The 18/4 MSM module enables the creation of virtual iSCSI targets:

Physical FC targets on the SAN map to virtual iSCSI targets
The 18/4 MSM module presents these FC targets to the IP hosts
Physical iSCSI initiators map to virtual FC initiators on the SAN
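In SAN-OS/NX-OS this mapping is expressed with iscsi configuration commands. A hedged sketch (the IQN, pWWN, Gigabit Ethernet interface, and initiator address are all illustrative, and exact syntax varies by release):

```
switch# configure terminal
switch(config)# iscsi enable
switch(config)# iscsi virtual-target name iqn.2011-01.com.example:disk1
switch(config-iscsi-tgt)# pwwn 50:06:01:60:11:22:33:44
switch(config-iscsi-tgt)# advertise interface GigabitEthernet2/1
switch(config-iscsi-tgt)# initiator ip address 10.1.1.50 permit
switch(config-iscsi-tgt)# end
```

The pwwn line binds the virtual iSCSI target to a physical FC target port, and the advertise line controls which GE interface presents it to IP hosts.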

Connectrix MDS-Series Switch Architecture and Management - 33

iSCSI, or Internet SCSI, is a means of encapsulating the SCSI command set within an IP transport. The intent of iSCSI is to function as a "native" protocol; that is, the storage and servers in an iSCSI SAN have Ethernet, rather than Fibre Channel, interfaces, and use iSCSI as the language to communicate with each other. iSCSI is therefore an alternative to Fibre Channel.

Connectrix MDS-Series Switch Architecture and Management - 34

When the Connectrix MDS-9500 series directors and the MDS-9200 series switches are used, the SAN Extension Package provides an integrated, cost-effective, and reliable business continuity offering that uses existing IP infrastructure. Fibre Channel over IP (FCIP) can be used to connect Fibre Channel SANs across a distance using IP networks. Each MDS Gigabit Ethernet port is capable of managing up to three FCIP tunnels. An FCIP tunnel (or link) consists of one or more independent connections between two FCIP ports. Each tunnel transports encapsulated Fibre Channel frames over TCP/IP. FCIP defines virtual E (VE) ports, which behave exactly like standard Fibre Channel E_Ports, except that the transport in this case is FCIP instead of Fibre Channel. VE_Ports connect only to other VE_Ports.

Implementing FCIP with the MDS series eliminates the need for expensive systems to connect SANs over a long distance.

Other features include:

FCIP Compression - Increases WAN bandwidth without costly upgrades
Inter-VSAN Routing for FCIP - Allows selective transfer of data traffic on different VSANs without merging fabrics
Write Acceleration - Improves application performance when storage traffic is routed over WANs
Tape Acceleration - Helps achieve near full throughput over WAN links for remote tape backup

The 18/4 Multi-Services switching module provides 4 GE ports to be used for FCIP:

2 TCP connections per FCIP tunnel (control and data)
3 FCIP tunnels per GE port

Note: EMC currently supports 1 FCIP tunnel per GE port.
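A hedged sketch of one end of an FCIP tunnel (the IP addresses, profile number, and interface numbers are illustrative; SAN-OS enables the feature with fcip enable, and the peer switch needs the mirror-image configuration):

```
switch# configure terminal
switch(config)# feature fcip
switch(config)# interface GigabitEthernet2/1
switch(config-if)# ip address 10.10.10.1 255.255.255.0
switch(config-if)# no shutdown
switch(config-if)# exit
switch(config)# fcip profile 1
switch(config-profile)# ip address 10.10.10.1
switch(config-profile)# exit
switch(config)# interface fcip 1
switch(config-if)# use-profile 1
switch(config-if)# peer-info ipaddr 10.10.20.1
switch(config-if)# no shutdown
switch(config-if)# end
switch# show interface fcip 1
```

The fcip profile binds the tunnel to a local GE address, and peer-info names the remote end; once both sides are up, the fcip interface behaves as the VE_Port ISL described above.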

Connectrix MDS-Series Switch Architecture and Management - 35

Storage Media Encryption secures data stored on heterogeneous tape drives and virtual tape libraries in a SAN environment. The Disk Library DL4000 series is supported. It uses secure IEEE-standard Advanced Encryption Standard (AES) 256-bit algorithms that protect data at rest. The Storage Media Encryption license activates the cryptographic engines of the MDS-9222i and/or the 18/4 Multi-Services switching module. Storage Media Encryption enables traffic from any switch port to be encrypted without SAN reconfiguration: no rewiring, no downtime, and easy scalability. Storage Media Encryption also supports VSAN traffic and integrated management with MDS Fabric Manager and the MDS CLI.

Other features:

Optional tape compression to maximize media utilization
High availability clustering capability
Automatic load balancing and failover if a fault occurs
Scalable performance simply by adding more 18/4 modules and Storage Media Encryption licenses
A native key manager called MDS Key Management Center; Storage Media Encryption also integrates with RSA Key Manager for the Datacenter

Connectrix MDS-Series Switch Architecture and Management - 36

These are the key points covered in this module. Please take a moment to review them.

Connectrix MDS-Series Switch Architecture and Management - 37

The objectives for this module are shown here. Please take a moment to read them.

Connectrix MDS-Series Switch Architecture and Management - 38

The console requires a rollover RJ-45 cable. There is a switch on the supervisor module of the

MDS 9500 series directors that, if placed in the out position, allows the use of a straight-through

cable. The switch ships in the 'in' position and is located behind the LEDs.

Connectrix MDS-Series Switch Architecture and Management - 39

As indicated in the Basic System Configuration dialog box, this setup utility guides you

through the basic configuration of the system. The setup configures only enough

connectivity for management of the system, and it is mainly used for configuring the system

when no configuration is present. It is important to note that the setup command always

assumes system defaults and not the current system configuration values.

The Basic System Configuration dialog box options can be skipped by entering a carriage

return, or you can skip all remaining dialog boxes by typing Ctrl-C at any time. You must

enter a [y] when prompted to continue with the Basic System Configuration dialog box. The

default password is admin, but this can be changed after initial setup. An SNMPv3 user-

authentication password can also be established in this process.

After the initial setup is completed, you can log in and make changes to the parameters that

were set during that initial configuration process. If you wish to make changes to the initial

configuration at a later time, the setup command can be issued in the EXEC mode. Then the

setup utility guides you through the basic configuration process.

In the event that you change the administrator password during the initial setup process and

subsequently forget this new password, you have the option to recover this password. You

need to configure only the SNMPv3 user name and password to get access to the switch

through the Fabric Manager. The community strings can be configured at any time.
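
As noted above, the setup utility can be re-run from EXEC mode after the initial configuration. A typical invocation looks roughly like the following; the prompts shown are abbreviated and may differ between SAN-OS versions.

```
switch# setup

---- Basic System Configuration Dialog ----
This setup utility will guide you through the basic configuration of the system. ...

Would you like to enter the basic configuration dialog (yes/no): yes
Enter the password for "admin":
...
```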

Connectrix MDS-Series Switch Architecture and Management - 40

At this point, the name of your switch is entered along with the IP address and subnet mask of

the OOB Ethernet management port interface. Without this information, management access to

the switch through the OOB Ethernet port would not be possible.

When there are options to select with each dialog, you can either press Return, which accepts

the choice indicated between the square brackets (for example, [n]), or you can select the

alternative. In the example, n, for "no", was entered at Enable IP routing?, Configure static

route?, and Configure the default network? because [y] was the current selection and these

items were not desired in the configuration. However, Configure the default gateway? was

desired, so pressing Return enabled the user to enter an IP address on the next dialog line. No

other options in the example dialog script were changed.

A Network Time Protocol (NTP) server provides a precise time source (radio clock or atomic

clock) to synchronize the system clocks of network devices. NTP is transported over User

Datagram Protocol (UDP)/IP. All NTP communications use Coordinated Universal Time (UTC).

An NTP server receives its time from a reference time source, such as a radio clock or atomic

clock, attached to it. NTP distributes this time across the network. Using NTP is optional

but recommended.

Telnet services are enabled to remotely log on to the switch. The DNS client on the switch

communicates with the DNS server to perform the IP address-to-name mapping. Setting up the

Domain Name Server (DNS) is optional but recommended.
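
The NTP, Telnet, and DNS settings described above can also be configured after initial setup from the CLI. The sketch below uses example server addresses and domain name; substitute values for your environment.

```
switch# config t
switch(config)# ntp server 10.10.10.2        ! example NTP server address
switch(config)# telnet server enable
switch(config)# ip name-server 10.10.10.3    ! example DNS server address
switch(config)# ip domain-name example.com
```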

Connectrix MDS-Series Switch Architecture and Management - 41

The system prints a summary of the configuration for your review. The configuration printed

will be exactly what you entered. Compare it once more with the information you obtained in

the initial setup requirements to verify there are no typing errors. If everything was entered

correctly, there is no need to edit.

The system asks if you would like to edit the configuration that just printed out. Any

configuration changes made to a switch are immediately enforced but are not saved. If no edits

are needed, then you are asked if you want to use this configuration and save it as well. Since [y]

("yes") is the default selection, pressing Return activates this function, and the configuration

becomes part of the running-config and is copied to the startup-config.

This also ensures that the kickstart and system boot images are automatically configured.

Therefore, you do not have to run a copy command after this process. A power loss restarts the

switch using the startup-config, which has everything saved that has been configured to

nondefault values. If you do not save the configuration at this point, none of your changes are

updated the next time the switch is rebooted.

Connectrix MDS-Series Switch Architecture and Management - 42

Fabric Manager (FM) software is downloadable from Powerlink. There are two distinct

versions of Fabric Manager: Standalone and Fabric Manager Server. Fabric Manager Server is a

platform for advanced MDS-Series monitoring, troubleshooting, and configuration capabilities.

This tool provides centralized MDS-Series management services and performance monitoring.

Fabric Manager Client is a Java and SNMP-based network fabric and device management tool

with a GUI that displays real-time views of your network fabric, including Nexus 5000 Series

switches, MDS-Series switches and third-party switches, hosts, and storage devices.

Fabric Manager Server has the following features:

Multiple fabric management – Fabric Manager Server monitors multiple physical fabrics

under the same user interface. This facilitates managing redundant fabrics. A licensed Fabric

Manager Server maintains up-to-date discovery information on all configured fabrics so

device status and interconnections are immediately available when you open the Fabric

Manager Client.

Continuous health monitoring – MDS-Series health is monitored continuously, so any events

that occurred since the last time you opened the Fabric Manager Client are captured.

Roaming user profiles – The licensed Fabric Manager Server uses the roaming user profile

feature to store your preferences and topology map layouts on the server, so that your user

interface will be consistent regardless of what computer you use to manage your storage

networks.

Connectrix MDS-Series Switch Architecture and Management - 43

This slide shows the installation and upgrade wizard for Fabric Manager. From the CD, or the

downloadable image, run the start.html file. This will launch the installation page seen in

window #1. From the Install Management Software menu, select Cisco Fabric Manager, then

select Install Fabric Manager. Finally, select FM Installer; the installation wizard will

then launch.

Connectrix MDS-Series Switch Architecture and Management - 44

When installing FM for the first time, you can choose to install either FM Server or FM

Standalone. The installer defaults to FM Standalone; if FM Server is desired, it must be

selected. FM Server requires a database and uses clients to manage fabrics.

Fabric Manager Server does not require a license, but some of the features used may require the

FM Server license to be installed on the MDS switch.

Connectrix MDS-Series Switch Architecture and Management - 45

Once Fabric Manager Server is installed and running, the Client can be installed. The Client

allows interaction through the server to manage the switches.

To install Fabric Manager Client, open a connection through port 80 with a web browser to the IP

address of the Fabric Manager Server. Log in to the Server with the user name and password created

during the installation.

Connectrix MDS-Series Switch Architecture and Management - 46

EMC Customer Support can receive messages about events on MDS-Series switches using one

of two methods:

EMC Email Home for MDS-Series switches

EMC Control Center

Originally, the only way to have EMC remote support for MDS switches was to use EMC

Control Center. However, Cisco and EMC have worked together to provide an alternative

solution to have the MDS-Series provide EMC Email Home capabilities without the need of

EMC ControlCenter. Both EMC solutions support a similar set of events. Customers have the

flexibility to choose the solution that best meets their needs. Email Home provides the customer

with the remote notification of problem conditions without remote connection back to the

switches. The ControlCenter solution provides for both remote notification and also remote

connection back to the switches. ControlCenter alert configuration is beyond the scope for this

class.

Traps from all the MDS-Series switches are forwarded to the Cisco Fabric Manager. The

minimum supported version is Cisco Fabric Manager 3.0(2), but it is highly recommended that

3.0(2a) or higher is used. The instructions in this module are based on this version. Cisco Fabric

Manager identifies traps of interest to EMC Email Home and generates the appropriate XML

email messages that are forwarded to EMC Email Home. If the Fabric Manager service is not

running, email home will not work.

Connectrix MDS-Series Switch Architecture and Management - 47

It is recommended that the one-step install all command be used to upgrade your system software. This command upgrades all modules in any MDS-Series switch. Only one install all command can be running on a switch at any time, and no other command can be issued while running that command. The install all command cannot be performed on the standby supervisor module. It can only be issued on the active supervisor module.

If the switching modules are not compatible with the new supervisor module image, some traffic disruption may be noticed in the related modules, depending on your configuration. These modules are identified in the summary when you issue the install all command. You can choose to proceed with the upgrade or abort at this point.

The general steps to upgrade your system are:

Log into the switch through the console, Telnet, or SSH port of the active supervisor.

Create a backup of your existing configuration file, if required.

Perform the upgrade by issuing the install all command. The example above demonstrates upgrading to SAN-OS 3.0.1 using the install all command.

When upgrading, images can be retrieved in one of two ways:

Local, where images are locally available on the switch. The install all command uses the specified local

images.

Remote, where images are in a remote location and the user specifies the destination using the remote

server parameters and the file name to be used locally.

To upgrade the switch to a new image, you must specify the variables that direct the switch to the images. To

select the kickstart image, use the kickstart variable, or to select the system image, use the system variable. The

images and variables are important factors in any install procedure. You must specify the variable and the

image to upgrade your switch. Both images are not always required for each installation.
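
Invoking install all with local images follows the general pattern below. This is a sketch only: the image file names are placeholders and the actual names depend on the platform and the SAN-OS release being installed.

```
switch# install all kickstart bootflash:kickstart-image-3.0.1.bin system bootflash:system-image-3.0.1.bin
```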

Connectrix MDS-Series Switch Architecture and Management - 48

When you issue the install all command, the switch displays a summary of changes that are

made to your configuration and waits for your authorization to continue executing the command

process.

A compatibility check is conducted for each module installed in the system to be upgraded. The

impact of an upgrade and the install type are displayed.

Modules and specific images to be upgraded based on the files specified in the previous step are

displayed in an upgrade table that also shows the running and new versions.

Compatibility check terms are as follows:

Bootable: The ability of the module to boot or not boot, based on image compatibility

Impact: The type of software upgrade mechanism; disruptive or nondisruptive.

Install type terms are as follows:

reset: Resets the module

sw-reset: Resets the module immediately after switchover

rolling: Upgrades each module in sequence

copy-only: Updates the software for BIOS, loader, or bootrom

Connectrix MDS-Series Switch Architecture and Management - 49

VSANs help achieve traffic isolation in the fabric by adding control over each incoming and outgoing port. There can be up to 256 VSANs in the switch and 239 switches per VSAN. This effectively helps with network scalability because the fabric is no longer limited by 239 Domain IDs, because the IDs can be reused within each VSAN.

EMC supports 20 VSANs per switch. Check the EMC Support Matrix for the most up-to-date support information. The default VSAN is VSAN 1. The maximum number of VSANs per switch includes the default VSAN 1 and the isolated VSAN 4094.

To uniquely identify each frame in the fabric, the frame is labeled with a VSAN identification (VSAN_ID) on the ingress port; the VSAN_ID is stripped away across E_Ports. Across trunking E (TE) ports, the VSAN_ID is maintained. By carrying the VSAN and priority in the header, quality of service (QoS) can be properly applied. The VSAN_ID is always stripped away at the other edge of the fabric. If an E_Port is capable of carrying multiple VSANs, it becomes a TE_Port.

VSANs also facilitate the reuse of address space by creating independent virtual SANs, thereby increasing the available number of addresses and improving switch granularity. Without a VSAN, an administrator needs to purchase separate switches and links for separate SANs. The system granularity is at the switch level, not at the port level.

VSANs are easy to manage. To move or change users, you need to change only the configuration of the SAN, not its physical structure. To move devices between VSANs, you simply change the configuration at the port level; no physical moves are required.
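
Creating a VSAN and assigning a port to it from the CLI follows the general pattern below. The VSAN number, name, and interface are examples only.

```
switch# config t
switch(config)# vsan database
switch(config-vsan-db)# vsan 10 name Production      ! create VSAN 10 (example name)
switch(config-vsan-db)# vsan 10 interface fc1/1      ! move port fc1/1 into VSAN 10
switch(config-vsan-db)# exit
```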

Connectrix MDS-Series Switch Architecture and Management - 50

In Fabric Manager, select the Create VSAN icon from the toolbar. The Create VSAN dialog box

allows you to:

• Select one or more switches where the VSAN will be created.

• Specify the VSAN ID (valid range: 2 to 4093).

• Select the load balancing scheme.

• Select the interop mode.

• Specify the administrative state (active/suspended).

• Choose whether to specify static domain IDs for this VSAN (optional).

• Choose whether this VSAN will be exclusively used for Fibre Connection (FICON)

Protocol.

Connectrix MDS-Series Switch Architecture and Management - 51

There is a special configuration submode for interface configuration. This submode is entered

with the interface fc2/1 command.

The switchport ? command from the interface configuration submode provides a listing of all

the options that are available for the switchport configuration of the interface.
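
Entering the submode and requesting the help listing looks roughly like this. The option list shown here is abbreviated and the descriptions are approximate; the actual output depends on the SAN-OS version and module type.

```
switch# config t
switch(config)# interface fc2/1
switch(config-if)# switchport ?
  beacon      Enable or disable the beacon LED
  mode        Configure the port mode (E, F, FL, auto, ...)
  rate-mode   Configure the bandwidth rate mode
  speed       Configure the port speed
  trunk       Configure trunking
  ...
```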

Connectrix MDS-Series Switch Architecture and Management - 52

Use the switchport command to configure a switch port parameter on a Fibre Channel, Gigabit

Ethernet, or management interface.

The example above reserves 4 Gbps of dedicated bandwidth for interface fc2/1 (module 2, port 1).
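
A sketch of the commands that reserve dedicated 4 Gbps bandwidth for fc2/1; exact syntax may vary by SAN-OS release.

```
switch# config t
switch(config)# interface fc2/1
switch(config-if)# switchport speed 4000           ! fix speed at 4 Gbps
switch(config-if)# switchport rate-mode dedicated  ! reserve, rather than share, bandwidth
```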

Connectrix MDS-Series Switch Architecture and Management - 53

Verify the configuration using show port-resources command. Note the available and allocated

bandwidth figures in the output:

• Port-Group 1

• Total bandwidth is 12.8 Gbps

• Total shared bandwidth is 4.8 Gbps

• Allocated dedicated bandwidth is 8.0 Gbps
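
The verification output summarized above looks roughly like the following. This is an abbreviated, illustrative sketch rather than an exact capture; real output includes additional per-port detail.

```
switch# show port-resources module 2
Module 2
 Port-Group 1
  Total bandwidth is 12.8 Gbps
  Total shared bandwidth is 4.8 Gbps
  Allocated dedicated bandwidth is 8.0 Gbps
  ...
```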

Connectrix MDS-Series Switch Architecture and Management - 54

Fabric Manager provides an easy tool for all zone configuration tasks. To create and edit

zonesets, right-click the VSAN folder in the Logical Domains pane. The pop-up menu displays

several options, including:

Edit Full Zone Database: Choose this to create and edit fcaliases, zones, and zonesets.

Deactivate Zoneset: Choose this option to deactivate the currently active zoneset.

Copy Full Zone Database: Choose this option to propagate the configured zoneset in the

VSAN to any switch.

Edit Full Zone Database dialog allows complete fcalias, zone, and zoneset configuration:

Left pane: Displays fcalias names, zone and zoneset folders.

Bottom-right pane: Displays all Name Server entries for the VSAN.

Top-right pane: Displays the configuration of the fcalias, zone, or zoneset you select in the

left pane.

Add zones: To add a new zone or zoneset, select the folder and click the blue arrow.

Delete zones: To delete any zone or zoneset selected in the left pane or selected item(s) in

the top-right pane, click the red arrow.

Bottom menu: Provides options to activate, deactivate, and distribute zonesets.

Connectrix MDS-Series Switch Architecture and Management - 55

Once a Zone has been created, members can be added to the Zone. One method is to select the

zone from the bottom left window. Select the member from the bottom window, then click Add

to Zone. The member should appear in the top-right window.

Connectrix MDS-Series Switch Architecture and Management - 56

Fabric Manager can be used to create Zonesets and activate them for a given VSAN. Select the

Zoneset folder and right-click. Select Insert and define a Zoneset name. From the bottom

window, select the desired zones and drag and drop them into the Zoneset. Once the Zoneset

contains the correct Zones, select activate. This will cause a menu to be displayed which will

allow the comparison of the new Zoneset to the already present active Zoneset.
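
The equivalent zoning workflow from the CLI follows this general pattern. The zone name, zoneset name, VSAN number, and pWWNs below are hypothetical examples.

```
switch# config t
switch(config)# zone name Host1_Array1 vsan 10
switch(config-zone)# member pwwn 10:00:00:00:c9:00:00:01    ! host HBA (example)
switch(config-zone)# member pwwn 50:06:01:60:00:00:00:01    ! array port (example)
switch(config-zone)# exit
switch(config)# zoneset name ZS_Prod vsan 10
switch(config-zoneset)# member Host1_Array1
switch(config-zoneset)# exit
switch(config)# zoneset activate name ZS_Prod vsan 10
```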

Connectrix MDS-Series Switch Architecture and Management - 57

The objectives for this module are shown here. Please take a moment to read them.

Connectrix MDS-Series Switch Architecture and Management - 58

The objectives for this lesson are shown here. Please take a moment to read them.

Connectrix MDS-Series Switch Architecture and Management - 59

The Connectrix NEX 5000 series is designed for those ready to deploy a unified fabric that can

handle their LAN, SAN, and server cluster networking over a single link. The Connectrix NEX

5000 family includes the NEX 5020 and NEX 5010 FCoE switches, as well as three different

expansion modules.

The following slides cover the hardware in more detail.

Connectrix MDS-Series Switch Architecture and Management - 60

The NEX 5020 is a two rack-unit (2RU), 10 Gigabit Ethernet, Data Center Ethernet, and FCoE

Fibre Channel switch built to provide 1.04 terabits per second throughput with very low latency.

It has 40 fixed 10 Gigabit Ethernet, Cisco Data Center Ethernet, and FCoE Small Form Factor

Pluggable Plus (SFP+) ports. Two expansion module slots can be configured to support up to 12

additional 10 Gigabit Ethernet, Cisco Data Center Ethernet, and FCoE SFP+ ports, up to 16

Fibre Channel switch ports, or a combination of both. The switch has a serial console port and

an out-of-band 10/100/1000-Mbps Ethernet management port. The switch is powered by 1+1

redundant, hot-pluggable power supplies and 4+1 redundant, hot-pluggable fan modules to

provide highly reliable front-to-back cooling.

Connectrix MDS-Series Switch Architecture and Management - 61

Shown here is the front view of the NEX-5020 switch. From the front, you can see the dual

redundant power supplies and the fan modules. These items are all field replaceable units

(FRUs), and can be easily accessed via the front of the switch. Typical operating power is

480 W, with a maximum of 750 W.

Connectrix MDS-Series Switch Architecture and Management - 62

Shown here is the rear view of the NEX-5020. All ports are located on the rear of the unit to

facilitate cabling from the servers to the switch. There are four out-of-band management ports,

each capable of 10, 100 or 1000 Mbps; however, only the upper-rightmost port is currently used.

The console port provides terminal access to the switch, for initial configuration or in the event

that management port(s) are unavailable. There are 40 fixed 10 Gbps Ethernet ports that can be

used to connect hosts via Converged Network Adapters (CNAs) or to uplink the NEX-5020 to a

standard Ethernet switch with 10 Gbps capabilities. The first 16 ports operate at 1 or 10 Gbps;

the remaining ports operate only at 10 Gbps and cannot be configured to operate at lower

speeds, nor will they allow an adapter that operates at speeds other than 10 Gbps to connect.

Several SFP+ options are available and will be discussed later in this module. These ports

provide line rate speed, are non-blocking and do not share bandwidth.

The two expansion slots can be used to increase the number of Ethernet ports or add Fibre

Channel ports to the switch. Specific information regarding the expansion module options will

be discussed later in this module.

The redundant power supply connectors are also included on the rear of the switch.

Connectrix MDS-Series Switch Architecture and Management - 63

Shown here is the port numbering scheme for the NEX-5020. The green ports on the left side

represent the out-of-band management ports. M0 is the network management port that is

currently used. M1, L1 and L2 are also network ports, but are currently unutilized. C is the

console port. The switch has 3 possible "slots" for port numbering. Slot 1 is comprised of the 40

fixed Ethernet ports, and thus the ports will range from Ethernet 1/1 to Ethernet 1/40. Slots 2

and 3 refer to the expansion modules. If no module is inserted in a particular bay, no interfaces

will show for that slot. As an example, we have shown the port numbering for both the hybrid

Ethernet and Fibre Channel module as well as the Ethernet-only module. The hybrid module

will have ports Ethernet 2/1 through Ethernet 2/4 and FC 2/1 through FC 2/4. The Ethernet-only

module will have ports Ethernet 3/1 through Ethernet 3/6.

Connectrix MDS-Series Switch Architecture and Management - 64

The NEX 5010 is a one rack-unit (1RU), 10 Gigabit Ethernet, Data Center Ethernet, and FCoE

Fibre Channel switch built to provide 520 gigabits per second throughput with very low latency.

It has 20 fixed 10 Gigabit Ethernet, Cisco Data Center Ethernet, and FCoE Small Form Factor

Pluggable Plus (SFP+) ports. The first 8 fixed ports are dual speed, supporting both 10 Gigabit

Ethernet and 1 Gigabit Ethernet in hardware. One expansion module slot can be configured to

support up to 6 additional 10 Gigabit Ethernet, Cisco Data Center Ethernet, and FCoE SFP+

ports, up to 8 Fibre Channel switch ports, or a combination of 4 10 Gigabit Ethernet and 4 Fibre

Channel ports. The switch has a serial console port and an out-of-band 10/100/1000-Mbps Ethernet

management port. The switch is powered by 1+1 redundant, hot-pluggable power supplies and

1+1 redundant, hot-pluggable fan modules.

Connectrix MDS-Series Switch Architecture and Management - 65

Shown here is the front view of the NEX-5010 switch. From the front, you can see the dual

redundant power supplies and fan modules. These items are all field replaceable units (FRUs),

and can be easily accessed via the front of the switch.

Connectrix MDS-Series Switch Architecture and Management - 66

Shown here is the rear view of the NEX-5010. The NEX-5010 has four out-of-band

management ports, each capable of 10, 100, or 1000 Mbps; however, only the

upper-rightmost port is currently used. The console port provides terminal access to the switch.

There are 20 fixed 10 Gbps Ethernet ports. The first eight ports operate at 1 or

10 Gbps; the remaining ports operate only at 10 Gbps and cannot

be configured to operate at lower speeds. These ports provide line-rate speed, are non-blocking,

and do not share bandwidth.

The redundant power supply connectors are also included on the rear of the switch.

Connectrix MDS-Series Switch Architecture and Management - 67

Shown here is the port numbering scheme for the NEX-5010. The green ports on the left side are

the out-of-band management ports. M0 is the network management port that is currently used. C

is the console port. The switch has 2 "slots" for port numbering. Slot 1 is comprised of the 20

fixed Ethernet ports, and thus the ports will range from Ethernet 1/1 to Ethernet 1/20. Slot 2

refers to the expansion module. If no module is inserted in the bay, no interfaces will

show for that slot. We have shown the port numbering for the Fibre Channel module. The FC-only

module will have ports FC 2/1 through FC 2/8.

Connectrix MDS-Series Switch Architecture and Management - 68

The NEX switches support expansion modules used to increase the number of 10 Gigabit

Ethernet ports or connect to Fibre Channel SANs with 4/2/1-Gbps Fibre Channel switch ports,

or both.

Ethernet module that provides 6 ports of 10 Gigabit Ethernet, Cisco Data Center Ethernet,

and FCoE using the SFP+ interface

Fibre Channel plus Ethernet module that provides 4 ports of 10 Gigabit Ethernet using the

SFP+ interface, and 4 ports of 4/2/1-Gbps native Fibre Channel connectivity using the SFP

interface

Fibre Channel module that provides 8 ports of 4/2/1-Gbps native Fibre Channel using the

SFP interface.

The Fibre Channel ports are standard 4 Gbps ports and can be configured as E, TE, and F ports.

Fibre Channel modules will ship with SFP optical transceivers for all ports. Ethernet modules

have a choice between optical SR SFP+ or Copper SFP+ with Twinax cable.

EMC will support any combination of the expansion modules in the NEX 5020.

Please note that the expansion modules are not hot-pluggable. The switch must be powered

down before they are inserted or removed.

Connectrix MDS-Series Switch Architecture and Management - 69

In addition to using standard optical transceivers (SFP) for host connectivity, Copper-based

Twinax cables can be used. A Twinax cable is composed of two pairs of copper cables that are

covered with a shielded casing. Each end of the cable is terminated with an SFP+ GBIC. These

GBICs cannot be removed or replaced. In the event that a cable or transceiver becomes faulty,

the entire unit must be replaced. Twinax cables are designed to provide connectivity between a

host CNA and the NEX. The Twinax cable comes in three lengths: 1, 3, and 5 meters. Since the

NEX switches are currently being offered as a "top of rack" solution, a 5 meter cable should be

sufficient in length to reach the upper and lower positions of a rack. The main advantage of

using Twinax cables for host connectivity is cost. The Twinax cable assembly can provide a

significant cost savings over optical components. Connectivity between the NEX and the SAN

will remain as standard optical connections, and connectivity to the Ethernet network will most

likely be via fiber as well.

Connectrix MDS-Series Switch Architecture and Management - 70

Shown here is the ASIC architecture for the NEX-5020. The switch has a total of fourteen

Universal Port Controllers (UPCs) which provide connectivity between the external SFP+

interfaces and the Unified Crossbar Fabric (UCF). Each UPC can support up to four 10 Gbps

Ethernet ports, and each port is given a dedicated 12.5 Gbps pathway to the UPC and from the

UPC to the UCF. The additional 2.5 Gbps allows for additional overhead that may be needed for

packet encoding, encapsulation, etc. Each of the expansion modules is serviced by a set of 6

UPC connections. In the case of the Ethernet or Ethernet/Fibre Channel modules, each Ethernet

port is allocated its own pathway to a UPC, just as with the fixed ports. Fibre Channel ports,

however, share connections so that every two (2) Fibre Channel ports share bandwidth to the

UPC and the UCF. Notice the SFPs on the right of the diagram: they are Fibre Channel ports. Notice

that the dual NICs are connected to two ports on the UPC.

In addition to providing host connectivity, the UPCs provide two connections for the supervisor

module, and two of the UPC interfaces are currently unused.

Connectrix MDS-Series Switch Architecture and Management - 71

The objectives for this lesson are shown here. Please take a moment to read them.

Connectrix MDS-Series Switch Architecture and Management - 72

Shown here are the Fibre Channel features and limitations of the NEX. For certain limitations,

such as the maximum number of domains or hops supported in a fabric, check the

documentation on Powerlink.

Connectrix MDS-Series Switch Architecture and Management - 73

Shown here are some of the Ethernet features that are supported on the NEX. Spanning tree is

supported and required to prevent loops from occurring in the Ethernet network.

Connectrix MDS-Series Switch Architecture and Management - 74

Fibre Channel over Ethernet (FCoE) is a protocol defined by the T11

standards committee. It expands Fibre Channel into the Ethernet environment. As a physical interface it

uses Converged Enhanced Ethernet NICs, FCoE HBAs, or CNAs. Basically, FCoE allows Fibre

Channel frames to be encapsulated within Ethernet frames, providing a transport protocol more

efficient than TCP/IP.

Connectrix MDS-Series Switch Architecture and Management - 75


Converged Network Adapters (CNAs) are intelligent multi-protocol adapters that provide host

LAN and Fibre Channel SAN connectivity over 10Gbps Ethernet using Fibre Channel over

Ethernet (FCoE) and Enhanced Ethernet functionality.

CNAs are built using off-the-shelf NIC and HBA ASICs from QLogic and Emulex. Usually a CNA offers full hardware offload for the FCoE protocol, reducing system CPU utilization for I/O operations, which leads to faster application performance and higher levels of consolidation in virtualized systems. Implementing CNAs causes minimal disruption in existing environments.

Connectrix MDS-Series Switch Architecture and Management - 76


A Converged Network Adapter (CNA) appears to the host as two PCI devices: one is a network adapter and the other a Fibre Channel adapter. The drivers communicate with the respective PCI function.

If a request is a network transaction, it is delivered to the lossless MAC. In the case of Fibre Channel, the frames are encapsulated as FCoE by the FCoE encapsulation engine and then sent to the lossless MAC for delivery.

Received traffic is processed by the lossless MAC, which filters FCoE traffic and delivers it either to the Ethernet NIC, if it is a network transaction, or to the FCoE engine for decapsulation. Decapsulated frames are then forwarded to the Fibre Channel HBA device. The CNA's FCoE_LEP construct allows only one MAC address per FCoE Link End Point.

Connectrix MDS-Series Switch Architecture and Management - 77


Converged Enhanced Ethernet eliminates Ethernet's lossy behavior and makes it suitable for transporting storage traffic. The four main additions are Priority-based Flow Control, DCB Capability Exchange Protocol, Congestion Notification, and Enhanced Transmission Selection.

Connectrix MDS-Series Switch Architecture and Management - 78


These are the key points covered in this module. Please take a moment to review them.

Connectrix MDS-Series Switch Architecture and Management - 79


The objectives for this module are shown here. Please take a moment to read them.

Connectrix MDS-Series Switch Architecture and Management - 80


The objectives for this lesson are shown here. Please take a moment to read them.

Connectrix MDS-Series Switch Architecture and Management - 81


NEX switches use NX-OS 4.0(x) firmware. The switch cannot run any version of SAN-OS, or NX-OS 4.1, which is targeted at MDS-Series switches. Fabric Manager 5.0(1a) can be used to manage Connectrix MDS switches in the same fabric as NEX-5000 Series switches.

Always refer to the EMC Support Matrix for the most up-to-date support information.

Connectrix MDS-Series Switch Architecture and Management - 82


The NEX has an out-of-band management port that functions identically to the management port

on an MDS switch. The switch can be managed via Telnet, SSH or Fabric Manager/Device

Manager. One difference between the NEX and an MDS switch is that the management

interface on the NEX requires VRF (Virtual Routing and Forwarding), which allows multiple

routing tables to exist within a device. This is due to having multiple Ethernet ports on the NEX

that can potentially be in the same VLAN as the management interface.

The XML management interface uses the NETCONF protocol to manage devices and communicate

over the interface with an XML management tool or a program. You can configure the entire set

of CLI commands on the device with NETCONF.

SNMP allows you to configure switches using Management Information Bases (MIBs). Newer

firmware revisions of SAN-OS and NX-OS support SNMP v1, v2, and v3. SNMP is used to configure access for third-party management applications such as EMC ControlCenter.

Fabric Manager is the GUI configuration tool for MDS and NEX products. A stand-alone

version is licensed with every MDS/NEX switch. A license can be installed to enable the server-client model, which allows the management of multiple fabrics.

Connectrix MDS-Series Switch Architecture and Management - 83


The diagram above shows how Fabric Manager for NEX switches combines Fibre Channel and

Ethernet Management into a single suite. Fibre Channel features of NEX 5000 are managed by

Fabric Manager with N5K Extensions. Ethernet features of NEX 5000 are managed by Cisco

Data Center Network Manager (DCNM) with N5K Extensions. You cannot configure physical

Ethernet interfaces using Fabric Manager or Device Manager.

Connectrix MDS-Series Switch Architecture and Management - 84


You can configure the NEX-5000 Series switch using the Fabric Manager client, which runs on

a local PC and uses the Fabric Manager server. The Fabric Manager server software must be

installed before running Fabric Manager. On a Windows PC, the Fabric Manager server is

installed as a service and is administered using the Windows Services control panel. You cannot

change the configuration for physical Ethernet interfaces using Fabric Manager. Versions earlier

than 3.4(1) cannot interact with the NEX switch.

Connectrix MDS-Series Switch Architecture and Management - 85


Fabric Manager must be prepared to work with FCoE. The server properties of the Fabric

Manager must be edited after it is loaded for the first time. Once the change is saved, Fabric

Manager must be restarted.

In order to use the FCoE and Fibre Channel features on the NEX switches, the Storage Protocols

Service License is required. The switch will ship with the Storage Protocols Service License

installed. In the event that FCoE is not enabled, it can be enabled by using the feature command. This command has existed on network switches running IOS for some time, but is new for storage switches. Enabling and disabling FCoE is disruptive, as either operation requires the running configuration to be saved to startup and the switch rebooted. If the FCoE feature is not enabled, no FC interfaces on the switch will be operational, and they will not display in the interface listing.
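As a sketch of what enabling the feature looks like on the CLI (the prompts and exact syntax are illustrative; verify against the NX-OS documentation for your firmware release):

```
switch# configure terminal
switch(config)# feature fcoe      ! requires the Storage Protocols Services License
switch(config)# exit
switch# copy running-config startup-config
switch# reload                    ! enabling or disabling FCoE is disruptive
```

After the reboot, the FC interfaces appear in the interface listing.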

Connectrix MDS-Series Switch Architecture and Management - 86


Device Manager can be used to manage an NEX switch in the same way that it can be used with

an MDS switch. Additional functionality has been added to allow for the management of the

Virtual Interfaces and Virtual Interface Groups. The example shown has a 4 port FC + 4 port 10

GigE expansion module installed.

New features:

Quick Configuration Wizard

Virtual Interface Group management

Fibre Channel and Ethernet Virtual Interface management

N5K specific diagnostics

View Ethernet Interface configuration

N5K Switch Inventory management

Connectrix MDS-Series Switch Architecture and Management - 87


The process to upgrade firmware on the NEX switch is identical to the process used on the

MDS-series switches. After the system and kickstart images are saved onto bootflash, the install

all command is used to upgrade the firmware images and the BIOS.

Same process as MDS switches

Kickstart and System images downloaded into bootflash

install all command used to upgrade Kickstart, System and BIOS

Upgrade summary and confirmation displayed
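A hedged sketch of the upgrade sequence follows; the server address and image file names are placeholders, not the images for any particular release:

```
switch# copy scp://admin@192.168.1.10/images/n5000-uk9-kickstart.4.2.1.N1.1.bin bootflash:
switch# copy scp://admin@192.168.1.10/images/n5000-uk9.4.2.1.N1.1.bin bootflash:
switch# install all kickstart bootflash:n5000-uk9-kickstart.4.2.1.N1.1.bin system bootflash:n5000-uk9.4.2.1.N1.1.bin
```

The install all command then displays the upgrade summary and prompts for confirmation before proceeding.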

Connectrix MDS-Series Switch Architecture and Management - 88


ISSU support on the Nexus 5000 provides the capability to perform transparent software

upgrades, reducing downtime and allowing you to integrate the newest features and functions

with little or no effect on network operation for Ethernet, storage, and converged network

environments.

ISSU is supported on the NEX switches for non-disruptive upgrades. However, in versions

earlier than NX-OS v4.2(1), ISSU is not supported. Therefore, any firmware upgrades will result

in a full switch reset. All traffic will be disrupted while the switch is being reset, and the length

of time required for a reboot is generally longer than that of an MDS switch.

Connectrix MDS-Series Switch Architecture and Management - 89


The objectives for this lesson are shown here. Please take a moment to read them.

Connectrix MDS-Series Switch Architecture and Management - 90


Ethernet Top of Rack (TOR) Switch Topology

The NEX switches can be deployed as 10-Gigabit Ethernet top-of-rack (TOR) switches, with uplinks to the data center LAN distribution layer switches. The blade server rack can incorporate blade switches that support 10-Gigabit Ethernet uplinks to the NEX. NEX switches can have Ethernet uplinks to Ethernet switches. If STP is enabled in the data center LAN, the links to one of the switches will be STP active and the links to the other switch will be STP blocked.

Fabric Extender Deployment Topology

Fabric Extender top-of-rack units provide 1-Gigabit host interfaces connected to the servers. The Fabric Extender units are attached to their parent NEX switches with 10-Gigabit fabric interfaces. Each Fabric Extender acts as a Remote I/O Module on the parent NEX switch. All device configurations are managed on the NEX switch and configuration information is downloaded using inband communication to the Fabric Extender. This configuration is currently not supported by EMC.

I/O Consolidation Topology

The NEX switch connects to the server ports using FCoE. Ports on the server require CNAs. For redundancy, each server should connect to two switches. When used for Ethernet, CNAs are configured in active-passive mode, and the server needs to support server-based failover. When configured for FCoE, CNAs can be active/active or active/passive. Multipathing can be handled via MPIO or PowerPath.

On the NEX switch, the Ethernet network-facing ports can be connected to two Catalyst switches. Depending on required uplink traffic volume, there may be multiple ports connected to each Catalyst switch, configured as port channels. If STP is enabled in the data center LAN, the links to one of the switches will be STP active and the links to the other switch will be STP blocked.

The SAN network-facing ports on the NEX series are connected to the SAN (MDS-Series). Depending on required traffic volume, there may be multiple Fibre Channel ports connected to each SAN switch, configured as SAN port channels.

Connectrix MDS-Series Switch Architecture and Management - 91


There are two options for host connectivity to an NEX switch. The first option is for hosts that

do not need to access storage over a SAN. In this scenario, a host will have one or more (for

redundancy) 10 Gbps Ethernet Network Interface Cards (NICs) installed. The NEX acts as a

standard Ethernet switch for these hosts, and forwards Ethernet frames to the next device in the

Ethernet network. The standard 10 Gbps Ethernet NICs do not provide access to Fibre Channel

(SAN) resources at this time.

In the second environment, a host will have one or more Converged Network Adapters (CNAs).

The CNAs support the same standard networking protocols as the NICs as well as Fibre Channel

over Ethernet (FCoE). This allows hosts that have CNAs installed to access both network

resources as well as Fibre Channel resources by using a single interface.

Both NICs and CNAs can be used to connect different hosts on the same switch.

Connectrix MDS-Series Switch Architecture and Management - 92


As the CNAs multiplex the Ethernet and Fibre Channel traffic together, the NEX must separate

that traffic and forward it to the correct locations, either the Ethernet network or the SAN. To do

this, one or more virtual interfaces must be created on each active port on the switch. For hosts

that are connected with NICs, only a Virtual Ethernet (veth) Interface needs to be created, as only

Ethernet connectivity is supported. For hosts that are connected with a CNA, either a Virtual

Ethernet (veth) or Virtual Fibre Channel Interface (vfc) can be created, or both, depending on

what functionality is required (network or storage). Typically hosts that have a CNA installed

will use it to access both Ethernet and Fibre Channel resources, and will therefore need both a

Virtual Fibre Channel Interface and Virtual Ethernet interface.
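An illustrative sketch of creating a Virtual Fibre Channel interface is shown below. The interface and VSAN numbers are hypothetical, and early 4.0(x) releases created virtual interfaces through Virtual Interface Groups rather than a direct bind, so verify the syntax for your release:

```
switch(config)# interface vfc 3
switch(config-if)# bind interface ethernet 1/3    ! tie the virtual FC interface to the CNA-facing port
switch(config-if)# no shutdown
switch(config-if)# exit
switch(config)# vsan database
switch(config-vsan-db)# vsan 100 interface vfc 3  ! place the virtual FC interface in a VSAN
```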

Connectrix MDS-Series Switch Architecture and Management - 93


Link aggregation can refer to different technologies and components for the purposes of increased bandwidth

or redundancy. At the host level, the CNAs can be configured as a "team" for redundancy at the Ethernet level;

essentially a virtual adapter that contains multiple physical adapters. Teams are configured in active-passive

mode, where one CNA is transmitting and receiving data while the other remains inactive in most situations.

However, in the event that the primary interface experiences a failure, the inactive CNA assumes the role of

the active interface and continues communications.

From a Fibre Channel perspective, multipathing software, such as PowerPath or native OS MPIO, can be used

to provide traditional fault-tolerance and load balancing functionality. Unlike the Ethernet configuration,

CNAs will function in an active-active configuration for Fibre Channel communication.

Port-channels can be created between the NEX and an MDS SAN switch and between the NEX and an

Ethernet switch. SAN port-channels on the NEX switch are identical to ones on an MDS switch. They can be

comprised of E or TE ports, can carry one or more VSANs and can have two or more members. An alternative

to using port-channels is to use individual ISLs between the NEX switches and the MDS switches. ISLs will

also provide additional bandwidth and redundancy in the event of a failure. Port-channels to MDS switches

can be created using the san port-channel command.

Ethernet port-channels are similar in function to SAN port-channels. They can carry one or more VLANs and

can have two or more members. On the NEX switches, port-channels can be created statically or by using the

Link Aggregation Control Protocol (LACP). Ethernet port-channels are critical in providing high availability

to the Ethernet network. Unlike SAN connectivity, where you can have multiple ISLs between switches, in an

Ethernet environment you can only have one active link between two switches. Any additional connections

will be disabled by the Spanning Tree Protocol to prevent loops from forming in the data path. Since a port-

channel is viewed as a single logical link, it allows multiple physical connections to be utilized, and thereby

provides fault tolerance in the event that a single connection fails.
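A hedged sketch of both port-channel types follows; the interface and channel numbers are illustrative:

```
! SAN port-channel toward an MDS switch
switch(config)# interface san-port-channel 10
switch(config)# interface fc2/1
switch(config-if)# channel-group 10
switch(config-if)# no shutdown

! Ethernet port-channel toward a LAN switch, negotiated with LACP
switch(config)# feature lacp
switch(config)# interface ethernet 1/19
switch(config-if)# channel-group 20 mode active
```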

Connectrix MDS-Series Switch Architecture and Management - 94


N_Port ID Virtualization (NPIV) is an ANSI T11 standard that describes how a single Fibre

Channel HBA port (single N_Port/single FCID) can register with several World Wide Port

Names (WWPNs) or multiple N_Port IDs in the SAN fabric. This allows a fabric-attached

N_Port to claim multiple fabric addresses. Each address appears as a unique entity on the Fibre

Channel fabric.

In other words, NPIV-capable HBAs can provide multiple WWPNs rather than registering a

single WWPN in the fabric. This is beneficial in two ways. First, in a virtual machine environment where many host operating systems or applications are running on a physical host, each virtual machine can have separate WWPNs and can now be managed independently from zoning, aliasing, and security perspectives, relieving the hypervisor of the I/O blending operation. Second, no extra physical ports need to be connected in the SAN fabric, so the addition of more edge switches is not required.
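On the fabric switch side, NPIV is a feature to enable; a minimal sketch, assuming NX-OS syntax:

```
switch(config)# feature npiv      ! allow multiple N_Port IDs per F port
switch# show flogi database       ! each NPIV login appears as its own FCID/WWPN entry
```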

Connectrix MDS-Series Switch Architecture and Management - 95


N port virtualization (NPV) reduces the number of Fibre Channel domain IDs in SANs. Switches

operating in the NPV mode do not join a fabric. They pass traffic between NPV core switch links and end

devices, which eliminates the domain IDs for these edge switches.

When a switch acts as an NPV edge switch, it does not perform any fabric services, and instead forwards

all fabric activity (FLOGI, FDISC, Name Server, Zoning, etc.) to the NPV core switch. Care should be taken when enabling or disabling NPV: in order to enter or exit NPV mode, the switch performs a write erase and reboots. When the switch completes the reboot, it will require an admin password - necessitating console access - and all configuration information will be lost. Please

check Powerlink for the supported NPV configurations.

NPV mode applies to an entire switch. All end devices connected to a switch that is in NPV mode must

log in as an N port to use this feature. All links from the edge switches to the NPV core switches are

established as NP ports (not E ports), which are used for typical ISLs. NPIV is used by the switches in

NPV mode to log in to multiple end devices that share a link to the NPV core switch.

An NP port (proxy N port) is a port on a device that is in NPV mode and connected to the NPV core

switch using an F port. NP ports behave like N ports except that, in addition to providing N port behavior,

they also function as proxies for multiple, physical N ports. An NP link is basically an NPIV uplink to a

specific end device. NP links are established when the uplink to the NPV core switch comes up; the links

are terminated when the uplink goes down. After the uplink is established, the NPV switch performs an

internal FLOGI to the NPV core switch, and then registers itself with the NPV core switch's name server.

Subsequent FLOGIs from end devices in this NP link are converted to FDISCs.
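A minimal sketch of enabling and verifying NPV on an edge switch (hedged; remember that entering or exiting NPV mode erases the configuration and reboots the switch):

```
edge(config)# feature npv         ! caution: triggers a write erase and reboot
! after the reboot and initial setup:
edge# show npv status             ! state of the NP uplinks to the NPV core
edge# show npv flogi-table        ! end-device logins being proxied to the core
```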

Connectrix MDS-Series Switch Architecture and Management - 96


These are the key points covered in this module. Please take a moment to review them.

Connectrix MDS-Series Switch Architecture and Management - 97


The objectives for this module are shown here. Please take a moment to read them.

Connectrix MDS-Series Switch Architecture and Management - 98


The basic boot sequence of the NEX begins with powering on the switch. If, within 2 seconds of

power on, the Ctrl+Shift+6 key sequence is pressed, the switch will boot into the "Golden BIOS", which is a failsafe copy of the boot code. That key sequence must be pressed from a terminal locally connected to the console port, and the connection settings must be 9600 baud, 8 data bits, no parity, and 1 stop bit. If the key sequence is not pressed, or not pressed in

time, the switch will attempt to boot into the user loaded version of BIOS. If that BIOS is not

valid, it will default to booting into the Golden BIOS. Otherwise, it will proceed to boot the

upgraded BIOS. Regardless of which version of BIOS loads, it will then launch the loader.

Connectrix MDS-Series Switch Architecture and Management - 99


In the event that the kickstart image stored on bootflash becomes corrupted or invalid, booting

to the loader prompt may be necessary. Accessing the loader prompt is as simple as pressing

Ctrl+Shift+R or Ctrl+Shift+L. In the event that the console port settings have been changed, or

the CMOS has become corrupted, it is possible to force the console back to the default values.

To do this, connect a terminal at the default settings and reboot the switch. As the switch is

booting, repeatedly press Ctrl+Shift+R until the loader> prompt displays. At this point, the

console is using the default values and the CMOS can be updated to the default values if desired.

If there are no CMOS issues with the switch, Ctrl+Shift+L can be used to access the loader

prompt using the CMOS values as they are configured.

Connectrix MDS-Series Switch Architecture and Management - 100


After the kickstart image loads, it will decompress the system image and load it, at which point

the switch will be at its normal operating state. In the event that the system image is faulty, it can

be prevented from automatically loading by pressing Ctrl+Shift+B while the kickstart is loading.

This will bring the switch to the switch(boot)# prompt.

Connectrix MDS-Series Switch Architecture and Management - 101


Troubleshooting startup issues on the NEX is very similar to the process used on the MDS.

Problematic configurations can be corrected by halting the boot process, after the kickstart

loads, and by clearing the configuration. The lost password recovery process is identical to the

process on the MDS, and is accomplished by exiting to the kickstart prompt and resetting the

password. Bad boot files can be resolved by exiting to either the kickstart or loader and

downloading new images to bootflash.

Connectrix MDS-Series Switch Architecture and Management - 102


As the 10 Gbps capable SFP+ transceivers are similar in appearance to standard Ethernet or FC

SFPs, there is the possibility that an invalid SFP could be inserted into an Ethernet port. In this

case, the interface will show an "SFPInvalid" status. The commands shown here can be used to

determine if the transceiver is not a supported model, or if it simply has a physical error.

Connectrix MDS-Series Switch Architecture and Management - 103


If the Ethernet link does not come up, one thing to check is whether there is a physical link. The command show hardware internal gatos port ethernet 1/1 xcvr info | grep State will show the link state of the interface. The state will be one of several options, which can be found in the full output of the command, or by issuing the same command with a lowercase "s" on the word "state" (that is, show hardware internal gatos port ethernet 1/1 xcvr info | grep state). Gatos is the codename for the Unified Port Controller in the NEX.

The ASIC number is the same as the Gatos UPC. Use the value under gatos_instance to

verify if there is a MAC local or remote fault.

Connectrix MDS-Series Switch Architecture and Management - 104


Shown here is the output from a show interface for an Ethernet port. Some items to take note of

in the output are:

Reliability: Any significant variation from 255/255 could indicate a physical link problem; a

faulty cable, transceiver, etc.

Port mode: Port mode should be access if a host is connected and is only using a single

VLAN. Port mode should be set to trunk when it is an uplink to an Ethernet switch or if the

host is using multiple VLANs.

The section under Rx displays numerous error conditions which could result from buffer

overruns, CRC errors, carrier loss, malformed frames, and others.

Interface counters can also be obtained for Virtual Ethernet and Virtual Fibre Channel ports.
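The counters can be inspected with commands along these lines; the interface numbers are illustrative:

```
switch# show interface ethernet 1/5   ! physical port: reliability, port mode, Rx/Tx error counters
switch# show interface vfc 5          ! counters for a Virtual Fibre Channel interface
```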

Connectrix MDS-Series Switch Architecture and Management - 105


Shown here is a sample of some errors that can be encountered during system POST, as well as the effect each has on the hardware and the steps automatically taken by the system.

Connectrix MDS-Series Switch Architecture and Management - 106


Shown here is the output from the show environment command. This command displays fan information for all five fan units and the two power supply fans, temperatures for the fan units, power supplies, and modules, and the status of the power supplies and power usage for the system.

Connectrix MDS-Series Switch Architecture and Management - 107


Shown here is the output of the show diagnostic result command. This command shows the

status of hardware components associated with the designated module when it performs its

POST diagnostics. This command can be issued for module 1, the core NEX, or modules 2 and

3, the expansion modules.

Connectrix MDS-Series Switch Architecture and Management - 108


There are numerous show commands that will function on the NEX switches. Many of them are

the same or similar to the ones used on the MDS switches. The show tech-support details command has the same function as on the MDS, but includes additional information. In addition to saving the show tech output to a file, the tac-pac command can be used to gather the information and automatically save it to a zipped file. That file can then be copied off the switch by normal means (FTP, SCP, SFTP, etc.). By default, tac-pac stores the file in volatile:, but other locations can be specified on the command line.
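For example (the file name and server address are placeholders):

```
switch# tac-pac bootflash:showtech_sw1.gz     ! gather show tech-support details into a gzip file
switch# copy bootflash:showtech_sw1.gz scp://admin@192.168.1.10/captures/
```

Without an argument, tac-pac writes the file to volatile:.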

Connectrix MDS-Series Switch Architecture and Management - 109


Shown here is the information that is captured by either the show tech-support details or tac-pac command.

Connectrix MDS-Series Switch Architecture and Management - 110


This slide continues the information that is captured by both the show tech-support details and tac-pac commands.

Connectrix MDS-Series Switch Architecture and Management - 111


Core files can be listed by using the show cores command. This will display all cores on the

system, as well as the process name, PID, and timestamp of each core. A core can be copied to a remote server via FTP or SCP by using the copy core command. An example is shown on the

slide.
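A hedged illustration of the commands involved; the module number, PID, and server are placeholders:

```
switch# show cores                            ! list cores with process name, PID, and timestamp
switch# copy core://1/1234 scp://admin@192.168.1.10/cores/
```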

Connectrix MDS-Series Switch Architecture and Management - 112


These are the key points covered in this module. Please take a moment to review them.

Connectrix MDS-Series Switch Architecture and Management - 113


These are the key points covered in this training. Please take a moment to review them.

This concludes the training. Please proceed to the Course Completion slide to take the

assessment.