
Dell EMC Vscale Architecture Overview

Document revision 1.0

February 2017


Revision history

Date | Document revision | Description of changes
February 2017 | 1.0 | Initial version


Contents

Introduction
Architecture overview
Connecting system resources
  Connectivity
  Physical components
  LAN architecture
    LAN configurations
    VXLANs
  SAN architecture
    SAN switches
Architecture resources
  Physical components
  Technology connects
  Compute resources
  Storage resources
    File storage
  Data protection resources
  System resources
Hosting management applications
  Compute components
  Storage components
  Network components
    Network architecture
  VMware vSphere virtual switch designs
  Management workload cluster and resource pools
  Virtualization
Sample configurations
  Sample ACI configuration with the Cisco Nexus 9504 Switch
  Sample ACI configuration with the Cisco Nexus 9508 Switch
  Sample technology connect configuration
  Sample system with compute
  Sample system with storage
  Sample Vscale Management Platform with VNX5200
Additional references
  Virtualization components
  Compute components
  Network components
  Storage components


Introduction

This document describes the high-level design of the Vscale Architecture.

Vscale Architecture is an architectural framework that uses Vscale Fabric to connect modular building blocks such as Converged Systems and Vscale Fabric Technology Extensions. This provides Vscale Architecture the flexibility to accommodate a wide variety of application requirements using an IT infrastructure that can scale from a single Converged System to the largest data center.

The target audience for this document includes sales engineers, field consultants, advanced services specialists, and customers.

The following table provides a description of related documentation:

Document | Provides
Release Certification Matrix | A list of the certified versions of software, firmware, and hardware.
Converged Systems Physical Planning Guide | A description of the physical components and elevations.
Converged Systems Powering On and Off Guide | Instructions on how to manage power.
Integrated Data Protection Guide | Information on advanced planning and backup guidelines.
Vision Intelligent Operations Administration Guide | Information on how to manage Converged Systems.
Glossary | Definitions of terms specific to Converged Systems.


Architecture overview

Vscale Architecture is a framework that enables the building of data center-scale IT systems, composed of resources logically connected using a Vscale Fabric to form logical systems.

The Vscale Architecture modular building blocks include the following:

• New or existing Converged Systems (Vblock Systems, VxBlock Systems, or VxRack Systems).

• Converged Systems that can be expanded with Converged Technology Extensions.

• Vscale Fabric delivers scalable LAN and SAN switching for connectivity between Vscale Fabric Technology Extensions and Converged Systems.

— Vscale Fabric incorporates a scalable spine/leaf LAN architecture with optional software-defined networking (SDN) and a core/edge SAN architecture.

— Vscale Fabric Technology Extensions are modular containers that provide connectivity for compute, storage, and data protection resources consumed by other resources attached to the Vscale Fabric.

• The Vscale Border Technology Connect provides external intranet and internet connectivity for external-facing routers, firewalls, and other edge functions to communicate with the Vscale Architecture.

• The Vscale Open Technology Connect enables an organization to integrate non-Dell EMC resources into the Vscale Architecture to provide investment protection and flexibility.

• Vscale Management Platform is a scalable management platform that hosts core management applications, and can also be extended to support an IT organization's management and orchestration stacks, as well as other applications, such as logging and SIEM.

By connecting multiple modular components to a scalable network fabric, a wide variety of application requirements at any scale can be accommodated. Vscale Architecture offers a high degree of flexibility and scale that complements Converged Systems with the following benefits:

• Dell EMC engineered and validated architecture.

• Dell EMC lifecycle management and support.

• Dell EMC Release Certification Matrix (RCM) certification for all Dell EMC-provided components, including LAN and SAN fabrics.

• Uplinks to Vscale Fabric for unified storage access and file services.

• Out-of-band management networks.

• VMP as a scalable platform used to host Converged Systems element managers for all infrastructure components in the data center. VMP can be extended to support Dell EMC ecosystem management workloads and other management applications.

• Logical build guidelines that include prescriptive connectivity and management.

• Vision Intelligent Operations software for managing the health, Dell EMC RCM compliance, and security compliance of Converged Systems across the Vscale Architecture.


The following illustration shows how Vscale Architecture combines the modular components:

Vscale Fabric

Vscale Fabric consists of two discrete, switched LAN and SAN networks. The Vscale LAN Fabric contains spine and leaf switches that provide Ethernet and IP connectivity. The spine switches are high-throughput switches that forward traffic between the leaf switches. The leaf switches provide a network connection point for resources in the Vscale Fabric Technology Extensions and Converged Systems to connect to the spine switches.


The following illustration shows a Vscale LAN Fabric where the spine, leaf, and Vscale Fabric Technology Extensions connect:

The Vscale SAN Fabric is a flexible, core-edge or edge-core-edge architecture that uses FC SAN switches for storage connectivity. The core switches are high-throughput, director-class switches with high port density that provide connectivity for storage arrays and edge switches. The edge switches provide a SAN connection point for compute and storage resources in Vscale Fabric Technology Extensions and Converged Systems.


The following illustration shows how the SAN core and Vscale Fabric Technology Extensions connect:

Vscale Fabric Technology Extensions

Vscale Fabric Technology Extensions are modular containers that connect Dell EMC compute, storage, and data protection resources to the Vscale Fabric. These resources can be logically configured to form logical systems or consumed by other systems attached to the Vscale Fabric.

Vscale Fabric Technology Extensions contain the following components:

• Intelligent physical infrastructure is a 42 RU enclosure that includes an intelligent gateway that gathers information about power, thermals, security, alerts, and all components in the physical cabinet.

• Two LAN leaf switches (optional if LAN connectivity is not required).

• Two SAN edge switches (optional if SAN connectivity is not required).

• One or more Dell EMC management switches.

Vscale Fabric Technology Extensions can be populated with storage, compute, or data protection resources, or a combination of resources, to meet a particular operational model or application use case.


For example, a Vscale Fabric Technology Extension can be configured with storage or compute only, or with both storage and data protection resources.

Technology connects

The open technology connect is a special use case of Vscale Fabric Technology Extensions that can include resource types that Dell EMC does not support. It is used solely for non-Dell EMC supplied, third-party resources and provides connectivity for third-party IT infrastructure. It can include compute servers, storage, or data protection resources, and provides VXLAN connectivity.

An open technology connect can be populated with any third-party, non-Dell EMC supplied assets. This enables organizations to integrate technology assets that have not been depreciated, or that have strategic value, but cannot be re-platformed to x86 technology. An open technology connect cannot be used for Dell EMC-provided systems or resources.

The open technology connect contains the following components:

• Intelligent physical infrastructure

• Two LAN leaf switches

• Two SAN edge switches

• One or more Dell EMC management switches

The border technology connect contains similar components as the open technology connect, but is used to provide external connectivity for customer routers and other edge devices such as firewalls, application delivery controllers, or intrusion detection and prevention appliances. SAN edge switches are not required for a border technology connect.

Management

Vscale Management Platform is used to manage all components in a Vscale Architecture deployment. VMP is a scalable management platform that hosts core Dell EMC management workloads such as element managers, VMware vCenter Servers, and Secure Remote Support. The VMP may also be extended to host management workloads that are not Dell EMC-provided, such as management and orchestration stacks or logging.

Vision Intelligent Operations, contained in the VMP, simplifies management and operations of converged infrastructure and helps to ensure that all shared resources in a Vscale Architecture data center are compatible and available for applications to consume.

Refer to the appropriate Architecture Overview for more information about your system.

Related information

Connecting system resources

Architecture resources

Hosting management applications


Connecting system resources

Vscale Fabric provides high-performance, switched LAN and SAN networks for connectivity between Vscale Architecture system resources. The Vscale LAN Fabric is a spine and leaf architecture, and the Vscale SAN Fabric is a core and edge architecture.

LAN architecture

The LAN contains the spine and leaf architecture that provides a high-performance, routed underlay network with transparent VXLAN overlay connectivity between system resources attached to the Vscale Fabric Technology Extension leaf switches. LAN connectivity consists of an equal number of connections from each leaf switch to all spine switches, using a minimum of one and a maximum of four 40 Gbps connections, depending on 40 GbE capacity and the number of deployed spine switches.

SAN architecture

The SAN network is a flexible core-edge or edge-core-edge architecture. Two separate Vscale SAN Fabrics provide high availability. Compute resources connect only to SAN edge switches. Storage and data protection resources may connect to edge or core switches, which is considered a collapsed core design.

Connectivity

Vscale Fabric combines Vscale LAN Fabric spine switches and Vscale SAN Fabric core switches for connectivity between the Converged Systems and Vscale Fabric Technology Extensions.

The LAN leaf switches and SAN edge switches combine to connect resources in a Vscale Fabric Technology Extension to the Vscale LAN Fabric and Vscale SAN Fabric.

Important: To simplify traffic management and scaling in the border technology connect, only customer upstream core routers and switches should connect to the border leaf switches.

Vscale LAN Fabric connectivity

Vscale LAN Fabric is a spine and leaf architecture that adheres to the following connectivity rules:

• Vscale Fabric leaf switches connect to the spine switches.

• Vscale Fabric leaf switches cannot be directly connected to other leaf switches.

• Vscale Fabric spine switches cannot be directly connected to the other spine switches.

• Hosts, such as servers, IP storage (NAS), or routers, cannot be directly connected to the spine switches.


Vscale SAN Fabric connectivity

The Vscale SAN Fabric is a classic redundant SAN A/B architecture with the following connectivity:

• Edge switches do not connect directly to other edge switches.

• Edge switches can provide connectivity for the following:

— Storage arrays

— Compute resources

— Data Protection resources

• Core switches only connect to other core switches in the same Vscale SAN Fabric.

• Core switches can provide connectivity for the following:

— Storage arrays

— Data protection resources

— Other core switches, for future meshing at the core for expansion

— Edge switches

Physical components

Vscale Fabric consists of LAN and SAN switching that is used to interconnect Converged Systems and resources within Vscale Fabric Technology Extensions.

LAN switches

The following table provides the LAN switches that are supported in the Vscale Fabric:

Component | Spine | Leaf
Cisco Nexus 3172TQ Switch | | X
Cisco Nexus 9332PQ, 32 port, QSFP+ based switch | X (NX-OS) | X (ACI for VxRack Systems only)
Cisco Nexus 9336PQ ACI Spine Switch | X |
Cisco Nexus 9396PX Switch with M12PQ, 12 port, QSFP+ uplink card | | X (Default)
Cisco Nexus 9504 Switch with N9K-X9636PQ, 36 port, QSFP+ line cards, NX-OS | X |
Cisco Nexus 9508 Switch with N9K-X9636PQ, 36 port, QSFP+ line cards, NX-OS | X (Default) |
Cisco Nexus 9504 Switch with N9K-X9736PQ, 36 port, QSFP+ line cards, ACI | X |
Cisco Nexus 9508 Switch with N9K-X9736PQ, 36 port, QSFP+ line cards, ACI | X (Default) |


SAN switches

The following table provides the SAN switches that are supported in the Vscale Fabric:

Component | Core | Edge
Cisco MDS 9396S 16G Multilayer Fabric Switch | | X
Cisco MDS 9706 Multilayer Director, 48 port, 16 Gb FC Module | X | X
Cisco MDS 9710 Multilayer Director, 48 port, 16 Gb FC Module | X (Default) | X
Cisco MDS 9148S Multilayer Fabric Switch | | X (Default)
Cisco MDS 9148 Multilayer Fabric Switch | | X

LAN architecture

The LAN contains the spine and leaf architecture providing high-performance, non-blocking, multi-stage switched network connectivity between system resources attached to the Vscale Fabric Technology Extension leaf switches.

Access to the Vscale LAN Fabric is provided through a leaf switch connected to all spine switches. All bandwidth must be the same between each spine and leaf switch in the fabric. Any increase in bandwidth must be performed across all leaf switches. For example, if one leaf switch requires additional bandwidth to support an application and an additional 40 GbE uplink per spine is provisioned, all leaf switches connected to the Vscale LAN Fabric must be upgraded.
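As an illustration of this symmetry rule, the following sketch (Python, with hypothetical names; not Dell EMC tooling) checks that every leaf has the same number of uplinks to every spine:

```python
# Illustrative sketch only (not Dell EMC tooling; names are hypothetical):
# check that every leaf connects to every spine with the same uplink count,
# as the Vscale LAN Fabric requires.

def validate_fabric_symmetry(uplinks: dict[str, dict[str, int]]) -> list[str]:
    """uplinks maps leaf name -> {spine name: number of 40 GbE uplinks}."""
    errors = []
    spines = None
    counts = set()
    for leaf, per_spine in uplinks.items():
        if spines is None:
            spines = set(per_spine)
        elif set(per_spine) != spines:
            errors.append(f"{leaf} does not connect to all spines")
        counts.update(per_spine.values())
    if len(counts) > 1:
        errors.append(f"unequal uplink counts across the fabric: {sorted(counts)}")
    if not counts <= {1, 2, 3, 4}:
        errors.append("each leaf needs one to four uplinks per spine")
    return errors

# Upgrading bandwidth on only one leaf violates the symmetry rule:
fabric = {
    "leaf-1a": {"spine-1": 1, "spine-2": 1},
    "leaf-1b": {"spine-1": 1, "spine-2": 1},
    "leaf-2a": {"spine-1": 2, "spine-2": 2},  # upgraded in isolation
}
print(validate_fabric_symmetry(fabric))
```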

The Vscale LAN Fabric underlay is a Layer 3, routed network supporting overlay VXLANs that use Multiprotocol BGP (MP-BGP) extensions for Ethernet VPN (EVPN) distributed control plane operations. MP-BGP supports the EVPN address family that advertises Layer 2 reachability information between VXLAN endpoints. If MP-BGP is configured, route reflectors (RR) are used to simplify configuration as the Vscale Fabric scales in size.

When a host VM attaches to a leaf, the leaf uses MP-BGP EVPN to advertise the MAC address and IP address of the host to the leaf switches. As hosts move between the Converged Systems and Vscale Fabric Technology Extensions, MP-BGP updates reachability information across leaf switches to ensure forwarding information is current and correct.
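A minimal conceptual model of this advertisement flow, assuming a deliberately simplified control plane rather than real MP-BGP, is sketched below:

```python
# Deliberately simplified model of the behavior described above; real EVPN
# uses MP-BGP type-2 routes and route reflectors, which this sketch elides.

class Leaf:
    def __init__(self, name: str):
        self.name = name
        self.reachability = {}  # MAC -> (IP, leaf the host sits behind)

class Fabric:
    def __init__(self, leaves: list):
        self.leaves = {leaf.name: leaf for leaf in leaves}

    def host_attached(self, leaf_name: str, mac: str, ip: str):
        # The attachment leaf learns the host locally, then the advertisement
        # converges the binding onto every leaf in the fabric.
        for leaf in self.leaves.values():
            leaf.reachability[mac] = (ip, leaf_name)

    def host_moved(self, new_leaf: str, mac: str, ip: str):
        # A move triggers a fresh advertisement so forwarding stays current.
        self.host_attached(new_leaf, mac, ip)

fabric = Fabric([Leaf("leaf-1"), Leaf("leaf-2"), Leaf("leaf-3")])
fabric.host_attached("leaf-1", "00:50:56:aa:bb:cc", "10.1.1.10")
fabric.host_moved("leaf-2", "00:50:56:aa:bb:cc", "10.1.1.10")
print(fabric.leaves["leaf-3"].reachability)
# {'00:50:56:aa:bb:cc': ('10.1.1.10', 'leaf-2')}
```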


The following illustration shows how EVPN is used:

Vscale LAN Fabric uses the open shortest path first (OSPF) routing protocol to establish and maintain the routing topology, and provides equal-cost multi-pathing (ECMP) for VXLAN tunnel end point (VTEP) address reachability. Protocols such as the intermediate system to intermediate system (IS-IS) protocol can be used as a fabric underlay VTEP address routing protocol. If VXLAN is deployed, the default is to use the distributed control plane features of EVPN for anycast gateway and head-end replication.
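The following hedged sketch illustrates per-flow ECMP next-hop selection toward a remote VTEP; real switches hash in hardware, so this Python stand-in is purely conceptual:

```python
# Conceptual stand-in for hardware ECMP: hash a flow 5-tuple to pick one of
# several equal-cost spine paths toward a remote VTEP. Deterministic per flow,
# so packets of one flow stay in order on one path.
import zlib

def ecmp_next_hop(flow: tuple, paths: list[str]) -> str:
    key = "|".join(map(str, flow)).encode()
    return paths[zlib.crc32(key) % len(paths)]

paths = ["spine-1", "spine-2", "spine-3"]
flow = ("10.1.1.10", "10.2.2.20", 6, 49152, 443)  # src, dst, proto, sport, dport
print(ecmp_next_hop(flow, paths))  # the same flow always maps to the same spine
```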


The following illustration shows the difference in connectivity between the IP Layer 3 and bridged Layer 2 host:

The following illustration shows the transport network for scalability and border leaf connectivity:

Multiprotocol internal BGP (MP-iBGP) runs over the underlay as the routing protocol that carries EVPN, the control plane protocol used to distribute Layer 2 network layer reachability information (NLRI) between switches.

The leaf provides its hosts Layer 2 connectivity and VXLAN functionality to extend Layer 2 domains across the Layer 3 fabric. In operation, each leaf switch uses OSPF to advertise its network virtualization endpoint (NVE) VTEP IP address to every other leaf switch connected to the fabric to exchange routes and form a full point-to-point mesh network.

LAN configurations

LAN configurations are based on the configuration of the spine and leaf switches.

Spine switch configurations

Each leaf switch in the Vscale LAN Fabric has a minimum of one 40 Gbps uplink to each spine switch. The maximum number of spines in the LAN depends on the number of uplink ports per leaf switch and the number of uplinks per spine. For example, if a leaf switch has 12 ports and two uplinks per spine, the maximum number of spine switches is six. Each Vscale Fabric Technology Extension has two leaf switches, which yields 960 Gbps of bi-sectional bandwidth between any two Vscale Fabric Technology Extensions.
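The arithmetic behind this example can be expressed as a short sketch (helper names are hypothetical):

```python
# Worked arithmetic for the example above (helper names are hypothetical).

def max_spines(leaf_uplink_ports: int, uplinks_per_spine: int) -> int:
    # A leaf's 40 GbE uplink ports are divided evenly among the spines.
    return leaf_uplink_ports // uplinks_per_spine

def bisectional_bw_gbps(leaves_per_extension: int, uplink_ports_per_leaf: int,
                        link_gbps: int = 40) -> int:
    # Two extensions exchange traffic across all of their leaf uplinks.
    return leaves_per_extension * uplink_ports_per_leaf * link_gbps

print(max_spines(12, 2))           # 6 spine switches
print(bisectional_bw_gbps(2, 12))  # 960 Gbps between two extensions
```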

Important: Dell EMC recommends deploying a minimum of three and a maximum of six spine switches for the Vscale LAN Fabric.

The number of Vscale Fabric Technology Extensions that can connect to a Vscale LAN Fabric is limited by a number of factors, such as VXLAN tunnel endpoint (VTEP) interface limitations or port density in the spine. Vscale Fabric Technology Extensions use two connections per spine switch if one uplink per leaf switch is connected, and four connections if two uplinks per leaf switch are connected. All Cisco Nexus 9500 Series Switches have a mandatory base configuration, such as PDUs, system fabrics, and fans, for line-rate operation and fault tolerance.

The following table provides base configurations and limitations for spine switches:

Cisco Nexus 9332PQ base configuration:

• 32 port, QSFP+ fixed port switch

• 16 blocks: one connection per leaf

• 8 blocks: two connections per leaf

Cisco Nexus 9336PQ base configuration:

• 36 port, QSFP+ fixed port switch

• 18 blocks: one connection per leaf

• 9 blocks: two connections per leaf

Cisco Nexus 9504 base configuration:

• 4 slot chassis

• 6 fabric modules

• 2 supervisors

• 1 Cisco Nexus 9636PQ 36 port, 40 GbE, QSFP+ line card (NX-OS)

• 1 Cisco Nexus 9736PQ 36 port, 40 GbE, QSFP+ line card (ACI)

• 72 blocks: one connection per leaf

• 36 blocks: two connections per leaf

Cisco Nexus 9508 base configuration:

• 8 slot chassis

• 6 fabric modules

• 2 Sup-A supervisors

• 1 Cisco Nexus 9636PQ 36 port, 40 GbE, QSFP+ line card (NX-OS)

• 1 Cisco Nexus 9736PQ 36 port, 40 GbE, QSFP+ line card (ACI)

• 144 blocks: one connection per leaf

• 72 blocks: two connections per leaf


The number of supported Vscale Fabric Technology Extensions is based on the number of leaf switch uplinks and spine switch port density. For example, if you deploy three Cisco Nexus 9332PQ Switches with 32 40 GbE ports, with each leaf switch connected to the spine switches with one 40 GbE uplink, you can deploy a maximum of fifteen Converged Systems or Vscale Fabric Technology Extensions and one border technology connect. If more are required, use larger spine switches such as the Cisco Nexus 9504 or Cisco Nexus 9508 Switch.
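A small sketch of this port-density math, under the assumption of two leaf switches per extension, is shown below (function names are hypothetical):

```python
# Hypothetical helper mirroring the example above: each extension has two
# leaf switches, so it consumes (2 * uplinks_per_leaf) ports on every spine.

def max_extensions(spine_ports: int, uplinks_per_leaf: int,
                   leaves_per_extension: int = 2) -> int:
    return spine_ports // (leaves_per_extension * uplinks_per_leaf)

# Cisco Nexus 9332PQ spines with 32 ports, one 40 GbE uplink per leaf:
blocks = max_extensions(spine_ports=32, uplinks_per_leaf=1)
print(blocks)  # 16 blocks, e.g. 15 systems/extensions plus 1 border connect
```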

Important: The Vscale LAN Fabric supports up to 128 Converged Systems or Vscale Fabric Technology Extensions due to current VTEP limitations.

Leaf switch configurations

Each Vscale Fabric Technology Extension contains two leaf switches, with each connected to a spine switch. Every leaf switch has a minimum of one, and a maximum of three, uplinks to each spine switch. All Vscale Fabric Technology Extensions must have the same number of links between the leaf switches and the spine.

The Cisco Nexus 9396PX Switch has a 12 port, 40 GbE, QSFP+ module and provides the following features:

• 48 1/10 GbE, SFP+ non-blocking ports

• 12 port, 40 GbE, QSFP+ non-blocking ports, or six port, 40 GbE, QSFP+ non-blocking ports (optional)

The Cisco Nexus 9332PQ Switch is only supported on VxRack Systems.

VXLANs

The overlay LAN architecture is based on VXLAN, enabling the extension of Layer 2 broadcast domains across arbitrary Layer 3 routed topologies.
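For orientation, the following sketch builds the 8-byte VXLAN header defined in RFC 7348 to show what is prepended to the original Layer 2 frame; the fabric performs this in hardware, and this Python example is illustrative only:

```python
# Illustrative only: the 8-byte VXLAN header from RFC 7348 that a leaf VTEP
# prepends (inside UDP) to the original Layer 2 frame. The fabric does this
# in hardware ASICs; this is not how the switches are programmed.
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned VXLAN destination port

def vxlan_header(vni: int) -> bytes:
    # Byte 0 carries the I-flag (0x08, "VNI is valid"); bytes 1-3 are
    # reserved; bytes 4-6 carry the 24-bit VNI; byte 7 is reserved.
    return struct.pack("!II", 0x08 << 24, vni << 8)

print(vxlan_header(vni=10010).hex())  # '0800000000271a00'
```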

The Vscale LAN Fabric provides:

• OSPF routed Layer 3 backbone for VXLAN tunnel end point (VTEP) reachability

• Multiprotocol, border gateway protocol (MP-BGP) Ethernet VPN (EVPN) distributed control plane

• VXLAN overlay networking with anycast gateway and unicast head-end replication for multi-tenant operations

Networking

Leaf switches can be configured to use VXLAN to extend Layer 2 domains across the Layer 3 fabric in each tenant overlay, using MP-BGP EVPN as the distributed control plane. Using the Cisco Nexus 9000 Series Switches as leaf switches, all VXLAN operations are performed in the switch hardware application-specific integrated circuits (ASICs) to achieve line-rate performance for all VXLAN-enabled traffic.

The spine is used only for high-speed transport and is never involved in the overlay VXLAN encapsulation and decapsulation operations.

Each Vscale Fabric Technology Extension has a pair of leaf switches configured to use an anycast IP address and shared anycast MAC address for each VXLAN-enabled VLAN SVI interface. Each Vscale Fabric Technology Extension leaf switch pair uses a proxy VTEP address, also known as the virtual port channel (vPC) shared address, with the individual switch VTEP address for vPC load balancing and redundancy for end-host connectivity. The IP address is advertised within the open shortest path first (OSPF) routing table.

The anycast IP address provides redundancy and efficient load distribution and routing. All routing is local to each system at its respective top-of-rack switches to optimize both east/west and north/south traffic flows. To advertise and maintain Layer 2 forwarding information, MP-BGP with EVPN is used as a control plane among the leaf switches in the underlay network.

As hosts connect to the Vscale Fabric Technology Extension, the leaf switch learns the MAC address and IP address of the host. These addresses are inserted in the network layer reachability information (NLRI) advertised by MP-BGP to all of the leaf switches connected to the Vscale LAN Fabric.

The leaf switch supports routing frames between different VXLANs, or switching frames inside the same VXLAN-enabled VLAN at line rate, inside the same tenant overlay. All VXLAN operations, including encapsulation, decapsulation, bridging, and routing, are transparent to the host systems and the traffic generated by the systems. No additional configuration is required beyond the setup of typical network connectivity for the hosts.

This differs from external routing, where the destination of the packet is not inside the tenant overlay, such as traffic destined for the internet, non-VXLAN corporate resources, or resources located in a different tenant overlay. Such traffic exits the VXLAN fabric through the border leaf switches and is routed at the customer core.

SAN architecture

Vscale SAN Fabric provides FC connectivity that enables Converged Systems and Vscale Fabric Technology Extension resources to be interconnected to support applications.

Uplink support

Each edge switch connects to the core switch using eight or 16 uplinks. The Cisco MDS 9148S Multilayer Fabric Switch is a fixed form-factor switch. The Cisco MDS 9706 Multilayer Director and Cisco MDS 9710 Multilayer Director support a 48 port FC module with 48 line-rate 16 Gbps FC ports (with at least three fabric modules).

All FC ports on the 48 FC port modules should be populated with a 16 Gbps small form-factor pluggable (SFP) transceiver. Mixing 8 Gbps and 16 Gbps FC connections on a Vscale SAN Fabric is not recommended.
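The resulting aggregate edge-to-core bandwidth is simple arithmetic, sketched here for illustration:

```python
# Simple arithmetic sketch of the aggregate FC port-channel bandwidth
# between an edge switch and the core (assumes all links run at 16 Gbps).

def port_channel_gbps(uplinks: int, link_gbps: int = 16) -> int:
    return uplinks * link_gbps

print(port_channel_gbps(8))   # 128 Gbps with eight uplinks
print(port_channel_gbps(16))  # 256 Gbps with 16 uplinks
```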

An enterprise license is not required for most Vscale Fabric Technology Extensions. A full Cisco Data Center Network Manager license is not required.

Topology

The SAN network architecture is defined by two-tier and three-tier topologies. The number of switch hops between the host and the storage for a given topology is as follows:

• Two-tier design: fabric interconnect -> edge -> core -> storage = 1 hop

• Three-tier design: fabric interconnect -> edge -> core -> edge -> array = 2 hops

Important: Converged Systems with Cisco MDS 9500 Series Switches should not connect to the Vscale SAN Fabric because the SAN directors can severely limit the growth capacity of the SAN.


The two-tier topology is a core-edge/collapsed core-edge topology that exists when storage arrays can be connected to the core and the compute servers are attached to the host edge switch. The core-edge/collapsed core-edge topology is designed as follows:

• All servers are connected to edge SAN switches.

• Storage arrays may be directly connected to the core switches.

• VPLEX, RecoverPoint, and other service nodes must be connected to the core switches, acting as intermediary devices. Placing these types of capabilities at the core provides the most bandwidth-efficient interception point if multi-array replication is required.

The three-tier topology is an edge-core-edge topology. This topology exists when the storage array is attached to the storage edge switch and the servers are attached to the host edge switch.

The edge-core-edge topology is designed as follows:

• Edge switches can be a pair of Cisco MDS 9148S Multilayer Fabric Switches or Cisco MDS 9700 Series Switches.

• All storage and servers are connected to edge switches.

• Storage arrays are not connected to the core switches.

• VPLEX, RecoverPoint, and other service nodes can directly connect to the core switches, acting as intermediary devices. Placing these capabilities at the core provides the most bandwidth-efficient interception point if multi-array replication is required.

SAN switches

The core switch connects Converged Systems and Vscale Fabric Technology Extensions, while an edge switch allows resources to be accessed within the Vscale Fabric.

Core switches

The following requirements apply:

• Full high availability, with field-replaceable components such as supervisors, fabric modules, fans, and power supplies

• All ports must be line-rate capable

• Support 16 Gbps or higher speeds at line rate

• Inter-VSAN routing (IVR) capable

• Smart-zoning capable

• Support N_port virtualization (NPV)

• Support port channeling

Edge switches

The following table shows the connection topologies for SAN connections within Vscale Fabric:


The accessing resource is the source and the sharing resource is the target.

Model | Description
Local | Source and targets are within the same network.
Core-edge | Target is connected at the SAN core, and the source is located one edge link away.
Edge-core-edge | Target is connected to an edge separate from the source. Traffic must traverse core switches to arrive at the target.

Edge switches connect to the SAN core switch using eight or 16 uplinks.


Architecture resources

Vscale Fabric Technology Extensions connect directly to the Vscale Fabric and contain one or more compute, storage, and/or data protection resources.

Every resource except non-Dell EMC, third-party resources can be deployed in a single Vscale Fabric Technology Extension.

Important: Other direct connections for the Vscale Fabric are not supported.

The following scalability limits are imposed by the physical port availability for a Vscale Fabric Technology Extension:

Area | Limit | Description
SAN | 77 FC switches | Cisco SAN fabric supports a maximum of 80 switches. A maximum of 3 SAN core switches are supported per fabric, which limits the edge switches to 77 per SAN fabric.
SAN | 10,000 world-wide port names (WWPNs), including storage and host ports | Cisco SAN fabric supports a maximum of 10,000 WWPNs in the global name server database. This increases to 20,000 when using Cisco MDS 9700 Series switches exclusively in the SAN fabric.
LAN | 128 Vscale Fabric Technology Extensions | The number of VXLAN tunnel end points (VTEPs) available within a VXLAN fabric is 256. Each Vscale Fabric Technology Extension requires two VTEPs.

Dell EMC Sales and Professional Services work with Dell EMC Manufacturing to gather the required information to configure Vscale Fabric Technology Extensions in the factory. This requires modifications to the Logical Configuration Survey (LCS). In some cases, only a minimal configuration for the Release Certification Matrix (RCM) is done at the factory. The RCM provides a list of the certified versions of components for Vscale Fabric Technology Extensions.

Physical components

The type of Vscale Fabric Technology Extension determines the type of Cisco Nexus 9300 or 9500 leaf switches required for LAN connectivity and Cisco MDS switches for SAN connectivity.


The following illustration shows a sample configuration:

Cisco Nexus 3172TQ Switches connected to a pair of Cisco Nexus 3164Q aggregation switches provide out-of-band management networking.

Technology connects

The Vscale Border Technology Connect provides external access to the Vscale Architecture as a specialized function within Vscale Fabric. The Vscale Open Technology Connect contains third-party, non-Dell EMC resources that are provided to the Vscale Architecture.

Vscale Border Technology Connect

The Vscale Border Technology Connect includes two leaf switches with no end-host connectivity. Only external connectivity is provided in and out of the Vscale Architecture through the border leaf switches.


Components

The following table provides a description of the Vscale Border Technology Connect components:

Component | Provides
Intelligent Physical Infrastructure (IPI) Appliance | 42 RU enclosure with intelligent PDUs and environmental monitoring.
Cisco Nexus 9396PX Switch | Ethernet leaf switch for external network access.
Cisco Nexus 3172TQ Switch | Management switch.

Connectivity

The border resources connect directly to the following components:

• Vscale LAN Fabric (Ethernet spine)

• External routers or switches for external network connectivity

Border resources connect to the Vscale Fabric in the same manner as the other leaf switches. Connections to external networking components vary by deployment.

External connectivity

Vscale Fabric uses border resources to provide external northbound connectivity only. The border leaf switches peer with the external routers or switches using supported dynamic routing protocols. The leaf switches can be used to connect other edge devices such as firewalls, application delivery controllers, and intrusion detection and prevention appliances. Any device connecting to the border leaf switches must support a dynamic routing protocol, have sufficient Layer 3 interfaces, and contain virtual routing and forwarding (VRF) support for multi-tenancy.

The border leaf switches require dedicated, symmetrical, and redundant Layer 3 interfaces in each tenant VRF towards each external northbound switch or router deployed in the fabric environment.

Important: Extra precaution should be used for the external router when connecting multiple VXLAN overlay environments using only the default VRF for exchanging routes.

The border does not require vn-segment ID creation for all VXLAN-enabled VLANs because there is no end-host connectivity. If absolute isolation of each tenant is required in a multi-tenant environment, each VXLAN overlay Layer 3 interface must be connected to a Layer 3 port residing in a dedicated VRF to ensure isolation. Failure to place links in different VRFs in multi-tenant environments with isolation requirements may result in all routes being installed into a single routing table. This could allow isolated tenants to access each other.
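A simplified conceptual model of this isolation requirement, assuming one routing table per VRF, is sketched below:

```python
# Simplified model (one routing table per VRF) of the isolation rule above.
from collections import defaultdict

routing_tables = defaultdict(dict)  # VRF name -> {prefix: next hop}

def install_route(vrf: str, prefix: str, next_hop: str):
    routing_tables[vrf][prefix] = next_hop

# Isolated design: one VRF per tenant overlay, so overlapping tenant
# prefixes never share a table.
install_route("tenant-a", "10.1.0.0/16", "198.51.100.1")
install_route("tenant-b", "10.1.0.0/16", "203.0.113.1")

# Anti-pattern: both tenants in the default VRF. The routes land in a single
# table, the second overwrites the first, and tenant isolation is lost.
install_route("default", "10.1.0.0/16", "198.51.100.1")
install_route("default", "10.1.0.0/16", "203.0.113.1")

print(routing_tables["tenant-a"])  # {'10.1.0.0/16': '198.51.100.1'}
print(routing_tables["default"])   # {'10.1.0.0/16': '203.0.113.1'}
```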

Vscale Open Technology Connect

The following conditions apply:

• Dell EMC resources cannot connect to an open technology connect.

• Non-Dell EMC, third-party resources can be added to the Vscale Fabric for LAN and SAN.

• Dell EMC does not support components connected to resources beyond the Ethernet or FC port demarcation point. This applies even if the connected component firmware is compatible with a published release certification matrix (RCM).


Components

The following table provides a description of open components:

Component | Provides
IPI Appliance | 42 RU enclosure with intelligent PDUs and environmental monitoring.
Cisco Nexus 9396PX Switches | Ethernet leaf switch required for LAN access.
Cisco MDS 9148S Switches, Cisco MDS 9396S 16G Multilayer Fabric Switch, or Cisco MDS 9700 Series Multilayer Directors (large environments) | FC edge switch required for FC block access.
Third-party servers | Servers.
Third-party storage arrays | Storage.
Cisco Nexus 3172TQ Switch | Management switch.

Connectivity

Third-party resources require a separate VSAN and inter-VSAN routing (IVR) to provide or consume storage resources within the Vscale Fabric.

The following connection limits apply:

• External SAN switches cannot connect to an open technology connect.

• Only end devices, such as storage array front-end ports, can connect into the open technology connect.

• External network switches providing Layer 3 services cannot connect to a third-party resource.

Data flow

The following table provides possible data flows for open resources between the source and destination devices:

Device | Edge | Core | Edge | Device | Permitted | Comments
Host | Open-edge | Core | | Array | Y | Inter-VSAN routing (IVR) required.
Host | Open-edge | Core | Edge | Array | Y | IVR required.
Host | Open-edge | | | Array | N | Devices are not supported.
Host | Open-edge | Core | | VPLEX | Y | IVR required.
Host | Open-edge | Core | Edge | VPLEX | N | Too many hops.
Array | Open-edge | Core | | VPLEX | N | Interoperability of the end array is not managed.
Array | Open-edge | Core | Edge | VPLEX | N | Too many hops.
Array | Open-edge | | Edge | Host | Y | IVR required. No running VMs on the Dell EMC hardware from data stores on open-edge connected arrays.
Array | Open-edge | Core | Converged System-edge | Host | Y | IVR required. No running VMs on the Dell EMC hardware from data stores on open-edge connected arrays.
Array | Open-edge | Core | | RPA | N |
Array | Open-edge | Core | Converged System-edge | VPLEX | N |
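For illustration, a few rows of this table can be encoded as a lookup so a proposed path can be checked programmatically; the rules below are paraphrased from the table and are not exhaustive:

```python
# Paraphrased, non-exhaustive encoding of a few rows of the table above.
# Keys are (device, edge, core, far edge, device); None means "not traversed".

RULES = {
    ("host", "open-edge", "core", None, "array"): (True, "IVR required"),
    ("host", "open-edge", "core", "edge", "array"): (True, "IVR required"),
    ("host", "open-edge", None, None, "array"): (False, "devices not supported"),
    ("host", "open-edge", "core", None, "vplex"): (True, "IVR required"),
    ("host", "open-edge", "core", "edge", "vplex"): (False, "too many hops"),
    ("array", "open-edge", "core", None, "rpa"): (False, "not permitted"),
}

def permitted(src: str, edge: str, core, far_edge, dst: str):
    return RULES.get((src, edge, core, far_edge, dst), (False, "unknown path"))

print(permitted("host", "open-edge", "core", "edge", "array"))
# (True, 'IVR required')
```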

Compute resources

Compute resources can be added to a Vscale Fabric Technology Extension to provide additional compute resources within the Vscale Fabric.

Components

The following table provides a description of compute components:

Component | Provides
Cisco Nexus 9396PX Switches | Ethernet leaf switch for LAN access.
Cisco MDS 9148S Switches, Cisco MDS 9396S 16G Multilayer Fabric Switch, or Cisco MDS 9700 Series Multilayer Directors | FC edge switch for FC block access.
Cisco UCS 62xxUP fabric interconnects | Compute connectivity.
Cisco UCS 5108 Blade Server Chassis, Cisco UCS 22xxXP fabric extenders, and Cisco UCS B-Series Blade Servers | Optional Cisco UCS B200 M4, B260 M4, B420 M3, or B460 M4 blades.
Cisco UCS C-Series Servers | Optional Cisco UCS C220 and C240 M4 servers.
Cisco Nexus 3172TQ Switch | Management switch.

Compute resources connect through fabric interconnects to the Ethernet leaf and SAN edge switches. The Vscale Fabric Technology Extension can contain up to four Cisco UCS domains, depending on port availability on the Ethernet and SAN switches.

Disjoint Layer 2

In the Disjoint Layer 2 configuration, traffic is split between two or more networks at the fabric interconnect. This enables Cisco UCS servers in a Converged System to connect to two or more discrete Ethernet clouds. Upstream Disjoint Layer 2 networks allow two or more Ethernet clouds to be accessed by servers or VMs located in the same Cisco UCS domain.


Connectivity

The following table provides connection models for compute resources:

Connection | Description
Server-to-FI | Cisco UCS C-Series Servers use the same connection models as the Cisco UCS Technology Extension.
FEX-to-FI | Cisco UCS C-Series Servers with FEX connections use the same connection models as the Cisco UCS Technology Extension. The Cisco UCS 22xxXP FEX offers two or four links to the FI; the Cisco UCS 2208XP FEX offers eight.
FI-to-Cisco Nexus | The Cisco UCS 6248UP Fabric Interconnect has eight Ethernet links. The Cisco UCS 6296UP defaults to eight Ethernet links and can expand to 16.
FI-to-Cisco MDS | The Cisco UCS 6248UP Fabric Interconnect defaults to four FC links and can expand to eight. The Cisco UCS 6296UP defaults to eight FC links and can expand to 16.

Data flows

The following table provides possible data flows for compute resources between the source and destination devices:

Device | Edge | Core | Edge | Device | Permitted | Comments
Host | Edge | | | Array | Y | Uses production VSAN.
Host | Edge | Core | | Array | Y | Uses production VSAN.
Host | Edge | Core | Edge | Array | Y | Uses production VSAN.
Host | Edge | Core | Open-edge | Array | Y | Uses an inter-VSAN routing (IVR) VSAN. Cannot boot from a remote SAN device over IVR.
Host | Edge | Core | Converged System-edge | Array | Y | Assumes there is no domain conflict in the merging VSAN and the switch is using the default production VSAN.
Host | Edge | | | VPLEX | Y | Uses production VSAN. Requires the array for VPLEX to be in the same edge.
Host | Edge | Core | | VPLEX | Y | Requires the array for VPLEX to be in the same core switch as the array ports. Can use production VSANs or IVR.

Storage resources

Storage resources can be added to a Vscale Fabric Technology Extension to provide additional storage resources within the Vscale Fabric.


Dell EMC storage

The storage resource has the following characteristics:

• Contains only Dell EMC-certified components

• Prescriptive connectivity, Intelligent Physical Infrastructure (IPI) Appliance, and management

• System resource operating environment is based on the Vscale Fabric Release Certification Matrix (RCM)

Third-party storage

Third-party storage resources have the following characteristics:

• Third-party compute and storage resources do not have RCM certification

• Components sourced directly from Dell EMC partners do not have RCM certification

• External LAN or SAN resources are not permitted

• Vision Intelligent Operations does not discover and manage any device below the fabric

Components

The following table provides the available storage components:

Component | Provides
Cisco Nexus 9396PX Switches | Optional Ethernet leaf switch required for file access.
Cisco MDS 9148S Multilayer Fabric Switches, Cisco MDS 9396S 16G Multilayer Fabric Switch, or Cisco MDS 9700 Series Multilayer Directors | Optional FC edge switch for FC block access.
Isilon (per the technology extension for Isilon), Unity, VMAX, VMAX3, VMAX3 All Flash Array (AFA), VNX, or XtremIO | One or more storage arrays.

Connectivity

Storage arrays connect through the core switches for FC block access and through Cisco Nexus 9396PX leaf switches for file access. Dell EMC recommends connecting all storage arrays shared between multiple Vscale Fabric Technology Extensions directly to the core.

The following table provides connection models:

Connection | Description
Edge-to-core | Edge switches connect to the core using a minimum of eight 16 Gb FC ports in a single port channel that can be expanded to 16 ports. Inter-VSAN routing (IVR) is supported, but not required.
Unity | Uses the same connectivity standards as the Unity storage arrays within the VxBlock and Vblock Systems 350.
VNX | Uses the same connectivity standards as the VNX storage arrays within the VxBlock and Vblock Systems 340.
VMAX | Uses the same connectivity standards as the VMAX storage arrays within the Vblock System 720.
VMAX3, VMAX3 AFA | Uses the same connectivity standards as the VMAX storage arrays within the VxBlock and Vblock Systems 740.
XtremIO | Uses the same connectivity standards as the XtremIO storage arrays within the VxBlock and Vblock Systems 540.

Storage port connections use dynamic port mapping and FC.

Data flows

The following table provides possible data flows for storage resources between the source and destination devices for storage arrays:

Edge | Core | Edge | Device | Permitted | Comments
Edge | | | Host | Y | Uses production VSAN.
Edge | Core | Edge | Host | Y | Uses production VSAN.
| Core | Edge | Host | Y | Uses production VSAN.
| Core | | EMC VPLEX | Y | Uses production VSAN.
Edge | | | EMC VPLEX | Y | Uses production VSAN.
| Core | Open-edge | Host | Y | IVR required.
Edge | Core | Open-edge | Host | Y | IVR required.
Edge | | | EMC RPA | Y | Uses production VSAN.
| Core | | EMC RPA | Y | Uses production VSAN.
Edge | Core | Dell EMC System-edge | Host | Y | No technical reason to block. Collisions with world-wide node names (WWNNs)/WWPNs require IVR, tier 2, ascertained prior to sale.
| Core | Dell EMC System-edge | Host | Y | No technical reason to block. Collisions with WWNNs/WWPNs require IVR, tier 2, ascertained prior to sale.

The following table provides the data flow for Isilon:

Leaf | Spine | Leaf | Device | Permitted | Comments
Leaf | | | Host | Y | Jumbo frames; VXLAN configured on leaf Layer 2. Recommended by Dell EMC.
Leaf | Spine | Leaf | Host | Y | Jumbo frames; VXLAN configured on leaf Layer 2. Recommended by Dell EMC.
Leaf | Spine | Dell EMC System-leaf | Host | Y | Jumbo frames; VXLAN configured on leaf Layer 2. Recommended by Dell EMC.
Leaf | Spine | Open-leaf | Host | Y | Jumbo frames; VXLAN configured on leaf Layer 2. Recommended by Dell EMC.
Leaf | Spine | Border-leaf | Mgmt-host | Y | Jumbo frames; VXLAN configured on leaf Layer 2. Recommended by Dell EMC.
Leaf | Spine | Border-leaf | Customer | N |

The following table provides the data flow for X-Blades or NAS shares:

Leaf | Spine | Leaf | Device | Permitted | Comments
Leaf | | | Host | Y | Jumbo frames; Layer 3 capable with no VXLAN. Adds COS/QOS concerns. Layer 2 optional; requires VXLAN Layer 3 ToR gateway.
Leaf | Spine | Leaf | Host | Y | Jumbo frames; Layer 3 capable with no VXLAN. Adds COS/QOS concerns. Layer 2 optional; requires VXLAN Layer 3 ToR gateway.
Leaf | Spine | Dell EMC System-leaf | Host | Y | Jumbo frames; Layer 3 capable with no VXLAN. Adds COS/QOS concerns. Layer 2 optional; requires VXLAN Layer 3 ToR gateway.
Leaf | Spine | Open-leaf | Host | Y | Jumbo frames; Layer 3 capable with no VXLAN. Adds COS/QOS concerns. Layer 2 optional; requires VXLAN Layer 3 ToR gateway.
Leaf | Spine | Border-leaf | Mgmt-host | Y | Jumbo frames; Layer 3 capable with no VXLAN. Adds COS/QOS concerns. Layer 2 optional; requires VXLAN Layer 3 ToR gateway.
Leaf | Spine | Border-leaf | Customer | N |

Refer to the Architecture Overview for your system for additional information on each component and storage array.

File storage

File-level storage is deployed in a network attached storage (NAS) system and configured with the NFS or SMB/CIFS protocol. The storage system is connected directly to the Vscale Fabric leaf switches to provide Ethernet connections.


The following file storage options are available:

Isilon:

• Allows you to scale out NAS capacity and performance up to 50 PB per cluster.

• Connects to leaf switches to provide file services.

• Contains internal storage capacity and does not require external SAN connectivity.

Unity:

• Unity Storage Processors (SPs) can provide NAS file stores from the same array that provides block storage to SAN hosts.

• Unity 10 Gb Ethernet ports connect to leaf switches to provide file services.

• Unity storage arrays can connect to multiple leaf switches provided there are sufficient ports on each SP to provide redundant connections.

Unified VNX:

• One or more X-Blades connected to a VNX block storage array can provide NAS file stores from the same array that provides block storage to SAN hosts.

• Contains dedicated links to the VNX block storage and does not require SAN access.

• X-Blades connect to leaf switches to provide file services.

• X-Blades can connect to multiple leaf switches provided there is a failover X-Blade with corresponding connections.

VMAX3 eNAS, VMAX3 AFA eNAS:

• Uses internal ports to connect to storage devices on the array, which provide the capacity for file shares.

• Ethernet ports (on the VMAX3 engines) dedicated for NAS can connect to leaf switches in the same manner as the unified VNX X-Blades.

NAS gateways:

• Gateways do not contain internal storage and must connect to a Vscale Fabric Technology Extension with VMAX resources. This provides the same NAS functionality as a unified VNX X-Blade.

• Uses SAN to connect to an array where the file systems are stored. SAN ports can connect to the core, but must be local (on the same SAN switch) to the array providing the block storage.

• X-Blades can connect to leaf switches in the same manner as the unified VNX X-Blades.

Data protection resources

Data protection resources can be added to a Vscale Fabric Technology Extension to provide resources to infrastructure components within the Vscale Fabric.

Components

The following table provides a description of data protection components:

Component | Provides
Cisco Nexus 9396PX Switches | Ethernet leaf switch for LAN access.
Cisco MDS 9148S Multilayer Fabric Switch, Cisco MDS 9396S 16G Multilayer Fabric Switch, or Cisco MDS 9700 Series Multilayer Director | FC edge switch for FC block access.
Avamar, Data Domain, RecoverPoint Appliance, or VPLEX | One or more data protection instances.
Cisco Nexus 3172TQ Switch | Management switch.

Connectivity

Place data protection resources directly on a Vscale Fabric Technology Extension with shared resources for SAN connections and FC access. VPLEX connections must be in close proximity to back-end storage arrays. Data protection resources connect directly to the Vscale Fabric from the Ethernet leaf switch to access Avamar and/or Data Domain.

Data flow

The following table provides possible data flows between the source and destination devices:

Device | Edge | Core | Edge | Device | Permitted | Comments
Host | Open-edge | Core | | Array | Y | Inter-VSAN routing (IVR) required.
Host | Open-edge | Core | Edge | Array | Y | IVR required.
Host | Open-edge | | | Array | N | Dell EMC does not support devices.
Host | Open-edge | Core | | VPLEX | Y | IVR required.
Host | Open-edge | Core | Edge | VPLEX | N | Exceeds the maximum number of hops.
Array | Open-edge | Core | | VPLEX | N | Dell EMC does not manage interoperability of the end array.
Array | Open-edge | Core | Edge | VPLEX | N | Exceeds the maximum number of hops.
Array | Open-edge | | Edge | Host | Y | IVR required. No running VMs on the Dell EMC hardware from data stores on open-edge connected arrays.
Array | Open-edge | Core | Dell EMC System-edge | Host | Y | IVR required. No running VMs on the Dell EMC hardware from data stores on open-edge connected arrays.
Array | Open-edge | Core | | RPA | N |
Array | Open-edge | Core | Dell EMC System-edge | VPLEX | N |

The Dell EMC Integrated Data Protection Product Guide contains additional information about data protection options.


System resources

Additional resources are required for Converged Systems to connect to the Vscale Fabric.

Components

The following table provides a description of required resources:

Component | Provides
Converged System | Vblock System, VxBlock System, or VxRack System functionality.
Cisco Nexus 9396PX Switches | Ethernet leaf switch required for LAN access.
Cisco MDS 9148S Multilayer Fabric Switches, Cisco MDS 9700 Series Multilayer Directors | FC edge switch required for FC block access. Required if the existing block is using unified networking.
Cisco Nexus 3172TQ Switch | Management switch.

Connectivity

Unified networking switches cannot connect directly to the Vscale Fabric. The Cisco MDS 9148 Multilayer Fabric Switch, Cisco MDS 9506 Multilayer Director, and Cisco MDS 9513 Multilayer Director require an impact assessment to determine validity for connections. System resources connect directly to the Vscale Fabric using the Ethernet spine.

The following table provides connection models for the system resources:

Model | Description
Ethernet | Same connectivity as blocks with multiple resources.
FC | Unified networking systems require Cisco MDS switches.

Data flow

The following table provides possible data flows for system resources between the source and destination devices:

Device | Edge | Core | Edge | Device | Permitted | Comments
Host | Converged System-edge | - | - | Array | Y |
Host | Converged System-edge | Core | - | Array | Y |
Host | Converged System-edge | Core | Edge | Array | Y | Inter-VSAN routing (IVR) required for collisions.
Host | Converged System-edge | Core | Open-edge | Array | Y | IVR required.
Host | Converged System-edge | - | - | VPLEX | N |
Array | Converged System-edge | Core | - | VPLEX | Y | IVR required for collisions.
Array | Converged System-edge | - | - | Host | Y |
Array | Converged System-edge | Core | Edge | Host | Y | IVR required for collisions.
Array | Converged System-edge | Core | Open-edge | Host | Y | IVR required for collisions.
Array | Converged System-edge | - | - | VPLEX | Y |
Array | Converged System-edge | Core | - | VPLEX | N | Too many hops.
Array | Converged System-edge | Core | Edge | VPLEX | N |
Array | Converged System-edge | - | - | RecoverPoint | Y |
Array | Converged System-edge | Core | - | RecoverPoint | Y | IVR required for collisions.


Hosting management applications

The Vscale Management Platform (VMP) is an extensible platform used to host the Dell EMC core management applications required for Converged Systems and Vscale Fabric operations.

VMP may also host ecosystem and optional management applications.

VMP is based on a standard Vblock System that manages Converged Systems and Vscale Fabric Technology Extensions in a single data center or across multiple data centers.

To maintain system operation and stability, VMP workloads are categorized into three discrete classes that are partitioned into dedicated clusters and data stores to preserve performance, availability, and security.

The following table describes the different types of management workloads:

Management workload | Description
Core | Contains management applications that are required to install, operate, and support a Converged System, and that are core to system operation.
Dell EMC optional | Contains non-core management workloads that extend the Dell EMC core systems capability, such as data protection, security, or storage management tools. These are supported and installed by Dell EMC to manage the Converged System or Vscale Fabric Technology Extension components. These include, but are not limited to, Avamar Administrator, InsightIQ for Isilon, and VMware vRealize Operations Manager.
Ecosystem | Contains management workloads other than core or Dell EMC optional that an organization can use to manage Converged System or Vscale Fabric Technology Extension components and its IT environment, such as logging, management and orchestration, and security information and event management (SIEM).


The following illustration shows core VMP concepts:

The following table lists the applications that make up each management workload:

Management workload | Description

Core
• VMware vCenter (hypervisor management)
• Element manager: Unisphere
• Fabric manager (subset of the Data Center Network Manager)
• Secure Remote Support
• PowerPath
• Vision Intelligent Operations
• Tools: resources to install, operate, and support a Converged System or Vscale Fabric Technology Extension

Dell EMC optional
This list includes, but is not limited to:
• Data protection, security, or storage management tools
• RecoverPoint or VPLEX
• Avamar Administrator
• InsightIQ for Isilon
• VMware vCloud Network and Security appliances (VMware vShield Edge/Manager)
• VMware vRealize Operations Manager

Ecosystem
This list includes, but is not limited to:
• VMware vCloud Director
• VMware View Connection Brokers
• Cisco UCS Director

Compute components

The Vscale Management Platform (VMP) does not have a physical Advanced Management Platform (AMP), which is the management server for Converged Systems. All management functions are located within the compute component.

Compute requires a minimum of four servers to support the core management and Dell EMC optional workload layers. If an ecosystem workload is present, two additional servers are required to support that configuration.
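This server-count rule is simple enough to express directly; the sketch below is a trivial illustration (the function name is ours, not a Dell EMC sizing tool):

```python
def vmp_minimum_servers(ecosystem_workload: bool) -> int:
    """Minimum VMP compute servers: four for the core and Dell EMC
    optional workload layers, plus two more when an ecosystem
    workload cluster is also deployed."""
    return 4 + (2 if ecosystem_workload else 0)

assert vmp_minimum_servers(ecosystem_workload=False) == 4
assert vmp_minimum_servers(ecosystem_workload=True) == 6
```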

The following table provides the minimum requirements for compute servers in VMP:

Components | Minimum requirement
Memory | 128 GB of RAM
CPU | 2x E5-2600 v2 series CPUs with 6 cores each

Refer to the appropriate Architecture Overview for more information about your system.

Storage components

The Vscale Management Platform (VMP) consists of the storage and file disk pools.

Sizing of the storage and file disk pools depends on the core, Dell EMC optional, and ecosystem workload applications deployed on the hosts managed by the VMP.

The following table provides the storage/file disk pool specifications:

Storage disk pool (core and non-core enabled clusters):
• 12.5 TB minimum usable disk space (70% threshold / target 18K IOPS)
• Storage tier 1: 200 GB SSD, RAID-5 (4+1)
• Storage tier 2: 600 GB SAS 10K, RAID-5 (4+1)
• Storage tier 3: 2 TB NL-SAS 7.2K, RAID-6 (6+2)
• This is not applicable with XtremIO.

File disk pool:
• 3 TB minimum usable disk space
• 10K SAS minimum, or NL-SAS with FAST VP
• RAID 5
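As a rough guide to how the RAID layouts above translate into usable capacity, the sketch below computes the usable space per RAID group. This is illustrative sizing math only: it ignores drive formatting overhead, hot spares, and the 70% utilization threshold, all of which real sizing must include.

```python
def raid_group_usable_tb(drive_tb: float, data_drives: int, parity_drives: int) -> float:
    """Usable capacity of one RAID group: only the data drives count."""
    # parity_drives document the layout; they contribute no usable space
    return drive_tb * data_drives

# Tier examples taken from the table above (per RAID group):
tier1 = raid_group_usable_tb(0.2, data_drives=4, parity_drives=1)  # RAID-5 (4+1) SSD    -> 0.8 TB
tier2 = raid_group_usable_tb(0.6, data_drives=4, parity_drives=1)  # RAID-5 (4+1) SAS    -> 2.4 TB
tier3 = raid_group_usable_tb(2.0, data_drives=6, parity_drives=2)  # RAID-6 (6+2) NL-SAS -> 12 TB
print(tier1, tier2, tier3)
```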

VMP offers the following features with Unity, VNX, and VMAX only:

• FAST VP with flash disks (recommended for heavily used VMware vCenter and VMware vCenter Operations Manager environments)

• FAST (recommended for VMP storage environments where available)


Refer to the appropriate Architecture Overview for more information about your system.

Network components

The Vscale Management Platform (VMP) network architecture is based on a standard Converged System configuration.

The VMP environment has a hybrid virtual switch design that uses the VMware vSphere Standard Switch and the VMware vSphere Distributed Switch (VDS). Dell EMC does not support the Cisco Nexus 1000V Series Switches with VMP.

VMP differs from a standard Converged System in the following ways:

• VMP is a centralized management platform that hosts all of the element managers associated with the Vscale Architecture. This simplifies the management infrastructure because an Advanced Management Platform (AMP) is not required for each Converged System or Vscale Fabric Technology Extension. In addition, this enables better security and simplifies patching and maintenance of the management infrastructure.

• All resources contained within Converged Systems or Vscale Fabric Technology Extensions are managed by VMP across the out-of-band network.

• Layer 3 connectivity is configured between specific hosts and VLANs to enable management functionality. These routing configurations are implemented in the customer-provided network infrastructure. The VLAN design is similar to a standard Converged System.

• For Layer 2 connectivity, core VLANs are extended through the Vscale Architecture network into the Converged System or Vscale Fabric Technology Extension managed by VMP.

• The logical network design for VMP reduces the impact of network outages and optimizes the environment for advanced security implementations.

Network architecture

The Vscale Management Platform (VMP) uses in-band, out-of-band, inter-Converged System, and virtual switch network architectures.

In-band

In-band network traffic traverses the production network switches in the VMP.

The following table provides a list of the VLANs for in-band network traffic:

VLAN | Description
vcesys_esx_mgmt | Local VMware management and applications that may impact production
vcesys_esx_L3vmotion | vMotion traffic between VMware vSphere ESXi hosts in the VMP
vcesys_esx_ft | VMware fault tolerance traffic between VMware vSphere ESXi hosts in the VMP
vcesys_nfs | NFS traffic internal to VMP
vcesys_esx_build | Automated deployment of VMware vSphere ESXi hosts
vcesys_brs_data | Backup/recovery with the data protection solution
fcoe_fabric_a | FC over Ethernet (FCoE) fabric A VLAN for Cisco UCS connectivity
fcoe_fabric_b | FCoE fabric B VLAN for Cisco UCS connectivity

Out-of-band

Out-of-band network traffic traverses the management network switches in the VMP and cannot be leveraged for any production data.

The following table provides a list of the VLANs for out-of-band network traffic:

VLAN | Description
vcesys_oob_mgmt | VMs and device ports are used for control plane only. There is no data on this VLAN.

Inter-Converged System network

Inter-Converged System VLANs provide network connectivity to the Converged System managed by the VMP, or to the management and production networks.

The following table provides a list of the VLANs for inter-Converged System network traffic:

VLAN | Description
vmp_oob_mgmt | Control plane only. There is no data on this VLAN.
vcesys_esx_L3vmotion | Cross-vCenter vMotion traffic between VMware vSphere ESXi hosts.
vcesys_esx_L3prov | Isolates traffic for cold migration, VM clones, and snapshots.
vcesys_nfs | NFS traffic internal and external to VMP.
vmp_esx_mgmt | VMware management and applications that may impact production.
vmp_vceopt_mgmt | Dell EMC optional management workload VMs (may be collapsed into core).
vmp_eco_mgmt | Ecosystem management workload VMs.

Virtual switch

VMP uses the VMware vSphere Standard Switch and the VMware vSphere Distributed Switch (VDS) in a hybrid design for virtual networking. Regardless of the virtual networking technology, VMP does not support virtual networking capabilities with non-VMP VMware hosts or clusters. VMP can use a virtual networking solution different from that of the Converged Systems it manages.

Multiple Converged Systems managed by a single VMP can use different virtual networking solutions. For example, one Converged System can use the Cisco Nexus 1000V Switch with Advanced Edition while a Vscale Fabric Technology Extension can use a Cisco Nexus 1000V Switch with Essentials. The VMP that is managing the two infrastructure components can use a VMware VDS.

If the Cisco Nexus 1000V Essentials or Advanced Edition is selected on a Converged System or Vscale Fabric Technology Extension managed by VMP, enable Layer 3 mode on each system. If the existing system uses Layer 2 mode, modify the VLAN to Layer 3 for the Cisco Nexus 1000V Switch. Management of the Cisco Nexus 1000V Switch Layer 3 control is collapsed into the VMP esx_mgmt VLAN.

Deploy the virtual networking components on the VMP and arrange them to support a maximum level of redundancy (where available).


Production management

The following table lists the production management VLANs:

VLAN | Description

vmp_oob_mgmt | Carries inter-Converged System management traffic to and from the VMs in the VMP_PROD-COMMON management workload. The following VLAN design requirements apply:
• If Layer 2 network connectivity is required, the assigned subnet must be a /22 subnet or larger to accommodate IP addressing for a minimum of 650 (up to 1000) VMware vSphere ESXi hosts (verified in the subnet check after this table).
• If Layer 3 network connectivity is required, the assigned subnet must be sized to accommodate fewer than 30 VMs; a /27 should be sufficient based on the current design.

vmp_esx_mgmt | Carries inter-Converged System management traffic to and from the VMP_PROD-CENTRAL management workload. The following VLAN design requirements apply:
• If Layer 2 network connectivity is required, the assigned subnet must be a /22 subnet or larger to accommodate IP addressing for a minimum of 650 (up to 1000) VMware vSphere ESXi hosts.
• If Layer 3 network connectivity is required, the assigned subnet must be sized according to the number of VMs required to support the external, managed Converged System or Vscale Fabric Technology Extension. Each VMware vCenter instance requires four VMs. The common VMs (such as the element manager and fabric manager) require six VMs.
• Must have Layer 2 or Layer 3 connectivity through the customer-provided network to establish management functionality between VMP and the managed Converged System or Vscale Fabric Technology Extension. For each option, refer to the requirements listed in the use cases for inter-Converged System networking.

vmp_vceopt_mgmt | Carries inter-Converged System management traffic to and from the VMs in the optional management workload vSphere cluster resource pool. VLAN design requirements are to be supplied.

vmp_eco_mgmt | Carries inter-Converged System management traffic to and from the VMs in the ecosystem management workload VMware vSphere cluster resource pool beneath the VXVMP-ECO VMware vSphere cluster. VLAN design requirements are to be supplied.
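The /22 and /27 guidance above can be sanity-checked with Python's standard ipaddress module; this is a quick check, not a Dell EMC tool:

```python
import ipaddress

def usable_hosts(cidr: str) -> int:
    """Usable host addresses in an IPv4 subnet
    (network and broadcast addresses excluded)."""
    return ipaddress.ip_network(cidr).num_addresses - 2

print(usable_hosts("10.0.0.0/22"))  # 1022 -> covers the 650 to 1000 ESXi host range
print(usable_hosts("10.0.0.0/27"))  # 30   -> sufficient for fewer than 30 VMs
```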

In production management, the client network can access hosts and VMs on VLANs 101, 201, 105, and 205 within the Vscale Fabric Technology Extension that contains VMP.


The following illustration reflects the connections between the devices, but not the quantity of these connections, for the logical network connectivity of the VMP in the production management environment:

To manage VMP resources within the Vscale Fabric Technology Extension:

• The VMP local VMware vCenter environment has hosts and VMs on vcesys_esx_mgmt and VMs on vcesys_oob_mgmt.

• The VMP production VMware vCenter environment has VMs on vmp_oob_mgmt and vmp_esx_mgmt.

These are accessible from external networks.

The following VLANs are local to the Vscale Fabric network environment:

• vcesys_nfs leverages Layer 3 connectivity to the Vscale Fabric Technology Extension for routed NFS using VXLAN


• vcesys_esx_L3vmotion leverages Layer 3 connectivity to the Vscale Fabric Technology Extension for cross VMware vCenter vMotion

• vcesys_esx_L3prov leverages Layer 3 connectivity to the Vscale Fabric Technology Extension for OVF provisioning and cold vMotion

• vmp_oob_mgmt leverages Layer 3 connectivity to the Vscale Fabric Technology Extension for OOB management

• vmp_esx_mgmt leverages Layer 3 connectivity to the Vscale Fabric Technology Extension for VMware vSphere ESX management

The VMP must be in the same data center as, or within a metro (150 milliseconds RTT) latency distance to, the Converged System, VxRack System, or Vscale Fabric Technology Extension managed by VMP.

VM placement and VLAN assignment

The following illustration shows the placement of each VM within the VMP along with its corresponding VLAN:


VMware vSphere virtual switch designs

The Vscale Management Platform (VMP) combines the VMware vSphere Standard Switch and the VMware vSphere Distributed Switch (VDS) into a hybrid design for virtual networking.

Each VMware vSphere ESXi host has its VMkernel port groups for vMotion and NFS (if used) configured on the VMware VDS. The remaining port groups are configured on the VMware vSphere Standard Switch, and hosts are managed by the local VMware vCenter Server that resides in the local management workload pool.
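As a concrete illustration of this split, the following sketch uses pyVmomi (the open-source Python SDK for the vSphere API) to create an early-binding port group on the VDS for one of the VMkernel networks. The dvs handle, port group name, and port count are illustrative assumptions, not values mandated by the VMP design.

```python
from pyVmomi import vim

def add_vds_portgroup(dvs: vim.DistributedVirtualSwitch, name: str):
    """Create an early-binding port group on the VDS. In the VMP hybrid
    design only the vMotion and NFS VMkernel port groups live on the
    VDS; all other port groups stay on the standard switch."""
    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
        name=name,
        type="earlyBinding",
        numPorts=16,  # illustrative default
    )
    return dvs.AddDVPortgroup_Task([spec])

# Example, assuming an authenticated pyVmomi session and a dvs object:
# task = add_vds_portgroup(dvs, "vcesys_esx_L3vmotion")
```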

If migrating an existing VMware vCenter Server environment from an external Converged System to the VMP into a centralized VMware vCenter Server instance, the following conditions apply:

• The VMP and individual compute hosts must be within the same data center and latency limitations.

• The managed Cisco Nexus 1000V Switches must be in Layer 3 mode.

• Any Converged System managed by VMP must be at a supported Release Certification Matrix (RCM).

• VMware vCenter resources from a VMware vCenter instance running on VMP cannot be used.

• Management must be moved in its entirety, including the consolidation or migration of all associated VMware vCenter Services.


VMware vSphere Standard Switch

The following illustration shows the connections between the devices for the VMware vSphere Standard Switch on the VMP:


VMware vSphere Distributed Switch

The following illustration shows the VMware VDS configuration on the VMP:

Management workload cluster and resource pools

Dedicated clusters and system resource pools segregate the resources required to run efficiently and maintain performance without hindering other workloads.

The following clusters support the management workloads:

• Core cluster (VXVMP-CORE)

• Ecosystem cluster (VXVMP-ECO)


Core cluster

The core cluster consists of the following resource pool workloads that support Vscale Management Platform (VMP) and external Converged System management:

Workload | Description
VMP management | Manages components local to the VMP. The local management workload consists of the VMs required for the VMware vSphere management components that run the Vscale Fabric Technology Extension with VMP.
Optional management | This VMware vSphere system resource pool provides all the available data protection software components.
Production shared management | Manages all external Converged Systems and provides the central workloads for the VMware vSphere management components and common components such as the element manager, Secure Remote Services, fabric manager, and PowerPath.

The following table lists the VMs that belong in the local and production management workloads:

Workload | Components | Manages
VMP management | Local database server; local VMware vCenter Server; local update manager; local VMware vCenter Platform Services Controller 1 and 2; local element manager | Local Converged System components
Production management | Production database server; production VMware vCenter Server; production update manager; production VMware vCenter Platform Services Controller 1 and 2; production element manager; production fabric manager; production Secure Remote Services appliance; production PowerPath license server | Primary point of management for Converged System resources (including storage, compute, virtualization, and network)

Ecosystem cluster

The ecosystem management workload consists of non-Dell EMC-supported management tools from Cisco, Dell EMC, and VMware. The workload also consists of software that is certified as Vblock System-ready, such as VMware vRealize Suite, VMware Horizon View, Cisco UCS Director, Ionix Unified Infrastructure Manager (UIM/P and UIM/O), VMTurbo, and Cloud Lifecycle Management.

A new VMware vSphere system resource pool is created within a new and separate VMware vSphere cluster known as VXVMP-ECO. The separation of the ecosystem management workload enables the required control and shaping of the enterprise resource management tools.


Managing resource usage and VMs

Creating resource pools or vApps and grouping the associated VMs within them provides an effective and efficient way to manage resource usage and to execute power on/off maintenance tasks.
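Such pools can also be created programmatically. The following pyVmomi sketch is illustrative only: the share levels, expandable reservations, and the parent-pool handle are our assumptions, not values prescribed by the VMP design.

```python
from pyVmomi import vim

def create_workload_pool(parent: vim.ResourcePool, name: str) -> vim.ResourcePool:
    """Create a child resource pool for grouping the VMs of one
    management workload (for example, a production or ecosystem pool)."""
    def alloc() -> vim.ResourceAllocationInfo:
        return vim.ResourceAllocationInfo(
            reservation=0,
            limit=-1,                    # no fixed cap
            expandableReservation=True,  # may borrow from the parent pool
            shares=vim.SharesInfo(level="normal"),
        )

    spec = vim.ResourceConfigSpec(cpuAllocation=alloc(), memoryAllocation=alloc())
    return parent.CreateResourcePool(name=name, spec=spec)

# Example, assuming an authenticated pyVmomi session and a cluster object:
# pool = create_workload_pool(cluster.resourcePool, "VMP_PROD-CENTRAL")
```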

The following illustration provides an example configuration of pools and vApps for resource management and VM control:


Virtualization

The Vscale Management Platform (VMP LOCAL) workloads have a local VMware vCenter Server instance to manage the workloads and one or more VMware vCenter Servers for the production environment.

Local VMware vCenter Server

The local management workload resides on the Vscale Fabric Technology Extension with VMP and is controlled by the local VMware vCenter instance.

Production VMware vCenter Server

The production management workload (VMP PROD-CENTRAL/VMPprod01) for VMP operates as a monolithic VMware vCenter Server instance for centralized management of the VMware vSphere ESXi hosts from external Converged Systems.


Sample configurations

Cabinet elevations vary based on the specific configuration requirements.

Sample ACI configuration with the Cisco Nexus 9504 Switch

Elevations are provided for sample purposes only. For specifications for a specific design, consult your vArchitect.

Cabinet 1


Cabinet 2

Sample ACI configuration with the Cisco Nexus 9508 Switch

Elevations are provided for sample purposes only. For specifications for a specific design, consult your vArchitect.


Cabinets 1 and 2


Cabinet 3

Sample technology connect configuration

Elevations are provided for sample purposes only. For specifications for a specific design, consult your vArchitect.


Cabinet 1

Sample system with compute

Elevations are provided for sample purposes only. For specifications for a specific design, consult your vArchitect.


Cabinet 1

Sample system with storage

Elevations are provided for sample purposes only. For specifications for a specific design, consult your vArchitect.


Cabinet 1

Sample Vscale Management Platform with VNX5200

Elevations are provided for sample purposes only. For specifications for a specific design, consult your vArchitect.


Cabinet 1


Additional references

Virtualization components

Virtualization component information and links to documentation are provided.

Product | Description | Link to documentation
VMware vCenter Server | Provides a scalable and extensible platform that forms the foundation for virtualization management and provides VMware high availability (HA) and the Distributed Resource Scheduler (DRS). | www.vmware.com/products/vcenter-server/
VMware vSphere ESXi | Abstracts hardware to support virtualized workloads. | www.vmware.com/products/vsphere/

Compute components

Compute component information and links to documentation are provided.

Product | Description | Link
Cisco UCS B-Series Blade Servers | Servers that adapt to application demands, intelligently scale energy use, and offer best-in-class virtualization. | www.cisco.com/en/US/products/ps10280/index.html
Cisco UCS Manager | Provides centralized management capabilities for the Cisco Unified Computing System (UCS). | www.cisco.com/en/US/products/ps10281/index.html
Cisco UCS 2200 Series Fabric Extenders | Bring unified fabric into the blade server chassis, providing up to eight 10 Gbps connections each between blade servers and the fabric interconnect. | www.cisco.com/c/en/us/support/servers-unified-computing/ucs-2200-series-fabric-extenders/tsd-products-support-series-home.html
Cisco UCS 2300 Series Fabric Extenders | Bring unified fabric into the blade server chassis, providing up to four 40 Gbps connections each between blade servers and the fabric interconnect. | www.cisco.com/c/en/us/support/servers-unified-computing/ucs-2300-series-fabric-extenders/tsd-products-support-series-home.html
Cisco UCS 5108 Series Blade Server Chassis | Chassis that supports up to eight blade servers and up to two fabric extenders in a six RU enclosure. | www.cisco.com/en/US/products/ps10279/index.html
Cisco UCS 6200 Series Fabric Interconnects | Cisco UCS family of line-rate, low-latency, lossless, 10 Gigabit Ethernet, Fibre Channel over Ethernet (FCoE), and Fibre Channel functions. Provide network connectivity and management capabilities. | www.cisco.com/en/US/products/ps11544/index.html
Cisco UCS 6300 Series Fabric Interconnects | Cisco UCS family of line-rate, low-latency, lossless, 40 Gigabit Ethernet, Fibre Channel over Ethernet (FCoE), and Fibre Channel functions. Provide network connectivity and management capabilities. | www.cisco.com/c/en/us/support/servers-unified-computing/ucs-6300-series-fabric-interconnects/tsd-products-support-series-home.html

Network components

Network component information and links to documentation are provided.

Product | Description | Link
Cisco Nexus 1000V Series Switches | Delivers Cisco VN-Link services to VMs hosted on that server. | www.cisco.com/en/US/products/ps9902/index.html
Cisco Nexus 3000 Series Switches | Provides management access to all Converged System components using vPC technology to increase redundancy and scalability. | www.cisco.com/c/en/us/products/switches/nexus-3000-series-switches/index.html
Cisco Nexus 5000 Series Switches | Simplifies data center transformation by enabling a standards-based, high-performance, unified fabric. | www.cisco.com/c/en/us/products/switches/nexus-5000-series-switches/index.html
Cisco MDS 9000 Series Switches | Provides industry-leading availability, scalability, security, and management. | www.cisco.com/c/en/us/products/storage-networking/mds-9000-series-multilayer-switches/index.html
Cisco Nexus 9000 Series Switches | Delivers proven high performance and density, low latency, and exceptional power efficiency in a broad range of compact form factors. | www.cisco.com/c/en/us/products/switches/nexus-9000-series-switches/index.html
VMware vSphere Distributed Switch (VDS) | Delivers advanced network services to VMs hosted on that server. | www.vmware.com/products/vsphere/features/distributed-switch.html


Storage components

Storage component information and links to documentation are provided.

Product | Description | Link to documentation
Unity | Delivers a fully integrated SAN and NAS array with seamless tiered storage and streamlined management. | www.emc.com/en-us/storage/unity.htm
XtremIO | Delivers industry-leading performance, scale, and efficiency for hybrid cloud environments. | www.emc.com/collateral/software/specification-sheet/h12451-XtremIO-ss.pdf
VMAX3 AFA Family | Delivers performance, scale, high availability, and advanced data services for all mission-critical applications. | www.emc.com/en-us/storage/vmax-all-flash.htm
VMAX3 Hybrid Family | Delivers industry-leading performance, scale, and efficiency for hybrid cloud environments. | www.emc.com/collateral/hardware/specification-sheet/h13217-vmax3-ss.pdf
VNX Series Gateways | Provides NAS storage in a centrally managed information storage system. The gateways allow you to grow, share, and cost-effectively manage the Vblock System with multi-protocol file access. | www.emc.com/storage/vnx/vnx-series-gateways.htm#!
VNX | Delivers high-performance, unified storage with unsurpassed simplicity and efficiency, optimized for virtual applications. | www.emc.com/products/series/vnx-series.htm


The information in this publication is provided "as is." Dell Inc. makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.

Use, copying, and distribution of any software described in this publication requires an applicable software license.

Copyright © 2017 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property of their respective owners. Published in the USA in February 2017.

Dell EMC believes the information in this document is accurate as of its publication date. The information is subject to change without notice.
