
Vblock Infrastructure Packages Reference Architecture

Introduction

Goal of This Document

This document describes and provides high-level design considerations for deploying Vblock™ Infrastructure Packages. Vblock Infrastructure Packages are enterprise- and service provider-class infrastructure solutions using VMware® vSphere/vCenter™ on a Cisco® Unified Computing System™ (UCS) connected to EMC® CLARiiON® CX4 Series or Celerra® Unified Storage platforms or Symmetrix® V-Max™ Series arrays via a Cisco MDS 9506 Multilayer Director-class SAN switch¹ or a Cisco Nexus® 5010.

Audience

The target audience for this document includes sales engineers, field consultants, advanced services specialists, and customers who want to deploy a virtualized infrastructure using VMware vSphere/vCenter on Cisco UCS connected to EMC Symmetrix V-Max, Celerra, and CLARiiON storage products. The document also explores potential business benefits of interest to senior executives.

Objectives

This document is intended to describe:

• The role of Vblock Infrastructure Packages within a data center

• The capabilities and benefits of Vblock Infrastructure Packages

• The components of the two types of Vblock Infrastructure Packages: Vblock 2 and Vblock 1

This document also highlights the collaborative efforts of three partner companies—Cisco and EMC with VMware—working together on a common goal of providing proven technology to customers.

1. Or, optionally, a Cisco MDS 9222i Multiservice Modular Fibre Channel Switch.

© 2010 Cisco EMC VMware. All rights reserved.


Vblock Infrastructure Packages Overview

IT is undergoing a transformation. The current “accidental architecture” in the data center drives up procurement costs, management costs, and complexity while making it difficult to meet customer service level agreements. This makes it hard for IT to respond to the needs of the business in a timely manner and creates the perception of IT as a cost center. The data center is now moving toward a “private cloud” model: a new model for delivering IT as a service, whether that service is provided internally (IT today), externally (by a service provider), or in combination. This new model requires a new way of thinking about both the underlying technology and the delivery model for customer success.

While the need for a new model has never been more clear, navigating the path to that model has never been more complicated. The benefits of private clouds are capturing the collective imagination of the business in organizations of all sizes around the world. The realities of outdated technologies, rampant incremental approaches, and the absence of a compelling end-state architecture are impeding adoption by customers.

By harnessing the power of virtualization, private clouds place considerable business benefits within reach. These include:

• Business enablement—Increased business agility and responsiveness to changing priorities; speed of deployment and the ability to address the scale of global operations with business innovation

• Service-based business models—Ability to operate IT as a service

• Facilities optimization—Lower energy usage; better (less) use of data center real estate

• IT budget savings—Efficient use of resources through consolidation and simplification

• Reduction in complexity—Moving away from fragmented, “accidental architectures” to integrated, optimized technology that lowers risk, increases speed, and produces predictable outcomes

• Flexibility—Ability of IT to gain responsiveness and scalability through federation to cloud service providers while maintaining enterprise-required policy and control

Moore’s Law (1965) was, in the 1980s and 1990s, overtaken by an unwritten rule that everyone knew but rarely lamented loudly enough: enterprise IT doubles in complexity and total cost of ownership (TCO) every five years, and IT feels the pressure points ever more acutely.

Enterprise IT solutions over the past 30 years have become more costly to analyze and design, procure, customize, integrate, inter-operate, scale, service, and maintain. This is due to the inherent complexity in each of these lifecycle stages of the various solutions.

Within the last decade, we have seen the rise of diverse inter-networks—variously called “fabrics,” “grids,” and, generically, the “cloud”—constructed on commodity hardware, heavily yet selectively service-oriented with a scale of virtualized power never before contemplated, housed in massive data centers on- and off-premises.

Yet amid the buzzword din of onshoring and offshoring, in-, out-, and co-sourcing, blades and RAID, LANs and storage area networks (SANs), massive scale and handheld computing, virtualization (a computing capability dating to the early mainframe days) has met secure networking (around since the ARPANET), both now matured, to form the basis for the next wave.

It has only been in the past several years that the notion of “cloud computing”—infrastructure, software, or whatever-business-needs as an IT service—has been taken seriously in its own right, championed by pioneers who have proved the model’s viability even if on too limited a basis.

With enterprise-level credibility, enabled by the best players in the IT industry, the next wave of computing will be ushered in on terms that make business sense to the business savvy.



What Constitutes Vblock Infrastructure Packages?

Vblock Infrastructure Packages are pre-engineered, tested, and validated units of IT infrastructure that have a defined performance, capacity, and availability Service Level Agreement (SLA). The promise that Vblock Infrastructure Packages offer is to deliver IT infrastructure in a new way and accelerate organizations’ migration to private clouds.

Vblock Infrastructure Packages grew out of an idea to simplify IT infrastructure acquisition, deployment, and operations. Removing choice is part of that simplification: the defined form factors deliberately limit the scope to customize or remove components. For example, substituting components is not permitted, as doing so breaks the tested-and-validated principle. While Vblock Infrastructure Packages are tightly defined to meet specific performance and availability bounds, their value lies in a combination of efficiency, control, and choice. Another guiding principle is expandability: the architecture is flexible and extensible, so the capacity of a Vblock Infrastructure Package can grow. The following sections define the Vblock Infrastructure Packages configurations, with mandatory, recommended, and optional hardware and software.

Vblock Infrastructure Packages—A New Way of Delivering IT to Business

Vblock Infrastructure Packages accelerate infrastructure virtualization and private cloud adoption:

• Production-ready

– Integrated and tested units of virtualized infrastructure

– Best-of-breed virtualization, network, compute, storage, security, and management products

• SLA-driven

– Predictable performance and operational characteristics

• Reduced risk and compliance

– Tested and validated solution with unified support and end-to-end vendor accountability

Customer benefits include:

• Simplified expansion and scaling

• Add storage or compute capacity as required

• Can connect to existing LAN switching infrastructure

• Graceful, non-disruptive expansion

• Self-contained SAN environment with known standardized platform and processes

• Enables introduction of Fibre Channel over IP (FCIP), Storage Media Encryption (SME), and so on, later for Multi-pod

• Enables scaling to multi-Vblock Infrastructure Packages and multi-data center architectures

• Multi-tenant administration, role-based security, and strong user authentication

Vblock Infrastructure Packages Design Principles

A data center is a collection of pooled Vblock Infrastructure Packages aggregated in “Zones.” A Vblock Infrastructure Package is:

• A unit of assembly that provides a set of services, at a known level, to target consumers



• Self-contained, though it may also use external shared services

• Optimized for the classes of services it is designed to provide

• Can be clustered to provide availability or aggregated for scalability, but Vblock Infrastructure Packages are each still viable on their own

• Fault and service isolation—The failure of one or more Vblock Infrastructure Packages will not impact the operation of other Vblock Infrastructure Packages (service level degradation may occur unless availability or continuity services are present)

Vblock Infrastructure Packages are pre-tested, fully integrated, scalable architectures. They are characterized by:

• Repeatable “units” of construction with matched performance and operational characteristics and defined power, space, and cooling requirements

• Repeatable design patterns that facilitate rapid deployment, integration, and scaling

• A “facilities to the workload” design, scaled for the highest efficiencies in virtualization and workload re-platforming

• An extensible management and orchestration model based on industry-standard tools, APIs, and methods

• Construction that contains, manages, and mitigates failure scenarios in hardware and software environments

Vblock Infrastructure Packages offer deterministic performance and predictable architecture:

• Predictable SLA—Granular SLA measurement and assurance

• Deterministic space and weight—Floor tiles become unit of capacity planning

• Power and cooling—Consistent power consumption and cooling (kWh/BTU) per unit

• Pre-determined capacity and scalability—Uniform workload distribution and mobility

• Deterministic fault and security isolation

Vblock Infrastructure Packages benefits include:

• Accelerate the journey to pervasive virtualization and private cloud computing while lowering risk and operating expenses

• Ensure security and minimize risk with certification paths

• Support and manage SLAs

– Resource metering and reporting

– Configuration and provisioning

– Resource utilization

• Validated platforms that enable seamless extension of the environment

• O/S and application support:

– Vblock Infrastructure Packages accelerate virtualization of applications by standardizing IT infrastructure and IT processes

– Broad range of O/S support

– All current applications that work in a VMware environment also work in a Vblock Infrastructure Packages environment

– Vblock Infrastructure Packages validated applications include SAP, VMware View 4, and Web applications.

Vblock Infrastructure Packages are scalable platforms for building solutions:



• Modular architecture enables graceful scaling of Vblock Infrastructure Packages environments

• Consistent policy enforcement and IT operational processes

• Add capacity to one or more existing Vblock Infrastructure Packages or add more Vblock Infrastructure Packages

• Mix-and-match Vblock Infrastructure Packages to meet specific application needs

Vblock Infrastructure Packages Architecture Components

Figure 1 provides a high-level overview of the components of the Vblock Infrastructure Packages architecture.¹

Figure 1 Vblock Infrastructure Packages Architecture Components

1. The network layer represented in Figure 1 by the Cisco Nexus 7000 is not a Vblock Infrastructure Packages component. EMC Ionix™ is optional and available at additional cost.

[Figure 1 shows four layers: a network layer (Cisco Nexus 7000); a compute/network layer (Cisco UCS 5108 Blade Chassis, Cisco UCS 6100 Fabric Interconnect, Cisco Nexus 1000V, VMware vSphere); a management layer (VMware vCenter, Cisco UCS Manager, EMC Ionix UIM, SMC or Navisphere); and a storage layer (CLARiiON CX4-480 or Symmetrix V-Max behind a Cisco MDS 9506 SAN, or Celerra NS-960 unified storage behind a Cisco Nexus 5010).]



Vblock 2 Components

Vblock 2 is a high-end configuration that is extensible to meet the most demanding IT requirements of large enterprises or service providers. By delivering high-performance and large-scale virtualization, Vblock 2 can support a substantial number of virtual machines in a compact footprint.

Vblock 2 components:

• Compute

– 32-64 Cisco UCS B-series half-slot blade servers

• Network

– Cisco Nexus 1000V

– The UCS uses 6100 series fabric interconnects that carry the network and storage (IP-based) traffic from the blades to the connected SAN and LAN

• Storage

– EMC Symmetrix V-Max

– 96-146 TB capacity

– iSCSI and SAN

– Celerra NS-G8 (optional)

– Cisco MDS 9506

• VMware vSphere 4.0/vCenter 4.0 Update 1¹

• Management

– EMC Ionix Unified Infrastructure Manager (optional)

– VMware vCenter

– EMC Symmetrix Management Console

– EMC PowerPath/VE

– Cisco UCS Manager

– Cisco Fabric Manager

Vblock 1 Components

Vblock 1 is a mid-sized configuration in a compact footprint that delivers a broad range of IT capabilities. Vblock 1 is suitable for data centers of any size, including remote-office locations.

Vblock 1 components:

• Compute

– 16-32 Cisco UCS B-series half-slot blade servers

• Network

– Cisco Nexus 1000V

– The UCS uses 6100 series fabric interconnects that carry the network and storage (IP-based) traffic from the blades to the connected SAN and LAN

• Storage (two options)

1. VMware vSphere 4 Enterprise Plus Suite is the licensed software.



– Vblock 1 enabled by SAN Storage—EMC CLARiiON CX4-480 (38-64 TB capacity, iSCSI and SAN), Celerra NS-G2 (optional), and Cisco MDS 9506 (optionally Cisco MDS 9222i)

– Vblock 1 enabled by Unified Storage—EMC Celerra NS-960 and Cisco Nexus 5010

• VMware vSphere 4.0/vCenter 4.0 Update 1¹

• Management

– EMC Ionix Unified Infrastructure Manager (optional)

– VMware vCenter

– EMC Navisphere®

– EMC PowerPath®/VE

– Cisco UCS Manager

– Cisco Fabric Manager
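For quick reference, the headline figures from the two component lists can be collected into a small structure. The sketch below is illustrative only: the dictionary layout and function are not part of the architecture, and the numbers are taken directly from the lists above.

```python
# Illustrative summary of the two configurations; figures are quoted from the
# component lists in this document, the structure itself is hypothetical.
VBLOCKS = {
    "Vblock 1": {
        "blades": (16, 32),       # Cisco UCS B-Series half-slot blade servers
        "storage_tb": (38, 64),   # CLARiiON CX4-480 capacity (SAN Storage option)
        "array": "CLARiiON CX4-480 or Celerra NS-960",
    },
    "Vblock 2": {
        "blades": (32, 64),
        "storage_tb": (96, 146),  # Symmetrix V-Max capacity
        "array": "Symmetrix V-Max",
    },
}

def summarize(name):
    """One-line summary of a configuration's headline numbers."""
    v = VBLOCKS[name]
    return (f"{name}: {v['blades'][0]}-{v['blades'][1]} blades, "
            f"{v['storage_tb'][0]}-{v['storage_tb'][1]} TB on {v['array']}")

for name in VBLOCKS:
    print(summarize(name))
```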

Vblock Infrastructure Packages Design and Configuration Details

Figure 2 provides a high-level topological view of the components of Vblock 2 and Vblock 1 enabled by SAN Storage, and Figure 3 provides a similar view of the components of Vblock 1 enabled by Unified Storage.

Figure 2 High-Level Topological View of Vblock 2 and Vblock 1 Enabled by SAN Storage

1. VMware vSphere 4 Enterprise Plus Suite is the licensed software.

[Figure 2 shows the UCS blade chassis (UCS blade servers with Cisco UCS Fabric Extenders in back) connected by fabric links (10GE x 4) to a pair of UCS 6100 Series Fabric Interconnects with UCS cluster links, 10/100/1000 management links, and LAN uplinks; the Fabric Interconnects connect over Fibre Channel to Cisco MDS 9506 SAN A and SAN B switches and on to the EMC storage (CLARiiON CX4-480 or Symmetrix V-Max).]



Figure 3 High-Level Topological View of Vblock 1 Enabled by Unified Storage

[Figure 3 shows the UCS blade chassis connected by fabric links (10GE x 4) to a pair of UCS 6100 Series Fabric Interconnects with UCS cluster links, 10/100/1000 management links, and LAN uplinks; the Fabric Interconnects connect over Ethernet and Fibre Channel to a pair of Cisco Nexus 5010 switches and on to the EMC Celerra NS-960.]

Vblock Infrastructure Packages are defined by minimum and maximum numbers of components that offer balanced I/O, bandwidth, and storage capacity relative to the compute and storage arrays offered. Vblock Infrastructure Packages are fully redundant, autonomous systems with 1+1 or N+1 redundancy by default.

In Vblock 1, each UCS chassis contains eight B-200 blades, six with 48 GB RAM and two with 96 GB RAM. This provides good price/performance and supports some memory-intensive applications, such as in-memory databases, within the Vblock Infrastructure Packages definition. For Vblock 2, all B-200 blades are defined with 96 GB RAM by default because the system's performance capabilities make it more likely to run very dense VM populations or memory-intensive, mission-critical applications.

The amount of RAM per blade within either a Vblock 2 or a Vblock 1 may be adjusted, within the definition of a Vblock Infrastructure Package, if you have specific requirements. For example, if you need a mixture of RAM densities, you can specify 32 GB, 48 GB, 72 GB, and 96 GB options. This, however, requires careful consideration of the operational environment and introduces some variance.
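The default memory arithmetic per chassis works out as follows. This is a minimal sketch; it assumes the standard population of eight half-slot B-200 blades per UCS 5108 chassis, which matches the six-plus-two mix described above.

```python
def chassis_ram_gb(blade_mix):
    """Total RAM per chassis; blade_mix is a list of (blade_count, gb_per_blade)."""
    return sum(count * gb for count, gb in blade_mix)

# Vblock 1 default: six blades at 48 GB plus two at 96 GB per chassis.
vblock1_chassis = chassis_ram_gb([(6, 48), (2, 96)])   # 480 GB

# Vblock 2 default: all eight blades at 96 GB per chassis.
vblock2_chassis = chassis_ram_gb([(8, 96)])            # 768 GB

print(vblock1_chassis, vblock2_chassis)
```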

B-250 series modules were not tested, but will be a future option. If B-250 modules are required for memory densities greater than 96 GB per module, they may be accommodated within Vblock 2 and Vblock 1 once testing and validation have been completed. Note that because the B-250 is a full-slot module, it has density and performance impacts that need to be ascertained: halving the number of blades per chassis reduces the number of CPUs per slot by 50%, which reduces IOPS and potentially disk capacity.

Within a Vblock 1, there are no hard disks on the B-200 blades, as all boot services and storage are provided by the SAN. However, a small hard drive may be installed if local page memory is required for vSphere. Units that use local disk for main storage or operating system storage are not considered Vblock Infrastructure Packages and are custom implementations at this point.




For Vblock 2, each B-200 series blade module has 72 GB SATA drives for page-memory purposes. If required, these may be removed to reduce power and cooling overhead, increase mean time between failures (MTBF), or save cost.

Each 61x0 Fabric Interconnect has either 4 or 8 10GE/Unified Fabric uplinks to the aggregation layer (which is not part of Vblock Infrastructure Packages), provided by Cisco Nexus 7000 (new build-out) or Cisco Catalyst 6500 (upgrade to an existing data center) switches, and either 4 or 8 4-Gbps Fibre Channel connections to the SAN aggregation provided by a pair of Cisco MDS 9506 director-class switches (SAN A and B support).

The Cisco MDS 9506 switches are recommended, but may optionally be exchanged for 9509s or 9513s to scale capacity, or reduced to a Cisco MDS 9222i if less density is required; the 9222i's performance may be acceptable for small Vblock 1 implementations. For the Vblock 1 enabled by Unified Storage architecture, the MDS is replaced by a Cisco Nexus 5010, which provides limited Fibre Channel connectivity and local 10 Gb Ethernet connectivity between the UCS Fabric Interconnects and the Celerra NS-960 Data Mover ports.

Figure 5 illustrates the interconnection of the Cisco MDS 9506 in Vblock 2, Figure 6 illustrates the interconnection of the Cisco MDS 9222i in Vblock 1 enabled by SAN Storage, and Figure 7 illustrates the interconnection of the Cisco Nexus 5010 in Vblock 1 enabled by Unified Storage.

For more information on the Cisco MDS 9222i and MDS 9506, see Storage Area Network—Cisco MDS Fibre Channel Switch. For more information on the Cisco Nexus 5010, see Cisco Nexus 5010.

VMware vSphere 4 Enterprise Plus licenses are mandatory within all Vblock Infrastructure Packages definitions (to enable the Cisco Nexus 1000V and EMC PowerPath/VE), and per-CPU licensing is included within the defined bill of materials. It is also acceptable to run operating systems and applications directly on the B-200 blades. Note that other hypervisors are not supported by Vblock Infrastructure Packages and invalidate the Vblock Infrastructure Packages support agreement.

Cisco Nexus 1000V and Enterprise Plus are mandatory components due to the inherent richness that they offer in terms of policy control, segmentation, flexibility, and instrumentation.

Networking

Vblock 2 and Vblock 1 enabled by SAN Storage do not contain any network switches other than the MDS SAN switches; Vblock 1 enabled by Unified Storage, however, introduces a Cisco Nexus 5010 to provide limited local Fibre Channel connectivity and additional 10 Gb Ethernet connectivity between the UCS Fabric Interconnects and the Celerra NS-960 Data Mover ports. The MDS 9000 Series switches are necessary components that provide Fibre Channel connectivity between the storage arrays, the UCS 61x0 Fabric Interconnects, and ultimately the UCS B-200 blades.

For upstream connectivity, the UCS 61x0 Fabric Interconnects are connected using either 4 x 10GE/Unified Fabric (Type 1) or 8 x 10GE/Unified Fabric (Type 2) connections, which equates to an oversubscription factor of 4:1. There is no provision for an intermediate layer of "access" switches at this time.
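The stated 4:1 factor can be sanity-checked with simple arithmetic. This sketch assumes four 10GE fabric links per chassis (as shown in the topology figures) and pairs the Type 1 uplink count with a four-chassis build and the Type 2 count with an eight-chassis build; that pairing is one plausible reading, not something the document states explicitly.

```python
def oversubscription(chassis, fabric_links_per_chassis, uplinks):
    """Ratio of server-facing 10GE links to upstream 10GE uplinks."""
    return (chassis * fabric_links_per_chassis) / uplinks

# Type 1: 4 x 10GE uplinks per fabric interconnect, assumed 4 chassis.
type1 = oversubscription(chassis=4, fabric_links_per_chassis=4, uplinks=4)

# Type 2: 8 x 10GE uplinks per fabric interconnect, assumed 8 chassis.
type2 = oversubscription(chassis=8, fabric_links_per_chassis=4, uplinks=8)

print(type1, type2)  # both work out to 4.0, matching the stated 4:1 factor
```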

If you require a Celerra Gateway within Vblock 2 or Vblock 1 enabled by SAN Storage, there are two possibilities:

• Connect the Celerra Gateway directly to the Cisco Nexus 7000¹ aggregation layer

• Use a local Cisco Nexus 5010² switch to provide connectivity

Figure 4 illustrates the interconnection of the EMC Celerra Gateway in Vblock 2 and Vblock 1 enabled by SAN Storage.

1. The Cisco Nexus 7000 is not a component of Vblock Infrastructure Packages.

2. The Cisco Nexus 5010 is not a component of Vblock 1 enabled by SAN Storage.



Figure 4 EMC Celerra Gateway in Vblock 2 and Vblock 1 Enabled by SAN Storage

[Figure 4 shows the UCS (5100 blade chassis and 61x0 Fabric Interconnect) connected over 10 Gb Ethernet to the EMC Celerra NS-G Data Movers and Data Mover cache, which connect over 4-Gbps Fibre Channel to the EMC storage front-end Fibre Channel ports and physical disks.]

For more information, see NFS Datastores and Native File Services—EMC Celerra Gateway.

Storage

Storage capacity has been tuned to match the I/O performance of the attached UCS systems. Some analysis of the likely underlying applications has also been taken into account to characterize the user or VM densities likely for a given Vblock Infrastructure Package. These numbers are, of course, highly variable depending on your use cases and requirements; they are intended to provide guidance on typical densities.

Figure 5 illustrates the interconnection of the EMC Symmetrix V-Max in Vblock 2, Figure 6 illustrates the interconnection of the EMC CLARiiON CX4-480 in Vblock 1 enabled by SAN Storage, and Figure 7 illustrates the interconnection of the EMC Celerra NS-960 in Vblock 1 enabled by Unified Storage.




Figure 5 EMC Symmetrix V-Max in Vblock 2

Figure 6 EMC CLARiiON CX4-480 in Vblock 1 Enabled by SAN Storage

[Figure 5 shows the UCS (4 to 8 chassis, with 61x0 Fabric Interconnects) connected over 10 Gb Ethernet and through redundant Cisco MDS 9506 switches (4-Gbps FC) to a Symmetrix V-Max with 1 to 2 engines, 64-128 GB of cache, 16-32 Fibre Channel (FA) front-end ports, and 220-355 physical disks.]

[Figure 6 shows the UCS connected through redundant Cisco MDS 9222i or 9506 switches (4-Gbps FC) to a CLARiiON CX4-480 with Service Processors A and B, 16 GB of cache, 8-16 Fibre Channel front-end ports, 4 x 10-Gbps iSCSI front-end ports, and 105-180 physical disks.]



Figure 7 EMC Celerra NS-960 in Vblock 1 Enabled by Unified Storage

[Figure 7 shows the UCS (5100 blade chassis and 61x0 Fabric Interconnect) connected over 10 Gb Ethernet to redundant Cisco Nexus 5010 switches and on (4-Gbps FC) to a Celerra NS-960 with three Data Movers, Service Processors A and B, 16 GB of cache, 8-16 FC front-end ports, 2-4 GB iSCSI front-end ports, and 105-180 physical disks.]

The CLARiiON system is configured with a mix of Flash, Fibre Channel, and SATA drives, with N+1 spares for redundancy. This means that although the minimum Vblock 1 density is 61 TB of raw storage, 42 TB is usable once system spares and overheads are factored in.
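The quoted figures imply that roughly 69 percent of raw capacity is usable. The sketch below turns that into a planning estimate; it assumes the same combined overhead (spares plus RAID and system overheads) applies across the capacity range, which the document does not state.

```python
# Derived from the quoted figures: 61 TB raw yields 42 TB usable.
RAW_TB = 61
USABLE_TB = 42
OVERHEAD_FACTOR = USABLE_TB / RAW_TB   # ~0.69 usable TB per raw TB

def estimate_usable(raw_tb, factor=OVERHEAD_FACTOR):
    """Rough usable-capacity estimate from raw capacity (assumption-laden)."""
    return raw_tb * factor

# For example, applied to the 64 TB maximum SAN Storage configuration:
print(round(estimate_usable(64), 1))
```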

Within the Vblock Infrastructure Packages definitions, network-attached storage (NAS) access is recommended for vSphere. A NAS Gateway, while optional, provides this service, including CIFS for applications. Although the NAS Gateways have been tested, they have not been performance-validated for a pure NAS environment; further testing is required to ensure that boot (PXE) as well as file access can be supported in a balanced fashion. Note that UCS does not currently support iSCSI boot of physical servers (VMs can boot from iSCSI through vSphere), so that is neither a tested nor a validated solution.

For the interim, Vblock 2 and Vblock 1 enabled by SAN Storage can support NAS with the provision that primary boot services are provided across the SAN. NAS functionality is integrated into the Celerra NS-960 in Vblock 1 enabled by Unified Storage.

For Vblock 2, the characteristics of the system are such that the hosted mission-critical applications will require Fibre Channel access to maintain performance. Again, it is highly recommended that one or more NAS Gateways be deployed for vSphere, with the exact number ascertained during the Vblock Infrastructure Packages planning phases.

EMC PowerPath/VE (PP/VE) provides several benefits in terms of performance, availability, and operations, so the base PP/VE license is mandatory for Vblock 2 and Vblock 1.

For more information on the:




• EMC CLARiiON storage system, see http://www.emc.com/products/family/clariion-family.htm

• EMC Celerra Unified Storage, see http://www.emc.com/products/family/celerra-family.htm

• EMC Symmetrix V-Max storage system, see http://www.emc.com/products/family/symmetrix-family.htm

Vblock Infrastructure Packages Management

Within Vblock Infrastructure Packages there are several managed elements, some of which are managed by their respective element managers. These elements offer corresponding interfaces that provide an extensible, open management framework. Figure 8 shows the Vblock Infrastructure Packages management framework with its relationships and interfaces. The individual element managers and managed components are:

• VMware vCenter Server

• Cisco UCS Manager

• Cisco Fabric Manager

• EMC Symmetrix Management Console

• EMC Navisphere Manager

A Vblock Infrastructure Packages element manager, Unified Infrastructure Manager (UIM)¹, manages the configuration, provisioning, and compliance of individual Vblock Infrastructure Packages and multiple mixed Vblock Infrastructure Packages. This yields several benefits: it provides a "single pane of glass" for systems configuration and integration, along with Vblock Infrastructure Packages service catalogs and self-service portal capabilities.

1. Optional and available at additional cost.



Figure 8 Vblock Infrastructure Packages Management

[Figure 8 shows a unified Vblock element management layer (for example, EMC Ionix UIM) spanning multiple Vblocks, each managed by UCS Manager, vCenter, and Symmetrix Management Console or Navisphere. The unified layer provides a self-service portal, service profile catalog, policy-based management, unified provisioning, configuration change and compliance analysis, and infrastructure recovery (DR); it feeds configuration and compliance events to an enterprise system management platform and availability, discovery, and performance data to an enterprise monitoring platform.]

Note that Ionix UIM does not provide fault monitoring, performance monitoring, billing, or software lifecycle management capabilities. The abstractions UIM offers, combined with its role as a single point of integration, simplify the integration of Vblock Infrastructure Packages into IT service catalogs and workflow engines. In this respect, UIM can dramatically simplify Vblock Infrastructure Packages deployment by abstracting the overall provisioning of Vblock Infrastructure Packages while still offering granular access to individual components for troubleshooting and fault management.

Note, however, that Vblock Infrastructure Packages have an open management framework that allows an organization to integrate Vblock Infrastructure Packages management with its choice of management tools.

Vblock Infrastructure Packages Qualification of Existing Environments

Many organizations have extensive EMC and VMware deployments within their data centers. However, simply adding a UCS system to such an environment does not constitute a Vblock 2 or Vblock 1. A number of questions must be considered, including:

• Do the existing arrays meet the published system capacity for Vblock 2 and Vblock 1?

• What firmware/software versions are running within the infrastructure?

• Is vSphere 4 deployed?

• Which other hypervisors are in use: Xen, Hyper-V?

• What management packages are being used?

• What other equipment is accessing the storage arrays?


Before the existing equipment can be supported as Vblock Infrastructure Packages, these questions need to be addressed. A plan would then be developed to remediate the environment to meet Vblock Infrastructure Packages standards.

In practical terms, remediation may not be possible or desirable, depending upon the complexity of the environment; it may be simpler to deploy a new Vblock 2 or Vblock 1, migrate workloads first, and then migrate existing storage arrays to that infrastructure over time. Each situation would need to be assessed on its relative merits and would require an extensive audit.

Expanding Vblock Infrastructure Packages

One guiding principle of Vblock Infrastructure Packages is the ability to expand capacity. The architecture is flexible and extensible, designed to expand easily from a few hundred VMs/users to several thousand. In addition, this capacity may be aggregated (clustered) as a single pool of shared capacity or segmented into smaller isolated pools.

Using Figure 9 as a reference (for expansion of Vblock 2 and Vblock 1 enabled by SAN Storage), the first deployment may be a single Vblock 1 enabled by SAN Storage. As the organization requires more capacity, the initial Vblock 1 may be extended by adding another Vblock 1 enabled by SAN Storage and clustering the two systems to aggregate their capacity. If a Vblock 2 is added, its capacity may be segmented from the Vblock 1 storage and compute for regulatory, policy, or operational reasons.

Figure 9 Vblock Infrastructure Packages Expansion—Vblock 2 and Vblock 1 Enabled by SAN Storage

In order to scale capacity within Vblock Infrastructure Packages, the initial configuration includes a Cisco MDS 9506 with a 24-port 2/4/8G Fibre Channel module. If additional capacity is required, an expansion to the original Vblock 2 or Vblock 1 enabled by SAN Storage simply connects the UCS 61x0 and CLARiiON or Symmetrix V-Max to the existing MDS interfaces. If additional capacity is required on the Cisco MDS 9506 switch, additional interface modules can be installed as necessary.

If additional compute or storage is required beyond what is provided in the Vblock 1 enabled by Unified Storage configuration, another Vblock 1 enabled by Unified Storage can be deployed to gain the additional compute and storage. This enables the storage environment to scale, since storage access, being IP-based, can be shared between Vblock Infrastructure Packages. Ethernet connectivity can be directly connected between the Cisco Nexus 5010 switches or established via the customer’s existing upstream aggregation layer switches.

As Vblock Infrastructure Packages are added, capacity scales either as an aggregated pool, whereby any UCS blade can access any storage disks on the SAN, or as isolated silos. For example, it is perfectly acceptable to aggregate two Vblock 1s to provide capacity for 6,000 VMs that share common storage capacity. This gives an organization the ability to configure Vblock Infrastructure Packages to achieve its compliance, security, and fault isolation objectives using a single, flexible infrastructure. As long as storage capacity is added in conjunction with compute capacity to maintain the balanced performance published for the Vblock Infrastructure Packages, the system does not require any additional validation.

If compute or storage is to be added asynchronously, the Vblock Infrastructure Packages environment must be carefully considered. If storage is increased beyond the specified per-Vblock maximum, there is no real concern, as the performance limitations lie at either the system controller or the compute node. In most cases, the performance of the system’s controller and the UCS system has been balanced, so this should not be a concern.

If compute is scaled in excess of the minimum or maximum storage capacity, systemic I/O or capacity problems may be introduced that need careful consideration. Applications that may require this flexibility include high-performance computing environments (CFD, data mining, parametric execution, etc.). Such a configuration requires a services engagement to validate and certify it before it is accepted as a Vblock Infrastructure Package.

In order to satisfy the performance needs of Vblock Infrastructure Packages, it is recommended that only similar Vblock Infrastructure Packages are pooled so as to maintain the performance and availability SLA associated with that Vblock Infrastructure Package. This is easily achieved on the MDS 9500 director switches using Virtual SAN capabilities.

Vblock Infrastructure Packages Component Details

This section contains more detailed descriptions of the main components of Vblock 2 and Vblock 1:

• Compute—Unified Computing System (UCS)

• Network

– Cisco Nexus 1000V

• Storage

– EMC CLARiiON CX4 Series

– EMC Celerra Unified Storage—NS-960

– EMC Symmetrix V-Max Storage System

– NFS Datastores and Native File Services—EMC Celerra Gateway

– Storage Area Network—Cisco MDS Fibre Channel Switch

– Cisco Nexus 5010

• Virtualization


Compute—Unified Computing System (UCS)

The Cisco Unified Computing System (UCS) is a next-generation data center platform that unites compute, network, and storage access. The platform, optimized for virtual environments, is built on open industry-standard technologies and designed to reduce TCO and increase business agility. The system integrates a low-latency, lossless 10 Gigabit Ethernet unified network fabric with enterprise-class, x86-architecture servers. The system is an integrated, scalable, multichassis platform in which all resources participate in a unified management domain.

UCS Components

The Cisco Unified Computing System is built from the following components:

• Cisco UCS 6100 Series Fabric Interconnects (http://www.cisco.com/en/US/partner/products/ps10276/index.html) is a family of line-rate, low-latency, lossless, 10-Gbps Ethernet and Fibre Channel over Ethernet interconnect switches.

• Cisco UCS 5100 Series Blade Server Chassis (http://www.cisco.com/en/US/partner/products/ps10279/index.html) supports up to eight blade servers and up to two fabric extenders in a six rack unit (RU) enclosure.

• Cisco UCS 2100 Series Fabric Extenders (http://www.cisco.com/en/US/partner/products/ps10278/index.html) bring Unified Fabric into the blade-server chassis, providing up to four 10-Gbps connections each between blade servers and the fabric interconnect.

• Cisco UCS B-Series Blade Servers (http://www.cisco.com/en/US/partner/products/ps10280/index.html) adapt to application demands, intelligently scale energy use, and offer best-in-class virtualization.

• Cisco UCS B-Series Network Adapters (http://www.cisco.com/en/US/partner/products/ps10280/index.html) offer a range of options, including adapters optimized for virtualization, compatibility with existing driver stacks, or efficient, high-performance Ethernet.

• Cisco UCS Manager (http://www.cisco.com/en/US/partner/products/ps10281/index.html) provides centralized management capabilities for the Cisco Unified Computing System.

For more information, see: http://www.cisco.com/en/US/partner/netsol/ns944/index.html.

Table 1 summarizes the various components that constitute the Cisco UCS.

Table 1 Cisco Unified Computing System Components

UCS Manager: Embedded; manages the entire system

UCS Fabric Interconnect: 20-port 10Gb FCoE or 40-port 10Gb FCoE


UCS Fabric Extender: Remote line card

UCS Blade Server Chassis: Flexible bay configurations

UCS Blade Server: Industry-standard architecture

UCS Virtual Adapters: Choice of multiple adapters

Figure 10 provides an overview of the components of the Cisco UCS.


Figure 10 Cisco Unified Computing System: a redundant pair of UCS 6100 Fabric Interconnects (each running Cisco UCS Manager) uplinks via 10Gb x 4 Ethernet port channels to the IP aggregation layer and via 4Gb x 4 Fibre Channel to the SAN, and connects over a unified fabric (10Gb FCoE and 10Gb Ethernet) to the fabric extenders in each UCS 5100 Blade Chassis housing UCS B200 blades.

Cisco UCS and UCS Manager (UCSM)

The Cisco UCS is a revolutionary new architecture for blade server computing. The Cisco UCS is a next-generation data center platform that unites compute, network, storage access, and virtualization into a cohesive system designed to reduce TCO and increase business agility. The system integrates a low-latency, lossless 10 Gigabit Ethernet unified network fabric with enterprise-class, x86-architecture servers. The system is an integrated, scalable, multi-chassis platform in which all resources participate in a unified management domain. Managed as a single system whether it has one server or 320 servers with thousands of virtual machines, the Cisco UCS decouples scale from complexity. The Cisco UCS accelerates the delivery of new services simply, reliably, and securely through end-to-end provisioning and migration support for both virtualized and non-virtualized systems.


UCS Manager

Data centers have become complex environments with a proliferation of management points. From a network perspective, the access layer has fragmented, with traditional access layer switches, switches in blade servers, and software switches used in virtualization software all having separate feature sets and management paradigms. Most current blade systems have separate power and environmental management modules, adding cost and management complexity. Ethernet NICs and Fibre Channel HBAs, whether installed in blade systems or rack-mount servers, require configuration and firmware updates. Blade and rack-mount server firmware must be maintained and BIOS settings must be managed for consistency.

As a result, data center environments have become more difficult and costly to maintain, while security and performance may be less than desired. Change is the norm in data centers, but the combination of x86 server architectures and the older deployment paradigm makes change difficult:

• In fixed environments in which servers run OS and application software stacks, rehosting software on different servers as needed for scaling and load management is difficult to accomplish. I/O devices and their configuration, network configurations, firmware, and BIOS settings all must be configured manually to move software from one server to another, adding delays and introducing the possibility of errors in the process. Typically, these environments deploy fixed spare servers already configured to meet peak workload needs. Most of the time these servers are either idle or highly underutilized, raising both capital and operating costs.

• Virtual environments inherit all the drawbacks of fixed environments, and more. The fragmentation of the access layer makes it difficult to track virtual machine movement and to apply network policies to virtual machines to protect security, improve visibility, support per-virtual machine QoS, and maintain I/O connectivity. Virtualization offers significant benefits; however, it adds more complexity.

Cisco UCS 6100 Series Fabric Interconnects

A core part of the Cisco UCS, the Cisco UCS 6100 Series Fabric Interconnects provide both network connectivity and management capabilities to all attached blades and chassis. The Cisco UCS 6100 Series offers line-rate, low-latency, lossless 10 Gigabit Ethernet and Fibre Channel over Ethernet (FCoE) functions.

The interconnects provide the management and communication backbone for the Cisco UCS B-series half-slot blade servers and UCS 5100 Series Blade Server Chassis. All chassis, and therefore all blades, attached to the interconnects become part of a single, highly available management domain. In addition, by supporting unified fabric, the Cisco UCS 6100 Series provides both the LAN and SAN connectivity for all blades within its domain.

Typically deployed in redundant pairs, fabric interconnects provide uniform access to both networks and storage, eliminating the barriers to deploying a fully virtualized environment. Two models are available: the 20-port Cisco UCS 6120XP and the 40-port Cisco UCS 6140XP.

Both models offer key features and benefits, including:

• High performance Unified Fabric with line-rate, low-latency, lossless 10 Gigabit Ethernet, and FCoE.

• Centralized unified management with Cisco UCS Manager software.

• Virtual machine optimized services with the support for VN-Link technologies.

• Efficient cooling and serviceability with front-to-back cooling, redundant front-plug fans and power supplies, and rear cabling.

• Available expansion module options provide Fibre Channel and/or 10 Gigabit Ethernet uplink connectivity.


For more information on the Cisco UCS 6100 Series Fabric Interconnects, see:

http://www.cisco.com/en/US/products/ps10276/index.html

Vblock Infrastructure Packages Configuration and Design Considerations

• Vblock 2—6140 Fabric Interconnect

– (40) 10 Gb fixed ports to blade chassis/aggregation layer

– (8) 4 Gb ports to SAN fabric

• Vblock 1—6120 Fabric Interconnect

– (20) 10 Gb fixed ports to blade chassis/aggregation layer

– (4) 4 Gb ports to SAN fabric

• Always configured in pairs for availability and load balancing

• Predictable performance:

– 4:1 network oversubscription

– Balanced configuration

Cisco UCS 5100 Series Blade Server Chassis

The Cisco UCS 5100 Series Blade Server Chassis is a crucial building block of the Cisco UCS, delivering a scalable and flexible architecture for current and future data center needs, while helping reduce total cost of ownership.

Cisco’s first blade-server chassis offering, the Cisco UCS 5108 Blade Server Chassis, is six rack units (6RU) high and mounts in an industry-standard 19-inch rack. A chassis can accommodate up to eight half-slot or four full-slot Cisco UCS B-Series Blade Servers, two redundant 2104XP Fabric Extenders, eight cooling fans, and four power supply units. The cooling fans and power supplies are hot-swappable and redundant. The chassis requires only two power supplies for normal operation; the additional power supplies provide redundancy. The highly efficient (in excess of 90%) power supplies, in conjunction with the simple chassis design that incorporates front-to-back cooling, make the UCS system very reliable and energy efficient.

The Cisco UCS 5108 Blade Server Chassis revolutionizes the use and deployment of blade-based systems. By incorporating unified fabric and fabric-extender technology, the Cisco Unified Computing System enables the chassis to:

• Have fewer physical components

• Require no independent management

• Be more energy efficient than traditional blade-server chassis

This simplicity eliminates the need for dedicated chassis management and blade switches, reduces cabling, and allows scalability to 40 chassis without adding complexity. The Cisco UCS 5108 Blade Server Chassis is a critical component in delivering the simplicity and IT responsiveness for the data center as part of the Cisco UCS.

For more information on the Cisco UCS 5100 Series Blade Server Chassis, see:

http://www.cisco.com/en/US/products/ps10279/index.html

Vblock Infrastructure Packages Configuration and Design Considerations

• Vblock 2


– 4 to 8 blade server chassis

• Vblock 1

– 2 to 4 blade server chassis

• Availability:

– Two Fabric Extenders per chassis

– N+1 cooling and power

• Predictable performance:

– 2:1 Oversubscription—40 Gb per chassis

– Balanced configuration

– Distribute vHBA and vNIC between fabrics
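The 2:1 oversubscription figure above follows directly from the port arithmetic: eight half-slot blades, each with one 10 Gb connection per fabric, share 40 Gb of fabric-extender uplink per chassis. A quick sketch of the calculation (the helper name is ours, not from the document):

```python
def oversubscription(server_gbps: float, uplink_gbps: float) -> float:
    """Ratio of server-facing bandwidth to uplink bandwidth."""
    return server_gbps / uplink_gbps

blades_per_chassis = 8        # half-slot B200 blades in a UCS 5108
per_blade_gbps = 10           # one 10Gb connection per blade per fabric
chassis_uplink_gbps = 40      # "40 Gb per chassis" as published above

ratio = oversubscription(blades_per_chassis * per_blade_gbps, chassis_uplink_gbps)
print(f"{ratio:.0f}:1")       # 2:1
```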

Cisco UCS B-200 M1 Blade Server

The Cisco UCS B-200 M1 Blade Server balances simplicity, performance, and density for production-level virtualization and other mainstream data center workloads. The server is a half-width, two-socket blade server with substantial throughput and 50 percent more industry-standard memory compared to previous-generation Intel Xeon two-socket servers.

Features of the Cisco UCS B-200 M1 include:

• Up to two Intel® Xeon® 5500 Series processors, which automatically and intelligently adjust server performance according to application needs, increasing performance when needed and achieving substantial energy savings when not.

• Up to 96 GB of DDR3 memory in a half-width form factor for mainstream workloads, which serves to balance memory capacity and overall density.

• Two optional Small Form Factor (SFF) Serial Attached SCSI (SAS) hard drives available in 73GB 15K RPM and 146GB 10K RPM versions with an LSI Logic 1064e controller and integrated RAID.

• One dual-port mezzanine card for up to 20 Gbps of I/O per blade. Mezzanine card options include either a Cisco UCS VIC M81KR Virtual Interface Card1, a converged network adapter (Emulex or QLogic compatible), or a standard 10GB Ethernet Adapter.

For more information on the Cisco UCS B-200 M1 Blade Server, see:

http://www.cisco.com/en/US/products/ps10299/index.html

Vblock Infrastructure Packages Configuration and Design Considerations

• Vblock 2

– 32-64 blades

– 256-512 cores

– 3072-6144 GB memory

– 96 GB per blade

– (2) 73 GB internal HDD

• Vblock 1

– 16-32 blades

1. Will be included soon.


– 128-256 cores

– 960-1920 GB Memory

– 6 blades per chassis at 48 GB

– 2 blades per chassis at 96 GB

• Availability:

– N+1 blades per chassis

– Trunk and Port Group configuration

• One dual port Converged Network Adapter (Unified Network)

– vNIC

– vHBA

• Internal connections to both Fabric Extenders

• Predictable performance

– Dual quad core Xeon 5500 Series processors

– Balanced configuration

– Network

– Memory

– Compute

• Scalability and flexibility

– VLAN, Trunks and Port Groups
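The blade, core, and memory ranges above are internally consistent: each B-200 M1 carries two quad-core Xeon 5500s (eight cores) and, in Vblock 2, 96 GB of memory. A hedged cross-check of the published numbers (variable names are ours):

```python
CORES_PER_BLADE = 2 * 4                  # two quad-core Xeon 5500 sockets

def vblock2_sizing(blades: int) -> tuple:
    """Cores and memory (GB) for a Vblock 2, where every blade has 96 GB."""
    return CORES_PER_BLADE * blades, 96 * blades

print(vblock2_sizing(32))                # (256, 3072): Vblock 2 minimum
print(vblock2_sizing(64))                # (512, 6144): Vblock 2 maximum

# Vblock 1 mixes blade memory: per chassis, 6 blades at 48 GB plus
# 2 blades at 96 GB is 480 GB, so 2 chassis (16 blades) give 960 GB
# and 4 chassis (32 blades) give 1920 GB, matching the published range.
per_chassis_gb = 6 * 48 + 2 * 96
print(per_chassis_gb * 2, per_chassis_gb * 4)    # 960 1920
```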

Network

Cisco Nexus 1000V

The Nexus 1000V (http://www.cisco.com/en/US/products/ps9902/index.html) is a software switch on a server that delivers Cisco VN-Link (http://www.cisco.com/en/US/netsol/ns894/index.html) services to virtual machines hosted on that server. It takes advantage of the VMware vSphere (http://www.cisco.com/survey/exit.html?http://www.vmware.com/products/cisco-nexus-1000V/index.html) framework to offer tight integration between server and network environments and help ensure consistent, policy-based network capabilities to all servers in the data center. It allows policies to move with a virtual machine during live migration, ensuring persistent network, security, and storage compliance, resulting in improved business continuance, performance management, and security compliance. Last but not least, it aligns management of the operational environment for virtual machines and physical server connectivity in the data center, reducing the total cost of ownership (TCO) by providing operational consistency and visibility throughout the network. It offers flexible collaboration between the server, network, security, and storage teams while supporting various organizational boundaries and individual team autonomy.

For more information, see: http://www.cisco.com/en/US/products/ps9902/index.html.
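As a rough illustration of the VN-Link behavior described above (a toy model of ours, not the Nexus 1000V implementation), the network policy is bound to the virtual machine's virtual port rather than to a physical host, so it survives live migration unchanged:

```python
class PortProfile:
    """Named bundle of network policy (VLAN, QoS, security), as on the 1000V."""
    def __init__(self, name: str, **policy):
        self.name, self.policy = name, dict(policy)

class VirtualMachine:
    def __init__(self, name: str, profile: PortProfile):
        self.name, self.profile, self.host = name, profile, None

    def live_migrate(self, host: str) -> PortProfile:
        self.host = host          # the virtual port (and its profile) moves with the VM
        return self.profile       # policy is unchanged after migration

web = PortProfile("web", vlan=100, qos="gold")
vm = VirtualMachine("web01", web)
assert vm.live_migrate("esx-a") is vm.live_migrate("esx-b")   # same policy on both hosts
```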


Storage

Storage components include:

• EMC CLARiiON CX4 Series

• EMC Celerra Unified Storage—NS-960

• EMC Symmetrix V-Max Storage System

• NFS Datastores and Native File Services—EMC Celerra Gateway

• Storage Area Network—Cisco MDS Fibre Channel Switch

• Cisco Nexus 5010

EMC CLARiiON, Celerra Unified Storage, and Symmetrix

• Storage configurations are application-specific

• Logical device considerations

– LUN size

– Consistent size based on application requirements

– RAID protection

– RAID 1

– RAID 5

– RAID 6

– LUN aggregation using meta devices

– Size

– Performance

– Virtual provisioning

– Thin pool

– Thin devices/fully allocated

• Simplifies storage provisioning

– Storage tiers based on drive and protection

– Storage templates

– Storage policies

• Local and remote replication requirements

EMC CLARiiON CX4 Series

Figure 6 illustrates the interconnection of the EMC CLARiiON CX4-480 in Vblock 1 enabled by SAN Storage.

The EMC CLARiiON CX4 series delivers industry-leading innovation in midrange storage with the fourth-generation CLARiiON CX storage platform. The unique combination of flexible, scalable hardware design and advanced software capabilities enables EMC CLARiiON CX4 series systems, powered by Intel Xeon processors, to meet the growing and diverse needs of today’s midsize and large enterprises. Through innovative technologies like Flash drives, UltraFlex™ technology, and CLARiiON Virtual Provisioning™, customers can decrease costs and energy use and optimize availability and virtualization.

The EMC CLARiiON CX4-480 supports up to 256 highly available, dual-connected hosts and has the capability to scale up to 480 disk drives for a maximum capacity of 939 TB. The CX4-480 supports Flash drives for maximum performance and comes pre-configured with Fibre Channel and iSCSI connectivity, allowing customers to choose the best connectivity for their specific applications.

Delivering up to twice the performance and scale of the previous CLARiiON generation, the CLARiiON CX4 is the leading midrange storage solution to meet a full range of needs—from departmental applications to data-center-class business-critical systems.

EMC CLARiiON CX4 Technology Advancements

Enterprise Flash drives—EMC-customized Flash drive technology provides low latency and high throughput to break the performance barriers of traditional disk technology. EMC is the first to bring Flash drives to midrange storage and expects the technology to become mainstream over the next few years while revolutionizing networked storage.

Flash drives extend the storage tiering capabilities of CLARiiON by:

• Delivering 30 times the IOPS of a 15K RPM FC drive

• Consistently delivering less than 1 ms response times

• Requiring 98 percent less energy per I/O than 15K rpm Fibre Channel drives

• Weighing 58 percent less per TB than a typical Fibre Channel drive

• Providing better reliability due to no moving parts and faster RAID rebuilds

UltraFlex technology—The CLARiiON CX4 architecture features UltraFlex technology—a combination of a modular connectivity design and unique FLARE® operating environment software capabilities that deliver:

• Dual protocol support with FC and iSCSI as the base configuration on all models

• Easy, online expansion via hot-pluggable I/O modules

• Ability to easily add and/or upgrade I/O modules to accommodate future technologies as they become available (e.g., FCoE)

CLARiiON Virtual Provisioning—Allows CLARiiON users to present an application with more capacity than is physically allocated to it in the storage system. CLARiiON Virtual Provisioning can lower total cost of ownership and offers customers these benefits:

• Efficient tiering that improves capacity utilization and optimizes tiering capabilities across all drive types

• Ease of provisioning that simplifies and accelerates processes and delivers “just-in-time” capacity allocation and flexibility

• Comprehensive monitoring, alerts, and reporting for efficient capacity planning

• Support for advanced capabilities, including Virtual LUN, Navisphere QoS Manager, Navisphere Analyzer, and SnapView™
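A toy model (ours, not EMC's implementation) of the virtual-provisioning behavior described above: the array presents a thin LUN larger than the physical thin pool behind it, and draws real capacity only as data is written:

```python
class ThinPool:
    """Illustrative thin pool: physical capacity shared by thin LUNs."""
    def __init__(self, physical_gb: int):
        self.physical_gb = physical_gb
        self.allocated_gb = 0

class ThinLUN:
    def __init__(self, pool: ThinPool, presented_gb: int):
        self.pool, self.presented_gb, self.written_gb = pool, presented_gb, 0

    def write(self, gb: int) -> None:
        # Capacity is allocated "just in time", on first write.
        if self.pool.allocated_gb + gb > self.pool.physical_gb:
            raise RuntimeError("thin pool exhausted; add drives to the pool")
        self.pool.allocated_gb += gb
        self.written_gb += gb

pool = ThinPool(physical_gb=1000)
lun = ThinLUN(pool, presented_gb=5000)   # the host sees 5 TB
lun.write(200)
print(pool.allocated_gb)                 # 200: only written data consumes the pool
```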

Multi-core Intel Xeon processors, increased memory, and 64-bit FLARE—The CX4 boasts up to twice the performance of the previous generation and provides up to 2.5 times more processing power with multi-core Intel Xeon processors. The CX4 architecture also delivers twice the capacity scale (up to 960 drives), twice the memory, and twice the LUNs compared with the previous generation CLARiiON.


With the CLARiiON CX4, the FLARE operating environment has also been upgraded from a 32-bit to a 64-bit environment. This enhancement enables the scalability improvements and also provides the foundation for more advanced software functionality such as Virtual Provisioning.

Low-power SATA II drives, adaptive cooling, and drive spin-down:

• Low-power SATA II drives deliver the highest density at the lowest cost and require 96 percent less energy per terabyte than 15K rpm Fibre Channel drives, and 32 percent less than traditional 7.2K rpm SATA drives.

• Adaptive cooling is a new feature that provides improved energy efficiency by dynamically adjusting cooling and airflow within the CX4 arrays based on system activity.

• Drive spin-down allows customers to set policies at the RAID group level to place inactive drives in sleep mode. Target applications include backup-to-disk, archiving, and test and development.

For more information, see: http://www.emc.com/products/detail/hardware/clariion-cx4-model-480.htm.

EMC Celerra Unified Storage—NS-960

Figure 7 illustrates the interconnection of the EMC Celerra NS-960 in Vblock 1 enabled by Unified Storage.

The EMC Celerra NS-960 unified storage system brings advanced failover and fully automated storage tiering to multi-protocol environments. With Celerra NS-960, you can connect to multiple storage networks via NAS, iSCSI, Fibre Channel SAN, and EMC Celerra Multi-Path File System (MPFS). MPFS improves performance over traditional NAS.

Gaining the Flexibility of NFS without Compromise

EMC Celerra was originally developed to provide customers with the same levels of availability, reliability, performance, and scalability for file shares that they had come to rely on in their EMC Symmetrix block-storage systems; general-purpose and appliance servers were not sufficient for running mission-critical engineering and manufacturing operations. Celerra has since evolved into the industry-leading NAS system. Moreover, the attributes that make Celerra the preferred infrastructure for mission-critical file-sharing applications also make it the best storage infrastructure for deploying VMware.

Infrastructure for VMware

Key attributes of an NFS infrastructure for VMware include:

• Availability—Storage systems must deliver high availability and advanced features to ensure virtualized application performance.

• Protection—The NFS infrastructure must maintain virtualized application availability and protect information without adversely affecting business operations.

• Efficiency—One of the key benefits of VMware is improved provisioning and operational efficiency. The storage infrastructure must be able to extend efficiencies to storage management.

• Flexibility—Virtualized infrastructures enable rapid changes in application deployments and resource allocation. The NFS infrastructure must be able to respond to changes in storage capacity, performance, and protection requirements.

• Manageability—One of the benefits of deploying VMware on NFS is consolidated storage management. The NFS solution must provide simplified and automated management and extend this by integrating control into the VMware vCenter™ Console.


• Service and support—The infrastructure provider must be capable of ensuring that customers realize the value of their virtualized infrastructures while reducing risk and accelerating projects.

For more information, see: http://www.emc.com/products/detail/hardware/celerra-ns960.htm.

EMC Symmetrix V-Max Storage System

Figure 5 illustrates the interconnection of the EMC Symmetrix V-Max in Vblock 2.

The EMC Symmetrix V-Max Series provides an extensive offering of new features and functionality for the next era of high-availability virtual data centers. With advanced levels of data protection and replication, the Symmetrix V-Max system is at the forefront of enterprise SAN technology. Additionally, the Symmetrix V-Max array has the speed, capacity, and efficiency to transparently optimize service levels without compromising its ability to deliver performance on demand. These capabilities are of greatest value for large virtualized server deployments such as VMware Virtual Data Centers.

Symmetrix Fully Automated Storage Tiering (FAST)1 automatically and dynamically moves data across storage tiers, so that it is in the right place at the right time, simply by pooling storage resources, defining the policy, and applying it to an application. FAST keeps applications optimized by eliminating trade-offs between capacity and performance. As a result, you can lower costs and deliver higher service levels at the same time.

The Symmetrix V-Max system is EMC’s high-end storage array that is purpose-built to deliver infrastructure services within the next-generation data center. Built for reliability, availability, and scalability, Symmetrix V-Max uses specialized engines, each of which includes two redundant director modules providing parallel access and replicated copies of all critical data.

Symmetrix V-Max’s Enginuity™ operating system provides several advanced features, such as Auto-Provisioning Groups for simplification of storage management, Virtual Provisioning for ease of use and improved capacity utilization, and Virtual LUN technology for non-disruptive mobility between storage tiers. All of the industry-leading features for Business Continuity and Disaster Recovery have been the hallmarks of EMC Symmetrix storage arrays for over a decade and continue in the Symmetrix V-Max system. These are further integrated into the VMware Virtual Infrastructure for disaster recovery with EMC’s custom Site Recovery Adapter for VMware’s Site Recovery Manager.

Combined with the rich capabilities of EMC Ionix ControlCenter and EMC’s Storage Viewer for vCenter, administrators are provided with end-to-end visibility and control of their virtual data center storage resources and usage. EMC’s new PowerPath/VE support for vSphere provides optimization of usage on all available paths between virtual machines and the storage they are using, as well as proactive failover management.

The Symmetrix V-Max system (see Figure 11) is a new enterprise-class storage array that is built on the strategy of simple, intelligent, modular storage. The array incorporates a new high-performance fabric interconnect designed to meet the performance and scalability demands for enterprise storage within the most demanding virtual data center installations. The storage array seamlessly grows from an entry-level configuration with a single, highly available Symmetrix V-Max Engine and one storage bay into the world’s largest storage system with eight engines and ten storage bays. The largest supported configuration is shown in Figure 12. The Symmetrix V-Max storage array supports the following range of configurations:

• 2-16 director boards

• 48-2,400 disk drives

• Up to 2 PB usable capacity

• Up to 128 Fibre Channel ports

1. Optional.


• Up to 64 FICON ports

• Up to 64 Gig-E/iSCSI ports
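These ranges lend themselves to a simple configuration sanity check. The following sketch is a hypothetical helper written for this document, not part of any EMC tool; the attribute names are invented for illustration:

```python
# Hypothetical validator for the Symmetrix V-Max ranges listed above.
VMAX_LIMITS = {
    "director_boards": (2, 16),
    "disk_drives": (48, 2400),
    "usable_capacity_pb": (0, 2),
    "fc_ports": (0, 128),
    "ficon_ports": (0, 64),
    "gige_iscsi_ports": (0, 64),
}

def validate_vmax_config(config):
    """Return the attributes of `config` that fall outside the supported ranges."""
    violations = []
    for attr, (low, high) in VMAX_LIMITS.items():
        value = config.get(attr, low)  # missing attributes default to the minimum
        if not low <= value <= high:
            violations.append(attr)
    return violations

# A mid-range array (4 engines, i.e., 8 director boards) passes:
print(validate_vmax_config({"director_boards": 8, "disk_drives": 960}))  # []
```

A configuration with, say, a single director board or more than 2,400 drives would be reported as out of range.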

Figure 11 Symmetrix V-Max System


Figure 12 Symmetrix V-Max System Features

The enterprise-class deployments in a modern data center are expected to be always available. The design of the Symmetrix V-Max storage array enables it to meet this stringent requirement. The replicated components that compose every Symmetrix V-Max configuration assure that no single point of failure can bring the system down. The hardware and software architecture of the Symmetrix V-Max storage array allows capacity and performance upgrades to be performed online with no impact to production applications. In fact, all configuration changes, hardware and software updates, and service procedures are designed to be performed online and non-disruptively. This ensures that customers can consolidate without compromising availability, performance, and functionality, while leveraging true pay-as-you-grow economics for high-growth storage environments.

The Symmetrix V-Max system can include two to 16 directors inside one to eight Symmetrix V-Max Engines. Each Symmetrix V-Max Engine has its own redundant power supplies, cooling fans, SPS Modules, and Environmental Modules. Furthermore, the connectivity between the Symmetrix V-Max array engines provides direct connections from each director to every other director, creating a redundant and high-availability Virtual Matrix™. Each Symmetrix V-Max Engine has two directors that can offer up to eight host access ports each, therefore allowing up to 16 host access ports per Symmetrix V-Max Engine.
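The port arithmetic above (two directors per engine, up to eight host access ports per director, one to eight engines per system) can be expressed as a quick calculation; the function below is illustrative only:

```python
# Illustrative V-Max host-port scaling, per the figures in the text:
# 2 directors per engine, up to 8 host access ports per director.
DIRECTORS_PER_ENGINE = 2
MAX_PORTS_PER_DIRECTOR = 8

def max_host_ports(engines):
    """Maximum host access ports for a system with the given engine count."""
    assert 1 <= engines <= 8, "a V-Max system holds 1 to 8 engines"
    return engines * DIRECTORS_PER_ENGINE * MAX_PORTS_PER_DIRECTOR

for n in (1, 4, 8):
    print(n, "engine(s):", max_host_ports(n), "host ports")  # 16, 64, 128
```

An eight-engine system therefore tops out at 128 host ports, matching the Fibre Channel port maximum listed earlier.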

Figure 13 shows a schematic representation of a single Symmetrix V-Max storage engine.


Figure 13 Symmetrix V-Max Storage Engine

The powerful, high-availability Symmetrix V-Max Engine provides the building block for all Symmetrix V-Max systems. It includes four quad-core Intel Xeon processors; 64-128 GB of Global Memory; 8-16 ports for front-end host access or Symmetrix Remote Data Facility (SRDF) channels using Fibre Channel, FICON, or Gigabit Ethernet; and 16 back-end ports connecting up to 360 storage devices using 4-Gb Fibre Channel, SATA, or Enterprise Flash Drives.

Each of the two integrated directors in a Symmetrix V-Max Engine has three main parts: the back-end director, the front-end director, and the cache memory module. The back-end director consists of two back-end I/O modules with four logical directors that connect directly into the integrated director. The front-end director consists of two front-end I/O modules with four logical directors that are located in the corresponding I/O annex slots. The front-end I/O modules are connected to the director via the midplane.

The cache memory modules are located within each integrated director, each with eight available memory slots. Memory cards range from 2 to 8 GB, allowing between 16 and 64 GB per integrated director. For added redundancy, the Symmetrix V-Max system uses mirrored cache: in a multi-engine setup, memory is mirrored across engines; in a single-engine configuration, the memory is mirrored inside the engine across the two integrated directors.
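The global memory arithmetic described above can be checked with a short sketch. The specific card sizes used here (2, 4, and 8 GB) are an assumption; the text states only the 2 to 8 GB range:

```python
# Sketch of V-Max director memory: 8 slots per integrated director,
# cards assumed to come in 2, 4, or 8 GB sizes (an assumption).
SLOTS_PER_DIRECTOR = 8

def director_memory_gb(card_size_gb):
    """Raw memory in one integrated director with uniform card sizes."""
    assert card_size_gb in (2, 4, 8), "assumed card sizes"
    return SLOTS_PER_DIRECTOR * card_size_gb

def engine_usable_cache_gb(card_size_gb):
    """Usable cache per engine: two directors, halved by mirroring."""
    raw = 2 * director_memory_gb(card_size_gb)
    return raw // 2

print(director_memory_gb(2), director_memory_gb(8))  # 16 64
print(engine_usable_cache_gb(8))  # 64
```

The 16 and 64 GB results match the per-director range stated above; the usable-cache figure simply illustrates the cost of mirroring.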

EMC’s Virtual Matrix interconnection fabric permits the connection of up to eight Symmetrix V-Max Engines together to scale out total system resources and flexibly adapt to the most demanding virtual data center requirements.

Figure 14 shows a schematic representation of a maximum Symmetrix V-Max configuration.


Figure 14 Fully Configured Symmetrix V-Max Storage System

For additional information about utilizing VMware Virtual Infrastructure with EMC Symmetrix storage arrays, refer to Using EMC Symmetrix Storage in VMware Virtual Infrastructure Environments—TechBook, available at: http://www.emc.com/collateral/hardware/solution-overview/h2529-vmware-esx-svr-w-symmetrix-wp-ldv.pdf.

NFS Datastores and Native File Services—EMC Celerra Gateway

Figure 4 illustrates the interconnection of the EMC Celerra Gateway in Vblock 2 and Vblock 1 enabled by SAN Storage.

Performance bottlenecks, security issues, and the high cost of data protection and management associated with deploying file servers using general-purpose operating systems become non-issues with the EMC Celerra Gateway family. Each Celerra Gateway product—the NS-G2 or NS-G8—is a dedicated network server optimized for file access and advanced functionality in a scalable, easy-to-use package. Best-in-class EMC Symmetrix and CLARiiON back-end array technologies, combined with Celerra’s impressive I/O system architecture, deliver industry-leading availability, scalability, performance, and ease of management to your business.

Celerra Gateway platforms extend the value of existing EMC storage array technologies, delivering a comprehensive, consolidated storage solution that adds IP storage (NAS) in a centrally managed information storage system, enabling you to dynamically grow, share, and cost-effectively manage file systems with multi-protocol file access. Take advantage of simultaneous support for NFS and CIFS protocols by letting UNIX and Microsoft® Windows® clients share files using the DART (Data Access in Real Time) operating system’s sophisticated file-locking mechanisms. The high-end features offered with the Celerra Gateway platform enable entry-level data center consolidation, resulting in a lower TCO for your server and storage assets, while enabling you to grow your IP storage environment into the hundreds of TB from a single point of management. And you can improve performance over standard NAS by simply adding MPFS to your environment without application modification.


EMC Celerra Gateway platforms combine a NAS head and SAN storage for a flexible, cost-effective implementation that maximizes the utilization of your existing resources. This approach offers the utmost in configuration options, including:

• One-to-eight X-Blade configurations

• Flash drives, Fibre Channel, SATA, and low-power SATA drive support

• Performance/availability mode in the entry-level NS-G2

• EMC Symmetrix or CLARiiON storage

• Native integration with Symmetrix and CLARiiON replication software providing a single replication solution for all your SAN and IP storage disaster recovery requirements

EMC Celerra Gateway Platform System Elements

The Celerra Gateway is composed of one or more autonomous servers called X-Blades that connect via FC SAN to a CLARiiON or Symmetrix storage array. The X-Blades control data movement from the disks to the network. Each X-Blade consists of an Intel-based server with redundant data paths, power supplies, multiple Gigabit Ethernet ports, and, optionally, multiple 10 Gigabit Ethernet optical ports. X-Blades run EMC’s Data Access in Real Time (DART) operating system, designed and optimized for high-performance, multi-protocol network file access. All the X-Blades in a system are managed by the Control Station (two Control Stations for HA are supported on the NS-G8), which operates out of the data path and provides a single point of configuration management and administration, as well as handling X-Blade failover and maintenance support.

Vblock Infrastructure Packages Configuration and Design Considerations

EMC Celerra Gateway is a file server for Vblock 2 and Vblock 1 enabled by SAN Storage (Celerra file services may be shared across multiple Vblock Infrastructure Packages):

• Gateway configuration sharing CLARiiON or Symmetrix storage

• Vblock 2 NS-G8

– 2 to 8 Data Movers

For more information on the EMC Celerra NS-G8, see: http://www.emc.com/products/detail/hardware/celerra-ns-g8.htm.

• Vblock 1 enabled by SAN Storage NS-G2

– 2 Data Movers

For more information on the EMC Celerra NS-G2, see: http://www.emc.com/products/detail/hardware/celerra-ns-g2.htm.

• May be shared across multiple Vblock Infrastructure Packages

Storage Area Network—Cisco MDS Fibre Channel Switch

Figure 5 illustrates the interconnection of the Cisco MDS 9506 in Vblock 2 and Figure 6 illustrates the interconnection of the Cisco MDS 9222i in Vblock 1 enabled by SAN Storage.

Cisco MDS 9222i Multiservice Modular Switch

The Cisco MDS 9222i Multiservice Modular Switch delivers state-of-the-art multiprotocol and distributed multiservice convergence, offering:

• High-performance SAN extension and disaster recovery solutions


• Intelligent fabric services such as storage media encryption

• Cost-effective multiprotocol connectivity

Its compact form factor, expansion slot modularity, and advanced capabilities make the MDS 9222i an ideal solution for departmental and remote branch-office SANs requiring director-class features at a lower cost.

Product highlights include:

• High-density Fibre Channel switch; scales up to 66 Fibre Channel ports

• Integrated hardware-based virtual fabric isolation with virtual SANs (VSANs) and Fibre Channel routing with inter-VSAN routing

• Remote SAN extension with high-performance FCIP

• Long distance over Fibre Channel with extended buffer-to-buffer credits

• Multiprotocol and mainframe support (Fibre Channel, FCIP, iSCSI, and IBM Fiber Connection [FICON])

• IPv6 capable

• Platform for intelligent fabric applications such as storage media encryption

• In-Service Software Upgrade (ISSU)

• Comprehensive network security framework

• High-performance intelligent applications in combination with the 16-port Storage Services Node

For more information on the Cisco MDS 9222i, see: http://www.cisco.com/en/US/products/ps8420/index.html.

For more information on the Cisco MDS 9200 Series Multilayer Switches, see: http://www.cisco.com/en/US/products/ps5988/index.html.

Cisco MDS 9506 Multilayer Director

The Cisco MDS 9506 Multilayer Director provides industry-leading availability, scalability, security, and management. The Cisco MDS 9506 allows you to deploy high-performance SANs with the lowest TCO. Layering a rich set of intelligent features onto a high-performance, protocol-independent switch fabric, the Cisco MDS 9506 addresses the stringent requirements of large data center storage environments: uncompromising high availability, security, scalability, ease of management, and transparent integration of new technologies. Compatible with first, second, and third generation Cisco MDS 9000 Family switching modules, the Cisco MDS 9506 provides advanced functionality and unparalleled investment protection, allowing the use of any Cisco MDS 9000 Family switching module in this compact system.

The Cisco MDS 9506 offers the following benefits:

• Scalability and availability—The Cisco MDS 9506 combines nondisruptive software upgrades, stateful process restart/failover, and full redundancy of all major components for best-in-class availability. Supporting up to 192 Fibre Channel ports in a single chassis and up to 1,152 Fibre Channel ports in a single rack, the Cisco MDS 9506 is designed to meet the requirements of large data center storage environments.

• Compact design—The Cisco MDS 9506 provides high port density in a small footprint, saving valuable data center floor space. The seven-rack-unit chassis allows up to six Cisco MDS 9506 multilayer directors in a standard rack, maximizing the number of available Fibre Channel ports.

• 1/2/4/8-Gbps and 10-Gbps Fibre Channel—Supports new 8-Gbps as well as existing 10-Gbps, 4-Gbps, and 2-Gbps MDS Fibre Channel switching modules.


• Flexibility and investment protection—Supports a mix of new, second-, and first-generation Cisco MDS 9000 Family modules, providing forward and backward compatibility and unparalleled investment protection.

• TCO-driven design—The Cisco MDS 9506 offers advanced management tools for overall lowest TCO. It includes VSAN technology for hardware-enforced, isolated environments within a single physical fabric for secure sharing of physical infrastructure, further decreasing TCO.

• Multiprotocol—The multilayer architecture of the Cisco MDS 9000 Family enables a consistent feature set over a protocol-independent switch fabric. The Cisco MDS 9506 transparently integrates Fibre Channel, FICON, iSCSI, and FCIP in one system.

• Intelligent network services—Provides integrated support for VSAN technology, access control lists (ACLs) for hardware-based intelligent frame processing, and advanced traffic-management features such as Fibre Channel Congestion Control (FCC) and fabric-wide quality of service (QoS) to enable migration from SAN islands to enterprise-wide storage networks.

• Integrated Cisco Storage Media Encryption (SME) as distributed fabric service—Supported on the Cisco MDS 18/4-Port Multiservice Module, Cisco SME encrypts data at rest on heterogeneous tape drives and virtual tape libraries (VTLs) in a SAN environment using secure IEEE standard Advanced Encryption Standard (AES) 256-bit algorithms. The Cisco MDS 18/4-Port Multiservice Module helps ensure ease of deployment, scalability, and high availability by using innovative technology to transparently offer Cisco SME capabilities to any device connected to the fabric without the need for reconfiguration or rewiring. Cisco SME provisioning is integrated into the Cisco Fabric Manager; no additional software is required. Cisco SME key management can be provided by either the Cisco Key Management Center (KMC) or with RSA Key Manager for the Datacenter from RSA, the Security Division of EMC.

• Open platform for intelligent storage applications—Provides the intelligent services necessary for hosting and/or accelerating storage applications such as network-hosted volume management, data migration, and backup.

• Integrated hardware-based VSANs and Inter-VSAN Routing (IVR)—Enables deployment of large-scale multi-site and heterogeneous SAN topologies. Integration into port-level hardware allows any port within a system or fabric to be partitioned into any VSAN. Integrated hardware-based inter-VSAN routing provides line-rate routing between any ports within a system or fabric without the need for external routing appliances.

• Advanced FICON services—Supports 1/2/4-Gbps FICON environments, including cascaded FICON fabrics, VSAN-enabled intermix of mainframe and open systems environments, and N_Port ID virtualization for mainframe Linux partitions. Control Unit Port (CUP) support enables in-band management of Cisco MDS 9000 Family switches from the mainframe management console.

• Comprehensive security framework—Supports RADIUS and TACACS+, Fibre Channel Security Protocol (FC-SP), Secure File Transfer Protocol (SFTP), Secure Shell (SSH) Protocol and Simple Network Management Protocol Version 3 (SNMPv3) implementing Advanced Encryption Standard (AES), VSANs, hardware-enforced zoning, ACLs, and per-VSAN role-based access control.

• Sophisticated diagnostics—Provides intelligent diagnostics, protocol decoding, and network analysis tools as well as integrated Call Home capability for added reliability, faster problem resolution, and reduced service costs.

• Unified SAN management—The Cisco MDS 9000 Family includes built-in storage network management with all features available through a command-line interface (CLI) or Cisco Fabric Manager, a centralized management tool that simplifies management of multiple switches and fabrics. Integration with third-party storage management platforms allows seamless interaction with existing management tools.


• Cisco TrustSec Fibre Channel Link Encryption—Delivers transparent, hardware-based, line-rate encryption of Fibre Channel data between any Cisco MDS 9000 Family 8-Gbps modules.
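As a quick sanity check, the chassis and port densities cited in the scalability and compact-design bullets above follow directly from the form factor: a seven-rack-unit chassis in a standard 42U rack.

```python
# Rack-density arithmetic for the Cisco MDS 9506 figures cited above.
RACK_UNITS = 42          # standard rack height
CHASSIS_RACK_UNITS = 7   # MDS 9506 chassis height
PORTS_PER_CHASSIS = 192  # maximum Fibre Channel ports per chassis

chassis_per_rack = RACK_UNITS // CHASSIS_RACK_UNITS
ports_per_rack = chassis_per_rack * PORTS_PER_CHASSIS
print(chassis_per_rack, "chassis per rack,", ports_per_rack, "FC ports")  # 6, 1152
```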

For more information on the Cisco MDS 9506 Multilayer Director, see: http://www.cisco.com/en/US/products/hw/ps4159/ps4358/ps5395/index.html.

For more information on the Cisco MDS 9500 Series Multilayer Directors, see: http://www.cisco.com/en/US/products/ps5990/index.html.

Vblock 2 SAN Configuration

• (2) Cisco MDS 9506

– (8) 4 Gb N-ports to each Fabric Interconnect

– (8-16) 4 Gb N-ports to each Symmetrix V-Max Storage Processor

Vblock 1 Enabled by SAN Storage SAN Configuration

• (2) Cisco MDS 9506 (optionally 2 Cisco MDS 9222i)

– (8) 4 Gb N-ports to each Fabric Interconnect

– (4-8) 4 Gb N-ports to each CLARiiON Storage Processor
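The port counts in the two SAN configurations above imply the following aggregate host-side fabric bandwidth. This is a back-of-the-envelope sketch; the helper function is invented for illustration:

```python
# Illustrative aggregate raw bandwidth across both fabrics (Gb/s).
def aggregate_gbps(switches, ports_per_link, gbps_per_port=4):
    """Total raw bandwidth: switches x ports per link x per-port rate."""
    return switches * ports_per_link * gbps_per_port

# Vblock 2: two MDS 9506s, (8) 4-Gb N-ports to each Fabric Interconnect.
print(aggregate_gbps(2, 8))  # 64 Gb/s toward the fabric interconnects
# Vblock 1 low end: (4) 4-Gb N-ports per CLARiiON storage processor.
print(aggregate_gbps(2, 4))  # 32 Gb/s
```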

Cisco Nexus 5010

Figure 7 illustrates the interconnection of the Cisco Nexus 5010 in Vblock 1 enabled by Unified Storage.

The innovative architecture of the Cisco Nexus 5010 switch simplifies data center transformation by helping to enable a standards-based, high-performance unified fabric. Next-generation data centers increasingly have dense, multi-core, virtual-machine-intensive servers.

With these switches, you can:

• Consolidate the data center and protect investments in existing server, network, storage, and facilities assets

• Decrease the TCO by simplifying the data center infrastructure

• Increase business agility with easier, faster, and pervasive data center virtualization

• Enhance business resilience with greater operational continuity

• Use existing operational models and administrative domains for easy deployment

For more information, see: http://www.cisco.com/en/US/products/ps9670/index.html.

Storage Design Considerations

• Balanced configuration

– Capacity, connectivity, workload (IOPs/MBs)

• Availability

– Enterprise class storage

– RAID protection

– Extensive remote replication capabilities using MirrorView and SRDF®

• Predictable performance


– Large cache

– Tiered storage including Enterprise Flash Drives

• Ease of deployment and management

– Template-based provisioning

– Wizards

– Fully Automated Storage Tiering (FAST)

– Virtual Provisioning

– Local replication capability using SnapView and TimeFinder®

Virtualization

VMware vSphere/vCenter

• VMware vSphere 4 is the virtualized infrastructure for the Vblock Infrastructure Packages

– Virtualizes all application servers

– Provides VMware High Availability (HA) and Distributed Resource Scheduler (DRS)

• Templates enable rapid provisioning

VMware vSphere and vCenter Server

VMware vSphere and vCenter Server offer the highest levels of availability and responsiveness for all applications and services, with VMware vSphere as the industry’s most reliable platform for data center virtualization. Optimize IT service delivery and meet the most demanding application service-level agreements at the lowest total cost per application workload by decoupling your business-critical applications from the underlying hardware for unprecedented flexibility and reliability.

VMware vCenter Server provides a scalable and extensible platform that forms the foundation for virtualization management (http://www.vmware.com/solutions/virtualization-management/). VMware vCenter Server, formerly VMware VirtualCenter, centrally manages VMware vSphere (http://www.vmware.com/products/vsphere/) environments, allowing IT administrators dramatically improved control over the virtual environment compared to other management platforms. VMware vCenter Server:

• Provides centralized control and visibility at every level of virtual infrastructure.

• Unlocks the power of vSphere through proactive management.

• Is a scalable and extensible management platform with a broad partner ecosystem.

For more information, see http://www.vmware.com/products/.


Rack Layouts

Rack layouts are provided for:

• Vblock 2 Minimum Configuration—Rack Layout Front View

• Vblock 2 Maximum Configuration—Rack Layout Front View

• Vblock 1 Enabled by SAN Storage Minimum Configuration—Rack Layout Front View

• Vblock 1 Enabled by SAN Storage Maximum Configuration—Rack Layout Front View

• Vblock 1 Enabled by Unified Storage Minimum Configuration—Rack Layout Front View

• Vblock 1 Enabled by Unified Storage Maximum Configuration—Rack Layout Front View


Vblock 2 Minimum Configuration—Rack Layout Front View

• 4 UCS Chassis, 8 blades each, 96 GB RAM, 2 * 73 GB disk drives

• 2 UCS 6140 Fabric Interconnects, 40 fixed ports, 8 ports 4 Gb Fibre Channel

• 2 MDS 9506, 36 ports 4 Gb Fibre Channel

• V-Max, 2 Engines

• Celerra NS-G8, 2 to 8 Data Movers (optional)
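As a quick sanity check, the compute totals implied by a configuration list like the one above can be tallied from the per-chassis counts. The sketch below is illustrative only (it is not a Cisco/EMC/VMware tool), and it reads the "96 GB RAM" bullet as per blade:

```python
# Illustrative capacity tally for a Vblock-style configuration.
# Input numbers come from the Vblock 2 minimum configuration bullets;
# the helper itself is a hypothetical aid, not part of the package.

def totals(chassis: int, blades_per_chassis: int, ram_per_blade_gb: int):
    """Return (total_blades, total_ram_gb) for a uniform blade population."""
    blades = chassis * blades_per_chassis
    return blades, blades * ram_per_blade_gb

# Vblock 2 minimum: 4 UCS 5108 chassis, 8 B200 M1 blades each, 96 GB per blade
blades, ram_gb = totals(chassis=4, blades_per_chassis=8, ram_per_blade_gb=96)
print(blades, ram_gb)  # 32 blades, 3072 GB of RAM across the package
```

The same arithmetic scales to the maximum configuration by doubling the chassis count.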

Figure 15 Vblock 2 Minimum Configuration—Rack Layout Front View

[Figure: 42 U rack elevation showing the V-Max system bay with 2 engines, a disk expansion bay, 4 UCS 5108 chassis, 2 UCS 6140 fabric interconnects, and 2 MDS 9506 switches.]


Vblock 2 Maximum Configuration—Rack Layout Front View

• 8 UCS Chassis, 8 blades each, 96 GB RAM, 2 * 73 GB disk drives

• 2 UCS 6140 Fabric Interconnects, 40 fixed ports, 8 ports 4 Gb Fibre Channel

• 2 MDS 9506, 36 ports 4 Gb Fibre Channel

• V-Max, 2 Engines

• Celerra NS-G8, 2 to 8 Data Movers (optional)

Figure 16 Vblock 2 Maximum Configuration—Rack Layout Front View

[Figure: two 42 U rack elevations showing the V-Max system bay with 2 engines, disk expansion bays, 8 UCS 5108 chassis (4 per rack), 2 UCS 6140 fabric interconnects, and 2 MDS 9506 switches.]


Vblock 1 Enabled by SAN Storage Minimum Configuration—Rack Layout Front View

• 2 UCS Chassis, 8 blades each, 6 * 48 GB RAM + 2 * 96 GB RAM (480 GB RAM per chassis)

• 2 UCS 6120 Fabric Interconnects, 20 fixed ports, 8 ports 4 Gb Fibre Channel

• 2 MDS 9506, 36 ports 4 Gb Fibre Channel (optionally 2 MDS 9222i, 18 ports 4 Gb Fibre Channel)

• CLARiiON CX4-480

• Celerra NS-G2, 2 Data Movers (optional)

Figure 17 Vblock 1 Enabled by SAN Storage Minimum Configuration—Rack Layout Front View

[Figure: 42 U rack elevation showing 2 MDS 9500 Series switches, 2 UCS 6120 fabric interconnects, 2 UCS 5108 chassis, and the CLARiiON CX4-480.]


Vblock 1 Enabled by SAN Storage Maximum Configuration—Rack Layout Front View

• 4 UCS Chassis, 8 blades each, 6 * 48 GB RAM + 2 * 96 GB RAM (480 GB RAM per chassis, 1920 GB RAM total)

• 2 UCS 6120 Fabric Interconnects, 20 fixed ports, 8 ports 4 Gb Fibre Channel

• 2 MDS 9506, 36 ports 4 Gb Fibre Channel (optionally 2 MDS 9222i, 18 ports 4 Gb Fibre Channel)

• CLARiiON CX4-480

• Celerra NS-G2, 2 Data Movers (optional)
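The mixed-memory arithmetic in the list above (6 blades at 48 GB plus 2 blades at 96 GB per chassis) can be verified the same way. This is an illustrative sketch under those assumptions, not part of the reference architecture:

```python
# Check the stated RAM total for the Vblock 1 SAN maximum configuration:
# each UCS 5108 chassis holds 6 blades with 48 GB and 2 blades with 96 GB.

def chassis_ram_gb(blades_48: int = 6, blades_96: int = 2) -> int:
    """RAM in one chassis with the mixed blade population from the bullets."""
    return blades_48 * 48 + blades_96 * 96

chassis = 4
total = chassis * chassis_ram_gb()
print(chassis_ram_gb(), total)  # 480 GB per chassis, 1920 GB total
```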

Figure 18 Vblock 1 Enabled by SAN Storage Maximum Configuration—Rack Layout Front View

[Figure: 42 U rack elevation showing 2 MDS 9500 Series switches, 2 UCS 6120 fabric interconnects, 4 UCS 5108 chassis, and the CLARiiON CX4-480 including 1 disk expansion bay.]


Vblock 1 Enabled by Unified Storage Minimum Configuration—Rack Layout Front View

• 2 UCS Chassis, 8 blades each, 6 * 48 GB RAM + 2 * 96 GB RAM (480 GB RAM per chassis)

• 2 UCS 6120 Fabric Interconnects, 20 fixed ports, 8 ports 4 Gb Fibre Channel

• 2 Nexus 5010, 4 ports 4 Gb Fibre Channel, 8 ports 10 Gb Ethernet

• Celerra NS-960

Figure 19 Vblock 1 Enabled by Unified Storage Minimum Configuration—Rack Layout Front View

[Figure: 42 U rack elevation showing 2 Cisco Nexus 5010 switches, 2 UCS 6120 fabric interconnects, 2 UCS 5108 chassis, and the Celerra NS-960.]


Vblock 1 Enabled by Unified Storage Maximum Configuration—Rack Layout Front View

• 4 UCS Chassis, 8 blades each, 6 * 48 GB RAM + 2 * 96 GB RAM (480 GB RAM per chassis, 1920 GB RAM total)

• 2 UCS 6120 Fabric Interconnects, 20 fixed ports, 8 ports 4 Gb Fibre Channel

• 2 Nexus 5010, 4 ports 4 Gb Fibre Channel, 8 ports 10 Gb Ethernet

• Celerra NS-960

Figure 20 Vblock 1 Enabled by Unified Storage Maximum Configuration—Rack Layout Front View

[Figure: rack elevation showing 2 Cisco Nexus 5010 switches, 2 UCS 6120 fabric interconnects, 4 UCS 5108 chassis, and the Celerra NS-960.]


References

• VMware View Reference Architecture
http://www.vmware.com/resources/techresources/1084

• VMware View 4.0
http://www.vmware.com/products/view/

• Cisco UCS
http://www.cisco.com/go/unifiedcomputing

• Cisco Data Center Solutions
http://www.cisco.com/go/datacenter

• Cisco Validated Designs
http://www.cisco.com/go/designzone

• EMC CLARiiON Family
http://www.emc.com/products/family/clariion-family.htm

• EMC Celerra Family
http://www.emc.com/products/family/celerra-family.htm

• EMC Symmetrix V-Max System
http://www.emc.com/products/detail/hardware/symmetrix-v-max.htm

• EMC Symmetrix V-Max System and VMware Virtual Infrastructure white paper
http://www.emc.com/collateral/hardware/white-papers/h6209-symmetrix-v-max-vmware-virtual-infrastructure-wp.pdf

• Using EMC Symmetrix Storage in VMware Virtual Infrastructure TechBook
http://www.emc.com/collateral/hardware/solution-overview/h2529-vmware-esx-svr-w-symmetrix-wp-ldv.pdf

Cisco Systems, Inc.
170 West Tasman Drive
San Jose, CA 95134 USA
Tel: 408-526-4000 or 800-553-6387 (NETS)
Fax: 408-527-0883
www.cisco.com

EMC Corporation
176 South Street
Hopkinton, MA 01748 USA
Tel: 508-435-1000
www.emc.com

VMware, Inc.
3401 Hillview Ave
Palo Alto, CA 94304 USA
Tel: 650-427-5000 or 877-486-9273
Fax: 650-427-5001
www.vmware.com

Copyright © 2010 Cisco Systems, Inc. All rights reserved. Cisco, the Cisco logo, and Cisco Systems are registered trademarks or trademarks of Cisco Systems, Inc. and/or its affiliates in the United States and certain other countries. All other trademarks mentioned in this document or Website are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company.

Copyright © 2010 EMC Corporation. All rights reserved. EMC2, EMC, Celerra, CLARiiON, Enginuity, FLARE, Ionix, Navisphere, Powerpath, SnapView, SRDF, Symmetrix, TimeFinder, UltraFlex, Virtual Matrix, V-Max and where information lives are registered trademarks or trademarks of EMC Corporation in the United States or other countries. All other trademarks used herein are the property of their respective owners. Published in the USA. P/N h6935.1

Copyright © 2010 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies.
