
VMware vCloud on Vblock Design Considerations


Document Version 1.2, October 2011

Contributing Authors:

Chris Colotti, Consulting Architect - VMware
Kendrick Coleman, Senior vArchitect - VCE
Jeramiah Dooley, Principal Solutions vArchitect, SP & Verticals Group - VCE
Sumner Burkart, Senior Consultant - VMware
Sony Francis, Platform Engineering - VCE


Table of Contents

Executive Summary
    Disclaimer
    Document Goals
    Target Audience
    Assumptions
    Requirements
Management Infrastructure
    Advanced Management Pod (AMP)
    VMware vCloud Director Management Requirements
    Additional Servers
    Existing vSphere Instance
    Consuming Vblock Blades
    vCloud Management
    Why Two VMware vCenter Servers?
    AMP VMware vCenter
    AMP Cluster Datacenter
    vCloud Director Management Cluster Datacenter
    Vblock VMware vCenter
    vCenter Protection
Networking Infrastructure
    The Cisco Nexus 1000V
    Networking Solution for VMware vCloud Director and Vblock using Cisco Nexus 1000V
    Networking Solution for VMware vCloud Director and Vblock using VMware vNetwork Distributed Switch
Storage Infrastructure
    Overview
    FAST VP
    Use Case #1: Standard Storage Tiering
    Use Case #2: FAST VP-Based Storage Tiering
    Tiering Policies
    FAST Cache
    Storage Metering and Chargeback
VMware vCloud Director and Vblock Scalability
Reference Links


Executive Summary

Disclaimer
Although this paper deals with some design considerations, it should be noted that the opinions and ideas expressed in this paper are those of the authors and not of their respective companies. The contributing authors work in the field and have collectively discussed ideas to help customers handle this particular solution. The ideas presented here may not be 100% supported by VCE and/or VMware and are simply presented as options to solve the challenge of vCloud solutions on Vblock hardware technology.

Document Goals
The purpose of this document is to provide guidance and insight into some areas of interest when building a VMware vCloud solution on top of a Vblock hardware infrastructure. Both technologies provide flexibility in different areas to enable an organization, or service provider, to successfully deploy a VMware vCloud environment on VCE Vblock™ Infrastructure Platforms. To ensure proper architecture guidelines are followed, certain design considerations must be addressed when combining Vblock and vCloud Director. This solution brief is intended to provide guidance to properly architect and manage the infrastructure, virtual and physical networking, storage configuration, and scalability of any VMware vCloud Director on Vblock environment. As VMware vCloud Director is increasingly deployed on VCE Vblock, employees, partners, and customers have been seeking additional information specific to a combined solution, which requires some additional considerations. We will address them in the following four target areas:

• Management Infrastructure
• Networking Infrastructure
• Storage Infrastructure
• Scalability

Target Audience
The target audience of this document is the individual with a highly technical background who will be designing, deploying, managing, or selling a vCloud Director on Vblock solution, including, but not limited to, technical consultants, infrastructure architects, IT managers, implementation engineers, partner engineers, sales engineers, and potentially customer staff. This solution brief is not intended to replace or override existing certified designs for either VMware vCloud Director or VCE Vblock; instead, it is meant to supplement that knowledge and provide additional guidelines for deploying or modifying any environment that deploys the two in unison.


Assumptions
The following is a list of overall assumptions and considerations before utilizing information contained in this document:

• Any reader designing or deploying should already be familiar with both VMware vCloud Director and VCE Vblock reference architectures and terminology.
• All readers should have sufficient understanding of the following subject areas or products:
  o Cisco Nexus 1000V administration
  o vNetwork Distributed Switch (vDS) administration
  o vSphere best practices and principles, including, but not limited to:
    - HA and DRS clusters
    - Fault Tolerance
  o EMC storage included as part of a Vblock:
    - FAST pools
    - Storage tiering
    - Disk technologies such as EFD, FC, and SATA
  o Physical and virtual networking areas relating to VLANs, subnets, routing, and switching
  o Database server administration (or access to existing enterprise database resources, including administration staff)
• Extra components that are needed are not standardized in the VCE Vblock bill of materials.

Please note that vCloud Director API integration will not be addressed in this document.

Requirements
Recommendations contained throughout this document have considered the following design requirements and/or constraints:

• VCE Vblock ships with one of the following AMP cluster configurations:
  o Mini AMP
  o HA AMP
• The most recent version of the highly available (HA) AMP cluster utilizes a standalone EMC VNXe 3100
• A Vblock definition in UIM only addresses a single UCSM domain, which is a maximum of 64 UCS blades
• Every VMware vCenter instance must be made highly available
• A Cisco Nexus 1000V is optional in the design
• EMC Ionix UIM will be used to provision VMware vSphere hosts that are members of each and every vCloud Director resource group


Management Infrastructure
The management infrastructure of both VMware vCloud Director and Vblock is critical to the availability of each and every individual component. The VCE Vblock management cluster controls the physical layer of the solution, while the VMware vCloud Director management cluster controls the virtual layer of the solution. Each layer is equally important and has its own special requirements – it is therefore imperative to understand which components manage each layer when designing a unified architecture.

Advanced Management Pod (AMP)
The AMP cluster is included with every Vblock instance, and the desired AMP configuration for VMware vCloud Director on the Vblock platform is the HA AMP. The HA AMP comprises two (2) Cisco C200 rack-mount servers and hosts all virtual machines necessary to manage the VCE Vblock hardware stack. Vblock virtual machine server components consist of, but aren't necessarily limited to, EMC Ionix UIM, PowerPath licensing, Unisphere, VMware vCenter, and Update Manager. Currently, this cluster is configured with an EMC VNXe 3100, providing storage for all AMP management VMs.

Since the AMP cluster is a design element of the VCE Engineering Vblock Reference Architecture, it should not be modified or removed in order to stay true to the original design. Changing the configuration of the AMP Cluster requires additional validation, input and review from various internal parties, and ultimately would not provide a timely solution. Additional justifications for not modifying this cluster include:

• Cisco C200 servers are not cabled and connected to Vblock SAN storage
• An AMP cluster of only two nodes does not satisfy the N+1 availability requirements of VMware vSphere
• Utilizing the AMP cluster as a host platform for vCloud Director could possibly result in downtime and should be avoided

Figure 1 - AMP Cluster Logical Design


VMware vCloud Director Management Requirements
The current VMware vCloud Director Reference Architecture calls for separate management and compute clusters in order to provide a scalable VMware vCloud Director infrastructure. With the requirement for a dedicated and highly available vCloud management cluster, the solution is to create a second management cluster. Because the AMP management cluster must be left unchanged, this second management cluster can be built in three different configurations. As shown in Figure 2 below, there are a significant number of virtual machines called for by the vCloud Director infrastructure; some mandatory, others optional, depending on the overall solution and existing infrastructure. In addition, the VMware vCloud Director Reference Architecture dictates that any vCenter Server attached to vCloud Director have additional security roles assigned to it in order to protect the virtual machines deployed into it. This becomes very difficult if all items are managed by a single vCenter Server; therefore, two instances should be provided.

Additional Servers
The first scenario consists of four (4) Cisco C200 hosts deployed to support vCloud Director. This vCloud management cluster ties into the existing VCE Vblock fabric, which allows the four (4) C200 servers hosting the vCloud management virtual machines to use the EMC SAN for storage; all network connections need to be made fully redundant by attaching them to the Cisco Nexus 5000 and MDS 9000 series switches. The four (4) C200 servers can be packaged with the Cisco Nexus 1000V and EMC PowerPath/VE components, but this is not required. This is the recommended approach to run vCloud Director on Vblock because it allows greater scalability and resiliency to failures.

An additional design aspect to keep in perspective is the physical networking for VMware vSphere. A standard Cisco C200 server is equipped with two (2) on-board 1Gb NICs by default, while a suggested minimum for a 1Gb vSphere design calls for six (6) 1Gb NICs. The Cisco C200 servers will access the Vblock storage using Fibre Channel HBAs, which will consume a PCIe slot, leaving little room for expansion. These C200 servers can be maxed out with one additional PCI Express card; the suggested card is the Broadcom 5709 quad-port 10/100/1000 NIC, to maximize redundancy and reduce the possibility of bandwidth contention.

The next piece to consider is port count and switch location. Every C200 will consume six (6) network ports, multiplied by the number of servers in the pod. Connecting these servers to the Cisco Nexus 5000 switches via 1Gb SFPs will achieve networking functionality, but at the loss of 10GbE ports. It is also possible to connect these servers to a different set of switches located outside the Vblock to obtain networking functionality.
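The port math above is simple enough to sanity-check in a few lines. The sketch below (Python, purely illustrative; the available-port figure is an assumption, not a Vblock specification) multiplies hosts by NICs and compares the result against whatever spare 1Gb-capable ports exist on the Nexus 5000 pair:

# Illustrative sketch: estimate 1Gb access-port consumption for the C200-based
# vCloud management pod. Six NICs per host comes from the suggested 1Gb vSphere
# design above; the available-port budget is a placeholder for your own inventory.

def management_pod_port_count(hosts, nics_per_host=6):
    """Total 1Gb access ports consumed by the management pod."""
    return hosts * nics_per_host

if __name__ == "__main__":
    pod_hosts = 4                   # four C200 servers in the management pod
    needed = management_pod_port_count(pod_hosts)
    available_1g_ports = 16         # assumption: spare 1Gb-capable ports on the Nexus 5000 pair
    print(f"{pod_hosts} hosts x 6 NICs = {needed} x 1Gb ports")
    if needed > available_1g_ports:
        print("Consider a separate 1Gb switch pair outside the Vblock")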

A second option for networking is utilizing the Cisco UCS P81E Virtual Interface Card. Equipping each Cisco C200 server with the P81E VIC will allow 10GbE network connectivity and FCoE storage connectivity. The Vblock utilizes Cisco UCS 6140XP Fabric Interconnects (6120XP in the Vblock 300 EX) for unified computing. Depending on available ports, the P81E adapters can gain network and FCoE functionality through these devices. Remember to keep in mind over-subscription ratios for the 6140XPs and 10GbE licenses when determining this approach.

Existing vSphere Instance
Many customers adopting vCloud Director may already have an existing vSphere server farm. If the customer chooses to do so, they may use that existing vSphere farm to provide resources for the vCloud management components. The existing vSphere instance must be fully redundant and have high-bandwidth connections to the Vblock. The existing vSphere farm must also follow all the guidelines shown above by providing at least three (3) to four (4) hosts dedicated to management to satisfy N+1 or N+2 redundancy. For customers pursuing this route, the vCenter instance controlling the Vblock will reside in the customer's existing vSphere environment and needs to be migrated from the AMP. This is perfectly acceptable for a vCloud Director design because the Vblock becomes dedicated as vCloud resources to be consumed.
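The "three to four hosts" guidance can be reasoned about with a rough capacity check. The sketch below is illustrative only; the per-host capacity and management-VM demand figures are hypothetical placeholders you would replace with real sizing data:

# Illustrative sketch: does a management cluster of n hosts still carry the
# management VM load after losing one host (N+1) or two (N+2)?
# Capacity and demand numbers are hypothetical placeholders.

def tolerates_failures(hosts, host_capacity_ghz, demand_ghz, failures):
    surviving = hosts - failures
    return surviving > 0 and surviving * host_capacity_ghz >= demand_ghz

if __name__ == "__main__":
    demand = 95.0          # assumed aggregate demand of the vCloud management VMs (GHz)
    per_host = 40.0        # assumed usable capacity per host (GHz)
    for n in (3, 4):
        ok = tolerates_failures(n, per_host, demand, failures=1)
        print(f"{n} hosts, N+1: {'OK' if ok else 'insufficient'}")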

Consuming Vblock Blades
The final option is to use four (4) Cisco B-Series blades inside the Vblock. The blades used for vCloud management can be any standard blade pack offered by VCE. This approach requires the four (4) servers in the cluster to come from a minimum of two (2) different chassis. The blades will automatically be packaged with the Cisco Nexus 1000V and EMC PowerPath/VE components. This approach is not recommended because scalability proves to be a point of limitation: consuming four (4) blades as a management cluster will ultimately remove the ability to scale up vCloud resources in a single Vblock to its full potential.

Figure 2 - vCloud Director Management Stack

vCloud Management
Justifications supporting a second vCenter instance and management cluster:

• The approach aligns with the VMware vCloud Director Reference Architecture, which calls for a separate management cluster (or "pod")
• It provides maximum scalability within the vCloud Director management cluster through addition of individual components
• It ensures proper vSphere HA capacity for both N+1 redundancy and maintenance mode
• Additional network and SAN port requirements cannot be satisfied with the existing AMP cluster design
• Adding additional Cisco C200 servers provides a much simpler solution than modifying any existing approved AMP cluster design
• Creating a separate vCloud Director pod removes any contention for resources or potential conflicts if all management virtual machines were hosted in a single HA AMP cluster

The separation of each tier of management allows greater control of the Vblock, isolates VCE AMP management from VMware vCloud Director management, and preserves the current configuration(s) with added flexibility. Although other designs may satisfy all requirements, the recommended approach is to separate the two environments completely.

Why Two VMware vCenter Servers?
Based on the architecture suggested above, and aligning with the vCloud Director Reference Architecture, we want to make sure readers of this document understand not only where each vCenter Server is hosted, but also which ESXi hosts and virtual machines each vCenter Server manages.

AMP VMware vCenter
The first instance of VMware vCenter Server will be hosted inside the Advanced Management Pod. This VMware vCenter will serve two primary functions and will be organized in two separate datacenter objects for separation.

AMP Cluster Datacenter
This datacenter has a single cluster object housing the two (2) AMP C200 servers. This vCenter will essentially be managing itself, since it is also running in that same cluster. It will provide vCenter functions to these two servers such as Update Manager, templates, and cloning functions. This datacenter object will have one set of access roles and permissions. (Customers may or may not have access to these ESXi hosts depending on their agreement with VCE.)


vCloud Director Management Cluster Datacenter
This datacenter also has a single cluster object, made up of the four (4) Cisco C200 rack servers or the customer's chosen vCloud management pod configuration as stated previously. This cluster may have separate, distinct access roles and permissions from the first cluster. The customer's vSphere administrators will generally need full access to this cluster to manage the virtual machines in the management pod. This cluster, however, is managed by a vCenter Server that runs outside of it, providing out-of-cluster management, which is a generally accepted best practice in vSphere architecture.

Vblock VMware vCenter
The second VMware vCenter instance is hosted inside the vCloud management pod, whose hosts are in turn managed by the AMP VMware vCenter instance. Simply speaking, this will be a virtual machine, visible in the AMP vCenter instance, running on the four-node VMware vCloud management cluster. It may have multiple datacenter and/or cluster objects depending on the number of UCS blades initially deployed and scaled up over time. Per the VMware vCloud Reference Architecture, this instance will only manage vCloud hosts and virtual machines. UIM will also point to this vCenter as it provisions UCS blades for consumption by VMware vCloud Director. Lastly, this vCenter instance will have completely separate permissions to protect vCloud-controlled objects from being mishandled.

vCenter Protection
vCenter is critical in a vCloud Director implementation because vCenter is now a secondary layer in the vCloud Director stack: the vCloud Director servers sit a layer higher in the management stack and control the vCenter Servers. The recommended approach is to protect the vCenter instance hosted inside the vCloud management pod by utilizing vCenter Heartbeat, although this is not a required component of the vCloud Director on Vblock design.

Networking Infrastructure
VMware vCloud Director provides Layer 2 networks as isolated entities that can be provisioned on demand and consumed by tenants in the cloud. These isolated entities are created as network pools, which can be used to create organization networks on which vApps rely. vApps are the core building block for deploying a preset number of virtual machines configured for a specific purpose. When deployed, there are three different types of networks that can be connected:

• External (public) networks
• External Org networks (direct-connected or NAT-routed to external networks)
• Internal Org networks (isolated, direct-connected, or NAT-routed to external networks)


The virtual machines within a vApp can be placed on any one or more of the networks presented, for varying levels of connectivity based on each use case. In addition, vCloud Director uses three types of network pools to create these networks. Below is a basic comparison of the three network pool types (for more detailed information, please refer to the VMware vCloud Director documentation):

• Port group-backed pools
  o Benefits – supported by all three virtual switch types: Cisco Nexus 1000V, VMware vDS, and vSwitch
  o Constraints – manual provisioning; vSphere-backed switches have to be pre-configured; must be available on every host in the cluster
• VLAN-backed pools
  o Benefits – separation of traffic through use of VLAN tagging
  o Constraints – currently only supported by VMware vDS; consumes a VLAN ID for every network pool
• vCD-NI-backed pools
  o Benefits – automated provisioning of network pools; consumption of just one VLAN ID
  o Constraints – currently only supported by VMware vDS; maximum performance requires an MTU size of at least 1524 on physical network ports (both host and directly attached switches)

The Cisco Nexus 1000V
The Cisco Nexus 1000V is an integral part of the Vblock platform, allowing the advanced feature sets of Cisco NX-OS to live in the virtual space. NX-OS gives network administrators the ability to see deeper into network traffic and inspect traffic that traverses the network. It interoperates with VMware vCloud Director, and extends the benefits of Cisco NX-OS features, feature consistency, and Cisco's non-disruptive operational model to enterprise private clouds and service provider hosted public clouds managed by VMware vCloud Director. VMware vCloud Director Network Isolation (vCD-NI) is a VMware technology that provides isolated Layer 2 networks for multiple tenants of a cloud without consuming VLAN address space. vCD-NI provides Layer 2 network isolation by means of a network overlay technology utilizing MAC-in-MAC encapsulation, and it is not available with the Cisco Nexus 1000V at the time of this writing. The Cisco Nexus 1000V requires port groups to be pre-provisioned for use by VMware vCloud Director.


Networking Solution for VMware vCloud Director and Vblock using Cisco Nexus 1000V
The Vblock solution for VMware vCloud Director takes an approach where both the Cisco Nexus 1000V and the VMware vNetwork Distributed Switch (vDS) are used in conjunction with each other. The logical Vblock platform build process is done slightly differently with VMware vCloud Director on Vblock: every ESXi host will have both a Cisco Nexus 1000V and a VMware vDS.

Every Cisco UCS half-width blade inside the Vblock platform comes with one M81KR (Palo) Virtual Interface Card, while Cisco UCS full-width blades are configured with two. The M81KR is unique because each card has two 10GbE adapters whose resources can be allocated into virtual interfaces. The vCloud Director on Vblock solution uses the Cisco UCS M81KR adapters to present four (4) virtual 10GbE adapters to each ESXi host. This doesn't mean every host has 40Gb of available throughput; rather, all four virtual network interfaces share 20Gb of available bandwidth. Two (2) 10GbE adapters are given to each virtual switch, which allows for simultaneous use and full redundancy.

This changes slightly when using a Cisco B-Series full-width blade. Since there are two M81KR (Palo) Virtual Interface Cards in each blade, there are four physical 10GbE adapters whose resources can be used. The vCloud Director on Vblock solution uses the Cisco UCS M81KR adapters to present four (4) 10GbE adapters to each ESXi host. Two (2) 10GbE adapters, one from each M81KR card, are given to each virtual switch type, which allows for simultaneous use and full redundancy.

The VMware vCloud on Vblock solution uses the Cisco Nexus 1000V, assigned to port group-backed network pools, for everything entering and exiting the Vblock on external networks. This approach allows the network team to control everything on the network up to the Vblock components. Currently, the Cisco Nexus 1000V capability extends only as far as pre-provisioned configuration of port groups in vSphere. VMware vCloud Director external networks must be created manually in VMware vCloud Director and then associated with a pre-provisioned vSphere port group. All external port groups need to be created on the Cisco Nexus 1000V by the network administrator and assigned as needed inside VMware vCloud Director. This approach allows the network team to maintain control of the network for every packet that is external to the VMware vCloud Director cloud.
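Because the external port groups must exist on the Nexus 1000V before they can be mapped to external networks in vCloud Director, it can be useful to list what is actually present in vCenter first. The following pyVmomi sketch is one way to do that; the vCenter address, credentials, and switch names are placeholders, and this is an illustration rather than a VCE or VMware prescribed procedure:

# Illustrative pyVmomi sketch: list distributed port groups visible to the
# Vblock vCenter so the external port groups pre-provisioned on the Nexus 1000V
# can be confirmed before mapping them in vCloud Director.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()        # lab use only
si = SmartConnect(host="vcenter.example.local", user="administrator",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
    for pg in view.view:
        dvs_name = pg.config.distributedVirtualSwitch.name
        print(f"{dvs_name}: {pg.name}")       # e.g. confirm the externally facing port groups exist
finally:
    Disconnect(si)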

The VMware vDS is responsible for all External Organization and Internal Organization networks, which are internal to the cloud. This allows VMware vCloud Director to natively automate the process of creating new port groups backed by either VLAN-backed pools or vCD-NI-backed pools. The VMware vDS gives cloud administrators the ability to dynamically create the VMware vCloud based isolated networks with little to no intervention by the network team. It is also recommended that vCD-NI pools be used, since they provide the greatest flexibility with the least number of required VLANs. External Org and/or Internal Org networks using network pools backed by VLAN or vCD-NI port groups are Layer 2 segments that extend between hosts in the same VMware cluster.

When a vApp (or a VM inside a vApp) needs to access an external network, the traffic is routed internally on the ESXi host from the VMware vDS to the Cisco Nexus 1000V by the vShield Edge appliance using a NAT-routed configuration. The vShield Edge appliance is configured with two NICs, one connected to an organization network on the vNetwork Distributed Switch and one connected to an external network on the Cisco Nexus 1000V, bridging the two networks together. Additionally, a vApp could be configured to directly access an external network based on a specific use case and therefore would only be attached to the Cisco Nexus 1000V. The first diagram below illustrates basic connectivity of a NAT-routed vApp with VMware vShield Edge:

Figure 3 - NAT Routed vApp Network to External Network

The second alternative configuration, where either the vApp (Internal Org) or External Org network is directly attached to the External (public) network, is shown below. In this case, virtual machines inside a vApp are essentially directly connected to the external network; they cannot take advantage of the NAT and/or firewall functionality provided by vShield Edge and will consume external IP addresses from the external network pool.

Figure 4 - Direct Attached vApp Network to External Network

Networking Solution for VMware vCloud Director and Vblock using VMware vNetwork Distributed Switch
The Cisco Nexus 1000V is not a required component in a VCE Vblock running VMware vCloud Director 1.0.x. This decision was made because of the additional steps and requirements needed from a logical build perspective, the licensing costs of vShield Edge, and the lack of integration with vCD-NI. The Cisco Nexus 1000V is still recommended for the vCloud management cluster to give network administrators access. However, if a customer decides to implement a networking solution based on the VMware vNetwork Distributed Switch, everything related to the vCloud falls under the Cloud Administrator's responsibility.

The logical build of a Cisco B-Series blade that will be used by vCloud Director will only need two (2) 10GbE adapters (or vNICs) and two (2) vHBAs assigned to it by UIM's creation of service profiles in UCSM. These two vNICs will serve as the standard configuration from VCE's logical build and will abide by the QoS templates already preset by VCE standards. They will be responsible for all network traffic, including management, vMotion, and virtual machine traffic.

The vNetwork Distributed Switch will be responsible for controlling all three types of networks: External Networks, External Organization Networks, and Internal Organization Networks. This will allow VMware vCloud Director to natively automate and orchestrate the creation and destruction of port groups that are created by VLAN-backed network pools or vCD-NI backed network pools.

A vNetwork Distributed Switch still needs to comply with basic vCloud Director requirements. All external port groups must be created beforehand, including the VLANs that are going to be utilized for vCD-NI Layer 2 transmissions. Recommendations for vNetwork Distributed Switch settings for vCD-NI can be found in the VMware vCloud Architecture Toolkit version 1.6.
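One of those settings is the MTU increase needed for vCD-NI (at least 1524 bytes, as noted earlier). The sketch below shows one way this could be scripted with pyVmomi; the vCenter address, credentials, switch name, and the choice of 9000 bytes are assumptions for illustration, and the matching change still has to be made on the upstream physical switch ports:

# Illustrative pyVmomi sketch: raise the MTU on a vNetwork Distributed Switch so
# vCD-NI encapsulated frames (>= 1524 bytes) are not dropped. Names and
# credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()        # lab use only
si = SmartConnect(host="vcenter.example.local", user="administrator",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.DistributedVirtualSwitch], True)
    dvs = next(d for d in view.view if d.name == "vds-vcloud")   # placeholder switch name

    spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
    spec.configVersion = dvs.config.configVersion                 # required for reconfigure
    spec.maxMtu = 9000                                             # anything >= 1524 satisfies vCD-NI
    task = dvs.ReconfigureDvs_Task(spec)
    print("Reconfigure task submitted:", task.info.key)
finally:
    Disconnect(si)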

vShield Edge devices will be used natively by vCloud Director to serve as DHCP/firewall/NAT devices for fenced networks and organization networks.


Storage Infrastructure

Overview
Storage is a key design element in a VMware vCloud environment, both at the physical infrastructure level and at the Provider Virtual Datacenter (VDC) level. The functionality of the storage layer can improve performance, increase scalability, and provide more options in the service creation process. EMC arrays are at the heart of the VCE Vblock Infrastructure Platform and offer a number of features that can be leveraged in a vCloud environment, including FAST VP, FAST Cache, and the ability to provide a unified storage platform that can serve both file and block storage.

FAST VP
VNX FAST VP is a policy-based auto-tiering solution. The goal of FAST VP is to utilize storage tiers efficiently: to lower the overall cost of the storage solution by moving "slices" of colder data to high-capacity disks, and to increase performance by keeping hotter slices of data on performance drives. In a VMware vCloud environment, FAST VP is a way for a provider to offer a blended storage offering, reducing the cost of a traditional single-type offering while allowing for a wider range of customer use cases and accommodating a larger cross-section of VMs with different performance characteristics.¹

Use Case #1: Standard Storage Tiering
In a non-FAST VP enabled array, multiple storage tiers are typically presented to the vCloud environment, and each of these offerings is abstracted into a separate Provider VDC. For example, a provider may choose to provision an EFD (SSD/Flash) tier, an FC/SAS tier, and a SATA tier, and then abstract these into Gold, Silver, and Bronze Provider VDCs. The customer then chooses resources from these for use in their Organization VDC.

This provisioning model is limited for a number of reasons:

• VMware vCloud Director doesn't allow for a non-disruptive way to move VMs from one Provider VDC to another, meaning the customer must plan for downtime if a vApp needs to be moved to a more appropriate tier
• For workloads with a variable I/O personality, there is no mechanism to automatically migrate those workloads to a more appropriate tier of disk
• With the cost of EFDs still significant, creating an entire tier of them can be prohibitively expensive, especially with few workloads having an I/O pattern that takes full advantage of this particular storage medium

One way in which the standard storage tiering model can be a benefit is when multiple arrays are being utilized to provide different kinds of storage to support different I/O workloads.

1 http://www.emc2.ro/collateral/hardware/white-papers/h8220-fast-suite-sap-vnx-wp.pdf


Use Case #2: FAST VP-Based Storage Tiering
On a Vblock platform that is licensed for FAST VP, there are ways to provide more flexibility and a more cost-effective platform when compared to a standard tiering model. Rather than using a single disk type per Provider VDC, companies can blend both the cost and performance characteristics of multiple disk types. Some examples of this would include:

• Creating a FAST VP pool that contains 20% EFD and 80% FC/SAS disks as a "Performance Tier" offering for customers who may need the performance of EFD during certain times, but who don't want to pay for that performance all the time.
• Creating a FAST VP pool that contains 50% FC/SAS disks and 50% SATA disks as a "Production Tier" where most standard enterprise apps can take advantage of the standard FC/SAS performance, yet the ability to de-stage cold data to SATA disk brings the overall cost of the storage down per GB.
• Creating a FAST VP pool that contains 90% SATA disks and 10% FC/SAS disks as an "Archive Tier" where mostly near-line data is stored, with the FC/SAS disks being used for those instances where the customer needs to go to the archive to recover data, or for customers who are dumping a significant amount of data into the tier.
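To illustrate why these blends change the economics, the short sketch below computes an approximate blended cost per GB for each pool mix described above. The per-GB prices are arbitrary placeholders, not EMC or VCE pricing, and the percentages are treated as fractions of usable pool capacity:

# Rough illustration: blended cost per GB for the example FAST VP pool mixes above.
# Per-GB prices are made-up placeholders, not vendor pricing.
PRICE_PER_GB = {"EFD": 20.0, "FC/SAS": 4.0, "SATA": 1.0}   # hypothetical $/GB

def blended_cost_per_gb(mix):
    """mix maps tier name -> fraction of pool capacity (fractions sum to 1.0)."""
    return sum(PRICE_PER_GB[tier] * fraction for tier, fraction in mix.items())

pools = {
    "Performance Tier": {"EFD": 0.20, "FC/SAS": 0.80},
    "Production Tier":  {"FC/SAS": 0.50, "SATA": 0.50},
    "Archive Tier":     {"SATA": 0.90, "FC/SAS": 0.10},
}

for name, mix in pools.items():
    print(f"{name}: ~${blended_cost_per_gb(mix):.2f}/GB")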

Tiering Policies
FAST VP offers a number of policy settings governing how data is placed, how often data is promoted, and how data movement is managed. In a vCloud Director environment, the following policy settings are recommended to best accommodate the types of I/O workloads produced:

• By default, the Data Relocation Schedule is set to migrate data 7 days a week, between 11pm and 6am (reflecting the standard business day), and to use a Data Relocation Rate of "Medium," which can relocate 300-400 GB of data per hour. In a vCloud environment, VCE recommends opening up the Data Relocation window to run 24 hours a day, but reducing the Data Relocation Rate to "Low." This allows for constant promotion and demotion of data, yet limits the impact on host I/O.
• By default, FAST VP-enabled LUNs/pools are set to use the "Auto-Tier" policy, spreading data across all tiers of disk evenly. In a vCloud environment, where customers are generally paying for the lower tier of storage but leveraging the ability to promote workloads to higher-performing disk when needed, the VCE recommendation is to use the "Lowest Available Tier" policy. This places all data onto the lower tier of disk initially, keeping the higher tier of disk free for data that needs it.
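For environments that keep array settings under configuration management, the two recommendations above can be captured as data and compared against what is actually configured. The field names below are illustrative only and do not correspond to an EMC Unisphere or naviseccli interface:

# Sketch: encode the recommended FAST VP policy settings as data so drift from
# the recommendation can be reported. Field names are illustrative, not an EMC API.
RECOMMENDED_FAST_VP_POLICY = {
    "relocation_schedule": {"days": "all", "start": "00:00", "duration_hours": 24},
    "relocation_rate": "Low",                        # instead of the default "Medium"
    "initial_placement": "Lowest Available Tier",    # instead of the default "Auto-Tier"
}

def policy_drift(actual):
    """Return the setting names whose configured value differs from the recommendation."""
    return [key for key, value in RECOMMENDED_FAST_VP_POLICY.items() if actual.get(key) != value]

if __name__ == "__main__":
    configured = {                                   # example: the array defaults described above
        "relocation_schedule": {"days": "all", "start": "23:00", "duration_hours": 7},
        "relocation_rate": "Medium",
        "initial_placement": "Auto-Tier",
    }
    print("Settings to revisit:", policy_drift(configured))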


FAST Cache
FAST Cache is an industry-leading feature, supported by all 300-series Vblock platforms, which extends the VNX array's read/write cache and ensures that unpredictable I/O spikes are serviced at EFD speeds², which is of particular benefit in a vCloud environment. Multiple VMs on multiple VMFS datastores spread across multiple hosts can generate a very random I/O pattern, placing stress on both the storage processors and the DRAM cache. FAST Cache, a standard feature on all Vblocks, mitigates the effects of this kind of I/O by extending the DRAM cache for both reads and writes, increasing the overall cache performance of the array, improving I/O during usage spikes, and dramatically reducing the overall number of dirty pages and cache misses.

Because FAST Cache is aware of the EFD disk tiers available in the array, FAST VP and FAST Cache work in concert to improve array performance. Data that has been promoted to an EFD tier will never be cached inside FAST Cache, ensuring that both options are leveraged in the most efficient way.

In a vCloud Director environment, VCE recommends a minimum of 100 GB of FAST Cache, with the amount of FAST Cache increasing as the number of VMs increases. The following table details the recommendations from VCE:

# of VMs     FAST Cache Configuration
0-249        100GB total (2 x 100GB, RAID 1)
250-499      400GB total (4 x 200GB, RAID 1)
500-999      600GB total (6 x 200GB, RAID 1)
1000+        1000GB total (10 x 200GB, RAID 1)
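The sizing table translates directly into a lookup, which can be handy in capacity-planning scripts. The bands and configurations below come straight from the table; everything else is illustrative:

# Minimal lookup for the FAST Cache sizing guidance in the table above.
FAST_CACHE_BANDS = [
    (249, "100GB total (2 x 100GB, RAID 1)"),
    (499, "400GB total (4 x 200GB, RAID 1)"),
    (999, "600GB total (6 x 200GB, RAID 1)"),
]
MAX_BAND = "1000GB total (10 x 200GB, RAID 1)"       # 1000+ VMs

def recommended_fast_cache(vm_count):
    """Return the recommended FAST Cache configuration for a given VM count."""
    for upper_bound, configuration in FAST_CACHE_BANDS:
        if vm_count <= upper_bound:
            return configuration
    return MAX_BAND

for count in (150, 600, 2500):
    print(count, "VMs ->", recommended_fast_cache(count))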

The combination of FAST VP and FAST Cache allows the vCloud environment to scale better, support more VMs and a wider variety of service offerings, and protect against I/O spikes and bursting workloads in a way that is unique in the industry. These two technologies in tandem are a significant differentiator for the VCE Vblock Infrastructure Platform.

Storage Metering and Chargeback
Having flexibility in how you deliver storage offerings is important, but in a vCloud environment the ability to meter and charge for that storage is equally critical. While not a required component of VMware vCloud Director on Vblock, this design uses the VMware vCenter Chargeback product in conjunction with the VMware Cloud Director Data Collector and vShield Manager Data Collector. Configuration of this product is outside the scope of this paper, but resources can be found on the VMware website.

2 http://www.emc.com/collateral/hardware/white-papers/h8217-introduction-vnx-wp.pdf


After Chargeback is configured properly³, organizations created in vCloud Director will be imported into vCenter Chargeback, including all of the Organization VDCs, the media and template files, vApps, virtual machines, and networks. Each level of the customer organization is represented in the vCenter Chargeback hierarchy, allowing reporting with as much granularity as necessary.

3 http://www.vmware.com/resources/techresources/10153


VMware vCloud Director and Vblock Scalability
To understand the scalability of VMware vCloud Director on VCE Vblock, we need to address items that will affect decisions and recommendations. First, every Vblock ships with EMC Ionix Unified Infrastructure Manager (UIM). EMC's UIM software is used as a hardware-provisioning tool to deploy physical hardware in a Vblock platform. Second, while every Vblock also uses VMware vCenter to manage the vSphere layer, in vCloud deployments the vSphere layer is actually controlled by VMware vCloud Director.

EMC Ionix UIM software communicates with VMware vCenter to provision physical blades with VMware ESX or ESXi and integrates them into vSphere objects that VMware vCloud Director can then consume. These can either be existing vSphere cluster objects or completely new objects located in the same VMware vCenter.

An existing VMware vCenter instance managing Vblock resources for vCloud Director can scale to the maximums set by VMware, which, based on current documentation, is 1,000 hosts. In the past, as more provisioned blades were needed, another UIM instance was created along with a new VMware vCenter instance. Since UIM is crucial to the orchestration of hosts, and the maximums of each product differ, recommendations can be based on individual customer requirements and the specific use case for UIM.

As each new Vblock is deployed, orchestration workflows can discover the new Vblock, create UIM service offerings, associate them with specific vCenter instances, initiate new services on top of the Vblock, and provision new ESX hosts into vCenter clusters. Each additional Vblock, from a hardware perspective, mirrors the configuration of the first Vblock, the exception being that each new one does not require a new vCenter. UIM, on the other hand, is directed at the original vCenter service available in the vCloud Director management stack, and new blades are provisioned and added to a new VMware cluster. As additional Vblocks are added to vCenter for vCloud Director capacity, the recommended maximum host configuration stands at 640 blades. Once the 640-blade maximum has been reached, a new vCenter instance becomes necessary and new Vblocks are then assigned to it.
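As a back-of-the-envelope illustration of these limits, the sketch below uses the two figures from this section (64 blades per UCSM domain, i.e. per Vblock definition in UIM, and a recommended ceiling of 640 blades per vCenter) to estimate how many vCenter instances a given number of Vblocks implies. It illustrates the arithmetic only and is not a sizing tool:

# Back-of-the-envelope sketch using the figures in this section.
import math

BLADES_PER_VBLOCK = 64        # one UCSM domain per Vblock definition in UIM
BLADES_PER_VCENTER = 640      # recommended maximum described above

def capacity_plan(vblocks):
    blades = vblocks * BLADES_PER_VBLOCK
    return {
        "vblocks": vblocks,
        "blades": blades,
        "vcenter_instances": math.ceil(blades / BLADES_PER_VCENTER),
    }

for count in (1, 10, 11):
    print(capacity_plan(count))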

The design philosophy of architecting a minimal number of VMware vCenter Servers, each representing a building block, enables customers to realize the strengths of Vblock scalability while reducing VMware vCenter and vCloud environment complexity. Customers simply purchase more compute resources (in the form of Vblocks) and add them to their VMware vCloud Director cloud environment in a quick and convenient manner – especially in UIM-based deployments. By leveraging the rapid hardware provisioning of EMC Ionix UIM and the elasticity of VMware vCloud Director, the best of both worlds are joined to provide consistent, readily available, and scalable resource deployment for cloud consumers.


Reference Links

http://www.vmware.com/files/pdf/VMware-Architecting-vCloud-WP.pdf

http://www.emc.com/collateral/software/white-papers/h8058-fast-vp-unified-storage-wp.pdf

http://www.emc.com/collateral/hardware/white-papers/h8217-introduction-vnx-wp.pdf

http://www.emc2.ro/collateral/hardware/white-papers/h8220-fast-suite-sap-vnx-wp.pdf

Cisco Nexus 1000V Integration with VMware vCloud Director