
VBLOCK™ SOLUTION FOR TRUSTED MULTI-TENANCY: DESIGN GUIDE

Version 2.0 March 2013

www.vce.com

© 2013 VCE Company, LLC. All Rights Reserved.


Copyright 2013 VCE Company, LLC. All Rights Reserved.

VCE believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." VCE MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.


Contents

Introduction
    About this guide
    Audience
    Scope
    Feedback

Trusted multi-tenancy foundational elements
    Secure separation
    Service assurance
    Security and compliance
    Availability and data protection
    Tenant management and control
    Service provider management and control

Technology overview
    Management
        Advanced Management Pod
        EMC Ionix Unified Infrastructure Manager/Provisioning
    Compute technologies
        Cisco Unified Computing System
        VMware vSphere
        VMware vCenter Server
        VMware vCloud Director
        VMware vCenter Chargeback
        VMware vShield
    Storage technologies
        EMC Fully Automated Storage Tiering
        EMC FAST Cache
        EMC PowerPath/VE
        EMC Unified Storage
        EMC Unisphere Management Suite
        EMC Unisphere Quality of Service Manager
    Network technologies
        Cisco Nexus 1000V Series
        Cisco Nexus 5000 Series
        Cisco Nexus 7000 Series
        Cisco MDS
        Cisco Data Center Network Manager
    Security technologies
        RSA Archer eGRC
        RSA enVision

Design framework
    End-to-end topology
        Virtual machine and cloud resources layer
        Virtual access layer/vSwitch
        Storage and SAN layer
        Compute layer
        Network layers
    Logical topology
        Tenant traffic flow representation
        VMware vSphere logical framework overview
    Logical design
        Cloud management cluster logical design
        vSphere cluster specifications
        Host logical design specifications for cloud management cluster
        Host logical configuration for resource groups
        VMware vSphere cluster host design specification for resource groups
        Security
    Tenant anatomy overview

Design considerations for management and orchestration
    Configuration
    Enabling services
        Creating a service offering
        Provisioning a service

Design considerations for compute
    Design considerations for secure separation
        Cisco UCS
        VMware vCloud Director
    Design considerations for service assurance
        Cisco UCS
        VMware vCloud Director
    Design considerations for security and compliance
        Cisco UCS
        VMware vCloud Director
        VMware vCenter Server
    Design considerations for availability and data protection
        Cisco UCS
        Virtualization
    Design considerations for tenant management and control
        VMware vCloud Director
    Design considerations for service provider management and control
        Virtualization

Design considerations for storage
    Design considerations for secure separation
        Segmentation by VSAN and zoning
        Separation of data at rest
        Address space separation
        Separation of data access
    Design considerations for service assurance
        Dedication of runtime resources
        Quality of service control
        EMC VNX FAST VP
        EMC FAST Cache
        EMC Unisphere Management Suite
        VMware vCloud Director
    Design considerations for security and compliance
        Authentication with LDAP or Active Directory
        VNX and RSA enVision
    Design considerations for availability and data protection
        High availability
        Local and remote data protection
    Design considerations for service provider management and control

Design considerations for networking
    Design considerations for secure separation
        VLANs
        Virtual routing and forwarding
        Virtual device context
        Access control list
    Design considerations for service assurance
    Design considerations for security and compliance
        Data center firewalls
        Services layer
        Cisco Application Control Engine
        Cisco Intrusion Prevention System
        Cisco ACE, Cisco ACE Web Application Firewall, Cisco IPS traffic flows
        Access layer
        Security recommendations
        Threats mitigated
        Vblock Systems security features
    Design considerations for availability and data protection
        Physical redundancy design consideration
    Design considerations for service provider management and control

Design considerations for additional security technologies
    Design considerations for secure separation
        RSA Archer eGRC
        RSA enVision
    Design considerations for service assurance
        RSA Archer eGRC
        RSA enVision
    Design considerations for security and compliance
        RSA Archer eGRC
        RSA enVision
    Design considerations for availability and data protection
        RSA Archer eGRC
        RSA enVision
    Design considerations for tenant management and control
        RSA Archer eGRC
        RSA enVision
    Design considerations for service provider management and control
        RSA Archer eGRC
        RSA enVision

Conclusion
Next steps
Acronym glossary


Introduction

The Vblock™ Solution for Trusted Multi-Tenancy (TMT) Design Guide describes how Vblock™ Systems allow enterprises and service providers to rapidly build virtualized data centers that support the unique challenges of provisioning Infrastructure as a Service (IaaS) to multiple tenants.

The trusted multi-tenancy solution comprises six foundational elements that address the unique requirements of the IaaS cloud service model:

Secure separation

Service assurance

Security and compliance

Availability and data protection

Tenant management and control

Service provider management and control

The trusted multi-tenancy solution deploys compute, storage, network, security, and management Vblock System components that address each element while offering service providers and tenants numerous benefits. The following table summarizes these benefits.

Provider benefits | Tenant benefits
Lower cost-to-serve | Cost savings transferred to tenants
Standardized offerings | Faster incident resolution with standardized services
Easier growth and scale using standard infrastructures | Secure isolation of resources and data
More predictable planning around capacity and workloads | Usage-based services model, such as backup and storage

About this guide

This design guide explains how service providers can use specific products in the compute, network, storage, security, and management component layers of Vblock Systems to support the six foundational elements of trusted multi-tenancy. By meeting these objectives, Vblock Systems offer service providers and enterprises an ideal business model and IT infrastructure to securely provision IaaS to multiple tenants. This guide demonstrates processes for:

Designing and managing Vblock Systems to deliver infrastructure multi-tenancy and service multi-tenancy

Managing and operating Vblock Systems securely and reliably


The specific goal of this guide is to describe the design of, and rationale behind, the solution. The guide examines each layer of the Vblock System and shows how to achieve trusted multi-tenancy at that layer. The design raises many issues that must be addressed prior to deployment, because no two environments are alike.

Audience

The target audience for this guide is highly technical, including technical consultants, professional services personnel, IT managers, infrastructure architects, partner engineers, sales engineers, and service providers deploying a trusted multi-tenancy environment with leading technologies from VCE.

Scope

Trusted multi-tenancy can be used to offer dedicated IaaS (compute, storage, network, management, and virtualization resources) or leverage single instances of services and applications for multiple consumers. This guide only addresses design considerations for offering dedicated IaaS to multiple tenants. While this design guide describes how Vblock Systems can be designed, operated, and managed to support trusted multi-tenancy, it does not provide specific configuration information, which must be specifically considered for each unique deployment. In this guide, the terms “Tenant” and “Consumer” refer to the consumers of the services provided by a service provider.

Feedback

To suggest documentation changes and provide feedback on this paper, send email to [email protected]. Include the title of this paper, the name of the topic to which your comment applies, and your feedback.


Trusted multi-tenancy foundational elements

The trusted multi-tenancy solution comprises six foundational elements that address the unique requirements of the IaaS cloud service model:

Secure separation

Service assurance

Security and compliance

Availability and data protection

Tenant management and control

Service provider management and control

Figure 1. Six elements of the Vblock Solution for Trusted Multi-Tenancy


Secure separation

Secure separation refers to the effective segmentation and isolation of tenants and their assets within the multi-tenant environment. Adequate secure separation ensures that the resources of existing tenants remain untouched and the integrity of the applications, workloads, and data remains uncompromised when the service provider provisions new tenants. Each tenant might have access to different amounts of network, compute, and storage resources in the converged stack. The tenant sees only those resources allocated to them.

From the standpoint of the service provider, secure separation requires the systematic deployment of various security control mechanisms throughout the infrastructure to ensure the confidentiality, integrity, and availability of tenant data, services, and applications. The logical segmentation and isolation of tenant assets and information is essential for providing confidentiality in a multi-tenant environment. In fact, ensuring the privacy and security of each tenant becomes a key design requirement in the decision to adopt cloud services.

Service assurance

Service assurance plays a vital role in providing tenants with consistent, enforceable, and reliable service levels. Unlike physical resources, virtual resources are highly scalable and easy to allocate and reallocate on demand. In a multi-tenant virtualized environment, the service provider prioritizes virtual resources to accommodate the growth and changing business needs of tenants. Service level agreements (SLA) define the level of service agreed to by the tenant and service provider. The service assurance element of trusted multi-tenancy provides technologies and methods to ensure that tenants receive the agreed-upon level of service.

Various methods are available to deliver consistent SLAs across the network, compute, and storage components of the Vblock System, including:

Quality of service in the Cisco Unified Computing System (UCS) and Cisco Nexus platforms

EMC Symmetrix Quality of Service tools

EMC Unisphere Quality of Service Manager (UQM)

VMware Distributed Resource Scheduler (DRS)

Without the correct mix of service assurance features and capabilities, it can be difficult to maintain uptime, throughput, quality of service, and availability SLAs.
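As a simple illustration of how agreed service levels can translate into enforceable resource settings, the following sketch converts SLA tiers into proportional shares of a compute pool, in the spirit of DRS-style share allocation. The tier names, weights, and tenant assignments are assumptions for illustration, not values defined by this solution.

```python
# Illustration of translating SLA tiers into proportional resource shares,
# in the spirit of DRS-style shares. Tier names, weights, and tenant
# assignments are hypothetical.
TIER_WEIGHTS = {"gold": 4, "silver": 2, "bronze": 1}

def allocate(pool_ghz: float, tenants: dict) -> dict:
    """Split a CPU pool across tenants in proportion to their SLA tier weight."""
    total = sum(TIER_WEIGHTS[tier] for tier in tenants.values())
    return {tenant: round(pool_ghz * TIER_WEIGHTS[tier] / total, 1)
            for tenant, tier in tenants.items()}

print(allocate(100.0, {"tenantA": "gold", "tenantB": "silver", "tenantC": "bronze"}))
# tenantA receives 4/7 of the pool, tenantB 2/7, tenantC 1/7
```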


Security and compliance

Security and compliance refers to the confidentiality, integrity, and availability of each tenant’s environment at every layer of the trusted multi-tenancy stack. Trusted multi-tenancy ensures security and compliance using technologies like identity management and access control, encryption and key management, firewalls, malware protection, and intrusion prevention. This is a primary concern for both service provider and tenant.

The trusted multi-tenancy solution ensures that all activities performed in the provisioning, configuration, and management of the multi-tenant environment, as well as day-to-day activities and events for individual tenants, are verified and continuously monitored. It is also important that all operational events are recorded and that these records are available as evidence during audits.

As regulatory requirements expand, the private cloud environment will become increasingly subject to security and compliance standards, such as Payment Card Industry Data Security Standards (PCI-DSS), HIPAA, Sarbanes-Oxley (SOX), and Gramm-Leach-Bliley Act (GLBA). With the proper tools, achieving and demonstrating compliance is not only possible, but it can often become easier than in a non-virtualized environment.

Availability and data protection

Resources and data must be available for use by the tenant. High availability means that resources such as network bandwidth, memory, CPU, or data storage are always online and available to users when needed. Redundant systems, configurations, and architecture can minimize or eliminate points of failure that adversely affect availability to the tenant.

Data protection is a key ingredient in a resilient architecture. Cloud computing imposes a trade-off between resource cost and high performance, and increasingly robust security and data classification requirements are an essential tool for balancing that equation. Enterprises need to know what data is important and where it is located as prerequisites to making performance cost-benefit decisions, and to focus data loss prevention procedures on the most critical areas.

Tenant management and control

In every cloud services model there are elements of control that the service provider delegates to the tenant. The tenant's administrative, management, monitoring, and reporting capabilities need to be restricted to the delegated resources. Reasons for delegating control include convenience, new revenue opportunities, security, compliance, and specific tenant requirements. In all cases, the goal of the trusted multi-tenancy model is to allow for and simplify the management, visibility, and reporting of this delegation.


Tenants should have control over relevant portions of their service. Specifically, tenants should be able to:

Provision allocated resources

Manage the state of all virtualized objects

View change management status for the infrastructure component

Add and remove administrative contacts

Request more services as needed

In addition, tenants taking advantage of data protection or data backup services should be able to manage this capability on their own, including setting schedules and backup types, initiating jobs, and running reports.

This tenant-in-control model allows tenants to dynamically change the environment to suit their workloads as resource requirements change.

Service provider management and control

Another goal of trusted multi-tenancy is to simplify management of resources at every level of the infrastructure and to provide the functionality to provision, monitor, troubleshoot, and charge back the resources used by tenants. Management of multi-tenant environments comes with challenges, from reporting and alerting to capacity management and tenant control delegation. The Vblock System helps address these challenges by providing scalable, integrated management solutions inherent to the infrastructure, and a rich, fully developed application programming interface (API) stack for adding additional service provider value.

Providers of infrastructure services in a multi-tenant environment require comprehensive control and complete visibility of the shared infrastructure to provide the availability, data protection, security, and service levels expected by tenants. The ability to control, manage, and monitor resources at all levels of the infrastructure requires a dynamic, efficient, and flexible design that allows the service provider to access, provision, and then release computing resources from a shared pool – quickly, easily, and with minimal effort.


Technology overview

The Vblock System from VCE is the world's most advanced converged infrastructure—one that optimizes infrastructure, lowers costs, secures the environment, simplifies management, speeds deployment, and promotes innovation. The Vblock System is designed as one architecture that spans the entire portfolio, includes best-in-class components, offers a single point of contact from initiation through support, and provides the industry's most robust range of configurations.

Vblock Systems provide production ready (fully tested) virtualized infrastructure components, including industry-leading technologies from Cisco, EMC, and VMware. Vblock Systems are designed and built to satisfy a broad range of specific customer implementation requirements. To design trusted multi-tenancy, you need to understand each layer (compute, network, and storage) of the Vblock System architecture. Figure 2 provides an example of Vblock System architecture.

Figure 2. Example of Vblock System architecture

Note: Cisco Nexus 7000 is not part of the Vblock System architecture.

This section describes the technologies at each layer of the Vblock System addressed in this guide to achieve trusted multi-tenancy.


Management

Management technologies include Advanced Management Pod (AMP) and EMC Ionix Unified Infrastructure Manager/Provisioning (UIM/P) (optional).

Advanced Management Pod

Vblock Systems include an AMP that provides a single management point for the Vblock System. It enables the following benefits:

Monitors and manages Vblock System health, performance, and capacity

Provides fault isolation for management

Eliminates resource overhead on the Vblock System

Provides a clear demarcation point for remote operations

Two versions of the AMP are available: a mini-AMP and a high-availability version (HA AMP). A high-availability AMP is recommended.

For more information on AMP, refer to the Vblock Systems Architecture Overview documentation located at www.vce.com/vblock.

EMC Ionix Unified Infrastructure Manager/Provisioning

EMC Ionix UIM/P can be used to provide automated provisioning capabilities for the Vblock System in a trusted multi-tenancy environment by combining provisioning with configuration, change, and compliance management. With UIM/P, you can speed service delivery and reduce errors with policy-based, automated converged infrastructure provisioning. Key features include the ability to:

Easily define and create infrastructure service profiles to match business requirements

Separate planning from execution to optimize senior IT technical staff

Respond to dynamic business needs with infrastructure service life cycle management

Maintain Vblock System compliance through policy-based management

Integrate with VMware vCenter and VMware vCloud Director for extended management capabilities


Compute technologies

Within the computing infrastructure of the Vblock System, multi-tenancy concerns at multiple levels must be addressed, including the UCS server infrastructure and the VMware vSphere Hypervisor.

Cisco Unified Computing System

The Cisco UCS is a next-generation data center platform that unites network, compute, storage, and virtualization into a cohesive system designed to reduce total cost of ownership and increase business agility. The system integrates a low-latency, lossless, 10 Gb Ethernet (GbE) unified network fabric with enterprise class x86 architecture servers. The system is an integrated, scalable, multi-chassis platform in which all resources participate in a unified management domain. Whether it has only one server or many servers with thousands of virtual machines (VM), the Cisco UCS is managed as a single system, thereby decoupling scale from complexity.

Cisco UCS Manager provides unified, centralized, embedded management of all software and hardware components of the Cisco UCS across multiple chassis and thousands of virtual machines. The entire UCS is managed as a single logical entity through an intuitive graphical user interface (GUI), a command-line interface (CLI), or an XML API. UCS Manager delivers greater agility and scale for server operations while reducing complexity and risk. It provides flexible role- and policy-based management using service profiles and templates, and it facilitates processes based on IT Infrastructure Library (ITIL) concepts.
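For illustration, the following minimal Python sketch queries UCS Manager through its XML API. It assumes the publicly documented /nuova endpoint and the aaaLogin, configResolveClass, and aaaLogout methods; the hostname, credentials, and printed attributes are placeholders rather than values from this solution.

```python
# Minimal sketch of querying UCS Manager through its XML API. Assumptions:
# the standard /nuova endpoint and the documented aaaLogin, configResolveClass,
# and aaaLogout methods; host and credentials below are placeholders.
import requests
import xml.etree.ElementTree as ET

UCSM = "https://ucsm.example.com/nuova"   # hypothetical UCS Manager address

def call(body: str) -> ET.Element:
    """POST an XML method to UCS Manager and return the parsed response."""
    resp = requests.post(UCSM, data=body, verify=False)  # lab sketch: no cert check
    resp.raise_for_status()
    return ET.fromstring(resp.text)

# Authenticate and obtain a session cookie.
login = call('<aaaLogin inName="admin" inPassword="password" />')
cookie = login.get("outCookie")

try:
    # Resolve all compute blades so tenant service profiles can be mapped
    # back to physical resources.
    blades = call(
        f'<configResolveClass cookie="{cookie}" classId="computeBlade" '
        f'inHierarchical="false" />'
    )
    for blade in blades.iter("computeBlade"):
        print(blade.get("dn"), blade.get("numOfCpus"), blade.get("totalMemory"))
finally:
    call(f'<aaaLogout inCookie="{cookie}" />')
```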

VMware vSphere

VMware vSphere is a complete, scalable, and powerful virtualization platform, delivering the infrastructure and application services that organizations need to transform their information technology and deliver IT as a service. VMware vSphere is a host operating system that runs directly on the Cisco UCS infrastructure and fully virtualizes the underlying hardware, allowing multiple virtual machine guest operating systems to share the UCS physical resources.

VMware vCenter Server

VMware vCenter Server is a simple and efficient way to manage VMware vSphere. It provides unified management of all the hosts and virtual machines in your data center from a single console with aggregate performance monitoring of clusters, hosts and virtual machines. VMware vCenter Server gives administrators deep insight into the status and configuration of clusters, hosts, virtual machines, storage, the guest operating system, and other critical components of a virtual infrastructure. It plays a key role in helping achieve secure separation, availability, tenant management and control, and service provider management and control.

VMware vCloud Director

VMware vCloud Director gives customers the ability to build secure private clouds that dramatically increase data center efficiency and business agility. With VMware vSphere, VMware vCloud Director delivers cloud computing for existing data centers by pooling virtual infrastructure resources and delivering them to users as catalog-based services.


VMware vCenter Chargeback

VMware vCenter Chargeback is an end-to-end metering and cost reporting solution for virtual environments that enables accurate cost measurement, analysis, and reporting of virtual machines using VMware vSphere. Virtual machine resource consumption data is collected from VMware vCenter Server. Integration with VMware vCloud Director also enables automated chargeback for private cloud environments.
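The metering model amounts to multiplying measured consumption by per-unit rates and summing across a tenant's virtual machines. The sketch below illustrates that arithmetic only; the rates, metrics, and virtual machine names are hypothetical, and this is not the vCenter Chargeback API.

```python
# Generic illustration of usage-based chargeback arithmetic. Rates, metrics,
# and virtual machine names are hypothetical; this is not the vCenter
# Chargeback API.
RATES = {            # cost per unit per hour
    "vcpu": 0.05,
    "memory_gb": 0.02,
    "storage_gb": 0.001,
    "network_gb": 0.01,
}

def vm_cost(usage: dict, hours: float) -> float:
    """Sum rate * measured usage * billing hours for one virtual machine."""
    return sum(RATES[metric] * amount * hours for metric, amount in usage.items())

tenant_vms = {
    "web-01": {"vcpu": 2, "memory_gb": 8, "storage_gb": 100, "network_gb": 5},
    "db-01":  {"vcpu": 4, "memory_gb": 32, "storage_gb": 500, "network_gb": 2},
}

monthly_hours = 730
invoice = {name: round(vm_cost(usage, monthly_hours), 2)
           for name, usage in tenant_vms.items()}
print(invoice, "total:", round(sum(invoice.values()), 2))
```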

VMware vShield

The VMware vShield family of security solutions provides virtualization-aware protection for virtual data centers and cloud environments. VMware vShield products strengthen application and data security, enable trusted multi-tenancy, improve visibility and control, and accelerate IT compliance efforts across the organization.

VMware vShield products include vShield App and vShield Edge. vShield App provides firewall capability between virtual machines by placing a firewall filter on every virtual network adapter. It allows for easy application of firewall policies. vShield Edge virtualizes data center perimeters and offers firewall, VPN, Web load balancer, NAT, and DHCP services.

Storage technologies

The features of multi-tenancy offerings can be combined with standard security methods such as storage area network (SAN) zoning and Ethernet virtual local area networks (VLAN) to segregate, control, and manage storage resources among the infrastructure tenants.

EMC Fully Automated Storage Tiering

EMC Fully Automated Storage Tiering (FAST) automates the movement and placement of data across storage resources as needed. FAST enables continuous optimization of your applications by eliminating trade-offs between capacity and performance, while simultaneously lowering cost and delivering higher service levels.

EMC VNX FAST VP

EMC VNX FAST VP is a policy-based auto-tiering solution that efficiently utilizes storage tiers by moving slices of colder data to high-capacity disks. It increases performance by keeping hotter slices of data on performance drives.

In a VMware vCloud environment, FAST VP enables providers to offer a blended storage offering, reducing the cost of a traditional single-type offering while allowing for a wider range of customer use cases. This helps accommodate a larger cross-section of virtual machines with different performance characteristics.
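Conceptually, FAST VP ranks data slices by recent I/O activity and fills the fastest tier first. The sketch below is a simplified illustration of that placement idea under assumed tier names, capacities, and slice statistics; the actual array algorithm is more sophisticated.

```python
# Simplified sketch of FAST VP-style slice placement: rank slices by recent
# I/O activity ("temperature") and fill the fastest tiers first. Tier names,
# capacities, and slice statistics are assumptions for illustration only.
TIERS = [("EFD", 200), ("SAS", 800), ("NL-SAS", 4000)]   # (tier, capacity in slices)

def place_slices(slices: dict) -> dict:
    """slices maps slice_id -> recent I/O count; returns slice_id -> tier."""
    ranked = sorted(slices, key=slices.get, reverse=True)  # hottest first
    placement, index = {}, 0
    for tier, capacity in TIERS:
        for slice_id in ranked[index:index + capacity]:
            placement[slice_id] = tier
        index += capacity
    return placement

demo = {f"lun0:slice{i}": (1000 - i) % 97 for i in range(50)}
print(place_slices(demo))
```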


EMC FAST Cache

EMC FAST Cache is an industry-leading feature supported by Vblock Systems. It extends the EMC VNX array’s read-write cache and ensures that unpredictable I/O spikes are serviced at enterprise flash drive (EFD) speeds, which is of particular benefit in a VMware vCloud Director environment. Multiple virtual machines on multiple virtual machine file system (VMFS) data stores spread across multiple hosts can generate a very random I/O pattern, placing stress on both the storage processors as well as the DRAM cache. FAST Cache, a standard feature on all Vblock Systems, mitigates the effects of this kind of I/O by extending the DRAM cache for reads and writes, increasing the overall cache performance of the array, improving I/O during usage spikes, and dramatically reducing the overall number of dirty pages and cache misses.

Because FAST Cache is aware of EFD disk tiers available in the array, FAST VP and FAST Cache work together to improve array performance. Data that has been promoted to an EFD tier is never cached inside FAST Cache, ensuring that both options are leveraged in the most efficient way.

EMC PowerPath/VE

EMC PowerPath/VE delivers PowerPath multipathing features to optimize storage access in VMware vSphere virtual environments by removing the administrative overhead associated with load balancing and failover. Use PowerPath/VE to standardize path management across heterogeneous physical and virtual environments. PowerPath/VE enables you to automate optimal server, storage, and path utilization in a dynamic virtual environment.

PowerPath/VE works with VMware vSphere ESXi as a multipathing plug-in that provides enhanced path management capabilities to ESXi hosts. It installs as a kernel module on the vSphere host and plugs in to the vSphere I/O stack framework to bring the advanced multipathing capabilities of PowerPath (dynamic load balancing and automatic failover) to the VMware vSphere platform.

EMC Unified Storage

The EMC Unified Storage system is a highly available architecture capable of five nines availability. The Unified Storage arrays achieve five nines availability by eliminating single points of failure throughout the physical storage stack, using technologies such as dual-ported drives, hot spares, redundant back-end loops, redundant front-end and back-end ports, dual storage processors, redundant fans and power supplies, and cache battery backup.

EMC Unisphere Management Suite

EMC Unisphere provides a simple, integrated experience for managing EMC Unified Storage through both a storage and VMware lens. Key features include a Web-based management interface to discover, monitor, and configure EMC Unified Storage; a self-service support ecosystem that gives quick access to real-time online support tools; automatic event notification to proactively manage critical status changes; and customizable dashboard views and reporting.


EMC Unisphere Quality of Service Manager

EMC Unisphere Quality of Service (QoS) Manager enables dynamic allocation of storage resources to meet service level requirements for critical applications. QoS Manager monitors storage system performance on an application-by-application basis, providing a logical view of application performance on the storage system. In addition to displaying real-time data, QoS Manager can archive performance data for offline trending and analysis.

Network technologies

Multi-tenancy concerns must be addressed at multiple levels within the network infrastructure of the Vblock System. Various methods, including zoning and VLANs, can enforce network separation. Internet Protocol Security (IPsec) also provides application-independent network encryption at the IP layer for additional security.

Cisco Nexus 1000V Series

The Cisco Nexus 1000V is a software switch embedded in the software kernel of VMware vSphere. The Nexus 1000V provides virtual machine-level network visibility, isolation, and security for VMware server virtualization. With the Nexus 1000V Series, virtual machines can leverage the same network configuration, security policy, diagnostic tools, and operational models as their physical server counterparts attached to dedicated physical network ports. Virtualization administrators can access predefined network policies that follow mobile virtual machines to ensure proper connectivity, saving valuable resources for virtual machine administration.

Cisco Nexus 5000 Series

Cisco Nexus 5000 Series switches are data center class, high performance, standards-based Ethernet and Fibre Channel over Ethernet (FCoE) switches that enable the consolidation of LAN, SAN, and cluster network environments onto a single unified fabric.

Cisco Nexus 7000 Series

Cisco Nexus 7000 Series switches are modular switching systems designed for use in the data center. Nexus 7000 switches deliver the scalability, continuous systems operation, and transport flexibility required for 10 Gb/s Ethernet networks today. In addition, the system architecture is capable of supporting future 40 Gb/s Ethernet, 100 Gb/s Ethernet, and unified I/O modules.

Cisco MDS

The Cisco MDS 9000 Series helps build highly available, scalable storage networks with advanced security and unified management. The Cisco MDS 9000 family facilitates secure separation at the network layer with virtual storage area networks (VSAN) and zoning. VSANs help achieve higher security and greater stability in fibre channel (FC) fabrics by providing isolation among devices that are physically connected to the same fabric. The zoning service within a fibre channel fabric provides security between devices sharing the same fabric.


Cisco Data Center Network Manager

Cisco Data Center Network Manager provides an effective tool to manage the Cisco data center infrastructure and actively monitor the SAN and LAN.

Security technologies

RSA Archer eGRC and RSA enVision security technologies can be used to achieve security and compliance.

RSA Archer eGRC

The RSA Archer eGRC Platform for enterprise governance, risk, and compliance has the industry’s most comprehensive library of policies, control standards, procedures, and assessments mapped to current global regulations and industry guidelines. The flexibility of the RSA Archer framework, coupled with this library, provides the service providers and tenants in a trusted multi-tenant environment the mechanism to successfully implement a governance, risk, and compliance program over the Vblock System. This addresses both the components and technologies comprising the Vblock System and the virtualized services and resources it hosts.

Organizations can deploy the RSA Archer eGRC Platform in a variety of configurations, based on the expected user load, utilization, and availability requirements. As business needs evolve, the environment can adapt and scale to meet the new demands. Regardless of the size and solution architecture, the RSA Archer eGRC Platform consists of three logical layers: a .NET Web-enabled interface, the application layer, and a Microsoft SQL database backend.

RSA enVision

The RSA enVision platform is a security information and event management (SIEM) solution that offers a scalable, distributed architecture to collect, store, manage, and correlate event logs generated from all the components comprising the Vblock System–from the physical devices and software products to the management and orchestration and security solutions.

By seamlessly integrating with RSA Archer eGRC, RSA enVision provides both service providers and tenants a powerful solution to collect and correlate raw data into actionable information. Not only does RSA enVision satisfy regulatory compliance requirements, it helps ensure stability and integrity through robust incident management capabilities.


Design framework

This section provides the following information:

End-to-end topology

Logical topology

Logical design details

Overview of tenant anatomy

End-to-end topology

Secure separation creates trusted zones that shield each tenant’s applications, virtual machines, compute, network, and storage from compromise and resource effects caused by adjacent tenants and external threats. The solution framework presented in this guide considers additional technologies that comprehensively provide appropriate in-depth defense. A combination of protective, detective, and reactive controls and solid operational processes are required to deliver protection against internal and external threats.

Key layers include:

Virtual machine and cloud resources (VMware vSphere and VMware vCloud Director)

Virtual access/vSwitch (Cisco Nexus 1000V)

Storage and SAN (Cisco MDS and EMC storage)

Compute (Cisco UCS)

Access and aggregation (Nexus 5000 and Nexus 7000)

Figure 3 illustrates the design framework.


Figure 3. Trusted multi-tenancy design framework

Virtual machine and cloud resources layer

VMware vSphere and VMware vCloud Director are used in the cloud layer to accelerate the delivery and consumption of IT services while maintaining the security and control of the data center.

VMware vCloud Director enables the consolidation of virtual infrastructure across multiple clusters, the encapsulation of application services as portable vApps, and the deployment of those services on-demand with isolation and control.


Virtual access layer/vSwitch

Cisco Nexus 1000V distributed virtual switch acts as the virtual network access layer for the virtual machines. Edge LAN policies such as quality of service marking and vNIC ACLs are implemented at this layer in Nexus 1000V port-profiles.

The following table describes the virtual access layer.

Component | Description
One data center | One primary Nexus 1000V Virtual Supervisor Module (VSM) and one secondary Nexus 1000V Virtual Supervisor Module
VMware ESXi servers | Each running an instance of the Nexus 1000V Virtual Ethernet Module (VEM)
Tenant | Multiple virtual machines running different applications (such as Web server, database, and so forth) for each tenant

Storage and SAN layer

The trusted multi-tenancy design framework is based on the use of storage arrays supporting fibre channel connectivity. The storage arrays connect through MDS SAN switches to the UCS 6120 switches in the access layer. Several layers of security (including zoning, access controls at the guest operating system and ESXi level, and logical unit number (LUN) masking within the VNX) tightly control access to data on the storage system.
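As an illustration of the zoning layer, the sketch below generates single-initiator, single-target zones for one tenant's host adapters against the array's front-end ports. The WWPNs, VSAN number, and zone-naming convention are hypothetical, not values from this solution.

```python
# Sketch of single-initiator/single-target zone generation per tenant.
# WWPNs, VSAN numbers, and the zone-naming convention are hypothetical.
from itertools import product

def build_zones(tenant: str, vsan: int, host_wwpns: list, array_wwpns: list) -> dict:
    """Return {zone_name: (initiator_wwpn, target_wwpn)} for one tenant's VSAN."""
    zones = {}
    for i, (host, target) in enumerate(product(host_wwpns, array_wwpns), start=1):
        zones[f"{tenant}_vsan{vsan}_z{i:02d}"] = (host, target)
    return zones

zones = build_zones(
    tenant="tenantA",
    vsan=101,
    host_wwpns=["20:00:00:25:b5:aa:00:01", "20:00:00:25:b5:aa:00:02"],
    array_wwpns=["50:06:01:60:3e:a0:12:34", "50:06:01:68:3e:a0:12:34"],
)
for name, members in zones.items():
    print(name, members)
```

Each zone pairs exactly one host adapter with one array port, so a misbehaving initiator cannot disturb traffic belonging to another tenant's zones.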

Compute layer

The following table provides an example of the components of a multi-tenant environment virtual compute farm.

Note: A Vblock System may have more resources than what is described in the following table.

Component | Description
Three UCS 5108 chassis | 11 UCS B200 servers (dual quad-core Intel Xeon X5570 CPUs at 2.93 GHz and 96 GB RAM); four UCS B440 servers (four Intel Xeon 7500 series processors and 32 dual in-line memory module slots with 256 GB memory); 10 GbE Cisco VIC converged network adapters (CNA), organized into a VMware ESXi cluster
15 servers (4 clusters) | Each server has two CNAs and is dual-attached to the UCS 6100 fabric interconnects. The CNAs provide LAN and SAN connectivity to the servers, which run the VMware ESXi 5.0 hypervisor, and LAN and SAN services to the hypervisor.


Network layers

Access layer

Nexus 5000 is used at the access layer and connects to the Cisco UCS 6120s. In the Layer 2 access layer, redundant pairs of Cisco UCS 6120 switches aggregate VLANs from the Nexus 1000V distributed virtual switch. FCoE SAN traffic from virtual machines is handed off as FC traffic to a pair of MDS SAN switches, and then to a pair of storage array controllers. FC expansion modules in the UCS 6120 switch provide SAN interconnects to dual SAN fabrics. The UCS 6120 switches are in N Port virtualization (NPV) mode to interoperate with the SAN fabric.

Aggregation layer

Nexus 7000 is used at the aggregation layer. The virtual device context (VDC) feature in the Nexus 7000 separates it into sub-aggregation and aggregation virtual device contexts for Layer 3 routing. The aggregation virtual device context connects to the core network to route the internal data center traffic to the Internet and from the Internet back to the internal data center.

Logical topology

Figure 4 shows the logical topology for the trusted multi-tenancy design framework.


Figure 4. Trusted multi-tenancy logical topology


The logical topology represents the virtual components and virtual connections that exist within the physical topology. The following table describes the topology.

Component | Details
Nexus 7000 | Virtualized aggregation layer switch. Provides redundant paths to the Nexus 5000 access layer; virtual port channel provides a logically loopless topology with convergence times based on EtherChannel. Creates three virtual device contexts (VDC): a WAN edge VDC, a sub-aggregation VDC, and an aggregation VDC. The sub-aggregation VDC connects to the Nexus 5000 and to the aggregation VDC by virtual port channel.
Nexus 5000 | Unified access layer switch. Provides 10 GbE IP connectivity between the Vblock System and the outside world. In a unified storage configuration, the switches also connect the fabric interconnects in the compute layer to the data movers in the storage layer, and provide connectivity to the AMP.
Two UCS 6120 fabric interconnects | Provide a robust compute layer platform. Virtual port channel provides a topology with redundant chassis, cards, and links to the Nexus 5000 and Nexus 7000. Each fabric interconnect connects to one MDS 9148 to form its own fabric, using four 4 Gb/s FC links. The MDS 9148 switches connect to the storage controllers; in this example, the storage array has two controllers, and each MDS 9148 has two connections to each FC storage controller, providing redundancy if an FC controller fails so that the MDS 9148 is not isolated. The fabric interconnects connect to the Nexus 5000 access switch through EtherChannel with dual 10 GbE.
Three UCS chassis | Each chassis is populated with blade servers and Fabric Extenders for redundancy or aggregation of bandwidth.
UCS blade servers | Connect to the SAN fabric through the Cisco UCS 6120XP fabric interconnect, which uses an 8-port 8 Gb Fibre Channel expansion module to access the SAN. Connect to the LAN through the Cisco UCS 6120XP fabric interconnects. These ports require SFP+ adapters; the server ports of the fabric interconnects operate at 10 Gb/s, and the Fibre Channel ports operate at 2/4/8 Gb/s.
EMC VNX storage | Connects to the fabric interconnects with 8 Gb Fibre Channel for block. Connects to the Nexus 5000 access switch through EtherChannel with dual 10 GbE for file.


Tenant traffic flow representation

Figure 5 depicts the traffic flow through each layer of the solution, from the virtual machine level to the storage layer.

Figure 5. Tenant traffic flow


Traffic flow in the data center is classified into the following categories:

Front-end—User to data center, Web, GUI

Back-end—Within data center, multi-tier application, storage, backup

Management—Virtual machine access, application administration, monitoring, and so forth

Note: Front-end traffic, also called client-to-server traffic, traverses the Nexus 7000 aggregation layer and a select number of network-based services.

At the application layer, each tenant may have multiple vApps with applications and different virtual machines for different workloads. The Cisco Nexus 1000V distributed virtual switch acts as the virtual access layer for the virtual machines. Edge LAN policies, such as quality of service marking and vNIC ACLs, can be implemented at the Nexus 1000V. Each ESXi server becomes a virtual Ethernet blade of the Nexus 1000V, called a Virtual Ethernet Module (VEM). Each vNIC connects to the Nexus 1000V through a port group; each port group specifies one or more VLANs used by a virtual machine NIC. The port group can also specify other network attributes, such as rate limit and port security. The VM uplink port profile forwards VLANs belonging to virtual machines, and the system uplink port profile forwards VLANs belonging to management traffic. Virtual machine traffic for different tenants traverses the network through different uplink port profiles, where port security, rate limiting, and quality of service apply to guarantee secure separation and assurance.

VMware vSphere virtual machine NICs are associated with the Cisco Nexus 1000V to be used as uplinks. The network interface virtualization capabilities of the Cisco adapter enable a VMware multi-NIC design on a server that has two 10 Gb physical interfaces, with complete quality of service, bandwidth sharing, and VLAN portability among the virtual adapters. vShield Edge controls all network traffic to and from the virtual data center and helps provide an abstraction of the separation in the cloud environment.

Virtual machine traffic goes through the UCS FEX (I/O module) to the fabric interconnect 6120.

Traffic that is intended to use FC storage passes over an FC port on the fabric interconnect and the Cisco MDS to the storage array, then through a storage processor to reach the specific storage pool or storage group. For example, if a tenant is using a dedicated storage resource with specific disks inside a storage array, traffic is routed to the assigned LUN with a dedicated storage group, RAID group, and disks. NFS traffic passes over a network port on the fabric interconnect and the Cisco Nexus 5000, through a virtual port channel to the storage array, and over a data mover to reach the NFS data store. The NFS export is tagged with a VLAN to ensure security and isolation, with a dedicated storage group, RAID group, and disks. Figure 5 shows an example of a few dedicated tenant storage resources. If the storage is designed as a shared pool, traffic is instead routed to a specific storage pool to draw resources.

ESXi hosts for different tenants pass the server-client and management traffic over a server port and reach the access layer of the Nexus 5000 through virtual port channel.

Server blades on the UCS chassis are allocated to the different tenants. UCS resources can be dedicated or shared. For example, if dedicated servers are used for each tenant, VLANs are assigned per tenant and carried over the dot1Q trunk to the aggregation layer of the Nexus 7000, where each tenant is mapped to a Virtual Routing and Forwarding (VRF) instance. Traffic is routed to the external network over the core.

VMware vSphere logical framework overview

Figure 6 shows the virtual VMware vSphere layer on top of the physical server infrastructure.

Figure 6. vSphere logical framework

The diagram shows blade server technology with three chassis initially dedicated to the VMware vCloud environment. The physical design represents the networking and storage connectivity from the blade chassis to the fabric and SAN, as well as the physical networking infrastructure. (Connectivity between the blade servers and the chassis switching is different and is not shown here.) Two chassis are initially populated with eight blades each for the cloud resource clusters, with an even distribution between the two chassis of blades belonging to each resource cluster.

In this scenario, VMware vSphere resources are organized and separated into management and resource clusters with three resource groups (Gold, Silver, and Bronze). Figure 7 illustrates the management cluster and resource groups.

Figure 7. Management cluster and resource groups

Cloud management clusters

A cloud management cluster contains all core components and services needed to run the cloud. A resource group, or compute cluster, represents dedicated resources for cloud consumption. It is best to use a separate cluster, outside the Vblock System resources used for tenant workloads, for cloud management.

Each resource group is a cluster of VMware ESXi hosts managed by a VMware vCenter Server, and is under the control of VMware vCloud Director. VMware vCloud Director can manage the resources of multiple resource groups or multiple compute clusters.

Cloud management components

At a minimum, the following components run as virtual machines on the management cluster hosts:

vCenter Server: 1
vCenter Database: 1
vCenter Update Manager: 1
vCenter Update Manager Database: 1
vCloud Director Cells: 2 (for multi-cell)
vCloud Director Database: 1
vCenter Chargeback Server: 1
vCenter Chargeback Database: 1
vShield Manager: 1

Note: A vCloud Director cluster contains one or more vCloud Director servers; these servers are referred to as cells and form the basis of the VMware cloud. A cloud can be formed from multiple cells. The number of vCloud Director cells depends on the size of the vCloud environment and the level of redundancy.

Figure 8 highlights the cloud management cluster.

Figure 8. Cloud management cluster

Resources allocated for cloud use carry little management overhead; for example, cloud resource groups do not host vCenter management virtual machines. Best practices encourage separating the cloud management cluster from the cloud resource group(s) in order to:

Facilitate quicker troubleshooting and problem resolution, because management components are strictly contained in a specified, manageable management cluster.

Keep cloud management components separate from the resources they are managing.

Consistently and transparently manage and carve up resource groups.

Provide an additional step for high availability and redundancy for the trusted multi-tenancy infrastructure.

Resource groups

A resource group is a set of resources dedicated to user workloads and managed by VMware vCenter Server. vCloud Director manages the resources of all attached resource groups within vCenter Servers. All cloud-provisioning tasks are initiated through VMware vCloud Director and passed down to the appropriate vCenter Server instance.

Figure 9 highlights cloud resource groups.

Figure 9. Cloud resource groups

Provisioning resources in standardized groupings promotes a consistent approach for scaling vCloud environments. For consistent workload experience, place each resource group on a separate resource cluster.

The resource group design represents three VMware vSphere High Availability (HA) Distributed Resource Scheduler (DRS) clusters and infrastructure used to run the vApps that are provisioned and managed by VMware vCloud Director.

Logical design

This section provides information about the logical design, including:

Cloud management cluster logical design

VMware vSphere cluster specifications

Host logical design specifications

Host logical configurations for resource groups

VMware vSphere cluster host design specifications for resource groups

Security

Cloud management cluster logical design

The compute design encompasses the VMware ESXi hosts contained in the management cluster. Specifications are listed below.

Number of ESXi hosts: 3
vSphere data center: 1
VMware Distributed Resource Scheduler configuration: Fully automated
VMware High Availability (HA) Enable Host Monitoring: Yes
VMware High Availability Admission Control Policy: Cluster tolerates 1 host failure (percentage based)
VMware High Availability percentage: 67%
VMware High Availability Admission Control Response: Prevent virtual machines from being powered on if they violate availability constraints
VMware High Availability Default VM Restart Priority: N/A
VMware High Availability Host Isolation Response: Leave virtual machine powered on
VMware High Availability Enable VM Monitoring: Yes
VMware High Availability VM Monitoring Sensitivity: Medium

Note: In this section, the scope is limited to only the Vblock System supporting the management component workloads.
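The 67% figure above follows directly from the percentage-based admission control policy: with three hosts tolerating one host failure, one third of cluster capacity is reserved, leaving roughly 67% available for powering on virtual machines. A minimal sketch of that arithmetic follows; the six-host case that yields the 83% used later for the resource clusters is an assumption for illustration.

    def ha_admission_control_percentage(num_hosts: int, host_failures_tolerated: int = 1) -> int:
        """Return the percentage of cluster capacity left available for
        virtual machines after reserving capacity for N host failures."""
        reserved = host_failures_tolerated / num_hosts
        return round((1 - reserved) * 100)

    # Management cluster: 3 hosts tolerating 1 failure
    print(ha_admission_control_percentage(3))   # 67
    # Resource cluster: assuming 6 hosts tolerating 1 failure
    print(ha_admission_control_percentage(6))   # 83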

vSphere cluster specifications

Each VMware ESXi host in the management cluster has the following specifications.

Host type and version: VMware ESXi Installable, version 5.0
Processors: x86 compatible
Storage presented: SAN boot for ESXi, 20 GB; SAN LUN for virtual machines, 2 TB; NFS shared LUN for vCloud Director cells, 1 TB
Networking: Connectivity to all needed VLANs
Memory: Sized to support all management virtual machines; in this case, 96 GB of memory in each host

Note: VMware vCloud Director deployment requires storage for several elements of the overall framework. The first is the storage needed to house the vCloud Director management cluster, including the repository for configuration information, organizations, and allocations stored in an Oracle database. The second is the set of vSphere storage objects presented to vCloud Director as data stores accessed by ESXi servers in the vCloud Director configuration; this storage is managed by the vSphere administrator and consumed by vCloud Director users depending on the vCloud Director configuration. The third is a single NFS data store that serves as a staging area for vApps to be uploaded to a catalog.

Host logical design specifications for cloud management cluster

The following table identifies management components that rely on high availability and fault tolerance for redundancy.

vCenter Server: high availability enabled
vCloud Director: high availability enabled
vCenter Chargeback Server: high availability enabled
vShield Manager: high availability enabled

Host logical configuration for resource groups

The following table identifies the specifications for each VMware ESXi host in the resource cluster.

Host type and version: VMware ESXi Installable, version 5.0
Processors: x86 compatible
Storage presented: SAN boot for ESXi, 20 GB; SAN LUN for virtual machines, 2 TB
Networking: Connectivity to all needed VLANs
Memory: Sized to support virtual machine workloads

VMware vSphere cluster host design specification for resource groups

All VMware vSphere resource clusters are configured similarly with the following specifications.

VMware Distributed Resource Scheduler configuration: Fully automated
VMware Distributed Resource Scheduler Migration Threshold: 3 stars
VMware High Availability Enable Host Monitoring: Yes
VMware High Availability Admission Control Policy: Cluster tolerates 1 host failure (percentage based)
VMware High Availability percentage: 83%
VMware High Availability Admission Control Response: Prevent virtual machines from being powered on if they violate availability constraints
VMware High Availability Default VM Restart Priority: N/A
VMware High Availability Host Isolation Response: Leave virtual machine powered on

Security

The RSA Archer eGRC Platform can be run on a single server, with the application and database components running on the same server. This configuration is suitable for organizations:

With fewer than 50 concurrent users

That do not require a high-performance or high availability solution

For the trusted multi-tenancy framework, RSA enVision can be deployed as a virtual appliance in the AMP. Each Vblock System component can be configured to utilize it as its centralized event manager through its identified collection method. RSA enVision can then be integrated with RSA Archer eGRC per the RSA Security Incident Management Solution configuration guidelines.

Tenant anatomy overview

This design guide uses three tenants as examples: Orange (tenant 1), Vanilla (tenant 2), and Grape (tenant 3). All tenants share the same infrastructure and resources. Each tenant has its own virtual compute, network, and storage resources. Resources are allocated for each tenant based on their business model, requirements, and priorities. Traffic between tenants is restricted, separated, and protected for the trusted multi-tenancy environment.

Figure 10. Trusted multi-tenancy tenant anatomy

In this design guide (and associated configurations), three levels of services are provided in the cloud: Bronze, Silver, and Gold. These tiers define service levels for compute, storage, and network performance. The following table provides sample network and data differentiations by service tier.

Services: Bronze, no additional services; Silver, firewall services; Gold, firewall and load-balancing services
Bandwidth: Bronze, 20%; Silver, 30%; Gold, 40%
Segmentation: Bronze, one VLAN per client with a single Virtual Routing and Forwarding (VRF) instance; Silver and Gold, multiple VLANs per client with a single VRF
Data Protection: Bronze, none; Silver, snap (virtual copy, local site); Gold, clone (mirror copy, local site)
Disaster Recovery: Bronze, none; Silver, remote replication (with a specific recovery point objective (RPO) and recovery time objective (RTO)); Gold, remote replication (any-point-in-time recovery)

Using this tiered model, you can do the following:

Offer service tiers with well-defined and distinct SLAs

Support customer segmentation based on desired service levels and functionality

Allow for differentiated application support based on service tiers
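The tier definitions above can be captured as a simple catalog that provisioning tooling consults when a tenant is onboarded. The following Python sketch is illustrative only; the attribute values are taken from the table above, and the lookup helper is hypothetical.

    SERVICE_TIERS = {
        "Bronze": {"services": "none", "bandwidth_pct": 20,
                   "segmentation": "one VLAN per client, single VRF",
                   "data_protection": None, "disaster_recovery": None},
        "Silver": {"services": "firewall", "bandwidth_pct": 30,
                   "segmentation": "multiple VLANs per client, single VRF",
                   "data_protection": "snap (local site)",
                   "disaster_recovery": "remote replication (defined RPO/RTO)"},
        "Gold":   {"services": "firewall + load balancing", "bandwidth_pct": 40,
                   "segmentation": "multiple VLANs per client, single VRF",
                   "data_protection": "clone (local site)",
                   "disaster_recovery": "remote replication (any point in time)"},
    }

    def tier_for_tenant(tenant_tier: str) -> dict:
        # Raises KeyError early if a tenant requests a tier the catalog does not define.
        return SERVICE_TIERS[tenant_tier]

    print(tier_for_tenant("Gold")["bandwidth_pct"])  # 40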

Design considerations for management and orchestration

Service providers can leverage Unified Infrastructure Manager/Provisioning to provision the Vblock System in a trusted multi-tenancy environment. The AMP cluster of hosts holds UIM/P, which is accessed through a Web browser.

Use UIM/P as a domain manager to provision Vblock Systems as a single entity. UIM/P interacts with the individual element managers for compute, storage, SAN, and virtualization to automate the most common and repetitive operational tasks required to provision services. It also interacts with VMware vCloud Director to automate cloud operations, such as the creation of a virtual data center.

For provisioning, this guide focuses on the functional capabilities provided by UIM/P in a trusted multi-tenancy environment.

As shown in Figure 11, the UIM/P dashboard gives service provider administrators a quick summary of available infrastructure resources. This eliminates the need to perform manual discovery and documentation, thereby reducing the time it takes to begin deploying resources. Once administrators have resource availability information, they can begin to provision existing service offerings or create new ones.

Figure 11. UIM/P dashboard

Figure 12. UIM/P service offerings

Configuration

While UIM/P automates the operational tasks involved in building services on Vblock Systems, administrators need to perform initial task sets on each domain manager before beginning service provisioning. This section describes both key initial tasks to perform on the individual domain managers and operational tasks managed through UIM/P.

The following table shows what is configured as part of initial device configuration and what is configured through UIM/P.

UCS Manager
Initial configuration: Management configuration (IP and credentials), chassis discovery, enable ports, KVM IP pool, create VLANs, assign VLANs, VSANs, LAN MAC pool, SAN World Wide Name (WWN) pool, WWPN pool, boot policies, service templates
Operational configuration completed with UIM/P: Select pools, select boot policy, server UUID pool, create service profile, associate profile to server, install vSphere ESXi

Unisphere and MDS/Nexus
Initial configuration: Management configuration (IP and credentials)
Operational configuration completed with UIM/P: RAID group, storage pool, or both; create LUNs; create storage group; associate host and LUN; zone aliases; zone sets

vCenter
Initial configuration: Create Windows virtual machine, create database, install vCenter software
Operational configuration completed with UIM/P: Create data center, create clusters, high availability policy, DRS policy, distributed power management (DPM) policy, add hosts to cluster, create data stores, create networks

Enabling services

After completing the initial configurations, use the following high-level workflow to enable services.

Stage 1, Vblock System discovery: Gather data for Vblock System devices, interconnectivity, and external networks, and populate the data in the UIM database.

Stage 2, Service planning: Collect service resource requirements, including the number of servers and server attributes, the amount of boot and data storage and storage attributes, the networks to be used for connectivity between the service resources and external networks, and vCenter Server and ESXi cluster information.

Stage 3, Service provisioning: Reserve resources based on the server and storage requirements defined for the service during service planning. Install ESXi on the servers. Configure connectivity between the cluster and external networks.

Stage 4, Service activation: Turn on the system, start up Cisco UCS service profiles, activate network paths, and make resources available for use. The workflow separates provisioning and activation to allow activation of the service as needed.

Stage 5, vCenter synchronization: Synchronize the ESXi clusters with the vCenter Server. Once you provision and activate a service, the synchronizing process includes adding the ESXi cluster to the vCenter Server data store and registering the cluster hosts provisioned with vCenter Server.

Stage 6, vCloud synchronization: Discover vCloud and build a connection to the vCenter servers. The clusters created in vCenter Server are pushed to the appropriate vCloud. UIM/P integrates with vCloud Director in the same way it integrates with vCenter Server.

Figure 13 describes the provisioning, activation, and synchronization process, including key sub-steps during the provisioning process.

Figure 13. Provisioning, activation, and synchronization process flow

Creating a service offering

To create a service offering:

1. Select the operating system.

2. Define server characteristics.

3. Define storage characteristics for startup.

4. Define storage characteristics for application data.

5. Create network profile.

Provisioning a service

To provision a service:

1. Select the service offering.

2. Select Vblock System.

3. Select servers.

4. Configure IP and provide DNS hostname for operating system installation.

5. Select storage.

6. Select and configure network profile and vNICs.

7. Configure vCenter cluster settings.

8. Configure vCloud Director settings.
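Taken together, the offering and provisioning inputs above map naturally to a small data structure that orchestration tooling could assemble before handing the request to UIM/P. The sketch below is illustrative only; it does not reflect UIM/P's actual API objects, and all names and values are hypothetical.

    from dataclasses import dataclass, field

    @dataclass
    class ServiceOffering:
        operating_system: str      # step 1: selected operating system
        server_grade: str          # step 2: server characteristics (e.g. blade model)
        boot_storage_gb: int       # step 3: storage characteristics for startup
        data_storage_gb: int       # step 4: storage characteristics for application data
        network_profile: str       # step 5: network profile name

    @dataclass
    class ServiceRequest:
        offering: ServiceOffering
        vblock_system: str         # target Vblock System
        servers: list = field(default_factory=list)
        dns_hostname: str = ""
        vcenter_cluster: str = ""
        vcloud_org_vdc: str = ""

    offering = ServiceOffering("VMware ESXi 5.0", "UCS B200", 20, 2048, "tenant-orange-net")
    request = ServiceRequest(offering, "Vblock-01", ["blade-1", "blade-2"],
                             "esx01.orange.example.com", "Orange-Cluster", "Orange-OrgVDC")
    print(request.offering.operating_system)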

Design considerations for compute

Within the computing infrastructure of Vblock Systems, multi-tenancy concerns can be managed at multiple levels, from the central processing unit (CPU), through the Cisco Unified Computing System (UCS) server infrastructure, and within the VMware solution elements.

This section describes the design of and rationale behind the trusted multi-tenancy framework. The design addresses many issues that must be considered prior to deployment, as no two environments are alike. Design considerations are provided for the components listed in the following table.

Cisco UCS (version 2.0): Core component of the Vblock System that provides compute resources in the cloud. It helps achieve secure separation, service assurance, security, availability, and service provider management in the trusted multi-tenancy framework.

VMware vSphere (version 5.0): Foundation of the underlying cloud infrastructure and components. Includes VMware ESXi hosts, VMware vCenter Server, resource pools, VMware High Availability and Distributed Resource Scheduler, and VMware vMotion.

VMware vCloud Director (version 1.5): Builds on VMware vSphere to provide a complete multi-tenant infrastructure. It delivers on-demand cloud infrastructure so users can consume virtual resources with maximum agility, and it consolidates data centers and deploys workloads on shared infrastructure with built-in security and role-based access control. Includes VMware vCloud Director Server (two instances, each installed on a Red Hat Linux virtual machine and referred to as a "cell") and VMware vCloud Director Database (one instance per clustered set of VMware vCloud Director cells).

VMware vShield (version 5.0): Provides network security services, including NAT and firewall. Includes vShield Edge (deployed automatically on hosts as virtual appliances by VMware vCloud Director to separate tenants), vShield App (deployed at the ESXi host layer to zone and secure virtual machine traffic), and vShield Manager (one instance per vCenter Server in the cloud resource groups to manage vShield Edge and vShield App).

VMware vCenter Chargeback (version 1.6.2): Provides resource metering and chargeback models. Includes VMware vCenter Chargeback Server, VMware Chargeback Data Collector, VMware vCloud Data Collector, and VMware vShield Manager Data Collector.

Design considerations for secure separation

This section discusses using the following technologies to achieve secure separation at the compute layer:

Cisco UCS

VMware vCloud Director

Cisco UCS

The UCS blade servers contain a pair of Cisco Virtual Interface Card (VIC) Ethernet uplinks. Cisco VIC presents virtual interfaces (UCS vNIC) to the VMware ESXi host, which allow for further traffic segmentation and categorization across all traffic types based on vNIC network policies.

Using port aggregation between the fabric interconnect vNIC pairs enhances the availability and capacity of each traffic category. All inbound traffic is stripped of its VLAN header and switched to the appropriate destination's virtual Ethernet interface. In addition, the Cisco VIC allows for the creation of multiple virtual host bus adapters (vHBA), permitting Fibre Channel boot across the same physical infrastructure.

Each VMware virtual interface type, VMkernel, and individual virtual machine interface connects directly to the Cisco Nexus 1000V software distributed virtual switch. At this layer, packets are tagged with the appropriate VLAN header and all outbound traffic is aggregated to the two Cisco fabric interconnects.

This section contains information about the high-level UCS features that help achieve secure separation in the trusted multi-tenancy framework:

UCS service profiles

UCS organizations

VLAN considerations

VSAN considerations

UCS service profiles

Use UCS service profiles to ensure secure separation at the compute layer. Hardware can be presented in a stateless manner that is completely transparent to the operating system and the applications that run on it. A service profile creates a hardware overlay that contains specific information sensitive to the operating system:

MAC addresses

WWN values

UUID

BIOS

Firmware versions

In a multi-tenant environment, the service provider can define a service profile giving access to any server in a predefined server resource with specific processor, memory, or other administrator-defined characteristics. The service provider can then provision one or more servers through service profiles, which can be used for an organization or a tenant. Service profiles are particularly useful when deployed with UCS Role-Based Access Control (RBAC), which provides granular administrative access control to UCS system resources based on administrative roles in a service provider environment.

Servers instantiated by service profiles start up from a LUN that is tied to the specified WWPN, allowing an installed operating system instance to be locked with the service profile. The independence from server hardware allows installed systems to be re-deployed between blades. Through the use of pools and templates, UCS hardware can be quickly deployed and scaled.
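Conceptually, a service profile is a bundle of identities drawn from pools and applied to whichever blade it is associated with; moving the profile moves the identities, so the installed operating system and its SAN boot LUN follow it. The sketch below models that idea in Python. It is not the UCS Manager API, and the pool values shown are made up for illustration.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ServiceProfile:
        name: str
        mac_address: str       # drawn from a MAC pool
        wwpn: str              # drawn from a WWPN pool; the boot LUN is masked to this WWPN
        uuid: str              # drawn from a UUID pool
        blade: Optional[str] = None

        def associate(self, blade: str) -> None:
            # Associating the profile applies its identities to the blade.
            self.blade = blade

        def migrate(self, new_blade: str) -> None:
            # On blade failure or redeployment the same identities move with the
            # profile, so the OS still boots from the same SAN LUN.
            self.blade = new_blade

    profile = ServiceProfile("orange-esx-01", "00:25:B5:00:00:01",
                             "20:00:00:25:B5:01:00:01",
                             "6f9619ff-8b86-d011-b42d-00c04fc964ff")
    profile.associate("chassis1/blade3")
    profile.migrate("chassis2/blade5")   # hardware changed, identities did not
    print(profile.mac_address, profile.blade)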

The trusted multi-tenancy framework uses three distinct server roles to segregate and classify UCS blade servers. This helps identify and associate specific service profiles depending on their purpose and policy. The following table describes these roles.

Management: These servers can be associated with a service profile that is meant only for cloud management or any type of service provider infrastructure workload.

Dedicated: These servers can be associated with different service profiles, server pools, and roles with VLAN policy; for example, only a specific tenant's VLANs are allowed on servers that are dedicated to that tenant. The trusted multi-tenancy framework considers tenants who want a dedicated UCS cluster to further segregate workloads in the virtualization layer as needed, as well as tenants who want dedicated workload throughput from the underlying compute infrastructure, which maps to the VMware Distributed Resource Scheduler cluster.

Mixed: These servers can be associated with a service profile meant for shared resource clusters for the VMware Distributed Resource Scheduler cluster. Depending on tenant requirements, UCS can be designed to use dedicated or shared compute resources. The trusted multi-tenancy framework uses mixed servers for shared resource clusters as an example.

These servers can be spread across the UCS fabric to minimize the impact of a single point of failure or a single chassis failure.

Figure 14 shows an example of how the three server roles are designed in the trusted multi-tenancy framework.

Figure 14. Trusted multi-tenancy framework server design

Figure 15 shows an example of three tenants (Orange, Vanilla, and Grape) using three service profiles on three different physical blades to ensure secure separation at the blade level.

Figure 15. Secure separation at the blade level

UCS organizations

The Cisco UCS organizations feature helps with multi-tenancy by logically segmenting physical system resources. Organizations are logically isolated in the UCS fabric. UCS hardware and policies can be assigned to different organizations so that the appropriate tenant or organizational unit can access the assigned compute resources. A rich set of policies in UCS can be applied per organization to ensure that the right sets of attributes and I/O policies are assigned to the correct organization. Each organization can have its own pool of resources, including the following:

Resource pools (server, MAC, UUID, WWPN, and so forth)

Policies

Service profiles

Service profile templates

UCS organizations are hierarchical. Root is the top-level organization. System-wide policies and pools defined in root are available to all organizations in the system, while policies and pools created in other organizations are available only to the organizations below them in the same hierarchy.

The functional isolation provided by UCS is helpful for a multi-tenant environment. Use the UCS features of RBAC and locales (a UCS feature to isolate tenant compute resources) on top of organizations to assign or restrict user privileges and roles by organization.
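Because policies and pools defined at root are visible to every organization while those defined lower down are visible only within their own sub-tree, resolving which policy applies to a tenant amounts to walking up the organization hierarchy until a match is found. A minimal sketch of that lookup follows; the organization names mirror Figure 16, and the policy names are hypothetical.

    # Parent relationships in the UCS organization tree (child -> parent).
    ORG_PARENT = {
        "Dedicated": "root", "Mixed": "root", "Management": "root",
        "Orange": "Dedicated", "Vanilla": "Mixed", "Grape": "Mixed",
    }

    # Policies defined per organization; policies in root are system wide.
    ORG_POLICIES = {
        "root": {"bios-policy": "default-bios"},
        "Dedicated": {"qos-policy": "gold"},
        "Orange": {"boot-policy": "orange-san-boot"},
    }

    def resolve_policy(org, policy_name):
        """Walk up from the organization toward root and return the first match."""
        current = org
        while current is not None:
            value = ORG_POLICIES.get(current, {}).get(policy_name)
            if value is not None:
                return value
            current = ORG_PARENT.get(current)  # becomes None once past root
        return None

    print(resolve_policy("Orange", "qos-policy"))   # 'gold', inherited from Dedicated
    print(resolve_policy("Vanilla", "qos-policy"))  # None, not defined in Mixed or root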

Figure 16 shows the hierarchical organization of UCS clusters starting from Root. It shows three types of cluster configurations (Management, Dedicated, and Mixed). Below that are the three tenants (Orange, Vanilla, and Grape) with their service levels (Gold, Silver, and Bronze).

Figure 16. UCS cluster hierarchical organization

UCS allows the creation of resource pools to ensure secure separation between tenants. Use the following:

LAN resources: IP pool, MAC pool, VLAN pool

Management resources: KVM address pool, VLAN pool

SAN resources: WWN address pool, VSANs

Identity resources: UUID pool

Compute resources: server pools

Figure 17 illustrates how creating separate resource pools for the three tenants helps with secure separation at the compute layer.

Figure 17. Resource pools

Figure 18 is an example of a UCS Service Profile workflow diagram for three tenants.

Figure 18. UCS service profile workflow

VLAN considerations

In Cisco UCS, a named VLAN creates a connection to a specific management LAN and tenant-specific VLANs. The VLAN isolates traffic, including broadcast traffic, to that external LAN. The name assigned to a VLAN ID adds a layer of abstraction that you can use to globally update all servers associated with service profiles using the named VLAN. You do not need to reconfigure servers individually to maintain communication with the external LAN. For example, if a service provider wanted to isolate a group of compute clusters for a specific tenant, the specific tenant VLAN needs to be allowed in the service profile of that tenant. This provides another layer of abstraction in secure separation.

To illustrate, if Tenant Orange has dedicated UCS blades, it is recommended to allow only Tenant Orange-specific VLANs to ensure that only Tenant Orange has access to those blades. Figure 19 shows a dedicated service profile for Tenant Orange that uses a vNIC template named Orange. Tenant Orange VLANs are allowed to use that specific vNIC template. However, a global vNIC template can still be used for all blades, providing the ability to allow or disallow specific VLANs from updating service profile templates.

Figure 19. Dedicated service profile for Tenant Orange
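In practice, the dedicated vNIC template for Tenant Orange carries only Orange's named VLANs, and a simple check during service profile creation can catch another tenant's VLANs leaking onto dedicated blades. The following sketch is illustrative; the tenant-to-VLAN mapping and VLAN IDs are hypothetical.

    # Hypothetical named-VLAN assignments per tenant.
    TENANT_VLANS = {
        "Orange": {101, 102, 103},
        "Vanilla": {201, 202},
        "Grape": {301, 302},
    }

    def validate_dedicated_profile(tenant: str, vnic_vlans: set) -> bool:
        """Return True only if every VLAN on the vNIC belongs to the tenant."""
        allowed = TENANT_VLANS.get(tenant, set())
        return vnic_vlans <= allowed

    print(validate_dedicated_profile("Orange", {101, 103}))   # True
    print(validate_dedicated_profile("Orange", {101, 201}))   # False: VLAN 201 belongs to Vanilla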

VSAN considerations in UCS

A named VSAN creates a connection to a specific external SAN. The VSAN isolates traffic, including broadcast traffic, to that external SAN. The traffic on one named VSAN knows that the traffic on another named VSAN exists, but it cannot read or access that traffic.

The name assigned to a VSAN ID adds a layer of abstraction that allows you to globally update all servers associated with service profiles that use the named VSAN. You do not need to individually reconfigure servers to maintain communication with the external SAN. You can create more than one named VSAN with the same VSAN ID.

In a cluster configuration, a named VSAN is configured to be accessible to only the FC uplinks on both fabric interconnects.

Figure 20 shows that VSAN 10 and VSAN 11 are configured in UCS SAN Cloud and uplinked to an FC port.

Figure 20. VSAN configuration in UCS

Figure 21 shows how an FC port is assigned to a VSAN ID in UCS. In this case, uplink FC Port 1 is assigned to VSAN10.

Figure 21. Assigning a VSAN to FC ports

VMware vCloud Director

VMware vCloud Director introduces logical constructs to facilitate multi-tenancy and provide interoperability between vCloud instances built to the vCloud API standard.

VMware vCloud Director helps administer tenants—such as a business unit, organization, or division—by policy. In the trusted multi-tenancy framework, each organization has isolated virtual resources, independent LDAP-based authentication, specific policy controls, and unique catalogs. To ensure secure separation in a trusted multi-tenancy environment where multiple organizations share Vblock System resources, the framework includes VMware vCloud Director along with VMware vShield perimeter protection, port-level firewall, and NAT and DHCP services.

Figure 22 shows a logical separation of organizations in VMware vCloud Director.

Figure 22. Organization separation

A service provider may want to view all the listed tenants or organizations in vCloud Director to easily manage them. Figure 23 shows the service provider’s tenant view in VMware vCloud Director.

Figure 23. Tenant view in vCloud Director

Organizations are the unit of multi-tenancy within vCloud Director. They represent a single logical security boundary. Each organization contains a collection of users, computing resources, catalogs, and vApp workloads. Organization users can be local users or imported from an LDAP server. LDAP integration can be specific to an organization, or it can leverage an organizational unit within the system LDAP configuration, as defined by the vCloud system administrator. The name of the organization, specified during creation time, maps to a unique URL that allows access to the GUI for that organization. For example, Figure 24 shows that Tenant Orange maps to a specific default organization URL. Each tenant accesses the resource using its own URL and authentication.

Figure 24. Organization unique identifier URL
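Because each organization is addressable through its own URL and authenticates its own users, tenant-side automation typically begins by logging in to the vCloud API and listing the organizations visible to that user. The sketch below uses the vCloud Director 1.5 REST API with the Python requests library; the host name and credentials are placeholders, error handling is omitted, and certificate verification is disabled only for lab use.

    import requests

    VCD = "https://vcloud.example.com"     # placeholder vCloud Director cell address
    USER = "admin@Orange"                  # user@organization ('admin@System' for providers)
    PASSWORD = "changeme"                  # placeholder
    HEADERS = {"Accept": "application/*+xml;version=1.5"}

    # Log in: vCloud Director returns a session token in the x-vcloud-authorization header.
    session = requests.post(f"{VCD}/api/sessions", auth=(USER, PASSWORD),
                            headers=HEADERS, verify=False)
    token = session.headers["x-vcloud-authorization"]

    # List the organizations visible to this user.
    orgs = requests.get(f"{VCD}/api/org",
                        headers={**HEADERS, "x-vcloud-authorization": token},
                        verify=False)
    print(orgs.status_code)
    print(orgs.text[:200])   # XML list of <Org> references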

The vCloud Director network provides an extra layer of separation. vCloud Director has three different types of networking, each with a specific purpose:

External network

Organization network

vApp network

External network

The external network is the connection to the outside world. An external network always needs a port group, meaning that a port group needs to be available within VMware vSphere and the distributed switch.

Tenants commonly require direct connections from inside the vCloud environment into the service provider networking backbone. This is analogous to extending a wire from the network switch containing the network or VLAN to be used, all the way through the vCloud layers into the vApp. Each organization in the trusted multi-tenancy environment has an internal organization network and a direct connect external organization network.

Organization network

An organization network provides network connectivity to vApp workloads within an organization. Users in an organization have no visibility into external networks and connect to outside networks through external organization networks. This is analogous to users in an organization connecting to a corporate network that is uplinked to a service provider for Internet access.

The following table lists connectivity options for organization networks.

External organization network: direct connection
External organization network: NAT/routed
Internal organization network: isolated

A directly connected external organization network places the vApp virtual machines in the port group of the external network. IP address assignments for vApps follow the external network IP addressing.

Internal and routed external organization networks are instantiated through network pools by vCloud system administrators. Organization administrators do not have the ability to provision organization networks but can configure network services such as firewall, NAT, DHCP, VPN, and static routing.

Note: Organization network is meant only for the intra-organization network and is specific to an organization.

Figure 25 shows an example of an internal and external network configuration.

Figure 25. Internal and external organization networks

Service providers provision organization networks using network pools. Figure 26 shows the service provider’s administrator view of the organization networks.

Figure 26. Administrator view of organization networks

vApp network

A vApp network is similar to an organization network. It is meant for a vApp internal network. It acts as a boundary for isolating specific virtual machines within a vApp. A vApp network is an isolated segment created for a particular application stack within an organization’s network to enable multi-tier applications to communicate with each other and, at the same time, isolate the intra-vApp traffic from other applications within the organization. The resources to create the isolation are managed by the organization administrator and allocated from a pool provided by the vCloud administrator.

Figure 27 shows a vApp configuration for Tenant Grape.

Figure 27. Micro-segmentation of virtual workloads

Network pools

All three network classes can be backed by the virtual network features of the Nexus 1000V. It is important to understand the relationship between the virtual networking features of the Nexus 1000V and the classes of networks defined and implemented in a vCloud Director environment. Typically, a network class (specifically, organization and vApp networks) is described as being backed by an allocation of isolated networks. For an organization administrator to create an isolated vApp network, the administrator must have a free isolation resource to consume in order to provide that isolated network for the vApp.

To deploy an organization or vApp network, you need a network pool in vCloud Director. Network pools contain network definitions used to instantiate private/routed organization and vApp networks. Networks created from network pools are isolated at Layer 2. You can create three types of network pools in vCloud Director, as shown in the following table.

vSphere port group backed: Network pools are backed by pre-provisioned port groups in the Cisco Nexus 1000V or a VMware distributed switch.

VLAN backed: A range of pre-provisioned VLAN IDs backs the network pools. This assumes all VLANs specified are trunked.

vCloud Director network isolation backed: Network pools are backed by vCloud isolated networks, which are overlay networks uniquely identified by a fence ID and implemented through encapsulation techniques that span hosts and provide traffic isolation from other networks. This type requires a distributed switch; vCloud Director creates port groups automatically on distributed switches as needed.

Figure 28 shows how network pool types are presented in VMware vCloud Director.

Figure 28. Network pools

Each pool type has specific requirements, limitations, and recommendations. The trusted multi-tenancy framework uses a port group backed network pool with a Cisco Nexus 1000V distributed switch. Each port group is isolated to its own VLAN ID. Each tenant network is associated with its own network pool, each backed by a set of port groups.

VMware vCloud Director automatically deploys vShield Edge devices to facilitate routed network connections. vShield Edge uses MAC encapsulation for NAT routing, which helps prevent Layer 2 network information from being seen by other organizations in the environment. vShield Edge also provides a firewall service that can be configured to block inbound traffic to virtual machines connected to a public access organization network.

Design considerations for service assurance

This section discusses using the following technologies to achieve service assurance at the compute layer:

Cisco UCS

VMware vCloud Director

Cisco UCS

The following UCS features support service assurance:

Quality of service

Port channels

Server pools

Redundant UCS fabrics

Compute, storage, and network resources need to be categorized in order to provide a differentiated service model for a multi-tenant environment. The following table shows an example of Gold, Silver, and Bronze service levels for compute resources.

Gold: UCS B440 blades
Silver: UCS B200 and B440 blades
Bronze: UCS B200 blades

System classes in the UCS specify the bandwidth allocated for traffic types across the entire system. Each system class reserves a specific segment of the bandwidth for a specific type of traffic. Using quality of service policies, the UCS assigns a system class to the outgoing traffic and then matches a quality of service policy to the class of service (CoS) value marked by the Nexus 1000V Series switch for each virtual machine.

UCS quality of service configuration can help achieve service assurance for multiple tenants. A best practice to ensure guaranteed quality of service throughout a multi-tenant environment is to configure quality of service for different service levels on the UCS.

Figure 29 shows different quality of service weight values configured for different class of service values that correspond to Gold, Silver, and Bronze service levels. This helps ensure traffic priority for tenants associated with those service levels.

Figure 29. Quality of service configuration

Quality of service policies assign a system class to the outgoing traffic for a vNIC or vHBA. Therefore, to configure the vNIC or vHBA, include a quality of service policy in a vNIC or vHBA policy and then include that policy in a service profile. Figure 30 shows how to create quality of service policies.

Figure 30. Creating quality of service policy
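Because UCS system classes share bandwidth by relative weight rather than absolute rate, the guaranteed share for each class is its weight divided by the sum of all active weights. A short sketch of that calculation follows, using illustrative weights for the Gold, Silver, and Bronze classes; the actual weights configured in Figure 29 may differ.

    def bandwidth_share(weights: dict) -> dict:
        """Convert per-class QoS weights into guaranteed bandwidth percentages."""
        total = sum(weights.values())
        return {cls: round(100 * w / total, 1) for cls, w in weights.items()}

    # Hypothetical weights for the three service levels plus best-effort traffic.
    print(bandwidth_share({"gold": 6, "silver": 4, "bronze": 2, "best-effort": 2}))
    # {'gold': 42.9, 'silver': 28.6, 'bronze': 14.3, 'best-effort': 14.3}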

VMware vCloud Director

VMware vCloud Director provides several allocation models to achieve service levels in the trusted multi-tenancy framework. An organization virtual data center allocates resources from a provider virtual data center and makes them available for use by a given organization. Multiple organization virtual data centers can draw from the same provider virtual data center, and one organization can have multiple organization virtual data centers.

Resources are taken from a provider virtual data center and allocated to an organization virtual data center using one of three resource allocation models, as shown in the following table.

Pay as you go: Resources are reserved and committed for vApps only as vApps are created. There is no upfront reservation of resources.

Allocation: A baseline amount (guarantee) of resources from the provider virtual data center is reserved for the organization virtual data center's exclusive use. An additional percentage of resources is available to oversubscribe CPU and memory, but this taps into compute resources that are shared with other organization virtual data centers drawing from the provider virtual data center.

Reservation: All resources assigned to the organization virtual data center are reserved exclusively for the organization virtual data center's use.

With all of the above models, the organization can be set to deploy an unlimited or limited number of virtual machines. In selecting the appropriate allocation model, consider the service definition and the organization's use case workloads.

Although all tenants use the shared infrastructure, the resources for each tenant are guaranteed based on the allocation model in place. The service provider can set the parameters for CPU, memory, storage, and network for each tenant’s organization virtual data center, as shown in Figure 31, Figure 32, and Figure 33.

Figure 31. Organization virtual data center allocation configuration

Figure 32. Organization virtual data center storage allocation

Figure 33. Organization virtual data center network pool allocation
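The practical difference between the models is how much of the provider virtual data center is locked up per tenant. A rough sketch of the reservation arithmetic for the Allocation model follows, with hypothetical numbers for the allocated capacity and guarantee percentage.

    def allocation_model_reservation(allocated_ghz: float, allocated_gb: float,
                                     guarantee_pct: float) -> dict:
        """Under the Allocation model, only the guaranteed fraction is reserved
        up front; the remainder is drawn from shared provider capacity."""
        return {
            "reserved_cpu_ghz": allocated_ghz * guarantee_pct / 100,
            "burst_cpu_ghz": allocated_ghz * (100 - guarantee_pct) / 100,
            "reserved_memory_gb": allocated_gb * guarantee_pct / 100,
            "burst_memory_gb": allocated_gb * (100 - guarantee_pct) / 100,
        }

    # Hypothetical organization vDC: 20 GHz CPU and 64 GB memory allocated, 75% guaranteed.
    print(allocation_model_reservation(20, 64, 75))
    # {'reserved_cpu_ghz': 15.0, 'burst_cpu_ghz': 5.0,
    #  'reserved_memory_gb': 48.0, 'burst_memory_gb': 16.0}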

Design considerations for security and compliance

This section discusses using the following technologies to achieve security and compliance at the compute layer:

Cisco UCS

VMware vCloud Director

VMware vCenter Server

Cisco UCS

The UCS Role-Based Access Control (RBAC) feature helps ensure security by providing granular administrative access control to the UCS system resources based on administrative roles, tenant organization, and locale.

The RBAC function of the Cisco UCS allows you to control service provider user access to the actions and resources in the UCS. RBAC is a security mechanism that can greatly lower the cost and complexity of Vblock System security administration. RBAC simplifies security administration by using roles, hierarchies, and constraints to organize privileges. Cisco UCS Manager offers flexible RBAC to define the roles and privileges for different administrators within the Cisco UCS environment.

The UCS RBAC allows access to be controlled based on the roles assigned to individuals. The following table lists the elements of the UCS RBAC model.

Role: A job function within the context of a locale, along with the authority and responsibility given to the user assigned to the role.

User: A person using the UCS; users are assigned to one or more roles.

Action: Any task a user can perform in the UCS that is subject to access control; an action is performed on a resource.

Privilege: Permission granted or denied to a role to perform an action.

Locale: A logical object created to manage organizations and determine which users have privileges to use the resources in those organizations.

The UCS RBAC feature can help service providers segregate roles to manage multiple tenants. One example is using UCS RBAC with LDAP integration to ensure all roles are defined and have only the access appropriate to their role. A service provider can leverage this feature in a multi-tenant environment to ensure a high level of centralized security control. LDAP groups can be created for different administration roles, such as network, storage, server profiles, security, and operations. This helps providers keep security and compliance in place by having designated roles to configure different parts of the Vblock System.
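Put together, the elements in the table reduce to a simple authorization test: a user may perform an action on a resource only if one of the user's roles grants the required privilege and the resource's organization falls within one of the user's locales. The schematic sketch below illustrates that check; the role names, locales, and privilege names are illustrative and not the UCS Manager data model.

    ROLE_PRIVILEGES = {
        "network": {"configure-vlan", "configure-vnic"},
        "storage": {"configure-vhba", "configure-vsan"},
        "read-only": set(),
    }

    def is_authorized(user_roles: set, user_locales: set,
                      privilege: str, resource_org: str) -> bool:
        """Grant access only when a role supplies the privilege and the
        resource's organization is covered by one of the user's locales."""
        has_privilege = any(privilege in ROLE_PRIVILEGES.get(r, set()) for r in user_roles)
        in_locale = resource_org in user_locales
        return has_privilege and in_locale

    # A member of the ucsnetwork LDAP group mapped to the 'network' role,
    # scoped to the Orange organization's locale:
    print(is_authorized({"network"}, {"Orange"}, "configure-vlan", "Orange"))    # True
    print(is_authorized({"network"}, {"Orange"}, "configure-vlan", "Grape"))     # False
    print(is_authorized({"read-only"}, {"Orange"}, "configure-vlan", "Orange"))  # False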

Figure 34 shows an LDAP group mapped to a specific role in a UCS. An Active Directory group called ucsnetwork is mapped to a predefined network role in UCS. This means that anyone belonging to the ucsnetwork group in Active Directory can perform a network task in UCS; other features are shown as read-only.

Figure 34. LDAP group mapping in UCS

Figure 35 illustrates how UCS groups provide hierarchy. It shows how group ucsnetwork is laid out in an Active Directory domain.

Figure 35. Active Directory groups for UCS LDAP

Additional UCS security control features include the following:

Administrative access to the Cisco UCS is authenticated by using either:

- A remote protocol such as LDAP, RADIUS, or TACACS+

- A combination of local database and remote protocols

HTTPS provides authenticated and encrypted access to the Cisco UCS Manager GUI. HTTPS uses components of the Public Key Infrastructure (PKI), such as digital certificates, to establish secure communications between the client’s browser and Cisco UCS Manager.

VMware vCloud Director

Role-based and centralized user authentication through multi-party Active Directory/LDAP integration is the best way to manage the cloud. In VMware vCloud Director, each organization represents a collection of end users, groups, and computing resources. Users authenticate at the organization level, using credentials validated through LDAP. Set this up based on the cloud organization’s requirements.

For example, Service Provider–VCE can have its own Active Directory infrastructure for users and groups to authenticate to the vCloud environment, and Tenant Orange can have its own Active Directory to manage authentication to the vCloud environment. Having each organization use its own Active Directory improves security by easing integration with the organization's identity and access management processes and controls, and it ensures that only authorized users have access to the tenant cloud infrastructure. Figure 36 and Figure 37 show both the service provider and organization LDAP integration and the difference in LDAP server settings.

Figure 36. Service provider LDAP integration

Figure 37. Organization LDAP integration

Each tenant has its own user and group management and provides role-based security access, as shown in Figure 38. The users are shown only the vApps that they can access. vApps that users do not have access to are not visible, even if they reside within the same organization.

Figure 38. User role management

VMware vCenter Server

VMware vCenter Server is installed using a local administrator account. When vCenter Server is joined to a domain, it results in any domain administrator gaining administrative privileges to vCenter. To remove this potential security risk, it is recommended to always create a vCenter Administrator group in an Active Directory and assign it to the vCenter Server Administrator role, making it possible to remove the local administrators group from this role.

Note: Refer to the vSphere Security Hardening Guide at www.vmware.com for more information.

Figure 39 shows a VMware Admins group created in Active Directory for the trusted multi-tenancy framework. This group has access to the trusted multi-tenancy vCenter data center, and a member of this group can perform vCenter administration.

Figure 39. vCenter administration

Design considerations for availability and data protection

Availability and Disaster Recovery (DR) focuses on the recovery of systems and infrastructure after an incident interrupts normal operations. A disaster can be defined as partial or complete unavailability of resources and services, including applications, the virtualization layer, the cloud layer, or the workloads running in the resource groups.

Good practices at the infrastructure level will lead to easier disaster recovery of the cloud management cluster. This includes technologies such as high availability, DRS, and vMotion for reactive and proactive protection of your infrastructure.

This section discusses using the following technologies to achieve availability and data protection at the compute layer:

Cisco UCS

Virtualization

Cisco UCS

Fabric interconnect clustering allows each fabric interconnect to continuously monitor the other’s status. If one fabric interconnect becomes unavailable, the other takes over automatically.

Figure 40 shows how Cisco UCS is deployed as a high availability cluster for management layer redundancy. It is configured as two Cisco UCS 6100 Series fabric interconnects directly connected with Ethernet cables between the L1 (L1-to-L1) and L2 (L2-to-L2) ports.

Figure 40. Fabric interconnect clustering

Service profile dynamic mobility provides another layer of protection. When a physical blade server fails, the service profile is automatically transferred to an available server in the pool.

Virtual port channel in UCS

With virtual port channel uplinks, there is minimal impact from both physical link failures and upstream switch failures. With more physical member links in one larger logical uplink, there is the potential for better overall uplink load balancing and higher availability.

Figure 41 shows how port channels 101 and 102 are configured with four uplink members.

Figure 41. Virtual port channel in UCS

Virtualization

Enable overall cloud availability design for tenants using the following features:

VMware vSphere HA

VMware vCenter Heartbeat

VMware vMotion

VMware vCloud Director cells

VMware vSphere High Availability

VMware High Availability clusters enable a collection of VMware ESXi hosts to work together to provide, as a group, higher levels of availability for virtual machines than each ESXi host could provide individually. When planning the creation and use of a new VMware High Availability cluster, the options you select affect how that cluster responds to failures of hosts or virtual machines.

VMware High Availability provides high availability for virtual machines by pooling the machines and the hosts on which they reside into a cluster. Hosts in the cluster are monitored and in the event of a failure, the virtual machines on the failed host are restarted on alternate hosts.

In the trusted multi-tenancy framework, all VMware High Availability clusters are deployed with identical server hardware. Using identical hardware provides a number of key advantages, including the following:

Simplified configuration and management of the servers using host profiles

Increased ability to handle server failures and reduced resource fragmentation

VMware vMotion

VMware vMotion enables the live migration of running virtual machines from one physical server to another with zero downtime, continuous service availability, and complete transaction integrity. Use VMware vMotion to:

Perform hardware maintenance without scheduled downtime

Proactively migrate virtual machines away from failing or underperforming servers

Automatically optimize and allocate entire pools of resources for optimal hardware utilization and alignment with business priorities

VMware vCenter Heartbeat

Use VMware vCenter Heartbeat to protect vCenter Server and provide an additional layer of resiliency. The vCenter Heartbeat server works by replicating all vCenter configuration and data to a secondary passive server over a dedicated network channel. The secondary server is up all the time, with the live configuration of the active server, but an IP packet filter masks it from the active network.

Figure 42 shows a scenario in which the complete hardware goes down, the operating system crashes, or the active vCenter link is down.

Figure 42. vCenter Heartbeat scenario

VMware vCloud Director cells

VMware vCloud Director cells are stateless front-end processors for the vCloud. Each cell serves a variety of purposes and coordinates various functions with the other cells while connecting to a central database. A cell manages connectivity to the cloud and provides both API and GUI endpoints.

Figure 43 shows the trusted multi-tenancy framework using multiple cells (a load-balanced group) to address availability and scale. This is typically achieved by load balancing or content switching this front-end layer. Load balancers present a consistent address for services regardless of the underlying node responding. They can spread session load across cells, monitor cell health, and add or remove cells from the active service pool.

Figure 43. vCloud Director multi-cell
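The following Python sketch illustrates the health-monitoring role the load balancer plays; the cell addresses and the health-check path are assumptions and should be replaced with the values appropriate for your vCloud Director deployment.

import urllib.error
import urllib.request

# Hypothetical cell addresses and health-check path for a load-balanced cell group.
CELLS = ["https://vcd-cell1.example.com", "https://vcd-cell2.example.com"]
HEALTH_PATH = "/cloud/server_status"

def healthy_cells(cells, path=HEALTH_PATH, timeout=5):
    """Return the subset of cells that answer the health probe with HTTP 200."""
    alive = []
    for cell in cells:
        try:
            with urllib.request.urlopen(cell + path, timeout=timeout) as resp:
                if resp.status == 200:
                    alive.append(cell)
        except (urllib.error.URLError, OSError):
            pass  # an unreachable or unhealthy cell stays out of the active service pool
    return alive

if __name__ == "__main__":
    print("Active service pool:", healthy_cells(CELLS))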

Single point of failure

To ensure successful implementation of availability, which is a crucial part of the trusted multi-tenancy design, carefully consider each component listed in the following table.

Component | Availability options
ESXi hosts | Configure all VMware ESXi hosts in highly available clusters with a minimum of N+1 redundancy. This provides protection not only for the virtual machines, but also for the virtual machines hosting the platform portal/management applications and all of the vShield Edge appliances.
ESXi host network connectivity | Configure the ESXi host with a minimum of two physical paths to each required network (port group) to ensure that a single link failure does not impact platform or virtual machine connectivity. This should include the management and vMotion networks. The Load Based Teaming mechanism is used to avoid oversubscribed network links.
ESXi host storage connectivity | Configure ESXi hosts with a minimum of two physical paths to each LUN or NFS share to ensure that a single storage path failure does not impact service.
vCenter Server | Run vCenter Server as a virtual machine and make use of vCenter Server Heartbeat.
vCenter database | vCenter Heartbeat provides vCenter database resiliency.
vShield Manager | vShield Manager receives the additional protection of VMware FT, resulting in seamless failover between hosts in the event of a host failure.
vCenter Chargeback | Deploy vCenter Chargeback virtual machines as a two-node, load-balanced cluster. Deploy multiple Chargeback data collectors remotely to avoid a single point of failure.
vCloud Director | Deploy the vCloud Director virtual machines as a load-balanced, highly available clustered pair in an N+1 redundancy setup, with the option to scale out when the environment requires it.

VMware Site Recovery Manager

In addition to other components, you can use VMware Site Recovery Manager (SRM) for disaster recovery and availability. Site Recovery Manager accelerates recovery by automating the recovery process, and it simplifies the management of disaster recovery plans by making disaster recovery an integrated element of the management of your VMware virtual infrastructure. VMware Site Recovery Manager is fully supported on the Vblock System; however, it is not supported with VMware vCloud Director and is not within the scope of this design guide.

Design considerations for tenant management and control

This section discusses using VMware vCloud Director to achieve tenant management and control at the compute layer.

VMware vCloud Director

VMware vCloud Director provides an intuitive Web portal (vCloud Self Service Portal) that organization users use to manage their compute, storage, and network resources. In general, a dedicated group of users in a tenant manages the organization resources, such as creating or assigning networks and catalogs and allocating memory, CPU, or storage resources to an organization.

As shown in Figure 44, tenants can create vApps or deploy them from templates. Tenants can create vApp networks as needed from the network pool; use the browser plug-in to upload media and access the consoles of the virtual machines in a vApp; and start and stop the virtual machines as needed. For example, when Tenant Orange wants to access its virtual environment, it points to the URL https://vcd1.pluto.vcelab.net/cloud/org/orange.

Figure 44. vApp administration
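Because each organization is reached through its own URL suffix, a provider portal can derive tenant entry points from the organization name alone. A minimal Python sketch, reusing the vcd1.pluto.vcelab.net address from the example above (the tenant names are illustrative):

from urllib.parse import quote

VCD_BASE = "https://vcd1.pluto.vcelab.net/cloud/org/"

def org_portal_url(org_name: str) -> str:
    """Build the self-service portal URL for a vCloud Director organization."""
    return VCD_BASE + quote(org_name.lower())

for tenant in ("Orange", "Vanilla", "Grape"):
    print(f"Tenant {tenant}: {org_portal_url(tenant)}")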

Tenant in-control configuration

The tenants can manage users and groups, policies, and the catalogs for their environment, as shown in Figure 45.

Figure 45. Environment administration

Design considerations for service provider management and control

This section discusses using virtualization technologies to achieve service provider management and control at the compute layer.

Virtualization

A service provider has access to the entire VMware vSphere and VMware vCloud environment and can flexibly manage and monitor it. A service provider can access and manage the following:

vCenter with a virtual infrastructure (VI) client

Cisco UCS

vCloud with a Web browser pointing to the vCloud Director cell address

vShield Manager with a Web browser pointing to the IP or hostname

vCenter Chargeback with a Web browser pointing to the IP or hostname

Cisco Nexus 1000V with SSH to Virtual Supervisor Module

For example, in vCloud Director, the service provider is in complete control of the physical infrastructure. The service provider can:

Enable or disable ESXi hosts and data stores for cloud usage

Create and remove the external networks that are needed for communicating with the Internet, backup networks, IP-based storage networks, VPNs, and MPLS networks, as well as the organization networks and network pools

Create and remove the organization, administration users, provider virtual data center, and organization virtual data centers

Determine which organization can share the catalog with others

Figure 46 shows how a service provider views the complete physical infrastructure in vCloud Director.

Figure 46. Service provider view

VMware vCenter Chargeback

VMware vCenter Chargeback is an end-to-end metering and cost reporting solution for virtual environments using VMware vSphere. It has the following core components:

Data Collectors:

- Chargeback Data Collector—responsible for vCenter Server data collection

- vCloud Director (vCD) and vShield Manager (vSM) data collectors — responsible for utilization/allocation collection on the new abstraction layer created by vCloud Director

Load Balancer (embedded in vCenter Chargeback) — receives and routes all user requests to the application; needs to be installed only once for the Chargeback cluster

Chargeback Server and chargeback database

Figure 47 shows a Vblock System chargeback deployment architecture model.

Figure 47. Vblock System chargeback deployment architecture

Key Vblock System metrics

When determining a metering methodology for trusted multi-tenancy, consider the following:

What metrics (units, components, or attributes) will be monitored?

How will the metrics be obtained?

What sampling frequency will be used for each metric?

How will the metrics be aggregated and correlated to formulate meaningful business value?

Within a Vblock System virtualized computing environment, the infrastructure chargeback details can be modeled as fully loaded measurements per virtual machine. The virtual machine essentially becomes the point resource allocated back to users/customers. Below are some of the key metrics to collect when measuring virtual machine resource utilization:

Resource | Chargeback metrics | Unit of measurement
CPU | CPU usage | GHz
CPU | Virtual CPU (vCPU) | Count
Memory | Memory usage | GB
Memory | Memory size | GB
Network | Network received/transmitted usage | GB
Disk | Storage usage | GB
Disk | Disk read/write usage | GB

For more information, see Guidelines for Metering and Chargeback Using VMware vCenter Chargeback on www.vce.com.
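A minimal Python sketch of how these per-virtual-machine metrics might be rolled up into a fully loaded monthly charge; the rate card values are hypothetical and would come from the service provider's own pricing model rather than from vCenter Chargeback itself.

# Hypothetical per-unit monthly rates for the metrics in the table above.
RATES = {
    "cpu_usage_ghz": 20.00,     # per GHz of CPU usage
    "vcpu_count": 5.00,         # per virtual CPU
    "memory_usage_gb": 8.00,    # per GB of memory used
    "memory_size_gb": 2.00,     # per GB of memory allocated
    "network_gb": 0.10,         # per GB received/transmitted
    "storage_usage_gb": 0.25,   # per GB of storage consumed
    "disk_io_gb": 0.05,         # per GB read/written
}

def monthly_charge(vm_metrics: dict) -> float:
    """Multiply each collected metric by its rate and sum to a per-VM charge."""
    return round(sum(vm_metrics[key] * rate for key, rate in RATES.items()), 2)

vm = {"cpu_usage_ghz": 1.8, "vcpu_count": 2, "memory_usage_gb": 3.2,
      "memory_size_gb": 4, "network_gb": 120, "storage_usage_gb": 80, "disk_io_gb": 200}
print("Monthly charge for this virtual machine:", monthly_charge(vm))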

Design considerations for storage

Multi-tenancy features can be combined with standard security methods, such as storage area network (SAN) zoning and Ethernet VLANs, to segregate, control, and manage storage resources among the infrastructure’s tenants. Multi-tenancy offerings include data-at-rest encryption; secure transmission of data; and bandwidth, cache, CPU, and disk drive isolation.

This section describes the design of and rationale behind storage technologies in the trusted multi-tenancy framework. The design addresses many issues that must be considered prior to deployment.

Design considerations for secure separation

The fundamental principle that makes multi-tenancy secure is that no tenant can access another’s data. Secure separation is essential to reaching this goal. At the storage layer, secure separation can be divided into the following basic requirements:

Segmentation of path by VSAN and zoning

Separation of data at rest

Address space separation

Separation of data access

Segmentation by VSAN and zoning

To extend secure separation to the storage layer, consider the isolation mechanisms available in a SAN environment.

Cisco MDS storage area networks (SAN) offer true segmentation mechanisms, similar to VLANs in Ethernet. These mechanisms, called VSANs, work with fibre channel zones; however, VSANs do not tie into the virtual host bus adapter (HBA) of a virtual machine. VSANs and zones associate to a host rather than a virtual machine. All virtual machines running on a particular host belong to the same VSAN or zone. Since it is not possible to extend SAN isolation to the virtual machine, VSANs or FC zones are used to isolate hosts from each other in the SAN fabric.

To keep management overhead low, we do not recommend deploying a large number of VSANs. Instead, the trusted multi-tenancy design leverages fibre channel soft zone configuration to isolate the storage layer on a per-host basis. It combines this method with zoning through WWN/device alias for administrative flexibility.

Fibre channel zones

SAN zoning can restrict visibility and connectivity between devices connected to a common fibre channel SAN. It is a built-in security mechanism available in an FC switch that prevents traffic from leaking between zones.
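As an illustration of per-host soft zoning with device aliases, the following Python sketch emits Cisco MDS-style zoning commands. The WWPNs, alias names, and VSAN number are examples only, and the generated CLI should be validated against the NX-OS release in use.

def mds_zone_config(vsan: int, host_alias: str, host_wwpn: str, targets: dict) -> str:
    """Build device-alias entries and a single-initiator zone for one ESXi host vHBA."""
    lines = ["device-alias database",
             f"  device-alias name {host_alias} pwwn {host_wwpn}"]
    for alias, wwpn in targets.items():
        lines.append(f"  device-alias name {alias} pwwn {wwpn}")
    lines += ["device-alias commit",
              f"zone name {host_alias}_zone vsan {vsan}",
              f"  member device-alias {host_alias}"]
    lines += [f"  member device-alias {alias}" for alias in targets]
    lines += [f"zoneset name tmt_fabric_a vsan {vsan}",
              f"  member {host_alias}_zone",
              f"zoneset activate name tmt_fabric_a vsan {vsan}"]
    return "\n".join(lines)

vnx_ports = {"vnx_spa_0": "50:06:01:60:3e:a0:12:34",
             "vnx_spb_0": "50:06:01:68:3e:a0:12:34"}
print(mds_zone_config(10, "esx01_vhba1", "20:00:00:25:b5:01:00:0f", vnx_ports))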

Design scenarios of VSAN and zoning

VSANs and zoning are two powerful tools within the Cisco MDS 9000 family of products that aid the cloud administrator in building robust, secure, and manageable storage networking environments while optimizing the use and cost of storage switching hardware. In general, VSANs are used to divide a redundant physical SAN infrastructure into separate virtual SAN islands, each with its own set of fibre channel fabric services. Having each VSAN support an independent set of fibre channel services enables a VSAN-enabled infrastructure to house numerous applications without risk of fabric resource or event conflicts between the virtual environments. Once the physical fabric is divided, use zoning to implement a security layout that is tuned to the needs of each application within each VSAN. Figure 48 illustrates the VSAN physical topology.

Figure 48. VSAN physical topology

VSANs are first created as isolated fabrics within a common physical topology. Once VSANs are created, apply individual unique zone sets as necessary within each VSAN. The following table summarizes the primary differences between VSANs and zones.

Characteristic | VSANs | Zoning
Maximum per switch/fabric | 1024 per switch | 1000+ zones per fabric (VSAN)
Membership criteria | Physical port | Physical port, WWN
Isolation enforcement method | Hardware | Hardware
Fibre channel service model | New set of services per VSAN | Same set of services for entire fabric
Traffic isolation method | Hardware-based tagging | Implicit using hardware ACLs
Traffic accounting | Yes, per VSAN | No
Separate manageability | Yes, per VSAN (future) | No
Traffic engineering | Yes, per VSAN | No

Note: UIM supports only one VSAN for each fabric.

Separation of data at rest

Today, most deployments treat physical storage as a shared infrastructure. However, in multi-tenancy, it is sometimes necessary to ensure that a specific dataset does not share spindles with any other dataset. This separation could be required between tenants or even within a single tenant’s dataset. Business reasons for this include competitive companies using the same shared service, and governance/regulatory requirements.

EMC VNX provides flexible RAID and volume configurations that allow spindles to be dedicated to LUNs or storage pools. VNX allows the creation of tenant-specific storage pools that can be used to dedicate specified spindles to particular tenants.

Address space separation

In some situations, each tenant is completely unaware of the other tenants. However, without proper mitigation there is the potential for address space overlap. Fibre channel World Wide Names (WWN) and iSCSI device names are globally unique, with no possibility of contention in either area. IP addresses, however, are not globally unique and may conflict.

To remedy this situation, the service provider can assign infrastructure-wide IP addresses within a service offering. Each X-Blade or VNX storage processor supports one IP address space. However, an X-Blade can support multiple logical IP interfaces and both storage processors and X-Blades support VLAN tagging. VLAN tagging allows multiple networks to access resources without the risk of traversing address spaces. In the event of an IP address conflict, the server log file reports any duplicate address warnings. IP addressing conflicts can be addressed in higher layers of the stack. This is most easily accomplished at the compute layer.

Figure 49 is a graphical representation of how VMware vSphere can be used to separate each tenant’s address space.

Figure 49. Address space separation with VMware vSphere

Virtual machine data store separation

VMware uses a cluster file system called the Virtual Machine File System (VMFS). An ESXi host associates a VMFS volume with a larger logical unit. Each virtual machine's directory is stored in its own Virtual Machine Disk (VMDK) subdirectory in the VMFS volume. While a virtual machine is in operation, the VMFS volume locks those files to prevent other ESXi servers from updating them. One VMDK directory is associated with a single virtual machine; multiple virtual machines cannot access the same VMDK directory.

We recommend implementing LUN masking (that is, storage groups) to assign storage to ESXi servers. LUN masking is an authorization process that makes a LUN available only to specific hosts on the EMC SAN as further protection against misbehaving servers corrupting disks belonging to other servers. This complements the use of zoning on the MDS, effectively extending zoning from the front-end port on the array to the device on which the physical disk resides.
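A minimal Python model of the authorization check a storage group performs (purely illustrative; it does not use any EMC interface): a host can see only the LUNs in the storage group to which it belongs.

from dataclasses import dataclass, field
from typing import Dict, Set

@dataclass
class StorageGroup:
    luns: Set[int] = field(default_factory=set)
    hosts: Set[str] = field(default_factory=set)

def visible_luns(groups: Dict[str, StorageGroup], host: str) -> Set[int]:
    """Return only the LUNs masked to the storage group(s) containing this host."""
    luns: Set[int] = set()
    for group in groups.values():
        if host in group.hosts:
            luns |= group.luns
    return luns

groups = {
    "tenant_orange_sg": StorageGroup(luns={10, 11}, hosts={"esx01", "esx02"}),
    "tenant_grape_sg": StorageGroup(luns={20, 21}, hosts={"esx03"}),
}
print(visible_luns(groups, "esx01"))   # {10, 11}; esx01 never sees LUN 20 or 21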

Virtual data mover on VNX

VNX provides a multinaming domain solution for a data mover in the UNIX environment by implementing an NFS server per virtual data mover (VDM). A data mover hosting several VDMs can serve UNIX clients that are members of different LDAP or NIS domains, assuming that each VDM works for a unique naming domain. Several NFS servers are emulated on the data mover in order to serve the file system resources of the data mover for different naming domains. Each NFS server is assigned to one or more data mover network interfaces.

The VDMs loaded on a data mover use the network interfaces configured on the data mover. You cannot duplicate an IP address for two VDM interfaces configured on the same data mover. Once a VDM interface is assigned, you can manage NFS exports on a VDM. CIFS and NFS protocols can share the same network interface; however, only one NFS endpoint and CIFS server is addressed through a particular logical network interface.

The multinaming domain solution implements an NFS server per VDM, called an NFS endpoint. The VDM acts as a container that includes the file systems exported by the NFS endpoint and/or the CIFS server. These VDM file systems are visible through a subset of the data mover network interfaces attached to the VDM. The same network interface can be shared by both the CIFS and NFS protocols on that VDM, and the NFS endpoint and CIFS server are addressed through the network interfaces attached to that particular VDM. This allows users to do either of the following:

Move a VDM, along with its NFS and CIFS exports and configuration data (LDAP, net groups, and so forth), to another data mover

Back up the VDM, along with its NFS and CIFS exports and configuration data

This feature supports at least 50 NFS VDMs per physical data mover and up to 25 LDAP domains.

Figure 50 shows a physical data mover with VDM implementation.

Figure 50. Physical data mover with VDM implementation

Note: VDM for NFS is available on VNX OE for File Version 7.0.50.2. You cannot use Unisphere to configure VDM for NFS.

Refer to Configuring Virtual Data Movers on VNX for more information (Powerlink access required).

Separation of data access

Separation of data access ensures that a tenant cannot see or access any other tenant’s data. The data access protocol in use determines how this is accomplished. Protocols for how tenant data traffic flows inside EMC VNX are:

CIFS

NFS

iSCSI

Fibre Channel over Ethernet/Fibre Channel (FCoE/FC)

Figure 51 displays the access protocols and the respective protocol stack that can be used to access data residing on a unified system.

Figure 51. Protocol stack

CIFS stack

The following table summarizes how tenant data traffic flows inside EMC VNX for the CIFS stack. Secure separation is maintained at each layer throughout the CIFS stack.

CIFS stack component | Description
VLAN | The secure separation of data access starts at the bottom of the CIFS stack on the IP network with the use of Virtual Local Area Networks (VLANs) to separate individual tenants.
IP Interface VLAN Tagged | The VLAN-tagging model extends into the unified system by VLAN tagging the individual IP interfaces so they understand and honor the tags being used.
IP Packet Reflection | IP packet reflection guarantees that any traffic sent from the storage system in response to a client request will go out over the same physical connection and VLAN on which the request was received.
Virtual Data Mover | The virtual data mover is a logical configuration container that wraps around a CIFS file-sharing instance.
CIFS Server | The CIFS server resides on the virtual data mover.
CIFS Share | CIFS shares are built upon the CIFS servers.
ABE | At the top of the stack is a Windows feature called Access Based Enumeration (ABE). ABE shows a user only the files that he/she has permission to access, thus extending the separation all the way to end users if desired.

NFS stack

The following table summarizes how tenant data traffic flows inside EMC VNX for the NFS stack.

NFS stack component | Description
VLAN | The secure separation of data access starts at the bottom of the NFS stack on the IP network, using VLANs to separate individual tenants.
IP Interface VLAN tagged | The VLAN-tagging model extends into the unified system by VLAN tagging the individual IP interfaces so they understand and honor the tags being used.
IP packet reflection | IP packet reflection guarantees that any traffic sent from the storage system in response to a client request will go out over the same physical connection and VLAN on which the request was received.
NFS export VLAN tagged | NFS exports can be associated with specific VLANs.
NFS export hiding | NFS export hiding tightly controls which users access the NFS exports. It enhances standard NFS server behavior by preventing users from seeing NFS exports for which they do not have access-level permission. It appears to each tenant that they have their own individual NFS server.

Figure 52 shows an NFS export and how a specific subnet has access to the NFS share.

Figure 52. NFS export configuration

In this example, the VLAN 112 and VLAN 111 subnets have access to the /nfs1 share. VNX also provides granular access to the NFS share: an NFS export can be presented to a specific tenant subnet, a specific host, or a group of hosts in the network.
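The subnet-level access control shown in Figure 52 can be illustrated with a short Python sketch using the standard ipaddress module; the export table and client addresses below are examples only.

import ipaddress

# Hypothetical export table: path -> subnets allowed to mount it (compare Figure 52).
NFS_EXPORTS = {
    "/nfs1": ["192.168.111.0/24", "192.168.112.0/24"],   # VLAN 111 and VLAN 112 subnets
}

def client_may_mount(export: str, client_ip: str) -> bool:
    """Return True if the client address falls inside a subnet allowed for this export."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in ipaddress.ip_network(net) for net in NFS_EXPORTS.get(export, []))

print(client_may_mount("/nfs1", "192.168.112.25"))   # True: host in an allowed tenant subnet
print(client_may_mount("/nfs1", "192.168.118.25"))   # False: host in another tenant's subnet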

iSCSI stack

The following table summarizes how tenant data traffic flows inside EMC VNX for the iSCSI stack.

iSCSI stack component | Description
VLAN | The secure separation of data access starts at the bottom of the iSCSI stack on the IP network with the use of VLANs to separate individual tenants.
IP Interface VLAN tagged | The VLAN-tagging model extends into the unified system by VLAN tagging the individual IP interfaces so they understand and honor the tags being used.
iSCSI Portal, Target, LUN | Access then flows through an iSCSI portal to a target device, where it is ultimately addressed to a LUN.
LUN Masking | LUN masking is a feature for block-based protocols that ensures that LUNs are viewed and accessed only by those SAN clients with the appropriate permissions.

Support for VLAN tagging in iSCSI

VLAN is supported for iSCSI data ports and management ports on VNX storage systems. In addition to better performance, ease of management, and cost benefits, VLANs provide security advantages since devices configured with VLAN tags can see and communicate with each other only if they belong to the same VLAN. Therefore, you can:

Set up multiple virtual ports on the VNX and segregate hosts into different VLANs based on your security policy

Restrict sensitive data to one VLAN

VLANs also make traffic sniffing more difficult, because an attacker would have to capture traffic across multiple networks, providing extra security.

Figure 53 shows the iSCSI port properties for a port with VLANs enabled and two virtual ports configured.

Figure 53. iSCSI Port Properties with VLAN tagging enabled

Fibre Channel over Ethernet/Fibre Channel stack

The lower layers of the fibre channel stack look quite different because it is not an IP-based protocol. The following table summarizes how tenant data traffic flows inside EMC VNX for the FCoE/FC stack.

FCoE/FC stack component | Description
FC Zone | FC zoning controls which FC/Fibre Channel over Ethernet (FCoE) interfaces can communicate with each other within the fabric.
VSAN | Virtual Storage Area Networks can be used to further subdivide individual zones without the need for physical separation.
Target, LUN | Access flows to a target device, where it is ultimately addressed to a LUN.
LUN Masking | LUN masking is a feature for block-based protocols that ensures that LUNs are viewed and accessed only by those SAN clients with the appropriate permissions.

Figure 54 and Figure 55 show how a 20 GB FC boot LUN and a 2 TB data LUN map to each host in VNX. Each LUN presented to an ESXi host is properly masked so that the host is granted access only to its specific LUNs, which are spread across different RAID groups.

Figure 54. Boot LUN and host mapping

Figure 55. Data LUN and host mapping

Design considerations for service assurance

Once you achieve secure separation of each tenant’s data and path to that data, the next priority is predictable and reliable access that meets the tenant’s SLA. Furthermore, in a service provider chargeback environment, it may be important that tenants do not receive more performance than they paid for simply because there is no contention for shared storage resources.

Service assurance ensures that SLAs are met at appropriate levels through the dedication of runtime resources and quality of service control.

Additionally, storage tiering with FAST lowers overall storage costs and simplifies management while allowing different applications to meet different service-level requirements on distinct pools of storage within the same storage infrastructure. FAST technology automates the dynamic allocation and relocation of data across tiers for a given FAST policy, based on changing application performance requirements. FAST helps maximize the benefits of preconfigured tiered storage by optimizing cost and performance requirements to put the right data on the right tier at the right time.

Dedication of runtime resources

Each VNX data mover has dedicated CPUs, memory, front-end, and back-end networks. A data mover can be dedicated to a single tenant or shared among several tenants. To further ensure the dedication of runtime resources, data movers can be clustered into active/standby groupings. From a hardware perspective, dedicating pools, spindles, and network ports to a specific tenant or application can further ensure adherence to SLAs.

Quality of service control

EMC has several software tools available that organize the dedication of runtime resources. At the storage layer, the most powerful of these is Unisphere Quality of Service Manager (UQM), which allows VNX resources to be managed based on service levels.

UQM uses policies to set performance goals for high-priority applications, set limits on lower-priority applications, and schedule policies to run on predefined timetables. These policies direct the management of any or all of the following performance aspects:

Response time

Bandwidth

Throughput

UQM provides a simple user interface for service providers to control policies. This control is invisible to tenants and can ensure that the activity of one tenant does not impact that of another. For example, if a tenant requests a dedicated disk, storage groups, and spindles for its storage resources, apply these control policies to get optimum storage I/O performance.
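A minimal Python sketch of the kind of evaluation such a policy implies (not the UQM implementation): measured statistics for an I/O class are compared against a response-time goal and a bandwidth limit.

from dataclasses import dataclass

@dataclass
class IOClassPolicy:
    name: str
    response_time_goal_ms: float   # performance goal for a high-priority I/O class
    bandwidth_limit_mbps: float    # ceiling applied to a lower-priority I/O class

def evaluate(policy: IOClassPolicy, measured_rt_ms: float, measured_bw_mbps: float) -> list:
    """Report which policy targets the measured I/O class currently violates."""
    violations = []
    if measured_rt_ms > policy.response_time_goal_ms:
        violations.append(f"{policy.name}: response time {measured_rt_ms} ms exceeds goal")
    if measured_bw_mbps > policy.bandwidth_limit_mbps:
        violations.append(f"{policy.name}: bandwidth {measured_bw_mbps} MB/s exceeds limit")
    return violations

gold = IOClassPolicy("tenant-orange-gold", response_time_goal_ms=10, bandwidth_limit_mbps=400)
print(evaluate(gold, measured_rt_ms=14.2, measured_bw_mbps=250))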

Figure 56 shows how you can create policies with a specific set of I/O classes to ensure that SLAs are maintained.

Figure 56. EMC VNX – QoS configuration

EMC VNX FAST VP

With standard storage tiering in a non-FAST VP enabled array, multiple storage tiers are typically presented to the vCloud environment, and each offering is abstracted out into separate provider virtual data centers (vDC). A provider may choose to provision an EFD [SSD/Flash] tier, an FC/SAS tier, and a SATA/NL-SAS tier, and then abstract these into Gold, Silver, and Bronze provider virtual data centers. The customer then chooses resources from these for use in their organizational virtual data center.

This provisioning model is limited for a number of reasons, including the following:

VMware vCloud Director does not allow for a non-disruptive way to move virtual machines from one provider virtual data center to another. This means the customer must provide for downtime if the vApp needs to be moved to a more appropriate tier.

For workloads with a variable I/O personality, there is no mechanism to automatically migrate those workloads to a more appropriate disk tier.

With the cost of enterprise flash drives (EFD) still significant, creating an entire tier of them can be prohibitively expensive, especially with few workloads having an I/O pattern that takes full advantage of this particular storage medium.

One way in which the standard storage tiering model can be beneficial is when multiple arrays are used to provide different kinds of storage to support different I/O workloads.

EMC FAST VP storage tiering

There are ways to provide more flexibility and a more cost-effective platform when compared with a standard tiering model. Instead of using a single disk type per provider virtual data center, organizations can blend both the cost and performance characteristics of multiple disk types. The following table shows examples of this approach.

Create a FAST VP pool containing… | As this type of tier… | For…
20% EFD and 80% FC/SAS disks | Performance tier | Customers who might need the performance of EFD at certain times, but do not want to pay for that performance all the time
50% FC/SAS disks and 50% SATA disks | Production tier | Most standard enterprise applications, to take advantage of the standard FC/SAS performance yet have the ability to de-stage cold data to SATA disk to lower the overall cost of storage per GB
90% SATA disks and 10% FC/SAS disks | Archive tier | Storing mostly nearline data, with the FC/SAS disks used for those instances where the customer needs to go to the archive to recover data, or for customers who are dumping a significant amount of data into the tier

Tiering policies

EMC FAST VP offers a number of policy settings to determine how data is placed, how often it is promoted, and how data movement is managed. In a VMware vCloud Director environment, the following policy settings are recommended to best accommodate the types of I/O workloads produced.

Policy | Default setting | Recommended setting
Data Relocation Schedule | Set to migrate data seven days a week, between 11pm and 6am, reflecting the standard business day. Set to use a Data Relocation Rate of Medium, which can relocate 300-400 GB of data per hour. | In a vCloud Director environment, open up the Data Relocation window to run 24 hours a day. Reduce the Data Relocation Rate to Low. This allows for constant promotion and demotion of data, yet limits the impact on host I/O.
FAST VP-enabled LUNs/Pools | Set to use Auto-Tier, spreading data evenly across all tiers of disks. | In a vCloud Director environment, where customers are generally paying for the lower tier of storage but leveraging the ability to promote workloads to higher-performing disk when needed, the recommendation is to use the Lowest Available Tier policy. This places all data onto the lower tier of disk initially, keeping the higher tier of disk free for data that needs it.
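The effect of the recommended settings can be sketched in a few lines of Python (illustrative only; FAST VP makes these decisions inside the array): new data lands on the lowest available tier, and relocation is allowed to run in any hour at the Low rate.

TIERS = ["EFD", "FC/SAS", "SATA"]   # highest- to lowest-performing tier

def initial_tier(free_gb_per_tier: dict, policy: str = "lowest_available") -> str:
    """Pick where a new slice lands; Lowest Available Tier keeps EFD free for hot data."""
    order = reversed(TIERS) if policy == "lowest_available" else TIERS
    return next(t for t in order if free_gb_per_tier.get(t, 0) > 0)

def relocation_allowed(hour: int, window_24x7: bool = True) -> bool:
    """Recommended vCloud setting: run the Data Relocation window 24 hours a day."""
    if window_24x7:
        return True
    return hour >= 23 or hour < 6   # default 11pm-6am window

print(initial_tier({"EFD": 200, "FC/SAS": 4000, "SATA": 12000}))   # -> SATA
print(relocation_allowed(hour=14))                                 # -> True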

EMC FAST Cache

In a VMware vCloud Director environment, VCE recommends a minimum of 100 GB of EMC FAST Cache, with the amount of FAST Cache increasing as the number of virtual machines increases.

The combination of FAST VP and FAST Cache allows the vCloud environment to scale better, support more virtual machines and a wider variety of service offerings, and protect against I/O spikes and bursting workloads in a way that is unique in the industry. These two technologies in tandem are a significant differentiator for the Vblock System.

EMC Unisphere Management Suite

EMC Unisphere provides a simple, integrated experience for managing EMC Unified Storage through both a storage and VMware lens. It is designed to provide simplicity, flexibility, and automation, which are all key requirements for using private clouds.

Unisphere includes a unique self-service support ecosystem that is accessible with one-click, task‐based navigation and controls for intuitive, context-based management. It provides customizable dashboard views and reporting capabilities that present users with valuable storage management information.

VMware vCloud Director

A provider virtual data center is a resource pool consisting of a cluster of VMware ESXi servers that access a shared storage resource. The provider virtual data center can contain one of the following:

Part of a data store (shared by other provider virtual data centers)

All of a data store

Multiple data stores

As storage is provisioned to organization virtual data centers, the shared storage pool for the provider virtual data center is seen as a single large address space, with no distinction of storage characteristics, protocol, or other attributes.

If a provider virtual data center contains more than one data store, it is considered best practice that those data stores have equal performance capability, protocol, and quality of service. Otherwise, the slower storage in the collective pool will impact the performance of that provider virtual data storage pool. Some virtual data centers might end up with faster storage than others.

To gain the benefits of different storage tiers or protocols, define separate provider virtual data centers, where each provider virtual data center has storage of different protocols or differing quality-of-service storage. For example, provision the following:

A provider virtual data center built on a data store backed by 15K RPM FC disks with a large amount of array cache, for the highest disk performance tier

A second provider virtual data center built on a data store backed by SATA drives and not much cache in the array for a lower tier

When a provider virtual data center shares a data store with another provider virtual data center, the performance of one provider virtual data center may impact performance of the other provider virtual data center. Therefore, it is considered best practice to have a provider virtual data center that has a dedicated data store such that isolation of the storage reduces the chances of introducing different quality-of-service storage resources in a provider virtual data center.

Design considerations for security and compliance

This section provides information about:

Authentication with LDAP or Active Directory

EMC VNX and RSA enVision

Authentication with LDAP or Active Directory

VNX can authenticate users against an LDAP directory, such as Active Directory. Authentication against an LDAP server simplifies management because you do not need a separate set of credentials to manage VNX storage systems. It is also more secure, as enterprise password policies can be enforced for the storage environment.

Figure 57 shows LDAP integration in VNX.

Figure 57. LDAP configuration in VNX

Role mapping

Once communications are established with the LDAP service, give specific LDAP users or groups access to Unisphere by mapping them to Unisphere roles. The LDAP service merely performs the authentication. Once authenticated, a user’s authorization is determined by the assigned Unisphere role. The most flexible configuration is to create LDAP groups that correspond to Unisphere roles. This allows you to control access to Unisphere by managing the members of the LDAP groups.

For example, Figure 58 shows two LDAP groups: Storage Admins and Storage Monitors. It shows how you can map specific LDAP groups into specific roles.

Figure 58. Mapping LDAP groups
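A minimal Python sketch of this authorization step (the group and role names are examples in the spirit of Figure 58, not a fixed Unisphere vocabulary): LDAP performs the authentication, and a group-to-role map decides what the user may do in Unisphere.

# Example mapping of LDAP groups to Unisphere roles.
GROUP_TO_ROLE = {
    "Storage Admins": "Administrator",
    "Storage Monitors": "Monitor",
}

def unisphere_role(ldap_groups: list) -> str:
    """Pick the most privileged role granted by any of the user's LDAP groups."""
    precedence = ["Administrator", "Monitor"]
    granted = {GROUP_TO_ROLE[g] for g in ldap_groups if g in GROUP_TO_ROLE}
    for role in precedence:
        if role in granted:
            return role
    return "No access"

print(unisphere_role(["Domain Users", "Storage Monitors"]))   # -> Monitor
print(unisphere_role(["Storage Admins"]))                     # -> Administrator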

Component access control

Component access control settings define access to a product by external and internal systems or components.

CHAP component authentication

The primary authentication mechanism for iSCSI initiators is the Challenge Handshake Authentication Protocol (CHAP). CHAP authenticates iSCSI initiators at target login and at random intervals during a connection. CHAP security consists of a username and password, and it can be configured and enabled for initiators and for targets. CHAP requires initiator authentication; target authentication (mutual CHAP) is optional.
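The challenge/response exchange that CHAP performs can be illustrated with the Python standard library; per RFC 1994, the response is the MD5 hash of the identifier, the shared secret, and the challenge. This is a protocol sketch, not the storage array's implementation.

import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """RFC 1994 CHAP: response = MD5(identifier + secret + challenge)."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# The target challenges the initiator; both sides share the secret, which is never sent.
secret = b"tenant-orange-chap-secret"
challenge = os.urandom(16)
identifier = 1

initiator_reply = chap_response(identifier, secret, challenge)
expected = chap_response(identifier, secret, challenge)   # computed independently by the target
print("Initiator authenticated:", initiator_reply == expected)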

LUN masking component authorization

A storage group is an access control mechanism for LUNs. It restricts access to a group of LUNs to specific hosts. When you configure a storage group, you identify a set of LUNs that will be used only by one or more specific hosts. The storage system then enforces access to those LUNs: the LUNs are presented only to the hosts in the storage group, and those hosts can see only the LUNs in the group.

IP filtering

IP filtering adds another layer of security by allowing administrators and security administrators to configure the storage system to restrict administrative access to specified IP addresses. These settings can be applied to the local storage system or to the entire domain of storage systems.

Audit logging

Audit logging is intended to provide a record of all activities, so that the following can occur:

Checks for suspicious activity can be performed periodically.

The scope of suspicious activity can be determined.

Audit logs are especially important for financial institutions that are monitored by regulators.

Audit information for VNX storage systems is contained within the event log on each storage processor. The log also contains hardware and software debugging information and a time-stamped record for each event. Each record contains the following information:

Event code

Description of event

Name of the storage system

Name of the corresponding storage processor

Hostname associated with the storage processor

VNX and RSA enVision

VNX storage systems are made even more secure by leveraging the continuous collecting, monitoring, and analyzing capabilities of RSA enVision. RSA enVision performs the functions listed in the following table.

RSA function | Description
Collects logs | Can collect event log data from over 130 event sources, from firewalls to databases. RSA enVision can also collect data from custom, proprietary sources using standard transports such as Syslog, ODBC, SNMP, SFTP, OPSEC, or WMI.
Securely stores logs | Compresses and encrypts log data so it can be stored for later analysis, while maintaining log confidentiality and integrity.
Analyzes logs | Analyzes data in real time to check for anomalous behavior requiring an immediate alert and response. RSA enVision proprietary logs are also optimized for later reporting and forensic analysis. Built-in reports and alerts allow administrators and auditors quick and easy access to log data.

Figure 59 provides a detailed look at storage behavior in RSA enVision.

Figure 59. RSA enVision storage behavior

Network encryption

The Storage Management server provides 256-bit symmetric encryption of all data passed between it and the administrative client components that communicate with it, as listed under Port Usage (Web browser, Secure CLI), as well as all data passed between Storage Management servers. The encryption is provided through SSL/TLS and uses the RSA encryption algorithm, providing the same level of cryptographic strength as is employed in e-commerce. Encryption protects the transferred data from prying eyes—whether on the local LANs behind the corporate firewalls, or if the storage systems are being remotely managed over the Internet.

Design considerations for availability and data protection

Availability goes hand in hand with service assurance. While service assurance directs resources at the tenant level, availability secures resources at the service provider level. Availability ensures that resources are available for all tenants utilizing a service provider’s infrastructure, by meeting the requirements of high availability and local and remote data protection.

High availability

In the storage layer, the high availability design is consistent with the high availability model implemented at other layers in the Vblock System, comprising physical redundancy and path redundancy. These are listed in the following types of redundancies:

Link redundancy

Hardware and node redundancy

Link redundancy

Pending the availability of FC port channels on UCS FC ports and FC port trunking, multiple individual FC links from the 6120 fabric interconnects are connected to each SAN fabric, and VSAN membership of each link is explicitly configured in the UCS. In the event of an FC (NP) port link failure, affected hosts will re-logon in a round-robin manner using available ports. FC port channel support, when available, means that redundant links in the port channel will provide active/active failover support in the event of a link failure.

Multipathing software from VMware or EMC PowerPath software further enhances high availability, optimizing use of the available link bandwidth and enhancing load balancing across multiple active host adapter ports and links with minimal disruption in service.

Hardware and node redundancy

The Vblock System trusted multi-tenancy design leverages best practice methodologies for SAN high availability, prescribing full hardware redundancy at each device in the I/O path from host to SAN. In terms of hardware redundancy, this begins at the server, with dual-port adapters per host. Redundant paths from the hosts feed into dual, redundant MDS SAN switches (that is, with dual supervisors) and then into redundant SAN arrays with tiered RAID protection. RAID 1 and RAID 5 were deployed in this particular design as two of the more commonly used levels; however, the selection of a RAID protection level depends on balancing cost against the criticality of the data to be stored.

The ESXi hosts are protected by the VMware vCenter high availability feature. Storage paths can be protected using EMC PowerPath/VE. Figure 60 shows the storage path protection.

Figure 60. Storage path protection

Virtual machines and application data can be protected using EMC Avamar, EMC Data Domain, and EMC Replication Manager. However these are not within the scope of this guide.

Single point of failure

High availability (HA) systems are the foundation upon which any enterprise-class multi-tenancy environment is built. High availability systems are designed to be fully redundant with no single point of failure (SPOF). Additional availability features can be leveraged to address single points of failure in the trusted multi-tenancy design. The following are some high-level entities to consider when eliminating single points of failure:

Dual-ported drives

Redundant FC loops

Dual storage processors with battery-backed, mirrored write cache

Dual paths to storage with Asymmetric Logical Unit Access (ALUA)

N+M X-Blade failover clustering

Network link aggregation

Fail-safe network

Local and remote data protection

It is important to ensure that data is protected for the entirety of its lifecycle. Local replication technologies, such as snapshots and clones, allow users to roll back to recent points in time in the event of corruption or accidental deletion. Local replication technologies include SnapSure and SnapView for VNX. Use Network Data Management Protocol (NDMP) backup to deeply efficient storage platforms, such as Data Domain, for restoration of data from a point further back in time. Remote replication is key to protecting user data from site failures. EMC RecoverPoint and MirrorView software enable remote replication between EMC’s Unified Storage systems. Use Replication Manager to ease the management of replication and ensure consistency between replicas.

Below are some key points for each of these products; however, they are not within the scope of this guide.

SnapSure

Use SnapSure to create and manage checkpoints on thin and thick file systems. Checkpoints are point-in-time, logical images of a file system. Checkpoints can be created on file systems that use pool LUNs or traditional LUNs.

SnapView

For local replication, SnapView snapshots and clones are supported on thin and thick LUNs. SnapView clones support replication between thick, thin, and traditional LUNs. When cloning from a thin LUN to a traditional LUN or thick LUN, the physical space of the traditional/thick LUN must equal the host-visible capacity of the thin LUN. This results in a fully allocated thin LUN if the traditional LUN/thick LUN is reverse-synchronized. Cloning from traditional/thick to thin LUN results in a fully allocated thin LUN as the initial synchronization will force the initialization of all the subscribed capacity.

For more information, refer to EMC SnapView for VNX (Powerlink access required).

RecoverPoint

Replication is also supported through RecoverPoint. Continuous data protection (CDP) and continuous remote replication (CRR) support replication for thin LUNs, thick LUNs, and traditional LUNs. When using RecoverPoint to replicate to a thin LUN, only data is copied; unused space is ignored so the target LUN is thin after the replication. This can provide significant space savings when replicating from a non-thin volume to a thin volume. When using RecoverPoint, we recommend that you not use journal and repository volumes on thin LUNs.

MirrorView

When mirroring a thin LUN to another thin LUN, only consumed capacity is replicated between the storage systems. This is most beneficial for initial synchronizations. Steady state replication is similar, since only new writes are written from the primary storage system to the secondary system.

When mirroring from a thin LUN to a traditional or thick LUN, the thin LUN’s host-visible capacity must be equal to the traditional LUN’s capacity or the thick LUN’s user capacity. Any failback scenario that requires a full synchronization from the secondary to the thin primary image causes the thin LUN to become fully allocated. When mirroring from a thick LUN or traditional LUN to a thin LUN, the secondary thin LUN is fully allocated.

With MirrorView, if the secondary image LUN is added with the no initial synchronization option, the secondary image retains its thin attributes. However, any subsequent full synchronization from the traditional LUN or thick LUN to the thin LUN, as a result of a recovery operation, causes the thin LUN to become fully allocated.

For more information on using pool LUNs with MirrorView, see MirrorView Knowledgebook (Powerlink access required).

PowerPath Migration Enabler

EMC PowerPath Migration Enabler (PPME) is a host-based migration tool that enables non-disruptive or minimally disruptive data migration between storage systems or between logical units within a single storage system. The Host Copy technology in PPME works with the host operating system to migrate data from the source logical unit to the target. With PPME 5.3, the Host Copy technology supports migrating virtually provisioned devices. When migrating to a thin target, the target’s thin-device capability is maintained.

Design considerations for service provider management and control

EMC Unisphere includes a unique self-service support ecosystem that is accessible through one-click, task-based navigation and controls for intuitive, context-based management. It provides customizable dashboard views and reporting capabilities that present users with valuable storage management information.

EMC Unisphere, a unified element management interface for NAS, SAN, replication, and more, offers a single point of control from which a service provider can manage all aspects of the storage layer.

Service providers can use Unified Infrastructure Manager/Provisioning to manage the entire stack (compute, network, and storage).

These two products mark a paradigm shift in the way infrastructure is managed.

Figure 61 shows a service provider view of the Unisphere dashboard, including a connected vCenter with all the ESXi hosts.

Figure 61. EMC Unisphere dashboard

Design considerations for networking

Various methods, including zoning and VLANs, can enforce network separation. Internet Protocol Security (IPsec) provides application-independent network encryption at the IP layer for additional security.

This section describes the design of and rationale behind the trusted multi-tenancy framework for Vblock System network technologies. The design addresses many issues that must be considered prior to deployment, as no two environments are alike. Design considerations are provided for each trusted multi-tenancy element.

Design considerations for secure separation

This section discusses using the following technologies to achieve secure separation at the network layer:

VLANs

Virtual Routing and Forwarding

Virtual Device Context

Access Control List

VLANs

VLANs provide a Layer 2 option to scale virtual machine connectivity, providing application tier separation and multitenant isolation. In general, Vblock Systems have two types of VLANs:

Routed – Include management VLANs, virtual machine VLANs, and data VLANs; will pass through Layer 2 trunks and be routed to the external network

Internal – Carry VMkernel traffic, such as vMotion, service console, NFS, DRS/HA, and so forth

This design guide uses three tenants: Tenant Orange, Tenant Vanilla and Tenant Grape. Each tenant has multiple virtual machines for different applications (such as Web server, email server, and database), which are associated with different VLANs. It is always recommended to separate data and management VLANs.

The following table lists example VLAN categories used in the Vblock System trusted multi-tenancy design framework.

VLAN type | VLAN name | VLAN number
Management VLANs (routed) | Core Infra management | 100
Management VLANs (routed) | C200_ESX_mgt | 101
Management VLANs (routed) | C299_ESX_vmotion | 102
Management VLANs (routed) | UCS_mgt and KVM | 103
Management VLANs (routed) | Vblock_ESX_mgt | 104
Management VLANs (routed) | Vblock_ESX_vmotion | 105
Internal VLANs (local to Vblock System) | Vblock_ESX_build | 106
Internal VLANs (local to Vblock System) | Vblock_N1k_pkg | 107
Internal VLANs (local to Vblock System) | Vblock_N1k_control | 108
Internal VLANs (local to Vblock System) | Vblock_NFS | 111
Data VLANs (routed VLAN) | Fcoe_USC_to_storageA | 109
Data VLANs (routed VLAN) | Fcoe_UCS_to_storageB | 110
Data VLANs (routed VLAN) | Vblock_VMNetwork | 112
Data VLANs (routed VLAN) | Tenant 1_VMNetwork | 113
Data VLANs (routed VLAN) | Tenant-2_VMNetwork | 118
Data VLANs (routed VLAN) | Tenant-3_VMNetwork | 123

Configure VLAN (both Layer 2 and Layer 3) in all network devices supported in the trusted multi-tenancy infrastructure to ensure that management, tenant, and Vblock System internal VLANs are isolated from each other.

Note: Service providers may need additional VLANs for scalability, depending on size requirements.
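Per-tenant VLAN provisioning lends itself to simple automation. The following Python sketch renders NX-OS-style VLAN definitions for the tenant data VLANs in the example table above; the command syntax is representative and should be confirmed for the switch platform and OS version in use.

# Tenant data VLANs from the example table above (names adjusted to be valid VLAN names).
TENANT_VLANS = {
    113: "Tenant-1_VMNetwork",
    118: "Tenant-2_VMNetwork",
    123: "Tenant-3_VMNetwork",
}

def vlan_config(vlans: dict) -> str:
    """Emit NX-OS-style 'vlan <id>' / 'name <label>' stanzas for each tenant VLAN."""
    lines = []
    for vlan_id, name in sorted(vlans.items()):
        lines += [f"vlan {vlan_id}", f"  name {name}", "!"]
    return "\n".join(lines)

print(vlan_config(TENANT_VLANS))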

Virtual routing and forwarding

Use Virtual Routing and Forwarding (VRF) to virtualize each network device and all its physical interconnects. From a data plane perspective, the VLAN tags can provide logical isolation on each point-to-point Ethernet link that connects the virtualized Layer 3 network device.

Cisco VRF Lite uses a Layer 2 separation method to provide path isolation for each tenant across a shared network link. Using VRF Lite in the core and aggregation layers enables segmentation of tenants hosted on the common physical infrastructure. VRF Lite completely isolates the Layer 2 and Layer 3 control and forwarding planes of each tenant, allowing flexibility in defining an optimum network topology for each tenant.

The following table summarizes the benefits that the Cisco VRF Lite technology provides a trusted multi-tenancy environment.

Benefit | Description
Virtual replication of physical infrastructure | Each virtual network represents an exact replica of the underlying physical infrastructure. This effect results from VRF Lite's per-hop technique, which requires every network device and its interconnections to be virtualized.
True routing and forwarding separation | Dedicated data and control planes are defined to handle traffic belonging to groups with various requirements or policies. These groups represent an additional level of segregation and security, as no communication is allowed among devices belonging to different VRFs unless explicitly configured.

Network separation at Layer 2 is accomplished using VLANs. Figure 62 shows how the VLANs defined on each access layer device for each tenant are mapped to the same tenant VRF at the distribution layer.

Figure 62. VLAN to VRF mapping

Use VLANs to achieve network separation at Layer 2. While VRFs are used to identify a tenant, VLAN-IDs provide isolation at Layer 2.

Tenant VRFs are applied on the Cisco Nexus 7000 Series Switches at the aggregation and core layers and are mapped to unique VLANs. All VLANs are carried over 802.1Q trunk ports.
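A short Python sketch of how this per-tenant VLAN-to-VRF mapping could be rendered as NX-OS-style configuration for the aggregation layer; the VRF names, VLAN IDs, and addressing are illustrative, and the exact commands should be verified for the NX-OS release in use.

def tenant_vrf_config(vrf: str, vlan_id: int, svi_ip: str) -> str:
    """Bind a tenant VLAN interface (SVI) to that tenant's VRF on an aggregation switch."""
    return "\n".join([
        f"vrf context {vrf}",
        f"interface Vlan{vlan_id}",
        f"  vrf member {vrf}",
        f"  ip address {svi_ip}",
        "  no shutdown",
    ])

# One VRF per tenant; each tenant VLAN maps to that tenant's VRF (compare Figure 62).
print(tenant_vrf_config("tenant-orange", 113, "10.1.13.1/24"))
print(tenant_vrf_config("tenant-vanilla", 118, "10.1.18.1/24"))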

Virtual device context

The Layer 2 VLANs and Layer 3 VRF features help ensure trusted multi-tenancy secure separation at the network layer. You can also use the Virtual Device Context (VDC) feature on the Nexus 7000 Series Switch to virtualize the device itself, presenting the physical switch as multiple logical devices.

A virtual device context can contain its own unique and independent set of VLANs and VRFs. Each virtual device context can be assigned its own physical ports, allowing the hardware data plane to be virtualized as well.

Access control list

Access control lists (ACL), VLAN access control lists (VACL), and port security can be applied at Layer 2 and Layer 3 of the trusted multi-tenancy design to allow only the desired traffic to an expected destination, within the same tenant domain or among different tenants. ACL support is shown in the following table, and a configuration sketch follows it.

Device name ACL supported

Cisco Nexus 1000V Series Switch Yes

Cisco Nexus 5000 Series Switch Yes

Cisco Nexus 7000 Series Switch Yes
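As an illustration, the sketch below applies a router ACL on an NX-OS tenant SVI so that only intra-tenant traffic is permitted; the subnet, ACL name, and VLAN are assumptions.

    ip access-list TENANT1-IN
      10 permit ip 10.1.113.0/24 10.1.113.0/24
      20 deny ip any any
    interface Vlan113
      ip access-group TENANT1-IN in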


Design considerations for service assurance

Service assurance is a core requirement for shared resources and their protection. Network, compute, and storage resources are guaranteed based on service level agreements. Quality of service enables differential treatment of specific traffic flows, helping to ensure that in the event of congestion or failure conditions, critical traffic is provided with a sufficient amount of available bandwidth to meet throughput requirements.

Figure 63 shows the traffic flow types defined in the Vblock System trusted multi-tenancy design.

Figure 63. Traffic flow types


The traffic flow types break down into three traffic categories, as shown in the following table.

Traffic category Description

Infrastructure

Comprises management and control traffic and vMotion communication. This is typically set to the highest priority to maintain administrative communications during periods of instability or high CPU utilization.

Tenant

Differentiated into Gold, Silver, and Bronze service levels; may include virtual machine-to-virtual machine, virtual machine-to-storage, and/or virtual machine-to-tenant traffic.

Gold tenant traffic is highest priority, requiring low latency and high bandwidth guarantees.

Silver traffic requires medium latency and bandwidth guarantees.

Bronze traffic is delay-tolerant, requiring low bandwidth guarantees.

Storage

The Vblock System trusted multi-tenancy design incorporates both FC and IP-attached storage. Since these traffic types are treated differently throughout the network, storage requires two subcategories:

FC traffic requires a no-drop policy.

NFS data store traffic is sensitive to delay and loss.

QoS service assurance for Vblock Systems has been introduced at each layer. Consider the following features for service assurance at the network layer:

Quality of service tenant marking at the edge

Traffic flow matching

Quality of service bandwidth guarantee

Quality of service rate limit

Traffic originates from three sources:

ESXi hosts and virtual machines

External to data center

Network-attached devices

Consider traffic classification, bandwidth guarantee with queuing, and rate l imiting based on tenant traffic priority for networking service assurance.
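The following NX-OS-style sketch illustrates edge classification and marking for the Gold and Silver tenant classes described above. Class names, DSCP/CoS values, and the interface are assumptions, and the exact match criteria and queuing syntax vary by platform.

    class-map type qos match-any TENANT-GOLD
      match dscp 46
    class-map type qos match-any TENANT-SILVER
      match dscp 26
    policy-map type qos TENANT-EDGE-MARKING
      class TENANT-GOLD
        set cos 5
      class TENANT-SILVER
        set cos 3
      class class-default
        set cos 1
    ! Apply at the tenant edge; bandwidth guarantees and rate limits are then
    ! enforced by queuing policies on the uplinks
    interface Ethernet1/10
      service-policy type qos input TENANT-EDGE-MARKING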


Design considerations for security and compliance

Trusted multi-tenancy infrastructure networks require intelligent services, such as firewall and load balancing of servers and hosted applications. This design guide focuses on the Vblock System trusted multi-tenancy framework, in which a firewall module and other load balancers are the external devices connected to the Vblock System. A multi-tenant environment consists of numerous service and infrastructure devices, depending on the business model of the organization. Often, servers, firewalls, network intrusion prevention systems (IPS), host IPSs, switches, routers, application firewalls, and server load balancers are used in various combinations within a multi-tenant environment.

The Cisco Firewall Services Module (FWSM) provides Layer 2 and Layer 3 firewall inspection, protocol inspection, and network address translation (NAT). The Cisco Application Control Engine (ACE) module provides server load balancing and protocol (IPSec, SSL) off-loading. Both the FWSM and ACE module can be easily integrated into existing Cisco 6500 Series switches, which are widely deployed in data center environments.

Note: To use the Cisco ACE module, you must add a Cisco 6500 Series switch.

To successfully achieve trusted multi-tenancy, a service provider needs to adopt each key component discussed below. As shown in Figure 3, the trusted multi-tenancy framework has the following key components:

Component Description

Core

Provides a Layer 3 routing module for all traffic in and out of the service provider data center.

Aggregation

Serves as the Layer 2 and Layer 3 boundary for the data center infrastructure. In this design, the aggregation layer also serves as the connection point for the primary data center firewalls.

Services

Deploys services such as server load balancers, intrusion prevention systems, application-based firewalls, network analysis modules, and additional firewall services.

Access

The data center access layer serves as a connection point for the server farm. The virtual access layer refers to the virtual network that resides in the physical servers when configured for virtualization.

With this framework, you can add components as demand and load increase.


The following table describes the high-level security functions for each layer of the data center.

Data center layer Security component Purpose

Aggregation

Data center firewalls: Initial filter for data center ingress and egress traffic. Virtual contexts are used to split policies for server-to-server filtering.

Infrastructure security: Infrastructure security features are enabled to protect the device, traffic plane, and control plane. Virtual device contexts provide internal/external segmentation.

Services

Security services: Additional firewall services for server farm-specific protection. Server load balancing masks servers and applications. Application firewall mitigates XSS-, HTTP-, SQL-, and XML-based attacks.

Data center services: IPS/IDS provide traffic analysis and forensics. Network analysis provides traffic monitoring and data analysis. XML Gateway protects and optimizes Web-based services.

Access

ACLs, CISF, port security, quality of service, CoPP, VN tag.

Virtual access

Layer 2 security features are available within the physical server for each virtual machine. Features include ACLs, CISF, port security, NetFlow, ERSPAN, quality of service, CoPP, VN tag.

Data center firewalls

The aggregation layer provides an excellent fi ltering point and the first layer of protection for the data center. It provides a building block for deploying firewall services for ingress and egress fi ltering. The Layer 2 and Layer 3 recommendations for the aggregation layer also provide symmetric traffic patterns to support stateful packet filtering.

Because of the performance requirements, this design uses a pair of Cisco ASA firewalls connected directly to the aggregation switches. The Cisco ASA firewalls meet the high-performance data center firewall requirements by providing 10 Gbps of stateful packet inspection.

The Cisco ASA firewalls are configured in transparent mode, which means the firewalls are configured in a Layer 2 mode and will bridge traffic between interfaces. The Cisco ASA firewalls are configured for multiple contexts using the virtual context feature, which allows the firewall to be divided into multiple logical firewalls, each supporting different interfaces and policies.

Note: The modular aspect of this design allows additional firewalls to be deployed at the aggregation layer as the server farm grows and performance requirements increase.


The firewalls are configured in an active-active design, which allows load sharing across the infrastructure based on the active Layer 2 and Layer 3 traffic paths. Each firewall is configured for two virtual contexts:

Virtual context 1 is active on ASA1

Virtual context 2 is active on ASA2

This corresponds to the active Layer 2 spanning tree path and the Layer 3 Hot Standby Routing Protocol (HSRP) configuration.

Figure 64 shows an example of each firewall connection.

Figure 64. Cisco ASA virtual contexts and Cisco Nexus 7000 virtual device contexts

Virtual context details

The context details on the firewall provide different forwarding paths and policy enforcement, depending on the traffic type and destination. Incoming traffic that is destined for the data center services layer (ACE, WAF, IPS, and so on) is forwarded over VLAN 161 from VDC1 on the Cisco Nexus 7000 to virtual context 1 on the Cisco ASA. The inside interface of virtual context 1 is configured on VLAN 162. The Cisco ASA filters the incoming traffic and then, in this case, bridges the traffic to the inside interface on VLAN 162. VLAN 162 is carried to the services switch where traffic has additional services applied. The same applies to virtual context 2 on VLANs 151 and 152. This context is active on ASA2.
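The system execution space configuration that such a layout implies might look like the sketch below; it is illustrative only. The VLAN pairs (161/162 and 151/152) follow the description above, the physical interface and file names are assumptions, and depending on the ASA software release, transparent firewall mode is set globally or per context.

    mode multiple
    firewall transparent
    ! Virtual context 1 bridges VLAN 161 (outside) to VLAN 162 (inside); active on ASA1
    context vc1
      allocate-interface TenGigabitEthernet0/8.161
      allocate-interface TenGigabitEthernet0/8.162
      config-url disk0:/vc1.cfg
    ! Virtual context 2 bridges VLAN 151 (outside) to VLAN 152 (inside); active on ASA2
    context vc2
      allocate-interface TenGigabitEthernet0/8.151
      allocate-interface TenGigabitEthernet0/8.152
      config-url disk0:/vc2.cfg
    ! Within each context, the interfaces are named, bridged together, and given a
    ! management IP address on a per-context management VLAN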


Deployment recommendations

Firewalls enforce access policies for the data center. A best practice is to create a multilayered security model to protect the data center from internal or external threats.

The firewall policy will differ, based on the organizational security policy and the types of applications deployed.

Regardless of the number of ports and protocols allowed either to and from the data center, or from server to server, there are some baseline recommendations that serve as a starting point for most deployments. The firewalls should be hardened in a similar fashion to the infrastructure devices. The following configuration notes apply:

Use HTTPS for device access. Disable HTTP access.

Configure authentication, authorization, and accounting.

Use out-of-band management and limit the types of traffic allowed over the management interface(s).

Use Secure Shell (SSH). Disable Telnet.

Use Network Time Protocol (NTP) servers.

Depending on traffic types and policies, the goal might not be to send all traffic flows to the services layer. Some incoming application connections, such as those from a DMZ or client batch jobs (such as backup), might not need load balancing or additional services. An alternative is to deploy another context on the firewall to support the VLANs that are not forwarded to the services switches.

Caveats

Using transparent mode on the Cisco ASA firewalls requires that an IP address be configured for each context. This is required to bridge traffic from one interface to another and to manage each Cisco ASA context. While in transparent mode, you cannot allocate the same VLAN across multiple interfaces for management purposes. A separate VLAN is used to manage each context. The VLANs created for each context can be bridged back to the primary management VLAN on an upstream switch if desired.

Note: This provides a workaround and does not require allocating new network-wide management VLANs and IP subnets to manage each context.


Services layer

Data center security services can be deployed in a variety of combinations. The goal of these designs is to provide a modular approach to deploying security by allowing additional capacity to be added easily for each service. Additional Web application firewalls, intrusion prevention systems (IPS), firewalls, and monitoring services can all be scaled without requiring an overall redesign of the data center.

Figure 65 illustrates how the services layer fits into the data center security environment.

Figure 65. Data center security and the services layer

Cisco Application Control Engine

This design features the Cisco Application Control Engine (ACE) service module for the Cisco Catalyst 6500. Cisco ACE is designed as an application- and server-scaling tool, but it has security benefits as well. Cisco ACE can mask a server’s real IP address and provide a single IP address for clients to connect over a single or multiple protocols such as HTTP, HTTPS, FTP, and so forth.

This design uses Cisco ACE to scale the Web application firewall appliances, which are configured as a server farm. Cisco ACE distributes connections to the Web application firewall pool.

As an added benefit, Cisco ACE can store server certificates locally. This allows Cisco ACE to proxy Secure Socket Layer (SSL) connections for client requests and forward the requests in clear text to the server.


Cisco ACE provides a highly available and scalable data center solution from which the VMware vCloud Director environment can benefit. Use Cisco ACE to apply a different context and associated policies, interfaces, and resources for one vCloud Director cell and a completely different context for another vCloud Director cell.

In this design, Cisco ACE is terminating incoming HTTPS requests and decrypting the traffic prior to forwarding it to the Web application firewall farm. The Web application firewall and subsequent Cisco IPS devices can now view the traffic in clear text for inspection purposes.

Note: Some compliance standards and security policies dictate that traffic be encrypted from client to server. It is possible to modify the design so traffic is re-encrypted on Cisco ACE after inspection, prior to being forwarded to the server.

Web Application Firewall

Cisco ACE Web Application Firewall (WAF) provides firewall services for Web-based applications. It secures and protects Web applications from common attacks, such as identity theft, data theft, application disruption, fraud, and targeted attacks. These attacks can include cross-site scripting (XSS) attacks, SQL and command injection, privilege escalation, cross-site request forgeries (CSRF), buffer overflows, cookie tampering, and denial-of-service (DoS) attacks.

In the trusted multi-tenancy design, the two Web application firewall appliances are considered as a cluster and are load balanced by Cisco ACE. Each Web application firewall cluster member can be seen in the Cisco ACE Web Application Firewall Management Dashboard.

The Cisco ACE Web Application Firewall acts as a reverse proxy for the Web servers it is configured to protect. The Virtual Web Application creates a virtual URL that intercepts incoming client connections. You can configure a virtual Web application based on the protocol and port as well as the policy you want applied.

The destination server IP address is Cisco ACE. Because the Web application firewall is being load balanced by Cisco ACE, it is configured as a one-armed connection to Cisco ACE to send and receive traffic.
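A condensed Cisco ACE configuration sketch of this load-balancing arrangement follows. It reuses the AGGREGATE_SLB policy name and WAF port 81 that appear later in this chapter; the IP addresses, server names, and class names are assumptions for illustration only.

    rserver host WAF-1
      ip address 10.8.162.11
      inservice
    rserver host WAF-2
      ip address 10.8.162.12
      inservice
    serverfarm host SF-WAF
      rserver WAF-1 81
        inservice
      rserver WAF-2 81
        inservice
    class-map match-all VIP-WEB
      2 match virtual-address 10.8.162.100 tcp eq www
    policy-map type loadbalance first-match LB-WAF
      class class-default
        serverfarm SF-WAF
    policy-map multi-match AGGREGATE_SLB
      class VIP-WEB
        loadbalance vip inservice
        loadbalance policy LB-WAF
    interface vlan 162
      service-policy input AGGREGATE_SLB
      no shutdown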


Cisco ACE and Web Application Firewall design

Cisco ACE Web Application Firewall is deployed in a one-armed design and is connected to Cisco ACE over a single interface.

Figure 66. Cisco ACE module and Web Application Firewall integration

Cisco Intrusion Prevention System

The Cisco Intrusion Prevention System (IPS) provides deep packet and anomaly inspection to protect against both common and complex embedded attacks.

The IPS devices used in this design are Cisco IPS 4270s with 10 GbE modules. Because of the nature of IPS and the intense inspection capabilities, the amount of overall throughput varies depending on the active policy. Default IPS policies were used in the examples presented in this design guide.

In this design, the IPS appliances are configured for VLAN pairing. Each IPS is connected to the services switch with a single 10 GbE interface. In this example, VLAN 163 and VLAN 164 are configured as the VLAN pair.


The IPS deployment in the data center leverages EtherChannel load balancing from the service switch. This method is recommended for the data center because it allows the IPS services to scale to meet the data center requirements. This is shown in Figure 67.

Figure 67. IPS ECLB in the services layer

A port channel is configured on the services switch to forward traffic over each 10 GbE link to the receiving IPS. Since Cisco IPS does not support Link Aggregation Control Protocol (LACP) or Port Aggregation Protocol (PAgP), the port channel mode is set to on so that no negotiation is necessary for the channel to become operational.

It is very important to ensure all traffic for a specific flow goes to the same Cisco IPS. To best accomplish this, it is recommended to set the hash for the port channel to source and destination IP address. Each EtherChannel supports up to eight ports per channel.


This design can scale up to eight Cisco IPS 4270s per channel. Figure 68 il lustrates Cisco IPS EtherChannel load balancing.

Figure 68. Cisco IPS EtherChannel load balancing

Caveats

Spanning tree plays an important role in IPS redundancy in this design. Under normal operating conditions, traffic in a VLAN always follows the same active Layer 2 path. If a failure occurs (a service switch failure or a service switch link failure), spanning tree converges, and the active Layer 2 traffic path changes to the redundant service switch and Cisco IPS appliances.


Cisco ACE, Cisco ACE Web Application Firewall, and Cisco IPS traffic flows

The security services in this design reside between the VDC1 and VDC2 on the Cisco Nexus 7000 Series Switch. All security services are running in a Layer 2 transparent configuration. As traffic flows from VDC1 to the outside Cisco ASA context, it is bridged across VLANs and forwarded through each security service until it reaches the inside VDC2, where it is routed directly to the correct server or application.

Figure 69 shows the service flow for client-to-server traffic through the security services in the red traffic path. In this example, the client is making a Web request to a virtual IP address (VIP) defined on the Cisco ACE virtual context.

Figure 69. Security service traffic flow (client to server)


The following table describes the stages associated with Figure 69.

Stage What happens

1 The client is directed through Cisco Nexus 7000-1 VDC1 to the active Cisco ASA virtual context, which transparently bridges traffic between VDC1 and VDC2 on the Cisco Nexus 7000.

2 The transparent Cisco ASA virtual context forwards traffic from VLAN 161 to VLAN 162 toward Cisco Nexus 7000-1 VDC2.

3 VDC2 sees the spanning tree root for VLAN 162 through its connection to services switch SS1. SS1 sees the spanning tree root for VLAN 162 through the Cisco ACE transparent virtual context.

4 The Cisco ACE transparent virtual context applies an input service policy on VLAN 162. This service policy, named AGGREGATE_SLB, has the virtual IP definition. The virtual IP rules associated with this policy enforce SSL-termination services and load-balancing services to a Web application firewall server farm. HTTP-based probes determine the state of the Web application firewall server farm. The request is forwarded to a specific Web application firewall appliance defined in the Cisco ACE server farm. The client IP address is inserted as an HTTP header by Cisco ACE to maintain the integrity of server-based logging within the farm. The source IP address of the request forwarded to the Web application firewall is that of the originating client (in this example, 10.7.54.34).

5 In this example, the Web application firewall has a virtual Web application defined named Crack Me. The Web application firewall appliance receives on port 81 the HTTP request that was forwarded from Cisco ACE. The Web application firewall applies all relevant security policies for this traffic and proxies the request back to a VIP (10.8.162.200) located on the same virtual Cisco ACE context on VLAN interface 190.

6 Traffic is forwarded from the Web application firewall on VLAN 163. A port channel is configured to carry VLAN 163 and VLAN 164 on each member trunk interface. Cisco IPS receives all traffic on VLAN 163, performs inline inspection, and forwards the traffic back over the port channel on VLAN 164.

Access layer

In this design, the data center access layer provides Layer 2 connectivity for the server farm. In most cases the primary role of the access layer is to provide port density for scaling the server farm. Figure 70 shows the data center access layer.


Figure 70. Data center access layer

Recommendations

Security at the access layer is primarily focused on securing Layer 2 flows. Best practices include:

Using VLANs to segment server traffic

Associating access control lists (ACL) to prevent any undesired communication

Additional security mechanisms that can be deployed at the access layer include:

Private VLANs (PVLAN)

Catalyst Integrated Security features, which include Dynamic Address Resolution Protocol (ARP) inspection, Dynamic Host Configuration Protocol (DHCP) Snooping, and IP Source Guard

Port security can also be used to lock down a critical server to a specific port.

The access layer and virtual access layer serve the same logical purpose. The virtual access layer is a new location and a new footprint of the traditional physical data center access layer. These features are also applicable to the traditional physical access layer.


Virtual access layer security

Server virtualization is creating new challenges for security deployments. Visibility into virtual machine activity and isolation of server traffic becomes more difficult when virtual machine–sourced traffic can reach other virtual machines within the same server without being sent outside the physical server.

When applications reside on virtual machines and multiple virtual machines reside within the same physical server, it may not be necessary for traffic to leave the physical server and pass through a physical access switch for one virtual machine to communicate with another. Enforcing network policies in this type of environment can be a significant challenge. The goal remains to provide in this new virtual access layer many of the same security services and features as are used in the traditional access layer.

The virtual access layer resides in and across the physical servers running virtualization software. Virtual networking occurs within these servers to map virtual machine connectivity to that of the physical server. A virtual switch is configured within the server to provide virtual machine port connectivity. How each virtual machine connects, and to which physical server port it is mapped, are configured on this virtual switching component. While this new access layer resides within the server, it is really the same concept as the traditional physical access layer. It is just participating in a virtualized environment.

Figure 71 illustrates the deployment of a virtual switching platform in the context of this environment.

Figure 71. Cisco Nexus 1000V data center deployment


When a network policy is defined on the Cisco Nexus 1000V, it is updated in the virtual data center and displayed as a port group. The network and security teams can configure a predefined policy and make it available to the server administrators using the same methods they use to apply policies today. Cisco Nexus 1000V policies are defined through a feature called port profiles.

Policy enforcement

Use port profiles to configure network and security features under a single profile that can be applied to multiple interfaces. Once you define a port profile, one or more interfaces can inherit that profile and any settings defined in it. You can define multiple profiles, each assigned to different interfaces.
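As an example, a tenant port profile on the Cisco Nexus 1000V might look like the sketch below; the profile name reuses a tenant VLAN from the plan earlier in this chapter, and the remaining values are assumptions. The server administrator then simply selects the resulting port group in vCenter.

    port-profile type vethernet Tenant-1_VMNetwork
      vmware port-group
      switchport mode access
      switchport access vlan 113
      no shutdown
      state enabled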

This feature provides multiple security benefits:

Network security policies are still defined by network and security administrators and are applied to the virtual switch in the same way as on physical access switches.

Once the features are defined in a port profile and assigned to an interface, the server administrator need only pick the available port group and assign it to the virtual machine. This reduces the chance of misconfigured, overlapping, or non-compliant security policies being applied.

Visibility

Server virtualization brings new challenges for visibility into what is occurring at the virtual network level. Traffic flows can occur within the server between virtual machines without needing to traverse a physical access switch. Although vCloud Director and vShield Edge restrict vApp traffic inside the organization, when dedicated tenant virtual machines are deployed and a tenant-specific virtual machine is infected or compromised, it may be more difficult for administrators to spot the problem because the traffic does not pass through security appliances.

Encapsulated Remote Switched Port Analyzer (ERSPAN) is a useful tool for gaining visibility into network traffic flows. This feature is supported on the Cisco Nexus 1000V. ERSPAN can be enabled on the Cisco Nexus 1000V and traffic flows can be exported from the server to external devices. See Figure 72.


Figure 72. Cisco Nexus 1000V and ERSPAN IDS and NAM at services switch

The following table describes what happens in Figure 72.

Stage What happens

1 ERSPAN forwards copies of the virtual machine traffic to the Cisco IPS appliance and the Cisco Network Analysis Module (NAM). Both the Cisco IPS and Cisco NAM are located at the services layer in the services switch.

2 A new virtual sensor (VS1) has been created on the existing Cisco IPS appliances to provide monitoring for only the ERSPAN session from the server. Up to four virtual sensors can be configured on a single Cisco IPS appliance, and they can be configured in either intrusion prevention system or intrusion detection system (IDS) mode. In this case the new virtual sensor VS1 has been set to IDS (monitor) mode. It receives a copy of the virtual machine traffic over the ERSPAN session from the Cisco Nexus 1000V.

3 Two ERSPAN sessions have been created on the Cisco Nexus 1000V:

Session 1 has a destination of the Cisco NAM.

Session 2 has a destination of the Cisco IPS appliance.

Each session terminates on the Cisco 6500 services switch.


Using a different ERSPAN-id for each session provides isolation. A maximum of 66 source and destination ERSPAN sessions can be configured per switch.
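A sketch of the two ERSPAN sessions described above follows; the source interface, destination IP addresses, and ERSPAN IDs are assumptions for illustration.

    ! Session 1: copy virtual machine traffic to the Cisco NAM
    monitor session 1 type erspan-source
      source interface vethernet 10 both
      destination ip 10.1.200.50
      erspan-id 51
      no shut
    ! Session 2: copy the same traffic to the Cisco IPS virtual sensor
    monitor session 2 type erspan-source
      source interface vethernet 10 both
      destination ip 10.1.200.60
      erspan-id 52
      no shut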

Caveats

ERSPAN can affect overall system performance, depending on the number of ports sending data and the amount of traffic being generated. It is always a good idea to monitor system performance when you enable ERSPAN to verify the overall effects on the system.

Note: You must permit protocol type header 0x88BE for ERSPAN Generic Routing Encapsulation (GRE) connections.

Security recommendations

The following are some best practice security recommendations; a configuration sketch follows the list:

Harden data center infrastructure devices and use authentication, authorization, and accounting for role-based access control and logging.

Authenticate and authorize device access using TACACS+ to a Cisco Access Control Server (ACS).

Enable local fallback if the Cisco ACS is unreachable.

Define local usernames and secrets for user accounts in the ADMIN group. The local username and secret should match that defined in the TACACS server.

Define the ACLs to l imit the type of traffic to and from the device from the out-of-band management network.

Enable network time protocol (NTP) on all devices. NTP synchronizes timestamps for all logging across the infrastructure, which makes it an invaluable tool for troubleshooting.
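A minimal NX-OS sketch covering several of these recommendations (TACACS+ authentication with local fallback, an out-of-band management ACL, and NTP) follows; the server addresses, key, usernames, and ACL contents are assumptions.

    feature tacacs+
    tacacs-server host 10.1.0.20 key MySharedSecret
    aaa group server tacacs+ ACS-SERVERS
      server 10.1.0.20
      use-vrf management
    ! Local accounts remain available as a fallback if the Cisco ACS is unreachable
    aaa authentication login default group ACS-SERVERS
    username admin password Str0ngSecret role network-admin
    ntp server 10.1.0.10 use-vrf management
    ! Limit the traffic allowed to the out-of-band management interface
    ip access-list MGMT-ONLY
      10 permit tcp 10.1.0.0/24 any eq 22
      20 deny ip any any
    interface mgmt0
      ip access-group MGMT-ONLY in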

For detailed infrastructure security recommendations and best practices, see the Cisco Network Security Baseline at the following URL:

www.cisco.com/en/US/docs/solutions/Enterprise/Security/Baseline_Security/securebasebook.html


Threats mitigated

The following table indicates the threats mitigated by the data center security design described in this guide.

Threat                                     Cisco ASA Firewall / Cisco IPS / Cisco ACE / Cisco ACE WAF / RSA enVision / Infrastructure protection

Authorized access                          Yes Yes Yes Yes Yes

Malware, viruses, worms, DoS               Yes Yes Yes Yes Yes

Application attacks (XSS, SQL injection, directory traversal, and so forth)    Yes Yes Yes Yes

Tunneled attacks                           Yes Yes Yes Yes Yes Yes

Visibility                                 Yes Yes Yes Yes Yes Yes

Vblock™ Systems security features

Within the Vblock System, the following security features can be applied to the trusted multi-tenancy design framework:

Port security

ACLs

Port security

Cisco Nexus 5000 Series switches provide port security features that reject intrusion attempts and report these intrusions to the administrator.

Typically, any fibre channel device in a SAN can attach to any SAN switch port and access SAN services based on zone membership. Port security features prevent unauthorized access to a switch port in the Cisco Nexus 5000 Series switch.

ACLs

A router ACL (RACL) is an ACL that is applied to an interface with a Layer 3 address assigned to it. It can be applied to any port that has an IP address, including the following:

Routed interfaces

Loopback interfaces

VLAN interfaces

The security boundary is to permit or deny traffic moving between subnets or networks. The RACL is supported in hardware and has no effect on performance.


A VLAN access control list (VACL) is an ACL that is applied to a VLAN; it cannot be applied to any other type of interface. The security boundary is to permit or deny traffic moving between VLANs and to permit or deny traffic within a VLAN. The VLAN ACL is supported in hardware.

A port access control list (PACL) is an ACL applied to a Layer 2 switch port interface. It cannot be applied to any other type of interface, and it works only in the ingress direction. The security boundary is to permit or deny traffic moving within a VLAN. The PACL is supported in hardware and has no effect on performance.
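As an illustration, the sketch below shows how a VACL and a PACL are applied on an NX-OS switch; the ACL contents, VLAN, and interface are assumptions.

    ! VACL: filter traffic within VLAN 113
    ip access-list TENANT1-ALLOWED
      10 permit ip 10.1.113.0/24 10.1.113.0/24
    vlan access-map TENANT1-MAP 10
      match ip address TENANT1-ALLOWED
      action forward
    vlan filter TENANT1-MAP vlan-list 113
    ! PACL: ingress-only filter on a Layer 2 switch port
    interface Ethernet1/20
      switchport
      ip port access-group TENANT1-ALLOWED in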

Design considerations for availability and data protection

Availability is defined as the probability that a service or network is operational and functional as needed at any point in time. Cloud data centers offer IaaS to either internal enterprise customers or external customers of service providers. The services are controlled using SLAs, which can be stricter in service provider deployments than in enterprise deployments. A highly available data center infrastructure is the foundation of SLA guarantee and successful cloud deployment.

Physical redundancy design considerations

To build an end-to-end resilient design, hardware redundancy is the first layer of protection that provides rapid recovery from failures. Physical redundancy must be enabled at various layers of the infrastructure, as described in the following table.

Physical redundancy method Details

Node redundancy: Redundant pair of devices

Hardware redundancy within the node: Dual supervisors; distributed port channel across line cards; redundant line cards per virtual device context

Link redundancy: Distributed port channel across line cards; virtual port channel

Figure 73 shows the overall network availability for each layer.


Figure 73. Network availability for each layer

In addition to physical layer redundancy, the following logical redundancy features help provide a highly reliable and robust environment that guarantees the customer's service with minimal interruption during network failures or maintenance:

Virtual port channel

Hot standby router protocol

Nexus 1000V and MAC pinning

Nexus 1000V VSM redundancy


Virtual port channel

A virtual port channel (vPC) is a port-channeling concept that extends link aggregation to two separate physical switches. It allows links that are physically connected to two Cisco Nexus devices to appear as a single port channel to any other device, including a switch or server. This feature is transparent to neighboring devices. A virtual port channel provides Layer 2 multipathing, which creates redundancy through increased bandwidth, enables multiple active parallel paths between nodes, and load balances traffic where alternative paths exist. The following devices support virtual port channels (a configuration sketch follows this list):

Cisco Nexus 1000V Series Switch

Cisco Nexus 5000 Series Switch

Cisco Nexus 7000 Series Switch

Cisco UCS 6120 fabric interconnect
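A minimal NX-OS vPC sketch for one of a pair of Nexus switches follows; the domain ID, keepalive addresses, and port numbers are assumptions.

    feature vpc
    feature lacp
    vpc domain 10
      peer-keepalive destination 10.1.0.12 source 10.1.0.11 vrf management
    ! vPC peer link between the two Nexus peers
    interface port-channel 1
      switchport mode trunk
      vpc peer-link
    ! A downstream switch or server sees port-channel 20 as a single logical link
    interface port-channel 20
      switchport mode trunk
      vpc 20
    interface Ethernet1/20
      switchport mode trunk
      channel-group 20 mode active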

Hot standby router protocol

Hot Standby Router Protocol (HSRP) is Cisco's standard method of providing high network availability by providing first-hop redundancy for IP hosts on an IEEE 802 LAN configured with a default gateway IP address. HSRP routes IP traffic without relying on the availability of any single router. It enables a set of router interfaces to work together to present the appearance of a single virtual router or default gateway to the hosts on a LAN.

When HSRP is configured on a network or segment, it provides a virtual Media Access Control (MAC) address and an IP address that is shared among a group of configured routers. HSRP allows two or more HSRP-configured routers to use the MAC address and IP network address of a virtual router. The virtual router does not exist; it represents the common target for routers that are configured to provide backup to each other. One of the routers is selected to be the active router and another to be the standby router, which assumes control of the group MAC and IP address should the designated active router fail.

Figure 74 shows active and standby HSRP routers configured on Switch 1 and Switch 2.


Figure 74. Active and standby HSRP routers

Virtual port channel is used across the trusted multi-tenancy network between the different layers. HSRP is configured at the Nexus 7000 sub-aggregation layer, which provides the backup default gateway if the primary default gateway fails.
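A sketch of an HSRP group on one sub-aggregation switch follows; the VLAN, group number, and addresses are assumptions, and the peer switch would carry the same group with a lower priority.

    feature hsrp
    interface Vlan113
      no shutdown
      ip address 10.1.113.2/24
      hsrp 113
        ip 10.1.113.1
        priority 110
        preempt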

Cisco Nexus 1000V and MAC pinning

The Cisco Nexus 1000V Series Switch uses the MAC pinning feature to provide more granular load-balancing methods and redundancy. Virtual machine NICs can be pinned to an uplink path using port profile definitions. Using port profiles, an administrator defines the preferred uplink path to use. If these uplinks fail, another uplink is dynamically chosen. If an active physical link goes down, the Cisco Nexus 1000V Series Switch sends notification packets upstream over a surviving link to inform upstream switches of the new path required to reach these virtual machines. These notifications are sent to the Cisco UCS 6100 Series fabric interconnect, which updates its MAC address tables and sends gratuitous ARP messages on the uplink ports so the data center access layer network can learn the new path.
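An illustrative Nexus 1000V uplink port profile using MAC pinning follows; the profile name and VLAN range are assumptions.

    port-profile type ethernet System-Uplink
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 100-123
      channel-group auto mode on mac-pinning
      no shutdown
      state enabled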

Nexus 1000V VSM redundancy

Define one Virtual Supervisor Module (VSM) as the primary module and the other as the secondary module. The two VSMs run as an active-standby pair, similar to supervisors in a physical chassis, and provide high-availability switch management. The Cisco Nexus 1000V Series VSM is not in the data path, so even if both VSMs are powered down, the Virtual Ethernet Module (VEM) is not affected and continues to forward traffic. Each VSM in an active-standby pair is required to run on a separate VMware ESXi host. This setup helps ensure high availability even if one VMware ESXi server fails.


Design considerations for service provider management and control

The Cisco Data Center Network Manager infrastructure can actively monitor the SAN and LAN. With DCNM, many features of Cisco NX-OS–including Ethernet switching, physical ports and port channels, and ACLs–can be configured and monitored.

Integration of Cisco Data Center Network Manager and Cisco Fabric Manager provides overall uptime and reliability of the cloud infrastructure and improves business continuity.

Nexus 5000 Series switches provide many management features to help provision and manage the device, including:

CLI-based console to provide detailed out-of-band management

Virtual port channel configuration synchronization

SSHv2

Authentication, authorization, and accounting (AAA) with RBAC


Design considerations for additional security technologies

Security and compliance ensures the confidentiality, integrity, and availability of each tenant’s environment at every layer of the trusted multi-tenancy stack using technologies like identity management and access control, encryption and key management, firewalls, malware protection, and intrusion prevention. This is a primary concern for both service provider and tenant. The ability to have an accurate, clear picture of the security and compliance posture of the Vblock System is vital to the success of the service provider in ensuring a trusted, multi-tenant environment; and for the tenants to adopt the converged resources in alignment with their business objectives.

The trusted multi-tenancy design ensures that all activities performed in the provisioning, configuration, and management of the multi-tenant environment, as well as day-to-day activities and events for individual tenants, are verified and continuously monitored. It is also important that all operational events are recorded and that these records are available as evidence during audits.

The security and compliance element of trusted multi-tenancy encircles the other elements. It is the verify component of the maxim–“Trust, but verify”–in that all configurations, technologies, and solutions must be auditable and their status verifiable in a timely manner. Governance, Risk, and Compliance (GRC), specifically IT GRC, is the foundation of this element.

The IT GRC domain focuses on the management of IT-related controls. This is vital to the converged infrastructure provider, as surveys indicate that security ranks highest among the concerns for using cloud-based solutions. The ability to ensure oversight and report on security controls such as firewalls, hardening configurations, and identity access management; and non-technical controls such as consistent use of processes, background checks for employees, and regular review of policies is paramount to the success of the provider in ensuring the security and compliance objectives demanded by their customers. Key benefits of a robust IT GRC solution include:

Creating and distributing policies and controls and mapping them to regulations and internal compliance requirements

Assessing whether the controls are actually in place and working, and remediating those that are not

Easing risk assessment and mitigation


Design considerations for secure separation

This section discusses using RSA Archer eGRC and RSA enVision to achieve secure separation.

RSA Archer eGRC

With respect to secure separation, the RSA Archer eGRC Platform is a multi-tenant software platform, supporting the configuration of separate instances in provider-hosted environments. These individual instances support data segmentation, as well as discrete user experiences and branding. By utilizing inherited record permissions and role-based access controls built into the platform, both service providers and tenants are provided secure and separate spaces within a single installation of RSA Archer eGRC.

Based upon tenant requirements, it is also possible to provision a discrete RSA Archer eGRC instance per tenant. Unless a larger number of concurrent users will be accessing the instance or a high-availability solution is required, this deployment can run within a single virtual machine with the application and database components running on the same server.

RSA enVision

Deploying separate instances of RSA enVision for the service provider and the tenants results in a discrete and secure separation of the collected and stored data. For the service provider, an RSA enVision instance centrally collects and stores event information from all the Vblock System components, separately from each tenant's data.

Design considerations for service assurance

This section discusses using RSA Archer eGRC and RSA enVision to achieve service assurance.

RSA Archer eGRC

The RSA Archer eGRC Platform supports the trusted multi-tenancy element of service assurance by providing a clear and consistent mechanism for delivering metric and service level agreement data to both service providers and tenants through robust reporting and dashboard views. Through integration with RSA enVision and engagements with RSA Professional Services, these reports and dashboards can be automated using data points collected from the element managers and products by RSA enVision.

Figure 75 shows an example RSA Archer eGRC dashboard.


Figure 75. Sample RSA Archer eGRC dashboard

RSA enVision

RSA enVision integrates with RSA Archer eGRC in the RSA Security Incident Management Solution to complete and streamline the entire lifecycle for security incident management. By capturing all event and alert data from the Vblock System components, service providers are able to establish baselines and then be automatically alerted to anomalies–from an operational and security perspective.

The correlation capabilities allow seemingly innocent information from separate logs, when read holistically, to identify real events. This allows for quick responses to those events in the environment, their resolution, and subsequent root cause analysis and remediation. From the tenant point of view, this provides a more stable and reliable solution for business needs.


Design considerations for security and compliance

This section discusses using RSA Archer eGRC and RSA enVision to achieve security and compliance.

RSA Archer eGRC

The RSA Solution for Cloud Security and Compliance for RSA Archer eGRC enables user organizations and service providers to orchestrate and visualize the security of their virtualization infrastructure and physical infrastructure from a single console. The solution extends the Enterprise, Compliance, and Policy modules within the RSA Archer eGRC Platform with content from the Archer Library, dashboard views, and questionnaires to provide a solution based on cloud security and compliance.

The RSA Solution for Cloud Security and Compliance provides the service provider the mechanism to perform continuous monitoring of the VMware infrastructure against the more than 130 control procedures in the library written specifically against the VMware vSphere 4.0 Security Hardening Guide. In addition to providing the service provider the necessary means to oversee and govern the security and compliance posture, the RSA Solution also allows for:

1. Discovery of new devices

2. Configuration measurement of new devices

3. Establishment of baselines using questionnaires

4. Remediation of compliance issues

Figure 76. RSA Solution for Cloud Security and Compliance


Using this solution gives the service provider a means to ensure and, very importantly, prove the compliance of the virtualized infrastructure to authoritative sources such as PCI-DSS, COBIT, NIST, HIPAA, and NERC.

RSA enVision

RSA enVision includes preconfigured integration with all Vblock System infrastructure components, including the Cisco UCS and Nexus components, EMC storage, and VMware vSphere, vCenter, vShield, and vCloud Director. This ensures a consistent and centralized means of collecting and storing the events and alerts generated by the various Vblock System components.

From the service provider viewpoint, RSA enVision provides the means to ensure compliance with regulatory requirements regarding secure logging and monitoring.

Design considerations for availability and data protection

This section discusses using RSA Archer eGRC and RSA enVision to achieve availability and data protection.

RSA Archer eGRC

The powerful and flexible nature of the RSA Archer eGRC Platform provides both service providers and tenants the mechanism to integrate business critical data points and information into their governance program. The consistent understanding of where business sensitive data is located, as well as its criticality rating, is fundamental in making provisioning and availability decisions. Through consultation with RSA Professional Services, it is possible to integrate workflow-managed questionnaires to ensure consistent capturing of this information. This captured information can then be used as data points for the creation of custom reporting dashboards and reports.

Figure 77. Workflow questionnaire

In addition to this information classification, RSA Archer integrates with RSA enVision as its collection entity from sources such as data loss prevention, anti-virus, and intrusion detection/prevention systems to bring these data points into the centralized governance dashboards.


RSA enVision

RSA enVision helps the service provider ensure the continued availability of the environment and the protection of the data contained in the Vblock System. By centralizing and correlating alerts and events, RSA enVision provides the service provider the visibility needed to identify and act upon security events within the environment. Real-time notification provides the means to prevent possible compromises and impact to the services and the tenants.

Design considerations for tenant management and control

This section discusses using RSA Archer eGRC and RSA enVision to achieve tenant management and control.

RSA Archer eGRC

The multi-tenant reporting capabilities of the RSA Archer eGRC Platform give each tenant a comprehensive, real-time view of the eGRC program. Tenants can take advantage of prebuilt reports to monitor activities and trends and generate ad hoc reports to access the information needed to make decisions, address issues, and complete tasks. The cloud provider can build customizable dashboards tailored by tenant or audience, so that users get exactly the information they need based on their roles and responsibilities.

RSA enVision

For tenants requiring centralized event management for their virtualized systems, dedicated instances of RSA enVision are provisioned for their exclusive use. As a virtual appliance under the tenant’s control, RSA enVision in this use case provides the mechanism for the virtualized operating systems, applications, and services to centralize their event and logs. The tenant can use the reports and dashboards within their RSA enVision instance, or integrate it with an instance of RSA Archer eGRC, to ensure transparency to the operational and security events within their hosted environment.


Design considerations for service provider management and control

This section discusses using RSA Archer eGRC and RSA enVision to achieve service provider management and control.

RSA Archer eGRC

Similar to providing the tenants with reporting capabilities, the RSA Archer eGRC Platform empowers the service provider with comprehensive, real-time visibility into their governance, risk, and compliance program. This transparency allows the provider to more effectively manage the risks to their environment, and in turn, manage the risks to their customers’ hosted resources. Through the continuous monitoring of controls and the remediation workflow capabilities, service providers can ensure that the shared and dedicated infrastructure meets both the requirements set forth by regulatory authorities and those agreed upon with their tenants.

Figure 78. Sample report

RSA enVision

Service providers in a multi-tenant environment need the complete visibility that RSA enVision provides into their converged infrastructure environment. By consolidating the alerts and events from all the Vblock System components, service providers can efficiently and effectively monitor, manage, and control the environment. The real-time knowledge of what is happening in the Vblock System helps the service provider deliver each of the VCE elements of trusted multi-tenancy.


Conclusion

The six foundational elements of secure separation, service assurance, security and compliance, availability and data protection, tenant management and control, and service provider management and control form the basis of the Vblock System trusted multi-tenancy design framework.

The following table summarizes the technologies used to ensure trusted multi-tenancy at each layer of the Vblock System.

Secure separation

Compute: Use of service profiles for tenants; physical blade separation; UCS organizational groups; UCS RBAC, service profiles, and server pools; UCS VLANs; UCS VSANs; VMware vCloud Director

Storage: VSAN segmentation; zoning; mapping and masking; RAID groups and pools; Virtual Data Mover

Network: VLAN segmentation; VRF; Cisco Nexus 7000 Virtual Device Context (VDC); access control lists (ACL); Nexus 1000V port profiles; VMware vShield App, Edge

Security technologies: Discrete, separate instances of RSA Archer eGRC and RSA enVision for the service provider and for each tenant as needed

Service assurance

Compute: UCS quality of service; port channels; server pools; VMware vCloud Director; VMware High Availability; VMware Fault Tolerance; VMware Distributed Resource Scheduler (DRS); VMware vSphere resource pools

Storage: EMC Unisphere Quality of Service Manager; EMC Fully Automated Storage Tiering (FAST); pools

Network: Nexus 1000/5000/7000 quality of service; quality of service bandwidth control; quality of service rate limiting; quality of service traffic classification; quality of service queuing

Security technologies: Robust reports and dashboard views with RSA Archer eGRC; audit logging and alerting with RSA enVision integrated into the incident management lifecycle

Security and compliance

Compute: UCS RBAC; LDAP; vCenter Administrator group; RADIUS or TACACS+

Storage: Authentication with LDAP or Active Directory; VNX user account roles; VNX and RSA enVision; IP filtering

Network: ASA firewalls; Cisco Application Control Engine; Cisco Intrusion Prevention System (IPS); port security; ACLs

Security technologies: Lifecycle and reporting of automated and non-automated control compliance with RSA Archer eGRC; regulatory logging and auditing requirements met with RSA enVision


Availability and data protection

Compute: Cisco UCS high availability (dual fabric interconnects); fabric interconnect clustering; service profile dynamic mobility; VMware vSphere High Availability; VMware vMotion; VMware vCenter Heartbeat; VMware vCloud Director cells; VMware vCenter Site Recovery Manager (SRM)

Storage: High availability (link redundancy, hardware and node redundancy); local and remote data protection; EMC SnapSure; EMC SnapView; EMC RecoverPoint; EMC MirrorView; EMC PowerPath Migration Enabler

Network: Cisco Nexus OS virtual port channels (vPC); Cisco Hot Standby Router Protocol; Cisco Nexus 1000V and MAC pinning; device/link redundancy; Nexus 1000V active/standby VSM

Security technologies: Data classification questionnaires with RSA Archer eGRC; real-time correlations and alerting through integration of systems with RSA enVision

Tenant management and control

Compute: VMware vCloud Director; RSA enVision

Storage: VMware vCloud Director

Network: VMware vCloud Director

Security technologies: Tenant visibility into their security and compliance posture through discrete instances of RSA Archer eGRC; instances of RSA enVision to address specific tenant requirements and regulatory needs

Service provider management and control

Compute: VMware vCenter; Cisco UCS Manager; VMware vCloud Director; VMware vShield Manager; VMware vCenter Chargeback; Cisco Nexus 1000V

Storage: EMC Ionix Unified Infrastructure Manager; EMC Unisphere; EMC Ionix UIM/P

Network: Cisco Data Center Network Manager (DCNM); Cisco Fabric Manager (FM)

Security technologies: Provider governance and insight over the entire security and compliance posture with RSA Archer eGRC; centralized logging and alerting to maximize efficiencies with RSA enVision


Next steps

To learn more about this and other solutions, contact a VCE representative or visit www.vce.com.


Acronym glossary

The following table defines acronyms used throughout this guide.

Acronym Definition

ABE Access based enumeration
ACE Application Control Engine
ACL Access control list
ACS Access Control Server
AD Active Directory
AMP Advanced Management Pod
API Application programming interface
CDP Continuous data protection
CHAP Challenge Handshake Authentication Protocol
CLI Command-line interface
CNA Converged network adapter
CoS Class of service
CRR Continuous remote replication
DR Disaster recovery
DRS Distributed Resource Scheduler
EFD Enterprise flash drive
ERSPAN Encapsulated Remote Switched Port Analyzer
FAST Fully Automated Storage Tiering
FC Fibre Channel
FCoE Fibre Channel over Ethernet
FWSM Firewall Services Module
GbE Gigabit Ethernet
HA High availability
HBA Host bus adapter
HSRP Hot standby router protocol
IaaS Infrastructure as a service
IDS Intrusion detection system
IPS Intrusion prevention system
IPsec Internet protocol security


LACP Link Aggregation Control Protocol
LUN Logical unit number
MAC Media access control
NAM Network Analysis Module
NAT Network address translation
NDMP Network Data Management Protocol
NPV N port virtualization
NTP Network Time Protocol
PAgP Port Aggregation Protocol
PACL Port access control list
PCI-DSS Payment card industry data security standards
PPME PowerPath Migration Enabler
QoS Quality of service
RACL Router access control list
RBAC Role-based access control
SAN Storage area network
SLA Service level agreement
SPOF Single point of failure
SRM Site Recovery Manager
SSH Secure shell
SSL Secure socket layer
TMT Trusted multi-tenancy
UIM/P Unified Infrastructure Manager Provisioning
UCS Unified Computing System
UQM Unisphere Quality of Service Manager
VACL VLAN access control list
vCD vCloud Director
vDC Virtual data center
VDC Virtual device context
VDM Virtual data mover
VEM Virtual Ethernet Module
vHBA Virtual host bus adapter


VIC Virtual interface card
VIP Virtual IP
VLAN Virtual local area network
VM Virtual machine
VMDK Virtual machine disk
VMFS Virtual machine file system
vNIC Virtual network interface card
vPC Virtual port channel
VRF Virtual routing and forwarding
VSAN Virtual storage area network
vSM vShield Manager
VSM Virtual Supervisor Module
WAF Web application firewall


ABOUT VCE VCE, formed by Cisco and EMC with investments from VMware and Intel, accelerates the adoption of converged infrastructure and cloud-based computing models that dramatically reduce the cost of IT while improving time to market for our customers. VCE, through the Vblock Systems, delivers the industry's only fully integrated and fully virtualized cloud infrastructure system. VCE solutions are available through an extensive partner network, and cover horizontal applications, vertical industry offerings, and application development environments, allowing customers to focus on business innovation instead of integrating, validating, and managing IT infrastructure. For more information, go to www.vce.com.

Copyright 2013 VCE Company, LLC. All Rights Reserved. Vblock and the VCE logo are registered trademarks or trademarks of VCE Company, LLC and/or its affiliates in the United States or other countries. All other trademarks used herein are the property of their respective owners.
