VMWARE VSAN AND NSX: DESIGN AND DEPLOY
Anuj Sharma, Principal Engineer, Solutions Architecture, Dell EMC ([email protected])
Knowledge Sharing Article © 2018 Dell Inc. or its subsidiaries.
2018 Dell EMC Proven Professional Knowledge Sharing 2
Table of Contents
Architecture and Components of vSAN
Major components of a vSAN Environment
Things to know when designing and implementing a vSAN Environment
Configuring vSAN
VMware vSAN Licensing
Architecture and Components of NSX
Things to know when designing and implementing an NSX Environment
Things to know about NSX licensing
References
Table of Figures
Figure 1 vSAN Logical Layout
Figure 2 vSAN Disk Groups
Figure 3 vSAN Objects
Figure 4 vSAN Components
Figure 5 vSAN Storage Policies
Figure 6 vSAN VMkernel Port
Figure 7 vSAN Hardware Compatibility
Figure 8 vSAN Config 1
Figure 9 vSAN Config 2
Figure 10 vSAN Config 3
Figure 11 vSAN Licensing
Figure 12
Figure 13 NSX Logical Components
Disclaimer: The views, processes or methodologies published in this article are those of the author. They
do not necessarily reflect Dell EMC’s views, processes or methodologies.
Architecture and Components of vSAN
Figure 1 vSAN Logical Layout
Before discussing VMware vSAN, let's first look at a traditional Storage Area Network (SAN). A traditional SAN consists of hosts accessing storage from a storage array through Fibre Channel switches. The storage array provides storage to multiple hosts in the data center, which mitigates the scalability limits of host-local storage. Looking deeper into a storage array, it consists of controllers, memory, disk array enclosures, and front-end ports. A storage array can be logically visualized as another host, with its own processing power and memory, serving storage to the other hosts in the environment. Of course, implementing a SAN requires capital expenditure.
In VMware vSAN, all hosts in a vSphere cluster contribute their local storage to a common pool presented as the vSAN datastore. This means customers get the functionality of shared storage, along with vSphere features such as vMotion and HA, without a dedicated storage array and by using existing infrastructure. With this overview of VMware vSAN in mind, let's dig deeper into the architecture to see how it's accomplished.
Major components of a vSAN Environment
Hosts
Servers contribute their local storage to the common vSAN Datastore in a cluster.
Network
An Ethernet network carries storage traffic between the hosts. The existing Ethernet network can be leveraged for this storage traffic as well.
Disk Group
Storage in each host is organized into disk groups. A disk group consists of one SSD serving as the cache tier, while the remaining disks can be SSDs or magnetic (SATA) disks serving as the capacity tier.
Figure 2 vSAN Disk Groups
In Figure 2 we can see that ESXi cluster RegionA01-COMP01 has 3 hosts with vSAN enabled, and each host has 2 disk groups with 3 disks each. Each disk group has one SSD dedicated to caching.
Objects
Figure 3 vSAN Objects
Virtual machine data is stored as objects in the vSAN datastore. In Figure 3 we see the various objects that make up a VM: the VM home namespace is treated as one object, each VMDK file is treated as one object, and so on.
Components
Objects are further divided into components when stored on the vSAN datastore as a RAID tree. A single component can be 255 GB maximum, so a 500 GB VMDK file, for example, is split into two components on the vSAN datastore.
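As a rough illustration of the split, the minimum component count is just a ceiling division. This is a plain Python sketch, not a VMware API; the function name is mine, and witness components plus any policy-driven replicas or stripes add more components on top of this minimum.

```python
import math

MAX_COMPONENT_GB = 255  # vSAN's per-component size limit

def min_components(vmdk_gb: float) -> int:
    """Minimum number of components a VMDK object splits into,
    before mirroring, striping, or witnesses add more."""
    return max(1, math.ceil(vmdk_gb / MAX_COMPONENT_GB))

print(min_components(200))  # 1
print(min_components(500))  # 2
```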
Figure 4 vSAN Components
In the above screenshot, we see that TEST VM has multiple objects, such as VM Home and Hard disk 1. The VM Home object has a storage policy with failures to tolerate set to 1, meaning that in a cluster of 3 hosts, the failure of 1 host does not affect the availability of data. To achieve this, the vSAN policy stores two copies of each component on different hosts, esx-02a and esx-01a. We can also see a witness component. As in other quorum schemes, the witness exists to avoid split brain: if a host becomes isolated, I/O continues from the host that can still communicate with the witness. All witness placement is done automatically; you can find more about the internals in this blog: https://blogs.vmware.com/vsphere/2014/04/vmware-virtual-san-witness-component-deployment-logic.html
Storage Policies
Storage policies define the way object components are stored for availability: how many copies of an object are kept for high availability and how those copies are placed.
Figure 5 vSAN Storage Policies
In Figure 5, we see the option Primary Level of Failures to Tolerate, which decides how many host or disk failures can be tolerated in the cluster. Number of Disk Stripes per Object decides how each object is laid out: with a stripe width of 1, the object component is stored on one disk; with a stripe width of 2, the object is divided into two components, each stored on a different disk/host. The policy also offers options such as Force Provisioning: if the policy conditions cannot be met at provisioning time, the object is still deployed with the minimum resources available, and as soon as the policy conditions can be met, the components are redistributed to comply. For instance, if failures to tolerate is set to 2 but the current cluster cannot satisfy it, the virtual machine is still deployed with the available resources, and as soon as resources are added the objects are redistributed.
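The copy and host arithmetic behind these mirroring policies can be sketched in a few lines. This is illustrative Python; the function names are mine, not vSAN's.

```python
def mirror_copies(ftt: int) -> int:
    # RAID-1 mirroring keeps FTT + 1 full copies of each object
    return ftt + 1

def min_hosts_for_mirroring(ftt: int) -> int:
    # the "2n + 1" rule: n + 1 data copies plus n witness
    # components, each placed on a different host
    return 2 * ftt + 1

print(mirror_copies(1), min_hosts_for_mirroring(1))  # 2 3
print(mirror_copies(2), min_hosts_for_mirroring(2))  # 3 5
```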
Things to know when designing and implementing a vSAN Environment
One SAS or SATA host bus adapter (HBA), or a RAID controller in pass-through or RAID 0 mode, is required; pass-through mode is recommended.
Hybrid disk group configuration: At least one flash cache device, and one or more SAS, NL-SAS or
SATA magnetic disks.
All-flash disk group configuration: One SAS or SATA solid state disk (SSD) or PCIe flash device used
for caching, and one or more flash devices used for capacity.
Each host can have a maximum of 5 disk groups.
Each disk group can have a maximum of 8 disks, including the cache disk. This implies a host can have a maximum of 40 disks.
In vSAN 6.5, the hybrid cluster cache SSD provides both a write buffer (30%) and a read cache (70%). The more SSD cache capacity in the host, the greater the performance, since more I/O can be cached.
In a vSAN all-flash cluster, 100% of the cache is allocated to writes; read performance from the capacity flash tier is more than sufficient.
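The 70/30 split described above can be expressed as a tiny sizing helper. This is a sketch; the function name and return structure are mine.

```python
def cache_split_gb(cache_ssd_gb: float, all_flash: bool) -> dict:
    """How a cache device is used: hybrid splits it 70% read cache /
    30% write buffer; all-flash dedicates 100% to the write buffer."""
    if all_flash:
        return {"write_buffer": cache_ssd_gb, "read_cache": 0.0}
    return {"write_buffer": 0.30 * cache_ssd_gb,
            "read_cache": 0.70 * cache_ssd_gb}

split = cache_split_gb(400, all_flash=False)
print(round(split["read_cache"]), round(split["write_buffer"]))  # 280 120
```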
Not every node in a vSAN cluster needs to have local storage although a balanced configuration
is recommended. Hosts with no local storage can still leverage the distributed vSAN datastore.
Each host must have minimum network bandwidth dedicated to vSAN: 1 GbE for hybrid configurations and 10 GbE for all-flash configurations.
A Distributed Switch can be optionally configured between all hosts in the vSAN cluster, although
VMware Standard Switches (VSS) will also work.
A vSAN VMkernel port must be configured for each host. With a Distributed Switch, Network I/O
Control can also be enabled to dedicate bandwidth to the vSAN network.
Figure 6 vSAN VMkernel Port
Layer 2 multicast must be enabled on the physical switch that handles vSAN traffic prior
to vSAN 6.6. Unicast will work for vSAN 6.6 onward.
Version 6.2 and later of vSAN support IPv4-only configurations, IPv6-only configurations,
and also configurations where both IPv4 and IPv6 are enabled. This addresses
requirements for customers moving to IPv6 and, additionally, supports mixed mode for
migrations.
The VMkernel port is labeled vSAN. This port is used for intra-cluster node communication, and for reads and writes when a vSphere host in the cluster owns a particular virtual machine but the data blocks making up the virtual machine's files are located on a different vSphere host. In that case, I/O must traverse the vSAN network configured between the hosts in the cluster.
vSAN Maximums and Minimums
For updated maximum and minimum values, please refer to the latest guides available on the VMware website.
VMware updates the server compatibility database regularly. Before deciding on a server, validate its compatibility on the portal:
https://www.vmware.com/resources/compatibility/search.php?deviceCategory=vsan
There may be cases where, even after validating compatibility online, vCenter shows the Hardware Compatibility check as failed. In that case, download the latest .json file from the internet and manually upload it on the vCenter Health tab so that vCenter has the latest compatibility database:
http://partnerweb.vmware.com/service/vsan/all.json.gz
Figure 7 vSAN Hardware Compatibility
VMware recommends deploying ESXi hosts with similar or identical configurations across all cluster members, including similar or identical storage configurations. This ensures an even balance of virtual machine storage components across the disks and hosts in the cluster.
To tolerate "n" failures, there must be "2n + 1" hosts in the cluster.
For erasure coding, RAID 5 requires a minimum of 4 hosts and RAID 6 requires a minimum of 6 hosts in the cluster.
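These minimums, together with the commonly cited raw-capacity multipliers (2x for RAID-1 FTT=1, roughly 1.33x for RAID-5 3+1, and 1.5x for RAID-6 4+2), can be tabulated as below. The multipliers are standard vSAN figures but should be verified against the current VMware guides; the table and function are an illustrative sketch.

```python
# (method, FTT) -> (minimum hosts, raw-capacity multiplier)
PROTECTION = {
    ("mirror", 1): (3, 2.0),   # RAID-1, 2n + 1 hosts
    ("mirror", 2): (5, 3.0),
    ("mirror", 3): (7, 4.0),
    ("raid5", 1): (4, 4 / 3),  # erasure coding, 3 data + 1 parity
    ("raid6", 2): (6, 1.5),    # erasure coding, 4 data + 2 parity
}

def min_hosts(method: str, ftt: int) -> int:
    """Minimum cluster size for a given protection method and FTT."""
    return PROTECTION[(method, ftt)][0]

print(min_hosts("mirror", 2))  # 5
print(min_hosts("raid6", 2))   # 6
```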
Minimum requirements for various storage policy availability options
Choose servers with 4 x 10 GbE network interface cards: two dedicated to production traffic and two dedicated to the vSAN network. LAG should be used for teaming.
Since SSDs have endurance ratings, it is recommended to use high-endurance SSDs for the cache tier, as they sustain far more writes than the capacity SSDs in all-flash configurations. Use high-capacity SSDs in the capacity tier and lower-capacity, high-endurance SSDs in the cache tier.
VMware also recommends keeping 30% space in a datastore as slack space, i.e. free space.
While sizing CPU and Memory, add at least 10% buffer for vSAN.
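The 30% slack-space guidance above feeds directly into capacity sizing: raw capacity must cover protection overhead and still leave the slack free. A minimal sketch, assuming the usual protection multipliers (function name and structure are mine):

```python
SLACK = 0.30  # keep roughly 30% of the datastore free, per the guidance above

def raw_capacity_gb(usable_gb: float, protection_multiplier: float) -> float:
    """Raw capacity to provision so `usable_gb` of VM data fits after
    protection overhead while preserving the 30% slack space.
    Multiplier examples: 2.0 (RAID-1 FTT=1), ~1.33 (RAID-5), 1.5 (RAID-6)."""
    return usable_gb * protection_multiplier / (1 - SLACK)

# 10 TB of usable VM data protected with RAID-1 FTT=1:
print(round(raw_capacity_gb(10_000, 2.0)))  # 28571
```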
Use fault domains to distribute the components of an object across failure boundaries. A fault domain is a set of hosts likely to fail together; in a multi-rack ESXi environment, each rack is a fault domain, so components of the same object are not stored on servers in the same rack but are spread across racks.
Create multiple disk groups in a server if possible. If the cache disk in a disk group fails, the whole disk group goes offline; multiple disk groups give better performance and smaller fault domains.
VMware strongly recommends not placing the vCenter Server instance or KMS instances within the encrypted vSAN datastore.
If a host has more than 512 GB of memory, it must boot from a disk drive or SATADOM. Booting from USB or SD is not supported in this case.
vSAN port requirements
In a pure vSAN 6.6 environment, multicast ports are no longer required.
The vSAN data network should meet a <5 ms latency requirement. Bandwidth is workload dependent, but a minimum of 10 Gbps is recommended for most workloads.
In a stretched cluster, latency between the witness and the data site should be <=100 ms if there are more than 20 hosts in total and <=200 ms if there are fewer than 20. As a rule of thumb, the bandwidth between the witness and the data site requires 2 Mbps for every 1,000 objects.
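The witness-link rules of thumb above reduce to two one-liners. This is an illustrative sketch encoding the figures as stated in this article; confirm the thresholds against the current VMware stretched-cluster guides before using them for design.

```python
def witness_bandwidth_mbps(num_objects: int) -> float:
    # rule of thumb above: 2 Mbps per 1,000 vSAN objects
    return 2.0 * num_objects / 1000

def max_witness_latency_ms(total_hosts: int) -> int:
    # latency bounds as stated above: 100 ms for clusters with more
    # than 20 hosts, 200 ms otherwise
    return 100 if total_hosts > 20 else 200

print(witness_bandwidth_mbps(25_000))  # 50.0
print(max_witness_latency_ms(24))      # 100
```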
Configuring vSAN
Enabling vSAN is a simple task once the required hardware is in place. The screenshot below (Figure 8) provides a high-level overview.
1) As vSAN is enabled at the cluster level, click the cluster, then click Configure. Navigate to vSAN and click Configure.
Figure 8 vSAN Config 1
2) The next screen validates that all hosts in the cluster have a vSAN VMkernel port configured.
Figure 9 vSAN Config 2
3) The next screen shows the disk groups. Click Finish. The vSAN datastore will now be visible to the cluster hosts.
Figure 10 vSAN Config 3
VMware vSAN Licensing
vSAN works with any edition of vSphere.
vSAN Standard, Advanced, and Enterprise licenses are per-CPU (socket) licenses.
All hosts in the cluster must be licensed.
Stretched cluster configurations and data-at-rest encryption require Enterprise licenses.
vSAN for Desktop licenses are concurrent-user (CCU) licenses available in packs of 10 and 100.
vSAN for ROBO licenses are per-VM licenses available in packs of 25.
vSAN for ROBO licenses can be spread across multiple remote offices.
Only one vSAN for ROBO Standard, Advanced, or Enterprise 25-pack of licenses can be used at a remote office. Running more than 25 virtual machines at a single remote office location disqualifies that location from vSAN for ROBO licensing.
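The per-socket and ROBO rules above translate into two simple checks. This is an illustrative sketch (function names are mine), not a licensing tool; confirm counts against the VMware licensing guide.

```python
def vsan_cpu_licenses(sockets_per_host: list) -> int:
    # Standard/Advanced/Enterprise are per-CPU (socket) licenses,
    # and every host in the cluster must be licensed
    return sum(sockets_per_host)

def robo_site_eligible(vm_count: int) -> bool:
    # one 25-VM ROBO pack per remote office; more than 25 VMs at a
    # site disqualifies that site from ROBO licensing
    return vm_count <= 25

print(vsan_cpu_licenses([2, 2, 2, 2]))  # 8
print(robo_site_eligible(30))           # False
```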
vSAN for Desktop licenses can only be used to run virtual desktop workloads.
VMware Horizon Advanced and Enterprise licensing includes vSAN Advanced licenses, to run virtual desktop workloads only.
A 2-host vSAN cluster with a witness host can be deployed with any license edition.
Any cluster with three or more physical hosts plus a witness host is a stretched cluster, which
requires vSAN Enterprise licensing.
vSAN license types and available features
Figure 11 vSAN Licensing
For more details refer to https://blogs.vmware.com/virtualblocks/2017/05/12/vmware-vsan-6-6-licensing-guide/
Architecture and Components of NSX
Figure 12
The VMware NSX network virtualization platform decouples networking from the underlying physical network and acts as an abstraction layer, much as a server hypervisor does in server virtualization. VMware NSX helps overcome the boundaries of the physical network. For example, servers in different datacenters can logically be part of the same network using VMware NSX, without any Layer 2 extension technologies such as Cisco OTV, and this can be accomplished with the existing equipment in the datacenter. VMware NSX provides all the network services a physical network provides: switching, routing, firewall, NAT, and so on. Let's look closer at how it's accomplished and the components involved.
Figure 13 NSX Logical Components
As in a physical network architecture, an NSX environment has the same logical planes: data plane, control plane, and management plane.
Data Plane
Kernel modules embedded in the ESXi hosts comprise the data plane. Each participating ESXi host has NSX VIBs installed that add logical switching, distributed logical routing, and firewall functionality. The distributed logical router is responsible for routing within the NSX environment, while Edges are responsible for communication with the outside world: applications and users that are not part of the NSX ecosystem. Edges are capable of iBGP and eBGP, depending on the design and requirements. NAT and load-balancing functionality is also provided in the data plane.
Control Plane
NSX Controller VMs form the control plane. As the name suggests, the controllers are responsible for controlling the hypervisor switching, routing, security, and load-balancing modules.
Management Plane
The NSX Manager VM manages the NSX environment; all configuration is initiated by logging in to NSX Manager. There is a one-to-one relationship between NSX Manager and vCenter.
Things to know when designing and implementing an NSX Environment
VMware NSX is a vast product that touches all aspects of a datacenter, so best practices will always vary with the design, requirements, and constraints. Major aspects to factor in include:
If you are going to deploy NSX as the backbone of your network, keep in mind that while it gives you flexibility, scalability, and agility, it also demands a robust design. We cannot compromise the availability of our services to achieve these benefits, which means making sure the design is fault tolerant in the same way the physical network is.
Since Edges form a critical component for communication with the outside world, it is very important that they are deployed in an HA configuration as per NSX best practices.
Edges should be able to handle the workload, so they should be properly sized in terms of number of network interface cards, memory, and CPU.
Controllers are another important component. Just as we build redundancy into physical switches and routers, deploy controllers in redundant combinations as well; starting with 3 controllers is a good baseline.
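The three-controller recommendation follows standard majority-quorum arithmetic: an odd-sized cluster keeps a strict majority after node failures. A quick sketch (illustrative Python assuming a simple majority rule, not an NSX API):

```python
def majority(controllers: int) -> int:
    # a controller cluster needs a strict majority to keep operating
    return controllers // 2 + 1

def tolerated_failures(controllers: int) -> int:
    # how many controllers can fail while a majority survives
    return controllers - majority(controllers)

print(majority(3), tolerated_failures(3))  # 2 1
print(majority(5), tolerated_failures(5))  # 3 2
```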
Schedule regular backups of NSX components. The link below is useful in planning the backups.
https://docs.vmware.com/en/VMware-NSX-for-vSphere/6.3/com.vmware.nsx.upgrade.doc/GUID-72EFCAB1-0B10-4007-A44C-09D38CD960D3.html
Always refer to the VMware Compatibility matrix for deciding upon Servers and Switches for your
NSX Environment.
Moving to NSX is a significant step so upskilling the workforce is an important factor that should
be considered.
I always refer to the design guide below as a starting point:
https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/products/nsx/vmw-nsx-network-virtualization-design-guide.pdf
Always plan NSX implementation in phases. Start with the test and dev environment. Stabilize,
observe and tune the NSX behavior on test and dev environments before proceeding to
Production Environments.
Know the application interdependencies: you might migrate an application to NSX only to find it depends on other applications that are not reachable from the NSX environment.
There are additional factors, but the important point is that migrating to NSX is a big leap and should be planned in detail to avoid unwanted issues.
Things to know about NSX licensing
NSX is licensed per CPU socket.
NSX licenses come in different flavors – Standard, Advanced, and Enterprise – enabling different
sets of functions and features.
NSX for vSphere 6.2.x and 6.3.x
Feature Standard Advanced Enterprise
Hypervisors Supported
Platform
ESXi* Yes Yes Yes
vCenter* Yes Yes Yes
Cross vCenter Networking & Security No No Yes
Controller Architecture
NSX Controller Yes Yes Yes
Universal Controller for X-VC No No Yes
Optimized ARP Learning, BCAST suppression Yes Yes Yes
Switching
Encapsulation Format
VXLAN Yes Yes Yes
Replication Mode for VXLAN
Multicast Yes Yes Yes
Hybrid Yes Yes Yes
Unicast Yes Yes Yes
Overlay to VLAN bridging
SW Bridge (ESXi-based) Yes Yes Yes
Hardware VTEP (OVSDB) with L2 Bridging No No Yes
Universal Distributed Logical Switching (X-VC) No No Yes
Multiple VTEP Support Yes Yes Yes
Routing
Distributed Routing (IPv4 Only)
Distributed Routing - Static Yes Yes Yes
Distributed Routing - Dynamic Routing with BGP Yes Yes Yes
Distributed Routing - Dynamic Routing with OSPF Yes Yes Yes
Equal Cost Multi-Pathing with Distributed Routing Yes Yes Yes
Universal Distributed Logical Router (X-VC) No No Yes
Dynamic Routing without Control VM (Static Only) Yes Yes Yes
Active-standby Router Control VM Yes Yes Yes
Edge Routing (N-S)
Edge Routing Static - IPv4 Yes Yes Yes
Edge Routing Static - IPv6 Yes Yes Yes
Dynamic Routing with NSX Edge (BGP) IPv4 Yes Yes Yes
Dynamic Routing with NSX Edge (OSPFv2) IPv4 Yes Yes Yes
Equal Cost Multi-Pathing with NSX Edge Yes Yes Yes
Egress Routing Optimization in X-VC No No Yes
DHCP Relay Yes Yes Yes
Active-Standby NSX Edge Routing Yes Yes Yes
VLAN Trunk (sub-interface) support Yes Yes Yes
VXLAN Trunk (sub-interface) support Yes Yes Yes
Per Interface RPF check on NSX Edge Yes Yes Yes
Services
NAT Support for NSX Edge
NAT Support for NSX Edge Yes Yes Yes
Source NAT Yes Yes Yes
Destination NAT Yes Yes Yes
Stateless NAT
ALG Support for NAT Yes Yes Yes
DDI
DHCP Server Yes Yes Yes
DHCP Relay Yes Yes Yes
DNS Relay Yes Yes Yes
VPN
IPSEC VPN No No Yes
SSL VPN No No Yes
L2 VPN (L2 extension with SSL VPN) No No Yes
802.1Q Trunks over L2 VPN No No Yes
Security
Firewall - General
Single UI for Firewall Rule Enforcement - NS+ EW No Yes Yes
Spoofguard No Yes Yes
Firewall Logging Yes Yes Yes
Rule Export No Yes Yes
Auto-save & Rollback of Firewall rules No Yes Yes
Granular Sections of Firewall rule table No Yes Yes
Distributed Firewall
DFW - L2, L3 Rules No Yes Yes
DFW - vCenter Object Based Rules No Yes Yes
Identity Firewall Rules (AD Integration) No Yes Yes
IPFix Support for DFW No Yes Yes
Context-based control of FW enforcement (applied to objects) No Yes Yes
Edge Firewall
Edge Firewall Yes Yes Yes
Edge High-Availability Yes Yes Yes
Service Composer
Security Policy Yes Yes Yes
Security Tags Yes Yes Yes
vCenter Object based security groups Yes Yes Yes
IPSet, MACset based security groups Yes Yes Yes
Data Security
Scan Guest VMs for Sensitive Data No Yes Yes
Third Party Integration
Endpoint Service Insertion - Guest Introspection Yes Yes Yes
Network Service Insertion No Yes Yes
Public API based Integration Yes Yes Yes
Load-Balancing
Edge Load-Balancing
Protocols
TCP (L4 - L7) No Yes Yes
UDP No Yes Yes
FTP No Yes Yes
HTTP No Yes Yes
HTTPS (Pass-through) No Yes Yes
HTTPS (SSL Termination) No Yes Yes
LB Methods No Yes Yes
Round Robin No Yes Yes
Src IP Hash No Yes Yes
Least Connection No Yes Yes
URI, URL, HTTP (L7 engine) No Yes Yes
vCenter Context-aware LB No Yes Yes
L7 Application Rules No Yes Yes
Health Checks
TCP No Yes Yes
ICMP No Yes Yes
UDP No Yes Yes
HTTP No Yes Yes
HTTPS No Yes Yes
Connection Throttling No Yes Yes
High-Availability No Yes Yes
Monitoring
View VIP/Pool/Server Objects No Yes Yes
View VIP/Pool/Server Stats No Yes Yes
Global Stats VIP Sessions No Yes Yes
Distributed Load-Balancing
L4 Load-balancing No No Yes (tech-preview)
Health checks No No Yes (tech-preview)
Operations
Tools
Tunnel Health Monitoring No No No
TraceFlow Yes Yes Yes
Port-Connections Tool No No No
Server Activity Monitoring No Yes Yes
Flow Monitoring No Yes Yes
IPFix (VDS Feature) Yes Yes Yes
Endpoint Monitoring No No Yes
Application Rule Manager No Yes Yes
VMware Tools
vR Operations Manager Yes Yes Yes
vR Log Insight Yes Yes Yes
Cloud Management Platform
vRealize Automation
Logical Switch Creation Yes Yes Yes
Distributed router creation Yes Yes Yes
Distributed firewall security consumption No Yes Yes
Load-balancing consumption No Yes Yes
App Isolation No Yes Yes
VMware Integrated OpenStack (Neutron Plugin)
VLAN Provider Networks Yes Yes Yes
Overlay Provider Networks Yes Yes Yes
Overlay Tenant Networks Yes Yes Yes
Metadata Proxy Service Yes Yes Yes
DHCP Server Yes Yes Yes
Neutron Router - Centralized - Shared Yes Yes Yes
Neutron Router - Centralized - Exclusive Yes Yes Yes
Neutron Router - Distributed Yes Yes Yes
Static Routes on Neutron Router Yes Yes Yes
Floating IP Support Yes Yes Yes
No-NAT Neutron Routers Yes Yes Yes
Neutron Security Groups using Stateful Firewall No Yes Yes
Port Security Yes Yes Yes
Neutron L2 Gateway Yes Yes Yes
Load Balancing (LBaaS) Yes Yes Yes
Admin Utility (Consistency Check, Cleanup) Yes Yes Yes
Cross VC Logical Networking and Security No No No
References
https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/products/nsx/vmw-nsx-network-virtualization-design-guide.pdf
https://docs.vmware.com/en/VMware-NSX-for-vSphere/6.3/com.vmware.nsx.upgrade.doc/GUID-72EFCAB1-0B10-4007-A44C-09D38CD960D3.html
https://blogs.vmware.com/virtualblocks/2017/05/12/vmware-vsan-6-6-licensing-guide/
https://blogs.vmware.com/vsphere/2014/04/vmware-virtual-san-witness-component-deployment-logic.html
Dell EMC believes the information in this publication is accurate as of its publication date. The
information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” DELL EMC MAKES NO
REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS
PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS
FOR A PARTICULAR PURPOSE.
Use, copying and distribution of any Dell EMC software described in this publication requires an
applicable software license.
Dell, EMC and other trademarks are trademarks of Dell Inc. or its subsidiaries.