#vmworld
SDDC Reference Design with NSX Data Center for vSphere
Gregory Smith, VMware, Inc.
Nimish Desai, VMware, Inc.
NET1559BU
#NET1559BU
VMworld 2018 Content: Not for publication or distribution
Disclaimer
2©2018 VMware, Inc.
This presentation may contain product features or functionality that are currently under development.
This overview of new technology represents no commitment from VMware to deliver these features in any generally available product.
Features are subject to change, and must not be included in contracts, purchase orders, or sales agreements of any kind.
Technical feasibility and market demand will affect final delivery.
Pricing and packaging for any new features/functionality/technology discussed or presented have not been determined.
(Diagram: branch, telco/NFV, and edge/IoT sites interconnected)
The Virtual Cloud Network: Connect and Protect your Business
Identity
Apps and Data
Policy | Scalability | Analytics and Insights
Secure Connectivity | Availability
Users
Private Data Centers
VMs, Containers, Microservices
Branch Offices
Public Clouds
Telco Networks
Things
Virtual Cloud Networking: Connect & Protect any workload across any environment
Built-in
Automated
Programmable
Application Centric
Agenda
Use Cases & Components
Architecture & Design
Edge Cluster Design
Routing Design
Automation & Tenancy
Security
Use Cases & Components
Use Cases
Security: Micro-segmentation | DMZ anywhere | Secure end user
Automation: IT automating IT | Developer cloud | Multi-tenant infrastructure
Application Continuity: Disaster recovery | Multi data center pooling | Cross cloud
Components
Consumption: Self-service portal
• vRA, OpenStack, custom
Management Plane: NSX Manager paired with vCenter
• Single configuration portal
• REST API entry point
Control Plane: NSX Controllers and Logical Router Control VM
• Manage logical networks
• Control and data plane separation
Data Plane: ESX host(s) with hypervisor kernel modules (distributed services) + NSX Edge VM (ESG)
• High-performance data plane
• Scale-out distributed forwarding model
Physical network: NSX is agnostic to underlay network topology
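Since NSX Manager is the single REST API entry point, most automation starts by building authenticated calls against it. A minimal sketch of constructing (not sending) such a request; the hostname and credentials are placeholders, and the controller endpoint shown is the NSX-v style path, so verify it against your API guide:

```python
import base64
import urllib.request

def build_nsx_request(manager, path, user, password):
    """Build (but do not send) an authenticated request to the NSX
    Manager REST API entry point; all endpoints hang off https://<mgr>/api."""
    url = f"https://{manager}/api{path}"
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(url, headers={
        "Authorization": f"Basic {token}",
        "Accept": "application/xml",  # NSX-v APIs speak XML
    })

# e.g. listing the controller cluster (hypothetical manager hostname)
req = build_nsx_request("nsxmgr.corp.local", "/2.0/vdn/controller",
                        "admin", "secret")
print(req.full_url)
```

The same pattern extends to edge and DFW endpoints; in practice a session with certificate verification and proper credential handling would wrap this.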
Characterization of Critical Components
Critical properties to evaluate per component: convergence, availability, rate of learning, DFW rule logic, API, control plane, bandwidth, services and scale.
Components: NSX Manager | NSX Controller | NSX Edge (ESG) | DLR Control VM
Design Guide: https://communities.vmware.com/docs/DOC-27683
• 1:1 Mapping to vCenter
• 3 controllers is the only supported configuration
• Can be deployed for a single stateful service or multiple services
• Control Plane only, Outside the Data Path
• Holds DFW rules logic, flow monitoring and local logging data
• Majority Required in order to function
• Forms Adjacency with the outside world for all N/S Traffic Flows
• Active/Standby at "application level"
• Availability addressed through NSX and traditional means
• Storage resiliency and DRS affinity rules are manual design considerations
• Availability is both at the ESG level and the VM level
• Updates the DLR routing tables on each host in response to changes
Architecture & Design
Dedicated Management Cluster Approach
• Dedicated vCenter Server for managing multiple vCenter domains
• A separate vCenter Server in the Management Cluster manages the Edge and Compute Clusters; NSX Manager is also deployed into the Management Cluster and paired with this second vCenter Server
Can deploy multiple NSX Manager/vCenter Server pairs (separate NSX domains)
• NSX Controllers must be deployed into the same vCenter Server that NSX Manager is attached to; therefore the controllers are usually also deployed into the Edge Cluster
NSX Controllers DLR Control VM
NSX Edge VM
(ESG)
Edge Cluster Domain A
Management ClustervCenter VM Domain A
vCenter VM Management
NSX Manager VM Domain A
1:1
Compute Cluster #1 Domain A
Compute Cluster #n Domain A
vCenter VM Domain B
NSX Manager VM Domain B
1:1
NSX ControllersDLR Control VM
NSX Edge VM
(ESG)
Edge Cluster Domain B
Compute Cluster #1 Domain B
Compute Cluster #n Domain B
Multi-Site Networking and Security
NET1536BU
ESX VMkernel Networking
• NSX is agnostic to underlay network topology
• L2 or L3 or any combination
• VXLAN VLAN ID must be Consistent
• Only two requirements: IP connectivity and a minimum MTU of 1600 bytes
Configuring Jumbo Frame on ESX : VMware KB
vSphere Distributed Switch (vDS)Uplink Port Group
ESXi Host
vmk0 vmk1 vmk2 vmk3
Management PG VLAN 10
vMotion PGVLAN 11
IP Storage PGVLAN 12
VXLAN PGVLAN 13
10.10.10.79 10.11.11.79 10.12.12.79 10.13.13.79
(VTEP)
802.1Q VLAN TRUNK
VLAN 10 VLAN 11 VLAN 12 VLAN 13
Layer 2 or Layer 3 Uplinks
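The 1600-byte minimum follows from VXLAN encapsulation overhead; a quick sanity check of the arithmetic, assuming standard 1500-byte guest frames and no outer 802.1Q tag:

```python
# VXLAN adds outer Ethernet (14) + outer IP (20) + UDP (8) + VXLAN (8) headers
VXLAN_OVERHEAD = 14 + 20 + 8 + 8   # 50 bytes

def required_transport_mtu(guest_mtu=1500):
    """Minimum underlay MTU needed to carry a full guest frame in VXLAN."""
    return guest_mtu + VXLAN_OVERHEAD

print(required_transport_mtu())  # 1550; the 1600-byte minimum leaves headroom
```

The extra headroom in 1600 also covers an outer VLAN tag (4 more bytes) and future options.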
vDS Uplink Design
NSX creates dvUplink port groups for VXLAN-enabled hosts. This uplink connectivity carries VXLAN traffic and must be consistent for all hosts belonging to the vDS.
•Recommended teaming mode is “Route Based on Originating Port”
•Simplicity•Bandwidth requirements
• LACP teaming mode is discouraged:
• LACP allows only a single VTEP
• LACP forces all the other traffic types to use the same teaming mode
•LACP can be used in compute cluster for traffic optimization
Teaming and Failover Mode (2 x 10G) | NSX Support | Multi-VTEP Support | Uplink Behaviour
Route based on Originating Port | Yes | Yes | Both active
Route based on Source MAC Hash | Yes | Yes | Both active
LACP | Yes | No | Flow based, both active
Route based on IP Hash (Static EtherChannel) | Yes | No | Flow based, both active
Explicit Failover Order | Yes | No | Only one link active
Route based on Physical NIC Load (LBT) | No | No | Not supported
VXLAN Design Replication Modes and Design Decisions
Flexibility for VXLAN Transport
• Does not require complex multicast configurations on physical network
Various replication modes: Unicast, Multicast, Hybrid
VXLAN design:
• For large Layer 2 domains, Hybrid mode is recommended
• For Layer 3 domains, Unicast is fine in most cases; for very large domains, Hybrid may be used
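The replication-mode guidance above can be codified as a simple decision helper; the host-count thresholds used here are illustrative assumptions, not VMware numbers:

```python
def pick_replication_mode(fabric, num_hosts):
    """Pick a VXLAN BUM replication mode per the slide's guidance.
    fabric: 'L2' or 'L3'. Thresholds are illustrative assumptions."""
    if fabric == "L2":
        # large Layer 2 domains: Hybrid recommended
        return "hybrid" if num_hosts > 64 else "unicast"
    if fabric == "L3":
        # Unicast is fine in most cases; Hybrid only for very large domains
        return "hybrid" if num_hosts > 256 else "unicast"
    raise ValueError("fabric must be 'L2' or 'L3'")

print(pick_replication_mode("L3", 100))  # unicast
```

Hybrid mode still requires IGMP snooping on the ToR, which is why Unicast is the default choice when the domain allows it.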
vDS and Transport Zone
Controller Cluster
NSX Edges
vCenter
NSX Manager
Compute vDS
Transport Zone Spanning Three Clusters
Management Cluster
Edge vDS
VTEP
VTEP
VTEP
VTEP
VTEP
VTEP
Compute Cluster 1 | Compute Cluster n | Edge Cluster
Workloads Workloads
NSX Edges
Edge Cluster Design
Edge Cluster
Edge Cluster design and capacity planning depends on many factors
Components in Edge clusters:
• Edge VMs for N-S traffic
• Control VM for DLR routing
• Services VMs: load balancer and VPN
• Optional: Controllers, monitoring, Log Insight VMs
Type of workload & oversubscription desired:
• N-S vs. East-West
• Host uplinks, NIC & CPU
Single vs. multi-VC and DR with SRM
Availability and scaling:
• Limiting VLAN sprawl for peering with ToR
• HA, DRS, BW and rack availability
Size of DC: small, medium & large
Design Considerations
WAN / Internet
L2
L3
Should I mix clusters?
What type of Edge (ESG)?
Where to place the Control VM?
How to scale?
Growth and advanced functions?
Minimum configuration?
NSX Edge Sizing (ESG: Edge Services Gateway)
• The Edge Services Gateway can be deployed in multiple sizes, depending on the services used
• Multiple Edge nodes can be deployed at once, e.g. ECMP, LB, and Active/Standby for NAT
• When needed, the Edge size can be increased or decreased
• In small deployments the Large form factor is sufficient for many services such as ECMP & LB
• X-Large is required for high-performance L7 load balancer configurations
NSX Edge VM (ESG) Size | Specific Usage
X-Large | Suitable for high-performance L7 LB and VPN
Quad-Large | Suitable for high-performance ECMP or FW/LB/VPN deployment
Large | Small to medium DC or multi-tenant
Compact | Small deployments, single-service use, or PoC
NSX Edge (ESG)
Firewall
Load Balancer
VPN
Routing
NAT
DNS/DHCP
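The sizing table lends itself to a small selection helper. This sketch mirrors the table rows only; the form-factor vCPU counts noted in the comment (Compact 1, Large 2, Quad-Large 4, X-Large 6) are the commonly documented NSX-v values, so confirm them against your release notes:

```python
# Form factors in ascending size (roughly 1, 2, 4 and 6 vCPU respectively)
SIZES = ["Compact", "Large", "Quad-Large", "X-Large"]

def pick_esg_size(l7_lb=False, high_perf_ecmp=False, multi_tenant=False):
    """Mirror the sizing table: X-Large for high-performance L7 LB/VPN,
    Quad-Large for high-performance ECMP or FW/LB/VPN, Large for small-to-
    medium DC or multi-tenant, Compact for PoC or single-service use."""
    if l7_lb:
        return "X-Large"
    if high_perf_ecmp:
        return "Quad-Large"
    if multi_tenant:
        return "Large"
    return "Compact"

print(pick_esg_size(high_perf_ecmp=True))  # Quad-Large
```

Because the size can be changed in place, starting one notch smaller and resizing on demand is a valid strategy.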
Distributed Logical Routing Components: Logical Router Control VM
Active Standby
ESX Host
DLR Kernel Module
LIF1 LIF2
Transit LS (Logical Switch / VXLAN) Uplink
DLR Control VM forms the adjacencies with Edge node (BGP or OSPF)
NSX Manager sends LIF information to the Control VM and Controller Cluster
The Control VM sends routing updates to the Controller cluster, which then populates the routing table on each ESXi host
DLR Control VM and NSX Controller are not in the data path
Active-Standby configuration
Can exist in edge cluster or in compute cluster
VM Default Gateway traffic is handled by LIFs on the appropriate network
An ARP table is maintained per LIF
vMAC is the MAC address of an internal LIF
• vMAC is the same across all hypervisors and is never seen by the physical network (only by VMs)
Flexible, Scalable, Secure & Multi-use
Distributed Logical Router
Flexibility – DLR, Stand-alone, Services & Isolation• DLR for production workload• DevOps & QA isolation• Per app services
Scalability• ECMP BW as needed• Edge-HA based on use case• In line routed LB segment• In line NAT & private segment
Secure• DFW and Edge FW• Multi-vendor integration
Automation: blueprints and security
Multi-use topology:
• Automated DevOps segments
• VDI segments
• Enterprise workload
Web LS App LS Db LS Web LS App LS Db LS
One Armed LB In-line LBRouted
Web LS App LS Db LS
In-line LBNAT & Private
ECMP Edges
Distributed Firewall
Transit LS
Scalable Platform
Sizing | vCenter | Workload | Edge Type | N-S BW | Cluster Choice | Requirement
Small | 1 | Consistent | Large (2 vCPU), ESG or ECMP | < 20G | Collapsed | Harder to separate management later
Medium | 1 | Consistent, some on-demand | Large to Quad (2 or 4 vCPU), ESG or ECMP | < 40G | Management/Edge | Growth not likely, no other smaller data center
Medium with multiple DC or compute growth | > 1 | On-demand with DR | Quad (4 vCPU), ECMP for N-S, ESG for local LB | <= 40G | Separate Management, Edge and Compute clusters | Growth or other data center integration a must
Large | > 1 | Variable, on-demand, DR, inter-site and dev-ops | Quad (4 vCPU), multi-tier for services | > 40G and multi-tenant | Separate Management, Edge and Compute clusters | Scale & availability
Edge Services Gateway Placement and Oversubscription
Minimum four-host cluster:
• Two hosts to hold the two ECMP Edge VMs
• Two for the DLR Control VM (Active/Standby)
• Do not mix ECMP Edges and the DLR Control VM on the same host, avoiding a race condition due to compound failures of components
Anti-affinity is automatically enabled for DLR Control-VM
Host uplink & vDS:
• Use "SRC_ID with Failover" teaming for VXLAN traffic
• Route peering maps to a unique link
Performance & sizing:
• Intel, Broadcom or Emulex NICs supporting VXLAN offload, including RSS and TSO offload
Oversubscription dependent on:
• Upstream connectivity from the ToR
• Application requirements
• Density of Edge VMs per host
L2
VLAN 10L3
VLAN 20
L2
VLAN 10L3
VLAN 20
No Oversubscription | 1:2 Oversubscription
L2
VLAN 10L3
VLAN 20
VM DRS Group 1 VM DRS Group 2
VM DRS Group 3
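The oversubscription ratios in the diagrams are just aggregate Edge uplink demand divided by host NIC capacity; a quick calculator:

```python
def edge_host_oversubscription(edge_vm_uplink_gbps, host_nic_gbps):
    """Ratio of total Edge VM uplink demand to host NIC capacity.
    1.0 means no oversubscription; 2.0 means 1:2 oversubscription."""
    return sum(edge_vm_uplink_gbps) / host_nic_gbps

# two 10G Edges behind 2 x 10G host NICs: no oversubscription
print(edge_host_oversubscription([10, 10], 20))       # 1.0
# four 10G Edges on the same 2 x 10G host: 1:2 oversubscription
print(edge_host_oversubscription([10, 10, 10, 10], 20))  # 2.0
```

Whether 1:2 is acceptable depends on the upstream ToR fan-out and the application profile, per the considerations above.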
Edge Topology (Active/Standby vs ECMP)
Active / Standby Edge (Stateful)
HA pair (heartbeat default: 15 sec; can be tuned down to 9 sec)
Multiple tenants can have dedicated edges
Anti affinity for HA pair is auto provisioned
Routing timers with 40/120 Hello/Dead
ECMP Edge (Stateless)
Minimum 4 hosts recommended
Anti affinity provisioning is manual
Aggressive routing timers 1/3 Hello/Dead
DLR Active Control VM and Edge (ESG) should not be on the same host to avoid double failure
Distributed Logical Router
Web LS App LS Db LS
DLR Control VM
Active
Standby
Routing Adjacency
Edge1 Active | Edge1 Standby (Edge HA pair)
Transit LS
Distributed Logical Router
Web LS App LS Db LS
DLR Control VM
Active
Standby
Routing Adjacency
Edge1 Edge8
Transit LS
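The timer differences translate directly into worst-case convergence behavior; a rough model, approximating detection time by the dead/hold interval and assuming ECMP spreads flows evenly:

```python
def convergence_impact(num_edges, hello, dead):
    """Approximate worst case on an Edge failure: adjacency loss is
    detected after the dead interval, and with ECMP roughly
    1/num_edges of the flows are affected until reroute."""
    return {"detect_seconds": dead, "flows_affected": 1.0 / num_edges}

# Active/Standby pair with default 40/120 routing timers:
print(convergence_impact(1, 40, 120))
# ECMP with aggressive 1/3 timers across 8 Edges:
print(convergence_impact(8, 1, 3))  # ~3 s outage for ~1/8 of traffic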
Enterprise Topology – Two Tier Design with ECMP
ECMP Edge mode: scalable BW and faster convergence
• Up to 240 Gbps aggregate BW
• Faster convergence (down to ~3 seconds), with only 1/8 of the traffic affected per Edge failure
• DLR-to-Edge timers tunable as well
• Disable the Edge firewall explicitly (stateful services are not supported with ECMP)
Edge Scaling
Per tenant scaling – each workload/tenant gets its own Edge and DLR
ECMP-based scaling of incremental BW:
• ~30 Gbps of additional capacity per Edge spun up, to a maximum of 240 Gbps (8 Edges)
• Scaling with multiple DLRs and their Edges allows > 240 Gbps
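The ECMP bandwidth arithmetic above (~30 Gbps per Edge, eight-way maximum per DLR) can be sketched as:

```python
def ecmp_bandwidth_gbps(num_edges, per_edge_gbps=30, max_edges=8):
    """Each additional Edge adds ~30 Gbps (the slide's planning number),
    capped at 8 ECMP paths (240 Gbps) per DLR."""
    return min(num_edges, max_edges) * per_edge_gbps

def multi_dlr_bandwidth_gbps(num_dlrs, edges_per_dlr):
    """Scaling out with multiple DLRs multiplies the ceiling beyond 240 Gbps."""
    return num_dlrs * ecmp_bandwidth_gbps(edges_per_dlr)

print(ecmp_bandwidth_gbps(8))          # 240
print(multi_dlr_bandwidth_gbps(2, 8))  # 480
```

The per-Edge figure is a planning number; actual throughput depends on Edge size, NIC offloads and traffic profile.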
Recommended Topology
Distributed Logical Router
DLR Control VM
Active
Standby
Routing Adjacency
Edge
Core Network
Transit LS
Physical Router
Application 1 Application n
Web LS App LS Db LS Web LS App LS Db LS
Distributed Logical Router
DLR Control VM
Active
Standby
Routing Adjacency
Edge1 Edge8
Core Network
Physical Router
Web LS App LS Db LS
Transit LS
Cluster Design
Compute Cluster: Design Considerations
Compute Cluster
Management Cluster
Edge Cluster
DC Fabric WAN / Internet
L2
L3
Rack-based vs. multi-rack (horizontal) striping:
• Availability vs. localized domain: CPU & mobility constraints and simplification of connectivity (IP, VTEP, automation)
Lifecycle of the workload drives the consideration for:
• Growth, availability and changes in the application flows
• Multi-rack, zoning (type of customer, tenancy etc.)
Typically rack connectivity is streamlined and repeated:
• The same four VLANs typically streamline the configuration of the ToR
• Connectivity to the fabric and requirements for additional capacity remain the same, since they are abstracted from the infrastructure
Workload type, compliance and SLA can be met via:
• Cluster separation
• Separate VXLAN networks
• Per-tenant separation of routing domains
• DRS
Management, Edge and Compute Clusters
Separation of Management, Edge and Compute functions offers the following design advantages:
Managing the lifecycle of resources for compute and Edge functions:
• Ability to isolate and develop span of control
• Capacity planning: CPU, memory & NIC
• Upgrade & migration flexibility
High availability based on functional need:
• Workload-specific SLA (DRS & FT)
• Network-centric connectivity: P/V, ECMP
• vMotion boundary
Automation control over areas or functions that require frequent changes:
• App tier, micro-segmentation & load balancer
Three areas of technology require consideration:
• Interaction with the physical network
• Overlay (VXLAN) considerations
• Integration with vSphere clustering
Compute Cluster
Management Cluster
Edge Cluster
DC Fabric WAN / Internet
L2
L3
Small DC Design
Single cluster for small design, expand to medium with separation of compute cluster
• Single cluster can start with DFW only design
– NSX Manager is the only component required
– VDS license comes with NSX
• A centralized Edge without a DLR allows a one- or two-rack deployment
– Static routing for simplicity and a reduced need to deploy a Control VM
• Progress to the full stack for other services such as FW, LB, VPN and VXLAN
Single Collapsed Cluster
WAN / Internet
Single Cluster with NSX
L2
L3
Medium DC Design
Mixing compute and Edge workloads requires care:
• A balanced compute workload can be mixed with Edge VM resources
• However, compute growth puts an additional burden on managing resource reservations to protect Edge VM CPU
Collapsing Edge OR compute with management components (VC and NSX Manager):
• Requires management components to depend on VXLAN, since VXLAN enablement is per cluster
• Expansion or decoupling of management is required for growth:
– Moving the management cluster to a remote location
– Having multiple VCs to manage separation
Mixing Edge and Management is the better strategy:
• Consistent, static resource requirements; management is relatively idle compared to compute workloads
Collapsed Management & Edge, Separate Compute Cluster
WAN / Internet
Management & Edge Cluster
L2
L3
Compute Cluster
Medium DC Design – Continued…
Small to medium designs can utilize the Edge Services Gateway features where:
• N-S BW is not more than 10 G
• There is a desire to reduce external FW usage with Edge FW functionality
• The built-in load balancer is used
• VPN or SSL functionality is used
Edge services sizing:
• Start with Large (2 vCPU) if line-rate BW is not required
• Can be upgraded to Quad-Large (4 vCPU) for growth in BW
Consider LB in single arm mode to be near the application segments
Collapsed Management & Edge, Separate Compute Cluster
WAN / Internet
Management & Edge Cluster
L2
L3
Compute Cluster
Large DC Design
Workload characteristics• Variable• On-demand• Compliance requirements
For cross-VC and SRM Deployment
• Separation of management cluster is inevitable
Large-scale Edge cluster design:
• Dedicated minimum of four hosts
• Minimum four ECMP Edges (Quad-Large); up to 240 Gbps total BW
• Separate hosts with DRS protection between ECMP Edge VMs and the active Control VM
• Capacity for services VMs
Separate Management, Compute and Edge Cluster
Compute Cluster
Management Cluster
Edge Cluster
DC Fabric WAN / Internet
L2
L3
Mapping Tenants to Physical Underlay
Web LS App LS Db LS
In-line LBRouted
Web LS App LS Db LS
In-line NAT
NSX Edge Active / Standby
Distributed Logical Router
Web LS App LS Db LS
ECMP Edges
Tenant 10 = VRF 10 = VLAN 10
Tenant 20 = VRF 20 = VLAN 20
Each dedicated Tenant Edge can connect to a separate VRF in the upstream physical router
The Department or Zone maintains• VLAN and/or VRF level Isolation
DLR and ECMP for Production
Edge with services for QA/Dev
Physical Routers
Routing Design
Cisco vPC and Routing Peer Termination
(Diagram: Edge VMs on ESX hosts peering with physical routers over a transit VXLAN, L2/L3 boundary at the ToR)
Not recommended: peering over vPC
• Uplink teaming mode: LACP
• Peering over vPC requires specific Cisco NX-OS versions or hardware line cards
Peering NSX Edge on rack-mount servers
• Uplink teaming mode: Originating Virtual Port ID
• Edge vDS: Edge Uplink 1 on VLAN 10 (vmnic1 active, vmnic2 unused); Edge Uplink 2 on VLAN 20 (vmnic2 active, vmnic1 unused)
• Routing adjacency: Edge uplink = host uplink = VLAN = adjacency
Peering over non-vPC parallel links (blade server chassis)
• Blade switch #1: VLAN 10, SVI on Router A
• Blade switch #2: VLAN 20, SVI on Router B
NSX Connectivity with BGPDesign Considerations
BGP Protocol Feature | NSX Edge Active/Standby | NSX Edge ECMP | DLR Control VM
eBGP | Yes | Yes | Yes
iBGP | Yes | Yes | Yes
Redistribution | Yes | Yes | Yes
Keepalive | 60 | 60 | 60
Hold | 180 | 180 | 180
Multi-path | Yes | Yes | Yes
Graceful Restart | Yes | N/A | Yes
Distributed Logical Router
DLR Control VM
Active
Standby
eBGP
Edge1 Active Edge1 Standby
Core Network
Physical Routers
Web LS App LS Db LS
Transit LSiBGP
BGP AS 65100
Distributed Logical Router
DLR Control VM
Active
Standby
eBGP
Edge1 Edge8
Core Network
Physical Routers
Web LS App LS Db LS
Transit LS
VLAN 10 | VLAN 20
iBGP: BGP AS 65100
Uplink VLAN
Routing Adjacency
Routing Adjacency
ECMP
Active/Standby
NSX Connectivity with BGP Continued…
BGP connectivity is preferred for multi tenancy and better route control
In the ECMP case, Edges announce static summary routes so that N-S reachability survives Control VM failure and recovery
Edges need to redistribute the subnets of the links connected to the ToR, to carry next-hop reachability into iBGP
Private AS numbers can be used; additional configuration is required to strip the private AS from the AS path before advertising into public BGP peering
Design Considerations
Distributed Logical Router
DLR Control VM
Active
Standby
eBGP
Edge1 Edge8
Web LS App LS Db LS
Transit LS
iBGP
BGP AS 65200
Routing Adjacency
Core Network
BGP AS 65100
Physical routers: send default route to NSX Edges; advertise routes received from the NSX Edges into AS 65100
DLR: redistribute connected
NSX Edges:
1. Redistribute static routes summarizing the logical subnet address space into eBGP (only to the physical routers)
2. Redistribute connected uplink subnets into iBGP (only to the DLR)
GEEK SLIDE
NSX Data Center Security Design
Design and architectural goals:
• Built in, not bolted on
• On-demand and dynamic security enforcement
• Follows the lifecycle of resources
• Run-time redirection and insertion
• Topology independent, not tied to physical
• DR and multi-site capable
• Builds eco-systems
• Protect, detect, inoculate: any application, any time, anywhere
NSX Security Architecture Overview
Any App, Any VM,
Anywhere
DFW
Service Composer
Security Groups
Policy
Eco System
Security Design Life Cycle
Risk & Control
Scope/Zone/Area
Access Pattern
Dependencies
Grouping
Policy Model
How does one tackle the multi-dimensional problem of securing assets and resources?
Typically the answer lies in developing a framework, and then a policy model for each.
The lifecycle applies to a specific domain or use case.
Develop the right level of control and risk, with the flexibility of automation:
• Per zone or tenant
• Regulated environments
• Workload centric: EUC, Prod, QA
• Infrastructure traffic
• Physical FW and device interaction
Typically an inventory or grouping of applications for a given zone, tenant or tier is required:
• What methodology is used to group?
• How to discover?
• How to automate?
Existing policy of isolation, segmentation and regulation is the base line
Existing infrastructure services identification:
• Shared services could be specific to a zone or to the enterprise; either way, discovery is required
Develop a dependency model of security, with level and inheritance based on app tier, zone and regulation:
• Whitelist or blacklist; either requires known-knowns or known-unknowns
Use Log Insight, vRNI and Splunk to develop detailed dependencies:
• Default allow with log
• Default deny with log
The degree of micro-segmentation determines the level of discovery and the grouping criteria
Repeat for each zone, tenant or workload
Security design lifecycle: identify groups/apps/zones -> inventory -> decide default allow or deny (with logging) -> shared services rules -> E-W intra-app rules -> monitor logs to refine rules -> repeat for each new app or zone
Components of the Security Platform
Internet
Intranet/Extranet
Perimeter firewall (physical) | NSX Edge Services Gateway
SDDC (Software Defined DC)
DFW
DFW
DFW
Distributed FW - DFW
Virtual
Compute Clusters
Stateful Perimeter Protection
Inter/Intra VM
Protection
DFW objects and "Apply To":
• Identity: AD groups
• VC container objects: DC, cluster, port groups, logical switches
• VM characteristics: VM names, security tags, attributes, OS names
• Protocols, ports, services
• Tags
Service Composer:
• Security groups
• Security policy: application-centric policy like DFW rules (L2-L4)
Static and dynamic grouping:
• Nesting and inheritance
• Intelligent grouping
Automated discovery:
• Log Insight and vRNI (formerly Arkin)
Automation and API:
• App isolation
• Dynamic management of security
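Dynamic grouping is the heart of Service Composer: a VM joins a security group when it matches the configured criteria, and the DFW rules follow automatically. A conceptual sketch of that membership evaluation; the VM record schema here is invented for illustration, not an NSX API:

```python
def dynamic_group_members(vms, name_prefix=None, tag=None, os_name=None):
    """Evaluate dynamic security-group membership conceptually, the way
    Service Composer does: a VM is a member when it matches all of the
    configured criteria. vms: list of dicts with 'name', 'tags', 'os'."""
    members = []
    for vm in vms:
        if name_prefix and not vm["name"].startswith(name_prefix):
            continue
        if tag and tag not in vm["tags"]:
            continue
        if os_name and os_name not in vm["os"]:
            continue
        members.append(vm["name"])
    return members

vms = [
    {"name": "web-01", "tags": {"prod"}, "os": "Ubuntu Linux"},
    {"name": "db-01",  "tags": {"prod", "pci"}, "os": "RHEL"},
    {"name": "web-qa", "tags": {"qa"}, "os": "Ubuntu Linux"},
]
print(dynamic_group_members(vms, name_prefix="web-", tag="prod"))  # ['web-01']
```

Because membership is re-evaluated as tags and names change, policy follows the workload through its lifecycle without rule edits.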
Micro Segmentation Design Patterns
(Diagrams: stateful DFW enforcing controlled communication between workloads; policy applied via DFW, with physical routers, Edge Services Gateway, Distributed Logical Router, and traffic steering to partner advanced services)
Five patterns:
• Distributed segmentation
• Distributed segmentation with network isolation
• Distributed segmentation with network isolation and service insertion
• Distributed segmentation with network overlay isolation
• Distributed segmentation with network overlay isolation and service insertion
Micro-Segmentation with vRA
vRA is an excellent fit for automating Micro-Segmentation
Provides application context to enable a policy based approach to security
Granular security requires a mix of different vRA options:
• Existing or on-demand SGs for common services access
• Existing SGs to control traffic within the deployment
• App Isolation to block traffic across deployments
Rule ordering is defined by Security Policy’s Weight
Service Composer is configured to apply rules to the policy's security groups
Security Scope Per Use Case
Use Case | Tools | Advanced Tools & Automation | Analytics & Discovery
Isolation between apps | Tags (OS, VM name, VC objects) | Service Composer, ARM | vRNI, ARM, vRLI
Isolation inside each app | Tags (OS, VM name, VC objects) | Service Composer, ARM | vRNI, ARM, vRLI
EUC/VDI | Tag (IDFW) | Service Composer, ARM and Context-Aware FW | vRNI, ARM, vRLI
Multi-tenant security | DFW Apply-To | Service Composer, ARM | vRNI, ARM, vRLI
Advanced services | Third-party service insertion | Service Composer and/or third-party console | vRNI & third-party tools
DMZ anywhere | Tags, OS, VM name, vC objects + third party + Edge FW | Service Composer, ARM | vRNI, ARM, vRLI, third-party tools
Automated isolation | vRA, Python, Ansible, PowerNSX, OpenStack | NSX APIs | vRNI, ARM & LI
NSX Security Certifications and Compliance
Distributed Firewall
Edge Firewall
VPN
http://pubs.vmware.com/Release_Notes/en/nsx/6.3.0/releasenotes_nsx_vsphere_630.html
https://solutionexchange.vmware.com/store/products/vmware-pci-compliance-and-cyber-risk-solutions
http://ir.vmware.com/overview/press-releases/press-release-details/2016/Newly-Released-STIG-Validates-VMware-NSX-Meets-the-Security-Hardening-Guidance-Required-for-Installment-on-Department-of-Defense-DoD-Networks/default.aspx
https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/vmware-product-applicability-guide-hipaa-hitech.pdf
https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/vmware-product-applicability-guide-for-fedramp-v1-0.pdf
https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/vmware-product-applicability-guide-nerc-cip.pdf
VMware NSX Portfolio: The Foundation of the Virtual Cloud Network
Networking and Security Management and Automation
• vRealize Automation: end-to-end workload automation
• Network Insight: network discovery and insights
• Cloud-based management, workflow automation, blueprints/templates, insights/discovery, visibility
Network and Security Virtualization
• AppDefense: modern application security
• NSX SD-WAN by VeloCloud: WAN connectivity services
• NSX Hybrid Connect: data center and cloud workload migration
• NSX Data Center: networking and security for data center workloads
• NSX Cloud: networking and security for public cloud workloads
• Security, integration, extensibility, automation, elasticity
Driving Value with Our NSX Partner Ecosystem
Cloud Network Infrastructure | Networking & Security Services | Orchestration & Management | HCI Platforms | Operations & Visibility
vSAN Ready Node | Bare metal | vRealize Automation | vCloud Director | vRealize Orchestrator | VIO | Network Insight
Where to Get Started
Engage and Learn:
• Join the NSX VMUG Community: vmug.com/nsx
• Connect with your peers: communities.vmware.com
• Embrace the NSX Mindset: nsxmindset.com
• Find NSX resources: vmware.com/go/networking
• Read the Network Virtualization blog: blogs.vmware.com/networkvirtualization
Experience:
• Attend the networking and security sessions: showcases, breakouts, quick talks & group discussions
• Visit the VMware booth: product overviews, use-case demos
• Visit technical partner booths: integration demos for infrastructure, security, operations, visibility, and more
• Meet the experts: join our experts in an intimate roundtable discussion
Try:
• Free hands-on labs: labs.hol.vmware.com
• Virtual Cloud Network guided demo: vcndemo.com
Take:
• VMware Education training and certification: vmware.com/go/nsxtraining
• Free NSX training on Coursera: vmware.com/go/coursera
PLEASE FILL OUT YOUR SURVEY. Take the survey and enter a drawing for a VMware company store gift card.
#vmworld #NET1559BU
THANK YOU!
Appendix
NIC Card Performance
Single-core limits:
• 5 to 20 Gbps per core, depending on MTU
Multi-core:
• ~4x the single-core limit; can go slightly beyond 80G
PCIe 3.0 limitations:
• ~8 Gbps per lane; most NICs are x8
– ~64 Gbps limit
– Use two NICs for > 40G throughput
For high throughput:
• 2 x 8-lane NICs or 1 x 16-lane NIC
• Higher MTU
• TSO, LRO & RSS-enabled cards
• Disable CPU power-saving mode
• Disable hyper-threading on the host
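The PCIe arithmetic behind the ~64 Gbps ceiling can be checked quickly, using the slide's ~8 Gbps-per-lane planning number:

```python
GBPS_PER_PCIE3_LANE = 8  # the slide's usable-throughput planning number

def nic_ceiling_gbps(lanes_per_nic, num_nics=1):
    """PCIe-imposed throughput ceiling: lanes x ~8 Gbps, per NIC."""
    return lanes_per_nic * GBPS_PER_PCIE3_LANE * num_nics

print(nic_ceiling_gbps(8))      # 64: a single x8 NIC caps near 64 Gbps
print(nic_ceiling_gbps(8, 2))   # 128: hence two x8 NICs for > 40G designs
print(nic_ceiling_gbps(16))     # 128: or one x16 NIC
```

The raw PCIe 3.0 lane rate is 8 GT/s; after 128b/130b encoding and protocol overhead, ~8 Gbps of usable throughput per lane is a reasonable planning figure.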
(Diagram: dual-socket host with two x8 PCIe 3.0 NICs, each with 2 x 40 Gbps ports; ~5-20 Gbps per core depending on MTU; ~8 Gbps per PCIe lane; max throughput ~64 Gbps on an x8 PCIe 3.0 NIC)
VXLAN with UCS
Only the VIC 1340/1380 supports VXLAN offload:
• Connection policy: Dynamic
• Transmit queues: 1 / ring size 256
• Receive queues: 4 / ring size 512
• Completion queues: 5 / interrupts: 32
• Receive Side Scaling (RSS): enabled
• Virtual Extensible LAN: enabled
• Interrupt mode: MSI-X
• VXLAN configuration steps:
http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/sw/gui/config/guide/2-2/b_UCSM_GUI_Configuration_Guide_2_2/b_UCSM_GUI_Configuration_Guide_2_2_chapter_010101.html#task_3B9A228959A24E64A2C308E0DCC4A85E
Older drivers have serious bugs: Cisco VXLAN offload bug ID: CSCut02603
VMware does not recommend enabling VXLAN offload mode on VNIC if IPV6 is required
Reference
UCS VMQ (NetQueue) Config: consistent performance for both VLAN and VXLAN, all VIC hardware
Adapter Policy• vNIC MTU 9000 • Connection Policy VMQ • Transmit Queues: 8 • Receive Queues: 4• Completion Queues: 12• Interrupts 14• Interrupt Mode: MSI X
VMQ Connection Policy• Number of VMQs 16• Number of Interrupts: 34 [2 x VMQ + 2]
Note: VMQ and VXLAN Offload are mutually exclusive
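The recommended interrupt count follows the 2 x VMQ + 2 rule stated in the connection policy above:

```python
def vmq_interrupts(num_vmqs):
    """UCS VMQ connection-policy rule of thumb from the slide:
    interrupts = 2 x VMQ + 2."""
    return 2 * num_vmqs + 2

print(vmq_interrupts(16))  # 34, matching the recommended policy
```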
Reference