Cisco Confidential © 2015 Cisco and/or its affiliates. All rights reserved. 1
T-DC-15-I Cisco Connect Toronto 2016
Hitchhiker's Guide to Data Center Virtualization & Workload Consolidation
Joshua Craig Kaya, Technology Solution Architect - Data Center
May 19, 2016
In collaboration with
© 2016 Cisco and/or its affiliates. All rights reserved. 2
Agenda:
• Introduction to Data Center Workload Consolidation
• Modern Network Segmentation
• Advanced Microsegmentation
• Container Workload Consolidation
Cisco Confidential 3 C97-732424-00 © 2014 Cisco and/or its affiliates. All rights reserved.
Introduction to Data Center Workload Consolidation
© 2016 Cisco and/or its affiliates. All rights reserved. 6
Hypervisors & Containers
[Diagram: three stacks compared side by side – Type 1 Hypervisor (hypervisor directly on hardware, VMs each with their own operating system, bins/libs and apps), Type 2 Hypervisor (hypervisor on top of a host operating system, VMs each with their own OS, bins/libs and apps), and Linux Containers (LXC) (containers on a shared host OS, each with its own bins/libs and apps)]
Containers share the OS kernel of the host and thus are lightweight. However, each container must use the same OS kernel. Containers are isolated, but share the OS and, where appropriate, libs / bins.
© 2016 Cisco and/or its affiliates. All rights reserved. 7
VM Networking Example: Cisco Nexus 1000V – Bringing the Network Edge to the Hypervisor
VM connection policy:
• Defined in the network (port-profile defined policies: WEB Apps, HR, DB, DMZ)
• Applied in vCenter
• Linked to the VM UUID
Faster VM deployment – policy-based VM connectivity (vCenter + Cisco Nexus 1000V VSM)
VMs need to move (VMotion, DRS, SW upgrade/patch, hardware failure), so VM policy must move too: "VMotion for the network", giving better VM security.
Resulting in:
• A consistent connection state
• Operational efficiency for VI and network admins
• Secure workload mobility with rich services
Components: Cisco Nexus 1000V VSM (integrated with vCenter) and the Cisco Nexus 1000V Virtual Ethernet Module (VEM) on each VMware vSphere host.
© 2016 Cisco and/or its affiliates. All rights reserved. 8
Cisco Nexus 1000V for Hyper-V: Consistent Multi-Hypervisor Platform
• SCVMM integration
• VXLAN-based network virtualization
• Advanced NX-OS feature set
• VSG-based distributed security
• Consistent operational model
[Diagram: Nexus 1000V VSM controlling Nexus 1000V VEMs, which act as an extensible vSwitch (capture, filtering, forwarding) between VM vNICs and the host pNICs]
© 2016 Cisco and/or its affiliates. All rights reserved. 9
Application Centric Infrastructure Components
• Application Network Profile
• Centralized policy management via the APIC – open APIs, open source, open standards
• ACI ecosystem partners: orchestration frameworks, hypervisor management (OVM), systems management, fabric automation, enterprise monitoring
• End points: physical & virtual
• Physical networking: Nexus 2K, Nexus 7K
• Hypervisors and virtual networking; compute; L4–L7 services; storage
• Multi-DC, WAN and cloud; integrated WAN edge
© 2016 Cisco and/or its affiliates. All rights reserved. 10
• OPFLEX enabled vSwitch
• Single point of control via APIC
• Consistent policy between virtual and physical fabric ports.
• Supports a Full Layer 2 Network (Nexus 7k/6k/5k/3k/2k/FI) between Nexus 9k and AVS: Investment Protection
• VDS (VMware Distributed Switch) can only support a single L2 switch between N9k and VDS
• AVS enables micro-segmentation (based on VM attributes) and a distributed firewall
AVS Providing Advanced Virtual Security Features for ACI
[Diagram: vSphere hosts running AVS connect to the Nexus 9K fabric across an L2 network, with OpFlex as the control channel from the APIC/fabric to each AVS]
© 2016 Cisco and/or its affiliates. All rights reserved. 11
Unified Fabric's SingleConnect Technology Provides an Efficient Foundation for Growth
One connection type for all protocols
SINGLECONNECT TECHNOLOGY
Efficient capacity scaling
Automated I/O bandwidth allocation
Auto-discovery & self-integrating components: network and compute
Direct SAN access
Wire once then manage through software
As you scale, the simplified architecture reduces cost and facilitates growth
[Diagram: traditional design with separate SAN A / SAN B and ETH 1 / ETH 2 fabrics vs. Cisco Unified Fabric over 10 GE Ethernet]
© 2016 Cisco and/or its affiliates. All rights reserved. 12
Unified Management with UCSD Express for Big Data – Programmability, Scalability and Automation
• UCSD Express manages the Hadoop stack through Cisco UCS templates and OS profiles via UCS Manager
• Hardware building blocks: UCS 6200 Series Fabric Interconnects, UCS C240 M4 Series rack servers, UCS C3160 rack servers
© 2016 Cisco and/or its affiliates. All rights reserved. 13
Comparing Traditional Architectures to UCS CPA for Big Data
As your Big Data deployment grows, significant and ongoing savings create a compelling business case
Number of cables:
  Servers          Traditional Approach   With Cisco UCS
  At 32 servers    180                    80
  At 64 servers    360                    128-160
  At 160 servers   530-870                320-400
© 2016 Cisco and/or its affiliates. All rights reserved. 14
Hyperconverged Scale Out and Distributed File System
Hyperconverged Data Platform:
• Start with as few as three nodes; installs in minutes
• Add servers, one or more at a time
• Linearly scale compute, storage performance, and capacity
• Distribute and rebalance data across servers automatically
• Retire older servers
[Diagram: multiple hypervisor nodes, each running a controller VM alongside guest VMs, joined into one hyperconverged data platform]
© 2016 Cisco and/or its affiliates. All rights reserved. 15
High Resiliency, Fast Recovery
Platform Can Sustain Simultaneous 2 Node Failure Without Data Loss; Replication Factor Is Tunable
If a Node Fails, the Evacuated VMs Re-attach With No Data Movement Required
Replacement Node Automatically Configured Via UCS Service Profile
HX Data Platform Automatically Re-Distributes Data to Node
[Diagram: four nodes, each with a hypervisor, a controller VM and guest VMs, running on the HX Data Platform]
© 2016 Cisco and/or its affiliates. All rights reserved. 16
Typical Network Topology – Shortcomings
[Diagram: traditional three-zone design (DMZ, Internal, High Security) with paired firewalls and ADCs, each tier mapped to its own VLAN/subnet]
Cisco Confidential 17 C97-732424-00 © 2014 Cisco and/or its affiliates. All rights reserved.
Modern Network Segmentation
© 2016 Cisco and/or its affiliates. All rights reserved. 18
Modernizing the Data Center – Nexus 9K and ACI: Broad and Deep Ecosystem
Three deployment models:
• Existing 2/3-tier designs – modernized operating system, programmable open APIs, Linux containers, integrated network virtualization (no gateways)
• Programmable SDN overlay model – VXLAN / BGP, third-party controller, any hypervisor
• Application Centric Infrastructure – physical & virtual, open APIs & controller (APIC)
© 2016 Cisco and/or its affiliates. All rights reserved. 19
• VXLAN provides the same Ethernet L2 services as a VLAN does, but with greater extensibility and flexibility.
L2 overlay over an L3 underlay, with use of any IP routing protocol.
Uses MAC-in-IP (UDP) encapsulation; the 24-bit VXLAN ID enables up to 16 million unique networks.
• Optimized Flooding
Leverages multicast in the transport network to simulate flooding behavior for broadcast, unknown unicast, and multicast in the L2 segment.
• Optimal Routing
Leverages ECMP (Equal-Cost Multi-Pathing) to achieve optimal path usage over the transport network.
VXLAN—Virtual Extensible LAN
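As a concrete illustration of the MAC-in-UDP encapsulation described above, the hedged Python sketch below packs the 8-byte VXLAN header (flags plus a 24-bit VNID) in front of an inner Ethernet frame. The VNID value and the sample frame bytes are illustrative assumptions, not values taken from the slides.

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned destination port for VXLAN

def vxlan_encapsulate(inner_frame: bytes, vnid: int) -> bytes:
    """Prepend the 8-byte VXLAN header (RFC 7348) to an inner Ethernet frame.

    Header layout: 8-bit flags (I bit = 0x08 marks a valid VNID),
    24 reserved bits, 24-bit VNID, 8 reserved bits. The result would then
    be carried inside UDP (dst port 4789) / IP between source and
    destination VTEPs.
    """
    if not 0 <= vnid < 2 ** 24:
        raise ValueError("VNID must fit in 24 bits")
    flags_word = 0x08 << 24              # I flag set, reserved bits zero
    vnid_word = vnid << 8                # VNID occupies the upper 24 bits
    header = struct.pack("!II", flags_word, vnid_word)
    return header + inner_frame

# Illustrative inner frame (not a real capture): dst MAC, src MAC, EtherType, payload
inner = bytes.fromhex("ffffffffffff" "005056000001" "0800") + b"payload"
packet = vxlan_encapsulate(inner, vnid=5789)
print(len(packet), packet[:8].hex())
```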
© 2016 Cisco and/or its affiliates. All rights reserved. 21
VXLAN & EVPN – Ethernet VPN
RFC 7348 Virtual eXtensible Local Area Network
RFC 7432 BGP MPLS based Ethernet VPNs
A Network Virtualization Overlay Solution using EVPN
• draft-ietf-bess-evpn-overlay
Integrated Routing and Bridging in EVPN
• draft-ietf-bess-evpn-inter-subnet-forwarding
IP Prefix Advertisement in E-VPN
• draft-rabadan-l2vpn-evpn-evpn-prefix-advertisement
VXLAN/EVPN interoperability demonstrated during the MPLS/SDN World Congress in Paris
Participating vendors: Cisco, Juniper, Alcatel-Lucent & Ixia
Independently tested at EANTC, with a publicly available whitepaper: http://www.eantc.de/showcases/mpls_sdn_2015/intro.html
© 2016 Cisco and/or its affiliates. All rights reserved. 22
Standards-based overlay (VXLAN) with a standards-based control plane (EVPN MP-BGP)
Layer-2 MAC and Layer-3 IP information distribution by the control plane (BGP)
Forwarding decision based on the control plane (minimizes flooding)
Integrated Routing/Bridging (IRB) for optimized forwarding in the overlay
Higher scalability than multicast-only, flood-and-learn (F&L) VXLAN transport
Control-plane-only nodes, or nodes with data-plane function (leafs and border)
What is VXLAN/EVPN?
© 2016 Cisco and/or its affiliates. All rights reserved. 23
Why VXLAN Overlay?
Customer Need                                   VXLAN Delivers
Any workload anywhere – VLANs limited by L3     Any workload anywhere – across Layer 3 boundaries
VM mobility                                     Seamless VM mobility
Scale above 4K segments (VLAN limitation)       Scale up to 16M segments
Secure multi-tenancy                            Traffic & address isolation
VTEP VTEP VTEP VTEP VTEP
VXLAN Overlay
© 2016 Cisco and/or its affiliates. All rights reserved. 24
VXLAN provides a Fabric with Segmentation, IP Mobility & Scale
Why VXLAN?
“Standards” based Overlay
Leverages Layer-3 ECMP – all links forwarding
Increased name space to 16M identifiers
Integration of Physical and Virtual
It’s SDN
© 2016 Cisco and/or its affiliates. All rights reserved. 25
Challenges with Traditional VXLAN Deployments: Scale and Mobility Limitations
LIMITED SCALE
Flood and learn (BUM) – inefficient bandwidth utilization
Resource intensive – large MAC tables
LIMITED WORKLOAD MOBILITY
Centralized gateways – traffic hair-pinning
Sub-optimal traffic flow
VTEP VTEP VTEP VTEP VTEP
VXLAN Overlay
Barrier for Scaling out Large Data Centers and Cloud Deployments
© 2016 Cisco and/or its affiliates. All rights reserved. 26
Next-Gen VXLAN Fabric with BGP-EVPN Control Plane
Delivering Multi-Tenancy and Seamless Host Mobility at Cloud Scale
INCREASED SCALE
Eliminates flooding
Conversational learning
Policy-based updates
OPTIMIZED MOBILITY
Distributed anycast gateway
Integrated routing/bridging
vPC & ECMP
INTEROPERABLE
Standards based
BGP-EVPN
VXLAN
OPERATIONAL FLEXIBILITY
Layer 2 or Layer 3
Controller choice
[Diagram: BGP-EVPN VXLAN overlay with VTEPs as BGP peers of a pair of route reflectors]
Breaking the Traditional VXLAN Scale Barriers
© 2016 Cisco and/or its affiliates. All rights reserved. 27
ACI Fabric – An IP Network with an Integrated Overlay (Virtual and Physical)
• Cisco’s ACI solution leverages an integrated VXLAN based overlay
• IP Network for Transport
• VXLAN based tunnel end points (VTEP)
• VTEP discovery via infrastructure routing
• Directory (Mapping) service for EID (host MAC and IP address) to VTEP lookup
[Diagram: an IP transport fabric managed by the APIC, with hardware VTEPs on the leaf switches and software VTEPs on vSwitches; packets are carried as VTEP / VXLAN / IP / payload]
© 2016 Cisco and/or its affiliates. All rights reserved. 28
ACI Fabric – Integrated Overlay: Data Path – Encapsulation Normalization
• All traffic within the ACI fabric is encapsulated with an extended VXLAN header
• External VLAN, VXLAN and NVGRE tags are mapped at ingress to an internal VXLAN tag
• Forwarding is not limited to, nor constrained within, the encapsulation type or encapsulation 'overlay' network
• External identifiers are localized to the leaf or leaf port, allowing re-use and/or translation if required
[Diagram: localized edge encapsulations (802.1Q VLAN 50, VXLAN VNID 5789, VXLAN VNID 11348, NVGRE VSID 7456) are normalized at ingress into the IP fabric's internal VXLAN tagging, enabling any-to-any forwarding]
© 2016 Cisco and/or its affiliates. All rights reserved. 29
ACI Fabric: IETF VXLAN Group-Based Policy
The ACI VXLAN header provides a tagging mechanism to identify properties associated with frames forwarded through an ACI-capable fabric. It is an extension of the Layer 2 LISP protocol (draft-smith-lisp-layer2-01) with the addition of policy group, load and path metric, counter, ingress port and encapsulation information. The VXLAN header is not associated with a specific L2 segment or L3 domain, but provides a multi-function tagging mechanism used in an ACI Application Defined Networking enabled fabric.
[Packet format: Outer Ethernet | Outer IP | Outer UDP | VXLAN (8 bytes) | Inner Ethernet | Inner IP header | Payload | New FCS]
VXLAN header fields: Flags (1 byte) | Source Group | VXLAN Instance ID (VNID) | M/LB/SP
Flag bits:
N: the nonce-present bit
L: the Locator-Status-Bits field enabled bit
I: the Instance ID bit; indicates the presence of the VXLAN Network ID (VNID) field. When set, it indicates that the VNID field is valid.
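To make the field layout above more concrete, here is a hedged Python sketch that parses a VXLAN header carrying a source-group field in the style of the public VXLAN Group-Based Policy draft (G bit in the flags, a 16-bit group ID in the otherwise-reserved word, then the 24-bit VNID). This layout is an assumption taken from the draft; Cisco's actual iVXLAN header carries additional ACI-specific metadata (load/path metrics, ingress port) that is not modeled here.

```python
import struct
from dataclasses import dataclass

@dataclass
class VxlanGbpHeader:
    flags: int         # 8 bits; 0x08 = I (VNID valid), 0x80 = G (group policy present)
    source_group: int  # 16-bit policy group / source EPG class id (per the GBP draft)
    vnid: int          # 24-bit VXLAN network identifier

def parse_vxlan_gbp(header: bytes) -> VxlanGbpHeader:
    """Parse the first 8 bytes of a VXLAN-GBP style header (assumed layout)."""
    if len(header) < 8:
        raise ValueError("VXLAN header is 8 bytes")
    word0, word1 = struct.unpack("!II", header[:8])
    flags = word0 >> 24
    source_group = word0 & 0xFFFF      # group id sits in the low 16 bits of word 0
    vnid = word1 >> 8                  # VNID in the upper 24 bits of word 1
    if not flags & 0x08:
        raise ValueError("I bit not set: VNID field is not valid")
    return VxlanGbpHeader(flags, source_group, vnid)

# Illustrative header: G+I flags, group 0x0010, VNID 11348 (values are assumptions)
sample = struct.pack("!II", (0x88 << 24) | 0x0010, 11348 << 8)
print(parse_vxlan_gbp(sample))
```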
© 2016 Cisco and/or its affiliates. All rights reserved. 31
Troubleshooting Workflows (e.g. EP to EP)
• Webserver and Application servers are having issues.
• Using the tool gave us the logical topology and helped us isolate the issue.
• We found the issue: the LLDP neighbor is a bridge and its port VLAN (1) mismatches with the local port VLAN (unspecified).
Go see all this working in the World of Solutions
Cisco Confidential 33 C97-732424-00 © 2014 Cisco and/or its affiliates. All rights reserved.
Advanced Microsegmentation
© 2016 Cisco and/or its affiliates. All rights reserved. 34
Start putting aside your networking notions
[Diagram: traditional constructs – IP routing, spanning tree (SPT), VLANs, IP subnets, bridging]
© 2016 Cisco and/or its affiliates. All rights reserved. 35
Review: Typical Network Topology – Shortcomings
[Diagram: three-zone design (DMZ, Internal, High Security) with paired firewalls and ADCs, each tier mapped to its own VLAN/subnet]
© 2016 Cisco and/or its affiliates. All rights reserved. 37
ACI Terminology – ACI Fabric
• The Cisco Application Centric Infrastructure (ACI) fabric consists of Cisco Nexus 9000 Series switches, with the APIC, running in the leaf/spine ACI fabric mode
• These switches form a "fat-tree" network by connecting each leaf node to each spine node; all other devices connect to the leaf nodes
Highlights:
• Turnkey integrated solution with security, centralized management, compliance and scale
• Automated, application-centric policy model with embedded security
• Broad and deep ecosystem
[Diagram: leaf/spine fabric with Web, App and DB workloads attached to the leaf nodes]
© 2016 Cisco and/or its affiliates. All rights reserved. 38
ACI Terminology – Application Policy Infrastructure Controller: Centralized Point of Management, Automation and Policy Enforcement
POLICY: Application centric network policy
SECURE: Security and performance at scale
VISIBILITY: System-wide visibility, telemetry and health
OPENNESS: Open Northbound and Southbound
EXTENSIBLE: Hypervisors, L4-7 services integration/chaining
INTEGRATED OVERLAY (Physical/Virtual)
© 2016 Cisco and/or its affiliates. All rights reserved. 39
Reviewing: Tenant Model
• Tenant – customer / BU / group; a tenant can contain multiple contexts
• Context – L3 context / VRF, defining the IP space(s)
• Bridge Domain – L2 boundary, containing one or more subnets (e.g. Subnet A/B; Subnet D/B/F)
• EPG (End Point Group) – groups of end-points and the policies that define their connection (e.g. EPG A, EPG B, EPG C within a bridge domain)
© 2016 Cisco and/or its affiliates. All rights reserved. 40
Tight Coupling with the Network: L4-L7 Services, Location, Identity, Connectivity
[Diagram: physical servers and virtual machines coupled to the network through interface, VLAN, subnet and gateway]
© 2016 Cisco and/or its affiliates. All rights reserved. 41
ACI Abstraction Policy Model
End Point Group (EPG)
End Points
Physical Servers Virtual Machines
EPGs are a grouping of end-points representing an application or application components, independent of other network constructs.
© 2016 Cisco and/or its affiliates. All rights reserved. 42
Reviewing: Defining EPG Relationships Via Contracts
[Diagram: EPG Web (EP 1, EP 2) connected to EPG App (EP 1, EP 2) through a contract containing Subject 1 and Subject 2, each composed of Filter | Action | Label]
EPG communication is defined by mapping EPGs to one another via contracts.
© 2016 Cisco and/or its affiliates. All rights reserved. 43
Applying Policy between EPGs: ACI contracts
Contracts define the way in which EPGs interact. The policy model allows for both unidirectional and bidirectional policies.
[Diagram: EPG A and EPG B communicate bidirectionally via Contract 01; EPG B and EPG C communicate unidirectionally via Contract 02]
Ex: ACI logical model applied to the "3-Tier App" ANP
© 2016 Cisco and/or its affiliates. All rights reserved. 44
Reviewing: ACI Contracts
Contracts (C) define what an EPG exposes to other application tiers, and how.
Contracts are reusable for multiple EPGs, and EPGs can inherit multiple contracts.
The use of contracts separates 'what' a policy is from 'where' it exists, extending its use.
[Diagram: an Application Network Profile in which EPG Web, EPG App and EPG DB are chained via contracts, and all tiers also consume contracts to EPG NFS and EPG MGMT]
© 2016 Cisco and/or its affiliates. All rights reserved. 51
ACI – Prescriptive Microsegmentation Design Options
© 2016 Cisco and/or its affiliates. All rights reserved. 52
Summary: Network Profiles
Entity                        Description
Tenant                        Represents a policy owner in the virtual fabric.
Application Network Profile   The definition of a tenant's policy, representing a set of requirements that a given application instance has on the virtualizable fabric. Such a policy regulates connectivity and visibility among the end-points in scope.
End Point Group (EPG)         Groups of elements (virtual machines, physical servers, etc.), essentially identified by port on a network. EPGs capture groups of machines with the same policies, which is highly efficient as policy changes are propagated from higher-level orchestration systems.
Contracts                     Policies between EPGs. Contracts are "provided" by one EPG and "consumed" by another.
Filters                       Encode specific rules within a contract.
Bridge Domain                 An L2 context (may or may not include broadcast semantics).
Context                       An L3 context, essentially a VRF.
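To show how these entities might be created programmatically, the hedged Python sketch below posts a small tenant, application profile, EPGs and a contract to the APIC REST API. The APIC address, credentials, object names and the exact payload structure are illustrative assumptions modeled on the ACI object model (fvTenant, fvAp, fvAEPg, vzBrCP), not configuration taken from the slides, and attribute details may vary by APIC release.

```python
import requests

APIC = "https://apic.example.com"     # hypothetical APIC address
AUTH = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}

# A tenant with one application profile, two EPGs, and a Web->App contract.
# Class names follow the ACI management information model; treat as a sketch.
TENANT = {
    "fvTenant": {
        "attributes": {"name": "Toronto-Demo"},
        "children": [
            {"vzBrCP": {"attributes": {"name": "web-to-app"},
                        "children": [{"vzSubj": {"attributes": {"name": "http"}}}]}},
            {"fvAp": {"attributes": {"name": "3-Tier-App"},
                      "children": [
                          {"fvAEPg": {"attributes": {"name": "Web"},
                                      "children": [{"fvRsCons": {"attributes": {"tnVzBrCPName": "web-to-app"}}}]}},
                          {"fvAEPg": {"attributes": {"name": "App"},
                                      "children": [{"fvRsProv": {"attributes": {"tnVzBrCPName": "web-to-app"}}}]}},
                      ]}},
        ],
    }
}

session = requests.Session()
session.verify = False                                   # lab use only
session.post(f"{APIC}/api/aaaLogin.json", json=AUTH)     # obtain a session cookie
resp = session.post(f"{APIC}/api/mo/uni.json", json=TENANT)
print(resp.status_code, resp.text[:200])
```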
© 2016 Cisco and/or its affiliates. All rights reserved. 53
Hypervisor Interaction with Cisco ACI
Integrated Mode
• Cisco ACI fabric as a policy authority
• Encapsulations normalized and dynamically provisioned
• Integrated policy domains across physical and virtual
Nonintegrated Mode
• Cisco® ACI fabric as an IP-Ethernet transport
• Encapsulations manually allocated (e.g. VLAN 10, VXLAN 10000)
• Separate policy domains for physical and virtual
© 2016 Cisco and/or its affiliates. All rights reserved. 54
Hypervisor Integration with Cisco ACI – Control Channel: VMM Domains
• A relationship is formed between the Cisco® APIC and the Virtual Machine Manager (VMM)
• Multiple VMMs are likely on a single Cisco ACI fabric
• Each VMM and its associated virtual hosts are grouped within the Cisco APIC – called a VMM domain
• There is a 1:1 relationship between a virtual switch and a VMM domain
[Diagram: VMM Domain 1 – VMware vCenter with DVS; VMM Domain 2 – VMware vCenter with AVS; VMM Domain 3 – Microsoft System Center Virtual Machine Manager 2012 (SCVMM); each managing its own VMware vSphere or Hyper-V hosts]
© 2016 Cisco and/or its affiliates. All rights reserved. 55
Hypervisor Integration with Cisco ACI
• The Cisco® ACI fabric implements policy on virtual networks by mapping endpoints to EPGs
• Endpoints in a virtualized environment are represented as vNICs
• The VMM applies network configuration by placing vNICs into port groups or VM networks
• EPGs are exposed to the VMM as a 1:1 mapping to port groups or VM networks
[Diagram: an Application Network Profile (EPG WEB → F/W → EPG APP → L/B → EPG DB) rendered as WEB, APP and DB port groups, with VMs attached to each]
© 2016 Cisco and/or its affiliates. All rights reserved. 60
VMware Integration – Three Options
Application Virtual Switch (AVS)
• Encapsulations: VLAN, VXLAN
• Installation: VIB through VUM or console
• VM discovery: OpFlex
• Software/Licenses: VMware vCenter with Enterprise+ license
Distributed Virtual Switch (DVS)
• Encapsulations: VLAN
• Installation: Native
• VM discovery: LLDP
• Software/Licenses: VMware vCenter with Enterprise+ license
vCenter + vShield
• Encapsulations: VLAN, VXLAN
• Installation: Native
• VM discovery: LLDP
• Software/Licenses: VMware vCenter with Enterprise+ license, vShield Manager with vShield license
© 2016 Cisco and/or its affiliates. All rights reserved. 61
Microsoft Interaction with Cisco ACI – Two Options
Integration with Microsoft SCVMM (System Center Virtual Machine Manager)
• Policy management: through Cisco® APIC
• Software and license: Microsoft Windows Server with Hyper-V and SCVMM
• VM discovery: OpFlex
• Encapsulations: VLAN and NVGRE (future)
• Plug-in installation: manual
Integration with Microsoft Azure Pack (a superset of Microsoft SCVMM)
• Policy management: through Cisco APIC or Microsoft Azure Pack
• Software and license: Microsoft Windows Server with Hyper-V, SCVMM, and Azure Pack (free)
• VM discovery: OpFlex
• Encapsulations: VLAN and NVGRE (future)
• Plug-in installation: integrated
© 2016 Cisco and/or its affiliates. All rights reserved. 65
Cisco OpenStack – Cisco ACI Model: Neutron API Mapping
OpenStack              Cisco® ACI
Tenant                 Tenant
(no equivalent)        Application Profile
Network                EPG + Bridge Domain
Subnet                 Subnet
Security Group         Handled by host
Security Group Rule    Handled by host
Router                 Layer 3 Context
Network: External      Layer 3 Outside
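As a hedged illustration of how the Neutron objects on the left are typically created (each rendered as the ACI construct on the right when the APIC ML2 driver is in use), the sketch below uses the openstacksdk to create a network, subnet and router. The cloud name, CIDR and resource names are assumptions for the example.

```python
import openstack

# Connect using credentials from clouds.yaml / environment (cloud name assumed).
conn = openstack.connect(cloud="demo-cloud")

# Network + Subnet -> rendered as an EPG + Bridge Domain (with its subnet) in ACI.
net = conn.network.create_network(name="web-net")
subnet = conn.network.create_subnet(
    name="web-subnet",
    network_id=net.id,
    ip_version=4,
    cidr="10.10.1.0/24",
)

# Router -> rendered as a Layer 3 context in the ACI model.
router = conn.network.create_router(name="tenant-router")
conn.network.add_interface_to_router(router, subnet_id=subnet.id)

print(net.id, subnet.id, router.id)
```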
© 2016 Cisco and/or its affiliates. All rights reserved. 66
Group-Based Policy in OpenStack (Juno Release)
https://blueprints.launchpad.net/neutron/+spec/group-based-policy-abstraction
• Messy mapping of the Cisco® ACI model to the current OpenStack components:
− Endpoint groups (ports + security groups)
− Contracts (security groups + security group rules)
• Goal: introduce the Cisco ACI model into OpenStack
• Starting with groups and group-based policies
© 2016 Cisco and/or its affiliates. All rights reserved. 71
Embedded ACI Security
ACI embedded security:
• White-list policy, micro-segmentation
• L4-L7 service automation (ACI services graph)
• L4 distributed firewall, multi-tenancy
Cisco Security (ASA / FirePOWER / AMP):
• World's most deployed NGFW
• Highest rated NGIPS and breach detection
• Deep forensic analysis
• Dynamic workload quarantine
Integrated protection: advanced protection with ASA, FirePOWER and AMP delivered as L4-L7 services on the fabric.
© 2016 Cisco and/or its affiliates. All rights reserved. 72
L4-L7 Service Automation – Support for All Devices: Any Device and Cluster Manager Support
• Full L4-L7 centralized service automation (with a device package)
• Centralized network automation (with no device package)
• New support for L4-L7 cluster managers (service cluster manager)
• L4-L7 service automation for virtual and physical firewalls and fabrics
• Large ecosystem and investment protection
© 2016 Cisco and/or its affiliates. All rights reserved. 74
Issues with a Stateless Firewall
Stateless filter:
Source class   Source Port   Dest class   Destination Port   Action
Consumer       *             Provider     80                 Permit
Provider       80            Consumer     *                  Permit
Problem: the server can connect to any client port.
Example exchange between Consumer (IP_C) and Provider (IP_P):
1. IP_C:1234 -> IP_P:80   SYN        (connection established)
2. IP_P:80   -> IP_C:1234 SYN+ACK    (legitimate reply)
3. IP_P:80   -> IP_C:2000 SYN+ACK    (not blocked by fabric)
4. IP_P:80   -> IP_C:4000 SYN+ACK    (not blocked by fabric)
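The hedged sketch below mimics the two filter entries above in plain Python to show the gap: with the stateless rule, any packet sourced from the provider's port 80 toward the consumer is permitted regardless of destination port, whereas adding the ACK-flag condition (as the hardware-assisted stateful approach on the following slides does) stops pure-SYN probes; unsolicited SYN+ACK probes still need the flow table described next. Class names, ports and packets are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Packet:
    src_class: str
    src_port: int
    dst_class: str
    dst_port: int
    flags: frozenset  # e.g. frozenset({"SYN"}), frozenset({"SYN", "ACK"})

def stateless_permit(p: Packet) -> bool:
    """Slide rule: Provider 80 -> Consumer *, permit, no flag check."""
    return p.src_class == "Provider" and p.src_port == 80 and p.dst_class == "Consumer"

def ack_rule_permit(p: Packet) -> bool:
    """Hardware-assisted rule from the next slides: Provider 80 -> Consumer *,
    permitted only when the ACK flag is set (blocks pure-SYN attacks)."""
    return stateless_permit(p) and "ACK" in p.flags

syn_attack     = Packet("Provider", 80, "Consumer", 4000, frozenset({"SYN"}))
syn_ack_attack = Packet("Provider", 80, "Consumer", 4000, frozenset({"SYN", "ACK"}))
legit_reply    = Packet("Provider", 80, "Consumer", 1234, frozenset({"SYN", "ACK"}))

for pkt in (syn_attack, syn_ack_attack, legit_reply):
    print(pkt.dst_port, sorted(pkt.flags), stateless_permit(pkt), ack_rule_permit(pkt))
# The pure SYN attack is stopped by the ACK rule; the rogue SYN+ACK still passes
# the leaf policy and is only caught by the vLeaf flow table (connection tracking).
```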
© 2016 Cisco and/or its affiliates. All rights reserved. 75
Hardware Assisted Stateful Firewall – forward path (Consumer A -> Provider B)
Leaf policy (evaluated in hardware):
Src class   Src port   Dest class   Dest port   Flag   Action
A           *          B            80          *      Allow
B           80         A            *           ACK    Allow
1. The consumer-side vLeaf creates a flow table entry and forwards the packet to the iLeaf
2. The leaf evaluates the stateless policy
3. The hardware policy permits the packet
4. The provider-side vLeaf creates flow state only for a TCP SYN packet received from the pNIC
5. The packet is delivered to the destination VM
(The vLeaf flow tables track entries such as: VLAN A, tcp, IP_A:1234 -> IP_B:80 and its reverse; likewise VLAN B on the provider side.)
© 2016 Cisco and/or its affiliates. All rights reserved. 76
Hardware Assisted Stateful Firewall – return path (Provider B -> Consumer A)
6. Response from the VM
7. The provider-side vLeaf performs a flow table lookup (connection tracking at the vLeaf)
8. On a flow table hit, the packet is forwarded to the iLeaf
9. Policy enforcement is done at the iLeaf – the hardware policy (B 80 -> A *, ACK, Allow) permits the packet
© 2016 Cisco and/or its affiliates. All rights reserved. 77
Hardware Assisted Stateful Firewall – Case 1: SYN+ACK attack from the Provider
Entry   Src class   Src port   Dest class   Dest port   Flag   Action
100     A           *          B            80          *      Allow
200     B           80         A            *           ACK    Allow
1. SYN+ACK packets arrive from the Provider for a connection that was not initiated by the Consumer (dest port != 1234)
2. The packet is dropped by the vLeaf because of the missing flow entry
© 2016 Cisco and/or its affiliates. All rights reserved. 78
Hardware Assisted Stateful Firewall – Case 2: SYN attack from the Provider
Entry   Src class   Src port   Dest class   Dest port   Flag   Action
100     A           *          B            80          *      Allow
200     B           80         A            *           ACK    Allow
1. SYN attack from the Provider: the leaf evaluates the stateful policy
2. The SYN packets are dropped by hardware on the iLeaf due to the policy (entry 200 requires the ACK flag)
© 2016 Cisco and/or its affiliates. All rights reserved. 79
Distributed Firewall (DFW) on AVS
• Connection tracking support (TCP) on AVS
• DFW is only applicable to virtual end points
• DFW is not applicable to system ports (vmkernel ports) and uplinks
• Global (per AVS host) flow limit: 250,000
• Per-interface (end point) flow limit: 10,000
• Aging interval: adaptive aging (5 minutes – 2 hours)
• States for a flow:
• STATE_SYN_RECV
• STATE_SYN_ACK_RECV
• STATE_ESTABLISHED
• STATE_FIN_RECV
• STATE_ESTABLISHED_ONE_DIR
• STATE_2ND_FIN_RECV
• STATE_FTP_DATA
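To illustrate how a per-flow connection tracker can walk through states like the ones listed above, here is a hedged Python sketch of a minimal TCP flow state machine. The transition rules are a simplified assumption for illustration only; the real AVS DFW logic lives in the vSwitch datapath and is not documented here.

```python
# Simplified TCP flow tracker walking a subset of the states listed above.
# Transitions are illustrative assumptions, not the actual AVS DFW rules.
TRANSITIONS = {
    ("NEW",                "SYN"):     "STATE_SYN_RECV",
    ("STATE_SYN_RECV",     "SYN+ACK"): "STATE_SYN_ACK_RECV",
    ("STATE_SYN_ACK_RECV", "ACK"):     "STATE_ESTABLISHED",
    ("STATE_ESTABLISHED",  "FIN"):     "STATE_FIN_RECV",
    ("STATE_FIN_RECV",     "FIN"):     "STATE_2ND_FIN_RECV",
}

class Flow:
    def __init__(self):
        self.state = "NEW"

    def observe(self, segment: str) -> str:
        """Advance the flow state for an observed TCP segment type."""
        self.state = TRANSITIONS.get((self.state, segment), self.state)
        return self.state

flow = Flow()
for seg in ("SYN", "SYN+ACK", "ACK", "FIN", "FIN"):
    print(seg, "->", flow.observe(seg))
# NEW -> STATE_SYN_RECV -> STATE_SYN_ACK_RECV -> STATE_ESTABLISHED
#     -> STATE_FIN_RECV -> STATE_2ND_FIN_RECV
```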
© 2016 Cisco and/or its affiliates. All rights reserved. 81
Docker – What is it, and what is its goal?
Docker is an open platform for sysadmins and developers to build, ship and run distributed applications.
Docker enables applications to be easily and quickly assembled from reusable components, eliminating the silo-ed approach between development, QA, and production environments.
At a high level, Docker is built of:
• Docker Engine: a portable and lightweight runtime and packaging tool
• Docker Hub: a cloud service for sharing applications and automating workflows
Docker's main purpose: the lightweight packaging and deployment of applications.
© 2016 Cisco and/or its affiliates. All rights reserved. 82
What are containers?
• Open-source containers for Dummies
• An open source engine to commoditize LXC
• Creates lightweight, portable, isolated, self-sufficient containers from any application
• Delivers on the full DevOps goal: build once… run anywhere; configure once… run anything
• Ecosystems! OS, VMs, PaaS, IaaS…
© 2016 Cisco and/or its affiliates. All rights reserved. 84
Docker – How does isolation work?
Processes executing in a Docker container are isolated from processes running on the host OS or in other Docker containers. Nevertheless, all processes are executing in the same Linux kernel.
Docker leverages LXC to provide separate namespaces for containers, a technology that has been present in Linux kernels for 5+ years.
It also uses Control Groups (cgroups), which have been in the Linux kernel even longer, to implement resource auditing and limiting.
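As a hedged illustration of the cgroup-based resource limiting described above, the sketch below uses the Docker SDK for Python to start a container with memory and CPU caps (which Docker enforces through cgroups) and to inspect the resulting limits. The image tag and limit values are assumptions for the example.

```python
import docker

client = docker.from_env()

# Run a throwaway container with cgroup-enforced limits:
# 128 MB of memory and half a CPU (nano_cpus is in billionths of a CPU).
container = client.containers.run(
    "alpine:3.18",                # assumed image tag
    "sleep 30",
    detach=True,
    mem_limit="128m",
    nano_cpus=500_000_000,
)

# The limits show up in the container's HostConfig (backed by cgroup settings).
container.reload()
host_cfg = container.attrs["HostConfig"]
print("Memory limit :", host_cfg["Memory"])
print("NanoCpus     :", host_cfg["NanoCpus"])

container.remove(force=True)
```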
© 2016 Cisco and/or its affiliates. All rights reserved. 86
Running Docker on your own machine
• Directly, on OS X
• On a VM "wrap" (Vagrant), on Windows, Linux or OS X
© 2016 Cisco and/or its affiliates. All rights reserved. 87
Docker Misconceptions – from a multi-host & mission-critical applications perspective
• If I use Docker then I don't need a configuration management (CM) tool (Ansible, Puppet, etc.);
• If I learn Docker then I don't have to learn the other systems and CM tools;
• You should have only one process per Docker container;
• I should use Docker right now for all!
• I have to use Docker in order to get the speed and consistency advantages
… but, using Docker makes all the above easier from a DevOps perspective…
© 2016 Cisco and/or its affiliates. All rights reserved. 88
Hypervisors vs. Linux Containers
[Diagram (repeated from earlier): three stacks comparing Type 1 Hypervisor, Type 2 Hypervisor, and Linux Containers (LXC)]
Containers share the OS kernel of the host and thus are lightweight. However, each container must use the same OS kernel. Containers are isolated, but share the OS and, where appropriate, libs / bins.
© 2016 Cisco and/or its affiliates. All rights reserved. 89
Hypervisor VM vs. LXC vs. Docker containers
© 2016 Cisco and/or its affiliates. All rights reserved. 90
Container Networking Solutions
Solution        Vendor / Community
Flannel         CoreOS
WeaveNet        WeaveWorks
OVN             VMware
Contiv          Cisco
Calico          MetaSwitch Networks
Libnetwork      Docker
OpenShift SDN   Red Hat
Nuage SDN       Nokia
OpenContrail    Juniper
© 2016 Cisco and/or its affiliates. All rights reserved. 91
Considerations on VMs vs. Docker Containers

Consideration               Containers (Docker / LXC)                           Hypervisors
Virtualization approach     At the Operating System (OS) level                  At the hardware level
Abstraction                 Application from OS                                 OS from hardware
Application availability    Linux apps able to run on kernel 3.8 and beyond     Any that could run in a VM
"Application-ready" time    ~0.5 s (to fire up)                                 ~20 s (for VM boot up)
Storage consumption         Single storage + per-layer storage delta            Storage space for each instance
Save of "new status"        New app "delta" layer added to the image            VM snapshot or boot new VM (*)
Performance                 Runs directly on top of the Linux kernel (**)       Hypervisor as a performance "shim"
Security                    Via cgroups and namespaces; SELinux helps           Per-VM basis, leverages the hypervisor
Linux space                 User space (can leverage Linux kernel modules)      Isolated into the VM space; access to hypervisor/kernel functions varies per solution / vendor

(*) If it's the same OS in every VM, why keep duplicating it in each VM (and then have the storage array de-duplicate it)?
(**) For an application in need of network performance, why put it on a VM in the first place and then bypass the hypervisor for kernel-based performance?
© 2016 Cisco and/or its affiliates. All rights reserved. 92
Docker in OpenStack
• Havana
  Nova virt driver which integrates with the Docker REST API on the backend (nova-docker virt driver)
  Glance translator to integrate Docker images with Glance
• Icehouse
  Heat plugin for Docker (DockerInc::Docker::Container resource)
• Both options are still under development
© 2016 Cisco and/or its affiliates. All rights reserved. 93
Basics of Container Networking (on a VM or bare-metal host)
Minimally provides:
• IP connectivity in the container's network namespace
• IPAM, and network device creation (eth0)
• Route advertisement or host NAT for external connectivity
[Diagram: containers with eth0 interfaces attached to Linux/Windows OS networking (ensp0) on the host, which connects to the physical network]
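To ground those three bullets, here is a hedged sketch (Python driving standard iproute2 and iptables commands, run as root) that creates a network namespace, gives it an eth0-style veth device with an address, and adds host NAT for external connectivity. The names and the 10.0.0.0/24 range are assumptions for the example; container runtimes do the equivalent automatically.

```python
import subprocess

def sh(cmd: str) -> None:
    """Run a shell command, raising on failure (requires root)."""
    subprocess.run(cmd, shell=True, check=True)

# 1. A network namespace to stand in for the container.
sh("ip netns add demo")

# 2. Device creation: a veth pair, one end moved into the namespace as eth0.
sh("ip link add veth-host type veth peer name veth-demo")
sh("ip link set veth-demo netns demo")
sh("ip -n demo link set veth-demo name eth0")

# 3. IPAM: addresses on both ends, links up, default route via the host end.
sh("ip addr add 10.0.0.1/24 dev veth-host")
sh("ip link set veth-host up")
sh("ip -n demo addr add 10.0.0.2/24 dev eth0")
sh("ip -n demo link set eth0 up")
sh("ip -n demo link set lo up")
sh("ip -n demo route add default via 10.0.0.1")

# 4. External connectivity via host NAT (the alternative is route advertisement).
sh("sysctl -w net.ipv4.ip_forward=1")
sh("iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -j MASQUERADE")
```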
© 2016 Cisco and/or its affiliates. All rights reserved. 94
CNM (Container Network Model)
[Diagram: a container holds a sandbox (its network namespace) with interfaces eth0 and eth1; each interface is an endpoint attached to a network (e.g. Network Blue, Network Green)]
© 2016 Cisco and/or its affiliates. All rights reserved. 95
CNM (Container Network Model) - Details
• An endpoint is a container's interface into a network
• A network is a collection of arbitrary endpoints
• A container can have multiple endpoints (and therefore belong to multiple networks)
• CNM allows for the co-existence of multiple drivers, with each network managed by one driver
• Provides driver APIs for IPAM and endpoint creation/deletion
• IPAM driver APIs: Create/Delete Pool, Allocate/Free IP Address
• Network driver APIs: Network Create/Delete, Endpoint Create/Delete/Join/Leave
• Used by Docker Engine, Docker Swarm, and Docker Compose
• Also works with other schedulers that run standard Docker containers, e.g. Nomad or the Mesos Docker containerizer; see the sketch below
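The CNM concepts above map directly onto everyday Docker networking operations. As a hedged sketch (network and container names are assumptions), the Python snippet below creates two networks, starts a container on one, and then connects it to the second, giving the container two endpoints, one per network.

```python
import docker

client = docker.from_env()

# Two CNM "networks", each managed here by the built-in bridge driver.
blue = client.networks.create("net-blue", driver="bridge")
green = client.networks.create("net-green", driver="bridge")

# The container's sandbox starts with one endpoint on net-blue...
c = client.containers.run("alpine:3.18", "sleep 60", detach=True, network="net-blue")

# ...and connecting it to net-green adds a second endpoint (a second interface).
green.connect(c)

c.reload()
print(sorted(c.attrs["NetworkSettings"]["Networks"]))   # ['net-blue', 'net-green']

c.remove(force=True)
blue.remove()
green.remove()
```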
© 2016 Cisco and/or its affiliates. All rights reserved. 96
CNI (Container Network Interface)
[Diagram: the driver plumbs interfaces (eth0 … eth1) directly into the container's network namespace]
Differences (from CNM):
- Gives the driver freedom to manipulate the network namespace
- Provides the container ID and parameters to drivers
- Just two APIs:
  - Add container to network
  - Delete container from network
© 2016 Cisco and/or its affiliates. All rights reserved. 97
CNI (Container Network Interface) - Details
• Provides container create/delete events
• Provides access to the network namespace to the driver to plumb networking
• Provides the container ID (UUID) for which the network interface is being created
• No separate IPAM driver
  Container create returns the IPAM information along with other data
• Used by Kubernetes, i.e. supported by the various Kubernetes network plugins
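To make the two-call model concrete, here is a hedged sketch of a do-nothing CNI plugin in Python. Per the CNI specification, the runtime passes the command and container details in environment variables (CNI_COMMAND, CNI_CONTAINERID, CNI_NETNS, CNI_IFNAME) and the network configuration JSON on stdin, and the plugin prints a result JSON. The address returned here is a hard-coded assumption purely for illustration; a real plugin would allocate it and plumb the namespace.

```python
#!/usr/bin/env python3
"""Minimal illustrative CNI plugin: handles ADD and DEL, does no real plumbing."""
import json
import os
import sys

def main() -> None:
    command = os.environ.get("CNI_COMMAND", "")          # ADD / DEL / VERSION ...
    container_id = os.environ.get("CNI_CONTAINERID", "")
    netns = os.environ.get("CNI_NETNS", "")
    ifname = os.environ.get("CNI_IFNAME", "eth0")
    net_conf = json.load(sys.stdin)                      # network configuration JSON

    if command == "ADD":
        # A real plugin would create a veth pair, move one end into `netns`
        # as `ifname`, and allocate an address via its IPAM configuration.
        result = {
            "cniVersion": net_conf.get("cniVersion", "0.3.1"),
            "interfaces": [{"name": ifname, "sandbox": netns}],
            "ips": [{"version": "4", "address": "10.1.0.5/24",   # assumed address
                     "interface": 0}],
        }
        json.dump(result, sys.stdout)
    elif command == "DEL":
        # A real plugin would tear down the interface and release the address.
        pass
    else:
        json.dump({"code": 4, "msg": f"unsupported CNI_COMMAND {command!r} "
                                     f"for container {container_id}"}, sys.stdout)
        sys.exit(1)

if __name__ == "__main__":
    main()
```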
© 2016 Cisco and/or its affiliates. All rights reserved. 98
Self-Guided Hands-on Lab – Topology
• Two Linux VMs (on Mac/Windows/Linux), interconnected on two networks
• Self-paced: https://github.com/jainvipin/tutorial
[Diagram: each tutorial node runs containers C1, C2, … and has eth0 (management), eth1 and eth2 interfaces; the nodes are joined by a VLAN bridge network and a control/VXLAN IP-routed network]
© 2016 Cisco and/or its affiliates. All rights reserved. 99
Basic Container Networking – Hands-on Lab
• Default network drivers: null, host, bridge
• Running containers with the default 'bridge' driver
• Inspecting the container network and the container
• Peeking inside the container
• Reaching the outside world
[Diagram: a container (vanilla-c) with eth0 attached via a vethXXXX pair to the docker0 Linux bridge]
© 2016 Cisco and/or its affiliates. All rights reserved. 100
Networking with Scheduler Integration
• In very basic terms, a scheduler determines the best place to run an app
  The algorithm is selectable, and varies (e.g. pack a host before scheduling on another)
  Often takes into consideration the constraints of the application against available resources
  Supports a scale-out model for applications to grow/shrink
  Supports many features and is the substrate of agile application deployment
• Networking becomes more application-centric with the scheduler integrated
  Application tiers, their network connectivity and policies come and go with the apps
  Must integrate the association of apps to their policy and domain
  The network, policies, priority, etc. must move with the application
• Popular schedulers
  Docker's Swarm, Google's Kubernetes, Apache Mesos, HashiCorp's Nomad, etc.
© 2016 Cisco and/or its affiliates. All rights reserved. 101
Container Networking Challenges
1. Scale: 200-500 containers per host may not be unusual
   More endpoints, i.e. IPs
   More networks
   More of everything!
2. Speed: a container comes up in a second (many more simultaneously in a cluster)
   Automation is a MUST
   The network (IPAM, DNS, route advertisement) must be quick to provision
   And work at scale!
3. Layers of networking: container layer, VM layer, physical layer
   Challenges visibility: encap in encap in encap makes it obscure
   Makes monitoring/diagnostics difficult
   Reduces performance: processing at each layer, and encaps, reduce performance
   More orchestration layers to deal with (if present)
© 2016 Cisco and/or its affiliates. All rights reserved. 102
Container Networking Challenges, Cont…
4. Application centric (vs. infrastructure centric)
   Creating networks as applications need them, and disposing of them accordingly
   Must integrate with the application blueprint
   Keeping it easy to consume for the application
5. Shared resources – resource acquisition
   Ops policies to define the deployment structure
6. Hybrid cloud
   Consistency, security, connectivity
7. Security
   Tenancy, isolation, white-listing of specific ports
8. Telemetry and diagnostics
   Need to be real time, and must work at the required scale/speed
© 2016 Cisco and/or its affiliates. All rights reserved. 103
• The container industry is focused on creating the ability to define applications through Docker Compose, Kubernetes Pod definitions, etc.
• As applications move from development to production, there is a need to be able to define and enforce infrastructure operational policies
• Contiv is creating industry thought leadership around the need for infrastructure policies for containerized applications in a shared infrastructure
• Contiv provides a framework and implementation to address operational intent for infrastructure
Contiv: Enabling Infrastructure to Run Production Containerized Applications Better
© 2016 Cisco and/or its affiliates. All rights reserved. 104
Takeaways
1. Container networking is pluggable; there are two flavors
   - CNI and CNM, for the Kubernetes and Docker ecosystems respectively
2. Container networking is met with a new set of challenges
   - There are solutions to those problems
   - Some are still being addressed
3. Native connectivity brings better performance, visibility and scale
   - Layering may obscure visibility, and decrease scale and performance
4. Contiv networking provides a variety of container connectivity options
   - With native connectivity, it can provide scale, performance and visibility
   - It provides secure connectivity to groups of applications
© 2016 Cisco and/or its affiliates. All rights reserved. 105
Container References
1. CNI Specification
https://github.com/containernetworking/cni/blob/master/SPEC.md
2. CNM Design
https://github.com/docker/libnetwork/blob/master/docs/design.md
3. Contiv User Guide
http://docs.contiv.io
4. Contiv Networking Code
https://github.com/contiv/netplugin
5. Basic Networking Tutorial – Self Guided
https://github.com/jainvipin/tutorial
6. Contiv Policy Tutorial – Self Guided
https://github.com/jainvipin/libcompose/tree/deploy/deploy
7. Other Documentation:
https://docs.docker.com, http://docs.kubernetes.io