Local Edition: Datacenter Fabric Futures. Loy Evans, DC Consulting Systems Engineer, Commercial. [email protected], @loyevans

1. Local Edition: Datacenter Fabric Futures. Loy Evans, DC Consulting Systems Engineer, Commercial. [email protected], @loyevans

2. Session Objectives: Datacenter Fabric Futures. Data centers are quickly making significant transitions to new technology. This session is designed to help you learn what these transitions are, how Cisco can help you enable them, and how applications will be the central focus for data center technology in the future. Understand how a spine-leaf architecture improves data center communications, speeds the adoption of 10/40/100G networking, and positions you for the next great networking technical and management innovation called Application Centric Infrastructure (ACI).

3. Agenda: Next Generation DC Technology Primer (40G Adoption); The Revolution: Application Centric Infrastructure (Application Centric Infrastructure Intro, Fabric Elements, Services & Hypervisor Integration, Application Policy Infrastructure Controller, Integration & Migration); Summary; Q&A.

4. Spine/Leaf vs. 3-Tier Model: why? In the three-tier model, loss of a single aggregation box means loss of 50% of the capacity in the pod; east-west traffic dominates in the data center; spine/leaf provides non-blocking links for optimal traffic and flow completion times, and a more than 2x increase in network availability. [Diagram: a traditional spanning-tree based network with blocked links, 4 pods, and 16:1 / 8:1 / 2:1 oversubscription vs. a fully non-blocking spine-leaf based network; both topologies are drawn with 2,048 servers (8 and 64 access switches).]

5. Why Spine-Leaf Design? Pay-as-you-grow model. Need more host ports? Add a leaf: 384 ports (4 x 96 x 10G, 3,840 Gbps total) becomes 480 ports (5 x 96 x 10G, 4,800 Gbps total). Need even more host ports? Add another leaf: 576 ports (6 x 96 x 10G, 5,760 Gbps total). To speed up flow completion times, add more backplane and spread load across more spines, lowering per-spine utilization. Lower FCT = faster applications. (* FCT = Flow Completion Time.) [Diagram: 10G host ports and 40G fabric ports, showing per-spine utilization and FCT as leaves and spines are added.] A small capacity sketch of this arithmetic follows slide 9 below.

6. [Diagram: a spine/leaf DC fabric with Hosts 1-7 behaves like one large non-blocking switch.]

7. [Diagram: the same spine/leaf-as-one-big-switch analogy drawn as a modular chassis, with leaves as line cards (LC) and spines as fabric modules (FM).]

8. [Diagram: the spine/leaf DC fabric compared with a large non-blocking output-queued switch, the theoretical ideal but not practical.]

9. Agenda (repeated from slide 3).
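The pay-as-you-grow numbers on slide 5 are simple arithmetic, and a short script makes the two scaling levers explicit. This is a minimal sketch only: the 96 x 10G host ports per leaf and the 40G fabric ports come from the slide, but the spine count, the number of 40G uplinks from each leaf to each spine, and the function name leaf_spine_capacity are assumptions added for illustration, not figures from the deck.

```python
# Minimal sketch of the slide-5 "pay as you grow" arithmetic.
# Assumed (not from the deck): 6 spines, one 40G uplink from each leaf to each spine.
def leaf_spine_capacity(num_leaves, num_spines,
                        host_ports_per_leaf=96, host_gbps=10,
                        uplinks_per_spine=1, fabric_gbps=40):
    host_ports = num_leaves * host_ports_per_leaf
    host_bw = host_ports * host_gbps                      # total host-facing Gbps
    fabric_bw_per_leaf = num_spines * uplinks_per_spine * fabric_gbps
    oversub = (host_ports_per_leaf * host_gbps) / fabric_bw_per_leaf
    return host_ports, host_bw, oversub

# Adding leaves grows host ports linearly: 4 -> 384 ports / 3,840 Gbps,
# 5 -> 480 / 4,800, 6 -> 576 / 5,760 (the totals quoted on the slide).
for leaves in (4, 5, 6):
    ports, bw, oversub = leaf_spine_capacity(leaves, num_spines=6)
    print(f"{leaves} leaves: {ports} x 10G host ports, {bw} Gbps total, "
          f"{oversub:.1f}:1 oversubscription per leaf")

# Adding spines adds no host ports, but it adds fabric bandwidth per leaf and
# lowers oversubscription, which is the slide's lever for better FCT.
for spines in (4, 6, 8):
    _, _, oversub = leaf_spine_capacity(6, num_spines=spines)
    print(f"{spines} spines: {oversub:.1f}:1 oversubscription per leaf")
```

The two loops separate the deck's two questions: leaves answer "need more host ports?", while spines answer "need faster flow completion?".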
10. Impact of Link Speed: the Drive Past 10G. [Diagram: a leaf with 20 x 10Gbps downlinks and 200 Gbps of uplink capacity built three ways: 20 x 10Gbps, 5 x 40Gbps, or 2 x 100Gbps uplinks.]

11. Statistical Probabilities. Intuition: higher-speed links improve ECMP efficiency. With 11 x 10Gbps flows (55% load), the probability of 100% throughput is 3.27% across 20 x 10Gbps uplinks versus 99.95% across 2 x 100Gbps uplinks. (A small probability sketch appears after slide 13 below.)

12. Impact of Link Speed on Flow Completion Times (lower FCT is better). [Chart: average FCT, normalized to optimal, vs. load (%) for large (10MB) background flows on an OQ switch and on 20 x 10Gbps, 5 x 40Gbps, and 2 x 100Gbps fabrics.] A 40/100Gbps fabric delivers roughly the same FCT as a non-blocking switch, while 10Gbps fabric links are up to 40% worse than 40/100G. Flow completion is dependent on queuing and latency; 40G is not just about the bandwidth, it's about latency. References: http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6627738, http://simula.stanford.edu/~alizade/, http://www.hoti.org/hoti21/slides/Alizadeh.pdf

13. 40G BiDi Optics Preserve Existing 10G Cabling. Assumption: a single-pair run of MMF fiber costs $200 US; component prices are estimated list prices (SFP-10G-SR $995, QSFP-40G-SR4 $2,995, QSFP-40G-SR-BD $1,095). Per-link cost over an OM4 fiber plant:

   10G, SFP-10G-SR over LC duplex MMF (one used fiber pair):       $995 + $995 + $200            = $2,190
   40G, QSFP-40G-SR4 over MPO (multiple used fiber pairs):         $2,995 + $2,995 + $600 + $800 = $7,390
   40G BiDi, QSFP-40G-SR-BD over LC duplex MMF (one fiber pair):   $1,095 + $1,095 + $200        = $2,390

   The BiDi optic runs 40G over the same single fiber pair and LC patch cords used for 10G, so the existing 10G cabling plant is preserved.
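The ECMP probabilities on slide 11 can be reproduced with a small combinatorial model. This is a sketch under stated assumptions, not anything taken from the deck: each flow is assumed to be hashed independently and uniformly onto one uplink, and the fabric delivers 100% throughput only if the flows hashed to every link fit within that link's capacity; the function name prob_full_throughput is made up for illustration. Under that model the result is about 3.27% for 11 x 10G flows over 20 x 10G uplinks and about 99.9% over 2 x 100G uplinks, matching the slide's 3.27% and close to its 99.95%.

```python
from functools import lru_cache
from math import comb

def prob_full_throughput(num_flows, flow_gbps, num_links, link_gbps):
    """Probability that every flow gets its full rate when each flow is hashed
    independently and uniformly onto one of num_links equal-speed uplinks."""
    cap = link_gbps // flow_gbps  # max flows one link can carry at full rate

    @lru_cache(maxsize=None)
    def ways(flows_left, links_left):
        # Count assignments of distinguishable flows onto the remaining links
        # such that no link receives more than `cap` flows.
        if links_left == 0:
            return 1 if flows_left == 0 else 0
        return sum(comb(flows_left, k) * ways(flows_left - k, links_left - 1)
                   for k in range(min(cap, flows_left) + 1))

    return ways(num_flows, num_links) / num_links ** num_flows

# 11 x 10Gbps flows = 110G of demand, 55% of 200G aggregate uplink capacity.
print(prob_full_throughput(11, 10, 20, 10))   # ~0.0327  (20 x 10G uplinks)
print(prob_full_throughput(11, 10, 2, 100))   # ~0.9990  (2 x 100G uplinks)
```

The exact two-link figure depends on how the corner case of every flow hashing to the same 100G link is counted, but the contrast the slide draws holds either way: with 10G uplinks a hash collision between two 10G flows is nearly certain, while with 100G uplinks the flows almost always fit.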