IBM System Networking For PureFlex Systems Nov 1, 2013

Networking in the PureFlex portfolio


1. IBM System Networking For PureFlex Systems (Nov 1, 2013)

2. System Networking: Flex System Chassis Architecture

3. Optimized, Automated and Integrated network architecture
- Fits within your existing and future environment.
- Extreme flexibility: multiple connectivity options today and more to come; pay for what you need today with Features on Demand (FoD).
- Highest performance: first 40Gb-capable Ethernet switch, first 16Gb-capable SAN switch, first 56Gb-capable InfiniBand FDR switch; up to 220Gb uplink bandwidth.

4. (Diagram) Data center traffic flows: roughly 25% of traffic goes to the campus network; in data centers there is a huge amount of transfer between servers because of multi-tier architectures and VM Motion, with up to 75% staying between servers and the SAN.

5. Performance against Cisco UCS
- IBM can deliver up to 77% lower server-to-server latency (see the footnote under slide 6).
- IBM communications stay within the chassis at less than 1 microsecond (us): Node to SI4093 to Node.
- Cisco communications must exit the chassis and go to the Fabric Interconnect, adding an extra step: the latency of the Fabric Extender (0.65 us) and Fabric Interconnect (2 us) must be considered. Node to FEX to UCS 6k to FEX to Node: latency 3.3 us = 0.65 us + 2 us + 0.65 us. Communication between blades requires traffic through the Nexus 6100.
- Management performance: IBM has separate management ports; Cisco UCS carries management traffic over the same ports as its data.
- Note: IBM would require twice the Ethernet modules in certain configurations.

6. PureFlex Intra-Chassis Provides Outstanding Results in Low-Latency Tests
- Test setup: 1024-byte messages, LLM on Linux; IBM Flex System x240 (2s/16c E5-2680, 2.7 GHz Sandy Bridge) versus a coalition competitor (2s/16c E5-2680, 2.7 GHz Sandy Bridge).
- Result: 2.3x higher throughput and 77% lower latency for PureFlex.

  Metric                                  | PureFlex System (Intel) | Coalition competitor
  Latency due to network (microseconds)   | 27.5                    | 63.0
  Messages per second                     | 18,803                  | 7,920
  Latency per message (microseconds)      | 9.4                     | 40.0

- "A 1-millisecond advantage in trading applications can be worth $100 million a year to a major brokerage firm."(1)

Footnote (slides 5 and 6): This is an IBM internal study of an IBM PureFlex System solution designed to replicate a typical IBM customer workload usage in the marketplace. The results were obtained under laboratory conditions, not in an actual customer environment. IBM's internal workload studies are not benchmark applications, nor are they based on any benchmark standard. As such, customer applications, differences in the stack deployed, and other system variations or testing conditions may produce different results and may vary based on actual configuration, applications, specific queries and other variables in a production environment. Prices, where applicable, are based on published US list prices for both IBM and competitor, and the cost calculation compares the cost per request for the 3-year life of the machine. 3-year total cost of acquisition comparisons are based on similar expected hardware, software, service & support offerings.

(1) "Wall Street's Quest To Process Data At The Speed Of Light", InformationWeek: http://www.informationweek.com/wall-streets-quest-to-process-data-at-th/199200297
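The latency comparison on slides 5 and 6 reduces to simple per-hop arithmetic. The sketch below is not from the deck; it just replays that arithmetic in code using the per-hop figures quoted above. The 0.9 us value for the single intra-chassis hop is an assumed placeholder consistent with the "less than 1 microsecond" claim, and the function names are mine.

```python
# Illustrative latency model for the two east-west paths described on slide 5.
# Per-hop numbers come from the slide (FEX 0.65 us, Fabric Interconnect 2 us);
# the 0.9 us figure for the single SI4093 hop is a placeholder consistent with
# the "less than 1 microsecond" claim, not a measured value.

FEX_US = 0.65           # Cisco Nexus fabric extender, per pass
FI_US = 2.0             # Cisco UCS fabric interconnect
SI4093_US = 0.9         # assumed intra-chassis switch hop (< 1 us per the slide)

def ucs_node_to_node_us() -> float:
    """Node -> FEX -> Fabric Interconnect -> FEX -> Node."""
    return FEX_US + FI_US + FEX_US

def pureflex_node_to_node_us() -> float:
    """Node -> SI4093 -> Node, staying inside the chassis."""
    return SI4093_US

if __name__ == "__main__":
    ucs = ucs_node_to_node_us()          # 3.3 us, matching the slide
    flex = pureflex_node_to_node_us()
    print(f"UCS path:      {ucs:.2f} us")
    print(f"PureFlex path: {flex:.2f} us")
    print(f"Reduction:     {100 * (1 - flex / ucs):.0f}%")
```

With these inputs the reduction works out to roughly 73%; the 77% figure on slide 6 comes from the measured per-message latencies (9.4 us versus 40.0 us).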
7. Four scalable switches enable high-speed connectivity: Ethernet (iSCSI), Fibre Channel and InfiniBand. Four high-performance scalable switch modules per chassis.

8. Flexible networking solution, allowing for the best price/performance
- IBM 10Gb switch: wired for up to 16 10Gb ports per node and twenty-two external ports.
- (Diagram) Each node's 4-port mezzanine card 1 connects through the midplane to switch bays 1 and 2, and mezzanine card 2 to switch bays 3 and 4; up to 4 KR ports per switch bay; each switch bay provides uplink ports to an upstream switch.

9. Compute nodes and mezzanine cards
- Single-width node: 2x Intel E5-2400 processors, 12x LP DIMMs, 2x I/O mezzanine cards (one to switch bays 1 and 2, one to bays 3 and 4), 2x hot-swap small-form-factor HDDs.
- Double-width node: 4x Intel E5-4600 processors, 48x LP DIMMs, 4x I/O mezzanine cards (to bays 1 and 2 and to bays 3 and 4), 2x hot-swap small-form-factor HDDs.

10. Internal cabling diagram (diagram only).

11. Naming convention (diagram only).

12. System Networking: Ethernet Adapters

13. Compute nodes and mezzanine cards: LOM NICs (LAN on Motherboard)
- 1GbE embedded adapter: some models of the x220 include an embedded 1Gb Ethernet controller (also known as LAN on Motherboard, LOM), based on the Broadcom BCM5718 dual-port Gigabit Ethernet controller; PCIe 2.0 x2 host bus interface; supports Wake on LAN, Serial over LAN and IPv6. Note: TCP/IP Offload Engine (TOE) is not supported.
- 10GbE embedded adapter: x2x models include two 10Gb Ethernet ports with an embedded 10Gb Virtual Fabric Ethernet LAN-on-motherboard (LOM) controller, based on the Emulex BladeEngine 3 (BE3), a single-chip, dual-port 10 Gigabit Ethernet controller; vNIC/UFP support; FCoE/iSCSI.
- Power compute nodes do NOT have any networking integrated on the system board.
- http://www.redbooks.ibm.com/abstracts/tips0885.html
- http://www.redbooks.ibm.com/abstracts/tips0860.html

14. IBM Flex System EN2024 4-port 1Gb Ethernet Adapter
- 4 physical 1GbE ports; two Broadcom 5718 Gigabit Ethernet ASICs.
- Reliable, low-cost solution for redundant 1GbE LAN connectivity.
- PCIe 2.0 x1 host interface with MSI/MSI-X.
- I/O virtualization features such as VMware NetQueue and Microsoft VMQ.
- Warranty: 1 year or matches the chassis.
- http://www.redbooks.ibm.com/abstracts/tips0861.html

15. IBM Flex System CN4054 10Gb Virtual Fabric Adapter (EN4054, CN4054, CN4054R)
- 4 physical 10GbE ports.
- UFP/vNIC: the CN4054 and CN4054R default to 4 virtual Ethernet ports per physical port (16 virtual Ethernet ports); the EN4054 does NOT support vNICs.
- The four physical ports can be configured as plain 10Gb NICs through the UEFI utility.
- Licenses are available to enable iSCSI or FCoE capabilities through IBM Features on Demand; the EN4054 does NOT support this upgrade.
- Warranty: 1 year or matches the chassis.
- http://www.redbooks.ibm.com/abstracts/tips0868.html
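Slide 15's virtual-port arithmetic can be spelled out with a short sketch. The vNIC naming below is hypothetical, and the even 2.5 Gb split per virtual port is only an illustration of the default 4-way carve-up of a 10Gb port; actual UFP/vNIC bandwidth allocation is configurable.

```python
# Enumerate the virtual NICs a 4-port CN4054 presents with the default
# configuration of 4 virtual ports per physical port (slide 15).
# The even 2.5 Gb split is illustrative; per-vNIC bandwidth is configurable.

PHYSICAL_PORTS = 4
VNICS_PER_PORT = 4
PORT_SPEED_GB = 10

def enumerate_vnics():
    vnics = []
    for port in range(1, PHYSICAL_PORTS + 1):
        for v in range(1, VNICS_PER_PORT + 1):
            vnics.append({
                "name": f"port{port}.vnic{v}",                   # hypothetical naming
                "bandwidth_gb": PORT_SPEED_GB / VNICS_PER_PORT,  # illustrative even split
            })
    return vnics

if __name__ == "__main__":
    vnics = enumerate_vnics()
    print(f"{len(vnics)} virtual Ethernet ports")                # 16, as on the slide
    for nic in vnics[:4]:
        print(nic["name"], nic["bandwidth_gb"], "Gb")
```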
16. IBM Flex System CN4058 8-port 10Gb Converged Network Adapter
- Power Systems only; 8 physical 10GbE ports.
- FCoE support; hardware protocol offloads for TCP/IP and FCoE.
- Does NOT support iSCSI hardware offload; does NOT support UFP/vNICs.
- The type of switch installed in the IBM Flex System Enterprise Chassis determines how many of the adapter's physical ports are usable.
- Warranty: 1 year or matches the chassis.
- http://www.redbooks.ibm.com/abstracts/tips0909.html

17. IBM Flex System EN4132 2-port 10Gb Ethernet Adapter
- System x nodes only; 2 physical 10GbE ports; based on Mellanox ConnectX-3 EN technology.
- Performance-driven server and storage applications in enterprise data centers: clustered databases, web infrastructure and high-frequency trading.
- RoCE (RDMA over Converged Ethernet); SR-IOV.
- Does NOT support UFP/vNIC.
- Warranty: 1 year or matches the chassis.
- Do not confuse this adapter with the EN4132 2-port 10Gb RoCE Adapter; they are two separate adapters.
- http://www.redbooks.ibm.com/abstracts/tips0873.html

18. IBM Flex System EN4132 2-port 10Gb RoCE Adapter
- Power Systems only; 2 physical 10GbE ports; based on Mellanox ConnectX-2 technology.
- RDMA over Converged Ethernet (RoCE) for low-latency applications: clustered DB2 databases, web infrastructure and high-frequency trading.
- Does NOT support UFP/vNIC.
- Warranty: 1 year or matches the chassis.
- Do not confuse this adapter with the EN4132 2-port 10Gb Ethernet Adapter; they are two separate adapters.
- http://www.redbooks.ibm.com/abstracts/tips0913.html

19. IBM Flex System EN6132 2-port 40Gb Ethernet Adapter
- System x nodes only; 2 physical 40GbE ports; based on Mellanox ConnectX-3 technology.
- RoCE (RDMA over Converged Ethernet) support; industry-standard SR-IOV virtualization.
- Does NOT support UFP/vNIC.
- Warranty: 1 year or matches the chassis.
- http://www.redbooks.ibm.com/Redbooks.nsf/RedbookAbstracts/tips0912.html

20. System Networking: Ethernet Switches

21. IBM Flex System EN4091 10Gb Ethernet Pass-thru
- Simple 1/10Gb pass-thru for seamless connectivity to upstream networks.
- Simple and low cost; unmanaged; able to auto-negotiate; seamless interoperability with other vendors' switches.
- Part number 88Y6043; 14 x 1/10Gb server ports; 14 x 1/10Gb uplink ports.
- Exceptional price/performance.
- Warranty: 1 year or matches the chassis.
- http://www.redbooks.ibm.com/abstracts/tips0865.html

22. IBM Flex System EN4091 10Gb Ethernet Pass-thru: internal and external ports (diagram: each of the 14 internal ports maps 1-to-1 to an external port).

23. Connecting 4 ports with pass-throughs, 1 of 3 (diagram: single-width node with an EN2024/CN4054 adapter; some switch bays empty, so some adapter ports are unconnected).

24. Connecting 4 ports with pass-throughs, 2 of 3 (diagram: single-width node with EN4091 pass-thru modules installed in the switch bays).

25. Connecting 4 ports with pass-throughs, 3 of 3 (diagram: double-width node).
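Before moving on to the switches, the adapter options from slides 13 to 19 can be condensed into a small lookup table. The sketch below only restates capabilities called out on those slides (port counts, UFP/vNIC and FCoE support); it is not an official compatibility matrix, and the filter helper is my own.

```python
# Adapter capabilities as stated on slides 13-19 (recap only, not an official
# compatibility matrix). platform=None means the slides above do not say.
ADAPTERS = {
    "EN2024":      {"platform": None,    "ports": 4, "speed_gb": 1,  "ufp_vnic": False, "fcoe": False},
    "CN4054/R":    {"platform": None,    "ports": 4, "speed_gb": 10, "ufp_vnic": True,  "fcoe": True},   # FCoE/iSCSI via FoD
    "EN4054":      {"platform": None,    "ports": 4, "speed_gb": 10, "ufp_vnic": False, "fcoe": False},
    "CN4058":      {"platform": "power", "ports": 8, "speed_gb": 10, "ufp_vnic": False, "fcoe": True},
    "EN4132 10Gb": {"platform": "x",     "ports": 2, "speed_gb": 10, "ufp_vnic": False, "fcoe": False},
    "EN4132 RoCE": {"platform": "power", "ports": 2, "speed_gb": 10, "ufp_vnic": False, "fcoe": False},
    "EN6132":      {"platform": "x",     "ports": 2, "speed_gb": 40, "ufp_vnic": False, "fcoe": False},
}

def adapters_with(**required):
    """Return adapter names whose recorded capabilities match the given filters."""
    return [name for name, caps in ADAPTERS.items()
            if all(caps.get(k) == v for k, v in required.items())]

if __name__ == "__main__":
    print(adapters_with(fcoe=True))                 # ['CN4054/R', 'CN4058']
    print(adapters_with(speed_gb=10, ufp_vnic=True))
```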
26. IBM Flex System EN2092 1Gb Ethernet Scalable Switch
- Scalable 1Gb Ethernet with 1/10Gb uplinks; Layer 2/3.
- Leadership: exceptional price/performance; investment protection (pay as you grow); VM-aware and VM mobility with VMready; seamless interoperability.
- Pay-as-you-grow scalability; optimized for performance; efficient network automation; enhanced virtualization intelligence; lower TCO.
- Base configuration: 14x 1Gb internal ports and 10x 1Gb external ports (part number 49Y4294).
- Upgrade 1: 28x 1Gb internal ports and 20x 1Gb external ports (part number 90Y3562).
- Upgrade 2: activates the four 10Gb external ports (part number 49Y4298).
- Upgrades can be applied in any order.
- Warranty: 1 year or matches the chassis.
- Recommended ToR switch: G8052 for multiple chassis of 1Gb connections, G8264 for multiple chassis of 10Gb connections.
- http://www.redbooks.ibm.com/abstracts/tips0861.html

27. Scale for bandwidth, ports or both (EN2092)
- Wired for up to two 1Gb ports per node, twenty 1Gb and four 10Gb external ports.
- Base switch: enables fourteen internal 1Gb ports (one to each server) and ten external 1Gb ports; supports the 2-port 1Gb LOM.
- Upgrade 1 via FoD: enables a second set of fourteen internal 1Gb ports (one to each server) and 10 additional external 1GbE ports, for a total of twenty 1GbE uplinks; supports 4-port 1Gb adapters.
- Upgrade 2 via FoD: enables four external 10Gb uplinks with SFP+ connectors; can be applied directly on the base switch.

28. Connecting 4 ports with switches, 1 of 2 (diagram: single-width node with an EN2024/CN4054 adapter; base switch ports only, Upgrade 1 ports not enabled).

29. Connecting 4 ports with switches, 2 of 2 (diagram: single-width node with an EN2024/CN4054 adapter; base and Upgrade 1 port sets enabled).

30. IBM Flex System Fabric EN4093/R 10Gb Scalable Switch
- Scalable 10Gb Ethernet switch with 10/40Gb uplinks; 1GbE management port.
- Upgrade 2 requires Upgrade 1. Base: 10x 10GbE SFP+ uplinks; Upgrade 1 adds 2x 40GbE QSFP+; Upgrade 2 adds 4x 10GbE uplinks.

  Total ports           | 10Gb to server | 10Gb uplinks | 40Gb uplinks
  Base system           | 14             | 10           | 0
  With Upgrade 1        | 28             | 10           | 2
  With Upgrades 1 and 2 | 42             | 14           | 2

  (These license tiers, and the CN4093's, are restated as a small port-count sketch after slide 44.)

- Pay-as-you-grow scalability; optimized for performance; efficient network automation; enhanced virtualization intelligence; lower TCO; seamless interoperability.
- Leadership: performance below 1 us latency and up to 1.28 Tbps; NAS, iSCSI or FCoE transit switch (CEE/DCB); stacking; vLAG (multi-chassis link aggregation); SPAR for multi-tenancy; Virtual Fabric and UFP carve up virtual pipes; seamless interoperability with other vendors.
- Warranty: 1 year or matches the chassis.
- Recommended ToR switch: G8264 for multiple chassis of 10Gb connections, G8316 for multiple chassis of 40Gb connections.
- http://www.redbooks.ibm.com/abstracts/tips0864.html

31. Scale for bandwidth, ports or both (EN4093)
- Wired for up to three 10Gb ports per node and twenty-two external ports.
- Base switch: enables fourteen internal 10Gb ports (one to each server) and ten external 10Gb ports; supports the 2-port 10Gb LOM and Virtual Fabric capability.
- First upgrade via FoD: enables a second set of fourteen internal 10Gb ports (one to each server) and two 40Gb ports; each 40Gb port can be used as four 10Gb ports; supports the 4-port Virtual Fabric adapter.
- Second upgrade via FoD: enables a third set of fourteen internal 10Gb ports (one to each server) and four external 10Gb ports; capable of supporting a six-port card.

32. IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch
- Converged 10Gb Ethernet with 10/40Gb Ethernet and 4/8Gb Fibre Channel uplinks; 1GbE management port.
- Upgrades can be applied in any order; Omni Ports do NOT support 1Gb.
- 12x 10GbE Omni Ports in total (6 in the base, 6 more with Upgrade 2); 2x 10Gb SFP+; Upgrade 1 adds 2x 40GbE QSFP+.

  Total ports              | 10Gb to server | SFP+ ports | Omni Ports | QSFP+ ports
  Base system (00D5823)    | 14             | 2          | 6          | 0
  Upgrade 1 (00D5845)      | 28             | 2          | 6          | 2
  Upgrade 2 (00D5847)      | 28             | 2          | 12         | 0
  Upgrades 1 and 2         | 42             | 2          | 12         | 2

- Pay-as-you-grow scalability; optimized for performance; designed for convergence; enhanced virtualization intelligence; lower TCO; seamless interoperability.
- Leadership: Omni Ports allow flexibility (10GbE or 4/8Gb FC); low latency, up to 1.28 Tbps; stacking; vLAG; Virtual Fabric and UFP carve up virtual pipes; seamless interoperability with other vendors.
- Warranty: 1 year or matches the chassis (includes software upgrades).
- http://www.redbooks.ibm.com/abstracts/TIPS0910.html

33. FCoE standards: FC-BB-5 versus FC-BB-6
- FC-BB-5 (current): all traffic flows through the FCF; the CEE switch routes at the MAC layer; the FCF does zoning enforcement and FC routing. The result is suboptimal data flow.
- FC-BB-6 (in development): the FCF control plane (cFCF) distributes zoning and routing information to FDFs; an FCoE Data-plane Forwarder (FDF) can do zoning enforcement and FC routing locally. The result is optimal data flow.

34. IBM Flex System Fabric CN4093 architecture (diagram: internal ports connect the FCoE initiators on the compute nodes; external ports include 2x 10GbE, 2x 40GbE, Omni Ports in Ethernet mode toward the LAN and FCoE targets, and Omni Ports in FC mode toward an FC switch and SAN targets, with the CN4093 acting as the FCF).

35. IBM Flex System Fabric SI4093 System Interconnect Module
- 10Gb Ethernet with 10/40Gb uplinks; 1GbE management port.
- Upgrade 2 requires Upgrade 1. Base: 10x 10GbE SFP+ uplinks; Upgrade 1 adds 2x 40GbE QSFP+; Upgrade 2 adds 4x 10GbE uplinks.

  Total ports           | 10Gb downlinks | 10Gb uplinks | 40Gb uplinks
  Base system           | 14             | 10           | 0
  With Upgrade 1        | 28             | 10           | 2
  With Upgrades 1 and 2 | 42             | 14           | 2

- Simple management; optimized for performance; pay-as-you-grow scalability; lowest TCO; easy interoperability; loop-free, no STP.
- Leadership: loop-free, no STP; the Flex chassis appears to the network as a regular big server; FCoE convergence as a transit switch; easy to deploy, simple connectivity; easy interoperability with other vendors' switches.
- Operating modes: (1) default transparent end-host mode; (2) VLAN-aware mode for multi-tenancy environments or environments requiring more control of VLAN Layer 2 forwarding, which also enables FCoE FIP Snooping Bridge support, enhancing local FCoE operation in the chassis.
- Alternative to pass-thru, Cisco FEX or HP Virtual Connect.
- Warranty matches the chassis.
- http://www.redbooks.ibm.com/Redbooks.nsf/RedbookAbstracts/tips1045.html

36. SI4093 tunnel mode, non-virtualized ToRs (diagram: internal ports INT1-INT5 grouped into pass-through domains 1-3, each mapped to external uplinks EXT1-EXT3 toward two ToR switches).

37. SI4093 tunnel mode, virtualized ToRs (diagram: the same pass-through domains with the two ToR switches virtualized via vLAG, stacking, vPC, VSS, etc.).
38. SI4093 VLAN-aware mode, non-virtualized ToRs (diagram: VLANs 100-600 carried from internal ports INT1-INT5 to the two ToR switches).

39. SI4093 VLAN-aware mode, virtualized ToRs (diagram: the same VLANs with the two ToR switches virtualized via vLAG, stacking, vPC, VSS, etc.).

40. SI4093 multi-tenancy (diagram).

41. IBM Flex System EN6131 40Gb Ethernet Switch
- All internal and external ports are 40Gb Ethernet: 14 internal and 18 external 40GbE ports.
- Low latency (less than 0.7 us node to node), high bandwidth, non-blocking design, Layer 2 only.
- More workloads per server without I/O bottlenecks; lower TCO; reduces task completion time and lowers the cost per operation.
- Performance to support clustered databases, parallel processing, transactional services and high-performance embedded I/O applications; ideal for high-speed trading, Web 2.0, virtualization and cloud computing.
- Low power consumption: less than 0.1 watts per Gbps.
- http://www.redbooks.ibm.com/abstracts/tips0911.html

42. IBM Flex System EN4023 Scalable Switch
- 10Gb Ethernet switch with 10/40Gb uplinks; Brocade VCS switch.
- Up to 42x 10Gb internal ports, 14x 10Gb and 2x 40Gb external ports.
- Dynamic Ports on Demand (DPOD): ports are licensed as they come online; upgrades can be applied in any order.
- Does NOT support 1Gb on external ports; no FCoE.
- GA December 6, 2013 (System x and Power).
- High bandwidth; Brocade Virtual Cluster Switching (VCS) fabric.

  Total ports                                                | Base license | Upgrade 1 | Upgrade 2
  24x 10GbE ports (internal & external)                      | yes          | -         | -
  40x 10GbE ports (internal & external)                      | yes          | -         | yes
  40x 10GbE ports + 2x external 40GbE QSFP+ ports            | yes          | yes       | -
  56x 10GbE ports + 2x external 40GbE QSFP+ ports            | yes          | yes       | yes

- Positioning: customers looking to connect Flex System chassis to existing Brocade networks; reduces management and interoperability effort when connecting to a Brocade network.
- http://www.redbooks.ibm.com/abstracts/TIPS1070.html

43. Brocade VCS Fabric Technology (diagram).

44. Cisco Nexus B22 Fabric Extender for IBM Flex System
- Transparent connectivity; reduced management; acts as a remote line card; requires an existing Nexus ToR.
- Offerings: Cisco Nexus B22 Fabric Extender for IBM Flex System; Cisco Nexus B22 Fabric Extender with FET bundle for IBM Flex System.
- 14x 10Gb internal ports and 8x 10Gb uplinks to connect to a Nexus 5K or 6K.
- Does NOT support 1GbE on external ports.
- GA December 6, 2013 (System x and Power).
- Supports the Ethernet, iSCSI and FCoE protocols; managed via a Nexus 5000 or 6000 top-of-rack switch.
- Positioning: customers looking to connect Flex System chassis to their Cisco Nexus network; reduces management and interoperability effort when connecting to a Cisco Nexus network.
- http://www.redbooks.ibm.com/abstracts/tips1086.html
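The scalable switches above all follow the same pay-as-you-grow pattern: a base port count plus license upgrades (Features on Demand, or Dynamic Ports on Demand for the EN4023) that unlock further internal and external ports. The sketch below restates the EN4093/R (slide 30) and CN4093 (slide 32) tiers as data; it is illustrative only, not an IBM tool.

```python
# Port counts per license tier for the EN4093/R (slide 30) and CN4093 (slide 32).
# Purely a restatement of the slide tables in data form.
EN4093_TIERS = {
    # tier: (10Gb server ports, 10Gb uplinks, 40Gb uplinks)
    "base":           (14, 10, 0),
    "base+upg1":      (28, 10, 2),   # Upgrade 2 requires Upgrade 1
    "base+upg1+upg2": (42, 14, 2),
}

CN4093_TIERS = {
    # tier: (10Gb server ports, SFP+ uplinks, Omni Ports, QSFP+ uplinks)
    "base":           (14, 2, 6, 0),
    "base+upg1":      (28, 2, 6, 2),
    "base+upg2":      (28, 2, 12, 0),  # CN4093 upgrades can be applied in any order
    "base+upg1+upg2": (42, 2, 12, 2),
}

def total_ports(tiers: dict) -> dict:
    """Sum every port column per tier, just to show how the counts grow."""
    return {tier: sum(counts) for tier, counts in tiers.items()}

if __name__ == "__main__":
    print(total_ports(EN4093_TIERS))   # {'base': 24, 'base+upg1': 40, 'base+upg1+upg2': 58}
    print(total_ports(CN4093_TIERS))
```

The same pattern covers the EN2092 (slide 26), SI4093 (slide 35) and EN4023 (slide 42); only the per-tier numbers differ.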
45. B22 is freedom of choice, but things to consider
- The B22 has no network integration with the Flex System Manager (FSM).
- Clients wanting the lowest latency for server-to-server communications within the chassis: the SI4093/EN4093 provide over two times lower latency.
- Clients considering the x222 compute nodes: the B22 is not supported.
- Clients wanting to use 4- or 8-port adapters: the B22 can only use 2 ports.
- When clients want separate Ethernet and FC out of the chassis (for example, clients with a Brocade SAN; Brocade has about 2/3 of the FC SAN market): CN4093, up to 25% less expensive than two Ethernet modules plus two FC modules and two separate adapters.
- When clients want flexibility or advanced networking capability: EN4093R, with Layer 2 or Layer 3 mode for clients that want networking features, OpenFlow mode for clients considering Software Defined Networking today or in the future, Easy Connect mode (similar to the SI4093 but allowing a later change to other modes), and other enhanced features such as stacking and virtualization.
- When clients want 40Gb Ethernet: EN6131; Cisco cannot match this capability today.

46. Required upstream FEX parent
- Nexus 5548, Nexus 5596 or Nexus 6004.
- Supported levels: Cisco Nexus 5500 Series switch running Cisco NX-OS Software Release 6.0(2)N2(1a) or later; Cisco Nexus 6000 Series switch running Cisco NX-OS Software Release 6.0(2)N2(1a) or later; CMM running firmware GA 4.1.

47. Cisco Nexus and FEX
- Source: http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/layer2/521_n1_2/b_5k_Layer2_Config_521N12_chapter_010000.html#con_1197054

48. List prices as of 16 Oct 2013 (transceivers and/or upgrades NOT included)

  I/O module                         | Part number | List price
  EN4091 (pass-through)              | 88Y6043     | EUR 4.291,00
  EN2092 (1GbE switch)               | 49Y4294     | EUR 1.864,00
  EN4093R (10Gb switch)              | 95Y3309     | EUR 11.365,00
  CN4093 (10Gb converged switch)     | 00D5823     | EUR 16.565,00
  SI4093 (10Gb system interconnect)  | 95Y3313     | EUR 8.606,00
  EN6131 (40Gb Ethernet switch)      | 90Y9346     | EUR 22.984,00
  EN4023 (10Gb Brocade switch)       | 94Y5212     | EUR 12.341,00
  Nexus B22 (Cisco fabric extender)  | 94Y5350     | EUR 9.059,00

  (A rough price-per-port calculation based on these figures follows slide 51.)

49. System Networking: Switch Positioning

50. High-level positioning of the Ethernet portfolio
- Lead offerings (differentiation and value):
  - SI4093: low-cost, simple connectivity; 10Gb Ethernet, iSCSI or FCoE; 40Gb uplinks.
  - EN4093R: extreme flexibility; 10Gb Ethernet, iSCSI or FCoE; modes: L2, L3, Easy Connect and OpenFlow for SDN.
  - CN4093: convergence (FCoE + FC); break out Ethernet and FC in the chassis; 10Gb Ethernet, iSCSI or FCoE; ideal for the V7000 storage node.
- Specific needs:
  - EN2092: clients requesting 1Gb Ethernet and iSCSI, with 10Gb uplinks.
  - EN6131 (40GbE): clients needing 40Gb Ethernet end to end.
  - Cisco B22 FEX: clients that are only open to Cisco.
  - EN4023 (Brocade): clients that have bought into the Brocade VCS approach.
  - EN4091 (pass-through): clients requesting low module cost; 1-to-1 10Gb connections; no competitive advantage.
51. Ethernet market positioning (IBM Ethernet only)

  IBM                               | Description                                      | HP                                                                                  | Dell                                                            | Cisco UCS
  EN2092 1Gb Ethernet Switch        | 1Gb only / 1Gb with 10Gb uplinks                 | HP 6125G, HP 6125G/XG, ProCurve 6120G/XG, 1/10 Virtual Connect, Cisco 3020 / 3120G / 3120X | PowerConnect M6220 / M6348, Cisco 3032 / 3130G / 3130X          | Not available
  SI4093 System Interconnect Module | 10Gb interconnect (better option than pass-thru) | Virtual Connect Flex-10 / Flex-10/10D                                               | PowerEdge M I/O Aggregator, Cisco Nexus B22DELL Fabric Extender | UCS 2104XP / 2208XP Fabric Extender and Fabric Interconnect
  EN4093R 10GbE Switch              | All 10Gb                                         | ProCurve 6120XG                                                                     | PowerConnect M8024-k, Dell Force10 MXL                          | Not available
  CN4093 10GbE Converged Switch     | FC/FCoE                                          | HP Virtual Connect FlexFabric                                                       | Dell M8428-k Converged 10GbE Switch                             | Not available
  Cisco Nexus B22 FEX               | Cisco FEX                                        | Cisco Nexus B22 Fabric Extender for HP BladeSystem                                  | Cisco Nexus B22 Fabric Extender for Dell Blade                  | Not available
  EN4023 10Gb Switch (Brocade)      | Brocade VCS                                      | Not available                                                                       | Not available                                                   | Not available
  EN6131 40GbE Switch (Mellanox)    | 40Gb Ethernet                                    | Mellanox SX1018HP Ethernet Blade Switch                                             | Not available                                                   | Not available
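As a closing back-of-envelope exercise, the list prices on slide 48 can be combined with the base-license port counts quoted on the product slides to give a rough price-per-port view. This is illustrative only: FoD/DPOD upgrades, transceivers and cables are excluded (as slide 48 notes), the prices are the quoted October 2013 EUR list prices, and the comparison says nothing about feature differences.

```python
# Rough EUR-per-port comparison using the base list prices on slide 48 and the
# base-license port counts quoted on the product slides (internal + external).
# Back-of-envelope only: upgrades, transceivers and cables are not included.
MODULES = {
    # name: (base list price in EUR, base-license ports)
    "EN4091 pass-thru":        (4291.0,  28),   # 14 server + 14 uplink (slide 21)
    "EN4093R 10Gb switch":     (11365.0, 24),   # 14 internal + 10 SFP+ (slide 30)
    "CN4093 converged switch": (16565.0, 22),   # 14 internal + 2 SFP+ + 6 Omni (slide 32)
    "SI4093 interconnect":     (8606.0,  24),   # 14 downlinks + 10 uplinks (slide 35)
    "EN4023 (Brocade)":        (12341.0, 24),   # 24 ports in the base license (slide 42)
    "Nexus B22 FEX":           (9059.0,  22),   # 14 internal + 8 uplinks (slide 44)
}

for name, (price_eur, ports) in sorted(MODULES.items(), key=lambda kv: kv[1][0] / kv[1][1]):
    print(f"{name:26s} {price_eur / ports:8.0f} EUR/port ({ports} base ports)")
```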