CONFIDENTIAL
Server and Storage Connectivity Solutions
April 2009
© 2009 MELLANOX TECHNOLOGIES - CONFIDENTIAL - 2
Efficient Solutions for Efficient Computing
Cloud Computing
Enterprise Data Center
High-Performance Computing

Leading Connectivity Solution Provider for Servers and Storage
Leading End-to-End Data Center Products
Adapter ICs & Cards
• Dual-port 10/20/40Gb/s InfiniBand, 10GigE with FCoE & Data Center Ethernet

Switch ICs & Systems – InfiniScale® IV
• 36-port 40Gb/s switch silicon device
• 36 to 324-port 40Gb/s InfiniBand switches

Gateway ICs & Systems
• 10/20/40G InfiniBand or 10GigE to 10GigE and/or 2/4/8G Fibre Channel gateway

Cables
• Robust active and passive cables
• Supporting data rates up to 40Gb/s

Connecting blade & rack servers, switches, gateways, and storage
InfiniBand Leadership and Market Expansion
InfiniBand market and performance leader
• First to market with 20Gb/s and 40Gb/s adapters and switches
– Mature, 4th-generation silicon and software
• Strong industry adoption of 40Gb/s InfiniBand – ~34% of 4Q 2008 revenue
• Roadmap to 80Gb/s in 2010

Expansion into high transaction processing and virtualization
• Cloud computing, Oracle database, VMware I/O consolidation
• Data distribution, algorithmic trading for financial services

Expansion into commercial HPC
• Automotive, digital media, EDA, oil & gas, and simulation
10 Gigabit Ethernet Solutions Leadership
Ethernet leadership
• First to market with dual-port PCIe Gen2 10GigE adapter
• First to market with 10GigE with FCoE with hardware offload
– Awarded “Best of Interop” 2008

Industry-wide acceptance and certification
• Multiple design wins & deployments
– Servers, LAN on Motherboard (LOM), and storage systems
• VMware Virtual Infrastructure 3.5
• Citrix XenServer 4.1 in-the-box support
• Windows Server 2003 & 2008, Red Hat 5, SLES 11
Maximizing Productivity Since 2001
Shanghai Supercomputer Center

China 863 Grid program
• Biggest government project in the Chinese IT industry

Dawning 5000A supercomputer
• 1920-node Dawning blade system, 180.6 TFlops, ~80% efficiency
• Highest-ranked Windows HPC Server 2008 based system
• Mellanox ConnectX and switch based systems
• Delivering highest scalability for Windows based clusters
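The ~80% efficiency figure above is Linpack efficiency, i.e. measured Rmax divided by theoretical Rpeak. A quick sketch of the arithmetic (the implied Rpeak is derived here, not quoted on the slide):

```python
# Linpack efficiency = Rmax (measured) / Rpeak (theoretical peak).
# Rmax and efficiency are from the slide; Rpeak is implied.
rmax_tflops = 180.6
efficiency = 0.80
rpeak_tflops = rmax_tflops / efficiency  # implied theoretical peak

print(f"implied Rpeak ≈ {rpeak_tflops:.1f} TFlops")
```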
Roadrunner – The First Petaflop System

Largest supercomputer in the world
• Los Alamos National Lab, #1 on the June 2008 Top500 list
– Nearly 3x faster than the leading contenders on the Nov 2007 list
• Usage: national nuclear weapons, astronomy, human genome science, and climate change

Breaking through the “petaflop barrier”
• More than 1,000 trillion operations per second
• 12,960 CPUs, 3,456 tri-blade units
• Mellanox ConnectX 20Gb/s InfiniBand adapters
• Mellanox InfiniScale III 20Gb/s switches

Mellanox InfiniBand is the only scalable high-performance solution for petascale computing
• Scalability, efficiency, performance
Virginia Tech – 40Gb/s InfiniBand QDR System

Center for High-End Computing Systems (CHECS)
• CHECS research activities are the foundation for the development of next-generation, power-aware high-end computing resources

Mellanox end-to-end 40Gb/s solution
• Mellanox 40Gb/s – the only 40Gb/s technology on the Top500 list

324 Apple Mac Pro servers
• Total of 2592 Intel quad-core CPU cores
• Energy-efficient 22.3 TFlops system

“Unlike most of the clusters I have ever used, we have never had a Linpack run failure with this cluster, not one.”
– Dr. Srinidhi Varadarajan
Mellanox InfiniBand-Accelerated HP Oracle Database Machine

Mellanox 20Gb/s InfiniBand-accelerated rack servers and native InfiniBand Exadata Storage Servers with Oracle 11g
• Solves the I/O bottleneck between database servers and storage servers

At least 10X Oracle data warehousing query performance
• Faster access to critical business information

“Oracle Exadata outperforms anything we’ve tested to date by 10 to 15 times. This product flat-out screams.”
– Walt Litzenberger, Director Enterprise Database Systems, CME Group (World’s Largest Futures Exchange)
InfiniBand HCA Silicon and Cards
Performance-driven architecture
• MPI latency <1µs; 6.6GB/s bidirectional bandwidth with 40Gb/s InfiniBand
• MPI message rate of >40 million/sec
Superior real application performance• Scalability, efficiency, productivity
[Charts: Mellanox ConnectX MPI latency (µsec) vs. number of CPU cores/processes, showing multi-core scaling on 8-core and 16-core systems]
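A back-of-envelope check of the 6.6GB/s bidirectional figure, assuming 4x QDR signaling (4 lanes at 10Gb/s) with 8b/10b line encoding; the encoding assumption is ours, not stated on the slide:

```python
# Theoretical payload bandwidth of a 4x QDR InfiniBand link,
# assuming 8b/10b line encoding on a 40 Gb/s signaling rate.
signal_rate_gbps = 40                        # 4 lanes x 10 Gb/s
data_rate_gbps = signal_rate_gbps * 8 / 10   # 32 Gb/s after 8b/10b
per_direction_gbs = data_rate_gbps / 8       # 4.0 GB/s each way
peak_bidir_gbs = 2 * per_direction_gbs       # 8.0 GB/s theoretical peak

measured_gbs = 6.6  # figure quoted on the slide
print(f"{measured_gbs} GB/s of a {peak_bidir_gbs:.0f} GB/s "
      f"theoretical bidirectional peak")
```

The measured number lands at roughly 82% of the encoded-data peak, which is plausible once protocol headers and PCIe overhead are accounted for.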
ConnectX Ethernet Benefits
Optimized for cost, power, board space
• Single chip with integrated PHYs
Highest bandwidth
• Two line-rate 10GigE ports over PCIe 2.0
HW-based virtualization
• For native OS performance
• Better resource utilization
Network convergence
• Converged Enhanced Ethernet (CEE)
• Fibre Channel over Ethernet (FCoE)
• Low Latency Ethernet (LLE)
• Efficient RDMA
• iSCSI acceleration
– Through OS-compatible stateless offloads
[Chart: TCP bandwidth (Gb/s) vs. message size for 1, 2, 4, 8, and 16 streams – SLES 10, iPERF, 8 cores @ 3.2GHz, PCIe Gen2 – line rate from 128B onwards]

[Chart: TCP request/response latency (µsec) vs. message size from 1B to 32KB at 1500 and 9600 MTU – SLES 10, NetPERF TCP RR, 8 cores @ 3.2GHz]
ConnectX Virtual Protocol Interconnect
Applications: App1, App2, App3, App4 … AppX

Consolidated application programming interface

Protocols
• Networking: TCP/IP/UDP, Sockets
• Storage: NFS, CIFS, iSCSI, NFS-RDMA, SRP, iSER, Fibre Channel, Clustered
• Clustering: MPI, DAPL, RDS, Sockets
• Management: SNMP, SMI-S (OpenView, Tivoli, BMC, Computer Associates)

Acceleration engines: networking, clustering, storage, RDMA, virtualization

Fabrics: 10/20/40Gb/s InfiniBand | 10GigE Data Center Ethernet (LLE)

Any protocol over any convergence fabric
40Gb/s InfiniBand Switch Systems
Scalable switch architecture
• DDR (20Gb/s) and QDR (40Gb/s)
• Latency as low as 100ns
• Adaptive routing, congestion management, QoS
• Multiple subnets, mirroring

MTS3600
• 1U, 36 QSFP ports
• Up to 2.88Tb/s switching capacity

MTS3610
• 19U, 18-slot chassis
• 18 QSFP ports per switch blade
• Up to 25.9Tb/s switching capacity

MTS3630
• 648-port chassis
• Up to 51.8Tb/s switching capacity

Accelerating QDR Deployment
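The quoted switching-capacity figures follow from port count times port rate times two directions (full duplex). A sketch of the arithmetic (port counts are from the slides; the helper name is ours):

```python
# Aggregate full-duplex switching capacity: ports x 40 Gb/s x 2.
def switching_capacity_tbps(ports, port_rate_gbps=40):
    """Aggregate full-duplex switching capacity in Tb/s."""
    return ports * port_rate_gbps * 2 / 1000

# MTS3610 has 18 slots x 18 QSFP ports per blade = 324 ports
for name, ports in [("MTS3600", 36), ("MTS3610", 18 * 18), ("MTS3630", 648)]:
    print(f"{name}: {ports} ports -> {switching_capacity_tbps(ports):.2f} Tb/s")
```

This reproduces 2.88, 25.92, and 51.84 Tb/s, matching the 2.88/25.9/51.8 figures above.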
BridgeX Enables True IO Unification
Cost-effective bridging
• InfiniBand → Ethernet
• InfiniBand → Fibre Channel
• Ethernet → Fibre Channel
Protocol encapsulation
• No termination
Full wire speed, low power
Simplicity, scalability and flexibility
Efficient High Performance Solutions
[Diagram: servers with adapters, switches and bridges, and storage on a 40Gb/s network carrying InfiniBand, Eth over IB, FC over IB, and FC over Eth*; bridging IB to Eth, IB to FC, and Eth to FC; link types: 40G InfiniBand / FCoIB, 10G Ethernet / FCoE, 8G Fibre Channel; attached Ethernet, FC, and IB storage]

* via ecosystem products
Coming to Theaters…
• Adaptive routing and static routing
• Congestion control
• Virtual secured subnets
• 80Gb/s InfiniBand
• MPI offloads

HS 2:10 means 10 links with 2-to-1 oversubscription
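One possible reading of the oversubscription footnote, as a quick sketch; the helper function and the downlink count are our illustrative assumptions, not from the slide:

```python
# Hypothetical sketch: with D downlinks sharing U uplinks, the
# oversubscription ratio is D/U (2:1 means twice as much edge
# bandwidth as uplink bandwidth).
def oversubscription_ratio(downlinks, uplinks):
    return downlinks / uplinks

# 10 uplinks shared at 2:1 would imply 20 downlinks feeding them
assert oversubscription_ratio(downlinks=20, uplinks=10) == 2.0
```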
Enabling Energy Efficiency and Cost Savings
Bottom line benefits for IT*:
• TCO: 50% reduction
• Energy costs: 67% reduction
• Infrastructure: 62% saving

Complete scalable I/O consolidation solutions – adapters, switches, and bridges connecting servers and storage
• Virtualization | One Wire VPI | High Performance
• 40G InfiniBand | 10G Ethernet | 8G Fibre Channel

* Based on end-user testimonials
Energy Efficiency and Increased Productivity
Bottom line benefits for IT*:
• TCO: 50% reduction
• Performance: 100% increase
• Infrastructure: 62% saving

Complete scalable I/O consolidation solutions – adapters, switches, and bridges connecting servers and storage
• Virtualization | One Wire VPI | High Performance
• 40G InfiniBand | 10G Ethernet | 8G Fibre Channel

* Based on end-user testimonials
Thank You
www.mellanox.com