www.openfabrics.org
The NE010 iWARP Adapter

Gary Montry, Senior Scientist
+1-512-493-3241 | [email protected]
Today's Data Center

[Diagram: three separate fabrics, each with its own adapter and switch, all attached to the applications:
• Networking: a LAN (Ethernet) with a networking adapter, connecting users and NAS
• Block storage: a SAN (Block Storage, Fibre Channel) with a block storage adapter
• Clustering: Myrinet, Quadrics, InfiniBand, etc., with a clustering adapter]
Overhead & Latency in Networking

[Diagram: the path of a standard Ethernet TCP/IP packet. An I/O command issued by the application crosses from user space (application, I/O library) into the kernel (OS, device driver) and down to the I/O adapter in hardware, with TCP/IP transport processing in software and intermediate copies through app, OS, driver, and adapter buffers on both the client and the server side.]

Sources of CPU overhead (100% total):
• Application-to-OS context switches: 40%
• Transport (TCP/IP) processing: 40%
• Intermediate buffer copies: 20%

The iWARP* solution removes each source:
• Transport (TCP) offload: eliminates packet processing
• RDMA / DDP: eliminates intermediate buffer copies
• User-level direct access / OS bypass: eliminates command context switches

* iWARP: IETF-ratified extensions to the TCP/IP Ethernet standard
Tomorrow's Data Center: Separate Fabrics for Networking, Storage, and Clustering

[Diagram: the same three fabrics, each moved to iWARP Ethernet but still separate. Applications connect through distinct networking, storage, and clustering adapters, each to its own switch:
• LAN: Ethernet becomes iWARP Ethernet, serving users and NAS
• SAN: block storage moves from Fibre Channel to iWARP Ethernet
• Clustering: Myrinet, Quadrics, InfiniBand, etc. are replaced by iWARP Ethernet]
Converged iWARP Ethernet: Single Adapter for All Traffic

A converged fabric for networking, storage, and clustering:
• Smaller footprint
• Lower complexity
• Higher bandwidth
• Lower power
• Lower heat dissipation

[Diagram: applications reach users, NAS, and an iWARP Ethernet SAN through a single adapter and switch carrying networking, storage, and clustering traffic.]
NetEffect’s NE010 iWARP Ethernet Channel Adapter
High Bandwidth – Storage/Clustering Fabrics

[Chart: bandwidth tests, NE010 vs InfiniBand & 10 GbE NIC. Bandwidth (MB/s, 0 to 1000) vs message size (128 to 16384 bytes) for InfiniBand (PCI-X), InfiniBand (PCIe), the NE010, and a 10 GbE NIC. Source: InfiniBand - OSU website; 10 GbE NIC and NE010 - NetEffect measured.]
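Bandwidth-vs-message-size curves like this one are typically produced by streaming a fixed volume of data in fixed-size messages and timing the transfer. A hedged sketch of that kind of sweep over a plain loopback TCP socket; this is not NetEffect's test harness, and loopback numbers say nothing about any adapter's performance:

```python
import socket
import threading
import time

def bandwidth_mb_s(msg_size, total_bytes=4 * 1024 * 1024):
    """Stream total_bytes over a loopback TCP socket in msg_size chunks and
    return the achieved send-side throughput in MB/s. Illustrative only:
    it shows the shape of a message-size sweep, not real adapter numbers.
    msg_size should divide total_bytes evenly."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port = srv.getsockname()[1]

    def sink():
        # Receiver: drain exactly total_bytes, then close.
        conn, _ = srv.accept()
        with conn:
            remaining = total_bytes
            while remaining > 0:
                chunk = conn.recv(min(65536, remaining))
                if not chunk:
                    break
                remaining -= len(chunk)

    t = threading.Thread(target=sink)
    t.start()
    payload = b"x" * msg_size
    with socket.create_connection(("127.0.0.1", port)) as c:
        start = time.perf_counter()
        sent = 0
        while sent < total_bytes:
            c.sendall(payload)
            sent += msg_size
        elapsed = time.perf_counter() - start
    t.join()
    srv.close()
    return sent / elapsed / 1e6

# Sweep a few power-of-two message sizes, as in the chart's x-axis.
for size in (128, 1024, 16384):
    print(f"{size:6d} B : {bandwidth_mb_s(size):8.1f} MB/s")
```

The characteristic rising curve appears because per-message overhead (system calls, copies) is amortized over more payload as the message size grows, which is exactly the effect the chart plots.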
Multiple Connections – Realistic Data Center Loads

[Chart: multiple QP comparison, NE010 vs InfiniBand. Bandwidth (MB/s, 0 to 1000) vs message size (128 to 16384 bytes) for the NE010 and InfiniBand at 1 QP and at 2048 QPs. Source: InfiniBand - Mellanox website; NE010 - NetEffect measured.]
NE010 Processor Off-load

[Chart: bandwidth and CPU utilization, NE010 vs 10 GbE NIC. Bandwidth (MB/s, 0 to 1000) and CPU utilization (0% to 100%) vs message size (128 to 16384 bytes), with four series: BW NIC, BW NE010, CPU Util NIC, CPU Util NE010.]
Low Latency – Required for Clustering Fabrics

[Chart: application latency, NE010 vs InfiniBand & 10 GbE NIC. Latency (µsec, 0 to 160) vs message size (128 to 16384 bytes) for InfiniBand, a 10 GbE NIC, and the NE010. Source: InfiniBand - OSU website; 10 GbE NIC and NE010 - NetEffect measured.]
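Application-latency charts of this shape usually come from a ping-pong test: one side sends a message, the other echoes it back, and half the average round-trip time is reported. A minimal loopback TCP sketch of that methodology; it is illustrative only, uses no RDMA, and reflects nothing about the NE010 itself:

```python
import socket
import threading
import time

def pingpong_latency_us(msg_size=128, iters=200):
    """Return average one-way latency in microseconds for msg_size-byte
    messages bounced over a loopback TCP socket. A sketch of the usual
    ping-pong methodology, not a hardware benchmark."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port = srv.getsockname()[1]

    def echo():
        # Server side: receive a full message, echo it back, repeat.
        conn, _ = srv.accept()
        conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        with conn:
            for _ in range(iters):
                buf = b""
                while len(buf) < msg_size:
                    buf += conn.recv(msg_size - len(buf))
                conn.sendall(buf)

    t = threading.Thread(target=echo)
    t.start()
    payload = b"x" * msg_size
    with socket.create_connection(("127.0.0.1", port)) as c:
        # Disable Nagle so small messages are not coalesced/delayed.
        c.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        start = time.perf_counter()
        for _ in range(iters):
            c.sendall(payload)
            buf = b""
            while len(buf) < msg_size:
                buf += c.recv(msg_size - len(buf))
        elapsed = time.perf_counter() - start
    t.join()
    srv.close()
    return elapsed / iters / 2 * 1e6  # half the mean round-trip, in µs

print(f"one-way latency at 128 B: {pingpong_latency_us():.1f} us")
```

The kernel TCP path traversed here on every message is precisely what the slide's overhead analysis targets: iWARP's offload and OS bypass remove those per-message costs, which is why small-message latency is where it competes with proprietary clustering fabrics.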
Low Latency – Required for Clustering Fabrics: Application Latency

D. Dalessandro – OSC, May '06
Summary & Contact Information

iWARP-enhanced TCP/IP Ethernet:
• The next generation of Ethernet for all data center fabrics
• Full implementation necessary to meet the most demanding requirement: clustering and grid
• A true converged I/O fabric for large clusters

NetEffect delivers "best of breed" performance:
• Latency directly competitive with proprietary clustering fabrics
• Superior performance under load for throughput and bandwidth
• Dramatically lower CPU utilization

For more information contact:
Gary Montry, Senior Scientist
+1-512-493-3241 | [email protected]
www.NetEffect.com