
Supporting Wide-area Applications


Page 1: Supporting Wide-area Applications

CITRIS Poster: http://www.cs.berkeley.edu/~ravenben/tapestry

Complexities of global deployment:
- Network unreliability: BGP slow convergence, redundancy unexploited
- Management of large-scale resources / components: locate and utilize resources despite failures

Decentralized Object Location and Routing (DOLR): a wide-area overlay application infrastructure
- Self-organizing, scalable
- Fault-tolerant routing and object location
- Efficient (bandwidth, latency) data delivery
- Extensible, supports application-specific protocols

Recent work: Tapestry, Chord, CAN, Pastry, Kademlia, Viceroy, …

Page 2: Supporting Wide-area Applications


Tapestry: Decentralized Object Location and Routing

Zhao, Kubiatowicz, Joseph, et al.

Mapping keys to the physical network:
- Large sparse ID space N
- Nodes in the overlay network have NodeIds ∈ N
- Given k ∈ N, the overlay deterministically maps k to its root node (a live node in the network)

Base API:
- Publish / Unpublish (Object ID)
- RouteToNode (NodeId)
- RouteToObject (Object ID)
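The deployment described later in the poster is in Java, so here is a minimal Java sketch of what this base API could look like. The class shapes, method signatures, and the 160-bit identifier width are assumptions for illustration, not the actual Tapestry classes.

    // Identifiers drawn from the large sparse ID space N; the 160-bit width is
    // an assumption (the poster only says "large sparse").
    final class NodeId   { final java.math.BigInteger bits; NodeId(java.math.BigInteger b)   { bits = b; } }
    final class ObjectId { final java.math.BigInteger bits; ObjectId(java.math.BigInteger b) { bits = b; } }

    // Hypothetical DOLR interface mirroring the base API above.
    interface DolrNode {
        // Advertise a locally stored object: place location pointers along the
        // overlay path from this node toward the object's root node.
        void publish(ObjectId id);

        // Remove the location pointers installed by publish().
        void unpublish(ObjectId id);

        // Route a message through the overlay to the live node responsible
        // for the given NodeId.
        void routeToNode(NodeId destination, byte[] message);

        // Route a message to some node currently holding a replica of the object.
        void routeToObject(ObjectId id, byte[] message);
    }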

Page 3: Supporting Wide-area Applications


[Figure: Tapestry mesh, incremental prefix-based routing. Nodes are labeled with 4-hex-digit NodeIDs (0x43FE, 0xEF31, 0xEF37, 0xEF32, 0xEF34, ...); overlay links are labeled with routing levels 1-4.]
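The mesh routes incrementally: each hop forwards to a neighbor whose NodeID shares one more leading digit with the destination than the current node does, so a message reaches the root in at most one hop per ID digit. A rough Java sketch of that next-hop choice, assuming fixed-length IDs and a routing table indexed by (matched-prefix length, next digit); the class, field names, and fallback behavior are illustrative, and the real system's surrogate routing for empty table entries is omitted.

    import java.util.Map;

    // Illustrative next-hop selection for incremental prefix-based routing.
    // routingTable.get(level).get(digit) is a neighbor that shares `level`
    // leading hex digits with this node and has `digit` as its next digit
    // (table layout assumed for illustration).
    final class PrefixRouter {
        private final String localId;                                   // e.g. "43FE"
        private final Map<Integer, Map<Character, String>> routingTable;

        PrefixRouter(String localId, Map<Integer, Map<Character, String>> routingTable) {
            this.localId = localId;
            this.routingTable = routingTable;
        }

        // Number of leading digits two equal-length IDs share.
        private static int sharedPrefix(String a, String b) {
            int i = 0;
            while (i < a.length() && i < b.length() && a.charAt(i) == b.charAt(i)) i++;
            return i;
        }

        // Pick the neighbor that extends the matched prefix by one digit.
        String nextHop(String dest) {
            if (dest.equals(localId)) return localId;         // already at the root
            int level = sharedPrefix(localId, dest);          // digits resolved so far
            char wanted = dest.charAt(level);                 // digit to resolve next
            Map<Character, String> row = routingTable.get(level);
            String hop = (row == null) ? null : row.get(wanted);
            return (hop != null) ? hop : localId;             // real Tapestry would fall
                                                              // back to surrogate routing
        }
    }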

Page 4: Supporting Wide-area Applications


Object Location: Randomization and Locality

Page 5: Supporting Wide-area Applications


Single Node Architecture

[Figure: single-node architecture, layered top to bottom]
- Applications: Decentralized File Systems, Application-Level Multicast, Approximate Text Matching
- Application Interface / Upcall API
- Router (Routing Table & Object Pointer DB); Dynamic Node Management
- Network Link Management
- Transport Protocols
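One reading of the "Application Interface / Upcall API" layer: applications such as the file system or multicast layers register handlers that the router invokes when a message arrives at, or is forwarded through, this node. A hypothetical Java shape for such an interface (the method names are assumptions, not taken from the poster), reusing the NodeId/ObjectId types sketched earlier:

    // Hypothetical upcall interface an application registers with the router;
    // NodeId and ObjectId are the illustrative types from the earlier sketch.
    interface ApplicationUpcalls {
        // The message has reached its destination on this node (e.g. the
        // object's root or a node holding a replica).
        void deliver(ObjectId id, byte[] message);

        // The message is transiting this node; the application may inspect or
        // annotate it before the router forwards it to the next hop.
        void forward(ObjectId id, NodeId nextHop, byte[] message);
    }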

Page 6: Supporting Wide-area Applications


Status and Deployment

PlanetLab global network:
- 98 machines at 42 institutions in North America, Europe, and Australia (~60 machines utilized)
- 1.26 GHz PIII (1 GB RAM) and 1.8 GHz PIV (2 GB RAM) machines; North American machines (2/3) on Internet2

Tapestry Java deployment:
- 6-7 nodes on each physical machine
- IBM Java JDK 1.3.0
- Node virtualization inside the JVM and SEDA
- Scheduling between virtual nodes increases latency

Page 7: Supporting Wide-area Applications


Node to Node Routing

Ratio of end-to-end routing latency to shortest ping distance between nodes

All node pairs measured, placed into buckets

[Plot: RDP (min, median, 90%) vs. internode RTT ping time (5 ms buckets, 0-300 ms); y-axis 0-35. Median = 31.5, 90th percentile = 135.]
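RDP (relative delay penalty) here is the stretch of the overlay path: end-to-end latency through Tapestry divided by the direct ping time for the same node pair, grouped into the 5 ms ping-time buckets shown on the x-axis. A small illustrative Java sketch of how the per-bucket min / median / 90th-percentile curves could be computed; the class and method names are assumptions, not the measurement harness actually used.

    import java.util.*;

    // Illustrative computation of the bucketed RDP statistics plotted above.
    final class RdpBuckets {
        // pingMs[i] / overlayMs[i]: direct and overlay round-trip times for pair i.
        static Map<Integer, double[]> compute(double[] pingMs, double[] overlayMs) {
            Map<Integer, List<Double>> buckets = new TreeMap<>();
            for (int i = 0; i < pingMs.length; i++) {
                int bucket = (int) (pingMs[i] / 5) * 5;                   // 5 ms buckets
                buckets.computeIfAbsent(bucket, k -> new ArrayList<>())
                       .add(overlayMs[i] / pingMs[i]);                    // RDP for this pair
            }
            Map<Integer, double[]> stats = new TreeMap<>();               // bucket -> {min, median, 90th}
            for (Map.Entry<Integer, List<Double>> e : buckets.entrySet()) {
                List<Double> rdp = e.getValue();
                Collections.sort(rdp);
                int n = rdp.size();
                stats.put(e.getKey(), new double[] {
                    rdp.get(0),                                           // min
                    rdp.get(n / 2),                                       // median (approx.)
                    rdp.get(Math.min(n - 1, (int) Math.floor(0.9 * n)))   // ~90th percentile
                });
            }
            return stats;
        }
    }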

Page 8: Supporting Wide-area Applications


Object Location

Ratio of end-to-end latency for object location to the shortest ping distance between the client and the object's location

Each node publishes 10,000 objects; lookups are performed on all objects

[Plot: RDP (min, median, 90%) vs. client-to-object RTT ping time (1 ms buckets, 0-200 ms); y-axis 0-25. 90th percentile = 158.]

Page 9: Supporting Wide-area Applications


Parallel Insertion Latency

- Dynamically insert nodes in unison into an existing Tapestry of 200 nodes
- Shown as a function of insertion group size / network size
- Node virtualization effect: CPU scheduling contention causes ping measurements to time out, resulting in high deviation

[Plot: latency to convergence (ms, 0-20,000) vs. ratio of insertion group size to network size (0-0.3). 90th percentile = 55,042 ms.]