
    Structured P2P Networks by Example

    Chord, DKS(N,k,f)

    Jun Qin

    Computational Engineering

    Technische Universitaet Dresden

    August 26, 2006

    Abstract

Recent developments in the area of peer-to-peer computing show that structured overlay networks implementing distributed hash tables scale well and can serve as infrastructures for Internet-scale applications. This paper presents a study of two representative examples of structured P2P networks: Chord and DKS(N,k,f). It explains how they work, shows their performance, and discusses them. Chord solves a fundamental problem that confronts peer-to-peer applications: how to efficiently locate the node that stores a particular data item. DKS(N,k,f), which can be perceived as an optimal generalization of Chord, overcomes the additional bandwidth that Chord consumes in correcting routing table entries.

Contents

1 Introduction
2 Basics
3 Chord
3.1 The Overlay Graph and Items Mapping
3.2 The Lookup Process
3.3 Joins, Leaves and Maintenance
3.4 Replication and Fault Tolerance
3.5 Evaluation
4 DKS(N,k,f)
4.1 The Overlay Graph and Items Mapping
4.2 The Lookup Process
4.3 Joins, Leaves and Maintenance
4.4 Replication and Fault Tolerance
4.5 Evaluation
5 Related Work
6 Discussion
7 Conclusion


    1 Introduction

The need for sharing information and computing resources in large-scale networks is motivating a significant amount of research in the area of P2P computing. Unstructured P2P networks have problems such as single points of failure or control (e.g. Napster [Nap]) and a lack of scalability caused by the widespread use of broadcasts (e.g. Gnutella [200]). Recent developments in the area of peer-to-peer computing show that structured overlay networks implementing distributed hash tables (DHTs) scale well and can serve as infrastructures for Internet-scale applications.

As one of the mainstream techniques of structured P2P networks, Chord [ISB03] is a scalable protocol for lookup in a dynamic peer-to-peer system with frequent node arrivals and departures. The Chord protocol supports just one operation: given a key, it maps the key onto a node. Simplicity, correctness and performance are the three features that distinguish Chord from many other peer-to-peer protocols. DKS(N,k,f) [LOAH03] can be perceived as an optimal generalization of Chord. It stands for Distributed k-ary Search and, with its correction-on-use technique, overcomes the additional bandwidth that Chord consumes in correcting routing table entries.

This paper presents a study of these two representative examples of structured P2P networks: Chord and DKS(N,k,f). It explains how Chord and DKS(N,k,f) work, shows the evaluations of these two DHT-based P2P networks, and discusses them.

    2 Basics

The need for sharing information and computing resources in large-scale networks is motivating a significant amount of research in the area of peer-to-peer (P2P) computing. Previously, in Napster, files were exchanged between computers (peers) relying on a central directory for knowing which peer has which file. Napster was followed by a number of systems, like Gnutella, where the central directory was replaced with a flooding process: each computer connects to random members of a peer-to-peer network and queries its neighbors, which act similarly until a query is resolved. Such P2P networks are called unstructured P2P networks because of their use of a random graph of peers.

Random overlay networks attracted academic researchers from the networking and distributed systems communities with the simplicity of the solution and its ability to completely diffuse central authority. From a computer science point of view, this elimination of central control is very attractive for eliminating single points of failure and building large-scale distributed systems. But the huge amount of induced traffic still renders the solution unscalable. Building a scalable P2P overlay network with no central control became a scientifically challenging problem, and the efforts to solve it resulted in the emergence of what is known as structured P2P overlay networks, also referred to as Distributed Hash Tables (DHTs), the main approach introduced by academics to build structured overlay networks.

A Distributed Hash Table is the distributed version of the hash table data structure, with the two primitive operations Put(key, value) and Get(key). The Put operation should result in the storage of the value at one of the peers, such that any of the peers can perform the Get operation and reach the peer that has the value. More importantly, both operations need to take a small number of hops. A first naive solution would be for every peer to know all other peers; then every Get operation would be resolved in one hop. Apparently, that is not scalable. Therefore, a second constraint is needed: each node should know only a small number of other peers. From a graph-theory point of view, this means that a directed graph of a certain known structure, rather than a random graph, needs to be constructed, with scalable sizes of both the outgoing degree of each node and the diameter of the graph. Chord and DKS(N,k,f) are two representative examples of DHT-based structured P2P networks.
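As a minimal sketch of these two primitives (all names here are hypothetical, not from the Chord or DKS papers), a single local node below stands in for the whole overlay; in a real DHT the Put/Get calls would be routed across the overlay to the peer responsible for the key's identifier:

```python
import hashlib

class ToyDHTNode:
    """One peer of a toy DHT. In a real system, store/lookup requests
    would be routed to the peer whose identifier follows hash(key);
    here a single node stands in for all of them."""

    def __init__(self):
        self.store = {}

    def _ident(self, key: str) -> int:
        # Hash the key into the identifier space (SHA-1, as in Chord).
        return int(hashlib.sha1(key.encode()).hexdigest(), 16)

    def put(self, key: str, value: str) -> None:
        self.store[self._ident(key)] = value

    def get(self, key: str):
        return self.store.get(self._ident(key))

node = ToyDHTNode()
node.put("song.mp3", "peer 10.0.0.7")
print(node.get("song.mp3"))  # -> "peer 10.0.0.7"
```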


    3 Chord

Chord is a scalable peer-to-peer lookup service for Internet applications. The Chord protocol supports just one operation: given a key, it maps the key onto a node. Depending on the application using Chord, that node might be responsible for storing a value associated with the key. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing.

    3.1 The Overlay Graph and Items Mapping

At its heart, Chord provides fast distributed computation of a hash function mapping keys to the nodes responsible for them. It uses consistent hashing.

The consistent hash function assigns each node and key an m-bit identifier using a base hash function such as SHA-1 [Sta95]. A node's identifier is chosen by hashing the node's IP address, while a key identifier is produced by hashing the key. The identifier length m must be large enough to make the probability of two nodes or keys hashing to the same identifier negligible.

Figure 1: An identifier circle consisting of the three nodes 0, 1, and 3. In this example, key 1 is located at node 1, key 2 at node 3, and key 6 at node 0. [ISB03]

Consistent hashing assigns keys to nodes as follows. Identifiers are ordered on an identifier circle modulo 2^m. Key k is assigned to the first node whose identifier is equal to or follows (the identifier of) k in the identifier space. This node is called the successor node of key k, denoted by successor(k). If identifiers are represented as a circle of numbers from 0 to 2^m - 1, then successor(k) is the first node clockwise from k.

Figure 1 shows an identifier circle with m = 3. The circle has three nodes: 0, 1, and 3. The successor of identifier 1 is node 1, so key 1 would be located at node 1. Similarly, key 2 would be located at node 3, and key 6 at node 0.

Consistent hashing is designed to let nodes enter and leave the network with minimal disruption. To maintain the consistent hashing mapping when a node n joins the network, certain keys previously assigned to n's successor now become assigned to n. When node n leaves the network, all of its assigned keys are reassigned to n's successor. No other changes in the assignment of keys to nodes need occur. In the example above, if a node were to join with identifier 7, it would capture the key with identifier 6 from the node with identifier 0.

    3.2 The Lookup Process

The lookup process comes as a natural result of how the identifier space is partitioned. Both inserting and querying items depend on finding the successor of an identifier.

As before, let m be the number of bits in the key/node identifiers. Each node n maintains a routing table with (at most) m entries, called the finger table. The i-th entry in the table at node n contains the identity of the first node s that succeeds n by at least 2^(i-1) on the identifier circle, i.e., s = successor(n + 2^(i-1)), where 1 <= i <= m (and all arithmetic is modulo 2^m). We call node s the i-th finger of node n, and denote it by n.finger[i].node. A finger table entry includes both the Chord identifier and the IP address (and port number) of the relevant node. Note that the first finger of n is its immediate successor on the circle; for convenience we often refer to it as the successor rather than the first finger.

In the example shown in Figure 2, the finger table of node 1 points to the successor nodes of identifiers (1 + 2^0) mod 2^3 = 2, (1 + 2^1) mod 2^3 = 3, and (1 + 2^2) mod 2^3 = 5, respectively. The successor of identifier 2 is node 3, as this is the first node that follows 2; the successor of identifier 3 is (trivially) node 3; and the successor of 5 is node 0.

Figure 2: Finger tables and key locations for a net with nodes 0, 1, and 3, and keys 1, 2, and 6. [ISB03]
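A small sketch (same simplification as before: a globally known sorted node list replaces distributed state) reproduces node 1's finger table from Figure 2:

```python
from bisect import bisect_left

M = 3
SPACE = 2 ** M

def successor(nodes, k):
    nodes = sorted(nodes)
    i = bisect_left(nodes, k % SPACE)
    return nodes[i] if i < len(nodes) else nodes[0]

def finger_table(n, nodes):
    """Entry i (1-based) points at successor(n + 2^(i-1)) mod 2^m."""
    return [successor(nodes, (n + 2 ** (i - 1)) % SPACE)
            for i in range(1, M + 1)]

print(finger_table(1, [0, 1, 3]))  # -> [3, 3, 0]: successors of 2, 3, 5
```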

As a query example, consider the Chord ring in Figure 2. Suppose node 3 wants to find the successor of identifier 1. Since 1 belongs to the circular interval [7, 3), it belongs to 3.finger[3].interval; node 3 therefore checks the third entry in its finger table, which is 0. Because 0 precedes 1, node 3 will ask node 0 to find the successor of 1. In turn, node 0 will infer from its finger table that 1's successor is node 1 itself, and return node 1 to node 3. In general, under normal conditions a lookup takes O(log2(N)) hops.
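This greedy routing rule can be sketched as follows (hypothetical names; a simplified rendering of the find_successor / closest-preceding-finger pair from the Chord paper, with global knowledge standing in for remote calls). It reproduces the example: a lookup for identifier 1 started at node 3 is forwarded to node 0, which answers node 1:

```python
from bisect import bisect_left

M = 3
SPACE = 2 ** M
NODES = [0, 1, 3]                       # sorted ring membership

def successor(k):
    i = bisect_left(NODES, k % SPACE)
    return NODES[i] if i < len(NODES) else NODES[0]

def between(x, a, b):
    """True if x lies on the circular open interval (a, b)."""
    a, b, x = a % SPACE, b % SPACE, x % SPACE
    if a < b:
        return a < x < b
    return x > a or x < b               # interval wraps past zero

def fingers(n):
    return [successor(n + 2 ** (i - 1)) for i in range(1, M + 1)]

def find_successor(n, key, path=None):
    path = path if path is not None else [n]
    succ = fingers(n)[0]                # first finger = successor
    if between(key, n, succ) or key % SPACE == succ:
        return succ, path               # key in (n, succ]: done
    # Otherwise forward to the closest finger preceding the key.
    for f in reversed(fingers(n)):
        if between(f, n, key):
            path.append(f)
            return find_successor(f, key, path)
    path.append(succ)
    return find_successor(succ, key, path)

print(find_successor(3, 1))             # -> (1, [3, 0])
```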

    3.3 Joins, Leaves and Maintenance

In a dynamic network, nodes can join (and leave) at any time. Joins and leaves make the network change constantly.

To join the network, a node n performs a lookup for its own id through some first contact in the network and inserts itself into the ring between its successor s and the predecessor of s using a periodic stabilization algorithm. Initialization of n's routing table is done by copying the routing table of s or letting s look up each required edge of n. The subset of nodes that need to adjust their tables to reflect the presence of n will eventually do so, because all nodes run a stabilization algorithm that periodically goes through the routing table and looks up the value of each edge. The last task is to transfer part of the items stored at s: items with ids less than or equal to n's id need to be transferred to n, and that is handled by the application layers of n and s.

Graceful removals (leaves) are done by first transferring all items to the successor and informing the predecessor and successor. The rest of the fingers are corrected by virtue of the stabilization algorithm.

A basic stabilization protocol is used to keep nodes' successor pointers up to date. This is the most important maintenance task, and it is sufficient to guarantee correctness of lookups and to add nodes to a Chord ring in a way that preserves the reachability of existing nodes, even in the face of concurrent joins and lost and reordered messages.

As a simple example, suppose node n joins the system, and its ID lies between nodes np and ns. n would acquire ns as its successor. Node ns, when notified by n, would acquire n as its predecessor. When np next runs stabilize, it will ask ns for its predecessor (which is now n); np would then acquire n as its successor. Finally, np will notify n, and n will acquire np as its predecessor. At this point, all predecessor and successor pointers are correct.
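The example can be replayed with a schematic stabilize/notify pair (hypothetical names; timers, remote calls and failure handling are omitted):

```python
class Node:
    def __init__(self, ident):
        self.id = ident
        self.successor = self        # a one-node ring points at itself
        self.predecessor = None

def between(x, a, b, space=2 ** 3):
    a, b, x = a % space, b % space, x % space
    if a < b:
        return a < x < b
    return x > a or x < b

def stabilize(n):
    """Periodic: ask our successor for its predecessor; if a node
    slipped in between, adopt it as the new successor, then notify."""
    x = n.successor.predecessor
    if x is not None and between(x.id, n.id, n.successor.id):
        n.successor = x
    notify(n.successor, n)

def notify(ns, n):
    """ns learns about n and adopts it as predecessor if it is closer."""
    if ns.predecessor is None or between(n.id, ns.predecessor.id, ns.id):
        ns.predecessor = n

# The sequence from the text: n joins between np and ns.
np_, ns = Node(0), Node(3)
np_.successor, ns.predecessor = ns, np_
n = Node(1)
n.successor = ns                 # join: n looked up its successor
notify(ns, n)                    # ns acquires n as its predecessor
stabilize(np_)                   # np learns of n, adopts it, notifies it
print(np_.successor.id, n.predecessor.id)   # -> 1 0
```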

    3.4 Replication and Fault Tolerance

When a node n fails, nodes whose finger tables include n must find n's successor. In addition, the failure of n must not be allowed to disrupt queries that are in progress as the system is re-stabilizing.

The key step in failure recovery is maintaining correct successor pointers. To help achieve this, each Chord node maintains a successor list of its r nearest successors on the Chord ring. In ordinary operation, a modified version of the stabilize routine maintains the successor list. If node n notices that its successor has failed, it replaces it with the first live entry in its successor list. At that point, n can direct ordinary lookups for keys for which the failed node was the successor to the new successor. As time passes, stabilize will correct finger table entries and successor-list entries pointing to the failed node.
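The recovery step itself is simple; a sketch (hypothetical names) of replacing a dead successor with the first live successor-list entry:

```python
def repair_successor(successor_list, is_alive):
    """Return the first live entry of the successor list, as a node
    does when it notices its immediate successor has failed."""
    for node in successor_list:
        if is_alive(node):
            return node
    raise RuntimeError("all r successors failed simultaneously")

# Example: with r = 3 successors and the nearest one dead:
alive = {12: False, 15: True, 20: True}
print(repair_successor([12, 15, 20], lambda n: alive[n]))  # -> 15
```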

After a node failure, but before stabilization has completed, other nodes may attempt to send requests through the failed node as part of a lookup. Ideally, the lookups would be able to proceed, after a timeout, by another path despite the failure. In many cases this is possible. All that is needed is a list of alternate nodes, easily found in the finger table entries preceding that of the failed node. If the failed node had a very low finger table index, nodes in the successor list are also available as alternates.

For an item to be lost or the ring to be disconnected, O(log2(N) + 1) successive nodes have to fail simultaneously.

The successor-list mechanism also helps higher-layer software replicate data. A typical application using Chord might store replicas of the data associated with a key at the k nodes succeeding the key. The fact that a Chord node keeps track of its r successors means that it can inform the higher-layer software when successors come and go, and thus when the software should propagate new replicas.
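A sketch of that placement policy (hypothetical names; a dictionary stands in for per-node storage and remote calls):

```python
def replicate(key_id, ring_nodes, k, store):
    """Place copies of an item at the k nodes succeeding the key.
    `ring_nodes` is the sorted list of node ids; `store[node]` holds
    each node's local items."""
    ring = sorted(ring_nodes)
    # Index of the key's successor on the circle (wrap to 0 if none).
    start = next((i for i, n in enumerate(ring) if n >= key_id), 0)
    for j in range(k):
        owner = ring[(start + j) % len(ring)]
        store.setdefault(owner, set()).add(key_id)

store = {}
replicate(6, [0, 1, 3], k=2, store=store)
print(store)   # -> {0: {6}, 1: {6}}: successor 0 plus the next node
```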

    3.5 Evaluation

In the experiment on simultaneous node failures, we evaluate the ability of Chord to regain consistency after a large fraction p of the nodes fails simultaneously. Figure 3 plots the mean lookup failure rate and the 95% confidence interval as a function of p. The lookup failure rate is almost exactly p. Since this is just the fraction of keys expected to be lost due to the failure of the responsible nodes, we conclude that there is no significant lookup failure in the Chord network. For example, if the Chord network had partitioned into two equal-sized halves, we would expect one half of the requests to fail, because the querier and target would be in different partitions half the time. Our results do not show this, suggesting that Chord is robust in the face of multiple simultaneous node failures.

Figure 4 shows the measured latency of Chord lookups over a range of numbers of nodes. Experiments with more than ten nodes are conducted by running multiple independent copies of the Chord software at each site. The lesson from Figure 4 is that lookup latency grows slowly with the total number of nodes, confirming the simulation results that demonstrate Chord's scalability.

Figure 4: The measured latency of Chord lookups as a function of the total number of nodes. [ISB03]


Figure 3: The fraction of lookups that fail as a function of the fraction p of nodes that fail simultaneously. [ISB03]

4 DKS(N,k,f)

DKS(N,k,f) can be perceived as an optimal generalization of Chord. It stands for Distributed k-ary Search and, with its correction-on-use technique, overcomes the additional bandwidth that Chord consumes in correcting routing table entries.

4.1 The Overlay Graph and Items Mapping

DKS can be perceived as an optimal generalization of Chord, providing a shorter diameter at the cost of larger routing tables. At the same time, DKS can be perceived as a meta-system from which other systems can be instantiated. DKS stands for Distributed k-ary Search, and it was designed after observing that many DHT systems are instances of a form of k-ary search. Figure 5 shows the division of the space done in DKS. It has in common with Chord that each node perceives itself as the start of the space. Unlike Chord, however, each interval is divided into k rather than 2 intervals.

Figure 5: Illustration of how a DKS node divides the space in an identifier space of size N = 2^8 = 256. [LOAH03]

Along with the goal of DKS to act as a meta-system, mapping items onto nodes is also left as a design choice. A Chord-like mapping is valid as a simple first choice; however, different mappings are possible as well.
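A sketch of the k-ary division (under the simplifying assumptions that the space size is a power of k and intervals are cut at exact node-relative offsets) shows how each level discards a factor of k of the space, which is what bounds lookups at log_k(N) hops, as Section 4.2 describes:

```python
def levels(space_size, k, node, target):
    """Walk the k-ary search levels: at each level the remaining part
    of the space (starting at `node`) is cut into k equal intervals
    and the search continues in the one containing `target`."""
    start, width = node, space_size
    hops = []
    while width > 1:
        width //= k
        # Which of the k intervals (relative to start) holds target?
        j = ((target - start) % space_size) // width
        start = (start + j * width) % space_size
        hops.append((j, start, width))
    return hops

# N = 2^8 = 256 as in Figure 5, k = 2: exactly log_2(256) = 8 levels.
for level, (j, start, width) in enumerate(levels(256, 2, 0, 200), 1):
    print(f"level {level}: interval {j}, start {start}, width {width}")
```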

    4.2 The Lookup Process

When a node n receives a lookup request for a key identifier t from its user, node n checks whether t is between its predecessor p and itself. If this is the case, node n does a local lookup to find the value associated with t, and the result is returned to the user. Otherwise, node n triggers a forwarding process that goes level by level and consists in routing lookup messages toward the node that succeeds t on the identifier circle. Each lookup message carries the information (level and interval) necessary for the detection and correction of routing entries. When the node that succeeds t is reached, it performs a local lookup to retrieve the value associated with t. The result is forwarded backward or sent directly to the origin of the lookup. A query arriving at a node is forwarded to the first node of the interval to which the sought identifier belongs. Therefore, a lookup is resolved in log_k(N) hops.

Inserting key/value pairs into the system is similar to the lookup. In addition, messages for inserting key/value pairs are also used for the detection and correction of routing entries.



    4.3 Joins, Leaves and Maintenance

Unlike Chord, DKS avoids any kind of periodic stabilization, both for the maintenance of the successors and the predecessor and for the routing table. Instead, it relies on three principles: local atomic actions, correction-on-use and correction-on-change. When a node joins, a form of an atomic distributed transaction is performed to insert it into the ring. Routing tables are then maintained using the correction-on-use technique, an approach introduced in DKS. Every lookup message contains information about the position of the receiver in the routing table of the sender. Upon receiving that information, the receiver can judge whether the sender has an up-to-date routing table. If it is correct, the receiver continues the lookup; otherwise, the receiver notifies the sender of the corruption of its routing table and advises it about a better candidate for the lookup, according to the receiver's knowledge. The sender then contacts the candidate, and the process is repeated until the correct node for the routing table of the sender is used for the lookup.

By applying the correction-on-use technique, a routing table entry is not corrected until there is a need to use it in some lookup. This approach reduces the maintenance cost significantly. However, the number of joins and leaves is assumed to be reasonably smaller than the number of lookup messages. In cases where this assumption does not hold, DKS combines correction-on-use with the correction-on-change technique [LOAH04a]. Correction-on-change notifies all nodes that need to be updated upon the occurrence of a join, leave or failure.
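Schematically, the exchange looks as follows (message fields and function names are hypothetical; a shared lookup function stands in for each node's local knowledge of who currently owns a (level, interval) position):

```python
def on_lookup(receiver_id, msg, responsible):
    """Receiver side of correction-on-use. `msg` carries the position
    (level, interval) the sender believes the receiver occupies;
    `responsible(level, interval)` is the receiver's own view of who
    should be there."""
    expected = responsible(msg["level"], msg["interval"])
    if expected == receiver_id:
        return {"status": "continue"}   # sender's entry is up to date
    # Entry is stale: reject and point the sender at a better candidate.
    return {"status": "bad_pointer", "candidate": expected}

def lookup_via(entry, msg, responsible):
    """Sender side: retry with each suggested candidate until one
    accepts, then use (and keep) that corrected routing entry."""
    while True:
        reply = on_lookup(entry, msg, responsible)
        if reply["status"] == "continue":
            return entry
        entry = reply["candidate"]

view = {(1, 2): 42}                     # who now owns (level 1, interval 2)
print(lookup_via(17, {"level": 1, "interval": 2},
                 lambda l, i: view[(l, i)]))   # -> 42, corrected from 17
```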

    4.4 Replication and Fault Tolerance

In early versions of DKS, fault tolerance was handled similarly to Chord, with replicas of an item placed on the successors. In later developments [AGH04], DKS tries to address replication at the DHT level rather than delegating most of the work to the application layer. Additionally, to avoid congestion in a particular segment of the ring, replicas are placed at dispersed, well-chosen positions and not on the successor list. In general, for the correction-on-use technique to work, an invariant is maintained: the predecessor pointer always has to be correct, and that is guaranteed by the atomic actions on the circle.

    4.5 Evaluation

Figure 6 shows that as we increase the number of lookups, the average lookup length tends to (1/2) log2(2^10) and the 99th percentile of the lookup length tends to log2(2^10). Those are the typical lookup bounds offered by the Chord system. The DKS(N,k,f) system offers the same bounds, yet without active stabilization.

Figure 6: The average, the 1st and the 99th percentiles of the lookup length as the lookup traffic increases, in a system bootstrapped with 500 nodes while 3500 joins are performed concurrently with lookups. [LOAH03]

In Figure 7, the 99th percentile of the lookup length for the case k = 4 tends to be high when there is not enough lookup traffic, which is natural, since the number of out-of-date entries is larger because of the larger routing tables. As the lookup traffic increases, the system with k = 4 starts to outperform the system with k = 2. In such experiments, the number of lookup failures observed was negligible with respect to the amount of lookup requests injected.

Figure 7: The 99th percentile of the lookup length as the lookup traffic increases, in a system of actual size 2^10, while 10% of the nodes leave and another 10% join concurrently. [LOAH03]

    5 Related Work

While Chord maps keys onto nodes, traditional name and location services provide a direct mapping between keys and values. A value can be an address, a document, or an arbitrary data item. Chord can easily implement this functionality by storing each key/value pair at the node to which that key maps. For this reason, and to make the comparison clearer, the rest of this section assumes a Chord-based service that maps keys onto values.

DNS provides a host-name-to-IP-address mapping [MD88]. Chord can provide the same service, with the name representing the key and the associated IP address representing the value. Chord requires no special servers, while DNS relies on a set of special root servers. DNS names are structured to reflect administrative boundaries; Chord imposes no naming structure. DNS is specialized to the task of finding named hosts or services, while Chord can also be used to find data objects that are not tied to particular machines.

Chord can be used as a lookup service to implement a variety of systems. In particular, it can help avoid the single points of failure or control that systems like Napster possess, and the lack of scalability that systems like Gnutella display because of their widespread use of broadcasts.

Several applications, including a cooperative file system [FDS01], were built on top of Chord. As a general-purpose service, a broadcast algorithm was also developed for Chord [SEAH03]. For DKS, a general-purpose multicast algorithm [LOAH04b] was developed.

    6 Discussion

Several questions were raised during the talk on this study. They are discussed in what follows.

1. About the confusion on the basic definitions and assumptions in Chord.

Values. The set of values V, such as files, directory entries, etc. Each value has a corresponding key from the set Keys(V). If a value is a file, the key could be, for instance, its checksum, a combination of owner, creation date and name, or any such unique attribute.

Nodes. The set P of machines/processes, also referred to as nodes or peers. Keys(P) is the set of unique keys for the members of P, usually the IP addresses or public keys of the nodes.

The Identifier Space. A common and fundamental assumption of all DHTs is that the keys of the values and the keys of the nodes are mapped into one range using a hashing function (a short sketch follows these definitions). For instance, the IP addresses of the nodes and the checksums of files are hashed using SHA-1 to obtain 160-bit identifiers. The term identifier is used to refer to the hashed keys of items and of nodes. The term identifier space refers to the range of possible values of identifiers, and its size is usually referred to by N. We use id as an abbreviation for identifier most of the time.


Items. When a new value is inserted in the hash table, its key is saved with it. We use the term item to refer to a key/value pair.
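The identifier-space mapping defined above can be sketched as follows (the 32-bit m and the example keys are made up for illustration):

```python
import hashlib

M = 32                     # identifier bits for this sketch; Chord's m
                           # only has to make collisions negligible

def ident(key: bytes) -> int:
    """Map any key (a node's IP address, a file checksum, ...) into
    the shared identifier space [0, 2^M)."""
    return int(hashlib.sha1(key).hexdigest(), 16) % (2 ** M)

node_id = ident(b"192.168.1.20")        # a member of Keys(P)
item_id = ident(b"checksum:9a0364b9")   # a member of Keys(V)
print(node_id, item_id)                 # both ids live in one range
```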

2. About the SHA-1 hash function used in Chord.

The standard SHA-1 function is used as the base hash function, which makes the Chord protocol deterministic. Producing a set of keys that collide under SHA-1 can be seen, in some sense, as inverting, or decrypting, the SHA-1 function. This is believed to be hard to do.

3. About the application example in Chord, which is a P2P storage system.

The main idea of this system is that Chord identifies the node responsible for storing a block, and the requester then talks to the server on that node. Which peer is the server and which is the client can thus be seen clearly: in pure P2P systems, any peer can act as a server or a client, depending on what role it actually plays in the system.

4. About the concept of round-trip time (RTT) and the related problems in Chord.

Besides the additional bandwidth consumed in maintaining routing tables, Chord also has the problem that nodes close on the ring can be far apart in the underlying network, as shown in Figure 8.

Figure 8: Nodes close on the ring can be far in the network. [Kaa]

A partial solution could be to weight neighbor nodes by round-trip time (RTT) when routing: choose the neighbor closer to the destination (with the lowest RTT) to reduce path latency.
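A sketch of such RTT-weighted neighbor selection (all names and numbers hypothetical): among the fingers that precede the destination on the ring, and therefore all make routing progress, pick the one with the lowest measured RTT:

```python
def next_hop(progressing_fingers, rtt_ms):
    """Among fingers that precede the destination on the ring (so any
    of them makes progress), pick the one with the lowest RTT."""
    return min(progressing_fingers, key=lambda f: rtt_ms[f])

fingers = ["peerA", "peerB", "peerC"]            # all precede the target
rtt = {"peerA": 180.0, "peerB": 12.5, "peerC": 95.0}
print(next_hop(fingers, rtt))                    # -> "peerB"
```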

In telecommunications, the term round-trip delay time or round-trip time (RTT) has the following meanings: 1) the elapsed time for the transit of a signal over a closed circuit, or the time elapsed for a message to reach a remote place and return; 2) in primary or secondary radar systems, the time required for a transmitted pulse to reach a target and for the echo or transponder reply to return to the receiver. Round-trip delay time is significant in systems that require two-way interactive communication, such as voice telephony, or in ACK/NAK data systems where the round-trip time directly affects the throughput rate, such as the Transmission Control Protocol. It may range from a very few microseconds for a short line-of-sight (LOS) radio system to many seconds for a multiple-link circuit with one or more satellite links involved. This includes the node delays as well as the media transit time.

    7 Conclusion

This paper presents a study of two representative examples of structured P2P networks: Chord and DKS(N,k,f). It explains how Chord and DKS(N,k,f) work, shows the evaluations of these two DHT-based P2P networks, and discusses them.

Chord emphasizes simplicity, providing a scalable distributed hash service. It solves a fundamental problem that confronts peer-to-peer applications: how to efficiently locate the node that stores a particular data item. DKS(N,k,f), which can be perceived as an optimal generalization of Chord, significantly decreases the additional bandwidth consumed in correcting routing table entries.


As two representative examples of structured P2P networks, Chord and DKS(N,k,f) provide a fully distributed and scalable service, build on the distributed hash table (DHT) technique as a mainstream approach, can reach a target node with little routing information, and need no flooding algorithm.

Though structured P2P networks have many good characteristics, they still have problems: representative systems generally assume nodes with equal capabilities, adapt well only to small-scale systems, are more complex than unstructured P2P systems, and there is still a lack of successful large-scale deployments of DHT-based P2P systems on the Internet. Better solutions still need to be developed.

    References

[200] Gnutella, 2003. http://www.gnutella.com.

[AGH04] Ali Ghodsi, Luc Onana Alima, and Seif Haridi. A novel replication scheme for load-balancing and increased security. Technical Report TR-2004-11, SICS, June 2004.

[FDS01] Frank Dabek, M. Frans Kaashoek, David Karger, Robert Morris, and Ion Stoica. Wide-area cooperative storage with CFS. In Proceedings of the 18th ACM Symposium on Operating Systems Principles (SOSP'01), Chateau Lake Louise, Banff, Canada, October 2001.

[ISB03] Ion Stoica, Robert Morris, David Liben-Nowell, David Karger, M. Frans Kaashoek, Frank Dabek, and Hari Balakrishnan. Chord: A scalable peer-to-peer lookup service for Internet applications. IEEE/ACM Transactions on Networking, 11(1):17-32, February 2003.

[Kaa] Frans Kaashoek. Slides on "Peer-to-peer computing research: a fad?". MIT.

[LOAH03] Luc Onana Alima, Sameh El-Ansary, Per Brand, and Seif Haridi. DKS(N,k,f): A family of low communication, scalable and fault-tolerant infrastructures for P2P applications. In The 3rd International Workshop on Global and Peer-to-Peer Computing on Large Scale Distributed Systems (CCGRID 2003), May 2003.

[LOAH04a] Luc Onana Alima, Ali Ghodsi, and Seif Haridi. A framework for structured peer-to-peer overlay networks. In LNCS volume of the post-proceedings of the Global Computing 2004 workshop. Springer-Verlag, 2004.

[LOAH04b] Luc Onana Alima, Ali Ghodsi, Per Brand, and Seif Haridi. Multicast in DKS(N,k,f) overlay networks. In The 7th International Conference on Principles of Distributed Systems (OPODIS 2003). Springer-Verlag, 2004.

[MD88] P. Mockapetris and K. J. Dunlap. Development of the Domain Name System. In Proc. ACM SIGCOMM (Stanford, CA, 1988), pages 123-133, 1988.

[Nap] Napster. Open source Napster server, 2002. http://opennap.sourceforge.net.

[SEAH03] Sameh El-Ansary, Luc Onana Alima, Per Brand, and Seif Haridi. Efficient broadcast in structured P2P networks. In 2nd International Workshop on Peer-to-Peer Systems (IPTPS'03), Berkeley, CA, USA, February 2003.

[Sta95] Secure Hash Standard. Department of Commerce/NIST, National Technical Information Service, Springfield, VA, April 1995.