Link Reversal Algorithms
Jennifer L. Welch
[Welch and Walter, 2012]


What is Link Reversal?

- A distributed algorithm design technique
- Used in solutions for a variety of problems: routing, leader election, mutual exclusion, scheduling, resource allocation, ...
- Model the problem as a directed graph and reverse the direction of links appropriately
- Use local knowledge to decide which links to reverse

Outline

- Routing in a Graph: Correctness
- Routing in a Graph: Complexity
- Routing and Leader Election in a Distributed System
- Mutual Exclusion in a Distributed System
- Scheduling in a Graph
- Resource Allocation in a Distributed System

Routing [Gafni & Bertsekas 1981]

- An undirected connected graph represents the communication topology of a system
- There is a unique destination node
- Assign virtual directions to the graph edges (links) such that if nodes forward messages over the links, they reach the destination
- The directed version of the graph (an orientation) must be acyclic and have the destination as its only sink
- Thus every node has a path to the destination

Routing Example

[Figure: a graph with destination D and vertices 1-6, oriented so that D is the unique sink]

Mending Routes

- What happens if some edges go away? We might need to change the virtual directions on some remaining edges (reverse some links)
- More generally, starting with an arbitrary directed graph, each node should decide independently which of its incident links to reverse

Mending Routes Example

[Figure: the graph with destination D and vertices 1-6 after some edges fail, with some remaining links reversed to restore routes to D]

Sinks

- A vertex with no outgoing links is a sink
- The property of being a sink can be detected locally
- A sink can then reverse some of its incident links
- This is the basis of several algorithms

Full Reversal Routing Algorithm

Input: directed graph G with destination vertex D

Let S(G) be the set of sinks in G other than D.

while S(G) is nonempty do
    reverse every link incident on a vertex in S(G)
    (G now refers to the resulting directed graph)
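The loop above can be sketched in Python; this is an illustrative reading of the pseudocode, not code from the slides, and the edge-set representation and the name `full_reversal` are our own.

```python
def full_reversal(links, dest):
    """Iterated Full Reversal: while some vertex other than dest is a
    sink, every such sink reverses all of its incident links."""
    links = set(links)                       # directed edges (u, v) meaning u -> v
    nodes = {x for e in links for x in e}
    while True:
        # a sink has no outgoing link; dest never reverses
        sinks = {v for v in nodes - {dest}
                 if not any(u == v for (u, _) in links)}
        if not sinks:
            return links                     # dest is now the only sink
        # reverse every link whose head is a sink
        # (both endpoints of a link can never be sinks simultaneously)
        links = {(w, u) if w in sinks else (u, w) for (u, w) in links}

# On the chain D -> 1 -> 2, vertex 2 reverses, then 1, then 2 again,
# until every vertex has a directed path to D:
# full_reversal({('D','1'), ('1','2')}, 'D') == {('1','D'), ('2','1')}
```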

Full Reversal (FR) Routing Example

[Figure: six snapshots of the graph with destination D and vertices 1-6, showing successive FR iterations until D is the unique sink]

Why Does FR Terminate?

Suppose it does not.
- Let W be the vertices that take infinitely many steps.
- Let X be the vertices that take finitely many steps; X includes D.
- Consider neighboring nodes w in W and x in X.
- Consider the first step by w after the last step by x: the link is now directed w -> x and stays that way forever.
- Then w always has an outgoing link, so it is never again a sink and cannot take any more steps, a contradiction.

Why is FR Correct?

Assume the input graph is acyclic.
- Acyclicity is preserved at each iteration: any new cycle would have to include a vertex that just took a step, but such a vertex is now a source (it has no incoming links).
- When FR terminates, no vertex except possibly D is a sink.
- A DAG must have at least one sink: if there were no sink, a cycle could be constructed by repeatedly following outgoing links.
- Thus the output graph is acyclic and D is its unique sink.

Pair Algorithm

FR can be implemented by having each vertex v keep an ordered pair (c,v), the height (or vertex label) of v:
- c is an integer counter that can be incremented
- v is the id of vertex v

View the link between v and u as directed from the vertex with the larger height to the vertex with the smaller height (compare pairs lexicographically).

If v is a sink, then v sets c to be 1 larger than the maximum counter of all of v's neighbors.
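A single sink step of the Pair algorithm can be sketched as follows (a hedged illustration: `pair_step` and the dict representation of heights are our own, not from the slides):

```python
def pair_step(heights, v, neighbors):
    """One Pair-algorithm step: sink v raises its counter above all of
    its neighbors' counters.  heights maps vertex id -> (c, id); the
    link between two vertices points from larger to smaller height,
    compared lexicographically."""
    assert all(heights[v] < heights[u] for u in neighbors), "v must be a sink"
    new_c = 1 + max(heights[u][0] for u in neighbors)
    heights[v] = (new_c, v)

# Vertex 0 is the sink (smallest height); after the step its height
# (2, 0) exceeds both neighbors', so all its links point outward:
heights = {0: (0, 0), 1: (0, 1), 2: (1, 2)}
pair_step(heights, 0, [1, 2])
# heights[0] is now (2, 0)
```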

Pair Algorithm Example

[Figure: three snapshots of a four-vertex graph with destination 0, showing pair heights such as (0,1), (0,2), (1,0), (2,1), (2,3) evolving as a sink updates its counter to obtain height (3,2)]

Partial Reversal Routing Algorithm

- Try to avoid repeated reversals of the same link
- Vertices keep track of which incident links have been reversed recently
- Link (u,v) is reversed by v iff the link has not been reversed by u since the last iteration in which v took a step

Partial Reversal (PR) Routing Example

[Figure: six snapshots of the graph with destination D and vertices 1-6, showing successive PR iterations until D is the unique sink]

Why is PR Correct?

- Termination can be proved as for FR; the difference is that it might take two steps by w after the last step by x until the link is directed w -> x.
- Preservation of acyclicity is more involved and is deferred until later.

Triple Algorithm

PR can be implemented by having each vertex v keep an ordered triple (a,b,v), the height (or vertex label) of v:
- a and b are integer counters
- v is the id of node v

View the link between v and u as directed from the vertex with the larger height to the vertex with the smaller height (compare triples lexicographically).

If v is a sink, then v:
- sets a to be 1 greater than the smallest a of all its neighbors
- sets b to be 1 less than the smallest b of all its neighbors with the new value of a (if there are none, b is unchanged)
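The two-part update can be sketched the same way as for the Pair algorithm (again an illustrative helper, not code from the slides):

```python
def triple_step(heights, v, neighbors):
    """One Triple-algorithm (Partial Reversal) step for sink v.
    heights maps vertex id -> (a, b, id), compared lexicographically."""
    assert all(heights[v] < heights[u] for u in neighbors), "v must be a sink"
    new_a = 1 + min(heights[u][0] for u in neighbors)
    # b becomes one less than the smallest b among neighbors that
    # already have the new a value; if there are none, b is unchanged
    bs = [heights[u][1] for u in neighbors if heights[u][0] == new_a]
    new_b = min(bs) - 1 if bs else heights[v][1]
    heights[v] = (new_a, new_b, v)

# Sink 3 raises a just above the smallest neighboring a, then drops b
# below neighbor 2's b, ending up between its two neighbors' heights
# instead of above both (the point of the partial reversal):
heights = {1: (0, 0, 1), 2: (1, 0, 2), 3: (0, -1, 3)}
triple_step(heights, 3, [1, 2])
# heights[3] is now (1, -1, 3)
```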

Triple Algorithm Example

[Figure: three snapshots of a four-vertex graph with destination 0 (height (0,1,0)) and triple heights such as (0,0,1), (0,0,2), (0,2,3), (1,0,1), with a sink updating its height to (1,-1,2)]

General Vertex Label Algorithm

A generalization of the Pair and Triple algorithms. Assign a label to each vertex such that:
- labels are from a totally ordered, countably infinite set
- the new label for a sink depends only on the old labels of the sink and its neighbors
- the sequence of labels taken on by a vertex increases without bound

Termination and acyclicity preservation, and thus correctness, can be proved.

Binary Link Labels Routing [Charron-Bost et al. SPAA 2009]

An alternate way to implement and generalize FR and PR: instead of unbounded vertex labels, apply binary link labels to the input DAG. Link directions are independent of the labels (in contrast to algorithms using vertex labels).

Algorithm for a sink:
- if at least one incident link is labeled 0, then reverse all incident links labeled 0 and flip the labels on all incident links
- if no incident link is labeled 0, then reverse all incident links but change no labels
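The sink step can be sketched as follows; the representation (directed edges as pairs, labels keyed by the unordered pair) and the name `bll_step` are our own illustrative choices.

```python
def bll_step(v, links, labels):
    """One Binary-Link-Labels step for sink v, mutating links and
    labels in place.  links: set of directed edges (u, w) meaning
    u -> w; labels: dict mapping frozenset({u, w}) to 0 or 1."""
    incident = [(u, w) for (u, w) in links if v in (u, w)]
    assert all(w == v for (_, w) in incident), "v must be a sink"
    zeros = [e for e in incident if labels[frozenset(e)] == 0]
    if zeros:
        for (u, w) in zeros:             # reverse only the 0-labelled links
            links.remove((u, w))
            links.add((w, u))
        for e in incident:               # flip the label on every incident link
            labels[frozenset(e)] ^= 1
    else:
        for (u, w) in incident:          # no 0-labelled link: reverse everything
            links.remove((u, w))
            links.add((w, u))

# Sink 2 has one 0-labelled link (to 1): that link reverses and
# both incident labels flip; the 1-labelled link keeps its direction.
links = {(1, 2), (3, 2)}
labels = {frozenset({1, 2}): 0, frozenset({2, 3}): 1}
bll_step(2, links, labels)
# links is now {(2, 1), (3, 2)}
```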

Binary Link Labels Example

[Figure: three snapshots of a four-vertex graph with destination 0, showing links labeled 0 and 1 being reversed and relabeled as sinks take steps]

Why is BLL Correct?

- Termination can be proved very similarly to termination for PR.
- What about acyclicity preservation? It depends on the initial labeling:

[Figure: two labelings of the same four-vertex graph (vertices 0-3), illustrating that some initial labelings preserve acyclicity and others do not]

Conditions on Initial Labeling

- All labels the same: all 1's gives Full Reversal; all 0's gives Partial Reversal
- Every vertex has all of its incoming links labeled the same (a "uniform" labeling)
- Both of the above are special cases of a more general condition that is necessary and sufficient for preserving acyclicity

What About Complexity?

- Busch et al. (2003, 2005) initiated the study of the performance of link reversal routing
- Work complexity of a vertex: number of steps taken by the vertex
- Global work complexity: sum of the work complexities of all vertices
- Time complexity: number of iterations, assuming all sinks take a step in each iteration (the "greedy" execution)

Worst-Case Work Complexity Bounds [Busch et al.]

A bad vertex is one with no (directed) path to the destination.

- Pair algorithm (Full Reversal): for every input, global work complexity is O(n^2), where n is the number of initial bad vertices; for every n, there exists an input with n bad vertices and global work complexity Ω(n^2)
- Triple algorithm (Partial Reversal): same bounds as the Pair algorithm

Exact Work Complexity Bounds

A more fine-grained question: given any input graph and any vertex in that graph, exactly how many steps does that vertex take?

- Busch et al. answered this question for FR.
- Charron-Bost et al. answered it for BLL (as long as the labeling satisfies the Acyclicity Condition), which includes FR and PR.

Definitions [Charron-Bost et al. SPAA 2009]

Let X = <v1, v2, ..., vk> be a chain in the labeled input DAG (a series of vertices such that either (vi, vi+1) or (vi+1, vi) is a link).

- r: number of links that are labeled 1 and right-way (i.e., (vi, vi+1) is a link)
- s: number of occurrences of vertices such that the two adjacent links are both incoming and labeled 0
- Res: 1 if the last link in X is labeled 0 and right-way, else 0
- ω = 2(r + s) + Res
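The quantities r, s, Res, and ω can be computed mechanically; this is a hedged sketch using our own edge-set representation (directed edges as pairs, labels keyed by the unordered pair), not code from the paper.

```python
def omega(chain, links, labels):
    """Compute omega = 2(r + s) + Res for a chain in a labelled graph.
    chain: vertex list <v1,...,vk>; links: set of directed edges (u, w);
    labels: dict mapping frozenset({u, w}) to 0 or 1."""
    lab = lambda u, w: labels[frozenset((u, w))]
    pairs = list(zip(chain, chain[1:]))
    # r: links that are labelled 1 and right-way (directed along the chain)
    r = sum(1 for (u, w) in pairs if (u, w) in links and lab(u, w) == 1)
    # s: interior vertices whose two adjacent links are both incoming
    # and both labelled 0
    s = sum(1 for i in range(1, len(chain) - 1)
            if (chain[i - 1], chain[i]) in links
            and (chain[i + 1], chain[i]) in links
            and lab(chain[i - 1], chain[i]) == 0
            and lab(chain[i + 1], chain[i]) == 0)
    u, w = pairs[-1]
    res = 1 if (u, w) in links and lab(u, w) == 0 else 0
    return 2 * (r + s) + res
```

For instance, on the chain <D, a, b> with links D -> a and b -> a (hypothetical names), the all-1's labeling gives r = 1, s = 0, Res = 0, so ω = 2; the all-0's labeling gives r = 0, s = 1 (vertex a has two incoming 0-labelled links), Res = 0, so again ω = 2.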

Example of Definitions

- For chain <D,7,6,5>: r = 1, s = 0, Res = 1, ω = 3
- For chain <D,1,2,3,4,5>: r = 2, s = 1, Res = 0, ω = 6

[Figure: a labeled DAG with destination D and vertices 1-8, on which the two chains above are evaluated]

Outline of BLL Work Complexity

- Claim 1: A step taken by v decreases the ω value of all chains from D to v by the same amount; steps taken by other vertices have no effect on the ω value of chains from D to v.
- Claim 2: When the algorithm terminates, at least one chain from D to v is the reverse of a path from v to D; the value of ω for this chain is 0, since it has no right-way links.
- Thus the number of steps by v is the number required for the reverse of a D-to-v chain to become a path for the first time.
- We need to quantify how ωmin decreases when v takes a step (ωmin is the minimum, over all chains X from D to v, of ω for X).

Grouping the Nodes

- S: the set of all sinks whose links are all labeled 0
- N: the set of all nodes whose incoming links are all labeled 1
- O: all other nodes

[Figure: a labeled graph with each vertex assigned to group S, N, or O]

Finishing BLL Work Complexity

Claim 3: Let X be a D-to-v chain. When v takes a step,
- if v is in S, then ω for X decreases by 1 and v moves to N
- if v is in N, then ω for X decreases by 2 and v stays in N
- if v is in O, then ω for X decreases by 1 and v stays in O

Theorem: The number of steps taken by v is
- (ωmin + 1)/2 if v is in S initially
- ωmin/2 if v is in N initially
- ωmin if v is in O initially

Work Complexity for FR

Corollary: For FR (the all-1's labeling), the work complexity of vertex v is the minimum, over all chains from D to v, of r, the number of links in the chain that are right-way (directed away from D).

Worst-case graph for global work complexity: the chain D -> 1 -> 2 -> ... -> n.
- vertex i has work complexity i
- global work complexity is therefore Θ(n^2)

Work Complexity for PR

Corollary: For PR (the all-0's labeling), the work complexity of vertex v is
- the minimum, over all D-to-v chains, of s + Res, if v is a sink or a source
- the minimum, over all D-to-v chains, of 2s + Res, if v is neither a sink nor a source

Worst-case graph for global work complexity: the chain D 1 2 3 ... n.
- work complexity of vertex i is Θ(i)
- global work complexity is Θ(n^2)

Comparing FR and PR

Looking at worst-case global work complexity shows no difference: both are quadratic in the number of bad nodes. But game theory can be used to show some meaningful differences (Charron-Bost et al. ALGOSENSORS 2009):
- the global work complexity of FR can be larger than optimal (w.r.t. all uniform labelings) by a factor of Θ(n)
- the global work complexity of PR is never more than twice optimal

Another advantage of PR over FR:
- in PR, if k links are removed, each bad vertex takes at most 2k steps
- in FR, if 1 link is removed, a vertex might have to take n-1 steps

Time Complexity

- Time complexity is the number of iterations in the greedy execution.
- Busch et al. observed that time complexity cannot exceed global work complexity; thus O(n^2) iterations for both FR and PR.
- Busch et al. also showed graphs on which FR and PR require Ω(n^2) iterations.
- Charron-Bost et al. (2011) derived, for any graph, an exact formula for the last iteration in which any vertex takes a step in BLL.

FR Time Complexity Overview

- Let Wv(t) be the number of steps v has taken by iteration t.
- Identify a recurrence relation for Wv(t), based on understanding how nodes and their neighbors take turns being sinks; this recurrence is linear in the min-plus algebra.
- Thus the set of recurrences for all the vertices can be represented as a matrix.
- This matrix can be interpreted as the adjacency matrix of a graph H.
- Restate the value of Wv(t) in terms of properties of paths in H.
- Derive a formula for the time complexity of vertex v based on properties of paths in H.
- Translate that formula into properties of the original input graph.

FR Time Complexity

Theorem: For every bad vertex v, the termination time of v is
1 + max{len(X) : X is a chain ending at v with r = σv - 1},
where σv is the work complexity of v.

Worst-case graph for global time complexity:

[Figure: destination D attached to a cycle through vertices 1, ..., n]

Vertex n/2 has work complexity n/2; consider the chain that goes around the loop counter-clockwise n/2 - 1 times, starting and ending at n/2: it has r = n/2 - 1 and length Θ(n^2).

BLL Time Complexity

What about other link labelings? Transform to FR! In more detail: for every labeled input graph G (satisfying the Acyclicity Condition), construct another graph T(G) such that
- for every execution of BLL on G, there is a "corresponding" execution of FR on T(G)
- the time complexities of relevant vertices in corresponding executions are the same

Idea of Transformation

If a vertex v is initially in category O (a sink with some links labeled 0 and some labeled 1, or a non-sink with an incoming link labeled 0), then its incident links are partitioned into two sets:
- all links in one set reverse at odd-numbered steps by v
- all links in the other set reverse at even-numbered steps by v

The transformation replaces each vertex v in O with two vertices, one corresponding to odd steps by v and the other to even steps, and inserts appropriate links.

PR Time Complexity

Theorem: For every bad vertex v, the termination time of v is
- 1 + max{len(X) : X is a chain ending at v with s + Res = σv - 1}, if v is a sink or a source initially
- 1 + max{len(X) : X is a chain ending at v with 2s + Res = σv - 1}, otherwise

Worst-case graph for global time complexity: the chain D 1 2 3 ... n/2 n/2+1 ... n.

Vertex n/2 has work complexity n/2. Consider the chain that starts at n/2, ends at n/2, and goes back and forth between n/2 and n, making (n-2)/4 round trips. 2s + Res for this chain is n/2 - 1, and its length is Θ(n^2).

FR vs. PR Again

- On the chain on the previous slide, PR has quadratic time complexity.
- But FR on that chain has linear time complexity; in fact, FR has linear time complexity on any tree.
- On the chain on the previous slide, PR has slightly better global work complexity than FR.

From Graph to Distributed System

To adapt the previous ideas to a distributed system:
- a processor is a vertex
- a communication channel is an edge (link)

Issues to be overcome:
- Neighboring processors need to communicate to agree on which way the link between them should be directed, despite message delays and losses
- Topology can change due to movement and failures; the system might not always be connected

Routing in a Dynamic System

In any execution that experiences a finite number of topology changes, after the last topology change:
- every node in the same connected component as D (the destination) should have a path to D
- every node not in the same component as D should stop trying to find a route to D or forward messages to D

What’s Wrong with FR?

51

D

21

3

D

21

3

D

21

3

D

21

3

TORA [Park & Corson 1997]

- Modify the generalized Gafni & Bertsekas algorithm that uses increasing vertex labels.
- Vertex labels, or heights, are 5-tuples; one entry is the current time, hence Temporally Ordered Routing Algorithm.
- Every processor in the same connected component as D eventually has a path to D in the directed version of the communication graph induced by the heights.
- Clever use of the additional entries in the heights allows nodes to tell when they are partitioned from D and should stop participating.

Heights in TORA

A height is a 5-tuple [τ, oid, r, δ, i]:
- reference level (τ, oid, r):
  - τ: time this reference level was started
  - oid: id of the node originating this reference level
  - r: reflection bit
- delta (δ, i):
  - δ: orders nodes with the same reference level
  - i: id of the node, breaks ties

TORA Overview

- Route Creation: use standard spanning-tree construction ideas to set reference levels to (0,0,0) and deltas to distances from D
- Route Maintenance and Partition Detection: see next slide
- Route Erasure: when a partition is detected, flood "clear" messages throughout the component

TORA Route Maintenance

If node i loses its last outgoing link:
- due to a link failure (Case Generate): set the reference level to (current time, i, 0), a full reversal
- due to a height change, and the neighbors don't all have the same reference level (Case Propagate): adopt the maximum reference level and set δ to effect a partial reversal
- the neighbors all have the same reference level with r = 0 (Case Reflect): adopt a new reference level with r set to 1 and δ set to 0 (a full reversal)
- the neighbors all have the same reference level with r = 1 and oid = i (Case Detect): partition! Start the process of erasing routes.
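The case analysis can be sketched as a dispatch function. This is an illustrative reading of the slide, not Park & Corson's code: the δ computation for Propagate is omitted, and the final fallthrough (all neighbors at the same reflected level that i did not originate, a sub-case the slide does not list) is our own hedged addition.

```python
def route_maintenance(i, now, lost_by_failure, nbr_refs):
    """Choose the TORA maintenance case for node i after it loses its
    last outgoing link.  nbr_refs: the neighbors' reference levels,
    each a tuple (tau, oid, r).  Returns (case, new_reference_level)."""
    if lost_by_failure:
        return ('generate', (now, i, 0))      # start a new reference level
    if len(set(nbr_refs)) > 1:
        return ('propagate', max(nbr_refs))   # adopt the highest level
    tau, oid, r = nbr_refs[0]                 # all neighbors agree
    if r == 0:
        return ('reflect', (tau, oid, 1))     # reflect the level back
    if oid == i:
        return ('detect', None)               # own reflection returned: partition
    # sub-case not listed on the slide: treat as generating a new level
    return ('generate', (now, i, 0))
```

Tuples compare lexicographically in Python, which matches the temporal ordering of reference levels: a later τ dominates, with oid and r breaking ties.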

Apr 13, 2006 — Applications of Link Reversal Algorithms in MANETs

TORA Example - Partition

[Figure: five snapshots of a component with destination D and vertices 1-5 after D is disconnected, showing Cases Generate, Propagate, Reflect, and finally Detect]

TORA Discussion

- Works best with perfectly synchronized clocks.
- How to prove correctness? The Gafni & Bertsekas result does not directly apply, because of the asynchronous delay in updating neighbors about new heights.
- Other issues remain: partition detection, route creation, route erasure.
- Can be adapted to solve leader election: when a partition is detected, elect a new leader! (Cf. Ingram et al. 2009)

Link Reversal for Mutual Exclusion [van de Snepscheut 1987]

Goal: no-lockout mutual exclusion in a message-passing system with a tree communication topology.

Solution: pass around a unique "token" message.
- Impose logical directions on the communication channels such that the token holder is the unique sink.
- When a processor has the token, it can enter the critical section.
- When a processor needs the token, it sends a "request" message on its unique outgoing link, toward the token holder.
- When a processor receives a request, it remembers it in a FIFO queue and forwards it toward the token holder (if it is not already waiting).
- When the token holder responds to a request, it forwards the token to the neighbor at the head of the queue and reverses the direction of that link.

From a Tree to a DAG

For a general communication topology, the previous algorithm can be run on a spanning-tree overlay of the graph. However, this does not take advantage of the redundancy offered by additional links. Instead, direct all links in the graph:
- a request message can be forwarded on any outgoing link
- when a node receives the token, all its outgoing links are reversed, to make it a sink

Scheduling in a Graph [Barbosa & Gafni 1989]

What happens when the Full Reversal routing algorithm is executed without a destination? That is, every vertex in the graph does a reversal when it is a sink. Call this algorithm FRND (FR with No Destination).

When a vertex is a sink, it is said to be scheduled: it can take some action with the guarantee that none of its neighbors is scheduled at the same time.
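The greedy execution of FRND can be simulated in a few lines (an illustrative sketch; `frnd_greedy` and the edge-set representation are our own, not from the paper):

```python
def frnd_greedy(links, iterations):
    """Greedy FRND: in every iteration, each current sink reverses all
    of its links.  Returns the sequence of sink sets, one per iteration."""
    links = set(links)                       # directed edges (u, v) meaning u -> v
    nodes = {x for e in links for x in e}
    history = []
    for _ in range(iterations):
        sinks = {v for v in nodes if not any(u == v for (u, _) in links)}
        history.append(sinks)
        # every sink reverses all of its (incoming) links
        links = {(w, u) if w in sinks else (u, w) for (u, w) in links}
    return history

# On the path 0 -> 1 -> 2 the sink sets settle into the period
# {1}, {0, 2}: every vertex is scheduled infinitely often, and no
# two neighbors are ever scheduled together.
# frnd_greedy({(0, 1), (1, 2)}, 5) == [{2}, {1}, {0, 2}, {1}, {0, 2}]
```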

Behavior of FRND

Claim 1: FRND maintains acyclicity.
Proof: same as for FR.

Claim 2: Every vertex is a sink infinitely often.
Proof: By Claim 1, at each iteration there is at least one sink, so FRND never terminates. If some vertices take finitely many steps and some take infinitely many, the same argument used to show that FR terminates yields a contradiction.

Thus every vertex is scheduled infinitely often.

FRND Example

[Figure: six snapshots of a five-vertex graph (vertices 0-4) executing FRND; the pattern of sinks eventually repeats]

Behavior of FRND

Claim 3: In the greedy execution of FRND from an initial DAG, eventually the pattern of sinks becomes periodic.
Proof: Since the graph has only finitely many orientations, the orientation at some iteration i recurs at some later iteration j. The greedy execution is deterministic, so from then on the execution, and with it the sequence of sink sets, repeats.

Note: every vertex appears at least once in every period.

Q&A about FRND

- How long until the execution becomes periodic? At most a polynomial number of iterations [Malka & Rajsbaum 1991].
- How long is the period? At least 2 iterations, since neighbors cannot be sinks simultaneously; it can be as bad as exponential [Malka et al. 1993].
- How "fair" is the period? Every vertex takes the same number of steps...

Period is Fair

Claim 1: The difference in the number of steps taken by u and v at any iteration is at most the distance between them. The proof is by induction on the distance.

Claim 2: Every vertex takes the same number of steps in the period.
Proof: Suppose u appears a times and v appears b times in the period, with b > a. After the k-th execution of the period, u has taken ka steps and v has taken kb steps. Eventually kb - ka exceeds the distance between u and v, contradicting Claim 1.

Multiplicity and Concurrency

- The multiplicity of the period is the number of times that each vertex takes a step in the period.
- The concurrency is the ratio of the multiplicity m to the period length p, i.e., m/p.
- The concurrency is the fraction of iterations during which any given vertex takes steps: at most 1/2 and at least 1/n.


Concurrency Example

[Figure: a graph whose greedy FRND execution has period length 5 and multiplicity 2, so the concurrency is 2/5]

Exact Expression for Concurrency

Claim 1: For any initial orientation of a tree, the greedy execution of FRND reaches a periodic orientation with period length 2 and multiplicity 1, so the concurrency is 1/2.

Claim 2: For any periodic orientation G of a non-tree graph, the concurrency equals the minimum, over all (simple) circuits k in G, of the fraction of links in k that are right-way.

Choosing a Good Initial Orientation

- For trees, the initial orientation is unimportant: all orientations lead to a period with concurrency 1/2.
- For non-trees, the initial orientation can make a big difference to the concurrency. Consider a ring of n vertices, where n is even:
  - if initially there is just 1 sink, there will never be more than 1 sink in any orientation: the concurrency is 1/n
  - if initially every other vertex is a sink, the vertices keep alternating: the concurrency is 1/2
- Unfortunately, determining the best orientation is NP-complete!

Resource Allocation in a Distributed System [Chandy & Misra 1984]

The dining philosophers (or resource allocation) problem is a generalization of the mutual exclusion problem.
- Conflict graph: vertices correspond to the processors; an edge between i and j means that i and j compete for exclusive access to a resource.
- Ensure exclusion: no two neighbors in the conflict graph are in their critical sections simultaneously.
- Ensure fairness: every processor requesting access to its critical section is eventually granted access.

First Solution

Use FRND on the conflict graph: when a processor is a sink, it can enter its critical section.
- Every processor is a sink infinitely often.
- No two neighbors are sinks simultaneously.

Issues:
- How to adapt FRND to asynchronous message passing?
- Why bother a processor that is not interested in entering its critical section?

Chandy & Misra's Solution

- The key data structure is the precedence graph, a directed version of the conflict graph.
- The precedence graph is represented in a distributed fashion by having each processor keep a variable for each neighbor indicating who yields to whom; the variables are initialized so that the precedence graph is acyclic.
- Each pair of neighbors i and j share a token to ensure exclusion:
  - if i doesn't have the token when it wants to enter the critical section, it sends a request to j
  - j sends back the token immediately if j is in its remainder section, or if j is in its trying section and i has precedence over j; otherwise j defers the request from i

Chandy & Misra's Solution

- Thus the precedence graph is used to arbitrate between contending neighbors, but otherwise is ignored.
- Once i has all its tokens, it enters the critical section.
- When i leaves the critical section, it satisfies all deferred requests and does a full reversal in the precedence graph.

Correctness Ideas

- Management of the tokens ensures exclusion.
- By starting with an acyclic precedence graph and only modifying it with full reversals, the precedence graph is always acyclic: no deadlock can be caused by a cycle of processors waiting on each other.

Conclusion

Other applications of link reversal include:
- distributed queueing
- k-mutual exclusion
- publish/subscribe
- simulated annealing
- graph coloring
- neural networks

The appeal of the approach is using local knowledge to solve global problems.

References

- Barbosa and Gafni, "Concurrency in Heavily Loaded Systems," ACM TOPLAS 1989.
- Busch, Surapaneni and Tirthapura, "Analysis of Link Reversal Routing Algorithms for Mobile Ad Hoc Networks," SPAA 2003.
- Busch and Tirthapura, "Analysis of Link Reversal Routing Algorithms," SIAM JOC 2005.
- Chandy and Misra, "The Drinking Philosophers Problem," ACM TOPLAS 1984.
- Charron-Bost, Fuegger, Welch and Widder, "Full Reversal Routing as a Linear Dynamical System," SIROCCO 2011.
- Charron-Bost, Fuegger, Welch and Widder, "Partial is Full," SIROCCO 2011.
- Charron-Bost, Gaillard, Welch and Widder, "Routing Without Ordering," SPAA 2009.
- Charron-Bost, Welch and Widder, "Link Reversal: How to Play Better to Work Less," ALGOSENSORS 2009.
- Gafni and Bertsekas, "Distributed Algorithms for Generating Loop-Free Routes in Networks with Frequently Changing Topology," IEEE Trans. Comm. 1981.

- Ingram, Shields, Walter and Welch, "An Asynchronous Leader Election Algorithm for Dynamic Networks," IPDPS 2009.
- Malka, Moran and Zaks, "A Lower Bound on the Period Length of a Distributed Scheduler," Algorithmica 1993.
- Malka and Rajsbaum, "Analysis of Distributed Algorithms Based on Recurrence Relations," WDAG 1991.
- Park and Corson, "A Highly Adaptive Distributed Routing Algorithm for Mobile Wireless Networks," INFOCOM 1997.
- van de Snepscheut, "Fair Mutual Exclusion on a Graph of Processes," Distributed Computing 1987.
- Welch and Walter, "Link Reversal Algorithms," Synthesis Lectures on Distributed Computing Theory #8, Morgan & Claypool Publishers, 2012.
