
1

Concurrent Counting is harder than Queuing

Costas Busch, Rensselaer Polytechnic Institute

Srikanta Tirthapura, Iowa State University

2

Arbitrary graph

3

Distributed Counting

Some processors request a counter value

(Figure: four nodes each issue a count request.)

4

Distributed Counting

Final state

(Figure: the requesting nodes receive the distinct counts 1, 2, 3, 4.)

5

Distributed Queuing

Some processors perform enqueue operations

(Figure: four nodes issue Enq(A), Enq(B), Enq(C), and Enq(D) concurrently.)

6

Distributed Queuing

(Figure: final state — the elements A, B, C, D are linked from head to tail by Previous pointers: the head has Previous = nil, and every other element points to its predecessor.)

7

Applications

Counting:
- parallel scientific applications
- load balancing (counting networks)

Queuing:
- distributed directories for mobile objects
- distributed mutual exclusion

8

Ordered Multicast

Multicast with the condition: all messages are received at all nodes in the same order.

Either Queuing or Counting will do. Which is more efficient?

9

Queuing vs. Counting?

Both provide total orders.
Queuing = finding one's predecessor: needs only local knowledge.
Counting = finding one's rank: needs global knowledge.

10

Problem

Is there a formal sense in which Counting is a harder problem than Queuing?

Reductions don't seem to help.

11

Our Result

Concurrent Counting is harder than Concurrent Queuing on a variety of graphs, including many common interconnection topologies: the complete graph, the mesh, the hypercube, and perfect binary trees.

12

Model

Synchronous system G = (V, E); edges have unit delay.

Congestion: each node can process only one message in a single time step.

Concurrent one-shot scenario: a set R ⊆ V of nodes issue queuing (or counting) operations at time zero; no operations are added later.

13

Cost Model

C_Q(v): delay until v gets back its queuing result.

Cost of algorithm A on request set R:
  C_Q(A, R) = Σ_{v ∈ R} C_Q(v)

Queuing Complexity = min_A max_{R ⊆ V} C_Q(A, R)

Define Counting Complexity similarly.

14

Lower Bounds on Counting

For arbitrary graphs:
  Counting Cost = Ω(n log* n)

For graphs with diameter D:
  Counting Cost = Ω(D²)

15

Theorem: For graphs with diameter D, Counting Cost = Ω(D²).

Proof: Consider some arbitrary algorithm for counting.

16

Take a shortest path of length D in the graph.

17

Make the nodes along this path count (nodes 1, 2, …, 8 in the figure).

18

The node that receives count k decides after at least (k − 1)/2 time steps, since it needs to be aware of the k − 1 other processors.

19

Counting Cost ≥ Σ_{k=1}^{D} (k − 1)/2 = Ω(D²)

End of Proof
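The Ω(D²) sum above can be checked numerically; a quick sketch (Python, with the helper name and the choice of D our own):

```python
# Sum the per-node delay lower bounds along a path of length D:
# the node receiving count k waits at least (k - 1) / 2 steps.
def counting_cost_lower_bound(D: int) -> float:
    return sum((k - 1) / 2 for k in range(1, D + 1))

D = 100
bound = counting_cost_lower_bound(D)
assert bound == D * (D - 1) / 4  # closed form: grows as Theta(D^2)
```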

20

Theorem: For arbitrary graphs, Counting Cost = Ω(n log* n).

Proof: Consider some arbitrary algorithm for counting.

21

It suffices to prove it for the complete graph with n nodes: any algorithm on any graph with n nodes can be simulated on the complete graph.

22

Initial State

The initial state affects the outcome. (Figure: red nodes count, blue nodes don't; v is a distinguished node.)

23

Final State

(Figure: the red counting nodes receive the counts 1–5; blue nodes don't count.)

24

Initial State

(Figure: a different initial state — red nodes count, blue nodes don't.)

25

Final State

(Figure: now the red counting nodes receive the counts 1–8.)

26

Let A(v) be the set of nodes whose input may affect the decision of v.

27

Suppose that there is an initial state for which v decides k. Then: |A(v)| ≥ k.

28

(Figure: two initial states that agree on A(v).) These two initial states give the same result for v.

29

If |A(v)| < k, then v would decide less than k. Thus, |A(v)| ≥ k.

30

Suppose that v decides at time t. We show: |A(v)| ≤ 2^2^…^2, a tower of t twos.

31

Suppose that v decides k at time t. Then k ≤ |A(v)| ≤ 2^2^…^2 (t twos), hence t ≥ log* k.

32

Cost of node v: t ≥ log* k.

If n nodes wish to count:
  Counting Cost ≥ Σ_{k=1}^{n} log* k = Ω(n log* n)

33

A(v, t): the nodes that affect v up to time t.
B(v, t): the nodes that v affects up to time t.

a(t) = max_x |A(x, t)|    b(t) = max_x |B(x, t)|

34

v

)1,( tvA

v

)1,( tvB

After , the sets grow1t

1)( ta 1)( tb

35

(Figure: A(v, t) ⊆ A(v, t+1).)

36

Suppose z is eligible to send a message at time t + 1: there is an initial state such that z sends a message to v. Then A(z, t) contributes to A(v, t+1).

37

Suppose that A(s, t) ∩ A(z, t) ≠ ∅, where s and z are both eligible to send a message at time t + 1. Then there is an initial state such that both send a message to v.

38

However, v can receive only one message at a time.

39

Therefore: A(s, t) ∩ A(z, t) = ∅.

40

The number of nodes like s is at most a(t) · b(t), where a(t) = max_x |A(x, t)| and b(t) = max_x |B(x, t)|.

41

Therefore: |A(v, t+1)| ≤ |A(v, t)| + a(t) · a(t) · b(t)

42

Thus:
  a(t+1) ≤ a(t)(1 + a(t) b(t))

We can also show:
  b(t+1) ≤ b(t)(1 + 2 a(t))

Which gives: a(t) ≤ 2^2^…^2, a tower of t twos.

End of Proof
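The recurrences a(t+1) ≤ a(t)(1 + a(t)b(t)) and b(t+1) ≤ b(t)(1 + 2a(t)) can be iterated at equality to see the tower-of-twos growth; a sketch assuming the starting values a(0) = b(0) = 1 (a node initially affects only itself — an assumption of ours, which costs one extra tower level of slack):

```python
def tower(t: int) -> int:
    """2^2^...^2 with t twos."""
    r = 1
    for _ in range(t):
        r = 2 ** r
    return r

# Iterate the recurrences at equality from a(0) = b(0) = 1 and check the
# values stay below a tower of twos (with one extra level for these
# starting values).  The tuple assignment uses the old a on both sides.
a, b = 1, 1
for t in range(1, 5):
    a, b = a * (1 + a * b), b * (1 + 2 * a)
    assert a <= tower(t + 1)
```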

43

Upper Bound on Queuing

For graphs with spanning trees of constant degree:
  Queuing Cost = O(n log n)

For graphs whose spanning trees are lists or perfect binary trees:
  Queuing Cost = O(n)

44

An arbitrary graph

45

Spanning tree

46

Spanning tree

47

Distributed Queue

(Figure: A is the only element; Head = Tail = A, Previous = Nil.)

48

(Figure: enq(B) — B's request heads toward the tail; B's Previous is still unknown, A's Previous = Nil.)

49–52

(Animation: the enq(B) request is forwarded hop by hop toward the tail; each frame repeats the labels of the previous one.)

53

(Figure: A informs B — B's Previous = A; now Head = A, Tail = B.)

54

Concurrent Enqueue Requests

(Figure: B and C issue enq(B) and enq(C) concurrently; both Previous pointers are unknown; Head = Tail = A.)

55–56

(Animation: the two requests race along the tree toward the tail.)

57

(Figure: C reaches the tail first — C's Previous = A, Tail = C; B's request continues, its Previous still unknown.)

58

(Animation: enq(B) is now forwarded toward the new tail C.)

59

(Figure: C informs B — B's Previous = C, Tail = B; the queue is now A, C, B from head to tail.)

60

(Figure: final state — Head = A, Previous pointers C → A and B → C, Tail = B.)

61

Paths of enqueue requests

(Figure: the same final state, showing the tree paths that the enqueue requests traveled.)
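The behavior in these frames can be modeled as a tail swap: each request locates the current tail, installs itself as the new tail, and the old tail informs it of its predecessor. A minimal sequential sketch (the real protocol performs the swap via messages on the spanning tree; the class and method names are our own, not the paper's):

```python
class DistributedQueue:
    """Toy model of distributed queuing: enqueue = swap the tail pointer."""

    def __init__(self, origin):
        self.tail = origin
        self.previous = {origin: None}  # head has Previous = Nil

    def enqueue(self, x):
        pred = self.tail          # find the current tail
        self.tail = x             # the new request becomes the tail
        self.previous[x] = pred   # pred "informs" x that it is x's predecessor
        return pred

# The slides' race: C reaches the tail first, then B.
q = DistributedQueue("A")
q.enqueue("C")  # C's Previous = A
q.enqueue("B")  # B's Previous = C
assert q.previous == {"A": None, "C": "A", "B": "C"}
assert q.tail == "B"
```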

62

Nearest-Neighbor TSP tour on Spanning Tree

(Figure: requests at nodes A–F; the origin is the first element in the queue.)

63

Visit the closest unused node in the tree. (Figure: nodes A–F.)

64

Visit the closest unused node in the tree. (Figure: the tour extends to the next nearest node.)

65

Nearest-Neighbor TSP tour

(Figure: the resulting tour over A–F.)

66

For a spanning tree of constant degree:
  Queuing Cost ≤ 2 × Nearest-Neighbor TSP length

[Herlihy, Tirthapura, Wattenhofer, PODC 2001]

67

If a weighted graph satisfies the triangle inequality:
  Nearest-Neighbor TSP length ≤ log n × Optimal TSP length

[Rosenkrantz, Stearns, Lewis, SICOMP 1977]

68

Weighted graph of distances

(Figure: the requesting nodes A–F on the spanning tree, and the complete graph over A–F whose edge weights are the tree distances.)

69

Satisfies the triangle inequality: for the edges e₁, e₂, e₃ of any triangle, w(e₁) ≤ w(e₂) + w(e₃). (Figure: in the triangle A, C, D, the direct distance 2 is at most 1 + 1.)

70

Nearest-Neighbor TSP tour

(Figure: the greedy tour over A–F on the weighted distance graph.) Length = 8.

71

Optimal TSP tour

(Figure: the optimal tour over the same weighted graph.) Length = 6.

72

It can be shown that:
  Optimal TSP length ≤ 2n   (n = nodes in the graph)

since every edge of the spanning tree is visited twice.

73

Therefore, for a constant-degree spanning tree:
  Queuing Cost = O(Nearest-Neighbor TSP)
               = O(Optimal TSP × log n)
               = O(n log n)

74

For special cases we can do better. If the spanning tree is a list or a balanced binary tree:
  Queuing Cost = O(n)

75

Graphs with a Hamiltonian path have spanning trees which are lists: the complete graph, the mesh, the hypercube. For these:
  Queuing Cost = O(n)
  Counting Cost = Ω(n log* n)

76

Theorem: If the spanning tree is a list, then Queuing Cost = O(n).

77

Proof: Queuing Cost = O(Nearest-Neighbor TSP), so it suffices to bound the Nearest-Neighbor TSP length on the list.

78

(Figure: consecutive hops in the same direction along the list — after a hop of length a, the next hop in that direction has length b ≥ 2a.)

79

Even sides: the lengths double, so with n nodes the total length is ≤ 2n.

80

Odd sides: the lengths double here too, so with n nodes the total length is ≤ 2n.

81

Even + odd sides: with n nodes, total length ≤ 2n + 2n = 4n.

End of proof
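The 4n bound for lists can be exercised empirically; a sketch that runs the nearest-neighbor rule on random subsets of a unit-spaced list of n nodes (breaking distance ties toward the smaller position is our arbitrary choice):

```python
import random

def nn_tour_length(positions, start):
    """Nearest-neighbor tour length over request positions on a line."""
    unvisited, cur, total = set(positions) - {start}, start, 0
    while unvisited:
        # closest unvisited requester; ties broken toward the smaller position
        nxt = min(unvisited, key=lambda p: (abs(p - cur), p))
        total += abs(nxt - cur)
        unvisited.remove(nxt)
        cur = nxt
    return total

random.seed(0)
for _ in range(200):
    n = random.randint(2, 60)                      # list of n nodes: positions 0..n-1
    pts = random.sample(range(n), random.randint(2, n))
    start = random.choice(pts)                     # the origin of the queue
    assert nn_tour_length(pts, start) <= 4 * n     # the deck's 2n + 2n bound
```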
