12. Transitive Closure, All Pairs Shortest Paths
Reflexive transitive closure [Ottman/Widmayer, Kap. 9.2; Cormen et al, Kap. 25.2], Floyd-Warshall Algorithm [Ottman/Widmayer, Kap. 9.5.3; Cormen et al, Kap. 25.2]
290
Adjacency Matrix Product
[Figure: directed graph on nodes 1–5]

A_G =
( 0 1 1 1 0
  0 0 0 0 0
  0 1 0 1 1
  0 0 0 0 0
  0 0 1 0 1 )

B := (A_G)² =
( 0 1 0 1 1
  0 0 0 0 0
  0 0 1 0 1
  0 0 0 0 0
  0 1 1 1 2 )
291
Interpretation
Theorem 12
LetG = (V,E) be a graph and k ∈ N. Then the element a(k)i,j of the matrix
(a(k)i,j )1≤i,j≤n = (AG)k provides the number of paths with length k from vi
to vj .
292
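Theorem 12 can be sanity-checked with a short sketch in Python (the helper name `mat_mul` is ours, not from the slides): squaring the adjacency matrix of the example graph above reproduces the matrix B with the length-2 path counts.

```python
# Sketch: checking Theorem 12 on the example graph above. Entry (i, j)
# of A^k counts the directed paths of length k from v_(i+1) to v_(j+1)
# (0-based indices here).

A = [[0, 1, 1, 1, 0],
     [0, 0, 0, 0, 0],
     [0, 1, 0, 1, 1],
     [0, 0, 0, 0, 0],
     [0, 0, 1, 0, 1]]

def mat_mul(X, Y):
    """Plain matrix product over the integers."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

B = mat_mul(A, A)
# B[4][4] == 2: the two length-2 paths v5 -> v3 -> v5 and v5 -> v5 -> v5
# (v5 has a loop in this example graph).
```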
Graphs and Relations
Graph G = (V, E) with adjacency matrix A_G ≙ relation E ⊆ V × V over V

reflexive ⇔ a_{i,i} = 1 for all i = 1, …, n (loops)
symmetric ⇔ a_{i,j} = a_{j,i} for all i, j = 1, …, n (undirected)
transitive ⇔ (u, v) ∈ E, (v, w) ∈ E ⇒ (u, w) ∈ E (reachability)
295
Reflexive Transitive Closure

Reflexive transitive closure of G ⇔ reachability relation E*: (v, w) ∈ E* iff there exists a path from node v to w.
A_G =
( 0 1 0 0 1
  0 0 0 1 0
  0 1 0 0 0
  0 0 1 0 0
  0 0 0 1 0 )

G = (V, E)   [Figure: directed graph on nodes 1–5]

⇒

( 1 1 1 1 1
  0 1 1 1 0
  0 1 1 1 0
  0 1 1 1 0
  0 1 1 1 1 )

G* = (V, E*)

296
Algorithm A · A
Input: (adjacency) matrix A = (a_ij), i, j = 1…n
Output: matrix product B = (b_ij), i, j = 1…n, B = A · A

B ← 0
for r ← 1 to n do
    for c ← 1 to n do
        for k ← 1 to n do
            b_rc ← b_rc + a_rk · a_kc  // number of paths

return B

Counts the number of paths of length 2.
297
Algorithm A⊗ A
Input: adjacency matrix A = (a_ij), i, j = 1…n
Output: modified matrix product B = (b_ij), i, j = 1…n, B = A ⊗ A

B ← A  // keep paths of length 1
for r ← 1 to n do
    for c ← 1 to n do
        for k ← 1 to n do
            b_rc ← max{b_rc, a_rk · a_kc}  // path: yes/no

return B

Computes which paths of length 1 and 2 exist.
298
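The modified product translates almost literally into Python; a minimal sketch (the function name `mod_mat_mul` is an assumption):

```python
# Sketch of the modified product A (x) A for 0/1 adjacency matrices:
# entry b_rc is 1 iff there is a path of length 1 or 2 from v_r to v_c.

def mod_mat_mul(A):
    n = len(A)
    B = [row[:] for row in A]       # B <- A: keep paths of length 1
    for r in range(n):
        for c in range(n):
            for k in range(n):
                B[r][c] = max(B[r][c], A[r][k] * A[k][c])  # path via v_k: yes/no
    return B
```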
Computation of the Reflexive Transitive Closure

Goal: computation of B = (b_ij), 1 ≤ i, j ≤ n, with b_ij = 1 ⇔ (v_i, v_j) ∈ E*

First idea:
Start with B ← A and set b_ii ← 1 for each i (reflexivity).
Compute

B^n = ⊗_{i=1}^{n} B

with powers of 2: B² := B ⊗ B, B⁴ := B² ⊗ B², B⁸ := B⁴ ⊗ B⁴, …
⇒ running time n³⌈log₂ n⌉

299
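The first idea above can be sketched as follows, assuming a 0/1 adjacency matrix as a list of lists (the function name is ours); it performs ⌈log₂ n⌉ modified squarings, O(n³ log n) overall:

```python
# Sketch of the "first idea": reflexive transitive closure via repeated
# modified squaring B <- B (x) B. Since B is reflexive, after m squarings
# B contains exactly the paths of length <= 2^m.
import math

def transitive_closure_squaring(A):
    n = len(A)
    B = [row[:] for row in A]
    for i in range(n):
        B[i][i] = 1                # reflexivity
    for _ in range(max(1, math.ceil(math.log2(n)))):
        C = [[0] * n for _ in range(n)]
        for r in range(n):
            for c in range(n):
                C[r][c] = max(B[r][k] * B[k][c] for k in range(n))
        B = C
    return B
```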
Improvement: Algorithm of Warshall (1962)

Inductive procedure: all paths over nodes from {v_i : i < k} are known; add node v_k.

[Figure: directed graph on nodes 1–5; state of the matrix after each step k = 1, …, 5:]

k = 1:
( 1 1 0 0 1
  0 0 0 1 0
  0 1 0 0 0
  0 0 1 0 0
  0 0 0 1 0 )

k = 2:
( 1 1 0 1 1
  0 1 0 1 0
  0 1 0 1 0
  0 0 1 0 0
  0 0 0 1 0 )

k = 3:
( 1 1 0 1 1
  0 1 0 1 0
  0 1 1 1 0
  0 1 1 1 0
  0 0 0 1 0 )

k = 4:
( 1 1 1 1 1
  0 1 1 1 0
  0 1 1 1 0
  0 1 1 1 0
  0 1 1 1 0 )

k = 5:
( 1 1 1 1 1
  0 1 1 1 0
  0 1 1 1 0
  0 1 1 1 0
  0 1 1 1 1 )

300
Algorithm TransitiveClosure(AG)
Input: adjacency matrix A_G = (a_ij), i, j = 1…n
Output: reflexive transitive closure B = (b_ij), i, j = 1…n, of G

B ← A_G
for k ← 1 to n do
    b_kk ← 1  // reflexivity
    for r ← 1 to n do
        for c ← 1 to n do
            b_rc ← max{b_rc, b_rk · b_kc}  // all paths via v_k

return B
Runtime Θ(n3).
301
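TransitiveClosure transcribes directly into Python; a sketch (the function name is ours):

```python
# Sketch of Warshall's TransitiveClosure, Theta(n^3): after iteration k,
# b_rc = 1 iff there is a path from v_(r+1) to v_(c+1) using only
# intermediate nodes among the first k nodes.

def transitive_closure(A):
    n = len(A)
    B = [row[:] for row in A]
    for k in range(n):
        B[k][k] = 1                                    # reflexivity
        for r in range(n):
            for c in range(n):
                B[r][c] = max(B[r][c], B[r][k] * B[k][c])  # all paths via v_k
    return B
```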
Correctness of the Algorithm (Induction)
Invariant (k): all paths via nodes with maximal index < k considered.
Base case (k = 1): all directed paths (all edges) in A_G considered.
Hypothesis: invariant (k) fulfilled.
Step (k → k + 1): for each path from v_i to v_j via nodes with maximal index k: by the hypothesis, b_ik = 1 and b_kj = 1. Therefore, in the k-th iteration, b_ij ← 1.

v_i ⇝ v_k ⇝ v_j (intermediate nodes with index < k on both sub-paths)
302
All Shortest Paths

Compute the weight of a shortest path for each pair of nodes.

|V| × application of Dijkstra's shortest-path algorithm: O(|V| · (|E| + |V|) · log |V|) (with Fibonacci heap: O(|V|² log |V| + |V| · |E|))
|V| × application of Bellman-Ford: O(|E| · |V|²)

There are better ways!
303
Induction via node number
Consider the weights of all shortest paths S_k with intermediate nodes in V^k := {v_1, …, v_k},²⁰ provided that the weights of all shortest paths S_{k−1} with intermediate nodes in V^{k−1} are given.

If v_k is not an intermediate node of a shortest path v_i ⇝ v_j in V^k: the weight of a shortest path v_i ⇝ v_j in S_{k−1} is then also the weight of a shortest path in S_k.
If v_k is an intermediate node of a shortest path v_i ⇝ v_j in V^k: the sub-paths v_i ⇝ v_k and v_k ⇝ v_j contain intermediate nodes only from S_{k−1}.

²⁰ As for Warshall's algorithm for the reflexive transitive closure.

304
Induction via node number
dk(u, v) = Minimal weight of a path u v with intermediate nodes in V k
Induction:

d_k(u, v) = min{d_{k−1}(u, v), d_{k−1}(u, v_k) + d_{k−1}(v_k, v)}  (k ≥ 1)
d_0(u, v) = c(u, v)
305
Algorithm Floyd-Warshall(G)
Input: graph G = (V, E, c) without negative-weight cycles
Output: minimal weights d of all paths

d_0 ← c
for k ← 1 to |V| do
    for i ← 1 to |V| do
        for j ← 1 to |V| do
            d_k(v_i, v_j) ← min{d_{k−1}(v_i, v_j), d_{k−1}(v_i, v_k) + d_{k−1}(v_k, v_j)}
Runtime: Θ(|V|³)
Remark: the algorithm can be executed with a single matrix d (in place).
306
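Following the in-place remark, Floyd-Warshall can be sketched with a single matrix d that initially holds the edge weights c (∞ for missing edges, 0 on the diagonal); the function name and input format are our assumptions:

```python
# Sketch of Floyd-Warshall with a single matrix d (in place), Theta(|V|^3).
INF = float('inf')

def floyd_warshall(d):
    """d: |V| x |V| matrix with d[i][j] = c(v_i, v_j), INF if there is no
    edge, 0 on the diagonal; modified in place to all-pairs shortest-path
    weights (assumes no negative-weight cycles)."""
    n = len(d)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]   # shorter path via v_k
    return d
```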
13. Minimum Spanning Trees
Motivation, Greedy, Algorithm Kruskal, General Rules, ADT Union-Find,Algorithm Jarnik, Prim, Dijkstra [Ottman/Widmayer, Kap. 9.6, 6.2, 6.1, Cormenet al, Kap. 23, 19]
307
Problem
Given: undirected, weighted, connected graph G = (V, E, c).
Wanted: minimum spanning tree T = (V, E′): a connected, cycle-free subgraph E′ ⊂ E such that Σ_{e∈E′} c(e) is minimal.

[Figure: example graph on nodes s, t, u, v, w, x with edge weights 1, 1, 2, 2, 2, 3, 4, 6]
310
Application Examples
Network design: find the cheapest / shortest network that connects all nodes.
Approximation of a solution of the travelling salesman problem: find a round-trip, as short as possible, that visits each node once.²¹

²¹ The best known algorithm to solve the TSP exactly has exponential running time.
311
Greedy Procedure
Greedy algorithms compute the solution stepwise, choosing locally optimal solutions.
Most problems cannot be solved with a greedy algorithm.
The minimum spanning tree problem can be solved with a greedy strategy.
312
Greedy Idea (Kruskal, 1956)
Construct T by adding the cheapest edge that does not generate a cycle.
[Figure: Kruskal's algorithm on the example graph with nodes s, t, u, v, w, x and edge weights 1, 1, 2, 2, 2, 3, 4, 6]
(Solution is not unique.)
313
Algorithm MST-Kruskal(G)
Input: weighted graph G = (V, E, c)
Output: minimum spanning tree with edges A

Sort edges by weight: c(e₁) ≤ … ≤ c(e_m)
A ← ∅
for k = 1 to |E| do
    if (V, A ∪ {e_k}) acyclic then
        A ← A ∪ {e_k}

return (V, A, c)
314
Implementation Issues
Consider a set of sets i ≙ A_i ⊂ V. To identify cuts and cycles: to which sets do the two endpoints of an edge belong?
315
Implementation Issues
General problem: partition (set of subsets), e.g. {{1, 2, 3, 9}, {7, 6, 4}, {5, 8}, {10}}

Required: abstract data type "Union-Find" with the following operations:
Make-Set(i): create a new set represented by i.
Find(e): name of the set i that contains e.
Union(i, j): union of the sets with names i and j.
316
Union-Find Algorithm MST-Kruskal(G)

Input: weighted graph G = (V, E, c)
Output: minimum spanning tree with edges A

Sort edges by weight: c(e₁) ≤ … ≤ c(e_m)
A ← ∅
for k = 1 to |V| do
    MakeSet(k)
for k = 1 to m do
    (u, v) ← e_k
    if Find(u) ≠ Find(v) then
        Union(Find(u), Find(v))
        A ← A ∪ {e_k}
    else
        // conceptual: R ← R ∪ {e_k}

return (V, A, c)
317
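The Union-Find version of MST-Kruskal can be sketched in Python, here with the naive Find described on the following slides; the edge representation (weight, u, v) and the function name are our assumptions:

```python
# Sketch of MST-Kruskal with a minimal union-find (naive Find, no
# optimisations yet).

def mst_kruskal(n, edges):
    """n: number of nodes 0..n-1; edges: list of (weight, u, v) tuples.
    Returns the list of accepted MST edges (u, v, weight)."""
    p = list(range(n))                 # Make-Set for every node

    def find(i):
        while p[i] != i:               # climb to the root = name of the set
            i = p[i]
        return i

    A = []
    for w, u, v in sorted(edges):      # sort edges by weight
        ru, rv = find(u), find(v)
        if ru != rv:                   # endpoints in different sets: no cycle
            p[rv] = ru                 # Union of the two sets
            A.append((u, v, w))
    return A
```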
Implementation Union-Find
Idea: a tree for each subset in the partition, e.g. {{1, 2, 3, 9}, {7, 6, 4}, {5, 8}, {10}}

[Figure: forest with root 1 (children 2, 3; node 9 below 3), root 6 (children 7, 4), root 5 (child 8), and root 10]

Roots = names (representatives) of the sets, trees = elements of the sets.
318
Implementation Union-Find
[Figure: the same forest as on the previous slide]

Representation as array:

Index:  1 2 3 4 5 6 7 8 9 10
Parent: 1 1 1 6 5 6 6 5 3 10
319
Implementation Union-Find
Index:  1 2 3 4 5 6 7 8 9 10
Parent: 1 1 1 6 5 6 6 5 3 10

Make-Set(i): p[i] ← i; return i

Find(i): while p[i] ≠ i do i ← p[i]; return i

Union(i, j):²² p[j] ← i

²² i and j need to be names (roots) of the sets. Otherwise use Union(Find(i), Find(j)).

320
Optimisation of the runtime for Find
The tree may degenerate. Example: Union(8, 7), Union(7, 6), Union(6, 5), …

Index:  1 2 3 4 5 6 7 8 …
Parent: 1 1 2 3 4 5 6 7 …

Worst-case running time of Find is in Θ(n).
321
Optimisation of the runtime for Find
Idea: always append the smaller tree to the larger tree. Requires additional size information (array g).

Make-Set(i): p[i] ← i; g[i] ← 1; return i

Union(i, j):
    if g[j] > g[i] then swap(i, j)
    p[j] ← i
    if g[i] = g[j] then g[i] ← g[i] + 1

⇒ tree depth (and worst-case running time for Find) in Θ(log n)
322
Alternative improvement

Link all nodes to the root when Find is called.

Find(i):
    j ← i
    while p[i] ≠ i do i ← p[i]
    while j ≠ i do
        t ← j
        j ← p[j]
        p[t] ← i
    return i

Cost: amortised nearly constant (inverse of the Ackermann function).²³

²³ When combined with union by size; we do not go into any details here. Cf. Cormen et al, Kap. 21.4.
323
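Both optimisations combined give the usual Union-Find implementation; a sketch (the class name is ours, and g tracks subtree sizes as on the slides):

```python
# Sketch of union-find with union by size and path compression;
# amortised cost of find/union is nearly constant.

class UnionFind:
    def __init__(self, n):
        self.p = list(range(n))   # parent pointers; roots point to themselves
        self.g = [1] * n          # subtree sizes

    def find(self, i):
        root = i
        while self.p[root] != root:
            root = self.p[root]
        while self.p[i] != root:  # path compression: link nodes to the root
            self.p[i], i = root, self.p[i]
        return root

    def union(self, i, j):
        i, j = self.find(i), self.find(j)
        if i == j:
            return
        if self.g[j] > self.g[i]: # append the smaller tree to the larger
            i, j = j, i
        self.p[j] = i
        self.g[i] += self.g[j]
```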
Running time of Kruskal’s Algorithm
Sorting of the edges: Θ(|E| log |E|) = Θ(|E| log |V|).²⁴
Initialisation of the Union-Find data structure: Θ(|V|).
|E| × Union(Find(x), Find(y)): O(|E| log |E|) = O(|E| log |V|).

Overall: Θ(|E| log |V|).

²⁴ Because G is connected: |V| − 1 ≤ |E| ≤ |V|²

324
Algorithm of Jarnik (1930), Prim, Dijkstra (1959)
Idea: start with some v ∈ V and grow the spanning tree from here by theacceptance rule.
A ← ∅
S ← {v₀}
for i ← 1 to |V| − 1 do
    choose the cheapest (u, v) with u ∈ S, v ∉ S
    A ← A ∪ {(u, v)}
    S ← S ∪ {v}  // (coloring)

[Figure: cut between S and V \ S]

Remark: a Union-Find data structure is not required. It suffices to color nodes when they are added to S.
325
Implementation and Running time
Implementation as for Dijkstra's ShortestPath. The only difference:

Shortest paths, Relax(u, v):
    if d_s[v] > d_s[u] + c(u, v) then
        d_s[v] ← d_s[u] + c(u, v)
        π_s[v] ← u

⇒ Minimum spanning tree, Relax(u, v):
    if d_s[v] > c(u, v) then
        d_s[v] ← c(u, v)
        π_s[v] ← u

With min-heap: costs O(|E| · log |V|):
Initialization (node coloring): O(|V|)
|V| × ExtractMin: O(|V| log |V|)
|E| × Insert or DecreaseKey: O(|E| log |V|)
326
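The heap-based variant above can be sketched with Python's binary heap; since `heapq` has no DecreaseKey, this sketch simply inserts edges and skips stale entries ("lazy deletion"), which keeps the O(|E| log |V|) bound. The function name and the adjacency-list format are our assumptions:

```python
# Sketch of Jarnik/Prim with a binary heap and node coloring instead of
# a union-find structure.
import heapq

def mst_prim(adj, v0=0):
    """adj: adjacency list {u: [(weight, v), ...]} of an undirected,
    connected graph. Returns the total weight of a minimum spanning tree."""
    in_S = {v0}                        # colored nodes (the set S)
    heap = list(adj[v0])               # candidate edges leaving S
    heapq.heapify(heap)
    total = 0
    while heap and len(in_S) < len(adj):
        w, v = heapq.heappop(heap)     # cheapest candidate edge
        if v in in_S:
            continue                   # stale entry: endpoint already colored
        in_S.add(v)                    # accept edge, color v
        total += w
        for edge in adj[v]:
            if edge[1] not in in_S:
                heapq.heappush(heap, edge)
    return total
```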