All-Pairs Shortest Path Theory and Algorithms
Carlos Andres Theran Suarez Program Mathematics and Scientific Computing
University of Puerto Rico
October 2011
Mayagüez, Puerto Rico
Dr. Marko Schütz
Introduction
• In this section we consider the problem of finding shortest paths between all pairs of vertices in a directed graph G = (V, E)
– with a weight function w: E → ℝ, so that the weight of a path p = ⟨u₀, u₁, …, u_k⟩ is
w(p) = Σ_{i=0}^{k−1} w(u_i, u_{i+1}), where p is a path in G and k ∈ ℕ.
For this goal we are going to use the adjacency matrix of the graph.
• The input is a |V| × |V| matrix W = (w_ij), the adjacency matrix of G = (V, E).
• The output is the |V| × |V| matrix of shortest-path weights δ(i, j) for all i, j ∈ V.
Recall
• Single-source shortest paths.
• Nonnegative edge weights.
– Dijkstra's algorithm: running time with an array Ο(V²),
with a binary heap Ο((V + E) log V),
with a Fibonacci heap Ο(E + V log V).
• General edge weights.
– Bellman-Ford: running time Ο(VE).
• Unweighted (directed) graphs.
– Breadth-first search: running time Ο(V + E).
What do you think?
Can we solve all-pairs shortest paths by running a single-source shortest-paths algorithm from each vertex?
All-pairs shortest paths.
• Nonnegative edge weights, running Dijkstra's algorithm from each vertex: running time with an array Ο(V³),
with a binary heap Ο((V² + VE) log V),
with a Fibonacci heap Ο(VE + V² log V).
• General edge weights.
Running Bellman-Ford from each vertex: running time Ο(V²E),
which in a dense graph is Ο(V⁴).
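As a concrete sketch of the repeated-Dijkstra approach (not from the slides; the graph is assumed to be given as a dict mapping each vertex to a list of (neighbor, weight) pairs, with nonnegative weights):

```python
import heapq

def dijkstra(adj, src):
    """Single-source shortest paths with a binary heap (nonnegative weights)."""
    dist = {v: float('inf') for v in adj}
    dist[src] = 0
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale heap entry, skip it
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

def all_pairs_dijkstra(adj):
    """Run Dijkstra from every vertex: O(V (V + E) log V) with a binary heap."""
    return {u: dijkstra(adj, u) for u in adj}
```

Unreachable pairs keep the distance ∞, matching δ(i, j) = ∞ when no path exists.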
Predecessor Matrix
Let Π = (π_ij) be the predecessor matrix, where
π_ij = NIL if i = j or there is no path from i to j,
π_ij = the predecessor of j on some shortest path from i, otherwise.
Now we define the predecessor subgraph of G for i as
G_{π,i} = (V_{π,i}, E_{π,i}),
where V_{π,i} = { j ∈ V : π_ij ≠ NIL } ∪ { i }
and E_{π,i} = { (π_ij, j) : j ∈ V_{π,i} − {i} }.
Outline
1. Present a dynamic-programming algorithm based on
matrix multiplication to solve the problem.
2. A dynamic-programming algorithm called the Floyd-Warshall
algorithm.
3. Johnson's algorithm, which, unlike the other algorithms, uses
the adjacency-list representation of the graph.
Shortest path and matrix multiplication
1. The structure of a shortest path.
Suppose that we have a shortest path p from vertex i to
vertex j, and suppose that p has at most m < ∞ edges.
• If i = j then p has weight 0.
• If i ≠ j then we can decompose p as i ↝^{p′} k → j, where p′ has at most m − 1 edges;
by Lemma 24.1, p′ is a shortest path from i to k.
So δ(i, j) = δ(i, k) + w_kj.
2. A recursive solution.
Let l_ij^(m) be the minimum weight of any path from vertex i to vertex j that contains at most m edges.
• l_ij^(0) = 0 if i = j, and ∞ if i ≠ j.
• l_ij^(m) = min( l_ij^(m−1), min_{1≤k≤n} { l_ik^(m−1) + w_kj } )
= min_{1≤k≤n} { l_ik^(m−1) + w_kj },
since w_jj = 0 for all j, so the first term is covered by the choice k = j.
If G contains no negative-weight cycle, every shortest path has at most n − 1 edges, hence
δ(i, j) = l_ij^(n−1) = l_ij^(n) = l_ij^(n+1) = ⋯
Shortest path and matrix multiplication (cont.)
3. Computing shortest-path weights bottom up.
Input: W = (w_ij). We compute the matrices L^(1), L^(2), …, L^(n−1),
where L^(m) = (l_ij^(m)) for m = 1, 2, …, n − 1.
Shortest path and matrix multiplication (cont.)
• Now we can see the relation to matrix multiplication.
Let C = A ∗ B be the product of two n × n matrices. For i, j = 1, …, n
we have c_ij = Σ_{k=1}^{n} a_ik ∗ b_kj.
If we make the substitutions
l^(m−1) → a,  w → b,  l^(m) → c,  min → +,  + → ∗,
the inner-loop update of matrix multiplication,
c_ij ← c_ij + a_ik ∗ b_kj,
becomes
l_ij^(m) ← min( l_ij^(m), l_ik^(m−1) + w_kj ).
Shortest path and matrix multiplication (cont.)
Computing the sequence of n − 1 matrices:
L^(1) = L^(0) ∗ W = W
L^(2) = L^(1) ∗ W = W²
⋮
L^(n−1) = L^(n−2) ∗ W = W^(n−1)
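The min-plus "product" and the bottom-up computation of L^(1), …, L^(n−1) can be sketched as follows (a sketch, assuming W is a list of lists with float('inf') marking absent edges and 0 on the diagonal):

```python
INF = float('inf')

def extend(L, W):
    """One min-plus 'matrix product': L'[i][j] = min over k of L[i][k] + W[k][j]."""
    n = len(W)
    return [[min(L[i][k] + W[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def slow_all_pairs(W):
    """Compute L^(n-1) by successive extensions of L^(1) = W: Θ(n^4) overall."""
    n = len(W)
    L = W
    for _ in range(n - 2):   # L^(2), L^(3), ..., L^(n-1)
        L = extend(L, W)
    return L
```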
Shortest path and matrix multiplication (cont.)
• Improving the running time.
Our goal is to compute L^(n−1); let us see that we
can compute it with only ⌈lg(n − 1)⌉ matrix products.
Shortest path and matrix multiplication (cont.)
L^(1) = W
L^(2) = W² = W ∗ W
L^(4) = W⁴ = W² ∗ W²
L^(8) = W⁸ = W⁴ ∗ W⁴
⋮
L^(2^⌈lg(n−1)⌉) = W^(2^⌈lg(n−1)⌉) = W^(2^(⌈lg(n−1)⌉−1)) ∗ W^(2^(⌈lg(n−1)⌉−1))
Since 2^⌈lg(n−1)⌉ ≥ n − 1, and further min-plus products leave the result unchanged, this last matrix equals L^(n−1).
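A sketch of the repeated-squaring version, reusing the same min-plus extension step (again assuming W is a list of lists with float('inf') for absent edges):

```python
INF = float('inf')

def extend(L, W):
    """Min-plus product, as before."""
    n = len(W)
    return [[min(L[i][k] + W[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def faster_all_pairs(W):
    """Repeated squaring: Θ(n^3 lg n) instead of Θ(n^4)."""
    n = len(W)
    L, m = W, 1
    while m < n - 1:
        L = extend(L, L)   # L^(2m) = L^(m) * L^(m)
        m *= 2
    return L
```

Overshooting n − 1 is harmless: extra squarings do not change the matrix once every shortest path (at most n − 1 edges) is accounted for.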
Shortest path and matrix multiplication (cont.)
The Floyd-Warshall algorithm
The algorithm considers the intermediate vertices of a shortest path.
1. The structure of a shortest path.
An intermediate vertex of a path p = ⟨u₁, u₂, …, u_l⟩ is any vertex of p other than u₁ or u_l, that is, a vertex in the set {u₂, …, u_{l−1}}.
Assume that the vertices of G are V = {1, 2, …, n}, and consider the subset {1, 2, …, k} for some k.
• If k is not an intermediate vertex of path p, then all intermediate vertices of p are in the set {1, 2, …, k − 1}. Thus a shortest path from vertex i to vertex j with all intermediate vertices in the set {1, 2, …, k − 1} is also a shortest path from i to j with all intermediate vertices in the set {1, 2, …, k}.
The Floyd-Warshall algorithm (cont)
• If k is an intermediate vertex of path p, we break p down into i ↝^{p₁} k ↝^{p₂} j. By Lemma 24.1, p₁ is a shortest path from i to k. Moreover, k is not an intermediate vertex of p₁, so p₁ is
a shortest path from i to k with all intermediate vertices in the set
{1, 2, …, k − 1}. Similarly, p₂ is a shortest path from k to j with all intermediate vertices in the set {1, 2, …, k − 1}.
The Floyd-Warshall algorithm (cont)
3. A recursive solution.
Let d_ij^(k) be the weight of a shortest path from i to j with all intermediate vertices in the set {1, 2, …, k}. Then
d_ij^(k) = w_ij if k = 0,
d_ij^(k) = min( d_ij^(k−1), d_ik^(k−1) + d_kj^(k−1) ) if k ≥ 1.
Since for every path all intermediate vertices are in the set
{1, 2, …, n}, the matrix D^(n) = (d_ij^(n)) gives the final answer:
d_ij^(n) = δ(i, j).
The Floyd-Warshall algorithm (cont)
• Input: an n × n matrix W.
• Output: the n × n matrix D^(n) of shortest-path weights.
• The update applied for each k is
d_ij^(k) = min( d_ij^(k−1), d_ik^(k−1) + d_kj^(k−1) ).
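The Θ(n³) algorithm itself is short; a sketch in Python (a single matrix updated in place, with float('inf') for absent edges):

```python
INF = float('inf')

def floyd_warshall(W):
    """Floyd-Warshall: Θ(n^3) time, D^(k) computed in place for k = 0..n-1."""
    n = len(W)
    d = [row[:] for row in W]          # D^(0) = W
    for k in range(n):                 # allow vertex k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```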
The Floyd-Warshall algorithm (cont)
4. Constructing a shortest path.
We compute the predecessor matrix Π just as the Floyd-Warshall algorithm computes the matrices D^(k),
so Π = Π^(n) = (π_ij^(n)).
Recursive formulation:
π_ij^(0) = NIL if i = j or w_ij = ∞,
π_ij^(0) = i if i ≠ j and w_ij < ∞.
For k ≥ 1,
π_ij^(k) = π_ij^(k−1) if d_ij^(k−1) ≤ d_ik^(k−1) + d_kj^(k−1),
π_ij^(k) = π_kj^(k−1) if d_ij^(k−1) > d_ik^(k−1) + d_kj^(k−1).
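A sketch of the distance and predecessor computation together, plus path recovery (vertices assumed to be 0, …, n−1, with None standing in for NIL):

```python
def build_predecessors(W):
    """Floyd-Warshall distances with the predecessor matrix maintained alongside."""
    INF = float('inf')
    n = len(W)
    d = [row[:] for row in W]
    # pi^(0): i is the predecessor of j when an edge (i, j) exists.
    pi = [[i if i != j and W[i][j] < INF else None for j in range(n)]
          for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
                    pi[i][j] = pi[k][j]    # route through k: inherit k's predecessor of j
    return d, pi

def path(pi, i, j):
    """Recover a shortest path from i to j by walking predecessors backwards."""
    if i == j:
        return [i]
    if pi[i][j] is None:
        return None                        # no path from i to j
    p = path(pi, i, pi[i][j])
    return p + [j] if p else None
```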
Johnson's algorithm for sparse graphs.
• It is asymptotically better than repeated squaring of matrices
or the Floyd-Warshall algorithm on sparse graphs.
• It uses both Dijkstra's algorithm and the Bellman-Ford
algorithm as subroutines.
• Johnson's algorithm uses the technique of reweighting.
Johnson's algorithm for sparse graphs (cont.).
Reweighting
If G has negative-weight edges but no negative-weight cycle,
we compute a new set of nonnegative edge weights ŵ that
allows us to use Dijkstra's algorithm.
The new set of edge weights must satisfy two conditions:
1. p is a shortest path from u to v under w ⇔ p is a shortest
path from u to v under ŵ.
2. For all edges (u, v), the new weight ŵ(u, v) is
nonnegative.
Johnson's algorithm for sparse graphs (cont.).
• Lemma (reweighting does not change shortest paths)
Given a weighted, directed graph G = (V, E) with weight
function w: E → ℝ, let h: V → ℝ be any function mapping vertices to real
numbers. For each edge (u, v) ∈ E, define
ŵ(u, v) = w(u, v) + h(u) − h(v).
Then p is a shortest path from u to v under w if and only if p is a shortest path from u to v under ŵ, and G has a negative-weight cycle under w if and only if it has one under ŵ.
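The lemma rests on the fact that the extra h-terms telescope along any path p = ⟨v₀, v₁, …, v_k⟩, so every path from v₀ to v_k shifts by the same constant:

```latex
\hat{w}(p) = \sum_{i=1}^{k} \hat{w}(v_{i-1}, v_i)
           = \sum_{i=1}^{k} \bigl( w(v_{i-1}, v_i) + h(v_{i-1}) - h(v_i) \bigr)
           = w(p) + h(v_0) - h(v_k).
```

Since h(v₀) − h(v_k) depends only on the endpoints, a path minimizes ŵ(p) exactly when it minimizes w(p), and a cycle (v₀ = v_k) keeps its weight unchanged.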
Johnson's algorithm for sparse graphs (cont.).
• Producing nonnegative weights by reweighting: Johnson's algorithm adds a new vertex s to G with a zero-weight edge (s, v) for every v ∈ V, runs Bellman-Ford from s, and sets h(v) = δ(s, v); the triangle inequality δ(s, v) ≤ δ(s, u) + w(u, v) then guarantees ŵ(u, v) = w(u, v) + h(u) − h(v) ≥ 0.
Johnson's algorithm for sparse graphs (cont.).
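Putting the pieces together, a sketch of Johnson's algorithm (vertices assumed to be 0, …, n−1, edges given as (u, v, w) triples; the added source gets index n):

```python
import heapq

INF = float('inf')

def bellman_ford(edges, n, src):
    """Bellman-Ford over vertices 0..n-1; returns None on a negative-weight cycle."""
    dist = [INF] * n
    dist[src] = 0
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            return None
    return dist

def johnson(n, edges):
    """All-pairs shortest paths: Bellman-Ford once for h, then Dijkstra n times."""
    # New source n with zero-weight edges to every vertex; h(v) = delta(n, v).
    h = bellman_ford(edges + [(n, v, 0) for v in range(n)], n + 1, n)
    if h is None:
        return None                        # negative-weight cycle detected
    # Reweight: w_hat(u, v) = w(u, v) + h(u) - h(v) >= 0.
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w + h[u] - h[v]))
    D = [[INF] * n for _ in range(n)]
    for s in range(n):                     # Dijkstra from each vertex
        dist = [INF] * n
        dist[s] = 0
        pq = [(0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:
                continue
            for v, w in adj[u]:
                if d + w < dist[v]:
                    dist[v] = d + w
                    heapq.heappush(pq, (d + w, v))
        for v in range(n):                 # undo the reweighting
            if dist[v] < INF:
                D[s][v] = dist[v] + h[v] - h[s]
    return D
```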