
Analysis of Algorithms: Single Source Shortest Path

Andres Mendez-Vazquez

November 9, 2015


Outline
1 Introduction
    Introduction and Similar Problems
2 General Results
    Optimal Substructure Properties
    Predecessor Graph
    The Relaxation Concept
    The Bellman-Ford Algorithm
    Properties of Relaxation
3 Bellman-Ford Algorithm
    Predecessor Subgraph for Bellman
    Shortest Path for Bellman
    Example
    Bellman-Ford finds the Shortest Path
    Correctness of Bellman-Ford
4 Directed Acyclic Graphs (DAG)
    Relaxing Edges
    Example
5 Dijkstra's Algorithm
    Dijkstra's Algorithm: A Greedy Method
    Example
    Correctness of Dijkstra's Algorithm
    Complexity of Dijkstra's Algorithm
6 Exercises


Introduction

Problem description
Given a single source vertex in a weighted, directed graph, we want to compute a shortest path to each possible destination (similar to BFS).

Thus
The algorithm will compute a shortest-path tree (again, similar to BFS).


Similar Problems

Single destination shortest paths problem
Find a shortest path to a given destination vertex t from each vertex.

By reversing the direction of each edge in the graph, we can reduce this problem to a single source problem.
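As an illustration of this reduction (not part of the original slides), here is a minimal Python sketch, assuming a hypothetical adjacency-list representation adj that maps each vertex to a list of (neighbor, weight) pairs:

    def reverse_graph(adj):
        """Return the transpose of a weighted, directed graph.

        adj maps each vertex u to a list of (v, w) pairs, one per edge
        (u, v) with weight w.  Every edge (u, v) becomes (v, u), so a
        single-destination problem on the original graph becomes a
        single-source problem on the reversed one.
        """
        rev = {u: [] for u in adj}
        for u, edges in adj.items():
            for v, w in edges:
                rev.setdefault(v, []).append((u, w))
        return rev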


Single pair shortest path problem
Find a shortest path from u to v for given vertices u and v.

If we solve the single source problem with source vertex u, we also solve this problem.


All pairs shortest paths problem
Find a shortest path from u to v for every pair of vertices u and v.


Optimal Substructure Property

Lemma 24.1
Given a weighted, directed graph G = (V, E), let p = ⟨v1, v2, ..., vk⟩ be a shortest path from v1 to vk. Then pij = ⟨vi, vi+1, ..., vj⟩ is a shortest path from vi to vj, where 1 ≤ i ≤ j ≤ k.

We have then
So, we have the optimal substructure property:
    Bellman-Ford's algorithm uses dynamic programming.
    Dijkstra's algorithm uses the greedy approach.

In addition, we have that
Let δ(u, v) = the weight of a shortest path from u to v.


Corollary
Let p be a shortest path from s to v that decomposes as p = p1 ∪ {(u, v)}, where p1 is a shortest path from s to u followed by the edge (u, v). Then δ(s, v) = δ(s, u) + w(u, v).


The Lower Bound Between Nodes

Lemma 24.10
Let s ∈ V. For all edges (u, v) ∈ E, we have δ(s, v) ≤ δ(s, u) + w(u, v).


Now

Then
We have the basic concepts.

Still
We need to define an important one.

The Predecessor Graph
This will facilitate the proof of several concepts.


Predecessor Graph

Representing shortest paths
For this we use the predecessor subgraph. It is defined slightly differently from the one used in Breadth-First Search.

Definition of a Predecessor Subgraph
The predecessor subgraph is Gπ = (Vπ, Eπ), where
    Vπ = {v ∈ V | v.π ≠ NIL} ∪ {s}
    Eπ = {(v.π, v) | v ∈ Vπ − {s}}

Properties
The predecessor subgraph Gπ forms a depth-first forest composed of several depth-first trees. The edges in Eπ are called tree edges.
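As a small illustration (not in the original slides), the definition translates directly into a sketch, assuming the π attributes are kept in a hypothetical dictionary pi that maps each vertex to its predecessor, with None playing the role of NIL:

    def predecessor_subgraph(pi, s):
        """Build (V_pi, E_pi) from the predecessor attributes.

        V_pi: every vertex whose predecessor is not NIL, plus the source s.
        E_pi: the edge (v.pi, v) for every v in V_pi other than s.
        """
        V_pi = {v for v, p in pi.items() if p is not None} | {s}
        E_pi = {(pi[v], v) for v in V_pi if v != s}
        return V_pi, E_pi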


The Relaxation Concept

We are going to use certain functions for all the algorithms
Initialize
    Here, the basic variables of the nodes in the graph will be initialized:
        v.d = the distance estimate from the source s.
        v.π = the predecessor node during the search for the shortest path.

Changing the v.d
This will be done in the Relaxation algorithm.


Initialize and Relaxation

The algorithms keep track of v.d and v.π. They are initialized as follows:

Initialize(G, s)
 1  for each v ∈ V[G]
 2      v.d = ∞
 3      v.π = NIL
 4  s.d = 0

These values are changed when an edge (u, v) is relaxed:

Relax(u, v, w)
 1  if v.d > u.d + w(u, v)
 2      v.d = u.d + w(u, v)
 3      v.π = u
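A minimal executable sketch of these two procedures (not from the slides), assuming vertices are hashable objects, w is a hypothetical dictionary mapping each edge (u, v) to its weight, and d and pi are dictionaries holding v.d and v.π:

    import math

    def initialize(vertices, s, d, pi):
        """Initialize(G, s): every estimate is infinity, every predecessor is NIL."""
        for v in vertices:
            d[v] = math.inf
            pi[v] = None
        d[s] = 0

    def relax(u, v, w, d, pi):
        """Relax(u, v, w): improve d[v] via u if that gives a shorter path."""
        if d[v] > d[u] + w[(u, v)]:
            d[v] = d[u] + w[(u, v)]
            pi[v] = u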


How are these functions used?

These functions are used to
 1  Build a predecessor graph Gπ.
 2  Integrate the shortest path into that predecessor graph, using the field d.

Once the π fields have been set, the actual path from s to any reachable vertex can be read off by walking the predecessors backwards, as sketched below.
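A small sketch of that walk (not part of the slides), reusing the hypothetical pi dictionary and source s from above:

    def path_to(pi, s, v):
        """Follow predecessor pointers from v back to s.

        Returns the list of vertices on the path s -> ... -> v, or None
        if v is unreachable (its predecessor chain never reaches s).
        """
        path = [v]
        while v != s:
            v = pi[v]
            if v is None:
                return None      # no path: v.d stayed at infinity
            path.append(v)
        return list(reversed(path))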


The Bellman-Ford Algorithm
Bellman-Ford allows negative weight edges. It will "detect" reachable negative weight cycles.

Bellman-Ford(G, s, w)
 1  Initialize(G, s)
 2  for i = 1 to |V[G]| − 1
 3      for each (u, v) ∈ E[G]
 4          Relax(u, v, w)
 5  for each (u, v) ∈ E[G]
 6      if v.d > u.d + w(u, v)
 7          return false
 8  return true

Time Complexity
O(VE)
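A compact Python sketch of the same procedure (not the author's code), building on the initialize and relax helpers above; vertices is a list of vertices and w the hypothetical edge-weight dictionary:

    def bellman_ford(vertices, w, s):
        """Return (True, d, pi) if no negative-weight cycle is reachable
        from s, otherwise (False, d, pi); d and pi hold v.d and v.pi."""
        d, pi = {}, {}
        initialize(vertices, s, d, pi)
        for _ in range(len(vertices) - 1):   # |V| - 1 passes
            for (u, v) in w:                 # relax every edge
                relax(u, v, w, d, pi)
        for (u, v) in w:                     # detection pass
            if d[v] > d[u] + w[(u, v)]:
                return False, d, pi
        return True, d, pi

For example, on a toy graph with vertices ['s', 'a', 'b'] and weights {('s', 'a'): 4, ('s', 'b'): 5, ('a', 'b'): -2}, the call bellman_ford(['s', 'a', 'b'], w, 's') returns True with d == {'s': 0, 'a': 4, 'b': 2}.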


Properties of Relaxation

Some properties
    v.d, if not ∞, is the length of some path from s to v.
    v.d either stays the same or decreases with time.

Therefore
If v.d = δ(s, v) at any time, this holds thereafter.

Something nice
    Note that v.d ≥ δ(s, v) always (Upper-Bound Property).
    After i iterations of relaxing all edges (u, v), if the shortest path to v has i edges, then v.d = δ(s, v).
    If there is no path from s to v, then v.d = δ(s, v) = ∞ is an invariant.


Lemma 24.10 (Triangle inequality)
Let G = (V, E) be a weighted, directed graph with weight function w : E → R and source vertex s. Then, for all edges (u, v) ∈ E, we have:

    δ(s, v) ≤ δ(s, u) + w(u, v)     (1)

Proof
 1  Suppose that p is a shortest path from source s to vertex v.
 2  Then, p has no more weight than any other path from s to vertex v.
 3  In particular, p has no more weight than the path that follows a shortest path from s to u and then takes edge (u, v), whose weight is δ(s, u) + w(u, v).


Lemma 24.11 (Upper Bound Property)
Let G = (V, E) be a weighted, directed graph with weight function w : E → R. Consider any algorithm in which v.d and v.π are first initialized by calling Initialize(G, s) (s is the source) and are only changed by calling Relax. Then, we have that v.d ≥ δ(s, v) for all v ∈ V[G], and this invariant is maintained over any sequence of relaxation steps on the edges of G. Moreover, once v.d = δ(s, v), it never changes.


Proof of Lemma

Loop Invariance
The proof can be done by induction over the number of relaxation steps, using the loop invariant:

    v.d ≥ δ(s, v) for all v ∈ V

For the Basis
v.d ≥ δ(s, v) is true after initialization, since:
    v.d = ∞, making v.d ≥ δ(s, v) for all v ∈ V − {s}.
    For s, s.d = 0 ≥ δ(s, s).

For the inductive step, consider the relaxation of an edge (u, v)
By the inductive hypothesis, we have that x.d ≥ δ(s, x) for all x ∈ V prior to the relaxation.


Thus

If you call Relax(u, v, w), it may change v.d:

    v.d = u.d + w(u, v)
        ≥ δ(s, u) + w(u, v)    by the inductive hypothesis
        ≥ δ(s, v)              by the triangle inequality

Thus, the invariant is maintained.


Proof of Lemma 24.11, cont...
To prove that the value v.d never changes once v.d = δ(s, v):
    Note the following: once v.d = δ(s, v), it cannot decrease because v.d ≥ δ(s, v), and relaxation never increases d.


Next, we have

Corollary 24.12 (No-path property)
If there is no path from s to v, then v.d = δ(s, v) = ∞ is an invariant.

Proof
By the upper-bound property, we always have ∞ = δ(s, v) ≤ v.d. Then, v.d = ∞.


More Lemmas

Lemma 24.13
Let G = (V, E) be a weighted, directed graph with weight function w : E → R. Then, immediately after relaxing edge (u, v) by calling Relax(u, v, w), we have v.d ≤ u.d + w(u, v).


Proof

First
Consider the moment just prior to relaxing edge (u, v).

Case 1: if we have that v.d > u.d + w(u, v)
    Then, v.d = u.d + w(u, v) after relaxation.

Now, Case 2
If v.d ≤ u.d + w(u, v) just before relaxation, then neither u.d nor v.d changes.

Thus, afterwards
v.d ≤ u.d + w(u, v)


More Lemmas

Lemma 24.14 (Convergence property)
Let p be a shortest path from s to v that decomposes as p = p1 ∪ {(u, v)}, where p1 is a shortest path from s to u followed by the edge (u, v). If u.d = δ(s, u) holds at any time prior to calling Relax(u, v, w), then v.d = δ(s, v) holds at all times after the call.

Proof:
By the upper-bound property, if u.d = δ(s, u) holds at some moment before relaxing edge (u, v), then it continues to hold afterwards.


Proof

Thus, after relaxing (u, v)

    v.d ≤ u.d + w(u, v)        by Lemma 24.13
        = δ(s, u) + w(u, v)
        = δ(s, v)              by the corollary of Lemma 24.1

Now
By Lemma 24.11, v.d ≥ δ(s, v), so v.d = δ(s, v).


Predecessor Subgraph for Bellman

Lemma 24.16
Assume a given graph G has no negative-weight cycles reachable from s. Then, after the initialization, the predecessor subgraph Gπ is always a tree with root s, and any sequence of relaxation steps on edges of G maintains this property as an invariant.

Proof
It is necessary to prove two things in order to get a tree:
 1  Gπ is acyclic.
 2  There exists a unique path from source s to each vertex in Vπ.


Proof that Gπ is acyclic

First
Suppose there exists a cycle c = ⟨v0, v1, ..., vk⟩, where v0 = vk. We have vi.π = vi−1 for i = 1, 2, ..., k.

Second
Assume the relaxation of (vk−1, vk) created the cycle. We are going to show that the cycle has a negative weight.

We claim that
The cycle must be reachable from s (Why?)


Proof

First
Each vertex on the cycle has a non-NIL predecessor, and so each vertex on it was assigned a finite shortest-path estimate when it was assigned its non-NIL value.

Then
By the upper-bound property, each vertex on the cycle has a finite shortest-path weight,

Thus
making the cycle reachable from s.


Proof
Before the call to Relax(vk−1, vk, w):

    vi.π = vi−1 for i = 1, ..., k − 1.     (2)

Thus
This implies that vi.d was last updated by

    vi.d = vi−1.d + w(vi−1, vi)            (3)

for i = 1, ..., k − 1 (because Relax updates π).

This implies

    vi.d ≥ vi−1.d + w(vi−1, vi)            (4)

for i = 1, ..., k − 1 (before the relaxation, by Lemma 24.13).


Proof

Thus
Because vk.π is changed by the call to Relax (immediately before the call, vk.d > vk−1.d + w(vk−1, vk)), we have that:

    ∑_{i=1}^{k} vi.d > ∑_{i=1}^{k} (vi−1.d + w(vi−1, vi))
                     = ∑_{i=1}^{k} vi−1.d + ∑_{i=1}^{k} w(vi−1, vi)

We have finally that
Because ∑_{i=1}^{k} vi.d = ∑_{i=1}^{k} vi−1.d, we have that ∑_{i=1}^{k} w(vi−1, vi) < 0, i.e., a negative weight cycle!!!


Some comments

Comments
    vi.d ≥ vi−1.d + w(vi−1, vi) for i = 1, ..., k − 1 because, when Relax(vi−1, vi, w) was called, there was an equality, and vi−1.d may have gotten smaller by further calls to Relax.
    vk.d > vk−1.d + w(vk−1, vk) before the last call to Relax because that last call changed vk.d.


Proof of existence of a unique path from source s

Let Gπ be the predecessor subgraph.
So, for any v in Vπ, the graph Gπ contains at least one path from s to v. Assume now that you have two paths:

[Figure: two paths from s to v that diverge after u, one through x and one through y, and meet again at z before reaching v. Impossible!]

This can only be possible if there were two nodes x and y with x ≠ y but z.π = x = y!!! Contradiction!!! Therefore, we have only one path and Gπ is a tree.


Lemma 24.17

Lemma 24.17
Same conditions as before. The algorithm calls Initialize and repeatedly calls Relax until v.d = δ(s, v) for all v in V. Then Gπ is a shortest-path tree rooted at s.

Proof
For all v in V, there is a unique simple path p from s to v in Gπ (Lemma 24.16). We want to prove that it is a shortest path from s to v in G.


Proof

Then
Let p = ⟨v0, v1, ..., vk⟩, where v0 = s and vk = v. Thus, we have vi.d = δ(s, vi).

And reasoning as before

    vi.d ≥ vi−1.d + w(vi−1, vi)             (5)

This implies that

    w(vi−1, vi) ≤ δ(s, vi) − δ(s, vi−1)     (6)


Proof

Then, we sum over all weights

    w(p) = ∑_{i=1}^{k} w(vi−1, vi)
         ≤ ∑_{i=1}^{k} (δ(s, vi) − δ(s, vi−1))
         = δ(s, vk) − δ(s, v0)
         = δ(s, vk)

Finally
So, equality holds and p is a shortest path, because δ(s, vk) ≤ w(p).


Again the Bellman-Ford Algorithm
Bellman-Ford allows negative weight edges. It will "detect" reachable negative weight cycles.

Bellman-Ford(G, s, w)
 1  Initialize(G, s)
 2  for i = 1 to |V[G]| − 1
 3      for each (u, v) ∈ E[G]
 4          Relax(u, v, w)    // The decision part of the dynamic programming for u.d and u.π.
 5  for each (u, v) ∈ E[G]
 6      if v.d > u.d + w(u, v)
 7          return false
 8  return true

Observation
If Bellman-Ford has not converged after |V(G)| − 1 iterations, then there cannot be a shortest-path tree, so there must be a negative weight cycle.


Example

The example runs Bellman-Ford on a directed graph with vertices s, a, b, c, d, e, f, g, h and both positive and negative edge weights. The original slides show the graph after each relaxation pass; only the captions are reproduced here:

    Red arrows are the representation of v.π.
    Here, whenever we have v.d = ∞ and u.d = ∞, no change is done.
    Here, we keep updating in the second iteration.
    Here, during this iteration we notice that e can be updated to a better value.
    Here, we keep updating in the third iteration, and d and g are also updated.
    Here, we keep updating in the fourth iteration.
    Here, f is updated during this iteration.

[Figures omitted: each slide shows the updated d values and the π arrows on the example graph.]


v.d == δ(s, v) upon termination

Lemma 24.2
Assuming no negative-weight cycles are reachable from s, v.d == δ(s, v) holds upon termination for all vertices v reachable from s.

Proof
Consider a shortest path p = ⟨v0, v1, ..., vk⟩, where v0 = s and vk = v.

We know the following
Shortest paths are simple, so p has at most |V| − 1 edges; thus we have that k ≤ |V| − 1.


Proof

Something Notable
Claim: vi.d = δ(s, vi) holds after the i-th pass over the edges.

In the algorithm
Each of the |V| − 1 iterations of the for loop (Lines 2-4) relaxes all edges in E.

Proof follows by induction on i
Among the edges relaxed in the i-th iteration, for i = 1, 2, ..., k, is (vi−1, vi). By Lemma 24.11, once vi.d = δ(s, vi) holds, it continues to hold.


Finding a path between s and v

Corollary 24.3
Let G = (V, E) be a weighted, directed graph with source vertex s and weight function w : E → R, and assume that G contains no negative-weight cycles that are reachable from s. Then, for each vertex v ∈ V, there is a path from s to v if and only if Bellman-Ford terminates with v.d < ∞ when it is run on G.

Proof
Left to you...


Correctness of Bellman-Ford

Claim: The algorithm returns the correct value
Part of Theorem 24.4; the other parts of the theorem follow easily from earlier results.

Case 1: There is no reachable negative-weight cycle.
Upon termination, we have for every edge (u, v):

v.d = δ(s, v)

by Lemma 24.2 (previous slide) if v is reachable, or v.d = δ(s, v) = ∞ otherwise.


Correctness of Bellman-Ford

Then, we have that

v.d = δ(s, v) ≤ δ(s, u) + w(u, v) ≤ u.d + w(u, v)

Remember:

5. for each (u, v) ∈ E[G]
6.     if v.d > u.d + w(u, v)
7.         return false

Thus
No test in line 6 succeeds, so the algorithm returns true.


Correctness of Bellman-Ford

Case 2: There exists a reachable negative-weight cycle c = <v0, v1, ..., vk>, where v0 = vk. Then, we have:

∑_{i=1}^{k} w(vi−1, vi) < 0.   (7)

Suppose the algorithm returns true
Then vi.d ≤ vi−1.d + w(vi−1, vi) for i = 1, ..., k, because Relax did not change any vi.d.


Correctness of Bellman-Ford

Thus

∑_{i=1}^{k} vi.d ≤ ∑_{i=1}^{k} (vi−1.d + w(vi−1, vi)) = ∑_{i=1}^{k} vi−1.d + ∑_{i=1}^{k} w(vi−1, vi)

Since v0 = vk
Each vertex of c appears exactly once in each of the summations ∑_{i=1}^{k} vi.d and ∑_{i=1}^{k} vi−1.d, thus

∑_{i=1}^{k} vi.d = ∑_{i=1}^{k} vi−1.d   (8)


Correctness of Bellman-Ford

By Corollary 24.3
vi.d is finite for i = 1, 2, ..., k, so the sums in (8) are finite and may be subtracted from both sides of the inequality above, leaving

0 ≤ ∑_{i=1}^{k} w(vi−1, vi).

Hence
This contradicts (Eq. 7). Thus, the algorithm returns false.
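To tie the passes and the check together, here is a minimal Python sketch of Bellman-Ford as argued above; the edge-list representation and the names bellman_ford and INF are choices made for this sketch, not part of the slides.

def bellman_ford(vertices, edges, s):
    # edges: iterable of (u, v, w) triples; vertices: iterable of vertex names.
    INF = float('inf')
    d = {v: INF for v in vertices}
    pi = {v: None for v in vertices}
    d[s] = 0
    # |V| - 1 passes, each relaxing every edge (lines 2-4 of the slides).
    for _ in range(len(d) - 1):
        for u, v, w in edges:
            if d[u] + w < d[v]:
                d[v] = d[u] + w
                pi[v] = u
    # Check phase (lines 5-7): a still-relaxable edge witnesses a reachable
    # negative-weight cycle, so report false in that case.
    for u, v, w in edges:
        if d[u] + w < d[v]:
            return False, d, pi
    return True, d, pi

For example, bellman_ford(['s', 'a', 'b'], [('s', 'a', 2), ('a', 'b', -1), ('s', 'b', 4)], 's') returns True together with d = {'s': 0, 'a': 2, 'b': 1}.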



Another Example

Something Notable
By relaxing the edges of a weighted DAG G = (V, E) according to a topological sort of its vertices, we can compute shortest paths from a single source in Θ(V + E) time.

Why?
Shortest paths are always well defined in a DAG: even if there are negative-weight edges, no negative-weight cycles can exist.


Single-source Shortest Paths in Directed Acyclic Graphs

In a DAG, we can do the following (Complexity Θ(V + E))
DAG-SHORTEST-PATHS(G, w, s)

1. Topologically sort the vertices of G
2. Initialize(G, s)
3. for each u ∈ V[G], taken in topologically sorted order
4.     for each v ∈ Adj[u]
5.         Relax(u, v, w)
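A minimal Python sketch of this procedure, assuming the graph is given as an adjacency dictionary adj that maps every vertex (including sinks, with an empty list) to its list of (v, w) out-edges; the DFS-based topological sort and all names here are choices made for the sketch.

def dag_shortest_paths(adj, s):
    # Line 1: topological sort by DFS finish times.
    order, visited = [], set()
    def dfs(u):
        visited.add(u)
        for v, _ in adj[u]:
            if v not in visited:
                dfs(v)
        order.append(u)
    for u in adj:
        if u not in visited:
            dfs(u)
    order.reverse()
    # Line 2: initialization.
    INF = float('inf')
    d = {u: INF for u in adj}
    pi = {u: None for u in adj}
    d[s] = 0
    # Lines 3-5: relax the out-edges of each vertex in topological order.
    for u in order:
        for v, w in adj[u]:
            if d[u] + w < d[v]:
                d[v] = d[u] + w
                pi[v] = u
    return d, pi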


It is based on the following theorem

Theorem 24.5
If a weighted, directed graph G = (V, E) has source vertex s and no cycles, then at the termination of the DAG-SHORTEST-PATHS procedure, v.d = δ(s, v) for all vertices v ∈ V, and the predecessor subgraph Gπ is a shortest-paths tree.

Proof
Left to you...


Complexity

We have that
1. Line 1 takes Θ(V + E).
2. Line 2 takes Θ(V).
3. Lines 3-5 make one iteration per vertex:
   1. In addition, the for loop in lines 4-5 relaxes each edge exactly once (remember the topological sorting).
   2. Each iteration of the inner loop takes Θ(1).

Therefore
The total running time is Θ(V + E).



Example

[Figure: a weighted DAG on vertices a, b, c, d, e, f; the drawing and the intermediate d-values did not survive extraction.]

After initialization; b is the source.
a is the first vertex in the topological sort, but no update is done.
b is the next one.
c is the next one.
d is the next one.
e is the next one.
Finally, f is the next one.


Dijkstra’s Algorithm

It is a greedy-based method
Ideas?

Yes
We need to keep track of the greedy choice!!!


Dijkstra’s Algorithm

Assume no negative-weight edges
1. Dijkstra's algorithm maintains a set S of vertices whose final shortest-path weights from s have already been determined.
2. It repeatedly selects the vertex u ∈ V − S with the minimum shortest-path estimate (greedy choice).
3. It stores V − S in a min-priority queue Q.


Dijkstra’s algorithm

DIJKSTRA(G, w, s)
1. INITIALIZE(G, s)
2. S = ∅
3. Q = V[G]
4. while Q ≠ ∅
5.     u = Extract-Min(Q)
6.     S = S ∪ {u}
7.     for each vertex v ∈ Adj[u]
8.         Relax(u, v, w)
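A compact Python sketch of the same procedure, using the standard-library heapq module as the min-priority queue; instead of storing V − S explicitly, this version uses the common lazy-deletion variant, and the graph representation and names are choices made for the sketch.

import heapq

def dijkstra(adj, s):
    # adj: dict mapping every vertex to a list of (v, w) pairs, with w >= 0.
    INF = float('inf')
    d = {u: INF for u in adj}
    pi = {u: None for u in adj}
    d[s] = 0
    S = set()                      # vertices whose distance is final
    pq = [(0, s)]                  # the priority queue Q, keyed on d
    while pq:
        du, u = heapq.heappop(pq)  # Extract-Min
        if u in S:                 # stale entry for an already finalized vertex
            continue
        S.add(u)
        for v, w in adj[u]:        # relax each edge leaving u
            if du + w < d[v]:
                d[v] = du + w
                pi[v] = u
                heapq.heappush(pq, (d[v], v))
    return d, pi

With a binary heap this runs in O((V + E) log V), matching the binary-heap bound listed on the complexity slide at the end of the lecture.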



Example

We use red edges to represent v.π and the color black to represent the set S.

[Figure: a weighted directed graph on vertices s, a, b, c, d, e, f, g, h; the drawing and the intermediate d-values did not survive extraction.]

The graph after initialization.
s ← Extract-Min(Q) and update the elements adjacent to s.
a ← Extract-Min(Q) and update the elements adjacent to a.
e ← Extract-Min(Q) and update the elements adjacent to e.
b ← Extract-Min(Q) and update the elements adjacent to b.
h ← Extract-Min(Q) and update the elements adjacent to h.
c ← Extract-Min(Q) and no update.
d ← Extract-Min(Q) and no update.
g ← Extract-Min(Q) and no update.
f ← Extract-Min(Q) and no update.


Correctness Dijkstra’s algorithm

Theorem 24.6
Upon termination, u.d = δ(s, u) for all u ∈ V (assuming non-negative weights).

Proof
By Lemma 24.11, once u.d = δ(s, u) holds, it continues to hold.

We are going to use the following loop invariant
At the start of each iteration of the while loop of lines 4-8, v.d = δ(s, v) for each vertex v ∈ S.


Proof

Thus
We are going to prove that, for each u ∈ V, u.d = δ(s, u) when u is inserted into S.

Initialization
Initially S = ∅, thus the invariant is trivially true.

Maintenance
We want to show that in each iteration u.d = δ(s, u) for the vertex added to the set S.

For this, note the following
Note that s.d = δ(s, s) = 0 when s is inserted, so u ≠ s. In addition, we have that S ≠ ∅ before u is added.


Proof

Use contradiction
Now, suppose not. Let u be the first vertex such that u.d ≠ δ(s, u) when it is inserted into S.

Note the following
Note that s.d = δ(s, s) = 0 when s is inserted, so u ≠ s; thus S ≠ ∅ just before u is inserted (in fact s ∈ S).


Proof

Now
Note that there exists a path from s to u, for otherwise u.d = δ(s, u) = ∞ by Corollary 24.12:

“If there is no path from s to v, then v.d = δ(s, v) = ∞ is an invariant.”

Thus there exists a shortest path p
between s and u.

Observation
Prior to adding u to S, path p connects a vertex in S, namely s, to a vertex in V − S, namely u.


Proof

Consider the following
Let y be the first vertex along p from s to u such that y ∈ V − S, and let x ∈ S be y’s predecessor along p.


Proof

Proof (continuation)
Then, the shortest path from s to u decomposes as s ⇝ x → y ⇝ u, where p1 denotes the subpath from s to x and p2 the subpath from y to u, and looks like...

[Figure: the set S containing s and x; the edge (x, y) crosses from S to y ∈ V − S, and the subpath p2 continues from y to u outside S.]

Remark: Either of paths p1 or p2 may have no edges.


Proof

We claim
y.d = δ(s, y) when u is added into S.

Proof of the claim
1. Observe that x ∈ S.
2. In addition, we know that u is the first vertex for which u.d ≠ δ(s, u) when it is added to S.


Proof

Then
In addition, we had that x.d = δ(s, x) when x was inserted into S.

Then, we relaxed the edge between x and y
Edge (x, y) was relaxed at that time!


Proof

Remember? Convergence property (Lemma 24.14)
Let p be a shortest path from s to v that decomposes as s ⇝ u → v. If u.d = δ(s, u) holds at any time prior to calling Relax(u, v, w), then v.d = δ(s, v) holds at all times after the call.

Then
Using this convergence property,

y.d = δ(s, y) = δ(s, x) + w(x, y)   (9)

The claim is implied!!!


Proof

Now
1. We obtain a contradiction to prove that u.d = δ(s, u).
2. y appears before u on a shortest path from s to u.
3. In addition, all edge weights are non-negative.
4. Then, δ(s, y) ≤ δ(s, u), thus

y.d = δ(s, y) ≤ δ(s, u) ≤ u.d

The last inequality is due to the Upper-Bound Property (Lemma 24.11).


Proof

Then
But because both vertices u and y were in V − S when u was chosen in line 5, we have u.d ≤ y.d.

Thus, the two inequalities must be equalities:

y.d = δ(s, y) = δ(s, u) = u.d

Consequently
We have that u.d = δ(s, u), which contradicts our choice of u.
Conclusion: u.d = δ(s, u) when u is added to S, and the equality is maintained afterwards.


Finally

Termination
At termination Q = ∅, thus V − S = ∅ or, equivalently, S = V.

Thus
u.d = δ(s, u) for all vertices u ∈ V!!!



Complexity

Running time is
O(V²) using a linear array for the priority queue.
O((V + E) log V) using a binary heap.
O(V log V + E) using a Fibonacci heap.
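As a rough worked comparison (the concrete sizes here are only illustrative): for a sparse graph with |V| = 10^5 and |E| = 4 × 10^5, the array implementation performs on the order of V² = 10^10 basic operations, while the binary-heap version performs about (V + E) log V ≈ 5 × 10^5 · 17 ≈ 8.5 × 10^6; for a dense graph with E = Θ(V²), the simple O(V²) array implementation already matches the Fibonacci-heap bound O(V log V + E) = O(V²).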


Exercises

From Cormen’s book solve24.1-124.1-324.1-423.3-123.3-323.3-423.3-623.3-723.3-823.3-10
