
Page 1: Lecture 16. Shortest Path Algorithms

Lecture 16. Shortest Path Algorithms

The single-source shortest path problem is the following: given a source vertex s and a sink vertex v, we'd like to find the shortest path from s to v. Here a shortest path means a sequence of directed edges from s to v with the smallest total weight.

There are some subtleties here. Should we allow negative edges? Of course, there are no negative distances; nevertheless, there are cases where negative edges make logical sense. But then there may not be a shortest path: if there is a cycle with negative weight, we could simply go around that cycle as many times as we want and reduce the cost of the path as much as we like. To avoid this, we might want to detect negative cycles, which can be done in the algorithm itself.

Non-negative cycles aren't helpful, either. Suppose our shortest path contains a cycle of non-negative weight. Then by cutting it out we get a path with the same weight or less, so we might as well cut it out.

Page 2: Lecture 16. Shortest Path Algorithms

Weighted Graphs

In a weighted graph, each edge has an associated numerical value, called the weight of the edge. Edge weights may represent distances, costs, etc.

Example: in a flight route graph, the weight of an edge represents the distance in miles between the endpoint airports.

[Figure: flight route graph on the airports ORD, PVD, MIA, DFW, SFO, LAX, LGA, HNL, with edge weights in miles]

Page 3: Lecture 16. Shortest Path Algorithms

Shortest Path Problem

Given a weighted graph and two vertices u and v, we want to find a path of minimum total weight between u and v. The length of a path is the sum of the weights of its edges.

Example: the shortest path between Providence and Honolulu.

Applications: Internet packet routing, flight reservations, driving directions.

[Figure: the flight route graph, highlighting the shortest path from PVD to HNL]

Page 4: Lecture 16. Shortest Path Algorithms

Shortest Path Properties

Property 1: A subpath of a shortest path is itself a shortest path.

Property 2: There is a tree of shortest paths from a start vertex to all the other vertices.

Example: the tree of shortest paths from Providence.

[Figure: the flight route graph, showing the tree of shortest paths rooted at PVD]

Page 5: Lecture 16. Shortest Path Algorithms

Dijkstra’s Algorithm

The distance of a vertex v from a vertex s is the length of a shortest path between s and v. Dijkstra’s algorithm computes the distances of all the vertices from a given start vertex s.

Assumptions: the graph is connected, the edges are undirected, and the edge weights are nonnegative.

We grow a “cloud” of vertices, beginning with s and eventually covering all the vertices. We store with each vertex v a label d(v) representing the distance of v from s in the subgraph consisting of the cloud and its adjacent vertices.

At each step, we add to the cloud the vertex u outside the cloud with the smallest distance label d(u), and we update the labels of the vertices adjacent to u.

Page 6: Lecture 16. Shortest Path Algorithms

Edge Relaxation

Consider an edge e = (u,z) such that u is the vertex most recently added to the cloud and z is not in the cloud. The relaxation of edge e updates the distance d(z) as follows:

d(z) ← min{ d(z), d(u) + weight(e) }

[Figure: relaxing edge e = (u,z) with d(u) = 50 and weight(e) = 10 lowers d(z) from 75 to 60]
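As a concrete illustration of the relaxation step, here is a minimal sketch in Python (the names dist, relax, and the sample values are hypothetical, mirroring the numbers in the figure above):

import math

def relax(u, z, w, dist):
    # Relax edge (u, z) of weight w: if the path through u is shorter, lower d(z).
    if dist[u] + w < dist.get(z, math.inf):
        dist[z] = dist[u] + w
        return True   # the label of z improved
    return False

# d(u) = 50, weight(e) = 10, so d(z) drops from 75 to 60
dist = {"u": 50, "z": 75}
relax("u", "z", 10, dist)
print(dist["z"])   # 60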

Page 7: Lecture 16. Shortest Path Algorithms

Example

[Figure: four snapshots of Dijkstra’s algorithm running on an example graph with six vertices A through F; nodes are labeled with their current d(v) values, which decrease as incident edges are relaxed]

Page 8: Lecture 16. Shortest Path Algorithms

Example (cont.)

[Figure: the final snapshots of the same run, with every vertex in the cloud and all distance labels final]

Page 9: Lecture 16. Shortest Path Algorithms

Dijkstra’s Algorithm

A priority queue stores the vertices outside the cloud (key: distance, element: vertex). Locator-based methods: insert(k,e) returns a locator, and replaceKey(l,k) changes the key (distance) of an item. We store two labels with each vertex: its distance label d(v) and its locator in the priority queue.

Algorithm DijkstraDistances(G, s)
  Q ← new heap-based priority queue
  for all v ∈ G.vertices()
    if v = s
      setDistance(v, 0)
    else
      setDistance(v, ∞)
    l ← Q.insert(getDistance(v), v)
    setLocator(v, l)
  while ¬Q.isEmpty()
    u ← Q.removeMin()
    for all e ∈ G.incidentEdges(u)
      { relax edge e }
      z ← G.opposite(u, e)
      r ← getDistance(u) + weight(e)
      if r < getDistance(z)
        setDistance(z, r)
        Q.replaceKey(getLocator(z), r)
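For reference, a minimal runnable sketch of the algorithm in Python. It assumes the graph is an adjacency dictionary mapping each vertex to a list of (neighbor, weight) pairs (a hypothetical representation), and it replaces the locator-based replaceKey with the common lazy-deletion trick on Python's heapq; the O((n + m) log n) bound still holds:

import heapq
import math

def dijkstra_distances(graph, s):
    # Single-source shortest distances with nonnegative edge weights.
    # graph: dict mapping vertex -> list of (neighbor, weight) pairs.
    dist = {v: math.inf for v in graph}
    dist[s] = 0
    pq = [(0, s)]                 # priority queue of (distance label, vertex)
    cloud = set()                 # vertices whose distance is final
    while pq:
        d_u, u = heapq.heappop(pq)
        if u in cloud:            # stale entry left behind by lazy deletion
            continue
        cloud.add(u)
        for z, w in graph[u]:     # relax every edge incident to u
            r = d_u + w
            if r < dist[z]:
                dist[z] = r
                heapq.heappush(pq, (r, z))
    return dist

# Usage on a small undirected graph (each edge listed in both directions)
g = {"A": [("B", 8), ("C", 2)],
     "B": [("A", 8), ("D", 1)],
     "C": [("A", 2), ("D", 9)],
     "D": [("B", 1), ("C", 9)]}
print(dijkstra_distances(g, "A"))   # {'A': 0, 'B': 8, 'C': 2, 'D': 9}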

Page 10: Lecture 16. Shortest Path Algorithms

Analysis

Graph operations: the method incidentEdges is called once for each vertex.

Label operations: we set/get the distance and locator labels of a vertex z O(deg(z)) times; setting or getting a label takes O(1) time.

Priority queue operations: each vertex is inserted once into and removed once from the priority queue, where each insertion or removal takes O(log n) time; the key of a vertex z in the priority queue is modified at most deg(z) times, where each key change takes O(log n) time.

Dijkstra’s algorithm therefore runs in O((n + m) log n) time provided the graph is represented by the adjacency list structure. Recall that Σv deg(v) = 2m. The running time can also be expressed as O(m log n), since the graph is connected and therefore m ≥ n - 1.

Page 11: Lecture 16. Shortest Path Algorithms

Extension

Using the template method pattern, we can extend Dijkstra’s algorithm to return a tree of shortest paths from the start vertex to all other vertices. We store with each vertex a third label: its parent edge in the shortest path tree. In the edge relaxation step, we update the parent label.

Algorithm DijkstraShortestPathsTree(G, s)
  for all v ∈ G.vertices()
    …
    setParent(v, ∅)
    …
  for all e ∈ G.incidentEdges(u)
    { relax edge e }
    z ← G.opposite(u, e)
    r ← getDistance(u) + weight(e)
    if r < getDistance(z)
      setDistance(z, r)
      setParent(z, e)
      Q.replaceKey(getLocator(z), r)
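A minimal sketch of this extension in Python, in the style of the dijkstra_distances sketch above (hypothetical names): each relaxation that improves d(z) also records u as z's parent (a parent vertex rather than a parent edge, which is equivalent for reconstructing paths), and a shortest path is then read off by walking parent links back to s:

import heapq
import math

def dijkstra_tree(graph, s):
    # Returns (dist, parent); parent[v] is v's predecessor in the
    # shortest-path tree rooted at s (parent[s] stays None).
    dist = {v: math.inf for v in graph}
    parent = {v: None for v in graph}
    dist[s] = 0
    pq, cloud = [(0, s)], set()
    while pq:
        d_u, u = heapq.heappop(pq)
        if u in cloud:
            continue
        cloud.add(u)
        for z, w in graph[u]:
            r = d_u + w
            if r < dist[z]:
                dist[z] = r
                parent[z] = u       # update the parent label during relaxation
                heapq.heappush(pq, (r, z))
    return dist, parent

def path_to(parent, v):
    # Reconstruct the path from the source to v by following parent links.
    path = []
    while v is not None:
        path.append(v)
        v = parent[v]
    return list(reversed(path))

# Usage on the same small undirected graph as before
g = {"A": [("B", 8), ("C", 2)], "B": [("A", 8), ("D", 1)],
     "C": [("A", 2), ("D", 9)], "D": [("B", 1), ("C", 9)]}
dist, parent = dijkstra_tree(g, "A")
print(path_to(parent, "D"))   # ['A', 'B', 'D']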

Page 12: Lecture 16. Shortest Path Algorithms

Why Dijkstra’s Algorithm Works

Dijkstra’s algorithm is based on the greedy method: it adds vertices by increasing distance.

[Figure: the six-vertex example graph with its final distance labels, used to illustrate the argument with vertices D and F]

Suppose it didn't find all shortest distances, and let F be the first wrong vertex the algorithm processed. When the previous vertex D on the true shortest path to F was processed, its distance was correct, because F is the first wrong vertex. But the edge (D,F) was relaxed at that time, so d(F) ≤ d(D) + weight(D,F), which is the length of the true shortest path to F. Thus, so long as d(F) > d(D), F's distance cannot be wrong. That is, there is no wrong vertex.

Page 13: Lecture 16. Shortest Path Algorithms

Why It Doesn’t Work for Negative-Weight Edges

Dijkstra’s algorithm is based on the greedy method: it adds vertices by increasing distance. If a vertex with a negative incident edge were to be added late to the cloud, it could mess up distances for vertices already in the cloud.

[Figure: the example graph with one edge of weight -8; C's true distance is 1, but C is already in the cloud with d(C) = 5]

Page 14: Lecture 16. Shortest Path Algorithms

Bellman-Ford Algorithm

Works even with negative-weight edges. Must assume directed edges (for otherwise a single negative-weight edge would already form a negative-weight cycle). Iteration i finds all shortest paths that use at most i edges. Running time: O(nm). Can be extended to detect a negative-weight cycle if one exists.

Algorithm BellmanFord(G, s)
  for all v ∈ G.vertices()
    if v = s
      setDistance(v, 0)
    else
      setDistance(v, ∞)
  for i ← 1 to n-1 do   (*)
    for each directed edge e = (u → z)
      { relax edge e }
      r ← getDistance(u) + weight(e)
      if r < getDistance(z)
        setDistance(z, r)
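A minimal runnable sketch in Python, assuming the input is a vertex list plus a list of directed edges (u, z, w) (a hypothetical representation); a single extra pass over the edges after the n-1 rounds detects a reachable negative-weight cycle:

import math

def bellman_ford(vertices, edges, s):
    # vertices: iterable of vertices; edges: list of directed (u, z, w) triples.
    # Returns the distance dictionary, or raises if a reachable
    # negative-weight cycle exists.
    dist = {v: math.inf for v in vertices}
    dist[s] = 0
    for _ in range(len(dist) - 1):       # n-1 rounds, each relaxing every edge
        for u, z, w in edges:
            if dist[u] + w < dist[z]:
                dist[z] = dist[u] + w
    for u, z, w in edges:                # any further improvement means a
        if dist[u] + w < dist[z]:        # negative-weight cycle is reachable
            raise ValueError("negative-weight cycle reachable from the source")
    return dist

# Usage: a negative edge, but no negative cycle
V = ["s", "a", "b"]
E = [("s", "a", 4), ("s", "b", 5), ("b", "a", -3)]
print(bellman_ford(V, E, "s"))   # {'s': 0, 'a': 2, 'b': 5}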

Page 15: Lecture 16. Shortest Path Algorithms

Correctness

Lemma. After i repetitions of the (*) "for" loop in Bellman-Ford, if there is a path from s to u with at most i edges, then d[u] is at most the length of a shortest path from s to u with at most i edges.

Proof. By induction on i. The base case i = 0 is trivial.

For the induction step, consider a shortest path from s to u with at most i edges, and let v be the last vertex before u on this path. The part of the path from s to v is a path from s to v with at most i-1 edges, so by the inductive hypothesis, d[v] after i-1 executions of the (*) "for" loop is at most its length. Therefore d[v] + w(v,u) is at most the length of the whole path from s to u via v. In the i-th iteration, d[u] is compared with d[v] + w(v,u) and is set equal to it if d[v] + w(v,u) is smaller.

Therefore, after i iterations of the (*) "for" loop, d[u] is at most the length of a shortest path from s to u that uses at most i edges, and the lemma holds.

Page 16: Lecture 16. Shortest Path Algorithms

Bellman-Ford Example

[Figure: successive rounds of Bellman-Ford on an example graph with negative-weight edges (including weight -2); nodes are labeled with their d(v) values, which improve from ∞ as the rounds relax the edges]

Page 17: Lecture 16. Shortest Path Algorithms

DAG-based Algorithm

Works even with negative-weight edges. Uses a topological order. Doesn’t use any fancy data structures and is much faster than Dijkstra’s algorithm. Running time: O(n + m).

Algorithm DagDistances(G, s)
  for all v ∈ G.vertices()
    if v = s
      setDistance(v, 0)
    else
      setDistance(v, ∞)
  perform a topological sort of the vertices
  for u ← 1 to n do   {in topological order}
    for each e ∈ G.outEdges(u)
      { relax edge e }
      z ← G.opposite(u, e)
      r ← getDistance(u) + weight(e)
      if r < getDistance(z)
        setDistance(z, r)
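A minimal runnable sketch in Python, assuming the DAG is an adjacency dictionary of (neighbor, weight) lists (a hypothetical representation); the topological order is computed with Kahn's algorithm, and out-edges are then relaxed in that order:

import math
from collections import deque

def dag_distances(graph, s):
    # Shortest distances from s in a weighted DAG.
    # graph: dict mapping vertex -> list of (neighbor, weight) pairs.
    indegree = {v: 0 for v in graph}           # Kahn's topological sort
    for u in graph:
        for z, _ in graph[u]:
            indegree[z] += 1
    queue = deque(v for v in graph if indegree[v] == 0)
    topo = []
    while queue:
        u = queue.popleft()
        topo.append(u)
        for z, _ in graph[u]:
            indegree[z] -= 1
            if indegree[z] == 0:
                queue.append(z)
    dist = {v: math.inf for v in graph}        # relax out-edges in topo order
    dist[s] = 0
    for u in topo:
        if dist[u] == math.inf:                # u is not reachable from s
            continue
        for z, w in graph[u]:
            if dist[u] + w < dist[z]:
                dist[z] = dist[u] + w
    return dist

# Usage: a three-vertex DAG with a negative edge
g = {1: [(2, 4), (3, 8)], 2: [(3, -5)], 3: []}
print(dag_distances(g, 1))   # {1: 0, 2: 4, 3: -1}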

Page 18: Lecture 16. Shortest Path Algorithms

DAG Example

[Figure: run of the DAG-based algorithm on an example DAG whose vertices are numbered 1 to 6 in topological order, with negative-weight edges (including weight -5); nodes are labeled with their d(v) values, updated as each vertex is processed in topological order (two steps shown together)]

Page 19: Lecture 16. Shortest Path Algorithms

All-Pairs Shortest Paths: Dynamic Programming

Find the distance between every pair of vertices in a weighted directed graph G. We can make n calls to Dijkstra’s algorithm (if there are no negative edges), which takes O(nm log n) time. Likewise, n calls to Bellman-Ford would take O(n²m) time. We can achieve O(n³) time using dynamic programming (the Floyd-Warshall algorithm).

Algorithm AllPair(G)   {assumes vertices 1,…,n}
  for all vertex pairs (i,j)
    if i = j
      D0[i,i] ← 0
    else if (i,j) is an edge in G
      D0[i,j] ← weight of edge (i,j)
    else
      D0[i,j] ← +∞
  for k ← 1 to n do
    for i ← 1 to n do
      for j ← 1 to n do
        Dk[i,j] ← min{ Dk-1[i,j], Dk-1[i,k] + Dk-1[k,j] }
  return Dn

[Figure: the subpaths i→k and k→j use only intermediate vertices numbered 1,…,k-1; together they form a candidate i→j path using only vertices numbered 1,…,k, which is compared against the current entry for (i,j)]
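For completeness, a minimal sketch of the Floyd-Warshall recurrence in Python, assuming the graph is given as an n×n weight matrix with math.inf for missing edges and 0 on the diagonal (a hypothetical representation); a single matrix is updated in place, a standard space-saving variant of the Dk matrices in the pseudocode:

import math

def floyd_warshall(W):
    # W: n x n matrix; W[i][j] is the weight of edge (i, j), math.inf if
    # absent, and W[i][i] = 0. Returns all-pairs shortest distances.
    n = len(W)
    D = [row[:] for row in W]            # D plays the role of D0 initially
    for k in range(n):                   # now allow k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

# Usage on a three-vertex directed graph
INF = math.inf
W = [[0,   3, INF],
     [INF, 0,   2],
     [1, INF,   0]]
print(floyd_warshall(W))   # [[0, 3, 5], [3, 0, 2], [1, 4, 0]]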