Analysis & Design of Algorithms (CSCE 321)
Prof. Amr Goneid, Department of Computer Science, AUC
Part 10. Dynamic Programming
Dynamic Programming
Introduction
What is Dynamic Programming?
How To Devise a Dynamic Programming Approach
The Sum of Subset Problem
The Knapsack Problem
Minimum Cost Path
Coin Change Problem
Optimal BST
DP Algorithms in Graph Problems
Comparison with Greedy and D&Q Methods
1. Introduction
We have demonstrated that sometimes the divide and conquer approach seems appropriate but fails to produce an efficient algorithm.
One of the reasons is that D&Q produces overlapping subproblems.
Introduction
Solution: buy speed using space. Store previous instances to compute the current instance.
Instead of dividing the large problem into two (or more) smaller problems and solving those problems (as we did in the divide and conquer approach), we start with the simplest possible problems. We solve them (usually trivially) and save these results. These results are then used to solve slightly larger problems which are, in turn, saved and used to solve larger problems again.
This method is called Dynamic Programming.
2. What is Dynamic Programming?
An algorithm design method used when the solution is the result of a sequence of decisions (e.g. Knapsack, Optimal Search Trees, Shortest Path, etc.).
Makes decisions one at a time and never makes an erroneous decision.
Solves a sub-problem by making use of previously stored solutions for all other sub-problems.
Dynamic Programming
Invented by the American mathematician Richard Bellman in the 1950s to solve optimization problems. “Programming” here means “planning”.
When to Use Dynamic Programming
Two main properties of a problem suggest that it can be solved using dynamic programming:
Overlapping Subproblems
Optimal Substructure
Overlapping Subproblems
Like Divide and Conquer, Dynamic Programming combines solutions to subproblems. Dynamic Programming is mainly used when solutions of the same subproblems are needed again and again.
Examples are computing the Fibonacci Sequence, Binomial Coefficients, etc.
Optimal Substructure: The Principle of Optimality
Dynamic programming uses the Principle of Optimality to avoid non-optimal decision sequences.
For an optimal sequence of decisions, the remaining decisions must constitute an optimal sequence.
Example: Shortest Path. Find the shortest path from vertex (i) to vertex (j).
Principle of Optimality
Let k be an intermediate vertex on a shortest i-to-j path i, a, b, …, k, l, m, …, j. Then the path i, a, b, …, k must be a shortest i-to-k path, and the path k, l, m, …, j must be a shortest k-to-j path.
3. How To Devise a Dynamic Programming Approach
Given a problem that is solvable by a Divide & Conquer method:
- Prepare a table to store results of sub-problems
- Replace the base case by filling the start of the table
- Replace recursive calls by table lookups
- Devise for-loops to fill the table with sub-problem solutions instead of returning values
- The solution is at the end of the table
- Notice that previous table locations also contain valid (optimal) sub-problem solutions
Example(1): Fibonacci Sequence
The Fibonacci recursion graph is not a tree, indicating an overlapping subproblem.
Optimal Substructure: if F(n-2) and F(n-1) are optimal, then F(n) = F(n-2) + F(n-1) is optimal.
Fibonacci Sequence
Dynamic Programming Solution: buy speed with space, a table F[n]. Store previous instances to compute the current instance.
Fibonacci Sequence
Recursive (D&Q):
Fib(n):
  if (n < 2) return 1;
  else return Fib(n-1) + Fib(n-2);

Dynamic Programming, table F[n]:
F[0] = F[1] = 1;
if (n >= 2)
  for i = 2 to n
    F[i] = F[i-1] + F[i-2];
return F[n];
(F[i] is computed from the stored values F[i-1] and F[i-2].)
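A minimal compilable sketch of the table-based version (the function name fibDP and the use of std::vector are illustrative choices, not from the slides):

#include <algorithm>
#include <vector>

// Bottom-up Fibonacci as in the slide: F[0] = F[1] = 1, F[i] = F[i-1] + F[i-2]
long long fibDP(int n)
{
    std::vector<long long> F(std::max(n + 1, 2), 1);   // F[0] = F[1] = 1
    for (int i = 2; i <= n; ++i)
        F[i] = F[i - 1] + F[i - 2];                    // table lookup replaces recursion
    return F[n];
}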
Fibonacci Sequence
Dynamic Programming Solution: space complexity is O(n); time complexity is T(n) = O(n).
Example(2): Counting Combinations
Overlapping Subproblems: the recursion tree for comb(5,3) branches into comb(4,2) and comb(4,3), and the sub-tree for comb(3,2) (with its calls to comb(2,1), comb(2,2), comb(1,0), comb(1,1)) appears more than once.

comb(n, m) = comb(n-1, m-1) + comb(n-1, m) for 0 < m < n, with comb(n, 0) = comb(n, n) = 1
Counting Combinations
Optimal Substructure: the value of Comb(n, m) can be recursively calculated using the following standard formula for Binomial Coefficients:
Comb(n, m) = Comb(n-1, m-1) + Comb(n-1, m)
Comb(n, 0) = Comb(n, n) = 1
Counting Combinations
Dynamic Programming Solution: buy speed with space (Pascal's Triangle). Use a table T[0..n, 0..m]. Store previous instances to compute the current instance.
Counting Combinations
Recursive (D&Q):
comb(n, m):
  if ((m == 0) || (m == n)) return 1;
  else return comb(n-1, m-1) + comb(n-1, m);

Dynamic Programming, table T[n, m]:
for (i = 0 to n-m) T[i, 0] = 1;
for (i = 0 to m) T[i, i] = 1;
for (j = 1 to m)
  for (i = j+1 to n-m+j)
    T[i, j] = T[i-1, j-1] + T[i-1, j];
return T[n, m];
(T[i, j] is computed from the stored values T[i-1, j-1] and T[i-1, j].)
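A compilable sketch of the same table-filling scheme (combDP and the 2-D std::vector are illustrative choices, not from the slides; it assumes 0 <= m <= n):

#include <vector>

// Pascal's-triangle table T, filled with the same loops as the slide's pseudocode.
long long combDP(int n, int m)
{
    std::vector<std::vector<long long>> T(n + 1, std::vector<long long>(m + 1, 0));
    for (int i = 0; i <= n - m; ++i) T[i][0] = 1;          // comb(i, 0) = 1
    for (int i = 0; i <= m; ++i)     T[i][i] = 1;          // comb(i, i) = 1
    for (int j = 1; j <= m; ++j)
        for (int i = j + 1; i <= n - m + j; ++i)
            T[i][j] = T[i - 1][j - 1] + T[i - 1][j];       // Pascal's rule
    return T[n][m];
}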
Counting Combinations
Dynamic Programming Solution: space complexity is O(nm); time complexity is T(n) = O(nm).
Exercise
Consider the following function:
F(n) = Σ (i = 1 to n-1) F(i)·F(i-1) for n ≥ 2, with F(1) = F(0) = 1
Consider the number of arithmetic operations used to be T(n):
- Show that a direct recursive algorithm would give an exponential complexity.
- Explain how, by not re-computing the same F(i) value twice, one can obtain an algorithm with T(n) = O(n²).
- Give an algorithm for this problem that only uses O(n) arithmetic operations.
4. The Sum of Subset Problem
Given a set of positive integers W = {w1, w2, ..., wn}.
The problem: is there a subset of W that sums exactly to m? i.e., is SumSub(w, n, m) true?
Example: W = {11, 13, 27, 7}, m = 31. A possible subset that sums exactly to 31 is {11, 13, 7}.
Hence, SumSub(w, 4, 31) is true.
The Sum of Subset Problem
Consider the partial problem SumSub(w, i, j).
SumSub(w, i, j) is true if:
- wi is not needed: {w1, .., wi-1} has a subset that sums to (j), i.e., SumSub(w, i-1, j) is true, OR
- wi is needed to fill the rest of (j): {w1, .., wi-1} has a subset that sums to (j - wi), i.e., SumSub(w, i-1, j - wi) is true.
If there are no elements, i.e. (i = 0), then SumSub(w, 0, j) is true if (j = 0) and false otherwise.
Divide & Conquer Approach
Algorithm:
bool SumSub (w, i, j)
{
  if (i == 0) return (j == 0);
  else if (SumSub (w, i-1, j)) return true;
  else if ((j - wi) >= 0) return SumSub (w, i-1, j - wi);
  else return false;
}
Dynamic Programming Approach
Use a table t[i, j], i = 0..n, j = 0..m.
Base case: set t[0,0] = true and t[0,j] = false for (j != 0).
Recursive calls are constructed as follows:
- loop on i = 1 to n
- loop on j = 1 to m
- the test on SumSub (w, i-1, j) is replaced by t[i,j] = t[i-1,j]
- return SumSub (w, i-1, j - wi) is replaced by t[i,j] = t[i-1,j] OR t[i-1, j - wi]
Dynamic Programming Algorithm
bool SumSub (w, n, m)
{
1.  t[0,0] = true; for (j = 1 to m) t[0,j] = false;
2.  for (i = 1 to n)
3.    for (j = 0 to m)
4.    { t[i,j] = t[i-1,j];
5.      if ((j - wi) >= 0)
6.        t[i,j] = t[i-1,j] || t[i-1, j - wi]; }
7.  return t[n,m];
}
(t[i, j] is computed from the stored values t[i-1, j] and t[i-1, j - wi].)
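A compilable sketch of the same table computation (sumSubDP and the 0-indexed std::vector are illustrative choices, not from the slides; w[i-1] plays the role of wi):

#include <vector>

// Table-based SumSub: t[i][j] is true iff some subset of the first i weights sums to j.
bool sumSubDP(const std::vector<int>& w, int m)
{
    int n = (int)w.size();
    std::vector<std::vector<bool>> t(n + 1, std::vector<bool>(m + 1, false));
    t[0][0] = true;                                  // the empty set sums to 0 only
    for (int i = 1; i <= n; ++i)
        for (int j = 0; j <= m; ++j) {
            t[i][j] = t[i - 1][j];                   // wi not used
            if (j - w[i - 1] >= 0)
                t[i][j] = t[i][j] || t[i - 1][j - w[i - 1]];   // wi used
        }
    return t[n][m];
}
// e.g. sumSubDP({11, 13, 27, 7}, 31) should return true ({11, 13, 7} sums to 31).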
Dynamic Programming Algorithm
Analysis:
1. costs O(1) + O(m)
2. costs O(n)
3. costs O(m)
4. costs O(1)
5. costs O(1)
6. costs O(1)
7. costs O(1)
Hence, space complexity is O(nm)
Time complexity is T(n) = O(m) + O(nm) = O(nm)
5. The (0/1) Knapsack Problem
Given n indivisible objects with positive integer weights W = {w1,w2...wn} and positive integer values V = {v1,v2...vn} and a knapsack of size (m)
Find the highest valued subset of objects with total weight at most (m)
The Decision Instance
Assume that we have tried objects of type (1, 2, ..., i-1) to fill the sack up to a capacity (j) with a maximum profit of P(i-1, j).
If j ≥ wi then P(i-1, j - wi) is the maximum profit if we remove the equivalent weight wi of an object of type (i).
By trying to add object (i), we expect the maximum profit to change to P(i-1, j - wi) + vi.
The Decision Instance
If this change is better, we do it; otherwise we leave things as they were, i.e.,
P(i, j) = max { P(i-1, j), P(i-1, j - wi) + vi } for j ≥ wi
P(i, j) = P(i-1, j) for j < wi
The above instance can be solved for P(n, m) by initializing P(0, j) = 0 and successively computing P(1, j), P(2, j), ..., P(n, j) for all 0 ≤ j ≤ m.
Divide & Conquer Approach
Algorithm:
int Knapsackr (int w[ ], int v[ ], int i, int j)
{
  if (i == 0) return 0;
  else
  {
    int a = Knapsackr (w, v, i-1, j);
    if ((j - w[i]) >= 0)
    {
      int b = Knapsackr (w, v, i-1, j-w[i]) + v[i];
      return (b > a ? b : a);
    }
    else return a;
  }
}
Divide & Conquer Approach
Analysis: T(n) = no. of calls to Knapsackr (w, v, n, m):
- For n = 0, one main call, T(0) = 1
- For n > 0, one main call plus two calls each with n-1
The recurrence relation is: T(n) = 2T(n-1) + 1 for n > 0 with T(0) = 1
Hence T(n) = 2^(n+1) - 1 = O(2^n), i.e. exponential time.
Dynamic Programming Approach
The following approach will give the maximum profit, but not the collection of objects that produced this profit:
Initialize P(0, j) = 0 for 0 ≤ j ≤ m
Initialize P(i, 0) = 0 for 0 ≤ i ≤ n
for each object i from 1 to n do
  for a capacity j from 0 to m do
    P(i, j) = P(i-1, j)
    if (j >= wi)
      if (P(i-1, j) < P(i-1, j - wi) + vi)
        P(i, j) = P(i-1, j - wi) + vi
DP Algorithm
int Knapsackdp (int w[ ], int v[ ], int n, int m)
{
  int p[N][M];
  for (int j = 0; j <= m; j++) p[0][j] = 0;
  for (int i = 0; i <= n; i++) p[i][0] = 0;
  for (int i = 1; i <= n; i++)
    for (int j = 0; j <= m; j++)
    {
      int a = p[i-1][j];
      p[i][j] = a;
      if ((j - w[i]) >= 0)
      {
        int b = p[i-1][j-w[i]] + v[i];
        if (b > a) p[i][j] = b;
      }
    }
  return p[n][m];
}
Hence, space complexity is O(nm).
Time complexity is T(n) = O(n) + O(m) + O(nm) = O(nm).
Example
Example: Knapsack capacity m = 5
item weight value ($)
1 2 12
2 1 10
3 3 20
4 2 15
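A compilable sketch of the same table computation applied to this instance (knapsackDP and the 0-indexed vectors are my own choices, not from the slides; the slide's arrays are 1-indexed):

#include <algorithm>
#include <vector>

// 0/1 knapsack table as in Knapsackdp above; w and v are 0-indexed here.
int knapsackDP(const std::vector<int>& w, const std::vector<int>& v, int m)
{
    int n = (int)w.size();
    std::vector<std::vector<int>> p(n + 1, std::vector<int>(m + 1, 0));
    for (int i = 1; i <= n; ++i)
        for (int j = 0; j <= m; ++j) {
            p[i][j] = p[i - 1][j];                       // leave object i out
            if (j - w[i - 1] >= 0)                       // try to include object i
                p[i][j] = std::max(p[i][j], p[i - 1][j - w[i - 1]] + v[i - 1]);
        }
    return p[n][m];
}
// For the instance above, knapsackDP({2, 1, 3, 2}, {12, 10, 20, 15}, 5) should return 37
// (objects 1, 2 and 4: weight 2 + 1 + 2 = 5, value 12 + 10 + 15 = 37).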
Example (cont.)
(The original slides show the table P(i, j), i = 0..n, j = 0..m, being filled row by row: P(0, j) = 0, and each entry P(i, j) is computed from P(i-1, j) and P(i-1, j - wi), with the goal P(n, m) in the bottom-right corner.)
Exercises
- Modify the previous Knapsack algorithm so that it could also list the objects contributing to the maximum profit.
- Explain how to reduce the space complexity of the Knapsack problem to only O(m). You need only to find the maximum profit, not the actual collection of objects.
Exercise: Longest Common Subsequence Problem
Given two sequences A = {a1, . . . , an} and B = {b1, . . . , bm}, find the longest sequence that is a subsequence of both A and B. For example, if A = {aaadebcbac} and B = {abcadebcbec}, then {adebcb} is a subsequence of length 6 of both sequences.
Give the recursive Divide & Conquer algorithm and the Dynamic Programming algorithm together with their analyses.
Hint:
Let L(i, j) be the length of the longest common subsequence of {a1, . . . , ai} and {b1, . . . , bj}. If ai = bj then L(i, j) = L(i-1, j-1) + 1. Otherwise, one can see that
L(i, j) = max (L(i, j-1), L(i-1, j)).
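A minimal sketch of the table-filling the hint suggests (lcsLength is an illustrative name, and std::string stands in for the sequences; not from the slides):

#include <algorithm>
#include <string>
#include <vector>

// L[i][j] = length of the longest common subsequence of a[0..i-1] and b[0..j-1].
int lcsLength(const std::string& a, const std::string& b)
{
    int n = (int)a.size(), m = (int)b.size();
    std::vector<std::vector<int>> L(n + 1, std::vector<int>(m + 1, 0));
    for (int i = 1; i <= n; ++i)
        for (int j = 1; j <= m; ++j)
            L[i][j] = (a[i - 1] == b[j - 1]) ? L[i - 1][j - 1] + 1
                                             : std::max(L[i][j - 1], L[i - 1][j]);
    return L[n][m];
}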
6. Minimum Cost Path
Given a cost matrix C[ ][ ] and a position (n, m) in C[ ][ ], find the cost of the minimum cost path to reach (n, m) from (0, 0).
Each cell of the matrix represents a cost to traverse through that cell. The total cost of a path to reach (n, m) is the sum of all the costs on that path (including both source and destination).
From a given cell (i, j), you can only traverse down to cell (i+1, j), right to cell (i, j+1) and diagonally to cell (i+1, j+1). Assume that all costs are positive integers.
Minimum Cost Path
Example: what is the minimum cost path to (2, 2)?
The path is (0, 0) -> (0, 1) -> (1, 2) -> (2, 2). The cost of the path is 8 (1 + 2 + 2 + 3).
Optimal Substructure: the minimum cost to reach (n, m) is the minimum of the costs of the three predecessor cells plus C[n][m], i.e.,
minCost(n, m) = min (minCost(n-1, m-1), minCost(n-1, m), minCost(n, m-1)) + C[n][m]
Minimum Cost Path (D&Q)
Overlapping Subproblems: the recursive definition suggests a D&Q approach with overlapping subproblems:

int MinCost(int C[ ][M], int n, int m)
{
  if (n < 0 || m < 0) return ∞;
  else if (n == 0 && m == 0) return C[n][m];
  else return C[n][m] + min( MinCost(C, n-1, m-1),
                             MinCost(C, n-1, m), MinCost(C, n, m-1) );
}

Analysis: for m = n, T(n) = 3T(n-1) + 3 for n > 0 with T(0) = 0.
Hence T(n) = O(3^n), exponential complexity.
Dynamic Programming Algorithm
In the Dynamic Programming (DP) algorithm, recomputation of the same subproblems can be avoided by constructing a temporary array T[ ][ ] in a bottom-up manner.

int minCost(int C[ ][M], int n, int m)
{
  int i, j;
  int T[N][M];
  T[0][0] = C[0][0];
  /* Initialize first column */
  for (i = 1; i <= n; i++) T[i][0] = T[i-1][0] + C[i][0];
  /* Initialize first row */
  for (j = 1; j <= m; j++) T[0][j] = T[0][j-1] + C[0][j];
  /* Construct rest of the array */
  for (i = 1; i <= n; i++)
    for (j = 1; j <= m; j++)
      T[i][j] = min(T[i-1][j-1], T[i-1][j], T[i][j-1]) + C[i][j];
  return T[n][m];
}

Space complexity is O(nm). Time complexity is T(n) = O(n) + O(m) + O(nm) = O(nm).
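For concreteness, a small compilable sketch of the same bottom-up table; the 3x3 cost matrix used below is an assumption (the original slide's matrix is not reproduced in this transcript), chosen only to be consistent with the stated path and cost of 8:

#include <algorithm>
#include <vector>

// Bottom-up table T as in minCost above, taking the matrix as a vector of rows.
int minCostDP(const std::vector<std::vector<int>>& C)
{
    int n = (int)C.size(), m = (int)C[0].size();
    std::vector<std::vector<int>> T(n, std::vector<int>(m, 0));
    T[0][0] = C[0][0];
    for (int i = 1; i < n; ++i) T[i][0] = T[i - 1][0] + C[i][0];   // first column
    for (int j = 1; j < m; ++j) T[0][j] = T[0][j - 1] + C[0][j];   // first row
    for (int i = 1; i < n; ++i)
        for (int j = 1; j < m; ++j)
            T[i][j] = std::min({T[i - 1][j - 1], T[i - 1][j], T[i][j - 1]}) + C[i][j];
    return T[n - 1][m - 1];
}
// Assumed example matrix (not from the slides): {{1, 2, 3}, {4, 8, 2}, {1, 5, 3}}.
// minCostDP on it returns 8, matching the path (0,0) -> (0,1) -> (1,2) -> (2,2).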
7. Coin Change Problem
We want to make change for N cents, and we have an infinite supply of each of the S = {S1, S2, …, Sm} valued coins; how many ways can we make the change? (For simplicity's sake, the order does not matter.)
Mathematically, how many ways can we express N as
N = Σ (k = 1 to m) xk·Sk, with xk ≥ 0, k ∈ {1, ..., m}
For example, for N = 4, S = {1, 2, 3}, there are four solutions: {1,1,1,1}, {1,1,2}, {2,2}, {1,3}.
We are trying to count the number of distinct sets. Since order does not matter, we will impose that our solutions (sets) are all sorted in non-decreasing order (thus, we are looking at sorted-set solutions: collections).
Coin Change Problem
With S1 < S2 < … < Sm, the number of possible sets C(N, m) is composed of:
- Those sets that contain at least one Sm, i.e. C(N - Sm, m)
- Those sets that do not contain any Sm, i.e. C(N, m-1)
Hence, the solution can be represented by the recurrence relation:
C(N, m) = C(N, m-1) + C(N - Sm, m)
with the base cases:
C(N, m) = 1 for N = 0
C(N, m) = 0 for N < 0
C(N, m) = 0 for N ≥ 1, m ≤ 0
Therefore, the problem has the optimal substructure property, as the problem can be solved using solutions to subproblems. It also has the property of overlapping subproblems.
D&Q Algorithm
int count( int S[ ], int m, int n )
{
// If n is 0 then there is 1 solution (do not include any coin)
if (n == 0) return 1;
// If n is less than 0 then no solution exists
if (n < 0) return 0;
// If there are no coins and n is greater than 0, then no solution
if (m <=0 && n >= 1) return 0;
// count is sum of solutions (i) including S[m-1] (ii) excluding
// S[m-1]
return count( S, m - 1, n ) + count( S, m, n-S[m-1] );
}
The algorithm has exponential complexity.
DP Algorithm
int count( int S[ ], int m, int n )
{
  int i, j, x, y;
  int table[n+1][m];   // n+1 rows to include the case (n = 0)
  for (i = 0; i < m; i++) table[0][i] = 1;   // fill for the case (n = 0)
  // Fill rest of the table entries bottom up
  for (i = 1; i < n+1; i++)
    for (j = 0; j < m; j++)
    {
      x = (i - S[j] >= 0) ? table[i - S[j]][j] : 0;   // solutions including S[j]
      y = (j >= 1) ? table[i][j-1] : 0;               // solutions excluding S[j]
      table[i][j] = x + y;                            // total count
    }
  return table[n][m-1];
}
Space complexity is O(nm). Time complexity is O(m) + O(nm) = O(nm).
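A compilable sketch of the same table, checked against the example from the problem statement (countWays and the use of std::vector are illustrative choices, not from the slides):

#include <vector>

// Same table as count() above; assumes at least one coin value in S.
long long countWays(const std::vector<int>& S, int N)
{
    int m = (int)S.size();
    std::vector<std::vector<long long>> table(N + 1, std::vector<long long>(m, 0));
    for (int j = 0; j < m; ++j) table[0][j] = 1;                     // one way to make 0
    for (int i = 1; i <= N; ++i)
        for (int j = 0; j < m; ++j) {
            long long x = (i - S[j] >= 0) ? table[i - S[j]][j] : 0;  // use coin S[j]
            long long y = (j >= 1) ? table[i][j - 1] : 0;            // skip coin S[j]
            table[i][j] = x + y;
        }
    return table[N][m - 1];
}
// countWays({1, 2, 3}, 4) returns 4, matching the four solutions listed earlier.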
8. Optimal Binary Search Trees
Problem:
Given a set of keys K1 , K2 , … , Kn and their corresponding search frequencies P1 , P2 , … , Pn , find a binary search tree for the keys such that the total search cost is minimum.
Remark: the problem is similar to that of the optimal merge trees (Huffman Coding) but more difficult because now:
- Keys can exist in internal nodes
- The binary search tree condition (Left < Parent < Right) is imposed
(a) Example
A Binary Search Tree of 5 words:
Word:  a    am   and  if   two
Freq:  22   18   20   30   10   (total 100)
A Greedy Algorithm: insert words in the tree in order of decreasing frequency of search.
A Greedy BST
Insert (if), (a), (and), (am), (two).
(Resulting tree: if (30) at level 1; a (22) and two (10) at level 2; and (20) at level 3 under a; am (18) at level 4 under and.)
Total Search Cost (100 searches) = 1*30 + 2*22 + 2*10 + 3*20 + 4*18 = 226
Another way to Compute Cost
For a tree containing n keys (K1, K2, …, Kn) with root Ki, a left subtree holding K1..Ki-1 and a right subtree holding Ki+1..Kn, the total cost C(tree) is:
Total Cost = C(tree) = (Σ P over all keys) + C(left subtree) + C(right subtree)
For the previous example:
C(tree) = 100 + {60 + [0] + [38 + (18) + (0)]} + {10} = 226
An Optimal BST
A Dynamic Programming Algorithm leads to the following BST:
(and (20) at the root; a (22) and if (30) at level 2; am (18) under a and two (10) under if at level 3.)
Cost = 1*20 + 2*22 + 2*30 + 3*18 + 3*10 = 208
     = 100 + {40 + [0] + [18]} + {40 + [0] + [10]} = 208
(b) Dynamic Programming Method
For n keys, we perform n-1 iterations, 1 ≤ j ≤ n-1.
In each iteration (j), we compute the best way to build a sub-tree containing j+1 keys (Ki, Ki+1, …, Ki+j) for all possible BST combinations of such j+1 keys (i.e. for 1 ≤ i ≤ n-j).
Each sub-tree is tried with one of the keys as the root and a minimum cost sub-tree is stored.
For a given iteration (j), we use previously stored values to determine the current best sub-tree.
Simple Example
For 3 keys (A < B < C), we perform 2 iterations: j = 1, j = 2.
For j = 1, we build sub-trees using 2 keys. These come from i = 1, (A-B), and i = 2, (B-C).
For each of these combinations, we compute the least cost sub-tree, i.e., the least cost of the two sub-trees (A*, B) and (A, B*), and the least cost of the two sub-trees (B*, C) and (B, C*), where (*) denotes the parent.
Simple Example (Cont.)
For j = 2, we build trees using 3 keys. These come from i = 1, (A-C).
For this combination, we compute the least cost tree of the trees
(A*,(B-C)) , (A , B* , C) , ((A-B) , C*).
This is done using previously computed least cost sub-trees.
Simple Example (continued)
(Figure: for j = 1, i = 1 (A-B), the candidate sub-trees are rooted at k = 1 or k = 2; for i = 2 (B-C), at k = 2 or k = 3; the minimum of each pair is stored. For j = 2, i = 1 (A-C), the candidate trees are rooted at k = 1, k = 2 or k = 3, reusing the stored minima for the sub-ranges B-C and A-B.)
(c) Example Revisited: Optimal BST for 5 keys
Iterations 1..2

I:                1      2        3        4        5
Key (j = 0):      a      am       and      if       two
p:                22     18       20       30       10

2 keys (j = 1):   a..am  am..and  and..if  if..two
sum(p):           40     38       50       40
k = I:            18     20       30       10
k = I + j:        22     18       20       30
min:              58     56       70       50

3 keys (j = 2):   a..and am..if   and..two
sum(p):           60     68       60
k = I:            56     70       50
k = I + 1:        42     48       30
k = I + j:        58     56       70
min:              102    116      90

(Each "k = …" row gives the cost of the sub-trees remaining when key k is taken as the root; min = sum(p) + the smallest of these.)
Optimal BST for 5 keys
Iterations 3..n-1

4 keys (j = 3):   a..if   am..two
sum(p):           90      78
k = I:            116     90
k = I + 1:        92      68
k = I + 2:        88      66
k = I + j:        102     116
min:              178     144

5 keys (j = 4):   a..two
sum(p):           100
k = I:            144
k = I + 1:        112
k = I + 2:        108
k = I + 3:        112
k = I + j:        178
min:              208
Construction of The Tree
Min BST at j = 4, i = 1, k = 3: cost = 100 + 58 + 50 = 208
(a..am) min at j = 1, i = 1, k = 1: cost = 58
(if..two) min at j = 1, i = 4, k = 1: cost = 50
Final min BST: and at the root, with the min (a..am) sub-tree (a with right child am) on the left and the min (if..two) sub-tree (if with right child two) on the right.
Cost = 1*20 + 2*22 + 2*30 + 3*18 + 3*10 = 208
(d) Complexity Analysis
Skeletal Algorithm:
for j = 1 to n-1 do {        // sub-tree has j+1 nodes
  for i = 1 to n-j do {      // for each of the n-j sub-tree combinations
    for k = i to i+j do { find the cost of each of the j+1 configurations and determine the minimum cost }
  }
}
T(n) = Σ (j = 1 to n-1) (j + 1)(n - j) = O(n³)
S(n) = O(n²)
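A compilable sketch of the cost computation implied by this skeleton (optimalBSTCost, the prefix-sum array and the 1-indexed frequency vector are my own choices, not from the slides; it returns only the minimum total cost, not the tree itself):

#include <algorithm>
#include <climits>
#include <vector>

// cost[i][j] = minimum total search cost of a BST holding keys i..j (1-indexed);
// ranges are processed by increasing length, and each key k is tried as the root.
int optimalBSTCost(const std::vector<int>& p)   // p[1..n] = frequencies, p[0] unused
{
    int n = (int)p.size() - 1;
    std::vector<int> prefix(n + 1, 0);
    for (int i = 1; i <= n; ++i) prefix[i] = prefix[i - 1] + p[i];
    // cost[i][j]; empty ranges (j < i) stay 0
    std::vector<std::vector<int>> cost(n + 2, std::vector<int>(n + 1, 0));
    for (int len = 1; len <= n; ++len)
        for (int i = 1; i + len - 1 <= n; ++i) {
            int j = i + len - 1, best = INT_MAX;
            for (int k = i; k <= j; ++k)                        // key k as the root
                best = std::min(best, cost[i][k - 1] + cost[k + 1][j]);
            cost[i][j] = best + prefix[j] - prefix[i - 1];      // add sum of p over i..j
        }
    return cost[1][n];
}
// e.g. p = {0, 22, 18, 20, 30, 10} (a, am, and, if, two) gives 208, as in the slides.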
Exercise
Find the optimal binary search tree for the following words with the associated frequencies:
a (18), and (22), I (19), it (20), or (21)
Answer: Min cost = 20 + 2*43 + 3*37 = 217
(Tree: it (20) at the root; and (22) and or (21) at level 2; a (18) and I (19) at level 3 under and.)
9. Dynamic Programming Algorithms for Graph Problems
Various optimization graph problems have been solved using Dynamic Programming algorithms. Examples are:
- Dijkstra's algorithm solves the single-source shortest path problem for a graph with nonnegative edge path costs.
- The Floyd–Warshall algorithm finds all-pairs shortest paths in a weighted graph (with positive or negative edge weights) and also finds the transitive closure.
- The Bellman–Ford algorithm computes single-source shortest paths in a weighted digraph for graphs with negative edge weights.
These will be discussed later under “Graph Algorithms”.
10. Comparison with Greedy and Divide & Conquer Methods
Greedy vs. DP:
- Both are optimization techniques, building solutions from a collection of choices of individual elements.
- The greedy method computes its solution by making its choices in a serial forward fashion, never looking back or revising previous choices.
- DP computes its solution bottom up, by synthesizing it from smaller subsolutions and by trying many possibilities and choices before it arrives at the optimal set of choices.
- There is no a priori test by which one can tell if the Greedy method will lead to an optimal solution.
- By contrast, there is a test for DP, called the Principle of Optimality.
Comparison with Greedy and Divide & Conquer Methods
D&Q vs. DP:
- Both techniques split their input into parts, find subsolutions to the parts, and synthesize larger solutions from smaller ones.
- D&Q splits its input at pre-specified deterministic points (e.g., always in the middle).
- DP splits its input at every possible split point rather than at pre-specified points. After trying all split points, it determines which split point is optimal.