Advanced Data Structures and Algorithms (COSC600)
Dr. Yanggon Kim
Chapter 2: Algorithm Analysis
Algorithm Analysis
• In general, this chapter is about running time.
• How to estimate the time required for a program.
• How to reduce the running time of a program from days or years to fractions of a second.
• The results of careless use of recursion.
• Very efficient algorithms to raise a number to a power and to compute the greatest common divisor of two numbers.
Analysis of Algorithms
-> Use the running time
• Worst-case running time
• Average-case (expected) running time

Use T(N): a simplifying abstraction.
ex) T1(N) = 2N^2 + 5N + 100 for algorithm 1, T2(N) = 3N^2 + 5N + 100 for algorithm 2 ("keep the highest-order term")
Asymptotic Notation
Convenient for describing the worst-case running time T(N) in terms of a function f(N).

Function
① Big-Oh - O(f(N))
② Big-Omega - Ω(g(N))
③ Big-Theta - Θ(h(N))
④ Little-oh - o(p(N))
Function(Continued)
① Big-Oh - O(f(N)) (asymptotic upper bound)
T(N) = O(f(N)) if there are positive constants C and n0 such that T(N) ≤ C·f(N) when N ≥ n0.
[Figure: plot of T(N) and C·f(N) versus N; C·f(N) lies above T(N) for all N ≥ n0]
Function(Continued)
ex) T(N) = 2N^3 + 100N^2 - 5N + 500 = O(N^3); with f(N) = N^3, suitable C and n0 exist, and looser bounds such as O(N^4) and O(N^5) also hold.
ex) T(N) = 2 log N + 500,000 = O(log N) = O(N) = O(N^2)
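As a small worked instance of the definition (the function here is chosen for illustration, not taken from the slides): for T(N) = 3N + 10 with f(N) = N, the constants C = 4 and n0 = 10 work.

% Choosing witnesses for the Big-Oh definition with T(N) = 3N + 10 and f(N) = N:
% 3N + 10 <= 4N exactly when N >= 10, so C = 4 and n_0 = 10 suffice.
T(N) = 3N + 10 \le 4N = C \cdot f(N) \quad \text{for all } N \ge n_0 = 10 \;\Rightarrow\; T(N) = O(N)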
Function(Continued)
② Big-Omega - Ω(g(N)) (asymptotic lower bound)
T(N) = Ω(g(N)) if there are positive constants C and n0 such that T(N) ≥ C·g(N) when N ≥ n0.
[Figure: plot of T(N) and C·g(N) versus N; T(N) lies above C·g(N) for all N ≥ n0]
Function(Continued)
ex) T(N) = 5N log N + 500N = Ω(1) = Ω(N) = Ω(log N); it is also O(N^2) and O(N log N).
Function(Continued)
③ Big-Theta - Θ(h(N)) (asymptotic tight bound)
T(N) = Θ(h(N)) if T(N) = O(h(N)) and T(N) = Ω(h(N))
ex) T(N) = 3N^2 + 1000N - 500 = O(N^2) = O(N^3); = Ω(N) = Ω(log N) = Ω(N^2); = Θ(N^2)
Function(Continued)
④ Little-oh - o(p(N))
T(N) = o(p(N)) if T(N) = O(p(N)) and T(N) ≠ ϴ(p(N))
Rule 1
If T1(N) = O(f(N)) and T2(N) = O(g(N)):
 i) T1(N) + T2(N) = O(f(N) + g(N)) = O(max(f(N), g(N)))
 ii) T1(N) * T2(N) = O(f(N) * g(N))
 iii) T1(N) - T2(N) ≠ O(min(f(N), g(N)))
 iv) T1(N) / T2(N) ≠ O(f(N) / g(N))
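A quick illustration of i) and ii), with example bounds chosen here rather than taken from the slides: suppose T1(N) = O(N^2) and T2(N) = O(N log N).

% Rule 1 applied to the assumed bounds T1(N) = O(N^2) and T2(N) = O(N log N):
T_1(N) + T_2(N) = O(N^2 + N\log N) = O(\max(N^2, N\log N)) = O(N^2)
T_1(N) \cdot T_2(N) = O(N^2 \cdot N\log N) = O(N^3 \log N)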
Function(Continued)
Rule 2
If T(N) is a polynomial of degree k, then T(N) = Θ(N^k).
ex) T(N) = 5N^3 + 100N^2 = Θ(N^3)
Rule 3
log^k N = O(N) for any constant k
• Logarithms grow very slowly
• Ignore lower-order terms
Function(Continued)
ex) T(N) = N^3 + 500N + log N
≠ O(N^2), = O(N^3), = O(a^N) when a > 1
Function(Continued)
L'Hôpital's Rule: compute lim(N→∞) f(N)/g(N)
 = 0             -> f(N) = O(g(N))
 = C (constant)  -> f(N) = Θ(g(N))
 = ∞             -> g(N) = O(f(N))
 = oscillates    -> no relationship
ex) N^2/2 + O(N) = O(N^2)
    sqrt(N) = O(N)
    N^k = Ω(log N) when k is a positive constant
    N^2 - 100N = Θ(N^2)
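As a small worked case of the limit rule (the functions are illustrative, not from the slides): take f(N) = log N and g(N) = N^k with k > 0.

% The limit is 0, so log N = O(N^k), equivalently N^k = Omega(log N).
\lim_{N\to\infty} \frac{\log N}{N^k}
  = \lim_{N\to\infty} \frac{1/N}{k N^{k-1}}
  = \lim_{N\to\infty} \frac{1}{k N^k} = 0
  \;\Rightarrow\; \log N = O(N^k)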
Function(Continued)
ex)
statement0;                          // T1(N) = 2
for(i = 0; i < n; i++) {
    statement1;
    statement2;                      // T2(N) = 3n
    statement3;
}
for(j = 0; j < n*n; j++) {
    statement1;                      // T3(N) = 2n^2
    statement2;
}
T(N) = 2 + 3n + 2n^2 = O(n^2) = Θ(n^2)
Function(Continued)
ex)
for(i = 0; i < n; i++) {
    for(j = 0; j < i; j++) {
        statement1;
        statement2;
        .....
        statement10;
    }
}
T(N) = (1 + 2 + 3 + ..... + n) · 10 = 10 · n(n+1)/2 = O(n^2) = Θ(n^2)
Function(Continued)
ex) Binary Search
Let a1, a2, ....., an be sorted.
T(N) = T(N/2) + C
     = T(N/4) + C + C
     = T(N/8) + C + C + C
     .....
     = T(1) + C + C + ..... + C    -> log n terms
     = C · log n
     = Θ(log n)
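A minimal Java sketch of the binary search this recurrence describes (shown here in iterative form; the method and parameter names are illustrative, not from the slides):

// Iterative binary search over a sorted array; each step halves the
// remaining range, which is exactly the T(N) = T(N/2) + C recurrence above.
public static int binarySearch( int[] a, int key ) {
    int low = 0, high = a.length - 1;
    while( low <= high ) {
        int mid = low + ( high - low ) / 2;   // avoids overflow of (low + high)
        if( a[ mid ] < key )
            low = mid + 1;                    // discard the left half
        else if( a[ mid ] > key )
            high = mid - 1;                   // discard the right half
        else
            return mid;                       // found after Θ(log n) comparisons at most
    }
    return -1;                                // not found
}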
Function(Continued)
Divide and conquer method
General form of T(N):
T(N) = a · T(N/b) + Θ(N^k)

Master Theorem
T(N) = O(N^(log_b a))    if a > b^k
     = O(N^k · log N)    if a = b^k
     = O(N^k)            if a < b^k
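A quick check of the three cases on example recurrences (chosen here for illustration; only the first, merge sort's, appears on the next slide):

% a = number of subproblems, b = shrink factor, k = exponent of the combine cost
T(N) = 2T(N/2) + \Theta(N):\;   a = b^k \;(2 = 2^1) \Rightarrow T(N) = O(N \log N)
T(N) = 4T(N/2) + \Theta(N):\;   a > b^k \;(4 > 2^1) \Rightarrow T(N) = O(N^{\log_2 4}) = O(N^2)
T(N) = 2T(N/2) + \Theta(N^2):\; a < b^k \;(2 < 2^2) \Rightarrow T(N) = O(N^2)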
Function(Continued)
ex) Merge sort
[Figure: an array of N elements a1, ....., an is split into two halves; merging the sorted halves costs Θ(N)]
Function(Continued)
T(n) = 2·T(n/2) + Θ(n)
     = 2·(2·T(n/4) + Θ(n/2)) + Θ(n) = 4·T(n/4) + Θ(n) + Θ(n)
     = 8·T(n/8) + Θ(n) + Θ(n) + Θ(n)
     .....
     = 2^(log n)·T(1) + log n · Θ(n)
     ≈ O(n log n)
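A compact Java sketch of the merge sort this recurrence models (a minimal version using one auxiliary array; the helper names are illustrative):

// Merge sort: split in half, sort each half recursively, merge in Θ(n) time,
// giving T(n) = 2T(n/2) + Θ(n) = O(n log n).
public static void mergeSort( int[] a ) {
    mergeSort( a, new int[ a.length ], 0, a.length - 1 );
}

private static void mergeSort( int[] a, int[] tmp, int left, int right ) {
    if( left >= right )
        return;                                  // 0 or 1 element: already sorted
    int center = ( left + right ) / 2;
    mergeSort( a, tmp, left, center );           // sort the left half
    mergeSort( a, tmp, center + 1, right );      // sort the right half
    merge( a, tmp, left, center, right );        // Θ(n) merge of the two halves
}

private static void merge( int[] a, int[] tmp, int left, int center, int right ) {
    int i = left, j = center + 1, k = left;
    while( i <= center && j <= right )
        tmp[ k++ ] = ( a[ i ] <= a[ j ] ) ? a[ i++ ] : a[ j++ ];
    while( i <= center )
        tmp[ k++ ] = a[ i++ ];
    while( j <= right )
        tmp[ k++ ] = a[ j++ ];
    for( k = left; k <= right; k++ )
        a[ k ] = tmp[ k ];                       // copy the merged run back
}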
Function(Continued)
Typical growth rates of f(N), from slower to faster:
C (constant), log N, log^2 N, N, N·log N, N^2, N^3, 2^N, A^N where A > 1
Running Time Calculations
• There are several ways to estimate the running time of a program.
• To simplify the analysis, we compute a Big-Oh running time for the program.
• Some examples and general rules for calculating running times follow.
• For loops: the running time is the running time of the statements inside the loop times the number of iterations.
• Nested loops: the total running time of a statement inside a group of nested loops is the running time of the statement multiplied by the product of the sizes of all the loops.
for( i = 0; i < n; i++ )
    for( j = 0; j < n; j++ )
        k++;
The above program fragment is O(N^2).
• Consecutive statements: for example, the following program fragment, which has an O(N) loop followed by an O(N^2) loop, is also O(N^2).
for( i = 0; i < n; i++ ) {
    a[ i ] = 0;
}
for( i = 0; i < n; i++ ) {
    for( j = 0; j < n; j++ )
        a[ i ] += a[ j ] + i + j;
}
• When recursion is really just a thinly veiled loop, as in this factorial method, the running time is O(N):
public static long factorial( int n ) {
    if( n <= 1 )
        return 1;
    else
        return n * factorial( n - 1 );
}
• Using recursion:
public static long fib( int n ) {
    if( n <= 1 )
        return 1;
    else
        return fib( n - 1 ) + fib( n - 2 );
}
We have the formula for the running time of fib(n): T(N) = T(N - 1) + T(N - 2) + 2; since T(N) ≥ fib(N), the running time grows exponentially.
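For contrast (this version is not on the slides), a simple iterative method computes each Fibonacci number once and runs in O(N):

// Iterative Fibonacci: each value is computed exactly once, so the running time
// is O(N) instead of the exponential time of the naive recursive version above.
public static long fibIter( int n ) {
    if( n <= 1 )
        return 1;
    long prev = 1, curr = 1;            // fib(0) and fib(1)
    for( int i = 2; i <= n; i++ ) {
        long next = prev + curr;        // fib(i) = fib(i-1) + fib(i-2)
        prev = curr;
        curr = next;
    }
    return curr;
}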
Maximum Subsequence Sum Problem
• The maximum subsequence sum problem can be solved in four ways:
a) Cubic maximum contiguous subsequence sum algorithm,
b) Quadratic maximum contiguous subsequence sum algorithm,
c) Recursive maximum contiguous subsequence sum algorithm,
d) Linear-time maximum contiguous subsequence sum algorithm.
Algorithm 1 for Max. Subseq. Sum Pbm
Algorithm 2 for Max. Subseq. Sum Pbm
Algorithm 3 for Max. Subseq. Sum Pbm
Algorithm 4 for Max. Subseq. Sum Pbm
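The original slides show the code for each of the four versions; since only the titles survive here, the following is a hedged sketch of the linear-time approach (Algorithm 4), with illustrative names:

// Linear-time maximum contiguous subsequence sum (one pass, O(N)):
// a running sum that drops below zero can never help a later subsequence,
// so it is reset to zero and the scan continues.
public static int maxSubSum( int[] a ) {
    int maxSum = 0, thisSum = 0;
    for( int j = 0; j < a.length; j++ ) {
        thisSum += a[ j ];
        if( thisSum > maxSum )
            maxSum = thisSum;      // best sum seen so far
        else if( thisSum < 0 )
            thisSum = 0;           // a negative prefix is never worth keeping
    }
    return maxSum;
}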
Binary Search
• Binary search is an algorithm used to find a given value among 'n' data items.
• In an unsorted array, to find the value (for example, the maximum) we need to scan the array once => O(n).
• Now, if the array is sorted, we just compare the value to be found with the center element of the array. Each such comparison eliminates half of the remaining array => O(log n).
• If we have a two-dimensional n x n array, scanning every row and column takes n*n steps => O(n^2).
• If the two-dimensional array is sorted, the search can still be reduced to O(n) by scanning in a zig-zag pattern, as sketched below.
• In the worst case we end up examining (2n - 1) elements => O(n).
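A hedged Java sketch of that zig-zag (staircase) scan, assuming the matrix is sorted in ascending order along every row and every column (the method name is illustrative):

// Zig-zag search in a row- and column-sorted n x n matrix.
// Start at the top-right corner; each comparison discards a row or a column,
// so at most 2n - 1 elements are examined => O(n).
public static boolean zigZagSearch( int[][] m, int key ) {
    int n = m.length;
    int row = 0, col = n - 1;                 // top-right corner
    while( row < n && col >= 0 ) {
        if( m[ row ][ col ] == key )
            return true;
        else if( m[ row ][ col ] > key )
            col--;                            // everything below in this column is larger
        else
            row++;                            // everything to the left in this row is smaller
    }
    return false;
}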