Introduction to Analysis of Algorithms
Introduction
• What is an algorithm?
  – A clearly specified set of simple instructions to be followed to solve a problem
    • Takes a set of values as input and
    • produces a value, or set of values, as output
  – May be specified:
    • in English
    • as a computer program
    • as pseudo-code
• Data structures
  – Methods of organizing data
• Program = algorithms + data structures
What Are Algorithms?
An algorithm:
• is any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output;
• is a tool for solving a well-specified computational problem;
• is written in pseudocode, which can be implemented in the language of the programmer's choice.
Example: sorting numbers
• Input: a sequence of n numbers <a1, a2, ..., an>
• Output: a reordering <a'1, a'2, ..., a'n> of the input such that a'1 ≤ a'2 ≤ ... ≤ a'n
• Input instance: <5, 2, 4, 1, 6, 3>
• Output: <1, 2, 3, 4, 5, 6>
• An instance of a problem consists of the input (satisfying whatever constraints are imposed in the problem statement) needed to compute a solution to the problem.
Example: sorting numbers (cont.)
• Sorting is a fundamental operation.
• Many algorithms exist for this purpose.
• The best algorithm to use depends on:
  – the number of items to be sorted,
  – possible restrictions on the item values,
  – the kind of storage device to be used: main memory, disks, or tapes.
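The sorting problem above can be solved in many ways; as a minimal sketch, here is insertion sort in Java (one of the quadratic-time algorithms discussed later), run on the slide's input instance:

```java
// Insertion sort: one simple solution to the sorting problem above.
// Grows a sorted prefix; each new element is shifted left to its place.
public class InsertionSortDemo {
    static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i];          // next value to place
            int j = i - 1;
            while (j >= 0 && a[j] > key) {
                a[j + 1] = a[j];     // shift larger elements one slot right
                j--;
            }
            a[j + 1] = key;
        }
    }

    public static void main(String[] args) {
        int[] instance = {5, 2, 4, 1, 6, 3};   // the input instance from the slide
        insertionSort(instance);
        System.out.println(java.util.Arrays.toString(instance)); // [1, 2, 3, 4, 5, 6]
    }
}
```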
Correct and incorrect algorithms
• An algorithm is correct if, for every input instance, it halts with the correct output. We say that a correct algorithm solves the given computational problem.
• An incorrect algorithm might not halt at all on some input instances, or it might halt with an answer other than the desired one.
• We shall be concerned only with correct algorithms.
Hard problems
• We can gauge the efficiency of an algorithm from its speed (how long the algorithm takes to produce its result).
• For some problems, no efficient solution is known.
• Among these are the NP-complete problems.
Big-O Notation
• We use a shorthand mathematical notation to describe the efficiency of an algorithm relative to a parameter n, called its "order" or big-O.
  – We can say that one algorithm is O(n).
  – We can say that another algorithm is O(n^2).
• For any algorithm whose running time is described by a function g(n) of the parameter n, we can say the algorithm is O(g(n)).
• We keep only the fastest-growing term and ignore any multiplicative or additive constants.
Seven Growth Functions
• Seven functions g(n) occur frequently in the analysis of algorithms (in order of increasing rate of growth relative to n):
  – Constant: 1
  – Logarithmic: log n
  – Linear: n
  – Log-linear: n log n
  – Quadratic: n^2
  – Cubic: n^3
  – Exponential: 2^n
Growth Rates Compared

           n=1   n=2   n=4   n=8    n=16    n=32
1          1     1     1     1      1       1
log n      0     1     2     3      4       5
n          1     2     4     8      16      32
n log n    0     2     8     24     64      160
n^2        1     4     16    64     256     1024
n^3        1     8     64    512    4096    32768
2^n        2     4     16    256    65536   4294967296
Graphical Comparison of Complexity Classes
[Figure: plot of log x, x, x log x, x^2, x^3, and 2^x for x = 1 to 12 (y-axis 0 to 4500), showing the higher-order curves pulling away as x grows.]
Complexity Analysis
• Asymptotic complexity
• Big-O (asymptotic) notation
• Big-O computation rules
• Proving big-O complexity
• How to determine the complexity of code structures
Asymptotic Complexity
• Finding the exact complexity, f(n) = the number of basic operations of an algorithm, is difficult.
• We approximate f(n) by a function g(n) in a way that does not substantially change the magnitude of f(n): the function g(n) is sufficiently close to f(n) for large values of the input size n.
• This approximate measure of efficiency is called asymptotic complexity.
• Thus the asymptotic complexity measure does not give the exact number of operations of an algorithm, but it shows how that number grows with the size of the input.
• This gives us a measure that will work for different operating systems, compilers, and CPUs.
Big-O (asymptotic) Notation
• The most commonly used notation for specifying asymptotic complexity is the big-O notation.
• The big-O notation O(g(n)) is used to give an upper bound (worst case) on a positive runtime function f(n), where n is the input size.

Definition of Big-O
• Consider a function f(n) that is non-negative for all n ≥ 0. We say that "f(n) is big-O of g(n)", i.e. f(n) = O(g(n)), if there exist an n0 ≥ 0 and a constant c > 0 such that f(n) ≤ c·g(n) for all n ≥ n0.
Big-O (asymptotic) Notation
Implication of the definition:
• For all sufficiently large n, c·g(n) is an upper bound of f(n).
• Note: by the definition of big-O, f(n) = 3n + 4 is O(n); it is also O(n^2), O(n^3), ..., O(n^n).
• However, when big-O notation is used, the function g in the relationship "f(n) is O(g(n))" is chosen to be as small as possible.
  – We call such a function g a tight asymptotic bound of f(n).
Big-O (asymptotic) Notation
Some big-O complexity classes, in order of magnitude from smallest to largest:
  O(1)              Constant
  O(log n)          Logarithmic
  O(n)              Linear
  O(n log n)        n log n
  O(n^x), e.g. O(n^2), O(n^3), etc.    Polynomial
  O(a^n), e.g. O(1.6^n), O(2^n), etc.  Exponential
  O(n!)             Factorial
  O(n^n)
Examples of Algorithms and their big-O complexity

Big-O Notation    Examples of Algorithms
O(1)              Push, Pop, Enqueue (if there is a tail reference), Dequeue, accessing an array element
O(log n)          Binary search
O(n)              Linear search
O(n log n)        Heap sort, quick sort (average case), merge sort
O(n^2)            Selection sort, insertion sort, bubble sort
O(n^3)            Matrix multiplication
O(2^n)            Towers of Hanoi
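As a sketch of the O(log n) entry in the table, here is a standard iterative binary search in Java: each probe halves the remaining range, so a sorted array of n elements needs at most about log2(n) + 1 probes.

```java
// Binary search on a sorted array: returns the index of target, or -1.
public class BinarySearchDemo {
    static int binarySearch(int[] sorted, int target) {
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;   // avoids overflow of (lo + hi)
            if (sorted[mid] == target)      return mid;
            else if (sorted[mid] < target)  lo = mid + 1;   // discard left half
            else                            hi = mid - 1;   // discard right half
        }
        return -1;                          // not found
    }

    public static void main(String[] args) {
        int[] sorted = {1, 2, 3, 4, 5, 6};
        System.out.println(binarySearch(sorted, 4));   // 3
        System.out.println(binarySearch(sorted, 7));   // -1
    }
}
```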
Warnings about O-Notation
• Big-O notation cannot compare algorithms in the same complexity class.
• Big-O notation only gives sensible comparisons of algorithms in different complexity classes when n is large.
• Consider two algorithms for the same task:
  Linear: f(n) = 1000n
  Quadratic: f(n) = n^2/1000
  The quadratic one is faster for n < 1,000,000.
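The crossover claim follows from one line of arithmetic: 1000n = n^2/1000 exactly at n = 1,000,000. A quick Java check on either side of that point (illustrative only):

```java
// Compares the two runtime functions from the slide just below and just
// above the crossover point n = 1,000,000.
public class Crossover {
    public static void main(String[] args) {
        long n = 999_999;
        System.out.println(n * n / 1000.0 < 1000.0 * n);   // true: quadratic is faster
        n = 1_000_001;
        System.out.println(n * n / 1000.0 > 1000.0 * n);   // true: linear is faster
    }
}
```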
Rules for using big-O
• For large values of the input n, the constants and the terms with lower degree of n are ignored.

1. Multiplicative constants rule: ignoring constant factors.
   O(c·f(n)) = O(f(n)), where c is a constant.
   Example: O(20 n^3) = O(n^3)

2. Addition rule: ignoring smaller terms.
   If O(f(n)) < O(h(n)), then O(f(n) + h(n)) = O(h(n)).
   Examples:
   O(n^2 log n + n^3) = O(n^3)
   O(2000 n^3 + 2^n + n^800 + 10n + 27n log n + 5) = O(2^n)

3. Multiplication rule: O(f(n) · h(n)) = O(f(n)) · O(h(n))
   Example: O((n^3 + 2n^2 + 3n log n + 7)(8n^2 + 5n + 2)) = O(n^5)
Proving Big-O Complexity
To prove that f(n) is O(g(n)), we find any pair of values n0 and c that satisfy f(n) ≤ c·g(n) for all n ≥ n0.
Note: the pair (n0, c) is not unique. If such a pair exists, then there are infinitely many such pairs.

Example: prove that f(n) = 3n^2 + 5 is O(n^2).
We try to find some values of n and c by solving the following inequality:
  3n^2 + 5 ≤ c·n^2, or equivalently 3 + 5/n^2 ≤ c
(By substituting different values for n, we get corresponding values for c.)

  n0:  1    2      3      4       ...
  c:   8    4.25   3.55   3.3125  → 3
Proving Big-O Complexity
Example: prove that f(n) = 3n^2 + 4n log n + 10 is O(n^2) by finding appropriate values for c and n0.
We try to find some values of n and c by solving the following inequality:
  3n^2 + 4n log n + 10 ≤ c·n^2, or equivalently 3 + 4 log n / n + 10/n^2 ≤ c
(We used log base 2, but another base can be used as well.)

  n0:  1    2     3      4      ...
  c:   13   7.5   6.22   5.62   → 3
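The (c, n0) pairs from the two examples can be spot-checked numerically. The sketch below (with hypothetical helper names) verifies c = 8, n0 = 1 for 3n^2 + 5 and c = 13, n0 = 1 for 3n^2 + 4n log n + 10 over a range of n; a spot check, not a proof:

```java
// Spot-checks the inequalities f(n) <= c * n^2 for the two examples above.
public class BigOCheck {
    static boolean bounded1(int n) {               // 3n^2 + 5 <= 8n^2 ?
        return 3.0 * n * n + 5 <= 8.0 * n * n;
    }
    static boolean bounded2(int n) {               // 3n^2 + 4n lg n + 10 <= 13n^2 ?
        double lg = Math.log(n) / Math.log(2);     // log base 2, as on the slide
        return 3.0 * n * n + 4.0 * n * lg + 10 <= 13.0 * n * n;
    }
    public static void main(String[] args) {
        boolean ok = true;
        for (int n = 1; n <= 1000; n++) ok = ok && bounded1(n) && bounded2(n);
        System.out.println(ok);                    // true
    }
}
```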
How to determine complexity of code structures

Loops (for, while, and do-while):
Complexity is determined by the number of iterations in the loop times the complexity of the body of the loop.

Examples:

  for (int i = 0; i < n; i++)
      sum = sum - i;              // O(n)

  for (int i = 0; i < n * n; i++)
      sum = sum + i;              // O(n^2)

  i = 1;
  while (i < n) {
      sum = sum + i;
      i = i * 2;                  // O(log n)
  }
How to determine complexity of code structures

Nested loops: complexity of the inner loop × complexity of the outer loop.

Examples:

  sum = 0;
  for (int i = 0; i < n; i++)
      for (int j = 0; j < n; j++)
          sum += i * j;           // O(n^2)

  i = 1;
  while (i <= n) {
      j = 1;
      while (j <= n) {
          // statements of constant complexity
          j = j * 2;
      }
      i = i + 1;
  }                               // O(n log n)
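One way to convince yourself of these two bounds is to count iterations directly. The sketch below instruments both nested-loop shapes: for n = 16, the doubly nested for loop runs its body n^2 = 256 times, and the doubling inner while runs n·(floor(log2 n) + 1) = 80 times.

```java
// Counts how many times the innermost body executes for each loop shape.
public class LoopCount {
    static long nestedFor(int n) {
        long count = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                count++;                    // constant-time body
        return count;                       // n * n
    }

    static long nestedWhileDoubling(int n) {
        long count = 0;
        int i = 1;
        while (i <= n) {
            int j = 1;
            while (j <= n) {                // j = 1, 2, 4, ..., <= n
                count++;
                j = j * 2;
            }
            i = i + 1;
        }
        return count;                       // n * (floor(log2 n) + 1)
    }

    public static void main(String[] args) {
        System.out.println(nestedFor(16));            // 256
        System.out.println(nestedWhileDoubling(16));  // 80
    }
}
```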
How to determine complexity of code structures

Sequence of statements: use the addition rule.
O(s1; s2; s3; ...; sk) = O(s1) + O(s2) + O(s3) + ... + O(sk) = O(max(s1, s2, s3, ..., sk))

Example:

  for (int j = 0; j < n * n; j++)
      sum = sum + j;                          // O(n^2)
  for (int k = 0; k < n; k++)
      sum = sum - l;                          // O(n)
  System.out.print("sum is now " + sum);      // O(1)

Complexity is O(n^2) + O(n) + O(1) = O(n^2).
How to determine complexity of code structures

Switch: take the complexity of the most expensive case.

  char key;
  int[] X = new int[5];
  int[][] Y = new int[10][10];
  ...
  switch (key) {
      case 'a':
          for (int i = 0; i < X.length; i++)
              sum += X[i];                    // O(n)
          break;
      case 'b':
          for (int i = 0; i < Y.length; i++)
              for (int j = 0; j < Y[0].length; j++)
                  sum += Y[i][j];             // O(n^2)
          break;
  } // end of switch block

Overall complexity: O(n^2)
How to determine complexity of code structures

If statement: take the complexity of the most expensive case.

  char key;
  int[][] A = new int[5][5];
  int[][] B = new int[5][5];
  int[][] C = new int[5][5];
  ...
  if (key == '+') {
      for (int i = 0; i < n; i++)
          for (int j = 0; j < n; j++)
              C[i][j] = A[i][j] + B[i][j];    // O(n^2)
  } // end of if block
  else if (key == 'x')
      C = matrixMult(A, B);                   // O(n^3)
  else
      System.out.println("Error! Enter '+' or 'x'!");   // O(1)

Overall complexity: O(n^3)
How to determine complexity of code structures

• Sometimes if-else statements must be checked carefully:
  O(if-else) = O(condition) + max(O(if branch), O(else branch))

  int[] integers = new int[10];
  ...
  if (hasPrimes(integers) == true)    // condition: O(n)
      integers[0] = 20;               // O(1)
  else
      integers[0] = -20;              // O(1)

  public boolean hasPrimes(int[] arr) {
      for (int i = 0; i < arr.length; i++) {
          ...
      }
  } // end of hasPrimes()

O(if-else) = O(condition) = O(n)
How to determine complexity of code structures

• Note: sometimes a loop may make the if-else rule inapplicable. Consider the following loop:

  while (n > 0) {
      if (n % 2 == 0) {
          System.out.println(n);
          n = n / 2;
      } else {
          System.out.println(n);
          System.out.println(n);
          n = n - 1;
      }
  }

The else branch has more basic operations, so one might conclude that the loop is O(n). However, the if branch dominates: for example, if n is 60, then the sequence of values of n is 60, 30, 15, 14, 7, 6, 3, 2, 1, 0. Hence the loop is logarithmic and its complexity is O(log n).
Comp 122
Asymptotic Complexity
• Running time of an algorithm as a function of input size n, for large n.
• Expressed using only the highest-order term in the expression for the exact running time.
  – Instead of exact running time, say Θ(n^2).
• Describes the behavior of the function in the limit.
• Written using asymptotic notation.
Asymptotic Notation
• Θ, O, Ω
• Defined for functions over the natural numbers.
  – Ex: f(n) = Θ(n^2)
  – Describes how f(n) grows in comparison to n^2.
• Each defines a set of functions; in practice used to compare two function sizes.
• The notations describe different rate-of-growth relations between the defining function and the defined set of functions.
Θ-notation
For a function g(n), we define Θ(g(n)), big-Theta of g of n, as the set:
  Θ(g(n)) = { f(n) : there exist positive constants c1, c2, and n0 such that for all n ≥ n0 we have 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) }
g(n) is an asymptotically tight bound for f(n).
Intuitively: the set of all functions that have the same rate of growth as g(n).
O-notation
For a function g(n), we define O(g(n)), big-O of g of n, as the set:
  O(g(n)) = { f(n) : there exist positive constants c and n0 such that for all n ≥ n0 we have 0 ≤ f(n) ≤ c·g(n) }
g(n) is an asymptotic upper bound for f(n).
Intuitively: the set of all functions whose rate of growth is the same as or lower than that of g(n).
f(n) = Θ(g(n)) implies f(n) = O(g(n)); that is, Θ(g(n)) ⊆ O(g(n)).
Ω-notation
For a function g(n), we define Ω(g(n)), big-Omega of g of n, as the set:
  Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that for all n ≥ n0 we have 0 ≤ c·g(n) ≤ f(n) }
g(n) is an asymptotic lower bound for f(n).
Intuitively: the set of all functions whose rate of growth is the same as or higher than that of g(n).
f(n) = Θ(g(n)) implies f(n) = Ω(g(n)); that is, Θ(g(n)) ⊆ Ω(g(n)).
Relations Between Θ, O, and Ω
• Theorem: for any two functions g(n) and f(n), f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)).
• I.e., Θ(g(n)) = O(g(n)) ∩ Ω(g(n)).
• In practice, asymptotically tight bounds are obtained from asymptotic upper and lower bounds.
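A worked instance of the theorem, reusing f(n) = 3n^2 + 5 from the proof slides (the constants shown are one valid choice, not the only one):

```latex
% Upper bound: for n >= 1 we have 5 <= 5n^2, hence
3n^2 + 5 \le 3n^2 + 5n^2 = 8n^2
  \;\Rightarrow\; f(n) = O(n^2) \text{ with } c = 8,\ n_0 = 1.
% Lower bound: for n >= 1,
3n^2 + 5 \ge 3n^2
  \;\Rightarrow\; f(n) = \Omega(n^2) \text{ with } c = 3,\ n_0 = 1.
% By the theorem, the two bounds together give the tight bound:
3n^2 \le 3n^2 + 5 \le 8n^2 \text{ for all } n \ge 1
  \;\Rightarrow\; f(n) = \Theta(n^2) \text{ with } c_1 = 3,\ c_2 = 8,\ n_0 = 1.
```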
Logarithms
• x = log_b a is the exponent for a = b^x.
• Natural log: ln a = log_e a
• Binary log: lg a = log_2 a
• lg^2 a = (lg a)^2
• lg lg a = lg (lg a)

Useful identities (for a, b, c > 0 with b, c ≠ 1):
  a = b^(log_b a)
  log_c (ab) = log_c a + log_c b
  log_b a^n = n log_b a
  log_b a = log_c a / log_c b
  log_b (1/a) = -log_b a
  log_b a = 1 / log_a b
  a^(log_b c) = c^(log_b a)
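The identities above can be spot-checked in double precision (a tolerance is needed because floating-point logs are inexact). The values of a, b, c, and n below are arbitrary test inputs:

```java
// Numerically spot-checks several of the logarithm identities above.
public class LogIdentities {
    static double logBase(double b, double a) {    // log_b a via change of base
        return Math.log(a) / Math.log(b);
    }

    public static void main(String[] args) {
        double a = 7, b = 3, c = 5, n = 4, eps = 1e-9;
        System.out.println(Math.abs(Math.pow(b, logBase(b, a)) - a) < eps);                        // a = b^(log_b a)
        System.out.println(Math.abs(logBase(c, a * b) - (logBase(c, a) + logBase(c, b))) < eps);   // log_c(ab)
        System.out.println(Math.abs(logBase(b, Math.pow(a, n)) - n * logBase(b, a)) < eps);        // log_b a^n
        System.out.println(Math.abs(logBase(b, 1 / a) + logBase(b, a)) < eps);                     // log_b(1/a)
        System.out.println(Math.abs(logBase(b, a) - 1 / logBase(a, b)) < eps);                     // reciprocal rule
        System.out.println(Math.abs(Math.pow(a, logBase(b, c)) - Math.pow(c, logBase(b, a))) < eps); // a^(log_b c) = c^(log_b a)
    }
}
```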
Review on Summations
• Constant series: for integers a and b, with a ≤ b:
  Σ_{i=a}^{b} 1 = b - a + 1
• Linear series (arithmetic series): for n ≥ 0:
  Σ_{i=1}^{n} i = 1 + 2 + ... + n = n(n + 1)/2
• Quadratic series: for n ≥ 0:
  Σ_{i=1}^{n} i^2 = 1 + 4 + 9 + ... + n^2 = n(n + 1)(2n + 1)/6
Review on Summations
• Cubic series: for n ≥ 0:
  Σ_{i=1}^{n} i^3 = 1 + 8 + 27 + ... + n^3 = n^2 (n + 1)^2 / 4
• Geometric series: for real x ≠ 1:
  Σ_{k=0}^{n} x^k = 1 + x + x^2 + ... + x^n = (x^(n+1) - 1)/(x - 1)
• For |x| < 1:
  Σ_{k=0}^{∞} x^k = 1/(1 - x)
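A brute-force check of these closed forms for one value of n (n = 20, plus x = 0.5 for the geometric series):

```java
// Verifies the summation closed forms above against direct accumulation.
public class SummationCheck {
    public static void main(String[] args) {
        int n = 20;
        long lin = 0, quad = 0, cubic = 0;
        for (int i = 1; i <= n; i++) {
            lin += i;
            quad += (long) i * i;
            cubic += (long) i * i * i;
        }
        System.out.println(lin == n * (n + 1) / 2);                        // true
        System.out.println(quad == (long) n * (n + 1) * (2 * n + 1) / 6);  // true
        System.out.println(cubic == (long) n * n * (n + 1) * (n + 1) / 4); // true

        double x = 0.5, geo = 0;
        for (int k = 0; k <= 30; k++) geo += Math.pow(x, k);
        System.out.println(Math.abs(geo - (Math.pow(x, 31) - 1) / (x - 1)) < 1e-12);  // finite form
        System.out.println(Math.abs(1 / (1 - x) - 2.0) < 1e-12);                      // infinite form at x = 0.5
    }
}
```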
Introductionbull What is Algorithm
ndash a clearly specified set of simple instructions to be followed to solve a problem
bull Takes a set of values as input and bull produces a value or set of values as output
ndash May be specified bull In Englishbull As a computer programbull As a pseudo-code
bull Data structuresndash Methods of organizing data
bull Program = algorithms + data structures
What are the Algorithms
Algorithm
Is any well-defined computational procedure that takes some value or set of values as input and produces some value or set of values as output
It is a tool for solving a well-specified computational problem
They are written in a pseudo code which can be implemented in the language of programmerrsquos choice
Example sorting numbers
1048708 Input A sequence of n numbers a3 a1 a2an
1048708 Output A reordered sequence of the inputa1 a2 a3an such that a1lea2 lea3hellip lean
1048708 Input instance 5 2 4 1 6 31048708 Output 1 2 3 4 5 6
1048708 An instance of a problem consists of the input (satisfying whatever constraints are imposed in the problem statement) needed to compute a solution to the problem
Example sorting numbers
1048708 Sorting is a fundamental operation
1048708 Many algorithms existed for that purpose
1048708 The best algorithm to use depends on1048708 The number of items to be sorted1048708 possible restrictions on the item values1048708 kind of storage device to be used main memory disks or tapes
Correct and incorrect algorithmsbull Algorithm is correct if for every input instance it
ends with the correct output We say that a correct algorithm solves the given computational problem
bull An incorrect algorithm might not end at all on some input instances or it might end with an answer other than the desired one
bull We shall be concerned only with correct algorithms
Hard problems
1048708 We can identify the Efficiency of an algorithm from its speed (how long does the algorithm take to produce the result)
1048708 Some problems have unknown efficient solution
1048708 These problems are called NP-complete problems
8
Big-O Notationbull We use a shorthand mathematical notation to
describe the efficiency of an algorithm relative to any parameter n as its ldquoOrderrdquo or Big-Ondash We can say that the first algorithm is O(n)ndash We can say that the second algorithm is O(n2)
bull For any algorithm that has a function g(n) of the parameter n that describes its length of time to execute we can say the algorithm is O(g(n))
bull We only include the fastest growing term and ignore any multiplying by or adding of constants
9
Seven Growth Functions
bull Seven functions g(n) that occur frequently in the analysis of algorithms (in order of increasing rate of growth relative to n)ndash Constant 1ndash Logarithmic log nndash Linear nndash Log Linear n log nndash Quadratic n2
ndash Cubic n3
ndash Exponential 2n
10
Growth Rates Comparedn=1 n=2 n=4 n=8 n=16 n=32
1 1 1 1 1 1 1
logn 0 1 2 3 4 5
n 1 2 4 8 16 32
nlogn 0 2 8 24 64 160
n2 1 4 16 64 256 1024
n3 1 8 64 512 4096 32768
2n 2 4 16 256 65536 4294967296
Graphical Comparison of Complexity Classes
11
1 2 3 4 5 6 7 8 9 10 11 120
500
1000
1500
2000
2500
3000
3500
4000
4500
xlog xx log xxxxxx2^x
Complexity Analysis
bull Asymptotic Complexity
bull Big-O (asymptotic) Notation
bull Big-O Computation Rules
bull Proving Big-O Complexity
bull How to determine complexity of code structures
Asymptotic Complexity bull Finding the exact complexity f(n) = number of basic
operations of an algorithm is difficultbull We approximate f(n) by a function g(n) in a way that
does not substantially change the magnitude of f(n) --the function g(n) is sufficiently close to f(n) for large values of the input size n
bull This approximate measure of efficiency is called asymptotic complexity
bull Thus the asymptotic complexity measure does not give the exact number of operations of an algorithm but it shows how that number grows with the size of the input
bull This gives us a measure that will work for different operating systems compilers and CPUs
Big-O (asymptotic) Notationbull The most commonly used notation for specifying asymptotic
complexity is the big-O notationbull The Big-O notation O(g(n)) is used to give an upper bound (worst-
case) on a positive runtime function f(n) where n is the input size
Definition of Big-Obull Consider a function f(n) that is non-negative n 0 We say that
ldquof(n) is Big-O of g(n)rdquo ie f(n) = O(g(n)) if n0 0 and a constant c gt 0 such that f(n) cg(n) n n0
Big-O (asymptotic) NotationImplication of the definitionbull For all sufficiently large n c g(n) is an upper bound of
f(n)Note By the definition of Big-O f(n) = 3n + 4 is O(n)
it is also O(n2) it is also O(n3) it is also O(nn)
bull However when Big-O notation is used the function g in the relationship f(n) is O(g(n)) is CHOOSEN TO BE AS SMALL AS POSSIBLE ndash We call such a function g a tight asymptotic bound of f(n)
Big-O (asymptotic) NotationSome Big-O complexity classes in order of magnitude from smallest to highest
O(1) ConstantO(log(n)) LogarithmicO(n) LinearO(n log(n)) n log nO(nx) eg O(n2) O(n3) etc Polynomial
O(an) eg O(16n) O(2n) etc Exponential
O(n) FactorialO(nn)
Examples of Algorithms and their big-O complexity
Big-O Notation Examples of Algorithms
O(1) Push Pop Enqueue (if there is a tail reference) Dequeue Accessing an array element
O(log(n)) Binary search
O(n) Linear search
O(n log(n)) Heap sort Quick sort (average) Merge sort
O(n2) Selection sort Insertion sort Bubble sort
O(n3) Matrix multiplication O(2n) Towers of Hanoi
Warnings about O-Notation bull Big-O notation cannot compare
algorithms in the same complexity classbull Big-O notation only gives sensible
comparisons of algorithms in different complexity classes when n is large
bull Consider two algorithms for same task Linear f(n) = 1000 n Quadratic f(n) = n21000The quadratic one is faster for n lt 1000000
Rules for using big-O bull For large values of input n the constants and terms with lower
degree of n are ignored
1 Multiplicative Constants Rule Ignoring constant factors O(c f(n)) = O(f(n)) where c is a constant Example
O(20 n3) = O(n3)
2 Addition Rule Ignoring smaller termsIf O(f(n)) lt O(h(n)) then O(f(n) + h(n)) = O(h(n))Example
O(n2 log n + n3) = O(n3)O(2000 n3 + 2n + n800 + 10n + 27n log n + 5) = O(n )
3 Multiplication Rule O(f(n) h(n)) = O(f(n)) O(h(n))Example
O((n3 + 2n 2 + 3n log n + 7)(8n 2 + 5n + 2)) = O(n 5)
Proving Big-O Complexity
To prove that f(n) is O(g(n)) we find any pair of values n0 and c that satisfy f(n) le c g(n) for n n0
Note The pair (n0 c) is not unique If such a pair exists then there is an infinite number of such pairs
Example Prove that f(n) = 3n2 + 5 is O(n2)We try to find some values of n and c by solving the following inequality
3n2 + 5 cn2 OR 3 + 5n2 c
(By putting different values for n we get corresponding values for c)
n0 1 2 3 4
c 8 425 355 33125 3
Proving Big-O ComplexityExample
Prove that f(n) = 3n2 + 4n log n + 10 is O(n2) by finding appropriate values for c and n0
We try to find some values of n and c by solving the following inequality 3n2 + 4n log n + 10 cn2
OR 3 + 4 log n n+ 10n2 c
( We used Log of base 2 but another base can be used as well)n0 1 2 3 4
c 13 75 622 562 3
How to determine complexity of code structures Loops for while and do-while
Complexity is determined by the number of iterations in the loop times the complexity of the body of the loop
Examples
for (int i = 0 i lt n i++) sum = sum - i
for (int i = 0 i lt n n i++) sum = sum + i
i=1while (i lt n) sum = sum + i i = i2
O(n)
O(n2)
O(log n)
How to determine complexity of code structures Nested Loops Complexity of inner loop complexity of outer loopExamples
sum = 0for(int i = 0 i lt n i++) for(int j = 0 j lt n j++) sum += i j
i = 1while(i lt= n) j = 1 while(j lt= n) statements of constant complexity j = j2 i = i+1
O(n2)
O(n log n)
How to determine complexity of code structures Sequence of statements Use Addition rule
O(s1 s2 s3 hellip sk) = O(s1) + O(s2) + O(s3) + hellip + O(sk) = O(max(s1 s2 s3 sk))
Example
Complexity is O(n2) + O(n) +O(1) = O(n2)
for (int j = 0 j lt n n j++) sum = sum + jfor (int k = 0 k lt n k++) sum = sum - lSystemoutprint(sum is now rdquo + sum)
char keyint[] X = new int[5] int[][] Y = new int[10][10] switch(key) case a for(int i = 0 i lt Xlength i++) sum += X[i] break case b for(int i = 0 i lt Ylength j++) for(int j = 0 j lt Y[0]length j++) sum += Y[i][j] break End of switch block
How to determine complexity of code structures
Switch Take the complexity of the most expensive case
o(n)
o(n2)
Overall Complexity o(n2)
char keyint[][] A = new int[5][5]int[][] B = new int[5][5]int[][] C = new int[5][5]if(key == +) for(int i = 0 i lt n i++) for(int j = 0 j lt n j++) C[i][j] = A[i][j] + B[i][j] End of if block else if(key == x) C = matrixMult(A B)
else Systemoutprintln(Error Enter + or x)
If Statement Take the complexity of the most expensive case
O(n2)
O(n3)
O(1)
How to determine complexity of code structures
Overall complexityO(n3)
int[] integers = new int[10]if(hasPrimes(integers) == true) integers[0] = 20else
integers[0] = -20
public boolean hasPrimes(int[] arr) for(int i = 0 i lt arrlength i++)
End of hasPrimes()
How to determine complexity of code structures
bull Sometimes if-else statements must carefully be checkedO(if-else) = O(Condition)+ Max[O(if) O(else)]
O(1)
O(1)
O(if-else) = O(Condition) = O(n)
O(n)
How to determine complexity of code structuresbull Note Sometimes a loop may cause the if-else rule not to be
applicable Consider the following loop
The else-branch has more basic operations therefore one may conclude that the loop is O(n) However the if-branch dominates For example if n is 60 then the sequence of n is 60 30 15 14 7 6 3 2 1 and 0 Hence the loop is logarithmic and its complexity is O(log n)
while (n gt 0) if (n 2 = = 0) Systemoutprintln(n) n = n 2 else Systemoutprintln(n) Systemoutprintln(n)
n = n ndash 1
Comp 122
Asymptotic Complexity
bull Running time of an algorithm as a function of input size n for large n
bull Expressed using only the highest-order term in the expression for the exact running timendash Instead of exact running time say Q(n2)
bull Describes behavior of function in the limitbull Written using Asymptotic Notation
Asymptotic Notationbull Q O Wbull Defined for functions over the natural numbers
ndash Ex f(n) = Q(n2)ndash Describes how f(n) grows in comparison to n2
bull Define a set of functions in practice used to compare two function sizes
bull The notations describe different rate-of-growth relations between the defining function and the defined set of functions
Q-notation
Q(g(n)) = f(n) positive constants c1 c2 and n0 such that n n0
we have 0 c1g(n) f(n) c2g(n)
For function g(n) we define Q(g(n)) big-Theta of n as the set
g(n) is an asymptotically tight bound for f(n)
Intuitively Set of all functions thathave the same rate of growth as g(n)
O-notation
O(g(n)) = f(n) positive constants c and n0 such that n n0
we have 0 f(n) cg(n)
For function g(n) we define O(g(n)) big-O of n as the set
g(n) is an asymptotic upper bound for f(n)
Intuitively Set of all functions whose rate of growth is the same as or lower than that of g(n)
f(n) = Q(g(n)) f(n) = O(g(n))Q(g(n)) O(g(n))
W -notation
g(n) is an asymptotic lower bound for f(n)
Intuitively Set of all functions whose rate of growth is the same as or higher than that of g(n)
f(n) = Q(g(n)) f(n) = W(g(n))Q(g(n)) W(g(n))
W(g(n)) = f(n) positive constants c and n0 such that n n0
we have 0 cg(n) f(n)
For function g(n) we define W(g(n)) big-Omega of n as the set
Relations Between Q O W
Relations Between Q W O
bull Ie Q(g(n)) = O(g(n)) Ccedil W(g(n))
bull In practice asymptotically tight bounds are obtained from asymptotic upper and lower bounds
Theorem For any two functions g(n) and f(n) f(n) = Q(g(n)) iff
f(n) = O(g(n)) and f(n) = W(g(n))
Logarithms
x = logba is the exponent for a = bx
Natural log ln a = logeaBinary log lg a = log2a
lg2a = (lg a)2
lg lg a = lg (lg a) ac
ab
bb
c
cb
bn
b
ccc
bb ca
ba
aabaa
ana
baab
loglog
log1log
log)1(loglogloglog
loglog
loglog)(log
Review on Summationsbull Constant Series For integers a and b a b
bull Linear Series (Arithmetic Series) For n 0
bull Quadratic Series For n 0
b
ai
ab 11
2)1(21
1
nnnin
i
n
i
nnnni1
2222
6)12)(1(21
Review on Summationsbull Cubic Series For n 0
bull Geometric Series For real x 1
For |x| lt 1
n
i
nnni1
223333
4)1(21
n
k
nnk
xxxxxx
0
12
111
0 11
k
k
xx
What are the Algorithms
Algorithm
Is any well-defined computational procedure that takes some value or set of values as input and produces some value or set of values as output
It is a tool for solving a well-specified computational problem
They are written in a pseudo code which can be implemented in the language of programmerrsquos choice
Example sorting numbers
1048708 Input A sequence of n numbers a3 a1 a2an
1048708 Output A reordered sequence of the inputa1 a2 a3an such that a1lea2 lea3hellip lean
1048708 Input instance 5 2 4 1 6 31048708 Output 1 2 3 4 5 6
1048708 An instance of a problem consists of the input (satisfying whatever constraints are imposed in the problem statement) needed to compute a solution to the problem
Example sorting numbers
1048708 Sorting is a fundamental operation
1048708 Many algorithms existed for that purpose
1048708 The best algorithm to use depends on1048708 The number of items to be sorted1048708 possible restrictions on the item values1048708 kind of storage device to be used main memory disks or tapes
Correct and incorrect algorithmsbull Algorithm is correct if for every input instance it
ends with the correct output We say that a correct algorithm solves the given computational problem
bull An incorrect algorithm might not end at all on some input instances or it might end with an answer other than the desired one
bull We shall be concerned only with correct algorithms
Hard problems
1048708 We can identify the Efficiency of an algorithm from its speed (how long does the algorithm take to produce the result)
1048708 Some problems have unknown efficient solution
1048708 These problems are called NP-complete problems
8
Big-O Notationbull We use a shorthand mathematical notation to
describe the efficiency of an algorithm relative to any parameter n as its ldquoOrderrdquo or Big-Ondash We can say that the first algorithm is O(n)ndash We can say that the second algorithm is O(n2)
bull For any algorithm that has a function g(n) of the parameter n that describes its length of time to execute we can say the algorithm is O(g(n))
bull We only include the fastest growing term and ignore any multiplying by or adding of constants
9
Seven Growth Functions
bull Seven functions g(n) that occur frequently in the analysis of algorithms (in order of increasing rate of growth relative to n)ndash Constant 1ndash Logarithmic log nndash Linear nndash Log Linear n log nndash Quadratic n2
ndash Cubic n3
ndash Exponential 2n
10
Growth Rates Comparedn=1 n=2 n=4 n=8 n=16 n=32
1 1 1 1 1 1 1
logn 0 1 2 3 4 5
n 1 2 4 8 16 32
nlogn 0 2 8 24 64 160
n2 1 4 16 64 256 1024
n3 1 8 64 512 4096 32768
2n 2 4 16 256 65536 4294967296
Graphical Comparison of Complexity Classes
11
1 2 3 4 5 6 7 8 9 10 11 120
500
1000
1500
2000
2500
3000
3500
4000
4500
xlog xx log xxxxxx2^x
Complexity Analysis
bull Asymptotic Complexity
bull Big-O (asymptotic) Notation
bull Big-O Computation Rules
bull Proving Big-O Complexity
bull How to determine complexity of code structures
Asymptotic Complexity bull Finding the exact complexity f(n) = number of basic
operations of an algorithm is difficultbull We approximate f(n) by a function g(n) in a way that
does not substantially change the magnitude of f(n) --the function g(n) is sufficiently close to f(n) for large values of the input size n
bull This approximate measure of efficiency is called asymptotic complexity
bull Thus the asymptotic complexity measure does not give the exact number of operations of an algorithm but it shows how that number grows with the size of the input
bull This gives us a measure that will work for different operating systems compilers and CPUs
Big-O (asymptotic) Notationbull The most commonly used notation for specifying asymptotic
complexity is the big-O notationbull The Big-O notation O(g(n)) is used to give an upper bound (worst-
case) on a positive runtime function f(n) where n is the input size
Definition of Big-Obull Consider a function f(n) that is non-negative n 0 We say that
ldquof(n) is Big-O of g(n)rdquo ie f(n) = O(g(n)) if n0 0 and a constant c gt 0 such that f(n) cg(n) n n0
Big-O (asymptotic) NotationImplication of the definitionbull For all sufficiently large n c g(n) is an upper bound of
f(n)Note By the definition of Big-O f(n) = 3n + 4 is O(n)
it is also O(n2) it is also O(n3) it is also O(nn)
bull However when Big-O notation is used the function g in the relationship f(n) is O(g(n)) is CHOOSEN TO BE AS SMALL AS POSSIBLE ndash We call such a function g a tight asymptotic bound of f(n)
Big-O (asymptotic) NotationSome Big-O complexity classes in order of magnitude from smallest to highest
O(1) ConstantO(log(n)) LogarithmicO(n) LinearO(n log(n)) n log nO(nx) eg O(n2) O(n3) etc Polynomial
O(an) eg O(16n) O(2n) etc Exponential
O(n) FactorialO(nn)
Examples of Algorithms and their big-O complexity
Big-O Notation Examples of Algorithms
O(1) Push Pop Enqueue (if there is a tail reference) Dequeue Accessing an array element
O(log(n)) Binary search
O(n) Linear search
O(n log(n)) Heap sort Quick sort (average) Merge sort
O(n2) Selection sort Insertion sort Bubble sort
O(n3) Matrix multiplication O(2n) Towers of Hanoi
Warnings about O-Notation bull Big-O notation cannot compare
algorithms in the same complexity classbull Big-O notation only gives sensible
comparisons of algorithms in different complexity classes when n is large
bull Consider two algorithms for same task Linear f(n) = 1000 n Quadratic f(n) = n21000The quadratic one is faster for n lt 1000000
Rules for using big-O bull For large values of input n the constants and terms with lower
degree of n are ignored
1 Multiplicative Constants Rule Ignoring constant factors O(c f(n)) = O(f(n)) where c is a constant Example
O(20 n3) = O(n3)
2 Addition Rule Ignoring smaller termsIf O(f(n)) lt O(h(n)) then O(f(n) + h(n)) = O(h(n))Example
O(n2 log n + n3) = O(n3)O(2000 n3 + 2n + n800 + 10n + 27n log n + 5) = O(n )
3 Multiplication Rule O(f(n) h(n)) = O(f(n)) O(h(n))Example
O((n3 + 2n 2 + 3n log n + 7)(8n 2 + 5n + 2)) = O(n 5)
Proving Big-O Complexity
To prove that f(n) is O(g(n)) we find any pair of values n0 and c that satisfy f(n) le c g(n) for n n0
Note The pair (n0 c) is not unique If such a pair exists then there is an infinite number of such pairs
Example Prove that f(n) = 3n2 + 5 is O(n2)We try to find some values of n and c by solving the following inequality
3n2 + 5 cn2 OR 3 + 5n2 c
(By putting different values for n we get corresponding values for c)
n0 1 2 3 4
c 8 425 355 33125 3
Proving Big-O Complexity: Example

Prove that f(n) = 3n^2 + 4n log n + 10 is O(n^2) by finding appropriate values for c and n0.
We try to find some values of n and c by solving the following inequality:
3n^2 + 4n log n + 10 ≤ c n^2, or equivalently 3 + (4 log n)/n + 10/n^2 ≤ c
(We used log of base 2, but another base can be used as well.)

n0:  1    2    3     4     ...
c:   13   7.5  6.22  5.62  (approaches 3 as n grows)
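Both proofs can be spot-checked numerically. The sketch below (helper names are ours) verifies the bounds with the constants found above, c = 8 and c = 13 with n0 = 1:

```java
public class BigOProofCheck {
    // f(n) = 3n^2 + 5  <=  8 n^2  for n >= 1   (c = 8, n0 = 1)
    public static boolean firstBoundHolds(int n) {
        return 3.0 * n * n + 5 <= 8.0 * n * n;
    }

    // f(n) = 3n^2 + 4 n lg n + 10  <=  13 n^2  for n >= 1   (c = 13, n0 = 1)
    public static boolean secondBoundHolds(int n) {
        double lgN = Math.log(n) / Math.log(2);   // base-2 logarithm
        return 3.0 * n * n + 4.0 * n * lgN + 10 <= 13.0 * n * n;
    }

    public static void main(String[] args) {
        boolean allHold = true;
        for (int n = 1; n <= 100_000; n++)
            allHold &= firstBoundHolds(n) && secondBoundHolds(n);
        System.out.println("bounds hold for n = 1..100000: " + allHold);
    }
}
```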
How to determine complexity of code structures

Loops (for, while, and do-while):
Complexity is determined by the number of iterations in the loop times the complexity of the body of the loop.

Examples:

for (int i = 0; i < n; i++)
    sum = sum - i;             // O(n)

for (int i = 0; i < n * n; i++)
    sum = sum + i;             // O(n^2)

i = 1;
while (i < n) {
    sum = sum + i;
    i = i * 2;
}                              // O(log n)
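Counting iterations empirically confirms these growth rates. A minimal sketch (method names are illustrative, not from the slides):

```java
public class LoopCounts {
    public static int linearLoop(int n) {       // O(n)
        int count = 0;
        for (int i = 0; i < n; i++) count++;
        return count;
    }

    public static int quadraticLoop(int n) {    // O(n^2)
        int count = 0;
        for (int i = 0; i < n * n; i++) count++;
        return count;
    }

    public static int doublingLoop(int n) {     // O(log n): i doubles each pass
        int count = 0;
        for (int i = 1; i < n; i *= 2) count++;
        return count;
    }

    public static void main(String[] args) {
        System.out.println(linearLoop(16));     // 16
        System.out.println(quadraticLoop(16));  // 256
        System.out.println(doublingLoop(16));   // 4  (i takes values 1, 2, 4, 8)
    }
}
```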
Nested Loops:
Complexity of inner loop * complexity of outer loop.

Examples:

sum = 0;
for (int i = 0; i < n; i++)
    for (int j = 0; j < n; j++)
        sum += i * j;          // O(n^2)

i = 1;
while (i <= n) {
    j = 1;
    while (j <= n) {
        // statements of constant complexity
        j = j * 2;
    }
    i = i + 1;
}                              // O(n log n)
Sequence of statements: use the Addition rule.
O(s1; s2; s3; …; sk) = O(s1) + O(s2) + O(s3) + … + O(sk) = O(max(s1, s2, s3, …, sk))

Example:

for (int j = 0; j < n * n; j++)
    sum = sum + j;                         // O(n^2)
for (int k = 0; k < n; k++)
    sum = sum - k;                         // O(n)
System.out.print("sum is now " + sum);     // O(1)

Complexity is O(n^2) + O(n) + O(1) = O(n^2)
Switch statement: take the complexity of the most expensive case.

char key;
int[] X = new int[5];
int[][] Y = new int[10][10];
switch (key) {
    case 'a':
        for (int i = 0; i < X.length; i++)        // O(n)
            sum += X[i];
        break;
    case 'b':
        for (int i = 0; i < Y.length; i++)        // O(n^2)
            for (int j = 0; j < Y[0].length; j++)
                sum += Y[i][j];
        break;
}   // End of switch block

Overall complexity: O(n^2)
If statement: take the complexity of the most expensive branch.

char key;
int[][] A = new int[5][5];
int[][] B = new int[5][5];
int[][] C = new int[5][5];
if (key == '+') {
    for (int i = 0; i < n; i++)                   // O(n^2)
        for (int j = 0; j < n; j++)
            C[i][j] = A[i][j] + B[i][j];
}   // End of if block
else if (key == 'x')
    C = matrixMult(A, B);                         // O(n^3)
else
    System.out.println("Error! Enter '+' or 'x'!");   // O(1)

Overall complexity: O(n^3)
• Sometimes if-else statements must be carefully checked:
  O(if-else) = O(condition) + max(O(if branch), O(else branch))

int[] integers = new int[10];
if (hasPrimes(integers) == true)    // condition: O(n)
    integers[0] = 20;               // O(1)
else
    integers[0] = -20;              // O(1)

public boolean hasPrimes(int[] arr) {
    for (int i = 0; i < arr.length; i++) {
        ...
    }
}   // End of hasPrimes()

O(if-else) = O(condition) = O(n)
• Note: Sometimes a loop may cause the if-else rule not to be applicable. Consider the following loop:

while (n > 0) {
    if (n % 2 == 0) {
        System.out.println(n);
        n = n / 2;
    } else {
        System.out.println(n);
        System.out.println(n);
        n = n - 1;
    }
}

The else-branch has more basic operations; therefore, one may conclude that the loop is O(n). However, the if-branch dominates. For example, if n is 60, then the sequence of values of n is 60, 30, 15, 14, 7, 6, 3, 2, 1, and 0. Hence the loop is logarithmic and its complexity is O(log n).
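A quick simulation (class and method names are ours) reproduces the trace above and confirms the logarithmic behavior:

```java
public class HalvingTrace {
    // Count iterations of the loop from the slide, without the printing.
    public static int iterations(int n) {
        int count = 0;
        while (n > 0) {
            if (n % 2 == 0) n = n / 2;   // even: halve (this step dominates)
            else            n = n - 1;   // odd: an odd step is always followed
                                         // by a halving step
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // For n = 60 the values are 60, 30, 15, 14, 7, 6, 3, 2, 1, 0,
        // i.e. 9 iterations; in general at most ~2*log2(n) iterations.
        System.out.println(iterations(60));
    }
}
```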
Asymptotic Complexity

• Running time of an algorithm as a function of input size n, for large n.
• Expressed using only the highest-order term in the expression for the exact running time.
  Instead of the exact running time, we say Θ(n^2).
• Describes the behavior of the function in the limit.
• Written using asymptotic notation.

Asymptotic Notation
• Θ, O, Ω
• Defined for functions over the natural numbers.
  Example: f(n) = Θ(n^2) describes how f(n) grows in comparison to n^2.
• Each notation defines a set of functions; in practice, used to compare the sizes of two functions.
• The notations describe different rate-of-growth relations between the defining function and the defined set of functions.
Θ-notation

For a function g(n), we define Θ(g(n)) (big-Theta of g of n) as the set
Θ(g(n)) = { f(n) : there exist positive constants c1, c2, and n0 such that
            0 ≤ c1 g(n) ≤ f(n) ≤ c2 g(n) for all n ≥ n0 }.

g(n) is an asymptotically tight bound for f(n).
Intuitively: the set of all functions that have the same rate of growth as g(n).
O-notation

For a function g(n), we define O(g(n)) (big-O of g of n) as the set
O(g(n)) = { f(n) : there exist positive constants c and n0 such that
            0 ≤ f(n) ≤ c g(n) for all n ≥ n0 }.

g(n) is an asymptotic upper bound for f(n).
Intuitively: the set of all functions whose rate of growth is the same as or lower than that of g(n).

f(n) = Θ(g(n)) implies f(n) = O(g(n)), i.e., Θ(g(n)) ⊆ O(g(n)).
Ω-notation

For a function g(n), we define Ω(g(n)) (big-Omega of g of n) as the set
Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that
            0 ≤ c g(n) ≤ f(n) for all n ≥ n0 }.

g(n) is an asymptotic lower bound for f(n).
Intuitively: the set of all functions whose rate of growth is the same as or higher than that of g(n).

f(n) = Θ(g(n)) implies f(n) = Ω(g(n)), i.e., Θ(g(n)) ⊆ Ω(g(n)).
Relations Between Θ, O, Ω

• Θ(g(n)) = O(g(n)) ∩ Ω(g(n))
• In practice, asymptotically tight bounds are obtained from asymptotic upper and lower bounds.

Theorem: For any two functions g(n) and f(n), f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)).
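The sandwich in the Θ definition can be illustrated with the function from the earlier proof, f(n) = 3n^2 + 5, which is Θ(n^2) with witnesses c1 = 3, c2 = 8, n0 = 1 (the example and naming are ours):

```java
public class ThetaSandwich {
    // Check c1*g(n) <= f(n) <= c2*g(n) for f(n) = 3n^2 + 5 and g(n) = n^2,
    // with c1 = 3, c2 = 8, n0 = 1.
    public static boolean sandwiched(int n) {
        double f = 3.0 * n * n + 5;
        double g = (double) n * n;
        return 3.0 * g <= f && f <= 8.0 * g;
    }

    public static void main(String[] args) {
        boolean ok = true;
        for (int n = 1; n <= 10_000; n++) ok &= sandwiched(n);
        System.out.println("3*g(n) <= f(n) <= 8*g(n) for n = 1..10000: " + ok);
    }
}
```

Since f(n) is both O(n^2) (upper bound, c2 = 8) and Ω(n^2) (lower bound, c1 = 3), the theorem gives f(n) = Θ(n^2).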
Logarithms

x = log_b a is the exponent for which a = b^x.

Natural log: ln a = log_e a
Binary log:  lg a = log_2 a
lg^2 a = (lg a)^2
lg lg a = lg (lg a)

Useful identities:
a = b^(log_b a)
log_c (ab) = log_c a + log_c b
log_b (a^n) = n log_b a
log_b a = (log_c a) / (log_c b)
log_b (1/a) = -log_b a
log_b a = 1 / (log_a b)
a^(log_b c) = c^(log_b a)
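These identities are easy to sanity-check numerically. The sketch below (our own helper, built on the change-of-base identity, since `java.lang.Math` only provides natural and base-10 logarithms) verifies each one for sample values:

```java
public class LogIdentityCheck {
    // log base b of x, via the change-of-base identity.
    public static double log(double b, double x) {
        return Math.log(x) / Math.log(b);
    }

    public static void main(String[] args) {
        double a = 5, b = 2, c = 10, n = 3;
        // a = b^(log_b a)
        System.out.println(Math.abs(a - Math.pow(b, log(b, a))) < 1e-9);
        // log_c(ab) = log_c a + log_c b
        System.out.println(Math.abs(log(c, a * b) - (log(c, a) + log(c, b))) < 1e-9);
        // log_b(a^n) = n log_b a
        System.out.println(Math.abs(log(b, Math.pow(a, n)) - n * log(b, a)) < 1e-9);
        // log_b a = 1 / log_a b
        System.out.println(Math.abs(log(b, a) - 1.0 / log(a, b)) < 1e-9);
        // a^(log_b c) = c^(log_b a)
        System.out.println(Math.abs(Math.pow(a, log(b, c)) - Math.pow(c, log(b, a))) < 1e-6);
    }
}
```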
Review of Summations

• Constant Series: for integers a and b with a ≤ b,
  Σ_{i=a}^{b} 1 = b - a + 1

• Linear Series (Arithmetic Series): for n ≥ 0,
  Σ_{i=1}^{n} i = 1 + 2 + … + n = n(n + 1)/2

• Quadratic Series: for n ≥ 0,
  Σ_{i=1}^{n} i^2 = 1 + 4 + 9 + … + n^2 = n(n + 1)(2n + 1)/6
Review of Summations (continued)

• Cubic Series: for n ≥ 0,
  Σ_{i=1}^{n} i^3 = 1 + 8 + 27 + … + n^3 = n^2 (n + 1)^2 / 4

• Geometric Series: for real x ≠ 1,
  Σ_{k=0}^{n} x^k = 1 + x + x^2 + … + x^n = (x^{n+1} - 1)/(x - 1)

• For |x| < 1:
  Σ_{k=0}^{∞} x^k = 1/(1 - x)
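The closed forms can be verified against a direct loop. A short sketch (method names are ours):

```java
public class SummationCheck {
    public static long linearSum(int n) {
        long s = 0;
        for (int i = 1; i <= n; i++) s += i;
        return s;
    }
    public static long quadraticSum(int n) {
        long s = 0;
        for (int i = 1; i <= n; i++) s += (long) i * i;
        return s;
    }
    public static long cubicSum(int n) {
        long s = 0;
        for (int i = 1; i <= n; i++) s += (long) i * i * i;
        return s;
    }

    public static void main(String[] args) {
        int n = 100;
        // Compare each loop against its closed form.
        System.out.println(linearSum(n)    == (long) n * (n + 1) / 2);
        System.out.println(quadraticSum(n) == (long) n * (n + 1) * (2 * n + 1) / 6);
        System.out.println(cubicSum(n)     == (long) n * n * (n + 1) * (n + 1) / 4);
    }
}
```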
Correct and incorrect algorithms

• An algorithm is correct if, for every input instance, it ends with the correct output. We say that a correct algorithm solves the given computational problem.
• An incorrect algorithm might not end at all on some input instances, or it might end with an answer other than the desired one.
• We shall be concerned only with correct algorithms.
Hard problems

• We can identify the efficiency of an algorithm from its speed (how long the algorithm takes to produce its result).
• Some problems have no known efficient solution.
• A well-known class of such problems is the NP-complete problems.
Big-O Notation

• We use a shorthand mathematical notation to describe the efficiency of an algorithm relative to a parameter n as its "order", or Big-O.
  For example, we can say that one algorithm is O(n) and that another is O(n^2).
• For any algorithm whose execution time is described by a function g(n) of the parameter n, we can say the algorithm is O(g(n)).
• We include only the fastest-growing term and ignore any multiplicative or additive constants.
Seven Growth Functions

• Seven functions g(n) that occur frequently in the analysis of algorithms (in order of increasing rate of growth relative to n):
  Constant: 1
  Logarithmic: log n
  Linear: n
  Log-linear: n log n
  Quadratic: n^2
  Cubic: n^3
  Exponential: 2^n
Growth Rates Compared

           n=1   n=2   n=4   n=8   n=16    n=32
1          1     1     1     1     1       1
log n      0     1     2     3     4       5
n          1     2     4     8     16      32
n log n    0     2     8     24    64      160
n^2        1     4     16    64    256     1024
n^3        1     8     64    512   4096    32768
2^n        2     4     16    256   65536   4294967296
Graphical Comparison of Complexity Classes

[Figure: plot of log x, x, x log x, x^2, x^3, and 2^x for x = 1 to 12; the y-axis runs from 0 to 4500, and 2^x quickly dominates the other curves.]
Complexity Analysis

• Asymptotic Complexity
• Big-O (asymptotic) Notation
• Big-O Computation Rules
• Proving Big-O Complexity
• How to determine complexity of code structures
Asymptotic Complexity

• Finding the exact complexity, f(n) = the number of basic operations of an algorithm, is difficult.
• We approximate f(n) by a function g(n) in a way that does not substantially change the magnitude of f(n); the function g(n) is sufficiently close to f(n) for large values of the input size n.
• This approximate measure of efficiency is called asymptotic complexity.
• Thus the asymptotic complexity measure does not give the exact number of operations of an algorithm, but it shows how that number grows with the size of the input.
• This gives us a measure that will work for different operating systems, compilers, and CPUs.
Big-O (asymptotic) Notation

• The most commonly used notation for specifying asymptotic complexity is the big-O notation.
• The Big-O notation O(g(n)) is used to give an upper bound (worst case) on a positive runtime function f(n), where n is the input size.

Definition of Big-O:
• Consider a function f(n) that is non-negative for all n ≥ 0. We say that "f(n) is Big-O of g(n)", written f(n) = O(g(n)), if there exist n0 ≥ 0 and a constant c > 0 such that f(n) ≤ c g(n) for all n ≥ n0.
Implication of the definition:
• For all sufficiently large n, c g(n) is an upper bound of f(n).

Note: By the definition of Big-O, f(n) = 3n + 4 is O(n); it is also O(n^2), O(n^3), …, O(n^n).
• However, when Big-O notation is used, the function g in the relationship f(n) = O(g(n)) is chosen to be as small as possible. We call such a function g a tight asymptotic bound of f(n).
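The note about f(n) = 3n + 4 can be checked directly: taking c = 7 and n0 = 1 works, since 3n + 4 ≤ 7n is equivalent to n ≥ 1. A minimal sketch (the class name and choice of c are ours):

```java
public class TightBoundDemo {
    // f(n) = 3n + 4 is O(n): 3n + 4 <= 7n whenever n >= 1 (c = 7, n0 = 1).
    public static boolean bound(int n) {
        return 3L * n + 4 <= 7L * n;
    }

    public static void main(String[] args) {
        boolean ok = true;
        for (int n = 1; n <= 1_000_000; n++) ok &= bound(n);
        System.out.println("3n + 4 <= 7n for n = 1..1000000: " + ok);
        // O(n) is the tight bound here; O(n^2), O(n^3), ... are valid but loose.
    }
}
```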
Big-O (asymptotic) NotationSome Big-O complexity classes in order of magnitude from smallest to highest
O(1) ConstantO(log(n)) LogarithmicO(n) LinearO(n log(n)) n log nO(nx) eg O(n2) O(n3) etc Polynomial
O(an) eg O(16n) O(2n) etc Exponential
O(n) FactorialO(nn)
Examples of Algorithms and their big-O complexity
Big-O Notation Examples of Algorithms
O(1) Push Pop Enqueue (if there is a tail reference) Dequeue Accessing an array element
O(log(n)) Binary search
O(n) Linear search
O(n log(n)) Heap sort Quick sort (average) Merge sort
O(n2) Selection sort Insertion sort Bubble sort
O(n3) Matrix multiplication O(2n) Towers of Hanoi
Warnings about O-Notation bull Big-O notation cannot compare
algorithms in the same complexity classbull Big-O notation only gives sensible
comparisons of algorithms in different complexity classes when n is large
bull Consider two algorithms for same task Linear f(n) = 1000 n Quadratic f(n) = n21000The quadratic one is faster for n lt 1000000
Rules for using big-O bull For large values of input n the constants and terms with lower
degree of n are ignored
1 Multiplicative Constants Rule Ignoring constant factors O(c f(n)) = O(f(n)) where c is a constant Example
O(20 n3) = O(n3)
2 Addition Rule Ignoring smaller termsIf O(f(n)) lt O(h(n)) then O(f(n) + h(n)) = O(h(n))Example
O(n2 log n + n3) = O(n3)O(2000 n3 + 2n + n800 + 10n + 27n log n + 5) = O(n )
3 Multiplication Rule O(f(n) h(n)) = O(f(n)) O(h(n))Example
O((n3 + 2n 2 + 3n log n + 7)(8n 2 + 5n + 2)) = O(n 5)
Proving Big-O Complexity
To prove that f(n) is O(g(n)) we find any pair of values n0 and c that satisfy f(n) le c g(n) for n n0
Note The pair (n0 c) is not unique If such a pair exists then there is an infinite number of such pairs
Example Prove that f(n) = 3n2 + 5 is O(n2)We try to find some values of n and c by solving the following inequality
3n2 + 5 cn2 OR 3 + 5n2 c
(By putting different values for n we get corresponding values for c)
n0 1 2 3 4
c 8 425 355 33125 3
Proving Big-O ComplexityExample
Prove that f(n) = 3n2 + 4n log n + 10 is O(n2) by finding appropriate values for c and n0
We try to find some values of n and c by solving the following inequality 3n2 + 4n log n + 10 cn2
OR 3 + 4 log n n+ 10n2 c
( We used Log of base 2 but another base can be used as well)n0 1 2 3 4
c 13 75 622 562 3
How to determine complexity of code structures Loops for while and do-while
Complexity is determined by the number of iterations in the loop times the complexity of the body of the loop
Examples
for (int i = 0 i lt n i++) sum = sum - i
for (int i = 0 i lt n n i++) sum = sum + i
i=1while (i lt n) sum = sum + i i = i2
O(n)
O(n2)
O(log n)
How to determine complexity of code structures Nested Loops Complexity of inner loop complexity of outer loopExamples
sum = 0for(int i = 0 i lt n i++) for(int j = 0 j lt n j++) sum += i j
i = 1while(i lt= n) j = 1 while(j lt= n) statements of constant complexity j = j2 i = i+1
O(n2)
O(n log n)
How to determine complexity of code structures Sequence of statements Use Addition rule
O(s1 s2 s3 hellip sk) = O(s1) + O(s2) + O(s3) + hellip + O(sk) = O(max(s1 s2 s3 sk))
Example
Complexity is O(n2) + O(n) +O(1) = O(n2)
for (int j = 0 j lt n n j++) sum = sum + jfor (int k = 0 k lt n k++) sum = sum - lSystemoutprint(sum is now rdquo + sum)
char keyint[] X = new int[5] int[][] Y = new int[10][10] switch(key) case a for(int i = 0 i lt Xlength i++) sum += X[i] break case b for(int i = 0 i lt Ylength j++) for(int j = 0 j lt Y[0]length j++) sum += Y[i][j] break End of switch block
How to determine complexity of code structures
Switch Take the complexity of the most expensive case
o(n)
o(n2)
Overall Complexity o(n2)
char keyint[][] A = new int[5][5]int[][] B = new int[5][5]int[][] C = new int[5][5]if(key == +) for(int i = 0 i lt n i++) for(int j = 0 j lt n j++) C[i][j] = A[i][j] + B[i][j] End of if block else if(key == x) C = matrixMult(A B)
else Systemoutprintln(Error Enter + or x)
If Statement Take the complexity of the most expensive case
O(n2)
O(n3)
O(1)
How to determine complexity of code structures
Overall complexityO(n3)
int[] integers = new int[10]if(hasPrimes(integers) == true) integers[0] = 20else
integers[0] = -20
public boolean hasPrimes(int[] arr) for(int i = 0 i lt arrlength i++)
End of hasPrimes()
How to determine complexity of code structures
bull Sometimes if-else statements must carefully be checkedO(if-else) = O(Condition)+ Max[O(if) O(else)]
O(1)
O(1)
O(if-else) = O(Condition) = O(n)
O(n)
How to determine complexity of code structuresbull Note Sometimes a loop may cause the if-else rule not to be
applicable Consider the following loop
The else-branch has more basic operations therefore one may conclude that the loop is O(n) However the if-branch dominates For example if n is 60 then the sequence of n is 60 30 15 14 7 6 3 2 1 and 0 Hence the loop is logarithmic and its complexity is O(log n)
while (n gt 0) if (n 2 = = 0) Systemoutprintln(n) n = n 2 else Systemoutprintln(n) Systemoutprintln(n)
n = n ndash 1
Comp 122
Asymptotic Complexity
bull Running time of an algorithm as a function of input size n for large n
bull Expressed using only the highest-order term in the expression for the exact running timendash Instead of exact running time say Q(n2)
bull Describes behavior of function in the limitbull Written using Asymptotic Notation
Asymptotic Notationbull Q O Wbull Defined for functions over the natural numbers
ndash Ex f(n) = Q(n2)ndash Describes how f(n) grows in comparison to n2
bull Define a set of functions in practice used to compare two function sizes
bull The notations describe different rate-of-growth relations between the defining function and the defined set of functions
Q-notation
Q(g(n)) = f(n) positive constants c1 c2 and n0 such that n n0
we have 0 c1g(n) f(n) c2g(n)
For function g(n) we define Q(g(n)) big-Theta of n as the set
g(n) is an asymptotically tight bound for f(n)
Intuitively Set of all functions thathave the same rate of growth as g(n)
O-notation
O(g(n)) = f(n) positive constants c and n0 such that n n0
we have 0 f(n) cg(n)
For function g(n) we define O(g(n)) big-O of n as the set
g(n) is an asymptotic upper bound for f(n)
Intuitively Set of all functions whose rate of growth is the same as or lower than that of g(n)
f(n) = Q(g(n)) f(n) = O(g(n))Q(g(n)) O(g(n))
W -notation
g(n) is an asymptotic lower bound for f(n)
Intuitively Set of all functions whose rate of growth is the same as or higher than that of g(n)
f(n) = Q(g(n)) f(n) = W(g(n))Q(g(n)) W(g(n))
W(g(n)) = f(n) positive constants c and n0 such that n n0
we have 0 cg(n) f(n)
For function g(n) we define W(g(n)) big-Omega of n as the set
Relations Between Q O W
Relations Between Q W O
bull Ie Q(g(n)) = O(g(n)) Ccedil W(g(n))
bull In practice asymptotically tight bounds are obtained from asymptotic upper and lower bounds
Theorem For any two functions g(n) and f(n) f(n) = Q(g(n)) iff
f(n) = O(g(n)) and f(n) = W(g(n))
Logarithms
x = logba is the exponent for a = bx
Natural log ln a = logeaBinary log lg a = log2a
lg2a = (lg a)2
lg lg a = lg (lg a) ac
ab
bb
c
cb
bn
b
ccc
bb ca
ba
aabaa
ana
baab
loglog
log1log
log)1(loglogloglog
loglog
loglog)(log
Review on Summationsbull Constant Series For integers a and b a b
bull Linear Series (Arithmetic Series) For n 0
bull Quadratic Series For n 0
b
ai
ab 11
2)1(21
1
nnnin
i
n
i
nnnni1
2222
6)12)(1(21
Review on Summationsbull Cubic Series For n 0
bull Geometric Series For real x 1
For |x| lt 1
n
i
nnni1
223333
4)1(21
n
k
nnk
xxxxxx
0
12
111
0 11
k
k
xx
Example sorting numbers
1048708 Sorting is a fundamental operation
1048708 Many algorithms existed for that purpose
1048708 The best algorithm to use depends on1048708 The number of items to be sorted1048708 possible restrictions on the item values1048708 kind of storage device to be used main memory disks or tapes
Correct and incorrect algorithmsbull Algorithm is correct if for every input instance it
ends with the correct output We say that a correct algorithm solves the given computational problem
bull An incorrect algorithm might not end at all on some input instances or it might end with an answer other than the desired one
bull We shall be concerned only with correct algorithms
Hard problems
1048708 We can identify the Efficiency of an algorithm from its speed (how long does the algorithm take to produce the result)
1048708 Some problems have unknown efficient solution
1048708 These problems are called NP-complete problems
8
Big-O Notationbull We use a shorthand mathematical notation to
describe the efficiency of an algorithm relative to any parameter n as its ldquoOrderrdquo or Big-Ondash We can say that the first algorithm is O(n)ndash We can say that the second algorithm is O(n2)
bull For any algorithm that has a function g(n) of the parameter n that describes its length of time to execute we can say the algorithm is O(g(n))
bull We only include the fastest growing term and ignore any multiplying by or adding of constants
9
Seven Growth Functions
bull Seven functions g(n) that occur frequently in the analysis of algorithms (in order of increasing rate of growth relative to n)ndash Constant 1ndash Logarithmic log nndash Linear nndash Log Linear n log nndash Quadratic n2
ndash Cubic n3
ndash Exponential 2n
10
Growth Rates Comparedn=1 n=2 n=4 n=8 n=16 n=32
1 1 1 1 1 1 1
logn 0 1 2 3 4 5
n 1 2 4 8 16 32
nlogn 0 2 8 24 64 160
n2 1 4 16 64 256 1024
n3 1 8 64 512 4096 32768
2n 2 4 16 256 65536 4294967296
Graphical Comparison of Complexity Classes
11
1 2 3 4 5 6 7 8 9 10 11 120
500
1000
1500
2000
2500
3000
3500
4000
4500
xlog xx log xxxxxx2^x
Complexity Analysis
bull Asymptotic Complexity
bull Big-O (asymptotic) Notation
bull Big-O Computation Rules
bull Proving Big-O Complexity
bull How to determine complexity of code structures
Asymptotic Complexity bull Finding the exact complexity f(n) = number of basic
operations of an algorithm is difficultbull We approximate f(n) by a function g(n) in a way that
does not substantially change the magnitude of f(n) --the function g(n) is sufficiently close to f(n) for large values of the input size n
bull This approximate measure of efficiency is called asymptotic complexity
bull Thus the asymptotic complexity measure does not give the exact number of operations of an algorithm but it shows how that number grows with the size of the input
bull This gives us a measure that will work for different operating systems compilers and CPUs
Big-O (asymptotic) Notationbull The most commonly used notation for specifying asymptotic
complexity is the big-O notationbull The Big-O notation O(g(n)) is used to give an upper bound (worst-
case) on a positive runtime function f(n) where n is the input size
Definition of Big-Obull Consider a function f(n) that is non-negative n 0 We say that
ldquof(n) is Big-O of g(n)rdquo ie f(n) = O(g(n)) if n0 0 and a constant c gt 0 such that f(n) cg(n) n n0
Big-O (asymptotic) NotationImplication of the definitionbull For all sufficiently large n c g(n) is an upper bound of
f(n)Note By the definition of Big-O f(n) = 3n + 4 is O(n)
it is also O(n2) it is also O(n3) it is also O(nn)
bull However when Big-O notation is used the function g in the relationship f(n) is O(g(n)) is CHOOSEN TO BE AS SMALL AS POSSIBLE ndash We call such a function g a tight asymptotic bound of f(n)
Big-O (asymptotic) NotationSome Big-O complexity classes in order of magnitude from smallest to highest
O(1) ConstantO(log(n)) LogarithmicO(n) LinearO(n log(n)) n log nO(nx) eg O(n2) O(n3) etc Polynomial
O(an) eg O(16n) O(2n) etc Exponential
O(n) FactorialO(nn)
Examples of Algorithms and their big-O complexity
Big-O Notation Examples of Algorithms
O(1) Push Pop Enqueue (if there is a tail reference) Dequeue Accessing an array element
O(log(n)) Binary search
O(n) Linear search
O(n log(n)) Heap sort Quick sort (average) Merge sort
O(n2) Selection sort Insertion sort Bubble sort
O(n3) Matrix multiplication O(2n) Towers of Hanoi
Warnings about O-Notation bull Big-O notation cannot compare
algorithms in the same complexity classbull Big-O notation only gives sensible
comparisons of algorithms in different complexity classes when n is large
bull Consider two algorithms for same task Linear f(n) = 1000 n Quadratic f(n) = n21000The quadratic one is faster for n lt 1000000
Rules for using big-O bull For large values of input n the constants and terms with lower
degree of n are ignored
1 Multiplicative Constants Rule Ignoring constant factors O(c f(n)) = O(f(n)) where c is a constant Example
O(20 n3) = O(n3)
2 Addition Rule Ignoring smaller termsIf O(f(n)) lt O(h(n)) then O(f(n) + h(n)) = O(h(n))Example
O(n2 log n + n3) = O(n3)O(2000 n3 + 2n + n800 + 10n + 27n log n + 5) = O(n )
3 Multiplication Rule O(f(n) h(n)) = O(f(n)) O(h(n))Example
O((n3 + 2n 2 + 3n log n + 7)(8n 2 + 5n + 2)) = O(n 5)
Proving Big-O Complexity
To prove that f(n) is O(g(n)) we find any pair of values n0 and c that satisfy f(n) le c g(n) for n n0
Note The pair (n0 c) is not unique If such a pair exists then there is an infinite number of such pairs
Example Prove that f(n) = 3n2 + 5 is O(n2)We try to find some values of n and c by solving the following inequality
3n2 + 5 cn2 OR 3 + 5n2 c
(By putting different values for n we get corresponding values for c)
n0 1 2 3 4
c 8 425 355 33125 3
Proving Big-O ComplexityExample
Prove that f(n) = 3n2 + 4n log n + 10 is O(n2) by finding appropriate values for c and n0
We try to find some values of n and c by solving the following inequality 3n2 + 4n log n + 10 cn2
OR 3 + 4 log n n+ 10n2 c
( We used Log of base 2 but another base can be used as well)n0 1 2 3 4
c 13 75 622 562 3
How to determine complexity of code structures Loops for while and do-while
Complexity is determined by the number of iterations in the loop times the complexity of the body of the loop
Examples
for (int i = 0 i lt n i++) sum = sum - i
for (int i = 0 i lt n n i++) sum = sum + i
i=1while (i lt n) sum = sum + i i = i2
O(n)
O(n2)
O(log n)
How to determine complexity of code structures Nested Loops Complexity of inner loop complexity of outer loopExamples
sum = 0for(int i = 0 i lt n i++) for(int j = 0 j lt n j++) sum += i j
i = 1while(i lt= n) j = 1 while(j lt= n) statements of constant complexity j = j2 i = i+1
O(n2)
O(n log n)
How to determine complexity of code structures Sequence of statements Use Addition rule
O(s1 s2 s3 hellip sk) = O(s1) + O(s2) + O(s3) + hellip + O(sk) = O(max(s1 s2 s3 sk))
Example
Complexity is O(n2) + O(n) +O(1) = O(n2)
for (int j = 0 j lt n n j++) sum = sum + jfor (int k = 0 k lt n k++) sum = sum - lSystemoutprint(sum is now rdquo + sum)
char keyint[] X = new int[5] int[][] Y = new int[10][10] switch(key) case a for(int i = 0 i lt Xlength i++) sum += X[i] break case b for(int i = 0 i lt Ylength j++) for(int j = 0 j lt Y[0]length j++) sum += Y[i][j] break End of switch block
How to determine complexity of code structures
Switch Take the complexity of the most expensive case
o(n)
o(n2)
Overall Complexity o(n2)
char keyint[][] A = new int[5][5]int[][] B = new int[5][5]int[][] C = new int[5][5]if(key == +) for(int i = 0 i lt n i++) for(int j = 0 j lt n j++) C[i][j] = A[i][j] + B[i][j] End of if block else if(key == x) C = matrixMult(A B)
else Systemoutprintln(Error Enter + or x)
If Statement Take the complexity of the most expensive case
O(n2)
O(n3)
O(1)
How to determine complexity of code structures
Overall complexityO(n3)
int[] integers = new int[10]if(hasPrimes(integers) == true) integers[0] = 20else
integers[0] = -20
public boolean hasPrimes(int[] arr) for(int i = 0 i lt arrlength i++)
End of hasPrimes()
How to determine complexity of code structures
bull Sometimes if-else statements must carefully be checkedO(if-else) = O(Condition)+ Max[O(if) O(else)]
O(1)
O(1)
O(if-else) = O(Condition) = O(n)
O(n)
How to determine complexity of code structuresbull Note Sometimes a loop may cause the if-else rule not to be
applicable Consider the following loop
The else-branch has more basic operations therefore one may conclude that the loop is O(n) However the if-branch dominates For example if n is 60 then the sequence of n is 60 30 15 14 7 6 3 2 1 and 0 Hence the loop is logarithmic and its complexity is O(log n)
while (n gt 0) if (n 2 = = 0) Systemoutprintln(n) n = n 2 else Systemoutprintln(n) Systemoutprintln(n)
n = n ndash 1
Comp 122
Asymptotic Complexity
bull Running time of an algorithm as a function of input size n for large n
bull Expressed using only the highest-order term in the expression for the exact running timendash Instead of exact running time say Q(n2)
bull Describes behavior of function in the limitbull Written using Asymptotic Notation
Asymptotic Notationbull Q O Wbull Defined for functions over the natural numbers
ndash Ex f(n) = Q(n2)ndash Describes how f(n) grows in comparison to n2
bull Define a set of functions in practice used to compare two function sizes
bull The notations describe different rate-of-growth relations between the defining function and the defined set of functions
Q-notation
Q(g(n)) = f(n) positive constants c1 c2 and n0 such that n n0
we have 0 c1g(n) f(n) c2g(n)
For function g(n) we define Q(g(n)) big-Theta of n as the set
g(n) is an asymptotically tight bound for f(n)
Intuitively Set of all functions thathave the same rate of growth as g(n)
O-notation
O(g(n)) = f(n) positive constants c and n0 such that n n0
we have 0 f(n) cg(n)
For function g(n) we define O(g(n)) big-O of n as the set
g(n) is an asymptotic upper bound for f(n)
Intuitively Set of all functions whose rate of growth is the same as or lower than that of g(n)
f(n) = Q(g(n)) f(n) = O(g(n))Q(g(n)) O(g(n))
W -notation
g(n) is an asymptotic lower bound for f(n)
Intuitively Set of all functions whose rate of growth is the same as or higher than that of g(n)
f(n) = Q(g(n)) f(n) = W(g(n))Q(g(n)) W(g(n))
W(g(n)) = f(n) positive constants c and n0 such that n n0
we have 0 cg(n) f(n)
For function g(n) we define W(g(n)) big-Omega of n as the set
Relations Between Q O W
Relations Between Q W O
bull Ie Q(g(n)) = O(g(n)) Ccedil W(g(n))
bull In practice asymptotically tight bounds are obtained from asymptotic upper and lower bounds
Theorem For any two functions g(n) and f(n) f(n) = Q(g(n)) iff
f(n) = O(g(n)) and f(n) = W(g(n))
Logarithms
x = logba is the exponent for a = bx
Natural log ln a = logeaBinary log lg a = log2a
lg2a = (lg a)2
lg lg a = lg (lg a) ac
ab
bb
c
cb
bn
b
ccc
bb ca
ba
aabaa
ana
baab
loglog
log1log
log)1(loglogloglog
loglog
loglog)(log
Review on Summationsbull Constant Series For integers a and b a b
bull Linear Series (Arithmetic Series) For n 0
bull Quadratic Series For n 0
b
ai
ab 11
2)1(21
1
nnnin
i
n
i
nnnni1
2222
6)12)(1(21
Review on Summationsbull Cubic Series For n 0
bull Geometric Series For real x 1
For |x| lt 1
n
i
nnni1
223333
4)1(21
n
k
nnk
xxxxxx
0
12
111
0 11
k
k
xx
Correct and incorrect algorithmsbull Algorithm is correct if for every input instance it
ends with the correct output We say that a correct algorithm solves the given computational problem
bull An incorrect algorithm might not end at all on some input instances or it might end with an answer other than the desired one
bull We shall be concerned only with correct algorithms
Hard problems
1048708 We can identify the Efficiency of an algorithm from its speed (how long does the algorithm take to produce the result)
1048708 Some problems have unknown efficient solution
1048708 These problems are called NP-complete problems
8
Big-O Notationbull We use a shorthand mathematical notation to
describe the efficiency of an algorithm relative to any parameter n as its ldquoOrderrdquo or Big-Ondash We can say that the first algorithm is O(n)ndash We can say that the second algorithm is O(n2)
bull For any algorithm that has a function g(n) of the parameter n that describes its length of time to execute we can say the algorithm is O(g(n))
bull We only include the fastest growing term and ignore any multiplying by or adding of constants
9
Seven Growth Functions
bull Seven functions g(n) that occur frequently in the analysis of algorithms (in order of increasing rate of growth relative to n)ndash Constant 1ndash Logarithmic log nndash Linear nndash Log Linear n log nndash Quadratic n2
ndash Cubic n3
ndash Exponential 2n
10
Growth Rates Comparedn=1 n=2 n=4 n=8 n=16 n=32
1 1 1 1 1 1 1
logn 0 1 2 3 4 5
n 1 2 4 8 16 32
nlogn 0 2 8 24 64 160
n2 1 4 16 64 256 1024
n3 1 8 64 512 4096 32768
2n 2 4 16 256 65536 4294967296
Graphical Comparison of Complexity Classes
11
1 2 3 4 5 6 7 8 9 10 11 120
500
1000
1500
2000
2500
3000
3500
4000
4500
xlog xx log xxxxxx2^x
Complexity Analysis
bull Asymptotic Complexity
bull Big-O (asymptotic) Notation
bull Big-O Computation Rules
bull Proving Big-O Complexity
bull How to determine complexity of code structures
Asymptotic Complexity bull Finding the exact complexity f(n) = number of basic
operations of an algorithm is difficultbull We approximate f(n) by a function g(n) in a way that
does not substantially change the magnitude of f(n) --the function g(n) is sufficiently close to f(n) for large values of the input size n
bull This approximate measure of efficiency is called asymptotic complexity
bull Thus the asymptotic complexity measure does not give the exact number of operations of an algorithm but it shows how that number grows with the size of the input
bull This gives us a measure that will work for different operating systems compilers and CPUs
Big-O (asymptotic) Notationbull The most commonly used notation for specifying asymptotic
complexity is the big-O notationbull The Big-O notation O(g(n)) is used to give an upper bound (worst-
case) on a positive runtime function f(n) where n is the input size
Definition of Big-Obull Consider a function f(n) that is non-negative n 0 We say that
ldquof(n) is Big-O of g(n)rdquo ie f(n) = O(g(n)) if n0 0 and a constant c gt 0 such that f(n) cg(n) n n0
Big-O (asymptotic) NotationImplication of the definitionbull For all sufficiently large n c g(n) is an upper bound of
f(n)Note By the definition of Big-O f(n) = 3n + 4 is O(n)
it is also O(n2) it is also O(n3) it is also O(nn)
bull However when Big-O notation is used the function g in the relationship f(n) is O(g(n)) is CHOOSEN TO BE AS SMALL AS POSSIBLE ndash We call such a function g a tight asymptotic bound of f(n)
Big-O (asymptotic) Notation
Some Big-O complexity classes, in order of magnitude from smallest to highest:
O(1)                                  Constant
O(log(n))                             Logarithmic
O(n)                                  Linear
O(n log(n))                           n log n
O(n^x), e.g. O(n²), O(n³), etc.       Polynomial
O(aⁿ), e.g. O(1.6ⁿ), O(2ⁿ), etc.      Exponential
O(n!)                                 Factorial
O(nⁿ)
Examples of Algorithms and their big-O complexity

Big-O Notation | Examples of Algorithms
O(1)           | Push, Pop, Enqueue (if there is a tail reference), Dequeue, Accessing an array element
O(log(n))      | Binary search
O(n)           | Linear search
O(n log(n))    | Heap sort, Quick sort (average), Merge sort
O(n²)          | Selection sort, Insertion sort, Bubble sort
O(n³)          | Matrix multiplication
O(2ⁿ)          | Towers of Hanoi
Warnings about O-Notation
• Big-O notation cannot compare algorithms in the same complexity class.
• Big-O notation only gives sensible comparisons of algorithms in different complexity classes when n is large.
• Consider two algorithms for the same task:
  Linear: f(n) = 1000n
  Quadratic: f(n) = n²/1000
  The quadratic one is faster for n < 1,000,000.
Rules for using big-O
• For large values of input n, the constants and terms with lower degree of n are ignored.

1. Multiplicative Constants Rule: Ignoring constant factors.
   O(c·f(n)) = O(f(n)), where c is a constant.
   Example: O(20·n³) = O(n³)

2. Addition Rule: Ignoring smaller terms.
   If O(f(n)) < O(h(n)), then O(f(n) + h(n)) = O(h(n)).
   Examples:
   O(n² log n + n³) = O(n³)
   O(2000n³ + 2n + n/800 + 10n + 27n log n + 5) = O(n³)

3. Multiplication Rule: O(f(n) · h(n)) = O(f(n)) · O(h(n))
   Example: O((n³ + 2n² + 3n log n + 7)(8n² + 5n + 2)) = O(n⁵)
Proving Big-O Complexity
To prove that f(n) is O(g(n)), we find any pair of values n₀ and c that satisfy f(n) ≤ c·g(n) for n ≥ n₀.
Note: The pair (n₀, c) is not unique. If such a pair exists, then there is an infinite number of such pairs.
Example: Prove that f(n) = 3n² + 5 is O(n²).
We try to find some values of n and c by solving the following inequality:
3n² + 5 ≤ cn²,  or equivalently  3 + 5/n² ≤ c
(By putting different values for n, we get corresponding values for c.)

n₀ | 1 | 2    | 3    | 4      | …
c  | 8 | 4.25 | 3.55 | 3.3125 | → 3
Proving Big-O Complexity
Example: Prove that f(n) = 3n² + 4n log n + 10 is O(n²) by finding appropriate values for c and n₀.
We try to find some values of n and c by solving the following inequality:
3n² + 4n log n + 10 ≤ cn²,  or equivalently  3 + 4 log n / n + 10/n² ≤ c
(We used log of base 2, but another base can be used as well.)

n₀ | 1  | 2   | 3    | 4    | …
c  | 13 | 7.5 | 6.22 | 5.62 | → 3
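The two c-value tables above can be reproduced by evaluating the rearranged inequalities directly. A small sketch (the class and method names are illustrative):

```java
public class BigOWitness {
    // c(n) = 3 + 5/n^2, from f(n) = 3n^2 + 5 vs. g(n) = n^2
    static double c1(double n) {
        return 3 + 5 / (n * n);
    }

    // c(n) = 3 + 4*log2(n)/n + 10/n^2, from f(n) = 3n^2 + 4n log n + 10
    static double c2(double n) {
        double log2n = Math.log(n) / Math.log(2);
        return 3 + 4 * log2n / n + 10 / (n * n);
    }

    public static void main(String[] args) {
        System.out.printf("n=1: c1=%.4f c2=%.4f%n", c1(1), c2(1)); // 8.0000, 13.0000
        System.out.printf("n=4: c1=%.4f c2=%.4f%n", c1(4), c2(4)); // 3.3125, 5.6250
    }
}
```

As n grows, both expressions approach 3, which is why the tables end with "→ 3": any c slightly above 3 works for a sufficiently large n₀.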
How to determine complexity of code structures
Loops: for, while, and do-while
Complexity is determined by the number of iterations in the loop times the complexity of the body of the loop.

Examples:

    for (int i = 0; i < n; i++)
        sum = sum - i;                 // O(n)

    for (int i = 0; i < n * n; i++)
        sum = sum + i;                 // O(n²)

    i = 1;
    while (i < n) {
        sum = sum + i;
        i = i * 2;
    }                                  // O(log n)
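These iteration counts can be verified empirically. A minimal sketch that instruments each of the three loops above (the value n = 1024 is an arbitrary choice):

```java
public class LoopCounts {
    static long linear(int n) {           // for (i = 0; i < n; i++)
        long count = 0;
        for (int i = 0; i < n; i++) count++;
        return count;                     // n iterations -> O(n)
    }

    static long quadratic(int n) {        // for (i = 0; i < n*n; i++)
        long count = 0;
        for (int i = 0; i < n * n; i++) count++;
        return count;                     // n^2 iterations -> O(n^2)
    }

    static long doubling(int n) {         // while (i < n) i = i*2
        long count = 0;
        for (int i = 1; i < n; i *= 2) count++;
        return count;                     // ~log2(n) iterations -> O(log n)
    }

    public static void main(String[] args) {
        int n = 1024;
        System.out.println(linear(n));    // 1024
        System.out.println(quadratic(n)); // 1048576
        System.out.println(doubling(n));  // 10, i.e. log2(1024)
    }
}
```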
How to determine complexity of code structures
Nested Loops: Complexity of inner loop × complexity of outer loop.

Examples:

    sum = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            sum += i * j;              // O(n²)

    i = 1;
    while (i <= n) {
        j = 1;
        while (j <= n) {
            // statements of constant complexity
            j = j * 2;
        }
        i = i + 1;
    }                                  // O(n log n)
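The O(n log n) count for the doubling inner loop can be checked by counting body executions: the outer loop runs n times, and each inner pass runs floor(log₂ n) + 1 times. A small sketch (class name is illustrative):

```java
public class NestedLoopCount {
    // Counts body executions of the while-loop pair above:
    // outer runs n times; inner doubles j, so floor(log2 n) + 1 passes each.
    static long count(int n) {
        long total = 0;
        int i = 1;
        while (i <= n) {
            int j = 1;
            while (j <= n) {
                total++;
                j = j * 2;
            }
            i = i + 1;
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(count(16));  // 16 * 5 = 80
        System.out.println(count(8));   // 8 * 4 = 32
    }
}
```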
How to determine complexity of code structures
Sequence of statements: Use the Addition rule.
O(s1; s2; s3; … sk) = O(s1) + O(s2) + O(s3) + … + O(sk) = O(max(s1, s2, s3, …, sk))

Example:

    for (int j = 0; j < n * n; j++)
        sum = sum + j;
    for (int k = 0; k < n; k++)
        sum = sum - k;
    System.out.print("sum is now " + sum);

Complexity is O(n²) + O(n) + O(1) = O(n²)
How to determine complexity of code structures
Switch: Take the complexity of the most expensive case.

    char key;
    int[] X = new int[5];
    int[][] Y = new int[10][10];
    ...
    switch (key) {
        case 'a':
            for (int i = 0; i < X.length; i++)       // O(n)
                sum += X[i];
            break;
        case 'b':
            for (int i = 0; i < Y.length; i++)       // O(n²)
                for (int j = 0; j < Y[0].length; j++)
                    sum += Y[i][j];
            break;
    } // End of switch block

Overall Complexity: O(n²)
How to determine complexity of code structures
If Statement: Take the complexity of the most expensive case.

    char key;
    int[][] A = new int[5][5];
    int[][] B = new int[5][5];
    int[][] C = new int[5][5];
    ...
    if (key == '+') {
        for (int i = 0; i < n; i++)                  // O(n²)
            for (int j = 0; j < n; j++)
                C[i][j] = A[i][j] + B[i][j];
    } // End of if block
    else if (key == 'x')
        C = matrixMult(A, B);                        // O(n³)
    else
        System.out.println("Error: Enter + or x");   // O(1)

Overall complexity: O(n³)
How to determine complexity of code structures
• Sometimes if-else statements must carefully be checked:
  O(if-else) = O(Condition) + Max[O(if), O(else)]

    int[] integers = new int[10];
    ...
    if (hasPrimes(integers) == true)    // Condition: O(n)
        integers[0] = 20;               // O(1)
    else
        integers[0] = -20;              // O(1)

    public boolean hasPrimes(int[] arr) {
        for (int i = 0; i < arr.length; i++) {
            // ... (loop body omitted in the original)
        }
        return false;                   // placeholder so the fragment compiles
    } // End of hasPrimes()

O(if-else) = O(Condition) = O(n)
How to determine complexity of code structures
• Note: Sometimes a loop may cause the if-else rule not to be applicable. Consider the following loop:

    while (n > 0) {
        if (n % 2 == 0) {
            System.out.println(n);
            n = n / 2;
        } else {
            System.out.println(n);
            System.out.println(n);
            n = n - 1;
        }
    }

The else-branch has more basic operations; therefore one may conclude that the loop is O(n). However, the if-branch dominates. For example, if n is 60, then the sequence of n is: 60, 30, 15, 14, 7, 6, 3, 2, 1, and 0. Hence the loop is logarithmic and its complexity is O(log n).
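The sequence for n = 60 can be reproduced by replaying the loop and recording each value n takes (printing omitted; class name is illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class HalvingLoop {
    // Replays the loop above and records every value n takes,
    // starting from the initial value and ending at 0.
    static List<Integer> trace(int n) {
        List<Integer> values = new ArrayList<>();
        values.add(n);
        while (n > 0) {
            if (n % 2 == 0) n = n / 2;
            else n = n - 1;
            values.add(n);
        }
        return values;
    }

    public static void main(String[] args) {
        System.out.println(trace(60));
        // [60, 30, 15, 14, 7, 6, 3, 2, 1, 0]: 9 iterations, roughly 2*log2(60)
    }
}
```

Each even step halves n and each odd step merely makes it even, so at most two iterations are spent per halving, giving the O(log n) bound.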
Asymptotic Complexity
• Running time of an algorithm as a function of input size n for large n.
• Expressed using only the highest-order term in the expression for the exact running time.
  – Instead of exact running time, say Θ(n²).
• Describes behavior of function in the limit.
• Written using Asymptotic Notation.

Asymptotic Notation
• Θ, O, Ω
• Defined for functions over the natural numbers.
  – Ex: f(n) = Θ(n²)
  – Describes how f(n) grows in comparison to n².
• Define a set of functions; in practice used to compare two function sizes.
• The notations describe different rate-of-growth relations between the defining function and the defined set of functions.
Θ-notation

For function g(n), we define Θ(g(n)), big-Theta of n, as the set:
Θ(g(n)) = { f(n) : there exist positive constants c₁, c₂, and n₀ such that for all n ≥ n₀ we have 0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) }

g(n) is an asymptotically tight bound for f(n).
Intuitively: Set of all functions that have the same rate of growth as g(n).
O-notation

For function g(n), we define O(g(n)), big-O of n, as the set:
O(g(n)) = { f(n) : there exist positive constants c and n₀ such that for all n ≥ n₀ we have 0 ≤ f(n) ≤ c·g(n) }

g(n) is an asymptotic upper bound for f(n).
Intuitively: Set of all functions whose rate of growth is the same as or lower than that of g(n).
f(n) = Θ(g(n)) ⇒ f(n) = O(g(n)), i.e. Θ(g(n)) ⊆ O(g(n)).
Ω-notation

For function g(n), we define Ω(g(n)), big-Omega of n, as the set:
Ω(g(n)) = { f(n) : there exist positive constants c and n₀ such that for all n ≥ n₀ we have 0 ≤ c·g(n) ≤ f(n) }

g(n) is an asymptotic lower bound for f(n).
Intuitively: Set of all functions whose rate of growth is the same as or higher than that of g(n).
f(n) = Θ(g(n)) ⇒ f(n) = Ω(g(n)), i.e. Θ(g(n)) ⊆ Ω(g(n)).
Relations Between Θ, O, Ω

Theorem: For any two functions g(n) and f(n), f(n) = Θ(g(n)) iff f(n) = O(g(n)) and f(n) = Ω(g(n)).
• I.e., Θ(g(n)) = O(g(n)) ∩ Ω(g(n))
• In practice, asymptotically tight bounds are obtained from asymptotic upper and lower bounds.
Logarithms

x = log_b a is the exponent for a = b^x.
Natural log: ln a = log_e a
Binary log: lg a = log_2 a
lg² a = (lg a)²
lg lg a = lg (lg a)

Useful identities:
a = b^(log_b a)
log_c (ab) = log_c a + log_c b
log_b aⁿ = n · log_b a
log_b a = log_c a / log_c b      (change of base)
log_b (1/a) = −log_b a
log_b a = 1 / log_a b
a^(log_b c) = c^(log_b a)
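Two of these identities can be spot-checked numerically; a minimal sketch (the sample values a = 8, b = 2, c = 10 are arbitrary choices):

```java
public class LogIdentities {
    // log base b of a, via the change-of-base identity with natural logs
    static double logBase(double b, double a) {
        return Math.log(a) / Math.log(b);
    }

    public static void main(String[] args) {
        double a = 8, b = 2, c = 10;

        // change of base: log_b a = log_c a / log_c b
        double lhs1 = logBase(b, a);
        double rhs1 = logBase(c, a) / logBase(c, b);
        System.out.println(Math.abs(lhs1 - rhs1) < 1e-9);  // true

        // a^(log_b c) = c^(log_b a)
        double lhs2 = Math.pow(a, logBase(b, c));
        double rhs2 = Math.pow(c, logBase(b, a));
        System.out.println(Math.abs(lhs2 - rhs2) < 1e-6);  // true
    }
}
```

The change-of-base identity is what justifies the remark in the Big-O proof above that "another base can be used as well": switching bases only multiplies the logarithm by a constant, which Big-O ignores.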
Review on Summations
• Constant Series: For integers a and b, a ≤ b:
  Σ_{i=a}^{b} 1 = b − a + 1
• Linear Series (Arithmetic Series): For n ≥ 0:
  Σ_{i=1}^{n} i = 1 + 2 + … + n = n(n + 1)/2
• Quadratic Series: For n ≥ 0:
  Σ_{i=1}^{n} i² = 1 + 4 + … + n² = n(n + 1)(2n + 1)/6
Review on Summations
• Cubic Series: For n ≥ 0:
  Σ_{i=1}^{n} i³ = 1 + 8 + … + n³ = n²(n + 1)²/4
• Geometric Series: For real x ≠ 1:
  Σ_{k=0}^{n} x^k = 1 + x + x² + … + xⁿ = (x^{n+1} − 1)/(x − 1)
  For |x| < 1:
  Σ_{k=0}^{∞} x^k = 1/(1 − x)
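The closed forms above can be checked against brute-force summation; a small sketch (n = 100 is an arbitrary test value):

```java
public class Summations {
    // Brute-force partial sums, to compare against the closed forms.
    static long linear(int n) {
        long s = 0;
        for (int i = 1; i <= n; i++) s += i;
        return s;
    }
    static long quadratic(int n) {
        long s = 0;
        for (int i = 1; i <= n; i++) s += (long) i * i;
        return s;
    }
    static long cubic(int n) {
        long s = 0;
        for (int i = 1; i <= n; i++) s += (long) i * i * i;
        return s;
    }

    public static void main(String[] args) {
        int n = 100;
        System.out.println(linear(n)    == (long) n * (n + 1) / 2);               // true
        System.out.println(quadratic(n) == (long) n * (n + 1) * (2 * n + 1) / 6); // true
        System.out.println(cubic(n)     == (long) n * n * (n + 1) * (n + 1) / 4); // true
    }
}
```

These sums are what turn loop counts into complexity classes: a loop body executed 1 + 2 + … + n times runs n(n + 1)/2 = O(n²) operations.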
Hard problems
• We can identify the efficiency of an algorithm from its speed (how long the algorithm takes to produce the result).
• For some problems, no efficient solution is known.
• These problems are called NP-complete problems.
Big-O Notation
• We use a shorthand mathematical notation to describe the efficiency of an algorithm relative to any parameter n as its "Order", or Big-O.
  – We can say that the first algorithm is O(n).
  – We can say that the second algorithm is O(n²).
• For any algorithm that has a function g(n) of the parameter n that describes its length of time to execute, we can say the algorithm is O(g(n)).
• We only include the fastest-growing term, and ignore any multiplied or added constants.
Seven Growth Functions
• Seven functions g(n) that occur frequently in the analysis of algorithms (in order of increasing rate of growth relative to n):
  – Constant: 1
  – Logarithmic: log n
  – Linear: n
  – Log Linear: n log n
  – Quadratic: n²
  – Cubic: n³
  – Exponential: 2ⁿ
Growth Rates Compared

           n=1   n=2   n=4   n=8   n=16    n=32
1            1     1     1     1      1       1
log n        0     1     2     3      4       5
n            1     2     4     8     16      32
n log n      0     2     8    24     64     160
n²           1     4    16    64    256    1024
n³           1     8    64   512   4096   32768
2ⁿ           2     4    16   256  65536   4294967296
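The table rows can be regenerated programmatically; a minimal sketch, assuming n is restricted to the powers of two shown above (class and method names are illustrative):

```java
public class GrowthRates {
    static final long[] NS = {1, 2, 4, 8, 16, 32};

    // Recomputes one cell of the table; n is assumed to be a power of two,
    // so rounding the floating-point log2 gives the exact integer value.
    static long value(String f, long n) {
        long log2n = Math.round(Math.log(n) / Math.log(2));
        switch (f) {
            case "1":       return 1;
            case "log n":   return log2n;
            case "n":       return n;
            case "n log n": return n * log2n;
            case "n^2":     return n * n;
            case "n^3":     return n * n * n;
            case "2^n":     return 1L << n;   // exact in a long for n <= 62
            default: throw new IllegalArgumentException(f);
        }
    }

    public static void main(String[] args) {
        String[] fs = {"1", "log n", "n", "n log n", "n^2", "n^3", "2^n"};
        for (String f : fs) {
            StringBuilder row = new StringBuilder(String.format("%-8s", f));
            for (long n : NS) row.append(' ').append(value(f, n));
            System.out.println(row);
        }
    }
}
```

Note how the 2ⁿ row explodes: doubling n from 16 to 32 squares the value, which is why exponential algorithms are impractical even for modest inputs.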
Graphical Comparison of Complexity Classes
[Figure: growth-rate curves for log x, x, x log x, x², x³, and 2^x over x = 1…12; y-axis 0–4500]
Complexity Analysis
bull Asymptotic Complexity
bull Big-O (asymptotic) Notation
bull Big-O Computation Rules
bull Proving Big-O Complexity
bull How to determine complexity of code structures
Asymptotic Complexity bull Finding the exact complexity f(n) = number of basic
operations of an algorithm is difficultbull We approximate f(n) by a function g(n) in a way that
does not substantially change the magnitude of f(n) --the function g(n) is sufficiently close to f(n) for large values of the input size n
bull This approximate measure of efficiency is called asymptotic complexity
bull Thus the asymptotic complexity measure does not give the exact number of operations of an algorithm but it shows how that number grows with the size of the input
bull This gives us a measure that will work for different operating systems compilers and CPUs
Big-O (asymptotic) Notationbull The most commonly used notation for specifying asymptotic
complexity is the big-O notationbull The Big-O notation O(g(n)) is used to give an upper bound (worst-
case) on a positive runtime function f(n) where n is the input size
Definition of Big-Obull Consider a function f(n) that is non-negative n 0 We say that
ldquof(n) is Big-O of g(n)rdquo ie f(n) = O(g(n)) if n0 0 and a constant c gt 0 such that f(n) cg(n) n n0
Big-O (asymptotic) NotationImplication of the definitionbull For all sufficiently large n c g(n) is an upper bound of
f(n)Note By the definition of Big-O f(n) = 3n + 4 is O(n)
it is also O(n2) it is also O(n3) it is also O(nn)
bull However when Big-O notation is used the function g in the relationship f(n) is O(g(n)) is CHOOSEN TO BE AS SMALL AS POSSIBLE ndash We call such a function g a tight asymptotic bound of f(n)
Big-O (asymptotic) NotationSome Big-O complexity classes in order of magnitude from smallest to highest
O(1) ConstantO(log(n)) LogarithmicO(n) LinearO(n log(n)) n log nO(nx) eg O(n2) O(n3) etc Polynomial
O(an) eg O(16n) O(2n) etc Exponential
O(n) FactorialO(nn)
Examples of Algorithms and their big-O complexity
Big-O Notation Examples of Algorithms
O(1) Push Pop Enqueue (if there is a tail reference) Dequeue Accessing an array element
O(log(n)) Binary search
O(n) Linear search
O(n log(n)) Heap sort Quick sort (average) Merge sort
O(n2) Selection sort Insertion sort Bubble sort
O(n3) Matrix multiplication O(2n) Towers of Hanoi
Warnings about O-Notation bull Big-O notation cannot compare
algorithms in the same complexity classbull Big-O notation only gives sensible
comparisons of algorithms in different complexity classes when n is large
bull Consider two algorithms for same task Linear f(n) = 1000 n Quadratic f(n) = n21000The quadratic one is faster for n lt 1000000
Rules for using big-O bull For large values of input n the constants and terms with lower
degree of n are ignored
1 Multiplicative Constants Rule Ignoring constant factors O(c f(n)) = O(f(n)) where c is a constant Example
O(20 n3) = O(n3)
2 Addition Rule Ignoring smaller termsIf O(f(n)) lt O(h(n)) then O(f(n) + h(n)) = O(h(n))Example
O(n2 log n + n3) = O(n3)O(2000 n3 + 2n + n800 + 10n + 27n log n + 5) = O(n )
3 Multiplication Rule O(f(n) h(n)) = O(f(n)) O(h(n))Example
O((n3 + 2n 2 + 3n log n + 7)(8n 2 + 5n + 2)) = O(n 5)
Proving Big-O Complexity
To prove that f(n) is O(g(n)) we find any pair of values n0 and c that satisfy f(n) le c g(n) for n n0
Note The pair (n0 c) is not unique If such a pair exists then there is an infinite number of such pairs
Example Prove that f(n) = 3n2 + 5 is O(n2)We try to find some values of n and c by solving the following inequality
3n2 + 5 cn2 OR 3 + 5n2 c
(By putting different values for n we get corresponding values for c)
n0 1 2 3 4
c 8 425 355 33125 3
Proving Big-O ComplexityExample
Prove that f(n) = 3n2 + 4n log n + 10 is O(n2) by finding appropriate values for c and n0
We try to find some values of n and c by solving the following inequality 3n2 + 4n log n + 10 cn2
OR 3 + 4 log n n+ 10n2 c
( We used Log of base 2 but another base can be used as well)n0 1 2 3 4
c 13 75 622 562 3
How to determine complexity of code structures Loops for while and do-while
Complexity is determined by the number of iterations in the loop times the complexity of the body of the loop
Examples
for (int i = 0 i lt n i++) sum = sum - i
for (int i = 0 i lt n n i++) sum = sum + i
i=1while (i lt n) sum = sum + i i = i2
O(n)
O(n2)
O(log n)
How to determine complexity of code structures Nested Loops Complexity of inner loop complexity of outer loopExamples
sum = 0for(int i = 0 i lt n i++) for(int j = 0 j lt n j++) sum += i j
i = 1while(i lt= n) j = 1 while(j lt= n) statements of constant complexity j = j2 i = i+1
O(n2)
O(n log n)
How to determine complexity of code structures Sequence of statements Use Addition rule
O(s1 s2 s3 hellip sk) = O(s1) + O(s2) + O(s3) + hellip + O(sk) = O(max(s1 s2 s3 sk))
Example
Complexity is O(n2) + O(n) +O(1) = O(n2)
for (int j = 0 j lt n n j++) sum = sum + jfor (int k = 0 k lt n k++) sum = sum - lSystemoutprint(sum is now rdquo + sum)
char keyint[] X = new int[5] int[][] Y = new int[10][10] switch(key) case a for(int i = 0 i lt Xlength i++) sum += X[i] break case b for(int i = 0 i lt Ylength j++) for(int j = 0 j lt Y[0]length j++) sum += Y[i][j] break End of switch block
How to determine complexity of code structures
Switch Take the complexity of the most expensive case
o(n)
o(n2)
Overall Complexity o(n2)
char keyint[][] A = new int[5][5]int[][] B = new int[5][5]int[][] C = new int[5][5]if(key == +) for(int i = 0 i lt n i++) for(int j = 0 j lt n j++) C[i][j] = A[i][j] + B[i][j] End of if block else if(key == x) C = matrixMult(A B)
else Systemoutprintln(Error Enter + or x)
If Statement Take the complexity of the most expensive case
O(n2)
O(n3)
O(1)
How to determine complexity of code structures
Overall complexityO(n3)
int[] integers = new int[10]if(hasPrimes(integers) == true) integers[0] = 20else
integers[0] = -20
public boolean hasPrimes(int[] arr) for(int i = 0 i lt arrlength i++)
End of hasPrimes()
How to determine complexity of code structures
bull Sometimes if-else statements must carefully be checkedO(if-else) = O(Condition)+ Max[O(if) O(else)]
O(1)
O(1)
O(if-else) = O(Condition) = O(n)
O(n)
How to determine complexity of code structuresbull Note Sometimes a loop may cause the if-else rule not to be
applicable Consider the following loop
The else-branch has more basic operations therefore one may conclude that the loop is O(n) However the if-branch dominates For example if n is 60 then the sequence of n is 60 30 15 14 7 6 3 2 1 and 0 Hence the loop is logarithmic and its complexity is O(log n)
while (n gt 0) if (n 2 = = 0) Systemoutprintln(n) n = n 2 else Systemoutprintln(n) Systemoutprintln(n)
n = n ndash 1
Comp 122
Asymptotic Complexity
bull Running time of an algorithm as a function of input size n for large n
bull Expressed using only the highest-order term in the expression for the exact running timendash Instead of exact running time say Q(n2)
bull Describes behavior of function in the limitbull Written using Asymptotic Notation
Asymptotic Notationbull Q O Wbull Defined for functions over the natural numbers
ndash Ex f(n) = Q(n2)ndash Describes how f(n) grows in comparison to n2
bull Define a set of functions in practice used to compare two function sizes
bull The notations describe different rate-of-growth relations between the defining function and the defined set of functions
Q-notation
Q(g(n)) = f(n) positive constants c1 c2 and n0 such that n n0
we have 0 c1g(n) f(n) c2g(n)
For function g(n) we define Q(g(n)) big-Theta of n as the set
g(n) is an asymptotically tight bound for f(n)
Intuitively Set of all functions thathave the same rate of growth as g(n)
O-notation
O(g(n)) = f(n) positive constants c and n0 such that n n0
we have 0 f(n) cg(n)
For function g(n) we define O(g(n)) big-O of n as the set
g(n) is an asymptotic upper bound for f(n)
Intuitively Set of all functions whose rate of growth is the same as or lower than that of g(n)
f(n) = Q(g(n)) f(n) = O(g(n))Q(g(n)) O(g(n))
W -notation
g(n) is an asymptotic lower bound for f(n)
Intuitively Set of all functions whose rate of growth is the same as or higher than that of g(n)
f(n) = Q(g(n)) f(n) = W(g(n))Q(g(n)) W(g(n))
W(g(n)) = f(n) positive constants c and n0 such that n n0
we have 0 cg(n) f(n)
For function g(n) we define W(g(n)) big-Omega of n as the set
Relations Between Q O W
Relations Between Q W O
bull Ie Q(g(n)) = O(g(n)) Ccedil W(g(n))
bull In practice asymptotically tight bounds are obtained from asymptotic upper and lower bounds
Theorem For any two functions g(n) and f(n) f(n) = Q(g(n)) iff
f(n) = O(g(n)) and f(n) = W(g(n))
Logarithms
x = logba is the exponent for a = bx
Natural log ln a = logeaBinary log lg a = log2a
lg2a = (lg a)2
lg lg a = lg (lg a) ac
ab
bb
c
cb
bn
b
ccc
bb ca
ba
aabaa
ana
baab
loglog
log1log
log)1(loglogloglog
loglog
loglog)(log
Review on Summationsbull Constant Series For integers a and b a b
bull Linear Series (Arithmetic Series) For n 0
bull Quadratic Series For n 0
b
ai
ab 11
2)1(21
1
nnnin
i
n
i
nnnni1
2222
6)12)(1(21
Review on Summationsbull Cubic Series For n 0
bull Geometric Series For real x 1
For |x| lt 1
n
i
nnni1
223333
4)1(21
n
k
nnk
xxxxxx
0
12
111
0 11
k
k
xx
8
Big-O Notationbull We use a shorthand mathematical notation to
describe the efficiency of an algorithm relative to any parameter n as its ldquoOrderrdquo or Big-Ondash We can say that the first algorithm is O(n)ndash We can say that the second algorithm is O(n2)
bull For any algorithm that has a function g(n) of the parameter n that describes its length of time to execute we can say the algorithm is O(g(n))
bull We only include the fastest growing term and ignore any multiplying by or adding of constants
9
Seven Growth Functions
bull Seven functions g(n) that occur frequently in the analysis of algorithms (in order of increasing rate of growth relative to n)ndash Constant 1ndash Logarithmic log nndash Linear nndash Log Linear n log nndash Quadratic n2
ndash Cubic n3
ndash Exponential 2n
10
Growth Rates Comparedn=1 n=2 n=4 n=8 n=16 n=32
1 1 1 1 1 1 1
logn 0 1 2 3 4 5
n 1 2 4 8 16 32
nlogn 0 2 8 24 64 160
n2 1 4 16 64 256 1024
n3 1 8 64 512 4096 32768
2n 2 4 16 256 65536 4294967296
Graphical Comparison of Complexity Classes
11
1 2 3 4 5 6 7 8 9 10 11 120
500
1000
1500
2000
2500
3000
3500
4000
4500
xlog xx log xxxxxx2^x
Complexity Analysis
bull Asymptotic Complexity
bull Big-O (asymptotic) Notation
bull Big-O Computation Rules
bull Proving Big-O Complexity
bull How to determine complexity of code structures
Asymptotic Complexity bull Finding the exact complexity f(n) = number of basic
operations of an algorithm is difficultbull We approximate f(n) by a function g(n) in a way that
does not substantially change the magnitude of f(n) --the function g(n) is sufficiently close to f(n) for large values of the input size n
bull This approximate measure of efficiency is called asymptotic complexity
bull Thus the asymptotic complexity measure does not give the exact number of operations of an algorithm but it shows how that number grows with the size of the input
bull This gives us a measure that will work for different operating systems compilers and CPUs
Big-O (asymptotic) Notationbull The most commonly used notation for specifying asymptotic
complexity is the big-O notationbull The Big-O notation O(g(n)) is used to give an upper bound (worst-
case) on a positive runtime function f(n) where n is the input size
Definition of Big-Obull Consider a function f(n) that is non-negative n 0 We say that
ldquof(n) is Big-O of g(n)rdquo ie f(n) = O(g(n)) if n0 0 and a constant c gt 0 such that f(n) cg(n) n n0
Big-O (asymptotic) NotationImplication of the definitionbull For all sufficiently large n c g(n) is an upper bound of
f(n)Note By the definition of Big-O f(n) = 3n + 4 is O(n)
it is also O(n2) it is also O(n3) it is also O(nn)
bull However when Big-O notation is used the function g in the relationship f(n) is O(g(n)) is CHOOSEN TO BE AS SMALL AS POSSIBLE ndash We call such a function g a tight asymptotic bound of f(n)
Big-O (asymptotic) NotationSome Big-O complexity classes in order of magnitude from smallest to highest
O(1) ConstantO(log(n)) LogarithmicO(n) LinearO(n log(n)) n log nO(nx) eg O(n2) O(n3) etc Polynomial
O(an) eg O(16n) O(2n) etc Exponential
O(n) FactorialO(nn)
Examples of Algorithms and their big-O complexity
Big-O Notation Examples of Algorithms
O(1) Push Pop Enqueue (if there is a tail reference) Dequeue Accessing an array element
O(log(n)) Binary search
O(n) Linear search
O(n log(n)) Heap sort Quick sort (average) Merge sort
O(n2) Selection sort Insertion sort Bubble sort
O(n3) Matrix multiplication O(2n) Towers of Hanoi
Warnings about O-Notation bull Big-O notation cannot compare
algorithms in the same complexity classbull Big-O notation only gives sensible
comparisons of algorithms in different complexity classes when n is large
bull Consider two algorithms for same task Linear f(n) = 1000 n Quadratic f(n) = n21000The quadratic one is faster for n lt 1000000
Rules for using big-O bull For large values of input n the constants and terms with lower
degree of n are ignored
1 Multiplicative Constants Rule Ignoring constant factors O(c f(n)) = O(f(n)) where c is a constant Example
O(20 n3) = O(n3)
2 Addition Rule Ignoring smaller termsIf O(f(n)) lt O(h(n)) then O(f(n) + h(n)) = O(h(n))Example
O(n2 log n + n3) = O(n3)O(2000 n3 + 2n + n800 + 10n + 27n log n + 5) = O(n )
3 Multiplication Rule O(f(n) h(n)) = O(f(n)) O(h(n))Example
O((n3 + 2n 2 + 3n log n + 7)(8n 2 + 5n + 2)) = O(n 5)
Proving Big-O Complexity
To prove that f(n) is O(g(n)) we find any pair of values n0 and c that satisfy f(n) le c g(n) for n n0
Note The pair (n0 c) is not unique If such a pair exists then there is an infinite number of such pairs
Example Prove that f(n) = 3n2 + 5 is O(n2)We try to find some values of n and c by solving the following inequality
3n2 + 5 cn2 OR 3 + 5n2 c
(By putting different values for n we get corresponding values for c)
n0 1 2 3 4
c 8 425 355 33125 3
Proving Big-O ComplexityExample
Prove that f(n) = 3n2 + 4n log n + 10 is O(n2) by finding appropriate values for c and n0
We try to find some values of n and c by solving the following inequality 3n2 + 4n log n + 10 cn2
OR 3 + 4 log n n+ 10n2 c
( We used Log of base 2 but another base can be used as well)n0 1 2 3 4
c 13 75 622 562 3
How to determine complexity of code structures Loops for while and do-while
Complexity is determined by the number of iterations in the loop times the complexity of the body of the loop
Examples
for (int i = 0 i lt n i++) sum = sum - i
for (int i = 0 i lt n n i++) sum = sum + i
i=1while (i lt n) sum = sum + i i = i2
O(n)
O(n2)
O(log n)
How to determine complexity of code structures Nested Loops Complexity of inner loop complexity of outer loopExamples
sum = 0for(int i = 0 i lt n i++) for(int j = 0 j lt n j++) sum += i j
i = 1while(i lt= n) j = 1 while(j lt= n) statements of constant complexity j = j2 i = i+1
O(n2)
O(n log n)
How to determine complexity of code structures Sequence of statements Use Addition rule
O(s1 s2 s3 hellip sk) = O(s1) + O(s2) + O(s3) + hellip + O(sk) = O(max(s1 s2 s3 sk))
Example
Complexity is O(n2) + O(n) +O(1) = O(n2)
for (int j = 0 j lt n n j++) sum = sum + jfor (int k = 0 k lt n k++) sum = sum - lSystemoutprint(sum is now rdquo + sum)
char keyint[] X = new int[5] int[][] Y = new int[10][10] switch(key) case a for(int i = 0 i lt Xlength i++) sum += X[i] break case b for(int i = 0 i lt Ylength j++) for(int j = 0 j lt Y[0]length j++) sum += Y[i][j] break End of switch block
How to determine complexity of code structures
Switch Take the complexity of the most expensive case
o(n)
o(n2)
Overall Complexity o(n2)
char keyint[][] A = new int[5][5]int[][] B = new int[5][5]int[][] C = new int[5][5]if(key == +) for(int i = 0 i lt n i++) for(int j = 0 j lt n j++) C[i][j] = A[i][j] + B[i][j] End of if block else if(key == x) C = matrixMult(A B)
else Systemoutprintln(Error Enter + or x)
If Statement Take the complexity of the most expensive case
O(n2)
O(n3)
O(1)
How to determine complexity of code structures
Overall complexityO(n3)
int[] integers = new int[10]if(hasPrimes(integers) == true) integers[0] = 20else
integers[0] = -20
public boolean hasPrimes(int[] arr) for(int i = 0 i lt arrlength i++)
End of hasPrimes()
How to determine complexity of code structures
bull Sometimes if-else statements must carefully be checkedO(if-else) = O(Condition)+ Max[O(if) O(else)]
O(1)
O(1)
O(if-else) = O(Condition) = O(n)
O(n)
How to determine complexity of code structuresbull Note Sometimes a loop may cause the if-else rule not to be
applicable Consider the following loop
The else-branch has more basic operations therefore one may conclude that the loop is O(n) However the if-branch dominates For example if n is 60 then the sequence of n is 60 30 15 14 7 6 3 2 1 and 0 Hence the loop is logarithmic and its complexity is O(log n)
while (n gt 0) if (n 2 = = 0) Systemoutprintln(n) n = n 2 else Systemoutprintln(n) Systemoutprintln(n)
n = n ndash 1
Comp 122
Asymptotic Complexity
bull Running time of an algorithm as a function of input size n for large n
bull Expressed using only the highest-order term in the expression for the exact running timendash Instead of exact running time say Q(n2)
bull Describes behavior of function in the limitbull Written using Asymptotic Notation
Asymptotic Notationbull Q O Wbull Defined for functions over the natural numbers
ndash Ex f(n) = Q(n2)ndash Describes how f(n) grows in comparison to n2
bull Define a set of functions in practice used to compare two function sizes
bull The notations describe different rate-of-growth relations between the defining function and the defined set of functions
Q-notation
Q(g(n)) = f(n) positive constants c1 c2 and n0 such that n n0
we have 0 c1g(n) f(n) c2g(n)
For function g(n) we define Q(g(n)) big-Theta of n as the set
g(n) is an asymptotically tight bound for f(n)
Intuitively Set of all functions thathave the same rate of growth as g(n)
O-notation
O(g(n)) = f(n) positive constants c and n0 such that n n0
we have 0 f(n) cg(n)
For function g(n) we define O(g(n)) big-O of n as the set
g(n) is an asymptotic upper bound for f(n)
Intuitively Set of all functions whose rate of growth is the same as or lower than that of g(n)
f(n) = Q(g(n)) f(n) = O(g(n))Q(g(n)) O(g(n))
W -notation
g(n) is an asymptotic lower bound for f(n)
Intuitively Set of all functions whose rate of growth is the same as or higher than that of g(n)
f(n) = Q(g(n)) f(n) = W(g(n))Q(g(n)) W(g(n))
W(g(n)) = f(n) positive constants c and n0 such that n n0
we have 0 cg(n) f(n)
For function g(n) we define W(g(n)) big-Omega of n as the set
Relations Between Q O W
Relations Between Q W O
bull Ie Q(g(n)) = O(g(n)) Ccedil W(g(n))
bull In practice asymptotically tight bounds are obtained from asymptotic upper and lower bounds
Theorem For any two functions g(n) and f(n) f(n) = Q(g(n)) iff
f(n) = O(g(n)) and f(n) = W(g(n))
Logarithms
x = logba is the exponent for a = bx
Natural log ln a = logeaBinary log lg a = log2a
lg2a = (lg a)2
lg lg a = lg (lg a) ac
ab
bb
c
cb
bn
b
ccc
bb ca
ba
aabaa
ana
baab
loglog
log1log
log)1(loglogloglog
loglog
loglog)(log
Review on Summationsbull Constant Series For integers a and b a b
bull Linear Series (Arithmetic Series) For n 0
bull Quadratic Series For n 0
b
ai
ab 11
2)1(21
1
nnnin
i
n
i
nnnni1
2222
6)12)(1(21
Review on Summationsbull Cubic Series For n 0
bull Geometric Series For real x 1
For |x| lt 1
n
i
nnni1
223333
4)1(21
n
k
nnk
xxxxxx
0
12
111
0 11
k
k
xx
9
Seven Growth Functions
bull Seven functions g(n) that occur frequently in the analysis of algorithms (in order of increasing rate of growth relative to n)ndash Constant 1ndash Logarithmic log nndash Linear nndash Log Linear n log nndash Quadratic n2
ndash Cubic n3
ndash Exponential 2n
10
Growth Rates Comparedn=1 n=2 n=4 n=8 n=16 n=32
1 1 1 1 1 1 1
logn 0 1 2 3 4 5
n 1 2 4 8 16 32
nlogn 0 2 8 24 64 160
n2 1 4 16 64 256 1024
n3 1 8 64 512 4096 32768
2n 2 4 16 256 65536 4294967296
Graphical Comparison of Complexity Classes
11
1 2 3 4 5 6 7 8 9 10 11 120
500
1000
1500
2000
2500
3000
3500
4000
4500
xlog xx log xxxxxx2^x
Complexity Analysis
bull Asymptotic Complexity
bull Big-O (asymptotic) Notation
bull Big-O Computation Rules
bull Proving Big-O Complexity
bull How to determine complexity of code structures
Asymptotic Complexity bull Finding the exact complexity f(n) = number of basic
operations of an algorithm is difficultbull We approximate f(n) by a function g(n) in a way that
does not substantially change the magnitude of f(n) --the function g(n) is sufficiently close to f(n) for large values of the input size n
bull This approximate measure of efficiency is called asymptotic complexity
bull Thus the asymptotic complexity measure does not give the exact number of operations of an algorithm but it shows how that number grows with the size of the input
bull This gives us a measure that will work for different operating systems compilers and CPUs
Big-O (asymptotic) Notationbull The most commonly used notation for specifying asymptotic
complexity is the big-O notationbull The Big-O notation O(g(n)) is used to give an upper bound (worst-
case) on a positive runtime function f(n) where n is the input size
Definition of Big-Obull Consider a function f(n) that is non-negative n 0 We say that
ldquof(n) is Big-O of g(n)rdquo ie f(n) = O(g(n)) if n0 0 and a constant c gt 0 such that f(n) cg(n) n n0
Big-O (asymptotic) NotationImplication of the definitionbull For all sufficiently large n c g(n) is an upper bound of
f(n)Note By the definition of Big-O f(n) = 3n + 4 is O(n)
it is also O(n2) it is also O(n3) it is also O(nn)
bull However when Big-O notation is used the function g in the relationship f(n) is O(g(n)) is CHOOSEN TO BE AS SMALL AS POSSIBLE ndash We call such a function g a tight asymptotic bound of f(n)
Big-O (asymptotic) NotationSome Big-O complexity classes in order of magnitude from smallest to highest
O(1) ConstantO(log(n)) LogarithmicO(n) LinearO(n log(n)) n log nO(nx) eg O(n2) O(n3) etc Polynomial
O(an) eg O(16n) O(2n) etc Exponential
O(n) FactorialO(nn)
Examples of Algorithms and their big-O complexity
Big-O Notation Examples of Algorithms
O(1) Push Pop Enqueue (if there is a tail reference) Dequeue Accessing an array element
O(log(n)) Binary search
O(n) Linear search
O(n log(n)) Heap sort Quick sort (average) Merge sort
O(n2) Selection sort Insertion sort Bubble sort
O(n3) Matrix multiplication O(2n) Towers of Hanoi
Warnings about O-Notation bull Big-O notation cannot compare
algorithms in the same complexity classbull Big-O notation only gives sensible
comparisons of algorithms in different complexity classes when n is large
bull Consider two algorithms for same task Linear f(n) = 1000 n Quadratic f(n) = n21000The quadratic one is faster for n lt 1000000
Rules for using big-O
• For large values of the input n, the constants and the terms with lower degree of n are ignored.

1. Multiplicative Constants Rule: Ignoring constant factors.
   O(c·f(n)) = O(f(n)), where c is a constant.
   Example: O(20·n^3) = O(n^3)

2. Addition Rule: Ignoring smaller terms.
   If O(f(n)) < O(h(n)), then O(f(n) + h(n)) = O(h(n)).
   Examples:
   O(n^2 log n + n^3) = O(n^3)
   O(2000n^3 + 2n^2 + 800n + 10n + 27n log n + 5) = O(n^3)

3. Multiplication Rule: O(f(n) · h(n)) = O(f(n)) · O(h(n))
   Example: O((n^3 + 2n^2 + 3n log n + 7)(8n^2 + 5n + 2)) = O(n^5)
Proving Big-O Complexity
To prove that f(n) is O(g(n)), we find any pair of values n0 and c that satisfy f(n) ≤ c·g(n) for all n ≥ n0.
Note: The pair (n0, c) is not unique. If such a pair exists, then there is an infinite number of such pairs.

Example: Prove that f(n) = 3n^2 + 5 is O(n^2).
We try to find some values of n and c by solving the following inequality:
  3n^2 + 5 ≤ c·n^2, OR 3 + 5/n^2 ≤ c
(By putting different values for n, we get corresponding values for c.)

  n0:  1    2      3      4        ...
  c:   8    4.25   3.56   3.3125   ...  → 3
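The pair (n0, c) = (1, 8) found above can be verified mechanically. A small sketch (the class name is my own):

```java
// Checks f(n) = 3n^2 + 5 against the candidate bound c*n^2 with c = 8, n0 = 1.
public class BigOProof {
    static double f(double n)               { return 3 * n * n + 5; }
    static double bound(double c, double n) { return c * n * n; }

    public static void main(String[] args) {
        boolean holds = true;
        for (double n = 1; n <= 1_000; n++)   // n0 = 1, c = 8
            holds &= f(n) <= bound(8, n);
        System.out.println("3n^2 + 5 <= 8n^2 for 1 <= n <= 1000: " + holds);
    }
}
```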
Proving Big-O Complexity: Example
Prove that f(n) = 3n^2 + 4n log n + 10 is O(n^2) by finding appropriate values for c and n0.
We try to find some values of n and c by solving the following inequality:
  3n^2 + 4n log n + 10 ≤ c·n^2, OR 3 + 4 log n / n + 10/n^2 ≤ c
(We used log of base 2, but another base can be used as well.)

  n0:  1     2     3      4      ...
  c:   13    7.5   6.22   5.62   ...  → 3
How to determine complexity of code structures
Loops: for, while, and do-while
Complexity is determined by the number of iterations in the loop times the complexity of the body of the loop.
Examples:

for (int i = 0; i < n; i++)        // O(n)
    sum = sum - i;

for (int i = 0; i < n * n; i++)    // O(n^2)
    sum = sum + i;

i = 1;                             // O(log n)
while (i < n) {
    sum = sum + i;
    i = i * 2;
}
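One way to convince yourself of the three classifications above is to count iterations explicitly. A sketch (the class and method names are mine):

```java
// Counts iterations of the three loop shapes from the slide for a given n.
public class LoopCounts {
    static int linear(int n)      { int c = 0; for (int i = 0; i < n; i++) c++; return c; }
    static int quadratic(int n)   { int c = 0; for (int i = 0; i < n * n; i++) c++; return c; }
    static int logarithmic(int n) { int c = 0; for (int i = 1; i < n; i *= 2) c++; return c; }

    public static void main(String[] args) {
        int n = 16;
        // n, n^2, and ceil(log2 n) iterations respectively.
        System.out.println(linear(n) + ", " + quadratic(n) + ", " + logarithmic(n));  // prints "16, 256, 4"
    }
}
```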
How to determine complexity of code structures
Nested Loops: Complexity of inner loop × complexity of outer loop.
Examples:

sum = 0;                           // O(n^2)
for (int i = 0; i < n; i++)
    for (int j = 0; j < n; j++)
        sum += i * j;

i = 1;                             // O(n log n)
while (i <= n) {
    j = 1;
    while (j <= n) {
        // statements of constant complexity
        j = j * 2;
    }
    i = i + 1;
}
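The O(n log n) claim for the second nested loop can likewise be checked by counting. A sketch (the counting helper is mine, not from the slides):

```java
// Counts the inner-statement executions of a doubling inner loop
// nested inside a linear outer loop, as in the slide's second example.
public class NestedCount {
    static int count(int n) {
        int c = 0;
        int i = 1;
        while (i <= n) {                     // n outer iterations
            int j = 1;
            while (j <= n) { c++; j *= 2; }  // floor(log2 n) + 1 inner iterations
            i++;
        }
        return c;
    }

    public static void main(String[] args) {
        System.out.println(count(8));   // 8 * 4 = 32, i.e. n * (log2 n + 1)
    }
}
```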
How to determine complexity of code structures
Sequence of statements: Use Addition rule.
O(s1; s2; s3; ...; sk) = O(s1) + O(s2) + O(s3) + ... + O(sk) = O(max(s1, s2, s3, ..., sk))

Example: Complexity is O(n^2) + O(n) + O(1) = O(n^2)

for (int j = 0; j < n * n; j++)          // O(n^2)
    sum = sum + j;
for (int k = 0; k < n; k++)              // O(n)
    sum = sum - 1;
System.out.print("sum is now " + sum);   // O(1)
How to determine complexity of code structures
Switch: Take the complexity of the most expensive case.

char key;
int[] X = new int[5];
int[][] Y = new int[10][10];
switch (key) {
    case 'a':
        for (int i = 0; i < X.length; i++)        // O(n)
            sum += X[i];
        break;
    case 'b':
        for (int i = 0; i < Y.length; i++)        // O(n^2)
            for (int j = 0; j < Y[0].length; j++)
                sum += Y[i][j];
        break;
}  // End of switch block

Overall Complexity: O(n^2)
How to determine complexity of code structures
If Statement: Take the complexity of the most expensive case.

char key;
int[][] A = new int[5][5];
int[][] B = new int[5][5];
int[][] C = new int[5][5];
if (key == '+') {
    for (int i = 0; i < n; i++)          // O(n^2)
        for (int j = 0; j < n; j++)
            C[i][j] = A[i][j] + B[i][j];
}  // End of if block
else if (key == 'x')
    C = matrixMult(A, B);                // O(n^3)
else
    System.out.println("Error! Enter '+' or 'x'!");   // O(1)

Overall complexity: O(n^3)
How to determine complexity of code structures
• Sometimes if-else statements must carefully be checked:
  O(if-else) = O(Condition) + Max[O(if), O(else)]

int[] integers = new int[10];
if (hasPrimes(integers) == true)   // condition: O(n)
    integers[0] = 20;              // O(1)
else
    integers[0] = -20;             // O(1)

public boolean hasPrimes(int[] arr) {
    for (int i = 0; i < arr.length; i++) {
        // ...
    }
}  // End of hasPrimes(): O(n)

O(if-else) = O(Condition) = O(n)
How to determine complexity of code structures
• Note: Sometimes a loop may cause the if-else rule not to be applicable. Consider the following loop:

while (n > 0) {
    if (n % 2 == 0) {
        System.out.println(n);
        n = n / 2;
    } else {
        System.out.println(n);
        System.out.println(n);
        n = n - 1;
    }
}

The else-branch has more basic operations; therefore one may conclude that the loop is O(n). However, the if-branch dominates. For example, if n is 60, then the sequence of n is: 60, 30, 15, 14, 7, 6, 3, 2, 1, 0. Hence the loop is logarithmic and its complexity is O(log n).
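A quick simulation supports the O(log n) conclusion: the step count stays close to 2·log2 n rather than n. A sketch (the counting wrapper is mine):

```java
// Counts loop iterations of the halve-or-decrement loop from the slide.
public class HalvingLoop {
    static int steps(int n) {
        int count = 0;
        while (n > 0) {
            if (n % 2 == 0) n = n / 2;
            else            n = n - 1;
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(steps(60));     // 9 steps: 60,30,15,14,7,6,3,2,1,0
        System.out.println(steps(1024));   // 11 steps, far fewer than n
    }
}
```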
Comp 122
Asymptotic Complexity
• Running time of an algorithm as a function of input size n for large n.
• Expressed using only the highest-order term in the expression for the exact running time.
  – Instead of exact running time, say Θ(n^2).
• Describes behavior of the function in the limit.
• Written using Asymptotic Notation.
Asymptotic Notation
• Θ, O, Ω
• Defined for functions over the natural numbers.
  – Ex: f(n) = Θ(n^2).
  – Describes how f(n) grows in comparison to n^2.
• Each defines a set of functions; in practice used to compare two function sizes.
• The notations describe different rate-of-growth relations between the defining function and the defined set of functions.
Θ-notation
For function g(n), we define Θ(g(n)), big-Theta of n, as the set:
Θ(g(n)) = { f(n) : ∃ positive constants c1, c2, and n0 such that ∀ n ≥ n0 we have 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) }

g(n) is an asymptotically tight bound for f(n).
Intuitively: the set of all functions that have the same rate of growth as g(n).
O-notation
For function g(n), we define O(g(n)), big-O of n, as the set:
O(g(n)) = { f(n) : ∃ positive constants c and n0 such that ∀ n ≥ n0 we have 0 ≤ f(n) ≤ c·g(n) }

g(n) is an asymptotic upper bound for f(n).
Intuitively: the set of all functions whose rate of growth is the same as or lower than that of g(n).
f(n) = Θ(g(n)) ⇒ f(n) = O(g(n)); that is, Θ(g(n)) ⊆ O(g(n)).
Ω-notation
For function g(n), we define Ω(g(n)), big-Omega of n, as the set:
Ω(g(n)) = { f(n) : ∃ positive constants c and n0 such that ∀ n ≥ n0 we have 0 ≤ c·g(n) ≤ f(n) }

g(n) is an asymptotic lower bound for f(n).
Intuitively: the set of all functions whose rate of growth is the same as or higher than that of g(n).
f(n) = Θ(g(n)) ⇒ f(n) = Ω(g(n)); that is, Θ(g(n)) ⊆ Ω(g(n)).
Relations Between Θ, O, Ω
Theorem: For any two functions g(n) and f(n), f(n) = Θ(g(n)) iff f(n) = O(g(n)) and f(n) = Ω(g(n)).
• I.e., Θ(g(n)) = O(g(n)) ∩ Ω(g(n)).
• In practice, asymptotically tight bounds are obtained from asymptotic upper and lower bounds.
Logarithms
x = log_b a is the exponent for which a = b^x.
Natural log: ln a = log_e a
Binary log: lg a = log_2 a
lg^2 a = (lg a)^2
lg lg a = lg (lg a)

Useful identities:
a = b^(log_b a)
log_c (ab) = log_c a + log_c b
log_b a^n = n · log_b a
log_b a = log_c a / log_c b
log_b (1/a) = −log_b a
log_b a = 1 / log_a b
a^(log_b c) = c^(log_b a)
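These identities are easy to sanity-check numerically with Math.log, since change of base gives log_b a = ln a / ln b. A sketch with arbitrary sample values:

```java
// Numeric spot-checks of the logarithm identities (within floating-point error).
public class LogIdentities {
    static double logBase(double b, double a) { return Math.log(a) / Math.log(b); }

    public static void main(String[] args) {
        double a = 9, b = 2, c = 5, eps = 1e-9;
        // a = b^(log_b a)
        System.out.println(Math.abs(Math.pow(b, logBase(b, a)) - a) < eps);
        // log_c(ab) = log_c a + log_c b
        System.out.println(Math.abs(logBase(b, a * c) - (logBase(b, a) + logBase(b, c))) < eps);
        // log_b a = 1 / log_a b
        System.out.println(Math.abs(logBase(b, a) - 1.0 / logBase(a, b)) < eps);
        // a^(log_b c) = c^(log_b a)
        System.out.println(Math.abs(Math.pow(a, logBase(b, c)) - Math.pow(c, logBase(b, a))) < 1e-6);
    }
}
```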
Review on Summations
• Constant Series: For integers a and b, a ≤ b:
  Σ_{i=a}^{b} 1 = b − a + 1
• Linear Series (Arithmetic Series): For n ≥ 0:
  Σ_{i=1}^{n} i = 1 + 2 + ... + n = n(n + 1)/2
• Quadratic Series: For n ≥ 0:
  Σ_{i=1}^{n} i^2 = 1 + 4 + ... + n^2 = n(n + 1)(2n + 1)/6
Review on Summations
• Cubic Series: For n ≥ 0:
  Σ_{i=1}^{n} i^3 = 1 + 8 + ... + n^3 = n^2 (n + 1)^2 / 4
• Geometric Series: For real x ≠ 1:
  Σ_{k=0}^{n} x^k = 1 + x + x^2 + ... + x^n = (x^{n+1} − 1)/(x − 1)
• For |x| < 1:
  Σ_{k=0}^{∞} x^k = 1/(1 − x)
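The closed forms above can be confirmed against brute-force sums. A minimal sketch (the class name is mine):

```java
// Compares each brute-force sum with its closed-form formula.
public class Summations {
    static long sumLinear(int n)  { long s = 0; for (int i = 1; i <= n; i++) s += i; return s; }
    static long sumSquares(int n) { long s = 0; for (int i = 1; i <= n; i++) s += (long) i * i; return s; }
    static long sumCubes(int n)   { long s = 0; for (int i = 1; i <= n; i++) s += (long) i * i * i; return s; }

    public static void main(String[] args) {
        long n = 100;
        System.out.println(sumLinear(100)  == n * (n + 1) / 2);                 // linear series
        System.out.println(sumSquares(100) == n * (n + 1) * (2 * n + 1) / 6);   // quadratic series
        System.out.println(sumCubes(100)   == n * n * (n + 1) * (n + 1) / 4);   // cubic series
    }
}
```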
Growth Rates Compared
           n=1    n=2    n=4    n=8    n=16     n=32
1          1      1      1      1      1        1
log n      0      1      2      3      4        5
n          1      2      4      8      16       32
n log n    0      2      8      24     64       160
n^2        1      4      16     64     256      1024
n^3        1      8      64     512    4096     32768
2^n        2      4      16     256    65536    4294967296
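The table rows can be regenerated programmatically. A sketch (rounding guards against floating-point error in the base-2 logarithm):

```java
// Recomputes the growth-rate table entries for n = 1, 2, 4, 8, 16, 32.
public class GrowthTable {
    static long log2(int n)  { return Math.round(Math.log(n) / Math.log(2)); }
    static long nLog2(int n) { return Math.round(n * (Math.log(n) / Math.log(2))); }

    public static void main(String[] args) {
        System.out.println("n\tlog n\tn log n\tn^2\tn^3\t2^n");
        for (int n : new int[] {1, 2, 4, 8, 16, 32})
            System.out.println(n + "\t" + log2(n) + "\t" + nLog2(n) + "\t"
                + (long) n * n + "\t" + (long) n * n * n + "\t" + (1L << n));
    }
}
```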
Graphical Comparison of Complexity Classes
[Figure: growth of log x, x, x log x, x^2, and 2^x plotted for x = 1 to 12; y-axis from 0 to 4500.]
Complexity Analysis
• Asymptotic Complexity
• Big-O (asymptotic) Notation
• Big-O Computation Rules
• Proving Big-O Complexity
• How to determine complexity of code structures
Asymptotic Complexity
• Finding the exact complexity, f(n) = number of basic operations of an algorithm, is difficult.
• We approximate f(n) by a function g(n) in a way that does not substantially change the magnitude of f(n): the function g(n) is sufficiently close to f(n) for large values of the input size n.
• This approximate measure of efficiency is called asymptotic complexity.
• Thus the asymptotic complexity measure does not give the exact number of operations of an algorithm, but it shows how that number grows with the size of the input.
• This gives us a measure that will work for different operating systems, compilers and CPUs.
Big-O (asymptotic) Notation
• The most commonly used notation for specifying asymptotic complexity is the big-O notation.
• The Big-O notation, O(g(n)), is used to give an upper bound (worst-case) on a positive runtime function f(n), where n is the input size.
Complexity Analysis
bull Asymptotic Complexity
bull Big-O (asymptotic) Notation
bull Big-O Computation Rules
bull Proving Big-O Complexity
bull How to determine complexity of code structures
Asymptotic Complexity bull Finding the exact complexity f(n) = number of basic
operations of an algorithm is difficultbull We approximate f(n) by a function g(n) in a way that
does not substantially change the magnitude of f(n) --the function g(n) is sufficiently close to f(n) for large values of the input size n
bull This approximate measure of efficiency is called asymptotic complexity
bull Thus the asymptotic complexity measure does not give the exact number of operations of an algorithm but it shows how that number grows with the size of the input
bull This gives us a measure that will work for different operating systems compilers and CPUs
Big-O (asymptotic) Notationbull The most commonly used notation for specifying asymptotic
complexity is the big-O notationbull The Big-O notation O(g(n)) is used to give an upper bound (worst-
case) on a positive runtime function f(n) where n is the input size
Definition of Big-Obull Consider a function f(n) that is non-negative n 0 We say that
ldquof(n) is Big-O of g(n)rdquo ie f(n) = O(g(n)) if n0 0 and a constant c gt 0 such that f(n) cg(n) n n0
Big-O (asymptotic) NotationImplication of the definitionbull For all sufficiently large n c g(n) is an upper bound of
f(n)Note By the definition of Big-O f(n) = 3n + 4 is O(n)
it is also O(n2) it is also O(n3) it is also O(nn)
bull However when Big-O notation is used the function g in the relationship f(n) is O(g(n)) is CHOOSEN TO BE AS SMALL AS POSSIBLE ndash We call such a function g a tight asymptotic bound of f(n)
Big-O (asymptotic) NotationSome Big-O complexity classes in order of magnitude from smallest to highest
O(1) ConstantO(log(n)) LogarithmicO(n) LinearO(n log(n)) n log nO(nx) eg O(n2) O(n3) etc Polynomial
O(an) eg O(16n) O(2n) etc Exponential
O(n) FactorialO(nn)
Examples of Algorithms and their big-O complexity
Big-O Notation Examples of Algorithms
O(1) Push Pop Enqueue (if there is a tail reference) Dequeue Accessing an array element
O(log(n)) Binary search
O(n) Linear search
O(n log(n)) Heap sort Quick sort (average) Merge sort
O(n2) Selection sort Insertion sort Bubble sort
O(n3) Matrix multiplication O(2n) Towers of Hanoi
Warnings about O-Notation bull Big-O notation cannot compare
algorithms in the same complexity classbull Big-O notation only gives sensible
comparisons of algorithms in different complexity classes when n is large
bull Consider two algorithms for same task Linear f(n) = 1000 n Quadratic f(n) = n21000The quadratic one is faster for n lt 1000000
Rules for using big-O bull For large values of input n the constants and terms with lower
degree of n are ignored
1 Multiplicative Constants Rule Ignoring constant factors O(c f(n)) = O(f(n)) where c is a constant Example
O(20 n3) = O(n3)
2 Addition Rule Ignoring smaller termsIf O(f(n)) lt O(h(n)) then O(f(n) + h(n)) = O(h(n))Example
O(n2 log n + n3) = O(n3)O(2000 n3 + 2n + n800 + 10n + 27n log n + 5) = O(n )
3 Multiplication Rule O(f(n) h(n)) = O(f(n)) O(h(n))Example
O((n3 + 2n 2 + 3n log n + 7)(8n 2 + 5n + 2)) = O(n 5)
Proving Big-O Complexity
To prove that f(n) is O(g(n)) we find any pair of values n0 and c that satisfy f(n) le c g(n) for n n0
Note The pair (n0 c) is not unique If such a pair exists then there is an infinite number of such pairs
Example Prove that f(n) = 3n2 + 5 is O(n2)We try to find some values of n and c by solving the following inequality
3n2 + 5 cn2 OR 3 + 5n2 c
(By putting different values for n we get corresponding values for c)
n0 1 2 3 4
c 8 425 355 33125 3
Proving Big-O ComplexityExample
Prove that f(n) = 3n2 + 4n log n + 10 is O(n2) by finding appropriate values for c and n0
We try to find some values of n and c by solving the following inequality 3n2 + 4n log n + 10 cn2
OR 3 + 4 log n n+ 10n2 c
( We used Log of base 2 but another base can be used as well)n0 1 2 3 4
c 13 75 622 562 3
How to determine complexity of code structures Loops for while and do-while
Complexity is determined by the number of iterations in the loop times the complexity of the body of the loop
Examples
for (int i = 0 i lt n i++) sum = sum - i
for (int i = 0 i lt n n i++) sum = sum + i
i=1while (i lt n) sum = sum + i i = i2
O(n)
O(n2)
O(log n)
How to determine complexity of code structures Nested Loops Complexity of inner loop complexity of outer loopExamples
sum = 0for(int i = 0 i lt n i++) for(int j = 0 j lt n j++) sum += i j
i = 1while(i lt= n) j = 1 while(j lt= n) statements of constant complexity j = j2 i = i+1
O(n2)
O(n log n)
How to determine complexity of code structures Sequence of statements Use Addition rule
O(s1 s2 s3 hellip sk) = O(s1) + O(s2) + O(s3) + hellip + O(sk) = O(max(s1 s2 s3 sk))
Example
Complexity is O(n2) + O(n) +O(1) = O(n2)
for (int j = 0 j lt n n j++) sum = sum + jfor (int k = 0 k lt n k++) sum = sum - lSystemoutprint(sum is now rdquo + sum)
char keyint[] X = new int[5] int[][] Y = new int[10][10] switch(key) case a for(int i = 0 i lt Xlength i++) sum += X[i] break case b for(int i = 0 i lt Ylength j++) for(int j = 0 j lt Y[0]length j++) sum += Y[i][j] break End of switch block
How to determine complexity of code structures
Switch Take the complexity of the most expensive case
o(n)
o(n2)
Overall Complexity o(n2)
char keyint[][] A = new int[5][5]int[][] B = new int[5][5]int[][] C = new int[5][5]if(key == +) for(int i = 0 i lt n i++) for(int j = 0 j lt n j++) C[i][j] = A[i][j] + B[i][j] End of if block else if(key == x) C = matrixMult(A B)
else Systemoutprintln(Error Enter + or x)
If Statement Take the complexity of the most expensive case
O(n2)
O(n3)
O(1)
How to determine complexity of code structures
Overall complexityO(n3)
int[] integers = new int[10]if(hasPrimes(integers) == true) integers[0] = 20else
integers[0] = -20
public boolean hasPrimes(int[] arr) for(int i = 0 i lt arrlength i++)
End of hasPrimes()
How to determine complexity of code structures
bull Sometimes if-else statements must carefully be checkedO(if-else) = O(Condition)+ Max[O(if) O(else)]
O(1)
O(1)
O(if-else) = O(Condition) = O(n)
O(n)
How to determine complexity of code structuresbull Note Sometimes a loop may cause the if-else rule not to be
applicable Consider the following loop
The else-branch has more basic operations therefore one may conclude that the loop is O(n) However the if-branch dominates For example if n is 60 then the sequence of n is 60 30 15 14 7 6 3 2 1 and 0 Hence the loop is logarithmic and its complexity is O(log n)
while (n gt 0) if (n 2 = = 0) Systemoutprintln(n) n = n 2 else Systemoutprintln(n) Systemoutprintln(n)
n = n ndash 1
Comp 122
Asymptotic Complexity
• Running time of an algorithm as a function of input size n, for large n.
• Expressed using only the highest-order term in the expression for the exact running time.
  – Instead of exact running time, say Θ(n^2).
• Describes behavior of the function in the limit.
• Written using Asymptotic Notation.
Asymptotic Notation
• Θ, O, Ω
• Defined for functions over the natural numbers.
  – Ex: f(n) = Θ(n^2).
  – Describes how f(n) grows in comparison to n^2.
• Each notation defines a set of functions; in practice, used to compare two function sizes.
• The notations describe different rate-of-growth relations between the defining function and the defined set of functions.
Θ-notation

For a function g(n), we define Θ(g(n)), big-Theta of g(n), as the set:

  Θ(g(n)) = { f(n) : there exist positive constants c1, c2, and n0 such that
              0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0 }

g(n) is an asymptotically tight bound for f(n).
Intuitively: the set of all functions that have the same rate of growth as g(n).
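As a worked instance of the definition (this particular f(n) and the witness constants are an illustrative choice; the pair is not unique):

```latex
\text{Claim: } \tfrac{1}{2}n^2 - 3n = \Theta(n^2).
\text{ We need } c_1 n^2 \le \tfrac{1}{2}n^2 - 3n \le c_2 n^2 \text{ for all } n \ge n_0.

\text{Dividing through by } n^2:\quad c_1 \le \tfrac{1}{2} - \tfrac{3}{n} \le c_2.

\text{The right inequality holds with } c_2 = \tfrac{1}{2} \text{ for all } n \ge 1;
\text{the left holds with } c_1 = \tfrac{1}{14} \text{ for all } n \ge 7,
\text{ since } \tfrac{1}{2} - \tfrac{3}{7} = \tfrac{1}{14}.

\text{Hence } c_1 = \tfrac{1}{14},\; c_2 = \tfrac{1}{2},\; n_0 = 7 \text{ witness the claim.}
```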
O-notation

For a function g(n), we define O(g(n)), big-O of g(n), as the set:

  O(g(n)) = { f(n) : there exist positive constants c and n0 such that
              0 ≤ f(n) ≤ c·g(n) for all n ≥ n0 }

g(n) is an asymptotic upper bound for f(n).
Intuitively: the set of all functions whose rate of growth is the same as or lower than that of g(n).

f(n) = Θ(g(n)) implies f(n) = O(g(n)); that is, Θ(g(n)) ⊆ O(g(n)).
Ω-notation

For a function g(n), we define Ω(g(n)), big-Omega of g(n), as the set:

  Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that
              0 ≤ c·g(n) ≤ f(n) for all n ≥ n0 }

g(n) is an asymptotic lower bound for f(n).
Intuitively: the set of all functions whose rate of growth is the same as or higher than that of g(n).

f(n) = Θ(g(n)) implies f(n) = Ω(g(n)); that is, Θ(g(n)) ⊆ Ω(g(n)).
Relations Between Θ, O, Ω

Theorem: For any two functions g(n) and f(n),
  f(n) = Θ(g(n)) iff f(n) = O(g(n)) and f(n) = Ω(g(n)).
• I.e., Θ(g(n)) = O(g(n)) ∩ Ω(g(n)).
• In practice, asymptotically tight bounds are obtained from asymptotic upper and lower bounds.
Logarithms

x = log_b a is the exponent for which a = b^x.

Natural log: ln a = log_e a
Binary log:  lg a = log_2 a
lg^2 a = (lg a)^2
lg lg a = lg (lg a)

Useful identities (a, b, c > 0; bases ≠ 1):
  a = b^(log_b a)
  log_c (ab) = log_c a + log_c b
  log_b a^n = n · log_b a
  log_b a = log_c a / log_c b        (change of base)
  log_b (1/a) = − log_b a
  log_b a = 1 / log_a b
  a^(log_b c) = c^(log_b a)
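A small sketch illustrating two of the identities above, in particular change of base, since java.lang.Math offers only natural and base-10 logs (the helper name logBase is my own, not a standard API):

```java
public class LogIdentities {
    // Helper: log_b(a) via the change-of-base identity
    // log_b a = ln a / ln b.
    static double logBase(double b, double a) {
        return Math.log(a) / Math.log(b);
    }

    public static void main(String[] args) {
        double a = 8.0, b = 2.0;
        System.out.println(logBase(b, a));              // lg 8, approximately 3
        System.out.println(Math.pow(b, logBase(b, a))); // a = b^(log_b a), approximately 8
        System.out.println(logBase(b, 1.0 / a));        // log_b(1/a) = -log_b a, approximately -3
    }
}
```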
Review on Summations
• Constant Series: For integers a and b, a ≤ b,
    Σ_{i=a}^{b} 1 = b − a + 1
• Linear Series (Arithmetic Series): For n ≥ 0,
    Σ_{i=1}^{n} i = 1 + 2 + … + n = n(n+1)/2
• Quadratic Series: For n ≥ 0,
    Σ_{i=1}^{n} i^2 = 1 + 4 + … + n^2 = n(n+1)(2n+1)/6
Review on Summations
• Cubic Series: For n ≥ 0,
    Σ_{i=1}^{n} i^3 = 1 + 8 + … + n^3 = n^2(n+1)^2/4
• Geometric Series: For real x ≠ 1,
    Σ_{k=0}^{n} x^k = 1 + x + x^2 + … + x^n = (x^{n+1} − 1)/(x − 1)
• For |x| < 1,
    Σ_{k=0}^{∞} x^k = 1/(1 − x)
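The closed forms above can be sanity-checked numerically; a minimal sketch comparing running sums against the formulas for n = 100 (exact long arithmetic, so the comparisons are not subject to rounding):

```java
public class SummationCheck {
    public static void main(String[] args) {
        int n = 100;
        long linear = 0, quadratic = 0, cubic = 0;
        for (long i = 1; i <= n; i++) {
            linear    += i;          // 1 + 2 + ... + n
            quadratic += i * i;      // 1 + 4 + ... + n^2
            cubic     += i * i * i;  // 1 + 8 + ... + n^3
        }
        // Compare each running sum against its closed form; prints true three times.
        System.out.println(linear    == (long) n * (n + 1) / 2);
        System.out.println(quadratic == (long) n * (n + 1) * (2 * n + 1) / 6);
        System.out.println(cubic     == (long) n * n * (n + 1) * (n + 1) / 4);
    }
}
```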
Asymptotic Complexity
• Finding the exact complexity, f(n) = number of basic operations of an algorithm, is difficult.
• We approximate f(n) by a function g(n) in a way that does not substantially change the magnitude of f(n): the function g(n) is sufficiently close to f(n) for large values of the input size n.
• This approximate measure of efficiency is called asymptotic complexity.
• Thus the asymptotic complexity measure does not give the exact number of operations of an algorithm, but it shows how that number grows with the size of the input.
• This gives us a measure that will work for different operating systems, compilers, and CPUs.
Big-O (asymptotic) Notation
• The most commonly used notation for specifying asymptotic complexity is the big-O notation.
• The big-O notation O(g(n)) is used to give an upper bound (worst case) on a positive runtime function f(n), where n is the input size.

Definition of Big-O:
• Consider a function f(n) that is non-negative for all n ≥ 0. We say that "f(n) is big-O of g(n)", i.e. f(n) = O(g(n)), if there exist an n0 ≥ 0 and a constant c > 0 such that f(n) ≤ c·g(n) for all n ≥ n0.
Big-O (asymptotic) Notation
Implication of the definition:
• For all sufficiently large n, c·g(n) is an upper bound of f(n).
Note: By the definition of big-O, f(n) = 3n + 4 is O(n); it is also O(n^2), O(n^3), …, O(n^n).
• However, when big-O notation is used, the function g in the relationship "f(n) is O(g(n))" is chosen to be as small as possible.
  – We call such a function g a tight asymptotic bound of f(n).
Big-O (asymptotic) Notation
Some big-O complexity classes, in order of magnitude from smallest to highest:

  O(1)                              Constant
  O(log n)                          Logarithmic
  O(n)                              Linear
  O(n log n)                        n log n
  O(n^x), e.g. O(n^2), O(n^3)       Polynomial
  O(a^n), e.g. O(1.6^n), O(2^n)     Exponential
  O(n!)                             Factorial
  O(n^n)
Examples of Algorithms and their big-O complexity

  Big-O Notation   Examples of Algorithms
  O(1)             Push, Pop, Enqueue (if there is a tail reference), Dequeue, Accessing an array element
  O(log n)         Binary search
  O(n)             Linear search
  O(n log n)       Heap sort, Quick sort (average), Merge sort
  O(n^2)           Selection sort, Insertion sort, Bubble sort
  O(n^3)           Matrix multiplication
  O(2^n)           Towers of Hanoi
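As one concrete entry from the table, a sketch of iterative binary search (class and method names are my own): each pass halves the live range [lo, hi], so on a sorted array of length n the loop runs at most about log2(n) + 1 times, which is why the table lists it as O(log n).

```java
public class BinarySearchDemo {
    // Returns the index of key in the sorted array a, or -1 if absent.
    static int binarySearch(int[] a, int key) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;   // avoids overflow of (lo + hi)
            if (a[mid] == key)      return mid;
            else if (a[mid] < key)  lo = mid + 1;  // discard left half
            else                    hi = mid - 1;  // discard right half
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] a = {1, 3, 5, 7, 9, 11};
        System.out.println(binarySearch(a, 7));   // 3
        System.out.println(binarySearch(a, 4));   // -1
    }
}
```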
Warnings about O-Notation
• Big-O notation cannot compare algorithms in the same complexity class.
• Big-O notation only gives sensible comparisons of algorithms in different complexity classes when n is large.
• Consider two algorithms for the same task:
    Linear:    f(n) = 1000·n
    Quadratic: f(n) = n^2/1000
  The quadratic one is faster for n < 1,000,000.
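The crossover claim can be checked directly; a tiny sketch comparing the two cost functions around n = 1,000,000 (exact long arithmetic, names are my own):

```java
public class CrossoverDemo {
    static long linear(long n)    { return 1000 * n; }
    static long quadratic(long n) { return n * n / 1000; }

    public static void main(String[] args) {
        // Below the crossover, the quadratic-cost algorithm is cheaper...
        System.out.println(quadratic(999_999) < linear(999_999));      // true
        // ...at n = 1,000,000 the two costs tie...
        System.out.println(quadratic(1_000_000) == linear(1_000_000)); // true
        // ...and beyond it the linear algorithm wins, as big-O predicts for large n.
        System.out.println(quadratic(1_000_001) > linear(1_000_001));  // true
    }
}
```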
Rules for using big-O
• For large values of input n, the constants and terms with lower degree of n are ignored.

1. Multiplicative Constants Rule: ignoring constant factors.
     O(c·f(n)) = O(f(n)), where c is a constant.
   Example: O(20·n^3) = O(n^3)

2. Addition Rule: ignoring smaller terms.
     If O(f(n)) < O(h(n)), then O(f(n) + h(n)) = O(h(n)).
   Examples:
     O(n^2·log n + n^3) = O(n^3)
     O(2000·n^3 + 2n + n^0.8 + 10n + 27·n·log n + 5) = O(n^3)

3. Multiplication Rule: O(f(n)·h(n)) = O(f(n))·O(h(n)).
   Example: O((n^3 + 2n^2 + 3n·log n + 7)(8n^2 + 5n + 2)) = O(n^5)
Proving Big-O Complexity

To prove that f(n) is O(g(n)), we find any pair of values n0 and c that satisfy
    f(n) ≤ c·g(n) for n ≥ n0.
Note: The pair (n0, c) is not unique. If such a pair exists, then there is an infinite number of such pairs.

Example: Prove that f(n) = 3n^2 + 5 is O(n^2).
We try to find some values of n and c by solving the following inequality:
    3n^2 + 5 ≤ c·n^2,  i.e.  3 + 5/n^2 ≤ c
(By putting different values for n, we get corresponding values for c.)

  n0 |  1    2     3     4       ...
  c  |  8    4.25  3.55  3.3125  → 3
Proving Big-O Complexity
Example: Prove that f(n) = 3n^2 + 4n·log n + 10 is O(n^2) by finding appropriate values for c and n0.
We try to find some values of n and c by solving the following inequality:
    3n^2 + 4n·log n + 10 ≤ c·n^2,  i.e.  3 + 4·(log n)/n + 10/n^2 ≤ c
(We used log of base 2, but another base can be used as well.)

  n0 |  1    2    3     4     ...
  c  |  13   7.5  6.22  5.62  → 3
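The c row can be recomputed mechanically; a small sketch (names are my own) evaluating 3 + 4·(log2 n)/n + 10/n^2 for the tabulated n values and then for a large n, where the lower-order terms vanish and c approaches 3:

```java
public class BigOConstants {
    // Smallest c that works at a given n for f(n) = 3n^2 + 4n*log2(n) + 10.
    static double c(double n) {
        double log2n = Math.log(n) / Math.log(2);  // change of base
        return 3 + 4 * log2n / n + 10 / (n * n);
    }

    public static void main(String[] args) {
        for (int n = 1; n <= 4; n++) {
            System.out.printf("n = %d  c = %.4f%n", n, c(n));
        }
        // For large n the value is barely above 3.
        System.out.printf("n = 1000000  c = %.4f%n", c(1_000_000));
    }
}
```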
How to determine complexity of code structures

Loops (for, while, and do-while):
Complexity is determined by the number of iterations in the loop times the complexity of the body of the loop.

Examples:

  for (int i = 0; i < n; i++)        // O(n)
      sum = sum - i;

  for (int i = 0; i < n * n; i++)    // O(n^2)
      sum = sum + i;

  i = 1;                             // O(log n)
  while (i < n) {
      sum = sum + i;
      i = i * 2;
  }
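The logarithmic claim for the doubling loop can be confirmed by counting iterations; a sketch with n a power of two, where the count comes out exactly log2(n):

```java
public class LoopCount {
    public static void main(String[] args) {
        int n = 1024;            // 2^10
        int count = 0;
        long i = 1;
        while (i < n) {          // same shape as the doubling loop above
            i = i * 2;
            count++;
        }
        System.out.println(count);   // 10, i.e. log2(1024)
    }
}
```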
How to determine complexity of code structures

Nested Loops: complexity of inner loop times complexity of outer loop.
Examples:

  sum = 0;                           // O(n^2)
  for (int i = 0; i < n; i++)
      for (int j = 0; j < n; j++)
          sum += i * j;

  i = 1;                             // O(n log n)
  while (i <= n) {
      j = 1;
      while (j <= n) {
          // statements of constant complexity
          j = j * 2;
      }
      i = i + 1;
  }
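Counting the constant-time statements executed by the second nested loop makes the O(n log n) bound concrete; a sketch with n a power of two, where the inner loop runs log2(n) + 1 times per outer pass:

```java
public class NestedCount {
    public static void main(String[] args) {
        int n = 8;               // 2^3
        long count = 0;
        for (long i = 1; i <= n; i++) {          // outer: n passes
            for (long j = 1; j <= n; j *= 2) {   // inner: log2(n) + 1 passes
                count++;
            }
        }
        System.out.println(count);   // 8 * (3 + 1) = 32
    }
}
```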
How to determine complexity of code structures

Sequence of statements: use the Addition rule.
  O(s1; s2; s3; …; sk) = O(s1) + O(s2) + O(s3) + … + O(sk) = O(max(s1, s2, s3, …, sk))

Example:

  for (int j = 0; j < n * n; j++)
      sum = sum + j;
  for (int k = 0; k < n; k++)
      sum = sum - l;
  System.out.print("sum is now " + sum);

Complexity is O(n^2) + O(n) + O(1) = O(n^2).
How to determine complexity of code structures

Switch: take the complexity of the most expensive case.

  char key;
  int[] X = new int[5];
  int[][] Y = new int[10][10];
  switch (key) {
      case 'a':
          for (int i = 0; i < X.length; i++)       // O(n)
              sum += X[i];
          break;
      case 'b':
          for (int i = 0; i < Y.length; i++)       // O(n^2)
              for (int j = 0; j < Y[0].length; j++)
                  sum += Y[i][j];
          break;
  } // End of switch block

Overall complexity: O(n^2)
How to determine complexity of code structures

If Statement: take the complexity of the most expensive case.

  char key;
  int[][] A = new int[5][5];
  int[][] B = new int[5][5];
  int[][] C = new int[5][5];
  if (key == '+') {
      for (int i = 0; i < n; i++)                  // O(n^2)
          for (int j = 0; j < n; j++)
              C[i][j] = A[i][j] + B[i][j];
  } // End of if block
  else if (key == 'x')
      C = matrixMult(A, B);                        // O(n^3)
  else
      System.out.println("Error: Enter + or x");   // O(1)

Overall complexity: O(n^3)
How to determine complexity of code structures

• Sometimes if-else statements must carefully be checked:
  O(if-else) = O(Condition) + Max[O(if), O(else)]

  int[] integers = new int[10];
  ...
  if (hasPrimes(integers) == true)   // condition: O(n)
      integers[0] = 20;              // O(1)
  else
      integers[0] = -20;             // O(1)

  public boolean hasPrimes(int[] arr) {
      for (int i = 0; i < arr.length; i++) {
          ...
      }
  } // End of hasPrimes()

  O(if-else) = O(Condition) = O(n)
How to determine complexity of code structures

• Note: Sometimes a loop may cause the if-else rule not to be applicable. Consider the following loop:

  while (n > 0) {
      if (n % 2 == 0) {
          System.out.println(n);
          n = n / 2;
      } else {
          System.out.println(n);
          System.out.println(n);
          n = n - 1;
      }
  }

The else-branch has more basic operations; therefore one may conclude that the loop is O(n). However, the if-branch dominates. For example, if n is 60, then the sequence of n is 60, 30, 15, 14, 7, 6, 3, 2, 1, and 0. Hence the loop is logarithmic and its complexity is O(log n).
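The n = 60 trace can be reproduced by instrumenting the loop; a sketch (class name is my own) recording each successive value of n instead of printing:

```java
import java.util.ArrayList;
import java.util.List;

public class HalvingTrace {
    public static void main(String[] args) {
        int n = 60;
        List<Integer> values = new ArrayList<>();
        values.add(n);
        while (n > 0) {
            if (n % 2 == 0) n = n / 2;   // even: halve (the dominant branch)
            else            n = n - 1;   // odd: decrement, making n even again
            values.add(n);
        }
        // Matches the text: [60, 30, 15, 14, 7, 6, 3, 2, 1, 0]
        System.out.println(values);
        // Only 9 iterations for n = 60, roughly 2*log2(n): O(log n).
        System.out.println(values.size() - 1);
    }
}
```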
Comp 122
Asymptotic Complexity
bull Running time of an algorithm as a function of input size n for large n
bull Expressed using only the highest-order term in the expression for the exact running timendash Instead of exact running time say Q(n2)
bull Describes behavior of function in the limitbull Written using Asymptotic Notation
Asymptotic Notationbull Q O Wbull Defined for functions over the natural numbers
ndash Ex f(n) = Q(n2)ndash Describes how f(n) grows in comparison to n2
bull Define a set of functions in practice used to compare two function sizes
bull The notations describe different rate-of-growth relations between the defining function and the defined set of functions
Q-notation
Q(g(n)) = f(n) positive constants c1 c2 and n0 such that n n0
we have 0 c1g(n) f(n) c2g(n)
For function g(n) we define Q(g(n)) big-Theta of n as the set
g(n) is an asymptotically tight bound for f(n)
Intuitively Set of all functions thathave the same rate of growth as g(n)
O-notation
O(g(n)) = f(n) positive constants c and n0 such that n n0
we have 0 f(n) cg(n)
For function g(n) we define O(g(n)) big-O of n as the set
g(n) is an asymptotic upper bound for f(n)
Intuitively Set of all functions whose rate of growth is the same as or lower than that of g(n)
f(n) = Q(g(n)) f(n) = O(g(n))Q(g(n)) O(g(n))
W -notation
g(n) is an asymptotic lower bound for f(n)
Intuitively Set of all functions whose rate of growth is the same as or higher than that of g(n)
f(n) = Q(g(n)) f(n) = W(g(n))Q(g(n)) W(g(n))
W(g(n)) = f(n) positive constants c and n0 such that n n0
we have 0 cg(n) f(n)
For function g(n) we define W(g(n)) big-Omega of n as the set
Relations Between Q O W
Relations Between Q W O
bull Ie Q(g(n)) = O(g(n)) Ccedil W(g(n))
bull In practice asymptotically tight bounds are obtained from asymptotic upper and lower bounds
Theorem For any two functions g(n) and f(n) f(n) = Q(g(n)) iff
f(n) = O(g(n)) and f(n) = W(g(n))
Logarithms
x = logba is the exponent for a = bx
Natural log ln a = logeaBinary log lg a = log2a
lg2a = (lg a)2
lg lg a = lg (lg a) ac
ab
bb
c
cb
bn
b
ccc
bb ca
ba
aabaa
ana
baab
loglog
log1log
log)1(loglogloglog
loglog
loglog)(log
Review on Summationsbull Constant Series For integers a and b a b
bull Linear Series (Arithmetic Series) For n 0
bull Quadratic Series For n 0
b
ai
ab 11
2)1(21
1
nnnin
i
n
i
nnnni1
2222
6)12)(1(21
Review on Summationsbull Cubic Series For n 0
bull Geometric Series For real x 1
For |x| lt 1
n
i
nnni1
223333
4)1(21
n
k
nnk
xxxxxx
0
12
111
0 11
k
k
xx
Big-O (asymptotic) Notationbull The most commonly used notation for specifying asymptotic
complexity is the big-O notationbull The Big-O notation O(g(n)) is used to give an upper bound (worst-
case) on a positive runtime function f(n) where n is the input size
Definition of Big-Obull Consider a function f(n) that is non-negative n 0 We say that
ldquof(n) is Big-O of g(n)rdquo ie f(n) = O(g(n)) if n0 0 and a constant c gt 0 such that f(n) cg(n) n n0
Big-O (asymptotic) NotationImplication of the definitionbull For all sufficiently large n c g(n) is an upper bound of
f(n)Note By the definition of Big-O f(n) = 3n + 4 is O(n)
it is also O(n2) it is also O(n3) it is also O(nn)
bull However when Big-O notation is used the function g in the relationship f(n) is O(g(n)) is CHOOSEN TO BE AS SMALL AS POSSIBLE ndash We call such a function g a tight asymptotic bound of f(n)
Big-O (asymptotic) NotationSome Big-O complexity classes in order of magnitude from smallest to highest
O(1) ConstantO(log(n)) LogarithmicO(n) LinearO(n log(n)) n log nO(nx) eg O(n2) O(n3) etc Polynomial
O(an) eg O(16n) O(2n) etc Exponential
O(n) FactorialO(nn)
Examples of Algorithms and their big-O complexity
Big-O Notation Examples of Algorithms
O(1) Push Pop Enqueue (if there is a tail reference) Dequeue Accessing an array element
O(log(n)) Binary search
O(n) Linear search
O(n log(n)) Heap sort Quick sort (average) Merge sort
O(n2) Selection sort Insertion sort Bubble sort
O(n3) Matrix multiplication O(2n) Towers of Hanoi
Warnings about O-Notation bull Big-O notation cannot compare
algorithms in the same complexity classbull Big-O notation only gives sensible
comparisons of algorithms in different complexity classes when n is large
bull Consider two algorithms for same task Linear f(n) = 1000 n Quadratic f(n) = n21000The quadratic one is faster for n lt 1000000
Rules for using big-O bull For large values of input n the constants and terms with lower
degree of n are ignored
1 Multiplicative Constants Rule Ignoring constant factors O(c f(n)) = O(f(n)) where c is a constant Example
O(20 n3) = O(n3)
2 Addition Rule Ignoring smaller termsIf O(f(n)) lt O(h(n)) then O(f(n) + h(n)) = O(h(n))Example
O(n2 log n + n3) = O(n3)O(2000 n3 + 2n + n800 + 10n + 27n log n + 5) = O(n )
3 Multiplication Rule O(f(n) h(n)) = O(f(n)) O(h(n))Example
O((n3 + 2n 2 + 3n log n + 7)(8n 2 + 5n + 2)) = O(n 5)
Proving Big-O Complexity
To prove that f(n) is O(g(n)) we find any pair of values n0 and c that satisfy f(n) le c g(n) for n n0
Note The pair (n0 c) is not unique If such a pair exists then there is an infinite number of such pairs
Example Prove that f(n) = 3n2 + 5 is O(n2)We try to find some values of n and c by solving the following inequality
3n2 + 5 cn2 OR 3 + 5n2 c
(By putting different values for n we get corresponding values for c)
n0 1 2 3 4
c 8 425 355 33125 3
Proving Big-O ComplexityExample
Prove that f(n) = 3n2 + 4n log n + 10 is O(n2) by finding appropriate values for c and n0
We try to find some values of n and c by solving the following inequality 3n2 + 4n log n + 10 cn2
OR 3 + 4 log n n+ 10n2 c
( We used Log of base 2 but another base can be used as well)n0 1 2 3 4
c 13 75 622 562 3
How to determine complexity of code structures Loops for while and do-while
Complexity is determined by the number of iterations in the loop times the complexity of the body of the loop
Examples
for (int i = 0 i lt n i++) sum = sum - i
for (int i = 0 i lt n n i++) sum = sum + i
i=1while (i lt n) sum = sum + i i = i2
O(n)
O(n2)
O(log n)
How to determine complexity of code structures Nested Loops Complexity of inner loop complexity of outer loopExamples
sum = 0for(int i = 0 i lt n i++) for(int j = 0 j lt n j++) sum += i j
i = 1while(i lt= n) j = 1 while(j lt= n) statements of constant complexity j = j2 i = i+1
O(n2)
O(n log n)
How to determine complexity of code structures Sequence of statements Use Addition rule
O(s1 s2 s3 hellip sk) = O(s1) + O(s2) + O(s3) + hellip + O(sk) = O(max(s1 s2 s3 sk))
Example
Complexity is O(n2) + O(n) +O(1) = O(n2)
for (int j = 0 j lt n n j++) sum = sum + jfor (int k = 0 k lt n k++) sum = sum - lSystemoutprint(sum is now rdquo + sum)
char keyint[] X = new int[5] int[][] Y = new int[10][10] switch(key) case a for(int i = 0 i lt Xlength i++) sum += X[i] break case b for(int i = 0 i lt Ylength j++) for(int j = 0 j lt Y[0]length j++) sum += Y[i][j] break End of switch block
How to determine complexity of code structures
Switch Take the complexity of the most expensive case
o(n)
o(n2)
Overall Complexity o(n2)
char keyint[][] A = new int[5][5]int[][] B = new int[5][5]int[][] C = new int[5][5]if(key == +) for(int i = 0 i lt n i++) for(int j = 0 j lt n j++) C[i][j] = A[i][j] + B[i][j] End of if block else if(key == x) C = matrixMult(A B)
else Systemoutprintln(Error Enter + or x)
If Statement Take the complexity of the most expensive case
O(n2)
O(n3)
O(1)
How to determine complexity of code structures
Overall complexityO(n3)
int[] integers = new int[10]if(hasPrimes(integers) == true) integers[0] = 20else
integers[0] = -20
public boolean hasPrimes(int[] arr) for(int i = 0 i lt arrlength i++)
End of hasPrimes()
How to determine complexity of code structures
bull Sometimes if-else statements must carefully be checkedO(if-else) = O(Condition)+ Max[O(if) O(else)]
O(1)
O(1)
O(if-else) = O(Condition) = O(n)
O(n)
How to determine complexity of code structuresbull Note Sometimes a loop may cause the if-else rule not to be
applicable Consider the following loop
The else-branch has more basic operations therefore one may conclude that the loop is O(n) However the if-branch dominates For example if n is 60 then the sequence of n is 60 30 15 14 7 6 3 2 1 and 0 Hence the loop is logarithmic and its complexity is O(log n)
while (n gt 0) if (n 2 = = 0) Systemoutprintln(n) n = n 2 else Systemoutprintln(n) Systemoutprintln(n)
n = n ndash 1
Comp 122
Asymptotic Complexity
bull Running time of an algorithm as a function of input size n for large n
bull Expressed using only the highest-order term in the expression for the exact running timendash Instead of exact running time say Q(n2)
bull Describes behavior of function in the limitbull Written using Asymptotic Notation
Asymptotic Notationbull Q O Wbull Defined for functions over the natural numbers
ndash Ex f(n) = Q(n2)ndash Describes how f(n) grows in comparison to n2
bull Define a set of functions in practice used to compare two function sizes
bull The notations describe different rate-of-growth relations between the defining function and the defined set of functions
Q-notation
Q(g(n)) = f(n) positive constants c1 c2 and n0 such that n n0
we have 0 c1g(n) f(n) c2g(n)
For function g(n) we define Q(g(n)) big-Theta of n as the set
g(n) is an asymptotically tight bound for f(n)
Intuitively Set of all functions thathave the same rate of growth as g(n)
O-notation
O(g(n)) = f(n) positive constants c and n0 such that n n0
we have 0 f(n) cg(n)
For function g(n) we define O(g(n)) big-O of n as the set
g(n) is an asymptotic upper bound for f(n)
Intuitively Set of all functions whose rate of growth is the same as or lower than that of g(n)
f(n) = Q(g(n)) f(n) = O(g(n))Q(g(n)) O(g(n))
W -notation
g(n) is an asymptotic lower bound for f(n)
Intuitively Set of all functions whose rate of growth is the same as or higher than that of g(n)
f(n) = Q(g(n)) f(n) = W(g(n))Q(g(n)) W(g(n))
W(g(n)) = f(n) positive constants c and n0 such that n n0
we have 0 cg(n) f(n)
For function g(n) we define W(g(n)) big-Omega of n as the set
Relations Between Q O W
Relations Between Q W O
bull Ie Q(g(n)) = O(g(n)) Ccedil W(g(n))
bull In practice asymptotically tight bounds are obtained from asymptotic upper and lower bounds
Theorem For any two functions g(n) and f(n) f(n) = Q(g(n)) iff
f(n) = O(g(n)) and f(n) = W(g(n))
Logarithms
x = logba is the exponent for a = bx
Natural log ln a = logeaBinary log lg a = log2a
lg2a = (lg a)2
lg lg a = lg (lg a) ac
ab
bb
c
cb
bn
b
ccc
bb ca
ba
aabaa
ana
baab
loglog
log1log
log)1(loglogloglog
loglog
loglog)(log
Review on Summationsbull Constant Series For integers a and b a b
bull Linear Series (Arithmetic Series) For n 0
bull Quadratic Series For n 0
b
ai
ab 11
2)1(21
1
nnnin
i
n
i
nnnni1
2222
6)12)(1(21
Review on Summationsbull Cubic Series For n 0
bull Geometric Series For real x 1
For |x| lt 1
n
i
nnni1
223333
4)1(21
n
k
nnk
xxxxxx
0
12
111
0 11
k
k
xx
Big-O (asymptotic) NotationImplication of the definitionbull For all sufficiently large n c g(n) is an upper bound of
f(n)Note By the definition of Big-O f(n) = 3n + 4 is O(n)
it is also O(n2) it is also O(n3) it is also O(nn)
bull However when Big-O notation is used the function g in the relationship f(n) is O(g(n)) is CHOOSEN TO BE AS SMALL AS POSSIBLE ndash We call such a function g a tight asymptotic bound of f(n)
Big-O (asymptotic) NotationSome Big-O complexity classes in order of magnitude from smallest to highest
O(1) ConstantO(log(n)) LogarithmicO(n) LinearO(n log(n)) n log nO(nx) eg O(n2) O(n3) etc Polynomial
O(an) eg O(16n) O(2n) etc Exponential
O(n) FactorialO(nn)
Examples of Algorithms and their big-O complexity
Big-O Notation Examples of Algorithms
O(1) Push Pop Enqueue (if there is a tail reference) Dequeue Accessing an array element
O(log(n)) Binary search
O(n) Linear search
O(n log(n)) Heap sort Quick sort (average) Merge sort
O(n2) Selection sort Insertion sort Bubble sort
O(n3) Matrix multiplication O(2n) Towers of Hanoi
Warnings about O-Notation bull Big-O notation cannot compare
algorithms in the same complexity classbull Big-O notation only gives sensible
comparisons of algorithms in different complexity classes when n is large
bull Consider two algorithms for same task Linear f(n) = 1000 n Quadratic f(n) = n21000The quadratic one is faster for n lt 1000000
Rules for using big-O bull For large values of input n the constants and terms with lower
degree of n are ignored
1 Multiplicative Constants Rule Ignoring constant factors O(c f(n)) = O(f(n)) where c is a constant Example
O(20 n3) = O(n3)
2 Addition Rule Ignoring smaller termsIf O(f(n)) lt O(h(n)) then O(f(n) + h(n)) = O(h(n))Example
O(n2 log n + n3) = O(n3)O(2000 n3 + 2n + n800 + 10n + 27n log n + 5) = O(n )
3 Multiplication Rule O(f(n) h(n)) = O(f(n)) O(h(n))Example
O((n3 + 2n 2 + 3n log n + 7)(8n 2 + 5n + 2)) = O(n 5)
Proving Big-O Complexity
To prove that f(n) is O(g(n)) we find any pair of values n0 and c that satisfy f(n) le c g(n) for n n0
Note The pair (n0 c) is not unique If such a pair exists then there is an infinite number of such pairs
Example Prove that f(n) = 3n2 + 5 is O(n2)We try to find some values of n and c by solving the following inequality
3n2 + 5 cn2 OR 3 + 5n2 c
(By putting different values for n we get corresponding values for c)
n0 1 2 3 4
c 8 425 355 33125 3
Proving Big-O ComplexityExample
Prove that f(n) = 3n2 + 4n log n + 10 is O(n2) by finding appropriate values for c and n0
We try to find some values of n and c by solving the following inequality 3n2 + 4n log n + 10 cn2
OR 3 + 4 log n n+ 10n2 c
( We used Log of base 2 but another base can be used as well)n0 1 2 3 4
c 13 75 622 562 3
How to determine complexity of code structures Loops for while and do-while
Complexity is determined by the number of iterations in the loop times the complexity of the body of the loop
Examples
for (int i = 0 i lt n i++) sum = sum - i
for (int i = 0 i lt n n i++) sum = sum + i
i=1while (i lt n) sum = sum + i i = i2
O(n)
O(n2)
O(log n)
How to determine complexity of code structures Nested Loops Complexity of inner loop complexity of outer loopExamples
sum = 0for(int i = 0 i lt n i++) for(int j = 0 j lt n j++) sum += i j
i = 1while(i lt= n) j = 1 while(j lt= n) statements of constant complexity j = j2 i = i+1
O(n2)
O(n log n)
How to determine complexity of code structures Sequence of statements Use Addition rule
O(s1 s2 s3 hellip sk) = O(s1) + O(s2) + O(s3) + hellip + O(sk) = O(max(s1 s2 s3 sk))
Example
Complexity is O(n2) + O(n) +O(1) = O(n2)
for (int j = 0 j lt n n j++) sum = sum + jfor (int k = 0 k lt n k++) sum = sum - lSystemoutprint(sum is now rdquo + sum)
char keyint[] X = new int[5] int[][] Y = new int[10][10] switch(key) case a for(int i = 0 i lt Xlength i++) sum += X[i] break case b for(int i = 0 i lt Ylength j++) for(int j = 0 j lt Y[0]length j++) sum += Y[i][j] break End of switch block
How to determine complexity of code structures
Switch Take the complexity of the most expensive case
o(n)
o(n2)
Overall Complexity o(n2)
char keyint[][] A = new int[5][5]int[][] B = new int[5][5]int[][] C = new int[5][5]if(key == +) for(int i = 0 i lt n i++) for(int j = 0 j lt n j++) C[i][j] = A[i][j] + B[i][j] End of if block else if(key == x) C = matrixMult(A B)
else Systemoutprintln(Error Enter + or x)
If Statement Take the complexity of the most expensive case
O(n2)
O(n3)
O(1)
How to determine complexity of code structures
Overall complexityO(n3)
int[] integers = new int[10]if(hasPrimes(integers) == true) integers[0] = 20else
integers[0] = -20
public boolean hasPrimes(int[] arr) for(int i = 0 i lt arrlength i++)
End of hasPrimes()
How to determine complexity of code structures
bull Sometimes if-else statements must carefully be checkedO(if-else) = O(Condition)+ Max[O(if) O(else)]
O(1)
O(1)
O(if-else) = O(Condition) = O(n)
O(n)
How to determine complexity of code structuresbull Note Sometimes a loop may cause the if-else rule not to be
applicable Consider the following loop
The else-branch has more basic operations therefore one may conclude that the loop is O(n) However the if-branch dominates For example if n is 60 then the sequence of n is 60 30 15 14 7 6 3 2 1 and 0 Hence the loop is logarithmic and its complexity is O(log n)
while (n gt 0) if (n 2 = = 0) Systemoutprintln(n) n = n 2 else Systemoutprintln(n) Systemoutprintln(n)
n = n ndash 1
Comp 122
Asymptotic Complexity
bull Running time of an algorithm as a function of input size n for large n
bull Expressed using only the highest-order term in the expression for the exact running timendash Instead of exact running time say Q(n2)
bull Describes behavior of function in the limitbull Written using Asymptotic Notation
Asymptotic Notationbull Q O Wbull Defined for functions over the natural numbers
ndash Ex f(n) = Q(n2)ndash Describes how f(n) grows in comparison to n2
bull Define a set of functions in practice used to compare two function sizes
bull The notations describe different rate-of-growth relations between the defining function and the defined set of functions
Q-notation
Q(g(n)) = f(n) positive constants c1 c2 and n0 such that n n0
we have 0 c1g(n) f(n) c2g(n)
For function g(n) we define Q(g(n)) big-Theta of n as the set
g(n) is an asymptotically tight bound for f(n)
Intuitively Set of all functions thathave the same rate of growth as g(n)
O-notation
O(g(n)) = f(n) positive constants c and n0 such that n n0
we have 0 f(n) cg(n)
For function g(n) we define O(g(n)) big-O of n as the set
g(n) is an asymptotic upper bound for f(n)
Intuitively Set of all functions whose rate of growth is the same as or lower than that of g(n)
f(n) = Q(g(n)) f(n) = O(g(n))Q(g(n)) O(g(n))
W -notation
g(n) is an asymptotic lower bound for f(n)
Intuitively Set of all functions whose rate of growth is the same as or higher than that of g(n)
f(n) = Q(g(n)) f(n) = W(g(n))Q(g(n)) W(g(n))
W(g(n)) = f(n) positive constants c and n0 such that n n0
we have 0 cg(n) f(n)
For function g(n) we define W(g(n)) big-Omega of n as the set
Relations Between Q O W
Relations Between Q W O
bull Ie Q(g(n)) = O(g(n)) Ccedil W(g(n))
bull In practice asymptotically tight bounds are obtained from asymptotic upper and lower bounds
Theorem For any two functions g(n) and f(n) f(n) = Q(g(n)) iff
f(n) = O(g(n)) and f(n) = W(g(n))
Logarithms
x = logba is the exponent for a = bx
Natural log ln a = logeaBinary log lg a = log2a
lg2a = (lg a)2
lg lg a = lg (lg a) ac
ab
bb
c
cb
bn
b
ccc
bb ca
ba
aabaa
ana
baab
loglog
log1log
log)1(loglogloglog
loglog
loglog)(log
Review on Summationsbull Constant Series For integers a and b a b
bull Linear Series (Arithmetic Series) For n 0
bull Quadratic Series For n 0
b
ai
ab 11
2)1(21
1
nnnin
i
n
i
nnnni1
2222
6)12)(1(21
Review on Summationsbull Cubic Series For n 0
bull Geometric Series For real x 1
For |x| lt 1
n
i
nnni1
223333
4)1(21
n
k
nnk
xxxxxx
0
12
111
0 11
k
k
xx
Big-O (asymptotic) NotationSome Big-O complexity classes in order of magnitude from smallest to highest
O(1) ConstantO(log(n)) LogarithmicO(n) LinearO(n log(n)) n log nO(nx) eg O(n2) O(n3) etc Polynomial
O(an) eg O(16n) O(2n) etc Exponential
O(n) FactorialO(nn)
Examples of Algorithms and their big-O complexity
Big-O Notation Examples of Algorithms
O(1) Push Pop Enqueue (if there is a tail reference) Dequeue Accessing an array element
O(log(n)) Binary search
O(n) Linear search
O(n log(n)) Heap sort Quick sort (average) Merge sort
O(n2) Selection sort Insertion sort Bubble sort
O(n3) Matrix multiplication O(2n) Towers of Hanoi
Warnings about O-Notation bull Big-O notation cannot compare
algorithms in the same complexity classbull Big-O notation only gives sensible
comparisons of algorithms in different complexity classes when n is large
bull Consider two algorithms for same task Linear f(n) = 1000 n Quadratic f(n) = n21000The quadratic one is faster for n lt 1000000
Rules for using big-O bull For large values of input n the constants and terms with lower
degree of n are ignored
1 Multiplicative Constants Rule Ignoring constant factors O(c f(n)) = O(f(n)) where c is a constant Example
O(20 n3) = O(n3)
2 Addition Rule Ignoring smaller termsIf O(f(n)) lt O(h(n)) then O(f(n) + h(n)) = O(h(n))Example
O(n2 log n + n3) = O(n3)O(2000 n3 + 2n + n800 + 10n + 27n log n + 5) = O(n )
3 Multiplication Rule O(f(n) h(n)) = O(f(n)) O(h(n))Example
O((n3 + 2n 2 + 3n log n + 7)(8n 2 + 5n + 2)) = O(n 5)
Proving Big-O Complexity
To prove that f(n) is O(g(n)) we find any pair of values n0 and c that satisfy f(n) le c g(n) for n n0
Note The pair (n0 c) is not unique If such a pair exists then there is an infinite number of such pairs
Example Prove that f(n) = 3n2 + 5 is O(n2)We try to find some values of n and c by solving the following inequality
3n2 + 5 cn2 OR 3 + 5n2 c
(By putting different values for n we get corresponding values for c)
n0 1 2 3 4
c 8 425 355 33125 3
Proving Big-O ComplexityExample
Prove that f(n) = 3n2 + 4n log n + 10 is O(n2) by finding appropriate values for c and n0
We try to find some values of n and c by solving the following inequality 3n2 + 4n log n + 10 cn2
OR 3 + 4 log n n+ 10n2 c
( We used Log of base 2 but another base can be used as well)n0 1 2 3 4
c 13 75 622 562 3
How to determine complexity of code structures Loops for while and do-while
Complexity is determined by the number of iterations in the loop times the complexity of the body of the loop
Examples
for (int i = 0 i lt n i++) sum = sum - i
for (int i = 0 i lt n n i++) sum = sum + i
i=1while (i lt n) sum = sum + i i = i2
O(n)
O(n2)
O(log n)
How to determine complexity of code structures Nested Loops Complexity of inner loop complexity of outer loopExamples
sum = 0for(int i = 0 i lt n i++) for(int j = 0 j lt n j++) sum += i j
i = 1while(i lt= n) j = 1 while(j lt= n) statements of constant complexity j = j2 i = i+1
O(n2)
O(n log n)
How to determine complexity of code structures Sequence of statements Use Addition rule
O(s1 s2 s3 hellip sk) = O(s1) + O(s2) + O(s3) + hellip + O(sk) = O(max(s1 s2 s3 sk))
Example
Complexity is O(n2) + O(n) +O(1) = O(n2)
for (int j = 0 j lt n n j++) sum = sum + jfor (int k = 0 k lt n k++) sum = sum - lSystemoutprint(sum is now rdquo + sum)
char keyint[] X = new int[5] int[][] Y = new int[10][10] switch(key) case a for(int i = 0 i lt Xlength i++) sum += X[i] break case b for(int i = 0 i lt Ylength j++) for(int j = 0 j lt Y[0]length j++) sum += Y[i][j] break End of switch block
How to determine complexity of code structures
Switch Take the complexity of the most expensive case
o(n)
o(n2)
Overall Complexity o(n2)
char keyint[][] A = new int[5][5]int[][] B = new int[5][5]int[][] C = new int[5][5]if(key == +) for(int i = 0 i lt n i++) for(int j = 0 j lt n j++) C[i][j] = A[i][j] + B[i][j] End of if block else if(key == x) C = matrixMult(A B)
else Systemoutprintln(Error Enter + or x)
If Statement Take the complexity of the most expensive case
O(n2)
O(n3)
O(1)
How to determine complexity of code structures
Overall complexityO(n3)
How to determine complexity of code structures
• Sometimes if-else statements must carefully be checked:
  O(if-else) = O(Condition) + Max[O(if), O(else)]

int[] integers = new int[10];
...
if (hasPrimes(integers) == true)   // condition: O(n)
    integers[0] = 20;              // O(1)
else
    integers[0] = -20;             // O(1)

public boolean hasPrimes(int[] arr) {
    for (int i = 0; i < arr.length; i++) {
        ...
    }
    ...
} // End of hasPrimes()

O(if-else) = O(Condition) = O(n)
How to determine complexity of code structures
• Note: Sometimes a loop may cause the if-else rule not to be applicable. Consider the following loop:

while (n > 0) {
    if (n % 2 == 0) {
        System.out.println(n);
        n = n / 2;
    } else {
        System.out.println(n);
        System.out.println(n);
        n = n - 1;
    }
}

The else-branch has more basic operations; therefore one may conclude that the loop is O(n). However, the if-branch dominates. For example, if n is 60, then the sequence of n is 60, 30, 15, 14, 7, 6, 3, 2, 1, and 0. Hence the loop is logarithmic and its complexity is O(log n).
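The n = 60 trace can be reproduced directly; this sketch (our own) counts the loop passes instead of printing, confirming the logarithmic behavior:

```java
public class HalvingTrace {
    // Counts iterations of the mixed halve/decrement loop above.
    static int passes(int n) {
        int count = 0;
        while (n > 0) {
            if (n % 2 == 0) n = n / 2;  // even: halve (this branch dominates)
            else            n = n - 1;  // odd: decrement, making n even again
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // n takes the values 60, 30, 15, 14, 7, 6, 3, 2, 1, 0: nine passes total,
        // roughly 2*log2(60), not 60 passes as a linear loop would take.
        System.out.println(passes(60)); // 9
    }
}
```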
Comp 122
Asymptotic Complexity
• Running time of an algorithm as a function of input size n, for large n.
• Expressed using only the highest-order term in the expression for the exact running time.
  – Instead of exact running time, say Θ(n^2).
• Describes behavior of function in the limit.
• Written using Asymptotic Notation.
Asymptotic Notation
• Θ, O, Ω
• Defined for functions over the natural numbers.
  – Ex: f(n) = Θ(n^2).
  – Describes how f(n) grows in comparison to n^2.
• Each defines a set of functions; in practice used to compare two function sizes.
• The notations describe different rate-of-growth relations between the defining function and the defined set of functions.
Θ-notation

For function g(n), we define Θ(g(n)), big-Theta of n, as the set:
Θ(g(n)) = { f(n) : there exist positive constants c1, c2, and n0 such that for all n ≥ n0 we have 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) }

g(n) is an asymptotically tight bound for f(n).
Intuitively: the set of all functions that have the same rate of growth as g(n).
O-notation

For function g(n), we define O(g(n)), big-O of n, as the set:
O(g(n)) = { f(n) : there exist positive constants c and n0 such that for all n ≥ n0 we have 0 ≤ f(n) ≤ c·g(n) }

g(n) is an asymptotic upper bound for f(n).
Intuitively: the set of all functions whose rate of growth is the same as or lower than that of g(n).
f(n) = Θ(g(n)) implies f(n) = O(g(n)); that is, Θ(g(n)) ⊆ O(g(n)).
Ω-notation

For function g(n), we define Ω(g(n)), big-Omega of n, as the set:
Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that for all n ≥ n0 we have 0 ≤ c·g(n) ≤ f(n) }

g(n) is an asymptotic lower bound for f(n).
Intuitively: the set of all functions whose rate of growth is the same as or higher than that of g(n).
f(n) = Θ(g(n)) implies f(n) = Ω(g(n)); that is, Θ(g(n)) ⊆ Ω(g(n)).
Relations Between Θ, O, Ω

Theorem: For any two functions g(n) and f(n),
f(n) = Θ(g(n)) iff f(n) = O(g(n)) and f(n) = Ω(g(n)).
• I.e., Θ(g(n)) = O(g(n)) ∩ Ω(g(n)).
• In practice, asymptotically tight bounds are obtained from asymptotic upper and lower bounds.
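As a sanity check on the Θ definition, here is a sketch (our own, with hand-picked witnesses c1 = 3, c2 = 8, n0 = 1, which are not from the slides) that numerically verifies c1·g(n) ≤ f(n) ≤ c2·g(n) for f(n) = 3n² + 5 and g(n) = n²:

```java
public class ThetaCheck {
    // Checks 0 <= c1*g(n) <= f(n) <= c2*g(n) for every n in [n0, limit].
    static boolean holds(double c1, double c2, int n0, int limit) {
        for (int n = n0; n <= limit; n++) {
            double f = 3.0 * n * n + 5;   // f(n) = 3n^2 + 5
            double g = (double) n * n;    // g(n) = n^2
            if (!(0 <= c1 * g && c1 * g <= f && f <= c2 * g)) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // 3n^2 <= 3n^2 + 5 <= 8n^2 holds for all n >= 1, so f(n) = Θ(n^2).
        System.out.println(holds(3, 8, 1, 1000)); // true
    }
}
```

A finite scan is of course not a proof; it only illustrates the inequality the definition demands. The algebraic argument (5 ≤ 5n² for n ≥ 1) is what actually establishes the bound.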
Logarithms

x = log_b a is the exponent for which a = b^x.
Natural log: ln a = log_e a.  Binary log: lg a = log_2 a.
lg^2 a = (lg a)^2.  lg lg a = lg (lg a).

Useful identities:
  a = b^(log_b a)
  log_c (a·b) = log_c a + log_c b
  log_b a^n = n · log_b a
  log_b a = log_c a / log_c b
  log_b (1/a) = −log_b a
  log_b a = 1 / log_a b
  a^(log_b c) = c^(log_b a)
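A quick numerical spot-check of three of these identities (the sample values a = 7, b = 2, c = 10, n = 3 are our own choice for illustration):

```java
public class LogIdentities {
    // Change of base: log_b a = ln a / ln b.
    static double logB(double b, double a) { return Math.log(a) / Math.log(b); }

    // Checks a = b^(log_b a), log_b a^n = n*log_b a, and a^(log_b c) = c^(log_b a),
    // up to floating-point tolerance.
    static boolean allHold(double a, double b, double c, double n) {
        double eps = 1e-6;
        boolean id1 = Math.abs(Math.pow(b, logB(b, a)) - a) < eps;
        boolean id2 = Math.abs(logB(b, Math.pow(a, n)) - n * logB(b, a)) < eps;
        boolean id3 = Math.abs(Math.pow(a, logB(b, c)) - Math.pow(c, logB(b, a))) < eps;
        return id1 && id2 && id3;
    }

    public static void main(String[] args) {
        System.out.println(allHold(7, 2, 10, 3)); // true
    }
}
```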
Review on Summations
• Constant Series: For integers a and b, a ≤ b:
    Σ_{i=a..b} 1 = b − a + 1
• Linear Series (Arithmetic Series): For n ≥ 0:
    Σ_{i=1..n} i = 1 + 2 + … + n = n(n + 1)/2
• Quadratic Series: For n ≥ 0:
    Σ_{i=1..n} i^2 = 1 + 4 + 9 + … + n^2 = n(n + 1)(2n + 1)/6
Review on Summations
• Cubic Series: For n ≥ 0:
    Σ_{i=1..n} i^3 = 1 + 8 + 27 + … + n^3 = n^2 (n + 1)^2 / 4
• Geometric Series: For real x ≠ 1:
    Σ_{k=0..n} x^k = 1 + x + x^2 + … + x^n = (x^(n+1) − 1)/(x − 1)
  For |x| < 1:
    Σ_{k=0..∞} x^k = 1/(1 − x)
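These closed forms can be checked against brute-force summation; a small sketch (our own illustration) for n = 100:

```java
public class SummationCheck {
    // Brute-force sum of i^p for i = 1..n.
    static long sumPow(int n, int p) {
        long s = 0;
        for (long i = 1; i <= n; i++) {
            long t = 1;
            for (int k = 0; k < p; k++) t *= i;
            s += t;
        }
        return s;
    }

    public static void main(String[] args) {
        long n = 100;
        System.out.println(sumPow(100, 1) == n * (n + 1) / 2);               // linear:    5050
        System.out.println(sumPow(100, 2) == n * (n + 1) * (2 * n + 1) / 6); // quadratic: 338350
        System.out.println(sumPow(100, 3) == n * n * (n + 1) * (n + 1) / 4); // cubic:     25502500
    }
}
```

All three comparisons print true, matching the formulas above.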
Examples of Algorithms and their big-O complexity

Big-O Notation   Examples of Algorithms
O(1)             Push, Pop, Enqueue (if there is a tail reference), Dequeue, accessing an array element
O(log n)         Binary search
O(n)             Linear search
O(n log n)       Heap sort, Quick sort (average), Merge sort
O(n^2)           Selection sort, Insertion sort, Bubble sort
O(n^3)           Matrix multiplication
O(2^n)           Towers of Hanoi
Warnings about O-Notation
• Big-O notation cannot compare algorithms within the same complexity class.
• Big-O notation only gives sensible comparisons of algorithms in different complexity classes when n is large.
• Consider two algorithms for the same task:
  Linear: f(n) = 1000n
  Quadratic: f(n) = n^2/1000
  The quadratic one is faster for n < 1,000,000.
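The crossover point is easy to confirm directly; this sketch (our own) evaluates both cost functions just below and just above n = 1,000,000:

```java
public class CrossoverDemo {
    static double linear(double n)    { return 1000 * n; }     // f(n) = 1000n
    static double quadratic(double n) { return n * n / 1000; } // f(n) = n^2/1000

    public static void main(String[] args) {
        // Below the crossover the "worse" O(n^2) algorithm is actually cheaper;
        // above it, the linear algorithm wins, and the gap only grows with n.
        System.out.println(quadratic(999_999)   < linear(999_999));   // true
        System.out.println(quadratic(1_000_001) > linear(1_000_001)); // true
    }
}
```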
Rules for using big-O
• For large values of input n, the constants and terms with lower degree of n are ignored.
1. Multiplicative Constants Rule: ignoring constant factors.
   O(c · f(n)) = O(f(n)), where c is a constant.
   Example: O(20 n^3) = O(n^3)
2. Addition Rule: ignoring smaller terms.
   If O(f(n)) < O(h(n)), then O(f(n) + h(n)) = O(h(n)).
   Example: O(n^2 log n + n^3) = O(n^3)
            O(2000 n^3 + 2n + 800n + 10n + 27n log n + 5) = O(n^3)
3. Multiplication Rule: O(f(n) · h(n)) = O(f(n)) · O(h(n)).
   Example: O((n^3 + 2n^2 + 3n log n + 7)(8n^2 + 5n + 2)) = O(n^5)
Proving Big-O Complexity

To prove that f(n) is O(g(n)), we find any pair of values n0 and c that satisfy f(n) ≤ c · g(n) for n ≥ n0.
Note: the pair (n0, c) is not unique. If such a pair exists, then there is an infinite number of such pairs.

Example: Prove that f(n) = 3n^2 + 5 is O(n^2).
We try to find some values of n and c by solving the following inequality:
3n^2 + 5 ≤ c n^2   OR   3 + 5/n^2 ≤ c
(By putting different values for n, we get corresponding values for c.)

n0 | 1 | 2    | 3    | 4      | …
c  | 8 | 4.25 | 3.55 | 3.3125 | → 3
Proving Big-O Complexity
Example: Prove that f(n) = 3n^2 + 4n log n + 10 is O(n^2) by finding appropriate values for c and n0.
We try to find some values of n and c by solving the following inequality:
3n^2 + 4n log n + 10 ≤ c n^2   OR   3 + 4 log n / n + 10/n^2 ≤ c
(We used log of base 2, but another base can be used as well.)

n0 | 1  | 2   | 3    | 4    | …
c  | 13 | 7.5 | 6.22 | 5.62 | → 3
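The table of candidate c values can be reproduced by evaluating the right-hand side of the inequality; a sketch (our own illustration):

```java
public class BigOWitness {
    // c(n) = 3 + 4*log2(n)/n + 10/n^2, the right-hand side of the inequality above.
    // Any c >= c(n0) works as a big-O witness for all n >= n0, since c(n)
    // shrinks toward 3 as n grows.
    static double c(double n) {
        return 3 + 4 * (Math.log(n) / Math.log(2)) / n + 10 / (n * n);
    }

    public static void main(String[] args) {
        System.out.println(c(1));  // 13.0  (log2(1) = 0, so 3 + 0 + 10)
        System.out.println(c(2));  // 7.5   (3 + 2 + 2.5)
        System.out.println(c(4));  // 5.625 (3 + 2 + 0.625)
    }
}
```

For instance, (n0, c) = (2, 7.5) is one valid witness pair; so is (1, 13) or any larger c.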
How to determine complexity of code structures Loops for while and do-while
Complexity is determined by the number of iterations in the loop times the complexity of the body of the loop
Examples
for (int i = 0 i lt n i++) sum = sum - i
for (int i = 0 i lt n n i++) sum = sum + i
i=1while (i lt n) sum = sum + i i = i2
O(n)
O(n2)
O(log n)
How to determine complexity of code structures Nested Loops Complexity of inner loop complexity of outer loopExamples
sum = 0for(int i = 0 i lt n i++) for(int j = 0 j lt n j++) sum += i j
i = 1while(i lt= n) j = 1 while(j lt= n) statements of constant complexity j = j2 i = i+1
O(n2)
O(n log n)
How to determine complexity of code structures Sequence of statements Use Addition rule
O(s1 s2 s3 hellip sk) = O(s1) + O(s2) + O(s3) + hellip + O(sk) = O(max(s1 s2 s3 sk))
Example
Complexity is O(n2) + O(n) +O(1) = O(n2)
for (int j = 0 j lt n n j++) sum = sum + jfor (int k = 0 k lt n k++) sum = sum - lSystemoutprint(sum is now rdquo + sum)
char keyint[] X = new int[5] int[][] Y = new int[10][10] switch(key) case a for(int i = 0 i lt Xlength i++) sum += X[i] break case b for(int i = 0 i lt Ylength j++) for(int j = 0 j lt Y[0]length j++) sum += Y[i][j] break End of switch block
How to determine complexity of code structures
Switch Take the complexity of the most expensive case
o(n)
o(n2)
Overall Complexity o(n2)
char keyint[][] A = new int[5][5]int[][] B = new int[5][5]int[][] C = new int[5][5]if(key == +) for(int i = 0 i lt n i++) for(int j = 0 j lt n j++) C[i][j] = A[i][j] + B[i][j] End of if block else if(key == x) C = matrixMult(A B)
else Systemoutprintln(Error Enter + or x)
If Statement Take the complexity of the most expensive case
O(n2)
O(n3)
O(1)
How to determine complexity of code structures
Overall complexityO(n3)
int[] integers = new int[10]if(hasPrimes(integers) == true) integers[0] = 20else
integers[0] = -20
public boolean hasPrimes(int[] arr) for(int i = 0 i lt arrlength i++)
End of hasPrimes()
How to determine complexity of code structures
bull Sometimes if-else statements must carefully be checkedO(if-else) = O(Condition)+ Max[O(if) O(else)]
O(1)
O(1)
O(if-else) = O(Condition) = O(n)
O(n)
How to determine complexity of code structuresbull Note Sometimes a loop may cause the if-else rule not to be
applicable Consider the following loop
The else-branch has more basic operations therefore one may conclude that the loop is O(n) However the if-branch dominates For example if n is 60 then the sequence of n is 60 30 15 14 7 6 3 2 1 and 0 Hence the loop is logarithmic and its complexity is O(log n)
while (n gt 0) if (n 2 = = 0) Systemoutprintln(n) n = n 2 else Systemoutprintln(n) Systemoutprintln(n)
n = n ndash 1
Comp 122
Asymptotic Complexity
bull Running time of an algorithm as a function of input size n for large n
bull Expressed using only the highest-order term in the expression for the exact running timendash Instead of exact running time say Q(n2)
bull Describes behavior of function in the limitbull Written using Asymptotic Notation
Asymptotic Notationbull Q O Wbull Defined for functions over the natural numbers
ndash Ex f(n) = Q(n2)ndash Describes how f(n) grows in comparison to n2
bull Define a set of functions in practice used to compare two function sizes
bull The notations describe different rate-of-growth relations between the defining function and the defined set of functions
Q-notation
Q(g(n)) = f(n) positive constants c1 c2 and n0 such that n n0
we have 0 c1g(n) f(n) c2g(n)
For function g(n) we define Q(g(n)) big-Theta of n as the set
g(n) is an asymptotically tight bound for f(n)
Intuitively Set of all functions thathave the same rate of growth as g(n)
O-notation
O(g(n)) = f(n) positive constants c and n0 such that n n0
we have 0 f(n) cg(n)
For function g(n) we define O(g(n)) big-O of n as the set
g(n) is an asymptotic upper bound for f(n)
Intuitively Set of all functions whose rate of growth is the same as or lower than that of g(n)
f(n) = Q(g(n)) f(n) = O(g(n))Q(g(n)) O(g(n))
W -notation
g(n) is an asymptotic lower bound for f(n)
Intuitively Set of all functions whose rate of growth is the same as or higher than that of g(n)
f(n) = Q(g(n)) f(n) = W(g(n))Q(g(n)) W(g(n))
W(g(n)) = f(n) positive constants c and n0 such that n n0
we have 0 cg(n) f(n)
For function g(n) we define W(g(n)) big-Omega of n as the set
Relations Between Q O W
Relations Between Q W O
bull Ie Q(g(n)) = O(g(n)) Ccedil W(g(n))
bull In practice asymptotically tight bounds are obtained from asymptotic upper and lower bounds
Theorem For any two functions g(n) and f(n) f(n) = Q(g(n)) iff
f(n) = O(g(n)) and f(n) = W(g(n))
Logarithms
x = logba is the exponent for a = bx
Natural log ln a = logeaBinary log lg a = log2a
lg2a = (lg a)2
lg lg a = lg (lg a) ac
ab
bb
c
cb
bn
b
ccc
bb ca
ba
aabaa
ana
baab
loglog
log1log
log)1(loglogloglog
loglog
loglog)(log
Review on Summationsbull Constant Series For integers a and b a b
bull Linear Series (Arithmetic Series) For n 0
bull Quadratic Series For n 0
b
ai
ab 11
2)1(21
1
nnnin
i
n
i
nnnni1
2222
6)12)(1(21
Review on Summationsbull Cubic Series For n 0
bull Geometric Series For real x 1
For |x| lt 1
n
i
nnni1
223333
4)1(21
n
k
nnk
xxxxxx
0
12
111
0 11
k
k
xx
Warnings about O-Notation bull Big-O notation cannot compare
algorithms in the same complexity classbull Big-O notation only gives sensible
comparisons of algorithms in different complexity classes when n is large
bull Consider two algorithms for same task Linear f(n) = 1000 n Quadratic f(n) = n21000The quadratic one is faster for n lt 1000000
Rules for using big-O bull For large values of input n the constants and terms with lower
degree of n are ignored
1 Multiplicative Constants Rule Ignoring constant factors O(c f(n)) = O(f(n)) where c is a constant Example
O(20 n3) = O(n3)
2 Addition Rule Ignoring smaller termsIf O(f(n)) lt O(h(n)) then O(f(n) + h(n)) = O(h(n))Example
O(n2 log n + n3) = O(n3)O(2000 n3 + 2n + n800 + 10n + 27n log n + 5) = O(n )
3 Multiplication Rule O(f(n) h(n)) = O(f(n)) O(h(n))Example
O((n3 + 2n 2 + 3n log n + 7)(8n 2 + 5n + 2)) = O(n 5)
Proving Big-O Complexity
To prove that f(n) is O(g(n)) we find any pair of values n0 and c that satisfy f(n) le c g(n) for n n0
Note The pair (n0 c) is not unique If such a pair exists then there is an infinite number of such pairs
Example Prove that f(n) = 3n2 + 5 is O(n2)We try to find some values of n and c by solving the following inequality
3n2 + 5 cn2 OR 3 + 5n2 c
(By putting different values for n we get corresponding values for c)
n0 1 2 3 4
c 8 425 355 33125 3
Proving Big-O ComplexityExample
Prove that f(n) = 3n2 + 4n log n + 10 is O(n2) by finding appropriate values for c and n0
We try to find some values of n and c by solving the following inequality 3n2 + 4n log n + 10 cn2
OR 3 + 4 log n n+ 10n2 c
( We used Log of base 2 but another base can be used as well)n0 1 2 3 4
c 13 75 622 562 3
How to determine complexity of code structures Loops for while and do-while
Complexity is determined by the number of iterations in the loop times the complexity of the body of the loop
Examples
for (int i = 0 i lt n i++) sum = sum - i
for (int i = 0 i lt n n i++) sum = sum + i
i=1while (i lt n) sum = sum + i i = i2
O(n)
O(n2)
O(log n)
How to determine complexity of code structures Nested Loops Complexity of inner loop complexity of outer loopExamples
sum = 0for(int i = 0 i lt n i++) for(int j = 0 j lt n j++) sum += i j
i = 1while(i lt= n) j = 1 while(j lt= n) statements of constant complexity j = j2 i = i+1
O(n2)
O(n log n)
How to determine complexity of code structures Sequence of statements Use Addition rule
O(s1 s2 s3 hellip sk) = O(s1) + O(s2) + O(s3) + hellip + O(sk) = O(max(s1 s2 s3 sk))
Example
Complexity is O(n2) + O(n) +O(1) = O(n2)
for (int j = 0 j lt n n j++) sum = sum + jfor (int k = 0 k lt n k++) sum = sum - lSystemoutprint(sum is now rdquo + sum)
char keyint[] X = new int[5] int[][] Y = new int[10][10] switch(key) case a for(int i = 0 i lt Xlength i++) sum += X[i] break case b for(int i = 0 i lt Ylength j++) for(int j = 0 j lt Y[0]length j++) sum += Y[i][j] break End of switch block
How to determine complexity of code structures
Switch Take the complexity of the most expensive case
o(n)
o(n2)
Overall Complexity o(n2)
char keyint[][] A = new int[5][5]int[][] B = new int[5][5]int[][] C = new int[5][5]if(key == +) for(int i = 0 i lt n i++) for(int j = 0 j lt n j++) C[i][j] = A[i][j] + B[i][j] End of if block else if(key == x) C = matrixMult(A B)
else Systemoutprintln(Error Enter + or x)
If Statement Take the complexity of the most expensive case
O(n2)
O(n3)
O(1)
How to determine complexity of code structures
Overall complexityO(n3)
int[] integers = new int[10]if(hasPrimes(integers) == true) integers[0] = 20else
integers[0] = -20
public boolean hasPrimes(int[] arr) for(int i = 0 i lt arrlength i++)
End of hasPrimes()
How to determine complexity of code structures
bull Sometimes if-else statements must carefully be checkedO(if-else) = O(Condition)+ Max[O(if) O(else)]
O(1)
O(1)
O(if-else) = O(Condition) = O(n)
O(n)
How to determine complexity of code structuresbull Note Sometimes a loop may cause the if-else rule not to be
applicable Consider the following loop
The else-branch has more basic operations therefore one may conclude that the loop is O(n) However the if-branch dominates For example if n is 60 then the sequence of n is 60 30 15 14 7 6 3 2 1 and 0 Hence the loop is logarithmic and its complexity is O(log n)
while (n gt 0) if (n 2 = = 0) Systemoutprintln(n) n = n 2 else Systemoutprintln(n) Systemoutprintln(n)
n = n ndash 1
Comp 122
Asymptotic Complexity
bull Running time of an algorithm as a function of input size n for large n
bull Expressed using only the highest-order term in the expression for the exact running timendash Instead of exact running time say Q(n2)
bull Describes behavior of function in the limitbull Written using Asymptotic Notation
Asymptotic Notationbull Q O Wbull Defined for functions over the natural numbers
ndash Ex f(n) = Q(n2)ndash Describes how f(n) grows in comparison to n2
bull Define a set of functions in practice used to compare two function sizes
bull The notations describe different rate-of-growth relations between the defining function and the defined set of functions
Q-notation
Q(g(n)) = f(n) positive constants c1 c2 and n0 such that n n0
we have 0 c1g(n) f(n) c2g(n)
For function g(n) we define Q(g(n)) big-Theta of n as the set
g(n) is an asymptotically tight bound for f(n)
Intuitively Set of all functions thathave the same rate of growth as g(n)
O-notation
O(g(n)) = f(n) positive constants c and n0 such that n n0
we have 0 f(n) cg(n)
For function g(n) we define O(g(n)) big-O of n as the set
g(n) is an asymptotic upper bound for f(n)
Intuitively Set of all functions whose rate of growth is the same as or lower than that of g(n)
f(n) = Q(g(n)) f(n) = O(g(n))Q(g(n)) O(g(n))
W -notation
g(n) is an asymptotic lower bound for f(n)
Intuitively Set of all functions whose rate of growth is the same as or higher than that of g(n)
f(n) = Q(g(n)) f(n) = W(g(n))Q(g(n)) W(g(n))
W(g(n)) = f(n) positive constants c and n0 such that n n0
we have 0 cg(n) f(n)
For function g(n) we define W(g(n)) big-Omega of n as the set
Relations Between Q O W
Relations Between Q W O
bull Ie Q(g(n)) = O(g(n)) Ccedil W(g(n))
bull In practice asymptotically tight bounds are obtained from asymptotic upper and lower bounds
Theorem For any two functions g(n) and f(n) f(n) = Q(g(n)) iff
f(n) = O(g(n)) and f(n) = W(g(n))
Logarithms
x = logba is the exponent for a = bx
Natural log ln a = logeaBinary log lg a = log2a
lg2a = (lg a)2
lg lg a = lg (lg a) ac
ab
bb
c
cb
bn
b
ccc
bb ca
ba
aabaa
ana
baab
loglog
log1log
log)1(loglogloglog
loglog
loglog)(log
Review on Summationsbull Constant Series For integers a and b a b
bull Linear Series (Arithmetic Series) For n 0
bull Quadratic Series For n 0
b
ai
ab 11
2)1(21
1
nnnin
i
n
i
nnnni1
2222
6)12)(1(21
Review on Summationsbull Cubic Series For n 0
bull Geometric Series For real x 1
For |x| lt 1
n
i
nnni1
223333
4)1(21
n
k
nnk
xxxxxx
0
12
111
0 11
k
k
xx
Rules for using big-O bull For large values of input n the constants and terms with lower
degree of n are ignored
1 Multiplicative Constants Rule Ignoring constant factors O(c f(n)) = O(f(n)) where c is a constant Example
O(20 n3) = O(n3)
2 Addition Rule Ignoring smaller termsIf O(f(n)) lt O(h(n)) then O(f(n) + h(n)) = O(h(n))Example
O(n2 log n + n3) = O(n3)O(2000 n3 + 2n + n800 + 10n + 27n log n + 5) = O(n )
3 Multiplication Rule O(f(n) h(n)) = O(f(n)) O(h(n))Example
O((n3 + 2n 2 + 3n log n + 7)(8n 2 + 5n + 2)) = O(n 5)
Proving Big-O Complexity
To prove that f(n) is O(g(n)) we find any pair of values n0 and c that satisfy f(n) le c g(n) for n n0
Note The pair (n0 c) is not unique If such a pair exists then there is an infinite number of such pairs
Example Prove that f(n) = 3n2 + 5 is O(n2)We try to find some values of n and c by solving the following inequality
3n2 + 5 cn2 OR 3 + 5n2 c
(By putting different values for n we get corresponding values for c)
n0 1 2 3 4
c 8 425 355 33125 3
Proving Big-O ComplexityExample
Prove that f(n) = 3n2 + 4n log n + 10 is O(n2) by finding appropriate values for c and n0
We try to find some values of n and c by solving the following inequality 3n2 + 4n log n + 10 cn2
OR 3 + 4 log n n+ 10n2 c
( We used Log of base 2 but another base can be used as well)n0 1 2 3 4
c 13 75 622 562 3
How to determine complexity of code structures Loops for while and do-while
Complexity is determined by the number of iterations in the loop times the complexity of the body of the loop
Examples
for (int i = 0 i lt n i++) sum = sum - i
for (int i = 0 i lt n n i++) sum = sum + i
i=1while (i lt n) sum = sum + i i = i2
O(n)
O(n2)
O(log n)
How to determine complexity of code structures Nested Loops Complexity of inner loop complexity of outer loopExamples
sum = 0for(int i = 0 i lt n i++) for(int j = 0 j lt n j++) sum += i j
i = 1while(i lt= n) j = 1 while(j lt= n) statements of constant complexity j = j2 i = i+1
O(n2)
O(n log n)
How to determine complexity of code structures Sequence of statements Use Addition rule
O(s1 s2 s3 hellip sk) = O(s1) + O(s2) + O(s3) + hellip + O(sk) = O(max(s1 s2 s3 sk))
Example
Complexity is O(n2) + O(n) +O(1) = O(n2)
for (int j = 0 j lt n n j++) sum = sum + jfor (int k = 0 k lt n k++) sum = sum - lSystemoutprint(sum is now rdquo + sum)
char keyint[] X = new int[5] int[][] Y = new int[10][10] switch(key) case a for(int i = 0 i lt Xlength i++) sum += X[i] break case b for(int i = 0 i lt Ylength j++) for(int j = 0 j lt Y[0]length j++) sum += Y[i][j] break End of switch block
How to determine complexity of code structures
Switch Take the complexity of the most expensive case
o(n)
o(n2)
Overall Complexity o(n2)
char keyint[][] A = new int[5][5]int[][] B = new int[5][5]int[][] C = new int[5][5]if(key == +) for(int i = 0 i lt n i++) for(int j = 0 j lt n j++) C[i][j] = A[i][j] + B[i][j] End of if block else if(key == x) C = matrixMult(A B)
else Systemoutprintln(Error Enter + or x)
If Statement Take the complexity of the most expensive case
O(n2)
O(n3)
O(1)
How to determine complexity of code structures
Overall complexityO(n3)
int[] integers = new int[10]if(hasPrimes(integers) == true) integers[0] = 20else
integers[0] = -20
public boolean hasPrimes(int[] arr) for(int i = 0 i lt arrlength i++)
End of hasPrimes()
How to determine complexity of code structures
bull Sometimes if-else statements must carefully be checkedO(if-else) = O(Condition)+ Max[O(if) O(else)]
O(1)
O(1)
O(if-else) = O(Condition) = O(n)
O(n)
How to determine complexity of code structuresbull Note Sometimes a loop may cause the if-else rule not to be
applicable Consider the following loop
The else-branch has more basic operations therefore one may conclude that the loop is O(n) However the if-branch dominates For example if n is 60 then the sequence of n is 60 30 15 14 7 6 3 2 1 and 0 Hence the loop is logarithmic and its complexity is O(log n)
while (n gt 0) if (n 2 = = 0) Systemoutprintln(n) n = n 2 else Systemoutprintln(n) Systemoutprintln(n)
n = n ndash 1
Comp 122
Asymptotic Complexity
bull Running time of an algorithm as a function of input size n for large n
bull Expressed using only the highest-order term in the expression for the exact running timendash Instead of exact running time say Q(n2)
bull Describes behavior of function in the limitbull Written using Asymptotic Notation
Asymptotic Notationbull Q O Wbull Defined for functions over the natural numbers
ndash Ex f(n) = Q(n2)ndash Describes how f(n) grows in comparison to n2
bull Define a set of functions in practice used to compare two function sizes
bull The notations describe different rate-of-growth relations between the defining function and the defined set of functions
Q-notation
Q(g(n)) = f(n) positive constants c1 c2 and n0 such that n n0
we have 0 c1g(n) f(n) c2g(n)
For function g(n) we define Q(g(n)) big-Theta of n as the set
g(n) is an asymptotically tight bound for f(n)
Intuitively Set of all functions thathave the same rate of growth as g(n)
O-notation
O(g(n)) = f(n) positive constants c and n0 such that n n0
we have 0 f(n) cg(n)
For function g(n) we define O(g(n)) big-O of n as the set
g(n) is an asymptotic upper bound for f(n)
Intuitively Set of all functions whose rate of growth is the same as or lower than that of g(n)
f(n) = Q(g(n)) f(n) = O(g(n))Q(g(n)) O(g(n))
W -notation
g(n) is an asymptotic lower bound for f(n)
Intuitively Set of all functions whose rate of growth is the same as or higher than that of g(n)
f(n) = Q(g(n)) f(n) = W(g(n))Q(g(n)) W(g(n))
W(g(n)) = f(n) positive constants c and n0 such that n n0
we have 0 cg(n) f(n)
For function g(n) we define W(g(n)) big-Omega of n as the set
Relations Between Q O W
Relations Between Q W O
bull Ie Q(g(n)) = O(g(n)) Ccedil W(g(n))
bull In practice asymptotically tight bounds are obtained from asymptotic upper and lower bounds
Theorem For any two functions g(n) and f(n) f(n) = Q(g(n)) iff
f(n) = O(g(n)) and f(n) = W(g(n))
Logarithms
x = logba is the exponent for a = bx
Natural log ln a = logeaBinary log lg a = log2a
lg2a = (lg a)2
lg lg a = lg (lg a) ac
ab
bb
c
cb
bn
b
ccc
bb ca
ba
aabaa
ana
baab
loglog
log1log
log)1(loglogloglog
loglog
loglog)(log
Review on Summationsbull Constant Series For integers a and b a b
bull Linear Series (Arithmetic Series) For n 0
bull Quadratic Series For n 0
b
ai
ab 11
2)1(21
1
nnnin
i
n
i
nnnni1
2222
6)12)(1(21
Review on Summationsbull Cubic Series For n 0
bull Geometric Series For real x 1
For |x| lt 1
n
i
nnni1
223333
4)1(21
n
k
nnk
xxxxxx
0
12
111
0 11
k
k
xx
Proving Big-O Complexity
To prove that f(n) is O(g(n)) we find any pair of values n0 and c that satisfy f(n) le c g(n) for n n0
Note The pair (n0 c) is not unique If such a pair exists then there is an infinite number of such pairs
Example Prove that f(n) = 3n2 + 5 is O(n2)We try to find some values of n and c by solving the following inequality
3n2 + 5 cn2 OR 3 + 5n2 c
(By putting different values for n we get corresponding values for c)
n0 1 2 3 4
c 8 425 355 33125 3
Proving Big-O ComplexityExample
Prove that f(n) = 3n2 + 4n log n + 10 is O(n2) by finding appropriate values for c and n0
We try to find some values of n and c by solving the following inequality 3n2 + 4n log n + 10 cn2
OR 3 + 4 log n n+ 10n2 c
( We used Log of base 2 but another base can be used as well)n0 1 2 3 4
c 13 75 622 562 3
How to determine complexity of code structures Loops for while and do-while
Complexity is determined by the number of iterations in the loop times the complexity of the body of the loop
Examples
for (int i = 0 i lt n i++) sum = sum - i
for (int i = 0 i lt n n i++) sum = sum + i
i=1while (i lt n) sum = sum + i i = i2
O(n)
O(n2)
O(log n)
How to determine complexity of code structures Nested Loops Complexity of inner loop complexity of outer loopExamples
sum = 0for(int i = 0 i lt n i++) for(int j = 0 j lt n j++) sum += i j
i = 1while(i lt= n) j = 1 while(j lt= n) statements of constant complexity j = j2 i = i+1
O(n2)
O(n log n)
How to determine complexity of code structures Sequence of statements Use Addition rule
O(s1 s2 s3 hellip sk) = O(s1) + O(s2) + O(s3) + hellip + O(sk) = O(max(s1 s2 s3 sk))
Example
Complexity is O(n2) + O(n) +O(1) = O(n2)
for (int j = 0 j lt n n j++) sum = sum + jfor (int k = 0 k lt n k++) sum = sum - lSystemoutprint(sum is now rdquo + sum)
char keyint[] X = new int[5] int[][] Y = new int[10][10] switch(key) case a for(int i = 0 i lt Xlength i++) sum += X[i] break case b for(int i = 0 i lt Ylength j++) for(int j = 0 j lt Y[0]length j++) sum += Y[i][j] break End of switch block
How to determine complexity of code structures
Switch Take the complexity of the most expensive case
o(n)
o(n2)
Overall Complexity o(n2)
char keyint[][] A = new int[5][5]int[][] B = new int[5][5]int[][] C = new int[5][5]if(key == +) for(int i = 0 i lt n i++) for(int j = 0 j lt n j++) C[i][j] = A[i][j] + B[i][j] End of if block else if(key == x) C = matrixMult(A B)
else Systemoutprintln(Error Enter + or x)
If Statement Take the complexity of the most expensive case
O(n2)
O(n3)
O(1)
How to determine complexity of code structures
Overall complexityO(n3)
int[] integers = new int[10]if(hasPrimes(integers) == true) integers[0] = 20else
integers[0] = -20
public boolean hasPrimes(int[] arr) for(int i = 0 i lt arrlength i++)
End of hasPrimes()
How to determine complexity of code structures
bull Sometimes if-else statements must carefully be checkedO(if-else) = O(Condition)+ Max[O(if) O(else)]
O(1)
O(1)
O(if-else) = O(Condition) = O(n)
O(n)
How to determine complexity of code structuresbull Note Sometimes a loop may cause the if-else rule not to be
applicable Consider the following loop
The else-branch has more basic operations therefore one may conclude that the loop is O(n) However the if-branch dominates For example if n is 60 then the sequence of n is 60 30 15 14 7 6 3 2 1 and 0 Hence the loop is logarithmic and its complexity is O(log n)
while (n gt 0) if (n 2 = = 0) Systemoutprintln(n) n = n 2 else Systemoutprintln(n) Systemoutprintln(n)
n = n ndash 1
Comp 122
Asymptotic Complexity
bull Running time of an algorithm as a function of input size n for large n
bull Expressed using only the highest-order term in the expression for the exact running timendash Instead of exact running time say Q(n2)
bull Describes behavior of function in the limitbull Written using Asymptotic Notation
Asymptotic Notationbull Q O Wbull Defined for functions over the natural numbers
ndash Ex f(n) = Q(n2)ndash Describes how f(n) grows in comparison to n2
bull Define a set of functions in practice used to compare two function sizes
bull The notations describe different rate-of-growth relations between the defining function and the defined set of functions
Q-notation
Q(g(n)) = f(n) positive constants c1 c2 and n0 such that n n0
we have 0 c1g(n) f(n) c2g(n)
For function g(n) we define Q(g(n)) big-Theta of n as the set
g(n) is an asymptotically tight bound for f(n)
Intuitively Set of all functions thathave the same rate of growth as g(n)
O-notation
O(g(n)) = f(n) positive constants c and n0 such that n n0
we have 0 f(n) cg(n)
For function g(n) we define O(g(n)) big-O of n as the set
g(n) is an asymptotic upper bound for f(n)
Intuitively Set of all functions whose rate of growth is the same as or lower than that of g(n)
f(n) = Q(g(n)) f(n) = O(g(n))Q(g(n)) O(g(n))
W -notation
g(n) is an asymptotic lower bound for f(n)
Intuitively Set of all functions whose rate of growth is the same as or higher than that of g(n)
f(n) = Q(g(n)) f(n) = W(g(n))Q(g(n)) W(g(n))
W(g(n)) = f(n) positive constants c and n0 such that n n0
we have 0 cg(n) f(n)
For function g(n) we define W(g(n)) big-Omega of n as the set
Relations Between Q O W
Relations Between Q W O
bull Ie Q(g(n)) = O(g(n)) Ccedil W(g(n))
bull In practice asymptotically tight bounds are obtained from asymptotic upper and lower bounds
Theorem For any two functions g(n) and f(n) f(n) = Q(g(n)) iff
f(n) = O(g(n)) and f(n) = W(g(n))
Logarithms
x = logba is the exponent for a = bx
Natural log ln a = logeaBinary log lg a = log2a
lg2a = (lg a)2
lg lg a = lg (lg a) ac
ab
bb
c
cb
bn
b
ccc
bb ca
ba
aabaa
ana
baab
loglog
log1log
log)1(loglogloglog
loglog
loglog)(log
Review on Summationsbull Constant Series For integers a and b a b
bull Linear Series (Arithmetic Series) For n 0
bull Quadratic Series For n 0
b
ai
ab 11
2)1(21
1
nnnin
i
n
i
nnnni1
2222
6)12)(1(21
Review on Summationsbull Cubic Series For n 0
bull Geometric Series For real x 1
For |x| lt 1
n
i
nnni1
223333
4)1(21
n
k
nnk
xxxxxx
0
12
111
0 11
k
k
xx
Proving Big-O ComplexityExample
Prove that f(n) = 3n2 + 4n log n + 10 is O(n2) by finding appropriate values for c and n0
We try to find some values of n and c by solving the following inequality 3n2 + 4n log n + 10 cn2
OR 3 + 4 log n n+ 10n2 c
( We used Log of base 2 but another base can be used as well)n0 1 2 3 4
c 13 75 622 562 3
How to determine complexity of code structures Loops for while and do-while
Complexity is determined by the number of iterations in the loop times the complexity of the body of the loop
Examples
for (int i = 0 i lt n i++) sum = sum - i
for (int i = 0 i lt n n i++) sum = sum + i
i=1while (i lt n) sum = sum + i i = i2
O(n)
O(n2)
O(log n)
How to determine complexity of code structures Nested Loops Complexity of inner loop complexity of outer loopExamples
sum = 0for(int i = 0 i lt n i++) for(int j = 0 j lt n j++) sum += i j
i = 1while(i lt= n) j = 1 while(j lt= n) statements of constant complexity j = j2 i = i+1
O(n2)
O(n log n)
How to determine complexity of code structures Sequence of statements Use Addition rule
O(s1 s2 s3 hellip sk) = O(s1) + O(s2) + O(s3) + hellip + O(sk) = O(max(s1 s2 s3 sk))
Example
Complexity is O(n2) + O(n) +O(1) = O(n2)
for (int j = 0 j lt n n j++) sum = sum + jfor (int k = 0 k lt n k++) sum = sum - lSystemoutprint(sum is now rdquo + sum)
char keyint[] X = new int[5] int[][] Y = new int[10][10] switch(key) case a for(int i = 0 i lt Xlength i++) sum += X[i] break case b for(int i = 0 i lt Ylength j++) for(int j = 0 j lt Y[0]length j++) sum += Y[i][j] break End of switch block
How to determine complexity of code structures
Switch Take the complexity of the most expensive case
o(n)
o(n2)
Overall Complexity o(n2)
char keyint[][] A = new int[5][5]int[][] B = new int[5][5]int[][] C = new int[5][5]if(key == +) for(int i = 0 i lt n i++) for(int j = 0 j lt n j++) C[i][j] = A[i][j] + B[i][j] End of if block else if(key == x) C = matrixMult(A B)
else Systemoutprintln(Error Enter + or x)
If Statement Take the complexity of the most expensive case
O(n2)
O(n3)
O(1)
How to determine complexity of code structures
Overall complexityO(n3)
How to determine complexity of code structures

• Sometimes if-else statements must be checked carefully:
  O(if-else) = O(condition) + max(O(if-branch), O(else-branch))

int[] integers = new int[10];
if (hasPrimes(integers) == true)                    // condition: O(n)
    integers[0] = 20;                               // O(1)
else
    integers[0] = -20;                              // O(1)

public boolean hasPrimes(int[] arr) {
    for (int i = 0; i < arr.length; i++) {
        // ... check each element ...
    }
    // ...
} // End of hasPrimes()

Here the condition dominates both branches, so:
O(if-else) = O(condition) = O(n)
How to determine complexity of code structures

• Note: Sometimes a loop may make the if-else rule inapplicable. Consider the following loop:

while (n > 0) {
    if (n % 2 == 0) {
        System.out.println(n);
        n = n / 2;
    } else {
        System.out.println(n);
        System.out.println(n);
        n = n - 1;
    }
}

The else-branch has more basic operations, so one might conclude that the loop is O(n). However, the if-branch dominates: the else-branch can be taken at most once between halvings. For example, if n is 60, then the sequence of values of n is 60, 30, 15, 14, 7, 6, 3, 2, 1, and 0. Hence the loop is logarithmic and its complexity is O(log n).
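The claimed sequence is easy to reproduce; a small sketch (class and method names are ours) that collects the values n takes on instead of printing:

```java
import java.util.ArrayList;
import java.util.List;

public class HalvingLoop {
    // Same control flow as the loop above, recording n at each iteration.
    static List<Integer> trace(int n) {
        List<Integer> values = new ArrayList<>();
        while (n > 0) {
            values.add(n);
            if (n % 2 == 0) {
                n = n / 2;      // if-branch: halving dominates the running time
            } else {
                n = n - 1;      // else-branch: taken at most once between halvings
            }
        }
        return values;
    }

    public static void main(String[] args) {
        // Prints [60, 30, 15, 14, 7, 6, 3, 2, 1]: 9 iterations for n = 60
        System.out.println(trace(60));
    }
}
```

Nine iterations for n = 60 is consistent with O(log n), not with the ~60 iterations a linear loop would take.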
Comp 122
Asymptotic Complexity

• Running time of an algorithm as a function of input size n, for large n.
• Expressed using only the highest-order term in the expression for the exact running time.
  – Instead of the exact running time, say Θ(n²).
• Describes the behavior of the function in the limit.
• Written using asymptotic notation.

Asymptotic Notation
• Θ, O, Ω
• Defined for functions over the natural numbers.
  – Ex: f(n) = Θ(n²).
  – Describes how f(n) grows in comparison to n².
• Each notation defines a set of functions; in practice, used to compare the sizes of two functions.
• The notations describe different rate-of-growth relations between the defining function and the defined set of functions.
Θ-notation

For a function g(n), we define Θ(g(n)), big-Theta of g(n), as the set:

Θ(g(n)) = { f(n) : there exist positive constants c1, c2, and n0 such that for all n ≥ n0 we have 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) }

g(n) is an asymptotically tight bound for f(n).

Intuitively: the set of all functions that have the same rate of growth as g(n).
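To make the definition concrete, one can verify the inequality numerically for a sample function. The sketch below (class name and the constants c1 = 3, c2 = 4, n0 = 2 are our choices) checks that f(n) = 3n² + 2n satisfies the Θ(n²) definition:

```java
public class ThetaCheck {
    static long f(long n) { return 3 * n * n + 2 * n; }
    static long g(long n) { return n * n; }

    // Checks 0 <= c1*g(n) <= f(n) <= c2*g(n) for every n in [n0, limit].
    static boolean thetaHolds(long c1, long c2, long n0, long limit) {
        for (long n = n0; n <= limit; n++) {
            long lower = c1 * g(n), upper = c2 * g(n);
            if (!(0 <= lower && lower <= f(n) && f(n) <= upper)) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // 3n^2 <= 3n^2 + 2n <= 4n^2 holds once n >= 2, so f(n) = Theta(n^2).
        System.out.println(thetaHolds(3, 4, 2, 1_000_000)); // true
        // With n0 = 1 the upper bound fails at n = 1 (f(1) = 5 > 4).
        System.out.println(thetaHolds(3, 4, 1, 10));        // false
    }
}
```

A finite check is not a proof, of course; here the algebra (2n ≤ n² for n ≥ 2) is what actually establishes the bound.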
O-notation

For a function g(n), we define O(g(n)), big-O of g(n), as the set:

O(g(n)) = { f(n) : there exist positive constants c and n0 such that for all n ≥ n0 we have 0 ≤ f(n) ≤ c·g(n) }

g(n) is an asymptotic upper bound for f(n).

Intuitively: the set of all functions whose rate of growth is the same as or lower than that of g(n).

f(n) = Θ(g(n)) ⇒ f(n) = O(g(n)), i.e. Θ(g(n)) ⊆ O(g(n)).
Ω-notation

For a function g(n), we define Ω(g(n)), big-Omega of g(n), as the set:

Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that for all n ≥ n0 we have 0 ≤ c·g(n) ≤ f(n) }

g(n) is an asymptotic lower bound for f(n).

Intuitively: the set of all functions whose rate of growth is the same as or higher than that of g(n).

f(n) = Θ(g(n)) ⇒ f(n) = Ω(g(n)), i.e. Θ(g(n)) ⊆ Ω(g(n)).
Relations Between Θ, O, Ω

Theorem: For any two functions g(n) and f(n), f(n) = Θ(g(n)) iff f(n) = O(g(n)) and f(n) = Ω(g(n)).

• I.e., Θ(g(n)) = O(g(n)) ∩ Ω(g(n)).
• In practice, asymptotically tight bounds are obtained from asymptotic upper and lower bounds.
Logarithms

x = log_b a is the exponent for which a = b^x.

Natural log: ln a = log_e a
Binary log: lg a = log_2 a
lg² a = (lg a)²
lg lg a = lg (lg a)

Useful identities (a, b, c > 0; bases ≠ 1):

a = b^(log_b a)
log_c (ab) = log_c a + log_c b
log_b (a^n) = n · log_b a
log_b a = log_c a / log_c b
log_b (1/a) = −log_b a
log_b a = 1 / log_a b
a^(log_b c) = c^(log_b a)
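The identities above are easy to sanity-check numerically via the change-of-base rule; a small sketch (class name, helper names, and the sample values a = 7, b = 2, c = 10, n = 3 are ours):

```java
public class LogIdentities {
    // log_b a via the change-of-base rule: log_b a = ln a / ln b
    static double logBase(double b, double a) {
        return Math.log(a) / Math.log(b);
    }

    // Approximate equality, to absorb floating-point rounding
    static boolean close(double x, double y) {
        return Math.abs(x - y) < 1e-9;
    }

    public static void main(String[] args) {
        double a = 7, b = 2, c = 10, n = 3;
        // log_c(ab) = log_c a + log_c b
        System.out.println(close(logBase(c, a * b), logBase(c, a) + logBase(c, b)));
        // log_b(a^n) = n * log_b a
        System.out.println(close(logBase(b, Math.pow(a, n)), n * logBase(b, a)));
        // a^(log_b c) = c^(log_b a)
        System.out.println(close(Math.pow(a, logBase(b, c)), Math.pow(c, logBase(b, a))));
    }
}
```

The change-of-base rule is also why "another base can be used as well" in O(log n): switching bases only multiplies the logarithm by a constant.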
Review on Summations

• Constant series: for integers a and b with a ≤ b,
  Σ_{i=a}^{b} 1 = b − a + 1

• Linear series (arithmetic series): for n ≥ 0,
  Σ_{i=1}^{n} i = 1 + 2 + ... + n = n(n+1)/2

• Quadratic series: for n ≥ 0,
  Σ_{i=1}^{n} i² = 1 + 4 + ... + n² = n(n+1)(2n+1)/6
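These closed forms can be verified against direct summation; a short sketch (class and method names are ours):

```java
public class SummationCheck {
    // Direct sum of i for i = 1..n
    static long sumLinear(int n) {
        long s = 0;
        for (int i = 1; i <= n; i++) s += i;
        return s;
    }

    // Direct sum of i^2 for i = 1..n
    static long sumSquares(int n) {
        long s = 0;
        for (int i = 1; i <= n; i++) s += (long) i * i;
        return s;
    }

    public static void main(String[] args) {
        int n = 100;
        // Closed forms from the review above
        System.out.println(sumLinear(n) == (long) n * (n + 1) / 2);                // true
        System.out.println(sumSquares(n) == (long) n * (n + 1) * (2 * n + 1) / 6); // true
    }
}
```

The linear series is the one that shows up most often in loop analysis: a loop whose body runs i times on iteration i does n(n+1)/2 total work, which is O(n²).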
Review on Summations

• Cubic series: for n ≥ 0,
  Σ_{i=1}^{n} i³ = 1 + 8 + ... + n³ = n²(n+1)²/4

• Geometric series: for real x ≠ 1,
  Σ_{k=0}^{n} x^k = 1 + x + x² + ... + x^n = (x^{n+1} − 1)/(x − 1)

• For |x| < 1:
  Σ_{k=0}^{∞} x^k = 1/(1 − x)
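The same direct-summation check works for the cubic and geometric series; a sketch (class and method names are ours):

```java
public class SeriesCheck {
    // Direct sum of i^3 for i = 1..n
    static long sumCubes(int n) {
        long s = 0;
        for (int i = 1; i <= n; i++) s += (long) i * i * i;
        return s;
    }

    // Direct sum of x^k for k = 0..n
    static double geometric(double x, int n) {
        double sum = 0, term = 1;
        for (int k = 0; k <= n; k++) {
            sum += term;
            term *= x;
        }
        return sum;
    }

    public static void main(String[] args) {
        int n = 100;
        // Cubic series closed form: n^2 (n+1)^2 / 4
        System.out.println(sumCubes(n) == (long) n * n * (n + 1) * (n + 1) / 4);            // true
        // Geometric closed form (x != 1): (x^(n+1) - 1) / (x - 1)
        double x = 3.0;
        System.out.println(Math.abs(geometric(x, 10) - (Math.pow(x, 11) - 1) / (x - 1)) < 1e-6); // true
        // For |x| < 1 partial sums approach 1 / (1 - x)
        System.out.println(Math.abs(geometric(0.5, 60) - 1 / (1 - 0.5)) < 1e-9);            // true
    }
}
```

The geometric series with x = 2 is the reason a doubling loop does O(n) total work even when its last pass alone touches n elements: 1 + 2 + 4 + ... + n = 2n − 1.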
How to determine complexity of code structures Loops for while and do-while
Complexity is determined by the number of iterations in the loop times the complexity of the body of the loop
Examples
for (int i = 0 i lt n i++) sum = sum - i
for (int i = 0 i lt n n i++) sum = sum + i
i=1while (i lt n) sum = sum + i i = i2
O(n)
O(n2)
O(log n)
How to determine complexity of code structures Nested Loops Complexity of inner loop complexity of outer loopExamples
sum = 0for(int i = 0 i lt n i++) for(int j = 0 j lt n j++) sum += i j
i = 1while(i lt= n) j = 1 while(j lt= n) statements of constant complexity j = j2 i = i+1
O(n2)
O(n log n)
How to determine complexity of code structures Sequence of statements Use Addition rule
O(s1 s2 s3 hellip sk) = O(s1) + O(s2) + O(s3) + hellip + O(sk) = O(max(s1 s2 s3 sk))
Example
Complexity is O(n2) + O(n) +O(1) = O(n2)
for (int j = 0 j lt n n j++) sum = sum + jfor (int k = 0 k lt n k++) sum = sum - lSystemoutprint(sum is now rdquo + sum)
char keyint[] X = new int[5] int[][] Y = new int[10][10] switch(key) case a for(int i = 0 i lt Xlength i++) sum += X[i] break case b for(int i = 0 i lt Ylength j++) for(int j = 0 j lt Y[0]length j++) sum += Y[i][j] break End of switch block
How to determine complexity of code structures
Switch Take the complexity of the most expensive case
o(n)
o(n2)
Overall Complexity o(n2)
char keyint[][] A = new int[5][5]int[][] B = new int[5][5]int[][] C = new int[5][5]if(key == +) for(int i = 0 i lt n i++) for(int j = 0 j lt n j++) C[i][j] = A[i][j] + B[i][j] End of if block else if(key == x) C = matrixMult(A B)
else Systemoutprintln(Error Enter + or x)
If Statement Take the complexity of the most expensive case
O(n2)
O(n3)
O(1)
How to determine complexity of code structures
Overall complexityO(n3)
int[] integers = new int[10]if(hasPrimes(integers) == true) integers[0] = 20else
integers[0] = -20
public boolean hasPrimes(int[] arr) for(int i = 0 i lt arrlength i++)
End of hasPrimes()
How to determine complexity of code structures
bull Sometimes if-else statements must carefully be checkedO(if-else) = O(Condition)+ Max[O(if) O(else)]
O(1)
O(1)
O(if-else) = O(Condition) = O(n)
O(n)
How to determine complexity of code structuresbull Note Sometimes a loop may cause the if-else rule not to be
applicable Consider the following loop
The else-branch has more basic operations therefore one may conclude that the loop is O(n) However the if-branch dominates For example if n is 60 then the sequence of n is 60 30 15 14 7 6 3 2 1 and 0 Hence the loop is logarithmic and its complexity is O(log n)
while (n gt 0) if (n 2 = = 0) Systemoutprintln(n) n = n 2 else Systemoutprintln(n) Systemoutprintln(n)
n = n ndash 1
Comp 122
Asymptotic Complexity
bull Running time of an algorithm as a function of input size n for large n
bull Expressed using only the highest-order term in the expression for the exact running timendash Instead of exact running time say Q(n2)
bull Describes behavior of function in the limitbull Written using Asymptotic Notation
Asymptotic Notationbull Q O Wbull Defined for functions over the natural numbers
ndash Ex f(n) = Q(n2)ndash Describes how f(n) grows in comparison to n2
bull Define a set of functions in practice used to compare two function sizes
bull The notations describe different rate-of-growth relations between the defining function and the defined set of functions
Q-notation
Q(g(n)) = f(n) positive constants c1 c2 and n0 such that n n0
we have 0 c1g(n) f(n) c2g(n)
For function g(n) we define Q(g(n)) big-Theta of n as the set
g(n) is an asymptotically tight bound for f(n)
Intuitively Set of all functions thathave the same rate of growth as g(n)
O-notation
O(g(n)) = f(n) positive constants c and n0 such that n n0
we have 0 f(n) cg(n)
For function g(n) we define O(g(n)) big-O of n as the set
g(n) is an asymptotic upper bound for f(n)
Intuitively Set of all functions whose rate of growth is the same as or lower than that of g(n)
f(n) = Q(g(n)) f(n) = O(g(n))Q(g(n)) O(g(n))
W -notation
g(n) is an asymptotic lower bound for f(n)
Intuitively Set of all functions whose rate of growth is the same as or higher than that of g(n)
f(n) = Q(g(n)) f(n) = W(g(n))Q(g(n)) W(g(n))
W(g(n)) = f(n) positive constants c and n0 such that n n0
we have 0 cg(n) f(n)
For function g(n) we define W(g(n)) big-Omega of n as the set
Relations Between Q O W
Relations Between Q W O
bull Ie Q(g(n)) = O(g(n)) Ccedil W(g(n))
bull In practice asymptotically tight bounds are obtained from asymptotic upper and lower bounds
Theorem For any two functions g(n) and f(n) f(n) = Q(g(n)) iff
f(n) = O(g(n)) and f(n) = W(g(n))
Logarithms
x = logba is the exponent for a = bx
Natural log ln a = logeaBinary log lg a = log2a
lg2a = (lg a)2
lg lg a = lg (lg a) ac
ab
bb
c
cb
bn
b
ccc
bb ca
ba
aabaa
ana
baab
loglog
log1log
log)1(loglogloglog
loglog
loglog)(log
Review on Summationsbull Constant Series For integers a and b a b
bull Linear Series (Arithmetic Series) For n 0
bull Quadratic Series For n 0
b
ai
ab 11
2)1(21
1
nnnin
i
n
i
nnnni1
2222
6)12)(1(21
Review on Summationsbull Cubic Series For n 0
bull Geometric Series For real x 1
For |x| lt 1
n
i
nnni1
223333
4)1(21
n
k
nnk
xxxxxx
0
12
111
0 11
k
k
xx
How to determine complexity of code structures Nested Loops Complexity of inner loop complexity of outer loopExamples
sum = 0for(int i = 0 i lt n i++) for(int j = 0 j lt n j++) sum += i j
i = 1while(i lt= n) j = 1 while(j lt= n) statements of constant complexity j = j2 i = i+1
O(n2)
O(n log n)
How to determine complexity of code structures Sequence of statements Use Addition rule
O(s1 s2 s3 hellip sk) = O(s1) + O(s2) + O(s3) + hellip + O(sk) = O(max(s1 s2 s3 sk))
Example
Complexity is O(n2) + O(n) +O(1) = O(n2)
for (int j = 0 j lt n n j++) sum = sum + jfor (int k = 0 k lt n k++) sum = sum - lSystemoutprint(sum is now rdquo + sum)
char keyint[] X = new int[5] int[][] Y = new int[10][10] switch(key) case a for(int i = 0 i lt Xlength i++) sum += X[i] break case b for(int i = 0 i lt Ylength j++) for(int j = 0 j lt Y[0]length j++) sum += Y[i][j] break End of switch block
How to determine complexity of code structures
Switch Take the complexity of the most expensive case
o(n)
o(n2)
Overall Complexity o(n2)
char keyint[][] A = new int[5][5]int[][] B = new int[5][5]int[][] C = new int[5][5]if(key == +) for(int i = 0 i lt n i++) for(int j = 0 j lt n j++) C[i][j] = A[i][j] + B[i][j] End of if block else if(key == x) C = matrixMult(A B)
else Systemoutprintln(Error Enter + or x)
If Statement Take the complexity of the most expensive case
O(n2)
O(n3)
O(1)
How to determine complexity of code structures
Overall complexityO(n3)
int[] integers = new int[10]if(hasPrimes(integers) == true) integers[0] = 20else
integers[0] = -20
public boolean hasPrimes(int[] arr) for(int i = 0 i lt arrlength i++)
End of hasPrimes()
How to determine complexity of code structures
bull Sometimes if-else statements must carefully be checkedO(if-else) = O(Condition)+ Max[O(if) O(else)]
O(1)
O(1)
O(if-else) = O(Condition) = O(n)
O(n)
How to determine complexity of code structuresbull Note Sometimes a loop may cause the if-else rule not to be
applicable Consider the following loop
The else-branch has more basic operations therefore one may conclude that the loop is O(n) However the if-branch dominates For example if n is 60 then the sequence of n is 60 30 15 14 7 6 3 2 1 and 0 Hence the loop is logarithmic and its complexity is O(log n)
while (n gt 0) if (n 2 = = 0) Systemoutprintln(n) n = n 2 else Systemoutprintln(n) Systemoutprintln(n)
n = n ndash 1
Comp 122
Asymptotic Complexity
bull Running time of an algorithm as a function of input size n for large n
bull Expressed using only the highest-order term in the expression for the exact running timendash Instead of exact running time say Q(n2)
bull Describes behavior of function in the limitbull Written using Asymptotic Notation
Asymptotic Notationbull Q O Wbull Defined for functions over the natural numbers
ndash Ex f(n) = Q(n2)ndash Describes how f(n) grows in comparison to n2
bull Define a set of functions in practice used to compare two function sizes
bull The notations describe different rate-of-growth relations between the defining function and the defined set of functions
Q-notation
Q(g(n)) = f(n) positive constants c1 c2 and n0 such that n n0
we have 0 c1g(n) f(n) c2g(n)
For function g(n) we define Q(g(n)) big-Theta of n as the set
g(n) is an asymptotically tight bound for f(n)
Intuitively Set of all functions thathave the same rate of growth as g(n)
O-notation
O(g(n)) = f(n) positive constants c and n0 such that n n0
we have 0 f(n) cg(n)
For function g(n) we define O(g(n)) big-O of n as the set
g(n) is an asymptotic upper bound for f(n)
Intuitively Set of all functions whose rate of growth is the same as or lower than that of g(n)
f(n) = Q(g(n)) f(n) = O(g(n))Q(g(n)) O(g(n))
W -notation
g(n) is an asymptotic lower bound for f(n)
Intuitively Set of all functions whose rate of growth is the same as or higher than that of g(n)
f(n) = Q(g(n)) f(n) = W(g(n))Q(g(n)) W(g(n))
W(g(n)) = f(n) positive constants c and n0 such that n n0
we have 0 cg(n) f(n)
For function g(n) we define W(g(n)) big-Omega of n as the set
Relations Between Q O W
Relations Between Q W O
bull Ie Q(g(n)) = O(g(n)) Ccedil W(g(n))
bull In practice asymptotically tight bounds are obtained from asymptotic upper and lower bounds
Theorem For any two functions g(n) and f(n) f(n) = Q(g(n)) iff
f(n) = O(g(n)) and f(n) = W(g(n))
Logarithms
x = logba is the exponent for a = bx
Natural log ln a = logeaBinary log lg a = log2a
lg2a = (lg a)2
lg lg a = lg (lg a) ac
ab
bb
c
cb
bn
b
ccc
bb ca
ba
aabaa
ana
baab
loglog
log1log
log)1(loglogloglog
loglog
loglog)(log
Review on Summationsbull Constant Series For integers a and b a b
bull Linear Series (Arithmetic Series) For n 0
bull Quadratic Series For n 0
b
ai
ab 11
2)1(21
1
nnnin
i
n
i
nnnni1
2222
6)12)(1(21
Review on Summationsbull Cubic Series For n 0
bull Geometric Series For real x 1
For |x| lt 1
n
i
nnni1
223333
4)1(21
n
k
nnk
xxxxxx
0
12
111
0 11
k
k
xx
How to determine complexity of code structures Sequence of statements Use Addition rule
O(s1 s2 s3 hellip sk) = O(s1) + O(s2) + O(s3) + hellip + O(sk) = O(max(s1 s2 s3 sk))
Example
Complexity is O(n2) + O(n) +O(1) = O(n2)
for (int j = 0 j lt n n j++) sum = sum + jfor (int k = 0 k lt n k++) sum = sum - lSystemoutprint(sum is now rdquo + sum)
char keyint[] X = new int[5] int[][] Y = new int[10][10] switch(key) case a for(int i = 0 i lt Xlength i++) sum += X[i] break case b for(int i = 0 i lt Ylength j++) for(int j = 0 j lt Y[0]length j++) sum += Y[i][j] break End of switch block
How to determine complexity of code structures
Switch Take the complexity of the most expensive case
o(n)
o(n2)
Overall Complexity o(n2)
char keyint[][] A = new int[5][5]int[][] B = new int[5][5]int[][] C = new int[5][5]if(key == +) for(int i = 0 i lt n i++) for(int j = 0 j lt n j++) C[i][j] = A[i][j] + B[i][j] End of if block else if(key == x) C = matrixMult(A B)
else Systemoutprintln(Error Enter + or x)
If Statement Take the complexity of the most expensive case
O(n2)
O(n3)
O(1)
How to determine complexity of code structures
Overall complexityO(n3)
int[] integers = new int[10]if(hasPrimes(integers) == true) integers[0] = 20else
integers[0] = -20
public boolean hasPrimes(int[] arr) for(int i = 0 i lt arrlength i++)
End of hasPrimes()
How to determine complexity of code structures
bull Sometimes if-else statements must carefully be checkedO(if-else) = O(Condition)+ Max[O(if) O(else)]
O(1)
O(1)
O(if-else) = O(Condition) = O(n)
O(n)
How to determine complexity of code structuresbull Note Sometimes a loop may cause the if-else rule not to be
applicable Consider the following loop
The else-branch has more basic operations therefore one may conclude that the loop is O(n) However the if-branch dominates For example if n is 60 then the sequence of n is 60 30 15 14 7 6 3 2 1 and 0 Hence the loop is logarithmic and its complexity is O(log n)
while (n gt 0) if (n 2 = = 0) Systemoutprintln(n) n = n 2 else Systemoutprintln(n) Systemoutprintln(n)
n = n ndash 1
Comp 122
Asymptotic Complexity
bull Running time of an algorithm as a function of input size n for large n
bull Expressed using only the highest-order term in the expression for the exact running timendash Instead of exact running time say Q(n2)
bull Describes behavior of function in the limitbull Written using Asymptotic Notation
Asymptotic Notationbull Q O Wbull Defined for functions over the natural numbers
ndash Ex f(n) = Q(n2)ndash Describes how f(n) grows in comparison to n2
bull Define a set of functions in practice used to compare two function sizes
bull The notations describe different rate-of-growth relations between the defining function and the defined set of functions
Q-notation
Q(g(n)) = f(n) positive constants c1 c2 and n0 such that n n0
we have 0 c1g(n) f(n) c2g(n)
For function g(n) we define Q(g(n)) big-Theta of n as the set
g(n) is an asymptotically tight bound for f(n)
Intuitively Set of all functions thathave the same rate of growth as g(n)
O-notation
O(g(n)) = f(n) positive constants c and n0 such that n n0
we have 0 f(n) cg(n)
For function g(n) we define O(g(n)) big-O of n as the set
g(n) is an asymptotic upper bound for f(n)
Intuitively Set of all functions whose rate of growth is the same as or lower than that of g(n)
f(n) = Q(g(n)) f(n) = O(g(n))Q(g(n)) O(g(n))
W -notation
g(n) is an asymptotic lower bound for f(n)
Intuitively Set of all functions whose rate of growth is the same as or higher than that of g(n)
f(n) = Q(g(n)) f(n) = W(g(n))Q(g(n)) W(g(n))
W(g(n)) = f(n) positive constants c and n0 such that n n0
we have 0 cg(n) f(n)
For function g(n) we define W(g(n)) big-Omega of n as the set
Relations Between Q O W
Relations Between Q W O
bull Ie Q(g(n)) = O(g(n)) Ccedil W(g(n))
bull In practice asymptotically tight bounds are obtained from asymptotic upper and lower bounds
Theorem For any two functions g(n) and f(n) f(n) = Q(g(n)) iff
f(n) = O(g(n)) and f(n) = W(g(n))
Logarithms
x = logba is the exponent for a = bx
Natural log ln a = logeaBinary log lg a = log2a
lg2a = (lg a)2
lg lg a = lg (lg a) ac
ab
bb
c
cb
bn
b
ccc
bb ca
ba
aabaa
ana
baab
loglog
log1log
log)1(loglogloglog
loglog
loglog)(log
Review on Summationsbull Constant Series For integers a and b a b
bull Linear Series (Arithmetic Series) For n 0
bull Quadratic Series For n 0
b
ai
ab 11
2)1(21
1
nnnin
i
n
i
nnnni1
2222
6)12)(1(21
Review on Summationsbull Cubic Series For n 0
bull Geometric Series For real x 1
For |x| lt 1
n
i
nnni1
223333
4)1(21
n
k
nnk
xxxxxx
0
12
111
0 11
k
k
xx
char keyint[] X = new int[5] int[][] Y = new int[10][10] switch(key) case a for(int i = 0 i lt Xlength i++) sum += X[i] break case b for(int i = 0 i lt Ylength j++) for(int j = 0 j lt Y[0]length j++) sum += Y[i][j] break End of switch block
How to determine complexity of code structures
Switch Take the complexity of the most expensive case
o(n)
o(n2)
Overall Complexity o(n2)
char keyint[][] A = new int[5][5]int[][] B = new int[5][5]int[][] C = new int[5][5]if(key == +) for(int i = 0 i lt n i++) for(int j = 0 j lt n j++) C[i][j] = A[i][j] + B[i][j] End of if block else if(key == x) C = matrixMult(A B)
else Systemoutprintln(Error Enter + or x)
If Statement Take the complexity of the most expensive case
O(n2)
O(n3)
O(1)
How to determine complexity of code structures
Overall complexityO(n3)
int[] integers = new int[10]if(hasPrimes(integers) == true) integers[0] = 20else
integers[0] = -20
public boolean hasPrimes(int[] arr) for(int i = 0 i lt arrlength i++)
End of hasPrimes()
How to determine complexity of code structures
bull Sometimes if-else statements must carefully be checkedO(if-else) = O(Condition)+ Max[O(if) O(else)]
O(1)
O(1)
O(if-else) = O(Condition) = O(n)
O(n)
How to determine complexity of code structuresbull Note Sometimes a loop may cause the if-else rule not to be
applicable Consider the following loop
The else-branch has more basic operations therefore one may conclude that the loop is O(n) However the if-branch dominates For example if n is 60 then the sequence of n is 60 30 15 14 7 6 3 2 1 and 0 Hence the loop is logarithmic and its complexity is O(log n)
while (n gt 0) if (n 2 = = 0) Systemoutprintln(n) n = n 2 else Systemoutprintln(n) Systemoutprintln(n)
n = n ndash 1
Comp 122
Asymptotic Complexity
bull Running time of an algorithm as a function of input size n for large n
bull Expressed using only the highest-order term in the expression for the exact running timendash Instead of exact running time say Q(n2)
bull Describes behavior of function in the limitbull Written using Asymptotic Notation
Asymptotic Notationbull Q O Wbull Defined for functions over the natural numbers
ndash Ex f(n) = Q(n2)ndash Describes how f(n) grows in comparison to n2
bull Define a set of functions in practice used to compare two function sizes
bull The notations describe different rate-of-growth relations between the defining function and the defined set of functions
Q-notation
Q(g(n)) = f(n) positive constants c1 c2 and n0 such that n n0
we have 0 c1g(n) f(n) c2g(n)
For function g(n) we define Q(g(n)) big-Theta of n as the set
g(n) is an asymptotically tight bound for f(n)
Intuitively Set of all functions thathave the same rate of growth as g(n)
O-notation
O(g(n)) = f(n) positive constants c and n0 such that n n0
we have 0 f(n) cg(n)
For function g(n) we define O(g(n)) big-O of n as the set
g(n) is an asymptotic upper bound for f(n)
Intuitively Set of all functions whose rate of growth is the same as or lower than that of g(n)
f(n) = Q(g(n)) f(n) = O(g(n))Q(g(n)) O(g(n))
W -notation
g(n) is an asymptotic lower bound for f(n)
Intuitively Set of all functions whose rate of growth is the same as or higher than that of g(n)
f(n) = Q(g(n)) f(n) = W(g(n))Q(g(n)) W(g(n))
W(g(n)) = f(n) positive constants c and n0 such that n n0
we have 0 cg(n) f(n)
For function g(n) we define W(g(n)) big-Omega of n as the set
Relations Between Q O W
Relations Between Q W O
bull Ie Q(g(n)) = O(g(n)) Ccedil W(g(n))
bull In practice asymptotically tight bounds are obtained from asymptotic upper and lower bounds
Theorem For any two functions g(n) and f(n) f(n) = Q(g(n)) iff
f(n) = O(g(n)) and f(n) = W(g(n))
Logarithms
x = logba is the exponent for a = bx
Natural log ln a = logeaBinary log lg a = log2a
lg2a = (lg a)2
lg lg a = lg (lg a) ac
ab
bb
c
cb
bn
b
ccc
bb ca
ba
aabaa
ana
baab
loglog
log1log
log)1(loglogloglog
loglog
loglog)(log
Review on Summationsbull Constant Series For integers a and b a b
bull Linear Series (Arithmetic Series) For n 0
bull Quadratic Series For n 0
b
ai
ab 11
2)1(21
1
nnnin
i
n
i
nnnni1
2222
6)12)(1(21
Review on Summationsbull Cubic Series For n 0
bull Geometric Series For real x 1
For |x| lt 1
n
i
nnni1
223333
4)1(21
n
k
nnk
xxxxxx
0
12
111
0 11
k
k
xx
char keyint[][] A = new int[5][5]int[][] B = new int[5][5]int[][] C = new int[5][5]if(key == +) for(int i = 0 i lt n i++) for(int j = 0 j lt n j++) C[i][j] = A[i][j] + B[i][j] End of if block else if(key == x) C = matrixMult(A B)
else Systemoutprintln(Error Enter + or x)
If Statement Take the complexity of the most expensive case
O(n2)
O(n3)
O(1)
How to determine complexity of code structures
Overall complexityO(n3)
int[] integers = new int[10]if(hasPrimes(integers) == true) integers[0] = 20else
integers[0] = -20
public boolean hasPrimes(int[] arr) for(int i = 0 i lt arrlength i++)
End of hasPrimes()
How to determine complexity of code structures
bull Sometimes if-else statements must carefully be checkedO(if-else) = O(Condition)+ Max[O(if) O(else)]
O(1)
O(1)
O(if-else) = O(Condition) = O(n)
O(n)
How to determine complexity of code structuresbull Note Sometimes a loop may cause the if-else rule not to be
applicable Consider the following loop
The else-branch has more basic operations therefore one may conclude that the loop is O(n) However the if-branch dominates For example if n is 60 then the sequence of n is 60 30 15 14 7 6 3 2 1 and 0 Hence the loop is logarithmic and its complexity is O(log n)
while (n gt 0) if (n 2 = = 0) Systemoutprintln(n) n = n 2 else Systemoutprintln(n) Systemoutprintln(n)
n = n ndash 1
Comp 122
Asymptotic Complexity
bull Running time of an algorithm as a function of input size n for large n
bull Expressed using only the highest-order term in the expression for the exact running timendash Instead of exact running time say Q(n2)
bull Describes behavior of function in the limitbull Written using Asymptotic Notation
Asymptotic Notationbull Q O Wbull Defined for functions over the natural numbers
ndash Ex f(n) = Q(n2)ndash Describes how f(n) grows in comparison to n2
bull Define a set of functions in practice used to compare two function sizes
bull The notations describe different rate-of-growth relations between the defining function and the defined set of functions
Q-notation
Q(g(n)) = f(n) positive constants c1 c2 and n0 such that n n0
we have 0 c1g(n) f(n) c2g(n)
For function g(n) we define Q(g(n)) big-Theta of n as the set
g(n) is an asymptotically tight bound for f(n)
Intuitively Set of all functions thathave the same rate of growth as g(n)
O-notation
O(g(n)) = f(n) positive constants c and n0 such that n n0
we have 0 f(n) cg(n)
For function g(n) we define O(g(n)) big-O of n as the set
g(n) is an asymptotic upper bound for f(n)
Intuitively Set of all functions whose rate of growth is the same as or lower than that of g(n)
f(n) = Q(g(n)) f(n) = O(g(n))Q(g(n)) O(g(n))
W -notation
g(n) is an asymptotic lower bound for f(n)
Intuitively Set of all functions whose rate of growth is the same as or higher than that of g(n)
f(n) = Q(g(n)) f(n) = W(g(n))Q(g(n)) W(g(n))
W(g(n)) = f(n) positive constants c and n0 such that n n0
we have 0 cg(n) f(n)
For function g(n) we define W(g(n)) big-Omega of n as the set
Relations Between Q O W
Relations Between Q W O
bull Ie Q(g(n)) = O(g(n)) Ccedil W(g(n))
bull In practice asymptotically tight bounds are obtained from asymptotic upper and lower bounds
Theorem For any two functions g(n) and f(n) f(n) = Q(g(n)) iff
f(n) = O(g(n)) and f(n) = W(g(n))
Logarithms
x = logba is the exponent for a = bx
Natural log ln a = logeaBinary log lg a = log2a
lg2a = (lg a)2
lg lg a = lg (lg a) ac
ab
bb
c
cb
bn
b
ccc
bb ca
ba
aabaa
ana
baab
loglog
log1log
log)1(loglogloglog
loglog
loglog)(log
Review on Summationsbull Constant Series For integers a and b a b
bull Linear Series (Arithmetic Series) For n 0
bull Quadratic Series For n 0
b
ai
ab 11
2)1(21
1
nnnin
i
n
i
nnnni1
2222
6)12)(1(21
Review on Summationsbull Cubic Series For n 0
bull Geometric Series For real x 1
For |x| lt 1
n
i
nnni1
223333
4)1(21
n
k
nnk
xxxxxx
0
12
111
0 11
k
k
xx
int[] integers = new int[10]if(hasPrimes(integers) == true) integers[0] = 20else
integers[0] = -20
public boolean hasPrimes(int[] arr) for(int i = 0 i lt arrlength i++)
End of hasPrimes()
How to determine complexity of code structures
bull Sometimes if-else statements must carefully be checkedO(if-else) = O(Condition)+ Max[O(if) O(else)]
O(1)
O(1)
O(if-else) = O(Condition) = O(n)
O(n)
How to determine complexity of code structuresbull Note Sometimes a loop may cause the if-else rule not to be
applicable Consider the following loop
The else-branch has more basic operations therefore one may conclude that the loop is O(n) However the if-branch dominates For example if n is 60 then the sequence of n is 60 30 15 14 7 6 3 2 1 and 0 Hence the loop is logarithmic and its complexity is O(log n)
while (n gt 0) if (n 2 = = 0) Systemoutprintln(n) n = n 2 else Systemoutprintln(n) Systemoutprintln(n)
n = n ndash 1
Comp 122
Asymptotic Complexity
bull Running time of an algorithm as a function of input size n for large n
bull Expressed using only the highest-order term in the expression for the exact running timendash Instead of exact running time say Q(n2)
bull Describes behavior of function in the limitbull Written using Asymptotic Notation
Asymptotic Notationbull Q O Wbull Defined for functions over the natural numbers
ndash Ex f(n) = Q(n2)ndash Describes how f(n) grows in comparison to n2
bull Define a set of functions in practice used to compare two function sizes
bull The notations describe different rate-of-growth relations between the defining function and the defined set of functions
Q-notation
Q(g(n)) = f(n) positive constants c1 c2 and n0 such that n n0
we have 0 c1g(n) f(n) c2g(n)
For function g(n) we define Q(g(n)) big-Theta of n as the set
g(n) is an asymptotically tight bound for f(n)
Intuitively Set of all functions thathave the same rate of growth as g(n)
O-notation
O(g(n)) = f(n) positive constants c and n0 such that n n0
we have 0 f(n) cg(n)
For function g(n) we define O(g(n)) big-O of n as the set
g(n) is an asymptotic upper bound for f(n)
Intuitively Set of all functions whose rate of growth is the same as or lower than that of g(n)
f(n) = Q(g(n)) f(n) = O(g(n))Q(g(n)) O(g(n))
W -notation
g(n) is an asymptotic lower bound for f(n)
Intuitively Set of all functions whose rate of growth is the same as or higher than that of g(n)
f(n) = Q(g(n)) f(n) = W(g(n))Q(g(n)) W(g(n))
W(g(n)) = f(n) positive constants c and n0 such that n n0
we have 0 cg(n) f(n)
For function g(n) we define W(g(n)) big-Omega of n as the set
Relations Between Q O W
Relations Between Q W O
bull Ie Q(g(n)) = O(g(n)) Ccedil W(g(n))
bull In practice asymptotically tight bounds are obtained from asymptotic upper and lower bounds
Theorem For any two functions g(n) and f(n) f(n) = Q(g(n)) iff
f(n) = O(g(n)) and f(n) = W(g(n))
Logarithms
x = logba is the exponent for a = bx
Natural log ln a = logeaBinary log lg a = log2a
lg2a = (lg a)2
lg lg a = lg (lg a) ac
ab
bb
c
cb
bn
b
ccc
bb ca
ba
aabaa
ana
baab
loglog
log1log
log)1(loglogloglog
loglog
loglog)(log
Review on Summationsbull Constant Series For integers a and b a b
bull Linear Series (Arithmetic Series) For n 0
bull Quadratic Series For n 0
b
ai
ab 11
2)1(21
1
nnnin
i
n
i
nnnni1
2222
6)12)(1(21
Review on Summationsbull Cubic Series For n 0
bull Geometric Series For real x 1
For |x| lt 1
n
i
nnni1
223333
4)1(21
n
k
nnk
xxxxxx
0
12
111
0 11
k
k
xx
How to determine complexity of code structuresbull Note Sometimes a loop may cause the if-else rule not to be
applicable Consider the following loop
The else-branch has more basic operations therefore one may conclude that the loop is O(n) However the if-branch dominates For example if n is 60 then the sequence of n is 60 30 15 14 7 6 3 2 1 and 0 Hence the loop is logarithmic and its complexity is O(log n)
while (n gt 0) if (n 2 = = 0) Systemoutprintln(n) n = n 2 else Systemoutprintln(n) Systemoutprintln(n)
n = n ndash 1
Comp 122
Asymptotic Complexity
bull Running time of an algorithm as a function of input size n for large n
bull Expressed using only the highest-order term in the expression for the exact running timendash Instead of exact running time say Q(n2)
bull Describes behavior of function in the limitbull Written using Asymptotic Notation
Asymptotic Notationbull Q O Wbull Defined for functions over the natural numbers
ndash Ex f(n) = Q(n2)ndash Describes how f(n) grows in comparison to n2
bull Define a set of functions in practice used to compare two function sizes
bull The notations describe different rate-of-growth relations between the defining function and the defined set of functions
Q-notation
Q(g(n)) = f(n) positive constants c1 c2 and n0 such that n n0
we have 0 c1g(n) f(n) c2g(n)
For function g(n) we define Q(g(n)) big-Theta of n as the set
g(n) is an asymptotically tight bound for f(n)
Intuitively Set of all functions thathave the same rate of growth as g(n)
O-notation
O(g(n)) = f(n) positive constants c and n0 such that n n0
we have 0 f(n) cg(n)
For function g(n) we define O(g(n)) big-O of n as the set
g(n) is an asymptotic upper bound for f(n)
Intuitively Set of all functions whose rate of growth is the same as or lower than that of g(n)
f(n) = Q(g(n)) f(n) = O(g(n))Q(g(n)) O(g(n))
W -notation
g(n) is an asymptotic lower bound for f(n)
Intuitively Set of all functions whose rate of growth is the same as or higher than that of g(n)
f(n) = Q(g(n)) f(n) = W(g(n))Q(g(n)) W(g(n))
W(g(n)) = f(n) positive constants c and n0 such that n n0
we have 0 cg(n) f(n)
For function g(n) we define W(g(n)) big-Omega of n as the set
Relations Between Q O W
Relations Between Q W O
bull Ie Q(g(n)) = O(g(n)) Ccedil W(g(n))
bull In practice asymptotically tight bounds are obtained from asymptotic upper and lower bounds
Theorem For any two functions g(n) and f(n) f(n) = Q(g(n)) iff
f(n) = O(g(n)) and f(n) = W(g(n))
Logarithms

x = log_b a is the exponent x for which a = bˣ.

Natural log: ln a = log_e a.  Binary log: lg a = log₂ a.
lg²a = (lg a)².  lg lg a = lg (lg a).

Useful identities (a, b, c > 0; bases ≠ 1):

  a           = b^(log_b a)
  log_c (ab)  = log_c a + log_c b
  log_b (aⁿ)  = n · log_b a
  log_b a     = log_c a / log_c b
  log_b (1/a) = −log_b a
  log_b a     = 1 / (log_a b)
  a^(log_b c) = c^(log_b a)
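The identities above are easy to spot-check numerically. A minimal sketch with arbitrarily chosen values a = 5, b = 2, c = 3, n = 4 (floating point, so comparisons use `math.isclose`):

```python
import math

# log(base, x): thin wrapper so the checks read in the same order as the identities.
log = lambda base, x: math.log(x, base)

a, b, c, n = 5.0, 2.0, 3.0, 4

assert math.isclose(b ** log(b, a), a)                     # a = b^(log_b a)
assert math.isclose(log(c, a * b), log(c, a) + log(c, b))  # log of a product
assert math.isclose(log(b, a ** n), n * log(b, a))         # log of a power
assert math.isclose(log(b, a), log(c, a) / log(c, b))      # change of base
assert math.isclose(log(b, 1 / a), -log(b, a))             # log of a reciprocal
assert math.isclose(log(b, a), 1 / log(a, b))              # swap base and argument
assert math.isclose(a ** log(b, c), c ** log(b, a))        # swap exponent bases
```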
Review on Summations

• Constant series: for integers a and b, a ≤ b,
    Σ_{i=a..b} 1 = b − a + 1
• Linear (arithmetic) series: for n ≥ 0,
    Σ_{i=1..n} i = 1 + 2 + … + n = n(n + 1)/2
• Quadratic series: for n ≥ 0,
    Σ_{i=1..n} i² = 1 + 4 + … + n² = n(n + 1)(2n + 1)/6
• Cubic series: for n ≥ 0,
    Σ_{i=1..n} i³ = 1 + 8 + … + n³ = n²(n + 1)²/4
• Geometric series: for real x ≠ 1,
    Σ_{k=0..n} x^k = 1 + x + x² + … + xⁿ = (x^(n+1) − 1)/(x − 1)
  For |x| < 1, the infinite series converges:
    Σ_{k=0..∞} x^k = 1/(1 − x)
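Each closed form above can be verified against a brute-force sum. A small sketch (the ranges of n and the sample value x = 3/2 are arbitrary choices):

```python
from fractions import Fraction

# Polynomial series: closed forms match brute-force sums for n = 0..49.
# Integer division (//) is exact here: each numerator is divisible as shown.
for n in range(0, 50):
    assert sum(1 for i in range(1, n + 1)) == n                      # constant, a = 1, b = n
    assert sum(i for i in range(1, n + 1)) == n * (n + 1) // 2       # linear
    assert sum(i**2 for i in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6
    assert sum(i**3 for i in range(1, n + 1)) == n**2 * (n + 1)**2 // 4

# Geometric series with exact rational arithmetic (x != 1).
x, n = Fraction(3, 2), 10
assert sum(x**k for k in range(n + 1)) == (x**(n + 1) - 1) / (x - 1)
```

Using `Fraction` for the geometric case avoids the floating-point rounding that `x = 1.5` would introduce, so the equality check can be exact.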
Comp 122