
Chapter 20: Computational complexity

This chapter discusses:
- Algorithmic efficiency
- A commonly used measure: computational complexity
- The effects of algorithm choice on execution time

Measuring program efficiency

There are many factors that affect execution time:
- Hardware
- Operating system
- System environment
- Programming language and compiler
- Run-time system or interpreter
- Algorithm(s)
- Data on which the program is run

Time complexity

- We limit our attention to the algorithm(s) and data.
- Our measure of program efficiency is called computational complexity or time complexity.
- Each instance of a problem has some inherent “size,” and execution time often depends on size.
- For the most part, we want to know the worst-case behavior of an algorithm.
- We measure time cost by the number of primitive steps the algorithm performs in solving the problem.

Time complexity (cont.)

The time complexity of a method M is a function tM from the natural numbers N to the positive reals R+:

    tM: N → R+ such that tM(n) is the maximum number of steps for method M to solve a problem of size n.

Comparing different algorithms

Let f and g be functions from the natural numbers to the positive reals, f, g: N → R+.

We say f is O-dominated by g, and write f ≼ g, provided there is a positive integer n0 and a positive real c such that for all n ≥ n0, f(n) ≤ c·g(n).
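For instance (an illustrative check, not part of the original slides), f(n) = 3n + 5 is O-dominated by g(n) = n:

    choose n0 = 5 and c = 4; then for all n ≥ 5,
    f(n) = 3n + 5 ≤ 3n + n = 4n = c·g(n),

so f ≼ g.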


Relation observations

Let f, g, h, f’, g’, h’: N → R+.

- The relation ≼ is reflexive: for any function f, f ≼ f.
- The relation ≼ is transitive: f ≼ g and g ≼ h imply f ≼ h.
- The relation ≼ is not antisymmetric: there are functions such that f ≼ g and g ≼ f, but f ≠ g.
- The relation ≼ is not total: there are functions such that neither f ≼ g nor g ≼ f.

Relation observations (cont.)

- If f ≼ g and g ≼ f, then f and g are said to have the same magnitude, written f ≈ g.
- If f ≼ g but not g ≼ f, then f ≺ g.
- [f + g](n) = f(n) + g(n), [f·g](n) = f(n)·g(n), and [max(f, g)](n) = max(f(n), g(n)).

Computational rules

- If c1 and c2 are positive real constants, then f ≈ c1·f + c2.
- If f is a polynomial function, f(n) = ck·n^k + ck-1·n^(k-1) + … + c0, where ck > 0, then f ≈ n^k.
- If f ≼ g and f’ ≼ g’, then f·f’ ≼ g·g’.
- If f ≼ g and f’ ≼ g’, then f + f’ ≼ max(g, g’). Also f + g ≈ max(f, g).
- If a and b are both > 1, then loga n ≈ logb n.
- 1 ≺ log n ≺ n ≺ n·log n ≺ n^2 ≺ n^3 ≺ n^4 ≺ … ≺ 2^n.
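As an illustration of how these rules combine (an example added here, not from the original slides):

    3·n^2 + 5·n·log n + 7  ≈  max(n^2, n·log n)  ≈  n^2,

since constant factors and additive constants can be dropped, and n·log n ≼ n^2.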

Complexity classes

- The set of functions that are O-dominated by f: O(f) = { g | g ≼ f } (“big oh” of f).
- The set of functions that O-dominate f: Ω(f) = { g | f ≼ g } (“omega” of f).
- The set of functions that have the same magnitude as f: Θ(f) = { g | g ≈ f } (“theta” of f).
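For example (an illustration not in the original slides):

    n ≼ n^2 but not n^2 ≼ n, so
    n ∈ O(n^2),  n^2 ∈ Ω(n),  and n ∉ Θ(n^2).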

Complexity classes (cont.)

- If tM ≈ c for some constant c, then M is constant (Θ(1)).
- If tM ≈ n, then M is linear (Θ(n)).
- If tM ≈ n^2, then M is quadratic (Θ(n^2)).
- If there is no k for which tM ≼ n^k, then M is exponential.

Problem complexity

- Some problems have a known complexity, e.g., sorting can be done in n·log n time.
- Some problems are known to be exponential. They are called intractable.
- Some problems are unsolvable.
- Determining the complexity of a problem is generally difficult, because it involves proving something about all possible methods for solving the problem.

Cost of a method

- To determine the time-cost of a method, we must count the steps performed in the worst case.
- A method with constant time complexity needs to use only a fixed amount of its data.
- Any method that, in the worst case, must examine all its data is at least linear.
- Methods that are better than linear typically require the data to have some organization or structure.

Cost of a statement

- A simple statement requires a constant number of steps.
- The number of steps of a method invocation is the number of steps performed by the invoked method.
- The worst-case number of steps required by a conditional is the maximum of the number of steps required by each alternative.

Cost of a statement (cont.)

- A sequence of statements requires the sum of each statement’s steps, which has the same magnitude as the maximum of their step counts.
- The time-cost of a loop is the number of steps required by the loop body multiplied by the number of times the loop body is executed.
- The time-cost of a recursive method is determined by the depth of the recursion and the number of steps performed at each level.

Example

public double average (StudentList students) {
    int i, sum, count;
    count = students.size();
    sum = 0;
    i = 0;
    while (i < count) {
        sum = sum + students.get(i).finalExam();
        i = i + 1;
    }
    return (double)sum / (double)count;
}

Assuming get(i) requires only a constant number of steps, this method is Θ(n), where n is the number of students.

Example (cont.)

Suppose get(i) is also linear (requires i·c1 + c0 steps).

    value of i:   0     1        2          …   n-1
    steps:        c0    c1+c0    2·c1+c0    …   (n-1)·c1+c0

Summing over all iterations gives c1·n·(n-1)/2 + c0·n steps, so the method is quadratic, Θ(n^2).

Example

boolean hasDuplicates (List list) {
    int i;
    int j;
    int n;
    boolean found;
    n = list.size();
    found = false;
    for (i = 0; i < n-1 && !found; i = i+1)
        for (j = i+1; j < n && !found; j = j+1)
            found = list.get(i).equals(list.get(j));
    return found;
}

Example (cont.)

In the worst case, the outer loop is performed n-1 times, and the number of inner-loop iterations depends on i:

    value of i:   0     1     2     …   n-3   n-2
    steps:        n-1   n-2   n-3   …   2     1

Summing these, we get (1/2)(n^2 - n). The method is quadratic, Θ(n^2).

Analyzing different algorithms

Given a list of integers, find the maximum sum of the elements of a sublist of zero or more contiguous elements.

    list                   sublist        max sum
    (-2,4,-3,5,3,-5,1)     (4,-3,5,3)     9
    (2,4,5)                (2,4,5)        11
    (-1,-2,3)              (3)            3
    (-1,-2,-3)             ()             0

maxSublistSum

int maxSublistSum (IntegerList list)
    The maximum sum of a sublist of the given list:
    if list.getInt(i) > 0 for some i, 0 <= i < list.size(), then
        max{ list.getInt(i) + … + list.getInt(j) | 0 <= i <= j < list.size() }
    else
        0

maxSublistSum (cont.)

The straightforward implementation, which examines every possible sublist and sums its elements, is Θ(n^3). It can be improved to Θ(n^2) with a little adjustment.
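A minimal sketch of what that straightforward approach might look like (this code is not from the original slides; it assumes IntegerList offers size() and getInt(int), as in the specification above):

// Try every pair (i, j) and sum the elements between them.
int maxSublistSum (IntegerList list) {
    int max = 0;                                // the empty sublist has sum 0
    for (int i = 0; i < list.size(); i = i+1)
        for (int j = i; j < list.size(); j = j+1) {
            int sum = 0;
            for (int k = i; k <= j; k = k+1)    // summing costs up to n steps
                sum = sum + list.getInt(k);
            if (sum > max)
                max = sum;
        }
    return max;                                 // three nested loops: Theta(n^3)
}

Keeping a running sum as j grows, instead of re-summing from i each time, removes the innermost loop and yields the Θ(n^2) version.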

Recursive method

Base cases are lists of length 0 and 1. If we have a longer list, we divide it in half and consider the possible cases (divide and conquer). There are 3 possible cases:

- The sublist with the maximum sum is in the left half of the list (x0 … xmid).
- The sublist with the maximum sum is in the right half of the list (xmid+1 … xn-1).
- The sublist with the maximum sum “overlaps” the middle of the list (includes both xmid and xmid+1).

maxSublistSum

private int maxSublistSum (IntegerList list, int first, int last)
    The maximum sum of a sublist of the given list, that is, the maximum sum of a sublist of (list.get(first), …, list.get(last)).
    require:
        0 <= first <= last < list.size()

The principal method goes something like this:

int maxSublistSum (IntegerList list) {
    if (list.size() == 0)
        return 0;
    else
        return maxSublistSum(list, 0, list.size()-1);
}

The method is Θ(n·log n).
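The slides do not show the body of the private helper; a sketch of how it might implement the three cases described earlier (the border-sum loops and variable names below are illustrative, not from the original) is:

// Divide-and-conquer helper; requires 0 <= first <= last < list.size().
private int maxSublistSum (IntegerList list, int first, int last) {
    if (first == last)                              // base case: one element
        return Math.max(0, list.getInt(first));     // the empty sublist has sum 0
    int mid = (first + last) / 2;
    // cases 1 and 2: the best sublist lies entirely in one half
    int leftBest = maxSublistSum(list, first, mid);
    int rightBest = maxSublistSum(list, mid+1, last);
    // case 3: the best sublist crosses the middle; find the best sum
    // ending at xmid and the best sum starting at xmid+1
    int border = 0;
    int bestLeftBorder = 0;
    for (int i = mid; i >= first; i = i-1) {
        border = border + list.getInt(i);
        if (border > bestLeftBorder) bestLeftBorder = border;
    }
    border = 0;
    int bestRightBorder = 0;
    for (int j = mid+1; j <= last; j = j+1) {
        border = border + list.getInt(j);
        if (border > bestRightBorder) bestRightBorder = border;
    }
    return Math.max(bestLeftBorder + bestRightBorder,
                    Math.max(leftBest, rightBest));
}

Each level does Θ(n) work in the two border loops and halves the problem, which is where the Θ(n·log n) bound comes from.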

Iterative method

Having found the maximum sublist sum over the first i elements of the list, we look at the next element, xi.

There are 2 possibilities for the maximum-sum sublist of the first i+1 elements:
- It doesn’t include xi, in which case it is the same as the maximum-sum sublist of the first i elements.
- It does include xi.

There are 2 possibilities if the maximum-sum sublist ends with xi:
- It consists only of xi.
- It includes xi-1, in which case it is xi appended to the maximum-sum sublist ending with xi-1.

The method is Θ(n) (see the sketch below).
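A minimal sketch of this iterative approach (the variable names maxEndingHere and maxSoFar are illustrative, not from the original slides; maxEndingHere is the best sum of a sublist ending at element i, maxSoFar the best sum over the first i+1 elements):

int maxSublistSum (IntegerList list) {
    int maxSoFar = 0;                    // the empty sublist has sum 0
    int maxEndingHere = 0;
    for (int i = 0; i < list.size(); i = i+1) {
        int x = list.getInt(i);
        // either extend the best sublist ending at i-1, or start over with just x
        maxEndingHere = Math.max(x, maxEndingHere + x);
        // either keep the previous best, or take the sublist ending at i
        maxSoFar = Math.max(maxSoFar, maxEndingHere);
    }
    return maxSoFar;                     // one pass over the list: Theta(n)
}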

We’ve covered

- Time complexity.
- Comparing time-cost functions.
- Computing time-cost for simple examples.
