
Deewan V.S. Institute of Engineering & Technology (DVSIET), Meerut

Lab Manual

Design and Analysis of Algorithms

(ECS-552)

Prepared By:

Mr. Anuj Kumar Srivastava,

Lecturer, Deptt. of CSE/IT

Compile and Run By: Mr. Anuj Kumar Srivastava, Lecturer, Deptt. of CSE/IT


DESIGN AND ANALYSIS OF ALGORITHMS LABORATORY WORK (ECS-552)

1. WAP to implement Bubble sort.

2. WAP to implement Insertion Sort.

3. WAP to implement Quick Sort.

4. WAP to implement Merge Sort.

5. WAP to implement Shell Sort.

6. WAP to implement Heap Sort.

7. WAP to implement Sequential Search.

8. WAP to implement Binary Search.

9. WAP to implement Dynamic Programming: 0/1 Knapsack Problem.

10. WAP to implement Greedy Method: Fractional Knapsack Problem.

11. WAP to implement Selection: Minimum / Maximum / Kth Smallest element.


Program # 1

INSERTION SORT

Here is the program to sort the given integers in ascending order using the insertion sort method. Please refer to the pictorial tutorial of insertion sorting.

Logic: Sorting takes place by inserting each element at its appropriate position, hence the name: insertion sorting. In the first iteration, the second element A[1] is compared with the first element A[0]. In the second iteration, the third element is compared with the first and second elements. In general, in every iteration an element is compared with all the elements before it. If a suitable position for the element is found during this comparison, space is created for it by shifting the larger elements one position ahead, and the element is inserted at that position. This procedure is repeated for all the elements in the list.

If we invert the comparison in the if condition of this program, it will output the array sorted in descending order. Sorting can also be done by other methods, such as selection sort and bubble sort, which follow in the next pages.


C program for Insertion Sort:

#include <stdio.h>

int main(void)
{
    int A[20], N, Temp, i, j;

    printf("\n\n\t ENTER THE NUMBER OF TERMS...: ");
    scanf("%d", &N);
    printf("\n\t ENTER THE ELEMENTS OF THE ARRAY...:");
    for (i = 0; i < N; i++)
        scanf("%d", &A[i]);

    for (i = 1; i < N; i++)
    {
        Temp = A[i];
        j = i - 1;
        /* check j >= 0 before reading A[j] */
        while (j >= 0 && Temp < A[j])
        {
            A[j + 1] = A[j];
            j = j - 1;
        }
        A[j + 1] = Temp;
    }

    printf("\n\tTHE ASCENDING ORDER LIST IS...:\n");
    for (i = 0; i < N; i++)
        printf("\n\t\t\t%d", A[i]);

    return 0;
}


Program # 2

Bubble Sort

Bubble sort program in C

Here is the program to sort the given integers in ascending order using the bubble sort method. Please refer to the pictorial tutorial of bubble sorting.

Logic: The entered integers are stored in the array A. To sort the data in ascending order, each number is compared with the next one for orderliness: the first element A[0] is compared with the second element A[1]. If the former is greater than the latter, they are swapped; otherwise no change is made. Then the second element is compared with the third, and the procedure continues. Hence, after the first iteration of the outer for loop, the largest element is placed at the end of the array. In the second iteration, the comparisons are made only up to the last-but-one position, and the second largest element settles there. The procedure is repeated until the whole array is sorted.

If we invert the comparison in the if condition of this program, it will output the array sorted in descending order. Sorting can also be done by other methods, such as selection sort and insertion sort, which follow in the next pages.


Here is the C program to sort the numbers using Bubble sort

#include <stdio.h>

int main(void)
{
    int A[20], N, Temp, i, j;

    printf("\n\n\t ENTER THE NUMBER OF TERMS...: ");
    scanf("%d", &N);
    printf("\n\t ENTER THE ELEMENTS OF THE ARRAY...:");
    for (i = 0; i < N; i++)
        scanf("%d", &A[i]);

    for (i = 0; i < N - 1; i++)
    {
        /* after pass i, the largest i+1 elements are in their final places */
        for (j = 0; j < N - i - 1; j++)
        {
            if (A[j] > A[j + 1])
            {
                Temp = A[j];
                A[j] = A[j + 1];
                A[j + 1] = Temp;
            }
        }
    }

    printf("\n\tTHE ASCENDING ORDER LIST IS...:\n");
    for (i = 0; i < N; i++)
        printf("\n\t\t\t%d", A[i]);

    return 0;
}


Program #3

Quick Sort

The basic version of the quick sort algorithm was invented by C. A. R. Hoare in 1960 and formally introduced in 1962. It is based on the principle of divide-and-conquer. Quick sort is the algorithm of choice in many situations because it is not difficult to implement, it is a good "general purpose" sort, and it consumes relatively few resources during execution.

Good points
- It is in-place, since it uses only a small auxiliary stack.
- It requires only n log(n) time to sort n items.
- It has an extremely short inner loop.
- The algorithm has been subjected to a thorough mathematical analysis, so very precise statements can be made about performance issues.

Bad points
- It is recursive. Especially if recursion is not available, the implementation is extremely complicated.
- It requires quadratic (i.e., n^2) time in the worst case.
- It is fragile, i.e., a simple mistake in the implementation can go unnoticed and cause it to perform badly.

Quick sort works by partitioning a given array A[p . . r] into two non-empty subarrays A[p . . q] and A[q+1 . . r] such that every key in A[p . . q] is less than or equal to every key in A[q+1 . . r]. Then the two subarrays are sorted by recursive calls to quick sort. The exact position of the partition depends on the given array, and the index q is computed as a part of the partitioning procedure.

QuickSort

1. if p < r
2.     then q ← Partition (A, p, r)
3.          Recursive call to Quick Sort (A, p, q)
4.          Recursive call to Quick Sort (A, q + 1, r)

Note that to sort the entire array, the initial call is Quick Sort (A, 1, length[A]).


As a first step, quick sort chooses one of the items in the array to be sorted as the pivot. Then the array is partitioned on either side of the pivot: elements less than or equal to the pivot move toward the left, and elements greater than or equal to the pivot move toward the right.

Partitioning the Array

Partitioning procedure rearranges the subarrays in-place.

PARTITION (A, p, r)

1. x ← A[p]
2. i ← p - 1
3. j ← r + 1
4. while TRUE do
5.     repeat j ← j - 1
6.     until A[j] ≤ x
7.     repeat i ← i + 1
8.     until A[i] ≥ x
9.     if i < j
10.        then exchange A[i] ↔ A[j]
11.        else return j

Partition selects the first key, A[p] as a pivot key about which the array will partitioned:

Keys ≤ A[p] will be moved towards the left. Keys ≥ A[p] will be moved towards the right.

The running time of the partition procedure is θ(n), where n = r - p + 1 is the number of keys in the array.

Another argument that the running time of PARTITION on a subarray of size n is θ(n) is as follows: pointer i and pointer j start at each end and move towards each other, meeting somewhere in the middle. The total number of times that i can be incremented and j can be decremented is therefore O(n). Associated with each increment or decrement there are O(1) comparisons and swaps. Hence, the total time is O(n).


Quick Sort

#include <stdio.h>
#include <stdlib.h>

#define MAXARRAY 10

void quicksort(int arr[], int low, int high);

int main(void)
{
    int array[MAXARRAY] = {0};
    int i;

    /* load some random values into the array */
    for (i = 0; i < MAXARRAY; i++)
        array[i] = rand() % 100;

    /* print the original array */
    printf("Before quicksort: ");
    for (i = 0; i < MAXARRAY; i++)
        printf(" %d ", array[i]);
    printf("\n");

    quicksort(array, 0, MAXARRAY - 1);

    /* print the `quicksorted' array */
    printf("After quicksort: ");
    for (i = 0; i < MAXARRAY; i++)
        printf(" %d ", array[i]);
    printf("\n");

    return 0;
}

/* sort everything in between `low' <-> `high' */
void quicksort(int arr[], int low, int high)
{
    int i = low;
    int j = high;
    int y;
    /* compare value: middle element as pivot */
    int z = arr[(low + high) / 2];

    /* partition */
    do {
        /* find member above ... */
        while (arr[i] < z)
            i++;

        /* find element below ... */
        while (arr[j] > z)
            j--;

        if (i <= j) {
            /* swap two elements */
            y = arr[i];
            arr[i] = arr[j];
            arr[j] = y;
            i++;
            j--;
        }
    } while (i <= j);

    /* recurse */
    if (low < j)
        quicksort(arr, low, j);

    if (i < high)
        quicksort(arr, i, high);
}


Program #4

Merge Sort

Merge sort is based on the divide-and-conquer paradigm. The merge-sort algorithm can be described in general terms as consisting of the following three steps:

1. Divide Step

If the given array A has zero or one element, return A; it is already sorted. Otherwise, divide A into two arrays, A1 and A2, each containing about half of the elements of A.

2. Recursion Step Recursively sort array A1 and A2.

3. Conquer Step Combine the elements back in A by merging the sorted arrays A1 and A2 into a sorted sequence.

We can visualize Merge-sort by means of binary tree where each node of the tree represents a recursive call and each external nodes represent individual elements of given array A. Such a tree is called Merge-sort tree. The heart of the Merge-sort algorithm is conquer step, which merge two sorted sequences into a single sorted sequence.


To begin, suppose that we have two sorted arrays A1[1], A1[2], . . . , A1[m] and A2[1], A2[2], . . . , A2[n]. The following is a direct algorithm for the obvious strategy of successively choosing the smallest remaining element from A1 and A2 and putting it in A.

MERGE (A1, A2, A)

i ← 1, j ← 1
A1[m+1] ← INT_MAX, A2[n+1] ← INT_MAX
for k ← 1 to m + n do
    if A1[i] < A2[j]
        then A[k] ← A1[i]
             i ← i + 1
        else A[k] ← A2[j]
             j ← j + 1


Merge Sort Algorithm

MERGE_SORT (A)

A1[1 . . ceil(n/2)] ← A[1 . . ceil(n/2)]
A2[1 . . floor(n/2)] ← A[ceil(n/2) + 1 . . n]
Merge Sort (A1)
Merge Sort (A2)
Merge (A1, A2, A)

Analysis

Let T(n) be the time taken by this algorithm to sort an array of n elements. Dividing A into subarrays A1 and A2 takes linear time, and it is easy to see that Merge (A1, A2, A) also takes linear time. Consequently,

T(n) = T(ceil(n/2)) + T(floor(n/2)) + θ(n)

for simplicity

T(n) = 2T (n/2) + θ(n)

The total running time of the merge sort algorithm is O(n lg n), which is asymptotically optimal: like heap sort, merge sort has a guaranteed n lg n running time. Merge sort requires θ(n) extra space; it is not an in-place algorithm. The only known ways to merge in-place (without any extra space) are too complex to be reduced to a practical program.
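The recurrence above can be solved by straightforward unrolling (a standard argument, assuming for simplicity that n is a power of two):

```latex
\begin{aligned}
T(n) &= 2T(n/2) + cn \\
     &= 4T(n/4) + 2cn \\
     &= 8T(n/8) + 3cn \\
     &\;\;\vdots \\
     &= 2^{k}\,T(n/2^{k}) + k\,cn .
\end{aligned}
```

Setting k = lg n gives T(n) = n T(1) + cn lg n = θ(n lg n), matching the stated bound.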


Merge Sort

#include <stdio.h>
#include <stdlib.h>

#define MAXARRAY 10

void mergesort(int a[], int low, int high);

int main(void)
{
    int array[MAXARRAY];
    int i;

    /* load some random values into the array */
    for (i = 0; i < MAXARRAY; i++)
        array[i] = rand() % 100;

    /* array before mergesort */
    printf("Before :");
    for (i = 0; i < MAXARRAY; i++)
        printf(" %d", array[i]);
    printf("\n");

    mergesort(array, 0, MAXARRAY - 1);

    /* array after mergesort */
    printf("Mergesort :");
    for (i = 0; i < MAXARRAY; i++)
        printf(" %d", array[i]);
    printf("\n");

    return 0;
}

void mergesort(int a[], int low, int high)
{
    int i;
    int length = high - low + 1;
    int pivot;
    int merge1, merge2;
    int working[length];         /* C99 variable-length array */

    if (low == high)
        return;

    pivot = (low + high) / 2;
    mergesort(a, low, pivot);
    mergesort(a, pivot + 1, high);

    /* copy the two sorted halves into the working buffer */
    for (i = 0; i < length; i++)
        working[i] = a[low + i];

    merge1 = 0;                  /* cursor into the left half  */
    merge2 = pivot - low + 1;    /* cursor into the right half */

    for (i = 0; i < length; i++) {
        if (merge2 <= high - low) {
            if (merge1 <= pivot - low) {
                if (working[merge1] > working[merge2])
                    a[i + low] = working[merge2++];
                else
                    a[i + low] = working[merge1++];
            } else {
                a[i + low] = working[merge2++];
            }
        } else {
            a[i + low] = working[merge1++];
        }
    }
}


Program #5

Shell Sort

This algorithm is a simple extension of insertion sort. Its speed comes from the fact that it exchanges elements that are far apart (insertion sort exchanges only adjacent elements).

The idea of the Shell sort is to rearrange the file to give it the property that taking every hth element (starting anywhere) yields a sorted file. Such a file is said to be h-sorted.

SHELL_SORT (A)

for (h = 1; h ≤ N/9; h = 3h + 1) do nothing
for (; h > 0; h = h/3) do
    for i = h + 1 to N do
        v = A[i]
        j = i
        while (j > h AND A[j - h] > v) do
            A[j] = A[j - h]
            j = j - h
        A[j] = v

The functional form of the running time of Shell sort depends on the increment sequence and is unknown. For the above algorithm, two conjectures are n(log n)^2 and n^1.25. Furthermore, the running time is not sensitive to the initial ordering of the given sequence, unlike insertion sort.

Shell sort is the method of choice for many sorting applications because it has acceptable running time even for moderately large files and requires only a small amount of code that is easy to get working. Having said that, for demanding sorting problems it may be worthwhile to replace Shell sort with a more sophisticated sort.


Shell Sort

#include <stdio.h>

void shellsort(int a[], int n);

int main(void)
{
    int i, n, a[10];

    printf("Enter array size: ");
    scanf("%d", &n);
    printf("Enter elements of array: ");
    for (i = 0; i < n; i++)
        scanf("%d", &a[i]);

    shellsort(a, n);

    printf("Sorted elements:");
    for (i = 0; i < n; i++)
        printf("\n%d", a[i]);
    printf("\n");
    return 0;
}

/* Gapped insertion sort: for each gap, insertion-sort the interleaved
   subfiles.  (A single exchange pass per gap does not guarantee a sorted
   result, so each gap gets a full insertion pass here.) */
void shellsort(int a[], int n)
{
    int gap, i, j, temp;

    for (gap = n / 2; gap > 0; gap /= 2) {
        for (i = gap; i < n; i++) {
            temp = a[i];
            for (j = i; j >= gap && a[j - gap] > temp; j -= gap)
                a[j] = a[j - gap];
            a[j] = temp;
        }
    }
}


Program#6

Heap Sort

The binary heap data structure is an array that can be viewed as a complete binary tree. Each node of the binary tree corresponds to an element of the array. The array is completely filled on all levels except possibly the lowest.

We represent heaps in level order, going from left to right. The array corresponding to the heap above is [25, 13, 17, 5, 8, 3].

The root of the tree is A[1] and, given the index i of a node, the indices of its parent, left child and right child can be computed as

PARENT (i)   return floor(i/2)
LEFT (i)     return 2i
RIGHT (i)    return 2i + 1

Let's try these out on a heap to make sure we believe they are correct. Take this heap,


which is represented by the array [20, 14, 17, 8, 6, 9, 4, 1].

We'll go from the 20 to the 6 first. The index of the 20 is 1. To find the index of the left child, we calculate 1 * 2 = 2. This takes us (correctly) to the 14. Now, we go right, so we calculate 2 * 2 + 1 = 5. This takes us (again, correctly) to the 6.

Now let's try going from the 4 to the 20. The index of the 4 is 7. We want to go to the parent, so we calculate floor(7 / 2) = 3, which takes us to the 17. Now, to get 17's parent, we calculate floor(3 / 2) = 1, which takes us to the 20.

Heap Property

In a heap, for every node i other than the root, the value of the node is at most the value of its parent:

A[PARENT (i)] ≥A[i]

Thus, the largest element in a heap is stored at the root.

Following is an example of Heap:


By the definition of a heap, all the tree levels are completely filled except possibly the lowest level, which is filled from the left up to a point. Clearly a heap of height h has the minimum number of elements when it has just one node at the lowest level. The levels above the lowest level form a complete binary tree of height h - 1 with 2^h - 1 nodes. Hence the minimum number of nodes possible in a heap of height h is 2^h. Clearly a heap of height h has the maximum number of elements when its lowest level is completely filled. In this case the heap is a complete binary tree of height h and hence has 2^(h+1) - 1 nodes.

The following is not a heap: although it has the heap property, it is not a complete binary tree. Recall that to be complete, a binary tree has to fill up all of its levels, with the possible exception of the last one, which must be filled in from the left side.

Height of a node

We define the height of a node in a tree to be the number of edges on the longest simple downward path from the node to a leaf.

Height of a tree

The height of a tree is the number of edges on the longest simple downward path from the root to a leaf. Note that the height of a tree with n nodes is floor(lg n), which is θ(lg n). This implies that an n-element heap has height floor(lg n).


In order to show this, let the height of the n-element heap be h. From the bounds obtained on the maximum and minimum number of elements in a heap, we get

2^h ≤ n ≤ 2^(h+1) - 1

where n is the number of elements in the heap, and hence

2^h ≤ n < 2^(h+1)

Taking logarithms to the base 2,

h ≤ lg n < h + 1

It follows that h = floor(lg n).

We know from the above that the largest element resides in the root, A[1]. The natural question to ask is: where in a heap might the smallest element reside? Consider any path from the root of the tree to a leaf. Because of the heap property, as we follow that path the elements are either decreasing or staying the same. If all elements in the heap are distinct, this implies that the smallest element is in a leaf of the tree. It could also be that an entire subtree of the heap equals the smallest element, or indeed that there is only one element in the heap, which is then the smallest element, so the smallest element is everywhere. Note that anything below the smallest element must equal the smallest element, so in general only entire subtrees of the heap can contain the smallest element.

Inserting Element in the Heap

Suppose we have a heap as follows

Let's suppose we want to add a node with key 15 to the heap. First, we add the node to the tree at the next spot available at the lowest level of the tree. This is to ensure that the tree remains complete.



Now we do the same thing again, comparing the new node to its parent. Since 14 < 15, we have to do another swap:

Now we are done, because 15 < 20.


Four basic procedures on heap are

1. Heapify, which runs in O(lg n) time.
2. Build-Heap, which runs in linear time.
3. Heap Sort, which runs in O(n lg n) time.
4. Extract-Max, which runs in O(lg n) time.

Maintaining the Heap Property

Heapify is a procedure for manipulating heap data structures. It is given an array A and an index i into the array. The subtrees rooted at the children of A[i] are heaps, but node A[i] itself may violate the heap property, i.e., A[i] < A[2i] or A[i] < A[2i + 1]. The procedure 'Heapify' manipulates the tree rooted at A[i] so that it becomes a heap. In other words, 'Heapify' lets the value at A[i] "float down" in the heap so that the subtree rooted at index i becomes a heap.

Outline of Procedure Heapify

Heapify picks the larger child key and compares it to the parent key. If the parent key is larger, Heapify quits; otherwise it swaps the parent key with the larger child key, so that the parent becomes larger than its children.

It is important to note that the swap may destroy the heap property of the subtree rooted at the largest child node. If this is the case, Heapify calls itself again, using the largest child node as the new root.

Heapify (A, i)

1. l ← LEFT (i)
2. r ← RIGHT (i)
3. if l ≤ heap-size[A] and A[l] > A[i]
4.     then largest ← l
5.     else largest ← i
6. if r ≤ heap-size[A] and A[r] > A[largest]
7.     then largest ← r
8. if largest ≠ i
9.     then exchange A[i] ↔ A[largest]
10.         Heapify (A, largest)


Analysis

If we put a value at the root that is less than every value in the left and right subtrees, then 'Heapify' will be called recursively until a leaf is reached. To make the recursive calls traverse the longest path to a leaf, choose values that make 'Heapify' always recurse on the left child. It follows the left branch when the left child is greater than or equal to the right child, so putting 0 at the root and 1 at all other nodes, for example, will accomplish this task. With such values 'Heapify' will be called h times, where h is the heap height, so its running time will be θ(h) (since each call does θ(1) work), which is θ(lg n). Since we have a case in which Heapify's running time is θ(lg n), its worst-case running time is Ω(lg n).

Example of Heapify

Suppose we have a complete binary tree whose subtrees are heaps. In the following complete binary tree, the subtrees of 6 are heaps:

The Heapify procedure alters the heap so that the tree rooted at 6's position is a heap. Here's how it works. First, we look at the root of our tree and its two children.


We then determine which of the three nodes is the greatest. If it is the root, we are done, because we have a heap. If not, we exchange the appropriate child with the root, and continue recursively down the tree. In this case, we exchange 6 and 8, and continue.

Now, 7 is greater than 6, so we exchange them.

We are at the bottom of the tree, and can't continue, so we terminate.

Building a Heap


We can use the procedure 'Heapify' in a bottom-up fashion to convert an array A[1 . . n] into a heap. Since the elements in the subarray A[floor(n/2) + 1 . . n] are all leaves, the procedure BUILD_HEAP goes through the remaining nodes of the tree and runs 'Heapify' on each one. The bottom-up order of processing guarantees that the subtrees rooted at the children of a node are heaps before 'Heapify' is run at that node.

BUILD_HEAP (A)

1. heap-size[A] ← length[A]
2. for i ← floor(length[A]/2) down to 1 do
3.     Heapify (A, i)

We can build a heap from an unordered array in linear time.

Heap Sort Algorithm

The heap sort combines the best of both merge sort and insertion sort. Like merge sort, the worst-case time of heap sort is O(n log n), and like insertion sort, heap sort sorts in place. The heap sort algorithm starts by using the procedure BUILD_HEAP to build a heap on the input array A[1 . . n]. Since the maximum element of the array is stored at the root A[1], it can be put into its correct final position by exchanging it with A[n] (the last element in A). If we now discard node n from the heap, the remaining elements can be made into a heap. Note that the new element at the root may violate the heap property; all that is needed to restore it is one call to Heapify.

HEAPSORT (A)

1. BUILD_HEAP (A)
2. for i ← length[A] down to 2 do
3.     exchange A[1] ↔ A[i]
4.     heap-size[A] ← heap-size[A] - 1
5.     Heapify (A, 1)

The HEAPSORT procedure takes time O(n lg n), since the call to BUILD_HEAP takes time O(n) and each of the n -1 calls to Heapify takes time O(lg n).


Implementation: Heap Sort

#include <stdio.h>

/* In a 0-based array the children of node `root' are at 2*root + 1 and
   2*root + 2; `bottom' is the index of the last element in the heap. */
void siftDown(int numbers[], int root, int bottom)
{
    int maxChild, temp;

    while (root * 2 + 1 <= bottom)
    {
        maxChild = root * 2 + 1;                 /* left child */
        if (maxChild + 1 <= bottom &&
            numbers[maxChild] < numbers[maxChild + 1])
            maxChild = maxChild + 1;             /* right child is larger */

        if (numbers[root] >= numbers[maxChild])
            return;                              /* heap property holds */

        temp = numbers[root];                    /* swap root with larger child */
        numbers[root] = numbers[maxChild];
        numbers[maxChild] = temp;
        root = maxChild;
    }
}

void heapSort(int numbers[], int array_size)
{
    int i, temp;

    /* build the heap bottom-up */
    for (i = (array_size / 2) - 1; i >= 0; i--)
        siftDown(numbers, i, array_size - 1);

    /* repeatedly move the maximum to the end and restore the heap */
    for (i = array_size - 1; i >= 1; i--)
    {
        temp = numbers[0];
        numbers[0] = numbers[i];
        numbers[i] = temp;
        siftDown(numbers, 0, i - 1);
    }
}


Program # 7

Linear Search

In computer science, linear search or sequential search is a method for finding a particular value in a list. It consists of checking every one of the list's elements, one at a time and in sequence, until the desired one is found.

Linear search is the simplest search algorithm; it is a special case of brute-force search. Its worst case cost is proportional to the number of elements in the list; and so is its expected cost, if all list elements are equally likely to be searched for. Therefore, if the list has more than a few elements, other methods (such as binary search or hashing) may be much more efficient.

Analysis

For a list with n items, the best case is when the value is equal to the first element of the list, in which case only one comparison is needed. The worst case is when the value is not in the list (or occurs only once at the end of the list), in which case n comparisons are needed.

If the value being sought occurs k times in the list, and all orderings of the list are equally likely, the expected number of comparisons is (n + 1)/(k + 1).

For example, if the value being sought occurs once in the list, and all orderings of the list are equally likely, the expected number of comparisons is (n + 1)/2. However, if it is known that it occurs exactly once, then at most n - 1 comparisons are needed, and the expected number of comparisons is

(n + 1)/2 - 1/n

(for example, for n = 2 this is 1, corresponding to a single if-then-else construct).

Either way, asymptotically the worst-case cost and the expected cost of linear search are both O(n).


Linear Search

#include <stdio.h>

int main(void)
{
    int a[10], i, n, m, c = 0;

    printf("Enter the size of the array: ");
    scanf("%d", &n);
    printf("\nEnter the elements of the array: ");
    for (i = 0; i <= n - 1; i++)
        scanf("%d", &a[i]);

    printf("\nThe elements of the array are:");
    for (i = 0; i <= n - 1; i++)
        printf(" %d", a[i]);

    printf("\nEnter the number to be searched: ");
    scanf("%d", &m);

    for (i = 0; i <= n - 1; i++) {
        if (a[i] == m) {
            c = 1;
            break;
        }
    }

    if (c == 0)
        printf("\nThe number is not in the list\n");
    else
        printf("\nThe number is found\n");

    return 0;
}


Program # 8

Binary search algorithm

Generally, to find a value in an unsorted array, we have to look through the elements of the array one by one, until the searched value is found. If the searched value is absent from the array, we go through all the elements. On average, the complexity of such an algorithm is proportional to the length of the array.

The situation changes significantly when the array is sorted. If we know this, random access capability can be utilized very efficiently to find the searched value quickly. The cost of the searching algorithm reduces to the binary logarithm of the array length. For reference, log2(1 000 000) ≈ 20. It means that, in the worst case, the algorithm makes 20 steps to find a value in a sorted array of a million elements, or to conclude that it is not present in the array.

Algorithm: The algorithm is quite simple. It can be done either recursively or iteratively:

1. get the middle element;
2. if the middle element equals the searched value, the algorithm stops;
3. otherwise, two cases are possible:

   o the searched value is less than the middle element. In this case, go to step 1 for the part of the array before the middle element.

   o the searched value is greater than the middle element. In this case, go to step 1 for the part of the array after the middle element.

Now we should define when the iterations stop. The first case is when the searched element is found. The second is when the subarray has no elements. In this case, we can conclude that the searched value is not present in the array.

Example. Find 6 in -1, 5, 6, 18, 19, 25, 46, 78, 102, 114.

Step 1 (middle element is 19 > 6): search the left part: -1 5 6 18

Step 2 (middle element is 5 < 6): search the right part: 6 18

Step 3 (middle element is 6 == 6): the value is found.


Complexity analysis

A huge advantage of this algorithm is that its complexity depends on the array size logarithmically in the worst case. In practice it means that the algorithm will do at most log2(n) iterations, which is a very small number even for big arrays. It can be proved very easily: indeed, on every step the size of the searched part is reduced by half. The algorithm stops when there are no elements left to search in. Therefore, solving the following inequality in whole numbers:

n / 2^iterations ≥ 1

resulting in

iterations ≤ log2(n).

It means that the binary search algorithm's time complexity is O(log2(n)).


Binary search

#include <stdio.h>

#define TRUE 1
#define FALSE 0

int main(void)
{
    int array[10] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
    int left = 0;
    int right = 9;        /* index of the last element, not the array size */
    int middle;
    int number;
    int bsearch = FALSE;
    int i;

    printf("ARRAY: ");
    for (i = 1; i <= 10; i++)
        printf("[%d] ", i);

    printf("\nSearch for Number: ");
    scanf("%d", &number);

    while (bsearch == FALSE && left <= right) {
        middle = (left + right) / 2;

        if (number == array[middle]) {
            bsearch = TRUE;
            printf("** Number Found **\n");
        } else if (number < array[middle]) {
            right = middle - 1;
        } else {
            left = middle + 1;
        }
    }

    if (bsearch == FALSE)
        printf("-- Number Not found --\n");

    return 0;
}


Program # 9

Dynamic-Programming Algorithm 0-1 Knapsack Problem

Let i be the highest-numbered item in an optimal solution S for W pounds. Then S' = S - {i} is an optimal solution for W - wi pounds, and the value of the solution S is vi plus the value of the subproblem.

We can express this fact in the following formula: define c[i, w] to be the solution for items 1, 2, . . . , i and maximum weight w. Then

c[i, w] = 0                                          if i = 0 or w = 0
c[i, w] = c[i-1, w]                                  if i > 0 and w < wi
c[i, w] = max(vi + c[i-1, w-wi], c[i-1, w])          if i > 0 and w ≥ wi

This says that the value of a solution for i items either includes the ith item, in which case it is vi plus a subproblem solution for (i - 1) items and the weight excluding wi, or does not include the ith item, in which case it is a subproblem's solution for (i - 1) items and the same weight. That is, if the thief picks item i, the thief takes vi value and can then choose from items 1, 2, . . . , i - 1 up to the weight limit w - wi, getting c[i-1, w-wi] additional value. On the other hand, if the thief decides not to take item i, the thief can choose from items 1, 2, . . . , i - 1 up to the weight limit w, and get c[i-1, w] value. The better of these two choices should be made.

Although this is the 0-1 knapsack problem, the above formula for c is similar to the LCS formula: boundary values are 0, and other values are computed from the input and "earlier" values of c. So the 0-1 knapsack algorithm is like the LCS-length algorithm given in the CLR book for finding a longest common subsequence of two sequences.

The algorithm takes as input the maximum weight W, the number of items n, and the two sequences v = <v1, v2, . . . , vn> and w = <w1, w2, . . . , wn>. It stores the c[i, j] values in a table, that is, a two-dimensional array c[0 . . n, 0 . . W], whose entries are computed in row-major order. That is, the first row of c is filled in from left to right, then the second row, and so on. At the end of the computation, c[n, W] contains the maximum value that can be packed into the knapsack.


Dynamic-0-1-Knapsack (v, w, n, W)

for w = 0 to W do
    c[0, w] = 0
for i = 1 to n do
    c[i, 0] = 0
    for w = 1 to W do
        if wi ≤ w then
            if vi + c[i-1, w-wi] > c[i-1, w] then
                c[i, w] = vi + c[i-1, w-wi]
            else
                c[i, w] = c[i-1, w]
        else
            c[i, w] = c[i-1, w]

The set of items to take can be deduced from the table, starting at c[n, W] and tracing backwards where the optimal values came from. If c[i, w] = c[i-1, w], item i is not part of the solution, and we continue tracing with c[i-1, w]. Otherwise item i is part of the solution, and we continue tracing with c[i-1, w-wi].

Analysis

This dynamic 0-1 knapsack algorithm takes Θ(nW) time, broken up as follows:

Θ(nW) time to fill the c-table, which has (n+1)·(W+1) entries, each requiring Θ(1) time to compute, and O(n) time to trace the solution, because the tracing process starts in row n of the table and moves up one row at each step.


Dynamic Programming (0/1 Knapsack Problem)

#include <stdio.h>

#define MAXWEIGHT 100

int n = 3;               /* The number of objects */
int c[10] = {8, 6, 4};   /* c[i] is the *COST* of the ith object; i.e. what
                            YOU PAY to take the object */
int v[10] = {16, 10, 7}; /* v[i] is the *VALUE* of the ith object; i.e.
                            what YOU GET for taking the object */
int W = 10;              /* The maximum weight you can take */

void fill_sack()
{
    int a[MAXWEIGHT];          /* a[i] holds the maximum value that can be
                                  obtained using at most i weight */
    int last_added[MAXWEIGHT]; /* used to reconstruct which objects were added */
    int i, j;
    int aux;

    for (i = 0; i <= W; ++i) {
        a[i] = 0;
        last_added[i] = -1;
    }

    for (i = 1; i <= W; ++i)
        for (j = 0; j < n; ++j)
            if ((c[j] <= i) && (a[i] < a[i - c[j]] + v[j])) {
                a[i] = a[i - c[j]] + v[j];
                last_added[i] = j;
            }

    for (i = 0; i <= W; ++i)
        if (last_added[i] != -1)
            printf("Weight %d; Benefit: %d; To reach this weight I added object %d (%d$ %dKg) to weight %d.\n",
                   i, a[i], last_added[i] + 1, v[last_added[i]], c[last_added[i]], i - c[last_added[i]]);
        else
            printf("Weight %d; Benefit: 0; Can't reach this exact weight.\n", i);
    printf("---\n");

    aux = W;
    while ((aux > 0) && (last_added[aux] != -1)) {
        printf("Added object %d (%d$ %dKg). Space left: %d\n",
               last_added[aux] + 1, v[last_added[aux]], c[last_added[aux]], aux - c[last_added[aux]]);
        aux -= c[last_added[aux]];
    }

    printf("Total value added: %d$\n", a[W]);
}

int main(int argc, char *argv[])
{
    fill_sack();
    return 0;
}


Program # 10

Greedy Algorithm: Fractional Knapsack Problem

There are n items in a store. For i = 1, 2, . . . , n, item i has weight wi > 0 and worth vi > 0. A thief can carry a maximum weight of W pounds in a knapsack. In this version of the problem the items can be broken into smaller pieces, so the thief may decide to carry only a fraction xi of object i, where 0 ≤ xi ≤ 1. Item i then contributes xi wi to the total weight in the knapsack, and xi vi to the value of the load.

In symbols, the fractional knapsack problem can be stated as follows:

maximize  Σ (i=1 to n) xi vi   subject to the constraint  Σ (i=1 to n) xi wi ≤ W

It is clear that an optimal solution must fill the knapsack exactly, for otherwise we could add a fraction of one of the remaining objects and increase the value of the load. Thus in an optimal solution Σ (i=1 to n) xi wi = W.

Greedy-Fractional-Knapsack (w, v, W)

for i = 1 to n do
    x[i] = 0
weight = 0
while weight < W do
    i = best remaining item
    if weight + w[i] ≤ W then
        x[i] = 1
        weight = weight + w[i]
    else
        x[i] = (W - weight) / w[i]
        weight = W
return x

Analysis

If the items are already sorted into decreasing order of vi/wi, then the while-loop takes O(n) time; therefore, the total time including the sort is O(n log n).


If we instead keep the items in a heap with the largest vi/wi at the root, then creating the heap takes O(n) time, and each iteration of the while-loop takes O(log n) time (since the heap property must be restored after the removal of the root).

Although this data structure does not alter the worst case, it may be faster if only a small number of items are needed to fill the knapsack.

One variant of the 0-1 knapsack problem arises when the order of the items sorted by increasing weight is the same as their order sorted by decreasing value.

The optimal solution to this variant is to sort the items by value in decreasing order, and then pick the most valuable item, which by assumption also has the least weight, provided its weight is less than the weight that can still be carried. Deduct the weight of the item just picked from the remaining capacity. The second item to pick is the most valuable item among those remaining. Follow the same strategy until the thief cannot carry any more items (due to weight).

Proof

One way to prove the correctness of the above algorithm is to prove the greedy-choice property and the optimal-substructure property. This consists of two steps. First, prove that there exists an optimal solution that begins with the greedy choice given above. Second, prove that if A is an optimal solution to the original problem S, then A - {a} is also an optimal solution to the problem S - {s}, where a is the item the thief picked in the greedy choice and S - {s} is the subproblem remaining after the first greedy choice has been made. The second part is easy to prove, since the more valuable items have less weight: an item with ratio v'/w' can replace any other item in the solution, because w' < w means it still fits, and v' > v means it increases the value.


Greedy Algorithm (Fractional Knapsack Problem)

#include <stdio.h>

int n = 5;                    /* The number of objects */
int c[10] = {12, 1, 2, 1, 4}; /* c[i] is the *COST* of the ith object; i.e.
                                 what YOU PAY to take the object */
int v[10] = {4, 2, 2, 1, 10}; /* v[i] is the *VALUE* of the ith object; i.e.
                                 what YOU GET for taking the object */
int W = 15;                   /* The maximum weight you can take */

void simple_fill()
{
    int cur_w;
    float tot_v = 0;          /* total value collected so far */
    int i, maxi;
    int used[10];

    for (i = 0; i < n; ++i)
        used[i] = 0;          /* I have not used the ith object yet */

    cur_w = W;
    while (cur_w > 0) {       /* while there's still room */
        /* Find the best remaining object (largest value/weight ratio) */
        maxi = -1;
        for (i = 0; i < n; ++i)
            if ((used[i] == 0) &&
                ((maxi == -1) || ((float)v[i]/c[i] > (float)v[maxi]/c[maxi])))
                maxi = i;

        used[maxi] = 1;       /* mark the maxi-th object as used */
        cur_w -= c[maxi];     /* with the object in the bag, I can carry less */
        tot_v += v[maxi];

        if (cur_w >= 0)
            printf("Added object %d (%d$, %dKg) completely in the bag. Space left: %d.\n",
                   maxi + 1, v[maxi], c[maxi], cur_w);
        else {
            printf("Added %d%% (%d$, %dKg) of object %d in the bag.\n",
                   (int)((1 + (float)cur_w/c[maxi]) * 100), v[maxi], c[maxi], maxi + 1);
            tot_v -= v[maxi];
            tot_v += (1 + (float)cur_w/c[maxi]) * v[maxi];
        }
    }

    printf("Filled the bag with objects worth %.2f$.\n", tot_v);
}

int main(int argc, char *argv[])
{
    simple_fill();
    return 0;
}


Program #11

To find Maximum Element from List

Maximum(list, n)

1. n ← length[list]
2. max ← list[0]
3. for i ← 1 to n-1
4.     if list[i] > max then
5.         max ← list[i]
6. print the value of max
7. exit

To find Minimum Element from List

Minimum(list, n)

1. n ← length[list]
2. min ← list[0]
3. for i ← 1 to n-1
4.     if list[i] < min then
5.         min ← list[i]
6. print the value of min
7. exit


Maximum and Minimum element in an array

#include <stdio.h>

int main()
{
    int i;
    int a[10] = {10, 55, 9, 4, 234, 20, 30, 40, 22, 34};
    int max = a[0];
    int min = a[0];

    for (i = 1; i < 10; i++) {
        if (a[i] > max)
            max = a[i];
        else if (a[i] < min)
            min = a[i];
    }

    printf("Maximum element in an array : %d\n", max);
    printf("Minimum element in an array : %d\n", min);

    return 0;
}
