To illustrate a few of the binary tree traversals, let us consider the binary tree below (a binary search tree rooted at 7):
Preorder traversal: To traverse a binary tree in Preorder, the following operations are carried out recursively at each node: (i) Visit the root, (ii) Traverse the left subtree, and (iii) Traverse the right subtree.
Therefore, the Preorder traversal of the above tree outputs:
7, 1, 0, 3, 2, 5, 4, 6, 9, 8, 10
Inorder traversal: To traverse a binary tree in Inorder, the following operations are carried out recursively at each node: (i) Traverse the left subtree, (ii) Visit the root, and (iii) Traverse the right subtree.
Therefore, the Inorder traversal of the above tree outputs:
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
Postorder traversal: To traverse a binary tree in Postorder, the following operations are carried out recursively at each node: (i) Traverse the left subtree, (ii) Traverse the right subtree, and (iii) Visit the root.
Therefore, the Postorder traversal of the above tree outputs:
0, 2, 4, 6, 5, 3, 1, 8, 10, 9, 7
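The three traversals can be checked in code. The tree itself isn't pictured in this post, but all three outputs are consistent with a binary search tree built by inserting the keys in the Preorder sequence shown above; the sketch below assumes exactly that tree.

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):          # plain BST insertion
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def preorder(n):                # root, left, right
    return [n.key] + preorder(n.left) + preorder(n.right) if n else []

def inorder(n):                 # left, root, right
    return inorder(n.left) + [n.key] + inorder(n.right) if n else []

def postorder(n):               # left, right, root
    return postorder(n.left) + postorder(n.right) + [n.key] if n else []

root = None
for k in [7, 1, 0, 3, 2, 5, 4, 6, 9, 8, 10]:   # the Preorder sequence above
    root = insert(root, k)

print(preorder(root))   # [7, 1, 0, 3, 2, 5, 4, 6, 9, 8, 10]
print(inorder(root))    # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(postorder(root))  # [0, 2, 4, 6, 5, 3, 1, 8, 10, 9, 7]
```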
Thursday, February 19, 2009
Wednesday, February 18, 2009
String Searching Algorithm - KMP Algorithm
Knuth-Morris-Pratt string searching algorithm:
- This algorithm searches for a word w within a main text string s
- The search exploits the following observations:
i) A partial match determines where the next match could begin
ii) This intelligence comes from the search word itself, which contains sufficient information to make that decision (i.e. to determine where the next match begins)
Advantage of this string searching algorithm: it avoids re-examining characters that have already been matched
Step1:
returns: No match found!
Search algorithm observation: we notice that no 's' (i.e. the character at word position i[0]) occurs between positions w[0] and w[8] of the main text except at w[5]. Hence, the next character-matching operation begins at w[5].
Step2:
returns: Match found!
Search algorithm observation: we notice that all characters of the word match between positions w[5] and w[3] of the main text. The next character-matching operation begins at w[4].
Step3:
returns: No match found!
Search algorithm observation: we notice that no 's' (i.e. the character at word position i[0]) occurs between positions w[4] and w[9] of the main text. The character-matching operation ends here.
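A minimal Python sketch of KMP, with the failure (partial-match) table that encodes the "where could the next match begin" intelligence. The example strings from the steps above aren't reproduced in this post, so the demo uses a classic text/word pair instead:

```python
def build_failure(w):
    """failure[i] = length of the longest proper prefix of w[:i+1] that is
    also a suffix of it; this is what lets KMP skip re-examining characters
    that already matched."""
    fail = [0] * len(w)
    k = 0
    for i in range(1, len(w)):
        while k > 0 and w[i] != w[k]:
            k = fail[k - 1]
        if w[i] == w[k]:
            k += 1
        fail[i] = k
    return fail

def kmp_search(s, w):
    """Return the index of the first occurrence of word w in text s, or -1.
    O(m + n): the text pointer i never moves backwards."""
    if not w:
        return 0
    fail = build_failure(w)
    k = 0                        # number of word characters matched so far
    for i, c in enumerate(s):
        while k > 0 and c != w[k]:
            k = fail[k - 1]      # fall back in the word, not in the text
        if c == w[k]:
            k += 1
        if k == len(w):
            return i - len(w) + 1
    return -1

print(kmp_search("ABC ABCDAB ABCDABCDABDE", "ABCDABD"))  # 15
```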
Labels:
algorithm,
Data Structure,
KMP algorithm,
match string,
search string
Tuesday, February 17, 2009
Exercise-Radix Sorting
Exercise-Manipulating an element in an array (Replace/Insert/Remove)
Given array sequence: S=(6, 5, 4, 3, 2, 1)
a) S.replaceElementAtRank(0,7)
(7, 5, 4, 3, 2, 1)
b) S.insertElementAtRank(1,7); S.replaceElementAtRank(3,8)
(6, 7, 5, 4, 3, 2, 1) => (6, 7, 5, 8, 3, 2, 1)
c) S.removeElementAtRank(2); S.insertElementAtRank(4,S.elementAtRank(2))
(6, 5, 3, 2, 1) => (6, 5, 3, 2, 3, 1)
d) S.insertElementAtRank(1,S.elementAtRank(2)); S.removeElementAtRank(S.elementAtRank(4))
(6, 4, 5, 4, 3, 2, 1) => (6, 4, 5, 3, 2, 1)
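The rank operations above map directly onto Python list operations (ranks being 0-based indices). The function names below follow the exercise's ADT, not any standard Python API; here is exercise (c) as a sketch:

```python
# Rank-based operations, modelled on the exercise's ADT.
def replace_element_at_rank(S, r, e):
    S[r] = e                      # O(1) on a Python list

def insert_element_at_rank(S, r, e):
    S.insert(r, e)                # shifts later elements: O(n)

def remove_element_at_rank(S, r):
    S.pop(r)                      # shifts later elements: O(n)

# Exercise (c), starting from the original sequence:
S = [6, 5, 4, 3, 2, 1]
remove_element_at_rank(S, 2)            # -> [6, 5, 3, 2, 1]
insert_element_at_rank(S, 4, S[2])      # elementAtRank(2) is 3 -> [6, 5, 3, 2, 3, 1]
print(S)                                # [6, 5, 3, 2, 3, 1]
```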
Monday, February 16, 2009
Exercise-Reverse Polish Notation (RPN)
a) 2+3+4
=> 34+
=> 234++
b) (2+3)*4
=> 23+
=> 423+*
c) 2*(5+2*3)
=> 23*
=> 23*5+
=> 223*5+*
d) (3+4)*(20-(3*4+2))
=> 34*
=> 34*2+
=> 2034*2+-
=> 34+2034*2+-*
e) 5*(3+4)-(2*(2+2*(1+2)))
=> 12+
=> 212+*
=> 2212+*+
=> 22212+*+*
=> 34+22212+*+*-
=> 534+*22212+*+*-
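The conversions above can be checked with a small stack evaluator. Iterating over a string works for single-digit operands, but the 20 in exercise (d) needs explicit tokens, so the sketch accepts any sequence of tokens:

```python
def eval_rpn(tokens):
    """Evaluate a postfix (RPN) expression given as a sequence of tokens,
    using the usual operand stack."""
    ops = {'+': lambda a, b: a + b,
           '-': lambda a, b: a - b,
           '*': lambda a, b: a * b}
    stack = []
    for t in tokens:
        if t in ops:
            b, a = stack.pop(), stack.pop()   # b was pushed last
            stack.append(ops[t](a, b))
        else:
            stack.append(int(t))
    return stack.pop()

# Exercise (e): 5*(3+4)-(2*(2+2*(1+2)))
print(eval_rpn("534+*22212+*+*-"))  # 19
# Exercise (d) has the multi-digit operand 20, so pass explicit tokens:
print(eval_rpn(["3", "4", "+", "20", "3", "4", "*", "2", "+", "-", "*"]))  # 42
```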
Sorting Algorithm - HeapSort & Heapify
HeapSort(A)
---------------------------------
01 BuildMaxHeap(A)
02 for i <- A.Length downto 2 do
03 Exchange A[1] <-> A[i]
04 A.HeapSize <- A.HeapSize-1
05 PercolateDown(A,1)
06 end for
---------------------------------
BuildMaxHeap(A) (i.e. Max-Heapify)
---------------------------------
01 A.HeapSize <- A.Length
02 for i <- floor(A.Length/2) downto 1 do
03 PercolateDown(A,i)
04 end for
---------------------------------
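A Python sketch of the two routines above, using 0-based indices (the pseudocode is 1-based) and with PercolateDown written out explicitly:

```python
def percolate_down(A, i, heap_size):
    """Sift A[i] down until the max-heap property holds below it."""
    while True:
        left, right, largest = 2 * i + 1, 2 * i + 2, i
        if left < heap_size and A[left] > A[largest]:
            largest = left
        if right < heap_size and A[right] > A[largest]:
            largest = right
        if largest == i:
            return
        A[i], A[largest] = A[largest], A[i]
        i = largest

def build_max_heap(A):
    # Leaves are already heaps; fix internal nodes bottom-up.
    for i in range(len(A) // 2 - 1, -1, -1):
        percolate_down(A, i, len(A))

def heap_sort(A):
    """In-place ascending sort, O(n log n)."""
    build_max_heap(A)
    heap_size = len(A)
    for i in range(len(A) - 1, 0, -1):
        A[0], A[i] = A[i], A[0]          # move current max to the end
        heap_size -= 1
        percolate_down(A, 0, heap_size)

A = [7, 1, 0, 3, 2, 5, 4, 6, 9, 8, 10]
heap_sort(A)
print(A)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```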
Labels:
algorithm,
Data Structure,
Heapify,
HeapSort,
sorting
Thursday, February 12, 2009
Algorithm that returns the middle element of a doubly linked list
Note: the loop must also stop when i and j become adjacent; with the condition i != j alone, the two pointers would cross without ever being equal on an even-length list, and the loop would never terminate.
FindTheMiddleElementOfDoublyLinkedList(L)
01 i <- head
02 j <- tail
03 while i != j and i.next != j do
04 i <- i.next
05 j <- j.previous
06 return i
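A Python sketch of this two-pointer walk, assuming nodes with next/previous references; for an even-length list it returns the left of the two middle nodes:

```python
class DNode:
    def __init__(self, value):
        self.value, self.next, self.previous = value, None, None

def build_list(values):
    """Build a doubly linked list; return (head, tail)."""
    head = tail = None
    for v in values:
        n = DNode(v)
        if head is None:
            head = tail = n
        else:
            tail.next, n.previous = n, tail
            tail = n
    return head, tail

def find_middle(head, tail):
    i, j = head, tail
    # Stop when the pointers meet (odd length) or become adjacent
    # (even length) -- `i is not j` alone would loop forever on even lengths.
    while i is not j and i.next is not j:
        i = i.next
        j = j.previous
    return i

head, tail = build_list([1, 2, 3, 4, 5])
print(find_middle(head, tail).value)  # 3
```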
Time complexity - Big-O of algorithms
a) if statement - O of the costlier branch; O(n) when a branch is linear
b) for loop - O(n) [order n]
c) for loop within a for loop - O(n*n) [order n squared]
d) for loop inside an if statement, which is itself inside a for loop - O(n + n*n) = O(n*n)
e) while loop - O(n) [order n, for n iterations]
f) MergeSort - O(n log n) [order n log n]
g) HeapSort - O(n log n) [order n log n]
h) KMP algorithm - O(m+n) [order m plus n]
Wednesday, February 11, 2009
Checklist before selecting the right data structure for a given problem
(i) Analyze the problem in order to determine the resource constraints
(ii) Determine the basic operations that need to be supported, such as Insert/Remove/Search
(iii) Now, select the data structure with the best cost/benefit trade-off for the required operations
Algorithm to print-all-pairs in the tables (X, Y)
PrintAllPairs(X[0, 1, 2, ..., n-1], Y[0, 1, 2, ..., m-1])
01 for i<-0 to n-1 do
02 for j<-0 to m-1 do
03 Print(X[i], Y[j])
04 end for
05 end for
Algorithm to sum up the n items in a table
Sum up the n items in the table and return the computed Sum
SumUp(X[0, 1, 2, ..., n-1])
01 Sum<-0
02 for i<-0 to n-1 do
03 Sum <- Sum + X[i]
04 end for
05 return Sum
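A direct Python transcription of the pseudocode: one pass over the table, O(n) time:

```python
def sum_up(X):
    """Accumulate the n items of X in a single pass."""
    total = 0
    for x in X:
        total += x
    return total

print(sum_up([1, 2, 3, 4]))  # 10
```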
Time complexity of algorithms - in particular, Big-O of algorithms
Big-O, the asymptotic upper bound, of algorithm constructs such as sequential statements, if statements and looping constructs
(i) Sequential statements:
Big-O of a sequence of statements equals the sum of the time complexities of the individual statements
Therefore, O[g(n)] = O{f1(n)+f2(n)+ ... +fk(n)}
Conclusion: Overall complexity of this construct equates to O[g(n)]
(ii) If statements:
Big-O of an if statement equals the larger of the time complexities of the two alternative branches, i.e. then branch -> f(n), else branch -> g(n)
Therefore, O[h(n)] = O{max(f(n), g(n))}
Conclusion: Overall complexity of this construct equates to O[h(n)]
(iii) Looping construct:
Big-O of a loop equals the product of the following: (a) the cost of one iteration (i.e. the elementary operations executed in a given iteration) -> f(n) and (b) the number of iterations -> g(n)
Therefore, O{f(n) * g(n)}
Conclusion: Overall complexity of this algorithm equates to O{f(n) * g(n)}
Tuesday, February 10, 2009
Operation time complexity for a LINKED LIST
The time complexity of the operations on a data structure like a LINKED LIST is as follows:
i. Most operations are O(n) [order n]
ii. Remove/Insert operations between known nodes require O(1) [constant time]: because each node carries references/pointers to both of its neighbours, no elements need to be shifted. However, those references/pointers mean this data structure requires additional memory!
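A sketch of point (ii): splicing a new node in after a known node touches only a constant number of pointers, regardless of list length. Finding the node in the first place is the O(n) part:

```python
class ListNode:
    def __init__(self, value):
        self.value = value
        self.next = None
        self.previous = None

def insert_after(node, value):
    """O(1): four pointer updates, no shifting, whatever the list length."""
    new = ListNode(value)
    new.previous, new.next = node, node.next
    if node.next is not None:
        node.next.previous = new
    node.next = new
    return new

# Build 1 <-> 3, then splice 2 in between in constant time.
a = ListNode(1)
c = insert_after(a, 3)
b = insert_after(a, 2)
print([a.value, a.next.value, a.next.next.value])  # [1, 2, 3]
```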
Labels:
datastructures,
linked list,
operations,
time complexity
Operation time complexity for an ARRAY
The time complexity of the operations on a data structure like an ARRAY is as follows:
i. Almost all operations are O(1) [constant time]
ii. Remove/Insert operations between existing elements require O(n) [order n] because the elements need to be shifted.
Note! Hence, an array is the data structure of choice where remove/insert operations are not required.
Monday, February 9, 2009
Selection Sorting An Array
Selection sorting an array A[0, 1, 2, ..., n-1]
01 SelectionSort(A[0, 1, 2, ..., n-1])
02 for (j <- 0) to (n-2) do
03 m <- j
04 for (i <- j+1) to (n-1) do
05 if A[m] > A[i] then
06 m <- i
07 end if
08 end for
09 exchange A[m] <-> A[j]
10 end for
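A Python transcription of the pseudocode above: on each pass, find the minimum of the unsorted suffix and swap it into place. Always O(n^2) comparisons, sorted input or not:

```python
def selection_sort(A):
    """In-place selection sort, following the pseudocode."""
    n = len(A)
    for j in range(n - 1):           # pseudocode: j <- 0 to n-2
        m = j                        # index of smallest element seen so far
        for i in range(j + 1, n):    # pseudocode: i <- j+1 to n-1
            if A[m] > A[i]:
                m = i
        A[m], A[j] = A[j], A[m]      # exchange A[m] <-> A[j]
    return A

print(selection_sort([6, 5, 4, 3, 2, 1]))  # [1, 2, 3, 4, 5, 6]
```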
Labels:
algorithm,
datastructures,
pseudocode,
SelectionSort,
sorting
Bubble Sorting An Array
Bubble sorting an array A[0, 1, 2, ..., n-1]
01 BubbleSort(A[0, 1, 2, ..., n-1])
02 i <- 0
03 repeat
04 exch <- False
05 for (j <- 0) to (n-2-i) do
06 if A[j] > A[j+1] then
07 exchange A[j] <-> A[j+1]
08 exch <- True
09 end if
10 end for
11 i <- i+1
12 Until exch = False
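A Python transcription of the pseudocode above, including the exch flag: if a full pass makes no exchanges, the array is already sorted and the repeat-until loop stops early (so already-sorted input costs only O(n)):

```python
def bubble_sort(A):
    """In-place bubble sort with the early-exit flag from the pseudocode."""
    n, i = len(A), 0
    while True:                          # repeat ... until exch = False
        exch = False
        for j in range(n - 1 - i):       # pseudocode: j <- 0 to n-2-i
            if A[j] > A[j + 1]:
                A[j], A[j + 1] = A[j + 1], A[j]
                exch = True
        i += 1                           # the last i elements are in place
        if not exch:
            break
    return A

print(bubble_sort([6, 5, 4, 3, 2, 1]))  # [1, 2, 3, 4, 5, 6]
```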
Labels:
algorithm,
BubbleSort,
datastructures,
pseudocode,
sorting
Computing The Median of An Array
Pseudocode for computing the median of an array A[0, 1, 2, ..., n-1]
01 SelectMedian(A[0, 1, 2, ..., n-1])
02 BubbleSort(A) or SelectionSort(A)
03 if n is odd then
04 r <- A[(n-1)/2]
05 else
06 r <- (A[n/2] + A[(n/2)-1])/2
07 end if
08 return r
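A Python sketch of the pseudocode; sorted() stands in for the BubbleSort/SelectionSort call (any sort works), and the two branches mirror lines 03-07:

```python
def select_median(A):
    """Median of A: the middle element for odd n, or the mean of the two
    middle elements for even n, as in the pseudocode above."""
    B = sorted(A)                 # pseudocode uses BubbleSort/SelectionSort
    n = len(B)
    if n % 2 == 1:                # if n is odd
        return B[(n - 1) // 2]
    return (B[n // 2] + B[n // 2 - 1]) / 2

print(select_median([3, 1, 2]))     # 2
print(select_median([4, 1, 2, 3]))  # 2.5
```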