for(int i=1; i<=n; i++)
for(int j=1; j<=n; j++)
a.add(i);
v=a.toArray();
insertSort(v);
where
The time complexity of the two nested for loops is O(n²). The resulting list has length n², since each number from the range 1-n is added to it n times: for n=3, the list would be {1, 1, 1, 2, 2, 2, 3, 3, 3}.
toArray is linear in the number of elements, and it will operate on an array of n² elements.
insertSort will operate on the same array of length n². The time complexity of insertion sort is quadratic in the general case, but here we are dealing with presorted input, so it is linear in the number of elements, thus O(n²) since we have n² elements.
There are three consecutive steps, each of complexity O(n²), so the complexity of the whole program reduces to O(n²).
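As a sanity check, here is a quick sketch of the loop part (assuming a is a Java ArrayList, as the add/toArray calls suggest). For n = 3 it confirms the list has n² = 9 elements and is already sorted.

```java
import java.util.ArrayList;
import java.util.List;

public class NestedLoopSize {
    // Builds the list exactly as the two nested loops do.
    static List<Integer> buildList(int n) {
        List<Integer> a = new ArrayList<>();
        for (int i = 1; i <= n; i++)
            for (int j = 1; j <= n; j++)
                a.add(i); // each i is added n times
        return a;
    }

    public static void main(String[] args) {
        List<Integer> a = buildList(3);
        System.out.println(a.size() + " " + a); // 9 [1, 1, 1, 2, 2, 2, 3, 3, 3]
    }
}
```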
for(int i=n; i>=1; i=i/2)
a.add(i);
v=a.toArray();
insertSort(v);
where
The for loop is O(log n), since it starts from n and halves it at each step, until reaching 0. The resulting list will contain log n elements in descending order.
toArray is linear in the number of elements, and it will operate on an array of log n elements.
insertSort is quadratic in the number of elements, and in this case we know that the general case holds, since the input is not presorted. It will operate on an array of log n elements, thus doing (log n)² steps. This is the most expensive operation out of the four lines, so the complexity of the whole program becomes O((log n)²).
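To see the O(log n) bound on the loop concretely, here is a small sketch that counts its iterations; the loop runs floor(log₂ n) + 1 times.

```java
public class HalvingLoop {
    // Counts the iterations of: for(int i = n; i >= 1; i = i / 2)
    static int iterations(int n) {
        int count = 0;
        for (int i = n; i >= 1; i = i / 2) count++;
        return count;
    }

    public static void main(String[] args) {
        // floor(log2 n) + 1 iterations: 1024, 512, ..., 2, 1.
        System.out.println(iterations(1024)); // 11
    }
}
```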
for(int i=1; i<=n; i=i*2)
t.add(i);
where
The first for loop is O(log n), since it starts from 1 and doubles at each step until reaching n. The next line is an insertion into a BST, which takes O(k) steps, where k is the height of the tree.
A quick, but wrong, answer would be O(k log n), with k unknown.
We can have a case where adding numbers from the loop doesn't change the height of the tree:
For example, assume that the original tree is
6 k=4
\
10
\
12
\
20
n is 10, and we add 1,2,4,8
6 k'=4
/ \
4 10
/ / \
2 8 12
/ \
1 20
In the worst case, each addition of a number grows the height by one:
For example, starting with an empty tree
∅ k=0
n is 10, and we add 1,2,4,8
1 k'=k+(log n)
\
2
\
4
\
8
In the best case, the height of the initial and final tree is k, and each t.add(i) takes O(k) steps.
In the worst case, the height of the initial tree is k and of the final tree k + log n: each addition can grow the height by one, so t.add(i) takes between k and k + log n steps.
Thus, the upper bound on the asymptotic complexity of the program is O(log n * (k + log n)).
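The worst case, where every insertion extends the same path, can be reproduced with a plain unbalanced BST (a minimal sketch; the Node class and insert function are illustrative names). Inserting the doubling sequence 1, 2, 4, 8 into an empty tree produces a chain whose height grows by one per insertion.

```java
public class ChainBST {
    static class Node {
        int val;
        Node left, right;
        Node(int v) { val = v; }
    }

    // Plain, unbalanced BST insertion.
    static Node insert(Node t, int v) {
        if (t == null) return new Node(v);
        if (v < t.val) t.left = insert(t.left, v);
        else t.right = insert(t.right, v);
        return t;
    }

    // Height counted in nodes: a single node has height 1.
    static int height(Node t) {
        if (t == null) return 0;
        return 1 + Math.max(height(t.left), height(t.right));
    }

    static int heightAfterDoublingInserts(int n) {
        Node t = null;
        for (int i = 1; i <= n; i = i * 2) t = insert(t, i); // the loop from the exercise
        return height(t);
    }

    public static void main(String[] args) {
        // n = 10 adds 1, 2, 4, 8: every insert lands on the rightmost path.
        System.out.println(heightAfterDoublingInserts(10)); // 4
    }
}
```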
Let us demonstrate with the given binary tree:
_ 1 _
/ \
_2_ 3_
/ \ \
4 5 6
The inorder traversal of the tree is {4,2,5,1,3,6} and the postorder is {4,5,2,6,3,1}.
Last element of the postorder traversal is the root. By locating the root in the inorder sequence and matching by the length of the subtrees, we get the following groupings:
Inorder: 4 2 5 (1) 3 6
Postorder: 4 5 2 6 3 (1)
_ 1 _
/ \
4,2,5 3,6
We will repeat the procedure for the left subtree:
Inorder: 4 (2) 5
Postorder: 4 5 (2)
_ 1 _
/ \
_2_ 3,6
/ \
4 5
And the right subtree:
Inorder: (3) 6
Postorder: 6 (3)
_ 1 _
/ \
_2_ 3_
/ \ \
4 5 6
The full algorithm is below (in Haskell).
import Data.List
data Tree a = Empty | Node a (Tree a) (Tree a) deriving (Show)
recreateTree :: (Ord a, Eq a) => [a] -> [a] -> Tree a
recreateTree [] [] = Empty
recreateTree inord postord = Node root left right
where root = last postord
left = recreateTree leftInord leftPostord
right = recreateTree rightInord rightPostord
Just rootInd = elemIndex root inord
leftInord = take rootInd inord
rightInord = drop (rootInd+1) inord
leftPostord = take rootInd postord
rightPostord = init $ drop rootInd postord
For an implementation in Java, see e.g. programcreek.com/2013/01/construct-binary-tree-from-inorder-and-postorder-traversal/.
Full Haskell code for this and other tree exercises can be found in ideone.com/4nSGrk.
This is possible for BSTs, but not for trees in general.
Let us demonstrate with the given BST:
_ 6 _
/ \
_4_ _8_
/ \ / \
2 5 7 10
The preorder traversal is {6, 4, 2, 5, 8, 7, 10}.
The first method is to simply create an empty BST and insert the elements of the list one by one. Insertion into a BST takes care of placing the elements in the right order, and since we know that the original tree was a BST, it is the only tree that produces this list under preorder traversal. We demonstrate with our example tree:
t = new Tree()
t.insert(6)
_ 6 _
/ \
∅ ∅
t.insert(4)
_ 6 _
/ \
4 ∅
t.insert(2)
_ 6 _
/ \
_4_ ∅
/
2
t.insert(5)
_ 6 _
/ \
_4_ ∅
/ \
2 5
t.insert(8)
_ 6 _
/ \
_4_ 8
/ \
2 5
and so on for the rest of the values. Below is the algorithm written in Java:
BST construct_BST(int[] preorder) {
BST t = new BST();
for (int i=0; i<preorder.length; i++) {
t.insert(preorder[i]);
}
return t;
}
Unbalanced BSTs are no problem, see for instance:
1 4
\ /
2 3
\ /
3 2
\ /
4 1
preorder: preorder:
{1,2,3,4} {4,3,2,1}
When we add the nodes in the order from the list, it becomes unbalanced just like the original BST was.
See the solution from geeksforgeeks.org/construct-bst-from-given-preorder-traversal-set-2/. The procedure is described below:
Create an empty stack.
Make the first value the root. Push it to the stack.
While the stack is not empty and the next value is greater than the stack's top value, keep popping. Make the new value the right child of the last popped node, and push the new node to the stack.
If the next value is less than the stack's top value, make it the left child of the stack's top node, and push the new node to the stack.
Repeat steps 3 and 4 while there are items remaining in pre[].
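A sketch of this stack-based procedure in Java (the Node class and method names are illustrative, not taken from the linked article):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class PreorderBST {
    static class Node {
        int val;
        Node left, right;
        Node(int v) { val = v; }
    }

    // Builds the BST from its preorder traversal using a stack.
    static Node construct(int[] pre) {
        if (pre.length == 0) return null;
        Deque<Node> stack = new ArrayDeque<>();
        Node root = new Node(pre[0]);     // the first value is the root
        stack.push(root);
        for (int i = 1; i < pre.length; i++) {
            Node node = new Node(pre[i]);
            if (pre[i] < stack.peek().val) {
                stack.peek().left = node; // smaller value: left child of the top
            } else {
                Node last = null;
                while (!stack.isEmpty() && stack.peek().val < pre[i])
                    last = stack.pop();   // pop all smaller ancestors
                last.right = node;        // right child of the last popped node
            }
            stack.push(node);
        }
        return root;
    }

    // Preorder traversal, used to check the reconstruction.
    static void preorder(Node t, List<Integer> out) {
        if (t == null) return;
        out.add(t.val);
        preorder(t.left, out);
        preorder(t.right, out);
    }

    public static void main(String[] args) {
        Node r = construct(new int[]{6, 4, 2, 5, 8, 7, 10});
        List<Integer> out = new ArrayList<>();
        preorder(r, out);
        System.out.println(out); // [6, 4, 2, 5, 8, 7, 10]
    }
}
```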
It is not possible to recreate a BST from only inorder traversal. The inorder traversal of a BST is always ordered, and for example the following two BSTs have the same inorder traversal.
1 2
\ / \
2 1 4
\ /
3 3
\
4
inorder: inorder:
{1,2,3,4} {1,2,3,4}
Below is an algorithm written in Haskell. It starts from the middle of the whole list, and splits the rest of the list recursively, until all values are in the tree. Applied to e.g. [1,2,3,4,5,6,7], the algorithm starts building the tree with 4 as the root, then calls buildBST on [1,2,3] and [5,6,7] for the left and right subtree respectively, until the lists are empty.
data Tree a = Empty | Node a (Tree a) (Tree a) deriving (Show)
buildBST :: (Ord a, Eq a) => [a] -> Tree a
buildBST [] = Empty
buildBST xs = Node root left right
where (root,len) = middle xs
left = buildBST (take len xs)
right = buildBST (drop (len+1) xs)
--helper function that gives the element in the middle and the index of it
middle :: [a] -> (a,Int)
middle [] = error "Empty list"
middle xs = (xs !! half, half)
where half = (length xs) `div` 2
Below is the algorithm written in Haskell. The tree data structure is the same as in the previous exercises.
printLevelK :: (Show a) => Tree a -> Integer -> String
printLevelK tree k = go tree k
where go Empty _ = "Nil"
go (Node a l r) 0 = show a
go (Node a l r) k = go l (k-1) ++ ", " ++ go r (k-1)
We start by inserting 1. The resulting tree has height 1 and balance factor 0, since it has no children.
_ 1 _
/ \
∅ ∅
Inserting 2, we compare it to the root node, and since it is bigger, we put it to the right subtree. Balance factor of the right subtree is 0, and root's is -1, which is in the acceptable range.
_ 1 _
/ \
∅ _2_
/ \
∅ ∅
Next up, we add 3! Compare it to the root, it is bigger so go to the right subtree. Also bigger than 2, so right subtree. After insertion we have this situation:
_ 1 _
/ \
∅ _2_
/ \
∅ _3_
/ \
∅ ∅
Balance factor of the root has become -2. We're in the right-right tree subcase, and we fix the situation by doing a left rotation.
_ 2 _
/ \
_1_ _3_
/ \ / \
∅ ∅ ∅ ∅
Insert 4. It's bigger than anything in the tree, so on it goes as the rightmost leaf.
_ 2 _
/ \
_1_ _3_
/ \ / \
∅ ∅ ∅ _4_
/ \
∅ ∅
Balance factors are in the acceptable limits, so no rotations. Add 5!
5 finds it place again as the rightmost child of the right subtree.
_ 2 _
/ \
_1_ _3_
/ \ / \
∅ ∅ ∅ _4_
/ \
∅ _5_
/ \
∅ ∅
Balance factors of the subtrees starting with 4 and 5 are fine, but 3 is at -2. We fix the situation with a left rotation between 3 and 4.
_ 2 _
/ \
1 _4_
/ \ / \
∅ ∅ 3 5
/ \ / \
∅ ∅ ∅ ∅
This rotation has managed to restore the balance in the whole tree. We proceed to add the final element, 6. This breaks the balance again:
_ 2 _
/ \
1 _4_
/ \ / \
∅ ∅ 3 5
/ \ / \
∅ ∅ ∅ 6
/ \
∅ ∅
The balance factors of 1, 3 and 6 are 0; 5 and 4 are at -1; the root is at -2. We do a left rotation between 2 and 4, and the final tree becomes
_ 4 _
/ \
_2_ _5_
/ \ / \
1 3 ∅ 6
/ \ / \ / \
∅ ∅ ∅ ∅ ∅ ∅
For an animated version, you can go to http://visualgo.net/bst.html, choose AVL tree, and type in 1, 2, 3, 4, 5, 6!
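The left rotation used in the steps above can be sketched as follows (a minimal fragment; a full AVL implementation would also update heights and handle the symmetric right-left, left-left and left-right cases):

```java
public class LeftRotation {
    static class Node {
        int val;
        Node left, right;
        Node(int v) { val = v; }
    }

    // Left rotation around x: x's right child y becomes the subtree root,
    // and y's old left subtree becomes x's new right subtree.
    static Node rotateLeft(Node x) {
        Node y = x.right;
        x.right = y.left;
        y.left = x;
        return y;
    }

    public static void main(String[] args) {
        // The right-right chain 1 -> 2 -> 3 from the walkthrough:
        Node root = new Node(1);
        root.right = new Node(2);
        root.right.right = new Node(3);
        root = rotateLeft(root);
        // 2 is now the root, with children 1 and 3.
        System.out.println(root.val + " " + root.left.val + " " + root.right.val);
    }
}
```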
We use two queues, one of which we call Main and the other Help.
When we push to the stack, we first copy everything already in the main queue into the help queue by doing n dequeue and enqueue operations. Then we enqueue the new value into the main queue, and finally move each element from the help queue back to the main queue.
Main: Help:
push 1 enqueue 1 -
// Main: head=1
push 2 dequeue 1 --> enqueue 1
enqueue 2 -
enqueue 1 <-- dequeue 1
// Main: head=2,1
push 3 dequeue 2 --> enqueue 2
dequeue 1 --> enqueue 1
enqueue 3 -
enqueue 2 <-- dequeue 2
enqueue 1 <-- dequeue 1
// Main: head=3,2,1
push 4 dequeue 3 --> enqueue 3
dequeue 2 --> enqueue 2
dequeue 1 --> enqueue 1
enqueue 4 -
enqueue 3 <-- dequeue 3
enqueue 2 <-- dequeue 2
enqueue 1 <-- dequeue 1
// Main: head=4,3,2,1
Enqueue and dequeue are O(1) operations in an efficient implementation: we can just store pointers to the head and the tail. Pushing requires that we dequeue n times, enqueue n times, enqueue once, and then again dequeue and enqueue n times. This adds up to 4n+1 operations, which simplifies to O(n).
Popping is simple: we just dequeue from the main queue, which is O(1).
pop 4 dequeue 4 -
// Main: head=3,2,1
Below is an implementation in Haskell. We assume Queue as a black box, with operations enq, deq and newQueue.
data Stack a = S (Queue a) (Queue a)
pop :: Stack a -> (a, Stack a)
pop (S main help) = if empty main
then error "Empty stack"
else (x, S newMain help)
where (x, newMain) = deq main
push :: a -> Stack a -> Stack a
push a (S main help) = S newMain help
where newHelp = move main help --move items from main queue to help queue
newTop = enq a newQueue --enqueue the item to be pushed to an empty queue
newMain = move newHelp newTop --move items from help queue to the queue with the pushed item
-- helper function to move the contents of q1 to q2
move :: Queue a -> Queue a -> Queue a
move q1 q2 = if empty q1
then q2
else move newQ1 (enq x q2)
where (x, newQ1) = deq q1
For the full code and some test printouts, see ideone.com/xw5iZY.
Example: 159+841= 1000, where 159 is represented as: Cons 9 (Cons 5 (Cons 1 Nil)).
We are using the following representation for lists in all of the following exercises on lists.
class Node {
int val;
Node next;
}
class List {
Node head;
}
Below is an implementation of the algorithm, written in Java.
Since the lowest units are at the head of the list, we can start adding the elements in the lists pairwise. If one of the lists runs out of elements, we just add it with 0. Remainder from the previous round is added to the value at each round. If there is a non-0 remainder after going through both lists, we add that as the last (biggest) position.
List listAdd(List one, List two) {
Node a = one.head;
Node b = two.head;
int va, vb;
int remainder = 0;
List result = new List();
while (!(a==null && b==null)) {
va = (a==null) ? 0 : a.val; // if one of the lists runs out of numbers,
vb = (b==null) ? 0 : b.val; // use 0 as the value for that position.
int val = va + vb + remainder; // add remainder from previous round
if(val >= 10) { // set carry for the next round
remainder = 1; // va + vb + old remainder <= 19, so the carry is at most 1
val = val - 10;
} else {
remainder = 0;
}
result.append(new Node(val));
a = (a==null) ? null : a.next;
b = (b==null) ? null : b.next;
}
if (remainder > 0) {
result.append(new Node(remainder));
}
return result;
}
A full implementation, with test cases, is in ideone.com/QRi7jQ.
We use the tortoise and hare algorithm. Both start from the beginning of the list; the tortoise advances one node at a time and the hare two. If they ever end up pointing at the same node, there must be a cycle in the list.
boolean checkCycle(List l) {
Node tortoise = l.head;
Node hare = l.head;
while(true) {
if(hare==null || hare.next==null) {
System.out.println("Reached end of list, no cycles");
return false;
}
hare = hare.next.next;
tortoise = tortoise.next;
if(tortoise == hare) {
System.out.println("Cycle discovered");
return true;
}
}
}
We take two pointers: the first starts from the beginning of the list, the second from the (k+1)th position. Then we advance both one node at a time, and when the second one reaches the end, the first one must be at the kth node before the last.
Node kthBeforeLast(int k, List l) {
Node temp1 = l.head;
Node temp2 = l.head;
int i = 0;
// Advance temp2 k steps, to the (k+1)th node in the list
while(i < k) {
if(temp2.next == null) {
return temp1;
}
temp2 = temp2.next;
i++;
}
// When temp2 reaches the end, temp1 is at kth element from end.
while(temp2.next != null) {
temp1 = temp1.next;
temp2 = temp2.next;
}
return temp1;
}
We assume the following representations for Graph and Node:
class Graph { ArrayList<Node> nodes; }
class Node {
String id;
ArrayList<Node> adjList;
}
Graph square(Graph g) {
ArrayList<Node> sqnodes = new ArrayList<Node>();
for (Node nd : g.nodes) {
Node newNd = new Node(nd.id); // make new node with same id but empty adjList
for (Node nd2 : g.nodes) {
if (nd != nd2 && nd.distTwoOrLess(nd2)) {
newNd.addNeighbour(nd2);
}
}
sqnodes.add(newNd);
}
return new Graph(sqnodes);
}
The Node class contains the function distTwoOrLess, which takes a node as an argument and checks all the nodes in its adjacency list plus the adjacency lists of all its neighbours. If the argument node is found in one of them, the function returns true; otherwise it returns false.
boolean distTwoOrLess(Node other) {
for (Node nd : this.adjList) {
if (nd==other) {
return true; // other node found directly in adjList, distance is 1
}
for (Node nd2 : nd.adjList) {
if (nd2==other) {
return true; // other node found at distance 2
}
}
}
// reached end of adjList + adjLists of all nodes therein
// with no link less than 2 found
return false;
}
With the two functions, we can construct the square graph from the original.
The whole code, with an example graph, can be found at ideone.com/Med4AG.
Kruskal's algorithm starts by sorting the edges by weight. If we know the range of possible values beforehand, we can use a non-comparison-based sorting algorithm, such as counting sort. These algorithms have an average time complexity of O(n+k), where n is the number of elements and k is the range of values. Thus the complexity of the sort would become O(|E| + |V|) instead of the O(|E| log |E|) of a comparison-based sort.
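A minimal counting-sort sketch for non-negative integer weights in a known range 0..k (the weights below are the ones appearing in the example graph; the bound 100 is an assumed range):

```java
import java.util.Arrays;

public class CountingSortDemo {
    // Sorts non-negative integers in the range 0..k in O(n + k) time.
    static int[] countingSort(int[] xs, int k) {
        int[] count = new int[k + 1];
        for (int x : xs) count[x]++;         // O(n): tally each value
        int[] sorted = new int[xs.length];
        int pos = 0;
        for (int v = 0; v <= k; v++)         // O(k): emit the values in order
            for (int c = 0; c < count[v]; c++)
                sorted[pos++] = v;
        return sorted;
    }

    public static void main(String[] args) {
        int[] weights = {9, 75, 95, 19, 42, 51, 31, 42};
        System.out.println(Arrays.toString(countingSort(weights, 100)));
        // [9, 19, 31, 42, 42, 51, 75, 95]
    }
}
```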
We start from node 0 and put it in a stack.
Stack: 0
Visited: -
Step 1: remove node from stack and visit it
Stack: -
Visited: 0
Step 2: add adjacent nodes to the stack, if they are not visited yet, or not already in the stack.
Stack: 2, 1
Visited: 0
Step 1: remove node from stack and visit it
Stack: 1
Visited: 0, 2
Step 2: add adjacent (non-visited & not already in the stack) nodes to the stack
Stack: 4, 3, 1
Visited: 0, 2
Step 1: remove node from stack and visit it
Stack: 3, 1
Visited: 0, 2, 4
Step 2: for this node, we can't add any neighbours to the stack, because 2 is already visited and 1 and 3 are in the stack. So we move on to the next element in the stack.
Step 1: remove node from stack and visit it
Stack: 1
Visited: 0, 2, 4, 3
Step 2: all neighbours of 3 are already visited or in the stack, so we move on to the next element in the stack.
Step 1: remove node from stack and visit it
Stack: -
Visited: 0, 2, 4, 3, 1
Step 2: all neighbours of 1 are already visited.
As the stack is now empty, we have visited the whole graph in the order 0, 2, 4, 3, 1.
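The walkthrough can be reproduced with an iterative DFS sketch. The adjacency lists below are reconstructed from the neighbours mentioned in the walkthrough; pushing neighbours in increasing order leaves the largest on top of the stack, matching the visit order above.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DfsTrace {
    // Adjacency lists reconstructed from the walkthrough's graph.
    static int[][] adj = {
        {1, 2},        // 0
        {0, 2, 3, 4},  // 1
        {0, 1, 3, 4},  // 2
        {1, 2, 4},     // 3
        {1, 2, 3}      // 4
    };

    static List<Integer> dfs(int start) {
        List<Integer> order = new ArrayList<>();
        Deque<Integer> stack = new ArrayDeque<>();
        Set<Integer> seen = new HashSet<>(); // visited or already on the stack
        stack.push(start);
        seen.add(start);
        while (!stack.isEmpty()) {
            int node = stack.pop();  // step 1: remove node from stack and visit it
            order.add(node);
            for (int nb : adj[node]) // step 2: push neighbours that are neither
                if (seen.add(nb))    // visited nor already on the stack
                    stack.push(nb);
        }
        return order;
    }

    public static void main(String[] args) {
        System.out.println(dfs(0)); // [0, 2, 4, 3, 1]
    }
}
```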
We start again from node 0. Put it in the queue:
Queue: 0
Visited: -
Step 1: remove node from queue and visit it
Queue: -
Visited: 0
Step 2: add adjacent nodes to queue (if they are not visited yet)
Queue: 1, 2
Visited: 0
Step 1: remove first node from queue and visit it
Queue: 2
Visited: 0, 1
Step 2: add adjacent nodes to queue (if they are not visited yet)
Queue: 2, 3, 4
Visited: 0, 1
Now we have all the nodes in the queue. Next we visit 2, but all of its neighbours (1, 3, 4) are either already in the queue or visited, so we don't add anything to the queue; we just remove 2 and visit it. The same applies to 3 and 4. The visit order becomes 0, 1, 2, 3, 4.
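The BFS walkthrough corresponds to the following sketch, using the same adjacency lists reconstructed from the walkthrough; a queue replaces the stack.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Queue;
import java.util.Set;

public class BfsTrace {
    // Adjacency lists reconstructed from the walkthrough's graph.
    static int[][] adj = {
        {1, 2}, {0, 2, 3, 4}, {0, 1, 3, 4}, {1, 2, 4}, {1, 2, 3}
    };

    static List<Integer> bfs(int start) {
        List<Integer> order = new ArrayList<>();
        Queue<Integer> queue = new ArrayDeque<>();
        Set<Integer> seen = new HashSet<>(); // visited or already queued
        queue.add(start);
        seen.add(start);
        while (!queue.isEmpty()) {
            int node = queue.remove(); // step 1: remove node from queue, visit it
            order.add(node);
            for (int nb : adj[node])   // step 2: enqueue unseen neighbours
                if (seen.add(nb)) queue.add(nb);
        }
        return order;
    }

    public static void main(String[] args) {
        System.out.println(bfs(0)); // [0, 1, 2, 3, 4]
    }
}
```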
We show the result from node 0, but the same procedure applies to any node.
for all nodes in graph:
put node in Q
mark node as not visited
mark distance to node as INF
mark distance to node 0 as 0
The queue Q now contains 0, 1, 2, 3, 4. The other nodes have distance INF to node 0, and node 0 has distance 0 to itself.
We start by visiting the item with smallest distance in Q: node 0 with value 0.
pop 0 from Q
mark 0 as visited
for all unvisited nodes in 0's adjacency list:
first we have node 1.
calculate distance to 1 that goes through 0.
distance_through_0 = shortest path to 0 + weight of the edge from 0 to 1
= 0 + 9
= 9
compare old distance to node 1 to distance_through_0
= compare INF to 9
9 is smaller, so we update the shortest path to node 1 as 9.
next we have node 2.
calculate distance to 2 that goes through 0.
distance_through_0 = shortest path to 0 + weight of the edge from 0 to 2
= 0 + 75
= 75
compare old distance to node 2 to distance_through_0
= compare INF to 75
75 is smaller, so we update the shortest path to node 2 as 75.
After the first round, we have one visited node (node 0), and two nodes (1 and 2) have gotten updated values. Q contains 1,2,3,4.
We continue by visiting the item with smallest distance in Q: node 1 with distance 9.
pop 1 from Q
mark 1 as visited
for all unvisited nodes in 1's adjacency list:
first we have node 2.
calculate distance to 2 that goes through 1.
distance_through_1 = shortest path to 1 + weight of the edge from 1 to 2
= 9 + 95
= 104
compare old distance to node 2 to distance_through_1
= compare 75 to 104
104 is bigger, so we keep the old smallest distance to node 2, 75.
next we have node 3.
calculate distance to 3 that goes through 1.
distance_through_1 = shortest path to 1 + weight of the edge from 1 to 3
= 9 + 19
= 28
compare old distance to node 3 to distance_through_1
= compare INF to 28
28 is smaller, so we update the shortest path to node 3 as 28.
next we have node 4.
calculate distance to 4 that goes through 1.
distance_through_1 = shortest path to 1 + weight of the edge from 1 to 4
= 9 + 42
= 51
compare old distance to node 4 to distance_through_1
= compare INF to 51
51 is smaller, so we update the shortest path to node 4 as 51.
After second round, we have two visited nodes (0, 1), and Q contains the nodes 2 (distance 75), 3 (distance 28) and 4 (distance 51).
For third round, we visit the node with the shortest distance, that is, 3.
pop 3 from Q
mark 3 as visited
for all unvisited nodes in 3's adjacency list:
first we have node 2.
calculate distance to 2 that goes through 3.
distance_through_3 = shortest path to 3 + weight of the edge from 3 to 2
= 28 + 51
= 79
compare old distance to node 2 to distance_through_3
= compare 75 to 79
79 is bigger, so we keep the old smallest distance to node 2.
next we have node 4.
calculate distance to 4 that goes through 3.
distance_through_3 = shortest path to 3 + weight of the edge from 3 to 4
= 28 + 31
= 59
compare old distance to node 4 to distance_through_3
= compare 51 to 59
59 is bigger, so we keep the old smallest distance to node 4.
After the third round, we have three visited nodes (0, 1, 3), and Q contains the nodes 2 and 4. The distances didn't change from the second round.
For fourth round, we visit the node with the shortest distance: 4 with distance 51.
pop 4 from Q
mark 4 as visited
for all unvisited nodes in 4's adjacency list:
first we have node 2.
calculate distance to 2 that goes through 4.
distance_through_4 = shortest path to 4 + weight of the edge from 4 to 2
= 51 + 42
= 93
compare old distance to node 2 to distance_through_4
= compare 75 to 93
93 is bigger, so we keep the old smallest distance to node 2.
After the fourth round, we have four visited nodes (0, 1, 3, 4), and Q contains only node 2. The distances didn't change during this round.
For the final round, we pop 2 from the queue, and since all its neighbours have been visited, we can't do any more steps.
The final distances from the node 0 are the following:
0-->1: 9, path 0->1
0-->2: 75, path 0->2
0-->3: 28, path 0->1->3
0-->4: 51, path 0->1->4
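The rounds above can be reproduced with a minimal Dijkstra sketch over the example graph; the weight matrix below is reconstructed from the edge weights mentioned in the walkthrough (0 means "no edge").

```java
import java.util.Arrays;

public class DijkstraTrace {
    static final int INF = Integer.MAX_VALUE;

    // Shortest distances from node 0 in the example graph.
    static int[] distances() {
        int[][] edges = {{0,1,9}, {0,2,75}, {1,2,95}, {1,3,19},
                         {1,4,42}, {2,3,51}, {2,4,42}, {3,4,31}};
        int[][] w = new int[5][5];
        for (int[] e : edges) { w[e[0]][e[1]] = e[2]; w[e[1]][e[0]] = e[2]; }

        int[] dist = new int[5];
        Arrays.fill(dist, INF);
        dist[0] = 0;
        boolean[] visited = new boolean[5];
        for (int round = 0; round < 5; round++) {
            int u = -1; // pick the unvisited node with the smallest distance
            for (int v = 0; v < 5; v++)
                if (!visited[v] && (u == -1 || dist[v] < dist[u])) u = v;
            visited[u] = true;
            for (int v = 0; v < 5; v++) // relax the edges out of u
                if (w[u][v] > 0 && !visited[v] && dist[u] != INF
                        && dist[u] + w[u][v] < dist[v])
                    dist[v] = dist[u] + w[u][v];
        }
        return dist;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(distances())); // [0, 9, 75, 28, 51]
    }
}
```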
We start from an arbitrary node, 3.
V = {0, 1, 2, 3, 4}
V_new = {3}
E_new = {}
Next we choose an edge {u, v} with minimal weight such that u is in V_new and v is not. The edge that fulfils the conditions is {3,1}. We add 1 to V_new and {3,1} to E_new.
V = {0, 1, 2, 3, 4}
V_new = {3, 1}
E_new = {{3,1}}
We repeat the procedure. The smallest edge, with u in V_new and v not, is {1,0}. We add 0 to V_new and {1,0} to E_new.
V = {0, 1, 2, 3, 4}
V_new = {3, 1, 0}
E_new = {{3,1}, {1,0}}
For the third round, the smallest edge that fulfils the conditions is {3,4}. We add 4 to V_new and {3,4} to E_new.
V = {0, 1, 2, 3, 4}
V_new = {3, 1, 0, 4}
E_new = {{3,1}, {1,0}, {3,4}}
In the fourth round, we get the edge {3,2}. We add 2 to V_new and {3,2} to E_new.
V = {0, 1, 2, 3, 4}
V_new = {3, 1, 0, 4, 2}
E_new = {{3,1}, {1,0}, {3,4}, {3,2}}
Now V_new contains all the nodes, so the algorithm stops. The MST looks like this:
____3____
/ | \
/(19) |(31) \(51)
1 4 2
/
/(9)
0
For an animated version, you can run Prim's algorithm here: visualgo.net/mst.html.
We start by sorting the edges by increasing weight, and initialise E_new as an empty set.
E_new = {}
The smallest edge is {0,1} with weight 9. We add it to E_new.
E_new = {{0,1}}
The next smallest edge is {1,3} with weight 19. It doesn't form a cycle with the edge already in E_new, so we add it there.
E_new = {{0,1}, {1,3}}
The third smallest edge is {3,4} with weight 31. It doesn't form a cycle with any of the edges in E_new, so we add it there.
E_new = {{0,1}, {1,3}, {3,4}}
The fourth smallest edge is {1,4} with weight 42. However, E_new already contains the edges {1,3} and {3,4}, and adding {1,4} would create a cycle. Thus we skip this edge and move on to the next smallest one.
The fifth smallest edge is {2,3} with weight 51. It doesn't form a cycle with any of the edges in E_new, so we add it there.
E_new = {{0,1}, {1,3}, {3,4}, {2,3}}
Now we have found 4 edges, which connect the 5 vertices of the graph, so E_new contains the MST. An unoptimised version of the algorithm would go through the remaining edges and discard each of them, because each would form a cycle with some of the edges in E_new.
For an animated version, you can run Kruskal's algorithm here: visualgo.net/mst.html.
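The walkthrough can be sketched with a union-find based Kruskal. This is a minimal version: the edge list contains only the five edges mentioned above, and the union-find does no path compression.

```java
import java.util.Arrays;

public class KruskalTrace {
    static int[] parent;

    static int find(int x) { // root of x's component
        while (parent[x] != x) x = parent[x];
        return x;
    }

    // Total MST weight: process edges by increasing weight, keeping an edge
    // only if its endpoints are in different components (i.e. no cycle).
    static int mstWeight(int n, int[][] edges) {
        Arrays.sort(edges, (a, b) -> a[2] - b[2]);
        parent = new int[n];
        for (int i = 0; i < n; i++) parent[i] = i;
        int total = 0;
        for (int[] e : edges) {
            int ru = find(e[0]), rv = find(e[1]);
            if (ru != rv) {    // no cycle: add the edge to E_new
                parent[ru] = rv;
                total += e[2];
            }
        }
        return total;
    }

    public static void main(String[] args) {
        int[][] edges = {{0,1,9}, {1,3,19}, {3,4,31}, {1,4,42}, {2,3,51}};
        System.out.println(mstWeight(5, edges)); // 9 + 19 + 31 + 51 = 110
    }
}
```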
In chaining, instead of an array of elements, we have an array of linked lists. To add an element, we calculate its hash and insert it into the list at that index.
Start with an empty table and 9 keys to insert.
50; 28; 19; 15; 20; 33; 12; 17; 91
[0] Nil
[1] Nil
[2] Nil
[3] Nil
[4] Nil
[5] Nil
[6] Nil
[7] Nil
[8] Nil
50 is the first element to insert. We calculate the hash function: h(50) = 50 mod 9 = 5, and insert 50 into the list at index 5.
The next element is 28. We calculate the hash function again: h(28) = 28 mod 9 = 1, and insert 28 into the list at index 1.
After 2 insertions, our table looks like this.
19; 15; 20; 33; 12; 17; 91
[0] Nil
[1] Cons 28 Nil
[2] Nil
[3] Nil
[4] Nil
[5] Cons 50 Nil
[6] Nil
[7] Nil
[8] Nil
The next element is 19. Now h(19) = 1, and there is already a non-empty list at index 1. We append 19 to that linked list:
15; 20; 33; 12; 17; 91
[0] Nil
[1] Cons 28 (Cons 19 Nil)
[2] Nil
[3] Nil
[4] Nil
[5] Cons 50 Nil
[6] Nil
[7] Nil
[8] Nil
The next elements to insert are 15 and 20. h(15) = 6 and h(20) = 2; both indices currently hold an empty list.
33; 12; 17; 91
[0] Nil
[1] Cons 28 (Cons 19 Nil)
[2] Cons 20 Nil
[3] Nil
[4] Nil
[5] Cons 50 Nil
[6] Cons 15 Nil
[7] Nil
[8] Nil
We move on to 33. h(33) = 6
, which has a non-empty list. Appending 33 to the list, we get the following:
12; 17; 91
[0] Nil
[1] Cons 28 (Cons 19 Nil)
[2] Cons 20 Nil
[3] Nil
[4] Nil
[5] Cons 50 Nil
[6] Cons 15 (Cons 33 Nil)
[7] Nil
[8] Nil
The final items are 12, 17 and 91. h(12) = 3, h(17) = 8 and h(91) = 1. No clashes for the first two, but 91 hashes to index 1, so we append it to the list of the other items that hash there.
The final result is:
[0] Nil
[1] Cons 28 (Cons 19 (Cons 91 Nil))
[2] Cons 20 Nil
[3] Cons 12 Nil
[4] Nil
[5] Cons 50 Nil
[6] Cons 15 (Cons 33 Nil)
[7] Nil
[8] Cons 17 Nil
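The whole insertion sequence can be reproduced with a minimal chaining sketch, using Java lists for the chains and h(key) = key mod 9:

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class ChainingTable {
    // Inserts the keys into a table of linked lists, h(key) = key mod slots.
    static List<List<Integer>> build(int[] keys, int slots) {
        List<List<Integer>> table = new ArrayList<>();
        for (int i = 0; i < slots; i++) table.add(new LinkedList<>());
        for (int key : keys)
            table.get(key % slots).add(key); // append to the chain on collision
        return table;
    }

    public static void main(String[] args) {
        int[] keys = {50, 28, 19, 15, 20, 33, 12, 17, 91};
        List<List<Integer>> t = build(keys, 9);
        for (int i = 0; i < 9; i++)
            System.out.println("[" + i + "] " + t.get(i));
    }
}
```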
We assume this is a max heap; a min heap works the same way.
Start with inserting 4 to an empty tree, so it becomes the root. (X) marks the bottom of the heap, where we will insert the next element.
to insert: 9, 2, 5, 1, 10, 6
_ 4 _
/ \
(X)
Next we insert 9 to the bottom of the heap. (Next place to insert becomes the root's right child.)
to insert: 2, 5, 1, 10, 6
_ 4 _
/ \
9 (X)
After insertion, the heap property does not hold: 9 is bigger than its parent. We bubble up 9 by swapping it with its parent. Since we only have two elements in the tree, we are satisfied with this one swap. 9 has become the new root and the max element of the tree.
to insert: 2, 5, 1, 10, 6
_ 9 _
/ \
4 (X)
Next we insert 2 to the last position.
to insert: 5, 1, 10, 6
_ 9 _
/ \
4 2
/
(X)
No changes needed, continue by inserting 5. This breaks the heap property again:
to insert: 1, 10, 6
_ 9 _
/ \
4 2
/ \
5 (X)
We bubble up 5 by swapping it with its parent. This one swap is enough to restore the heap property:
to insert: 1, 10, 6
_ 9 _
/ \
5 2
/ \
4 (X)
We insert 1. Heap property is unbroken:
to insert: 10, 6
_ 9 _
/ \
5 2
/ \ /
4 1 (X)
Next we insert 10. Now we have a big element in the bottom, so we must bubble 10 up.
to insert: 6
_ 9 _
/ \
5 2
/ \ / \
4 1 10 (X)
First step, swap 10 with its parent node, 2.
to insert: 6
_ 9 _
/ \
5 10
/ \ / \
4 1 2 (X)
This is not enough: we compare 10 with its parent in the new position, and see that it is still bigger than its parent, 9. We do another swap:
to insert: 6
_10_
/ \
5 9
/ \ / \
4 1 2 (X)
Now 10 has bubbled up as much as possible, becoming the new root. We know that it is the biggest element in the heap. We don't need to compare anything in the left child: since 9 was the old root, we know that everything in the left child must be smaller than 9, so there is no possibility that a bigger element would be found there.
Finally we insert 6 to the bottom of the heap. It is smaller than its parent, so we don't need to do any more changes.
to insert: -
_10_
/ \
5 9
/ \ / \
4 1 2 6
/
(X)
For an animated version, you can run the heap here: visualgo.net/heap.html.
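The insertions above can be reproduced with a minimal array-based max heap: the array stores the tree in level order, and index (i-1)/2 is the parent of index i.

```java
import java.util.Arrays;

public class MaxHeapInsert {
    static int[] heap = new int[16];
    static int size = 0;

    // Insert at the bottom of the heap, then bubble up
    // while the value is bigger than its parent.
    static void insert(int v) {
        int i = size++;
        heap[i] = v;
        while (i > 0 && heap[i] > heap[(i - 1) / 2]) {
            int tmp = heap[i];
            heap[i] = heap[(i - 1) / 2];
            heap[(i - 1) / 2] = tmp;
            i = (i - 1) / 2; // continue from the parent's position
        }
    }

    static int[] buildHeap(int[] values) {
        size = 0;
        for (int v : values) insert(v);
        return Arrays.copyOf(heap, size);
    }

    public static void main(String[] args) {
        // Level order of the final tree: 10 / 5 9 / 4 1 2 6
        System.out.println(Arrays.toString(buildHeap(new int[]{4, 9, 2, 5, 1, 10, 6})));
    }
}
```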
Let us also assume this is a max heap.
For a long version with very detailed walkthrough, see here.
inserted: 4
to insert: 9, 2, 5, 1, 10, 6
4
/ \
∅ ∅
After inserting 9:
inserted: 4, 9
to insert: 2, 5, 1, 10, 6
9
/ \
4 ∅
/ \
∅ ∅
After inserting 2:
inserted: 4, 9, 2
to insert: 5, 1, 10, 6
_9_
/ \
4 2
/ \ / \
∅ ∅ ∅ ∅
After inserting 5:
inserted: 4, 9, 2, 5
to insert: 1, 10, 6
_ 9 _
/ \
4 5
/ \ / \
∅ ∅ 2 ∅
/ \
∅ ∅
After inserting 1:
inserted: 4, 9, 2, 5, 1
to insert: 10, 6
_ 9 _
/ \
5 4
/ \ / \
2 1 ∅ ∅
/\ /\
∅ ∅ ∅ ∅
After inserting 10:
inserted: 4, 9, 2, 5, 1, 10
to insert: 6
___ 10 ___
/ \
_ 9 _ ∅
/ \
5 4
/ \ / \
2 1 ∅ ∅
/\ /\
∅ ∅ ∅ ∅
After inserting 6:
inserted: 4, 9, 2, 5, 1, 10, 6
to insert: -
___ 10 ___
/ \
_ 9 _ 6
/ \ / \
5 4 ∅ ∅
/ \ / \
2 1 ∅ ∅
/\ /\
∅ ∅ ∅ ∅
7 8 9 1 2 3
We will find the minimum at the rotation point, the index i where arr[i] > arr[i+1]: the element arr[i+1] is the minimum, since the list was originally sorted. We can use a modified version of binary search. Instead of searching for a certain key, we use the condition that arr[low] is larger than arr[high], and check the midpoint to decide in which direction to narrow the search.
Below is an implementation in Java. (For the line about calculating mid
, see here.)
public int findMin(int[] arr) {
int low = 0;
int high = arr.length-1;
while (arr[low] > arr[high]) {
int mid = low + ((high - low) / 2);
if (arr[mid] > arr[high]) {
low = mid+1; // rotation point is between mid and high
} else {
high = mid; // rotation point is between low and mid
}
}
return low;
}
For this, we can use the previously defined O(log n) algorithm to find the index of the minimum, which splits our list into two sorted parts. Then we compare the key to be searched to the last element of the rotated list: if the key is bigger, it lies in the part of the originally sorted list that has wrapped around to the indices 0..minIndex-1. Otherwise the key lies in the part minIndex..last.
public int findKey(int[] arr, int key) {
int minIndex = findMin(arr); // O(log n)
if (key > arr[(arr.length-1)]) { // O(1)
return binarySearch(arr, key, 0, minIndex-1); // O(log n)
} else {
return binarySearch(arr, key, minIndex, arr.length-1); // O(log n)
}
}
Full implementation and tests for this and previous exercise can be found in ideone.com/ykCPxL.