Every week there will be exercises for you to work on. They are completely optional, but they are meant to help you master the course. You can go to the exercise sessions to get help with the exercises.

The exercise sessions are:

  • Mondays 10-12, room EC (EDIT building)

  • Thursdays 8-10, usually in room ML1 (M-huset), but in week 51 (20th December) it will be in EC (EDIT building).

You can find directions to the rooms here.

Generally, you should try to go to one exercise session per week. Of course, you are welcome to come to more if there is space, and it is not compulsory to come.

The week numbers in the exercises indicate which week’s lectures the exercises correspond to. Therefore, the exercise sessions may lag slightly behind the numbering here. Also, the exercises for later weeks may change without notice!

Answers to most of the exercises will appear here, and you can get a zip file of all the solutions here. But please try the exercises before looking at the answers!

Week 1

Introduction

  1. Write a method that reverses an array of type String[]. Then make your method generic so that it works for any type of array.
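
    If you have not written a generic method before, here is a small illustrative example of the syntax (a helper that swaps two elements, not the full solution):

    // Sketch only: the type parameter <T> is declared before the return type,
    // so the same method works for String[], Integer[], and so on.
    static <T> void swap(T[] array, int i, int j) {
        T tmp = array[i];
        array[i] = array[j];
        array[j] = tmp;
    }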

  2. Write a method that interleaves two strings: it should take one character from the first string, then one from the second string, another from the first string and so on. Once one string has no characters left it should carry on with the other string. For example, interleaving the strings "anna" and "patrik" should give the result "apnantarik" (or "paantnraik"). Use s.length() to find the length of a string and s.charAt(i) to find the character at index i.
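
    If you have not used these methods before, here is a tiny illustration (it just prints the characters of a string, one per line; it has nothing to do with interleaving):

    String s = "patrik";
    for (int i = 0; i < s.length(); i++)
        System.out.println(s.charAt(i));   // the character at index i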

  3. Write a method boolean isPalindrome(String string) that checks if a given string is a palindrome (reads the same forwards and backwards). For example, isPalindrome("madamimadam") should return true.

  4. Extend isPalindrome so that it only considers letters (i.e. it ignores punctuation and whitespace) and is case-insensitive. For example, isPalindrome("Madam, I’m Adam!") should return true. If c is a char, you can write Character.isAlphabetic(c) to check if c is a letter, and Character.toUpperCase(c) to convert c to uppercase.

    Hint: To check if two characters are the same in a case-insensitive fashion, you can see if they come out the same when converted to uppercase.

    Hint: Define two indexes into the string, one that starts at the beginning and moves forwards, and the other that starts at the end and moves backwards.
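
    For example, the following snippet (purely illustrative) checks that 'm' and 'M' are the same letter when case is ignored:

    char a = 'm', b = 'M';
    boolean sameLetter = Character.isAlphabetic(a) && Character.isAlphabetic(b)
        && Character.toUpperCase(a) == Character.toUpperCase(b);   // true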

  5. Implement a "guess-the-number" game using binary search: I think of a number between 1 and 10000, your program guesses a number and I reply whether my number is higher or lower. Your program makes a new guess and so on until it guesses the right number. Then extend your program so that I can pick any positive number (that fits within a Java int), not just ones up to 10000.

  6. From Weiss: exercises 1.5, 1.11, 1.12.

Complexity

Note: you will only be able to do most of these exercises after Friday’s lecture.

  1. Arrange the following functions in order of complexity: n⁴, log n, n log n, 4n, 3n³, 5n² + n.

  2. Assume that you have an algorithm that takes 2ⁿ steps on a problem of size n, and that each step takes a microsecond. What is the largest problem size you can solve before the death of the Sun? Assume that the Sun will die in 5 billion years.
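
    Hint: one year is about 3.15 × 10⁷ seconds, so 5 billion years is about 1.6 × 10²³ microseconds.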

  3. Here are the growth rates of several algorithms. Which will run in a reasonable amount of time when n = 50?

    • 10 × 2ⁿ

    • 10000 × log₂ n

    • 100000 × n

    • 10 × n!

  4. Write the following functions in big-O notation:

    • 34x + x²

    • 1x + 2x + 3x + 4x + 5x + 6x + 7x

    • 104000x + 0.005 x² + 103/x³

    • 10 log₂ x + 2 log₁₀ x

    • 2x + 10x

    • (x - 2)(x - 1)x

  5. What is the complexity of adding a single element to an ArrayList? What is the complexity of adding n elements, if the ArrayList contains n elements?

  6. Suppose we want to be able to delete elements from a dynamic array. If we want to delete an arbitrary element, what is the complexity? What if deletion is allowed to change the order of the elements in the array?

  7. This question is about adding an operation to dynamic arrays that deletes the last element of the array.

    1. The last element of a dynamic array can be deleted in O(1) time. How?

    2. Suppose that we do not try to resize the underlying array when deleting the last element. Then the dynamic array can end up with unlimited overhead - for example, we can end up with an array of capacity 1024 but only one element stored in it. How?

    3. To solve this problem, we might decide on the following rule: if the array ever becomes half-empty, resize it to half its current size. Unfortunately, this ruins the efficiency of the design: the time complexity of performing n operations (adds and deletes) is O(n²) in the worst case. Find a sequence of operations that demonstrates this.

    4. (Harder) Tweak the rule in part 3 so that n operations again take O(n) time in total. Persuade yourself that your design works.

    5. (Much harder, extremely optional) Learn about the potential method (not covered in this course, unless we have spare time later) and use it to prove that your design works.
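
    To make the rule in part 3 concrete, here is a rough sketch of a dynamic array of ints that follows it (the class and its names are illustrative, not from the lectures); parts 3 and 4 then ask you to break this design and to fix it:

    class DynamicArray {
        private int[] data = new int[1];
        private int size = 0;

        public void add(int x) {
            if (size == data.length)
                resize(2 * data.length);               // grow: double when full
            data[size++] = x;
        }

        public void removeLast() {
            if (size == 0) throw new java.util.NoSuchElementException();
            size--;
            if (size <= data.length / 2)               // the rule from part 3:
                resize(Math.max(1, data.length / 2));  // halve when half-empty
        }

        private void resize(int newCapacity) {
            int[] copy = new int[newCapacity];
            System.arraycopy(data, 0, copy, 0, size);
            data = copy;
        }
    }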

  8. What complexity do the following algorithms have?

    int result = 0;
    for (int i = 1; i <= n; i *= 2)
       for (int j = 0; j <= i; j++)
           result++;
    int result = 0;
    for (int i = 1; i <= n; i *= 2)
       for (int j = 0; j <= n; j++)
           result++;

    (In the first one, the for-loop condition is j <= i; in the second one it’s j <= n.)

  9. Suppose you have an array of n elements. You can check whether it contains duplicate elements in O(n²) time by looping over every pair of elements. But if the array is sorted, you can do it in O(n) time. How?

  10. What is the time complexity of the following operations?

    • Finding an element in an unsorted array of n elements

    • Finding an element in a sorted array of n elements

    • Adding an element to an unsorted array of n elements, assuming there is an empty space at the end of the array

    • Adding an element to a sorted array of n elements, assuming there is an empty space at the end of the array (the resulting array should also be sorted)

    The moral is that all data structures make tradeoffs—choosing a different representation will make some operations faster but others slower.

  11. From Weiss: some interesting exercises are 2.1, 2.2, 2.3, 2.6, 2.7, 2.10, 2.11, 2.12, 2.14, 2.15, 2.25, 2.31.

  12. What is the complexity of the following code?

    for (int i = 1; i <= n; i *= 2) {
       for (int j = 0; j < n*n; j++)
          for (int k = 0; k <= j; k++) {
             // O(1) work
          }
       for (int j = 0; j < n; j++) {
          // O(1) work
       }
    }
  13. (Not really a question) Read through some of the stories at Accidentally Quadratic and think about why the complexity is quadratic.

Week 2

Sorting

  1. Check out the following visualisations of sorting algorithms:

  2. Sort the sequence 4 6 8 2 9 5 1 7 3 by hand with:

    • Insertion sort

    • Quicksort (picking the first element as the pivot)

    • Quicksort (using the median-of-three pivot)

    • Mergesort

  3. A sorting algorithm is stable if equal elements always remain in the same relative order before and after sorting. Which of our sorting algorithms are stable? Why?

  4. Suppose you have an array of booleans of size n. Give an O(n) algorithm that sorts the array so that all false elements come before all true elements. Your algorithm should use O(1) extra memory.

  5. Write a method that removes all duplicate elements from a Java array in O(n log n) time. Hint: sort the array first (you may use the sorting function from the standard library).
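
    If you have not used the standard library sort before, it is a one-liner; for example:

    int[] xs = {4, 6, 8, 2, 9, 5, 1, 7, 3};
    java.util.Arrays.sort(xs);   // xs is now {1, 2, 3, 4, 5, 6, 7, 8, 9}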

  6. How can you use sorting to check if two words are anagrams of one another?

  7. (Somewhat hard) Design a data structure for an anagram finder. It should support two operations:

    1. Create the data structure given a list of words. This operation should have complexity O(n log n), where n is the number of words (assuming that words have a bounded length).

    2. Given a word, find all its anagrams in the data structure. This operation should have complexity O(m + log n), where m is the number of anagrams found for that word and n is the total number of words in the data structure.

      Hint: part 1 should sort the list of words. But you need to sort them in a particular order so that part 2 can be solved efficiently…

      Hint 2: answer question 6 first. Can you think of a way to sort the list of words so that anagrams always end up next to each other?

  8. Old exam questions: 2012-12 q3, 2012-04 q4.

  9. From Weiss: 7.1-2, 7.15-16, 7.21a-c, 7.22, 7.23, 7.25-7.26 (hard), 7.28, 7.32, 7.43, 7.49, 7.53-7.54, 7.60 (hard).

Complexity of recursive functions

  1. Write a method with the following signature

    void printSubsets(int[] array)

    that prints out all subsets of the input array. Use recursion. What is its complexity? Start by writing a recurrence relation. Assume that printing out a single subset is O(1).

  2. (Hard) Modify your solution to question 1 so that it instead returns an iterator which generates all subsets of an array:

    Iterator<Set<Integer>> subsets(int[] array)

    where Set is the standard Java set interface (java.util.Set).

  3. From Weiss: question 2.26.

  4. This question is about the sorting algorithm stooge sort.

    1. Read about the algorithm at the link above. Convince yourself that it works. Hint: suppose that the input list has 3n elements, and consider the largest n elements of the list. Initially they can be anywhere in the list. Where can they be after the first recursive call? Where can they be after the second recursive call?

    2. Write down a recurrence relation for the algorithm.

    3. (Hard) Solve the recurrence relation to find the complexity of the algorithm.

Stacks and queues

  1. Give an algorithm that removes all comments from a program. Assume that comments are written as (* ... *) and can be nested (i.e. (* a comment with (* another comment *) inside *)). Write your algorithm in natural language or pseudocode if you like. What is the complexity of your algorithm?

  2. Write a program that reads a postfix expression and evaluates it. For example, the postfix expression 1 2 + 3 * evaluates to (1 + 2) × 3 = 9.

  3. Implement a bounded queue class using circular arrays as in the lecture. Then build an unbounded queue class using the size-doubling trick. The implementation of your unbounded queue should use the bounded queue class (i.e. call its methods), rather than modifying the bounded queue code.
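
    As a starting point, the bounded queue's representation might look something like the sketch below (the index conventions are one possible choice, not necessarily the lecture's):

    class BoundedQueue<E> {
        private E[] data;         // fixed-capacity circular array
        private int front = 0;    // index of the first element
        private int rear = -1;    // index of the last element
        private int size = 0;     // number of elements currently stored

        @SuppressWarnings("unchecked")
        public BoundedQueue(int capacity) {
            data = (E[]) new Object[capacity];
        }

        // To do: enqueue, dequeue, isEmpty, isFull, ...
        // Remember that indices wrap around, e.g. rear = (rear + 1) % data.length.
    }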

  4. In the queue implementation in the lecture we record the size of the queue in a field. You might think that we don’t need to do this: the size is simply the number of elements between front and rear, i.e., (rear - front + 1) % capacity. Instead of storing the size, we could compute it using this formula whenever we need to know it. But this doesn’t work. Why?

  5. Given a circular buffer (i.e. the array, and variables front and rear), write down an expression giving the sequence represented by the buffer.

  6. Old exam questions: 2012-08 q1, 2012-12 q5, 2013-04 q5.

  7. From Weiss: 3.21-22, 3.25a, 3.27, 3.28, 3.33.

Week 3

Binary heaps

  1. How would you implement a priority queue using:

    1. a sorted array?

    2. an unsorted array?

    3. a binary search tree?

    What complexity do the resulting operations have? In part 3, give two answers depending on whether the tree is balanced (has height O(log n)) or unbalanced.

  2. A max heap is a heap which operates on the maximum element instead of the minimum. That is, it has operations findMax and deleteMax instead of findMin and deleteMin.

    1. Suppose you have an implementation of a min heap. How would you change it into a max heap?

    2. Java provides a class PriorityQueue<E> that implements a min heap. Define a class that uses PriorityQueue<E> to implement a max heap. Hint: you will need to use a custom Comparator when you initialise the priority queue. Here is a nice explanation about how comparators work.
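
      If you have not used a Comparator before, here is a small, unrelated example (using java.util.Arrays and java.util.Comparator) that sorts strings by length, shortest first:

      String[] words = {"heap", "priority", "queue"};
      Comparator<String> byLength = (s, t) -> Integer.compare(s.length(), t.length());
      Arrays.sort(words, byLength);   // words is now {"heap", "queue", "priority"}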

  3. Insert the following elements in sequence into an empty heap: 6, 8, 4, 7, 2, 3, 9, 1, 5. Draw both the tree and array representations of the heap. Then illustrate building a heap from the same data.

  4. (Harder) Remember from the lectures that a binary tree is complete if all its levels are completely full, except for the bottom level which is filled from left to right. The following diagram illustrates a complete binary tree.

    [diagram: a complete binary tree]

    Write an O(n) function that checks if a binary tree is complete or not.

    Hint: you will need a helper function that calculates these three pieces of information at once:

    1. Whether the tree is complete or not.

    2. The height of the tree.

    3. Whether the bottom level of the tree is full.

  5. Suppose you want a data structure with only the operations insert and findMin. What is the most efficient way to implement it? Be careful with your answer!

  6. Old exam questions: 2013-04 q3, 2013-08 q4, 2012-11 q1, 2013-08 q2.

  7. From Weiss: 6.2a, 6.3, 6.10ab, 6.37b, 6.38.

Leftist heaps

  1. As mentioned in the lectures, we can use a priority queue to implement sorting as follows: start with an empty priority queue, add all elements of the input list in turn, and then repeatedly find and remove the smallest element. Perform this sorting algorithm by hand on the list [3,1,4,2,5] using a leftist heap.
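
    In code, this algorithm looks roughly as follows, here using Java's built-in PriorityQueue instead of a leftist heap (the exercise asks you to carry out the same idea by hand):

    // assumes: import java.util.*;
    static List<Integer> pqSort(List<Integer> input) {
        PriorityQueue<Integer> pq = new PriorityQueue<>();
        for (int x : input)
            pq.add(x);                   // add all elements in turn
        List<Integer> sorted = new ArrayList<>();
        while (!pq.isEmpty())
            sorted.add(pq.remove());     // repeatedly remove the smallest element
        return sorted;
    }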

  2. It’s hard to efficiently implement merge for ordinary binary heaps, even if we represent them as trees rather than arrays. What is the problem?

  3. Leftist heaps don’t try to keep the tree balanced, but just try to stop it from becoming right-heavy. Why can’t we use the same trick to make an efficient BST?

Binary trees, binary search trees and AVL trees

  1. Is the following statement true or false? Why?

    Inserting a set of objects into a binary search tree, you get the same tree regardless of what order you insert the objects in.
  2. How many nulls are there in a binary tree of n nodes? Why?

  3. Try implementing binary search trees in Java.
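
    As a starting point, a minimal skeleton might look something like this (a sketch only; the names are just suggestions):

    class BST<E extends Comparable<E>> {
        private Node root;            // null means the tree is empty

        private class Node {
            E value;
            Node left, right;         // null children are empty subtrees
            Node(E value) { this.value = value; }
        }

        // To do: insert, contains, delete, ...
    }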

  4. Define a recursive function that, given a binary search tree, prints out all its elements in ascending order.

  5. Insert the values 2,1,4,5,9,3,6,7 into:

    1. A binary search tree

    2. An AVL tree

  6. Insert the values 1,2,3,4,5,6,7,8 into:

    1. A binary search tree

    2. An AVL tree

  7. We saw that requiring the tree to be perfectly balanced is too strong an invariant for a balanced BST, because it only allows trees of certain sizes. Alternatively, we could use the invariant that the tree is complete, as we did for binary heaps. But this doesn’t work well. Why not?

  8. Work out an algorithm for doing lazy deletion in a binary search tree (see the lecture slides). Your invariant should probably be: a deleted node can only have deleted children.

  9. Old exam questions: 2012-11 q2, 2012-12 q1, 2013-04 q1, 2012-08 q3, 2011-12 q2, 2012-04 q1,2, 2011-12 q3, 2012-12 q4, 2011-12 q1.

  10. From Weiss: 4.1-5, 4.8, 4.41, 4.9, 4.11-12, 4.16, 4.19, 4.25, 4.35, 4.37, 4.49.

Week 4

2-3 trees, B-trees and AA trees

  1. Repeat last week’s questions 5 and 6 with:

    1. A 2-3 tree

    2. An AA tree

  2. In a B-tree, the bigger the value of k (the node size) we choose, the shallower the tree gets and the less often we have to split nodes. Is there a disadvantage to increasing k?

  3. In 2-3 trees, we split a 4-node into three 2-nodes. We can’t split a 3-node into two 2-nodes, though. Why not?

  4. (not really a question) We skipped over 2-3 tree deletion, which is more complicated than insertion. If you want, you can read about it in these notes: http://www.cs.princeton.edu/~dpw/courses/cos326-12/ass/2-3-trees.pdf.

  5. In AA trees, we only allowed the right child to have the same level as its parent. We could weaken the invariant, so that either child (but not both) can have the same level as its parent. In other words, a 3-node is constructed by a node and either of its children. What happens if you do this? Is there an advantage to doing it? Is there a disadvantage?

  6. (hard) Can you design a version of AA trees that corresponds to 2-3-4 trees instead of 2-3 trees? You will want to represent a 4-node as a node where both children have the same level. (This structure is normally called a red-black tree.)

  7. (very hard) In a 2-3-4 tree, you can use a more efficient insertion algorithm called top-down insertion. In top-down insertion, as you move down the tree looking for the insertion point, whenever you reach a 4-node you split it and absorb it into its parent (note that its parent won’t be a 4-node because it will already have been split). After splitting the leaf, it will be a 2- or 3-node, so there will be room for the inserted value. Thus you avoid having to go up the tree splitting after insertion.

    1. Describe this algorithm in pseudocode using splitting and absorption.

    2. Adapt your red-black tree from question 6 to use this algorithm. Congratulations, you now have the fastest balanced BST!

Week 5

No solutions for this week

Graphs

  1. Implement a Java class for directed graphs using an adjacency-list representation. Nodes should have labels of an arbitrary type (use generics: class Graph<T>), and edges have no labels. You can use a map from node labels to lists of node labels, where each element in a list represents one edge. The class should have the following constructor and methods:

    • Graph() constructor that creates an empty graph

    • void addNode(T label) which adds a node with given label or does nothing if there is already a node with this label

    • void addEdge(T node1, T node2) which adds an edge from node node1 to node2 provided that the two nodes exist

    • int nNodes() which returns the number of nodes

    • int nEdges() which returns the number of edges

    • Iterable<T> nodes() which returns an iterable with all nodes

    • Iterable<T> neighboursOf(T node) which returns an iterable with all nodes adjacent to node
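
    As a starting point, a sketch of the representation and of addNode might look like this (the field name and details are illustrative):

    import java.util.*;

    class Graph<T> {
        // Adjacency-list representation: each node label maps to the list of
        // labels that its outgoing edges point to.
        private Map<T, List<T>> edges = new HashMap<>();

        public Graph() {}             // an empty graph has no nodes and no edges

        public void addNode(T label) {
            edges.putIfAbsent(label, new ArrayList<>());   // no effect if already present
        }

        // To do: addEdge, nNodes, nEdges, nodes, neighboursOf
    }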

  2. Add to the Java implementation from question 1 a method isCyclic which determines whether the graph is cyclic (contains a cycle) and returns a boolean value.

  3. Extend the class in question 1 to a class UndirectedGraph<T> which represents undirected graphs by overriding addEdge and nEdges.

  4. Add a method to the class in question 1 that transposes the graph.

  5. Old exam questions: 2012-12 q2, 2012-08 q4, 2013-08 q5, 2012-04 q6, 2011-12 q4, 2012-12 q6, 2011-12 q6.

  6. From Weiss: 9.1-2, 9.5, 9.7a, 9.10b, 9.15-16 (ignore Kruskal’s algorithm), 9.20, 9.26, 9.31, 9.38, 9.41-42, 9.44-45, 9.51-53.

Week 6

Linked lists

  1. What complexity does inserting an element at the front of a singly linked list of n elements have?

  2. What complexity does inserting an element at the back of a singly linked list of n elements have?

  3. Try implementing circular linked lists in your favourite language.
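
    If you pick Java, a minimal skeleton might look something like this (a sketch; one common convention is to store a reference to the last node, whose next field points back to the first node):

    class CircularList<E> {
        private Node last;            // null means the list is empty;
                                      // otherwise last.next is the first node
        private class Node {
            E value;
            Node next;
            Node(E value) { this.value = value; }
        }

        // To do: insert, delete, iterate, ...
    }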

  4. Old exam questions: 2012-08 q5, 2012-04 q3, 2013-04 q6.

  5. From Weiss: 3.2, 3.6, 3.11, 3.15, 3.29, 3.31-32, 3.34.

Skip lists

  1. Insert the values 2,1,4,5,9,3,6,7 into:

    1. A probabilistic skip list (you can use this link to flip a virtual coin).

    2. A deterministic skip list.

  2. 2-3 tree insertion works by splitting 4-nodes; deterministic skip list insertion works by raising the level of nodes that break the invariant. But they are really the same thing. Can you work out in detail the connection between 2-3 trees and deterministic skip lists?
    Start by making sure you understand how to translate a 2-3 tree to a deterministic skip list, as illustrated in the slides. Then take a small 2-3 tree and do some insertions that cause nodes to split. Translate your original tree into a deterministic skip list and repeat the insertions. The process should be very similar. What is going on?

  3. We can view any deterministic skip list as a 2-3 tree. Likewise, we should be able to view an arbitrary skip list as a tree of some kind (not a 2-3 tree though).
    What sort of tree corresponds to a skip list? It will be a pretty weird one! Can you adapt the skip list insertion/deletion algorithms to work on this kind of tree?

  4. (hard) Work out how to use skip lists to implement a sequence data type. A sequence is like an array, in that you can get or set the element at any index i. Unlike an array, you can also efficiently insert a new element at any position, or delete an arbitrary element (given its index).
    Hint: you will probably want to augment each link in the skip list so that it records how many nodes it skips over.

  5. Old exam questions: 2011-12 q3 with skip lists.

  6. From Weiss: 10.37, 10.39.

Hash tables

  1. What are the advantages and disadvantages of hash tables compared to binary search trees?

  2. Write down a suitable invariant for:

    1. Hash tables with linear chaining.

    2. Hash tables with linear probing.

  3. Remember that in linear probing we have to use lazy deletion. Construct an example where this is necessary, i.e., simply deleting the item gives the wrong result.

  4. When using linear probing, we might try to avoid lazy deletion as follows. To delete an element x, we first find the element, then find the last element in that cluster (call it y), and overwrite x with y in the array (and delete y).
    The idea is that this keeps the cluster in one piece, thus not breaking the invariant. However, it doesn’t work. Find an example where it goes wrong.
    Can you think of a way to fix this method, without going back to lazy deletion? It may help to think about the hash table’s invariant.

  5. In early versions of the PHP programming language, the hash function used for strings was the length of the string. Can you think of any problems this would cause?
    After thinking a bit, read the answer here.

  6. Old exam questions: 2012-08 q2, 2011-12 q1, 2012-11 q3, 2013-04 q4.

  7. From Weiss: 5.1a-b, 5.17.