Exercise 1, part 1, solution 1

Perform a postorder traversal of the tree, swapping left and right subtrees.

Time complexity analysis

We do a constant amount of work per tree node, so the algorithm is linear in the size of the tree.
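
One way to make the in-place description concrete is to use mutable subtree references. The MTree type and the helper functions below are illustrative additions (the exercise does not fix a representation); the point is that the traversal is postorder and does constant work (one swap) per node:

```haskell
import Control.Monad.ST
import Data.STRef

-- A tree whose subtree pointers are mutable (illustrative type).
data MTree s a = MLeaf | MNode (STRef s (MTree s a)) a (STRef s (MTree s a))

-- Postorder traversal: first reverse both subtrees, then swap them.
reverseM :: MTree s a -> ST s ()
reverseM MLeaf = pure ()
reverseM (MNode lRef _ rRef) = do
  l <- readSTRef lRef
  r <- readSTRef rRef
  reverseM l
  reverseM r
  writeSTRef lRef r   -- the swap: constant work per node
  writeSTRef rRef l

-- Helpers for building and reading trees.
node :: MTree s a -> a -> MTree s a -> ST s (MTree s a)
node l x r = MNode <$> newSTRef l <*> pure x <*> newSTRef r

inorder :: MTree s a -> ST s [a]
inorder MLeaf = pure []
inorder (MNode lRef x rRef) = do
  ls <- inorder =<< readSTRef lRef
  rs <- inorder =<< readSTRef rRef
  pure (ls ++ [x] ++ rs)

-- Reversing the tree containing 1, 2, 3 in symmetric order.
example :: [Int]
example = runST $ do
  l <- node MLeaf 1 MLeaf
  r <- node MLeaf 3 MLeaf
  t <- node l 2 r
  reverseM t
  inorder t
```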

Correctness proof (sketch)

We want to prove that the resulting tree is the reverse of the input. We do this by induction over the structure of the tree.

The empty tree case: The algorithm does nothing, which is fine, because an empty tree is its own reverse.

The node case: Assume that we start with a node ⟨l, x, r⟩, where l is the left subtree, x is the node element, and r is the right subtree. We know that all elements in l are smaller than x, and that x is smaller than all elements in r.

We recursively reverse l and r, ending up with l′ and r′, respectively. By the inductive hypothesis we know that l′ and r′ are reversals of l and r, respectively. The final tree is ⟨r′, x, l′⟩, which is sorted in reverse order because all elements in r′ are larger than x, all elements in l′ are smaller than x, and l′ and r′ are themselves sorted in reverse order (being reversals of sorted trees).

Exercise 1, part 1, solution 2

Haskell code:

  data BinaryTree a = Empty | Node (BinaryTree a) a (BinaryTree a)

  reverseTree :: BinaryTree a -> BinaryTree a
  reverseTree Empty        = Empty
  reverseTree (Node l x r) = Node (reverseTree r) x (reverseTree l)

Time complexity analysis

As before: a constant amount of work per node, so linear in the size of the tree. (This assumes strict evaluation; under lazy evaluation the work is instead performed incrementally, as the result is forced.)

Correctness proof

We want to prove, for all (total, finite) trees t, that

  toList (reverseTree t) = reverse (toList t),

where reverse reverses a list and toList converts a tree into a list:

  toList :: BinaryTree a -> [a]
  toList Empty = []
  toList (Node l x r) = toList l ++ [x] ++ toList r
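
The property can be spot-checked on concrete trees before proving it; the sample tree below is an illustrative addition:

```haskell
data BinaryTree a = Empty | Node (BinaryTree a) a (BinaryTree a)

reverseTree :: BinaryTree a -> BinaryTree a
reverseTree Empty        = Empty
reverseTree (Node l x r) = Node (reverseTree r) x (reverseTree l)

toList :: BinaryTree a -> [a]
toList Empty        = []
toList (Node l x r) = toList l ++ [x] ++ toList r

-- A sample tree containing 1..7 in symmetric order.
sample :: BinaryTree Int
sample = Node (Node (leaf 1) 2 (leaf 3)) 4 (Node (leaf 5) 6 (leaf 7))
  where leaf x = Node Empty x Empty

-- The claimed equation, checked for sample and for the empty tree.
ok :: Bool
ok = toList (reverseTree sample) == reverse (toList sample)
  && null (toList (reverseTree Empty :: BinaryTree Int))
```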

Proof by induction over t:

Case Empty:

  toList (reverseTree Empty)
    =
  toList Empty
    =
  []
    =
  reverse []
    =
  reverse (toList Empty)

Case Node l x r:

  toList (reverseTree (Node l x r))
    =
  toList (Node (reverseTree r) x (reverseTree l))
    =
  toList (reverseTree r) ++ [x] ++ toList (reverseTree l)
    = { Inductive hypotheses. }
  reverse (toList r) ++ [x] ++ reverse (toList l)
    = { Property of reverse. }
  reverse (toList l ++ [x] ++ toList r)
    =
  reverse (toList (Node l x r))

Exercise 1, part 2

We can store one boolean in the tree (not one boolean per tree node), which indicates whether the tree is reversed or not. Other operations (lookup, delete, etc.) need to consult this bit at most once per visited node, so their asymptotic worst-case time complexities do not change.
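
A Haskell sketch of this idea (the FlaggedTree wrapper and the function names are illustrative; the exercise does not fix an interface): reversal only flips the boolean, order-sensitive traversals consult it, and search-tree lookup does not need it at all, because the physical tree stays sorted in ascending order:

```haskell
data BinaryTree a = Empty | Node (BinaryTree a) a (BinaryTree a)

-- One boolean for the whole tree: True means "read the tree back to front".
data FlaggedTree a = FlaggedTree Bool (BinaryTree a)

-- Reversal now takes constant time.
reverseF :: FlaggedTree a -> FlaggedTree a
reverseF (FlaggedTree b t) = FlaggedTree (not b) t

-- An in-order listing consults the flag once, then traverses accordingly.
toListF :: FlaggedTree a -> [a]
toListF (FlaggedTree False t) = go t
  where go Empty        = []
        go (Node l x r) = go l ++ [x] ++ go r
toListF (FlaggedTree True t) = go t
  where go Empty        = []
        go (Node l x r) = go r ++ [x] ++ go l

-- Search-tree membership: the physical tree is kept sorted in ascending
-- order, so lookup can ignore the flag entirely.
memberF :: Ord a => a -> FlaggedTree a -> Bool
memberF x (FlaggedTree _ t) = go t
  where go Empty        = False
        go (Node l y r)
          | x == y    = True
          | x <  y    = go l
          | otherwise = go r

-- Example: the tree containing 1, 2, 3.
sampleF :: FlaggedTree Int
sampleF = FlaggedTree False (Node (Node Empty 1 Empty) 2 (Node Empty 3 Empty))
```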

Exercise 2

Worst-case time complexity

Each pop takes constant time, so multi-pop(n, s) is linear in n (Θ(n)) in the worst case, which occurs when the stack s contains at least n elements.
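
Modelling stacks as Haskell lists (an illustrative choice), multi-pop can be sketched like this:

```haskell
type Stack a = [a]

-- multiPop n s pops min(n, |s|) elements, one constant-time pop at a time,
-- so a call takes Θ(min(n, |s|)) steps: Θ(n) in the worst case.
multiPop :: Int -> Stack a -> ([a], Stack a)
multiPop n s = splitAt n s
```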

Amortised time complexity

The goal is to prove that all operations, including multi-pop, take constant amortised time. The key observation is that we can only multi-pop n elements after we have already pushed n elements.

Assume that the stack ADT only includes the following operations:

* create, which constructs a new, empty stack,
* push, which pushes one element onto a stack, and
* pop, which removes the top element of a non-empty stack.

We can see every operation as being one of those or a multi-pop.

Let us choose the time unit so that create, push and pop each take at most one time unit, and multi-pop(n, s) takes at most 1 + min(n, |s|) time units, where |s| is the size of s.

Let us further assign the following amortised time complexities to the operations:

* create: 1,
* push: 2,
* pop: 1, and
* multi-pop: 2.

We need to prove that the sum of the amortised time complexities for a sequence of operations (for a given stack, starting with its creation) is an upper bound of the sum of the worst-case time complexities:

  ∑_{i=1}^{n} t_i ≤ ∑_{i=1}^{n} a_i,

where the sequence contains n operations, t_i stands for the actual (idealised) time complexity of the i-th operation, and a_i is its amortised time complexity.

The inequality above follows from the following inequality:

  ∑_{i=1}^{n} t_i + s_n ≤ ∑_{i=1}^{n} a_i.

Here s_i is the size of the stack after i operations (or 0 if the stack has not been created yet). We can prove the inequality by induction over n:

Case 0: The left and the right hand sides both evaluate to 0.

Case 1+n: We proceed by case analysis on the (1+n)-th operation:

* create (amortised cost 1): The actual cost is 1 and the stack is empty afterwards, so both sides grow by 1.
* push (amortised cost 2): The actual cost is 1 and the stack size grows by one, so both sides grow by 2.
* pop (amortised cost 1): The actual cost is 1 and the stack size shrinks by one, so the left-hand side does not grow, while the right-hand side grows by 1.
* multi-pop (amortised cost 2), popping k elements: The actual cost is 1 + k and the stack size shrinks by k, so the left-hand side grows by 1, while the right-hand side grows by 2.

In every case the inequality is preserved. We get that all operations take constant amortised time.
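
The induction invariant can be spot-checked by simulation. The Op type, the cost model, and the amortised-cost assignment below (create 1, push 2, pop 1, multi-pop 2) are one standard choice, made here for illustration:

```haskell
-- Operations on a single stack; MultiPop n stands for multi-pop(n, s).
data Op = Create | Push | Pop | MultiPop Int

-- Actual cost of an operation and the resulting stack size: create, push
-- and pop cost 1 time unit, multi-pop(n, s) costs 1 + min(n, |s|).
-- (Pop is assumed to be applied only to non-empty stacks.)
step :: Int -> Op -> (Int, Int)
step _ Create       = (1, 0)
step k Push         = (1, k + 1)
step k Pop          = (1, k - 1)
step k (MultiPop n) = (1 + m, k - m) where m = min n k

-- One choice of amortised costs.
amortised :: Op -> Int
amortised Create       = 1
amortised Push         = 2
amortised Pop          = 1
amortised (MultiPop _) = 2

-- Checks the invariant  sum t_i + s_n <= sum a_i  after every prefix of
-- the operation sequence.
holds :: [Op] -> Bool
holds = go 0 0 0
  where
    go t s a ops = t + s <= a && case ops of
      []     -> True
      o : os -> let (c, s') = step s o
                in go (t + c) s' (a + amortised o) os
```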

Is this the "best possible amortised time complexity of multi-pop" (up to constant factors)? A slightly more refined analysis shows that one can assign multi-pop the amortised time complexity zero whenever at least one element is popped. However, in the worst case we pop nothing: consider the operation sequence consisting of the creation of an empty stack followed by n multi-pops of nothing. This sequence takes 1 + n time units. Furthermore, creation can only be assigned a constant amortised time complexity (because it does not take any input). Hence we cannot, in general, assign multi-pop a "sub-constant" amortised time complexity.

Some notes: