In this memo I outline how the axiom of parametricity can be given computational rules. The symmetries (automorphisms) of the hypercubic structure generated by parametricity are essential to interpret nested parametricity.

# Parametricity as computation of terms

The abstraction theorem says that if a term `A` has type `B`, then `A` satisfies the relational interpretation of the type `B` (written here `B!`). Moreover, the proof of that fact is given by `A!`.

``Γ ⊢ A : B ⇒ Γ! ⊢ A! : B! A``

Let’s take the case of a function `f` of type `A → B`. For each concrete `f`, `f!` is a proof that `f` satisfies the relational interpretation of the function type:

``f! : (A → B)! f``

or after expansion of the definition of `!` on arrow types:

``f! :  ∀x:A.  A! x → B! (f x)``
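
As an illustration only, here is a minimal Python sketch of the arrow case, under a drastic simplifying assumption: `A!` is modelled as a decidable predicate (a boolean-valued function) and the check is run over a finite sample, whereas in the type theory proofs are first-class terms. The names `arrow_bang`, `even`, and `sample` are mine, not the memo’s.

```python
# Sketch: model a unary parametricity predicate A! as a Python function
# A -> bool, erasing proof terms to booleans (an assumption made here
# for illustration; the memo's proofs are terms, not booleans).

def arrow_bang(A_bang, B_bang, domain):
    """(A -> B)! f  =  forall x:A. A! x -> B! (f x), checked on `domain`."""
    return lambda f: all(B_bang(f(x)) for x in domain if A_bang(x))

# Example: take A! x = "x is even". Adding 2 preserves the predicate,
# adding 1 does not.
even = lambda n: n % 2 == 0
sample = range(10)
assert arrow_bang(even, even, sample)(lambda n: n + 2)
assert not arrow_bang(even, even, sample)(lambda n: n + 1)
```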

Then (considering only normal forms), `f` is either a variable (and `f!` is a normal form) or a lambda expression (`f ≡ λx:A. b`), in which case a suitable proof of the above statement is given by

``(λx:A. b)!``

which by definition reduces as follows:

``(λx:A. b)! ⟶ λx:A. λy : A!x. b!ˣ``

Note the new notation: as exponent to the operator `!`, I write the variable `x`, because inside `b` we already have `y` in scope, which is a proof that `x` satisfies parametricity. Hence there is no need to wait for a concrete `x` to interpret it: we have

``x!ˣ ⟶ y``

For example, the proof that the identity function is parametric is the constant function:

```
(λx:A. x)! ⟶ λx:A. λy : A!x. x!ˣ
           ⟶ λx:A. λy : A!x. y
```
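
The two reduction rules used above (the lambda rule and `x!ˣ ⟶ y`) can be prototyped as a toy rewriter on a tiny term AST. This is my own hypothetical encoding, not the memo’s: the proof variable `y` bound by the lambda rule is named `x_bang`, and `t!` on an unmarked variable is kept as a stuck `Bang` node.

```python
# Toy rewriter for !, covering only the variable and lambda rules shown
# in the memo; application and other forms are out of scope here.
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    var: str
    body: object

@dataclass(frozen=True)
class Bang:
    body: object  # t! left as a normal form (e.g. on a free variable)

def bang(t, marked=()):
    """Apply !; `marked` lists variables x whose proof x!ˣ is in scope."""
    if isinstance(t, Var):
        if t.name in marked:
            return Var(t.name + "_bang")   # x!ˣ ⟶ y (here named x_bang)
        return Bang(t)                     # f! on a variable: normal form
    if isinstance(t, Lam):                 # (λx. b)! ⟶ λx. λy. b!ˣ
        return Lam(t.var,
                   Lam(t.var + "_bang", bang(t.body, marked + (t.var,))))
    raise NotImplementedError(t)

# The identity function's parametricity proof is the constant function:
assert bang(Lam("x", Var("x"))) == Lam("x", Lam("x_bang", Var("x_bang")))
```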

This is all very well, and because similar rules can be given for application and arrow, it would seem that we can reduce `!` in all possible cases, and therefore that we’ve got our computational rules for parametricity. But…

## A hurdle

Consider the following term

``(λx:A. x!)!``

According to our rule above it should reduce to

``λx:A. λy:A!x. x!!ˣ``

But what is the meaning of `x!!ˣ`? In particular, what should we do when we supply a concrete `x` and `y`? It is a priori hard to tell, because the `x` in the exponent cannot be substituted with an expression; it’s just a marker to remember which variables have a corresponding proof of parametricity in context.

If we could commute `!` and `!ˣ` then the problem would go away: we would have the following reduction sequence, which ends in a form that no longer mentions `x`:

``x!!ˣ  ⟶ x!ˣ! ⟶ y!``

However, the commutation step does not preserve types! Indeed, let’s apply the typing rule of `!` to find out. We are given the following types for variables:

```
x : A
y : A! x
```

We have then immediately:

``x! : A! x``

And then:

```
x!!ˣ : (A! x)!ˣ x!
       {- definition of ! on application -}
     : A!!ˣ x y x!
       {- x does not appear in A -}
     : A!! x y x!
```

But:

```
y! : (A! x)! y
     {- definition of ! on application -}
   : A!! x x! y
```

To sum up, the effect of the commutation is to swap the arguments in the type of the expression.

# Permutations

Fortunately, the swap in the type is harmless. Indeed, the above two types are isomorphic. In fact, for every closed type `A` and `x:A`, the relation `A!! x` is symmetric. To prove this, we assume an inhabitant `B` of `A!! x y₁ y₂` and construct a new term `B#` inhabiting `A!! x y₂ y₁`.

Let us first check what this means when `A = ⋆`. Assume we have:

```
Γ, x : ⋆, y₁ : x → ⋆, y₂ : x → ⋆  ⊢ B : ⋆!! x y₁ y₂

  {- by definition of ! on ⋆ and arrow -}

Γ, x : ⋆, y₁ : x → ⋆, y₂ : x → ⋆  ⊢ B : (α : x) → y₁ α → y₂ α → ⋆
```

It is not difficult to see that, by swapping abstractions in the normal form of `B`, one can construct an inhabitant of `(α : x) → y₂ α → y₁ α → ⋆`. Writing the modified term `B#`:

```
   Γ, x : ⋆, y₁ : x → ⋆, y₂ : x → ⋆  ⊢ B  : (α : x) → y₁ α → y₂ α → ⋆
⇒  Γ, x : ⋆, y₁ : x → ⋆, y₂ : x → ⋆  ⊢ B# : (α : x) → y₂ α → y₁ α → ⋆
```

Moreover, for every proof `C` that a triplet `x y₁ y₂` satisfies the relation `B`, there is a proof `C#` that `x y₂ y₁` satisfies `B#`, obtained by properly swapping the corresponding abstractions and arguments. Thus:

``Γ, ... ⊢ C : B x y₁ y₂ ⇒  Γ, ... ⊢  C# : B# x y₂ y₁ ``

In fact, with the appropriate restrictions, we could sum up the above in the slogan:

``Γ ⊢ A : B ⇒ Γ ⊢ A# : B#``

with

```
(λx y₁ y₂. b)#  ⟶ λx y₂ y₁. b#
(∀x y₁ y₂. B)#  ⟶ ∀x y₂ y₁. B#
(F x y₁ y₂)#    ⟶ F# x y₂ y₁
```

And then we know how to amend our reduction rule to preserve types: commutation of `!` adds a swap.

``x!!ˣ  ⟶ x!ˣ!#``

Of course, adding a new construction such as `#` opens a number of questions. Fortunately, it appears that the construction behaves well with the rest of the system. Firstly, we do not have to worry about `#` “piling up”: permutations compose as usual, and we have, for example

``x## ⟶ x``
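
A minimal Python sketch of the dimension-2 swap, reading a term `λx y₁ y₂. b` as a curried host-language function of three arguments (my encoding, not the memo’s). For first-order bodies `b# = b`, so a plain argument swap suffices here, and the involution law `x## ⟶ x` falls out immediately.

```python
# Sketch of # at dimension 2: exchange the two higher-dimensional
# arguments, leaving the base argument x in place. Assumes a
# first-order body, where b# = b.

def swap(f):
    """(λx y₁ y₂. b)# ⟶ λx y₂ y₁. b# as argument reordering."""
    return lambda x, y1, y2: f(x, y2, y1)

triple = lambda x, y1, y2: (x, y1, y2)
assert swap(triple)(1, 2, 3) == (1, 3, 2)
# Permutations compose as usual: # is an involution (x## ⟶ x).
assert swap(swap(triple))(1, 2, 3) == (1, 2, 3)
```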

Furthermore, provided that `#` is applied only to relations generated by second-level parametricity and their inhabitants, it reduces on all forms except variables. Hence, the only new normal forms added are of the form `x!#` or `y#`.

## Higher dimensions

We’re now left with dealing with the higher dimensions of parametricity. For example, what happens at the next level: what does `x!!!ˣ` reduce to?

The answer is that commutation at higher parametricity levels introduces permutations of higher dimensions. (We should recall here the structure of higher-dimensional parametricity.) That is, if we see the simple swap `#` as twisting a square, then commutation at dimension three twists a cube. Writing in parentheses which faces of the cube swap:

``A!!!ˣ ⟶ A!!ˣ!#(132)``

The general formulation of `#` for arbitrary permutations of arbitrary dimensions is quite daunting (and it’s best left as an exercise to the reader). Fortunately, the transformations can be expressed in terms of hypercubes, and have vivid geometric representations.

Consider for example the following rule, which says how `#` and `!` interact at dimensions 2 and 3:

``A!#! ⟶ A!!#(213)``

Geometrically, the swap on the left-hand side twists a square. If that square is a face of a hypercube, then twisting that face necessitates a swap of the faces supported by the square.
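
The interaction of these swaps can be sketched concretely, assuming the annotations `#(132)` and `#(213)` are read in one-line notation (position `i` maps to the `i`-th digit); that reading, and all the names below, are my own, not fixed by the memo. Under it, the two basic swaps at dimension 3 generate every permutation of the three indices, i.e. the permutation part of the hypercubic symmetries at that dimension.

```python
# Compose permutations in one-line notation (1-based values), and check
# that the two swaps appearing above, #(132) and #(213), generate all
# of S3 -- the permutations of the 3 indices of a dimension-3 relation.
from itertools import permutations as all_perms

def compose(p, q):
    """Apply q first, then p (as functions on 1-based indices)."""
    return tuple(p[q[i] - 1] for i in range(len(p)))

swap_last = (1, 3, 2)   # #(132): swap the last two indices
swap_first = (2, 1, 3)  # #(213): swap the first two indices

generated = {(1, 2, 3)}
frontier = [(1, 2, 3)]
while frontier:
    p = frontier.pop()
    for g in (swap_last, swap_first):
        q = compose(g, p)
        if q not in generated:
            generated.add(q)
            frontier.append(q)

assert generated == set(all_perms((1, 2, 3)))  # all 6 elements of S3
```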

# Loose ends

The presentation I gave of permutations (`#`) is still mostly syntactic, despite the underlying geometry. What is the deep, mysterious meaning behind them, if there is any? One thing I can mention is that, essentially, `#` bakes symmetry into the system. This means that if one is to add custom constructs to the system (say, natural numbers), then their 2-dimensional relational interpretation must be symmetric (which is usually the case for natural numbers: the corresponding relation is equality).