Specifications

A program is specified with a logical statement that characterizes what is expected from this program. For example, here is the lambda-lifted specification of the addition function:

Definition recursive_specification_of_addition (add : nat -> nat -> nat) :=
  (forall y : nat,
      add O y = y)
  /\
  (forall x' y : nat,
      add (S x') y = S (add x' y)).

This specification is that of a function mapping two natural numbers to a natural number, and is inductive on the first argument of this function in that it follows its structure (0 or the result of applying S to a given natural number).
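For concreteness, here is a candidate implementation that follows the same structure (the name recursive_addition is ours, not one from the libraries):

Fixpoint recursive_addition (x y : nat) : nat :=
  match x with
    O => y
  | S x' => S (recursive_addition x' y)
  end.

Each conjunct of the specification mirrors one branch of the match expression.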

An implementation is said to satisfy a specification when, given this implementation, the associated logical statement holds.

For example, Nat.add (infix notation: +) is the resident addition function in Gallina. The following theorem (proved at the end of the present lecture note) captures that Nat.add satisfies this specification of addition:

Theorem the_resident_addition_function_satisfies_the_recursive_specification_of_addition :
  recursive_specification_of_addition Nat.add.

Resources

Properties of specifications

Specifications, like any artifact (i.e., man-made construct), should be studied in and of themselves. For example, are they vacuous? Ambiguous? Do they specify exactly one program? In that light, a theorem to the effect that one program satisfies a specification can be viewed as showing that this specification specifies at least one program.

The rest of this lecture note is dedicated to methods for studying specifications, based on the recursive specification of addition and on the implementation of addition stated above.

There is at most one recursive addition function

Let us show that the recursive specification of addition is unambiguous, i.e., that it specifies at most one function:

Proposition there_is_at_most_one_recursive_addition :
  forall add1 add2 : nat -> nat -> nat,
    recursive_specification_of_addition add1 ->
    recursive_specification_of_addition add2 ->
    forall x y : nat,
      add1 x y = add2 x y.

In words: if two functions satisfy the recursive specification of addition, then they are the same function, i.e., they map the same input to the same output.

The proof starts by populating the assumptions of the inner forall-statement:

Proof.
  intros add1 add2.
  unfold recursive_specification_of_addition.
  intros [H_add1_O H_add1_S] [H_add2_O H_add2_S].

The *goals* window then reads:

add1, add2 : nat -> nat -> nat
H_add1_O : forall y : nat, add1 0 y = y
H_add1_S : forall x' y : nat, add1 (S x') y = S (add1 x' y)
H_add2_O : forall y : nat, add2 0 y = y
H_add2_S : forall x' y : nat, add2 (S x') y = S (add2 x' y)
============================
forall x y : nat, add1 x y = add2 x y

We can then assume a natural number x and proceed by routine mathematical induction.

Interlude

Alfrothul: Routine mathematical induction?

Anton (stroking his beard reflectively): I heard that once from another math teacher, before he left the school.

Dana: Chill out, Anton, maybe it really is routine.

Anton: Let me see. Assuming such a natural number x:

intro x.

Alfrothul: We can initiate the induction proof and focus on the base case:

induction x as [ | x' IHx'].

-

Pablito: The *goals* window reads:

add1, add2 : nat -> nat -> nat
H_add1_O : forall y : nat, add1 0 y = y
H_add1_S : forall x' y : nat, add1 (S x') y = S (add1 x' y)
H_add2_O : forall y : nat, add2 0 y = y
H_add2_S : forall x' y : nat, add2 (S x') y = S (add2 x' y)
============================
forall y : nat, add1 0 y = add2 0 y

Anton: Let us assume a natural number y:

- intro y.

Dana: How about we consider the right-hand side first?

Anton (practicing his Singlish): Can:

rewrite -> (H_add2_O y).

Pablito: The *goals* window reads:

add1, add2 : nat -> nat -> nat
H_add1_O : forall y : nat, add1 0 y = y
H_add1_S : forall x' y : nat, add1 (S x') y = S (add1 x' y)
H_add2_O : forall y : nat, add2 0 y = y
H_add2_S : forall x' y : nat, add2 (S x') y = S (add2 x' y)
y : nat
============================
add1 0 y = y

Alfrothul: Neat – the goal is an instance of H_add1_O:

Check (H_add1_O y).

Pablito: Indeed it is. The *response* window reads:

H_add1_O y
     : add1 0 y = y

Anton: So we can use the exact tactic, which completes the base case:

exact (H_add1_O y).

Alfrothul: Good one, Dana. Let us focus on the induction step:

-

Anton: Let us assume a natural number y:

- intro y.

Dana: That’s two rewrites, one with H_add1_S and one with H_add2_S, in either order:

rewrite -> (H_add1_S x' y).
rewrite -> (H_add2_S x' y).

Pablito: The *goals* window reads:

add1, add2 : nat -> nat -> nat
H_add1_O : forall y : nat, add1 0 y = y
H_add1_S : forall x' y : nat, add1 (S x') y = S (add1 x' y)
H_add2_O : forall y : nat, add2 0 y = y
H_add2_S : forall x' y : nat, add2 (S x') y = S (add2 x' y)
x' : nat
IHx' : forall y : nat, add1 x' y = add2 x' y
y : nat
============================
S (add1 x' y) = S (add2 x' y)

Alfrothul: Well, the goal could be processed with an instance of the induction hypothesis:

Check (IHx' y).

Pablito: Indeed it can. The *response* window reads:

IHx' y
     : add1 x' y = add2 x' y

Anton: So let us replace add1 x' y in the goal by add2 x' y using the rewrite tactic from left to right – though we could do the converse too:

rewrite -> (IHx' y).

Pablito: Success – the left-hand side and the right-hand side are identical, witness the *goals* window:

add1, add2 : nat -> nat -> nat
H_add1_O : forall y : nat, add1 0 y = y
H_add1_S : forall x' y : nat, add1 (S x') y = S (add1 x' y)
H_add2_O : forall y : nat, add2 0 y = y
H_add2_S : forall x' y : nat, add2 (S x') y = S (add2 x' y)
x' : nat
IHx' : forall y : nat, add1 x' y = add2 x' y
y : nat
============================
S (add2 x' y) = S (add2 x' y)

Anton: And there we are:

    reflexivity.
Qed.

Dana: So, routine?

Alfrothul: Well, it looks like: in the base case, we used the base-case rules for add1 and add2, and in the induction step, we used the induction-step rules for add1 and add2, and then the induction hypothesis.

Anton (stroking his beard meditatively): Harrumph.
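For reference, here is the proof script of the interlude, assembled:

Proof.
  intros add1 add2.
  unfold recursive_specification_of_addition.
  intros [H_add1_O H_add1_S] [H_add2_O H_add2_S].
  intro x.
  induction x as [ | x' IHx'].

  - intro y.
    rewrite -> (H_add2_O y).
    exact (H_add1_O y).

  - intro y.
    rewrite -> (H_add1_S x' y).
    rewrite -> (H_add2_S x' y).
    rewrite -> (IHx' y).
    reflexivity.
Qed.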

Exercise 01

The addition function can also be specified tail-recursively:

Definition tail_recursive_specification_of_addition (add : nat -> nat -> nat) :=
  (forall y : nat,
      add O y = y)
  /\
  (forall x' y : nat,
      add (S x') y = add x' (S y)).

Is there at most one tail-recursive addition function?

Solution for Exercise 01

See the live-proving session for why a naive induction proof does not go through and how the Light of Inductil is needed: delaying the introduction of y strengthens the induction hypothesis and makes the proof go through.
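For reference, here is a proof sketch along those lines (the name of the proposition is ours). The point is to introduce y only after initiating the induction, so that the induction hypothesis is universally quantified over y and can be applied at S y in the induction step:

Proposition there_is_at_most_one_tail_recursive_addition :
  forall add1 add2 : nat -> nat -> nat,
    tail_recursive_specification_of_addition add1 ->
    tail_recursive_specification_of_addition add2 ->
    forall x y : nat,
      add1 x y = add2 x y.
Proof.
  intros add1 add2.
  unfold tail_recursive_specification_of_addition.
  intros [H_add1_O H_add1_S] [H_add2_O H_add2_S].
  intro x.
  induction x as [ | x' IHx'].

  - intro y.
    rewrite -> (H_add2_O y).
    exact (H_add1_O y).

  - intro y.
    rewrite -> (H_add1_S x' y).
    rewrite -> (H_add2_S x' y).
    exact (IHx' (S y)).
Qed.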

An ambiguous specification

Consider the following specification of the predecessor function:

Definition specification_of_the_predecessor_function (pred : nat -> nat) :=
  forall n : nat,
    pred (S n) = n.

Let us attempt to prove that there is at most one function satisfying this specification:

Proposition there_is_at_most_one_predecessor_function :
  forall pred1 pred2 : nat -> nat,
    specification_of_the_predecessor_function pred1 ->
    specification_of_the_predecessor_function pred2 ->
    forall n : nat,
      pred1 n = pred2 n.

When attempting to prove this proposition by induction on n (what else?), we get stuck in the base case. Being stuck there suggests the following maker of predecessor functions. This maker is parameterized by the base case:

Definition make_predecessor_function (zero n : nat) :=
  match n with
    0 => zero
  | S n' => n'
  end.
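For concreteness, here is the stuck attempt mentioned above (the use of Abort is deliberate):

Proof.
  intros pred1 pred2.
  unfold specification_of_the_predecessor_function.
  intros H_pred1 H_pred2.
  intro n.
  induction n as [ | n' IHn'].

  - (* The goal reads pred1 0 = pred2 0,
       but the specification says nothing about the image of 0. *)

Abort.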

We can then show that given a zero, this maker yields a function that satisfies the specification, always:

Lemma about_make_predecessor_function :
  forall zero : nat,
    specification_of_the_predecessor_function (make_predecessor_function zero).

Unsurprisingly, the proof is by cases:

Proof.
  intro zero.
  unfold specification_of_the_predecessor_function.
  intros [ | n'].

  - unfold make_predecessor_function.
    reflexivity.

  - unfold make_predecessor_function.
    reflexivity.
Qed.

We are now in position to show that the specification is ambiguous, in that at least two distinct functions satisfy it:

Theorem there_are_at_least_two_predecessor_functions :
  exists pred1 pred2 : nat -> nat,
    specification_of_the_predecessor_function pred1 /\
    specification_of_the_predecessor_function pred2 /\
    exists n : nat,
      pred1 n <> pred2 n.

where the infix notation _ <> _ means “is not Leibniz-equal to”. (OCaml uses the same infix notation for its polymorphic equality predicate.) So here, pred1 n <> pred2 n, not (pred1 n = pred2 n), and ~(pred1 n = pred2 n) are all syntactic sugar for pred1 n = pred2 n -> False.

We start the proof by supplying two distinct witnesses:

Proof.
  exists (make_predecessor_function 0).
  exists (make_predecessor_function 1).

The *goals* window then reads:

============================
specification_of_the_predecessor_function (make_predecessor_function 0) /\
specification_of_the_predecessor_function (make_predecessor_function 1) /\
(exists n : nat, make_predecessor_function 0 n <> make_predecessor_function 1 n)

Since the goal is a conjunction, let us successively focus on each of the conjuncts:

split.

-

The *goals* window then reads:

============================
specification_of_the_predecessor_function (make_predecessor_function 0)

We recognize this goal as an instance of Lemma about_make_predecessor_function, so let us prove it using the exact tactic:

- exact (about_make_predecessor_function 0).

This completes the current goal. The remaining conjunct is itself a conjunction, so let us focus on its first conjunct:

- split.

  --

The *goals* window then reads:

============================
specification_of_the_predecessor_function (make_predecessor_function 1)

We also recognize this goal as an instance of Lemma about_make_predecessor_function, so let us also prove it using the exact tactic:

-- exact (about_make_predecessor_function 1).

This completes the current goal, so let us focus on the second conjunct:

--

The *goals* window then reads:

============================
exists n : nat, make_predecessor_function 0 n <> make_predecessor_function 1 n

Let us supply – not at all innocently – 0 as a witness:

-- exists 0.

The *goals* window then reads:

============================
make_predecessor_function 0 0 <> make_predecessor_function 1 0

Let us unfold the two calls to make_predecessor_function:

unfold make_predecessor_function.

The *goals* window then reads:

============================
0 <> 1

Searching for the pattern _ <> S _ reveals Theorem n_Sn:

Search (_ <> S _).
Check n_Sn.

The *response* window reads:

n_Sn
     : forall n : nat, n <> S n

Since the goal is an instance of n_Sn, we can complete the proof with the exact tactic:

       exact (n_Sn 0).
Qed.
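Assembled, the proof script reads:

Proof.
  exists (make_predecessor_function 0).
  exists (make_predecessor_function 1).
  split.

  - exact (about_make_predecessor_function 0).

  - split.

    -- exact (about_make_predecessor_function 1).

    -- exists 0.
       unfold make_predecessor_function.
       exact (n_Sn 0).
Qed.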

Exercise 02

Consider the following total specification of the predecessor function:

Definition specification_of_the_total_predecessor_function (pred : nat -> option nat) :=
  (pred 0 = None)
  /\
  (forall n : nat,
      pred (S n) = Some n).

Is this specification ambiguous?

Interlude: Solution for Exercise 02

Anton (stroking his beard happily): Guys, this looks like a routine induction. Let’s get cracking:

Proposition there_is_at_most_one_total_predecessor_function :
  forall pred1 pred2 : nat -> option nat,
    specification_of_the_total_predecessor_function pred1 ->
    specification_of_the_total_predecessor_function pred2 ->
    forall n : nat,
      pred1 n = pred2 n.
Proof.
  intros pred1 pred2.
  unfold specification_of_the_total_predecessor_function.
  intros [H_pred1_O H_pred1_S] [H_pred2_O H_pred2_S].
  intro n.
  induction n as [ | n' IHn'].

  -

Pablito: The *goals* window now reads:

pred1, pred2 : nat -> option nat
H_pred1_O : pred1 0 = None
H_pred1_S : forall n : nat, pred1 (S n) = Some n
H_pred2_O : pred2 0 = None
H_pred2_S : forall n : nat, pred2 (S n) = Some n
============================
pred1 0 = pred2 0

Alfrothul: Piece of cake. Let us treat the right-hand side first, so that it gives an instance of H_pred1_O. And then we can focus on the induction step:

- rewrite -> H_pred2_O.
  exact H_pred1_O.

-

Pablito: The *goals* window now reads:

pred1, pred2 : nat -> option nat
H_pred1_O : pred1 0 = None
H_pred1_S : forall n : nat, pred1 (S n) = Some n
H_pred2_O : pred2 0 = None
H_pred2_S : forall n : nat, pred2 (S n) = Some n
n' : nat
IHn' : pred1 n' = pred2 n'
============================
pred1 (S n') = pred2 (S n')

Anton: Same same. Let us treat the right-hand side first, so that it gives an instance of H_pred1_S:

- rewrite -> (H_pred2_S n').
  exact (H_pred1_S n').

Vigful: And we are done, witness the *response* window:

No more subgoals.

Anton (happily): This routine is getting routine.

Dana: Harrumph.

Alfrothul: Yes, Dana?

Dana: We didn’t use the induction hypothesis.

Anton (eyeballing the proof script): True that.

Dana: So, it’s not much of an induction proof, is it?

Anton (stroking his beard nervously): What do you mean by that? It’s routine.

Dana: It’s more like a proof by cases. Look:

  Restart.

  intros pred1 pred2.
  unfold specification_of_the_total_predecessor_function.
  intros [H_pred1_O H_pred1_S] [H_pred2_O H_pred2_S].
  intros [ | n'].

  - rewrite -> H_pred2_O.
    exact H_pred1_O.

  - rewrite -> (H_pred2_S n').
    exact (H_pred1_S n').
Qed.

Alfrothul: I see. If we don’t use the induction hypothesis, the induction proof reduces to a proof by cases.

Anton (dejected): So it was not at all a routine induction.

“Mad Eye” Moody: CONSTANT VIGILANCE!

Mimer: Prof. Moody, thanks for stopping by! How’s your CapsLock key?

“Mad Eye” Moody: Under control, thanks.

A vacuous specification

Consider the following specification of both the predecessor and the successor function:

Definition specification_of_the_predecessor_and_successor_function (ps : nat -> nat) :=
  (forall n : nat,
      ps (S n) = n)
  /\
  (forall n : nat,
      ps n = (S n)).

We can prove that there is at most one function satisfying this specification:

Theorem there_is_at_most_one_predecessor_and_successor_function :
  forall ps1 ps2 : nat -> nat,
    specification_of_the_predecessor_and_successor_function ps1 ->
    specification_of_the_predecessor_and_successor_function ps2 ->
    forall n : nat,
      ps1 n = ps2 n.

Exercise 03

Is Theorem there_is_at_most_one_predecessor_and_successor_function proved by induction or by cases?

A vacuous specification (continued and ended)

Let us capture a contradictory aspect of the specification with the following lemma:

Lemma a_contradictory_aspect_of_the_predecessor_and_successor_function :
  forall ps : nat -> nat,
    specification_of_the_predecessor_and_successor_function ps ->
    ps 1 = 0 /\ ps 1 = 2.

The proof is by cases:

Proof.
  intro ps.
  unfold specification_of_the_predecessor_and_successor_function.
  intros [H_ps_S H_ps].
  split.

  - rewrite -> (H_ps_S 0).
    reflexivity.

  - rewrite -> (H_ps 1).
    reflexivity.
Qed.

We are now in position to show that the specification is vacuous, in that if an entity satisfies it, it is not a function, since functions map the same input to the same output, always:

Theorem there_are_zero_predecessor_and_successor_functions :
  forall ps : nat -> nat,
    specification_of_the_predecessor_and_successor_function ps ->
    exists n : nat,
      ps n <> ps n.

Let us start the proof and check out the contradictory lemma:

Proof.
  intros ps S_ps.
  Check (a_contradictory_aspect_of_the_predecessor_and_successor_function ps S_ps).

The *response* window reads:

a_contradictory_aspect_of_the_predecessor_and_successor_function ps S_ps
     : ps 1 = 0 /\ ps 1 = 2

Let us destructure the resulting conjunction and name its conjuncts:

destruct (a_contradictory_aspect_of_the_predecessor_and_successor_function ps S_ps) as [H_ps_0 H_ps_2].

The *goals* window reads:

ps : nat -> nat
S_ps : specification_of_the_predecessor_and_successor_function ps
H_ps_0 : ps 1 = 0
H_ps_2 : ps 1 = 2
============================
exists n : nat, ps n <> ps n

Let us (obviously) pick 1 as a witness:

exists 1.

The *goals* window reads:

ps : nat -> nat
S_ps : specification_of_the_predecessor_and_successor_function ps
H_ps_0 : ps 1 = 0
H_ps_2 : ps 1 = 2
============================
ps 1 <> ps 1

Now all we need to do is to rewrite the left-hand side of the goal with H_ps_0 and the right-hand side of the goal with H_ps_2. We will then obtain 0 <> 2 as a goal, which surely we can prove after searching the libraries. But alas, as we are about to see, tCPA is over-industrious:

rewrite -> H_ps_0.

The *goals* window reads:

ps : nat -> nat
S_ps : specification_of_the_predecessor_and_successor_function ps
H_ps_0 : ps 1 = 0
H_ps_2 : ps 1 = 2
============================
0 <> 0

What happened? Both instances of ps 1 have been rewritten, not only the one in the left-hand side.

We need to tell tCPA which instance of ps 1 to rewrite, and this is done by postfixing the application of rewrite with the at keyword and a positive natural number: at 1 for the first instance, at 2 for the second instance, etc. (And what “first”, “second”, etc. mean often requires some trial and error in practice.)

Here, let us tell tCPA to apply the rewrite tactic on the first instance of ps 1:

Restart.

intros ps S_ps.
Check (a_contradictory_aspect_of_the_predecessor_and_successor_function ps S_ps).
destruct (a_contradictory_aspect_of_the_predecessor_and_successor_function ps S_ps) as [H_ps_0 H_ps_2].
exists 1.
rewrite -> H_ps_0 at 1.

The *goals* window reads:

ps : nat -> nat
S_ps : specification_of_the_predecessor_and_successor_function ps
H_ps_0 : ps 1 = 0
H_ps_2 : ps 1 = 2
============================
0 <> ps 1

We can then rewrite the remaining instance of ps 1:

rewrite -> H_ps_2.

The *goals* window reads:

ps : nat -> nat
S_ps : specification_of_the_predecessor_and_successor_function ps
H_ps_0 : ps 1 = 0
H_ps_2 : ps 1 = 2
============================
0 <> 2

With a bit of searching, we can see that this goal is an instance of a resident theorem named O_S, which lets us complete the proof:

  Search (0 <> S _).
  Check (O_S 1).
  exact (O_S 1).
Qed.

About proving that an implementation satisfies a specification

A specification can be put to active use. For example, let us revisit the case of the addition function:

Theorem the_resident_addition_function_satisfies_the_recursive_specification_of_addition :
  recursive_specification_of_addition Nat.add.

The proof consists of splitting the conjunction and appealing to resident lemmas:

Proof.
  unfold recursive_specification_of_addition.
  split.
  - intro y.
    Search (0 + _ = _).
    exact (Nat.add_0_l y).
  - intros x' y.
    Search (S _ + _ = S (_ + _)).
    exact (Nat.add_succ_l x' y).
Qed.

Exercise 04

Does the resident addition function satisfy the tail-recursive specification of addition?

Exercise 05

Are the two specifications of addition (the recursive one and the tail-recursive one) equivalent?

Exercise 06

  1. If a function satisfies the recursive specification of addition, is it associative?
  2. If a function satisfies the tail-recursive specification of addition, is it associative?

Exercise 07

  1. If a function satisfies the recursive specification of addition, is it commutative?
  2. If a function satisfies the tail-recursive specification of addition, is it commutative?

Partial solution for Exercise 07

  1. As quickly becomes clear, proving the theorem requires two lemmas:

    Lemma commutativity_of_recursive_addition_O :
      forall add : nat -> nat -> nat,
        recursive_specification_of_addition add ->
        forall n : nat,
          add n O = n.
    
    Lemma commutativity_of_recursive_addition_S :
      forall add : nat -> nat -> nat,
        recursive_specification_of_addition add ->
        forall n1 n2 : nat,
          add n1 (S n2) = S (add n1 n2).
    

    These two lemmas are proved by routine induction.

    The proof of the theorem then proceeds by declaring two new hypotheses, which is done using the assert tactic, a proof analogue of a local let-expression in a program:

    Theorem commutativity_of_recursive_addition :
      forall add : nat -> nat -> nat,
        recursive_specification_of_addition add ->
        forall n1 n2 : nat,
          add n1 n2 = add n2 n1.
    Proof.
      intros add S_add.
      assert (H_add_O := commutativity_of_recursive_addition_O add S_add).
      assert (H_add_S := commutativity_of_recursive_addition_S add S_add).
      unfold recursive_specification_of_addition in S_add.
      destruct S_add as [S_add_O S_add_S].
    

Exercise 08

  1. If a function satisfies the recursive specification of addition, is O left-neutral for it?
  2. If a function satisfies the recursive specification of addition, is O right-neutral for it?
  3. If a function satisfies the tail-recursive specification of addition, is O left-neutral for it?
  4. If a function satisfies the tail-recursive specification of addition, is O right-neutral for it?

Halcyon’s dream

Galadriel: And for you, Halcyon Kingfisher, I give you the Light of Inductil, our most beloved proof strategy. When facing the choice between writing:

intros x y.
induction x as ...
- ...
- ...

Galadriel (continuing): and:

intro x.
induction x as ...
- intro y.
  ...
- ...

Galadriel (ending): write the latter, for it will give you a stronger induction hypothesis. May it be a light for you in stark proofs when the induction hypotheses are too weak and all other sights fade out.

Halcyon (his cheeks somewhat red): Huh, thank you, I guess.

Postlude

Alfrothul: Cute reference to The Fellowship of the Ring.

Pablito (jotting it down): The movie?

Alfrothul: The movie.

The fourth wall (its eye on the ball): And now for an executive summary.

Executive summary

Assuming we want to prove forall x : nat, forall y : nat, Q x y, where Q : nat -> nat -> Prop,

  • when we write:

    intros x y.
    induction x as ...
    - ...
    - ...
    

    we prove the proposition P x == Q x y of type Prop by induction on x; whereas

  • when we write:

    intro x.
    induction x as ...
    - intro y.
      ...
    - intro y.
      ...
    

    we prove the proposition P x == forall y : nat, Q x y of type Prop by induction on x.

In the former case, y occurs in P x (and therefore in the induction hypothesis) as a constant, whereas in the latter case, P x (and therefore the induction hypothesis) is universally quantified over y. In the latter case, the induction hypothesis is said to be “stronger” than in the former case, and writing:

intro x.
induction x as ...
- intro y.
  ...
- intro y.
  ...

instead of:

intros x y.
induction x as ...
- ...
- ...

is said to “strengthen” the induction hypothesis.

The Light of Inductil is very useful to reason about tail-recursive functions that use an accumulator.

Resources

Version

Added the executive summary after Halcyon’s dream [06 Feb 2024]

Omitted the interlude after the partial solution for Exercise 07 because it had become irrelevant [02 Feb 2024]

Added the .v file for the present lecture note after the live-proving session [02 Feb 2024]

Created [01 Feb 2024]