Library equations_simple

Equations is a plugin for Coq that comes with a few support modules defining classes and tactics for running it. We will introduce its main features through a handful of examples. We start our Coq primer session by importing the Equations module.
Coq uses tactics to define terms/programs, whereas Haskell allows one to define functions by equations. The same style is used in the Agda and Idris implementations of type theory; Equations introduces this style in Coq.

From Coq Require Import Arith Lia Program.
From Equations Require Import Equations.
Require Import Bvector.

Inductive types

In its simplest form, Equations allows one to define functions on inductive datatypes. Take for example the booleans, defined as an inductive type with two constructors true and false:
   Inductive bool : Set := true : bool | false : bool
We can define the boolean negation as follows:

Equations neg (b : bool) : bool :=
neg true := false ;
neg false := true.

The same function can also be written using the new concise syntax:
Equations neg' : bool -> bool :=
| true => false
| false => true.

Equations declarations are formed by a signature definition and a set of clauses that must form a covering of this signature. The compiler is then expected to automatically find a corresponding case-splitting tree that implements the function. In this case, it simply needs to split on the single variable b to produce two new programming problems neg true and neg false that are directly handled by the user clauses. We will see in more complex examples that this search for a splitting tree may be non-trivial.
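For instance, a function with several arguments need not split on all of them; in the following sketch (a definition of our own, written in the same style), a single split on the first argument suffices:

Equations orb' (a b : bool) : bool :=
orb' true _ := true ;
orb' false b := b.

Here the case tree only inspects a, and b is returned untouched in the second clause.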

Reasoning principles

We need not only the ability to define complex functions but also good reasoning support for them. Practically, this translates to the ability to simplify applications of functions appearing in the goal and to have strong enough proof principles for (recursive) definitions.
Equations provides this through an automatic generation of proofs related to the function. Namely, each defining equation gives rise to a lemma stating the equality between the left and right hand sides. These equations can be used as rewrite rules for simplification during proofs, without having to rely on the fragile simplifications implemented by raw reduction. We can also generate the inductive graph of any Equations definition, giving the strongest elimination principle on the function.
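For instance, the rewrite rules generated for neg can be used through Equations' simp tactic; a small sanity check (the example name is ours):

Example neg_true_spec : neg true = false.
Proof. simp neg. reflexivity. Qed.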
For neg, the inductive graph is defined as the following inductive relation in Set:
Inductive neg_graph : bool -> bool -> Set :=
   neg_graph_equation_1 : neg_graph true false
 | neg_graph_equation_2 : neg_graph false true


Check neg_graph.
Print neg_graph.

Along with a proof of Π b, neg_graph b (neg b), we can eliminate any call to neg, specializing its argument and result, in a single command. Suppose, for example, that we want to show that neg is involutive; our goal will look like:
  b : bool
  ============================
   neg (neg b) = b
An application of the tactic funelim (neg b) will produce two goals corresponding to the splitting done in neg: neg false = true and neg true = false. These correspond exactly to the rewriting lemmas generated for neg.

Lemma neg_inv : forall b, neg (neg b) = b.
Proof. intros b. funelim (neg b).
  - simp neg. reflexivity.
  - simp neg. reflexivity.
Defined.
Check neg_elim. Print neg_elim.
Lemma neg_inv' : forall b, neg (neg b) = b.
Proof. intros b.
  apply neg_elim with (P := fun b1 b2 : bool => neg b2 = b1).
  - simp neg. reflexivity.
  - simp neg. reflexivity.
Defined.
Inductive list {A} : Type := nil : list | cons : A -> list -> list.
Arguments list : clear implicits.
Notation "x :: l" := (cons x l).

Equations tail {A} (l : list A) : list A :=
tail nil := nil ;
tail (cons a v) := v.
Check tail_equation_2.
Equations app {A} (l l' : list A) : list A :=
app nil l' := l' ;
app (cons a l) l' := cons a (app l l').

Check app_graph. Check @app_graph_equation_1. Check @app_graph_equation_2.

Lemma app_assoc {A} (k l m : list A) : app k (app l m) = app (app k l) m.
Proof.
  funelim (app k (app l m)).
  + now simp app.
  + simp app. now f_equal.
Qed.

Fixpoint fix_app {A} (l l' : list A) : list A :=
  match l with
  | nil => l'
  | cons a l => cons a (fix_app l l')
  end.

Lemma app_assoc_ind_list {A} (k l m : list A) :
  fix_app k (fix_app l m) = fix_app (fix_app k l) m.
Proof. induction k; simpl; f_equal; auto. Qed.

Equations filter {A} (l : list A) (p : A -> bool) : list A :=
filter nil p := nil ;
filter (cons a l) p with (p a) => {
  filter _ p true := a :: filter l p ;
  filter _ p false := filter l p }.
Check filter_elim.
Equations filter' {A} (l : list A) (p : A -> bool) : list A :=
filter' nil p := nil ;
filter' (cons a l) p with (p a) => {
  | true := a :: filter' l p ;
  | false := filter' l p }.

Lemma filter_false_nil {A} (l : list A) : filter l (fun x => false) = nil.
Proof.
  funelim (filter l _).
  + now simp filter.
  + simp filter.
  + simp filter.
Qed.
Global Transparent filter.
Lemma filter_false_nil' {A} (l : list A) : filter l (fun x => false) = nil.
Proof.
  funelim (filter l (fun _ => true)).
  + reflexivity.
  + simp filter.
  + simp filter.
Qed.

Equations unzip {A B} (l : list (A * B)) : list A * list B :=
unzip nil := (nil, nil) ;
unzip (cons (a, b) l) with (unzip l) => {
  unzip _ (la, lb) := (a :: la, b :: lb) }.

Equations equal (n m : nat) : { n = m } + { n <> m } :=
equal O O := left eq_refl ;
equal (S n) (S m) with (equal n m) := {
  equal (S n) (S ?(n)) (left eq_refl) := left eq_refl ;
  equal (S n) (S m) (right p) := right _ } ;
equal x y := right _.
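As a small usage sketch (the example and its name are ours), the decision procedure equal can be used to decide equality of concrete numbers:

Example two_neq_three : 2 <> 3.
Proof.
  destruct (equal 2 3) as [e|ne].
  - discriminate e.
  - exact ne.
Qed.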
Section EqualManualObligations.
#[tactic=idtac] Equations equal' (n m : nat) : { n = m } + { n <> m } :=
equal' O O := left eq_refl ;
equal' (S n) (S m) with (equal n m) := {
  equal' (S n) (S ?(n)) (left eq_refl) := left eq_refl ;
  equal' (S n) (S m) (right p) := right _ } ;
equal' x y := right _.

Local Obligation Tactic := idtac. (* disable automatic solving of each obligation *)
  Next Obligation. intros ? ? ? H. discriminate H. Qed.
  Next Obligation. intros ? ? ? H. discriminate H. Qed.
  Next Obligation. intros ? ? ? H. inversion H. auto. Qed.
End EqualManualObligations.

Dependent types are also useful to turn partial functions into total functions by restricting their domain. Typically, we can force the list passed to head to be non-empty using the specification:

Equations head {A} (l : list A) (pf : l <> nil) : A :=
head nil pf with (pf eq_refl) := { | ! } ;
head (cons a v) _ := a.

We decompose the list and are faced with two cases:
  • In the first case, the list is empty, hence the proof pf of type nil <> nil allows us to derive a contradiction by applying it to reflexivity. We make use of another category of right-hand sides, which we call empty nodes, to inform the compiler that a contradiction is derivable in this case. In general we cannot expect the compiler to find by itself that the context contains a contradiction, as this is undecidable.
    However, in this case, one could also write an empty set of clauses for the with subprogram, as Equations applies a heuristic in case of an empty set of clauses: it tries to split each of the variables in the context to find an empty type.
  • In the second case, we simply return the head of the list, disregarding the proof.
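As a quick sanity check of the second case (the lemma and its name are ours), the generated rewrite rules show that head returns the first element of a non-empty list:

Lemma head_cons {A} (a : A) (l : list A) (pf : cons a l <> nil) :
  head (cons a l) pf = a.
Proof. now simp head. Qed.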

Inductive families

The next step is to make constraints such as non-emptiness part of the datatype itself. This capability is provided through inductive families in Coq, which are a similar concept to the generalization of algebraic datatypes to GADTs in functional languages like Haskell. Families provide a way to associate to each constructor a different type, making it possible to give specific information about a value in its type.

Equality

The alma mater of inductive families is the propositional equality eq defined as:
   Inductive eq (A : Type) (x : A) : A -> Prop :=
   eq_refl : eq A x x.
Equality is a polymorphic relation on A. (The Prop sort (or kind) categorizes propositions, while the Set sort, equivalent to ⋆ in Haskell, categorizes computational types.) Equality is parameterized by a value x of type A and indexed by another value of type A. Its single constructor states that equality is reflexive, so the only way to build an object of eq x y is if x ≡ y, that is, if x is definitionally equal to y.
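For instance, 2 + 2 and 4 are definitionally equal, so the reflexivity constructor alone proves their equality (a small check of our own):

Check (eq_refl : 2 + 2 = 4).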
Now what is the elimination principle associated to this inductive family? It is the good old Leibniz substitution principle:
forall (A : Type) (x : A) (P : A -> Type), P x -> forall y : A, x = y -> P y
Provided a proof that x = y, we can create an object of type P y from an existing object of type P x. This substitution principle is enough to show that equality is symmetric and transitive. For example, we can use pattern-matching on equality proofs to show:

Equations eqt {A} (x y z : A) (p : x = y) (q : y = z) : x = z :=
eqt x ?(x) ?(x) eq_refl eq_refl := eq_refl.
Let us explain the meaning of the non-linear patterns here, which we slipped through in the equal example. By pattern-matching on the equalities, we have unified x, y and z, hence we determined the values of the patterns for the variables to be x. The ?(x) notation denotes that the pattern is not a candidate for refinement, as it is determined by another pattern. These particular patterns are called "inaccessible". When they are variables, the inaccessibility annotation is optional.
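Symmetry can be derived in exactly the same style; here is a sketch of our own, mirroring eqt (the ?(x) annotation could also be dropped here, as noted above):

Equations eq_sym' {A} (x y : A) (p : x = y) : y = x :=
eq_sym' x ?(x) eq_refl := eq_refl.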

Indexed datatypes

Functions on vectors provide more striking examples of this situation. The vector family is indexed by a natural number representing the size of the vector:
   Inductive vector (A : Type) : nat -> Type :=
   | Vnil : vector A O
   | Vcons : A -> forall n : nat, vector A n -> vector A (S n)
The empty vector Vnil has size O while the cons operation increments the size by one. Now let us define the usual map on vectors:
Arguments Vector.nil {A}.
Arguments Vector.cons {A} a {n} v : rename.

Notation vector := Vector.t.
Notation Vnil := Vector.nil.
Notation Vcons := Vector.cons.

Equations vmap {A B} (f : A -> B) {n} (v : vector A n) :
  vector B n :=
vmap f (n:=?(0)) Vnil := Vnil ;
vmap f (Vcons a v) := Vcons (f a) (vmap f v).

Here the value of the index representing the size of the vector is directly determined by the constructor, hence in the case tree we have no need to eliminate n. This means in particular that the function vmap does not do any computation with n, and the argument could be eliminated in the extracted code. In other words, it provides only logical information about the shape of v but no computational information.
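Concretely, the size argument is implicit and fully determined by the vector argument; a small check of our own:

Check (vmap S (Vcons 1 (Vcons 2 Vnil)) : vector nat 2).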
The vmap function works on every member of the vector family, but some functions may work only for some subfamilies, for example vtail:

Equations vtail {A n} (v : vector A (S n)) : vector A n :=
vtail (Vcons a v') := v'.

The type of v ensures that vtail can only be applied to non-empty vectors; moreover, the patterns only need to consider constructors that can produce objects in the subfamily vector A (S n), excluding Vnil. The pattern-matching compiler uses unification with the theory of constructors to discover which cases need to be considered and which are impossible. In this case the failed unification of 0 and S n shows that the Vnil case is impossible. This powerful unification engine running under the hood makes it possible to write concise code where all uninteresting cases are handled automatically.
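Any function restricted to such a subfamily can be handled the same way; for instance, a head function on non-empty vectors (this definition is our own, analogous to vtail):

Equations vhead {A n} (v : vector A (S n)) : A :=
vhead (Vcons a _) := a.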
To demonstrate how Equations simplifies definitions that use dependent types, consider the following function, defined using basic pattern-matching (match ... with):

Definition Vzip {A B : Type} : forall {n}, vector A n -> vector B n -> vector (A * B) n :=
  fix zip {n} :=
    match n return vector A n -> vector B n -> vector (A * B) n with
      | O => fun vs vs' => Vnil
      | S n' => fun (vs vs' : vector _ (S n')) =>
          (match vs, vs' with
           | Vcons v tl, Vcons v' tl' =>
             fun f => f v v' tl tl'
           end) (fun v v' vs vs' => Vcons (v,v') (zip vs vs'))
    end.

It can be written concisely with Equations:
Equations Vzip' {A B : Type} {n : nat} (v1 : vector A n) (v2 : vector B n)
  : vector (A * B) n :=
  Vzip' Vnil _ := Vnil;
  Vzip' (Vcons a1 v1') (Vcons a2 v2') := Vcons (a1,a2) (Vzip' v1' v2').

Derived notions, No-Confusion

For this to work smoothly, the package requires some derived definitions on each (indexed) family, which can be generated automatically using the generic Derive command. Here we ask to generate the signature, heterogeneous no-confusion and homogeneous no-confusion principles for vectors:

Derive NoConfusion for nat.
Derive Signature NoConfusion NoConfusionHom for vector.

The precise specification of these derived definitions can be found in the manual. Signature is used to "pack" a value in an inductive family with its index, e.g. the "total space" of every index and value of the family. This can be used to derive the heterogeneous no-confusion principle for the family, which allows discriminating between objects in potentially different instances/fibers of the family, or deducing the injectivity of each constructor. The NoConfusionHom variant derives the homogeneous no-confusion principle between two objects in the same instance of the family, e.g. to simplify equations of the form Vnil = Vnil :> vector A 0. This last principle can only be defined when pattern-matching on the inductive family does not require the K axiom; the derivation will otherwise fail.
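As a small illustration (the lemma and its name are ours), the homogeneous no-confusion principle allows the noconf tactic to invert an equality between two Vcons applications at the same index:

Lemma Vcons_inj_hd {A n} (a b : A) (v w : vector A n) :
  Vcons a v = Vcons b w -> a = b.
Proof. intros H. now noconf H. Qed.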

Unification and indexed datatypes

Back to our example, of course the equations and the induction principle are simplified in a similar way. If we encounter a call to vtail in a proof, we can use the following elimination principle to simplify both the call and the argument which will be automatically substituted by an object of the form Vcons _ _ _:

forall P : forall (A : Type) (n : nat), vector A (S n) -> vector A n -> Prop,
(forall (A : Type) (n : nat) (a : A) (v : vector A n),
  P A n (Vcons a v) v) ->
forall (A : Type) (n : nat) (v : vector A (S n)), P A n v (vtail v)

Check vtail_elim.

As a witness of the power of the unification, consider the following function which computes the diagonal of a square matrix of size n * n.


Equations diag {A n} (v : vector (vector A n) n) : vector A n :=
diag (n:=O) Vnil := Vnil ;
diag (n:=S _) (Vcons (Vcons a v) v') :=
  Vcons a (diag (vmap vtail v')).

Here in the second equation, we know that the elements of the vector are necessarily of size S n too, hence we can do a nested refinement on the first one to find the first element of the diagonal.

Recursion

Notice how in the diag example above we explicitly pattern-matched on the index n, even though the Vnil and Vcons pattern matching would have been enough to determine these indices. This is because the following definition also fails:

Fail Equations diag' {A n} (v : vector (vector A n) n) : vector A n :=
diag' Vnil := Vnil ;
diag' (Vcons (Vcons a v) v') :=
  Vcons a (diag' (vmap vtail v')).

Indeed, Coq cannot guess the decreasing argument of this fixpoint using its limited syntactic guard criterion: vmap vtail v' cannot be seen to be a (large) subterm of v' using this criterion, even if it is clearly "smaller". In general, it can also be the case that the compilation algorithm introduces decorations to the proof term that prevent the syntactic guard check from seeing that the definition is structurally recursive.
To alleviate this problem, Equations provides support for well-founded recursive definitions which do not rely on syntactic checks.
The simplest example of this is using the lt order on natural numbers to define a recursive definition of identity:

Equations id (n : nat) : nat by wf n lt :=
  id 0 := 0;
  id (S n') := S (id n').

Here id is defined by well-founded recursion on lt on the (only) argument n using the by wf annotation. At recursive calls of id, obligations are generated to show that the arguments effectively decrease according to this relation. Here the proof that n' < S n' is discharged automatically.
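As a quick sanity check (the lemma and its name are ours), functional elimination also works for this well-founded definition:

Lemma id_is_id (n : nat) : id n = n.
Proof. funelim (id n); intros; congruence. Qed.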
Well-founded recursion on arbitrary dependent families is not as easy to use, as in general the relations on families are heterogeneous, as they must relate inhabitants of potentially different instances of the family. Equations provides a Derive command to generate the subterm relation on any such inductive family and derive the well-foundedness of its transitive closure. This provides course-of-values or so-called "mathematical" induction on these objects, reflecting the structural recursion criterion in the logic.

Derive Subterm for vector.

For vectors for example, the relation is defined as:
Inductive t_direct_subterm (A : Type) :
  forall n n0 : nat, vector A n -> vector A n0 -> Prop :=
    t_direct_subterm_1_1 : forall (h : A) (n : nat) (H : vector A n),
      t_direct_subterm A n (S n) H (Vcons h H)
That is, there is only one recursive subterm, for the subvector in the Vcons constructor. We also get a proof of:

Check well_founded_t_subterm : forall A, WellFounded (t_subterm A).
Module UnzipVect.
  Context {A B : Type}.

  Equations unzip {n} (v : vector (A * B) n) : vector A n * vector B n
    by wf (signature_pack v) (@t_subterm (A * B)) :=
  unzip Vnil := (Vnil, Vnil) ;
  unzip (Vector.cons (pair x y) v) with unzip v := {
    | pair xs ys := (Vector.cons x xs, Vector.cons y ys) }.
End UnzipVect.
Equations diag' {A n} (v : vector (vector A n) n) : vector A n by wf n :=
diag' Vnil := Vnil ;
diag' (Vcons (Vcons a v) v') := Vcons a (diag' (vmap vtail v')).
Extraction Language OCaml.
Extraction diag'.
Extraction filter.