Equality in Coq and in a paper of Awodey - coq

In the paper
Univalence as a Principle of Logic, Awodey writes on page 7:
Let us consider the example of intensional versus extensional type theory. The extensional theory has an apparently “stronger” notion of equality, because it permits one to simply substitute equals for equals in all contexts. In the intensional system, by contrast, one can have a = b and a statement Φ(a) and yet not have Φ(b).
I do not understand this, because I thought this was the basic property of equality.
Also, in Coq one can simply prove:
Theorem subs: forall (T:Type)(a b:T)(p:a=b)(P:T-> Prop), P a -> P b.
intros.
rewrite <- p.
assumption.
Qed.

If I am not mistaken, in Awodey's paper, the notation Φ(a) means substituting the expression a for some implicit free variable in Φ.
In my answer, I will use the following more explicit notation instead (which corresponds to Φ(a) in the paper):
Φ[z ⟼ a]
This means that in the expression Φ, the expression a is substituted for the free variable z (that is, every free occurrence of z is replaced by a).
Example:
(x = z)[z ⟼ a]
results in
(x = a)
Now, I will assume that you are already familiar with the usual presentation of Type Theory by means of inference rules, as in Appendix A.2 in the HoTT Book.
Type Theory uses two notions of equality.
Identity Types: Usually written using the symbol = in inference rules. In order to conclude a statement like a = b, we need to provide a proof term for it. For example, let's have a look at the introduction rule for identity types:
a : A
---------------- (=)-INTRO
refl a : a = a
Here, refl a is acting as the proof term or evidence that justifies our claim that a = a holds (namely, refl a represents the trivial or reflexivity proof). So, a statement like p : a = b expresses that a and b can be identified due to evidence p.
Definitional Equality: Usually written using the symbol ≡ in inference rules. The statement a ≡ b means that a and b are interchangeable, replaceable anywhere, or substitutionally equivalent. This equality captures notions like "by definition", "by computation", "by simplification". This kind of equality does not carry a proof term with it, i.e. it is not a typing statement. This is the kind of equality Coq uses implicitly when you use the tactics simpl and compute. For example, let's have a look at the reflexivity rule for ≡:
a : A
--------- (≡)-REFLE
a ≡ a
Notice that there is no proof term to the left of a ≡ a (compare with the (=)-INTRO rule above). In this case, the proof system is treating a ≡ a as a fact, something that holds without the need to state explicitly its justification, since the only use of ≡ in the proof system will be for rewriting expressions.
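As a small illustration in Coq (a sketch): 2 + 2 and 4 are definitionally equal because they compute to the same value, so the reflexivity proof is accepted at the type 2 + 2 = 4 without any extra justification.
Eval compute in 2 + 2.
(* = 4 : nat *)
Check (eq_refl 4 : 2 + 2 = 4).
(* accepted: 2 + 2 and 4 are convertible *)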
The fact that ≡ is used only for rewriting and simplifying expressions can also be seen in other inference rules, for example the type preservation rule for ≡:
a : A A ≡ B
------------------ (≡)-TYPE-PRESERV
a : B
In other words, if you start with a term a of type A, and you know that types A and B are interchangeable, then term a also has type B. Notice that (and this will be important later!!) the proof term or evidence for B did not change, it is still the same proof term as the one used for A.
We can now get into the question.
What differentiates Extensional Type Theory (ETT) from an Intensional Type Theory like HoTT or CoC is the way they treat identity types and definitional equality.
ETT makes identity types and definitional equality interchangeable by adding the following inference rule:
p : a = b
----------- (=)-EXT
a ≡ b
In other words, the evidence p for the identity becomes irrelevant, and we treat a and b as interchangeable in the proof system (thanks to rules like (≡)-TYPE-PRESERV and other similar rules).
Starting from the hypotheses p : a = b and a : A, in ETT we can do stuff like the following:
a : A p : a = b
--------- (≡)-REFLE ----------- (=)-EXT
a ≡ a a ≡ b
------------------------------------- (=)-CONG (*1)
(a = a) ≡ (a = b)
Where (=)-CONG is a congruence rule (i.e. definitionally equivalent terms produce definitionally equivalent identity types), and I am calling this derivation (*1).
Using (*1), we can then derive:
a : A
----------------- (=)-INTRO -------------------- (*1)
refl a : a = a (a = a) ≡ (a = b)
-------------------------------------------------- (≡)-TYPE-PRESERV
refl a : a = b
Where in (*1) we insert the derivation we did above.
In other words, if we ignore the hypotheses a : A and intermediate steps, it is as if we did the following inference:
p : a = b refl a : a = a
-------------------------------------
refl a : a = b
Since ETT is treating a and b as interchangeable (thanks to the hypothesis p : a = b and the (=)-EXT rule), the proof refl a for a = a can also be seen as a proof for a = b. So, it is not hard to see that, in ETT, having a proof of an identity like a = b is sufficient for replacing some or all occurrences of a with b in ANY statement involving a.
Now, what happens in an Intensional Type Theory (ITT)?
In an ITT, the (=)-EXT rule is not assumed. Therefore, we cannot carry out the derivation (*1) we did above, and in particular, the following inference is invalid:
p : a = b refl a : a = a
-------------------------------------
refl a : a = b
This is an example where we have an identity p : a = b, but from the statement (refl a : a = z)[z ⟼ a], we cannot conclude the statement (refl a : a = z)[z ⟼ b]. This is an instance of what Awodey's paper was referring to I think.
Why is this an invalid inference? Because refl a : a = b is forcing a and b to be definitionally equal (i.e. the only way to introduce refl a into a derivation is through the (=)-INTRO rule), but this is not necessarily true from the hypothesis p : a = b. In HoTT, for example, the interval type I (Section 6.3 in the HoTT Book), has two terms 0 : I and 1 : I, they are not definitionally equal, but we have a proof seg : 0 = 1.
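We can see this restriction directly in Coq, which is intensional. A minimal sketch: with only a propositional equality p : a = b in the context, eq_refl a does not typecheck at the type a = b, because the variables a and b are not definitionally equal.
Section Intensional.
  Variables (T : Type) (a b : T).
  Hypothesis p : a = b.

  (* eq_refl a has type a = a; the ascription to a = b is rejected,
     because a and b are not convertible, despite the hypothesis p. *)
  Fail Check (eq_refl a : a = b).
End Intensional.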
The fact that there might exist identity proofs other than the trivial reflexivity proof is what gives an Intensional Type Theory its richness. It is what allows HoTT to have Univalence and Higher Inductive Types, for example.
So, what can we conclude from the hypotheses p : a = b and refl a : a = a in an ITT?
In your question, the theorem you proved in Coq is called the "transport" function in HoTT (Section 2.3 in the HoTT Book). Using your theorem (removing the implicit parameters), you will be able to do the following derivation:
p : a = b refl a : a = a
------------------------------------------
subs p (λx => a = x) (refl a) : a = b
In other words, we can conclude that a = b, but our proof term for this changed! In ETT we simply carried out a substitution (because a and b were interchangeable) allowing us to use the same evidence in the conclusion (namely refl a). But in an ITT, we cannot treat a and b as substitutionally equivalent, due to the richness of the identity types. And to reflect this intention, we need to combine the proofs of the hypotheses to build our new evidence in the conclusion.
So, from (refl a : a = z)[z ⟼ a] we cannot conclude (refl a : a = z)[z ⟼ b], but we can conclude subs p (λx => a = x) (refl a) : a = b, which is not the result of a simple substitution from the hypothesis, as in ETT.
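For reference, here is the same derivation spelled out in Coq, restating the subs theorem from the question so the sketch is self-contained (the name derived is just for illustration). The point is that the evidence for a = b is built from p and eq_refl a; it is not eq_refl a itself.
Theorem subs : forall (T:Type) (a b:T) (p:a=b) (P:T->Prop), P a -> P b.
Proof. intros. rewrite <- p. assumption. Qed.

(* Transporting the reflexivity proof along p yields new evidence for a = b. *)
Definition derived (T:Type) (a b:T) (p : a = b) : a = b :=
  subs T a b p (fun x => a = x) (eq_refl a).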

The rewrite tactic in Coq can fail: the term it would generate can be ill-typed, in which case the rewrite is rejected.
If I remember correctly, it's sometimes possible to get around this with some careful manipulations, but if you do this by (implicitly or explicitly) introducing an additional axiom, such as functional extensionality or JMeq_eq, it's no longer the case that the first goal simply follows from the second.
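Here is a minimal, self-contained sketch of such a failure (the indexed type tagged is made up for the example): rewriting n + 0 to n is rejected when the goal mentions a term whose type is indexed by n + 0, because the abstracted goal would be ill-typed.
Inductive tagged : nat -> Type :=
| tag : forall n, tagged n.

Goal forall (n : nat) (v : tagged (n + 0)), n + 0 = n -> v = v.
Proof.
  intros n v H.
  (* Abstracting the goal over n + 0 would state v = v at type tagged n,
     while v still has type tagged (n + 0), so the rewrite is rejected. *)
  Fail rewrite H.
Abort.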

Real numbers in Coq

In https://www.cs.umd.edu/~rrand/vqc/Real.html#lab1 one can read:
Coq's standard library takes a very different approach to the real numbers: An axiomatic approach.
and one can find the following axiom:
Axiom completeness :
  ∀ E : R → Prop,
    bound E → (∃ x : R, E x) → { m : R | is_lub E m }.
The library is not mentioned, but in Why are the real numbers axiomatized in Coq? one can find the same description:
I was wondering whether Coq defined the real numbers as Cauchy sequences or Dedekind cuts, so I checked Coq.Reals.Raxioms and... none of these two. The real numbers are axiomatized, along with their operations (as Parameters and Axioms). Why is it so?
Also, the real numbers tightly rely on the notion of subset, since one of their defining properties is that every upper bounded subset has a least upper bound. The Axiom completeness encodes those subsets as Props.
Nevertheless, whenever I look at https://coq.inria.fr/library/Coq.Reals.Raxioms.html I do not see any axiomatic approach; in particular, we have the following lemma:
Lemma completeness :
forall E:R -> Prop,
bound E -> (exists x : R, E x) -> { m:R | is_lub E m }.
Where can I find such an axiomatic approach of the real numbers in Coq?
The description you mention is indeed outdated: since I asked the question you linked, I rewrote the axioms defining Coq's standard library real numbers in a more standard way. The real numbers are now divided into 2 layers:
constructive real numbers, that are defined in terms of Cauchy sequences and that use no axioms at all;
classical real numbers, that are a quotient set of constructive reals, and that do use 3 axioms to prove the least upper bound theorem that you mention.
Coq easily gives you the axioms underlying any term by the Print Assumptions command:
Require Import Raxioms.
Print Assumptions completeness.
Axioms:
ClassicalDedekindReals.sig_not_dec : forall P : Prop, {~ ~ P} + {~ P}
ClassicalDedekindReals.sig_forall_dec
: forall P : nat -> Prop,
(forall n : nat, {P n} + {~ P n}) -> {n : nat | ~ P n} + {forall n : nat, P n}
FunctionalExtensionality.functional_extensionality_dep
: forall (A : Type) (B : A -> Type) (f g : forall x : A, B x),
(forall x : A, f x = g x) -> f = g
As you can see, these 3 axioms are purely logical: they do not speak about real numbers at all. They just assume a fragment of classical logic.
If you want an axiomatic definition of the reals in Coq, I provided one for the constructive reals
Require Import Coq.Reals.Abstract.ConstructiveReals.
And this becomes an interface for classical reals if you assume the 3 axioms above.
These descriptions are outdated. It used to be the case that the type R of real numbers was axiomatized, along with its basic properties. But nowadays (since 2019?) it is defined in terms of more basic axioms, more or less like one would do in traditional mathematics.

Proving equality between instances of dependent types

When attempting to formalize the class which corresponds to an algebraic structure (for example the class of all monoids), a natural design is to create a type monoid (a:Type) as a product type which models all the required fields (an element e:a, an operator app : a -> a -> a, proofs that the monoid laws are satisfied, etc.). In doing so, we are creating a map monoid: Type -> Type. A possible drawback of this approach is that given a monoid m:monoid a (a monoid with support type a) and m':monoid b (a monoid with support type b), we cannot even write the equality m = m' (let alone prove it) because it is ill-typed.

An alternative design would be to create a type monoid where the support type is just another field a:Type, so that given m m':monoid, it is always meaningful to ask whether m = m'. Somehow, one would like to argue that if m and m' have the same supports (a m = a m') and the operators are equal (app m = app m', which may be achieved thanks to some extensional equality axiom), and the proof fields do not matter (because we have some proof irrelevance axiom), etc., then m = m'. Unfortunately, we can't even express the equality app m = app m' because it is ill-typed...
To simplify the problem, suppose we have:
Inductive myType : Type :=
| make : forall (a:Type), a -> myType.
I would like to have results of the form:
forall (a b:Type) (x:a) (y:b), a = b -> x = y -> make a x = make b y.
This statement is ill-typed so we can't have it.
I may have axioms allowing me to prove that two types a and b are same, and I may be able to show that x and y are indeed the same too, but I want to have a tool allowing me to conclude that make a x = make b y. Any suggestion is welcome.
A low-tech way to prove this is to insert a manual type-cast, using the provided equality. That is, instead of having an assumption x = y, you have an assumption (CAST q x) = y. Below I explicitly write the cast as a match, but you could also make it look nicer by defining a function to do it.
Inductive myType : Type :=
| make : forall (a:Type), a -> myType.
Lemma ex : forall (a b:Type) (x:a) (y:b) (q: a = b),
  (match q in _ = T return T with eq_refl => x end) = y -> make a x = make b y.
Proof.
destruct q.
intros q.
congruence.
Qed.
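For example, the cast can be packaged as a small helper function; below is a sketch (the name cast is just for illustration, and the snippet assumes the myType definition above is in scope).
Definition cast {A B : Type} (q : A = B) (x : A) : B :=
  match q in _ = T return T with eq_refl => x end.

Lemma ex' : forall (a b:Type) (x:a) (y:b) (q: a = b),
  cast q x = y -> make a x = make b y.
Proof.
  destruct q. intros H. rewrite <- H. reflexivity.
Qed.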
There is a nicer way to hide most of this machinery by using "heterogenous equality", also known as JMeq. I recommend the Equality chapter of CPDT for a detailed introduction. Your example becomes
Require Import Coq.Logic.JMeq.
Infix "==" := JMeq (at level 70, no associativity).
Inductive myType : Type :=
| make : forall (a:Type), a -> myType.
Lemma ex : forall (a b:Type) (x:a) (y:b), a = b -> x == y -> make a x = make b y.
Proof.
intros.
destruct H.
rewrite H0.
reflexivity.
Qed.
In general, although this particular theorem can be proved without axioms, if you do the formalization in this style you are likely to encounter goals that cannot be proven in Coq without axioms about equality. In particular, injectivity for this kind of dependent record is not provable. The JMeq library will automatically use an axiom JMeq_eq about heterogeneous equality, which makes it quite convenient.
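As a quick sanity check (sketch), Print Assumptions shows which axioms the JMeq-based proof of ex relies on; the output should include the JMeq_eq axiom mentioned above.
Print Assumptions ex.
(* Expected to list something like:
   JMeq_eq : forall (A : Type) (x y : A), JMeq x y -> x = y *)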

how to figure out what "=" means in different types in coq

Given a type (like List) in Coq, how do I figure out what the equality symbol "=" mean in that type? What commands should I type to figure out the definition?
The equality symbol is just special infix syntax for the eq predicate. Perhaps surprisingly, it is defined the same way for every type, and we can even ask Coq to print it for us:
Print eq.
(* Answer: *)
Inductive eq (A : Type) (x : A) : A -> Prop :=
| eq_refl : eq x x.
This definition is so minimal that it might be hard to understand what is going on. Roughly speaking, it says that the most basic way to show that two expressions are equal is by reflexivity -- that is, when they are exactly the same. For instance, we can use eq_refl to prove that 5 = 5 or [4] = [4]:
Check eq_refl : 5 = 5.
Check eq_refl : [4] = [4].
There is more to this definition than meets the eye. First, Coq considers any two expressions that are equivalent up to simplification to be equal. In these cases, we can use eq_refl to show that they are equal as well. For instance:
Check eq_refl : 2 + 2 = 4.
This works because Coq knows the definition of addition on the natural numbers and is able to mechanically simplify the expression 2 + 2 until it arrives at 4.
Furthermore, the above definition tells us how to use an equality to prove other facts. Because of the way inductive types work in Coq, we can show the following result:
eq_elim :
forall (A : Type) (x y : A),
x = y ->
forall (P : A -> Prop), P x -> P y
Paraphrasing, when two things are equal, any fact that holds of the first one also holds of the second one. This principle is roughly what Coq uses under the hood when you invoke the rewrite tactic.
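As a sketch, this elimination principle can be written directly as a Coq term, by dependent pattern matching on the equality proof:
Definition eq_elim (A : Type) (x y : A) (H : x = y)
                   (P : A -> Prop) (Px : P x) : P y :=
  match H in _ = z return P z with
  | eq_refl => Px
  end.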
Finally, equality interacts with other types in interesting ways. You asked what the definition of equality for list was. We can show that the following lemmas are valid:
forall A (x1 x2 : A) (l1 l2 : list A),
x1 :: l1 = x2 :: l2 -> x1 = x2 /\ l1 = l2
forall A (x : A) (l : list A),
x :: l <> nil.
In words:
if two nonempty lists are equal, then their heads and tails are equal;
a nonempty list is different from nil.
More generally, if T is an inductive type, we can show that:
if two expressions starting with the same constructor are equal, then their arguments are equal (that is, constructors are injective); and
two expressions starting with different constructors are always different (that is, different constructors are disjoint).
These facts are not, strictly speaking, part of the definition of equality, but rather consequences of the way inductive types work in Coq. Unfortunately, it doesn't work as well for other kinds of types in Coq; in particular, the notion of equality for functions in Coq is not very useful, unless you are willing to add extra axioms into the theory.
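As a sketch, the two list facts stated earlier can be proved with the injection and discriminate tactics, which implement exactly this injectivity and disjointness of constructors (the lemma names below are made up):
Require Import List.
Import ListNotations.

Lemma cons_inj : forall A (x1 x2 : A) (l1 l2 : list A),
  x1 :: l1 = x2 :: l2 -> x1 = x2 /\ l1 = l2.
Proof.
  intros A x1 x2 l1 l2 H. injection H as Hx Hl. split; assumption.
Qed.

Lemma cons_not_nil : forall A (x : A) (l : list A), x :: l <> nil.
Proof.
  intros A x l H. discriminate H.
Qed.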

What is difference between `destruct` and `case_eq` tactics in Coq?

I understood destruct as breaking a term of an inductive type into its constructors. I recently saw case_eq and I couldn't understand what it does differently.
1 subgoals
n : nat
k : nat
m : M.t nat
H : match M.find (elt:=nat) n m with
    | Some _ => true
    | None => false
    end = true
______________________________________(1/1)
cc n (M.add k k m) = true
In the above context, if I do destruct (M.find n m) it breaks H into the true and false cases, whereas case_eq (M.find n m) leaves H intact and adds a separate hypothesis M.find (elt:=nat) n m = Some v, which I can rewrite with to get the same effect as destruct.
Can someone please explain the difference between the two tactics and when each one should be used?
The first basic tactic in the family of destruct and case_eq is called case. This tactic modifies only the conclusion. When you type case A and A has a type T which is inductive, the system replaces A in the goal's conclusion by instances of all the constructors of type T, adding universal quantifications for the arguments of these constructors, if needed. This creates as many goals as there are constructors in type T. The formula A disappears from the goal and if there is any information about A in an hypothesis, the link between this information and all the new constructors that replace it in the conclusion gets lost. In spite of this, case is an important primitive tactic.
Losing the link between information in the hypotheses and instances of A in the conclusion is a big problem in practice, so developers came up with two solutions: case_eq and destruct.
Personally, when writing the Coq'Art book, I proposed that we write a simple tactic on top of case that keeps a link between A and the various constructor instances in the form of an equality. This is the tactic now called case_eq. It does the same thing as case but adds an extra implication in the goal, where the premise of the implication is an equality of the form A = ... and where ... is an instance of each constructor.
At about the same time, the tactic destruct was proposed. Instead of limiting the effect of replacement to the goal's conclusion, destruct also replaces all instances of A appearing in the hypotheses with instances of constructors of type T. In a sense, this is cleaner because it avoids relying on the extra concept of equality, but it is still incomplete because the expression A may be a compound expression f B, and if B appears in the hypotheses but f B does not, the link between A and B will still be lost.
Illustration
Definition my_pred (n : nat) := match n with 0 => 0 | S p => p end.
Lemma example n : n <= 1 -> my_pred n <= 0.
Proof.
case_eq (my_pred n).
Gives the two goals
------------------
my_pred n = 0 -> n <= 1 -> 0 <= 0
and
------------------
forall p, my_pred n = S p -> n <= 1 -> S p <= 0
The extra equality is very useful here.
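For comparison, here is a sketch reusing my_pred above: destruct on the same goal throws the link away, and the second goal becomes unprovable as stated.
Lemma example' n : n <= 1 -> my_pred n <= 0.
Proof.
  destruct (my_pred n).
  (* First goal:  n <= 1 -> 0 <= 0       -- provable
     Second goal: n <= 1 -> S n0 <= 0    -- the fact my_pred n = S n0 is lost *)
Abort.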
In this question I suggested that the developer use case_eq (a == b) when (a == b) has type bool, because this type is inductive and not very informative (its constructors have no arguments). But when (a == b) has type {a = b}+{a <> b} (which is the case for the string_dec function), the constructors have arguments that are proofs of interesting properties, and the extra universal quantification over the arguments of the constructors is enough to give the relevant information, in this case a = b in the first goal and a <> b in the second goal.

Defining isomorphism classes in Coq

How to define isomorphism classes in Coq?
Suppose I have a record ToyRec:
Record ToyRec {Labels : Set} := {
X:Set;
r:X->Labels
}.
And a definition of isomorphisms between two objects of type ToyRec, stating that two
objects T1 and T2 are isomorphic if there exists a bijection f:T1.(X)->T2.(X) which preserves the label of mapped elements.
Definition Isomorphic {Labels:Set} (T1 T2 : @ToyRec Labels) : Prop :=
  exists f:T1.(X)->T2.(X),
    (forall x1 x2:T1.(X), x1 <> x2 -> f x1 <> f x2) /\
    (forall x2:T2.(X), exists x1:T1.(X), f x1 = x2) /\
    (forall x1:T1.(X), T1.(r) x1 = T2.(r) (f x1)).
Now I would like to define a function that takes an object T1 and returns a set
containing all objects that are isomorphic to T1.
g(T1) = {T2 | Isomorphic T1 T2}
How does one do such a thing in Coq? I know that I might be reasoning too set-theoretically
here, but what would be the right type-theoretic notion of isomorphism class? Or even more basically, how would one define a set (or type) of all elements satisfying a given property?
It really depends on what you want to do with it. In Coq, there is a comprehension type {x : T | P x} which is the type of all elements x in type T that satisfy property P. However, it is a type, meaning that it is used to classify other terms, and not a data-structure you can compute with in the traditional sense. Thus, you can use it, for instance, to write a function on T that only works on elements that satisfy P (in which case the type of the function would be {x : T | P x} -> Y, where Y is its result type), but you can't use it to, say, write a function that computes how many elements of T satisfy P.
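For instance, here is a sketch reusing the ToyRec and Isomorphic definitions from the question; IsoClass is just an illustrative name for the comprehension type of all objects isomorphic to T1.
Definition IsoClass {Labels : Set} (T1 : @ToyRec Labels) : Type :=
  { T2 : @ToyRec Labels | Isomorphic T1 T2 }.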
If you want to compute with this set, things become a bit more complicated. Let's suppose P is a decidable property so that things become a bit easier. If T is a finite type, then you can use a set data-structure that has a comprehension operator (have a look at the Ssreflect library, for instance). However, this breaks when T is infinite, which is the case for your ToyRec type. As Vinz said, there's no generic way of constructively building this set as a data-structure.
Perhaps it would be easier to have a complete answer if you explained what you want to do with this type exactly.