I am trying to prove this:
Lemma eq_eq: forall (U: Type) (p: U) (eqv: p = p), eq_refl = eqv.
But there just seems to be no way to do it. The problem is that the type p = p is an equality between a term and itself, and I then have to match on an instance of it. If this were not the case, it would be easy enough to prove that a term of a type with a single constructor is equal to that constructor:
Lemma eq_tt: forall (U: Type) (x: unit), tt = x.
Proof
fun (U: Type) (x: unit) =>
match x as x'
return tt = x'
with tt => eq_refl
end.
But when you try the same strategy on my problem, it fails.
Lemma eq_eq: forall (U: Type) (p: U) (eqv: p = p), eq_refl = eqv.
Proof
fun (U: Type) (p: U) (eqv: p = p) =>
match eqv as e
in _ = p'
return eq_refl = e
with eq_refl => eq_refl
end.
This fails with The term "e" has type "p = p'" while it is expected to have type "p = p" (cannot unify "p'" and "p").
The problem is that the return clause here translates internally to a predicate function, something like this:
fun (p': U) (e: p = p') =>
eq_refl = e
which fails to typecheck because we have now lost the constraint between the two sides of e's equality, and eq_refl requires that constraint.
Is there any way around this problem? Am I missing something?
Your proposed lemma is precisely the statement of uniqueness of identity proofs (UIP). The consistency of the negation of UIP with MLTT was first established by Hofmann and Streicher's groupoid model (pdf link). In this model, types are interpreted as groupoids, where the identity type x = y is the set of morphisms from x to y in the groupoid, so there can be more than one distinct e: x = y.
More recently, homotopy type theory has embraced this point of view. Rather than mere groupoids, types are interpreted as ∞-groupoids, with not only the possibility of multiple equalities between x and y, but also possibly multiple identities between identities p q: x = y, etc.
Suffice it to say, your lemma isn't provable without an extra axiom, such as the UIP axiom mentioned above or Axiom K.
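For completeness, here is how the lemma goes through once you assume UIP; a minimal sketch using the standard library's Coq.Logic.Eqdep, whose UIP_refl lemma is derived from the Eq_rect_eq axiom:
Require Import Coq.Logic.Eqdep.

Lemma eq_eq: forall (U: Type) (p: U) (eqv: p = p), eq_refl = eqv.
Proof.
  intros U p eqv.
  symmetry.
  (* UIP_refl : forall (U : Type) (x : U) (p : x = x), p = eq_refl x *)
  apply UIP_refl.
Qed.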
Related
I want to rewrite a term as a function application, in a sort of beta expansion (the inverse of beta reduction).
So, for example, in the term a + 1 = RHS I would like to replace it with (fun x => x + 1) a = RHS. Obviously, the two terms are equal up to beta reduction, but I can't figure out how to automate it.
The tactic pattern comes very close to what I want, except it only applies to a full goal, and I can't see how I would use it in a term inside an equality.
Similarly, I thought I could use context holes. Here is my best attempt:
Ltac betaExpansion term a:=
let T:= type of a in
match term with
context hole [a] =>
idtac hole;
let f:= fun x => context hole [x] in
remember ( fun x:T => f x ) as f'
end.
Goal forall a: nat, a + 1 = 0.
intros a.
match goal with
|- ?LHS = _ =>
betaExpansion LHS a (*Error: Variable f should be bound to a term but is bound to a tacvalue.*)
end.
This obviously fails, because f is a tacvalue when I really need a normal value. Can I somehow evaluate the expression to make it a value?
You should have a look at the pattern tactic. pattern t replaces all occurrences of t in the goal by a beta-expanded variable.
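For instance, on the goal from the question (a minimal illustration):
Goal forall a: nat, a + 1 = 0.
  intro a.
  pattern a.
  (* The goal is now the beta redex (fun n : nat => n + 1 = 0) a,
     up to the name of the bound variable. *)
Abort.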
You may also use the change ... with ... at tactic.
Goal forall (a:nat) , a+1 = 2* (a+1) - (a+1).
intro x; change (x+1) with ((fun z => z) (x+1)) at 1 3.
(*
x : nat
============================
(fun z : nat => z) (x + 1) = 2 * (x + 1) - (fun z : nat => z) (x + 1)
*)
Or, more automatically
Ltac betaexp term i :=
let x := fresh "x" in
let T := type of term in
change term with ((fun x : T => x) term) at i.
Goal forall (a:nat) , a+1 = a+1 .
intro x; betaexp (x+1) ltac:(1).
When attempting to formalize the class which corresponds to an algebraic structure (for example the class of all monoids), a natural design is to create a type monoid (a:Type) as a product type which models all the required fields (an element e:a, an operator app : a -> a -> a, proofs that the monoid laws are satisfied, etc.). In doing so, we are creating a map monoid: Type -> Type. A possible drawback of this approach is that given a monoid m:monoid a (a monoid with support type a) and m':monoid b (a monoid with support type b), we cannot even write the equality m = m' (let alone prove it) because it is ill-typed.

An alternative design would be to create a type monoid where the support type is just another field a:Type, so that given m m':monoid, it is always meaningful to ask whether m = m'. Somehow, one would like to argue that if m and m' have the same supports (a m = a m'), the operators are equal (app m = app m', which may be achieved thanks to some extensional equality axiom), and the proof fields do not matter (because we have some proof irrelevance axiom), etc., then m = m'. Unfortunately, we cannot even express the equality app m = app m' because it is ill-typed...
To simplify the problem, suppose we have:
Inductive myType : Type :=
| make : forall (a:Type), a -> myType.
I would like to have results of the form:
forall (a b:Type) (x:a) (y:b), a = b -> x = y -> make a x = make b y.
This statement is ill-typed, so we cannot have it.
I may have axioms allowing me to prove that two types a and b are the same, and I may be able to show that x and y are indeed the same too, but I want a tool allowing me to conclude that make a x = make b y. Any suggestion is welcome.
A low-tech way to prove this is to insert a manual type-cast, using the provided equality. That is, instead of having an assumption x = y, you have an assumption (CAST q x) = y. Below I explicitly write the cast as a match, but you could also make it look nicer by defining a function to do it.
Inductive myType : Type :=
| make : forall (a:Type), a -> myType.
Lemma ex : forall (a b:Type) (x:a) (y:b) (q: a = b), (match q in _ = T return T with eq_refl => x end) = y -> make a x = make b y.
Proof.
destruct q.
intros q.
congruence.
Qed.
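As mentioned above, the cast can be wrapped in a function to make the statement read better. A possible sketch (the name cast is our choice, not a standard-library definition):
Definition cast {a b : Type} (q : a = b) (x : a) : b :=
  match q in _ = T return T with eq_refl => x end.

Lemma ex' : forall (a b:Type) (x:a) (y:b) (q: a = b), cast q x = y -> make a x = make b y.
Proof.
  destruct q.
  intros e.
  simpl in e.
  congruence.
Qed.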
There is a nicer way to hide most of this machinery by using "heterogeneous equality", also known as JMeq. I recommend the Equality chapter of CPDT for a detailed introduction. Your example becomes
Require Import Coq.Logic.JMeq.
Infix "==" := JMeq (at level 70, no associativity).
Inductive myType : Type :=
| make : forall (a:Type), a -> myType.
Lemma ex : forall (a b:Type) (x:a) (y:b), a = b -> x == y -> make a x = make b y.
Proof.
intros.
subst.
rewrite H0.
reflexivity.
Qed.
In general, although this particular theorem can be proved without axioms, if you do the formalization in this style you are likely to encounter goals that cannot be proven in Coq without axioms about equality. In particular, injectivity for this kind of dependent record is not provable. The JMeq library will automatically use an axiom JMeq_eq about heterogeneous equality, which makes it quite convenient.
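You can check this on the proof above; the expected output is paraphrased in the comment:
Print Assumptions ex.
(* Axioms:
   JMeq_eq : forall (A : Type) (x y : A), x ~= y -> x = y *)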
I was curious about how the discriminate tactic works behind the curtain. Therefore I did some experiments.
First a simple Inductive definition:
Inductive AB:=A|B.
Then a simple lemma which can be proved by the discriminate tactic:
Lemma l1: A=B -> False.
intro.
discriminate.
Defined.
Let's see what the proof looks like:
Print l1.
l1 =
fun H : A = B =>
(fun H0 : False => False_ind False H0)
(eq_ind A
(fun e : AB => match e with
| A => True
| B => False
end) I B H)
: A = B -> False
This looks rather complicated and I do not understand what is happening here. Therefore I tried to prove the same lemma more explicitly:
Lemma l2: A=B -> False.
apply (fun e:(A=B) => match e with end).
Defined.
Let's again see what Coq made of this:
Print l2.
l2 =
fun e : A = B =>
match
e as e0 in (_ = a)
return
(match a as x return (A = x -> Type) with
| A => fun _ : A = A => IDProp
| B => fun _ : A = B => False
end e0)
with
| eq_refl => idProp
end
: A = B -> False
Now I am totally confused. This is still more complicated.
Can anyone explain what is going on here?
Let's go over this l1 term and describe every part of it.
l1 : A = B -> False
l1 proves an implication, hence by the Curry-Howard correspondence its proof is an abstraction (a function):
fun H : A = B =>
Now we need to construct the body of our abstraction, which must have type False. The discriminate tactic chooses to implement the body as an application f x, where f = fun H0 : False => False_ind False H0 is just a wrapper around the induction principle for False, which says that if you have a proof of False, you can get a proof of any proposition you want (False_ind : forall P : Prop, False -> P):
(fun H0 : False => False_ind False H0)
(eq_ind A
(fun e : AB => match e with
| A => True
| B => False
end) I B H)
If we perform one step of beta-reduction, we'll simplify the above into
False_ind False
(eq_ind A
(fun e : AB => match e with
| A => True
| B => False
end) I B H)
The first argument to False_ind is the type of the term we are building. If you were to prove A = B -> True, it would have been False_ind True (eq_ind A ...).
By the way, it's easy to see that we can simplify our body further: for False_ind to work, it needs to be provided with a proof of False, but that's exactly what we are trying to construct here! Thus, we can get rid of False_ind completely, getting the following:
eq_ind A
(fun e : AB => match e with
| A => True
| B => False
end) I B H
eq_ind is the induction principle for equality, saying that equals can be substituted for equals:
eq_ind : forall (A : Type) (x : A) (P : A -> Prop),
P x -> forall y : A, x = y -> P y
In other words, if one has a proof of P x, then for all y equal to x, P y holds.
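As a quick illustration (a toy example, not part of the term we are dissecting):
Check eq_ind 0 (fun n : nat => n + 0 = 0) eq_refl.
(* : forall y : nat, 0 = y -> y + 0 = 0 *)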
Now, let's create, step by step, a proof of False using eq_ind (in the end we should obtain the eq_ind A (fun e : AB ...) term).
We start, of course, with eq_ind, then we apply it to some x - let's use A for that purpose. Next, we need the predicate P. One important thing to keep in mind while writing P down is that we must be able to prove P x. This goal is easy to achieve - we are going to use the True proposition, which has a trivial proof. Another thing to remember is the proposition we are trying to prove (False) - we should be returning it if the input parameter is not A.
With all the above the predicate almost writes itself:
fun x : AB => match x with
| A => True
| B => False
end
We have the first two arguments for eq_ind and we need three more: the proof for the branch where x is A, i.e. the proof of True, which is I; some y which will lead us to the proposition we want a proof of, i.e. B; and a proof that A = B, which is the H bound at the very beginning of this answer. Stacking these upon each other, we get
eq_ind A
(fun x : AB => match x with
| A => True
| B => False
end)
I
B
H
And this is exactly what discriminate gave us (modulo some wrapping).
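We can double-check that the simplified term typechecks on its own (a small verification of the reasoning above):
Lemma l1' : A = B -> False.
Proof.
  exact (fun H : A = B =>
           eq_ind A
                  (fun e : AB => match e with
                                 | A => True
                                 | B => False
                                 end) I B H).
Defined.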
Another answer focuses on the discriminate part; I will focus on the manual proof. You tried:
Lemma l2: A=B -> False.
apply (fun e:(A=B) => match e with end).
Defined.
What should be noted, and often makes me uncomfortable when using Coq, is that Coq accepts seemingly ill-formed definitions that it internally rewrites into well-typed terms. This allows us to be less verbose, since Coq fills in some parts itself. But on the other hand, Coq manipulates a different term than the one we entered.
This is the case for your proof. Naturally, the pattern-matching on e should involve the constructor eq_refl, which is the single constructor of the eq type. Here, Coq detects that the equality type is not inhabited and thus understands how to complete your code, but what you entered is not a proper pattern-matching.
Two ingredients can help understand what is going on here:
the definition of eq
the full pattern-matching syntax, with the as, in and return clauses
First, we can look at the definition of eq.
Inductive eq {A : Type} (x : A) : A -> Prop := eq_refl : x = x.
Note that this definition is different from the one that may seem more natural (in any case, more symmetric):
Inductive eq {A : Type} : A -> A -> Prop := eq_refl : forall (x:A), x = x.
It really matters that eq is defined with the first definition and not the second. In particular, for our problem, what is important is that, in x = y, x is a parameter while y is an index. That is to say, x is constant across all the constructors, while y can be different in each constructor. You have the same distinction with the type Vector.t: the type of the elements of a vector does not change when you add an element, which is why it is implemented as a parameter; its size, however, does change, which is why it is implemented as an index.
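For reference, this is essentially how Vector.t is declared in the standard library (element type as a parameter, length as an index):
Inductive t (A : Type) : nat -> Type :=
| nil : t A 0
| cons : A -> forall n : nat, t A n -> t A (S n).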
Now, let us look at the extended pattern-matching syntax. I give here only a very brief explanation of what I have understood; do not hesitate to look at the reference manual for more reliable information. The return clause can help specify a return type that will be different for each branch. That clause can use the variables defined in the as and in clauses of the pattern-matching, which bind respectively the matched term and the type indices. The return clause plays two roles: it is interpreted in the context of each branch, substituting the as and in variables accordingly, to type-check the branches one by one; and it is used to type the match from an external point of view.
Here is a contrived example with an as clause:
Definition test n :=
match n as n0 return (match n0 with | 0 => nat | S _ => bool end) with
| 0 => 17
| _ => true
end.
Depending on the value of n, we are not returning the same type. The type of test is forall n : nat, match n with | 0 => nat | S _ => bool end. But when Coq can decide which case of the match we are in, it can simplify the type. For example:
Definition test2 n : bool := test (S n).
Here, Coq knows that, whatever n is, passing S n to test will result in something of type bool.
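Symmetrically, if we pass 0, the type simplifies to nat (a complementary example to test2):
Definition test0 : nat := test 0.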
For equality, we can do something similar, this time using the in clause.
Definition test3 (e:A=B) : False :=
match e in (_ = c) return (match c with | B => False | _ => True end) with
| eq_refl => I
end.
What's going on here? Essentially, Coq type-checks the branches of the match and the match itself separately. In the only branch, eq_refl, c is equal to A (because of the definition of eq_refl, which instantiates the index with the same value as the parameter), therefore we claimed we return some value of type True, here I. But seen from an external point of view, c is equal to B (because e is of type A=B), and this time the return clause claims that the match returns some value of type False. We use here the capability of Coq to simplify pattern-matching in types, which we have just seen with test2. Note that we used True in the cases other than B, but we don't need True in particular; we only need some inhabited type, so that we can return something in the eq_refl branch.
Going back to the strange term produced by Coq: the method used by Coq does something similar, though on this example it is certainly more complicated. In particular, Coq often uses the type IDProp, inhabited by idProp, when it needs a placeholder type and term. They correspond to the True and I used just above.
Finally, I give the link to a discussion on coq-club that really helped me understand how extended pattern-matching is typed in Coq.
Let's say I have, again, a small problem with my datatype with an existentially quantified component. This time I want to define when two values of type ext are equal.
Inductive ext (A: Set) :=
| ext_ : forall (X: Set), option X -> ext A.
Fail Definition ext_eq (A: Set) (x y: ext A) : Prop :=
match x with
| ext_ _ ox => match y with
| ext_ _ oy => (* only when they have the same types *)
ox = oy
end
end.
What I'd like to do is somehow distinguish between the cases where the existential type is actually same and where it's not. Is this a case for JMeq or is there some other way to accomplish such a case distinction?
I googled a lot, but unfortunately I mostly stumbled upon posts about dependent pattern matching.
I also tried to generate a (boolean) scheme with Scheme Equality for ext, but this wasn't successful because of the type argument.
What I'd like to do is somehow distinguish between the cases where the existential type is actually same and where it's not.
This is not possible, as Coq's logic is compatible with the univalence axiom, which says that isomorphic types are equal. So even though (unit * unit) and unit are syntactically distinct, they cannot be distinguished by Coq's logic.
A possible work-around is to have a datatype of codes for the types you are interested in and store that as an existential. Something like this:
Inductive Code : Type :=
| Nat : Code
| List : Code -> Code.
Fixpoint meaning (c : Code) := match c with
| Nat => nat
| List c' => list (meaning c')
end.
Inductive ext (A: Set) :=
| ext_ : forall (c: Code), option (meaning c) -> ext A.
Lemma Code_eq_dec : forall (c d : Code), { c = d } + { c <> d }.
Proof.
intros c; induction c; intros d; destruct d.
- left ; reflexivity.
- right ; inversion 1.
- right ; inversion 1.
- destruct (IHc d).
+ left ; congruence.
+ right; inversion 1; contradiction.
Defined.
Definition ext_eq (A: Set) (x y: ext A) : Prop.
refine(
match x with | @ext_ _ c ox =>
match y with | @ext_ _ d oy =>
match Code_eq_dec c d with
| left eq => _
| right neq => False
end end end).
subst; exact (ox = oy).
Defined.
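Since Code_eq_dec ends with Defined and is therefore transparent, ext_eq computes on concrete codes. A quick sanity check (our own example, not from the original answer):
Example ext_eq_nat :
  ext_eq nat (@ext_ nat Nat (Some 0)) (@ext_ nat Nat (Some 0)).
Proof.
  compute.
  reflexivity.
Qed.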
However, this obviously limits quite a lot the sorts of types you can pack in an ext. Other, more powerful languages (e.g. ones equipped with induction-recursion) would give you more expressive power.
I have been struggling on this for a while now. I have an inductive type:
Definition char := nat.
Definition string := list char.
Inductive Exp : Set :=
| Lit : char -> Exp
| And : Exp -> Exp -> Exp
| Or : Exp -> Exp -> Exp
| Many: Exp -> Exp.
from which I define a family of types inductively:
Inductive Language : Exp -> Set :=
| LangLit : forall c:char, Language (Lit c)
| LangAnd : forall r1 r2: Exp, Language(r1) -> Language(r2) -> Language(And r1 r2)
| LangOrLeft : forall r1 r2: Exp, Language(r1) -> Language(Or r1 r2)
| LangOrRight : forall r1 r2: Exp, Language(r2) -> Language(Or r1 r2)
| LangEmpty : forall r: Exp, Language (Many r)
| LangMany : forall r: Exp, Language (Many r) -> Language r -> Language (Many r).
The rationale here is that, given a regular expression r:Exp, I am attempting to represent the language associated with r as a type Language r, and I am doing so with a single inductive definition.
I would like to prove:
Lemma L1 : forall (c:char)(x:Language (Lit c)),
x = LangLit c.
(In other words, the type Language (Lit c) has only one element, i.e. the language of the regular expression 'c' consists of the single string "c". Of course, I need to define some semantics converting elements of Language r to string.)
Now the specifics of this problem are not important and simply serve to motivate my question: let us use nat instead of Exp and let us define a type List n which represents the lists of length n:
Parameter A:Set.
Inductive List : nat -> Set :=
| ListNil : List 0
| ListCons : forall (n:nat), A -> List n -> List (S n).
Here again I am using a single inductive definition to define a family of types List n.
I would like to prove:
Lemma L2: forall (x: List 0),
x = ListNil.
(in other words, the type List 0 has only one element).
I have run out of ideas on this one.
Normally, when attempting to prove (negative) results about inductive types (or predicates), I would use the elim tactic (having made sure all the relevant hypotheses are inside my goal (generalize) and only variables occur in the type constructors). But elim is no good in this case.
If you are willing to accept more than the basic logic of Coq, you can just use the dependent destruction tactic, available in the Program library (I've taken the liberty of rephrasing your last example in terms of standard-library vectors):
Require Coq.Vectors.Vector.
Require Import Program.
Lemma l0 A (v : Vector.t A 0) : v = @Vector.nil A.
Proof.
now dependent destruction v.
Qed.
If you inspect the term, you'll see that this tactic relied on the JMeq_eq axiom to get the proof to go through:
Print Assumptions l0.
Axioms:
JMeq_eq : forall (A : Type) (x y : A), x ~= y -> x = y
Fortunately, it is possible to prove l0 without having to resort to features outside of Coq's basic logic, by making a small change to the statement of the previous lemma.
Lemma l0_gen A n (v : Vector.t A n) :
match n return Vector.t A n -> Prop with
| 0 => fun v => v = @Vector.nil A
| _ => fun _ => True
end v.
Proof.
now destruct v.
Qed.
Lemma l0' A (v : Vector.t A 0) : v = @Vector.nil A.
Proof.
exact (l0_gen A 0 v).
Qed.
We can see that this new proof does not require any additional axioms:
Print Assumptions l0'.
Closed under the global context
What happened here? The problem, roughly speaking, is that in Coq we cannot directly perform case analysis on terms of dependent types whose indices have a specific shape (such as 0, in your case). Instead, we must prove a more general statement where the problematic indices are replaced by variables. This is exactly what the l0_gen lemma is doing. Notice how we had to make the match on n return a function that abstracts on v. This is another instance of what is known as the "convoy pattern". Had we written
match n with
| 0 => v = @Vector.nil A
| _ => True
end.
Coq would see the v in the 0 branch as having type Vector.t A n, making that branch ill-typed.
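For completeness, the same generalization goes through for the List type from the question (our adaptation of l0_gen):
Lemma L2_gen : forall n (x : List n),
  match n return List n -> Prop with
  | 0 => fun x => x = ListNil
  | S _ => fun _ => True
  end x.
Proof.
  now destruct x.
Qed.

Lemma L2 : forall (x: List 0), x = ListNil.
Proof.
  exact (fun x => L2_gen 0 x).
Qed.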
Coming up with such generalizations is one of the big pains of doing dependently typed programming in Coq. Other systems, such as Agda, make it possible to write this kind of code with much less effort, but it was only recently shown that this can be done without relying on the extra axioms that Coq wanted to avoid including in its basic theory. We can only hope that this will be simplified in future versions.