I have the following definitions for type precision:
Require Import Coq.Program.Equality.
Inductive type : Set :=
| TInt
| TFun: type -> type -> type
| TStar
.
Reserved Infix "<|" (at level 80).
(* Inductive TPrecision : type -> type -> Prop := *)
Inductive TPrecision : type -> type -> Type := (* see below *)
| PRInt : TInt <| TInt
| PRFun : forall t1 t2 t1' t2',
t1 <| t2 -> t1' <| t2' -> TFun t1 t1' <| TFun t2 t2'
| PRStar : forall t, t <| TStar
where "x '<|' y" := (TPrecision x y).
I want to prove that proofs of "x <| y" are unique:
Lemma TP_unique: forall t1 t2 (P1 P2 : t1 <| t2), P1 = P2.
With the previous definition of TPrecision, which lands in Type, I can prove the lemma:
Proof.
intros.
dependent induction P1; dependent destruction P2; trivial.
f_equal; auto.
Qed.
However, with the "correct" definition for TPrecision, using Prop, the tactic "dependent induction" does not work, giving the following error:
Cannot infer this placeholder of type
"DependentEliminationPackage (t1 <| t2)" (no type class instance found) in
environment:
t1, t2 : type
P1 : t1 <| t2
How can I provide this missing instance? (Or how can I prove the lemma another way?) If I use plain "induction" instead of "dependent induction", the tactic fails with a type-checking error.
I also tried stating the lemma with JMeq instead of regular equality, but it didn't work either.
The issue is that, by default, Coq does not generate dependent induction principles for inductive propositions: the motive of the auto-generated TPrecision_ind cannot mention the proof being inducted on. We can fix this by overriding the default:
Require Import Coq.Program.Equality.
Inductive type : Set :=
| TInt
| TFun: type -> type -> type
| TStar
.
Reserved Infix "<|" (at level 80).
(* Ensure Coq does not generate TPrecision_ind when processing this definition *)
Unset Elimination Schemes.
Inductive TPrecision : type -> type -> Prop := (* see below *)
| PRInt : TInt <| TInt
| PRFun : forall t1 t2 t1' t2',
t1 <| t2 -> t1' <| t2' -> TFun t1 t1' <| TFun t2 t2'
| PRStar : forall t, t <| TStar
where "x '<|' y" := (TPrecision x y).
(* Re-enable automatic generation *)
Set Elimination Schemes.
(* Compare the following induction principle with the one generated by Coq by
default. *)
Scheme TPrecision_ind := Induction for TPrecision Sort Prop.
Lemma TP_unique: forall t1 t2 (P1 P2 : t1 <| t2), P1 = P2.
Proof.
intros.
dependent induction P1; dependent destruction P2; trivial.
f_equal; auto.
Qed.
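To see what the Scheme command buys you, you can inspect the generated principle. Roughly (the exact output depends on the Coq version), the dependent scheme's motive abstracts over the proof itself, which is exactly what dependent induction needs, whereas the default scheme's motive only mentions the two types:
Check TPrecision_ind.
(* Scheme-generated (dependent), sketch:
     forall P : forall t t0 : type, t <| t0 -> Prop,
       ... -> forall (t t0 : type) (p : t <| t0), P t t0 p
   Default scheme it replaces (non-dependent), sketch:
     forall P : type -> type -> Prop,
       ... -> forall t t0 : type, t <| t0 -> P t t0 *)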
I have a definition:
Inductive type :=
| Bool: type
| Int: type
| Option: type -> type.
Then I want to define a proof about a property P like this:
Lemma type_match_P:
forall ty ty1,
(ty = Int \/ ty = Option ty1) -> P ty.
However, ty1 does not seem to matter. Could I just write something like (Option _) to match it in the proof?
Thank you very much!
Here are three different ways to represent your assumption, and examples of how to use them:
Inductive type :=
| Bool: type
| Int: type
| Option: type -> type.
Variable P : type -> Prop.
Definition hyp1 (t : type) : Prop :=
match t with
| Int | Option _ => True
| Bool => False
end.
Inductive hyp2 : type -> Prop :=
| hypI : hyp2 Int
| hypO (t : type) : hyp2 (Option t).
Definition hyp3 (t : type) : Prop := t = Int \/ (exists t', t = Option t').
Lemma type_match_P1:
forall ty, hyp1 ty -> P ty.
Proof.
intros ty Hty.
destruct ty, Hty.
Abort.
Lemma type_match_P2:
forall ty, hyp2 ty -> P ty.
Proof.
intros ty Hty.
inversion_clear Hty.
Undo.
inversion Hty ; subst ; clear Hty.
Abort.
Lemma type_match_P3:
forall ty, hyp3 ty -> P ty.
Proof.
intros ty [-> | [? ->]].
Undo.
intros ty Hty.
destruct Hty as [e | [ty' e]].
all: subst.
Abort.
The first version computes, from t, either the True or the False proposition. To use this version, you have to destruct t and then get rid of the hypothesis, which is either absurd or trivial.
The second version represents your assumption as an inductive predicate. To use it, apply the inversion tactic or one of its variants, such as inversion_clear.
The third is the most similar to yours, except that it uses an existential quantifier rather than a universal one. Here you can directly destruct the assumption using intro patterns, or first introduce it and destruct it later on.
These are just presentation choices: in the end, they are all equivalent, so use the one that fits your needs best.
Suppose I have some programming language, with a "has type" relation and a "small step" relation.
Inductive type : Set :=
| Nat : type
| Bool : type.
Inductive tm : Set :=
| num : nat -> tm
| plus : tm -> tm -> tm
| lt : tm -> tm -> tm
| ifthen : tm -> tm -> tm -> tm.
Inductive hasType : tm -> type -> Prop :=
| hasTypeNum :
forall n, hasType (num n) Nat
| hasTypePlus:
forall tm1 tm2,
hasType tm1 Nat ->
hasType tm2 Nat ->
hasType (plus tm1 tm2) Nat
| hasTypeLt:
forall tm1 tm2,
hasType tm1 Nat ->
hasType tm2 Nat ->
hasType (lt tm1 tm2) Bool
| hasTypeIfThen:
forall tm1 tm2 tm3,
hasType tm1 Bool ->
hasType tm2 Nat ->
hasType tm3 Nat ->
hasType (ifthen tm1 tm2 tm3) Nat.
Inductive SmallStep : tm -> tm -> Prop :=
...
Definition is_value (t : tm) := ...
The key detail here is that for each term variant, there is only one hasType constructor that could possibly match.
Suppose then that I want to prove a progress lemma, but that I also want to be able to extract an interpreter from this.
Lemma progress_interp:
forall tm t,
hasType tm t ->
(is_value tm = false) ->
{tm2 | SmallStep tm tm2}.
intro; induction tm0; intros; inversion H.
This gives the error Inversion would require case analysis on sort Set which is not allowed for inductive definition hasType.
I understand why it's doing this: inversion performs case analysis on a value of sort Prop, which we can't do since it gets erased in the extracted code.
But, because there's a one-to-one correspondence between the term variants and type derivation rules, we don't actually have to perform any analysis at runtime.
Ideally, I could apply a bunch of lemmas that look like this:
plusInv: forall e t, hasType e t ->
(forall e1 e2, e = plus e1 e2 -> hasType e1 Nat /\ hasType e2 Nat ).
where there would be a lemma like this for each case (or a single lemma that's a conjunction of these cases).
I've looked at Derive Inversion but it doesn't seem to do what I'm looking for here, though maybe I'm not understanding it correctly.
Is there a way to do this sort of "case analysis where there's only one case?" Or to get the equalities implied by the Prop proof, so that I can only write the possible cases in my extracted interpreter? Can deriving these lemmas be automated, with Ltac or a Deriving mechanism?
Lemma plus_inv can be obtained by a case analysis on type tm instead of a case analysis on type hasType.
Lemma plus_inv : forall e t, hasType e t ->
(forall e1 e2, e = plus e1 e2 -> hasType e1 Nat /\ hasType e2 Nat ).
Proof.
intros e; case e; try (intros; discriminate).
intros e1 e2 t h; inversion h; intros e5 e6 heq; inversion heq; subst; auto.
Qed.
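The same recipe yields the inversion lemmas for the other constructors. For instance, here is a sketch (untested, but following the proof above) of the corresponding lemma for ifthen:
Lemma ifthen_inv : forall e t, hasType e t ->
  (forall e1 e2 e3, e = ifthen e1 e2 e3 ->
   hasType e1 Bool /\ hasType e2 Nat /\ hasType e3 Nat).
Proof.
  intros e; case e; try (intros; discriminate).
  intros e1 e2 e3 t h; inversion h; intros e5 e6 e7 heq; inversion heq; subst; auto.
Qed.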
The proof of your main objective progress_interp can probably be performed by induction on the structure of tm as well. This amounts to writing your interpreter directly as a Gallina recursive function.
Your question has a second part: can this be automated? The answer is probably yes. I suggest using either the template-coq package or the elpi package for this. Both are available as opam packages.
I think eexists will do the trick: after eexists, the goal is the body of the sig type, SmallStep tm ?tm2, which lives in Prop, so you can freely use inversion on the hasType hypothesis there; the existential variable should be filled in at some point during the proof.
How to project (with proj1 or proj2) a universally quantified biconditional (iff) such as in the following example?
Parameter T : Set.
Parameter P Q R: T -> Prop.
Parameter H : forall (t : T), P t <-> Q t.
When I try to use proj1 H, it fails with the following error:
Error: The term "H" has type "forall t : T, P t <-> Q t" while it is
expected to have type "?A /\ ?B".
While I would like to get forall (t : T), P t -> Q t.
Edit
Using the suggested solution, I now have two ways to project the biconditional:
Theorem proj1' : (forall t, P t <-> Q t) -> forall t, P t -> Q t.
Proof.
intros H t.
exact (proj1 (H t)).
Qed.
Theorem foo : forall (t1 t2 : T),
(R t1 -> P t1) ->
(R t2 -> P t2) ->
R t1 /\ R t2 -> Q t1 /\ Q t2.
Proof.
intros t1 t2 H1 H2 [H3 H4].
(* Does not solve the goal, as expected. *)
auto using H.
(* Solves the goal, but is unnecessary explicit. *)
(* auto using (proj1 (H t1)), (proj1 (H t2)). *)
(* Solves the goal, and the instantiations are inferred. *)
auto using (proj1' H).
Qed.
Now, a function such as proj1' seems to be quite useful. If it is not offered in the standard library, is it because such situations do not actually come up often enough to justify it, or is it simply a historical accident?
I do realize that a distinct function would be required for two, three, etc. universal quantifiers (e.g. proj1'' : (forall t u, P t u <-> Q t u) -> forall t u, P t u -> Q t u). But wouldn't functions for up to three or four arguments be enough for most cases?
Related
How does `auto` interact with biconditional (iff)
Since a term of type forall (t : T), P t <-> Q t is a function, you need to apply it to a t of type T to get access to the body, which is a pair of proofs:
Goal (forall t, P t <-> Q t) -> forall t, P t -> Q t.
Proof.
intros H t.
exact (proj1 (H t)).
Qed.
The above is like the following (modulo transparency):
Definition proj1' : (forall t, P t <-> Q t) -> forall t, P t -> Q t :=
fun H t => proj1 (H t).
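The same one-liner pattern generalizes to the n-ary variants mentioned in the Edit. For instance, a two-argument version could look like this (a sketch; proj1'' and its generic parameters are hypothetical, not taken from the standard library):
Definition proj1'' {A B : Type} {P' Q' : A -> B -> Prop}
  (H : forall a b, P' a b <-> Q' a b) : forall a b, P' a b -> Q' a b :=
  fun a b => proj1 (H a b).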
Response to the Edit
One can suggest many proofs of the foo theorem. I wouldn't use proj1' in any of them:
Theorem foo t1 t2 : (R t1 -> P t1) -> (R t2 -> P t2) ->
P t1 /\ P t2 -> Q t1 /\ Q t2.
Solution 1
apply is one smart tactic; it can handle biconditionals:
Proof. now split; apply H. Qed.
Solution 2
intros can apply lemmas when moving stuff to the context:
Proof. now intros _ _ [H3%H H4%H]. Qed.
It's like SSReflect's by move=> _ _ [/H H3 /H H4].
Solution 3
Coq can rewrite with biconditionals if you Require Import Setoid first:
Proof. now rewrite !H. Qed.
! in front of a term means "rewrite as many times as you can, but at least once".
With a sig type definition like:
Inductive A: Set := mkA : nat -> A.
Function getId (a: A) : nat := match a with mkA n => n end.
Function filter (a: A) : bool := if (beq_nat (getId a) 0) then true else false.
Coercion is_true : bool >-> Sortclass.
Definition subsetA : Set := { a : A | filter a }.
I try to prove its projection is injective:
Lemma projection_injective :
forall t1 t2: subsetA, proj1_sig t1 = proj1_sig t2 -> t1 = t2.
Proof.
destruct t1.
destruct t2.
simpl.
intros.
rewrite -> H. (* <- stuck here *)
Abort.
At this point, Coq knows:
x : A
i : is_true (filter x)
x0 : A
i0 : is_true (filter x0)
H : x = x0
I tried some rewrites without success. For example, why can't I rewrite i using H to turn it into an i0? What did I miss here? Thanks.
At the point where you got stuck, your goal looked roughly like this:
exist x i = exist x0 i0
If the rewrite you typed were to succeed, you would have obtained the following goal:
exist x0 i = exist x0 i0
Here, you can see why Coq is complaining: rewriting would have yielded an ill-typed term. The problem is that the subterm exist x0 i is using i as a term of type filter x0, when it really has type filter x. To convince Coq that this is not a problem, you need to massage your goal a little bit before rewriting:
Lemma projection_injective :
forall t1 t2: subsetA, proj1_sig t1 = proj1_sig t2 -> t1 = t2.
Proof.
destruct t1.
destruct t2.
simpl.
intros.
revert i. (* <- this is new *)
rewrite -> H. (* and now the tactic succeeds *)
intros i.
Abort.
Alternatively, you could use the subst tactic, which tries to remove all redundant variables in the context. Here is a more compact version of the above script:
Lemma projection_injective :
forall t1 t2: subsetA, proj1_sig t1 = proj1_sig t2 -> t1 = t2.
Proof.
intros [x1 i1] [x2 i2]; simpl; intros e.
subst.
Abort.
You might run into another issue afterwards: showing that any two terms of type filter x0 are equal. In general, you would need the axiom of proof irrelevance to be able to show this; however, since filter x0 (via is_true) unfolds to an equality between two terms of a type with decidable equality, you can prove this property as a theorem (which the Coq standard library already does for you).
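For instance, here is one way the compact script above might be finished, using UIP_dec from Eqdep_dec together with bool_dec (a sketch under the definitions above):
Require Import Eqdep_dec Bool.
Lemma projection_injective :
  forall t1 t2 : subsetA, proj1_sig t1 = proj1_sig t2 -> t1 = t2.
Proof.
  intros [x1 i1] [x2 i2]; simpl; intros e.
  subst x1.
  (* i1 i2 : is_true (filter x2), i.e. equality proofs in bool *)
  unfold is_true in i1, i2.
  assert (E : i1 = i2) by (apply UIP_dec; exact bool_dec).
  now rewrite E.
Qed.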
As a side note, the mathcomp library already has a generic lemma that subsumes your property, called val_inj. Just to give you an example, this is how one might use it:
From mathcomp Require Import ssreflect ssrfun ssrbool eqtype.
Inductive A: Set := mkA : nat -> A.
Function getId (a: A) : nat := match a with mkA n => n end.
Function filter (a: A) : bool := if (Nat.eqb (getId a) 0) then true else false.
Definition subsetA : Set := { a : A | filter a }.
Lemma projection_injective :
forall t1 t2: subsetA, proj1_sig t1 = proj1_sig t2 -> t1 = t2.
Proof.
intros t1 t2.
apply val_inj.
Qed.
I defined Subtype as follows
Record Subtype {T:Type}(P : T -> Prop) := {
subtype :> Type;
subtype_inj :> subtype -> T;
subtype_isinj : forall (s t:subtype), (subtype_inj s = subtype_inj t) -> s = t;
subtype_h : forall (x : T), P x -> (exists s:subtype,x = subtype_inj s);
subtype_h0 : forall (s : subtype), P (subtype_inj s)}.
Can the following theorem be proven?
Theorem Subtypes_Exist : forall {T}(P : T -> Prop), Subtype P.
If not, is it provable from any well-known compatible axiom? Or can I add it as an axiom? Would it conflict with any usual axiom (like extensionality, functional choice, etc.)?
Your definition is practically identical to the one of MathComp; the main thing you are missing is injectivity, which fails because proofs in Prop need not be equal (proof relevance).
For that, I am afraid you will need to assume proof irrelevance:
Require Import ProofIrrelevance.
Theorem Subtypes_Exist : forall {T}(P : T -> Prop), Subtype P.
Proof.
intros T P; set (subtype_inj := @proj1_sig T P).
apply (@Build_Subtype _ _ { x | P x} subtype_inj).
+ intros [s Ps] [t Pt]; simpl; intros ->.
now rewrite (proof_irrelevance _ Ps Pt).
+ now intros x Px; exists (exist _ x Px).
+ now destruct 0.
Qed.
You can always restrict your predicate P to a proposition that is effectively proof-irrelevant, of course.
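For instance (a sketch with a hypothetical boolean predicate p), when P x is an equality on bool, injectivity follows from UIP_dec and no extra axiom is needed:
Require Import Eqdep_dec Bool.
Theorem Subtype_Exists_bool (T : Type) (p : T -> bool) :
  Subtype (fun x => p x = true).
Proof.
  apply (@Build_Subtype _ _ { x : T | p x = true } (@proj1_sig T (fun x => p x = true))).
  + intros [s Ps] [t Pt]; simpl; intros ->.
    assert (E : Ps = Pt) by (apply UIP_dec; exact bool_dec).
    now rewrite E.
  + now intros x Px; exists (exist _ x Px).
  + intros [x Px]; exact Px.
Qed.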