I work on mereology and I wanted to prove that a given theorem (Extensionality) follows from the four axioms I had.
This is my code:
Require Import Classical.
Parameter Entity: Set.
Parameter P : Entity -> Entity -> Prop.
Axiom P_refl : forall x, P x x.
Axiom P_trans : forall x y z,
P x y -> P y z -> P x z.
Axiom P_antisym : forall x y,
P x y -> P y x -> x = y.
Definition PP x y := P x y /\ x <> y.
Definition O x y := exists z, P z x /\ P z y.
Axiom strong_supp : forall x y,
~ P y x -> exists z, P z y /\ ~ O z x.
And this is my proof:
Theorem extension : forall x y,
(exists z, PP z x) -> (forall z, PP z x <-> PP z y) -> x = y.
Proof.
intros x y [w PPwx] H.
apply Peirce.
intros Hcontra.
destruct (classic (P y x)) as [yesP|notP].
- pose proof (H y) as [].
destruct H0.
split; auto.
contradiction.
- pose proof (strong_supp x y notP) as [z []].
assert (y = z).
apply Peirce.
intros Hcontra'.
pose proof (H z) as [].
destruct H3.
split; auto.
destruct H1.
exists z.
split.
apply P_refl.
assumption.
rewrite <- H2 in H1.
pose proof (H w) as [].
pose proof (H3 PPwx).
destruct PPwx.
destruct H5.
destruct H1.
exists w.
split; assumption.
Qed.
I’m happy with the fact that I completed this proof. However, I find it quite messy, and I don’t know how to improve it. (The only thing I can think of is to use intro patterns instead of separate destructs.) Is it possible to improve this proof? If so, please do not use overly complex tactics: I would like to understand the improvements you propose.
Here is a refactoring of your proof:
Require Import Classical.
Parameter Entity: Set.
Parameter P : Entity -> Entity -> Prop.
Axiom P_refl : forall x, P x x.
Axiom P_trans : forall x y z,
P x y -> P y z -> P x z.
Axiom P_antisym : forall x y,
P x y -> P y x -> x = y.
Definition PP x y := P x y /\ x <> y.
Definition O x y := exists z, P z x /\ P z y.
Axiom strong_supp : forall x y,
~ P y x -> exists z, P z y /\ ~ O z x.
Theorem extension : forall x y,
(exists z, PP z x) -> (forall z, PP z x <-> PP z y) -> x = y.
Proof.
intros x y [w PPwx] x_equiv_y.
apply NNPP. intros x_ne_y.
assert (~ P y x) as NPyx.
{ intros Pyx.
enough (PP y y) as [_ y_ne_y] by congruence.
rewrite <- x_equiv_y. split; congruence. }
destruct (strong_supp x y NPyx) as (z & Pzy & NOzx).
assert (y <> z) as y_ne_z.
{ intros <-. (* Substitute z right away. *)
assert (PP w y) as [Pwy NEwy] by (rewrite <- x_equiv_y; trivial).
destruct PPwx as [Pwx NEwx].
apply NOzx.
now exists w. }
assert (PP z x) as [Pzx _].
{ rewrite x_equiv_y. split; congruence. }
apply NOzx. exists z. split; trivial.
apply P_refl.
Qed.
The main changes are:
Give explicit and informative names to all the intermediate hypotheses (i.e., avoid doing destruct foo as [x []])
Use curly braces to separate the proofs of the intermediate assertions from the main proof.
Use the congruence tactic to automate some of the low-level equality reasoning. Roughly speaking, this tactic solves goals that can be established just by rewriting with equalities and pruning subgoals with contradictory statements like x <> x (see the small standalone example after this list).
Condense trivial proof steps using the assert ... by tactic, which does not generate new subgoals.
Use (a & b & c) destruct patterns rather than the harder-to-read [a [b c]].
Rewrite with x_equiv_y to avoid sequences such as specialize... destruct... apply H0 in H1.
Avoid some unnecessary uses of classical reasoning.
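To make the congruence and assert ... by points concrete, here is a small standalone example (the statements here are invented purely for illustration):
Goal forall (f : nat -> nat) (a b : nat), a = b -> f a <> f b -> False.
Proof.
intros f a b Eab Hne.
(* congruence derives f a = f b from a = b and spots the contradiction with Hne *)
congruence.
Qed.
Goal forall a b : nat, a = b -> b = a /\ a = a.
Proof.
intros a b Eab.
(* the side proof is discharged inline by "by congruence", so no new subgoal appears *)
assert (Hba : b = a) by congruence.
split; [exact Hba | reflexivity].
Qed.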
It is comparatively easy to prove the following (Coq):
Goal forall A (P : A -> A -> Prop), (exists! xy, P (fst xy) (snd xy)) -> (exists! x, exists! y, P x y).
The question I am puzzled by: does the reverse hold? The exists x, exists y, ... formulation allows y to be chosen based on which x was selected in the previous step, so y is allowed to depend on x. It seems to me (at least, I am not able to convince myself otherwise) that exists xy, ..., i.e. the existence of a pair (x, y), is different: it does not allow y to be chosen based on x.
The funny thing is that I tried to prove both Goal forall A (P : A -> A -> Prop), (exists! x, exists! y, P x y) -> ~ (exists! xy, P (fst xy) (snd xy)). and its negation, and both times I got stuck, unable to construct the required object or derive False.
Please help me out.
The converse does not hold. Here is a counterexample:
Definition P (x y : bool) : Prop :=
x = true -> y = true.
Lemma l1 : exists! x, exists! y, P x y.
Proof.
exists true.
split.
- exists true. split; [easy|].
now intros y ->.
- intros x' (y & Py & unique_y).
destruct x'; trivial.
assert (contra : P false (negb y)).
{ intros; easy. }
specialize (unique_y (negb y) contra).
now destruct y.
Qed.
Lemma l2 : ~ (exists! xy, P (fst xy) (snd xy)).
Proof.
intros ([x y] & Pxy & unique_xy); simpl in *.
assert (contra : P (negb x) true).
{ intros ?. reflexivity. }
specialize (unique_xy (negb x, true) contra).
injection unique_xy as contra' _.
now destruct x.
Qed.
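For completeness, the direction the question calls "comparatively easy" can be proved along these lines (just one possible proof, using the same destructuring style as above):
Goal forall A (P : A -> A -> Prop),
(exists! xy, P (fst xy) (snd xy)) -> (exists! x, exists! y, P x y).
Proof.
intros A P ([a b] & Pab & unique_ab); simpl in *.
exists a. split.
- (* b is the unique witness for a *)
exists b. split; [exact Pab|].
intros y' Py'.
assert (E : (a, y') = (a, b)) by (apply unique_ab; exact Py').
now inversion E.
- (* any x' with a unique witness yields a pair satisfying P, which must equal (a, b) *)
intros x' (y' & Px'y' & _).
assert (E : (x', y') = (a, b)) by (apply unique_ab; exact Px'y').
now inversion E.
Qed.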
I want to define a relation over two type families in Coq and have come up with the following definition dep_rel and the identity relation dep_rel_id:
Require Import Coq.Logic.JMeq.
Require Import Coq.Program.Equality.
Require Import Classical_Prop.
Definition dep_rel (X Y: Type -> Type) :=
forall i, X i -> forall j, Y j -> Prop.
Inductive dep_rel_id {X} : dep_rel X X :=
| dep_rel_id_intro i x: dep_rel_id i x i x.
However, I got stuck when I tried to prove the following lemma:
Lemma dep_rel_id_inv {E} i x j y:
@dep_rel_id E i x j y -> existT _ i x = existT _ j y.
Proof.
intros H. inversion H. subst.
Abort.
inversion H seems to ignore the fact that the two xs in dep_rel_id i x i x are the same. I end up with the proof state:
E : Type -> Type
j : Type
x, y, x0 : E j
H2 : existT (fun x : Type => E x) j x0 = existT (fun x : Type => E x) j x
H : dep_rel_id j x j y
x1 : E j
H5 : existT (fun x : Type => E x) j x1 = existT (fun x : Type => E x) j y
x2 : E j
H4 : j = j
============================
existT E j x = existT E j x1
I don't think the goal can be proved in this way. Are there any tactics for situations like this that I'm not aware of?
By the way, I was able to prove the lemma with a somewhat tweaked definition like the one below:
Inductive dep_rel_id' {X} : dep_rel X X :=
| dep_rel_id_intro' i x j y:
i = j -> x ~= y -> dep_rel_id' i x j y.
Lemma dep_rel_id_inv' {E} i x j y:
@dep_rel_id' E i x j y -> existT _ i x = existT _ j y.
Proof.
intros H. inversion H. subst.
apply inj_pair2 in H0.
apply inj_pair2 in H1. subst.
reflexivity.
Qed.
But I'm still curious whether this can be done in a simpler way (probably without using JMeq?). I'd be grateful for your suggestions.
I am not sure what the issue is with inversion; indeed, it seems to have lost track of an important equality. However, using case H instead of inversion H seems to work just fine:
Lemma dep_rel_id_inv {E} i x j y:
@dep_rel_id E i x j y -> existT _ i x = existT _ j y.
Proof.
intros H.
case H.
reflexivity.
Qed.
But having case or destruct do the job where inversion couldn't is very surprising to me. I suspect some kind of bug or wrong simplification by inversion, since simple inversion also gives a hypothesis from which one can prove the goal.
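For what it is worth, since the question already imports Coq.Program.Equality, dependent destruction also closes the original goal; note, though, that under the hood it relies on the JMeq/axiom K machinery you were hoping to avoid (a sketch, not the only option):
Lemma dep_rel_id_inv_dep {E} i x j y:
@dep_rel_id E i x j y -> existT _ i x = existT _ j y.
Proof.
intros H.
(* generalizes the indices, destructs H, and discharges the resulting heterogeneous equalities *)
dependent destruction H.
reflexivity.
Qed.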
I can figure out how to prove my "degree_descent" Theorem below if I really need to:
Variable X : Type.
Variable degree : X -> nat.
Variable P : X -> Prop.
Axiom inductive_by_degree : forall n, (forall x, S (degree x) = n -> P x) -> (forall x, degree x = n -> P x).
Lemma hacky_rephrasing : forall n, forall x, degree x = n -> P x.
Proof. induction n; intros.
- apply (inductive_by_degree 0). discriminate. exact H.
- apply (inductive_by_degree (S n)); try exact H. intros y K. apply IHn. injection K; auto.
Qed.
Theorem degree_descent : forall x, P x.
Proof. intro. apply (hacky_rephrasing (degree x)); reflexivity.
Qed.
but this "hacky_rephrasing" Lemma is an ugly and unintuitive pattern to me. Is there a better way to prove degree_descent all by itself? For example, using set or pose to introduce n := degree x and then invoking induction n isn't working because it annihilates the hypothesis from the subsequent contexts (if someone could explain why this occurs, too, that would be helpful!). I can't figure out how to get generalize to work with me here, either.
PS: This is just weak induction for simplicity, but ideally I would like the solution to work with custom induction schemes via induction ... using ....
It looks like you would like to use the remember tactic:
Variable X : Type.
Variable degree : X -> nat.
Variable P : X -> Prop.
Axiom inductive_by_degree : forall n, (forall x, S (degree x) = n -> P x) -> (forall x, degree x = n -> P x).
Theorem degree_descent : forall x, P x.
Proof.
intro x. remember (degree x) as n eqn:E.
symmetry in E. revert x E.
(* Goal: forall x : X, degree x = n -> P x *)
Restart. From Coq Require Import ssreflect.
(* Or ssreflect style *)
move=> x; move: {2}(degree x) (eq_refl : degree x = _)=> n.
(* ... *)
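In case it helps, here is how the proof might then be finished, essentially inlining your hacky_rephrasing after the remember step (a sketch reusing only steps from the question; remember is what keeps the equation E : degree x = n around so it can be reverted and generalized before the induction):
Theorem degree_descent : forall x, P x.
Proof.
intro x. remember (degree x) as n eqn:E.
symmetry in E. revert x E.
induction n; intros x E.
- apply (inductive_by_degree 0); [discriminate | exact E].
- apply (inductive_by_degree (S n)); [| exact E].
intros y K. apply IHn. injection K; auto.
Qed.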
A common kind of proof I have to make is something like
Lemma my_lemma : forall y, (forall x x', Q x x' y) -> (forall x x', P x y <-> P x' y).
Proof.
intros y Q_y.
split.
+ <some proof using Q>
+ <the same proof using Q, but x and x' are swapped>
where Q is itself some kind of iff-shaped predicate.
My problem is that the proofs of P x y -> P x' y and P x' y -> P x y are often basically identical, the only difference being that the roles of x and x' are swapped between them. Can I ask Coq to transform the goal into
forall x x', P x y -> P x' y
which then generalises to the iff case, so that I don't need to repeat myself in the proof?
I had a look through the standard library, the tactic index, and some SO questions, but nothing told me how to do this.
Here is a custom tactic for it:
Ltac sufficient_if :=
match goal with
| [ |- forall (x : ?t) (x' : ?t'), ?T <-> ?U ] => (* If the goal looks like an equivalence (T <-> U) (hoping that T and U are sufficiently similar)... *)
assert (HHH : forall (x : t) (x' : t'), T -> U); (* Change the goal to (T -> U) *)
[ | split; apply HHH ] (* And prove the two directions of the old goal *)
end.
Parameter Q : nat -> nat -> nat -> Prop.
Parameter P : nat -> nat -> Prop.
Lemma my_lemma : forall y, (forall x x', Q x x' y) -> (forall x x', P x y <-> P x' y).
Proof.
intros y Q_y.
sufficient_if.
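For reference, assuming the Ltac matches the goal as intended, exactly one subgoal is left at this point:
(* Remaining goal after sufficient_if:
forall x x' : nat, P x y -> P x' y
The split; apply HHH part has already discharged both directions of the original iff,
so the symmetric argument only has to be written once. *)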
In mathematics, one can often make assumptions "without loss of generality" (WLOG) to simplify proofs of this kind. In your example, you could say "assume without loss of generality that P x y holds. To prove P x y <-> P x' y it is sufficient to prove P x' y."
If you are using ssreflect, you have the wlog tactic.
You essentially cut in another statement that easily solves your goal. You can also do it with standard tactics like assert or enough (which is like assert, but with the proof obligations in the other order); an assert variant is shown after the example below.
An example to show what I mean: below I only prove the implication in one direction, because the other direction (and hence the iff) follows easily from it (with firstorder).
Context (T:Type) (P:T->T->Prop).
Goal forall x y, P x y <-> P y x.
enough (forall x y, P x y -> P y x) by firstorder.
Now I only have to show the statement in one direction, because it implies both directions of the real goal.
For more about WLOG, see for instance [1].
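And here is the same step with assert instead of enough, on a concrete relation so that the snippet can be completed (addition on nat is just a stand-in to have something provable); note how the auxiliary one-directional statement now has to be proved first:
Goal forall x y : nat, x + y = y + x <-> y + x = x + y.
Proof.
(* with assert, the auxiliary statement is the first obligation *)
assert (H : forall x y : nat, x + y = y + x -> y + x = x + y).
{ intros x y E. symmetry. exact E. }
(* the original iff then follows from H, instantiated both ways *)
firstorder.
Qed.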
Lemma In_map_iff :
forall (A B : Type) (f : A -> B) (l : list A) (y : B),
In y (map f l) <->
exists x, f x = y /\ In x l.
Proof.
split.
- generalize dependent y.
generalize dependent f.
induction l.
+ intros. inversion H.
+ intros.
simpl.
simpl in H.
destruct H.
* exists x.
split.
apply H.
left. reflexivity.
*
1 subgoal
A : Type
B : Type
x : A
l : list A
IHl : forall (f : A -> B) (y : B),
In y (map f l) -> exists x : A, f x = y /\ In x l
f : A -> B
y : B
H : In y (map f l)
______________________________________(1/1)
exists x0 : A, f x0 = y /\ (x = x0 \/ In x0 l)
Since the goal exists x0 : A, f x0 = y /\ (x = x0 \/ In x0 l) would follow from exists x0 : A, f x0 = y /\ In x0 l, I want to eliminate x = x0 inside the goal so I can apply the inductive hypothesis, but I am not sure how to do this. I've tried left in (x = x0 \/ In x0 l) and various other things, but I haven't been successful in making it happen. As it turns out, defining a helper function of type forall a b c, (a /\ c) -> a /\ (b \/ c) to do the rewriting does not work for terms under an existential either.
How could this be done?
Note that the above is one of the SF book exercises.
You can get access to the components of your inductive hypothesis with any of the following:
specialize (IHl f y H); destruct IHl
destruct (IHl f y H)
edestruct IHl
You can then use exists and split to manipulate the goal into a form that is easier to work with.
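For instance, in the proof state shown in the question, the remaining subgoal might be finished along these lines (one possible sequence; the hypothesis names are the ones visible in that state):
destruct (IHl f y H) as [x0 [Hfx0 Hx0l]].
exists x0. split.
apply Hfx0.
right. apply Hx0l.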
As it turns out, defining a helper lemma does the job:
Lemma In_map_iff_helper : forall (X : Type) (a b c : X -> Prop),
(exists q, (a q /\ c q)) -> (exists q, a q /\ (b q \/ c q)).
Proof.
intros.
destruct H.
exists x.
destruct H.
split.
apply H.
right.
apply H0.
Qed.
This does the rewriting that is needed right off the bat. I made a really dumb error in thinking that I needed a tactic rather than an auxiliary lemma. I should have studied the preceding examples more closely; if I had, I'd have realized that existentials need to be accounted for.
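For the record, with that helper in hand the open subgoal from the question can then be closed like this (a sketch; it assumes Coq's unification infers the predicates a, b and c from the goal, otherwise they can be supplied explicitly):
apply In_map_iff_helper.
apply (IHl f y H).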