How can I generalise Coq proofs of an iff?

A common kind of proof I have to make is something like
Lemma my_lemma : forall y, (forall x x', Q x x' y) -> (forall x x', P x y <-> P x' y).
Proof.
intros y Q_y.
split.
+ <some proof using Q>
+ <the same proof using Q, but x and x' are swapped>
where Q is itself some kind of iff-shaped predicate.
My problem is that the proofs of P x y -> P x' y and P x' y -> P x y are often basically identical, the only difference being that the roles of x and x' are swapped between them. Can I ask Coq to transform the goal into
forall x x', P x y -> P x' y
which then generalises to the iff case, so that I don't need to repeat myself in the proof?
I had a look through the standard library, the tactic index, and some SO questions, but nothing told me how to do this.

Here is a custom tactic for it:
Ltac sufficient_if :=
  match goal with
  | [ |- forall (x : ?t) (x' : ?t'), ?T <-> ?U ] => (* If the goal looks like an equivalence (T <-> U) (hoping that T and U are sufficiently similar)... *)
    assert (HHH : forall (x : t) (x' : t'), T -> U); (* ...assert the single implication forall x x', T -> U... *)
    [ | split; apply HHH ] (* ...and use it to discharge both directions of the old goal. *)
  end.
Parameter Q : nat -> nat -> nat -> Prop.
Parameter P : nat -> nat -> Prop.
Lemma my_lemma : forall y, (forall x x', Q x x' y) -> (forall x x', P x y <-> P x' y).
Proof.
intros y Q_y.
sufficient_if.
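If everything unifies, sufficient_if discharges the old equivalence goal through split; apply HHH and leaves a single subgoal of the hoped-for shape, roughly:
y : nat
Q_y : forall x x' : nat, Q x x' y
============================
forall x x' : nat, P x y -> P x' y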

In mathematics, one often can make "assumptions" "without loss of generality" (WLOG) to simplify proofs of this kind. In your example, you could say "assume without loss of generality that P x y holds. To prove P x y <-> P x' y it is sufficient to prove P x' y."
If you are using ssreflect, you have the wlog tactic.
Essentially, you cut in an auxiliary statement from which your goal follows easily. You can also do this with standard tactics like assert or enough (which is like assert, but the proof obligations come in the other order).
An example to show what I mean: below I just want to show the implication in one direction, because it can easily solve the implication in the other direction (with firstorder).
Context (T:Type) (P:T->T->Prop).
Goal forall x y, P x y <-> P y x.
enough (forall x y, P x y -> P y x) by firstorder.
Now I only have to show the implication in one direction, because it implies both directions of the real goal.
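For comparison, here is a minimal sketch of the same trick applied to the lemma from the question (P and Q re-declared as abstract parameters; the proof is aborted because nothing here relates P and Q, but the remaining goal is exactly the single implication):
Parameter Q : nat -> nat -> nat -> Prop.
Parameter P : nat -> nat -> Prop.
Lemma my_lemma : forall y, (forall x x', Q x x' y) -> (forall x x', P x y <-> P x' y).
Proof.
  intros y Q_y.
  enough (H : forall x x', P x y -> P x' y) by (intros x x'; split; apply H).
  (* remaining goal: forall x x', P x y -> P x' y *)
Abort.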
For more about WLOG see, for instance, [1].

Related

How to improve this proof?

I work on mereology and I wanted to prove that a given theorem (Extensionality) follows from the four axioms I had.
This is my code:
Require Import Classical.
Parameter Entity: Set.
Parameter P : Entity -> Entity -> Prop.
Axiom P_refl : forall x, P x x.
Axiom P_trans : forall x y z,
P x y -> P y z -> P x z.
Axiom P_antisym : forall x y,
P x y -> P y x -> x = y.
Definition PP x y := P x y /\ x <> y.
Definition O x y := exists z, P z x /\ P z y.
Axiom strong_supp : forall x y,
~ P y x -> exists z, P z y /\ ~ O z x.
And this is my proof:
Theorem extension : forall x y,
(exists z, PP z x) -> (forall z, PP z x <-> PP z y) -> x = y.
Proof.
intros x y [w PPwx] H.
apply Peirce.
intros Hcontra.
destruct (classic (P y x)) as [yesP|notP].
- pose proof (H y) as [].
destruct H0.
split; auto.
contradiction.
- pose proof (strong_supp x y notP) as [z []].
assert (y = z).
apply Peirce.
intros Hcontra'.
pose proof (H z) as [].
destruct H3.
split; auto.
destruct H1.
exists z.
split.
apply P_refl.
assumption.
rewrite <- H2 in H1.
pose proof (H w) as [].
pose proof (H3 PPwx).
destruct PPwx.
destruct H5.
destruct H1.
exists w.
split; assumption.
Qed.
I'm happy with the fact that I completed this proof. However, I find it quite messy, and I don't know how to improve it. (The only thing I can think of is to use patterns instead of destruct.) Is it possible to improve this proof? If so, please do not use super complex tactics: I would like to understand the improvements you propose.
Here is a refactoring of your proof:
Require Import Classical.
Parameter Entity: Set.
Parameter P : Entity -> Entity -> Prop.
Axiom P_refl : forall x, P x x.
Axiom P_trans : forall x y z,
P x y -> P y z -> P x z.
Axiom P_antisym : forall x y,
P x y -> P y x -> x = y.
Definition PP x y := P x y /\ x <> y.
Definition O x y := exists z, P z x /\ P z y.
Axiom strong_supp : forall x y,
~ P y x -> exists z, P z y /\ ~ O z x.
Theorem extension : forall x y,
(exists z, PP z x) -> (forall z, PP z x <-> PP z y) -> x = y.
Proof.
intros x y [w PPwx] x_equiv_y.
apply NNPP. intros x_ne_y.
assert (~ P y x) as NPyx.
{ intros Pxy.
enough (PP y y) as [_ y_ne_y] by congruence.
rewrite <- x_equiv_y. split; congruence. }
destruct (strong_supp x y NPyx) as (z & Pzy & NOzx).
assert (y <> z) as y_ne_z.
{ intros <-. (* Substitute z right away. *)
assert (PP w y) as [Pwy NEwy] by (rewrite <- x_equiv_y; trivial).
destruct PPwx as [Pwx NEwx].
apply NOzx.
now exists w. }
assert (PP z x) as [Pzx _].
{ rewrite x_equiv_y. split; congruence. }
apply NOzx. exists z. split; trivial.
apply P_refl.
Qed.
The main changes are:
Give explicit and informative names to all the intermediate hypotheses (i.e., avoid doing destruct foo as [x []])
Use curly braces to separate the proofs of the intermediate assertions from the main proof.
Use the congruence tactic to automate some of the low-level equality reasoning. Roughly speaking, this tactic solves goals that can be established just by rewriting with equalities and pruning subgoals with contradictory statements like x <> x (a tiny standalone example follows this list).
Condense trivial proof steps using the assert ... by tactic, which does not generate new subgoals.
Use the (a & b & c) destruct patterns rather than [a [b c]], which are harder to read.
Rewrite with x_equiv_y to avoid doing sequences such as specialize... destruct... apply H0 in H1
Avoid some unnecessary uses of classical reasoning.
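As a tiny standalone illustration of the congruence and assert ... by points (independent of the mereology development above):
Goal forall (A : Type) (x y : A) (f : A -> A),
  x = y -> f y <> f x -> False.
Proof.
  intros A x y f Exy Hneq.
  (* congruence combines the equality x = y with the disequality f y <> f x
     and closes the goal. *)
  congruence.
Qed.

Goal forall n : nat, n + 0 + 0 = n.
Proof.
  intro n.
  (* assert ... by proves the auxiliary statement on the spot,
     so it does not generate an extra subgoal. *)
  assert (n + 0 = n) as E by (induction n; simpl; congruence).
  (* congruence then reuses E to close the main goal. *)
  congruence.
Qed.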

Is there a shorter proof of this Coq theorem?

I'm learning to use Coq and I try to prove the theorems of a paper I'm reading. The paper is Having a Part Twice Over by Karen Bennett, published in 2013. The paper proposes a mereological theory based on two primitives, F and Ps, and defines the parthood relation P using these two primitives.
I coded it as follows:
Class Entity: Type.
(* Slot Mereology defines the parthood relation
* with the two primitives F and Ps.
* The idea is that wholes have slots
* filled by their parts.
* F x s means that x fills slot s.
* Ps s y means that s is a parthood slot of y.
* P is the parthood relation.
*)
Parameter F : Entity -> Entity -> Prop.
Parameter Ps : Entity -> Entity -> Prop.
Definition P (x y : Entity) :=
exists s, F x s /\ Ps s y.
(* Slot Inheritance *)
Axiom A5 : forall x y z1 z2 : Entity,
(Ps z1 y /\ F x z1) /\ Ps z2 x -> Ps z2 y.
(* Parthood Transitivity *)
Theorem T7 : forall x y z : Entity,
(P x y /\ P y z) -> P x z.
Proof.
intros x y z.
unfold P.
intro h.
destruct h as (EsPxy, EsPyz).
destruct EsPxy as (s1, Pxy).
destruct Pxy as (Fxs1, Pss1y).
destruct EsPyz as (s2, Pyz).
destruct Pyz as (Fys2, Pss2z).
exists s1.
split.
apply Fxs1.
apply A5 with (z1 := s2) (x := y).
split.
split.
assumption.
assumption.
assumption.
Qed.
I managed to prove theorem T7. I have two questions:
is my Coq code OK? (I'm not sure if the way I declared the type, the primitives and the predicate is the right way to do it.)
is there a shorter proof? (Regarding the length of the proof, I only care about the number of tactics.)
As another approach, using ssreflect and its neat notation for destructuring, one can rephrase your explicit proof in a more compact way (I'm using Arthur's version).
From mathcomp Require Import all_ssreflect.
Parameter Entity: Type.
Parameter F : Entity -> Entity -> Prop.
Parameter Ps : Entity -> Entity -> Prop.
Definition P (x y : Entity) :=
exists s, F x s /\ Ps s y.
Axiom A5 : forall x y z1 z2 : Entity,
(Ps z1 y /\ F x z1) /\ Ps z2 x -> Ps z2 y.
Theorem T7 : forall x y z : Entity,
(P x y /\ P y z) -> P x z.
Proof.
move=> x y z [[s1 [Fxs1 Ps1y]] [s2 [Fys2 Ps2z]]].
by exists s1; split; [|exact: (A5 y z s2 s1)].
Qed.
Yes, your Coq code is OK. But there are shorter proofs. This theorem is simple enough that it can be solved with Coq's automation tactics. E.g.,
Parameter Entity: Type.
(* Slot Mereology defines the parthood relation
* with the two primitives F and Ps.
* The idea is that wholes have slots
* filled by their parts.
* F x s means that x fills slot s.
* Ps s y means that s is a parthood slot of y.
* P is the parthood relation.
*)
Parameter F : Entity -> Entity -> Prop.
Parameter Ps : Entity -> Entity -> Prop.
Definition P (x y : Entity) :=
exists s, F x s /\ Ps s y.
(* Slot Inheritance *)
Axiom A5 : forall x y z1 z2 : Entity,
(Ps z1 y /\ F x z1) /\ Ps z2 x -> Ps z2 y.
(* Parthood Transitivity *)
Theorem T7 : forall x y z : Entity,
(P x y /\ P y z) -> P x z.
Proof.
unfold P; firstorder; eauto 10 using A5.
Qed.
(Notice that I replaced "Class Entity" with "Parameter Entity": The first form is actually just defining a type whose elements are records with no fields, whereas the second one is postulating that the type Entity exists without placing any further constraints on it.)
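For what it's worth, there is also a middle ground between the fully manual proof and the automated one: plain tactics with destructuring intro patterns, along the same lines as the ssreflect answer above (a sketch, stated as a separate theorem T7' so it does not clash with T7):
Theorem T7' : forall x y z : Entity,
  (P x y /\ P y z) -> P x z.
Proof.
  intros x y z [[s1 [Fxs1 Ps1y]] [s2 [Fys2 Ps2z]]].
  exists s1. split.
  - assumption.
  - apply (A5 y z s2 s1). repeat split; assumption.
Qed.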

Coq: Induction on associated variable

I can figure out how to prove my "degree_descent" Theorem below if I really need to:
Variable X : Type.
Variable degree : X -> nat.
Variable P : X -> Prop.
Axiom inductive_by_degree : forall n, (forall x, S (degree x) = n -> P x) -> (forall x, degree x = n -> P x).
Lemma hacky_rephrasing : forall n, forall x, degree x = n -> P x.
Proof. induction n; intros.
- apply (inductive_by_degree 0). discriminate. exact H.
- apply (inductive_by_degree (S n)); try exact H. intros y K. apply IHn. injection K; auto.
Qed.
Theorem degree_descent : forall x, P x.
Proof. intro. apply (hacky_rephrasing (degree x)); reflexivity.
Qed.
but this "hacky_rephrasing" Lemma is an ugly and unintuitive pattern to me. Is there a better way to prove degree_descent all by itself? For example, using set or pose to introduce n := degree x and then invoking induction n isn't working because it annihilates the hypothesis from the subsequent contexts (if someone could explain why this occurs, too, that would be helpful!). I can't figure out how to get generalize to work with me here, either.
PS: This is just weak induction for simplicity, but ideally I would like the solution to work with custom induction schemes via induction ... using ....
It looks like you would like to use the remember tactic:
Variable X : Type.
Variable degree : X -> nat.
Variable P : X -> Prop.
Axiom inductive_by_degree : forall n, (forall x, S (degree x) = n -> P x) -> (forall x, degree x = n -> P x).
Theorem degree_descent : forall x, P x.
Proof.
intro x. remember (degree x) as n eqn:E.
symmetry in E. revert x E.
(* Goal: forall x : X, degree x = n -> P x *)
Restart. From Coq Require Import ssreflect.
(* Or ssreflect style *)
move=> x; move: {2}(degree x) (eq_refl : degree x = _)=> n.
(* ... *)
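From the goal shown just before the Restart, one way to finish is essentially the same induction as in hacky_rephrasing, just inlined (a sketch reusing the steps from your lemma):
(* Goal: forall x : X, degree x = n -> P x *)
induction n; intros x' E.
- apply (inductive_by_degree 0); [discriminate | exact E].
- apply (inductive_by_degree (S n)); try exact E.
  intros y K. apply IHn. injection K; auto.
Qed.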

How to build a function implicitly in Coq?

I am trying to prove that every group has an inverse function.
I have defined a group as follows:
Record Group:Type := {
G:Set;
mult:G->G->G;
e:G;
assoc:forall x y z:G, mult x (mult y z)=mult (mult x y) z;
neut:forall x:G, mult e x=x /\ mult x e=x;
inverse:forall x:G,exists y:G, mult x y = e
}.
I am aware that it is better to just replace the inverse axiom by inverse : forall x:G, {y:G | mult x y = e}, or even by a function inverse : G -> G together with is_inverse : forall x:G, mult x (inverse x) = e, but I prefer my definition, mainly because I want it to be identical to the one given in a classroom.
So I have included a suitable version of the axiom of choice:
Axiom indefinite_description : forall (A : Type) (P: A->Prop), ex P -> sig P.
Axiom functional_choice : forall A B (R:A->B->Prop), (forall x, exists y, R x y) -> (exists f, forall x, R x (f x)).
Now I can prove my claim:
Lemma inv_func_exists(H:Group):exists inv_func:G H->G H, (forall x:G H, mult H x (inv_func(x))=e H).
generalize (inverse H).
apply functional_choice.
Qed.
Now that I have proved the existence, I would like to define an actual function. Here I feel that things start to get messy. The following definition creates an actual function, but it seems ugly and complicated:
Definition inv_func(H:Group):G H->G H.
pose (inv_func_exists H).
pose indefinite_description.
generalize e0 s.
trivial.
Qed.
Lastly, I would like to prove that inv_func is actually an inverse function:
Lemma inv_func_is_inverse:forall (H:Group), forall x:(G H), mult H x (inv_func H x)=e H.
I can see that Coq knows how inv_func was defined (e.g. Print inv_func), but I have no idea how to formally prove the lemma.
To conclude, I would appreciate suggestions as to how to prove the last lemma, and for better ways to define inv_func (but under my definition of a group, without including the existence of such a function in the group definition). I believe the question is relevant in many other situations where one can prove some correspondence for each element and needs to turn this correspondence into a function.
There are quite a few questions inside your question. I'll try to address all of them:
First, there is no reason to prefer exists x, P plus description over {x | P}; indeed, it seems weird to do so. {x | P} is perfectly valid as "there exists an x that can be computed", and I would rather use that definition with your groups (see the sketch below).
Secondly, when creating definitions using tactics, you should end the proof with the command Defined. Using Qed will declare the definition "Opaque", which means it cannot be expanded, and that prevents your proof from going through.
The way to extract the witness from your definition is by using a projection. In this case, proj1_sig.
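Concretely, for the code below I am assuming that the inverse field of your record is restated with a sig type, everything else being unchanged (a sketch):
Record Group : Type := {
  G : Set;
  mult : G -> G -> G;
  e : G;
  assoc : forall x y z : G, mult x (mult y z) = mult (mult x y) z;
  neut : forall x : G, mult e x = x /\ mult x e = x;
  (* sig instead of exists, so the witness can be projected out *)
  inverse : forall x : G, {y : G | mult x y = e}
}.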
Using all the above we arrive at:
Definition inv_func' (H:Group) (x : G H) : G H.
Proof.
destruct (inverse H x) as [y _].
exact y.
Defined.
Definition inv_func (H:Group) (x : G H) : G H := proj1_sig (inverse H x).
Lemma inv_func_is_inverse (H:Group) (x: G H) : mult H x (inv_func H x) = e H.
Proof. now unfold inv_func; destruct (inverse H x). Qed.
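As a follow-up, with the sig-based field the second projection proj2_sig gives the correctness lemma almost for free, since inv_func H x is definitionally proj1_sig (inverse H x):
Lemma inv_func_is_inverse' (H : Group) (x : G H) : mult H x (inv_func H x) = e H.
Proof. exact (proj2_sig (inverse H x)). Qed.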

Termination implies existence of normal form

I would like to prove that termination implies existence of normal form. These are my definitions:
Section Forms.
Require Import Classical_Prop.
Require Import Classical_Pred_Type.
Context {A : Type}.
Variable R : A -> A -> Prop.
Definition Inverse (Rel : A -> A -> Prop) := fun x y => Rel y x.
Inductive ReflexiveTransitiveClosure (Rel : A -> A -> Prop) : A -> A -> Prop :=
| rtc_into (x y : A) : Rel x y -> ReflexiveTransitiveClosure Rel x y
| rtc_trans (x y z : A) : Rel x y -> ReflexiveTransitiveClosure Rel y z ->
ReflexiveTransitiveClosure Rel x z
| rtc_refl (x y : A) : x = y -> ReflexiveTransitiveClosure Rel x y.
Definition redc (x : A) := exists y, R x y.
Definition nf (x : A) := ~(redc x).
Definition nfo (x y : A) := ReflexiveTransitiveClosure R x y /\ nf y.
Definition terminating := forall x, Acc (Inverse R) x.
Definition normalizing := forall x, (exists y, nfo x y).
End Forms.
I'd like to prove:
Lemma terminating_impl_normalizing (T : terminating):
normalizing.
I have been banging my head against the wall for a couple of hours now, and I've made almost no progress. I can show:
Lemma terminating_not_inf_forall (T : terminating) :
forall f : nat -> A, ~ (forall n, R (f n) (f (S n))).
which I believe should help (this is also true without classic).
Here is a proof using the excluded middle. I reformulated the problem to replace custom definitions by standard ones (note, by the way, that in your definition of the closure the rtc_into constructor is redundant: it follows from rtc_trans and rtc_refl). I also reformulated terminating using well_founded.
Require Import Classical_Prop.
Require Import Relations.
Section Forms.
Context {A : Type} (R:relation A).
Definition inverse := fun x y => R y x.
Definition redc (x : A) := exists y, R x y.
Definition nf (x : A) := ~(redc x).
Definition nfo (x y : A) := clos_refl_trans _ R x y /\ nf y.
Definition terminating := well_founded inverse. (* forall x, Acc inverse x. *)
Definition normalizing := forall x, (exists y, nfo x y).
Lemma terminating_impl_normalizing (T : terminating):
normalizing.
Proof.
unfold normalizing.
apply (well_founded_ind T). intros.
destruct (classic (redc x)).
- destruct H0 as [y H0]. pose proof (H _ H0).
destruct H1 as [y' H1]. exists y'. unfold nfo.
destruct H1.
split.
+ apply rt_trans with (y:=y). apply rt_step. assumption. assumption.
+ assumption.
- exists x. unfold nfo. split. apply rt_refl. assumption.
Qed.
End Forms.
The proof is not very complicated but here are the main ideas:
use well-founded induction (via well_founded_ind; its statement is recalled after this list)
thanks to the excluded middle principle, separate the case where x is not in normal form and the case where it is
if x is not in normal form, use the induction hypothesis and use the transitivity of the closure to conclude
if x is already in normal form, we are done
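For reference, the induction principle used in the first step is (roughly, as printed by Check well_founded_ind):
well_founded_ind
     : forall (A : Type) (R : A -> A -> Prop),
       well_founded R ->
       forall P : A -> Prop,
       (forall x : A, (forall y : A, R y x -> P y) -> P x) ->
       forall a : A, P a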