Coq: how to pretty-print a term constructed using tactics?

I'm new to Coq and am doing some exercises to get more familiar with it.
My understanding is that proving a proposition in Coq "really" is writing down a type in Gallina and then showing that it's inhabited using tactics to combine terms together in deterministic ways.
I'm wondering if there's a way to get a pretty-printed representation of the actual term, with all the tactics removed.
In the example below, an anonymous term of type plus_comm (x y : N) : plus x y = plus y x is ultimately produced... I think. What should I do if I want to look at it? In a certain sense, I'm curious what the tactics "desugar" to.
Here's the code in question, lifted essentially verbatim from a tutorial on YouTube https://www.youtube.com/watch?v=OaIn7g8BAIc.
Inductive N : Type :=
| O : N
| S : N -> N
.

Fixpoint plus (x y : N) : N :=
  match x with
  | O => y
  | S x' => S (plus x' y)
  end.

Lemma plus_0 (x : N) : plus x O = x.
Proof.
  induction x.
  - simpl. reflexivity.
  - simpl. rewrite IHx. reflexivity.
Qed.

Lemma plus_S (x y : N) : plus x (S y) = S (plus x y).
Proof.
  induction x.
  - simpl. reflexivity.
  - simpl. rewrite IHx. reflexivity.
Qed.

Lemma plus_comm (x y : N) : plus x y = plus y x.
Proof.
  induction x.
  - simpl. rewrite plus_0. reflexivity.
  - simpl. rewrite IHx. rewrite plus_S. reflexivity.
Qed.

First of all, plus_comm is not part of the type. You get a term named plus_comm of type forall x y : N, plus x y = plus y x. You can check it using the following command:
Check plus_comm.
So, an alternative way of defining the plus_comm lemma is
Lemma plus_comm : forall x y : N, plus x y = plus y x.
As a side note: in this case you'll need to add intros x y. (or just intros.) after the Proof. part.
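Spelled out, that alternative looks like this (the primed name is used here only to avoid clashing with the lemma defined above):
Lemma plus_comm' : forall x y : N, plus x y = plus y x.
Proof.
  intros x y.
  induction x.
  - simpl. rewrite plus_0. reflexivity.
  - simpl. rewrite IHx. rewrite plus_S. reflexivity.
Qed.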
Tactics (and the means of gluing them together) form a metalanguage, called Ltac, because they are used to produce terms of another language, Gallina, which is the specification language of Coq.
For example, forall x y : N, plus x y = plus y x is a Gallina term, as is the body of the plus function. To obtain the term attached to plus_comm, use the Print command:
Print plus_comm.
plus_comm =
fun x y : N =>
N_ind (fun x0 : N => plus x0 y = plus y x0)
  (eq_ind_r (fun n : N => y = n) eq_refl (plus_0 y))
  (fun (x0 : N) (IHx : plus x0 y = plus y x0) =>
   eq_ind_r (fun n : N => S n = plus y (S x0))
     (eq_ind_r (fun n : N => S (plus y x0) = n) eq_refl (plus_S y x0))
     IHx) x
     : forall x y : N, plus x y = plus y x
It is not an easy read, but with some experience you'll be able to understand it.
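The backbone of that term is N_ind, the induction principle Coq generated automatically from the Inductive declaration of N; checking its type first makes the rest easier to parse (output shown roughly, binder names may vary):
Check N_ind.
(* N_ind
     : forall P : N -> Prop,
       P O -> (forall n : N, P n -> P (S n)) -> forall n : N, P n *)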
Incidentally, here is how we could have proved the lemma without using tactics:
Definition plus_comm : forall x y : N, plus x y = plus y x :=
  fix IH (x y : N) :=
    match x return plus x y = plus y x with
    | O => eq_sym (plus_0 y)
    | S x => eq_ind _ (fun p => S p = plus y (S x)) (eq_sym (plus_S y x)) _ (eq_sym (IH x y))
    end.
To explain a few things: fix is the means of defining recursive functions, eq_sym is used to change x = y into y = x, and eq_ind corresponds to the rewrite tactic.
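For reference, the standard-library types of the two equality combinators used above (output shown roughly):
Check eq_sym.
(* eq_sym : forall (A : Type) (x y : A), x = y -> y = x *)
Check eq_ind.
(* eq_ind
     : forall (A : Type) (x : A) (P : A -> Prop),
       P x -> forall y : A, x = y -> P y *)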

Related

How to improve this proof?

I work on mereology and I wanted to prove that a given theorem (Extensionality) follows from the four axioms I had.
This is my code:
Require Import Classical.
Parameter Entity: Set.
Parameter P : Entity -> Entity -> Prop.
Axiom P_refl : forall x, P x x.
Axiom P_trans : forall x y z,
P x y -> P y z -> P x z.
Axiom P_antisym : forall x y,
P x y -> P y x -> x = y.
Definition PP x y := P x y /\ x <> y.
Definition O x y := exists z, P z x /\ P z y.
Axiom strong_supp : forall x y,
~ P y x -> exists z, P z y /\ ~ O z x.
And this is my proof:
Theorem extension : forall x y,
(exists z, PP z x) -> (forall z, PP z x <-> PP z y) -> x = y.
Proof.
intros x y [w PPwx] H.
apply Peirce.
intros Hcontra.
destruct (classic (P y x)) as [yesP|notP].
- pose proof (H y) as [].
destruct H0.
split; auto.
contradiction.
- pose proof (strong_supp x y notP) as [z []].
assert (y = z).
apply Peirce.
intros Hcontra'.
pose proof (H z) as [].
destruct H3.
split; auto.
destruct H1.
exists z.
split.
apply P_refl.
assumption.
rewrite <- H2 in H1.
pose proof (H w) as [].
pose proof (H3 PPwx).
destruct PPwx.
destruct H5.
destruct H1.
exists w.
split; assumption.
Qed.
I'm happy with the fact that I completed this proof. However, I find it quite messy, and I don't know how to improve it. (The only thing I can think of is to use destructuring patterns instead of bare destructs.) Is it possible to improve this proof? If so, please do not use super complex tactics: I would like to understand the improvements you propose.
Here is a refactoring of your proof:
Require Import Classical.
Parameter Entity: Set.
Parameter P : Entity -> Entity -> Prop.
Axiom P_refl : forall x, P x x.
Axiom P_trans : forall x y z,
P x y -> P y z -> P x z.
Axiom P_antisym : forall x y,
P x y -> P y x -> x = y.
Definition PP x y := P x y /\ x <> y.
Definition O x y := exists z, P z x /\ P z y.
Axiom strong_supp : forall x y,
~ P y x -> exists z, P z y /\ ~ O z x.
Theorem extension : forall x y,
(exists z, PP z x) -> (forall z, PP z x <-> PP z y) -> x = y.
Proof.
  intros x y [w PPwx] x_equiv_y.
  apply NNPP. intros x_ne_y.
  assert (~ P y x) as NPyx.
  { intros Pxy.
    enough (PP y y) as [_ y_ne_y] by congruence.
    rewrite <- x_equiv_y. split; congruence. }
  destruct (strong_supp x y NPyx) as (z & Pzy & NOzx).
  assert (y <> z) as y_ne_z.
  { intros <-. (* Substitute z right away. *)
    assert (PP w y) as [Pwy NEwy] by (rewrite <- x_equiv_y; trivial).
    destruct PPwx as [Pwx NEwx].
    apply NOzx.
    now exists w. }
  assert (PP z x) as [Pzx _].
  { rewrite x_equiv_y. split; congruence. }
  apply NOzx. exists z. split; trivial.
  apply P_refl.
Qed.
The main changes are:
Give explicit and informative names to all the intermediate hypotheses (i.e., avoid doing destruct foo as [x []])
Use curly braces to separate the proofs of the intermediate assertions from the main proof.
Use the congruence tactic to automate some of the low-level equality reasoning. Roughly speaking, this tactic solves goals that can be established just by rewriting with equalities and pruning subgoals with contradictory statements like x <> x (see the small example after this list).
Condense trivial proof steps using the assert ... by tactic, which does not generate new subgoals.
Use the (a & b & c) destruct patterns rather than [a [b c]], which is harder to read.
Rewrite with x_equiv_y to avoid sequences such as specialize... destruct... apply H0 in H1.
Avoid some unnecessary uses of classical reasoning.
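To make the congruence point concrete, here is a minimal, self-contained illustration (not taken from the proof above) of the two uses described: closing a goal by rewriting with equalities, and discharging a branch whose hypotheses are contradictory.
Goal forall (f : nat -> nat) (x y : nat), x = y -> f x = f y.
Proof. intros f x y E. congruence. Qed.

Goal forall (x y : nat), x = y -> x <> y -> False.
Proof. intros x y E NE. congruence. Qed.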

Relations with dependent types in Coq

I want to define a relation over two type families in Coq and have come up with the following definition dep_rel and the identity relation dep_rel_id:
Require Import Coq.Logic.JMeq.
Require Import Coq.Program.Equality.
Require Import Classical_Prop.
Definition dep_rel (X Y: Type -> Type) :=
forall i, X i -> forall j, Y j -> Prop.
Inductive dep_rel_id {X} : dep_rel X X :=
| dep_rel_id_intro i x: dep_rel_id i x i x.
However, I got stuck when I tried to prove the following lemma:
Lemma dep_rel_id_inv {E} i x j y:
@dep_rel_id E i x j y -> existT _ i x = existT _ j y.
Proof.
intros H. inversion H. subst.
Abort.
inversion H seems to ignore the fact that the two xs in dep_rel_id i x i x are the same. I end up with the proof state:
E : Type -> Type
j : Type
x, y, x0 : E j
H2 : existT (fun x : Type => E x) j x0 = existT (fun x : Type => E x) j x
H : dep_rel_id j x j y
x1 : E j
H5 : existT (fun x : Type => E x) j x1 = existT (fun x : Type => E x) j y
x2 : E j
H4 : j = j
============================
existT E j x = existT E j x1
I don't think the goal can be proved in this way. Are there any tactics for situation like this that I'm not aware of?
By the way, I was able to prove the lemma with a somewhat tweaked definition, shown below:
Inductive dep_rel_id' {X} : dep_rel X X :=
| dep_rel_id_intro' i x j y:
i = j -> x ~= y -> dep_rel_id' i x j y.
Lemma dep_rel_id_inv' {E} i x j y:
@dep_rel_id' E i x j y -> existT _ i x = existT _ j y.
Proof.
intros H. inversion H. subst.
apply inj_pair2 in H0.
apply inj_pair2 in H1. subst.
reflexivity.
Qed.
But I'm still curious whether this can be done in a simpler way (without using JMeq probably?). I'd be grateful for your suggestions.
I'm not sure what the issue is with inversion; indeed, it seems to have lost track of an important equality. However, using case H instead of inversion H seems to work just fine:
Lemma dep_rel_id_inv {E} i x j y:
@dep_rel_id E i x j y -> existT _ i x = existT _ j y.
Proof.
intros H.
case H.
reflexivity.
Qed.
But having case or destruct do the job where inversion couldn't is very surprising to me. I suspect some kind of bug/wrong simplification by inversion, as simple inversion also gives a hypothesis from which one can prove the goal.

Coq: Induction on associated variable

I can figure out how to prove my "degree_descent" Theorem below if I really need to:
Variable X : Type.
Variable degree : X -> nat.
Variable P : X -> Prop.
Axiom inductive_by_degree : forall n, (forall x, S (degree x) = n -> P x) -> (forall x, degree x = n -> P x).
Lemma hacky_rephrasing : forall n, forall x, degree x = n -> P x.
Proof. induction n; intros.
- apply (inductive_by_degree 0). discriminate. exact H.
- apply (inductive_by_degree (S n)); try exact H. intros y K. apply IHn. injection K; auto.
Qed.
Theorem degree_descent : forall x, P x.
Proof. intro. apply (hacky_rephrasing (degree x)); reflexivity.
Qed.
but this "hacky_rephrasing" Lemma is an ugly and unintuitive pattern to me. Is there a better way to prove degree_descent all by itself? For example, using set or pose to introduce n := degree x and then invoking induction n isn't working because it annihilates the hypothesis from the subsequent contexts (if someone could explain why this occurs, too, that would be helpful!). I can't figure out how to get generalize to work with me here, either.
PS: This is just weak induction for simplicity, but ideally I would like the solution to work with custom induction schemes via induction ... using ....
It looks like you would like to use the remember tactic:
Variable X : Type.
Variable degree : X -> nat.
Variable P : X -> Prop.
Axiom inductive_by_degree : forall n, (forall x, S (degree x) = n -> P x) -> (forall x, degree x = n -> P x).
Theorem degree_descent : forall x, P x.
Proof.
intro x. remember (degree x) as n eqn:E.
symmetry in E. revert x E.
(* Goal: forall x : X, degree x = n -> P x *)
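(* From here the induction from the question's hacky_rephrasing goes through
   essentially unchanged; a sketch (untested), mirroring the asker's script:
     induction n; intros x E.
     - apply (inductive_by_degree 0); try exact E.
       intros y K. discriminate K.
     - apply (inductive_by_degree (S n)); try exact E.
       intros y K. apply IHn. injection K; auto. *)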
Restart. From Coq Require Import ssreflect.
(* Or ssreflect style *)
move=> x; move: {2}(degree x) (eq_refl : degree x = _)=> n.
(* ... *)

Path induction using eq_rect

According to Homotopy Type Theory (page 49), this is the full induction principle for equality:
Definition path_induction (A : Type) (C : forall x y : A, (x = y) -> Type)
(c : forall x : A, C x x eq_refl) (x y : A) (prEq : x = y)
: C x y prEq :=
match prEq with
| eq_refl => c x
end.
I don't understand much about HoTT, but I do see that path induction is stronger than eq_rect:
Lemma path_ind_stronger : forall (A : Type) (x y : A) (P : A -> Type)
(prX : P x) (prEq : x = y),
eq_rect x P prX y prEq =
path_induction A (fun x y pr => P x -> P y) (fun x pr => pr) x y prEq prX.
Proof.
intros. destruct prEq. reflexivity.
Qed.
Conversely, I failed to construct path_induction from eq_rect. Is it possible? If not, what is the correct induction principle for equality? I thought those principles were mechanically derived from the Inductive type definitions.
EDIT
Thanks to the answer below, the full induction principle on equality can be generated by
Scheme eq_rect_full := Induction for eq Sort Prop.
Then we get the converse,
Lemma eq_rect_full_works : forall (A : Type) (C : forall x y : A, (x = y) -> Prop)
(c : forall x : A, C x x eq_refl) (x y : A)
(prEq : x = y),
path_induction A C c x y prEq
= eq_rect_full A x (fun y => C x y) (c x) y prEq.
Proof.
intros. destruct prEq. reflexivity.
Qed.
I think you are referring to the fact that the result type of path_induction mentions the path that is being destructed, whereas the one of eq_rect does not. This omission is the default for inductive propositions (as opposed to what happens with Type), because the extra argument is not usually used in proof-irrelevant developments. Nevertheless, you can instruct Coq to generate more complete induction principles with the Scheme command: https://coq.inria.fr/distrib/current/refman/user-extensions/proof-schemes.html?highlight=minimality. (The Minimality variant is the one used for propositions by default.)
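To make that concrete, here is roughly what the scheme generated in the EDIT above looks like when checked (a sketch; binder names may differ slightly between Coq versions):
Scheme eq_rect_full := Induction for eq Sort Prop.
Check eq_rect_full.
(* Expected (roughly):
   eq_rect_full
     : forall (A : Type) (x : A) (P : forall a : A, x = a -> Prop),
       P x eq_refl -> forall (a : A) (e : x = a), P a e *)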

Termination implies existence of normal form

I would like to prove that termination implies existence of normal form. These are my definitions:
Section Forms.
Require Import Classical_Prop.
Require Import Classical_Pred_Type.
Context {A : Type}.
Variable R : A -> A -> Prop.
Definition Inverse (Rel : A -> A -> Prop) := fun x y => Rel y x.
Inductive ReflexiveTransitiveClosure : Relation A A :=
| rtc_into (x y : A) : R x y -> ReflexiveTransitiveClosure x y
| rtc_trans (x y z : A) : R x y -> ReflexiveTransitiveClosure y z ->
ReflexiveTransitiveClosure x z
| rtc_refl (x y : A) : x = y -> ReflexiveTransitiveClosure x y.
Definition redc (x : A) := exists y, R x y.
Definition nf (x : A) := ~(redc x).
Definition nfo (x y : A) := ReflexiveTransitiveClosure R x y /\ nf y.
Definition terminating := forall x, Acc (Inverse R) x.
Definition normalizing := forall x, (exists y, nfo x y).
End Forms.
I'd like to prove:
Lemma terminating_impl_normalizing (T : terminating):
normalizing.
I have been banging my head against the wall for a couple of hours now, and I've made almost no progress. I can show:
Lemma terminating_not_inf_forall (T : terminating) :
forall f : nat -> A, ~ (forall n, R (f n) (f (S n))).
which I believe should help (this is also true without classic).
Here is a proof using the excluded middle. I reformulated the problem to replace custom definitions by standard ones (note, by the way, that in your definition of the closure, the rtc_into constructor is redundant given the other two). I also reformulated terminating using well_founded.
Require Import Classical_Prop.
Require Import Relations.
Section Forms.
Context {A : Type} (R:relation A).
Definition inverse := fun x y => R y x.
Definition redc (x : A) := exists y, R x y.
Definition nf (x : A) := ~(redc x).
Definition nfo (x y : A) := clos_refl_trans _ R x y /\ nf y.
Definition terminating := well_founded inverse. (* forall x, Acc inverse x. *)
Definition normalizing := forall x, (exists y, nfo x y).
Lemma terminating_impl_normalizing (T : terminating):
normalizing.
Proof.
  unfold normalizing.
  apply (well_founded_ind T). intros.
  destruct (classic (redc x)).
  - destruct H0 as [y H0]. pose proof (H _ H0).
    destruct H1 as [y' H1]. exists y'. unfold nfo.
    destruct H1.
    split.
    + apply rt_trans with (y:=y). apply rt_step. assumption. assumption.
    + assumption.
  - exists x. unfold nfo. split. apply rt_refl. assumption.
Qed.
End Forms.
The proof is not very complicated but here are the main ideas:
use well-founded induction (see the Check sketch after this list)
thanks to the excluded middle principle, separate the case where x is not in normal form from the case where it is
if x is not in normal form, use the induction hypothesis and the transitivity of the closure to conclude
if x is already in normal form, we are done
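For reference, here is the type of the library lemma driving the well-founded induction step (a Check sketch; well_founded_ind comes from the Coq prelude, and the exact binder names may differ across versions):
Check well_founded_ind.
(* Expected (roughly):
   well_founded_ind
     : forall (A : Type) (R : A -> A -> Prop),
       well_founded R ->
       forall P : A -> Prop,
       (forall x : A, (forall y : A, R y x -> P y) -> P x) ->
       forall a : A, P a *)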