Coq simple implies proof

I am trying to prove the following trivial cancellation theorem for natural numbers:
forall i, j, k in nat . ((i+j) = (i+k)) -> (j = k)
Here is what I wrote in Coq:
Theorem cancel : forall (i j k : nat),
  ((add i j) = (add i k)) -> (j = k).
Proof.
  intros i j k.
  induction i.
  simpl.
After this, Coq asks me to prove the base case of the induction, which is trivial:
j = k -> j = k
Almost all Coq tutorials start with proving A -> A, but when I try to use such proofs I get stuck. For example, I'm using:
Theorem my_first_proof : (forall A : Prop, A -> A).
Proof.
  intros A.
  intros proof_of_A.
  exact proof_of_A.
Qed.
And then when I try to:
rewrite -> my_first_proof
I get the following error:
Error: Cannot find a relation to rewrite.
Any help is very much appreciated, thanks!

The right tactic in this case is apply.
apply my_first_proof.
rewrite is used to replace one subterm of the goal with another, usually with a lemma showing that these subterms are equal or equivalent in some sense. my_first_proof is not an equality proof.
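For concreteness, here is a minimal sketch of how the whole proof could go, assuming add is the standard Nat.add (written i + j below) and that my_first_proof from above is in scope:

Theorem cancel : forall (i j k : nat),
  (i + j) = (i + k) -> (j = k).
Proof.
  intros i j k.
  induction i as [| i IHi].
  - simpl.                 (* base case: j = k -> j = k *)
    apply my_first_proof.  (* instantiates A with j = k *)
  - simpl. intro H.        (* H : S (i + j) = S (i + k) *)
    apply IHi.
    now injection H.       (* strips the S from both sides *)
Qed.

The point is that apply unifies the conclusion of my_first_proof with the goal (here the whole implication), whereas rewrite needs an equation, or more generally a registered rewriting relation.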

Proof leaking in Coq extraction?

In order to understand how general recursive Function definitions work, and how they comply with Coq's structural recursion constraint, I tried to reimplement them on the Peano natural numbers. I want to define recursive nat -> nat functions that can use any previous values, not just the predecessor. Here is what I did:
Definition nat_strong_induction_set
  (* erased at extraction, type specification *)
  (P : nat -> Set)
  (* The strong induction step. To build the P n it can, but does not have to,
     recursively query the construction of any previous P k's. *)
  (ind_step : forall n : nat, (forall k : nat, (lt k n -> P k)) -> P n)
  (n : nat)
  : P n.
Proof.
  (* Force the hypothesis of ind_step as a standard induction hypothesis *)
  assert (forall m k : nat, lt k m -> P k) as partial_build.
  { induction m.
    - intros k H0. destruct k; inversion H0.
    - intros k H0. apply ind_step. intros k0 H1. apply IHm.
      apply (lt_transitive k0 k). assumption.
      apply le_lt_equiv. assumption. }
  apply (partial_build (S n) n). apply succ_lt.
Defined.
I used some custom lemmas on nat that I didn't paste here. It works: I managed to define Euclidean division div a b with it, which recursively uses div (a-b) b. The extraction is almost what I expected:
let nat_strong_induction_set ind_step n =
  let m = S n in
  let rec f n0 k =
    match n0 with
    | O -> assert false (* absurd case *)
    | S n1 -> ind_step k (fun k0 _ -> f n1 k0)
  in f m n
Except for the n0 parameter. We see that its only effect is to stop the recursion at the (S n)-th step. The extraction even notes that the assert false case should never happen. So why is it extracted at all? This seems better:
let nat_strong_induction_set ind_step n =
let rec f k = ind_step k (fun k0 _ -> f k0)
in f n
It looks like an artifact of Coq's structural recursion constraint, which ensures the termination of all recursions. The Coq definition of nat_strong_induction_set requires lt k n, so Coq knows only previous P k's will be queried. This gives a decreasing chain in the naturals, which must terminate in fewer than S n steps. That allows a structurally recursive definition on an additional fuel parameter n0 starting at S n, and it doesn't affect the result. So if it is only part of the termination proof, why is it not erased by the extraction?
Your match is not erased because your definition mixes two things: the termination argument, where the match is needed, and the computationally relevant recursive call, where it isn't.
To force erasure, you need to convince Coq that the match is computationally irrelevant. You can do so by making the termination argument -- that is, the induction on m -- produce the proof of a proposition instead of a function of type forall m k, lt k m -> P k. Luckily, the standard library provides an easy way of doing so, with the Fix combinator:
Require Import Coq.Arith.Wf_nat.

Definition nat_strong_induction_set
  (P : nat -> Set)
  (ind_step : forall n : nat, (forall k : nat, (lt k n -> P k)) -> P n)
  (n : nat)
  : P n :=
  Fix lt_wf P ind_step n.
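For reference, the Fix combinator from Coq.Init.Wf has roughly the following type (its A and R arguments are implicit, which is why only four arguments appear above):

Fix
     : forall (A : Type) (R : A -> A -> Prop),
       well_founded R ->
       forall P : A -> Type,
       (forall x : A, (forall y : A, R y x -> P y) -> P x) ->
       forall x : A, P x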
Here, lt_wf is a proof that lt is well-founded. When you extract this function, you get
let rec nat_strong_induction_set ind_step n =
ind_step n (fun y _ -> nat_strong_induction_set ind_step y)
which is exactly what you wanted.
(As an aside, note that you don't need well-founded recursion to define division -- check for instance how it is defined in the Mathematical Components library.)
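For instance, here is a sketch of a purely structural division in that spirit; it mirrors the auxiliary-accumulator style of the standard library's Nat.divmod, where u counts down from y' so that the recursion is structural on x:

(* Structural division: u is reset to y each time a full divisor has been consumed. *)
Fixpoint divmod (x y q u : nat) : nat * nat :=
  match x with
  | 0 => (q, u)
  | S x' => match u with
            | 0 => divmod x' y (S q) y
            | S u' => divmod x' y q u'
            end
  end.

Definition div (x y : nat) : nat :=
  match y with
  | 0 => 0
  | S y' => fst (divmod x y' 0 y')
  end.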

Well-founded induction in Coq

Let's say that I know certain natural numbers are good. I know 1 is good, if n is good then 3n is, and if n is good then n+5 is, and those are the only ways of constructing good numbers. It seems to me that the adequate formalization of this in Coq is
Inductive good : nat -> Prop :=
| g1 : good 1
| g3 : forall n, good n -> good (n * 3)
| g5 : forall n, good n -> good (n + 5).
However, despite being obvious, the fact that 0 is not good does not seem to be provable from this definition (because when I invert, in the g3 case I only get the same thing back in the hypothesis).
Now, it isn't so obvious exactly which numbers are good. And it really seems that I don't need to characterize them completely in order to know that 0 is not good. For example, I can tell that 2 is not good just by doing a few inversions.
Indeed, g3 can be applied an unbounded number of times when trying to disprove good 0. That is why one can expect this proof to require induction (and we can see that the auxiliary lemma needed in #AntonTrunov's solution uses induction). The same idea is used in the theorem loop_never_stop of http://www.cis.upenn.edu/~bcpierce/sf/current/Imp.html#lab428.
Require Import Omega.
Example not_good_0 : ~ good 0.
Proof.
intros contra. remember 0 as n. induction contra.
discriminate. apply IHcontra. omega. omega.
Qed.
This problem needs induction. And induction needs some predicate P : nat -> Prop to work with. A primitive (constant) predicate like (fun n => ~good 0) doesn't give you much: you won't be able to prove the base case for 1 (which corresponds to the constructor g1), because the predicate "forgets" its argument.
So you need to prove some logically equivalent (or stronger) statement which readily will give you the necessary predicate.
An example of such equivalent statement is forall n, good n -> n > 0, which you can later use to disprove good 0. The corresponding predicate P is (fun n => n > 0).
Require Import Coq.Arith.Arith.
Require Import Coq.omega.Omega.
Inductive good : nat -> Prop :=
| g1 : good 1
| g3 : forall n, good n -> good (n * 3)
| g5 : forall n, good n -> good (n + 5).
Lemma good_gt_O: forall n, good n -> n > 0.
Proof.
intros n H. induction H; omega.
Qed.
Goal ~ good 0.
intro H. now apply good_gt_O in H.
Qed.
Here is a proof of the aforementioned equivalence:
Lemma not_good0_gt_zero_equiv_not_good0 :
(forall n, good n -> n > 0) <-> ~ good 0.
Proof.
split; intros; firstorder.
destruct n; [tauto | omega].
Qed.
And it's easy to show that forall n, n = 0 -> ~ good n which implicitly appears in #eponier's answer is equivalent to ~ good 0 too.
Lemma not_good0_eq_zero_equiv_not_good0 :
(forall n, n = 0 -> ~ good n) <-> ~ good 0.
Proof.
split; intros; subst; auto.
Qed.
Now, the corresponding predicate used to prove forall n, n = 0 -> ~ good n is fun n => n = 0 -> False. This can be shown by manually applying the good_ind induction principle, automatically generated by Coq:
Example not_good_0_manual : forall n, n = 0 -> ~ good n.
Proof.
intros n Eq contra.
generalize Eq.
refine (good_ind (fun n => n = 0 -> False) _ _ _ _ _);
try eassumption; intros; omega.
Qed.
generalize Eq. introduces n = 0 as a premise of the current goal. Without it, the goal to prove would be just False and the corresponding predicate would be the uninformative fun n => False.
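For reference, the automatically generated induction principle used by the refine above has the following type (what Check good_ind prints):

good_ind
     : forall P : nat -> Prop,
       P 1 ->
       (forall n : nat, good n -> P n -> P (n * 3)) ->
       (forall n : nat, good n -> P n -> P (n + 5)) ->
       forall n : nat, good n -> P n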

Induction on predicates with product type arguments

If I have a predicate like this:
Inductive foo : nat -> nat -> Prop :=
| Foo : forall n, foo n n.
then I can trivially use induction to prove some dummy lemmas:
Lemma foo_refl : forall n n',
foo n n' -> n = n'.
Proof.
intros.
induction H.
reflexivity.
Qed.
However, for a predicate with product type arguments:
Inductive bar : (nat * nat) -> (nat * nat) -> Prop :=
| Bar : forall n m, bar (n, m) (n, m).
a similar proof of a nearly identical lemma gets stuck, because all assumptions about the variables disappear:
Lemma bar_refl : forall n n' m m',
bar (n, m) (n', m') -> n = n'.
Proof.
intros.
induction H.
(* :( *)
Why is this happening? If I replace induction with inversion, then it behaves as expected.
The lemma is still provable with induction but requires some workarounds:
Lemma bar_refl : forall n n' m m',
  bar (n, m) (n', m') -> n = n'.
Proof.
  intros.
  remember (n, m) as nm.
  remember (n', m') as n'm'.
  induction H.
  inversion Heqnm. inversion Heqn'm'. repeat subst.
  reflexivity.
Qed.
Unfortunately, this way proofs get completely cluttered and become impossible to follow for more complicated predicates.
One obvious solution would be to declare bar like this:
Inductive bar' : nat -> nat -> nat -> nat -> Prop :=
| Bar' : forall n m, bar' n m n m.
This solves all the problems. Yet, for my purposes, I find the previous ("tupled") approach somewhat more elegant. Is there a way to keep the predicate as it is and still be able to do manageable inductive proofs? Where does the problem even come from?
The issue is that induction only works on variables, not on constructed terms. This is why you should first prove something like
Lemma bar_refl : forall p q, bar p q -> fst p = fst q.
which is trivially proved by now induction 1., and then use it to prove your lemma.
If you don't want the intermediate lemma to have a name, your solution is the correct one: you need to help Coq with remember to generalize your goal, and then you'll be able to prove it.
I don't remember exactly where this restriction comes from, but I recall something about making some unification problem undecidable.
Often in these situations one can do induction on one of the sub-terms.
In your case your lemma can be proved by induction on n, with
Lemma bar_refl : forall n n' m m', bar (n, m) (n', m') -> n = n'.
Proof. induction n; intros; inversion H; auto. Qed.
... all assumptions about variables disappear... Why is this happening? If I replace induction with inversion, then it behaves as expected.
The reason that happens is described perfectly in this blog post:
Dependent Case Analysis in Coq without Axioms
by James Wilcox. Let me quote the most relevant part for this case:
When Coq performs a case analysis, it first abstracts over all indices. You may have seen this manifest as a loss of information when using destruct on predicates (try destructing even 3 for example: it just deletes the hypothesis!), or when doing induction on a predicate with concrete indices (try proving forall n, even (2*n+1) -> False by induction on the hypothesis (not the nat) -- you'll be stuck!). Coq essentially forgets the concrete values of the indices. When trying to induct on such a hypothesis, one solution is to replace each concrete index with a new variable together with a constraint that forces the variable to be equal to the correct concrete value. destruct does something similar: when given a term of some inductive type with concrete index values, it first replaces the concrete values with new variables. It doesn't add the equality constraints (but inversion does). The error here is about abstracting out the indices. You can't just go replacing concrete values with arbitrary variables and hope that things still type check. It's just a heuristic.
To give a concrete example, when using destruct H. one basically does pattern-matching like so:
Lemma bar_refl : forall n n' m m',
bar (n, m) (n', m') -> n = n'.
Proof.
intros n n' m m' H.
refine (match H with
| Bar a b => _
end).
with the following proof state:
n, n', m, m' : nat
H : bar (n, m) (n', m')
a, b : nat
============================
n = n'
To get almost exactly this proof state, we would also have to erase H from the context using the clear H command: refine (...); clear H. This rather primitive pattern matching does not allow us to prove our goal.
Coq abstracted away (n, m) and (n', m'), replacing them with some pairs p and p', such that p = (a, b) and p' = (a, b). Unfortunately, our goal has the form n = n', and neither (n, m) nor (n', m') occurs in it -- that's why Coq didn't change the goal to a = a.
But there is a way to tell Coq to do that. I don't know how to do exactly that using tactics, so I'll show a proof term. It is going to look somewhat similar to #Vinz's solution, but notice that I didn't change the statement of the lemma:
Undo. (* to undo the previous pattern-matching *)
refine (match H in (bar p p') return fst p = fst p' with
| Bar a b => _
end).
This time we added more annotations so that Coq understands the connection between the components of H's type and the goal -- we explicitly named the pairs p and p', and because we told Coq to treat our goal as fst p = fst p', it replaces p and p' in the goal with (a, b). Our proof state now looks like this:
n, n', m, m' : nat
H : bar (n, m) (n', m')
a, b : nat
============================
fst (a, b) = fst (a, b)
and a simple reflexivity finishes the proof.
I think now it should be clear why destruct works fine in the following lemma (don't look at the answer below, try to figure it out first):
Lemma bar_refl_all : forall n n' m m',
bar (n, m) (n', m') -> (n, m) = (n', m').
Proof.
intros. destruct H. reflexivity.
Qed.
Answer: because the goal contains the same pairs that are present in the hypothesis's type, Coq replaces them all with the corresponding variables, which prevents the information loss.
Another way ...
Lemma bar_refl n n' m m' : bar (n, m) (n', m') -> n = n'.
Proof.
  change (n = n') with (fst (n, m) = fst (n', m')).
  generalize (n, m) (n', m').
  intros ? ? [ ]; reflexivity.
Qed.
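For completeness, one more option: the dependent induction tactic from Coq.Program.Equality automates the remember-style bookkeeping (a sketch; note that this tactic may rely on the JMeq axiom):

Require Import Coq.Program.Equality.

Lemma bar_refl_dep : forall n n' m m',
  bar (n, m) (n', m') -> n = n'.
Proof.
  intros n n' m m' H.
  dependent induction H. (* the pair equations are injected and substituted for us *)
  reflexivity.
Qed.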

How to prove False from obviously contradictory assumptions

Suppose I want to prove following Theorem:
Theorem succ_neq_zero : forall n m: nat, S n = m -> 0 = m -> False.
This one is trivial, since m cannot be both a successor and zero, as assumed. However, I found it quite tricky to prove, and I don't see how to do it without an auxiliary lemma:
Lemma succ_neq_zero_lemma : forall n : nat, O = S n -> False.
Proof.
intros.
inversion H.
Qed.
Theorem succ_neq_zero : forall n m: nat, S n = m -> 0 = m -> False.
Proof.
intros.
symmetry in H.
apply (succ_neq_zero_lemma n).
transitivity m.
assumption.
assumption.
Qed.
I am pretty sure there is a better way to prove this. What is the best way to do it?
You just need to substitute for m in the first equation:
Theorem succ_neq_zero : forall n m: nat, S n = m -> 0 = m -> False.
Proof.
intros n m H1 H2; rewrite <- H2 in H1; inversion H1.
Qed.
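Essentially the same idea can be written with subst and discriminate (a sketch; the primed name just avoids clashing with the statement above):

Theorem succ_neq_zero' : forall n m : nat, S n = m -> 0 = m -> False.
Proof.
  intros n m H1 H2.
  subst m.         (* use 0 = m to replace m everywhere *)
  discriminate H1. (* H1 : S n = 0 cannot hold *)
Qed.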
There's a very easy way to prove it:
Theorem succ_neq_zero : forall n m: nat, S n = m -> 0 = m -> False.
Proof.
congruence.
Qed.
The congruence tactic is a decision procedure for ground equalities with uninterpreted symbols. It is complete for uninterpreted symbols and for constructors, so in cases like this one it derives a contradiction from the hypotheses S n = m and 0 = m, since a term cannot start with both S and 0.
It might be useful to know how congruence works.
To prove that two terms built with different constructors are in fact different, create a function that returns True in one case and False in the others, and then use it to derive True = False from the assumed equality. I think this is explained in Coq'Art.
Example not_congruent: 0 <> 1.
intros C. (* now our goal is 'False' *)
pose (fun m => match m with 0 => True | S _ => False end) as f.
assert (Contra: f 1 = f 0) by (rewrite C; reflexivity).
(* f 1 is convertible to False and f 0 to True *)
now replace False with True by (symmetry; exact Contra).
Qed.
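For what it's worth, the discriminate tactic packages exactly this constructor-distinguishing trick:

Example not_congruent' : 0 <> 1.
Proof. discriminate. Qed.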

Coq induction start at specific nat

I'm trying to learn Coq, so please assume I know nothing about it.
If I have a lemma in coq that starts
forall n m:nat, n>=1 -> m>=1 ...
And I want to proceed by induction on n. How do I start the induction at 1? Currently, when I use the "induction n." tactic, it starts at zero, and this makes the base statement false, which makes it hard to proceed.
Any hints?
The following is a proof that a predicate P holds for all n >= 1, provided that P holds for 1 and that P is preserved by taking successors.
Require Import Omega.
Parameter P : nat -> Prop.
Parameter p1 : P 1.
Parameter pS : forall n, P n -> P (S n).
Goal forall n, n>=1 -> P n.
We begin the proof by induction.
induction n; intro.
A false base case is no problem if you have a false hypothesis lying around, in this case 0 >= 1.
- exfalso. omega.
The inductive case is tricky, because to access a proof of P n we first have to prove that n >= 1. The trick is to do a case analysis on n. If n = 0, then we can trivially prove the goal P 1. If n >= 1, we can access P n and then prove the rest.
- destruct n.
  + apply p1.
  + assert (S n >= 1) by omega.
    intuition.
    apply pS.
    trivial.
Qed.
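As a variant, here is a sketch (reusing the same P, p1 and pS parameters) that avoids the tricky step by peeling off the n = 0 case first and then inducting on the predecessor:

Goal forall n, n >= 1 -> P n.
Proof.
  intros n H.
  destruct n as [| n'].
  - exfalso. omega.          (* 0 >= 1 is absurd *)
  - clear H. induction n' as [| n' IH].
    + apply p1.              (* base case: P 1 *)
    + apply pS. apply IH.    (* step: P (S n') gives P (S (S n')) *)
Qed.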