Coq induction starting at a specific nat

I'm trying to learn Coq, so please assume I know nothing about it.
If I have a lemma in coq that starts
forall n m:nat, n>=1 -> m>=1 ...
And I want to proceed by induction on n. How do I start the induction at 1? Currently, when I use the "induction n." tactic it starts at zero, and this makes the base statement false, which makes it hard to proceed.
Any hints?

The following is a proof that any proposition P is true for all n >= 1, provided P is true for 1 and P is preserved inductively (P n implies P (S n)).
Require Import Omega.
Parameter P : nat -> Prop.
Parameter p1 : P 1.
Parameter pS : forall n, P n -> P (S n).
Goal forall n, n>=1 -> P n.
We begin the proof by induction.
induction n; intro.
A false base case is no problem if you have a false hypothesis lying around, in this case 0 >= 1.
- exfalso. omega.
The inductive case is tricky, because to access a proof of P n, we first have to prove that n >= 1. The trick is to do a case analysis on n. If n = 0, then we can prove the goal P 1 directly. If n >= 1, we can access P n and then prove the rest.
- destruct n.
+ apply p1.
+ assert (S n >= 1) by omega.
intuition.
apply pS.
trivial.
Qed.
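An alternative that avoids the vacuous base case altogether is to restate the lemma so that induction naturally starts at 1: prove P (S n) for every n by induction on n, and then recover the original statement by a case analysis. The following is only a sketch built on the same Parameter declarations as above; the lemma name P_from_one is not part of the original.
Lemma P_from_one : forall n, P (S n).
Proof.
induction n.
- apply p1.              (* S 0 is literally 1 *)
- apply pS. exact IHn.   (* the step pS takes P (S n) to P (S (S n)) *)
Qed.
Goal forall n, n >= 1 -> P n.
Proof.
intros n H. destruct n.
- exfalso. omega.        (* 0 >= 1 is impossible *)
- apply P_from_one.
Qed.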


Coq simple implies proof

I am trying to prove the following trivial cancellation theorem for natural numbers:
forall i, j, k in nat . ((i+j) = (i+k)) -> (j = k)
Here is what I wrote in Coq:
Theorem cancel : forall (i j k : nat),
((add i j) = (add i k)) -> (j = k).
Proof.
intros i j k.
induction i.
simpl.
After which Coq asks me to prove the base case of the induction, which is trivial:
j = k -> j = k
Almost all Coq tutorials start with proving A -> A, but when I try to use such proofs I get stuck. For example, I'm using:
Theorem my_first_proof : (forall A : Prop, A -> A).
Proof.
intros A.
intros proof_of_A.
exact proof_of_A.
Qed.
And then when I try to:
rewrite -> my_first_proof
I get the following error:
Error: Cannot find a relation to rewrite.
Any help is very much appreciated, thanks!
The right tactic in this case is apply.
apply my_first_proof.
rewrite is used to replace one subterm of the goal with another, usually with a lemma showing that these subterms are equal or equivalent in some sense. my_first_proof is not an equality proof.
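For completeness, here is one way the original cancellation goal can be finished, assuming add is ordinary addition on nat (written with + below); this is only a sketch, not the only possible proof.
Theorem cancel : forall (i j k : nat),
  i + j = i + k -> j = k.
Proof.
intros i j k.
induction i as [| i IH]; simpl.
- (* base case: 0 + j and 0 + k reduce, so the goal is j = k -> j = k;
     apply my_first_proof would close it as well *)
  intro H. exact H.
- (* inductive case: the goal is S (i + j) = S (i + k) -> j = k *)
  intro H. injection H. intro H'. apply IH. exact H'.
Qed.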

Wellfounded induction in Coq

Let's say that I know certain natural numbers are good. I know 1 is good, if n is good then 3n is, and if n is good then n+5 is, and those are the only ways of constructing good numbers. It seems to me that the adequate formalization of this in Coq is
Inductive good : nat -> Prop :=
| g1 : good 1
| g3 : forall n, good n -> good (n * 3)
| g5 : forall n, good n -> good (n + 5).
However, despite being obvious, the fact that 0 is not good does not seem to be provable with this definition (because when I invert, in the g3 case I only get the same thing back in the hypothesis).
Now it isn't so obvious what exactly the good numbers are, and it really seems that I don't need to characterize them completely in order to know that 0 is not good. For example, I can see that 2 is not good just by doing a few inversions.
Indeed g3 can be applied an unbounded number of times when trying to disprove good 0. That is why we can expect this proof to require induction (and we can see that the auxiliary lemma needed in @AntonTrunov's solution uses induction). The same idea is used in theorem loop_never_stop of http://www.cis.upenn.edu/~bcpierce/sf/current/Imp.html#lab428.
Require Import Omega.
Example not_good_0 : ~ good 0.
Proof.
intros contra. remember 0 as n. induction contra.
discriminate. apply IHcontra. omega. omega.
Qed.
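As a side note, the question's remark about 2 can be checked in the same style. The sketch below uses a single inversion and then lets omega refute the leftover arithmetic equations, so it is not literally "a few inversions", but it does show that no induction is needed for 2.
Example not_good_2 : ~ good 2.
Proof.
intros contra.
(* inversion discharges the g1 case (1 <> 2); in the g3 and g5 cases
   omega refutes the remaining equations n * 3 = 2 and n + 5 = 2 *)
inversion contra; omega.
Qed.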
This problem needs induction. And induction needs some predicate P : nat -> Prop to work with. A primitive (constant) predicate like (fun n => ~good 0) doesn't give you much: you won't be able to prove the base case for 1 (which corresponds to the constructor g1), because the predicate "forgets" its argument.
So you need to prove some logically equivalent (or stronger) statement which readily will give you the necessary predicate.
An example of such equivalent statement is forall n, good n -> n > 0, which you can later use to disprove good 0. The corresponding predicate P is (fun n => n > 0).
Require Import Coq.Arith.Arith.
Require Import Coq.omega.Omega.
Inductive good : nat -> Prop :=
| g1 : good 1
| g3 : forall n, good n -> good (n * 3)
| g5 : forall n, good n -> good (n + 5).
Lemma good_gt_O: forall n, good n -> n > 0.
Proof.
intros n H. induction H; omega.
Qed.
Goal ~ good 0.
intro H. now apply good_gt_O in H.
Qed.
Here is a proof of the aforementioned equivalence:
Lemma not_good0_gt_zero_equiv_not_good0 :
(forall n, good n -> n > 0) <-> ~ good 0.
Proof.
split; intros; firstorder.
destruct n; [tauto | omega].
Qed.
And it's easy to show that forall n, n = 0 -> ~ good n, which implicitly appears in @eponier's answer, is equivalent to ~ good 0 too.
Lemma not_good0_eq_zero_equiv_not_good0 :
(forall n, n = 0 -> ~ good n) <-> ~ good 0.
Proof.
split; intros; subst; auto.
Qed.
Now, the corresponding predicate used to prove forall n, n = 0 -> ~ good n is fun n => n = 0 -> False. This can be shown by manually applying the good_ind induction principle, automatically generated by Coq:
Example not_good_0_manual : forall n, n = 0 -> ~ good n.
Proof.
intros n Eq contra.
generalize Eq.
refine (good_ind (fun n => n = 0 -> False) _ _ _ _ _);
try eassumption; intros; omega.
Qed.
generalize Eq. introduces n = 0 as a premise to the current goal. Without it the goal to prove would be False and the corresponding predicate would be the boring fun n => False again.

Induction on predicates with product type arguments

If I have a predicate like this:
Inductive foo : nat -> nat -> Prop :=
| Foo : forall n, foo n n.
then I can trivially use induction to prove some dummy lemmas:
Lemma foo_refl : forall n n',
foo n n' -> n = n'.
Proof.
intros.
induction H.
reflexivity.
Qed.
However, for a predicate with product type arguments:
Inductive bar : (nat * nat) -> (nat * nat) -> Prop :=
| Bar : forall n m, bar (n, m) (n, m).
a similar proof of a nearly identical lemma gets stuck because all assumptions about the variables disappear:
Lemma bar_refl : forall n n' m m',
bar (n, m) (n', m') -> n = n'.
Proof.
intros.
induction H.
(* :( *)
Why is this happening? If I replace induction with inversion, then it behaves as expected.
The lemma is still provable with induction but requires some workarounds:
Lemma bar_refl : forall n n' m m',
bar (n, m) (n', m') -> n = n'.
Proof.
intros.
remember (n, m) as nm.
remember (n', m') as n'm'.
induction H.
inversion Heqnm. inversion Heqn'm'. repeat subst.
reflexivity.
Qed.
Unfortunately, this way proofs get completely cluttered and become impossible to follow for more complicated predicates.
One obvious solution would be to declare bar like this:
Inductive bar' : nat -> nat -> nat -> nat -> Prop :=
| Bar' : forall n m, bar' n m n m.
This solves all the problems. Yet, for my purposes, I find the previous ("tupled") approach somewhat more elegant. Is there a way to keep the predicate as it is and still be able to do manageable inductive proofs? Where does the problem even come from?
The issue is that induction only works with variables, not constructed terms. This is why you should first prove something like
Lemma bar_refl : forall p q, bar p q -> fst p = fst q.
which is trivially proved by now induction 1. and which you can then use to prove your lemma.
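Spelled out, the intermediate-lemma approach looks like this (a sketch; the auxiliary name bar_fst is used here instead of reusing the name bar_refl):
Lemma bar_fst : forall p q, bar p q -> fst p = fst q.
Proof. now induction 1. Qed.
Lemma bar_refl : forall n n' m m',
  bar (n, m) (n', m') -> n = n'.
Proof.
intros n n' m m' H.
(* fst (n, m) = fst (n', m') is convertible to n = n' *)
exact (bar_fst _ _ H).
Qed.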
If you don't want the intermediate lemma to have a name, your solution is the correct one: you need to help Coq with remember to generalize your goal, and then you'll be able to prove it.
I don't remember exactly where this restriction comes from, but I recall something about making some unification problem undecidable.
Often in these situations one can do induction on one of the sub-terms.
In your case your lemma can be proved by induction on n, with
Lemma bar_refl : forall n n' m m', bar (n, m) (n', m') -> n = n'.
Proof. induction n; intros; inversion H; auto. Qed.
... all assumptions about variables disappear... Why is this happening? If I replace induction with inversion, then it behaves as expected.
The reason that happens is described perfectly in this blog post:
Dependent Case Analysis in Coq without Axioms
by James Wilcox. Let me quote the most relevant part for this case:
When Coq performs a case analysis, it first abstracts over all indices. You may have seen this manifest as a loss of information when using destruct on predicates (try destructing even 3 for example: it just deletes the hypothesis!), or when doing induction on a predicate with concrete indices (try proving forall n, even (2*n+1) -> False by induction on the hypothesis (not the nat) -- you'll be stuck!). Coq essentially forgets the concrete values of the indices. When trying to induct on such a hypothesis, one solution is to replace each concrete index with a new variable together with a constraint that forces the variable to be equal to the correct concrete value. destruct does something similar: when given a term of some inductive type with concrete index values, it first replaces the concrete values with new variables. It doesn't add the equality constraints (but inversion does). The error here is about abstracting out the indices. You can't just go replacing concrete values with arbitrary variables and hope that things still type check. It's just a heuristic.
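To see the information loss the quote describes, here is a minimal sketch of the even example (it assumes the inductive even predicate used in Software Foundations, not the standard library's Nat.Even):
Inductive even : nat -> Prop :=
| even_O : even 0
| even_SS : forall n, even n -> even (S (S n)).
Goal even 3 -> False.
Proof.
intros H.
destruct H.
(* two subgoals remain and neither mentions 3 anymore: one asks for
   False with no hypotheses, the other only provides "even n";
   the information carried by the index has been thrown away *)
Abort.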
To give a concrete example, when using destruct H. one basically does pattern-matching like so:
Lemma bar_refl : forall n n' m m',
bar (n, m) (n', m') -> n = n'.
Proof.
intros n n' m m' H.
refine (match H with
| Bar a b => _
end).
with the following proof state:
n, n', m, m' : nat
H : bar (n, m) (n', m')
a, b : nat
============================
n = n'
To get almost exactly the proof state that destruct produces, we should also have erased H from the context using the clear H command: refine (...); clear H. This rather primitive pattern-matching doesn't allow us to prove our goal.
Coq abstracted away (n, m) and (n', m'), replacing them with some pairs p and p', such that p = (a, b) and p' = (a, b). Unfortunately, our goal has the form n = n' and there is neither (n, m) nor (n', m') in it -- that's why Coq didn't change the goal to a = a.
But there is a way to tell Coq to do that. I don't know how to do exactly that using tactics, so I'll show a proof term. It is going to look somewhat similar to @Vinz's solution, but notice that I didn't change the statement of the lemma:
Undo. (* to undo the previous pattern-matching *)
refine (match H in (bar p p') return fst p = fst p' with
| Bar a b => _
end).
This time we added more annotations for Coq to understand the connections between the components of H's type and the goal -- we explicitly named the p and p' pairs, and because we told Coq to treat our goal as fst p = fst p', it will replace p and p' in the goal with (a, b). Our proof state looks like this now:
n, n', m, m' : nat
H : bar (n, m) (n', m')
a, b : nat
============================
fst (a, b) = fst (a, b)
and simple reflexivity is able to finish the proof.
I think now it should be clear why destruct works fine in the following lemma (don't look at the answer below, try to figure it out first):
Lemma bar_refl_all : forall n n' m m',
bar (n, m) (n', m') -> (n, m) = (n', m').
Proof.
intros. destruct H. reflexivity.
Qed.
Answer: because the goal contains the same pairs that are present in the hypothesis's type, Coq replaces them all with appropriate variables, and that prevents the information loss.
Another way ...
Lemma bar_refl n n' m m' : bar (n, m) (n', m') -> n = n'.
Proof.
change (n = n') with (fst (n,m) = fst (n',m')).
generalize (n,m) (n',m').
intros ? ? [ ]; reflexivity.
Qed.

Using the `dependent induction` tactic to keep information while doing induction

I have just run into the issue of Coq's induction discarding information about constructed terms while reading a proof from here.
The authors used something like:
remember (WHILE b DO c END) as cw eqn:Heqcw.
to rewrite a hypothesis H before the actual induction (induction H). I really don't like the idea of having to introduce a trivial equality, as it looks like black magic.
Some searching here on SO shows that the remember trick is actually necessary. One answer here, however, points out that the newer dependent induction tactic can be used to avoid the remember trick. This is nice, but dependent induction itself now seems a bit magical.
I have a hard time trying to understand how dependent induction works. The documentation gives an example where dependent induction is required:
Lemma le_minus : forall n:nat, n < 1 -> n = 0.
I can verify that induction fails and dependent induction works in this case, but I can't use the remember trick to replicate the dependent induction result.
What I have tried so far to mimic it with the remember trick is:
Require Import Coq.Program.Equality.
Lemma le_minus : forall n:nat, n < 1 -> n = 0.
intros n H. (* dependent induction H works*)
remember (n < 1) as H0. induction H.
But this doesn't work. Can anyone explain how dependent induction works here in terms of remember-ing?
You can do
Require Import Coq.Program.Equality.
Lemma le_minus : forall n:nat, n < 1 -> n = 0.
Proof.
intros n H.
remember 1 as m in H. induction H.
- inversion Heqm. reflexivity.
- inversion Heqm. subst m.
inversion H.
Qed.
As stated here, the problem is that Coq cannot keep track of the shape of terms that appear in the type of the thing you are doing induction on. In other words, doing induction over the "less than" relation instructs Coq to try to prove something about a generic upper bound, as opposed to the specific one you're considering (1).
Notice that it is always possible to prove such goals without remember or dependent induction, by generalizing your result a little bit:
Lemma le_minus_aux :
forall n m, n < m ->
match m with
| 1 => n = 0
| _ => True
end.
Proof.
intros n m H. destruct H.
- destruct n; trivial.
- destruct H; trivial.
Qed.
Lemma le_minus : forall n, n < 1 -> n = 0.
Proof.
intros n H.
apply (le_minus_aux n 1 H).
Qed.
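For comparison, the dependent induction proof that the question refers to looks roughly like this. It is a sketch (the name le_minus_dep is not from the original), and it uses omega to discharge the impossible le_S case:
Require Import Coq.Program.Equality.
Require Import Omega.
Lemma le_minus_dep : forall n : nat, n < 1 -> n = 0.
Proof.
intros n H.
dependent induction H.
- (* le_n case: the remembered index 1 forces n to be 0 *)
  reflexivity.
- (* le_S case: the sub-derivation has type S n <= 0, which is absurd *)
  omega.
Qed.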

How to prove False from obviously contradictory assumptions

Suppose I want to prove following Theorem:
Theorem succ_neq_zero : forall n m: nat, S n = m -> 0 = m -> False.
This one is trivial since m cannot be both a successor and zero, as assumed. However, I found it quite tricky to prove, and I don't know how to do it without an auxiliary lemma:
Lemma succ_neq_zero_lemma : forall n : nat, O = S n -> False.
Proof.
intros.
inversion H.
Qed.
Theorem succ_neq_zero : forall n m: nat, S n = m -> 0 = m -> False.
Proof.
intros.
symmetry in H.
apply (succ_neq_zero_lemma n).
transitivity m.
assumption.
assumption.
Qed.
I am pretty sure there is a better way to prove this. What is the best way to do it?
You just need to substitute for m in the first equation:
Theorem succ_neq_zero : forall n m: nat, S n = m -> 0 = m -> False.
Proof.
intros n m H1 H2; rewrite <- H2 in H1; inversion H1.
Qed.
There's a very easy way to prove it:
Theorem succ_neq_zero : forall n m: nat, S n = m -> 0 = m -> False.
Proof.
congruence.
Qed.
The congruence tactic is a decision procedure for ground equalities on uninterpreted symbols. It's complete for uninterpreted symbols and for constructors, so in cases like this one it can combine the hypotheses S n = m and 0 = m into S n = 0 and detect that an equation between distinct constructors is impossible.
It might be useful to know how congruence works.
To prove that two terms constructed by different constructors are in fact different, just create a function that returns True in one case and False in the other cases, and then use it to prove True = False. I think this is explained in Coq'Art
Example not_congruent: 0 <> 1.
intros C. (* now our goal is 'False' *)
pose (fun m => match m with 0 => True | S _ => False end) as f.
assert (Contra: f 1 = f 0) by (rewrite C; reflexivity).
change (f 1). (* the goal False is convertible to f 1 *)
rewrite Contra. exact I. (* and f 0 is convertible to True *)
Qed.
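That hand-rolled discriminating function is exactly what discriminate automates. Applied to the original statement, a proof along these lines should also go through (a sketch, under a fresh name to avoid clashing with the theorem above):
Theorem succ_neq_zero' : forall n m : nat, S n = m -> 0 = m -> False.
Proof.
intros n m H1 H2.
subst m.          (* use 0 = m to replace m by 0 everywhere *)
discriminate H1.  (* S n = 0 equates different constructors *)
Qed.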