Say I have a slightly different definition of addition, and a slightly different definition of vector appending:
From Coq Require Import Vector.
Definition myAddNat a b := nat_rect _ b (fun _ p => S p) a.
Theorem rewrite_myAddNat a b : myAddNat a b = (a + b)%nat.
Proof.
induction a.
{ reflexivity. }
{
simpl.
congruence.
}
Defined.
Definition myAppend T m n : Vector.t T m -> Vector.t T n -> Vector.t T (myAddNat m n).
rewrite rewrite_myAddNat.
apply Vector.append.
Defined.
I would like to be able to prove the following:
Theorem myAppend_cons_1 T m n h a b :
myAppend T (S m) n (cons T h m a) b =
cons T h (myAddNat m n) (myAppend T m n a b).
Proof.
induction a.
{ reflexivity. }
{
simpl.
unfold myAppend.
(* stuck! *)
}
Abort.
I end up stuck on two terms that are very close to each other, except that each of them carries an equality cast in a different position, and I am not sure how to handle that.
I have considered changing my theorem statement to:
Theorem myAppend_cons T m n h a b :
existT _ _ (myAppend T (S m) n (cons T h m a) b) =
existT _ _ (cons T h (myAddNat m n) (myAppend T m n a b)).
so as to be able to temporarily make the two sides of the equation have a different type, but have not been able to make much more progress on the proof.
So:
1) Is there a nice way to prove either theorem, or
2) Should I write myAppend in a different way that will make my life easier?
Here is a quick answer:
Theorem myAppend_cons_1 T m n h a b :
myAppend T (S m) n (cons T h m a) b =
cons T h (myAddNat m n) (myAppend T m n a b).
Proof.
unfold myAppend, eq_rect_r; simpl.
rewrite !eq_trans_refl_l, !eq_sym_map_distr.
now destruct (eq_sym _).
Qed.
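As for the second question, one possible alternative (a sketch, not part of the quick answer above; the name myAppend' is only for illustration) is to define the append function by recursion on the first vector, so that it computes through myAddNat and the cons equation holds by plain reflexivity:
Fixpoint myAppend' T m n (a : Vector.t T m) (b : Vector.t T n) {struct a} : Vector.t T (myAddNat m n) :=
match a in Vector.t _ m0 return Vector.t T (myAddNat m0 n) with
| nil _ => b (* myAddNat 0 n reduces to n, so b fits directly *)
| cons _ h m' a' => cons T h (myAddNat m' n) (myAppend' T m' n a' b)
end.
Theorem myAppend'_cons_1 T m n h a b :
myAppend' T (S m) n (cons T h m a) b =
cons T h (myAddNat m n) (myAppend' T m n a b).
Proof. reflexivity. Qed.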
I am working on the theorem ev_ev__ev in IndProp.v of Software Foundations (Vol 1: Logical Foundations).
Theorem ev_ev__ev : forall n m,
even (n+m) -> even n -> even m.
Proof.
intros n m Enm En. induction En as [| n' Hn' IHn'].
- (* En: ev_0 *) simpl in Enm. apply Enm.
- (* En: ev_SS n' Hn': even n'
with IHn': even (n' + m) -> even m *)
apply IHn'. simpl in Enm. inversion Enm as [| n'm H]. apply H.
Qed.
where even is defined as:
Inductive even : nat -> Prop :=
| ev_0 : even 0
| ev_SS (n : nat) (H : even n) : even (S (S n)).
At the second bullet (-), the context and the goal are as follows:
m, n' : nat
Enm : even (S (S n') + m)
Hn' : even n'
IHn' : even (n' + m) -> even m
______________________________________(1/1)
even m
I understand how m, n', Enm, Hn' in the context are generated. However, how is IHn' generated?
Induction hypotheses are systematically created for premises of constructors that are in the same type family. So, you can look at each constructor independently.
Assume you have an inductive definition of a type that starts with:
Inductive arbitraryName : A -> B -> Prop :=
An induction principle called arbitraryName_ind will be created; it starts with a quantification over an arbitrary predicate, usually called P, with the same argument types as arbitraryName:
forall P : A -> B -> Prop,
Now, if you have a constructor of the form
arbitrary_constructor : forall x y, arbitraryName x y -> ...
The induction principle will have a sub-clause for this constructor that starts with the same quantifications over all the variables of the constructor and the same hypotheses, plus an induction hypothesis for each premise that relies on arbitraryName.
forall x y, arbitraryName x y -> P x y -> ...
Finally, each constructor of the inductive definition has to finish with an application of the defined type family (in this case arbitraryName). The end of the clause for this constructor applies the predicate P to the same arguments.
Let's go back to arbitrary_constructor and suppose it has the following full type:
arbitrary_constructor : forall x y, arbitraryName x y -> arbitraryName (g x y) (h x y)
In that case the clause in the induction principle is:
(forall x y, arbitraryName x y -> P x y -> P (g x y) (h x y))
In the case of even, there is a constructor ev_SS that has the following shape:
ev_SS : forall x, even x -> even (S (S x))
So the clause that is generated has the following shape:
(forall x, even x -> P x -> P (S (S x)))
The induction hypothesis IHn' corresponds exactly to the P x premise in this clause.
The full induction principle has the following shape:
forall P : nat -> Prop, P 0 ->
(forall x, even x -> P x -> P (S (S x))) ->
forall n, even n -> P n
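For the even predicate above, you can ask Coq to print this principle directly; up to variable names, it is exactly the statement shown:
Check even_ind.
(* even_ind : forall P : nat -> Prop,
   P 0 ->
   (forall n : nat, even n -> P n -> P (S (S n))) ->
   forall n : nat, even n -> P n *)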
When you type induction En, this theorem is applied. Its hypothesis even n, where n is universally quantified, is matched against the statement of En in the goal at that moment. That statement happens to be even n, where this n is the one fixed in the goal context, so the universally quantified n is instantiated with the local n. The tactic then looks for all the hypotheses of the context in which this n appears. Here there is Enm, so this hypothesis is used to define the predicate P on which the induction principle will be instantiated. In a sense, Enm is put back into the goal's conclusion, as if one had executed revert Enm.
We need P n to be the same thing as even (n + m) -> even m. The most natural solution is for P to be the function fun x => even (x + m) -> even m.
So in the second case of the proof by induction, a new n' is introduced and P is applied to n' to give the contents of the induction hypothesis:
(even (n' + m) -> even m)
and P is applied to S (S n') to give the contents of the final goal.
even (S (S n') + m) -> even m
Now, at the time of calling the induction tactic, the hypothesis Enm was in the context, so the statement even (S (S n') + m), which is morally an offspring of Enm, is put back into the context under the same name. Note that the other goal also has a hypothesis named Enm, only with a different statement.
It is natural that you had a question about how this induction hypothesis was generated, because what happens actually involves several operations.
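To see the whole mechanism at once, here is the same proof with the induction principle applied by hand and P given explicitly (a sketch; the name ev_ev__ev_manual is only for illustration):
Theorem ev_ev__ev_manual : forall n m,
even (n + m) -> even n -> even m.
Proof.
intros n m Enm En.
(* put Enm back into the goal, as the induction tactic does behind the scenes *)
revert Enm.
apply (even_ind (fun x => even (x + m) -> even m)); [ | | exact En ].
- (* clause for ev_0: P 0 *) simpl. intros Em. exact Em.
- (* clause for ev_SS: even n' -> P n' -> P (S (S n')) *)
intros n' Hn' IHn' Enm. apply IHn'. simpl in Enm.
inversion Enm as [| n'm H]. apply H.
Qed.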
Lemma In_map_iff :
forall (A B : Type) (f : A -> B) (l : list A) (y : B),
In y (map f l) <->
exists x, f x = y /\ In x l.
Proof.
intros A B f l y.
split.
- generalize dependent y.
generalize dependent f.
induction l.
+ intros. inversion H.
+ intros.
simpl.
simpl in H.
destruct H.
* exists x.
split.
apply H.
left. reflexivity.
*
1 subgoal
A : Type
B : Type
x : A
l : list A
IHl : forall (f : A -> B) (y : B),
In y (map f l) -> exists x : A, f x = y /\ In x l
f : A -> B
y : B
H : In y (map f l)
______________________________________(1/1)
exists x0 : A, f x0 = y /\ (x = x0 \/ In x0 l)
Since it is enough to prove exists x0 : A, f x0 = y /\ In x0 l in order to prove exists x0 : A, f x0 = y /\ (x = x0 \/ In x0 l), I want to eliminate the x = x0 disjunct inside the goal here so I can apply the inductive hypothesis, but I am not sure how to do this. I've tried left in (x = x0 \/ In x0 l) and various other things, but I haven't been successful in making it happen. As it turns out, defining a helper function of type forall a b c, (a /\ c) -> a /\ (b \/ c) to do the rewriting does not work for terms under an existential either.
How could this be done?
Note that the above is one of the SF book exercises.
You can get access to the components of your inductive hypothesis with any of the following:
specialize (IHl f y H); destruct IHl
destruct (IHl f y H)
edestruct IHl
You can then use exists and split to manipulate the goal into a form that is easier to work with.
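For instance, picking up from the goal state quoted in the question, the subgoal can be closed along these lines (a sketch):
(* IHl f y H : exists x, f x = y /\ In x l *)
destruct (IHl f y H) as [x0 [Hfx0 Hin]].
exists x0.
split.
apply Hfx0.
right. apply Hin.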
As it turns out, it suffices to define a helper lemma:
Lemma In_map_iff_helper : forall (X : Type) (a b c : X -> Prop),
(exists q, (a q /\ c q)) -> (exists q, a q /\ (b q \/ c q)).
Proof.
intros.
destruct H.
exists x.
destruct H.
split.
apply H.
right.
apply H0.
Qed.
This does the rewriting that is needed right off the bat. I made a really dumb error in thinking that I needed a tactic rather than an auxiliary lemma. I should have studied the preceding examples more closely; if I had, I'd have realized that the existential needs to be accounted for.
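With the helper in place, the stuck subgoal from above can be closed, for example, as follows (a sketch; the predicates are spelled out explicitly to keep the unification first-order):
apply (In_map_iff_helper A (fun x0 => f x0 = y) (fun x0 => x = x0) (fun x0 => In x0 l)).
exact (IHl f y H).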
I am stuck on a goal.
Assume we have the following definition:
Fixpoint iota (n : nat) : list nat :=
match n with
| 0 => []
| S k => iota k ++ [k]
end.
And we want to prove:
Theorem t1 : forall n, In n (iota n) -> False.
So far, I have managed the following:
Theorem t1 : forall n, In n (iota n) -> False.
Proof.
intros.
induction n.
- cbn in H. contradiction.
- cbn in H. apply app_split in H.
Focus 2. unfold not. intros.
unfold In in H0. destruct H0. assert (~(n = S n)) by now apply s_inj.
contradiction.
apply H0.
apply IHn.
I used these two lemmas, proofs omitted:
Axiom app_split : forall A x (l l2 : list A), In x (l ++ l2) -> not (In x l2) -> In x l.
Axiom s_inj : forall n, ~(n = S n).
However, I am completely stuck: I need to somehow show In n (iota n) assuming In (S n) (iota n).
As you've observed, the fact that the n in In n and the one in iota n are in lockstep in your statement makes the induction hypothesis hard to invoke (if not completely useless).
The trick here is to prove a more general statement than the one you are actually interested in which breaks this dependency between the two ns. I would suggest:
Theorem t : forall n k, n <= k -> In k (iota n) -> False.
from which you can derive t1 as a corollary:
Corollary t1 : forall n, In n (iota n) -> False.
intro n; apply (t n n); reflexivity.
Qed.
If you want to peek at the proof of t, you can have a look at this self-contained gist.
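In case the gist is not at hand, here is one possible proof of the generalized statement (a sketch, assuming the List and Lia libraries and the iota from the question; the gist may do it differently):
Require Import List Lia.
Import ListNotations.
Theorem t : forall n k, n <= k -> In k (iota n) -> False.
Proof.
induction n as [| n IHn]; intros k Hle Hin.
- (* iota 0 = [], so the membership hypothesis is absurd *) exact Hin.
- (* iota (S n) = iota n ++ [n] *)
simpl in Hin. apply in_app_or in Hin.
destruct Hin as [Hin | [Heq | Hfalse]].
+ (* k occurs in iota n, and n <= k still holds *) apply (IHn k); [lia | exact Hin].
+ (* k = n contradicts S n <= k *) lia.
+ exact Hfalse.
Qed.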
I'm new to inductive predicates in Coq. I have learned how to define simple inductive predicates such as "even" (as in adam.chlipala.net/cpdt/html/Predicates.html) or "last" (as in http://www.cse.chalmers.se/research/group/logic/TypesSS05/resources/coq/CoqArt/inductive-prop-chap/SRC/last.v).
Now I wanted to try something slightly more complicated: to define addition as an inductive predicate, but I got stuck. I did the following:
Inductive N : Type :=
| z : N (* zero *)
| s : N -> N. (* successor *)
Inductive Add: N -> N -> N -> Prop :=
| add_z: forall n, (Add n z n)
| add_s: forall m n r, (Add m n r) -> (Add m (s n) (s r)).
Fixpoint plus (x y : N) :=
match y with
| z => x
| (s n) => (s (plus x n))
end.
And I would like to prove a simple theorem (analogously to what has been done for last and last_fun in www.cse.chalmers.se/research/group/logic/TypesSS05/resources/coq/CoqArt/inductive-prop-chap/SRC/last.v):
Theorem T1: forall x y r, (plus x y) = r -> (Add x y r).
Proof.
intros x y r. induction y.
simpl. intro H. rewrite H. apply add_z.
case r.
simpl. intro H. discriminate H.
???
But then I get stuck. The induction hypothesis seems strange. I don't know whether I defined Add wrongly or whether I am just using the wrong tactics. Could you please help me by either correcting my inductive Add or telling me how to complete this proof?
You introduced r before doing induction on y. In general, you will want to do the induction before introducing anything, so that the induction hypothesis is as general as possible.
Conjecture injectivity : forall n m, s n = s m -> n = m.
Theorem T1: forall x y r, (plus x y) = r -> (Add x y r).
Proof.
intros x y. induction y.
simpl. intros r H. rewrite H. apply add_z.
intro r. case r.
simpl. intro H. discriminate H.
simpl. intros n H. apply add_s. apply IHy. apply injectivity. apply H.
Qed.
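A small variation (sketched here, not part of the answer above; T1' is just a fresh name): the injectivity of s does not have to be postulated, because the injection tactic derives it from the fact that s is a constructor:
Theorem T1' : forall x y r, (plus x y) = r -> (Add x y r).
Proof.
intros x y. induction y.
simpl. intros r H. rewrite H. apply add_z.
intro r. case r.
simpl. intro H. discriminate H.
simpl. intros n H. injection H. intro H'. apply add_s. apply IHy. apply H'.
Qed.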
I have a function max:
Fixpoint max (n : nat) (m : nat) : nat :=
match n, m with
| O, O => O
| O, S x => S x
| S x, O => S x
| S x, S y => S (max x y)
end.
and a proof of the commutativity of max as follows:
Theorem max_comm :
forall n m : nat, max n m = max m n.
Proof.
intros n m.
induction n as [|n'];
induction m as [|m'];
simpl; trivial.
(* Qed. *)
This leaves off at S (max n' m') = S (max m' n'), which seems correct, and given that the base case has already been proven, it seems like one should be able to tell Coq "just use the recursion!". However, I cannot figure out how to do it. Any help?
The problem is that you introduce the variable m before doing induction on the variable n, and that makes the induction hypothesis less general. Try this instead:
intro n; induction n as [| n' IHn'];
intro m; destruct m as [| m'];
simpl; try (rewrite IHn'); trivial.
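For reference, the same script plugged into the original statement goes through in one go (a sketch):
Theorem max_comm :
forall n m : nat, max n m = max m n.
Proof.
intro n; induction n as [| n' IHn'];
intro m; destruct m as [| m'];
simpl; try (rewrite IHn'); trivial.
Qed.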