I am dividing elements between two lists of naturals, n1 of them in l1 and n2 in l2. While carrying out this step I need to prove the lemma below, and I am stuck. Here is the code:
Lemma g_list_max n l : In n l -> n <= list_max l.
Proof.
destruct l as [ | a l'].
simpl. contradiction.
unfold list_max. intros. simpl in *.
destruct H. rewrite H.
The number of elements in a list l is given by length l.
I encourage you to run Search length. to see the available lemmas about it, or simply have a look at the List module of the standard library.
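For what it's worth, here is one possible way to finish the lemma you are stuck on. This is only a sketch: it assumes the standard library's list_max (which unfolds to fold_right Nat.max 0) together with the lemmas Nat.le_max_l, Nat.le_max_r and Nat.le_trans from PeanoNat, all available after Require Import Arith.

Require Import List Arith.

Lemma g_list_max n l : In n l -> n <= list_max l.
Proof.
  unfold list_max.
  induction l as [| a l' IH]; simpl.
  - (* n cannot be a member of the empty list *)
    contradiction.
  - (* In n (a :: l') unfolds to a = n \/ In n l' *)
    intros [-> | H].
    + (* head case: n <= Nat.max n (list_max l') *)
      apply Nat.le_max_l.
    + (* tail case: go through the induction hypothesis *)
      eapply Nat.le_trans; [apply IH; exact H | apply Nat.le_max_r].
Qed.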
I'm doing the Software Foundations exercises, and in the combine_split exercise I'm running into a wall when trying to prove an auxiliary lemma.
When applying reflexivity within the assert, the proof process just hangs, even though the equation is just (x, y) = (x, y), which is obviously true.
Here is the implementation:
Theorem combine_split : forall X Y (l : list (X * Y)) l1 l2,
split l = (l1, l2) ->
combine l1 l2 = l.
Proof.
intros X Y.
intros l.
induction l as [| n l' IHl'].
- simpl. intros l1 l2 H. injection H as H1 H2. rewrite <- H1, <-H2. reflexivity.
- destruct n as [n1 n2]. simpl. destruct (split l') as [x y].
intros l1 l2 H. injection H as H1 H2.
rewrite <- H1, <- H2. simpl.
assert ( Hc : combine x y = l'). { apply IHl'. reflexivity.}
apply Hc.
Qed.
Why is this happening?
Looks like a parsing bug in Proof General, in its sentence splitting. Based on the highlighting, it appears to be sending reflexivity.} to Coq when you wanted it split into reflexivity. and then } as a separate command. In any case, coqc doesn't lex this as desired: it interprets .} as a single (unknown) token. (I'm actually confused why, if it is sending reflexivity.}, you don't get that lexing error.)
You can fix this by adding a space: reflexivity. }
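For example, the assert line from the proof above becomes (only the space before the closing brace changes):

assert ( Hc : combine x y = l'). { apply IHl'. reflexivity. }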
Lemma remove {A} (x : A) xs (p : In x xs) :
exists xs', (forall x', x' <> x -> In x' xs -> In x' xs') /\ (length xs = S (length xs')).
Proof.
induction xs.
- inversion p.
- destruct p.
+ subst x0.
exists xs.
split.
* intros x' neq pin.
destruct pin.
-- contradict neq. symmetry. assumption.
-- assumption.
* reflexivity.
+ destruct (IHxs H) as [xs' pxs']. clear IHxs.
destruct pxs' as [p1 plen]. rename x0 into x'.
exists (x' :: xs').
split.
* intros x'' neq pin.
destruct pin.
-- subst x'. left. reflexivity.
-- right. apply p1. assumption. assumption.
* simpl.
rewrite -> plen.
reflexivity.
Qed.
Theorem pigeonhole_principle: forall (X:Type) (l1 l2:list X),
excluded_middle ->
AllIn l1 l2 ->
length l2 < length l1 ->
repeats l1.
Proof.
induction l1; simpl; intros l2 ex_mid Hin Hlen.
- inversion Hlen.
- apply repeats_rest.
destruct (remove x l2) as [l2' Hl2].
+ apply Hin. left. reflexivity.
+ destruct Hl2 as [Hmap Hlen'].
rewrite Hlen' in Hlen.
clear Hlen'.
apply (IHl1 l2').
1 : { assumption. }
2 : { revert Hlen. unfold lt. intros. omega. }
clear Hlen IHl1.
revert Hin.
unfold AllIn.
intros.
apply Hmap.
2 : { apply Hin. right. assumption. }
1 subgoal
X : Type
x : X
l1, l2 : list X
ex_mid : excluded_middle
l2' : list X
Hmap : forall x' : X, x' <> x -> In x' l2 -> In x' l2'
Hin : forall u : X, In u (x :: l1) -> In u l2
u : X
H : In u l1
______________________________________(1/1)
u <> x
I found various solutions for the pigeonhole principle floating around on the net. The above is adapted from the one by Kovacs, but his proof establishes the negation of a no-duplicates predicate rather than the repeats predicate used in the SF exercise.
The marked difference is that I cannot prove the u <> x goal here because there is less information when the problem is stated in this form.
This problem is both hard and optional, there are existing solutions floating around, and I've already been working on it for two days, so could somebody describe a high-level plan of exactly what I need to do to complete this proof?
I am not looking for a full solution, but I am hoping that the version using the excluded middle turns out to be elegant, because the Coq proof without the excluded middle is just a mess of rewriting, and knowing the source code of a program is far from understanding what it does. Most explanations of the principle just describe what it is, which is not enough for me to bridge the intuition gap.
I've never seen the classical laws in action. It does not seem like knowing that something is decidable would gain me much, and I find it hard to see what the point of them is. This is especially so in this situation, so I am all the more interested to see what their purpose turns out to be.
I came across this question when searching for answers while working through SF (Software Foundations), but managed to prove it myself. I'll provide a sketch for the SF version of the pigeonhole principle, using excluded_middle. The statement to prove is that if all elements in list l1 are in l2 and length l2 is less than length l1, then l1 contains repeated elements.
The proof, as SF suggests, begins with induction on l1. I'll omit the straightforward empty case. In the inductive case, destruct l2. The case where l2 is empty is straightforward; we consider the other case.
Now, because of induction on l1 and destructuring on l2, your statement to prove is about lists with a first member, let's call them x1::l1 and x2::l2. Your membership hypothesis now looks like: all elements in x1::l1 are in x2::l2.
But first, let us use excluded middle to state that either x1 in l1 or x1 not in l1. If x1 in l1, then trivially we can prove that x1::l1 has repeats. So moving forward, we may assume that x1 is not in l1.
To apply the induction hypothesis, it suffices to find an l2' of the same length as l2 such that all elements of l1 are in l2'.
Now consider the membership hypothesis on x1::l1, with the forall variable introduced as x:
By hypothesis, we know that x1 in x2::l2. Now consider l2', where l2' is x2::l2 with one instance of x1 removed (use in_split). Now because x1 not in l1, we can conclude that all members of l1 are also in l2' and not the removed element. Then this satisfies the membership hypothesis in the induction, and some wrangling with lengths gives us length l2 = length l2', so that the length hypothesis is satisfied. We thus conclude that l1 contains repeats, and thus also x1::l1.
Edit: previously, I also did a case analysis on whether x1 = x2 or x1 is in l2, and whether x = x2 or x is in l2, and solved the special cases in a more straightforward manner. That is not needed: the general case covers them as well.
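To make the classical step concrete: the only thing excluded middle buys you here is the ability to case-split on list membership, which is not decidable for an arbitrary type X. Below is a minimal, self-contained illustration, with excluded_middle spelled out as in SF and a lemma name (In_or_not) that is my own invention:

Require Import List.

Definition excluded_middle := forall P : Prop, P \/ ~ P.

Lemma In_or_not :
  forall (X : Type) (x : X) (l : list X),
    excluded_middle -> In x l \/ ~ In x l.
Proof.
  intros X x l ex_mid.
  (* excluded middle instantiated at the proposition In x l *)
  destruct (ex_mid (In x l)) as [Hin | Hnotin].
  - left. assumption.
  - right. assumption.
Qed.

In the sketch above this is used with x := x1 and l := l1, to decide whether x1 already repeats in l1.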
I am a beginner in Coq. I want to prove the symmetry of a boolean equality on natural numbers. I have applied the induction and destruct tactics, but it does not work. Please guide me in proving the theorem.
Fixpoint beqnat (n m : nat) : bool :=
match n with
| 0 => match m with
  | 0 => true
  | S m' => false
  end
| S n' => match m with
  | 0 => false
  | S m' => beqnat n' m'
  end
end.
Theorem beq_sym :
forall (n m : nat),
beqnat n m = beqnat m n.
The proof goes by induction on n followed by case analysis on m:
Theorem beq_sym: forall n m : nat, beqnat n m = beqnat m n.
Proof.
induction n as [|n' IH]; destruct m; auto.
apply IH.
Qed.
To understand what is happening:
Do induction n which gives subgoals for n = 0 and n = S n'.
Do simpl on each subgoal to see how the first match/with reduces.
Now you need to do something to m to reduce the second match/with. Induction is not necessary, because your beqnat is structurally recursive on n, not m (type Print beqnat and look for {struct n} to confirm). So destruct m suffices. Again, use simpl to see why.
The induction hypothesis is needed for the recursive call to beqnat in the second subgoal.
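For reference, here is a more explicit variant of the same proof, with simpl making each reduction visible and one bullet per subgoal. It is only a sketch that assumes the beqnat from the question; the name beq_sym' is just to avoid clashing with the proof above.

Theorem beq_sym' : forall n m : nat, beqnat n m = beqnat m n.
Proof.
  induction n as [| n' IH]; destruct m as [| m']; simpl.
  - reflexivity.   (* beqnat 0 0 = beqnat 0 0 *)
  - reflexivity.   (* false = false *)
  - reflexivity.   (* false = false *)
  - apply IH.      (* beqnat n' m' = beqnat m' n' *)
Qed.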
I tried to continue proving this practice example about lists of pairs, but it seems impossible. How should I continue to solve this theorem?
Require Import List.
Fixpoint split (A B : Set) (x : list (A * B)) : (list A) * (list B) :=
match x with
| nil => (nil, nil)
| cons (a, b) x1 => let (ta, tb) := split A B x1 in (a :: ta, b :: tb)
end.
Theorem split_eq_len :
forall (A B:Set)(x:list (A*B))(y:list A)(z:list B),(split A B x)=(y,z) ->
length y = length z.
Proof.
intros A B x.
elim x.
simpl.
intros y z.
intros H.
injection H.
intros H1 H2.
rewrite <- H1.
rewrite <- H2.
reflexivity.
intros hx.
elim hx.
intros a b tx H y z.
simpl.
intro.
destruct (split A B tx).
I don't want to just give you a proof, but here's one hint:
Your proof will be a bit simpler if you use inversion H instead of injection H and subst instead of rewriting with equalities (subst takes any equality v = expr where v is a variable and substitutes expr for v everywhere; it then clears out the variable v).
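To see the hint in action on a toy example (the lists here are made up, and the name inversion_subst_demo is my own): inversion derives the component equalities from the pair equality, and subst then replaces the variables everywhere.

Example inversion_subst_demo :
  forall y z : list nat,
    (1 :: nil, 2 :: nil) = (y, z) -> length y = length z.
Proof.
  intros y z H.
  inversion H. subst.   (* y becomes 1 :: nil, z becomes 2 :: nil *)
  reflexivity.
Qed.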
Let me show you a couple of steps you can take to advance your proof.
You have ended up in this proof state:
H0 : (a :: l, b :: l0) = (y, z)
============================
length y = length z
At this point it should be obvious that y and z are some non-empty lists. So injection H0. (or inversion H0. as suggested by Tej Chajed) helps you with this.
Then you can change your goal into
length l = length l0
using a combination of simplifications and rewrites (the details depend on the exact tactics you use; inversion makes it simpler). You may also find the f_equal tactic very useful. At this point you are almost done, because you can now use your induction hypothesis.
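Putting the hints together, here is one possible way to finish the whole proof. This is only a sketch: it reuses the split fixpoint from the question (the primed name split_eq_len' just avoids clashing with the statement above), and the eqn:E clause on the destruct keeps the equation for the recursive call so the induction hypothesis can be applied at the end.

Theorem split_eq_len' :
  forall (A B : Set) (x : list (A * B)) (y : list A) (z : list B),
    split A B x = (y, z) -> length y = length z.
Proof.
  intros A B x.
  induction x as [| [a b] tx IH]; simpl.
  - (* empty list: split returns (nil, nil) *)
    intros y z H. inversion H. subst. reflexivity.
  - (* cons case: name the result of the recursive call first, keeping
       the equation E, so the let-expression in the goal reduces *)
    destruct (split A B tx) as [ta tb] eqn:E.
    intros y z H. inversion H. subst. simpl.
    f_equal.
    apply IH. exact E.
Qed.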
I have just run into the issue of Coq's induction tactic discarding information about constructed terms, while reading a proof from here.
The authors used something like:
remember (WHILE b DO c END) as cw eqn:Heqcw.
to rewrite a hypothesis H before the actual induction (induction H). I really don't like the idea of having to introduce a trivial equality, as it looks like black magic.
Some searching here on SO shows that the remember trick is actually necessary. One answer here, however, points out that the newer dependent induction tactic can be used to avoid the remember trick. This is nice, but dependent induction itself now seems a bit magical.
I have a hard time trying to understand how dependent induction works. The documentation gives an example where dependent induction is required:
Lemma le_minus : forall n:nat, n < 1 -> n = 0.
I can verify that induction fails and dependent induction works in this case, but I can't replicate the dependent induction result using the remember trick.
What I tried so far to mimic the remember trick is:
Require Import Coq.Program.Equality.
Lemma le_minus : forall n:nat, n < 1 -> n = 0.
intros n H. (* dependent induction H works*)
remember (n < 1) as H0. induction H.
But this doesn't work. Can anyone explain how dependent induction works here in terms of remember-ing?
You can do
Require Import Coq.Program.Equality.
Lemma le_minus : forall n:nat, n < 1 -> n = 0.
Proof.
intros n H.
remember 1 as m in H. induction H.
- inversion Heqm. reflexivity.
- inversion Heqm. subst m.
inversion H.
Qed.
As stated here, the problem is that Coq cannot keep track of the shape of terms that appear in the type of the thing you are doing induction on. In other words, doing induction over the "less than" relation instructs Coq to try to prove something about a generic upper bound, as opposed to the specific one you're considering (1).
Notice that it is always possible to prove such goals without remember or dependent induction, by generalizing your result a little bit:
Lemma le_minus_aux :
forall n m, n < m ->
match m with
| 1 => n = 0
| _ => True
end.
Proof.
intros n m H. destruct H.
- destruct n; trivial.
- destruct H; trivial.
Qed.
Lemma le_minus : forall n, n < 1 -> n = 0.
Proof.
intros n H.
apply (le_minus_aux n 1 H).
Qed.