Is it possible to give a counterexample for a statement which doesn't hold in general? For example, that the universal quantifier does not distribute over the connective "or". How would you state that to begin with?
Parameter X : Set.
Parameter P : X -> Prop.
Parameter Q : X -> Prop.
(* This holds in general *)
Theorem forall_distributes_over_and
: (forall x:X, P x /\ Q x) -> ((forall x:X, P x) /\ (forall x:X, Q x)).
Proof.
intro H. split. apply H. apply H.
Qed.
(* This doesn't hold in general *)
Theorem forall_doesnt_distributes_over_or
: (forall x:X, P x \/ Q x) -> ((forall x:X, P x) \/ (forall x:X, Q x)).
Abort.
Here is a quick and dirty way to prove something similar to what you want:
Theorem forall_doesnt_distributes_over_or:
~ (forall X P Q, (forall x:X, P x \/ Q x) -> ((forall x:X, P x) \/ (forall x:X, Q x))).
Proof.
intros H.
assert (X : forall x : bool, x = true \/ x = false).
destruct x; intuition.
specialize (H _ (fun b => b = true) (fun b => b = false) X).
destruct H as [H|H].
now specialize (H false).
now specialize (H true).
Qed.
I have to quantify X, P and Q inside the negation in order to be able to provide the ones I want. You couldn't quite do that with your Parameters, as they fixed an abstract X, P and Q, thus making your theorem potentially true.
In general, if you want to produce a counterexample, you can state the negation of the formula and then prove that this negation is satisfied.
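As a complement, the negation can also be stated with the counterexample instance baked in (here X := bool, P := fun b => b = true and Q := fun b => b = false, the same instance as in the proof above); a sketch:
Theorem counterexample_instance :
  ~ ((forall b : bool, b = true \/ b = false) ->
     ((forall b : bool, b = true) \/ (forall b : bool, b = false))).
Proof.
  intros H.
  destruct H as [H|H].
  - (* side-goal: the premise of H *) intros b; destruct b; [left|right]; reflexivity.
  - (* case: forall b, b = true  *) now specialize (H false).
  - (* case: forall b, b = false *) now specialize (H true).
Qed.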
I'm new to Coq and am trying to learn it through Software Foundations. In the chapter "Logic in Coq", there is an exercise not_exists_dist which I completed (by guessing) but do not understand:
Theorem not_exists_dist :
excluded_middle →
∀ (X:Type) (P : X → Prop),
¬ (∃ x, ¬ P x) → (∀ x, P x).
Proof.
intros em X P H x.
destruct (em (P x)) as [T | F].
- apply T.
- destruct H. (* <-- This step *)
exists x.
apply F.
Qed.
Before the destruct, the context and goal looks like:
em: excluded_middle
X: Type
P: X -> Prop
H: ~ (exists x : X, ~ P x)
x: X
F: ~ P x
--------------------------------------
(1/1)
P x
And after it
em: excluded_middle
X: Type
P: X -> Prop
x: X
F: ~ P x
--------------------------------------
(1/1)
exists x0 : X, ~ P x0
While I understand destruct on P /\ Q and P \/ Q in a hypothesis, I don't understand how it works on P -> False like here.
Let me try to give some intuition behind this by doing another proof first.
Consider:
Goal forall A B C : Prop, A -> C -> (A \/ B -> B \/ C -> A /\ B) -> A /\ B.
Proof.
intros. (*eval up to here*)
Admitted.
What you will see in *goals* is:
1 subgoal (ID 77)
A, B, C : Prop
H : A
H0 : C
H1 : A ∨ B → B ∨ C → A ∧ B
============================
A ∧ B
Ok, so we need to show A /\ B. We can use split to break the conjunction apart, so we then need to show A and B. A follows easily by assumption; B is something we do not have. So, our proof script now might look like:
Goal forall A B C : Prop, A -> C -> (A \/ B -> B \/ C -> A /\ B) -> A /\ B.
Proof.
intros. split; try assumption. (*eval up to here*)
Admitted.
With goals:
1 subgoal (ID 80)
A, B, C : Prop
H : A
H0 : C
H1 : A ∨ B → B ∨ C → A ∧ B
============================
B
The only way we can get to the B is by somehow using H1. Let's see what destruct H1 does to our goals:
3 subgoals (ID 86)
A, B, C : Prop
H : A
H0 : C
============================
A ∨ B
subgoal 2 (ID 87) is:
B ∨ C
subgoal 3 (ID 93) is:
B
We get additional subgoals! In order to destruct H1 we need to provide it with proofs for A \/ B and B \/ C; we cannot destruct the A /\ B otherwise!
For the sake of completeness, here is the proof without the split; try assumption shorthand:
Goal forall A B C : Prop, A -> C -> (A \/ B -> B \/ C -> A /\ B) -> A /\ B.
Proof.
intros. split.
- assumption.
- destruct H1.
+ left. assumption.
+ right. assumption.
+ assumption.
Qed.
Another way to view it is this: H1 is a function that takes A \/ B and B \/ C as input, and destruct works on its output. In order to destruct the result of such a function, you need to give it appropriate inputs first; then destruct performs the case analysis without introducing additional goals. We can supply those inputs in the proof script before destructing:
Goal forall A B C : Prop, A -> C -> (A \/ B -> B \/ C -> A /\ B) -> A /\ B.
Proof.
intros. split.
- assumption.
- specialize (H1 (or_introl H) (or_intror H0)).
destruct H1.
assumption.
Qed.
From a proof term perspective, destruct on a hypothesis H : A /\ B is the same as match H with conj H1 H2 => (*construct a term that has your goal as its type*) end.
We can replace the destruct in our proof script with a corresponding refine that does exactly that:
Goal forall A B C : Prop, A -> C -> (A \/ B -> B \/ C -> A /\ B) -> A /\ B.
Proof.
intros. split.
- assumption.
- specialize (H1 (or_introl H) (or_intror H0)).
refine (match H1 with conj Ha Hb => _ end).
exact Hb.
Qed.
Back to your proof. Your context and goal before the destruct:
em: excluded_middle
X: Type
P: X -> Prop
H: ~ (exists x : X, ~ P x)
x: X
F: ~ P x
--------------------------------------
(1/1)
P x
After applying the unfold not in H tactic you see:
em: excluded_middle
X: Type
P: X -> Prop
H: (exists x : X, P x -> ⊥) -> ⊥
x: X
F: ~ P x
--------------------------------------
(1/1)
P x
Now recall the definition of ⊥: It's a proposition that cannot be constructed, i.e. it has no constructors.
If you somehow have ⊥ as an assumption and you destruct it, you essentially look at the type of a branchless match (match H with end for H : ⊥), which can be anything.
In fact, we can prove any goal with it:
Goal (forall (A : Prop), A) <-> False. (* <- note that this is different from *)
Proof. (* forall (A : Prop), A <-> False *)
split; intros.
- specialize (H False). assumption.
- refine (match H with end).
Qed.
For reference, the proof term of the refine-based proof above is:
(λ (A B C : Prop) (H : A) (H0 : C) (H1 : A ∨ B → B ∨ C → A ∧ B),
conj H (let H2 : A ∧ B := H1 (or_introl H) (or_intror H0) in match H2 with
| conj _ Hb => Hb
end))
Anyhow, destruct on your assumption H will give you a proof of your goal, provided you can show exists x : X, ~ P x: applying H to that yields ⊥, and destructing ⊥ leaves zero goals.
Instead of destruct, you could also do exfalso. apply H. to achieve the same thing.
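For concreteness, here is the whole exercise again with that variant, as a sketch in plain ASCII (it assumes the Software Foundations definition excluded_middle := forall P : Prop, P \/ ~ P; the primed name is only to avoid a clash):
Theorem not_exists_dist' :
  excluded_middle ->
  forall (X : Type) (P : X -> Prop),
    ~ (exists x, ~ P x) -> (forall x, P x).
Proof.
  intros em X P H x.
  destruct (em (P x)) as [T | F].
  - apply T.
  - exfalso.          (* turn the goal P x into False *)
    apply H.          (* H : (exists x, ~ P x) -> False, so it suffices to build that witness *)
    exists x. apply F.
Qed.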
Normally, destruct t applies when t is an inhabitant of an inductive type I, giving you one goal for each possible constructor of I that could have been used to produce t. Here, as you remarked, H has type P -> False, which is not an inductive type, but False is. So what happens is this: destruct gives you a first goal corresponding to the hypothesis P of H. Applying H to that leads to a term of type False, which is an inductive type, on which destruct works as it should, giving you zero goals since False has no constructors. Many tactics for inductive types work like this on hypotheses of the form P1 -> … -> Pn -> I where I is an inductive type: they give you side-goals for P1 … Pn, and then work on I.
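To illustrate that behaviour with a toy example (the statement below is made up purely for illustration): destructing a hypothesis of the form Q -> A \/ B first asks for a proof of Q, and then yields the two disjunct cases.
Goal forall A B Q : Prop, Q -> (Q -> A \/ B) -> (A -> B) -> B.
Proof.
  intros A B Q HQ H Himp.
  destruct H as [Ha | Hb].
  - exact HQ.               (* side-goal for the premise Q *)
  - apply Himp. exact Ha.   (* case or_introl: Ha : A *)
  - exact Hb.               (* case or_intror: Hb : B *)
Qed.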
I am trying to prove some FOL equivalences. I am having trouble using DeMorgan's laws for quantifiers, in particular
~ (exists x. P(x)) <-> forall x. ~P(x)
I tried applying not_ex_all_not from Coq.Logic.Classical_Pred_Type, and scoured StackOverflow (Coq convert non exist to forall statement, Convert ~exists to forall in hypothesis), but neither came close to solving the issue.
Theorem t3: forall (T: Type), forall p q: T -> Prop, forall r: T -> T -> Prop,
~(exists (x: T), ((p x) /\ (exists (y: T), ((q y) /\ ~(r x y)))))
<-> forall (x y: T), ((p x) -> (((q y) -> (r x y)))).
Proof.
intros T p q r.
split.
- intros H.
apply not_ex_all_not.
I get this error:
In environment
T : Type
p, q : T → Prop
r : T → T → Prop
H : ¬ (∃ x : T, p x ∧ (∃ y : T, q y ∧ ¬ r x y))
Unable to unify
"∀ (U : Type) (P : U → Prop), ¬ (∃ n : U, P n) → ∀ n : U, ¬ P n"
with "∀ x y : T, p x → q y → r x y".
I expected DeMorgan's law to be applied to the goal resulting in a negated existential.
Let's observe what we can derive from H:
~ (exists x : T, p x /\ (exists y : T, q y /\ ~ r x y))
=> (not exists <-> forall not)
forall x : T, ~ (p x /\ (exists y : T, q y /\ ~ r x y))
=> (not (A and B) <-> A implies not B)
forall x : T, p x -> ~ (exists y : T, q y /\ ~ r x y)
=>
forall x : T, p x -> forall y : T, ~ (q y /\ ~ r x y)
=>
forall x : T, p x -> forall y : T, q y -> ~ (~ r x y)
We end up with a double negation on the conclusion. If you don't mind using a classical axiom, we can apply NNPP to strip it and we're done.
Here is the equivalent Coq proof:
Require Import Classical.
(* I couldn't find this lemma in the stdlib, so here is a quick proof. *)
Lemma not_and_impl_not : forall P Q : Prop, ~ (P /\ Q) <-> (P -> ~ Q).
Proof. tauto. Qed.
Theorem t3: forall (T: Type), forall p q: T -> Prop, forall r: T -> T -> Prop,
~(exists (x: T), ((p x) /\ (exists (y: T), ((q y) /\ ~(r x y)))))
<-> forall (x y: T), ((p x) -> (((q y) -> (r x y)))).
Proof.
intros T p q r.
split.
- intros H x y Hp Hq.
apply not_ex_all_not with (n := x) in H.
apply (not_and_impl_not (p x)) in H; try assumption.
apply not_ex_all_not with (n := y) in H.
apply (not_and_impl_not (q y)) in H; try assumption.
apply NNPP in H. assumption.
The above was forward reasoning. If you want to reason backwards (applying lemmas to the goal instead of to hypotheses), things get a little harder, because you need to build the exact forms before you can apply the lemmas to the goal. This is also why your apply fails: Coq doesn't automatically find where and how to apply the lemma out of the box.
(And apply is a relatively low-level tactic. There is an advanced Coq feature that allows you to apply a propositional lemma to subterms; see the sketch below.)
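For illustration only, here is a minimal sketch of that feature, which is presumably generalized (setoid) rewriting with an <-> lemma; it reuses the not_and_impl_not lemma proved above, and the hypothesis names are arbitrary:
Require Import Setoid.
Example rewrite_with_iff (A B C : Prop) (H : ~ (A /\ B) -> C) : (A -> ~ B) -> C.
Proof.
  (* rewrite the subterm ~ (A /\ B) of H, even though it is not at the head *)
  rewrite not_and_impl_not in H.
  exact H.
Qed.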
Require Import Classical.
Lemma not_and_impl_not : forall P Q : Prop, ~ (P /\ Q) <-> (P -> ~ Q).
Proof. tauto. Qed.
Theorem t3: forall (T: Type), forall p q: T -> Prop, forall r: T -> T -> Prop,
~(exists (x: T), ((p x) /\ (exists (y: T), ((q y) /\ ~(r x y)))))
<-> forall (x y: T), ((p x) -> (((q y) -> (r x y)))).
Proof.
intros T p q r.
split.
- intros H x y Hp Hq.
apply NNPP. revert dependent Hq. apply not_and_impl_not.
revert dependent y. apply not_ex_all_not.
revert dependent Hp. apply not_and_impl_not.
revert dependent x. apply not_ex_all_not. apply H.
Actually, there is an automation tactic called firstorder, which (as the name suggests) solves goals of first-order intuitionistic logic. Note that NNPP is still needed, since firstorder doesn't handle classical logic.
Theorem t3: forall (T: Type), forall p q: T -> Prop, forall r: T -> T -> Prop,
~(exists (x: T), ((p x) /\ (exists (y: T), ((q y) /\ ~(r x y)))))
<-> forall (x y: T), ((p x) -> (((q y) -> (r x y)))).
Proof.
intros T p q r.
split.
- intros H x y Hp Hq. apply NNPP. firstorder.
- firstorder. Qed.
Lemma In_map_iff :
forall (A B : Type) (f : A -> B) (l : list A) (y : B),
In y (map f l) <->
exists x, f x = y /\ In x l.
Proof.
intros A B f l y. split.
- generalize dependent y.
generalize dependent f.
induction l as [| x l IHl].
+ intros. inversion H.
+ intros.
simpl.
simpl in H.
destruct H.
* exists x.
split.
apply H.
left. reflexivity.
*
1 subgoal
A : Type
B : Type
x : A
l : list A
IHl : forall (f : A -> B) (y : B),
In y (map f l) -> exists x : A, f x = y /\ In x l
f : A -> B
y : B
H : In y (map f l)
______________________________________(1/1)
exists x0 : A, f x0 = y /\ (x = x0 \/ In x0 l)
Since the goal exists x0 : A, f x0 = y /\ (x = x0 \/ In x0 l) follows from exists x0 : A, f x0 = y /\ In x0 l, I want to eliminate x = x0 inside the goal here so I can apply the inductive hypothesis, but I am not sure how to do this. I've tried left in (x = x0 \/ In x0 l) and various other things, but I haven't been successful in making it happen. As it turns out, defining a helper function of type forall a b c, (a /\ c) -> a /\ (b \/ c) to do the rewriting does not work for terms under an existential either.
How could this be done?
Note that the above is one of the SF book exercises.
You can get access to the components of your inductive hypothesis with any of the following:
specialize (IHl f y H); destruct IHl
destruct (IHl f y H)
edestruct IHl
You can then use exists and split to manipulate the goal into a form that is easier to work with.
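For instance, in the branch where the proof above got stuck, the second option could be used along these lines (the hypothesis names are only illustrative):
destruct (IHl f y H) as [x0 [Hfx0 Hin]].
exists x0. split.
apply Hfx0.
right. apply Hin.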
In the end, what worked for me was to define a helper lemma.
Lemma In_map_iff_helper : forall (X : Type) (a b c : X -> Prop),
(exists q, (a q /\ c q)) -> (exists q, a q /\ (b q \/ c q)).
Proof.
intros.
destruct H.
exists x.
destruct H.
split.
apply H.
right.
apply H0.
Qed.
This does the rewriting that is needed right off the bat. I made a really dumb error thinking that I needed a tactic rather than an auxiliary lemma. I should have studied the preceding examples more closely - if I had, I'd have realized that existentials need to be accounted for.
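Concretely, in the stuck branch the helper can presumably be used like this (a sketch):
apply In_map_iff_helper.
apply (IHl f y). apply H.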
I am doing an exercise in Coq, trying to prove that if a list equals its reverse, it is a palindrome. Here is how I define palindromes:
Inductive pal {X : Type} : list X -> Prop :=
| emptypal : pal []
| singlpal : forall x, pal [x]
| inducpal : forall x l, pal l -> pal (x :: l ++ [x]).
Here is the theorem:
Theorem palindrome3 : forall {X : Type} (l : list X),
l = rev l -> pal l.
According to my definition, I will need to do the induction by extracting the front and tail elements, but apparently Coq won't let me do it, and if I force it to, it gives an induction context that definitely doesn't make any sense:
Proof.
intros X l H. remember (rev l) as rl. induction l, rl.
- apply emptypal.
- inversion H.
- inversion H.
- (* stuck *)
context:
1 subgoals
X : Type
x : X
l : list X
x0 : X
rl : list X
Heqrl : x0 :: rl = rev (x :: l)
H : x :: l = x0 :: rl
IHl : x0 :: rl = rev l -> l = x0 :: rl -> pal l
______________________________________(1/1)
pal (x :: l)
Apparently the inductive context is terribly wrong. Is there any way I can fix the induction?
The solution I propose here is probably not the shortest one, but I think it is rather natural.
My solution consists in defining an induction principle on lists specialized to your problem.
Consider natural numbers. There is not only the standard induction nat_ind, where you prove P 0 and forall n, P n -> P (S n); there are other induction schemes too, e.g. the strong induction lt_wf_ind, or the two-step induction where you prove P 0, P 1 and forall n, P n -> P (S (S n)). If the standard induction scheme is not strong enough to prove the property you want, you can try another one.
We can do the same for lists. If the standard induction scheme list_ind is not enough, we can write another one that works. Following this idea, we define for lists an induction principle similar to the two-step induction on nat (and we will prove the validity of this induction scheme using the two-step induction on nat), where we need to prove three cases: P [], forall x, P [x] and forall x l x', P l -> P (x :: l ++ [x']). The proof of this scheme is the difficult part; applying it to deduce your theorem is quite straightforward.
I don't know if the two-step induction scheme is part of the standard library, so I introduce it as an axiom.
Axiom nat_ind2 : forall P : nat -> Prop, P 0 -> P 1 ->
(forall n : nat, P n -> P (S (S n))) -> forall n : nat, P n.
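(If you prefer to avoid the axiom, this scheme is provable from the standard induction principle by strengthening the statement; a possible sketch, with a primed name to avoid a clash:)
Lemma nat_ind2' : forall P : nat -> Prop, P 0 -> P 1 ->
  (forall n : nat, P n -> P (S (S n))) -> forall n : nat, P n.
Proof.
  intros P H0 H1 Hstep n.
  (* prove the stronger statement "P n /\ P (S n)" by ordinary induction,
     then project out the first component *)
  assert (G : P n /\ P (S n)).
  { induction n as [| n IHn].
    - split; assumption.
    - destruct IHn as [IH1 IH2].
      split; [assumption | apply Hstep; assumption]. }
  destruct G as [Gn _]. exact Gn.
Qed.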
Then we prove the induction scheme we want.
Lemma list_ind2 : forall {A} (P : list A -> Prop) (P_nil : P [])
(P_single : forall x, P [x])
(P_cons_snoc : forall x l x', P l -> P (x :: l ++ [x'])),
forall l, P l.
Proof.
intros. remember (length l) as n. symmetry in Heqn. revert dependent l.
induction n using nat_ind2; intros.
- apply length_zero_iff_nil in Heqn. subst l. apply P_nil.
- destruct l; [discriminate|]. simpl in Heqn. inversion Heqn; subst.
apply length_zero_iff_nil in H0. subst l. apply P_single.
- destruct l; [discriminate|]. simpl in Heqn.
inversion Heqn; subst. pose proof (rev_involutive l) as Hinv.
destruct (rev l). destruct l; discriminate. simpl in Hinv. subst l.
rewrite app_length in H0.
rewrite PeanoNat.Nat.add_comm in H0. simpl in H0. inversion H0.
apply P_cons_snoc. apply IHn. assumption.
Qed.
You should be able to conclude quite easily using this induction principle.
Theorem palindrome3 : forall {X : Type} (l : list X),
l = rev l -> pal l.
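A possible script for it (a sketch; it relies on rev_app_distr and app_inj_tail from the standard List library, and the step-by-step tactics could certainly be compressed):
Proof.
  intros X.
  apply (list_ind2 (fun l => l = rev l -> pal l)).
  - (* [] *) intros _. apply emptypal.
  - (* [x] *) intros x _. apply singlpal.
  - (* x :: l ++ [x'] *) intros x l x' IH H.
    simpl in H. rewrite rev_app_distr in H. simpl in H.
    (* H : x :: l ++ [x'] = x' :: rev l ++ [x] *)
    injection H as Hx Htail. subst x'.
    apply app_inj_tail in Htail. destruct Htail as [Hrev _].
    apply inducpal. apply IH. exact Hrev.
Qed.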
How does one prove forall x, (R x \/ ~ R x) in Coq? I'm a noob at this and don't know much about the tool.
This is what I wrote:
Variables D: Set.
Variables R: D -> Prop.
Variables x:D.
Lemma tes : forall x, (R x \/ ~R x).
I tried this and it worked, but only by using an automatic tactic. And if I print the proof, I cannot understand the meaning of what is printed (which I would need in order to go back and try to do it without automation):
Require Import Classical.
Variables D: Set.
Variables R: D -> Prop.
Variables x:D.
Lemma tes : forall x, (R x \/ ~R x).
Proof.
intro.
tauto.
Qed.
Print tes.
tes =
fun x0 : D =>
NNPP (R x0 \/ ~ R x0)
(fun H : ~ (R x0 \/ ~ R x0) =>
(fun H0 : R x0 -> False =>
(fun H1 : ~ R x0 -> False =>
(fun H2 : False => False_ind False H2) (H1 H0))
(fun H1 : ~ R x0 => H (or_intror H1)))
(fun H0 : R x0 => H (or_introl H0)))
: forall x : D, R x \/ ~ R x
One way to prove it would be
Print classic.
Print R.
Print x.
Check R x.
Check classic (R x).
Check fun y => classic (R y).
Definition tes2 : forall x, (R x \/ ~ R x) := fun y => classic (R y).
Or using tactics
Lemma tes3 : forall x, (R x \/ ~ R x). Proof. intro. apply classic. Qed.
Print tes3.
The proof tauto found corresponds to this tactic script:
Lemma tes : forall x, (R x \/ ~R x).
Proof.
intro.
eapply NNPP.
intro.
assert (R x0 -> False).
intro.
eapply H.
eapply or_introl.
exact H0.
assert (~ R x0 -> False).
intro.
eapply H.
eapply or_intror.
exact H1.
assert False.
eapply H1.
exact H0.
eapply False_ind.
exact H2.
Qed.
The intro tactic builds lambda abstractions; it performs implication introduction and universal quantification introduction. The apply tactic combines previously proven theorems, performing modus ponens where the first antecedent has already been proven. The exact tactic gives the exact proof term. The assert tactic also performs modus ponens, but where none of the antecedents of the rule are given. To see how tactics manipulate the proof term, use the Show Proof command before and after applying a tactic.
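For example, reusing the Classical import and the D, R declarations from above (the comments show roughly what Show Proof prints at each step):
Lemma tes4 : forall x, R x \/ ~ R x.
Proof.
  Show Proof.   (* ?Goal *)
  intro y.
  Show Proof.   (* (fun y : D => ?Goal) *)
  apply classic.
  Show Proof.   (* (fun y : D => classic (R y)) *)
Qed.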