~ (exists x:D, ~ R x)->(forall y:D, R y)
I have worked on it for quite a long time, but it seems that I cannot make good use of the left-hand side of the implication.
This is the first part of my code:
Parameter D: Set.
Parameter x: D.
Parameter y: D.
Parameter R: D->Prop.
Lemma b: ~(exists x:D, ~ (R x))->(forall y:D, (R y)).
Can anyone help me figure out how to write the rest of the code?
Your question is a bit vague, as you don't specify what D and R are, or where exactly you are stuck in your proof. Try providing a minimal working example, with an explicit fail tactic marking the point where you're stuck.
In classical logic (the one you're used to in maths), the excluded middle lets you do a case analysis on whether any proposition is true or false. In vanilla Coq, which is built for intuitionistic logic, this is not the case. Your result is actually not provable if the predicate R is not decidable (i.e. if you don't know forall (x:D), R x \/ ~ R x) and the type D is not empty (if D is empty, the conclusion holds vacuously).
Try adding the decidability of R as a hypothesis and reprove it. It should follow more or less this structure (the key being the case analysis on whether (R y) is true or false):
Parameter D: Set.
Parameter R: D -> Prop.
Lemma yourGoal :
(forall x, R x \/ ~ R x) -> (* Decidability of R *)
~ ( exists x, ~ (R x) )->
forall y, (R y).
Proof.
intros Hdec Hex y. (* naming the hypothesis for convenience *)
specialize (Hdec y).
destruct Hdec as [H_Ry_is_true | H_Ry_is_false]. (* case analysis, creates two goals *)
+ (* (R y) is true, which is our goal. *)
assumption.
+ (* (R y) is false, which contradicts Hex *)
exfalso. (* transform your goal into False *)
apply Hex.
(* easy from here, using the [exists] tactic *)
exists y. exact H_Ry_is_false.
Qed.
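For comparison, here is a sketch of the same statement under full excluded middle, using the classic axiom from the standard library's Classical module (the lemma name is arbitrary):

Require Import Classical.

Lemma b_classical (D : Set) (R : D -> Prop) :
~ (exists x : D, ~ R x) -> forall y : D, R y.
Proof.
intros Hex y.
destruct (classic (R y)) as [HRy | HnRy]. (* excluded middle on (R y) *)
- assumption.
- exfalso. apply Hex. exists y. exact HnRy.
Qed.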
PS: this exact result (and its link with excluded middle) is mentioned in Software Foundations, which is a great resource for learning Coq and logic: https://softwarefoundations.cis.upenn.edu/lf-current/Logic.html#not_exists_dist
I have the following coq code:
Theorem filter_exercise : forall (X : Type) (l lf : list X) (test : X -> bool)
(x : X),
filter test l = x :: lf ->
test x = true.
Proof.
intros X l lf test x eq.
induction l as [|l' l].
- inversion eq.
- inversion eq as [H].
Which gives me:
X : Type
l' : X
l, lf : list X
test : X -> bool
x : X
eq : filter test (l' :: l) = x :: lf
testEq : test x = false
IHl : filter test l = x :: lf -> false = true
============================
filter test l = (if test l' then l' :: filter test l else filter test l)
Here, if I could just argue that, because test x = false and both x and l' are universally quantified variables of type X, we also have test l' = false, then I'd be done with the proof.
However, that's a semantic argument, and I'm not sure how to do that in Coq. Am I going down the wrong path?
EDIT
For posterity, this is the solution I ultimately obtained:
Theorem filter_exercise : forall (X : Type) (l lf : list X) (test : X -> bool)
(x : X),
filter test l = x :: lf ->
test x = true.
Proof.
intros X l lf test x eq.
induction l as [|l' l].
- inversion eq.
- simpl in eq. destruct (test l') eqn:testl'.
+ inversion eq. rewrite <- H0. apply testl'.
+ apply IHl. apply eq.
Qed.
I am not sure what you mean by "semantic argument", but this proof strategy is not correct, either on paper or in Coq. Consider, for instance, the following statement:
Lemma faulty : forall n m : nat, even n -> even m.
Proof. Admitted.
By your logic, if n is even, then m should also be even, since both are universally quantified variables of type nat. However, precisely because they are universally quantified, they can be instantiated to different values of nat, thus yielding obviously contradictory statements. For instance, if we instantiate faulty with 2 and 1, we should be able to conclude that 1 is even, which is not true.
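Here is a concrete sketch of that instantiation (using a hypothetical even predicate defined on the spot, standing in for the one in the answer):

Definition even (n : nat) : Prop := exists k, n = 2 * k.

Lemma faulty : forall n m : nat, even n -> even m.
Proof. Admitted.

Lemma one_is_even : even 1.
Proof.
apply (faulty 2 1). (* use the (admitted) faulty lemma with n := 2 and m := 1 *)
unfold even. exists 1. reflexivity. (* 2 is even: 2 = 2 * 1 *)
Qed.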
Your argument that test x = false -> test l' = false is not valid, as both variables x and l' are universally quantified and can therefore take any values. It would only hold if your hypotheses established a specific relationship between the two variables, but here the only such relationship is filter test (l' :: l) = x :: lf, which tells you that x could be an element of l that has not been filtered out by test (though it might also be l').
You should not use inversion here, as your problem is really simple. Your idea to perform an induction is fine, however:
First, try to simplify the relevant hypotheses (simpl in ...).
Then see whether there are different cases to deal with, and use destruct where needed (on the value of test l' in this problem).
You should then be able to solve the problem (the most complicated tactic you might need is injection); see the sketch below.
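Concretely, that plan could look like the following sketch (assuming the standard library's filter; the Software Foundations definition behaves the same way):

Require Import List.

Theorem filter_exercise : forall (X : Type) (l lf : list X) (test : X -> bool) (x : X),
filter test l = x :: lf ->
test x = true.
Proof.
intros X l lf test x eq.
induction l as [| l' l IHl].
- (* l = nil: filter test nil is nil, so eq is impossible *)
simpl in eq. discriminate eq.
- simpl in eq.
destruct (test l') eqn:testl'; rewrite testl' in eq; simpl in eq.
+ (* test l' = true: eq is now  l' :: filter test l = x :: lf  *)
injection eq as Hhd Htl. subst. exact testl'.
+ (* test l' = false: eq is now  filter test l = x :: lf  *)
apply IHl. exact eq.
Qed.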
I am currently following the book Computational Type Theory and Interactive Theorem Proving with Coq by Gert Smolka and on page 75, there is exercise 9.3.14.b asking me to prove that the 'weak law of excluded middle' defined as:
Definition WXM : Prop := forall (X:Prop), ~X \/ ~~X.
implies the de Morgan law for conjunction:
Definition MGC : Prop := forall (X Y:Prop), ~(X/\Y) -> ~X \/ ~Y. (* <- always true *)
i.e.
Lemma L1 : WXM -> MGC
Proof.
Admitted.
I have been trying to solve this for a while now, but with no success. Assuming WXM and ~(X/\Y) for some Prop X Y, and faced with the goal ~X \/ ~Y, I did a case analysis (applying WXM to X and Y). Of the 4 cases, 3 are dispatched immediately, but I am left with a 4th case carrying the additional assumptions ~~X and ~~Y.
Intuitively, ~~X and ~~Y say that X and Y are 'weakly true', and you would hope to conclude that X /\ Y is also weakly true (i.e. show ~~(X/\Y), which contradicts the assumption ~(X/\Y)). However, I am not able to conclude.
I don't want to give up, but I would also like to move on with the book. Does anyone have the answer to this?
One does not actually need to consider all four cases here: the disjuncts in the goal are negations, so committing to one disjunct already gives you a term to work with. Doing the case analysis on Q alone, i.e. getting the ~Q and ~~Q cases, is enough to prove the lemma.
I'm not sure how to explain this intuition further without just showing the solution.
Goal
(forall P : Prop, ~ P \/ ~ ~ P) ->
(forall P Q : Prop, ~ (P /\ Q) -> ~ P \/ ~ Q).
Proof.
intros wlem P Q npq.
destruct (wlem Q) as [nq | nnq].
- right; trivial.
- left; intros p.
apply nnq; intros q.
apply npq; split; trivial.
Qed.
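For completeness, the two-sided case analysis described in the question can also be finished: in the remaining case, the hypotheses ~~P, ~~Q and ~(P /\ Q) are jointly contradictory. A possible sketch of that route:

Goal
(forall P : Prop, ~ P \/ ~ ~ P) ->
(forall P Q : Prop, ~ (P /\ Q) -> ~ P \/ ~ Q).
Proof.
intros wlem P Q npq.
destruct (wlem P) as [np | nnp]; [ left; exact np | ].
destruct (wlem Q) as [nq | nnq]; [ right; exact nq | ].
(* remaining case: nnp : ~ ~ P and nnq : ~ ~ Q *)
exfalso.
apply nnp; intros p. (* the goal becomes ~ P, so assume p : P *)
apply nnq; intros q. (* the goal becomes ~ Q, so assume q : Q *)
apply npq; split; assumption. (* P /\ Q contradicts npq *)
Qed.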
The following example is from chapter Poly of the Software Foundations book.
Definition fold_length {X : Type} (l : list X) : nat :=
fold (fun _ n => S n) l 0.
Theorem fold_length_correct : forall X (l : list X),
fold_length l = length l.
Proof.
intros.
induction l.
- simpl. reflexivity.
- simpl.
1 subgoal
X : Type
x : X
l : list X
IHl : fold_length l = length l
______________________________________(1/1)
fold_length (x :: l) = S (length l)
I expected it to simplify a step here on the left side. It certainly should be able to.
Theorem fold_length_correct : forall X (l : list X),
fold_length l = length l.
Proof.
intros.
induction l.
- simpl. reflexivity.
- simpl. rewrite <- IHl. simpl.
1 subgoal
X : Type
x : X
l : list X
IHl : fold_length l = length l
______________________________________(1/1)
fold_length (x :: l) = S (fold_length l)
While working through the exercises, I had run into cases where simpl would refuse to dive in but reflexivity did the trick, so I tried the same thing here and the proof succeeded.
Note that one would not expect reflexivity to pass given the state of the goal, but it does. In this example it worked, but it did force me to do the rewrite in the opposite direction of what I intended originally.
Is it possible to have more control over simpl so that it does the desired reductions?
For the purposes of this answer, I'll assume the definition of fold is something along the lines of
Fixpoint fold {A B: Type} (f: A -> B -> B) (u: list A) (b: B): B :=
match u with
| [] => b
| x :: v => f x (fold f v b)
end.
(basically fold_right from the standard library). If your definition is substantially different, the tactics I recommend might not work.
The issue here is the behavior of simpl with constants that have to be unfolded before they can be simplified. From the documentation:
Notice that only transparent constants whose name can be reused in the recursive calls are possibly unfolded by simpl. For instance a constant defined by plus' := plus is possibly unfolded and reused in the recursive calls, but a constant such as succ := plus (S O) is never unfolded.
This is a bit hard to understand, so let's use an example.
Definition add_5 (n: nat) := n + 5.
Goal forall n: nat, add_5 (S n) = S (add_5 n).
Proof.
intro n.
simpl.
unfold add_5; simpl.
exact eq_refl.
Qed.
You'll see that the first call to simpl didn't do anything, even though add_5 (S n) could be simplified to S (n + 5). However, if I unfold add_5 first, it works perfectly. I think the issue is that add_5 is not directly a Fixpoint. While add_5 (S n) is equivalent to S (add_5 n), that isn't actually the definition of it. So Coq doesn't recognize that its "name can be reused in the recursive calls". Nat.add (that is, "+") is defined directly as a recursive Fixpoint, so simpl does simplify it.
The behavior of simpl can be changed a little bit (see the documentation again). As Anton mentions in the comments, you can use the Arguments vernacular command to change when simpl tries to simplify. Arguments fold_length _ _ /. tells Coq that fold_length should be unfolded if at least two arguments are provided (the slash separates the required arguments, on its left, from the unnecessary ones, on its right).[1]
A simpler tactic to use if you don't want to deal with that is cbn which works here by default and works better in general. Quoting from the documentation:
The cbn tactic is claimed to be a more principled, faster and more predictable replacement for simpl.
Neither simpl with Arguments and a slash nor cbn reduce the goal to quite what you want in your case, since it'll unfold fold_length but not refold it. You could recognize that the call to fold is just fold_length l and refold it with fold (fold_length l).
Another possibility in your case is to use the change tactic. It seemed like you already knew that fold_length (x :: l) was supposed to simplify to S (fold_length l). If that's the case, you could use change (fold_length (x :: l)) with (S (fold_length l)). and Coq will try to convert one into the other (using only the basic conversion rules, not equalities like rewrite does).
After you've gotten the goal to S (fold_length l) = S (length l) using either of the above tactics, you can use rewrite -> IHl. like you wanted to.
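Putting it together, here is one possible end-to-end proof using change (a sketch, assuming the fold and fold_length definitions quoted above):

Theorem fold_length_correct : forall X (l : list X),
fold_length l = length l.
Proof.
intros X l.
induction l as [| x l IHl].
- reflexivity.
- (* fold_length (x :: l) is convertible to S (fold_length l) *)
change (fold_length (x :: l)) with (S (fold_length l)).
rewrite -> IHl. reflexivity.
Qed.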
[1] I thought the slashes only made simpl unfold things less, which is why I didn't mention it before. I'm not sure what the default actually is, since putting the slash anywhere seems to make simpl unfold fold_length.
I wanted to see a few hands-on examples of Coq proofs whose goal has the form:
exists x1 ... xn, A x1 ... xn
i.e. where the goal contains an existential quantifier. I was having trouble manipulating such goals in meaningful ways to make progress in my proof, and wanted to see a few examples of common tactics for doing so.
What are some good existential quantifiers examples in Coq to prove?
My specific example I had:
Theorem Big_Small_ForwardImpl :
forall (P : Program) (S' : State),
(BigStepR (B_PgmConf P) (B_StateConf S')) -> (ConfigEquivR (S_PgmConf P) (S_BlkConf EmptyBlk S')).
Proof.
intros.
induction P.
unfold ConfigEquivR.
refine (ex_intro _ _ _) .
my context and goals was:
1 subgoal
l : list string
s : Statement
S' : State
H : BigStepR (B_PgmConf (Pgm l s)) (B_StateConf S')
______________________________________(1/1)
exists N : nat, NSmallSteps N (S_PgmConf (Pgm l s)) (S_BlkConf EmptyBlk S')
but then changed to:
1 subgoal
l : list string
s : Statement
S' : State
H : BigStepR (B_PgmConf (Pgm l s)) (B_StateConf S')
______________________________________(1/1)
NSmallSteps ?Goal (S_PgmConf (Pgm l s)) (S_BlkConf EmptyBlk S')
after using the refine (ex_intro _ _ _) tactic. Since I am not sure what is going on I was hoping some simpler examples could show me how to manipulate existential quantifiers in my Coq goal.
helpful comment:
The ?Goal was introduced by Coq as a placeholder for some N that will have to be deduced later in the proof.
The following example is based on the code provided in this answer.
Suppose we have a type T and a binary relation R on elements of type T. For the purpose of this example, we can define those as follows.
Variable T : Type.
Variable R : T -> T -> Prop.
Let us prove the following simple theorem.
Theorem test : forall x y, R x y -> exists t, R x t.
Here is a possible solution.
Proof.
intros. exists y. apply H.
Qed.
Instead of explicitly specifying that y is the element we are looking for, we can rely on Coq's automatic proof mechanisms to deduce which term satisfies R x t:
Proof.
intros.
eexists. (* Introduce a temporary placeholder of the form ?t *)
apply H. (* Coq can deduce from the hypothesis H that ?t must be y *)
Qed.
There exist numerous tactics that make use of the same automated deduction mechanisms, such as eexists, eapply, eauto, etc.
Note that their names often correspond to usual tactics prefixed with an e.
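Here is one more tiny, self-contained example of how the placeholder introduced by eexists (or by refine (ex_intro _ _ _), as in the question) gets instantiated once a later tactic forces its value:

Goal exists n : nat, 2 + n = 5.
Proof.
eexists. (* the goal becomes  2 + ?n = 5,  with a placeholder ?n *)
simpl. (* the goal becomes  S (S ?n) = 5 *)
reflexivity. (* unification instantiates ?n with 3 *)
Qed.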
Given any programming language, whenever a standard library function exists, we should most likely use it rather than write our own code. One would think that this advice applies equally to Coq. However, I recently forced myself to use the same_relation predicate of the Relations module, and I am left with the feeling of being worse off. So I must be missing something, hence my question. To illustrate what I mean, let us consider two possible relations:
Require Import Relations. (* same_relation *)
Require Import Setoids.Setoid. (* seems to be needed for rewrite *)
Inductive rel1 {A:Type} : A -> A -> Prop :=
| rel1_refl : forall x:A, rel1 x x. (* for example *)
Inductive rel2 {A:Type} : A -> A -> Prop :=
| rel2_refl : forall x:A, rel2 x x. (* for example *)
The specific details of these relations do not matter here, as long as rel1 and rel2 are equivalent. Now, if I want to ignore the Coq library, I could simply state:
Lemma L1: forall (A:Type)(x y:A), rel1 x y <-> rel2 x y.
Proof.
(* some proof *)
Qed.
and if I want to follow my instinct and use the Coq library:
Lemma L2: forall (A:Type), same_relation A rel1 rel2.
Proof.
(* some proof *)
Qed.
In the simplest of cases, it seems that having proven lemma L1 or Lemma L2 is equally beneficial:
Lemma application1: forall (A:Type) (x y:A),
rel1 x y -> rel2 x y. (* for example *)
Proof.
intros A x y H. apply L1 (* or L2 *) . exact H.
Qed.
Whether I decide to use apply L1 or apply L2 makes no difference...
However in practice, we are likely to be faced with a more complicated goal:
Lemma application2: forall (A:Type) (x y:A) (p:Prop),
p /\ rel1 x y -> p /\ rel2 x y.
Proof.
intros A x y p H. rewrite <- L1. exact H.
Qed.
My point here is that replacing rewrite <- L1 with rewrite <- L2 will fail. The same is true in the previous example, but there at least I was able to use apply rather than rewrite; I cannot use apply in this case (unless I go through the trouble of splitting my goal). So it seems that I have lost the convenience of using rewrite if I only have Lemma L2.
Using rewrite on results which are an equivalence (not just an equality) is very convenient. It seems that wrapping an equivalence into the predicate same_relation takes away this convenience. Was I right to follow my instinct and force myself to use same_relation? More generally, is it really true that if a construct is defined in the standard Coq library, I should use it rather than define my own version of it?
You pose two questions; I'll try to answer them separately.
Regarding your rewrite problem: this is natural, as same_relation is defined as a double inclusion. I agree that a definition using iff would perhaps be more convenient; it really depends on the kind of goals you have. A possible solution for your problem is to define a view:
Lemma L1 {A:Type} {x y:A} : rel1 x y <-> rel2 x y.
Proof.
Admitted.
Lemma L2 {A:Type} : same_relation A rel1 rel2.
Proof.
Admitted.
Lemma U {T} {R1 R2 : relation T} :
same_relation _ R1 R2 -> forall x y, R1 x y <-> R2 x y.
Proof. now destruct 1; intros x y; split; auto. Qed.
Lemma application2 {A:Type} {x y:A} {p:Prop} :
p /\ rel1 x y -> p /\ rel2 x y.
Proof. now rewrite (U L2). Qed.
Note also that rewriting with a <-> relation is not really based on equality, but on "setoid rewriting"; in fact, A <-> B -> A = B is not provable in Coq.
Regarding your second question, whether to use the Coq standard library is a highly subjective topic. I personally rarely use it; I prefer a different library called math-comp, but YMMV. Regarding relations, mathcomp mostly specializes in boolean relations (rel T := T -> T -> bool), so equivalence of relations is simply pointwise equality; given r1 and r2, you'd typically write r1 =2 r2.
IMHO in the end, such choices are highly dependent on your application domain.
[edit]: Note that the Relation library is dated:
Naive set theory in Coq. Coq V6.1. This work was started in July 1993 by F. Prost.
So indeed, it may not be the best modern base to build Coq developments on.