Stuck on proving uniqueness of null element in posets - coq

I am trying to learn Coq by formalizing facts about posets. While proving my first theorem I got stuck here.
Class Poset {A: Type} ( leq : A -> A -> Prop ) : Prop := {
reflexivity: forall x y : A, x = y -> (leq x y);
antisymmetry: forall x y : A, ((leq x y) /\ (leq y x)) -> x = y;
transitivity: forall x y z :A, ((leq x y) /\ (leq y z) -> (leq x z))
}.
Module Poset.
Parameter A : Type.
Parameter leq : A -> A -> Prop.
Parameter poset : @Poset A leq.
Definition null_element (n : A) :=
forall a : A, leq n a.
Theorem uniqueness_of_null_element (n1 : A) (n2 : A) : null_element(n1) /\ null_element(n2) -> n1 = n2.
Proof.
unfold null_element.
Qed.
End Poset.
I am not sure how to proceed after this. Can someone help?

I think I got it.
This is what I did.
Proof.
unfold null_element.
intros [H1 H2].
specialize H1 with n2.
specialize H2 with n1.
apply antisymmetry.
split.
- apply H1.
- apply H2.
Qed.
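For what it's worth, the same argument compresses into a couple of lines (a sketch, assuming the same context as above):
Proof.
unfold null_element.
intros [H1 H2].
apply antisymmetry; split; [apply H1 | apply H2].
Qed.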

Related

"Cannot instantiate metavariable P of type ..." when destructing in Coq proof mode

I have a problem proving a trivial proposition.
First, we define a composition of functions with general domains and codomains:
Definition fun_comp {X Y Z W}
(f : X -> Y) (g : Z -> W) (H : Y = Z) : X -> W.
destruct H. refine (fun x => g (f x)). Defined.
We will now try to prove a trivial lemma:
Lemma compose_trivial {X Y Z} (f : X -> Y) (g : Y -> Z) (H : Y = Y)
: forall x, fun_comp f g H x = g (f x).
Proof.
intros x. revert f g. destruct H.
But destruct H. fails with an error message:
Cannot instantiate metavariable P of type
"forall a : Type, Y = a -> Prop" with abstraction
"fun (Y : Type) (H : Y = Y) =>
forall (f : X -> Y) (g : Y -> Z), fun_comp f g H x = g (f x)"
of incompatible type
"forall Y : Type, Y = Y -> Prop".
If it were able to generalize the Y on the right-hand side of H independently, the destruct tactic would work, but that would conflict with the right-hand side of the goal, g (f x).
Is it possible to prove compose_trivial? If so, how?
This is not trivial at all. It has to do with uniqueness of identity proofs (UIP), which is not provable in Coq; you need an extra axiom.
For example,
Require Import ProofIrrelevance.
Lemma compose_trivial {X Y Z} (f : X -> Y) (g : Y -> Z) (H : Y = Y)
: forall x, fun_comp f g H x = g (f x).
Proof.
intros x.
now rewrite <- (proof_irrelevance _ (eq_refl Y) H).
Qed.
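For comparison, the same proof should also go through with UIP_refl from Coq.Logic.Eqdep, which likewise rests on an axiom; a sketch:
Require Import Coq.Logic.Eqdep.
Lemma compose_trivial' {X Y Z} (f : X -> Y) (g : Y -> Z) (H : Y = Y)
: forall x, fun_comp f g H x = g (f x).
Proof.
intros x.
rewrite (UIP_refl _ _ H).
reflexivity.
Qed.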

De Morgan's law for quantifiers in Coq

I am trying to prove some FOL equivalences. I am having trouble using De Morgan's laws for quantifiers, in particular
~ (exists x. P(x)) <-> forall x. ~P(x)
I tried applying not_ex_all_not from Coq.Logic.Classical_Pred_Type, and scoured Stack Overflow (Coq convert non exist to forall statement, Convert ~exists to forall in hypothesis), but neither came close to solving the issue.
Theorem t3: forall (T: Type), forall p q: T -> Prop, forall r: T -> T -> Prop,
~(exists (x: T), ((p x) /\ (exists (y: T), ((q y) /\ ~(r x y)))))
<-> forall (x y: T), ((p x) -> (((q y) -> (r x y)))).
Proof.
intros T p q r.
split.
- intros H.
apply not_ex_all_not.
I get this error:
In environment
T : Type
p, q : T → Prop
r : T → T → Prop
H : ¬ (∃ x : T, p x ∧ (∃ y : T, q y ∧ ¬ r x y))
Unable to unify
"∀ (U : Type) (P : U → Prop), ¬ (∃ n : U, P n) → ∀ n : U, ¬ P n"
with "∀ x y : T, p x → q y → r x y".
I expected De Morgan's law to be applied to the goal, resulting in a negated existential.
Let's observe what we can derive from H:
~ (exists x : T, p x /\ (exists y : T, q y /\ ~ r x y))
=> (not exists <-> forall not)
forall x : T, ~ (p x /\ (exists y : T, q y /\ ~ r x y))
=> (not (A and B) <-> A implies not B)
forall x : T, p x -> ~ (exists y : T, q y /\ ~ r x y)
=> (not exists <-> forall not, applied under p x)
forall x : T, p x -> forall y : T, ~ (q y /\ ~ r x y)
=> (not (A and B) <-> A implies not B, again)
forall x : T, p x -> forall y : T, q y -> ~ (~ r x y)
We end up with a double negation on the conclusion. If you don't mind using a classical axiom, you can apply NNPP to strip it, and we're done.
Here is the equivalent Coq proof:
Require Import Classical.
(* I couldn't find this lemma in the stdlib, so here is a quick proof. *)
Lemma not_and_impl_not : forall P Q : Prop, ~ (P /\ Q) <-> (P -> ~ Q).
Proof. tauto. Qed.
Theorem t3: forall (T: Type), forall p q: T -> Prop, forall r: T -> T -> Prop,
~(exists (x: T), ((p x) /\ (exists (y: T), ((q y) /\ ~(r x y)))))
<-> forall (x y: T), ((p x) -> (((q y) -> (r x y)))).
Proof.
intros T p q r.
split.
- intros H x y Hp Hq.
apply not_ex_all_not with (n := x) in H.
apply (not_and_impl_not (p x)) in H; try assumption.
apply not_ex_all_not with (n := y) in H.
apply (not_and_impl_not (q y)) in H; try assumption.
apply NNPP in H. assumption.
The above was forward reasoning. If you want to go backwards (applying lemmas to the goal instead of to hypotheses), things get a little harder, because you need to build the exact forms before you can apply the lemmas to the goal. This is also why your apply fails: Coq doesn't automatically figure out where and how to apply the lemma.
(And apply is a relatively low-level tactic. There is a more advanced Coq feature, generalized (setoid) rewriting, that lets you rewrite with a propositional equivalence inside subterms; see the sketch after the backward-reasoning proof below.)
Require Import Classical.
Lemma not_and_impl_not : forall P Q : Prop, ~ (P /\ Q) <-> (P -> ~ Q).
Proof. tauto. Qed.
Theorem t3: forall (T: Type), forall p q: T -> Prop, forall r: T -> T -> Prop,
~(exists (x: T), ((p x) /\ (exists (y: T), ((q y) /\ ~(r x y)))))
<-> forall (x y: T), ((p x) -> (((q y) -> (r x y)))).
Proof.
intros T p q r.
split.
- intros H x y Hp Hq.
apply NNPP. revert dependent Hq. apply not_and_impl_not.
revert dependent y. apply not_ex_all_not.
revert dependent Hp. apply not_and_impl_not.
revert dependent x. apply not_ex_all_not. apply H.
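To illustrate the aside above about rewriting with propositional lemmas: once the Setoid machinery is loaded, rewrite can use an iff-shaped lemma directly inside a hypothesis or a goal. Below is a minimal sketch; the lemma not_ex_iff_all_not is stated here just for the example (it is not the stdlib's not_ex_all_not, which is only an implication).
Require Import Setoid.
Lemma not_ex_iff_all_not (U : Type) (P : U -> Prop) :
~ (exists x, P x) <-> (forall x, ~ P x).
Proof. firstorder. Qed.
Example rewrite_demo (T : Type) (p : T -> Prop)
(H : ~ (exists x, p x)) : forall x, ~ p x.
Proof.
rewrite not_ex_iff_all_not in H. (* rewrite with the equivalence inside H *)
exact H.
Qed.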
Actually, there is an automation tactic called firstorder, which (as the name suggests) solves first-order intuitionistic logic. Note that NNPP is still needed, since firstorder doesn't handle classical logic.
Theorem t3: forall (T: Type), forall p q: T -> Prop, forall r: T -> T -> Prop,
~(exists (x: T), ((p x) /\ (exists (y: T), ((q y) /\ ~(r x y)))))
<-> forall (x y: T), ((p x) -> (((q y) -> (r x y)))).
Proof.
intros T p q r.
split.
- intros H x y Hp Hq. apply NNPP. firstorder.
- firstorder. Qed.

How to eliminate a disjunction inside of an expression?

Lemma In_map_iff :
forall (A B : Type) (f : A -> B) (l : list A) (y : B),
In y (map f l) <->
exists x, f x = y /\ In x l.
Proof.
intros A B f l y.
split.
- generalize dependent y.
generalize dependent f.
induction l.
+ intros. inversion H.
+ intros.
simpl.
simpl in H.
destruct H.
* exists x.
split.
apply H.
left. reflexivity.
*
1 subgoal
A : Type
B : Type
x : A
l : list A
IHl : forall (f : A -> B) (y : B),
In y (map f l) -> exists x : A, f x = y /\ In x l
f : A -> B
y : B
H : In y (map f l)
______________________________________(1/1)
exists x0 : A, f x0 = y /\ (x = x0 \/ In x0 l)
Since proving exists x0 : A, f x0 = y /\ In x0 l would suffice for proving exists x0 : A, f x0 = y /\ (x = x0 \/ In x0 l), I want to eliminate x = x0 inside the goal so that I can apply the inductive hypothesis, but I am not sure how to do this. I've tried left in (x = x0 \/ In x0 l) and various other things, but I haven't been successful. As it turns out, defining a helper of type forall a b c, (a /\ c) -> a /\ (b \/ c) to do the rewriting does not work for terms under an existential either.
How could this be done?
Note that the above is one of the exercises in the Software Foundations (SF) book.
You can get access to the components of your inductive hypothesis with any of the following:
specialize (IHl f y H); destruct IHl
destruct (IHl f y H)
edestruct IHl
You can then use exists and split to manipulate the goal into a form that is easier to work with.
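Concretely, for the goal displayed in the question, one possible continuation along those lines is the following sketch (the names x0, Hfx0 and Hin are just picked here):
destruct (IHl f y H) as [x0 [Hfx0 Hin]].
exists x0. split.
apply Hfx0.
right. apply Hin.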
As it turns out, defining a helper lemma works nicely here.
Lemma In_map_iff_helper : forall (X : Type) (a b c : X -> Prop),
(exists q, (a q /\ c q)) -> (exists q, a q /\ (b q \/ c q)).
Proof.
intros.
destruct H.
exists x.
destruct H.
split.
apply H.
right.
apply H0.
Qed.
This does the rewriting that is needed right off the bat. I made a really dumb error in thinking that I needed a tactic rather than an auxiliary lemma. I should have studied the preceding examples more closely; if I had, I'd have realized that existentials need to be accounted for.
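Incidentally, the helper itself can almost certainly be discharged in one step by the built-in first-order automation; a sketch (with a primed name so it does not clash with the lemma above):
Lemma In_map_iff_helper' : forall (X : Type) (a b c : X -> Prop),
(exists q, (a q /\ c q)) -> (exists q, a q /\ (b q \/ c q)).
Proof. firstorder. Qed.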

Minimum in non-empty, finite set

With the following definitions I want to prove the lemma without_P.
Variable n : nat.
Definition mnnat := {m : nat | m < n}.
Variable f : mnnat -> nat.
Lemma without_P : (exists x : mnnat, True) -> (exists x, forall y, f x <= f y).
Lemma without_P means: if you know the (finite) set mnnat is not empty, then there must exist an element of mnnat whose image under f is the smallest of them all.
We know mnnat is finite, since it contains exactly the numbers below n, and in the context of the proof of without_P we also know mnnat is not empty, because of the premise (exists x : mnnat, True).
Now mnnat, being non-empty and finite, "naturally/intuitively" has some smallest element (after applying f to all its elements).
At the moment I am stuck at the point below, where I wanted to proceed by induction over n, which is not allowed.
1 subgoal
n : nat
f : mnnat -> nat
x : nat
H' : x < n
______________________________________(1/1)
exists (y : nat) (H0 : y < n),
forall (y0 : nat) (H1 : y0 < n),
f (exist (fun m : nat => m < n) y H0) <= f (exist (fun m : nat => m < n) y0 H1)
My only idea here is to assert the existence of a function f' : nat -> nat like this: exists (f' : nat -> nat), forall (x : nat) (H0 : x < n), f (exist (fun m : nat => m < n) x H0) = f' x; after proving this assertion I could prove the lemma by induction over n. How can I prove this assertion?
Is there a way to prove "non-empty, finite sets (after applying f to each element) have a minimum" more directly? My current path seems too hard for my Coq-skills.
Require Import Psatz Arith. (* use lia to solve the linear integer arithmetic. *)
Variable f : nat -> nat.
The goal below is essentially yours, modulo packing the statement into a dependent type. (It doesn't say that mi < n, but you can extend the statement to also contain that.)
Goal forall n, exists mi, forall i, i < n -> f mi <= f i.
induction n; intros.
- now exists 0; inversion 1. (* base case: there is no i < 0, so the claim is vacuous *)
- destruct IHn as [mi IHn]. (* get a position mi minimizing f over indices < n *)
(* Is f mi still smallest, or is f n the smallest? *)
(* If f mi < f n then mi is the position of the
smallest value, otherwise n is that position,
so consider those two cases. *)
destruct (lt_dec (f mi) (f n));
[ exists mi | exists n];
intros.
+ destruct (eq_nat_dec i n).
subst; lia.
apply IHn; lia.
+ destruct (eq_nat_dec i n).
subst; lia.
apply le_trans with (f mi).
lia.
apply IHn.
lia.
Qed.
Your problem is a specific instance of a more general result which is proven, for example, in math-comp. There you even have a notation denoting "the minimal x such that P holds", where P must be a decidable predicate.
Without tweaking your statement too much, we get:
From mathcomp Require Import all_ssreflect.
Variable n : nat.
Variable f : 'I_n.+1 -> nat.
Lemma without_P : exists x, forall y, f x <= f y.
Proof.
have/(_ ord0)[] := arg_minP (P:=xpredT) f erefl => i _ P.
by exists i => ?; apply/P.
Qed.
I found a proof of my assertion exists (f' : nat -> nat), forall (x : nat) (H0 : x < n), f (exist (fun m : nat => m < n) x H0) = f' x by proving the similar assertion exists (f' : nat -> nat), forall x : mnnat, f x = f' (proj1_sig x) as Lemma f'exists below. The first assertion then follows almost trivially (a sketch is given after the proof).
After proving this assertion I can prove Lemma without_P in much the same way as user larsr does above.
I used the mod function to convert any nat to a nat smaller than n, apart from the base case n = 0.
Lemma mod_mnnat : forall m,
n > 0 -> m mod n < n.
Proof.
intros.
apply PeanoNat.Nat.mod_upper_bound.
intuition.
Qed.
Lemma mod_mnnat' : forall m,
m < n -> m mod n = m.
Proof.
intros.
apply PeanoNat.Nat.mod_small.
auto.
Qed.
Lemma f_proj1_sig : forall x y,
proj1_sig x = proj1_sig y -> f x = f y.
Proof.
intros.
rewrite (sig_eta x).
rewrite (sig_eta y).
destruct x. destruct y as [y H0].
simpl in *.
subst.
assert (l = H0).
apply proof_irrelevance. (* This was tricky to find.
It says that any two proofs of the same proposition are equal.
This makes (exist a b c) and (exist a b d) equal
whenever c and d prove the same thing. *)
subst.
intuition.
Qed.
(* Main Lemma *)
Lemma f'exists :
exists (ff : nat -> nat), forall x : mnnat, f x = ff (proj1_sig x).
Proof.
assert (n = 0 \/ n > 0).
induction n.
auto.
intuition.
destruct H.
exists (fun m : nat => m).
intuition. destruct x. assert (l' := l). rewrite H in l'. inversion l'.
unfold mnnat in *.
(* I am using the mod function to map (m : nat) into {m | m < n} *)
exists (fun m : nat => f (exist (ltn n) (m mod n) (mod_mnnat m H))).
intros.
destruct x.
simpl.
unfold ltn.
assert (l' := l).
apply mod_mnnat' in l'.
assert (proj1_sig (exist (fun m : nat => m < n) x l) = proj1_sig (exist (fun m : nat => m < n) (x mod n) (mod_mnnat x H))).
simpl. rewrite l'.
auto.
apply f_proj1_sig in H0.
auto.
Qed.
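For completeness, here is roughly how the assertion mentioned at the start of this answer might then be recovered from f'exists (a sketch; the name f'exists_unpacked and the hypothesis names are made up):
Lemma f'exists_unpacked :
exists (ff : nat -> nat),
forall (x : nat) (H0 : x < n), f (exist (fun m : nat => m < n) x H0) = ff x.
Proof.
destruct f'exists as [ff Hff].
exists ff.
intros x H0.
exact (Hff (exist (fun m : nat => m < n) x H0)).
Qed.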

How to give a counterexample in Coq?

Is it possible to give a counterexample for a statement which doesn't hold in general? For example, that the universal quantifier does not distribute over the connective "or". How would you state that to begin with?
Parameter X : Set.
Parameter P : X -> Prop.
Parameter Q : X -> Prop.
(* This holds in general *)
Theorem forall_distributes_over_and
: (forall x:X, P x /\ Q x) -> ((forall x:X, P x) /\ (forall x:X, Q x)).
Proof.
intro H. split. apply H. apply H.
Qed.
(* This doesn't hold in general *)
Theorem forall_doesnt_distributes_over_or
: (forall x:X, P x \/ Q x) -> ((forall x:X, P x) \/ (forall x:X, Q x)).
Abort.
Here is a quick and dirty way to prove something similar to what you want:
Theorem forall_doesnt_distributes_over_or:
~ (forall X P Q, (forall x:X, P x \/ Q x) -> ((forall x:X, P x) \/ (forall x:X, Q x))).
Proof.
intros H.
assert (X : forall x : bool, x = true \/ x = false).
destruct x; intuition.
specialize (H _ (fun b => b = true) (fun b => b = false) X).
destruct H as [H|H].
now specialize (H false).
now specialize (H true).
Qed.
I have to quantify X, P and Q inside the negation in order to be able to supply the instances I want. You couldn't quite do that with your Parameters, since they fix an abstract X, P and Q, which makes your theorem potentially true (it cannot be refuted for arbitrary X, P and Q).
In general, if you want to produce a counterexample, you can state the negation of the formula and then prove that this negation is satisfied.
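For instance, a minimal illustration of that pattern on a toy statement:
Theorem not_all_nat_zero : ~ (forall n : nat, n = 0).
Proof.
intros H.
specialize (H 1).
discriminate H.
Qed.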