I defined Subtype as follows:
Record Subtype {T:Type}(P : T -> Prop) := {
subtype :> Type;
subtype_inj :> subtype -> T;
subtype_isinj : forall (s t:subtype), (subtype_inj s = subtype_inj t) -> s = t;
subtype_h : forall (x : T), P x -> (exists s:subtype,x = subtype_inj s);
subtype_h0 : forall (s : subtype), P (subtype_inj s)}.
Can the following theorem be proven?
Theorem Subtypes_Exist : forall {T}(P : T -> Prop), Subtype P.
If not, is it provable from any well-known compatible axiom? Or can I add it as an axiom myself? Would it conflict with any of the usual axioms (like extensionality, functional choice, etc.)?
Your definition is practically identical to the one in MathComp; the main thing you are missing is injectivity, due to proof relevance.
For that, I am afraid you will need to assume proof irrelevance:
Require Import ProofIrrelevance.
Theorem Subtypes_Exist : forall {T}(P : T -> Prop), Subtype P.
Proof.
intros T P; set (subtype_inj := @proj1_sig T P).
apply (@Build_Subtype _ _ { x | P x} subtype_inj).
+ intros [s Ps] [t Pt]; simpl; intros ->.
now rewrite (proof_irrelevance _ Ps Pt).
+ now intros x Px; exists (exist _ x Px).
+ now intros [x Px].
Qed.
You can always restrict your predicate P to a type that is effectively proof-irrelevant, of course.
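For example, predicates defined by a boolean test are effectively proof-irrelevant without any axiom, because equality proofs on a decidable type are unique (UIP_dec from the standard library, with bool_dec); a minimal sketch:
Require Import Coq.Logic.Eqdep_dec.
Require Import Coq.Bool.Bool.
(* A predicate given by a boolean test... *)
Definition Pb (b : nat -> bool) (x : nat) : Prop := b x = true.
(* ...has at most one proof per point, axiom-free. *)
Lemma Pb_irrelevant (b : nat -> bool) (x : nat) (p q : Pb b x) : p = q.
Proof. apply UIP_dec, bool_dec. Qed.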
Related
Coq uses constructive logic, which means that if you try to fill out
De Morgan's laws, you'll end up missing two of them.
Namely, you can't prove:
Theorem deMorgan_nand P Q (andPQ : ~(P /\ Q)) : ~P \/ ~Q.
Abort.
Theorem deMorgan_nall {A} (P : A -> Prop) (allPa : ~forall a, P a) : exists a, ~P a.
Abort.
This makes sense, because you'd have to compute whether it was
the left or the right disjunct that holds, which you can't do in general.
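For contrast, the missing law does become provable once excluded middle is assumed, because classic P decides which disjunct to take; a sketch using the standard Classical module:
Require Import Classical.
Theorem deMorgan_nand_classical P Q (andPQ : ~(P /\ Q)) : ~P \/ ~Q.
Proof.
  destruct (classic P) as [p | np].
  - right. intros q. exact (andPQ (conj p q)).
  - left. exact np.
Qed.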
The paper
"Classical Mathematics for a Constructive World"
(https://arxiv.org/pdf/1008.1213.pdf)
gives the definitions
Definition orW P Q := ~(~P /\ ~Q).
Definition exW {A} (P : A -> Prop) := ~forall a, ~P a.
similar to De Morgan's law. This suggests an alternative formulation.
Theorem deMorgan_nand P Q (andPQ : ~(P /\ Q)) : orW (~P) (~Q).
hnf; intros nnPQ; destruct nnPQ as [ nnP nnQ ].
apply nnP; clear nnP; hnf; intros p.
apply nnQ; clear nnQ; hnf; intros q.
apply (andPQ (conj p q)).
Qed.
Theorem deMorgan_nall {A} (P : A -> Prop) (allPa : ~forall a, P a) : exW (fun a => ~P a).
Abort.
But this doesn't work for negating forall. In particular, the proof gets stuck
trying to convert ~~P a into P a. So, even though the nand case let us
convert ~~P into P, the same step doesn't go through under a forall.
Nor can you exhibit some particular element a of A for which ~P a holds.
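Concretely, here is where the attempt at deMorgan_nall stalls:
Theorem deMorgan_nall {A} (P : A -> Prop) (allPa : ~forall a, P a) : exW (fun a => ~P a).
Proof.
  hnf; intros all1.       (* all1 : forall a, ~~P a *)
  apply allPa; intros a.  (* the goal is now P a *)
  (* stuck: all1 a : ~~P a, and since the goal is P a rather than
     False, the double negation cannot be eliminated *)
Abort.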
Similarly, you could try to show
Theorem deMorgan_nexn {A} (P : A -> Prop) (exPa : ~exists a, ~P a) : ~~forall a, P a.
Abort.
but that gets stuck because, once you have the argument a,
the conclusion is no longer False, so you can't use ~~P -> P.
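For comparison, the reverse direction does go through constructively, since its conclusion stays negative all the way down:
Theorem deMorgan_nex {A} (P : A -> Prop) (nexPa : ~exists a, ~P a) : forall a, ~~P a.
Proof.
  intros a nPa.
  apply nexPa.
  exists a.
  exact nPa.
Qed.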
So, if you can't prove deMorgan_nall, is there any theorem like it?
Or is ~forall a, P a already as simplified as it can get?
More generally, when the conclusion is False, that effectively allows using
the law of excluded middle (P \/ ~P). Is there any counterpart
to that which works when the proposition takes an argument, that is,
P : A -> Prop instead of P : Prop?
The principle you are looking for is known as double negation shift. It is not valid in intuitionistic logic in general. Despite looking fairly innocuous at first, as its conclusion is a doubly-negated formula, it is actually quite potent. Indeed, DNS is essentially what is needed in order to interpret the axiom of choice through the double-negation translation.
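In Coq syntax, DNS reads as follows (this is exactly the axiom added in the edit below):
Axiom DNS : forall {A} (P : A -> Prop),
  (forall a, ~~P a) -> ~~(forall a, P a).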
Edit by scubed:
So, that means I have to add an axiom to handle this case. Using the axiom,
Axiom deMorgan_allnn : forall {A} (P : A -> Prop) (allPa : forall a, ~~P a), ~~forall a, P a.
Theorem deMorgan_nall {A} (P : A -> Prop) (allPa : ~forall a, P a) : exW (fun a => ~P a).
hnf; intros ex1; apply deMorgan_allnn in ex1.
apply ex1; clear ex1; hnf; intros all2.
apply (allPa all2).
Qed.
I'm trying to reason about a TRS (term rewriting system), and I have run into the following proof obligation:
infinite_sequence : forall t' : Term,
transitive_closure R t t' ->
exists t'' : Term, R t' t''
============================
exists f : nat -> Term, forall n : nat, R (f n) (f (n + 1))
With transitive_closure defined as follows:
Definition transitive_closure (trs : TRS) (x y : Term) :=
  exists f : nat -> Term,
    f 0 = x /\
    exists l : nat,
      f l = y /\
      forall n : nat, n < l -> trs (f n) (f (n + 1)).
So when I unfold:
infinite_sequence : forall t' : Term,
(exists f : nat -> Term,
f 0 = t /\
(exists l : nat,
f l = t' /\
(forall n : nat, n < l -> R (f n) (f (n + 1))))) ->
exists t'' : Term, R t' t''
============================
exists f : nat -> Term, forall n : nat, R (f n) (f (n + 1))
Is this proof obligation possible to fulfill? I am not married to this exact definition of transitive_closure, so if the proof becomes much easier with a different definition, I'm open to that.
Since your goal starts with exists f : nat -> Term, you have to explicitly build such a function. The easiest way to do so is to first build a function with a slightly richer return type ({ u: Term | transitive_closure R t u } instead of Term) and then to project pointwise its first component to finish the proof. This would give the following script:
simple refine (let f : nat -> { u: Term | transitive_closure R t u } := _ in _).
- fix f 1.
intros [|n].
{ exists t. exists (fun _ => t). admit. }
destruct (f n) as [t' H].
destruct (infinite_sequence t' H) as [t'' H']. (* ISSUE *)
exists t''.
destruct H as [f' [H1 [l [H2 H3]]]].
exists (fun m => if Nat.ltb m l then f' m else t'').
admit.
- exists (fun n => proj1_sig (f n)).
intros n.
rewrite Nat.add_1_r.
simpl.
destruct (f n) as [fn Hn].
now destruct infinite_sequence as [t'' H'].
The two admits are just there to keep the code simple; there is nothing difficult about them. The real issue comes from the line destruct (infinite_sequence t' H), since Coq will complain that "Case analysis on sort Set is not allowed for inductive definition ex." Indeed, infinite_sequence states that there exists t'' such that R t' t'', but it does so in a non-informative way (i.e., in Prop), while you need it to build a function that lives in the concrete world (i.e., in Set).
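Here is a minimal, standalone illustration of that restriction, independent of your development:
(* ex lives in Prop, so its witness cannot be used to build a nat;
   Coq rejects this definition for the same reason. *)
Fail Definition witness (H : exists n : nat, n = 0) : nat :=
  match H with ex_intro _ n _ => n end.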
There are only two axiom-free solutions, but both might be incompatible with the rest of your development. The easiest one is to put infinite_sequence in Set, which means its type is changed to forall t', transitive_closure R t t' -> { t'' | R t' t'' }.
The second solution requires R to be a decidable relation and Term to be an enumerable set. That way, you can still build a concrete t'' by enumerating all the terms until you find one that satisfies R t' t''. In that case, infinite_sequence is only used to prove that this process terminates, so it can be non-informative.
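If you go the second route, note that the standard library's ConstructiveEpsilon module already packages that search. Here is a sketch, where Term, R, R_dec, enum, and enum_surj are hypothetical stand-ins for the corresponding pieces of your development:
Require Import Coq.Logic.ConstructiveEpsilon.
Section Search.
  Variable Term : Type.
  Variable R : Term -> Term -> Prop.
  Hypothesis R_dec : forall u v, {R u v} + {~ R u v}.     (* R is decidable *)
  Variable enum : nat -> Term.                            (* Term is enumerable *)
  Hypothesis enum_surj : forall u, exists n, enum n = u.
  Lemma find_successor (t' : Term) :
    (exists t'', R t' t'') -> {t'' : Term | R t' t''}.
  Proof.
    intros Hex.
    (* Transport the Prop-level witness to the index space nat. *)
    assert (Hn : exists n, R t' (enum n)).
    { destruct Hex as [t'' Ht''].
      destruct (enum_surj t'') as [n En].
      exists n. rewrite En. exact Ht''. }
    (* Linear search, justified by Hn, produces an informative witness. *)
    destruct (constructive_indefinite_ground_description_nat
                (fun n => R t' (enum n))
                (fun n => R_dec t' (enum n)) Hn) as [n Rn].
    exists (enum n). exact Rn.
  Defined.
End Search.
The Prop-level hypothesis is only used to prove that the search terminates, exactly as described above.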
Trying to comprehend the answer of @keep_learning, I walked through this code step by step:
Inductive nostutter {X:Type} : list X -> Prop :=
| ns_nil : nostutter []
| ns_one : forall (x : X), nostutter [x]
| ns_cons: forall (x : X) (h : X) (t : list X), nostutter (h::t) -> x <> h -> nostutter (x::h::t).
Example test_nostutter_4: not (nostutter [3;1;1;4]).
Proof.
intro.
inversion_clear H.
inversion_clear H0.
unfold not in H2.
(* We are here *)
specialize (H2 eq_refl).
apply H2.
Qed.
Here is what we have before executing specialize:
H1 : 3 <> 1
H : nostutter [1; 4]
H2 : 1 = 1 -> False
============================
False
Here is the eq Prop whose constructor eq_refl is used by specialize:
Inductive eq (A:Type) (x:A) : A -> Prop :=
eq_refl : x = x :>A
where "x = y :> A" := (#eq A x y) : type_scope.
I can't explain how this command works:
specialize (H2 eq_refl).
I read about specialize in the reference manual, but the explanation there is too broad. As far as I understand, it sees that the "1 = 1" expression in H2 matches the eq_refl constructor, and therefore the eq proposition holds. Then it simplifies the expression:
True -> False => False
And we get
H1 : 3 <> 1
H : nostutter [1; 4]
H2 : False
============================
False
Can somebody provide a minimal example, with an explanation of what specialize is doing, so that I can use it freely?
Update
Trying to imitate how specialize works using apply, I did the following:
Example specialize {A B: Type} (H: A -> B) (a: A): B.
Proof.
apply H in a.
This gives:
A : Type
B : Type
H : A -> B
a : B
============================
B
Almost the same as specialize, only the hypothesis name is different.
In test_nostutter_4 theorem I tried this and it worked:
remember (@eq_refl nat 1) as Heq.
apply H2 in Heq as H3.
It gives us:
H1 : 3 <> 1
H : nostutter [1; 4]
H2 : 1 = 1 -> False
Heq : 1 = 1
H3 : False
HeqHeq : Heq = eq_refl
============================
False
This one was more complex: we had to introduce a new hypothesis, Heq. But we got what we needed, H3, at the end.
Does specialize internally use something like remember? Or is it possible to solve it with apply but without remember?
specialize, in its simplest form, simply replaces a given hypothesis with that hypothesis applied to some other term.
In this proof,
Example specialize {A B: Type} (H: A -> B) (a: A): B.
Proof.
specialize (H a).
exact H.
Qed.
we initially have the hypothesis H: A -> B. When we call specialize (H a), we apply H to a (apply as in function application). This gives us something of type B. specialize then gets rid of the old H for us and replaces it with the result of the application. It gives the new hypothesis the same name: H.
In your case, we have H2: 1 = 1 -> False, which is a function from the type 1 = 1 to the type False. That means that H2 applied to eq_refl is of type False, i.e. H2 eq_refl: False. When we use the tactic specialize (H2 eq_refl)., the old H2 is cleared and replaced by a new term (H2 eq_refl) whose type is False. It keeps the old name H2, though.
specialize is useful when you're sure that you're only going to use a hypothesis once, since it automatically gets rid of the old hypothesis. One disadvantage is that the old name may not fit the meaning of the new hypothesis. However, in your case and in my example, H is a generic enough name that it works either way.
To your update...
specialize is a core tactic defined directly in the ltac plugin. It doesn't use any other tactic internally; it is itself a primitive.
If you want to keep a hypothesis, you can use the as modifier, which works for both specialize and apply. In the proof
Example specialize {A B: Type} (H: A -> B) (a: A): B.
Proof.
if you do specialize (H a) as H0., instead of clearing H, it'll introduce a new hypothesis H0: B. apply H in a as H0. has the same effect.
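For example, a minimal sketch of the keeping variant (assuming a Coq version recent enough to support the as clause):
Example specialize_keep {A B : Type} (H : A -> B) (a : A) : B.
Proof.
  specialize (H a) as H0.  (* H : A -> B is kept; H0 : B is added *)
  exact H0.
Qed.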
I'm doing a Coq proof. I have P -> Q as a hypothesis, and (P -> Q) -> (~Q -> ~P) as a lemma. How can I transform the hypothesis into ~Q -> ~P?
When I try to apply it, I just spawn new subgoals, which isn't helpful.
Put another way, I wish to start with:
P : Prop
Q : Prop
H : P -> Q
and end up with
P : Prop
Q : Prop
H : ~Q -> ~P
given the lemma above - i.e. (P -> Q) -> (~Q -> ~P).
This is not as elegant as a single apply, but you can use pose proof (lemma _ _ H) as H0, where lemma is the name of your lemma. This will add another hypothesis with the correct type to the context, under the name H0.
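Here is a self-contained sketch, with contrapositive standing in for your lemma (it is even constructively provable, so no axiom is needed):
Lemma contrapositive (P Q : Prop) : (P -> Q) -> (~Q -> ~P).
Proof. intros HPQ HnQ HP. exact (HnQ (HPQ HP)). Qed.

Example transform (P Q : Prop) (H : P -> Q) : ~Q -> ~P.
Proof.
  pose proof (contrapositive _ _ H) as H0.  (* H0 : ~Q -> ~P *)
  clear H.  (* optional: drop the original hypothesis *)
  exact H0.
Qed.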
This is one case where ssreflect views do help:
From Coq Require Import ssreflect.
Variable (P Q : Prop).
Axiom u : (P -> Q) -> (~Q -> ~P).
Lemma test (H : P -> Q) : False.
Proof. move/u in H. Abort.
apply u in H also works; however, it is too smart for its own good and does too much.
If I wanted to transform H in place I would go with @ejgallego's answer, since SSReflect is now (starting from Coq 8.7.0) a part of standard Coq, but here is another option:
Ltac dumb_apply_in f H := generalize (f H); clear H; intros H.
Tactic Notation "dumb" "apply" constr(f) "in" hyp(H) := dumb_apply_in f H.
A simple test:
Variable (P Q : Prop).
Axiom u : (P -> Q) -> (~Q -> ~P).
Lemma test (H : P -> Q) : False.
Proof. dumb apply u in H. Abort.
Suppose I have two functions f and g and I know f = g. Is there a forward reasoning 'function application' tactic that will allow me to add f a = g a to the context for some a in their common domain? In this contrived example, I could use assert (f a = g a) followed by f_equal. But I want to do something like this in more complex situations; e.g.,
Lemma fapp : forall (A B : Type) (P Q : A -> B) (a : A),
(fun (a : A) => P a) = (fun (a : A) => Q a) ->
P a = Q a.
I'm not sure I can correctly infer the general problem you have from your description and example.
If you already know H : f = g, you can rewrite with H wherever you want to show something about f and g, or just use elim H to rewrite everything at once. You don't need to assert a helper theorem, and if you do, you'll obviously need something like assert or pose proof.
If that equality is hidden underneath some eta-expansion, like in your example, remove that layer and then proceed as above. Here are two (out of many) possible ways of doing that:
intros A B P Q a H. assert (P = Q) as H0 by apply H. rewrite H0; reflexivity.
This solves your example proof by asserting the equality and then rewriting. Another possibility is to define eta-reduction helpers (I haven't found predefined ones) and use those. That will be more verbose, but might work in more complex cases.
If you define
Lemma eta_reduce : forall (A B : Type) (f : A -> B),
(fun x => f x) = f.
intros. reflexivity.
Defined.
Tactic Notation "eta" constr(f) "in" ident(H) :=
pattern (fun x => f x) in H;
rewrite -> eta_reduce in H.
you can do the following:
intros A B P Q a H. eta P in H. eta Q in H. rewrite H; reflexivity.
(That notation is a bit of a loose cannon and might rewrite in the wrong places. Don't rely on it and in case of anomalies do the pattern and rewrite manually.)
I don't have a lot of experience with Coq or its tactics, but why not just use an auxiliary theorem?
Theorem fapp': forall (t0 t1: Type) (f0 f1: t0 -> t1),
f0 = f1 -> forall (x0: t0), f0 x0 = f1 x0.
Proof.
intros.
rewrite H.
trivial.
Qed.
Lemma fapp : forall (A B : Type) (P Q : A -> B) (a : A),
(fun (a : A) => P a) = (fun (a : A) => Q a) ->
P a = Q a.
Proof.
intros.
apply fapp' with (x0 := a) in H.
trivial.
Qed.