I need to prove in Coq that for any type X and any proposition P (though I think it should work even if P is a type) there exists
trunc_impl: || P-> X || -> (P-> ||X||)
where ||_|| is the symbol used in the HoTT book to indicate propositional truncation.
I proved the statement on paper in type theory: one obtains the result from the induction principle of propositional truncation, by assuming, given H : || P -> X || and p : P, that H = |H'| for some H' : P -> X, and then defining trunc_impl(p) := |H'(p)|.
(|_| indicates the constructor of the truncation, i.e. |_| : A -> ||A||.)
However, I cannot manage to write it in Coq!
Any help would be much appreciated.
I am using the HoTT library available on GitHub.
You need to Require Import Basics., since Coq doesn't know that Trunc.TruncType can be coerced to Type otherwise. The tactics you want to be aware of are the following. apply Trunc_ind will act on a goal like forall (x : Tr _ _), _.
intros x y and revert x will come in handy to get the goal into a form you can apply Trunc_ind to.
You also have the (custom) tactic strip_truncations, which will search the context for any terms that are wrapped in a truncation and try to do induction on them to remove them. This requires the goal itself to be (at least as) truncated, but that shouldn't be a problem here.
Finally, the constructor for truncations is tr, so you can use apply tr there.
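Putting those pieces together, here is a minimal sketch; the import line and the Trunc (-1) notation differ between versions of the HoTT library, so treat it as a guide rather than the definitive spelling:

(* Sketch only: module names and truncation notation vary across HoTT library versions. *)
From HoTT Require Import Basics Truncations.

Definition trunc_impl (P X : Type)
  : Trunc (-1) (P -> X) -> (P -> Trunc (-1) X).
Proof.
  intros H p.
  strip_truncations.   (* turns H : Trunc (-1) (P -> X) into H : P -> X *)
  exact (tr (H p)).    (* tr is the constructor |_| of the truncation *)
Defined.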
Let's suppose that I have the following proof state:
1 subgoal
P, Q : Prop
H : P -> Q
-------------------(1/1)
Q
and I know a way to prove P, how may I add an "inline proof" of it so that I can have it in my list of assumptions?
Another thing you can use is the pose proof tactic. This lets you actually supply the proof term and name it (pose proof (foo bar baz) as qux basically translates into let qux := foo bar baz in ... in the generated proof).
This can be neat for things like specialized versions of a lemma that you're going to use in multiple branches.
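For example, a small self-contained illustration (the lemma and the names here are made up):

Lemma example_pose (P Q : Prop) (HP : P) (H : P -> Q) : Q.
Proof.
  pose proof (H HP) as HQ.   (* adds HQ : Q to the context *)
  exact HQ.
Qed.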
The tactic assert does exactly that.
Using assert (<proposition>) breaks your goal into two subgoals: in the first you have to prove "<proposition>" with your current assumptions, and the second has your original goal as its objective, but with "<proposition>" added to the list of assumptions.
You may also specify its name with assert (<proposition>) as <name> or assert (<name>: <proposition>).
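For instance, on a goal like the one above, supposing you have some proof_of_P : P at hand (a hypothetical name):

Lemma example_assert (P Q : Prop) (proof_of_P : P) (H : P -> Q) : Q.
Proof.
  assert (P) as HP.     (* subgoal 1: P; subgoal 2: Q, with HP : P added *)
  - exact proof_of_P.
  - exact (H HP).
Qed.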
Another option is to use cut (<proposition>).
It also creates two subgoals: one of the form <proposition> -> <your objective> (then you can get your hypothesis by using intros, or intros <name> if you want to specify its name), and another one in which you have to prove "<proposition>" with your current assumptions.
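An illustrative use of cut on the same kind of goal (names are again hypothetical):

Lemma example_cut (P Q : Prop) (proof_of_P : P) (H : P -> Q) : Q.
Proof.
  cut P.
  - intros HP. exact (H HP).   (* first subgoal: P -> Q *)
  - exact proof_of_P.          (* second subgoal: P *)
Qed.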
If you use the SSReflect proof language, you can use the have proof_of_P: P. <write your proof here> construct.
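A sketch of the SSReflect variant, assuming ssreflect is available (it ships with recent Coq versions):

From Coq Require Import ssreflect.

Lemma example_have (P Q : Prop) (HP : P) (H : P -> Q) : Q.
Proof.
  have proof_of_P : P.       (* first prove P, then continue with it in context *)
  - exact HP.
  - exact (H proof_of_P).
Qed.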
I am struggling with the plus operator between two propositions (maybe types) in Coq. I have already figured out that it is something like "or" (maybe "xor"), and I think it says that something is decidable, but I cannot understand its complete meaning, or where this sign comes from (in classical mathematics).
P.S. Of course I have already googled and researched, but couldn't find the complete, sophisticated answer I want.
That's the sum datatype, where A + B is basically A or B. The main difference with A \/ B is that it lives in Type, so it has computational content. That is to say, given A \/ B you cannot produce a boolean that is true if A holds and false otherwise.
Another way to see it is that for A B : Prop, A + B -> A \/ B holds, but not the converse.
Prop is a special, impredicative sort in Coq; I recommend reading the manual about it.
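A small sketch illustrating the difference in computational content (the names sum_to_bool, or_to_bool and sum_to_or are only illustrative):

Definition sum_to_bool (A B : Type) (H : A + B) : bool :=
  match H with
  | inl _ => true
  | inr _ => false
  end.

(* The analogous definition over A \/ B is rejected: a proof living in Prop
   cannot be eliminated to build a boolean. *)
Fail Definition or_to_bool (A B : Prop) (H : A \/ B) : bool :=
  match H with
  | or_introl _ => true
  | or_intror _ => false
  end.

(* The direction mentioned above always holds. *)
Lemma sum_to_or (A B : Prop) : A + B -> A \/ B.
Proof. intros [a | b]; [left | right]; assumption. Qed.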
Playing with the nostutter exercises I found another odd behaviour. Here is the code:
Inductive nostutter {X:Type} : list X -> Prop :=
| ns_nil : nostutter []
| ns_one : forall (x : X), nostutter [x]
| ns_cons: forall (x : X) (h : X) (t : list X), nostutter (h::t) -> x <> h -> nostutter (x::h::t).
Example test_nostutter_manual: not (nostutter [3;1;1;4]).
Proof.
intro.
inversion_clear H.
inversion_clear H0.
unfold not in H2.
(* We are here *)
specialize (H2 eq_refl).
apply H2.
Qed.
Status after unfold is this:
1 subgoal (ID 229)
H1 : 3 <> 1
H : nostutter [1; 4]
H2 : 1 = 1 -> False
============================
False
When I run specialize (H2 eq_refl). inside IndProp.v, which loads other Logical Foundations files, it works. Somehow it understands that it needs to put "1" as a parameter. The header of IndProp.v is this:
Set Warnings "-notation-overridden,-parsing".
From LF Require Export Logic.
Require Import String.
Require Coq.omega.Omega.
When I move the code into another file, "nostutter.v", the same code gives an error:
The term "eq_refl" has type "RelationClasses.Reflexive Logic.eq" while
it is expected to have type "1 = 1".
Header of nostutter.v:
Set Warnings "-notation-overridden,-parsing".
Require Import List.
Import ListNotations.
Require Import PeanoNat.
Import Nat.
Local Open Scope nat_scope.
I have to explicitly add a parameter to eq_refl: specialize (H2 (eq_refl 1)).
I don't think this is specific to specialize. What is going on, and how do I fix it?
The problem is importing PeanoNat.Nat.
When you import PeanoNat, the module Nat comes into scope, so importing Nat brings in PeanoNat.Nat. If you meant to import Coq.Init.Nat, you'll either have to import it before importing PeanoNat, or import it explicitly with Import Init.Nat.
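A sketch of a nostutter.v header following that advice, with Init.Nat named explicitly instead of the ambiguous Import Nat (adapt the ordering to your own file):

Set Warnings "-notation-overridden,-parsing".
Require Import List.
Import ListNotations.
Require Import PeanoNat.
Import Init.Nat.   (* explicitly Coq.Init.Nat, not PeanoNat.Nat *)
Local Open Scope nat_scope.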
Why does importing PeanoNat.Nat cause trouble in this case?
Arith/PeanoNat.v contains the module Nat. Inside that module, we find the unusual-looking line
Include NBasicProp <+ UsualMinMaxLogicalProperties <+ UsualMinMaxDecProperties.
All this means is that each of NBasicProp, UsualMinMaxLogicalProperties and UsualMinMaxDecProperties is included, which in turn means that everything defined in those modules is included in the current module. Separating this line out into three Include commands, we can figure out which one is redefining eq_refl. It turns out to be NBasicProp, which is found in another file. We're not quite there yet: the redefinition of eq_refl isn't there either. However, we do see the definition of NBasicProp in terms of NMaxMinProp.
This leads us to NMaxMin.v, which in turn leads us to NSub.v, which leads us to NMulOrder.v, which leads us to NAddOrder.v, which leads us to NOrder.v, which leads us to NAdd.v, which leads us to NBase.v, ...
I'll cut to the chase here. Eventually we end up in Structures/Equality.v, with the module BackportEq, which finally gives us our redefinition of eq_refl.
Module BackportEq (E:Eq)(F:IsEq E) <: IsEqOrig E.
Definition eq_refl := @Equivalence_Reflexive _ _ F.eq_equiv.
Definition eq_sym := @Equivalence_Symmetric _ _ F.eq_equiv.
Definition eq_trans := @Equivalence_Transitive _ _ F.eq_equiv.
End BackportEq.
The way this is defined, eq_refl (without any arguments) has type Reflexive eq, where Reflexive is the class
Class Reflexive (R : relation A) :=
reflexivity : forall x : A, R x x.
(found in Classes/RelationClasses.v)
So that means that we'll always need to supply an extra argument to get something of type x = x. There are no implicit arguments defined here.
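For concreteness, a minimal reproduction sketch of that behaviour (only the two imports matter here):

Require Import PeanoNat.
Import Nat.

Goal 1 = 1.
Proof.
  Fail exact eq_refl.   (* this eq_refl has type Reflexive eq, not 1 = 1 *)
  exact (eq_refl 1).    (* supplying the argument explicitly works *)
Qed.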
Why is importing modules like PeanoNat.Nat generally a bad idea?
If the wild goose chase above wasn't convincing enough, let me just say that modules like this one, which extend and import other modules and module types, are often not meant to be imported. They often have short names (like N, Z or Nat) so any theorem you want to use from them is easily accessible without having to type out a long name. They usually have a long chain of imports and thus contain a vast number of items. If you import them, now that vast number of items is polluting your global namespace. As you saw with eq_refl, that can cause unexpected behavior with what you thought was a familiar constant.
Most of the modules encountered in this adventure are of the "module type/functor" variety. Suffice it to say, they're difficult to understand fully, but a short guide can be found here.
My sleuthing was done by opening files in CoqIDE and running the command Locate eq_refl. (or better yet, ctrl+shift+L) after anything that might import from elsewhere. Locate can also tell you where a constant was imported from. I wish there were an easier way to see the path of imports in module types, but I don't think there is one. You could guess that we'd end up in Coq.Classes.RelationClasses based on the type of the overwritten eq_refl, but that isn't as precise.
I am formalizing a grammar which is essentially one over boolean expressions. In Coq, you can get boolean-like things in Prop or, more explicitly, in bool.
So for example, I could write:
true && true
Or
True /\ True
The problem is that in proofs (which is what I really care about) I can do case analysis on bool, but in Prop this is not possible (since its members are not enumerable, I suppose). Giving up this tactic and similar rewriting tactics seems like a huge drawback even for very simple proofs.
In general, what situations would one choose Prop over bool for formalizing? I realize this is a broad question, but I feel like this is not addressed in the Coq manual sufficiently. I am interested in real world experience people have had going down both routes.
There are lots of different opinions on this. My personal take is that you are often better off not making this choice: it makes sense to have two versions of a property, one in Prop, the other one in bool.
Why would you want this? As you pointed out, booleans support case analysis in proofs and functions, which general propositions do not. However, Prop is more convenient to use in certain cases. Suppose you have a type T with finitely many values. We can write a procedure
all : (T -> bool) -> bool
that decides whether a boolean property P : T -> bool holds of all elements of T. Imagine that we know that all P = true, for some property P. We might want to use this fact to conclude that P x = true for some value x. To do this, we need to prove a lemma about all:
allP : forall P : T -> bool,
all P = true <-> (forall x : T, P x = true)
This lemma connects two different formulations of the same property: a boolean one and a propositional one. When reasoning about all in a proof, we can invoke allP to convert freely between the two. We can also have different conversion lemmas:
allPn : forall P,
all P = false <-> (exists x, P x = false)
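For concreteness, here is what the pattern looks like for a two-element type (the type T and the proofs below are purely illustrative):

Inductive T : Type := t1 | t2.

Definition all (P : T -> bool) : bool := andb (P t1) (P t2).

Lemma allP (P : T -> bool) :
  all P = true <-> (forall x : T, P x = true).
Proof.
  unfold all; split.
  - (* the boolean check implies the universal statement *)
    intros H x; apply andb_prop in H; destruct H, x; assumption.
  - (* and conversely *)
    intros H; apply andb_true_intro; split; apply H.
Qed.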
In fact, we are free to choose any Coq proposition whatsoever to relate to a boolean computation (as long, of course, as we can prove that the two are logically equivalent). For instance, if we would like to have a custom induction principle associated with a boolean property, we can look for an equivalent formulation as an inductively defined proposition.
The Mathematical Components library is a good example of a development that follows this style. Indeed, because it is so pervasive there, the library provides a special view mechanism for writing conversion lemmas like the one above and applying them. In plain Coq, we can also use the rewrite tactic to apply logical equivalences more conveniently.
Of course, there are many situations where it does not make sense to have two formulations of the same property. Sometimes, you are forced to use Prop, because the property you care about is undecidable. Sometimes, you might feel that you wouldn't gain anything by writing your property in Prop, and may keep it only as a boolean.
In addition to the Software Foundations chapter linked above, this answer discusses the difference between bool and Prop in more depth.
I'm still puzzled about what the sort Set means in Coq. When do I use Set and when do I use Type?
In HoTT, a set is defined as a type whose identity proofs are unique.
But I think in Coq it has a different interpretation.
Set means rather different things in Coq and HoTT.
In Coq, every object has a type, including types themselves. Types of types are usually referred to as sorts, kinds or universes. In Coq, the (computationally relevant) universes are Set, and Type_i, where i ranges over natural numbers (0, 1, 2, 3, ...). We have the following inclusions:
Set <= Type_0 <= Type_1 <= Type_2 <= ...
These universes are typed as follows:
Set : Type_i for any i
Type_i : Type_j for any i < j
As in HoTT, this stratification is needed to ensure logical consistency. As Antal pointed out, Set behaves mostly like the smallest Type, with one exception: it can be made impredicative when you invoke coqtop with the -impredicative-set option. Concretely, this means that forall X : Set, A is of type Set whenever A is. In contrast, forall X : Type_i, A is of type Type_(i + 1), even when A has type Type_i.
The reason for this difference is that, due to logical paradoxes, only the lowest level of such a hierarchy can be made impredicative. You may then wonder why Set is not made impredicative by default. This is because an impredicative Set is inconsistent with a strong form of the axiom of the excluded middle:
forall P : Prop, {P} + {~ P}.
What this axiom allows you to do is to write functions that can decide arbitrary propositions. Note that the {P} + {~ P} type lives in Set, and not Prop. The usual form of the excluded middle, forall P : Prop, P \/ ~ P, cannot be used in the same way, because things that live in Prop cannot be used in a computationally relevant way.
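A few quick checks of the typings described above, in a stock coqtop without -impredicative-set (the exact universe printing may vary):

Check Set.                        (* Set : Type *)
Check (nat -> nat).               (* nat -> nat : Set *)
Check (forall X : Set, X -> X).   (* : Type, not Set, since Set is predicative by default *)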
In addition to Arthur's answer:
From the fact that Set is located at the bottom of the hierarchy, it follows that Set is the type of the “small” datatypes and function types, i.e. the ones whose values do not directly or indirectly involve types.
That means the following will fail:
Fail Inductive Ts : Set :=
| constrS : Set -> Ts.
with this error message:
Large non-propositional inductive types must be in Type.
As the message suggests, we can amend it by using Type:
Inductive Tt : Type :=
| constrT : Set -> Tt.
Reference:
The Essence of Coq as a Formal System by B. Jacobs (2013).