Consider the following code:
Inductive Even : nat -> Prop :=
| EO : Even O
| ESS : forall n, Even n -> Even (S (S n)).
Fixpoint is_even_prop (n : nat) : Prop :=
match n with
| O => True
| S O => False
| S (S n) => is_even_prop n
end.
Theorem is_even_prop_correct : forall n, is_even_prop n -> Even n.
Admitted.
Example Even_5000 : Even 5000.
Proof.
apply is_even_prop_correct.
Time constructor. (* ~0.45 secs *)
Undo.
Time (constructor 1). (* ~0.25 secs *)
Undo.
(* The documentation for constructor says that "constructor 1"
should be the same thing as doing this: *)
Time (apply I). (* ~0 secs *)
Undo.
(* Apparently, if there's only one applicable constructor,
reflexivity falls back on constructor and consequently
takes as much time as that tactic: *)
Time reflexivity. (* Around ~0.45 secs also *)
Undo.
(* If we manually reduce before calling constructor things are
faster, if we use the right reduction strategy: *)
Time (cbv; constructor). (* ~0 secs *)
Undo.
Time (cbn; constructor). (* ~0.5 secs *)
Qed.
Theorem is_even_prop_correct_fast : forall n, is_even_prop n = True -> Even n.
Admitted.
Example Even_5000_fast : Even 5000.
Proof.
apply is_even_prop_correct_fast.
(* Everything here is essentially 0 secs: *)
Time constructor.
Undo.
Time reflexivity.
Undo.
Time (apply eq_refl). Qed.
I just wanted to see if you could do reflection in Prop rather than Set and stumbled upon this. My question is not how to do the reflection properly; I just want to know why constructor is so slow in the first case compared to the second. (Maybe it has something to do with the fact that constructor can immediately see, without any reductions, that the constructor must be eq_refl in the second case? But it must still reduce afterwards...)
Also, while trying to figure out what constructor is doing I noticed that the documentation does not say which reduction strategy the tactic will use. Is this omission intentional, the idea being that you should explicitly say which reduction strategy you want if you want one in particular (and otherwise the implementation is free to pick any)?
Short answer: It spends its time trying to figure out what inductive family your goal is a part of (twice, in the case of constructor), using hnf.
Longer answer: Doing a bit of source-diving, it looks like constructor calls Tactics.any_constructor, while constructor 1 calls Tactics.constructor_tac. Tactics.any_constructor in turn calls Tacmach.New.pf_apply Tacred.reduce_to_quantified_ind to determine the inductive type (so it can count the constructors), and then calls Tactics.constructor_tac on each possible constructor in turn. Since True has only one constructor, it is suggestive that the time for constructor is about double the time for constructor 1; I'm guessing that the time is therefore spent in reduce_to_quantified_ind.
Tacred.reduce_to_quantified_ind, in turn, calls reduce_to_ind_gen, which, in turn, calls hnf_constr. And, indeed, it looks like Time hnf and Time (constructor 1) take about the same amount of time. Furthermore, Time constructor is instant after a manual hnf. I'm not sure what strategy hnf uses internally.
The documentation omission is almost certainly not deliberate (at least, whatever the current strategy is should appear in a footnote, I think, so feel free to report a bug), but it's not clear to me that the reduction strategy used by constructor to determine what inductive family your goal is a part of should be part of the specification of constructor.
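To see the claim about hnf concretely, here is a small follow-up experiment against the definitions from the question (a sketch; the timings are of course machine-dependent):
Example Even_5000_hnf : Even 5000.
Proof.
  apply is_even_prop_correct.
  (* Pay the reduction cost once, explicitly; this should take roughly
     as long as constructor 1 did above. *)
  Time hnf.
  (* The goal is now True, so picking the constructor is immediate. *)
  Time constructor. (* ~0 secs *)
Qed.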
I need to prove in Coq that for any type X and any proposition P (though I think it should work even if P is a type) there exists
trunc_impl: || P-> X || -> (P-> ||X||)
where ||_|| is the symbol used in HoTT book to indicate propositional truncation.
I proved the statement on paper in type theory: one gets the result by using the induction principle of propositional truncation, assuming from an H : || P -> X || and a p : P that H = |H'|, with H' : P -> X, and then defining trunc_impl(p) := |H'(p)|.
(|-| indicates the constructor for the truncation, i.e. |_| : A -> ||A||).
However, I cannot manage to write it in Coq!
Any help would be much appreciated.
I am using the HoTT library available on GitHub.
You need to Require Import Basics. since Coq doesn't know Trunc.TruncType can be coerced to Type otherwise. The main tactic you want to be aware of is apply Trunc_ind, which will act on a goal like forall (x : Tr _ _), _.
intros x y and revert x will come in handy to get the goal into a form you can apply Trunc_ind to.
You also have the (custom) tactic strip_truncations, which will search the context for any terms that are wrapped up in a truncation and try to do induction on them to remove them. This requires the goal to be truncated as well, but that shouldn't be a problem here.
Finally, the constructor for truncations is tr, so you can use apply tr there.
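Putting those pieces together, here is a minimal sketch of trunc_impl (assuming the HoTT library's merely, tr and strip_truncations; the exact Require line may need adjusting for your version of the library):
From HoTT Require Import Basics Truncations.

Definition trunc_impl (P X : Type)
  : merely (P -> X) -> (P -> merely X).
Proof.
  intros H p.
  (* The goal merely X is itself (-1)-truncated, so we may strip the
     truncation around H : merely (P -> X). *)
  strip_truncations.
  (* Now H : P -> X, and tr wraps the result back into the truncation. *)
  exact (tr (H p)).
Defined.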
Playing with the nostutter exercises I found another odd behaviour. Here is the code:
Inductive nostutter {X:Type} : list X -> Prop :=
| ns_nil : nostutter []
| ns_one : forall (x : X), nostutter [x]
| ns_cons: forall (x : X) (h : X) (t : list X), nostutter (h::t) -> x <> h -> nostutter (x::h::t).
Example test_nostutter_manual: not (nostutter [3;1;1;4]).
Proof.
intro.
inversion_clear H.
inversion_clear H0.
unfold not in H2.
(* We are here *)
specialize (H2 eq_refl).
apply H2.
Qed.
Status after unfold is this:
1 subgoal (ID 229)
H1 : 3 <> 1
H : nostutter [1; 4]
H2 : 1 = 1 -> False
============================
False
When I run specialize (H2 eq_refl). inside IndProp.v, which loads other Logical Foundations files, it works. Somehow it understands that it needs to put "1" as a parameter. The header of IndProp.v is this:
Set Warnings "-notation-overridden,-parsing".
From LF Require Export Logic.
Require Import String.
Require Coq.omega.Omega.
When I move the code into another file, "nostutter.v", the same code gives an error:
The term "eq_refl" has type "RelationClasses.Reflexive Logic.eq" while
it is expected to have type "1 = 1".
Header of nostutter.v:
Set Warnings "-notation-overridden,-parsing".
Require Import List.
Import ListNotations.
Require Import PeanoNat.
Import Nat.
Local Open Scope nat_scope.
I have to explicitly add a parameter to eq_refl: specialize (H2 (eq_refl 1)).
I think this is not related specifically to specialize. What is going on, and how do I fix it?
The problem is importing PeanoNat.Nat.
When you import PeanoNat, the module Nat inside it comes into scope, so importing Nat brings in PeanoNat.Nat. If you meant to import Coq.Init.Nat, you'll either have to import it before importing PeanoNat, or import it with Import Init.Nat..
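For example, assuming you really wanted the definitions from Coq.Init.Nat, a header along these lines avoids the shadowing (a sketch of the fix, not the only option):
Set Warnings "-notation-overridden,-parsing".
Require Import List.
Import ListNotations.
Require Import PeanoNat.
Import Init.Nat. (* rather than Import Nat., which brings in PeanoNat.Nat and shadows eq_refl *)
Local Open Scope nat_scope.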
Why does importing PeanoNat.Nat cause trouble in this case?
Arith/PeanoNat.v (static link) contains the module Nat. Inside that module, we find the unusual-looking line
Include NBasicProp <+ UsualMinMaxLogicalProperties <+ UsualMinMaxDecProperties.
All this means is that each of NBasicProp, UsualMinMaxLogicalProperties and UsualMinMaxDecProperties is included, which in turn means that everything defined in those modules is included in the current module. Separating this line out into three Include commands, we can figure out which one is redefining eq_refl. It turns out to be NBasicProp, which is found in this file (static link). We're not quite there yet: the redefinition of eq_refl isn't here. However, we see the definition of NBasicProp in terms of NMaxMinProp.
This leads us to NMaxMin.v, which in turn leads us to NSub.v, which leads us to NMulOrder.v, which leads us to NAddOrder.v, which leads us to NOrder.v, which leads us to NAdd.v, which leads us to NBase.v, ...
I'll cut to the chase here. Eventually we end up in Structures/Equality.v (static link) with the module BackportEq which finally gives us our redefinition of eq_refl.
Module BackportEq (E:Eq)(F:IsEq E) <: IsEqOrig E.
Definition eq_refl := @Equivalence_Reflexive _ _ F.eq_equiv.
Definition eq_sym := @Equivalence_Symmetric _ _ F.eq_equiv.
Definition eq_trans := @Equivalence_Transitive _ _ F.eq_equiv.
End BackportEq.
The way this is defined, eq_refl (without any arguments) has type Reflexive eq, where Reflexive is the class
Class Reflexive (R : relation A) :=
reflexivity : forall x : A, R x x.
(found in Classes/RelationClasses.v)
So that means that we'll always need to supply an extra argument to get something of type x = x. There are no implicit arguments defined here.
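You can watch the shadowing happen with Check after the problematic imports (a small sketch; the exact printed forms may differ slightly between Coq versions):
Require Import PeanoNat.
Import Nat.
Check eq_refl.      (* eq_refl : Reflexive eq -- the redefinition from BackportEq *)
Check (eq_refl 1).  (* eq_refl 1 : 1 = 1 -- the extra argument is now required *)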
Why is importing modules like PeanoNat.Nat generally a bad idea?
If the wild goose chase above wasn't convincing enough, let me just say that modules like this one, which extend and import other modules and module types, are often not meant to be imported. They often have short names (like N, Z or Nat) so any theorem you want to use from them is easily accessible without having to type out a long name. They usually have a long chain of imports and thus contain a vast number of items. If you import them, now that vast number of items is polluting your global namespace. As you saw with eq_refl, that can cause unexpected behavior with what you thought was a familiar constant.
Most of the modules encountered in this adventure are of the "module type/functor" variety. Suffice to say, they're difficult to understand fully, but a short guide can be found here.
My sleuthing was done by opening files in CoqIDE and running the command Locate eq_refl. (or better yet, ctrl+shift+L) after anything that might import from elsewhere. Locate can also tell you where a constant was imported from. I wish there were an easier way to see the path of imports in module types, but I don't think there is one. You could guess that we'd end up in Coq.Classes.RelationClasses based on the type of the overwritten eq_refl, but that isn't as precise.
I state the following goal in HOL4:
set_goal([``A:bool``,``B:bool``], ``B:bool``);
resulting in the proof state
val it =
Proof manager status: 1 proof.
1. Incomplete goalstack:
Initial goal:
B
------------------------------------
0. B
1. A
: proofs
I tried to find a proper tactic for using the assumptions. I came up with ASM_MESON_TAC:
e (mesonLib.ASM_MESON_TAC [])
and it proved the goal:
OK..
Meson search level: ..
val it =
Initial goal proved.
[..] ⊢ B: proof
Is this the standard tactic in such a situation? Or, is there a simpler one?
e (FIRST_ASSUM ACCEPT_TAC)
does it.
FIRST_ASSUM applies the argument theorem tactic on assumptions until success.
ACCEPT_TAC simply proves a goal when the theorem we supply matches it.
ACCEPT_TAC: thm -> tactic
FIRST_ASSUM: (thm -> tactic) -> tactic
(thanks to somebody on #hol)
I was looking at IndProp and I saw:
Fail Inductive wrong_ev (n : nat) : Prop :=
| wrong_ev_0 : wrong_ev 0
| wrong_ev_SS : ∀ n, wrong_ev n → wrong_ev (S (S n)).
(* ===> Error: A parameter of an inductive type n is not
allowed to be used as a bound variable in the type
of its constructor. *)
except that it seems to behave exactly as if it were taking an argument, yet it throws an error. Why is this?
The text provides some explanation, but I don't understand it. Specifically, the part I don't understand is where it says:
it is allowed to take different values in the types
why is it saying "in the types"? Types are NOT the input, values are. Why is it saying this? It seems extremely confusing. I know (extremely vaguely) that there is such a thing as "dependent types", but is that what it's referring to? Shouldn't it be arguments? Don't constructors take values or "stuff" and return an object of some type?
Why does the signature of the Inductive type (which I really just view as a function that builds things and returns objects of some type) seem to be missing its arguments?
More context from text where explanation seems to appear:
This definition is different in one crucial respect from previous uses of Inductive: its result is not a Type, but rather a function from nat to Prop — that is, a property of numbers. Note that we've already seen other inductive definitions that result in functions, such as list, whose type is Type → Type. What is new here is that, because the nat argument of ev appears unnamed, to the right of the colon, it is allowed to take different values in the types of different constructors: 0 in the type of ev_0 and S (S n) in the type of ev_SS.
In contrast, the definition of list names the X parameter globally, to the left of the colon, forcing the result of nil and cons to be the same (list X). Had we tried to bring nat to the left in defining ev, we would have seen an error ... We can think of the definition of ev as defining a Coq property ev : nat → Prop, together with primitive theorems ev_0 : ev 0 and ev_SS : ∀n, ev n → ev (S (S n)).
Such "constructor theorems" have the same status as proven theorems.
why is it saying "in the types"? Types are NOT the input, values are
You need to read the whole expression: "in the types of different constructors".
And, indeed, the natural number is different in the return type of the two constructors:
It is 0 for ev_0
And it is S (S n) for ev_SS
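For contrast, here is the ev definition the text is describing, with the nat written to the right of the colon as an index, so that each constructor's return type can pick a different value for it:
Inductive ev : nat -> Prop :=
| ev_0 : ev 0                              (* the index is 0 here *)
| ev_SS : forall n, ev n -> ev (S (S n)).  (* and S (S n) here *)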
I wonder if there is a way to obtain the proof term (serialized via Print or not) at some level beyond the current context (or just down to primitives). For example, executing the following
From mathcomp Require Import odd_order.PFsection14.
Print Feit_Thompson.
results in
Feit_Thompson =
fun (gT : fingroup.FinGroup.type)
(G : fingroup.group_of (gT:=gT)
(ssreflect.Phant
(fingroup.FinGroup.arg_sort
(fingroup.FinGroup.base gT)))) =>
BGsection7.minSimpleOdd_ind no_minSimple_odd_group (gT:=gT)
(G:=G)
: forall (gT : fingroup.FinGroup.type)
(G : fingroup.group_of (gT:=gT)
(ssreflect.Phant
(fingroup.FinGroup.arg_sort
(fingroup.FinGroup.base gT)))),
is_true
(ssrnat.odd
(fintype.CardDef.card
(T:=fingroup.FinGroup.arg_finType
(fingroup.FinGroup.base gT))
(ssrbool.mem
(finset.SetDef.pred_of_set
(fingroup.gval G))))) ->
is_true (nilpotent.solvable (fingroup.gval G))
but I would like to unfold the identifiers (theorems and definitions) used in the proof term, such as no_minSimple_odd_group, to their own proof terms. I suspect that the opacity of theorems and lemmas might pose an obstacle here.
The naive solution I can think of is to recursively query each identifier via Print. A less naive (but less ideal, since it changes the language the proof term is expressed in) solution would be to go via program extraction.
I am not sure there is a direct way of doing that, but it wouldn't be too hard to implement; at this level, opacity is just a flag and can be bypassed.
However, I wonder what you want to achieve?
Note that most proof terms obtained that way are going to be just unmanageable; in particular, unfolding will quickly lead to a worse-than-exponential size blowup.