I'm trying to understand canonical structures in ssreflect. Below are two pieces of code that I took from here, one for the bool type and one for the option type.
Section BoolFinType.
Lemma bool_enumP : Finite.axiom [:: true; false]. Proof. by case. Qed.
Definition bool_finMixin := Eval hnf in FinMixin bool_enumP.
Canonical bool_finType := Eval hnf in FinType bool bool_finMixin.
Lemma card_bool : #|{: bool}| = 2. Proof. by rewrite cardT enumT unlock. Qed.
End BoolFinType.
Section OptionFinType.
Variable T : finType.
Notation some := (@Some _) (only parsing).
Local Notation enumF T := (Finite.enum T).
Definition option_enum := None :: map some (enumF T).
Lemma option_enumP : Finite.axiom option_enum.
Proof. by case => [x|]; rewrite /= count_map (count_pred0, enumP). Qed.
Definition option_finMixin := Eval hnf in FinMixin option_enumP.
Canonical option_finType := Eval hnf in FinType (option T) option_finMixin.
Lemma card_option : #|{: option T}| = #|T|.+1.
Proof. by rewrite !cardT !enumT {1}unlock /= !size_map. Qed.
End OptionFinType.
Now, suppose I have a function f from finType to Prop.
Variable T: finType.
Variable f: finType -> Prop.
Goal f T. (* Ok *)
Goal f bool. (* Not ok *)
Goal f (option T). (* Not ok *)
In the last two cases I get the following error:
The term "bool/option T" has type "Set/Type" while it is expected to have type "finType".
What am I doing wrong?
The instance search for canonical structures is a bit counterintuitive in these cases. Suppose that you have the following things:
a structure type S, and a type T;
a field proj : S -> T of S;
an element x : T; and
an element st : S that has been declared as canonical, such that proj st is defined as x.
In your example, we would have:
S = finType
T = Type
proj = Finite.sort
x = bool
st = bool_finType.
Canonical structure search is triggered only in the following case: when the type-checking algorithm is trying to find a value to validly fill in the hole in the equation proj _ = x. Then, it will use st : S to fill in this hole. In your example, you expected the algorithm to understand that bool can be used as finType, by transforming it into bool_finType, which is not quite what is described above.
To make Coq infer what you want, you need to use a unification problem of that form. For instance,
Variable P : finType -> Prop.
Check ((fun (T : finType) (x : T) => P T) _ true).
What is going on here? Remember that Finite.sort is declared as a coercion from finType to Type, so x : T really means x : Finite.sort T. When you apply the fun expression to true : bool, Coq has to find a solution for Finite.sort _ = bool. It then finds bool_finType, because it was declared as canonical. So it is the element of bool that triggers the search, not bool itself.
As ejgallego pointed out, this pattern is so common that ssreflect provides the special [finType of ...] syntax. But it might still be useful to understand what is going on under the hood.
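For concreteness, here is a small sketch of that syntax in action (assuming a classic, pre-Hierarchy-Builder MathComp version, where [finType of ...] is available):

From mathcomp Require Import all_ssreflect.

Section Example.
Variable T : finType.
Variable P : finType -> Prop.

(* [finType of X] triggers the canonical structure search explicitly,
   producing the finType instance whose sort is X. *)
Goal P [finType of bool]. Abort.
Goal P [finType of option T]. Abort.
End Example.

Both goals are accepted, because the notation performs the Finite.sort _ = bool (resp. option T) unification up front instead of relying on an element of the type to trigger it.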
The following code claims to define finite sets in Coq with extensional equality:
(** A library for finite sets with extensional equality.
Author: Brian Aydemir. *)
Require Import FSets.
Require Import ListFacts.
Require Import Coq.Logic.ProofIrrelevance.
(* *********************************************************************** *)
(** * Interface *)
(** The following interface wraps the standard library's finite set
interface with an additional property: extensional equality. *)
Module Type S.
Declare Module E : UsualOrderedType.
Declare Module F : FSetInterface.S with Module E := E.
Parameter eq_if_Equal :
forall s s' : F.t, F.Equal s s' -> s = s'.
End S.
(* *********************************************************************** *)
(** * Implementation *)
(** For documentation purposes, we hide the implementation of a
functor implementing the above interface. We note only that the
implementation here assumes (as an axiom) that proof irrelevance
holds. *)
Module Make (X : UsualOrderedType) <: S with Module E := X.
(* begin hide *)
Module E := X.
Module F := FSetList.Make E.
Module OFacts := OrderedType.OrderedTypeFacts E.
Lemma eq_if_Equal :
forall s s' : F.t, F.Equal s s' -> s = s'.
Proof.
intros [s1 pf1] [s2 pf2] Eq.
assert (s1 = s2).
unfold F.MSet.Raw.t in *.
eapply Sort_InA_eq_ext; eauto.
intros; eapply E.lt_trans; eauto.
1 : {
apply F.MSet.Raw.isok_iff.
auto.
}
1 : {
apply F.MSet.Raw.isok_iff.
auto.
}
subst s1.
assert (pf1 = pf2).
apply proof_irrelevance.
subst pf2.
reflexivity.
Qed.
(* end hide *)
End Make.
How can I define a function from finite sets to finite sets using this module?
You need to define a Module (call it M) that implements the UsualOrderedType module type for the type you want to make finite sets out of, and then build another Module with Make M which contains an implementation of finite sets for your type.
Module M <: UsualOrderedType.
...
End M.
Module foo := Make M.
Check foo.F.singleton.
Note that you need to declare the module type with <: instead of just :, otherwise you are hiding the fact that the module is defined for (in the example below) nat behind an opaque type t.
Say you want to make finite sets of nats:
(* Print the module type to see all the things you need to define. *)
Print Module Type UsualOrderedType.
Require Import PeanoNat.
Module NatOrdered <: UsualOrderedType . (* note the `<:` *)
Definition t:=nat.
Definition eq:=@eq nat.
Definition lt:=lt.
Definition eq_refl:=@eq_refl nat.
Definition eq_sym:=@eq_sym nat.
Definition eq_trans:=@eq_trans nat.
Definition lt_trans:=Nat.lt_trans.
(* I wrote Admitted where I didn't provide an implementation *)
Definition lt_not_eq : forall x y : t, lt x y -> ~ eq x y. Admitted.
Definition compare : forall x y : t, Compare lt eq x y. Admitted.
Definition eq_dec : forall x y : t, {eq x y} + {~ eq x y}. Admitted.
End NatOrdered.
Now you can create a module that uses this ordered type.
Module foo := Make NatOrdered.
Print foo.F. (* to see everything that is defined for the FSetList *)
Import foo. (* get F in our namespace so we can say F.t instead of foo.F.t, etc. *)
Now we can use our F module. The finite sets have type F.t and the elements have type F.elt, which unfolds to nat since we built the module from NatOrdered.
Let's build a function that uses stuff from F.
Definition f: F.elt -> F.t.
intros x. apply (F.singleton x).
Defined.
Print F.
Goal F.cardinal (F.union (f 1) (f 2)) = 2.
compute.
Ok. That gets stuck halfway through the computation because I didn't implement compare above. I was lazy and just wrote Admitted. But you can do it! :-)
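In case it helps, here is one hypothetical way to fill in the three Admitted definitions inside NatOrdered, using standard PeanoNat lemmas (a sketch I haven't checked against every FSets version); with these in place, the Goal above should reduce instead of getting stuck:

Definition lt_not_eq : forall x y : t, lt x y -> ~ eq x y := Nat.lt_neq.

Definition compare : forall x y : t, Compare lt eq x y.
Proof.
  intros x y.
  destruct (Nat.compare x y) eqn:E.
  - apply EQ; exact (proj1 (Nat.compare_eq_iff x y) E).
  - apply LT; exact (proj1 (Nat.compare_lt_iff x y) E).
  - apply GT; exact (proj1 (Nat.compare_gt_iff x y) E).
Defined. (* Defined, not Qed, so that F.cardinal can compute through it *)

Definition eq_dec : forall x y : t, {eq x y} + {~ eq x y} := Nat.eq_dec.

The important design point is ending compare with Defined rather than Qed: an opaque (or Admitted) compare blocks reduction, which is exactly why the compute above got stuck.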
I can define the following inductive type:
Inductive T : Type -> Type :=
| c1 : forall (A : Type), A -> T A
| c2 : T unit.
But then the command Check (c1 (T nat)) fails with the message: The term T nat has type Type@{max(Set, Top.3+1)} while it is expected to have type Type@{Top.3} (universe inconsistency).
How can I tweak the above inductive definition so that c1 (T nat) does not cause a universe inconsistency, and without setting universe polymorphism on?
The following works, but I would prefer a solution without adding equality:
Inductive T (A : Type) : Type :=
| c1 : A -> T A
| c2' : A = unit -> T A.
Definition c2 : T unit := c2' unit eq_refl.
Check (c1 (T nat)).
(*
c1 (T nat)
: T nat -> T (T nat)
*)
Let me first answer the question of why we get the universe inconsistency in the first place.
Universe inconsistencies are the errors that Coq reports to avoid proofs of False via Russell's paradox, which results from considering the set of all sets which do not contain themselves.
There's a variant which is more convenient to formalize in type theory, called Hurkens' paradox; see Coq.Logic.Hurkens for more details. There is a specialization of Hurkens' paradox which proves that no type can retract to a smaller type. That is, given
U := Type@{u}
A : U
down : U -> A
up : A -> U
up_down : forall (X:U), up (down X) = X
we can prove False.
This is almost exactly the setup of your Inductive type. Annotating your type with universes, you start with
Inductive T : Type@{i} -> Type@{j} :=
| c1 : forall (A : Type@{i}), A -> T A
| c2 : T unit.
Note that we can invert this inductive; we may write
Definition c1' (A : Type@{i}) (v : T A) : A
:= match v with
| c1 A x => x
| c2 => tt
end.
Lemma c1'_c1 (A : Type@{i}) : forall v, c1' A (c1 A v) = v.
Proof. reflexivity. Qed.
Suppose, for a moment, that c1 (T nat) typechecked. Since T nat : Type@{j}, this would require j <= i. If it gave us that j < i, then we would be set. We could write c1 Type@{j}. And this is exactly the setup for the variant of Hurkens' paradox that I mentioned above! We could define
u = j
U := Type@{j}
A := T Type@{j}
down : U -> A := c1 Type@{j}
up : A -> U := c1' Type@{j}
up_down := c1'_c1 Type@{j}
and hence prove False.
Coq needs to implement a rule for avoiding this paradox. As described here, the rule is that for each (non-parameter) argument to a constructor of an inductive, if the type of the argument has a sort in universe u, then the universe of the inductive is constrained to be >= u. In this case, this is stricter than Coq needs to be. As mentioned by SkySkimmer here, Coq could recognize arguments which appear directly in locations which are indices of the inductive, and disregard those in the same way that it disregards parameters.
So, to finally answer your question, I believe the following are your only options:
You can Set Universe Polymorphism so that in T (T nat), your two Ts take different universe arguments. (Equivalently, you can write Polymorphic Inductive.) A small sketch of this option follows after this list.
You can take advantage of how Coq treats parameters of inductive types specially, which mandates using equality in your case. (The requirement of using equality is a general property of going from indexed inductive types to parameterized inductive types, i.e., of moving arguments from after the : to before it.)
You can pass Coq the flag -type-in-type to entirely disable universe checking.
You can fix bug #7929, which I reported as part of digging into this question, to make Coq handle arguments of constructors which appear in index-position in the inductive in the same way it handles parameters of inductive types.
(You can find another edge case of the system, and manage to trick Coq into ignoring the universes you want to slip past it, and probably find a proof of False in the process. (Possibly involving module subtyping, see, e.g., this recent bug in modules with universes.))
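To illustrate option 1, here is a minimal sketch (in a fresh file) of the universe-polymorphic version; each occurrence of T then receives its own universe instance, so the offending application is accepted:

Polymorphic Inductive T : Type -> Type :=
| c1 : forall (A : Type), A -> T A
| c2 : T unit.

Check (c1 (T nat)).
(* c1 (T nat) : T nat -> T (T nat), with no universe inconsistency *)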
I'm currently trying to write a tactic that instantiates an existential quantifier using a term that can be generated easily (in this specific example, from tauto). My first attempt:
Ltac mytac :=
match goal with
| |- (exists (_ : ?X), _) => cut X;
[ let t := fresh "t" in intro t ; exists t; firstorder
| tauto ]
end.
This tactic will work on a simple problem like
Lemma obv1(X : Set) : exists f : X -> X, f = f.
mytac.
Qed.
However it won't work on a goal like
Lemma obv2(X : Set) : exists f : X -> X, forall x, f x = x.
mytac. (* goal becomes t x = x for arbitrary t,x *)
Here I would like to use this tactic, trusting that the f which tauto finds will be just fun x => x, thus subbing in the specific proof (which should be the identity function) and not just the generic t from my current script. How might I go about writing such a tactic?
It's much more common to create an existential variable and let some tactic (eauto or tauto for example) instantiate the variable by unification.
On the other hand, you can also literally use a tactic to provide the witness using tactics in terms:
Ltac mytac :=
match goal with
| [ |- exists (_:?T), _ ] =>
exists (ltac:(tauto) : T)
end.
Lemma obv1(X : Set) : exists f : X -> X, f = f.
Proof.
mytac.
auto.
Qed.
You need the type ascription : T so that the tactic-in-term ltac:(tauto) has the right goal (the type the exists expects).
I'm not sure this is all that useful (usually the type of the witness isn't very informative and you want to use the rest of the goal to choose it), but it's cool that you can do this nonetheless.
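For what it's worth, this tactic also handles the second example from the question, assuming (as the question hopes) that tauto builds a witness convertible to the identity function, so the leftover goal closes by computation:

Lemma obv2 (X : Set) : exists f : X -> X, forall x, f x = x.
Proof.
  mytac.                 (* tauto provides the witness of type X -> X *)
  intros x; reflexivity. (* the witness is convertible to fun x => x *)
Qed.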
You can use eexists to introduce an existential variable, and let tauto instantiate it.
This gives the following simple code.
Lemma obv2(X : Set) : exists f : X -> X, forall x, f x = x.
eexists; tauto.
Qed.
I am learning Coq and I'd like to use it to formalize the theory of regular languages, especially finite automata. Let's say I have a structure for an automaton as follows:
Record automata : Type := {
dfa_set_states : list state;
init_state : state;
end_state : state;
dfa_func: state -> terminal -> state;
}.
Where state is an inductive type defined as:
Inductive state:Type :=
S.
And the type terminal is
Inductive terminal:Type :=
a | b.
I am trying to define it so that later I'll be able to generalize the definition to any regular language. For now, I want to construct an automaton which recognizes the language a*b* (any number of a's followed by any number of b's) over the alphabet {a,b}. Does anyone have an idea on how to build some kind of fixpoint function that will run over a word (which I see as a list of terminal) and tell me whether the automaton recognizes that word or not? Any idea/help will be greatly appreciated.
Thanks in advance,
Erick.
Because you're restricting yourself to regular languages, this is quite simple: you just have to use a fold. Here is a sample:
Require Import Coq.Lists.List.
Import ListNotations.
Set Implicit Arguments.
Unset Strict Implicit.
Unset Printing Implicit Defensive.
Record dfa (S A : Type) := DFA {
initial_state : S;
is_final : S -> bool;
next : S -> A -> S
}.
Definition run_dfa S A (m : dfa S A) (l : list A) : bool :=
is_final m (fold_left (next m) l (initial_state m)).
This snippet is a little bit different from your original definition in that the state and alphabet components are now type parameters of the DFA, and in that I have replaced the end state with a predicate that answers whether we are in an accepting state or not. The run_dfa function simply iterates the transition function of the DFA starting from the initial state, and then tests whether the last state is an accepting state.
You can use this infrastructure to describe pretty much any regular language. For instance, here is an automaton for recognizing a*b*:
Inductive ab := A | B.
Inductive ab_state : Type :=
ReadA | ReadB | Fail.
Definition ab_dfa : dfa ab_state ab := {|
initial_state := ReadA;
is_final s := match s with Fail => false | _ => true end;
next s x :=
match s, x with
| ReadB, A => Fail
| ReadA, B => ReadB
| _, _ => s
end
|}.
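Before proving anything, we can sanity-check the automaton by computation (a small usage example with the definitions above):

Compute run_dfa ab_dfa [A; A; B; B]. (* = true  : bool *)
Compute run_dfa ab_dfa [B; A].       (* = false : bool *)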
We can prove that this automaton does what we expect. Here is a theorem that says that it accepts strings of the sought language:
Lemma ab_dfa_complete n m : run_dfa ab_dfa (repeat A n ++ repeat B m) = true.
Proof.
unfold run_dfa. rewrite fold_left_app.
assert (fold_left (next ab_dfa) (repeat A n) (initial_state ab_dfa) = ReadA) as ->.
{ now simpl; induction n as [| n IH]; simpl; trivial. }
destruct m as [|m]; simpl; trivial.
induction m as [|m IH]; simpl; trivial.
Qed.
We can also state a converse, that says that it accepts only strings of that language, and nothing else. I have left the proof out; it shouldn't be hard to figure it out.
Lemma ab_dfa_sound l :
run_dfa ab_dfa l = true ->
exists n m, l = repeat A n ++ repeat B m.
Unfortunately, there is not much we can do with this representation besides running the automaton. In particular, we cannot minimize an automaton, test whether two automata are equivalent, etc. Such functions would also need to take as arguments lists that enumerate all elements of the state and alphabet types S and A.
I have been struggling on this for a while now. I have an inductive type:
Definition char := nat.
Definition string := list char.
Inductive Exp : Set :=
| Lit : char -> Exp
| And : Exp -> Exp -> Exp
| Or : Exp -> Exp -> Exp
| Many: Exp -> Exp
from which I define a family of types inductively:
Inductive Language : Exp -> Set :=
| LangLit : forall c:char, Language (Lit c)
| LangAnd : forall r1 r2: Exp, Language(r1) -> Language(r2) -> Language(And r1 r2)
| LangOrLeft : forall r1 r2: Exp, Language(r1) -> Language(Or r1 r2)
| LangOrRight : forall r1 r2: Exp, Language(r2) -> Language(Or r1 r2)
| LangEmpty : forall r: Exp, Language (Many r)
| LangMany : forall r: Exp, Language (Many r) -> Language r -> Language (Many r).
The rationale here is that given a regular expression r:Exp I am attempting to represent the language associated with r as a type Language r, and I am doing so with a single inductive definition.
I would like to prove:
Lemma L1 : forall (c:char)(x:Language (Lit c)),
x = LangLit c.
(In other words, the type Language (Lit c) has only one element, i.e. the language of the regular expression 'c' is made of the single string "c". Of course I need to define some semantics converting elements of Language r to string)
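(For instance, that semantics could be something like the following sketch, which maps a derivation in Language r to the string it denotes; the name lang_to_string is mine.)

Fixpoint lang_to_string (r : Exp) (l : Language r) : string :=
  match l with
  | LangLit c => c :: nil
  | LangAnd r1 r2 l1 l2 => lang_to_string r1 l1 ++ lang_to_string r2 l2
  | LangOrLeft r1 r2 l1 => lang_to_string r1 l1
  | LangOrRight r1 r2 l2 => lang_to_string r2 l2
  | LangEmpty r' => nil
  | LangMany r' l1 l2 => lang_to_string (Many r') l1 ++ lang_to_string r' l2
  end.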
Now the specifics of this problem are not important and simply serve to motivate my question: let us use nat instead of Exp and let us define a type List n which represents the lists of length n:
Parameter A:Set.
Inductive List : nat -> Set :=
| ListNil : List 0
| ListCons : forall (n:nat), A -> List n -> List (S n).
Here again I am using a single inductive definition to define a family of types List n.
I would like to prove:
Lemma L2: forall (x: List 0),
x = ListNil.
(in other words, the type List 0 has only one element).
I have run out of ideas on this one.
Normally when attempting to prove (negative) results with inductive types (or predicates), I would use the elim tactic (having made sure all the relevant hypothesis are inside my goal (generalize) and only variables occur in the type constructors). But elim is no good in this case.
If you are willing to accept more than just the basic logic of Coq, you can just use the dependent destruction tactic, available in the Program library (I've taken the liberty of rephrasing your last example in terms of standard-library vectors):
Require Coq.Vectors.Vector.
Require Import Program.
Lemma l0 A (v : Vector.t A 0) : v = @Vector.nil A.
Proof.
now dependent destruction v.
Qed.
If you inspect the term, you'll see that this tactic relied on the JMeq_eq axiom to get the proof to go through:
Print Assumptions l0.
Axioms:
JMeq_eq : forall (A : Type) (x y : A), x ~= y -> x = y
Fortunately, it is possible to prove l0 without having to resort to features outside of Coq's basic logic, by making a small change to the statement of the previous lemma.
Lemma l0_gen A n (v : Vector.t A n) :
match n return Vector.t A n -> Prop with
| 0 => fun v => v = @Vector.nil A
| _ => fun _ => True
end v.
Proof.
now destruct v.
Qed.
Lemma l0' A (v : Vector.t A 0) : v = @Vector.nil A.
Proof.
exact (l0_gen A 0 v).
Qed.
We can see that this new proof does not require any additional axioms:
Print Assumptions l0'.
Closed under the global context
What happened here? The problem, roughly speaking, is that in Coq we cannot perform case analysis on terms of dependent types whose indices have a specific shape (such as 0, in your case) directly. Instead, we must prove a more general statement where the problematic indices are replaced by variables. This is exactly what the l0_gen lemma is doing. Notice how we had to make the match on n return a function that abstracts on v. This is another instance of what is known as "convoy pattern". Had we written
match n with
| 0 => v = @Vector.nil A
| _ => True
end.
Coq would see the v in the 0 branch as having type Vector.t A n, making that branch ill-typed.
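The same generalization goes through for the List type from the question (a sketch, assuming Parameter A : Set and the List definition given earlier):

Lemma L2_gen n (x : List n) :
  match n return List n -> Prop with
  | 0 => fun x => x = ListNil
  | _ => fun _ => True
  end x.
Proof.
  now destruct x.
Qed.

Lemma L2 (x : List 0) : x = ListNil.
Proof.
  exact (L2_gen 0 x).
Qed.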
Coming up with such generalizations is one of the big pains of doing dependently typed programming in Coq. Other systems, such as Agda, make it possible to write this kind of code with much less effort, but it was only recently shown that this can be done without relying on the extra axioms that Coq wanted to avoid including in its basic theory. We can only hope that this will be simplified in future versions.