Type hierarchy definition in Coq or Agda

I would like to build a kind of type hierarchy:
B is of type A (B :: A)
C and D are of type B (C, D :: B)
E and F are of type C (E, F :: C)
I asked here whether this can be implemented directly in Isabelle, but as you can see the answer was no. Is it possible to encode this directly in Agda or Coq?
PS: Suppose A..F are all abstract and some functions are defined over each type.
Thanks

If I understood your question correctly, you want something that looks like the identity type. When we declare the type constructor _isOfType_, we mention two Sets (the parameter A and the index B) but the constructor indeed makes sure that the only way to construct an element of such a type is to enforce that they are indeed equal (and that a is of this type):
data _isOfType_ {ℓ} {A : Set ℓ} (a : A) : (B : Set ℓ) → Set where
  indeed : a isOfType A
We can now have functions taking as arguments proofs that things are of the right type. Here I translated your requirements and assumed that I had a function f able to combine two Cs into one. Pattern-matching on the appropriate assumptions reveals that E and F are indeed of type C and can therefore be fed to f to discharge the goal:
example : ∀ (A : Set₃) (B : Set₂) (C D : Set₁) (E F : Set) →
          B isOfType A
        → C isOfType B → D isOfType B
        → E isOfType C → F isOfType C
        → (f : C → C → C) → C
example A B .Set D E F _ _ _ indeed indeed f = f E F
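For the Coq half of the question, roughly the same encoding seems possible. A minimal sketch (untested, with illustrative names; the witness lives in Type rather than Prop so that matching on it may produce ordinary data):
(* A witness that a "is of type" B is a reflexivity-style proof that B
   coincides with the declared type of a. *)
Inductive isOfType (A : Type) (a : A) : Type -> Type :=
| indeed : isOfType A a A.
(* Pattern matching on the witness reveals that B really is A,
   so a can be returned at type B. *)
Definition coerce (A B : Type) (a : A) (w : isOfType A a B) : B :=
  match w in isOfType _ _ T return T with
  | indeed => a
  end.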
Do you have a particular use case in mind for this sort of pattern, or are you coming to Agda with ideas you have encountered in other programming languages? There may be a more idiomatic way to formulate your problem.

Related

Lean define groups

This is a follow-up question of Lean pass type as parameter
I tried jmc's suggestion, which seemed to work, but then I got stuck at another point. The original purpose of the question was to define the categories of groups and rings, but now apparently I am unable to define group morphisms:
class group :=
(set: Type)
(add: set → set → set)
infix + := group.add
class group_morphism (G H: group) :=
(f: G.set → H.set)
(additive: ∀ g h : G.set, f(g + h) = (f g) + (f h))
I get an error at the first +. Lean seems to think that this refers to H.add, whereas it is supposed to refer to G.add.
The issue is that group should not be a class. If you look at the type of group.add with #check group.add, you will get
group.add : Π [c : group], group.set → group.set → group.set
The square brackets around c : group indicate that this is an implicit argument that will be inferred by type class inference. You won't have to explicitly type this argument, but Lean will try to work out what it is. Type class inference works best for types where there is only one inhabitant you ever want to use.
In mathlib the definition of group is closer to
class group (set : Type) :=
(add : set → set → set)
On a particular type, there is usually only one group structure you want to refer to, so in mathlib, a type like add_group int has only one inhabitant you care about.
Lean automatically chose H as the canonical inhabitant of the type group, but this is not the one you wanted.
So usually when you deal with groups, the type and the group structure are kept as separate objects rather than being bundled into a pair. For category theory, however, the usual approach doesn't work: an object is a pair of a type and a group structure.
The setup in mathlib is closer to the following. The coe_to_sort instance tells Lean how to take a group_obj and interpret it as a Type without having to explicitly write G.set; the group_obj_group instance tells Lean how to automatically infer the group structure on the type of a group_obj:
class group (set : Type) :=
(add: set → set → set)
structure group_obj :=
(set : Type)
(group : group set)
instance coe_to_sort : has_coe_to_sort group_obj :=
{ S := Type,
coe := group_obj.set }
instance group_obj_group (G : group_obj) : group G := G.group
infix `+` := group.add
structure group_morphism (G H : group_obj) :=
(f: G → H)
(additive: ∀ g h : G.set, f(g + h) = (f g) + (f h))
You are redefining the + notation, which will very quickly lead to headaches. Having a polymorphic + notation is very helpful. (How will you denote the addition in a ring?)
Further points:
you should use structure instead of class
mathematically, you are defining monoids and monoid homs, not groups and group homs
This works though:
class group :=
(set: Type)
(add: set → set → set)
def add {G : group} := group.add G
class group_morphism (G H: group) :=
(f: G.set → H.set)
(additive: ∀ g h : G.set, f(add g h) = add (f g) (f h))

Expressing "almost Properness" under option type

Suppose we have a type A with an equivalence relation (===) : A -> A -> Prop on it.
On top of that there is a function f : A -> option A.
It so happens that this function f is "almost" Proper with respect to equiv. By "almost" I mean the following:
Lemma almost_proper :
forall a1 a2 b1 b2 : A,
a1 === a2 ->
f a1 = Some b1 -> f a2 = Some b2 ->
b1 === b2.
In other words, if f succeeds on both inputs, the relation is preserved, but f might still fail on one and succeed on the other. I would like to express this concept concisely but came up with a few questions when trying to do so.
I see three solutions to the problem:
1. Leave everything as is. Do not use typeclasses; prove lemmas like the one above. This doesn't look good, because of the proliferation of preconditions like x = Some y, which create complications when proving lemmas.
2. It is possible to prove Proper ((===) ==> equiv_if_Some) f when equiv_if_Some is defined as follows:
Inductive equiv_if_Some {A : Type} {EqA : relation A} `{Equivalence A EqA} : relation (option A) :=
| equiv_Some_Some : forall a1 a2, a1 === a2 -> equiv_if_Some (Some a1) (Some a2)
| equiv_Some_None : forall a, equiv_if_Some (Some a) None
| equiv_None_Some : forall a, equiv_if_Some None (Some a)
| equiv_None_None : equiv_if_Some None None.
One problem here is that this is no longer an equivalence relation (it is not transitive).
3. It might be possible to prove Almost_Proper ((===) ==> (===)) f if some reasonable Almost_Proper class is used. I am not sure how that would work.
What would be the best way to express this concept? I am leaning toward the second one, but perhaps there are more options?
For variants 2 and 3, are there preexisting common names (and therefore possibly premade definitions) for the relations I describe? (equiv_if_Some and Almost_Proper)
Option 2, if it simplifies your proofs.
The fact that it's not an equivalence relation is not a deal breaker, but it does limit the contexts in which it can be used.
equiv_if_Some might be nicer to define as an implication (similar to how it appears in the almost_proper lemma) than as an inductive type.
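For instance, a minimal sketch of such an implication-style definition (the name equiv_if_Some_impl is only illustrative):
Require Import Coq.Relations.Relation_Definitions.
(* Relate two options whenever, if both are Some, the payloads are related.
   This mirrors the shape of the almost_proper lemma above. *)
Definition equiv_if_Some_impl {A : Type} (eqA : relation A)
  : relation (option A) :=
  fun x y => forall a b, x = Some a -> y = Some b -> eqA a b.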
You may also consider using other relations (as an alternative to, or in combination with equiv_if_Some):
A partial order, which can relate None to Some but not Some to None.
A partial equivalence relation, which only relates Somes (see the sketch below).
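For the last variant, a minimal sketch (option_per and eqA are illustrative names, not taken from any library):
Require Import Coq.Classes.RelationClasses.
Require Import Coq.Relations.Relation_Definitions.
Section OptionPER.
  Context {A : Type} (eqA : relation A) `{Equivalence A eqA}.
  (* A relation on option A that only relates Somes. *)
  Inductive option_per : relation (option A) :=
  | per_Some : forall a b, eqA a b -> option_per (Some a) (Some b).
  (* Symmetric and transitive, hence a partial equivalence relation,
     but not reflexive: None is not related to itself. *)
  Instance option_per_sym : Symmetric option_per.
  Proof. intros x y [a b Hab]. constructor. symmetry. assumption. Qed.
  Instance option_per_trans : Transitive option_per.
  Proof.
    intros x y z Hxy Hyz.
    destruct Hxy as [a b Hab]. inversion Hyz; subst.
    constructor. etransitivity; eassumption.
  Qed.
End OptionPER.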

How do I code algebraic cpos as an Isabelle locale

I am trying to prove the known fact that there is a standard way to build an algebraic_cpo from a partial_order. My problem is that I keep getting the error
Type unification failed: No type arity set :: partial_order
and I cannot understand what this means.
I think I have tracked my problem down to the definition of cpo. The definition works and I have proven various results for it, but the interpretation that works for partial_order fails for cpo.
theory LocaleProblem imports "HOL-Lattice.Bounds"
begin
definition directed:: "'a::partial_order set ⇒ bool" where
" directed a ≡
¬a={} ∧ ( ∀ a1 a2. a1∈a∧ a2∈a ⟶ (∃ ub . (is_Sup {a1,a2} ub))) "
class cpo = partial_order +
fixes bot :: "'a::partial_order" ("⊥")
assumes bottom: " ∀ x::'a. ⊥ ⊑ x "
assumes dlub: "directed (d::'a::partial_order set) ⟶ (∃ lub . is_Inf d lub) "
print_locale cpo
interpretation "idl" : partial_order
"(⊆)::( ('b set) ⇒ ('b set) ⇒ bool) "
by (unfold_locales , auto) (* works *)
interpretation "idl" : cpo
"(⊆)::( ('b set) ⇒ ('b set) ⇒ bool) "
"{}" (* gives error
Type unification failed: No type arity set :: partial_order
Failed to meet type constraint:
Term: (⊆) :: 'b set ⇒ 'b set ⇒ bool
Type: ??'a ⇒ ??'a ⇒ bool *)
Any help much appreciated. david
You offered two solutions.
Following the work of Hennessy in "Algebraic Theory of Processes", I am trying to prove (where I(A) are the ideals, which are sets) that "if A is a partial_order then I(A) is an algebraic_cpo". I will then want to apply the result to a number of semantics, given as sets. Does your comment mean that the second solution is not a good idea?
Initially I had a proven lemma that started
lemma directed_ran: "directed (d::('a::partial_order×'b::partial_order) set) ⟹ directed (Range d)"
proof (unfold directed_def)
The first solution started well:
context partial_order begin (* type 'a is now a partial_order *)
definition
is_Sup :: "'a set ⇒ 'a ⇒ bool" where
"is_Sup A sup = ((∀x ∈ A. x ⊑ sup) ∧ (∀z. (∀x ∈ A. x ⊑ z) ⟶ sup ⊑ z))"
definition directed:: "'a set ⇒ bool" where
" directed a ≡
¬a={} ∧ ( ∀ a1 a2. a1∈a∧ a2∈a ⟶ (∃ ub . (is_Sup {a1,a2} ub))) "
lemma directed_ran: "directed (d::('c::partial_order×'b::partial_order) set) ⟹ directed (Range d)"
proof - assume a:"directed d"
from a local.directed_def[of "d"] (* Fail with message below *)
show "directed (Range d)"
Alas, the lemma that previously worked now fails. I rewrote the proof as
proof (unfold local.directed_def)
and on exploring found that, although the fact local.directed_def exists, it cannot be unified:
Failed to meet type constraint:
Term: d :: ('c × 'b) set
Type: 'a set
I changed the type in the lemma statement successfully, but now I can find no way to unfold the definition in the proof. Is there some way to do this?
The second solution again starts well:
instantiation set:: (type) partial_order
begin
definition setpoDef: "a⊑(b:: 'a set) = subset_eq a b"
instance proof
fix x::"'a set" show " x ⊑ x" by (auto simp: setpoDef)
fix x y z::"'a set" show "x ⊑ y ⟹ y ⊑ z ⟹ x ⊑ z" by(auto simp: setpoDef)
fix x y::"'a set" show "x ⊑ y ⟹ y ⊑ x ⟹ x = y" by (auto simp: setpoDef)
qed
end
but the next step fails:
instance proof
fix d show "directed d ⟶ (∃lub. is_Sup d (lub::'a set))"
proof assume "directed d " with directed_def[of "d"] show "∃lub. is_Sup d lub"
by (metis Sup_least Sup_upper is_SupI setpoDef)
qed
next
from class.cpo_axioms_def[of "bot"] show "∀x . ⊥ ⊑ x " (* Fails *)
qed
end
The first subgoal is proven, but the show "∀x . ⊥ ⊑ x", although cut and pasted from the subgoal in the output, does not match the subgoal. Normally at this point I need to add type constraints, but I cannot find any that work.
Do you know what is going wrong?
Do you know if I can force the output to reveal the type information in the subgoal?
The interpretation command acts on the locale that a type class definition implicitly declares. In particular, it does not register the type constructor as an instance of the type class. That's what the command instantiation is good for. Therefore, in the second interpretation, Isabelle complains about set not having been registered as an instance of partial_order.
Since directed only needs the ordering for one type instance (namely 'a), I recommend to move the definition of directed into the locale context of the partial_order type class and remove the sort constraint on 'a:
context partial_order begin
definition directed:: "'a set ⇒ bool" where
"directed a ≡ ¬a={} ∧ ( ∀ a1 a2. a1∈a∧ a2∈a ⟶ (∃ ub . (is_Sup {a1,a2} ub))) "
end
(This only works if is_Sup is also defined in the locale context. If not, I recommend to replace the is_Sup condition with a1 <= ub and a2 <= ub.)
Then, you don't need to constrain 'a in the cpo type class definition, either:
class cpo = partial_order +
fixes bot :: "'a" ("⊥")
assumes bottom: " ∀ x::'a. ⊥ ⊑ x "
assumes dlub: "directed (d::'a set) ⟶ (∃ lub . is_Inf d lub)"
And consequently, your interpretation should not fail due to sort constraints.
Alternatively, you could declare set as an instance of partial_order instead of interpreting the type class. The advantage is that you can then also use constants and theorems that need partial_order as a sort constraint, i.e., that have not been defined or proven inside the locale context of partial_order. The disadvantage is that you'd have to define the type class operation inside the instantiation block. So you can't just say that the subset relation should be used; this has to be a new constant. Depending on what you intend to do with the instantiation, this might not matter, or it might be very annoying.

Characteristic function of a union

In a constructive setting such as Coq's, I expect the proof of a disjunction A \/ B to be either a proof of A, or a proof of B. If I reformulate this on subsets of a type X, it says that if I have a proof that x is in A union B, then I either have a proof that x is in A, or a proof that x is in B. So I want to define the characteristic function of a union by case analysis,
Definition characteristicUnion (X : Type) (A B : X -> Prop)
(x : X) (un : A x \/ B x) : nat.
It will be equal to 1 when x is in A, and to 0 when x is in B. However Coq does not let me destruct un, because "Case analysis on sort Set is not allowed for inductive definition or".
Is there another way in Coq to model subsets of type X, that would allow me to construct those characteristic functions on unions ? I do not need to extract programs, so I guess simply disabling the previous error on case analysis would work for me.
Mind that I do not want to model subsets as A : X -> bool. That would be unnecessarily strong: I do not need laws of excluded middle such as "either x is in A or x is not in A".
As pointed out by @András Kovács, Coq prevents you from "extracting" computationally relevant information from types in Prop in order to allow some more advanced features to be used. There has been a lot of research on this topic, including recently Univalent Foundations / HoTT, but that would go beyond the scope of this question.
In your case you indeed want to use the sum type { A x } + { B x }, which will allow you to do what you want.
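For instance, a minimal sketch along those lines, phrasing the hypothesis as the informative sum { A x } + { B x } rather than the propositional A x \/ B x:
(* With an informative disjunction living in Type, case analysis is
   allowed even though the result is a nat. *)
Definition characteristicUnion (X : Type) (A B : X -> Prop)
  (x : X) (un : { A x } + { B x }) : nat :=
  match un with
  | left _ => 1   (* we have a proof that x is in A *)
  | right _ => 0  (* we have a proof that x is in B *)
  end.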
I think the union of subsets should be a subset as well. We can do this by defining union as pointwise disjunction:
Definition subset (X : Type) : Type := X -> Prop.
Definition union {X : Type}(A B : subset X) : subset X := fun x => A x \/ B x.
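With this definition, membership in the union is by definition just the pointwise disjunction; for example (a small sketch):
(* An element of A is in union A B by choosing the left disjunct. *)
Lemma union_intro_l {X : Type} (A B : subset X) (x : X) :
  A x -> union A B x.
Proof. intros H. unfold union. left. exact H. Qed.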

Equality in Coq and in a paper of Awodey

In the paper
Univalence as a Principle of Logic, Awodey writes on page 7:
Let us consider the example of intensional versus extensional type theory. The extensional theory has an apparently “stronger” notion of equality, because it permits one to simply substitute equals for equals in all contexts. In the intensional system, by contrast, one can have a = b and a statement Φ(a) and yet not have Φ(b).
I do not understand this, because I thought this was the basic property of equality.
Also, in Coq one can simply prove:
Theorem subs: forall (T:Type)(a b:T)(p:a=b)(P:T-> Prop), P a -> P b.
intros.
rewrite <- p.
assumption.
Qed.
If I am not mistaken, in Awodey's paper, the notation Φ(a) means substituting the expression a for some implicit free variable in Φ.
In my answer, I will use the following more explicit notation instead (which corresponds to Φ(a) in the paper):
Φ[z ⟼ a]
This means that, in the expression Φ, the free variable z will be replaced by the expression a.
Example:
(x = z)[z ⟼ a]
results in
(x = a)
Now, I will assume that you are already familiar with the usual presentation of Type Theory by means of inference rules, as in Appendix A.2 in the HoTT Book.
Type Theory uses two notions of equality.
Identity Types: Usually written using the symbol = in inference rules. In order to conclude a statement like a = b, we need to provide a proof term for it. For example, let's have a look at the introduction rule for identity types:
a : A
---------------- (=)-INTRO
refl a : a = a
Here, refl a is acting as the proof term or evidence that justifies our claim that a = a holds (namely, refl a represents the trivial or reflexivity proof). So, a statement like p : a = b expresses that a and b can be identified due to evidence p.
Definitional Equality: Usually written using the symbol ≡ in inference rules. The statement a ≡ b means that a and b are interchangeable, replaceable anywhere, or substitutionally equivalent. This equality captures notions like "by definition", "by computation", "by simplification". This kind of equality does not carry a proof term with it, i.e. it is not a typing statement. This is the kind of equality Coq uses implicitly when you use the tactics simpl and compute. For example, let's have a look at the reflexivity rule for ≡:
a : A
--------- (≡)-REFLE
a ≡ a
Notice that there is no proof term to the left of a ≡ a (compare with the (=)-INTRO rule above). In this case, the proof system is treating a ≡ a as a fact, something that holds without the need to state explicitly its justification, since the only use of ≡ in the proof system will be for rewriting expressions.
That ≡ is used only for simplifying expressions can be seen in other inference rules, for example the type preservation rule for ≡:
a : A        A ≡ B
------------------ (≡)-TYPE-PRESERV
a : B
In other words, if you start with a term a of type A, and you know that types A and B are interchangeable, then term a also has type B. Notice that (and this will be important later!!) the proof term or evidence for B did not change, it is still the same proof term as the one used for A.
We can now get into the question.
What differentiates Extensional Type Theory (ETT) from an Intensional Type Theory like HoTT or CoC is the way they treat identity types and definitional equality.
ETT makes identity types and definitional equality interchangeable by adding the following inference rule:
p : a = b
----------- (=)-EXT
a ≡ b
In other words, the evidence p for the identity becomes irrelevant, and we treat a and b as interchangeable in the proof system (thanks to rules like (≡)-TYPE-PRESERV and other similar rules).
Starting from the hypotheses p : a = b and a : A, in ETT we can do stuff like the following:
  a : A                      p : a = b
--------- (≡)-REFLE        ----------- (=)-EXT
  a ≡ a                        a ≡ b
--------------------------------------------- (=)-CONG   (*1)
             (a = a) ≡ (a = b)
Where (=)-CONG is a congruence rule (i.e. definitionally equivalent terms produce definitionally equivalent identity types), and I am calling this derivation (*1).
Using (*1), we can then derive:
      a : A
----------------- (=)-INTRO        -------------------- (*1)
  refl a : a = a                     (a = a) ≡ (a = b)
------------------------------------------------------------ (≡)-TYPE-PRESERV
                     refl a : a = b
Where in (*1) we insert the derivation we did above.
In other words, if we ignore the hypotheses a : A and intermediate steps, it is as if we did the following inference:
p : a = b        refl a : a = a
-------------------------------------
         refl a : a = b
Since ETT is treating a and b as interchangeable (thanks to the hypothesis p : a = b and the (=)-EXT rule), the proof refl a for a = a can also be seen as a proof for a = b. So, it is not hard to see that, in ETT, having a proof of an identity like a = b is sufficient for replacing some or all occurrences of a with b in ANY statement involving a.
Now, what happens in an Intensional Type Theory (ITT)?
In an ITT, the (=)-EXT rule is not assumed. Therefore, we cannot carry out the derivation (*1) we did above, and in particular, the following inference is invalid:
p : a = b        refl a : a = a
-------------------------------------
         refl a : a = b
This is an example where we have an identity p : a = b, but from the statement (refl a : a = z)[z ⟼ a], we cannot conclude the statement (refl a : a = z)[z ⟼ b]. This is an instance of what Awodey's paper was referring to I think.
Why is this an invalid inference? Because refl a : a = b forces a and b to be definitionally equal (i.e. the only way to introduce refl a into a derivation is through the (=)-INTRO rule), but this is not necessarily true from the hypothesis p : a = b. In HoTT, for example, the interval type I (Section 6.3 in the HoTT Book) has two terms 0 : I and 1 : I; they are not definitionally equal, but we have a proof seg : 0 = 1.
The fact that there might exist identity proofs other than the trivial or reflexivity proof is what gives an Intensional Type Theory its richness. It is what allows HoTT to have Univalence and Higher Inductive Types, for example.
So, what can we conclude from the hypotheses p : a = b and refl a : a = a in an ITT?
In your question, the theorem you proved in Coq is called the "transport" function in HoTT (Section 2.3 in the HoTT Book). Using your theorem (removing the implicit parameters), you will be able to do the following derivation:
p : a = b            refl a : a = a
------------------------------------------
   subs p (λx => a = x) (refl a) : a = b
In other words, we can conclude that a = b, but our proof term for this changed! In ETT we simply carried out a substitution (because a and b were interchangeable) allowing us to use the same evidence in the conclusion (namely refl a). But in an ITT, we cannot treat a and b as substitutionally equivalent, due to the richness of the identity types. And to reflect this intention, we need to combine the proofs of the hypotheses to build our new evidence in the conclusion.
So, from (refl a : a = z)[z ⟼ a] we cannot conclude (refl a : a = z)[z ⟼ b], but we can conclude subs p (λx => a = x) (refl a) : a = b, which is not the result of a simple substitution from the hypothesis, as in ETT.
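Concretely, in Coq this last step corresponds to building the new proof term with the subs theorem from the question. A minimal sketch (transported_refl is just an illustrative name):
(* Turning the proof refl a : a = a into a proof of a = b by transporting
   it along p, as in the derivation above. *)
Definition transported_refl (T : Type) (a b : T) (p : a = b) : a = b :=
  subs T a b p (fun x => a = x) eq_refl.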
The rewrite tactic in Coq can fail - it can generate an ill-typed term.
If I remember correctly, it's sometimes possible to get around this with some careful manipulations, but if you do this by (implicitly or explicitly) introducing an additional axiom, such as functional extensionality or JMeq_eq, it's no longer the case that the first goal simply follows from the second.