Are types in Coq mutually exclusive?

Question:
Are types in Coq mutually exclusive?
For example in the accepted answer for this question:
What exactly is a Set in COQ
it is mentioned that "Set <= Type_0". (Does it mean that anything of type Set is also of type Type?)
On the other hand in the accepted answer for this question:
Making and comparing Sets in Coq
it is mentioned that "each valid element of the language has exactly one type".
My motivation:
Relations in Coq (in Relations: Coq.Relations.Relation_Definitions) are defined as:
Variable A : Type.
Definition relation := A -> A -> Prop.
My intention was to express a restriction of a relation to some "smaller" B. If the types are mutually exclusive then it may make no sense.

In the sense asked, yes, types are exclusive.
You cannot express a restriction to some smaller B, at least not between sorts like Set/Prop/Type(0): everything contained in one sort is also contained in the larger sorts.
Subtyping in Coq is a little different from traditional languages. Set <= Type(0) means that anything of type Set can be promoted to having type Type(0), not that it always has type Type(0). Subtyping between sorts is explained in the CIC/Sorts section of the Coq Reference Manual.
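For example, in coqtop (a minimal illustration of this promotion):
Check nat.
(* nat : Set *)
Check (nat : Type).
(* nat : Type   -- the same nat, accepted at the larger sort *)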
However, if you are using type classes, you can define restrictions of relations on subclasses! For example, you can define a type class which has a "+" operator and a subclass where the "+" operator is additionally associative.
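A minimal sketch of that idea (the names HasPlus, AssocPlus, and add are illustrative, not from a standard library):
Require Import PeanoNat.
(* A class providing a "+"-like operator. *)
Class HasPlus (A : Type) := add : A -> A -> A.
(* A "subclass": carriers whose operator is additionally associative. *)
Class AssocPlus (A : Type) `{HasPlus A} :=
  add_assoc : forall x y z : A, add x (add y z) = add (add x y) z.
(* nat is an instance of both; associativity is Nat.add_assoc. *)
Instance nat_has_plus : HasPlus nat := Nat.add.
Instance nat_assoc_plus : AssocPlus nat := Nat.add_assoc.
A relation defined against AssocPlus is thereby restricted to carriers that satisfy the extra law.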

How can I get Coq to let me prove syntactic type inequality?
Negating univalence
I've read the answer to this question, which suggests that if you assume univalence, then the only way to prove type inequality is through cardinality arguments.
My understanding is that if Coq's logic is consistent with univalence, it should also be consistent with the negation of univalence. While I get that the negation of univalence would really just mean that some isomorphic types are non-equal, I believe it should be possible to express that no isomorphic types (that aren't identical) are equal.
Inequality for type constructors
In effect, I would like Coq to treat types and type constructors as inductive definitions, and do a typical inversion-style argument to say that my two very clearly different types are not equal.
Can it be done? This would need to be:
Usable for concrete types, i.e. no type variables, and so
Not necessarily decidable
That strikes me as being weak enough to be consistent.
Context
I have a polymorphic judgement (effectively an inductive type with parameters forall X : Type, X -> Prop) for which the choice of X is decided by the constructors of the judgement.
I want to prove that, for all judgements for a certain choice of X (say X = nat), some property holds; but if I try to use inversion, some constructors give me hypotheses like nat = string (for example). These type-equality hypotheses appear even for types with the same cardinality, so I can't (and don't want to) make cardinality arguments to produce a contradiction.
The unthinkable...
Should I produce an Inductive closed-world encoding of the types I care about, and let that be the polymorphic variable of the above judgement?
If you want to use type inequality, I think that the best you can do is to assume axioms for every pair of types you care about:
Require Import Coq.Strings.String.

Axiom nat_not_string : nat <> string.
Axiom nat_not_pair : forall A B : Type, nat <> (A * B)%type.
(* ... *)
In Coq, there is no first-class way of talking about the name of an inductively defined type, so there shouldn't be a way of stating this family of axioms as a single assumption. Naturally, you might be able to write a Coq plugin in OCaml to generate those axioms automatically every time an inductive type is defined. But the number of axioms you need grows quadratically in the number of types, so I think it would quickly get out of hand.
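With one of these axioms in scope, an inversion-generated hypothesis can be discharged directly; for example (a minimal usage sketch, continuing from the axioms above):
Goal nat = string -> False.
Proof.
  intro H.
  exact (nat_not_string H).
Qed.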
Your "unthinkable" approach is probably the most convenient in this case, actually.
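A minimal sketch of such a closed-world encoding (the names ty, TyNat, TyString, and denote are illustrative):
Require Import Coq.Strings.String.
(* A closed universe of codes for the types we care about. *)
Inductive ty : Type :=
| TyNat
| TyString.
(* Interpret each code as the type it stands for. *)
Definition denote (t : ty) : Type :=
  match t with
  | TyNat => nat
  | TyString => string
  end.
(* Codes form an ordinary inductive type, so inequality of codes
   follows by constructor discrimination: *)
Goal TyNat <> TyString.
Proof. discriminate. Qed.
The judgement is then parametrised by ty rather than by Type, and inversion yields hypotheses like TyNat = TyString, which discriminate dispatches.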
(Nit: "if Coq's logic is consistent with univalence, it should also be consistent with the negation of univalence". Yes, but only because Coq cannot prove univalence.)

Does Scala have a value restriction like ML, if not then why?

Here are my thoughts on the question. Can anyone confirm, deny, or elaborate?
I wrote:
Scala doesn’t unify covariant List[A] with a GLB ⊤ assigned to List[Int], because as far as I can see, in subtyping “biunification” the direction of assignment matters. Thus None must have type Option[⊥] (i.e. Option[Nothing]), and likewise Nil must have type List[Nothing], which can’t accept assignment from an Option[Int] or a List[Int] respectively. So the value restriction problem originates from directionless unification, and global biunification was thought to be undecidable until the recent research linked above.
You may wish to view the context of the above comment.
ML’s value restriction disallows parametric polymorphism in cases (formerly thought to be rare, but perhaps more prevalent) where it would otherwise be sound (i.e. type safe) to allow it, especially for partial application of curried functions, which is important in functional programming. The alternative typing solutions create a stratification between functional and imperative programming, as well as break the encapsulation of modular abstract types. Haskell has an analogous dual, the monomorphism restriction. OCaml relaxes the restriction in some cases. I elaborated on some of these details.
EDIT: my original intuition as expressed in the above quote (that the value restriction may be obviated by subtyping) is incorrect. The answers IMO elucidate the issue(s) well, and I’m unable to decide which of Alexey’s, Andreas’, or mine should be selected as the best answer. IMO they’re all worthy.
As I explained before, the need for the value restriction -- or something similar -- arises when you combine parametric polymorphism with mutable references (or certain other effects). That is completely independent of whether the language has type inference or of whether it also allows subtyping. A canonical counterexample like
let r : ∀A.Ref(List(A)) = ref [] in
  r := ["boo"];
  head(!r) + 1
is not affected by the ability to elide the type annotation nor by the ability to add a bound to the quantified type.
Consequently, when you add references to F<: then you need to impose a value restriction to not lose soundness. Similarly, MLsub cannot get rid of the value restriction. Scala enforces a value restriction through its syntax already, since there is no way to even write the definition of a value that would have polymorphic type.
It's much simpler than that. In Scala values can't have polymorphic types, only methods can. E.g. if you write
val id = x => x
its type isn't [A] A => A.
And if you take a polymorphic method e.g.
def id[A](x: A): A = x
and try to assign it to a value
val id1 = id
again the compiler will try (and in this case fail) to infer a specific A instead of creating a polymorphic value.
So the issue doesn't arise.
EDIT:
If you try to reproduce the http://mlton.org/ValueRestriction#_alternatives_to_the_value_restriction example in Scala, the problem you run into isn't the lack of let: val corresponds to it perfectly well. But you'd need something like
val f[A]: A => A = {
  var r: Option[A] = None
  { x => ... }
}
which is illegal. If you write def f[A]: A => A = ... it's legal but creates a new r on each call. In ML terms it would be like
val f: unit -> ('a -> 'a) =
  fn () =>
    let
      val r: 'a option ref = ref NONE
    in
      fn x =>
        let
          val y = !r
          val () = r := SOME x
        in
          case y of
            NONE => x
          | SOME y => y
        end
    end
val _ = f () 13
val _ = f () "foo"
which is allowed by the value restriction.
That is, Scala's rules are equivalent to only allowing lambdas as polymorphic values in ML, instead of everything the value restriction allows.
EDIT: this answer was incorrect before. I have completely rewritten the explanation below to gather my new understanding from the comments under the answers by Andreas and Alexey.
The edit history, and the history of archives of this page at archive.is, provide a record of my prior misunderstanding and the discussion. Another reason I chose to edit rather than delete and write a new answer is to retain the comments on this answer. IMO this answer is still needed: although Alexey answers the thread title correctly and most succinctly, and Andreas’ elaboration was the most helpful for me in gaining understanding, I think the layman reader may require a different, more holistic (yet hopefully still essence-revealing) explanation in order to quickly gain some depth of understanding of the issue. I also think the other answers obscure how convoluted a holistic explanation is, and I want naive readers to have the option to taste it. The prior elucidations I’ve found don’t state all the details in English and instead (as mathematicians tend to do, for efficiency) rely on the reader to discern the details from the nuances of the symbolic programming-language examples and prerequisite domain knowledge (e.g. background facts about programming-language design).
The value restriction arises where we have mutation of referenced¹ type-parametrised objects². The type unsafety that would result without the value restriction is demonstrated in the following MLton code example:
val r: 'a option ref = ref NONE
val r1: string option ref = r
val r2: int option ref = r
val () = r1 := SOME "foo"
val v: int = valOf (!r2)
The NONE value (which is akin to null) contained in the object referenced by r can be assigned to a reference with any concrete type for the type parameter 'a, because r has a polymorphic type 'a. That would allow type unsafety because, as shown in the example above, the same object referenced by r, which has been assigned to both string option ref and int option ref, can be written (i.e. mutated) with a string value via the r1 reference and then read as an int value via the r2 reference. The value restriction generates a compiler error for the above example.
A typing complication arises: we must prevent³ the type parameter (aka type variable) of such a reference (and of the object it points to) from being (re-)quantified at a different type when reusing an instance of the reference that was previously quantified at another type.
Such (arguably bewildering and convoluted) cases arise, for example, where successive function applications (aka calls) reuse the same instance of such a reference. IOW, cases where the type parameters (pertaining to the object) of a reference are (re-)quantified each time the function is applied, yet the same instance of the reference (and the object it points to) is reused across subsequent applications (and quantifications) of the function.
Tangentially, the occurrence of these cases is sometimes non-intuitive, owing to the lack of an explicit universal quantifier ∀ (since the implicit rank-1 prenex lexical-scope quantification can be dislodged from lexical evaluation order by constructions such as let or coroutines) and to the arguably greater irregularity (as compared to Scala) of when unsafe cases may arise under ML’s value restriction:
Andreas wrote:
Unfortunately, ML does not usually make the quantifiers explicit in its syntax, only in its typing rules.
Reusing a referenced object is, for example, desired for let expressions, which, analogous to math notation, should create and evaluate the instantiation of the substitutions only once, even though they may be lexically substituted more than once within the in clause. So, for example, if the function application is evaluated within the in clause (regardless of whether it also appears there lexically) whilst the type parameters of the substitutions are re-quantified for each application (because the instantiation of the substitutions occurs lexically only within the function application), then type safety can be lost unless the applications are all forced to quantify the offending type parameters only once (i.e. the offending type parameter is disallowed from being polymorphic).
The value restriction is ML’s compromise: it prevents all unsafe cases while also preventing some (formerly thought to be rare) safe cases, so as to simplify the type system. The value restriction is considered the better compromise because early (antiquated?) experience with more complicated typing approaches that didn’t restrict any, or as many, safe cases caused a bifurcation between imperative and pure functional (aka applicative) programming and leaked some of the encapsulation of abstract types in ML functor modules. I cited some sources and elaborated here.
Tangentially, though, I’m pondering whether the early argument against bifurcation really stands up against the fact that the value restriction isn’t required at all for call-by-name (e.g. Haskell-esque lazy evaluation, when also memoized by need), because conceptually partial applications don’t form closures over already-evaluated state; and call-by-name is required for modular compositional reasoning and, when combined with purity, for modular (category-theoretic and equational-reasoning) control and composition of effects. The monomorphism-restriction argument against call-by-name is really about forcing type annotations, yet being explicit where optimal memoization (aka sharing) is required is arguably less onerous, given that such annotation is needed for modularity and readability anyway.
Call-by-value offers a fine-toothed-comb level of control, so where we need that low-level control perhaps we should accept the value restriction, because the rare cases that more complex typing would allow would be less useful in an imperative versus applicative setting. However, I don’t know whether the two can be stratified/segregated in the same programming language in a smooth/elegant manner. Algebraic effects can be implemented in a CBV language such as ML, and they may obviate the value restriction. IOW, if the value restriction is impinging on your code, possibly it’s because your programming language and libraries lack a suitable metamodel for handling effects.
Scala makes a syntactic restriction against all such references, which is a compromise that restricts, for example, the same cases as ML’s value restriction and even more (cases that would be safe if not restricted), but which is more regular in the sense that we’ll never be scratching our heads over an error message pertaining to the value restriction. In Scala, we’re never allowed to create such a reference. Thus in Scala we can only express cases where a new instance of a reference is created when its type parameters are quantified. Note that OCaml relaxes the value restriction in some cases.
Note that afaik neither Scala nor ML enables declaring that a reference is immutable¹, although the object it points to can be declared immutable with val. Note there’s no need for the restriction for references that can’t be mutated.
The reason that mutability of the reference type¹ is required in order for the complicated typing cases to arise is that if we instantiate the reference (e.g. in the substitutions clause of let) with a non-parametrised object (i.e. not None or Nil⁴, but instead, for example, an Option[String] or a List[Int]), then the reference won’t have a polymorphic type (pertaining to the object it points to) and thus the re-quantification issue never arises. So the problematic cases are due to instantiation with a polymorphic object, then assignment of a newly quantified object (i.e. mutation of the reference type) in a re-quantified context, followed by dereferencing (reading) the object pointed to by the reference in a subsequent re-quantified context. As aforementioned, when the re-quantified type parameters conflict, the typing complication arises and the unsafe cases must be prevented/restricted.
Phew! If you understood that without reviewing linked examples, I’m impressed.
¹ IMO employing the phrase “mutable references” instead of “mutability of the referenced object” and “mutability of the reference type” would be potentially more confusing, because our intention is to mutate the object’s value (and its type) which is referenced by the pointer, not the pointer itself. For primitive types, some programming languages don’t even explicitly distinguish whether they’re disallowing mutation of the reference or of the object it points to.
² Wherein an object may even be a function, in a programming language that allows first-class functions.
³ To prevent a segmentation fault at runtime due to accessing (reading or writing) the referenced object under a presumption about its statically (i.e. compile-time) determined type, which is not the type that the object actually has.
⁴ Which are NONE and [] respectively in ML.

A type constructor IS a monad or HAS a monad?

People usually say a type IS a monad.
In some functional languages and libraries (like Scala/Scalaz), you have a type constructor like List or Option, and you can define a Monad implementation that is separated from the original type. So basically there's nothing that forbids you in the type system from creating distinct instances of Monad for the same type constructor.
Is it possible for a type constructor to have multiple monads?
If yes, can you provide any meaningful example of that? Any "artificial" one?
What about monoids, applicatives...?
You can commonly find this all around in mathematics.
A monad is a triple (T, return, bind) such that (...). When bind and return can be inferred from the context, we just refer to the monad as T.
A monoid is a triple (M, e, •) such that (...). (...) we just refer to the monoid as M.
A topological space is a pair (S, T) such that (...). We just refer to the topological space as S.
A ring is a tuple (V, 0, +, 1, ×)...
So indeed, for a given type constructor T there may be multiple different definitions of return and bind that make a monad. To avoid having to refer to the triple every time, we can give T different names to disambiguate, in a way which corresponds to the newtype construct in Haskell. For example: [] vs ZipList, State s vs ReaderT s (Writer s).
P.S. There is something artificial in saying that a monad or a monoid is a triple, especially given that there are different presentations: we could also say that a monad is a triple (T, fmap, join), or that a monoid is a pair (M, •), with the identity element hidden in the extra condition (because it is uniquely determined by • anyway). The ontology of mathematical structures is a more philosophical question that is outside the scope of SO (as well as outside my expertise). But a more prudent way to reformulate such definitions may be to say that "a monad is (defined|characterized) by a triple (T, return, bind)".
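To make this concrete in Coq (a minimal sketch with illustrative names): packaging the triple as a record lets a single carrier support several distinct structures.
(* A monoid packaged as its full triple (M, e, op); laws omitted for brevity. *)
Record monoid (M : Type) := { munit : M; mop : M -> M -> M }.
(* Two different monoids over the same carrier nat: *)
Definition sum_monoid : monoid nat := {| munit := 0; mop := Nat.add |}.
Definition prod_monoid : monoid nat := {| munit := 1; mop := Nat.mul |}.
A full definition would also carry the monoid equations as fields; the point is only that nothing ties the carrier to a unique choice of structure.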
Insofar as you're asking about language usage, Google says that the phrase “has a monad” doesn't seem to be commonly used in the way you're asking about. Most real occurrences are in sentences such as, “The Haskell community has a monad problem.” However, a few cases of vaguely similar usage do exist in the wild, such as, “the only thing which makes it ‘monadic‘ is that it has a Monad instance.” That is, monad is often used as a synonym for monadic, modifying some other noun to produce a phrase (a monad problem, a Monad instance) that is sometimes used as the object of the verb have.
As for coding: in Haskell, a type can declare at most one instance of Monad, one of Monoid, and so on. When a given type could have many such instances defined, such as how numbers are monoids under addition, multiplication, maximum, minimum, and many other operations, Haskell defines separate types, such as Sum Int, whose Monoid instance uses + as the operation, and Product Int, whose Monoid instance uses *.
I haven't comprehensively checked the tens of thousands of hits, though, so it's very possible there are better examples in there of what you're asking about.
The phrasing I've commonly seen for that is the one I just used: a type is a monoid under an operation.

Enable explicit Type indices in coqtop?

In Coq there is a hierarchy of types, each one the type of the previous, i.e. Type_0 : Type_1, Type_1 : Type_2, and so on. In coqtop, however, when I type Check Type. I get Type : Type, which looks like a contradiction but is not, since Type is implicitly indexed.
Question: How to enable explicit indexing of the Type universes?
The short answer, as @ejgallego mentioned above, is to enable the printing of universe levels:
Coq < Set Printing Universes.
Coq < Check Type.
Type@{Top.1}
     : Type@{Top.1+1}
(* Top.1 |= *)
Conceptually there is indeed a hierarchy of types, which might be called Type@{1}, Type@{2}, etc. However, Coq actually maintains symbols for universe indices, and relationships among them (universe constraints), rather than explicit numbers. The constraints are kept consistent so that it is always possible to assign some explicit number to every symbol in a consistent manner.
In the output above you can see that Coq has created a universe level Top.1 for the Type inside the Check Type. command. Its type is always one level higher, which Coq expresses without introducing another symbol, via the expression Top.1+1. With Set Printing Universes, the list of constraints is also output as a comment; in this case it shows the one symbol in the context, Top.1, and no constraints on the right-hand side.
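To see a nontrivial constraint appear, you can force one with a cast (the exact level names and layout depend on the Coq version):
Coq < Check (Type : Type).
Type@{Top.2}
     : Type@{Top.3}
(* Top.2 Top.3 |= Top.2 < Top.3 *)
The ascription is accepted only because Coq can record the constraint Top.2 < Top.3, keeping the hierarchy consistent.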
Coq maintains a global list of universe levels and constraints it has created so far. You can read a more thorough explanation of universe levels and constraints in CPDT: http://adam.chlipala.net/cpdt/html/Universes.html.

What is the difference between subtyping and subsumption?

What is the difference between subtyping and subsumption? Does subsumption mean implicit coercion?
Yes, you're mostly right.
Subtyping is a relation over two types. By itself, it doesn't say how this relation actually applies to the typing of expressions.
Subsumption usually means the presence of a typing rule for expressions that allows subtyping to be applied to their types implicitly. There are languages that have subtyping but no subsumption rule, where it must instead be invoked explicitly with a special type annotation (e.g., OCaml).
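In the usual typing-rule notation (a standard textbook formulation, not tied to any particular language), the subsumption rule is:

\frac{\Gamma \vdash e : S \qquad S <: T}{\Gamma \vdash e : T} \quad (\textsc{Sub})

That is, if e has type S and S is a subtype of T, then e also has type T, with nothing in the syntax of e marking the step.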
There's also the somewhat independent aspect of whether subtyping is "coercive". Coercive subtyping changes the value it is applied to. For example, in a language where Int <: Float, subtyping may be coercive, because ints and floats are different domains. Typical OO-style subtyping on objects is usually non-coercive. However, this is a somewhat fuzzy notion, since it often depends on the choice of semantic model, and may not necessarily make an observable difference (unless a language allows subtyping to be reversed with downcasts). In practice, it refers to implementation techniques more than semantics.