Interface with multiple implementations in OCaml

What is the conventional way to create an interface in OCaml? It's possible to have an interface with a single implementation by creating an interface file foo.mli and an implementation file foo.ml, but how can you create multiple implementations for the same interface?

You must use modules and signatures. A .ml file implicitly defines a module, and a .mli file defines its signature. With explicit modules and signatures, you can apply one signature to several different modules.
See this chapter of the online book "Developing Applications with OCaml".
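As a minimal sketch of that idea (the COUNTER signature and module names here are mine, not from the book): one signature, two structurally different implementations.
module type COUNTER = sig
  type t
  val zero : t
  val next : t -> t
end

(* First implementation: plain machine integers *)
module IntCounter : COUNTER = struct
  type t = int
  let zero = 0
  let next n = n + 1
end

(* Second implementation of the very same signature: floats *)
module FloatCounter : COUNTER = struct
  type t = float
  let zero = 0.
  let next x = x +. 1.
end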

If you're going to have multiple implementations for the same signature, define your signature inside a compilation unit, rather than as a compilation unit, and (if needed) similarly for the modules. There's an example of that in the standard library: the OrderedType signature, that describes modules with a type and a comparison function on that type:
module type OrderedType = sig
  type t
  val compare : t -> t -> int
end
This signature is defined in both set.mli and map.mli (you can refer to it as either Set.OrderedType or Map.OrderedType, or even write it out yourself: signatures are structural). There are several compilation units in the standard library that have this signature (String, Nativeint, etc.). You can also define your own modules, and you don't need to do anything special when doing so: as long as a module has a type called t and a value called compare of type t -> t -> int, it has that signature. There's a slightly more elaborate example of that in the standard library: the Set.Make functor builds a module which itself has the signature OrderedType, so you can build sets of sets that way.
(* All four modules passed as arguments to Set.Make have the signature Set.OrderedType *)
module IntSet = Set.Make(struct type t = int let compare = Pervasives.compare end)
module StringSet = Set.Make(String)
module StringSetSet = Set.Make(StringSet)
module IntSetSet = Set.Make(IntSet)
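A small usage sketch of the modules above (of_list requires OCaml >= 4.02): sets of sets work because Set.Make returns a module that itself provides a type t and a compare function.
let s1 = IntSet.of_list [1; 2; 3]
let s2 = IntSet.of_list [4; 5]
let ss = IntSetSet.of_list [s1; s2]
(* s1 is a member of ss, compared with IntSet's own compare *)
let () = assert (IntSetSet.mem s1 ss)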


Objects with type members: what is Scala's object vs module system? (Trying to understand a 2014 Odersky paper on path dependent types)

I am reading Foundations of path dependent types. On the first page, in the right column, it is written:
Our motivation is twofold. First, we believe objects with type members
are not fully understood. It is not clear what causes the complexity,
which pieces of complexity are essential to the concept or accidental
to a language implementation or calculus that tries to achieve
something else. Second, we believe objects with type members are
really useful. They can encode a variety of other, usually separate
type system features. Most importantly, they unify concepts from
object and module systems, by adding a notion of nominality to otherwise structural systems.
Could someone clarify/explain what "object vs module" system means?
Or in general, what does
"they (objects with type members) unify concepts from
object and module systems, by adding a notion of nominality to otherwise structural systems."
mean?
What concepts? From where?
Nominality in the object names / values?
Structure in the types? Or the other way around?
Where do the type members belong? To the module system? The object system? How? Why?
EDIT:
How does this unification relate to path dependent types? It seems to me that they (objects with type members) allow this unification to happen. Is that so?
If yes, how?
Could you give a simple example of what that means? (I.e. path dependent types allowing the unification of module and object systems, versus why the unification would not be possible if we did not have path dependent types?)
EDIT 2:
From the paper:
To make any use of type members, programmers need a way to refer to
them. This means that types must be able to refer to objects, i.e.
contain terms that serve as static approximation of a set of dynamic
objects. In other words, some level of dependent types is required;
the usual notion is that of path-dependent types.
So my understanding so far (with the help of Jesper's answer):
The paragraph above partially answers some of the questions. The main point seems to be that to have objects with type members, path dependent types are needed: objects are dynamic (runtime values) while types are static (defined at compile time), so objects containing type members would not work by themselves, because those type members would not be well defined at compile time.
Path dependent types help here by pinning down, at compile time, the path that leads to a type member. Even though the path goes via objects, as long as those objects are fixed at compile time, their type members have a clear meaning at compile time too.
I'm not sure I fully understand what your question is, but I'll take a stab at it. :) I think the authors are mainly referring to ML-style modules, where a signature corresponds to a Scala trait and a structure corresponds to a Scala object. Scala unifies the concepts of record values, objects and modules, which in most other languages (like ML, Rust, etc.) are separate concepts. The main benefit is that in Scala modules/objects can be passed around as normal function arguments (while in ML you have to use special functors for this).
In ML a module is checked for compatibility with a signature (trait in Scala) based on its structure (similar to structural typing in Scala), but in Scala the module must implement the trait by name (nominal typing). So even if two modules/objects in Scala have the same structure, they might not be compatible with each other, depending on their supertype hierarchy.
A really powerful feature regarding type members in Scala is that you can use a trait even if you don't know the exact type of its type members as long as you do it in a type safe way (I think this is also possible in ML modules), for example:
trait A {
  type X
  def getX: X
  def setX(x: X): Unit
}
def foo(a: A) = a.setX(a.getX)
In foo the Scala compiler doesn't know the exact type of a.X but a value of the type can still be used in a way the compiler knows is safe. This is not possible in Rust for example.
The next version of the Scala compiler, Dotty, will be based on the theory described in the paper you reference. This unification of modules and objects combined with subtyping, traits and type members is one reason that Scala is unique and very powerful.
EDIT: To expand a bit on why path dependent types increase the flexibility of Scala's module/object system, let's extend the example above with:
def bar(a: A, b: A) = a.setX(b.getX)
This will result in a compilation error:
error: type mismatch;
 found   : b.X
 required: a.X
def bar(a: A, b: A) = a.setX(b.getX)
                              ^
and correctly so, because a.X and b.X could resolve to different types. You can fix it by using a path dependent type:
def bar(a: A)(b: A { type X = a.X }) = a.setX(b.getX)
Or add a type parameter:
def bar[T](a: A { type X = T }, b: A { type X = T }) = a.setX(b.getX)
So, path dependent types eliminate some need for type parameters, and also allow us to express existential types efficiently (corresponding to A[_], or A[T] forSome { type T } if A had a type parameter instead of a type member).

How to properly load type Int from Coq.ZArith.Int?

I'm new to Coq and I am trying to use the "int" type from ZArith.Int, but Coq cannot find it.
Require Export ZArith Int.
Open Scope Int_scope.
when I use "int" in my definitions such as (... -> int -> ...), coq cannot find it. How can I properly load it along with the library's operations?
That library actually formalizes an abstract interface for integers, which can later be instantiated with a concrete implementation. In Coq, the implementation of integers from the standard library is called Z. There's an instance of the Int module type in terms of Z defined in that library, called Z_as_Int; to use the definitions available there with Z, you just need to refer to them prefixed by the module name, e.g. Z_as_Int._0. However, given that most theorems are proved directly over Z, without relying on the interface defined in Int, it is probably better to just use Z directly.

OCaml interface vs. signature?

I'm a bit confused about interfaces vs. signatures in OCaml.
From what I've read, interfaces (the .mli files) are what govern what values can be used/called by other programs. Signature files look like they're exactly the same, except that they are named, so that you can create different implementations of the same interface.
For example, if I want to create a module that is similar to a set in Java:
I'd have something like this:
the set.mli file:
type 'a set
val is_empty : 'a set -> bool
val ....
etc.
The signature file (setType.ml):
module type Set = sig
  type 'a set
  val is_empty : 'a set -> bool
  val ...
  etc.
end
and then an implementation would be another .ml file, such as SpecialSet.ml, which contains a struct that defines all the values and what they do:
module SpecialSet : Set = struct
  ...
end
I'm a bit confused as to what exactly the "signature" does, and what purpose it serves. Isn't it acting like a sort of interface? Why is both the .mli and .ml needed? The only difference in lines I see is that it names the module.
Am I misunderstanding this, or is there something else going on here?
OCaml's module system is tied into separate compilation (the pairs of .ml and .mli files). So each .ml file implicitly defines a module, each .mli file defines a signature, and if there is a corresponding .ml file that signature is applied to that module.
It is useful to have an explicit syntax to manipulate modules and interfaces to one's liking inside a .ml or .mli file. This allows signature constraints, as in S with type t = M.t.
Not least is the possibility it gives to define functors, modules parameterized by one or several modules: module F (X : S) = struct ... end. All these would be impossible if the only way to define a module or signature was as a file.
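Here is a small sketch of both features together (the SHOW signature and the module names are illustrative, not from the standard library): a signature constraint keeps a type equality visible through the signature, and a functor abstracts over any module with that signature.
module type SHOW = sig
  type t
  val show : t -> string
end

(* The constraint "with type t = int" keeps the equality t = int public *)
module IntShow : SHOW with type t = int = struct
  type t = int
  let show = string_of_int
end

(* A functor: a module parameterized by another module *)
module Pair (X : SHOW) = struct
  let show (a, b) = "(" ^ X.show a ^ ", " ^ X.show b ^ ")"
end

module IntPair = Pair (IntShow)
let () = print_endline (IntPair.show (1, 2))  (* prints (1, 2) *)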
I am not sure how that answers your question, but I think the answer is probably: yes, it is as simple as you think, and in your example the system of .mli files and explicit signatures inside files is redundant. Manipulating modules and signatures inside a file allows more complicated tricks in addition to these simple things.
This question is old but maybe this is useful to someone:
A file named a.ml appears as a module A in the program...
The interface of the module a.ml can be written in a file named a.mli
This is from the OCaml MOOC from Université Paris Diderot.

Closures in Scala vs Closures in Java

Some time ago Oracle decided that adding closures to Java 8 would be a good idea. I wonder how the design problems are solved there, in comparison to Scala, which has had closures since day one.
Citing the Open Issues from javac.info:
Can Method Handles be used for Function Types?
It isn't obvious how to make that work. One problem is that Method Handles reify type parameters, but in a way that interferes with function subtyping.
Can we get rid of the explicit declaration of "throws" type parameters?
The idea would be to use disjunctive type inference whenever the declared bound is a checked exception type. This is not strictly backward compatible, but it's unlikely to break real existing code. We probably can't get rid of "throws" in the type argument, however, due to syntactic ambiguity.
Disallow #Shared on old-style loop index variables
Handle interfaces like Comparator that define more than one method, all but one of which will be implemented by a method inherited from Object.
The definition of "interface with a single method" should count only methods that would not be implemented by a method in Object and should count multiple methods as one if implementing one of them would implement them all. Mainly, this requires a more precise specification of what it means for an interface to have only a single abstract method.
Specify mapping from function types to interfaces: names, parameters, etc.
We should fully specify the mapping from function types to system-generated interfaces precisely.
Type inference. The rules for type inference need to be augmented to accommodate the inference of exception type parameters. Similarly, the subtype relationships used by the closure conversion should be reflected as well.
Elided exception type parameters to help retrofit exception transparency.
Perhaps make elided exception type parameters mean the bound. This enables retrofitting existing generic interfaces that don't have a type parameter for the exception, such as java.util.concurrent.Callable, by adding a new generic exception parameter.
How are class literals for function types formed?
Is it #void().class? If so, how does it work if object types are erased? Is it #?(?).class?
The system class loader should dynamically generate function type interfaces.
The interfaces corresponding to function types should be generated on demand by the bootstrap class loader, so they can be shared among all user code. For the prototype, we may have javac generate these interfaces so prototype-generated code can run on stock (JDK5-6) VMs.
Must the evaluation of a lambda expression produce a fresh object each time?
Hopefully not. If a lambda captures no variables from an enclosing scope, for example, it can be allocated statically. Similarly, in other situations a lambda could be moved out of an inner loop if it doesn't capture any variables declared inside the loop. It would therefore be best if the specification promises nothing about the reference identity of the result of a lambda expression, so such optimizations can be done by the compiler.
As far as I understand, 2, 6 and 7 aren't a problem in Scala, because Scala doesn't use checked exceptions as a sort of "shadow type system" the way Java does.
What about the rest?
1) Can Method Handles be used for Function Types?
Scala targets JDK 5 and 6 which don't have method handles, so it hasn't tried to deal with that issue yet.
2) Can we get rid of the explicit declaration of "throws" type parameters?
Scala doesn't have checked exceptions.
3) Disallow #Shared on old-style loop index variables.
Scala doesn't have loop index variables. Still, the same idea can be expressed with a certain kind of while loop. Scala's semantics are pretty standard here: symbol bindings are captured, and if a symbol happens to map to a mutable reference cell, then on your own head be it.
4) Handle interfaces like Comparator that define more than one method, all but one of which come from Object
Scala users tend to use functions (or implicit functions) to coerce functions of the right type to an interface, e.g.
[implicit] def toComparator[A](f : (A, A) => Int) = new Comparator[A] {
  def compare(x : A, y : A) = f(x, y)
}
5) Specify mapping from function types to interfaces:
Scala's standard library includes FunctionN traits for 0 <= N <= 22, and the spec says that function literals create instances of those traits.
6) Type inference. The rules for type inference need to be augmented to accommodate the inference of exception type parameters.
Since Scala doesn't have checked exceptions, it can punt on this whole issue.
7) Elided exception type parameters to help retrofit exception transparency.
Same deal, no checked exceptions.
8) How are class literals for function types formed? Is it #void().class? If so, how does it work if object types are erased? Is it #?(?).class?
classOf[A => B] //or, equivalently,
classOf[Function1[A,B]]
Type erasure is type erasure. The above literals produce scala.Function1 regardless of the choice for A and B. If you prefer, you can write
classOf[ _ => _ ] // or
classOf[Function1[ _,_ ]]
9) The system class loader should dynamically generate function type interfaces.
Scala arbitrarily limits the number of arguments to be at most 22 so that it doesn't have to generate the FunctionN classes dynamically.
10) Must the evaluation of a lambda expression produce a fresh object each time?
The Scala specification does not say that it must. But as of 2.8.1 the compiler does not optimize the case where a lambda does not capture anything from its environment. I haven't tested with 2.9.0 yet.
I'll address only number 4 here.
One of the things that distinguishes Java "closures" from closures found in other languages is that they can be used in place of an interface that does not describe a function -- for example, Runnable. This is what is meant by SAM, Single Abstract Method.
Java does this because these interfaces abound in the Java library, and they abound there because Java was created without function types or closures. In their absence, any code that needed inversion of control had to resort to a SAM interface.
For example, Arrays.sort takes a Comparator object that will perform comparison between members of the array to be sorted. By contrast, Scala can sort a List[A] by receiving a function (A, A) => Int, which is easily passed through a closure. See note 1 at the end, however.
So, because Scala's library was created for a language with function types and closures, there is no need to support such a thing as SAM closures in Scala.
Of course, there's a question of Scala/Java interoperability -- while Scala's library might not need something like SAM, Java library does. There are two ways that can be solved. First, because Scala supports closures and function types, it is very easy to create helper methods. For example:
def runnable(f: () => Unit) = new Runnable {
  def run() = f()
}
runnable { () => println("Hello") } // creates a Runnable
Actually, this particular example can be made even shorter by use of Scala's by-name parameters, but that's beside the point. Anyway, this is something that, arguably, Java could have done instead of what it is going to do. Given the prevalence of SAM interfaces, it is not all that surprising.
The other way Scala handles this is through implicit conversions. By just prepending implicit to the runnable method above, one creates a method that gets automatically (note 2) applied whenever a Runnable is required but a function () => Unit is provided.
Implicits are rather unique to Scala, however, and still controversial to some extent.
Note 1: Actually, this particular example was chosen with some malice... Comparator has two abstract methods instead of one, which is the whole problem with it. Since one of its methods can be implemented in terms of the other, I think they'll just "subtract" defender methods from the abstract list.
And, on the Scala side, even though there's a sort method that uses (A, A) => Boolean rather than (A, A) => Int, the standard sorting method calls for an Ordering object, which is quite similar to Java's Comparator! In Scala's case, though, Ordering performs the role of a type class.
Note 2: Implicits are automatically applied, once they have been imported into scope.

Redundancy in OCaml type declaration (ml/mli)

I'm trying to understand a specific thing about ocaml modules and their compilation:
am I forced to redeclare types already declared in a .mli inside the specific .ml implementations?
Just to give an example:
(* foo.mli *)
type foobar = Bool of bool | Float of float | Int of int
(* foo.ml *)
type baz = foobar option
This, according to my normal way of thinking about interfaces/implementations, should be OK, but it says
Error: Unbound type constructor foobar
while trying to compile with
ocamlc -c foo.mli
ocamlc -c foo.ml
Of course the error disappears if I declare foobar inside foo.ml too, but that seems cumbersome, since I have to keep the two declarations in sync on every change.
Is there a way to avoid this redundancy, or am I forced to redeclare types every time?
Thanks in advance
OCaml tries to force you to separate the interface (.mli) from the implementation (.ml). Most of the time, this is a good thing: for values, you publish the type in the interface, and keep the code in the implementation. You could say that OCaml enforces a certain amount of abstraction (interfaces must be published; no code in interfaces).
For types, very often, the implementation is the same as the interface: both state that the type has a particular representation (and perhaps that the type declaration is generative). Here, there can be no abstraction, because the implementer doesn't have any information about the type that he doesn't want to publish. (The exception is basically when you declare an abstract type.)
One way to look at it is that the interface already contains enough information to write the implementation. Given the interface type foobar = Bool of bool | Float of float | Int of int, there is only one possible implementation. So don't write an implementation!
A common idiom is to have a module that is dedicated to type declarations, and make it have only a .mli. Since types don't depend on values, this module typically comes in very early in the dependency chain. Most compilation tools cope well with this; for example ocamldep will do the right thing. (This is one advantage over having only a .ml.)
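A minimal sketch of that idiom (the file and type names are illustrative):
(* types.mli -- this compilation unit consists of only an interface;
   there is no types.ml at all *)
type foobar = Bool of bool | Float of float | Int of int

(* user.ml -- any other module can then refer to the type and its
   constructors through the module name *)
let default : Types.foobar = Types.Int 0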
The limitation of this approach is when you also need a few module definitions here and there. (A typical example is defining a type foo, then an OrderedFoo : Map.OrderedType module with type t = foo, then a further type declaration involving 'a Map.Make(OrderedFoo).t; see the sketch below.) These can't be put in interface files. Sometimes it's acceptable to break down your definitions into several chunks: first a bunch of types (types1.mli), then a module (mod1.mli and mod1.ml), then more types (types2.mli). Other times (for example if the definitions are recursive) you have to live with either a .ml without a .mli, or duplication.
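A sketch of that typical example; these definitions need a .ml, since an interface-only unit cannot hold the module implementation:
type foo = A | B

(* A module implementation: not expressible in a .mli alone *)
module OrderedFoo = struct
  type t = foo
  let compare = compare   (* polymorphic structural comparison *)
end

module FooMap = Map.Make (OrderedFoo)

(* A further type declaration involving the functor application *)
type 'a assoc = 'a FooMap.t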
Yes, you are forced to redeclare types. The only ways around it that I know of are
Don't use a .mli file; just expose everything with no interface. Terrible idea.
Use a literate-programming tool or other preprocessor to avoid duplicating the interface declarations in the One True Source. For large projects, we do this in my group.
For small projects, we just duplicate type declarations. And grumble about it.
You can let ocamlc generate the mli file for you from the ml file:
ocamlc -i some.ml > some.mli
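For instance, given a hypothetical foo.ml like the one below, ocamlc -i prints the inferred interface, which you can then edit down if you want to hide anything:
(* foo.ml *)
type foobar = Bool of bool | Float of float | Int of int
let is_bool = function Bool _ -> true | _ -> false

(* "ocamlc -i foo.ml" prints:
type foobar = Bool of bool | Float of float | Int of int
val is_bool : foobar -> bool
*)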
In general, yes, you are required to duplicate the types.
You can work around this, however, with Camlp4 and the pa_macro syntax extension (findlib package: camlp4.macro). It defines, among other things, an INCLUDE construct. You can use it to factor the common type definitions out into a separate file and include that file in both the .ml and .mli files. I haven't seen this done in a deployed OCaml project, however, so I don't know whether it would qualify as recommended practice, but it is possible.
The literate programming solution, however, is cleaner IMO.
No: in the .mli file, just say "type foobar". This will work, but note that it makes the type abstract; the full definition then lives only in the .ml file, and client code can no longer see the constructors.
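A sketch of that approach (of_int is an illustrative accessor; with an abstract type you must export functions like it, since clients cannot use the constructors directly):
(* foo.mli *)
type foobar                    (* abstract: the representation is hidden *)
val of_int : int -> foobar

(* foo.ml -- the single, full definition *)
type foobar = Bool of bool | Float of float | Int of int
let of_int n = Int n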