The purpose of type classes in Haskell vs the purpose of traits in Scala

I am trying to understand how to think about type classes in Haskell versus traits in Scala.
My understanding is that type classes in Haskell matter primarily at compile time and no longer at run time, whereas traits in Scala matter both at compile time and at run time. I want to illustrate this idea with a simple example, and I want to know whether this viewpoint of mine is correct.
First, let us consider type classes in Haskell:
Let's take a simple example. The type class Eq.
For example, Int and Char are both instances of Eq. So it is possible to create a polymorphic List that is also an instance of Eq and that can contain either Ints or Chars, but not both in the same List.
My question is: is this the only reason why type classes exist in Haskell?
The same question in other words:
Type classes make it possible to create polymorphic types (in this example a polymorphic List) that support operations defined in a given type class (in this example the operation == defined in the type class Eq), but that is their only reason for existence, according to my understanding. Is this understanding of mine correct?
Is there any other reason why type classes exist in (standard) Haskell?
Is there any other use case in which type classes are useful in standard Haskell? I cannot seem to find any.
Since Haskell's lists are homogeneous, it is not possible to put a Char and an Int into the same list. So the usefulness of type classes, according to my understanding, is exhausted at compile time. Is this understanding of mine correct?
Now, let's consider the analogous List example in Scala:
Let's define a trait Eq with an equals method on it.
Now let's make Char and Int implement the trait Eq.
Now it is possible to create a List[Eq] in Scala that accepts both Chars and Ints into the same List (note that this - putting different types of elements into the same List - is not possible in Haskell, at least not in standard Haskell 98 without extensions)!
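For concreteness, here is a hedged sketch of that Scala side (MyEq, WInt and WChar are illustrative names, not standard library types):
trait MyEq {
  def eqTo(other: MyEq): Boolean
}
case class WInt(i: Int) extends MyEq {
  def eqTo(other: MyEq): Boolean = other match {
    case WInt(j) => i == j
    case _ => false
  }
}
case class WChar(c: Char) extends MyEq {
  def eqTo(other: MyEq): Boolean = other match {
    case WChar(d) => c == d
    case _ => false
  }
}

val mixed: List[MyEq] = List(WInt(1), WChar('a'))  // Ints and Chars in one list
mixed.head.eqTo(WInt(1))                           // true, dispatched on the runtime type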
In the case of the Haskell List, the existence of type classes is important/useful only for type checking at compile time, according to my understanding.
In contrast, the existence of traits in Scala is important both at compile time for type checking and at run time for polymorphic dispatch on the actual runtime type of the object in the List when comparing two Lists for equality.
So, based on this simple example, I came to the conclusion that in Haskell type classes are primarily important/used at compile time; in contrast, Scala's traits are important/used both at compile time and at run time.
Is this conclusion of mine correct?
If not, why not?
EDIT:
Scala code in response to n.m.'s comments:
case class MyInt(i: Int) {
  // pattern match instead of an unchecked cast: mixed comparisons return false
  override def equals(b: Any): Boolean = b match {
    case MyInt(j) => i == j
    case _ => false
  }
}
case class MyChar(c: Char) {
  override def equals(a: Any): Boolean = a match {
    case MyChar(d) => c == d
    case _ => false
  }
}
object Test {
  def main(args: Array[String]): Unit = {
    val l1 = List(MyInt(1), MyInt(2), MyChar('a'), MyChar('b'))
    val l2 = List(MyInt(1), MyInt(2), MyChar('a'), MyChar('b'))
    val l3 = List(MyInt(1), MyInt(2), MyChar('a'), MyChar('c'))
    println(l1 == l2)
    println(l1 == l3)
  }
}
This prints:
true
false

I will comment on the Haskell side.
Type classes bring restricted polymorphism in Haskell, wherein a type variable a can still be quantified universally, but ranges over only a subset of all the types -- namely, the types for which an instance of the type class is available.
Why is restricted polymorphism useful? A nice example would be the equality operator:
(==) :: ?????
What should its type be? Intuitively, it takes two values of the same type and returns a boolean, so:
(==) :: a -> a -> Bool -- (1)
But the typing above is not entirely honest, since it allows one to apply == to any type a, including function types!
(\(x :: Integer) -> x + x) == (\(x :: Integer) -> 2 * x)
The above would pass type checking if (1) were the typing for (==), since both arguments have the same type a = (Integer -> Integer). However, we cannot effectively compare two functions: well-known computability results tell us that there is no algorithm to do that in general.
So, what could we do to implement (==)?
Option 1: at run time, if a function (or any other value involving functions -- such as a list of functions) is found to be passed to (==), raise an exception. This is what e.g. ML does. Typed programs can now "go wrong", despite checking types at compile time.
Option 2: introduce a new kind of polymorphism, restricting a to the function-free types. For instance, we could have (==) :: forall-non-fun a. a -> a -> Bool so that comparing functions yields a type error. Haskell exploits type classes to obtain exactly that.
So, Haskell type classes allow one to type (==) "honestly", ensuring no error at run time, and without being overly restrictive. Of course, the power of type classes goes far beyond that but, at least in my own view, their primary purpose is to allow restricted polymorphism, in a very general and flexible way. Indeed, with type classes the programmer can define their own restrictions on universal type quantifications.
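Since the question compares the two languages, here is a minimal sketch of this restricted polymorphism encoded in Scala with implicits (Eq and eqv are illustrative names, not standard library ones):
trait Eq[A] {
  def eqv(x: A, y: A): Boolean
}

implicit val intEq: Eq[Int] = new Eq[Int] {
  def eqv(x: Int, y: Int): Boolean = x == y
}
implicit val charEq: Eq[Char] = new Eq[Char] {
  def eqv(x: Char, y: Char): Boolean = x == y
}

// Callable only at types with an Eq instance in scope,
// mirroring (==) :: Eq a => a -> a -> Bool
def eqv[A](x: A, y: A)(implicit ev: Eq[A]): Boolean = ev.eqv(x, y)

eqv(1, 2)                            // compiles: an Eq[Int] instance exists
// eqv((x: Int) => x, (x: Int) => x) // rejected: no Eq[Int => Int] instance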


Disfunctionality of type parameter

I’m new to using Scala and am trying to see if a list contains any objects of a certain type.
When I make a method to do this, I get the following results:
var l = List("Some string", 3)
def containsType[T] = l.exists(_.isInstanceOf[T])
containsType[Boolean] // val res0: Boolean = true
l.exists(_.isInstanceOf[Boolean]) // val res1: Boolean = false
Could someone please help me understand why my method doesn’t return the same results as the expression on the last line?
Thank you,
Johan
Alin's answer details perfectly why the generic isn't available at runtime. You can get a bit closer to what you want with the magic of ClassTag, but you still have to be conscious of some issues with Java generics.
import scala.reflect.ClassTag
var l = List("Some string", 3)
def containsType[T](implicit cls: ClassTag[T]): Boolean = {
  l.exists(cls.runtimeClass.isInstance(_))
}
Now, whenever you call containsType, a hidden extra argument of type ClassTag[T] gets passed in. So when you write, for instance, println(containsType[String]), this gets compiled to
scala.this.Predef.println($anon.this.containsType[String](ClassTag.apply[String](classOf[java.lang.String])))
An extra argument gets passed to containsType, namely ClassTag.apply[String](classOf[java.lang.String]). That's a really long-winded way of explicitly passing a Class<String>, which is what you'd have to do manually in Java. And java.lang.Class has an isInstance function.
Now, this will mostly work, but there are still major caveats. Generic arguments are completely erased at runtime, so this won't help you distinguish between an Option[Int] and an Option[String] in your list, for instance. As far as the JVM is concerned, they're both Option.
Second, Java has an unfortunate history with primitive types, so containsType[Int] will actually be false in your case, despite the fact that the 3 in your list is actually an Int. This is because, in Java, generics can only be class types, not primitives, so a generic List can never contain int (note the lowercase 'i', this is considered a fundamentally different thing in Java than a class).
Scala paints over a lot of these low-level details, but the cracks show through in situations like this. Scala sees that you're constructing a list of Strings and Ints, so it wants to construct a list of the common supertype of the two, which is Any (strings and ints have no common supertype more specific than Any). At runtime, Scala Int can translate to either int (the primitive) or Integer (the object). Scala will favor the former for efficiency, but when storing in generic containers, it can't use a primitive type. So while Scala thinks that your list l contains a String and an Int, Java thinks that it contains a String and a java.lang.Integer. And to make things even crazier, both int and java.lang.Integer have distinct Class instances.
So summon[ClassTag[Int]] in Scala is java.lang.Integer.TYPE, which is a Class<Integer> instance representing the primitive type int (yes, the non-class type int has a Class instance representing it). While summon[ClassTag[java.lang.Integer]] is java.lang.Integer::class, a distinct Class<Integer> representing the non-primitive type Integer. And at runtime, your list contains the latter.
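You can observe the two distinct Class instances directly with a quick check:
println(classOf[Int])                               // prints: int
println(classOf[java.lang.Integer])                 // prints: class java.lang.Integer
println(classOf[Int] == classOf[java.lang.Integer]) // prints: false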
In summary, generics in Java are a hot mess. Scala does its best to work with what it has, but when you start playing with reflection (which ClassTag does), you have to start thinking about these problems.
println(containsType[Boolean]) // false
println(containsType[Double]) // false
println(containsType[Int]) // false (list can't contain primitive type)
println(containsType[Integer]) // true (3 is converted to an Integer)
println(containsType[String]) // true (class type so it works the way you expect)
println(containsType[Unit]) // false
println(containsType[Long]) // false
Scala uses the type erasure model of generics. This means that no information about type arguments is kept at runtime, so there's no way to determine at runtime the specific type arguments of the given List object. All the system can do is determine that a value is a List of some arbitrary type parameters.
You can verify this behavior by trying any concrete List type:
val l = List("Some string", 3)
println(l.isInstanceOf[List[Int]]) // true
println(l.isInstanceOf[List[String]]) // true
println(l.isInstanceOf[List[Boolean]]) // also true
println(l.isInstanceOf[List[Unit]]) // also true
Now regarding your example:
def containsType[T] = l.exists(_.isInstanceOf[T])
println(containsType[Int]) // true
println(containsType[Boolean]) // also true
println(containsType[Unit]) // also true
println(containsType[Double]) // also true
isInstanceOf is a synthetic function (a function generated by the Scala compiler at compile time, usually to work around underlying JVM limitations) and does not work the way you would expect with generic type arguments like T, because after compilation this would normally be equivalent in Java to instanceof T, which, by the way, is illegal in Java.
Why is it illegal? Because of type erasure. Type erasure means all your generic code (generic classes, generic methods, etc.) is converted to non-generic code. This usually means three things:
all type parameters in generic types are replaced with their bounds or Object if they are unbounded;
wherever necessary the compiler inserts type casts to preserve type-safety;
bridge methods are generated if needed to preserve polymorphism of all generic methods.
However, in the case of instanceof T, the JVM cannot differentiate between types of T at execution time, so this makes no sense. The type used with instanceof has to be reifiable, meaning that all information about the type needs to be available at runtime. This property does not apply to generic types.
So if Java forbids this because it can't work, why does Scala even allow it? The Scala compiler is indeed more permissive here, but for one good reason: it treats it differently. Like the Java compiler, the Scala compiler also erases all generic code at compile time, but since isInstanceOf is a synthetic function in Scala, calls to it using generic type arguments such as isInstanceOf[T] are replaced during compilation with instanceof Object.
Here's a sample of your code decompiled:
public <T> boolean containsType() {
    return this.l().exists(x$1 -> BoxesRunTime.boxToBoolean(x$1 instanceof Object));
}
Main$.l = (List<Object>)package$.MODULE$.List().apply((Seq)ScalaRunTime$.MODULE$.wrapIntArray(new int[] { 1, 2, 3 }));
Predef$.MODULE$.println((Object)BoxesRunTime.boxToBoolean(this.containsType()));
Predef$.MODULE$.println((Object)BoxesRunTime.boxToBoolean(this.containsType()));
This is why no matter what type you give to the polymorphic function containsType, it will always result in true. Basically, containsType[T] is equivalent to containsType[_] from Scala's perspective - which actually makes sense, because a generic type T, without any upper bounds, is just a placeholder for type Any in Scala. Because Scala cannot have raw types, you cannot, for example, create a List without providing a type parameter, so every List must be a List of "something", and that "something" is at least an Any, if not given a more specific type.
Therefore, isInstanceOf can only be called with specific (concrete) type arguments like Boolean, Double, String, etc. That is why this works as expected:
println(l.exists(_.isInstanceOf[Boolean])) // false
We said that Scala is more permissive, but that does not mean you get away without a warning.
To alert you of the possibly non-intuitive runtime behavior, the Scala compiler usually emits unchecked warnings. For example, if you had run your code in the Scala interpreter (or compiled it using scalac), you would have received an unchecked warning.

Can type constructors be considered as types in functional programming languages?

I am approaching the Haskell programming language, coming from a background as a Scala and Java developer.
I was reading the theory behind type constructors, but I cannot understand whether they can be considered types. I mean, in Scala, you use the keywords class or trait to define type constructors. Think about List[T], or Option[T]. Similarly, in Haskell you use the keyword data, which is also used for defining new types.
So, are type constructors also types?
Let's look at an analogy: functions. In some branches of mathematics, functions are called value constructors, because that's what they do: you put one or more values in, and they construct a new value out of those.
Type constructors are exactly the same thing, except on the type level: you put one or more types in, and they construct a new type out of those. They are, in some sense, functions on the type level.
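In Scala, for instance, a parameterized type alias makes this "function on types" reading visible (Pair is an illustrative name, a minimal sketch):
type Pair[A] = (A, A)     // takes the type A "in" and constructs the type (A, A) "out"
val p: Pair[Int] = (1, 2) // Pair applied to Int yields the type (Int, Int)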
Now, to our analogy: what is the analog of the question you are asking? Well, it is this: "Can value constructors (i.e. functions) be considered as values in functional programming languages?"
And the answer is: it depends on the programming language. Now, for functional programming languages, the answer is "Yes" for almost all (if not all) of them. It depends on your definition of what a "functional programming language" is. Some people define a functional programming language as a programming language which has functions as values, so the answer will be trivially "Yes" by definition. But, some people define a functional programming language as a programming language which does not allow side-effects, and in such a language, it is not necessarily true that functions are values.
The most famous example may be John Backus' FP, from his seminal paper Can Programming Be Liberated from the von Neumann Style? – a functional style and its algebra of programs. In FP, there is a hierarchy of "function-like" things. Functions can only deal with values, and functions themselves are not values. However, there is a concept of "functionals" which are "function constructors", i.e. they can take functions (and also values) as input and/or produce functions as output, but they cannot take functionals as input and/or produce them as output.
So, FP is arguably a functional programming language, but it does not have functions as values.
Note: functions as values is also called "first-class functions" and functions that take functions as input or return them as output are called "higher-order functions".
If we look at some types:
1 :: Int
[1] :: List Int
add :: Int → Int
map :: (a → b, List a) → List b
You can see that we can easily say: any value whose type has an arrow in it is a function. Any value whose type has more than one arrow in it is a higher-order function.
Again, the same applies to type constructors, since they are really the same thing except on the type level. In some languages, type constructors can be types, in some they can't. For example, in Java and C♯, type constructors are not types. You cannot have a List<List> in C♯, for example. You can write down the type List<List> in Java, but that is misleading, since the two Lists mean different things: the first List is the type constructor, the second List is the raw type, so this is in fact not using a type constructor as a type.
What is the equivalent to our types example above?
Int :: Type
List :: Type ⇒ Type
→ :: (Type, Type) ⇒ Type
Functor :: (Type ⇒ Type) ⇒ Type
(Note how we always have Type. Indeed, we are only dealing with types, so we normally don't write Type but instead simply write *, pronounced "Type"):
Int :: *
List :: * ⇒ *
→ :: (*, *) ⇒ *
Functor :: (* ⇒ *) ⇒ *
So, Int is a proper type, List is a type constructor that takes one type and produces a type, → (the function type constructor) takes two types and returns a type (assuming only unary functions, e.g. using currying or passing tuples), and Functor is a type constructor, which itself takes a type constructor and returns a type.
These "type-types" are called kinds. As with functions, anything with an arrow is a type constructor, and anything with more than one arrow is a higher-kinded type constructor.
And like with functions, some languages allow higher-kinded type constructors and some don't. The two languages you mention in your question, Scala and Haskell do, but as mentioned above, Java and C♯ don't.
However, there is a complication when we look at your question:
So, are type constructors also types?
Not really, no. At least not in any language I know about. See, while you can have higher-kinded type constructors that take type constructors as input and/or return them as output, you cannot have an expression or a value or a variable or a parameter which has a type constructor as its type. You cannot have a function that takes a List or returns a List. You cannot have a variable of type Monad. But, you can have a variable of type Int.
So, clearly, there is a difference between types and type constructors.
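To make that difference concrete in Scala, here is a minimal sketch (Functor is hand-rolled here, not a library type):
val xs: List[Int] = List(1, 2, 3) // List[Int] is a proper type: it classifies values
// val ys: List = ???            // rejected: List alone is a type constructor, not a type

trait Functor[F[_]] {             // F[_] ranges over type constructors such as List
  def map[A, B](fa: F[A])(f: A => B): F[B]
}

val listFunctor: Functor[List] = new Functor[List] {
  def map[A, B](fa: List[A])(f: A => B): List[B] = fa.map(f)
}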
Well, types and type constructors have a calculus of their own, and they each have kinds. If you use :k (Maybe Int) in ghci, for example, you'll get *. This is a proper type, and it (usually) has inhabitants, in this case Nothing, Just 42, etc. * now has a more descriptive alias, Type.
Now you can look at the kind of the constructor Maybe itself: :k Maybe will give you * -> *. With the alias, this is Type -> Type, which is what you would expect: it takes a Type and constructs a Type. Now, if you see types as sets of values, one good question is: what set of values does Maybe have? Well, none, because it is not really a type. You might attempt something like Just, but that has type a -> Maybe a with kind Type, rather than Maybe with kind Type -> Type.
At least in Haskell, there is a hierarchy that can roughly be described as follows.
Terms are things that exist at run-time, values like 1, 'a', and (+), for example.
Each term has a type, like Int or Char or Int -> Int -> Int.
Each type has a kind, and all types have the same kind, namely *.
A type constructor like [], though, has kind * -> *, so it is not a type. Instead, it is a mapping from a type to a type.
There are other kinds as well, including (in addition to * and * -> *, with an example of each):
* -> * -> * (Either)
(* -> *) -> * -> * (ReaderT, a monad transformer)
Constraint (Num Int)
* -> Constraint (Num; this is the kind of a type class)
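For Scala readers, a rough, hedged sketch of analogues of these kinds (ReaderTLike is an illustrative name, and since Scala has no Constraint kind, a context bound stands in for it):
type E = Either[String, Int]                 // Either has kind * -> * -> *; E has kind *
trait ReaderTLike[F[_], A]                   // kind (* -> *) -> * -> *, like ReaderT
def total[A: Numeric](xs: List[A]): A = xs.sum // the context bound A: Numeric plays the
                                               // role the Constraint-kinded Num plays in Haskell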

Objects with type members: what is Scala's object vs module system? (Trying to understand a 2014 Odersky paper on path dependent types)

I am reading Foundations of path dependent types. On the first page, on the right column it is written:
Our motivation is twofold. First, we believe objects with type members are not fully understood. It is not clear what causes the complexity, which pieces of complexity are essential to the concept or accidental to a language implementation or calculus that tries to achieve something else. Second, we believe objects with type members are really useful. They can encode a variety of other, usually separate type system features. Most importantly, they unify concepts from object and module systems, by adding a notion of nominality to otherwise structural systems.
Could someone clarify/explain what "object vs module" system means?
Or in general, what does "they (objects with type members) unify concepts from object and module systems, by adding a notion of nominality to otherwise structural systems" mean?
What concepts? From where?
Nominality in the object names/values?
Structure in the types? Or the other way around?
Where do type members belong here? To the module system? The object system? How? Why?
EDIT:
How does this unification relate to path dependent types? It seems to me that path dependent types are what allow this unification (objects with type members) to happen. Is that so?
If yes, how?
Could you give a simple example of what that means? (I.e., path dependent types allowing the unification of module and object systems, versus why the unification would not be possible if we did not have path dependent types?)
EDIT 2:
From the paper:
To make any use of type members, programmers need a way to refer to them. This means that types must be able to refer to objects, i.e. contain terms that serve as static approximation of a set of dynamic objects. In other words, some level of dependent types is required; the usual notion is that of path-dependent types.
So my understanding so far (with the help of Jesper's answer):
This paragraph partially answers some of the questions above. The main point seems to be that to make use of objects with type members, path-dependent types are needed: objects are dynamic (run-time) entities while types are static (defined at compile time), so a type member reached through an arbitrary object would not have a clear meaning at compile time.
Path dependent types help here by pinning down, at compile time, the path of objects leading to a type member. Even though the path goes via objects, if those objects are fixed at compile time, then their type members have a clear meaning at compile time too.
I'm not sure I fully understand what your question is, but I'll take a stab at it. :) I think the authors mainly are referring to ML style modules where a signature corresponds to a Scala trait and a structure corresponds to a Scala object. Scala unifies the concepts of record values, objects and modules which in most other languages (like ML, Rust etc.) are separate concepts. The main benefit is that in Scala modules/objects can be passed around as normal function arguments (while in ML you have to use special functors for this).
In ML a module is checked for compatibility with a signature (trait in Scala) based on its structure (similar to structural typing in Scala), but in Scala the module must implement the trait by name (nominal typing). So even if two modules/objects have the same structure in Scala they might not be compatible with each other depending on their super type hierarchy.
A really powerful feature regarding type members in Scala is that you can use a trait even if you don't know the exact type of its type members as long as you do it in a type safe way (I think this is also possible in ML modules), for example:
trait A {
  type X
  def getX: X
  def setX(x: X): Unit
}
def foo(a: A) = a.setX(a.getX)
In foo the Scala compiler doesn't know the exact type of a.X, but a value of that type can still be used in a way the compiler knows is safe. This is not possible in Rust, for example.
The next version of the Scala compiler, Dotty, will be based on the theory described in the paper you reference. This unification of modules and objects combined with subtyping, traits and type members is one reason that Scala is unique and very powerful.
EDIT: To expand a bit why path dependent types increases the flexibility of Scala's module/object system, let's expand the example above with:
def bar(a: A, b: A) = a.setX(b.getX)
This will result in a compilation error:
error: type mismatch;
 found   : b.X
 required: a.X
def bar(a: A, b: A) = a.setX(b.getX)
                             ^
and correctly so, because a.X and b.X could resolve to different types. You can fix it by using a path dependent type:
def bar(a: A)(b: A { type X = a.X }) = a.setX(b.getX)
Or add a type parameter:
def bar[T](a: A { type X = T }, b: A { type X = T }) = a.setX(b.getX)
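For concreteness, a hedged usage sketch of the path-dependent version (IntA is an illustrative implementation of A with X fixed to Int):
class IntA extends A {
  type X = Int
  private var v: Int = 0
  def getX: Int = v
  def setX(x: Int): Unit = { v = x }
}

val a1 = new IntA
val a2 = new IntA
bar(a1)(a2) // compiles: a1.X and a2.X are both known to be Int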
So, path dependent types eliminate some need for type parameters, and also allow us to express existential types efficiently (corresponding to A[_] or A[T] forSome { type T } if A had a type parameter instead of a type member).

Any reason why scala does not explicitly support dependent types?

There are path dependent types, and I think it is possible to express almost all the features of such languages as Epigram or Agda in Scala, but I'm wondering why Scala does not support this more explicitly, as it does very nicely in other areas (say, DSLs)?
Anything I'm missing like "it is not necessary" ?
Syntactic convenience aside, the combination of singleton types, path-dependent types and implicit values means that Scala has surprisingly good support for dependent typing, as I've tried to demonstrate in shapeless.
Scala's intrinsic support for dependent types is via path-dependent types. These allow a type to depend on a selector path through an object- (ie. value-) graph like so,
scala> class Foo { class Bar }
defined class Foo
scala> val foo1 = new Foo
foo1: Foo = Foo@24bc0658
scala> val foo2 = new Foo
foo2: Foo = Foo@6f7f757
scala> implicitly[foo1.Bar =:= foo1.Bar] // OK: equal types
res0: =:=[foo1.Bar,foo1.Bar] = <function1>
scala> implicitly[foo1.Bar =:= foo2.Bar] // Not OK: unequal types
<console>:11: error: Cannot prove that foo1.Bar =:= foo2.Bar.
implicitly[foo1.Bar =:= foo2.Bar]
In my view, the above should be enough to answer the question "Is Scala a dependently typed language?" in the positive: it's clear that here we have types which are distinguished by the values which are their prefixes.
However, it's often objected that Scala isn't a "fully" dependently typed language because it doesn't have dependent sum and product types as found in Agda or Coq or Idris as intrinsics. I think this reflects a fixation on form over fundamentals to some extent; nevertheless, I'll try and show that Scala is a lot closer to these other languages than is typically acknowledged.
Despite the terminology, dependent sum types (also known as Sigma types) are simply a pair of values where the type of the second value is dependent on the first value. This is directly representable in Scala,
scala> trait Sigma {
| val foo: Foo
| val bar: foo.Bar
| }
defined trait Sigma
scala> val sigma = new Sigma {
| val foo = foo1
| val bar = new foo.Bar
| }
sigma: java.lang.Object with Sigma{val bar: this.foo.Bar} = $anon$1@e3fabd8
and in fact, this is a crucial part of the encoding of dependent method types which is needed to escape from the 'Bakery of Doom' in Scala prior to 2.10 (or earlier via the experimental -Ydependent-method-types Scala compiler option).
Dependent product types (aka Pi types) are essentially functions from values to types. They are key to the representation of statically sized vectors and the other poster children for dependently typed programming languages. We can encode Pi types in Scala using a combination of path dependent types, singleton types and implicit parameters. First we define a trait which is going to represent a function from a value of type T to a type U,
scala> trait Pi[T] { type U }
defined trait Pi
We can then define a polymorphic method which uses this type,
scala> def depList[T](t: T)(implicit pi: Pi[T]): List[pi.U] = Nil
depList: [T](t: T)(implicit pi: Pi[T])List[pi.U]
(note the use of the path-dependent type pi.U in the result type List[pi.U]). Given a value of type T, this function will return a(n empty) list of values of the type corresponding to that particular T value.
Now let's define some suitable values and implicit witnesses for the functional relationships we want to hold,
scala> object Foo
defined module Foo
scala> object Bar
defined module Bar
scala> implicit val fooInt = new Pi[Foo.type] { type U = Int }
fooInt: java.lang.Object with Pi[Foo.type]{type U = Int} = $anon$1@60681a11
scala> implicit val barString = new Pi[Bar.type] { type U = String }
barString: java.lang.Object with Pi[Bar.type]{type U = String} = $anon$1@187602ae
And now here is our Pi-type-using function in action,
scala> depList(Foo)
res2: List[fooInt.U] = List()
scala> depList(Bar)
res3: List[barString.U] = List()
scala> implicitly[res2.type <:< List[Int]]
res4: <:<[res2.type,List[Int]] = <function1>
scala> implicitly[res2.type <:< List[String]]
<console>:19: error: Cannot prove that res2.type <:< List[String].
implicitly[res2.type <:< List[String]]
^
scala> implicitly[res3.type <:< List[String]]
res6: <:<[res3.type,List[String]] = <function1>
scala> implicitly[res3.type <:< List[Int]]
<console>:19: error: Cannot prove that res3.type <:< List[Int].
implicitly[res3.type <:< List[Int]]
(note that here we use Scala's <:< subtype-witnessing operator rather than =:= because res2.type and res3.type are singleton types and hence more precise than the types we are verifying on the RHS).
In practice, however, in Scala we wouldn't start by encoding Sigma and Pi types and then proceeding from there as we would in Agda or Idris. Instead we would use path-dependent types, singleton types and implicits directly. You can find numerous examples of how this plays out in shapeless: sized types, extensible records, comprehensive HLists, scrap your boilerplate, generic Zippers etc. etc.
The only remaining objection I can see is that in the above encoding of Pi types we require the singleton types of the depended-on values to be expressible. Unfortunately, in Scala this is only possible for values of reference types and not for values of non-reference types (notably Int). This is a shame, but not an intrinsic difficulty: Scala's type checker represents the singleton types of non-reference values internally, and there have been a couple of experiments in making them directly expressible. In practice we can work around the problem with a fairly standard type-level encoding of the natural numbers.
In any case, I don't think this slight domain restriction can be used as an objection to Scala's status as a dependently typed language. If it is, then the same could be said for Dependent ML (which only allows dependencies on natural number values) which would be a bizarre conclusion.
I would assume it is because (as I know from experience, having used dependent types in the Coq proof assistant, which fully supports them but still not in a very convenient way) dependent types are a very advanced programming language feature which is really hard to get right - and can cause an exponential blowup in complexity in practice. They're still a topic of computer science research.
I believe that Scala's path-dependent types can only represent Σ-types, but not Π-types. This:
trait Pi[T] { type U }
is not exactly a Π-type. By definition, a Π-type, or dependent product, is a function whose result type depends on the argument value, representing the universal quantifier, i.e. ∀x: A, B(x). In the case above, however, it depends only on the type T, not on some value of this type. The Pi trait itself is a Σ-type, an existential quantifier, i.e. ∃x: A, B(x). The object's self-reference in this case acts as the quantified variable. When passed in as an implicit parameter, however, it reduces to an ordinary type function, since it is resolved type-wise. An encoding of dependent product in Scala may look like the following:
trait Sigma[T] {
  val x: T
  type U // can depend on x
}
// (t: T) => (∃ mapping(x, U), x == t) => (u: U); sadly, refinement won't compile
def pi[T](t: T)(implicit mapping: Sigma[T] { val x = t }): mapping.U
The missing piece here is the ability to statically constrain the field x to an expected value t, effectively forming an equation representing a property of all values inhabiting the type T. Together with our Σ-types, used to express the existence of an object with a given property, a logic is formed in which our equation is a theorem to be proven.
On a side note, in a real case the theorem may be highly nontrivial, up to the point where it cannot be automatically derived from code or solved without a significant amount of effort. One can even formulate the Riemann Hypothesis this way, only to find the signature impossible to implement without actually proving it, looping forever, or throwing an exception.
The question was about using the dependently typed feature more directly and, in my opinion, there would be a benefit in having a more direct dependent typing approach than what Scala offers. Current answers try to argue the question on the type-theoretical level. I want to put a more pragmatic spin on it. This may explain why people are divided on the level of support of dependent types in the Scala language. We may have somewhat different definitions in mind (not to say one is right and one is wrong).
This is not an attempt to answer the question of how easy it would be to turn Scala into something like Idris (I imagine very hard) or to write a library offering more direct support for Idris-like capabilities (like singletons tries to be in Haskell). Instead, I want to emphasize the pragmatic difference between Scala and a language like Idris.
What are the code bits for value and type level expressions? Idris uses the same code for both; Scala uses very different code.
Scala (similarly to Haskell) may be able to encode lots of type level calculations. This is shown by libraries like shapeless. These libraries do it using some really impressive and clever tricks. However, their type level code is (currently) quite different from value level expressions (I find that gap to be somewhat narrower in Haskell). Idris allows value level expressions to be used on the type level as is.
The obvious benefit is code reuse (you do not need to code type level expressions separately from the value level if you need them in both places). It should be way easier to write value level code. It should be easier not to have to deal with hacks like singletons (not to mention the performance cost). You do not need to learn two things; you learn one thing.
On a pragmatic level, we end up needing fewer concepts. Type synonyms, type families, functions, ... how about just functions? In my opinion, these unifying benefits go much deeper and are more than syntactic convenience.
Consider verified code. See:
https://github.com/idris-lang/Idris-dev/blob/v1.3.0/libs/contrib/Interfaces/Verified.idr
The type checker verifies proofs of monadic/functor/applicative laws, and the proofs are about the actual implementations of monad/functor/applicative, not some encoded type level equivalent that may or may not be the same.
The big question is: what are we proving?
The same can be done using clever encoding tricks (see the following for a Haskell version; I have not seen one for Scala):
https://blog.jle.im/entry/verified-instances-in-haskell.html
https://github.com/rpeszek/IdrisTddNotes/wiki/Play_FunctorLaws
except the types are so complicated that it is hard to see the laws, the value level expressions are converted (automatically, but still) to type level things, and you need to trust that conversion as well. There is room for error in all of this, which somewhat defeats the purpose of the compiler acting as a proof assistant.
(EDITED 2018.8.10) Talking about proof assistance, here is another big difference between Idris and Scala. There is nothing in Scala (or Haskell) that can prevent you from writing diverging proofs:
case class Void(underlying: Nothing) extends AnyVal //should be uninhabited
def impossible() : Void = impossible()
while Idris has the total keyword, which prevents code like this from compiling.
A Scala library that tries to unify value and type level code (like Haskell's singletons) would be an interesting test of Scala's support for dependent types. Can such a library be done much better in Scala because of path-dependent types?
I am too new to Scala to answer that question myself.

Closures in Scala vs Closures in Java

Some time ago Oracle decided that adding closures to Java 8 would be a good idea. I wonder how the design problems are solved there in comparison to Scala, which has had closures since day one.
Citing the Open Issues from javac.info:
Can Method Handles be used for Function Types?
It isn't obvious how to make that work. One problem is that Method Handles reify type parameters, but in a way that interferes with function subtyping.
Can we get rid of the explicit declaration of "throws" type parameters?
The idea would be to use disjunctive type inference whenever the declared bound is a checked exception type. This is not strictly backward compatible, but it's unlikely to break real existing code. We probably can't get rid of "throws" in the type argument, however, due to syntactic ambiguity.
Disallow #Shared on old-style loop index variables
Handle interfaces like Comparator that define more than one method, all but one of which will be implemented by a method inherited from Object.
The definition of "interface with a single method" should count only methods that would not be implemented by a method in Object and should count multiple methods as one if implementing one of them would implement them all. Mainly, this requires a more precise specification of what it means for an interface to have only a single abstract method.
Specify mapping from function types to interfaces: names, parameters, etc.
We should fully specify the mapping from function types to system-generated interfaces precisely.
Type inference. The rules for type inference need to be augmented to accommodate the inference of exception type parameters. Similarly, the subtype relationships used by the closure conversion should be reflected as well.
Elided exception type parameters to help retrofit exception transparency.
Perhaps make elided exception type parameters mean the bound. This enables retrofitting existing generic interfaces that don't have a type parameter for the exception, such as java.util.concurrent.Callable, by adding a new generic exception parameter.
How are class literals for function types formed?
Is it #void().class ? If so, how does it work if object types are erased? Is it #?(?).class ?
The system class loader should dynamically generate function type interfaces.
The interfaces corresponding to function types should be generated on demand by the bootstrap class loader, so they can be shared among all user code. For the prototype, we may have javac generate these interfaces so prototype-generated code can run on stock (JDK5-6) VMs.
Must the evaluation of a lambda expression produce a fresh object each time?
Hopefully not. If a lambda captures no variables from an enclosing scope, for example, it can be allocated statically. Similarly, in other situations a lambda could be moved out of an inner loop if it doesn't capture any variables declared inside the loop. It would therefore be best if the specification promises nothing about the reference identity of the result of a lambda expression, so such optimizations can be done by the compiler.
As far as I understand, 2, 6 and 7 aren't a problem in Scala, because Scala doesn't use checked exceptions as some sort of "shadow type system" like Java does.
What about the rest?
1) Can Method Handles be used for Function Types?
Scala targets JDK 5 and 6, which don't have method handles, so it hasn't tried to deal with that issue yet.
2) Can we get rid of the explicit declaration of "throws" type parameters?
Scala doesn't have checked exceptions.
3) Disallow #Shared on old-style loop index variables.
Scala doesn't have loop index variables. Still, the same idea can be expressed with a certain kind of while loop. Scala's semantics are pretty standard here. Symbol bindings are captured, and if the symbol happens to map to a mutable reference cell then on your own head be it.
4) Handle interfaces like Comparator that define more than one method all but one of which come from Object
Scala users tend to use functions (or implicit functions) to coerce functions of the right type to an interface. e.g.
[implicit] def toComparator[A](f: (A, A) => Int) = new Comparator[A] {
  def compare(x: A, y: A) = f(x, y)
}
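For example, a hedged usage sketch with java.util.Arrays.sort, assuming the non-implicit variant above is in scope:
import java.util.{Arrays, Comparator}

val arr: Array[Integer] = Array[Integer](3, 2, 1)
Arrays.sort(arr, toComparator[Integer]((a, b) => a.compareTo(b)))
// arr is now Array(1, 2, 3)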
5) Specify mapping from function types to interfaces:
Scala's standard library includes FunctionN traits for 0 <= N <= 22, and the spec says that function literals create instances of those traits.
6) Type inference. The rules for type inference need to be augmented to accommodate the inference of exception type parameters.
Since Scala doesn't have checked exceptions it can punt on this whole issue
7) Elided exception type parameters to help retrofit exception transparency.
Same deal, no checked exceptions.
8) How are class literals for function types formed? Is it #void().class ? If so, how does it work if object types are erased? Is it #?(?).class ?
classOf[A => B] //or, equivalently,
classOf[Function1[A,B]]
Type erasure is type erasure. The above literals produce scala.Function1 regardless of the choice for A and B. If you prefer, you can write
classOf[ _ => _ ] // or
classOf[Function1[ _,_ ]]
9) The system class loader should dynamically generate function type interfaces.
Scala arbitrarily limits the number of arguments to be at most 22 so that it doesn't have to generate the FunctionN classes dynamically.
10) Must the evaluation of a lambda expression produce a fresh object each time?
The Scala specification does not say that it must. But as of 2.8.1 the compiler does not optimize the case where a lambda does not capture anything from its environment. I haven't tested with 2.9.0 yet.
I'll address only number 4 here.
One of the things that distinguishes Java "closures" from closures found in other languages is that they can be used in place of an interface that does not describe a function -- for example, Runnable. This is what is meant by SAM, Single Abstract Method.
Java does this because these interfaces abound in the Java library, and they abound in the Java library because Java was created without function types or closures. In their absence, all code that needed inversion of control had to resort to using a SAM interface.
For example, Arrays.sort takes a Comparator object that will perform comparison between members of the array to be sorted. By contrast, Scala can sort a List[A] by receiving a function (A, A) => Int, which is easily passed through a closure. See note 1 at the end, however.
So, because Scala's library was created for a language with function types and closures, there isn't need to support such a thing as SAM closures in Scala.
Of course, there's a question of Scala/Java interoperability -- while Scala's library might not need something like SAM, Java library does. There are two ways that can be solved. First, because Scala supports closures and function types, it is very easy to create helper methods. For example:
def runnable(f: () => Unit) = new Runnable {
  def run() = f()
}
runnable { () => println("Hello") } // creates a Runnable
Actually, this particular example can be made even shorter by use of Scala's by-name parameters, but that's beside the point. Anyway, this is something that, arguably, Java could have done instead of what it is going to do. Given the prevalence of SAM interfaces, it is not all that surprising.
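For instance, the by-name variant alluded to above might look like this (runnableBN is an illustrative name):
def runnableBN(body: => Unit): Runnable = new Runnable {
  def run(): Unit = body // the by-name argument is evaluated on each run()
}

runnableBN(println("Hello")) // creates a Runnable without writing () =>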
The other way Scala handles this is through implicit conversions. By just prepending implicit to the runnable method above, one creates a method that gets automatically (note 2) applied whenever a Runnable is required but a function () => Unit is provided.
Implicits are very unique, however, and still controversial to some extent.
Note 1: Actually, this particular example was chosen with some malice... Comparator has two abstract methods instead of one, which is the whole problem with it. Since one of its methods can be implemented in terms of the other, I think they'll just "subtract" defender methods from the abstract list.
And, on the Scala side, even though there's a sort method that uses (A, A) => Boolean, not (A, A) => Int, the standard sorting method calls for an Ordering object, which is quite similar to Java's Comparator! In Scala's case, though, Ordering performs the role of a type class.
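For illustration, Ordering acting as a type class in the standard library:
List(3, 1, 2).sorted                                 // resolves the implicit Ordering[Int]
List("b", "a", "c").sorted(Ordering[String].reverse) // an instance passed explicitly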
Note 2: Implicits are automatically applied, once they have been imported into scope.