Why does Scala infer the bottom type when the type parameter is not specified? - scala

I wonder if anyone could explain the type inference rule in this particular case below, and most importantly its rationale/implications?
case class E[A, B](a: A) // class E
E(2) // E[Int,Nothing] = E(2)
Note that I could have written E[Int, Nothing](2) explicitly. What matters to me is why the second type parameter is inferred to be Nothing (i.e. the bottom type) instead of, say, Any. Why is that, and what is the rationale/implication?
Just to give some context, this is related to the definition of Either and how it works for Left and Right. Both are defined according to the pattern
final case class X[+A, +B](value: A) extends Either[A, B]
where you instantiate it as, say, Right(2), and the type inferred is Right[Nothing, Int] and, by extension, Either[Nothing, Int].
EDIT1
There is consistency here, but I still can't figure out the rationale. Below is the same definition with a contravariant parameter:
case class E[A, -B](a: A) // class E
E(2) // E[Int, Any] = E(2)
Hence we have the same thing the other way around when the parameter is contravariant, which makes the whole behavior of the inference rule coherent. However, I am still not sure about the rationale.
Why not the opposite rule, i.e. infer Any when covariant/invariant and Nothing when contravariant?
EDIT2
In light of #slouc's answer, which makes good sense, I am still left trying to understand what the compiler is doing and why. The example below illustrates my confusion:
val myleft = Left("Error") // Left[String,Nothing] = Left(Error)
myleft map { (e:Int) => e * 4} // Either[String,Int] = Left(Error)
First the compiler fixes the type to something that "for sure works", to reuse the conclusion of #slouc's answer (albeit that makes more sense in the context of a function): Left[String, Nothing].
Next the compiler infers myleft to be of type Either[String, Int]:
given the definition of map, def map[B](f: A => B): Either[E, B], the function (e: Int) => e * 4 can only be supplied if myleft is actually Left[String, Int] or Either[String, Int].
So, in other words, my question is: what is the point of fixing the type to Nothing if it is only to be changed later?
Indeed the following does not compile
val aleft: Left[String, Nothing] = Left[String, Int]("Error")
type mismatch;
found : scala.util.Left[String,Int]
required: Left[String,Nothing]
val aleft: Left[String, Nothing] = Left[String, Int]("Error")
So why would the compiler infer a type that would normally block me from doing anything with a variable of that type (even though it is guaranteed to work in terms of inference), only to change that type later so that I can actually do something with a variable of the inferred type?
EDIT3
Edit 2 rests on a bit of a misunderstanding; everything is clarified in #slouc's answer and comments.

Covariance:
Given type F[+A] and relation A <: B, then the following holds: F[A] <: F[B]
Contravariance:
Given type F[-A] and relation A <: B, then the following holds: F[A] >: F[B]
If the compiler cannot infer the exact type, it will resolve to the lowest possible type in the case of covariance and the highest possible type in the case of contravariance.
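A quick sketch of both cases (hypothetical case classes, mirroring the ones from the question):
case class Co[A, +B](a: A)     // B is covariant and unconstrained by the argument
case class Contra[A, -B](a: A) // B is contravariant and unconstrained

Co(2)     // inferred as Co[Int, Nothing] -- the lowest possible type
Contra(2) // inferred as Contra[Int, Any] -- the highest possible type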
Why?
This is a very important rule when it comes to variance in subtyping. It can be shown on the example of the following data type from Scala:
trait Function1[-Input, +Output]
Generally speaking, when a type is placed in the function/method parameters, it means it's in the so-called "contravariant position". If it's used in function/method return values, it's in the so-called "covariant position". If it's in both, then it's invariant.
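For instance, here is a sketch of all three positions in one hypothetical trait:
// In is only consumed (parameter position), Out is only produced (result position),
// and Both appears in both positions, so it must stay invariant:
trait Pipeline[-In, +Out, Both] {
  def run(in: In): Out
  def tap(b: Both): Both
}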
Now, given the rules from the beginning of this post, we conclude that, given:
trait Food
trait Fruit extends Food
trait Apple extends Fruit
def foo(someFunction: Fruit => Fruit) = ???
we can supply
val f: Food => Apple = ???
foo(f)
Function f is a valid substitute for someFunction because:
Food is a supertype of Fruit (contravariance of input)
Apple is a subtype of Fruit (covariance of output)
We can explain this in natural language like this:
"Method foo needs a function that can take a Fruit and produce a
Fruit. This means foo will have some Fruit and will need a
function it can feed it to, and expect some Fruit back. If it gets a
function Food => Apple, everything is fine - it can still feed it
Fruit (because the function takes any food), and it can receive
Fruit (apples are fruit, so the contract is respected)."
Coming back to your initial dilemma, hopefully this explains why, without any extra information, the compiler will resort to the lowest possible type for covariant type parameters and the highest possible type for contravariant ones. If we want to supply a function to foo, there's one that we know surely works: Any => Nothing.
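To check that claim (a sketch, reusing the types defined above):
val universal: Any => Nothing = _ => throw new Exception("never returns")
foo(universal) // compiles: Any => Nothing <: Fruit => Fruit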
Variance in general.
Variance in Scala documentation.
Article about variance in Scala (full disclosure: I wrote it).
EDIT:
I think I know what's confusing you.
When you instantiate a Left[String, Nothing], you're allowed to later map it with a function Int => Whatever, or String => Whatever, or Any => Whatever. This is precisely because of the contravariance of function input explained earlier. That's why your map works.
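For example (assuming Scala 2.12+, where Either is right-biased and has map directly):
val myleft: Left[String, Nothing] = Left("Error")
// all of these compile, because a function accepting Int, String or Any
// also accepts Nothing (contravariance of function input):
myleft.map((e: Int) => e * 4)       // Either[String, Int]
myleft.map((s: String) => s.length) // Either[String, Int]
myleft.map((a: Any) => a.toString)  // Either[String, String]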
"what is the point of fixing the type to Nothing if it is to change it
later?"
I think it's a bit hard to wrap your head around the compiler fixing the unknown type to Nothing in the case of covariance. When it fixes the unknown type to Any in the case of contravariance, it feels more natural (it can be "anything"). Because of the duality of covariance and contravariance explained earlier, the same reasoning applies for covariant Nothing as for contravariant Any.

This is a quote from
Unification of Compile-Time and Runtime Metaprogramming in Scala
by Eugene Burmako
https://infoscience.epfl.ch/record/226166 (p. 95-96)
During type inference, the typechecker collects constraints on missing
type arguments from bounds of type parameters, from types of term
arguments, and even from results of implicit search (type inference
works together with implicit search because Scala supports an analogue
of functional dependencies). One can view these constraints as a
system of inequalities where unknown type arguments are represented as
type variables and order is imposed by the subtyping relation.
After collecting constraints, the typechecker starts a step-by-step
process that, on each step, tries to apply a certain transformation to
inequalities, creating an equivalent, yet supposedly simpler system of
inequalities. The goal of type inference is to transform the original
inequalities to equalities that represent a unique solution of the
original system.
Most of the time, type inference succeeds. In that
case, missing type arguments are inferred to the types represented by
the solution.
However, sometimes type inference fails. For example,
when a type parameter T is phantom, i.e. unused in the term parameters
of the method, its only entry in the system of inequalities will be
L <: T <: U, where L and U are its lower and upper bound respectively.
If L != U, this inequality does not have a unique solution, and that
means a failure of type inference.
When type inference fails, i.e.
when it is unable to take any more transformation steps and its
working state still contains some inequalities, the typechecker breaks
the stalemate. It takes all yet uninferred type arguments, i.e. those
whose variables are still represented by inequalities, and forcibly
minimizes them, i.e. equates them to their lower bounds. This produces
a result where some type arguments are inferred precisely, and some
are replaced with seemingly arbitrary types. For instance,
unconstrained type parameters are inferred to Nothing, which is a
common source of confusion for Scala beginners.
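A small illustration of that last point, as a sketch:
// T is "phantom": it does not occur in the term parameters
def phantom[T](n: Int): List[T] = Nil
val xs = phantom(3) // T is unconstrained, so it is forcibly minimized: xs: List[Nothing]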
You can learn more about type inference in Scala:
Hubert Plociniczak Decrypting Local Type Inference https://infoscience.epfl.ch/record/214757
Guillaume Martres Scala 3, Type Inference and You! https://www.youtube.com/watch?v=lMvOykNQ4zs
Guillaume Martres Dotty and types: the story so far https://www.youtube.com/watch?v=YIQjfCKDR5A
Slides http://guillaume.martres.me/talks/
Aleksander Boruch-Gruszecki GADTs in Dotty https://www.youtube.com/watch?v=VV9lPg3fNl8

Related

Type constraint on input function parameter in Scala

I'm trying to create a higher order function that has an upper bound on the type of the parameter accepted by the input function.
A toy example of a naive attempt that demonstrates the problem:
class A
class B extends A
def AFunc(f: A => Unit): Unit = {}
AFunc((b: B) => {})
//<console>:14: error: type mismatch;
// found : B => Unit
// required: A => Unit
// AFunc((b: B) => {})
This doesn't work, I assume, because functions are contravariant in their parameters so B => Unit is a supertype of A => Unit.
It's possible to get it working with polymorphic functions like so:
class A
class B extends A
def AFunc[T <: A](f: T => Unit): Unit = {}
AFunc[B]((b: B) => {})
This solution has the drawback of requiring the caller to know the exact type of the passed function's parameter, despite the fact that all AFunc cares about is that the type is a subtype of A.
Is there a type-safe way to enforce the type constraint without explicitly passing the type?
Your problem here is not due to language construct limitations, it's conceptual. There is a reason why functions are contravariant in their arguments.
What would you expect to happen if we take your code and extend it slightly into the following situation:
class A
class B extends A
def AFunc(f: A => Unit): Unit = {
  f(new A) // let's use the function f
}
AFunc((b: B) => {}) // ???
How would this compile? Method AFunc clearly needs a way to handle values of type A, but you only provided it with a way of handling one subset of those values (= those of type B).
Sure, you can make your method polymorphic the way you did, and btw you won't even need to specify the argument type at call-site because it will be inferred. BUT, this is not the "solution". It's simply a different thing. Now instead of saying "I need a function that handles values of type A", your method is saying "I need a function that handles one particular subtype of A, but I will be fine with whatever subtype you decide on".
So, as it often happens on StackOverflow, I have to ask you - what are you trying to achieve? Is your method AFunc going to be handling all kinds of As, or just one particular subtype at a time?
If it's the latter, parameterising with an upper bound (as you did) should be fine. Just be careful with the inference if you decide not to specify the type at the call site, because sometimes your code might compile just because the compiler inferred something other than what you intended (I see from the comments in your question that you already discovered this behaviour). But, if a parametric method is indeed what you need, then I don't see why specifying the type at the call site would be a problem.
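To illustrate that inference caveat with a sketch:
class A
class B extends A
def AFunc[T <: A](f: T => Unit): Unit = ()

AFunc((b: B) => {}) // compiles: T is inferred as B from the lambda's parameter type
AFunc((a: A) => {}) // also compiles: T is inferred as A -- two different instantiations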
despite the fact that all AFunc cares about is that the type is a subtype of A.
Sure, AFunc doesn't care. But the caller does. I mean, someone must care, right? :) Again, it's hard to continue discussion without knowing more about the actual problem you're solving, but if you want to provide more details I'd be happy to help.

Why doesn't Scala deduce trait type parameters?

trait foo[F] {
  def test: F
}
class ah extends foo[(Int, Int) => Int] {
  def test = (i: Int, j: Int) => i + j
}
So the question is: why can't Scala, known to be so smart about types, just deduce the type (Int, Int) => Int from the type of test, instead of asking me to specify it? Or is it still possible? Or maybe this is backed by some considerations that I don't have in mind.
Your question is basically "why does Scala have only local type inference" or equivalently "why does Scala not have non-local type inference". And the answer is: because the designers don't want to.
There are several reasons for this. One reason is that the most reasonable alternative to local type inference is global type inference, but that's impossible in Scala: Scala has separate compilation and modular type checking, so there simply is no point in time where the compiler has a global view of the entire program. In fact, Scala has dynamic code loading, which means that at compile time the entire code doesn't even need to exist yet! Global type inference is simply impossible in Scala. The best we could do is "whole-compilation-unit type inference", but that is undesirable, too: it would mean that whether or not you need type annotations depends on whether or not you compile your code in multiple units or just one.
Another important reason is that type annotations at module boundaries serve as a double-check, kind of like double-entry book keeping for types. Note that even in languages with global type inference like Haskell, it is strongly recommended to put type annotations on module boundaries and public interfaces.
A third reason is that global type inference can sometimes lead to confusing error messages, when instead of the type checker failing at a type annotation, the type inferencer happily chugs along inferring increasingly non-sensical types, until it finally gives up at a location far away from the actual error (which might just be a simple typo) with a type error that is only tangentially related to the original types at the error site. This can sometimes happen in Haskell, for example. The Scala designers value helpful error messages so much that they are willing to sacrifice language features unless they can figure out how to implement them with good error messages. (Note that even with Scala's very limited type inference, you can get such "helpful" messages as "expected Foo got Product with Serializable".)
However, what Scala's local type inference can do in this example, is to work in the other direction, and infer the parameter types of the anonymous function from the return type of test:
trait foo[F] {
  def test: F
}
class ah extends foo[(Int, Int) ⇒ Int] {
  def test = (i, j) ⇒ i + j
}
(new ah).test(2, 3) //=> 5
For your particular example, inferring type parameters based on inheritance can be ambiguous:
trait A
trait B extends A
trait Foo[T] {
  val x: T
}
trait Bar extends Foo[?] {
  val x: B
}
The type that could go in the ? could be either A or B. This is a general example of where Scala will not infer types that could be ambiguous.
Now, you are correct to observe that there is a similarity between
class Foo[T](x: T)
and
trait Foo[T] { val x: T }
I have seen work by some into possibly generalizing the similarity (but I can't seem to find it right now). That could, in theory, allow type inference of type parameters based on a member. But I don't know if it will ever get to that point.

Deciphering one of the toughest scala method prototypes (slick)

Looking at the <> method in the following scala slick class, from http://slick.typesafe.com/doc/2.1.0/api/index.html#scala.slick.lifted.ToShapedValue, it reminds me of that iconic stackoverflow thread about scala prototypes.
def <>[R, U](f: (U) ⇒ R, g: (R) ⇒ Option[U])
(implicit arg0: ClassTag[R], shape: Shape[_ <: FlatShapeLevel, T, U, _]):
MappedProjection[R, U]
Can someone bold and knowledgeable provide an articulate walkthrough of that long prototype definition, carefully clarifying all of its type covariance/invariance, double parameter lists, and other advanced scala aspects?
This exercise will also greatly help dealing with similarly convoluted prototypes!
Ok, let's take a look:
class ToShapedValue[T](val value: T) extends AnyVal {
  ...
  @inline def <>[R: ClassTag, U](f: (U) ⇒ R, g: (R) ⇒ Option[U])(implicit shape: Shape[_ <: FlatShapeLevel, T, U, _]): MappedProjection[R, U]
}
The class is an AnyVal wrapper; while I can't actually see the implicit conversion from a quick look, it smells like the "pimp my library" pattern. So I'm guessing this is meant to add <> as an "extension method" onto some (or maybe all) types.
@inline is an annotation, a way of putting metadata on, well, anything; this one is a hint to the compiler that this should be inlined. <> is the method name - plenty of things that look like "operators" are simply ordinary methods in scala.
The documentation you link has already expanded the R: ClassTag to ordinary R and an implicit ClassTag[R] - this is a "context bound" and it's simply syntactic sugar. ClassTag is a compiler-generated thing that exists for every (concrete) type and helps with reflection, so this is a hint that the method will probably do some reflection on an R at some point.
Now, the meat: this is a generic method, parameterized by two types: [R, U]. Its arguments are two functions, f: U => R and g: R => Option[U]. This looks a bit like the functional Prism concept - a conversion from U to R that always works, and a conversion from R to U that sometimes doesn't work.
The interesting part of the signature (sort of) is the implicit shape at the end. Shape is described as a "typeclass", so this is probably best thought of as a "constraint": it limits the possible types U and R that we can call this function with, to only those for which an appropriate Shape is available.
Looking at the documentation for Shape, we see that the four types are Level, Mixed, Unpacked and Packed. So the constraint is: there must be a Shape, whose "level" is some subtype of FlatShapeLevel, where the Mixed type is T and the Unpacked type is R (the Packed type can be any type).
So, this is a type-level function that expresses that R is "the unpacked version of" T. To use the example from the Shape documentation again, if T is (Column[Int], Column[(Int, String)], (Int, Option[Double])) then R will be (Int, (Int, String), (Int, Option[Double])) (and it only works for FlatShapeLevel, but I'm going to make a judgement call that that's probably not important). U is, interestingly enough, completely unconstrained.
So this lets us create a MappedProjection[unpacked-version-of-T, U] from any T, by providing conversion functions in both directions. So in a simple version, maybe T is a Column[String] - a representation of a String column in a database - and we want to represent it as some application-specific type, e.g. EmailAddress. So R=String, U=EmailAddress, and we provide conversion functions in both directions: f: EmailAddress => String and g: String => Option[EmailAddress]. It makes sense that it's this way around: every EmailAddress can be represented as a String (at least, they'd better be, if we want to be able to store them in the database), but not every String is a valid EmailAddress. If our database somehow had e.g. "http://www.foo.com/" in the email address column, our g would return None, and Slick could handle this gracefully.
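A sketch of that conversion pair (EmailAddress is hypothetical, and this ignores Slick itself):
case class EmailAddress(local: String, domain: String)

val f: EmailAddress => String = e => s"${e.local}@${e.domain}"
val g: String => Option[EmailAddress] = s =>
  s.split('@') match {
    case Array(local, domain) => Some(EmailAddress(local, domain))
    case _                    => None // e.g. "http://www.foo.com/" is not a valid address
  }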
MappedProjection itself is, sadly, undocumented. But I'm guessing it's some kind of lazy representation of a thing we can query; where we had a Column[String], now we have a pseudo-column-thing whose (underlying) type is EmailAddress. So this might allow us to write pseudo-queries like 'select from users where emailAddress.domain = "gmail.com"', which would be impossible to do directly in the database (which doesn't know which part of an email address is the domain), but is easy to do with the help of code. At least, that's my best guess at what it might do.
Arguably the function could be made clearer by using a standard Prism type (e.g. the one from Monocle) rather than passing a pair of functions explicitly. Using the implicit to provide a type-level function is awkward but necessary; in a fully dependently typed language (e.g. Idris), we could write our type-level function as a function (something like def unpackedType(t: Type): Type = ...). So conceptually, this function looks something like:
def <>[U](p: Prism[U, unpackedType(T)]): MappedProjection[unpackedType(T), U]
Hopefully this explains some of the thought process of reading a new, unfamiliar function. I don't know Slick at all, so I have no idea how accurate I am as to what this <> is used for - did I get it right?

What is and when to use Scala's forSome keyword?

What is the difference between List[T] forSome {type T} and List[T forSome {type T}]? How do I read them in "English"? How should I grok the forSome keyword? What are some practical uses of forSome? What are some useful practical and more complex than simple T forSome {type T} usages?
Attention: (Update 2016-12-08) The forSome keyword is very likely going away with Scala 2.13 or 2.14, according to Martin Odersky's talk at ScalaX 2016. Replace it with path-dependent types or with anonymous type attributes (A[_]). This is possible in most cases. If you have an edge case where it is not possible, refactor your code or loosen your type restrictions.
How to read "forSome" (in an informal way)
Usually when you use a generic API, the API guarantees you, that it will work with any type you provide (up to some given constraints). So when you use List[T], the List API guarantees you that it will work with any type T you provide.
With forSome (so-called existentially quantified type parameters) it is the other way round. The API will provide a type (not you), and it guarantees that it will work with this type it provided you. The semantics is that a concrete object will give you something of type T. The same object will also accept the things it provided you. But no other object may work with these Ts, and no other object can provide you with something of type T.
The idea of "existentially quantified" is: There exists (at least) one type T (in the implementation) to fulfill the contract of the API. But I won't tell you which type it is.
forSome can be read similarly: for some types T, the API contract holds true. But it is not necessarily true for all types T. So when you provide some type T (instead of the one hidden in the implementation of the API), the compiler cannot guarantee that you got the right T. So it will throw a type error.
Applied to your example
So when you see List[T] forSome {type T} in an API, you can read it like this: The API will provide you with a List of some unknown type T. It will gladly accept this list back and it will work with it. But it won't tell you, what T is. But you know at least, that all elements of the list are of the same type T.
The second one is a little bit more tricky. Again the API will provide you with a List. And it will use some type T and not tell you what T is. But it is free to choose a different type for each element. A real world API would establish some constraints for T, so it can actually work with the elements of the list.
Conclusion
forSome is useful when you write an API where each object represents an implementation of the API. Each implementation will provide you with some objects and will accept these objects back. But you can neither mix objects from different implementations nor create the objects yourself. Instead you must always use the corresponding API functions to get some objects that will work with that API. forSome enables a very strict kind of encapsulation. You can read forSome in the following way:
The API contract holds true for some types. But you don't know for
which types it holds true. Hence you cannot provide your own type and
you cannot create your own objects. You have to use the ones provided
through the API that uses forSome.
This is quite informal and might even be wrong in some corner cases. But it should help you to grok the concept.
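As a sketch of this strict encapsulation, here is the same idea expressed with an abstract type member (the path-dependent style the update above recommends); all names are hypothetical:
trait TokenStore {
  type Token // the implementation picks Token; clients cannot name it
  def issue(name: String): Token
  def redeem(token: Token): String
}

val storeA: TokenStore = new TokenStore {
  type Token = Int
  def issue(name: String) = name.length
  def redeem(token: Int) = s"token #$token"
}

val t = storeA.issue("x") // t: storeA.Token
storeA.redeem(t)          // OK: same store, same Token
// someOtherStore.redeem(t) would not compile: storeA.Token is a distinct type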
There are a lot of questions here, and most of them have been addressed pretty thoroughly in the answers linked in the comments above, so I'll respond to your more concrete first question.
There's no real meaningful difference between List[T] forSome { type T } and List[T forSome { type T }], but we can see a difference between the following two types:
class Foo[A]
type Outer = List[Foo[T]] forSome { type T }
type Inner = List[Foo[T] forSome { type T }]
We can read the first as "a list of foos of T, for some type T". There's a single T for the entire list. The second, on the other hand, can be read as "a list of foos, where each foo is of T for some T".
To put it another way, if we've got a list outer: Outer, we can say that "there exists some type T such that outer is a list of foos of T", where for a list of type Inner, we can only say that "for each element of the list, there exists some T such that that element is a foo of T". The latter is weaker—it tells us less about the list.
So, for example, if we have the following two lists:
val inner: Inner = List(new Foo[Char], new Foo[Int])
val outer: Outer = List(new Foo[Char], new Foo[Int])
The first will compile just fine—each element of the list is a Foo[T] for some T. The second won't compile, since there's not some T such that each element of the list is a Foo[T].

Any reason why scala does not explicitly support dependent types?

There are path dependent types and I think it is possible to express almost all the features of such languages as Epigram or Agda in Scala, but I'm wondering why Scala does not support this more explicitly like it does very nicely in other areas (say, DSLs) ?
Anything I'm missing like "it is not necessary" ?
Syntactic convenience aside, the combination of singleton types, path-dependent types and implicit values means that Scala has surprisingly good support for dependent typing, as I've tried to demonstrate in shapeless.
Scala's intrinsic support for dependent types is via path-dependent types. These allow a type to depend on a selector path through an object- (ie. value-) graph like so,
scala> class Foo { class Bar }
defined class Foo
scala> val foo1 = new Foo
foo1: Foo = Foo@24bc0658
scala> val foo2 = new Foo
foo2: Foo = Foo@6f7f757
scala> implicitly[foo1.Bar =:= foo1.Bar] // OK: equal types
res0: =:=[foo1.Bar,foo1.Bar] = <function1>
scala> implicitly[foo1.Bar =:= foo2.Bar] // Not OK: unequal types
<console>:11: error: Cannot prove that foo1.Bar =:= foo2.Bar.
implicitly[foo1.Bar =:= foo2.Bar]
In my view, the above should be enough to answer the question "Is Scala a dependently typed language?" in the positive: it's clear that here we have types which are distinguished by the values which are their prefixes.
However, it's often objected that Scala isn't a "fully" dependently typed language because it doesn't have dependent sum and product types as found in Agda or Coq or Idris as intrinsics. I think this reflects a fixation on form over fundamentals to some extent; nevertheless, I'll try and show that Scala is a lot closer to these other languages than is typically acknowledged.
Despite the terminology, dependent sum types (also known as Sigma types) are simply a pair of values where the type of the second value is dependent on the first value. This is directly representable in Scala,
scala> trait Sigma {
| val foo: Foo
| val bar: foo.Bar
| }
defined trait Sigma
scala> val sigma = new Sigma {
| val foo = foo1
| val bar = new foo.Bar
| }
sigma: java.lang.Object with Sigma{val bar: this.foo.Bar} = $anon$1@e3fabd8
and in fact, this is a crucial part of the encoding of dependent method types which is needed to escape from the 'Bakery of Doom' in Scala prior to 2.10 (or earlier via the experimental -Ydependent-method-types Scala compiler option).
Dependent product types (aka Pi types) are essentially functions from values to types. They are key to the representation of statically sized vectors and the other poster children for dependently typed programming languages. We can encode Pi types in Scala using a combination of path dependent types, singleton types and implicit parameters. First we define a trait which is going to represent a function from a value of type T to a type U,
scala> trait Pi[T] { type U }
defined trait Pi
We can then define a polymorphic method which uses this type,
scala> def depList[T](t: T)(implicit pi: Pi[T]): List[pi.U] = Nil
depList: [T](t: T)(implicit pi: Pi[T])List[pi.U]
(note the use of the path-dependent type pi.U in the result type List[pi.U]). Given a value of type T, this function will return a(n empty) list of values of the type corresponding to that particular T value.
Now let's define some suitable values and implicit witnesses for the functional relationships we want to hold,
scala> object Foo
defined module Foo
scala> object Bar
defined module Bar
scala> implicit val fooInt = new Pi[Foo.type] { type U = Int }
fooInt: java.lang.Object with Pi[Foo.type]{type U = Int} = $anon$1@60681a11
scala> implicit val barString = new Pi[Bar.type] { type U = String }
barString: java.lang.Object with Pi[Bar.type]{type U = String} = $anon$1@187602ae
And now here is our Pi-type-using function in action,
scala> depList(Foo)
res2: List[fooInt.U] = List()
scala> depList(Bar)
res3: List[barString.U] = List()
scala> implicitly[res2.type <:< List[Int]]
res4: <:<[res2.type,List[Int]] = <function1>
scala> implicitly[res2.type <:< List[String]]
<console>:19: error: Cannot prove that res2.type <:< List[String].
implicitly[res2.type <:< List[String]]
^
scala> implicitly[res3.type <:< List[String]]
res6: <:<[res3.type,List[String]] = <function1>
scala> implicitly[res3.type <:< List[Int]]
<console>:19: error: Cannot prove that res3.type <:< List[Int].
implicitly[res3.type <:< List[Int]]
(note that here we use Scala's <:< subtype-witnessing operator rather than =:= because res2.type and res3.type are singleton types and hence more precise than the types we are verifying on the RHS).
In practice, however, in Scala we wouldn't start by encoding Sigma and Pi types and then proceeding from there as we would in Agda or Idris. Instead we would use path-dependent types, singleton types and implicits directly. You can find numerous examples of how this plays out in shapeless: sized types, extensible records, comprehensive HLists, scrap your boilerplate, generic Zippers etc. etc.
The only remaining objection I can see is that in the above encoding of Pi types we require the singleton types of the depended-on values to be expressible. Unfortunately in Scala this is only possible for values of reference types and not for values of non-reference types (e.g. Int). This is a shame, but not an intrinsic difficulty: Scala's type checker represents the singleton types of non-reference values internally, and there have been a couple of experiments in making them directly expressible. In practice we can work around the problem with a fairly standard type-level encoding of the natural numbers.
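Such an encoding typically looks something like the following sketch (not shapeless's actual definitions):
sealed trait Nat
sealed trait Zero extends Nat
sealed trait Succ[N <: Nat] extends Nat

type _0 = Zero
type _1 = Succ[_0]
type _2 = Succ[_1] // each numeral is a distinct type, usable where a singleton Int is not expressible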
In any case, I don't think this slight domain restriction can be used as an objection to Scala's status as a dependently typed language. If it is, then the same could be said for Dependent ML (which only allows dependencies on natural number values) which would be a bizarre conclusion.
I would assume it is because (as I know from experience, having used dependent types in the Coq proof assistant, which fully supports them but still not in a very convenient way) dependent types are a very advanced programming language feature which is really hard to get right - and can cause an exponential blowup in complexity in practice. They're still a topic of computer science research.
I believe that Scala's path-dependent types can only represent Σ-types, but not Π-types. This:
trait Pi[T] { type U }
is not exactly a Π-type. By definition, a Π-type, or dependent product, is a function whose result type depends on the argument's value, representing the universal quantifier, i.e. ∀x: A, B(x). In the case above, however, it depends only on the type T, but not on some value of that type. The Pi trait itself is a Σ-type, an existential quantifier, i.e. ∃x: A, B(x). The object's self-reference in this case acts as the quantified variable. When passed in as an implicit parameter, however, it reduces to an ordinary type function, since it is resolved type-wise. An encoding of the dependent product in Scala may look like the following:
trait Sigma[T] {
  val x: T
  type U // can depend on x
}
// (t: T) => (∃ mapping(x, U), x == t) => (u: U); sadly, refinement won't compile
def pi[T](t: T)(implicit mapping: Sigma[T] { val x = t }): mapping.U
The missing piece here is the ability to statically constrain field x to the expected value t, effectively forming an equation representing the property of all values inhabiting type T. Together with our Σ-types, used to express the existence of an object with a given property, a logic is formed in which our equation is a theorem to be proven.
On a side note, in a real case the theorem may be highly nontrivial, up to the point where it cannot be automatically derived from code or solved without a significant amount of effort. One can even formulate the Riemann Hypothesis this way, only to find the signature impossible to implement without actually proving it, looping forever, or throwing an exception.
The question was about using the dependently typed feature more directly, and, in my opinion, there would be a benefit in having a more direct dependent typing approach than what Scala offers.
Current answers try to argue the question at the type-theoretical level.
I want to put a more pragmatic spin on it.
This may explain why people are divided on the level of support of dependent types in the Scala language. We may have somewhat different definitions in mind (not to say one is right and one is wrong).
This is not an attempt to answer the question of how easy it would be to turn Scala into something like Idris (I imagine very hard) or to write a library offering more direct support for Idris-like capabilities (like singletons tries to be in Haskell).
Instead, I want to emphasize the pragmatic difference between Scala and a language like Idris.
What are the code bits for value-level and type-level expressions?
Idris uses the same code; Scala uses very different code.
Scala (similarly to Haskell) may be able to encode lots of type-level calculations.
This is shown by libraries like shapeless.
These libraries do it using some really impressive and clever tricks.
However, their type-level code is (currently) quite different from value-level expressions (I find that gap to be somewhat narrower in Haskell). Idris allows using value-level expressions at the type level AS IS.
The obvious benefit is code reuse (you do not need to code type-level expressions separately from the value level if you need them in both places). It should be way easier to write value-level code. It should be easier not to have to deal with hacks like singletons (not to mention the performance cost). You do not need to learn two things; you learn one thing.
On a pragmatic level, we end up needing fewer concepts. Type synonyms, type families, functions, ... how about just functions? In my opinion, these unifying benefits go much deeper and are more than syntactic convenience.
Consider verified code. See:
https://github.com/idris-lang/Idris-dev/blob/v1.3.0/libs/contrib/Interfaces/Verified.idr
The type checker verifies proofs of monad/functor/applicative laws, and the proofs are about the actual implementations of monad/functor/applicative, not some encoded type-level equivalent that may or may not be the same.
The big question is: what are we proving?
The same can be done using clever encoding tricks (see the following for a Haskell version; I have not seen one for Scala):
https://blog.jle.im/entry/verified-instances-in-haskell.html
https://github.com/rpeszek/IdrisTddNotes/wiki/Play_FunctorLaws
except the types are so complicated that it is hard to see the laws, the value-level expressions are converted (automatically, but still) to type-level things, and you need to trust that conversion as well.
There is room for error in all of this, which somewhat defeats the purpose of the compiler acting as a proof assistant.
(EDITED 2018-08-10) Speaking of proof assistance, here is another big difference between Idris and Scala. There is nothing in Scala (or Haskell) that can prevent you from writing diverging proofs:
case class Void(underlying: Nothing) extends AnyVal // should be uninhabited
def impossible(): Void = impossible()
while Idris has the total keyword, which prevents code like this from compiling.
A Scala library that tries to unify value-level and type-level code (like Haskell's singletons) would be an interesting test of Scala's support for dependent types. Could such a library be done much better in Scala because of path-dependent types?
I am too new to Scala to answer that question myself.