How to constrain the maximum length of a list in an argument definition? - scala

A function of mine is to take from zero to five integer arguments and from zero to five string arguments, so I am considering defining it as a function of two lists: f(numbers: List[Int], strings: List[String]). But I think it would be good to constrain the lengths if possible, so that an IDE and/or the compiler can enforce it. Is this possible?

I think you're really asking a lot of the type system for this one... This is a classic task for dependently typed programming, in which category Scala unfortunately does not belong.
You could look at Mark Harrah's type-level Naturals:
type _0 = Nat0
type _1 = Succ[_0]
type _2 = Succ[_1]
// ...
But if you go down this route, you'll have to build all your lists in such a way that the length-type is evident to the compiler. That means no recursion, no unbounded looping, etc. Also you'll have to come up with a way to encode "<" in the type system... since you can't do recursion I'm not sure how you'd do that. So, probably not worth it.
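For what it's worth, implicit resolution can in fact recurse at the type level, which is how libraries like shapeless encode such relations. A minimal sketch of how "<" could be encoded, assuming the Nat0/Succ naturals above (my illustration, not Mark Harrah's actual library):
sealed trait Nat
sealed trait Nat0 extends Nat
sealed trait Succ[N <: Nat] extends Nat

type _0 = Nat0
type _1 = Succ[_0]
type _2 = Succ[_1]

// LT[A, B] is compile-time evidence that A < B
sealed trait LT[A <: Nat, B <: Nat]
object LT {
  // 0 < Succ[N] for any N
  implicit def zeroLt[N <: Nat]: LT[Nat0, Succ[N]] = new LT[Nat0, Succ[N]] {}
  // if A < B then Succ[A] < Succ[B]
  implicit def succLt[A <: Nat, B <: Nat](implicit lt: LT[A, B]): LT[Succ[A], Succ[B]] =
    new LT[Succ[A], Succ[B]] {}
}

implicitly[LT[_1, _2]]    // compiles: evidence that 1 < 2
// implicitly[LT[_2, _1]] // does not compile: no evidence derivable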
Maybe another way to approach the problem is to figure out where '0..5' comes from and constrain some other type based on that information?
As a last resort you could define special cases for the allowable sizes, keeping the number and string arguments separate so that you don't need an overload for every combination of sizes:
case class Small[+X](l: List[X])
def small(): Small[Nothing] = Small(List())
def small[A](a: A): Small[A] = Small(List(a))
def small[A](a1: A, a2: A): Small[A] = Small(List(a1,a2))
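Continuing the pattern, a hedged sketch of how the original signature might then look (the remaining overloads and the f signature are my assumptions, following the same shape):
def small[A](a1: A, a2: A, a3: A): Small[A] = Small(List(a1, a2, a3))
// ...and similarly for four and five elements
def f(numbers: Small[Int], strings: Small[String]): Unit = ???

f(small(1, 2), small("a", "b", "c")) // sizes are bounded by the available overloads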

Related

Attempting to figure out the correct type for this

Apologies, this is going to be a somewhat noob-ish question. I have an object from the slick library that has a type like this:
Query[(Rep[String], Rep[String]), (String, String), Seq]
I'm trying to write a function which accepts queries as arguments, though the sequences in them are of uncertain length - ie, it could equally well be:
Query[(Rep[String], Rep[String], Rep[String]), (String, String, String), Seq]
So the first two components have three elements rather than two. I cannot figure out how this is done. I have tried various erroneous permutations, like Query[Product[Rep[String]], Product[String], Seq], to no avail, and even what I assumed would be the nuclear option of just using Any doesn't work. My error messages are along the lines of
[error] found : Option[slick.driver.H2Driver.api.Query[(slick.driver.H2Driver.api.Rep[String], slick.driver.H2Driver.api.Rep[String]),(String, String),Seq]]
[error] (which expands to) Option[slick.lifted.Query[(slick.lifted.Rep[String], slick.lifted.Rep[String]),(String, String),Seq]]
[error] required: Option[slick.driver.H2Driver.api.Rep[scala.concurrent.Future[List[String]]]]
[error] (which expands to) Option[slick.lifted.Rep[scala.concurrent.Future[List[String]]]]
[error] ReturnFunctions.completeQuery(db, query, serialize_and_send)
I think my inability to solve this may reflect some fundamental lack of understanding about Scala, strongly typed languages in general and possibly also computing as a whole. Should I be resolving this Query to some more definite form before I try to even pass it into a function? I also suspect I'm not interpreting the original type correctly - what do the parentheses mean in this context? Is it that Query is expecting to receive three sets of parameters, one after the other, like when you do fn(arg1)(arg2)(arg3) = ...?
Any help with this troubling dilemma gratefully received.
I also suspect I'm not interpreting the original type correctly - what do the parentheses mean in this context?
You're looking at a reasonably advanced area, but let's try to help.
The Query type always has three type parameters. You'll see them written as Query[M, U, C].
The first parameter, M, is a tuple. That's what the parentheses mean in this context.
In your first example, M is a tuple of two elements; and in the second it's three. The same situation exists for the second parameter, U. There's a bit more detail on this in Essential Slick.
In Scala, you can have generic parameters. That means you can say something along the lines of:
def foo[M, U, C[_]](q: Query[M,U,C]) = ???
We've defined a method with three type parameters, taking an argument of a query that has those types.
We've not said anything about M, U, or much about C (other than that it's a type that takes a type as an argument). That means there's not a lot we can do with them, but you may not need to.
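To see that such a signature really does accept queries of either width, here is a self-contained sketch in which Query is a stub standing in for slick.lifted.Query (an assumption for illustration only):
case class Query[M, U, C[_]](mixed: M) // stub, not the real Slick type

def foo[M, U, C[_]](q: Query[M, U, C]): M = q.mixed

val q2 = Query[(String, String), (String, String), Seq](("a", "b"))
val q3 = Query[(String, String, String), (String, String, String), Seq](("a", "b", "c"))

foo(q2) // accepts the two-column shape
foo(q3) // accepts the three-column shape equally well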
A post on query enrichment in Slick gives a longer (related) example which may be of use.
As Dmytro suggests, a better route would be to create a concrete example of what you'd like to achieve and work from there.
Consider the shape of the Query type constructor
Query[+E, U, C[_]]
We say Query is a type constructor because it constructs a concrete type out of given type arguments E, U, and C[_], similarly to how a function constructs a concrete value out of given function arguments.
Now let's try to deconstruct the concrete type
Query[(Rep[String], Rep[String]), (String, String), Seq]
into its constituent type parameters. We have
E = (Rep[String], Rep[String])
U = (String, String)
C[_] = Seq
Note that (A, B) is just syntactic sugar for Tuple2[A, B], thus
E = (Rep[String], Rep[String]) = Tuple2[Rep[String], Rep[String]]
U = (String, String) = Tuple2[String, String]
C[_] = Seq = Seq
You might be wondering about that underscore in C[_]. It specifies that the type parameter C must be a type constructor, as opposed to a concrete type. For example, Seq is a type constructor whilst Seq[Int] is not. Furthermore, you might be wondering about that + in +E. It specifies the inheritance relationship of parameterised types, or in other words, variance; for example, it specifies whether Seq[Dog] is a subtype of Seq[Animal].
Lastly, let's write the resulting concrete type in its full verbosity
Query[Tuple2[Rep[String], Rep[String]], Tuple2[String, String], Seq]
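As a quick illustration of the constructor-versus-concrete-type distinction (my own sketch, not from Slick):
def needsConstructor[C[_]]: Unit = () // expects a one-argument type constructor

needsConstructor[Seq]         // compiles: Seq has the right kind
// needsConstructor[Seq[Int]] // does not compile: Seq[Int] is already a concrete type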

Correct use of term Monoid

From the following example, I think it is correct to say that String defines a monoid under the concatenation operation since it is an associative binary operation and String happens to have an identity element which is an empty string "".
scala> ("" + "Jane") + "Doe" == "" + ("Jane" + "Doe")
res0: Boolean = true
From the various texts I have been reading on the subject lately, it seems that the correct use of the term monoid is that the monoid is actually a combination of both the type (in this case String) and an instance of some monoid type which defines the operation and identity element.
For example, here is a theoretical Monoid type and a concrete instance of it as seems to be commonly defined in various books/articles:
trait Monoid[A] {
  def op(a1: A, a2: A): A
  def zero: A
}
val stringMonoid = new Monoid[String] {
  def op(a1: String, a2: String) = a1 + a2
  val zero = ""
}
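For concreteness, the monoid laws can be checked directly against this instance (my illustration, using the definitions above):
stringMonoid.op(stringMonoid.zero, "Jane") == "Jane" // left identity
stringMonoid.op("Jane", stringMonoid.zero) == "Jane" // right identity
stringMonoid.op(stringMonoid.op("Jane", " "), "Doe") ==
  stringMonoid.op("Jane", stringMonoid.op(" ", "Doe")) // associativity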
I know that we do not need trait Monoid[A] nor stringMonoid to be defined in the core (Scala, or rather Java) library to support my REPL output and that the example is just a tool to understand the abstract concept of a monoid.
My issue (and I am very possibly thinking about it way too much) is the purist definition. I know that the underlying java.lang.String (or rather StringBuilder) already defines the associative operation, but I do not think that there is an explicit definition of the identity element (in this case just an empty string "") defined anywhere.
Question:
Is String a monoid under the concatenation operation implicitly, just because we happen to know that using an empty string "" provides the identity? Or is an explicit definition of the identity element for a type unnecessary for it to be classed as a monoid (under a particular associative binary operation)?
You may be overthinking this a little. A monoid is an abstract concept that exists outside of programming and type systems (a la category theory). Consider the definition:
A monoid is a set that is closed under an associative binary operation and has an identity element.
You have identified the String type to be a closed
set with an associative binary operation (concatenation) and identity element (the empty string). The language may not explicitly tell you what the identity element is, but that doesn't mean that it doesn't exist (we know it does). The String type with the binary operation of concatenation is most certainly a monoid because you can prove that it meets all of the aforementioned properties.
Creating a Monoid type class is merely for our convenience when it comes to working with generic data structures that operate with monoids. Regardless of whether or not the programming language (or some other library) explicitly spells out what sets with what binary operations and what identities construct monoids, the monoids can still exist.
It is important to note that the String type on its own does not constitute a monoid, as it must exist with said binary operation, etc. It may be possible that another monoid exists using the same set, but a different binary operation.
I think that you already understand the concept right and indeed "think too much" about it (;
A monoid is a triplet, as they say in mathematics: a set (think of a type with its values), an associative binary operator on it, and a neutral element. You define such a triple — you define a monoid. So the answer to
Is String a monoid under the concatenation operation implicitly, just because we happen to know that using an empty string "" provides the identity?
is simply yes. You named the set, the associative binary operation and the neutral element — bingo! You got a monoid! About
already defines the associative operation, but I do not think that there is an explicit definition of the identity element
there's a bit of confusion. You can choose different associative operations on String, with their corresponding neutral elements, and define various monoids. Concatenation is not some kind of "orthodox" associative operation on String for defining a monoid; it is just, probably, the most obvious one.
If you define a table as something "with four legs and a flat horizontal surface on them", then anything that fits this definition is a table, regardless of the material it's made of and other variable characteristics. Now, when do you need to "certify" explicitly that it is a table? Only when you need to use its "table-properties", say if you want to sell it and advertise that you can put things on it and they won't fall off, because the surface is guaranteed to be flat and horizontal.
Sorry if the example is kind of stupid, I'm not very good at such analogies. I hope it is still helpful.
Now about instantiating the mentioned "theoretical Monoid type". Such types are usually called type classes. Neither the existence of such a type itself (which can be defined in various ways), nor its instance, is necessary to call the triple (String, (_ ++ _), "") a monoid and reason about it as a monoid (i.e. use general monoid properties). What it is actually used for is ad-hoc polymorphism. In Scala this is done with implicits. One can, for example, define a polymorphic function
def fold[M](seq: Seq[M])(implicit m: Monoid[M]): M = seq match {
  case Seq() => m.zero // the empty sequence folds to the identity
  case h +: t => m.op(h, fold(t))
}
Then if your stringMonoid value is declared as implicit val, whenever you use fold[String] and stringMonoid is in scope, it will use its zero and op inside. Same fold definition will work for other instances of Monoid[...].
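A hedged sketch of that wiring (assuming the Monoid trait and fold definitions above):
implicit val stringMonoid = new Monoid[String] {
  def op(a1: String, a2: String) = a1 + a2
  val zero = ""
}

fold(Seq("Jane", " ", "Doe")) // "Jane Doe": uses stringMonoid's op, and zero for the empty case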
Another topic is what happens when you have several instances of Monoid[String]. Read Where does Scala look for implicits?.
To answer your question, first we need to answer another one: what is a monoid? One can look at it from at least 2 different perspectives:
Category theory perspective,
Scala programming language perspective.
It is important, though, not to mix these two perspectives. Category theory is a form of mathematics, and a monoid in category theory therefore belongs to the domain of Platonic Forms, while in the Scala programming language, according to Platonic realism, it is a Particular.
So a monoid in Scala is a mere reflection of the ideal Platonic monoid. But we are free to choose how exactly we do this reflection, because in any case the Particular is a mere approximation of the Form. This reflection is based on the expressive powers of the medium in which it happens (the Scala programming language in our case).
Monoid requires us to designate a unique instance of a given type with the "identity" notion. I'd argue that the best way to uniquely identify an instance of some type is to use a singleton type, because types, as opposed to values, are guaranteed to be unique. Since Scala doesn't have singleton types proper (see here for discussion), it's impossible to clearly express the monoid Form in Scala, and this is probably where your question stems from.
Scala's answer to this problem is typeclasses. Scala says: "Ok, I can't express the empty string as a unique type (nor 0 as the identity of numbers under addition, etc.), so I won't operate on that level at all. Instead, convince me you have a monoid typeclass for String, and I'll let you use it where appropriate." This is a very practical approach, but not a very pure one. Having singleton types would allow one to express an empty string as a unique type, and the string monoid like this (warning, this doesn't compile):
// Identity typeclass
trait Identity[T] {
  type Repr
  val identity: Repr
}
object Identity {
  implicit val stringIdentity = new Identity[String] {
    type Repr = "" // can't do this without singleton types support
    val identity = ""
  }
}
trait Monoid[A] {
  def op(a1: A, a2: A): A
  def zero: Identity[A]#Repr
}
object Monoid {
  implicit def stringMonoid[I](implicit
      stringIdentity: Identity[String] { type Repr = I }) = new Monoid[String] {
    def op(a1: String, a2: String): String = a1 + a2
    def zero: I = stringIdentity.identity
  }
}
I think it would be possible to get pretty close to this using shapeless, but I haven't tried.
To summarize: from the Scala language perspective, having a monoid means having an instance of the monoid typeclass, because that is how we, the implementers, choose to reflect the ideal monoid into Scala's reality. Your question mixes the ideal monoid with the real-world Scala monoid, and that is why it is so hard to formulate and answer. The lack of singleton types in Scala forces us to make assumptions, like that the empty string is the identity for the String type. With singleton types, we would not have to assume; we could prove this fact, and use such a proof in the monoid definition.
Hope this helps.
You would only need String (as implemented in the JVM, or as supplemented in the SDK) to be a proper Monoid (e.g. RichString extends Monoid[String]) if you were going to have String (or RichString) method implementations that you take directly from Monoid, so you do not have to implement them. That is, if we know that a String is a Monoid, then it has all its operations without us having to implement them. There are two problems with this scenario: first, String was implemented in the JVM a long time before anyone thought to recognize that String is a monoid, so all useful methods have been written directly into String, or StringBuilder. Second, Monoid probably does not have terribly useful methods anyway.

def layout[A](x: A) = ... syntax in Scala

I'm a beginner of Scala who is struggling with Scala syntax.
I got the line of code from https://www.tutorialspoint.com/scala/higher_order_functions.htm.
I know (x: A) is an argument of the layout function
(which means argument x of type A)
But what is [A] between layout and (x: A)?
I've been googling Scala function syntax but couldn't find it.
def layout[A](x: A) = "[" + x.toString() + "]"
It's a type parameter, meaning that the method is parameterised (some also say "generic"). Without it, the compiler would think that x: A denotes a variable of some concrete type A, and when it couldn't find any such type it would report a compile error.
This is a fairly common thing in statically typed languages; for example, Java has the same thing, only the syntax is <A>.
Parameterized methods have rules about where the types can occur, which involve the concepts of covariance and contravariance, denoted as [+A] and [-A]. Variance is definitely not in the scope of this question and is probably too much for you to handle right now, but it's an important concept, so I figured I'd just mention it, at least to let you know what those plus and minus signs mean when you see them (and you will).
Also, type parameters can be upper or lower bounded, denoted as [A <: SomeType] and [A >: SomeType]. This means that the generic parameter needs to be a subtype/supertype of another type, in this case the made-up type SomeType.
There are even more constructs that contribute extra information about the type (e.g. context bounds, denoted as [A : Foo], used for typeclass mechanism), but you'll learn about those later.
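A few hedged illustrations of these forms (my own examples, not from the original answer):
def first[A <: AnyRef](pair: (A, A)): A = pair._1 // upper bound: A must be a subtype of AnyRef
def prepend[A, B >: A](b: B, xs: List[A]): List[B] = b :: xs // lower bound: B must be a supertype of A
def maxOf[A : Ordering](xs: Seq[A]): A = xs.max // context bound: an implicit Ordering[A] must be in scope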
This means that the method is using a generic type as its parameter. Any type that has a definition for .toString can be passed to layout.
For example, you could pass both Int and String arguments to layout, since you can call .toString on both of them.
val i = 1
val s = "hi"
layout(i) // would give "[1]"
layout(s) // would give "[hi]"
Without the generic parameter, for this example you would have to write two definitions of layout: one that accepts an integer as a parameter, and one that accepts a string. Even worse: every time you needed another type you'd have to write another definition that accepts it.
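For contrast, a sketch of the duplication you would face without the type parameter (illustrative only):
def layoutInt(x: Int): String = "[" + x.toString + "]"
def layoutString(x: String): String = "[" + x + "]"
// ...and yet another definition for every further type you need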
Take a look at this example here and you'll understand it better.
I also recommend you take a look at generic classes here.
A is a type parameter. Rather than being a concrete data type itself (e.g. a case class A), it is generic, allowing any data type to be accepted by the function. So both of these will work:
layout(123f) [Float datatype] will output: "[123.0]"
layout("hello world") [String datatype] will output: "[hello world]"
Hence, whichever datatype is passed, the function will allow. These type parameters can also specify rules. These are called contravariance and covariance. Read more about them here!
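A brief hedged illustration of those two notions (my own example):
class Animal
class Dog extends Animal

class Box[+A]     // covariant: Box[Dog] is a subtype of Box[Animal]
class Feeder[-A]  // contravariant: Feeder[Animal] is a subtype of Feeder[Dog]

val b: Box[Animal] = new Box[Dog]       // compiles
val f: Feeder[Dog] = new Feeder[Animal] // compiles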

Deciphering one of the toughest scala method prototypes (slick)

Looking at the <> method in the following scala slick class, from http://slick.typesafe.com/doc/2.1.0/api/index.html#scala.slick.lifted.ToShapedValue, it reminds me of that iconic stackoverflow thread about scala prototypes.
def <>[R, U](f: (U) ⇒ R, g: (R) ⇒ Option[U])
(implicit arg0: ClassTag[R], shape: Shape[_ <: FlatShapeLevel, T, U, _]):
MappedProjection[R, U]
Can someone bold and knowledgeable provide an articulate walkthrough of that long prototype definition, carefully clarifying all of its type covariance/invariance, double parameter lists, and other advanced scala aspects?
This exercise will also greatly help dealing with similarly convoluted prototypes!
Ok, let's take a look:
class ToShapedValue[T](val value: T) extends AnyVal {
  ...
  @inline def <>[R: ClassTag, U](f: (U) ⇒ R, g: (R) ⇒ Option[U])(implicit shape: Shape[_ <: FlatShapeLevel, T, U, _]): MappedProjection[R, U]
}
The class is an AnyVal wrapper; while I can't actually see the implicit conversion from a quick look, it smells like the "pimp my library" pattern. So I'm guessing this is meant to add <> as an "extension method" onto some (or maybe all) types.
@inline is an annotation, a way of putting metadata on, well, anything; this one is a hint to the compiler that this method should be inlined. <> is the method name - plenty of things that look like "operators" are simply ordinary methods in Scala.
The documentation you link has already expanded the R: ClassTag to ordinary R and an implicit ClassTag[R] - this is a "context bound" and it's simply syntactic sugar. ClassTag is a compiler-generated thing that exists for every (concrete) type and helps with reflection, so this is a hint that the method will probably do some reflection on an R at some point.
Now, the meat: this is a generic method, parameterized by two types: [R, U]. Its arguments are two functions, f: U => R and g: R => Option[U]. This looks a bit like the functional Prism concept - a conversion from U to R that always works, and a conversion from R to U that sometimes doesn't work.
The interesting part of the signature (sort of) is the implicit shape at the end. Shape is described as a "typeclass", so this is probably best thought of as a "constraint": it limits the possible types U and R that we can call this function with, to only those for which an appropriate Shape is available.
Looking at the documentation for Shape, we see that the four type parameters are Level, Mixed, Unpacked and Packed. So the constraint is: there must be a Shape whose "level" is some subtype of FlatShapeLevel, where the Mixed type is T and the Unpacked type is U (the Packed type can be any type).
So, this is a type-level function that expresses that U is "the unpacked version of" T. To use the example from the Shape documentation again, if T is (Column[Int], Column[(Int, String)], (Int, Option[Double])) then U will be (Int, (Int, String), (Int, Option[Double])) (and it only works for FlatShapeLevel, but I'm going to make a judgement call that that's probably not important). R is, interestingly enough, completely unconstrained.
So this lets us create a MappedProjection[R, unpacked-version-of-T] from any T, by providing conversion functions in both directions. So in a simple version, maybe T is a Column[String] - a representation of a String column in a database - and we want to represent it as some application-specific type, e.g. EmailAddress. So U=String, R=EmailAddress, and we provide conversion functions in both directions: f: String => EmailAddress for reading values out of the database, and g: EmailAddress => Option[String] for writing them back. Every EmailAddress can be represented as a String (at least, they'd better be, if we want to be able to store them in the database), but not every String is a valid EmailAddress; since f is total, it has to deal with a stray value like "http://www.foo.com/" in the email address column itself (for example by failing fast), while g returning None signals to Slick that a value cannot be converted for storage.
MappedProjection itself is, sadly, undocumented. But I'm guessing it's some kind of lazy representation of a thing we can query; where we had a Column[String], now we have a pseudo-column-thing whose (underlying) type is EmailAddress. So this might allow us to write pseudo-queries like 'select from users where emailAddress.domain = "gmail.com"', which would be impossible to do directly in the database (which doesn't know which part of an email address is the domain), but is easy to do with the help of code. At least, that's my best guess at what it might do.
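For reference, the canonical documented use of <> in a Slick table definition looks roughly like this (the User and "users" names are illustrative; note how f and g line up with the signature above):
// assuming the usual profile import, e.g. import slick.driver.H2Driver.api._
case class User(id: Int, name: String)

class Users(tag: Tag) extends Table[User](tag, "users") {
  def id = column[Int]("id")
  def name = column[String]("name")
  // f = User.tupled  : ((Int, String)) => User        -- U => R
  // g = User.unapply : User => Option[(Int, String)]  -- R => Option[U]
  def * = (id, name) <> (User.tupled, User.unapply)
}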
Arguably the function could be made clearer by using a standard Prism type (e.g. the one from Monocle) rather than passing a pair of functions explicitly. Using the implicit to provide a type-level function is awkward but necessary; in a fully dependently typed language (e.g. Idris), we could write our type-level function as a function (something like def unpackedType(t: Type): Type = ...). So conceptually, this function looks something like:
def <>[R](p: Prism[R, unpackedType(T)]): MappedProjection[R, unpackedType(T)]
Hopefully this explains some of the thought process of reading a new, unfamiliar function. I don't know Slick at all, so I have no idea how accurate I am as to what this <> is used for - did I get it right?

Any reason why scala does not explicitly support dependent types?

There are path dependent types, and I think it is possible to express almost all the features of languages such as Epigram or Agda in Scala, but I'm wondering why Scala does not support this more explicitly, as it does very nicely in other areas (say, DSLs)?
Is there anything I'm missing, like "it is not necessary"?
Syntactic convenience aside, the combination of singleton types, path-dependent types and implicit values means that Scala has surprisingly good support for dependent typing, as I've tried to demonstrate in shapeless.
Scala's intrinsic support for dependent types is via path-dependent types. These allow a type to depend on a selector path through an object (i.e. value) graph, like so,
scala> class Foo { class Bar }
defined class Foo
scala> val foo1 = new Foo
foo1: Foo = Foo@24bc0658
scala> val foo2 = new Foo
foo2: Foo = Foo@6f7f757
scala> implicitly[foo1.Bar =:= foo1.Bar] // OK: equal types
res0: =:=[foo1.Bar,foo1.Bar] = <function1>
scala> implicitly[foo1.Bar =:= foo2.Bar] // Not OK: unequal types
<console>:11: error: Cannot prove that foo1.Bar =:= foo2.Bar.
implicitly[foo1.Bar =:= foo2.Bar]
In my view, the above should be enough to answer the question "Is Scala a dependently typed language?" in the positive: it's clear that here we have types which are distinguished by the values which are their prefixes.
However, it's often objected that Scala isn't a "fully" dependently typed language because it doesn't have dependent sum and product types as found in Agda or Coq or Idris as intrinsics. I think this reflects a fixation on form over fundamentals to some extent; nevertheless, I'll try and show that Scala is a lot closer to these other languages than is typically acknowledged.
Despite the terminology, dependent sum types (also known as Sigma types) are simply pairs of values where the type of the second value depends on the first value. This is directly representable in Scala,
scala> trait Sigma {
| val foo: Foo
| val bar: foo.Bar
| }
defined trait Sigma
scala> val sigma = new Sigma {
| val foo = foo1
| val bar = new foo.Bar
| }
sigma: java.lang.Object with Sigma{val bar: this.foo.Bar} = $anon$1@e3fabd8
and in fact, this is a crucial part of the encoding of dependent method types which is needed to escape from the 'Bakery of Doom' in Scala prior to 2.10 (or earlier via the experimental -Ydependent-method-types Scala compiler option).
Dependent product types (aka Pi types) are essentially functions from values to types. They are key to the representation of statically sized vectors and the other poster children for dependently typed programming languages. We can encode Pi types in Scala using a combination of path dependent types, singleton types and implicit parameters. First we define a trait which is going to represent a function from a value of type T to a type U,
scala> trait Pi[T] { type U }
defined trait Pi
We can then define a polymorphic method which uses this type,
scala> def depList[T](t: T)(implicit pi: Pi[T]): List[pi.U] = Nil
depList: [T](t: T)(implicit pi: Pi[T])List[pi.U]
(note the use of the path-dependent type pi.U in the result type List[pi.U]). Given a value of type T, this function will return a(n empty) list of values of the type corresponding to that particular T value.
Now let's define some suitable values and implicit witnesses for the functional relationships we want to hold,
scala> object Foo
defined module Foo
scala> object Bar
defined module Bar
scala> implicit val fooInt = new Pi[Foo.type] { type U = Int }
fooInt: java.lang.Object with Pi[Foo.type]{type U = Int} = $anon$1@60681a11
scala> implicit val barString = new Pi[Bar.type] { type U = String }
barString: java.lang.Object with Pi[Bar.type]{type U = String} = $anon$1@187602ae
And now here is our Pi-type-using function in action,
scala> depList(Foo)
res2: List[fooInt.U] = List()
scala> depList(Bar)
res3: List[barString.U] = List()
scala> implicitly[res2.type <:< List[Int]]
res4: <:<[res2.type,List[Int]] = <function1>
scala> implicitly[res2.type <:< List[String]]
<console>:19: error: Cannot prove that res2.type <:< List[String].
implicitly[res2.type <:< List[String]]
^
scala> implicitly[res3.type <:< List[String]]
res6: <:<[res3.type,List[String]] = <function1>
scala> implicitly[res3.type <:< List[Int]]
<console>:19: error: Cannot prove that res3.type <:< List[Int].
implicitly[res3.type <:< List[Int]]
(note that here we use Scala's <:< subtype-witnessing operator rather than =:= because res2.type and res3.type are singleton types and hence more precise than the types we are verifying on the RHS).
In practice, however, in Scala we wouldn't start by encoding Sigma and Pi types and then proceeding from there as we would in Agda or Idris. Instead we would use path-dependent types, singleton types and implicits directly. You can find numerous examples of how this plays out in shapeless: sized types, extensible records, comprehensive HLists, scrap your boilerplate, generic Zippers etc. etc.
The only remaining objection I can see is that in the above encoding of Pi types we require the singleton types of the depended-on values to be expressible. Unfortunately in Scala this is only possible for values of reference types and not for values of non-reference types (e.g. Int). This is a shame, but not an intrinsic difficulty: Scala's type checker represents the singleton types of non-reference values internally, and there have been a couple of experiments in making them directly expressible. In practice we can work around the problem with a fairly standard type-level encoding of the natural numbers.
In any case, I don't think this slight domain restriction can be used as an objection to Scala's status as a dependently typed language. If it is, then the same could be said for Dependent ML (which only allows dependencies on natural number values) which would be a bizarre conclusion.
I would assume it is because (as I know from experience, having used dependent types in the Coq proof assistant, which fully supports them but still not in a very convenient way) dependent types are a very advanced programming language feature which is really hard to get right - and can cause an exponential blowup in complexity in practice. They're still a topic of computer science research.
I believe that Scala's path-dependent types can only represent Σ-types, but not Π-types. This:
trait Pi[T] { type U }
is not exactly a Π-type. By definition, a Π-type, or dependent product, is a function whose result type depends on the argument value, representing a universal quantifier, i.e. ∀x: A, B(x). In the case above, however, it depends only on the type T, not on some value of that type. The Pi trait itself is a Σ-type, an existential quantifier, i.e. ∃x: A, B(x). The object's self-reference in this case is acting as the quantified variable. When passed in as an implicit parameter, however, it reduces to an ordinary type function, since it is resolved type-wise. The encoding of the dependent product in Scala may look like the following:
trait Sigma[T] {
  val x: T
  type U // can depend on x
}
// (t: T) => (∃ mapping(x, U), x == t) => (u: U); sadly, the refinement won't compile
def pi[T](t: T)(implicit mapping: Sigma[T] { val x = t }): mapping.U
The missing piece here is the ability to statically constrain the field x to the expected value t, effectively forming an equation representing the property of all values inhabiting the type T. Together with our Σ-types, used to express the existence of an object with a given property, a logic is formed in which our equation is a theorem to be proven.
On a side note, in real cases the theorem may be highly nontrivial, up to the point where it cannot be automatically derived from code or solved without a significant amount of effort. One can even formulate the Riemann Hypothesis this way, only to find the signature impossible to implement without actually proving it, looping forever or throwing an exception.
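Coming back to the point that Pi[T] depends only on the type T: any two values of the same type necessarily resolve the same instance, and hence the same U (a sketch reusing the Pi trait and depList from the earlier answer):
trait Pi[T] { type U }
implicit val intPi = new Pi[Int] { type U = String }

def depList[T](t: T)(implicit pi: Pi[T]): List[pi.U] = Nil

val a: List[String] = depList(1) // 1 and 2 are different values...
val b: List[String] = depList(2) // ...but both get U = String; a true Π-type could distinguish them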
The question was about using dependently typed features more directly and, in my opinion, there would be a benefit in having a more direct dependent typing approach than what Scala offers.
Current answers try to argue the question on the type-theoretical level.
I want to put a more pragmatic spin on it.
This may explain why people are divided on the level of support of dependent types in the Scala language: we may have somewhat different definitions in mind (not to say one is right and one is wrong).
This is not an attempt to answer the question of how easy it would be to turn Scala into something like Idris (I imagine very hard) or to write a library offering more direct support for Idris-like capabilities (like singletons tries to be in Haskell).
Instead, I want to emphasize the pragmatic difference between Scala and a language like Idris.
What does the code look like for value level and type level expressions?
Idris uses the same code for both; Scala uses very different code.
Scala (similarly to Haskell) may be able to encode lots of type level calculations.
This is shown by libraries like shapeless.
These libraries do it using some really impressive and clever tricks.
However, their type level code is (currently) quite different from value level expressions (I find that gap to be somewhat narrower in Haskell). Idris allows value level expressions to be used at the type level AS IS.
The obvious benefit is code reuse (you do not need to code type level expressions separately from value level ones if you need them in both places). It should be much easier to write value level code. It should be easier not to have to deal with hacks like singletons (not to mention the performance cost). You do not need to learn two things; you learn one thing.
On a pragmatic level, we end up needing fewer concepts. Type synonyms, type families, functions, ... how about just functions? In my opinion, these unification benefits go much deeper and are more than syntactic convenience.
Consider verified code. See:
https://github.com/idris-lang/Idris-dev/blob/v1.3.0/libs/contrib/Interfaces/Verified.idr
The type checker verifies proofs of monad/functor/applicative laws, and the proofs are about the actual implementations of monad/functor/applicative, not some encoded type level equivalent that may or may not be the same.
The big question is what are we proving?
The same can be done using clever encoding tricks (see the following for a Haskell version; I have not seen one for Scala):
https://blog.jle.im/entry/verified-instances-in-haskell.html
https://github.com/rpeszek/IdrisTddNotes/wiki/Play_FunctorLaws
except the types are so complicated that it is hard to see the laws; the value level expressions are converted (automatically, but still) to type level things, and you need to trust that conversion as well. There is room for error in all of this, which somewhat defeats the purpose of the compiler acting as a proof assistant.
(EDITED 2018.8.10) Speaking of proof assistance, here is another big difference between Idris and Scala. There is nothing in Scala (or Haskell) to prevent you from writing diverging proofs:
case class Void(underlying: Nothing) extends AnyVal // should be uninhabited
def impossible(): Void = impossible() // typechecks, but only because it never terminates
while Idris has the total keyword, preventing code like this from compiling.
A Scala library that tries to unify value and type level code (like Haskell's singletons) would be an interesting test of Scala's support for dependent types. Could such a library be done much better in Scala because of path-dependent types?
I am too new to Scala to answer that question myself.