Correct use of term Monoid - scala

From the following example, I think it is correct to say that String forms a monoid under the concatenation operation, since concatenation is an associative binary operation and String happens to have an identity element, the empty string "".
scala> ("" + "Jane") + "Doe" == "" + ("Jane" + "Doe")
res0: Boolean = true
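The identity law can be checked the same way:
scala> ("" + "Jane") == "Jane" && ("Jane" + "") == "Jane"
res1: Boolean = true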
From the various texts I have been reading on the subject lately, it seems that the term monoid properly refers to the combination of both the type (in this case String) and an instance of some monoid type which defines the operation and the identity element.
For example, here is a theoretical Monoid type and a concrete instance of it as seems to be commonly defined in various books/articles:-
trait Monoid[A] {
  def op(a1: A, a2: A): A
  def zero: A
}

val stringMonoid = new Monoid[String] {
  def op(a1: String, a2: String) = a1 + a2
  val zero = ""
}
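A quick way to sanity-check both laws for any such instance (an illustrative helper of my own, not something from the books):
def monoidLawsHold[A](m: Monoid[A])(a: A, b: A, c: A): Boolean =
  m.op(m.op(a, b), c) == m.op(a, m.op(b, c)) && // associativity
    m.op(m.zero, a) == a && m.op(a, m.zero) == a // left and right identity

monoidLawsHold(stringMonoid)("Jane", "Doe", "!") // true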
I know that we do not need trait Monoid[A] nor stringMonoid to be defined in the core (Scala, or rather Java) library to support my REPL output and that the example is just a tool to understand the abstract concept of a monoid.
My issue (and I am very possibly thinking about it way too much) is the purist definition. I know that the underlying java.lang.String (or rather StringBuilder) already defines the associative operation, but I do not think the identity element (in this case just the empty string "") is explicitly defined anywhere.
Question:-
Is String a monoid under the concatenation operation implicitly, just because we happen to know that using an empty string "" provides the identity? Or is it that an explicit definition of the identity element for a type is unnecessary for it to be classed as a monoid (under a particular associative binary operation)?

You may be overthinking this a little. A monoid is an abstract concept that exists outside of programming and type systems (a la category theory). Consider the definition:
A monoid is a set that is closed under an associative binary operation and has an identity element.
You have identified the String type to be a closed set with an associative binary operation (concatenation) and an identity element (the empty string). The language may not explicitly tell you what the identity element is, but that doesn't mean it doesn't exist (we know it does). The String type with the binary operation of concatenation is most certainly a monoid, because you can prove that it meets all of the aforementioned properties.
Creating a Monoid type class is merely for our convenience when it comes to working with generic data structures that operate with monoids. Regardless of whether or not the programming language (or some other library) explicitly spells out what sets with what binary operations and what identities construct monoids, the monoids can still exist.
It is important to note that the String type on its own does not constitute a monoid, as it must exist with said binary operation, etc. It may be possible that another monoid exists using the same set, but a different binary operation.
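For instance, reusing the Monoid trait from the question, the same set Int forms two different monoids depending on the operation chosen (a sketch):
val intAddition = new Monoid[Int] {
  def op(a1: Int, a2: Int) = a1 + a2
  val zero = 0
}

val intMultiplication = new Monoid[Int] {
  def op(a1: Int, a2: Int) = a1 * a2
  val zero = 1
}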

I think that you already understand the concept correctly and are indeed "thinking too much" about it ;)
A monoid, as they say in mathematics, is a triple: a set (think of a type with its values), an associative binary operator on it, and a neutral element. Define such a triple and you have defined a monoid. So the answer to
Is String a monoid under the concatenation operation implicitly, just because we happen to know that using an empty string "" provides the identity?
is simply yes. You named the set, the associative binary operation and the neutral element — bingo! You got a monoid! About
already defines the associative operation, but I do not think that there is an explicit definition of the identity element
there's a bit of confusion. You can choose different associative operations on String, each with its corresponding neutral element, and define various monoids. Concatenation is not some kind of "canonical" associative operation on String for defining a monoid; it is just probably the most obvious one.
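As a sketch of one such alternative (my own example, not from any library): let the longer string win, with ties going to the left operand. This operation is associative, and "" is still its neutral element:
val longestMonoid = new Monoid[String] {
  def op(a1: String, a2: String) = if (a2.length > a1.length) a2 else a1
  val zero = ""
}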
If you define a table as something "with four legs and a flat horizontal surface on them", then anything that fits this definition is a table, regardless of the material it's made of and other variable characteristics. Now, when do you need to "certify" explicitly that it is a table? Only when you need to use its "table properties", say, if you want to sell it and advertise that you can put things on it and they won't fall off, because the surface is guaranteed to be flat and horizontal.
Sorry if the example is kind of silly; I'm not very good at such analogies. I hope it is still helpful.
Now, about instantiating the mentioned "theoretical Monoid type". Such a type is usually called a type class. Neither the existence of such a type itself (which can be defined in various ways) nor its instance is necessary to call the triple (String, (_ ++ _), "") a monoid and reason about it as a monoid (i.e. use general monoid properties). What it is actually used for is ad-hoc polymorphism. In Scala this is done with implicits. One can, for example, define a polymorphic function
def fold[M](seq: Seq[M])(implicit m: Monoid[M]): M = seq match {
  case Seq() => m.zero
  case h +: t => m.op(h, fold(t))
}
Then, if your stringMonoid value is declared as an implicit val, whenever you use fold[String] and stringMonoid is in scope, it will use its zero and op inside. The same fold definition will work for other instances of Monoid[...].
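For example (a sketch, assuming the stringMonoid from the question is marked implicit):
implicit val stringMonoid: Monoid[String] = new Monoid[String] {
  def op(a1: String, a2: String) = a1 + a2
  val zero = ""
}

fold(Seq("Jane", " ", "Doe")) // "Jane Doe"
fold(Seq.empty[String])       // ""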
Another topic is what happens when you have several instances of Monoid[String]. Read Where does Scala look for implicits?.

To answer your question, first we need to answer another one: what is a monoid? One can look at it from at least 2 different perspectives:
Category theory perspective,
Scala programming language perspective.
It is important, though, not to mix these two perspectives. Category theory is a form of mathematics, and the monoid in category theory therefore belongs to the domain of Platonic Forms, while in the Scala programming language, according to Platonic realism, it is a Particular.
So a monoid in Scala is a mere reflection of the ideal Platonic monoid. But we are free to choose how exactly we do this reflection, because in any case the Particular is a mere approximation of a Form. This reflection is based on the expressive powers of the medium in which it happens (the Scala programming language in our case).
A monoid requires us to designate a unique instance of a given type as the "identity". I'd argue that the best way to uniquely identify an instance of some type is to use a singleton type, because types, as opposed to values, are guaranteed to be unique. Since Scala doesn't have proper singleton types (see here for discussion), it's impossible to cleanly express the monoid Form in Scala, and this is probably where your question stems from.
Scala's answer to this problem is type classes. Scala says: "Ok, I can't express the empty string as a unique type (nor 0 as the identity of numbers under addition, etc.), so I won't operate on that level at all. Instead, convince me you have a monoid type class for String, and I'll let you use it where appropriate." This is a very practical approach, but not a very pure one. Having singleton types would allow one to express an empty string as a unique type, and the string monoid like this (warning, this doesn't compile):
// Identity typeclass
trait Identity[T] {
  type Repr
  val identity: Repr
}

object Identity {
  implicit val stringIdentity = new Identity[String] {
    type Repr = "" // can't do this without singleton types support
    val identity = ""
  }
}

trait Monoid[A] {
  def op(a1: A, a2: A): A
  def zero: Identity[A]#Repr
}

object Monoid {
  implicit def stringMonoid[I](implicit
      stringIdentity: Identity[String] { type Repr = I }) = new Monoid[String] {
    def op(a1: String, a2: String): String = a1 + a2
    def zero: I = stringIdentity.identity
  }
}
I think it would be possible to get pretty close to this using shapeless, but I haven't tried.
To summarize, from the Scala language perspective, having a monoid means having an instance of the monoid type class, because that is how we, the implementers, choose to reflect the ideal monoid into Scala's reality. Your question mixes the ideal monoid with the real-world Scala monoid, which is why it is so hard to formulate and answer. The lack of singleton types in Scala forces us to make assumptions, such as that the empty string is the identity for the String type. With singleton types, we would not have to assume; we could prove this fact and use the proof in the monoid definition.
Hope this helps.

You would only need String (as implemented in the JVM, or as supplemented in the SDK) to be a proper Monoid (e.g. RichString extends Monoid[String]) if you were going to take String (or RichString) method implementations directly from Monoid, so that you would not have to implement them. That is, if we know that a String is a Monoid, then it gets all of Monoid's operations without us having to implement them. There are two problems with this scenario: first, String was implemented in the JVM long before anyone thought to recognize that String is a monoid, so all the useful methods have already been written directly into String, or StringBuilder. Second, Monoid probably does not have terribly useful methods anyway.

Related

How to define a union type that works at runtime?

Following on from this excellent set of answers on how to define union types in Scala. I've been using the Miles Sabin definition of union types, but one question remains.
How do you work with these if the type isn't known until runtime? For example:
trait inv[-A] {}

type Or[A, B] = {
  type check[X] = (inv[A] with inv[B]) <:< inv[X]
}
case class Foo[A : (Int Or String)#check](a: A)
Foo(1) // Foo[Int] = Foo(1)
Foo("hi") // Foo[String] = Foo(hi)
Foo(2.0) // Error!
This example works since the parameter A is known at compile time, and calling Foo(1) is really calling Foo[Int](1). However, what do you do if parameter A isn't known until runtime? Maybe you're parsing a file that contains the data for Foos, in which case the type parameter of Foo isn't known until you read the data. There's no easy way to set parameter A in this case.
The best solutions I've been able to come up with are:
Pattern match on the data you've read and then create different Foos based on that type. In my case this isn't feasible because my case class actually contains dozens of union types, so there'd be hundreds of combinations of types to pattern match.
Cast the value you've just read to (Int with String), so you have a single type to pass around that satisfies the type-class constraint when you create Foo with it. Then return Foo[_] instead. This puts the onus back on the user of Foo to work out the type of each field (since the fields will appear to be of type Any), but at least it defers having to know the type until the field is actually used, at which point a pattern match seems more tractable.
The second solution looks like this:
// Parses a data point; it can be either a String or an Int, so it returns Any.
def parseLine: Any

def mkFoo: Foo[_] = {
  val a = parseLine.asInstanceOf[Int with String]
  Foo(a) // passes the type constraint now
}
In practice I've ended up using the second solution, but I'm wondering if there's something better I can do?
Another way to state the problem is: What does it mean to return a Union Type? Functions can only return a single type, and the trickery we use with Miles Sabin union types is only useful for the types you pass in, not for the types you return.
PS. For context, why this is a problem in my case is that I'm generating a set of case classes from a JSON schema file. JSON naturally supports union types, so I would like my case classes to reflect that too. This works great in one direction: users creating case classes to be serialized out to JSON. But it gets sticky in the other direction: users parsing JSON files to get a set of populated case classes back.
The "standard" Scala solution to this problem is to use an ordinary discriminated-union type (ie, to forego true union types altogether):
sealed trait Foo
case class IntFoo(x: Int) extends Foo
case class StringFoo(x: String) extends Foo
This reflects the fact that, as you observe, the particular type of the member is a runtime value; the JVM type-tag of the Foo instance provides this runtime value.
Miles Sabin's implementation of union types is very clever, but I'm not sure if it provides any practical benefit, because it only restricts the type of thing that can go into a Foo, but provides the user of a Foo with no computable version of that restriction, in the way a match provides you with a computable version of the sealed trait. In general, for a restriction to be useful, it needs two sides: a check that only the right things are put in, and an extractor (aka an eliminator) that allows the same right things to come out the other end.
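For illustration, here is a sketch of that eliminator side, which the sealed trait gives you via exhaustive matching:
def describe(foo: Foo): String = foo match {
  case IntFoo(x)    => s"an Int: $x"
  case StringFoo(s) => s"a String: $s"
  // no default case needed: Foo is sealed, so the compiler knows these are
  // the only possibilities and warns if one is missing
}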
Perhaps if you gave some explanation of why you're looking for a purer union type it would illuminate whether regular discriminated unions are sufficient or if you really need something more.
There's a reason every JSON parser for Scala requires well defined types into which the JSON will be converted, even if some fields have to be dropped: you cannot work with something you don't know the type of.
To give an example, say you have a, and maybe a is a String, maybe it's an Int, but you don't know which. What computation could you possibly perform with a without knowing its type? How could your code compute the sum of all a's, for instance, if you didn't know in advance that a is a number?
Generally, the answer is that you want to perform user-driven data manipulation at runtime over data with unknown characteristics: the user sees that the field is a number and decides they want the sum of that field. Fine, but if so, you are going about it the wrong way.
There is a well-defined way to represent JSON data in Scala (and, for that matter, any data with the same characteristics as JSON): a hierarchy of classes. A JSON value may be a JSON object, an array, or one of a number of primitives. A JSON object contains a list of key/value pairs, whose keys are JSON strings and whose values are JSON values. And so on. This is easy to represent, and there are many libraries doing so already. In fact, there are so many that there's a project called Json4s which presents a unified API that is implemented by many of the aforementioned libraries.
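A minimal sketch of such a hierarchy (the names here are illustrative, not taken from any particular library):
sealed trait JsonValue
case object JsonNull extends JsonValue
final case class JsonBool(value: Boolean) extends JsonValue
final case class JsonNumber(value: BigDecimal) extends JsonValue
final case class JsonString(value: String) extends JsonValue
final case class JsonArray(items: List[JsonValue]) extends JsonValue
final case class JsonObject(fields: List[(String, JsonValue)]) extends JsonValue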
Things like the records which Miles Sabin's Shapeless library provide are intended to be used when the input doesn't have a well defined schema, but the program knows what it needs from that input. And, yes, the program might know what to do with a if it is an Int or a String, but not every possible value.
Scala 3 (due mid-2020), based on Dotty, will implement the union types proposal from September 2018.
You can see it in "A Tour of Scala 3" (June 2019):
Union types provide ad-hoc combinations of types:
Subsetting = subtyping
No boxing overhead
case class UserName(name: String)
case class Password(hash: Hash)

def help(id: UserName | Password) = {
  val user = id match {
    case UserName(name) => lookupName(name)
    case Password(hash) => lookupPassword(hash)
  }
  ...
}
Union types also work with singleton types, which is great for JS interop:
type Command = "Click" | "Drag" | "KeyPressed"

def handleEvent(kind: Command) = kind match {
  case "Click"      => MouseClick()
  case "Drag"       => MoveTo()
  case "KeyPressed" => KeyPressed()
}

The purpose of type classes in Haskell vs the purpose of traits in Scala

I am trying to understand how to think about type classes in Haskell versus traits in Scala.
My understanding is that type classes are primarily important at compile time in Haskell and play no further role at runtime; traits in Scala, on the other hand, are important both at compile time and at run time. I want to illustrate this idea with a simple example, and I want to know if this viewpoint of mine is correct or not.
First, let us consider type classes in Haskell:
Let's take a simple example. The type class Eq.
For example, Int and Char are both instances of Eq. So it is possible to create a polymorphic List that is also an instance of Eq and can either contain Ints or Chars but not both in the same List.
My question is : is this the only reason why type classes exist in Haskell?
The same question in other words:
Type classes make it possible to create polymorphic types (in this example a polymorphic List) that support operations defined in a given type class (in this example the operation == defined in the type class Eq), but that is their only reason for existence, according to my understanding. Is this understanding of mine correct?
Is there any other reason why type classes exist in (standard) Haskell?
Is there any other use case in which type classes are useful in standard Haskell? I cannot seem to find any.
Since Haskell's Lists are homogeneous, it is not possible to put Char and Int into the same list. So the usefulness of type classes, according to my understanding, is exhausted at compile time. Is this understanding of mine correct?
Now, let's consider the analogous List example in Scala:
Let's define a trait Eq with an equals method on it.
Now let's make Char and Int implement the trait Eq.
Now it is possible to create a List[Eq] in Scala that accepts both Chars and Ints into the same List (note that this, putting different types of elements into the same List, is not possible in Haskell, at least not in standard Haskell 98 without extensions)!
In the case of Haskell's List, the existence of type classes is important/useful only for type checking at compile time, according to my understanding.
In contrast, the existence of traits in Scala is important both at compile time for type checking and at run time for polymorphic dispatch on the actual runtime type of the objects in the List when comparing two Lists for equality.
So, based on this simple example, I came to the conclusion that in Haskell type classes are primarily important/used at compile time; in contrast, Scala's traits are important/used both at compile time and at run time.
Is this conclusion of mine correct?
If not, why not ?
EDIT:
Scala code in response to n.m.'s comments:
case class MyInt(i: Int) {
  override def equals(b: Any) = i == b.asInstanceOf[MyInt].i
}

case class MyChar(c: Char) {
  override def equals(a: Any) = c == a.asInstanceOf[MyChar].c
}

object Test {
  def main(args: Array[String]) {
    val l1 = List(MyInt(1), MyInt(2), MyChar('a'), MyChar('b'))
    val l2 = List(MyInt(1), MyInt(2), MyChar('a'), MyChar('b'))
    val l3 = List(MyInt(1), MyInt(2), MyChar('a'), MyChar('c'))
    println(l1 == l1)
    println(l1 == l3)
  }
}
This prints:
true
false
I will comment on the Haskell side.
Type classes bring restricted polymorphism to Haskell, wherein a type variable a can still be quantified universally, but ranges over only a subset of all the types -- namely, the types for which an instance of the type class is available.
Why is restricted polymorphism useful? A nice example is the equality operator
(==) :: ?????
What should its type be? Intuitively, it takes two values of the same type and returns a boolean, so:
(==) :: a -> a -> Bool -- (1)
But the typing above is not entirely honest, since it allows one to apply == to any type a, including function types!
((\x -> x + x) :: Integer -> Integer) == ((\x -> 2 * x) :: Integer -> Integer)
The above would pass type checking if (1) were the typing for (==), since both arguments are of the same type a = (Integer -> Integer). However, we cannot effectively compare two functions: well-known computability results tell us that there is no algorithm to do that in general.
So, what could we do to implement (==)?
Option 1: at run time, if a function (or any other value involving functions -- such as a list of functions) is found to be passed to (==), raise an exception. This is what e.g. ML does. Typed programs can now "go wrong", despite checking types at compile time.
Option 2: introduce a new kind of polymorphism, restricting a to the function-free types. For instance, we could have (==) :: forall-non-fun a. a -> a -> Bool so that comparing functions yields a type error. Haskell exploits type classes to obtain exactly that.
So, Haskell type classes allow one to type (==) "honestly", ensuring no error at run time, and without being overly restrictive. Of course, the power of type classes goes far beyond that but, at least in my own view, their primary purpose is to allow restricted polymorphism, in a very general and flexible way. Indeed, with type classes the programmer can define their own restrictions on the universal type quantifications.
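For comparison, here is a sketch of the same restriction expressed in Scala with a type class (the names are mine, not from any library):
trait Eq[A] { def eqv(a: A, b: A): Boolean }

implicit val intEq: Eq[Int] = new Eq[Int] {
  def eqv(a: Int, b: Int): Boolean = a == b
}

def areEqual[A](a: A, b: A)(implicit ev: Eq[A]): Boolean = ev.eqv(a, b)

areEqual(1, 2) // compiles: an Eq[Int] instance is in scope
// areEqual((x: Int) => x, (x: Int) => 2 * x) // rejected: no Eq[Int => Int] instance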

Primitive types are not traited Immutable in scala?

Can anyone please share insight into the trait Immutable in Scala? At first glance I thought this would be a nice control structure to limit a class I'm building, but oddly I noticed that primitive types do not extend it. Is there a reason for this? Is there a way to bind the syntax to Immutable or AnyVal?
class Test {
  def test[T <: Immutable](x: T) = {
    println("passes " + x)
  }

  case class X(s: String) extends Immutable
  test(X("hello")) // passes
  // test("fail") - does not pass the compiler
}
The only direct subtypes of Immutable in the Scala core library are:
collection.immutable.Traversable
collection.parallel.immutable.ParIterable
Nothing else refers to Immutable at all.
Immutable hasn't been changed since it was added in 2009 in Martin Odersky's "massive new collections checkin". I'm searching through that commit, and it looks like Immutable was never even used as a bound when it was first introduced either.
Honestly, I doubt there's much intent behind these traits anymore. Odersky probably planned to use Immutable to bound the type arguments on immutable collections, and then thought better of it. But that's just my speculation.
So-called primitive types (Boolean, Byte, Char, Short, Int, Long, Float, Double) are intrinsically immutable. 5 is 5 is 5. You cannot do anything to 5 to turn it into anything that is not 5.
Otherwise, immutability is a property of how a value is stored. If stored in a var, that var may be replaced freely with a new value (of a compatible type). By extension, constructed types (classes, traits and objects) may be either immutable or mutable depending on whether they allow any of their internal state to be altered following construction.
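A minimal sketch of that distinction:
class MutablePoint(var x: Int, var y: Int)   // internal state can change after construction
class ImmutablePoint(val x: Int, val y: Int) // no way to alter state once constructed

val p = new MutablePoint(1, 2)
p.x = 42  // mutates the object itself

var n = 5 // the var can be rebound...
n = 6     // ...but the value 5 itself was never mutated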
Java's String (also used as Scala's String) is immutable.
However, none of this has anything to do with your example, since you did not demonstrate mutability. You simply showed what happens when one applies the + method of one value to another value.
While it is certainly possible that one can implement a + method that mutates its (apparent) left-hand operand, one rarely does that. If there's a need for that kind of mutation, one would conventionally define the += method instead.
+ is somewhat special in that it may be applied to any value (if the argument, i.e. the right-hand operand, is a String) by virtue of an implicit conversion to a special class that defines +(s: String), so that the string-concatenation interpretation of + may be applied. In other words, if you write e1 + "e2" and the type of the expression e1 does not define +, then Scala will convert e1 to String and concatenate it with "e2".
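A small illustration of that conversion (note that it is deprecated in recent Scala 2 versions):
class Box // defines no + method of its own

val b = new Box
val s = b + " in a sentence" // compiles: b is converted so that its toString
                             // is concatenated with the String argument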

Why are value classes restricted to AnyVal?

As far as I understand, value classes in Scala are just there to wrap primitive types like Int or Boolean into another type without introducing additional memory usage. So they are basically used as a lightweight alternative to ordinary classes.
That reminds me of Haskell's newtype notation which is also used to wrap existing types in new ones, thus introducing a new interface to some data without consuming additional space (to see the similarity of both languages consider for instance the restriction to one "constructor" with one field both in Haskell and in Scala).
What I am wondering is why the concept of introducing new types that get inlined by the compiler is not generalized to Haskell's approach of having zero-overhead type wrappers for any kind of type. Why did the Scala guys stick to primitive types (aka AnyVal) here?
Or is there already a way in Scala to also define such wrappers for Scala.AnyRef types?
They're not limited to AnyVal.
implicit class RichOptionPair[A, B](val o: Option[(A, B)]) extends AnyVal {
  def ofold[C](f: (A, B) => C) = o map { case (a, b) => f(a, b) }
}
scala> Some("fish",5).ofold(_ * _)
res0: Option[String] = Some(fishfishfishfishfish)
There are various limitations on value classes that make them act like lightweight wrappers, but only being able to wrap primitives is not one of them.
The reasoning is documented in SIP-15 (Scala Improvement Process). As Alexey Romanov pointed out in his comment, the idea was to find an expression using existing keywords that would allow the compiler to determine this situation.
In order for the compiler to perform the inlining, several constraints apply, such as the wrapping class being "ephemeral" (no field or object members, constructor body etc.). Your suggestion of automatically generating inlining classes has at least two problems:
The compiler would need to go through the whole list of constraints for each class. And because the status as a value class would be implicit, it could flip when members are added to the class at a later point, breaking binary compatibility.
More constraints are added by the compiler, e.g. the value class becomes final, prohibiting inheritance. So you would have to add these constraints to any class that wants to be inlinable that way, and then you gain nothing but extra verbosity.
One could think of other hypothetical constructs, e.g. val class Meter(underlying: Double) { ... }, but the advantage of extends AnyVal, IMO, is that no syntactic extensions are needed. Also, all primitive types already extend AnyVal, so there is a nice analogy (no reference, no inheritance, effective representation, etc.).
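For instance, the hypothetical Meter written with the adopted syntax looks like this (a sketch):
class Meter(val underlying: Double) extends AnyVal {
  def +(other: Meter): Meter = new Meter(underlying + other.underlying)
}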

Any reason why scala does not explicitly support dependent types?

There are path-dependent types, and I think it is possible to express almost all the features of languages such as Epigram or Agda in Scala, but I'm wondering why Scala does not support this more explicitly, as it does very nicely in other areas (say, DSLs)?
Anything I'm missing like "it is not necessary" ?
Syntactic convenience aside, the combination of singleton types, path-dependent types and implicit values means that Scala has surprisingly good support for dependent typing, as I've tried to demonstrate in shapeless.
Scala's intrinsic support for dependent types is via path-dependent types. These allow a type to depend on a selector path through an object- (ie. value-) graph like so,
scala> class Foo { class Bar }
defined class Foo
scala> val foo1 = new Foo
foo1: Foo = Foo@24bc0658
scala> val foo2 = new Foo
foo2: Foo = Foo@6f7f757
scala> implicitly[foo1.Bar =:= foo1.Bar] // OK: equal types
res0: =:=[foo1.Bar,foo1.Bar] = <function1>
scala> implicitly[foo1.Bar =:= foo2.Bar] // Not OK: unequal types
<console>:11: error: Cannot prove that foo1.Bar =:= foo2.Bar.
implicitly[foo1.Bar =:= foo2.Bar]
In my view, the above should be enough to answer the question "Is Scala a dependently typed language?" in the positive: it's clear that here we have types which are distinguished by the values which are their prefixes.
However, it's often objected that Scala isn't a "fully" dependently type language because it doesn't have dependent sum and product types as found in Agda or Coq or Idris as intrinsics. I think this reflects a fixation on form over fundamentals to some extent, nevertheless, I'll try and show that Scala is a lot closer to these other languages than is typically acknowledged.
Despite the terminology, a dependent sum type (also known as a Sigma type) is simply a pair of values where the type of the second value depends on the first value. This is directly representable in Scala,
scala> trait Sigma {
| val foo: Foo
| val bar: foo.Bar
| }
defined trait Sigma
scala> val sigma = new Sigma {
| val foo = foo1
| val bar = new foo.Bar
| }
sigma: java.lang.Object with Sigma{val bar: this.foo.Bar} = $anon$1@e3fabd8
and in fact, this is a crucial part of the encoding of dependent method types which is needed to escape from the 'Bakery of Doom' in Scala prior to 2.10 (or earlier via the experimental -Ydependent-method-types Scala compiler option).
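A dependent method type in its simplest form (a sketch, reusing the Foo defined above):
// the result type depends on the particular argument value f, not just its type
def makeBar(f: Foo): f.Bar = new f.Bar

val b: foo1.Bar = makeBar(foo1) // typechecks: the result is tied to foo1
// val b2: foo2.Bar = makeBar(foo1) // rejected: wrong prefix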
Dependent product types (aka Pi types) are essentially functions from values to types. They are key to the representation of statically sized vectors and the other poster children for dependently typed programming languages. We can encode Pi types in Scala using a combination of path dependent types, singleton types and implicit parameters. First we define a trait which is going to represent a function from a value of type T to a type U,
scala> trait Pi[T] { type U }
defined trait Pi
We can then define a polymorphic method which uses this type,
scala> def depList[T](t: T)(implicit pi: Pi[T]): List[pi.U] = Nil
depList: [T](t: T)(implicit pi: Pi[T])List[pi.U]
(note the use of the path-dependent type pi.U in the result type List[pi.U]). Given a value of type T, this function will return a(n empty) list of values of the type corresponding to that particular T value.
Now let's define some suitable values and implicit witnesses for the functional relationships we want to hold,
scala> object Foo
defined module Foo
scala> object Bar
defined module Bar
scala> implicit val fooInt = new Pi[Foo.type] { type U = Int }
fooInt: java.lang.Object with Pi[Foo.type]{type U = Int} = $anon$1@60681a11
scala> implicit val barString = new Pi[Bar.type] { type U = String }
barString: java.lang.Object with Pi[Bar.type]{type U = String} = $anon$1@187602ae
And now here is our Pi-type-using function in action,
scala> depList(Foo)
res2: List[fooInt.U] = List()
scala> depList(Bar)
res3: List[barString.U] = List()
scala> implicitly[res2.type <:< List[Int]]
res4: <:<[res2.type,List[Int]] = <function1>
scala> implicitly[res2.type <:< List[String]]
<console>:19: error: Cannot prove that res2.type <:< List[String].
implicitly[res2.type <:< List[String]]
^
scala> implicitly[res3.type <:< List[String]]
res6: <:<[res3.type,List[String]] = <function1>
scala> implicitly[res3.type <:< List[Int]]
<console>:19: error: Cannot prove that res3.type <:< List[Int].
implicitly[res3.type <:< List[Int]]
(note that here we use Scala's <:< subtype-witnessing operator rather than =:= because res2.type and res3.type are singleton types and hence more precise than the types we are verifying on the RHS).
In practice, however, in Scala we wouldn't start by encoding Sigma and Pi types and then proceeding from there as we would in Agda or Idris. Instead we would use path-dependent types, singleton types and implicits directly. You can find numerous examples of how this plays out in shapeless: sized types, extensible records, comprehensive HLists, scrap your boilerplate, generic Zippers etc. etc.
The only remaining objection I can see is that in the above encoding of Pi types we require the singleton types of the depended-on values to be expressible. Unfortunately in Scala this is only possible for values of reference types and not for values of non-reference types (e.g. Int). This is a shame, but not an intrinsic difficulty: Scala's type checker represents the singleton types of non-reference values internally, and there have been a couple of experiments in making them directly expressible. In practice we can work around the problem with a fairly standard type-level encoding of the natural numbers.
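That encoding looks like this (a sketch; shapeless ships a full version as shapeless.Nat):
sealed trait Nat
sealed trait _0 extends Nat
sealed trait Succ[N <: Nat] extends Nat

type _1 = Succ[_0]
type _2 = Succ[_1]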
In any case, I don't think this slight domain restriction can be used as an objection to Scala's status as a dependently typed language. If it is, then the same could be said for Dependent ML (which only allows dependencies on natural number values) which would be a bizarre conclusion.
I would assume it is because (as I know from experience, having used dependent types in the Coq proof assistant, which fully supports them, though still not in a very convenient way) dependent types are a very advanced programming language feature which is really hard to get right - and one that can cause an exponential blowup in complexity in practice. They're still a topic of computer science research.
I believe that Scala's path-dependent types can only represent Σ-types, but not Π-types. This:
trait Pi[T] { type U }
is not exactly a Π-type. By definition, a Π-type, or dependent product, is a function whose result type depends on the argument's value, representing a universal quantifier, i.e. ∀x: A, B(x). In the case above, however, it depends only on the type T, not on some value of that type. The Pi trait itself is a Σ-type, an existential quantifier, i.e. ∃x: A, B(x). The object's self-reference in this case acts as the quantified variable. When passed in as an implicit parameter, however, it reduces to an ordinary type function, since it is resolved type-wise. An encoding of the dependent product in Scala may look like the following:
trait Sigma[T] {
  val x: T
  type U // can depend on x
}

// (t: T) => (∃ mapping(x, U), x == t) => (u: U); sadly, the refinement won't compile
def pi[T](t: T)(implicit mapping: Sigma[T] { val x = t }): mapping.U
The missing piece here is the ability to statically constrain the field x to the expected value t, effectively forming an equation representing the property of all values inhabiting the type T. Together with our Σ-types, used to express the existence of an object with a given property, a logic is formed in which our equation is a theorem to be proven.
On a side note, in a real case the theorem may be highly nontrivial, up to the point where it cannot be automatically derived from code or solved without a significant amount of effort. One can even formulate the Riemann Hypothesis this way, only to find the signature impossible to implement without actually proving it, looping forever, or throwing an exception.
The question was about using dependently typed features more directly, and in my opinion there would be a benefit in having a more direct dependent typing approach than what Scala offers.
Current answers try to argue the question on the type-theoretical level. I want to put a more pragmatic spin on it. This may explain why people are divided on the level of support for dependent types in the Scala language: we may have somewhat different definitions in mind (not to say one is right and one is wrong).
This is not an attempt to answer the question of how easy it would be to turn Scala into something like Idris (I imagine very hard) or to write a library offering more direct support for Idris-like capabilities (like singletons tries to be in Haskell). Instead, I want to emphasize the pragmatic difference between Scala and a language like Idris.
What are the code bits for value-level and type-level expressions? Idris uses the same code; Scala uses very different code.
Scala (similarly to Haskell) may be able to encode lots of type-level calculations. This is shown by libraries like shapeless. These libraries do it using some really impressive and clever tricks. However, their type-level code is (currently) quite different from value-level expressions (I find that gap to be somewhat narrower in Haskell). Idris allows you to use value-level expressions at the type level as is.
The obvious benefit is code reuse (you do not need to code type-level expressions separately from value-level ones if you need them in both places). It should be way easier to write value-level code. It should be easier not to have to deal with hacks like singletons (not to mention the performance cost). You do not need to learn two things; you learn one thing.
On a pragmatic level, we end up needing fewer concepts. Type synonyms, type families, functions, ... how about just functions? In my opinion, these unifying benefits go much deeper and are more than syntactic convenience.
Consider verified code. See:
https://github.com/idris-lang/Idris-dev/blob/v1.3.0/libs/contrib/Interfaces/Verified.idr
The type checker verifies proofs of the monad/functor/applicative laws, and the proofs are about the actual implementations of monad/functor/applicative, not some encoded type-level equivalent that may or may not be the same.
The big question is: what are we proving?
The same can be done using clever encoding tricks (see the following for a Haskell version; I have not seen one for Scala):
https://blog.jle.im/entry/verified-instances-in-haskell.html
https://github.com/rpeszek/IdrisTddNotes/wiki/Play_FunctorLaws
except that the types are so complicated that it is hard to see the laws, the value-level expressions are converted (automatically, but still) to type-level things, and you need to trust that conversion as well. There is room for error in all of this, which rather defeats the purpose of the compiler acting as a proof assistant.
(EDITED 2018-08-10) Speaking of proof assistance, here is another big difference between Idris and Scala. There is nothing in Scala (or Haskell) that can prevent you from writing diverging proofs:
case class Void(underlying: Nothing) extends AnyVal // should be uninhabited
def impossible(): Void = impossible()
while Idris has the total keyword, which prevents code like this from compiling.
A Scala library that tries to unify value- and type-level code (like Haskell's singletons) would be an interesting test of Scala's support for dependent types. Could such a library be done much better in Scala because of path-dependent types?
I am too new to Scala to answer that question myself.