How are primitive types in Scala objects if we do not use the word "new" to instantiate instances of those primitives? Programming in Scala by Martin Odersky describes this as a "trick" that makes these value classes both abstract and final, which did not quite make sense to me: how are we able to make an instance of these classes if they are abstract? And if that same primitive literal is stored somewhere, say in a variable, does that make the variable an object?
I assume that you are using Scala 2.13, which implements literal types. For this explanation you can think of type and class as synonyms, although in reality they are different concepts.
To put it all together, it is worth treating each primitive type as a set of subtypes, each of which represents the type of one single literal value.
So the literal 1 is a value and a type at the same time (the instance 1 of the type 1), and it is a subtype of the value class Int.
Let's prove that 1 is a subtype of Int by using implicitly:
implicitly[1 <:< Int] // compiles
The same, but using a val:
val one: 1 = 1
implicitly[one.type <:< Int] // compiles
So one is, in a sense, an instance (object) of the type 1 (and an instance of the type Int at the same time, because Int is a supertype of 1). You can use this value the same way as any other object (pass it to a function, assign it to other vals, etc.).
val one: 1 = 1
val oneMore: 1 = one
val oneMoreGeneric: Int = one
val oneNew: 1 = 1
We can assume that all these vals contain the same instance of one single object, because from a practical perspective it doesn't actually matter whether it is the same object or not.
Technically it's not an object at all, because primitives come from the Java (JVM) world, where primitives are not objects; they are a different kind of entity.
The Scala language tries to unify these two concepts into one (everything is a class), so developers don't have to think too much about the differences.
But there are still some differences backstage. Each value class is a subtype of AnyVal, while regular classes are subtypes of AnyRef.
implicitly[1 <:< AnyVal] // compiles
implicitly[Int <:< AnyVal] // compiles

trait AnyTrait
implicitly[AnyTrait <:< AnyVal] // fails to compile
implicitly[AnyTrait <:< AnyRef] // compiles
In addition, because of their non-class nature in the JVM, you can't extend value classes like regular classes or use new to create instances (the Scala compiler emulates new by itself). That's why, from the perspective of extending value classes, you should think of them as final, and from the perspective of creating instances manually, you should think of them as abstract. From most other perspectives they behave like any other regular class.
So the Scala compiler can, in a sense, extend Int with the types 1, 2, 3, ... and create instances of them for vals, but developers can't do this manually.
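For example (my own small sketch; the error messages are paraphrased):

val n: Int = 42             // fine: the compiler materialises the value for us
// new Int                  // does not compile: Int is "abstract", so no manual instantiation
// class MyInt extends Int  // does not compile: Int is "final", so no user-defined subclasses
val one: 1 = 1              // yet the compiler itself treats literals as subtypes of Int (2.13)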
Part I
Suppose I have a type class trait Show[T] { def print(t: T): String } with instances for String and Int. Suppose I have a value whose specific type is known only at runtime:
val x: Any = ...
How do I get the appropriate typeclass instance (at runtime, since we don't know the type statically) and do something with it?
Note that it's inadequate to define a method that literally just gives us the typeclass instance:
def instance(x: Any): Show[_]
Since Show.print requires a statically known argument type T, we still can't do anything with the result of instance. So really, we need to be able to dynamically dispatch to an already-defined function that uses the instance, such as the following:
def display[T](t: T)(implicit show: Show[T]) = "show: " + show.print(t) + "\n"
So assuming display is defined, how do we invoke display, passing along an appropriate Show instance? I.e. something that invokes display(x) properly.
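For concreteness, here is a hand-enumerated baseline (my own sketch mirroring the definitions above, not Miles Sabin's technique). It works, but every supported type must be listed by hand, which is exactly the part I'd like a library or macro to automate:

trait Show[T] { def print(t: T): String }

implicit val showString: Show[String] =
  new Show[String] { def print(s: String) = "\"" + s + "\"" }
implicit val showInt: Show[Int] =
  new Show[Int] { def print(i: Int) = i.toString }

def display[T](t: T)(implicit show: Show[T]) = "show: " + show.print(t) + "\n"

// Each case recovers a static type, so the right Show instance is
// resolved at compile time per branch.
def displayAny(x: Any): String = x match {
  case s: String => display(s)
  case i: Int    => display(i)
  case other     => sys.error(s"no Show instance for ${other.getClass}")
}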
Miles Sabin accomplishes this here using runtime compilation (Scala eval), as an example of "staging", but with only sparse documentation as to what's going on:
https://github.com/milessabin/shapeless/blob/master/examples/src/main/scala/shapeless/examples/staging.scala
Can Miles's approach be put into a library? Also, what are the limitations of this approach e.g. with respect to generic types like Seq[T]?
Part II
Now suppose T is bounded by a sealed type (such that it's possible to enumerate all the sub-types):
trait Show[T <: Foo]
sealed trait Foo
case class Alpha(..) extends Foo
case class Beta(..) extends Foo
In this case, can we do it with a macro instead of runtime compilation? And can this functionality be provided in some library?
I mostly care about Scala 2.12, but it's worth mentioning if a solution works in 2.11 or 2.10.
I'm confused about how Scala's Any relates to java.lang.Object. I know that in Scala AnyRef corresponds to Object, but it seems to make a difference whether the method (which takes java.lang.Object) is defined in a Java class or a Scala class:
the java class:
public class JavaClass {
  public static void method(Object input) {
  }
}
the scala application:
object ScalaObject extends App {
  def method(input: java.lang.Object) = {}

  val a: Any = null
  method(a)           // does not compile
  JavaClass.method(a) // compiles
}
So if the method is defined in a Java class, the compiler allows me to pass a variable of type Any. Why is that?
The compiler tries to "make up" for the difference between Scala's and Java's type systems. In Scala, Object =:= AnyRef (they're aliases) and AnyRef <: Any. Therefore, a Scala method that takes Object or AnyRef cannot take an Any or an AnyVal. If you wanted a method that worked on everything, well, then you would have written Any, right?
However, Java methods that take Object are normally meant to work on all values, whether they be actual Objects or primitives (int, long, etc.), and they work due to the boxing conversion of primitives into Objects. Primitives and Object do not have a common supertype like they do in Scala. The Java type system is not expressive enough to differentiate "I only want actual objects," from "I will take anything, be they object or primitive."
Therefore, the Scala compiler patches this up by treating Java methods that take Object as methods that take Any. This feature exists simply to ease interop between the languages. It won't apply this transformation to Scala code, though, because if you wanted that behavior you would have written Any instead of Object.
The reason is that an Any can be either an AnyRef or an AnyVal, while method can only accept objects, i.e. AnyRef. If you change the type of a to AnyRef, it is going to work:
def method(input: java.lang.Object) = {}
val a: AnyRef = new Object
method(a)
When calling the static Java method, the Scala compiler will turn Any into Object, which also includes boxing of AnyVal values.
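For example (a small sketch reusing JavaClass from the question, assuming it is on the classpath), even a bare primitive is accepted and boxed:

val n: Int = 42
JavaClass.method(n)       // compiles: n is boxed to java.lang.Integer
JavaClass.method("text")  // compiles: a String is already an Object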
Value classes can be used to achieve type safety without the runtime overhead of extra object allocation (boxing).
I had the impression that at runtime such types/classes would "not exist" and would be seen as their underlying simple types (for instance, a value class case class X(i: Int) extends AnyVal would be a plain Int at runtime).
But if you call the .toString method on a value class instance, it prints something like:
scala> val myValueClass = X(3)
myValueClass: X = 3
scala> myValueClass.toString
res5: String = X(3)
so I guess the compiler includes some information after all?
Not really. The compiler creates a static method (in Scala this corresponds to the class's companion object) which is called with your int value as a parameter in order to simulate calling a method on your value class-typed object.
Your value class itself only exists in the source code. In the compiled bytecode an actual primitive int is used, and static methods are called instead of creating new object instances and making real method calls. You can read more about this mechanism here.
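A simplified sketch of what that means (the method and the synthetic name below are only illustrative of the scheme, not the compiler's exact output):

case class X(i: Int) extends AnyVal {
  def doubled: Int = i * 2
}

// A call like X(3).doubled is compiled to (roughly) a static call on the
// companion, passing the bare Int instead of an allocated X:
//   X.doubled$extension(3)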
Value classes are designed so that adding or removing extends AnyVal (where legal) shouldn't change the results of calculations (except that even non-case value classes get equals and hashCode defined automatically, like case classes). This requires that in some circumstances the wrapper class survives at runtime, e.g.
def toString(x: Any) = x.toString
toString(myValueClass)
but the situation in your question isn't one of them.
http://docs.scala-lang.org/sips/completed/value-classes.html#expansion-of-value-classes explains more precisely how value classes are implemented and is useful to see in what cases they survive, though some details may have changed since.
As I understand from this blog post, "type classes" in Scala are just a "pattern" implemented with traits and implicit adapters.
As the blog says, if I have a trait A and an adapter B -> A, then I can invoke a function that requires an argument of type A with an argument of type B, without invoking this adapter explicitly.
I find it nice but not particularly useful. Could you give a use case/example that shows what this feature is useful for?
One use case, as requested...
Imagine you have a list of things; they could be integers, floating-point numbers, matrices, strings, waveforms, etc. Given this list, you want to add up the contents.
One way to do this would be to have some Addable trait that must be inherited by every single type that can be added together, or an implicit conversion to an Addable if dealing with objects from a third party library that you can't retrofit interfaces to.
This approach quickly becomes overwhelming when you also want to add other such operations that can be performed on a list of objects. It also doesn't work well if you need alternatives (for example, does adding two waveforms concatenate them, or overlay them?). The solution is ad-hoc polymorphism, where you can pick and choose behaviour to be retrofitted to existing types.
For the original problem then, you could implement an Addable type class:
trait Addable[T] {
  def zero: T
  def append(a: T, b: T): T
}
//yup, it's our friend the monoid, with a different name!
You can then create implicit subclassed instances of this, corresponding to each type that you wish to make addable:
implicit object IntIsAddable extends Addable[Int] {
  def zero = 0
  def append(a: Int, b: Int) = a + b
}

implicit object StringIsAddable extends Addable[String] {
  def zero = ""
  def append(a: String, b: String) = a + b
}
//etc...
The method to sum a list then becomes trivial to write...
def sum[T](xs: List[T])(implicit addable: Addable[T]) =
  xs.foldLeft(addable.zero)(addable.append)

//or the same thing, using context bounds:

def sum[T : Addable](xs: List[T]) = {
  val addable = implicitly[Addable[T]]
  xs.foldLeft(addable.zero)(addable.append)
}
The beauty of this approach is that you can supply an alternative definition of some typeclass, either controlling the implicit you want in scope via imports, or by explicitly providing the otherwise implicit argument. So it becomes possible to provide different ways of adding waveforms, or to specify modulo arithmetic for integer addition. It's also fairly painless to add a type from some 3rd-party library to your typeclass.
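For instance (a small sketch building on Addable and sum above; the modulo-5 instance is purely illustrative):

val addableMod5: Addable[Int] = new Addable[Int] {
  def zero = 0
  def append(a: Int, b: Int) = (a + b) % 5
}

sum(List(3, 4, 6))              // 13, using the implicit IntIsAddable from above
sum(List(3, 4, 6))(addableMod5) // 3, using the explicitly supplied modulo instance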
Incidentally, this is exactly the approach taken by the 2.8 collections API, though the sum method is defined on TraversableLike instead of on List, and the type class is Numeric (which also contains a few more operations than just zero and append).
Reread the first comment there:
A crucial distinction between type classes and interfaces is that for class A to be a "member" of an interface it must declare so at the site of its own definition. By contrast, any type can be added to a type class at any time, provided you can provide the required definitions, and so the members of a type class at any given time are dependent on the current scope. Therefore we don't care if the creator of A anticipated the type class we want it to belong to; if not we can simply create our own definition showing that it does indeed belong, and then use it accordingly. So this not only provides a better solution than adapters, in some sense it obviates the whole problem adapters were meant to address.
I think this is the most important advantage of type classes.
Also, type classes properly handle cases where the operation doesn't take an argument of the type we are dispatching on, or takes more than one. E.g. consider this type class:
case class Default[T](default: T)

object Default {
  implicit def IntDefault: Default[Int] = Default(0)
  implicit def OptionDefault[T]: Default[Option[T]] = Default(None)
  ...
}
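A usage sketch (getOrDefault is my own illustrative helper): the function needs a value of type T without ever receiving a T argument, which a method defined on T itself could not express:

def getOrDefault[T](opt: Option[T])(implicit d: Default[T]): T =
  opt.getOrElse(d.default)

getOrDefault[Int](None)         // 0, via IntDefault
getOrDefault[Option[Int]](None) // None, via OptionDefault
getOrDefault(Some(42))          // 42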
I think of type classes as the ability to add type safe metadata to a class.
So you first define a class to model the problem domain and then think of the metadata to add to it: things like Equals, Hashable, Viewable, etc. This creates a separation between the problem domain and the mechanics of using the class, and it opens up subclassing because the class is leaner.
Besides that, you can add type class instances anywhere in scope, not just where the class is defined, and you can swap implementations. For example, if I compute a hash code for a Point class by using Point#hashCode, then I'm limited to that specific implementation, which may not produce a good distribution of values for the specific set of Points I have. But if I use Hashable[Point], then I can provide my own implementation.
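Here is a minimal illustrative sketch of that idea (Hashable is a hypothetical type class, not a standard library trait):

case class Point(x: Int, y: Int)

trait Hashable[T] { def hash(t: T): Int }

// One possible instance; a different scope can supply a different
// distribution without touching Point itself.
implicit val pointHashable: Hashable[Point] = new Hashable[Point] {
  def hash(p: Point): Int = 31 * p.x + p.y
}

def hashOf[T](t: T)(implicit h: Hashable[T]): Int = h.hash(t)

hashOf(Point(1, 2)) // 33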
[Updated with example]
As an example, here's a use case I had last week. In our product there are several cases of Maps containing containers as values. E.g., Map[Int, List[String]] or Map[String, Set[Int]]. Adding to these collections can be verbose:
map += key -> (value :: map.getOrElse(key, List()))
So I wanted to have a function that wraps this so I could write
map +++= key -> value
The main issue is that the collections don't all have the same methods for adding elements: some have +, others :+. I also wanted to retain the efficiency of adding elements to a list, so I didn't want to use fold/map, which create new collections.
The solution is to use type classes:
import scala.collection.generic.CanBuildFrom

trait Addable[C, CC] {
  def add(c: C, cc: CC): CC
  def empty: CC
}

object Addable {
  implicit def listAddable[A] = new Addable[A, List[A]] {
    def empty = Nil
    def add(c: A, cc: List[A]) = c :: cc
  }

  implicit def addableAddable[A, Add](implicit cbf: CanBuildFrom[Add, A, Add]) = new Addable[A, Add] {
    def empty = cbf().result
    def add(c: A, cc: Add) = (cbf(cc) += c).result
  }
}
Here I defined a type class Addable that can add an element C to a collection CC. I have two default implementations: for Lists, using ::, and for other collections, using the builder framework.
Using this type class then looks like:
class RichCollectionMap[A, C, B[_], M[X, Y] <: collection.Map[X, Y]](map: M[A, B[C]])(implicit adder: Addable[C, B[C]]) {

  def updateSeq[That](a: A, c: C)(implicit cbf: CanBuildFrom[M[A, B[C]], (A, B[C]), That]): That = {
    val pair = (a -> adder.add(c, map.getOrElse(a, adder.empty)))
    (map + pair).asInstanceOf[That]
  }

  def +++[That](t: (A, C))(implicit cbf: CanBuildFrom[M[A, B[C]], (A, B[C]), That]): That = updateSeq(t._1, t._2)(cbf)
}
implicit def toRichCollectionMap[A, C, B[_], M[X, Y] <: collection.Map[X, Y]](map: M[A, B[C]])(implicit adder: Addable[C, B[C]]) = new RichCollectionMap[A, C, B, M](map)
The special bit is using adder.add to add the elements and adder.empty to create new collections for new keys.
To compare, without type classes I would have had three options:
1. Write a method per collection type, e.g. addElementToSubList and addElementToSet, etc. This creates a lot of boilerplate in the implementation and pollutes the namespace.
2. Use reflection to determine whether the sub-collection is a List / Set. This is tricky, as the map is empty to begin with (of course Scala also helps here with Manifests).
3. Have a poor man's type class by requiring the user to supply the adder, so something like addToMap(map, key, value, adder), which is plain ugly.
Yet another place I find helpful is this blog post's description of typeclasses: Monads Are Not Metaphors.
Search the article for typeclass. It should be the first match. In this article, the author provides an example of a Monad typeclass.
The forum thread "What makes type classes better than traits?" makes some interesting points:
Typeclasses can very easily represent notions that are quite difficult to represent in the presence of subtyping, such as equality and ordering.
Exercise: create a small class/trait hierarchy and try to implement .equals on each class/trait in such a way that the operation over arbitrary instances from the hierarchy is properly reflexive, symmetric, and transitive.
Typeclasses allow you to provide evidence that a type outside of your "control" conforms with some behavior.
Someone else's type can be a member of your typeclass.
You cannot express "this method takes/returns a value of the same type as the method receiver" in terms of subtyping, but this (very useful) constraint is straightforward using typeclasses. This is the f-bounded types problem (where an F-bounded type is parameterized over its own subtypes).
All operations defined on a trait require an instance; there is always a this argument. So you cannot define for example a fromString(s:String): Foo method on trait Foo in such a way that you can call it without an instance of Foo.
In Scala this manifests as people desperately trying to abstract over companion objects.
But it is straightforward with a typeclass, as illustrated by the zero element in this monoid example.
Typeclasses can be defined inductively; for example, if you have a JsonCodec[Woozle] you can get a JsonCodec[List[Woozle]] for free (a sketch follows after this list).
The example above illustrates this for "things you can add together".
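Here is a small self-contained sketch of the inductive point (JsonCodec and Woozle are the hypothetical names from the bullet above, not a real library):

trait JsonCodec[T] { def encode(t: T): String }

case class Woozle(name: String)

implicit val woozleCodec: JsonCodec[Woozle] = new JsonCodec[Woozle] {
  def encode(w: Woozle) = s"""{"name":"${w.name}"}"""
}

// Given a codec for any T, a codec for List[T] comes for free.
implicit def listCodec[T](implicit c: JsonCodec[T]): JsonCodec[List[T]] =
  new JsonCodec[List[T]] {
    def encode(ts: List[T]) = ts.map(c.encode).mkString("[", ",", "]")
  }

implicitly[JsonCodec[List[Woozle]]].encode(List(Woozle("a"), Woozle("b")))
// [{"name":"a"},{"name":"b"}]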
One way to look at type classes is that they enable retroactive extension or retroactive polymorphism. There are a couple of great posts by Casual Miracles and Daniel Westheide that show examples of using Type Classes in Scala to achieve this.
Here's a post on my blog that explores various methods in Scala of retroactive supertyping, a kind of retroactive extension, including a typeclass example.
I don't know of any use case other than ad-hoc polymorphism, which is explained here in the best way possible.
Both implicits and typeclasses are used for type conversion. The major use case for both of them is to provide ad-hoc polymorphism, i.e. to add behaviour to classes that you can't modify while still getting inheritance-like polymorphism. With implicits you can use either an implicit def or an implicit class (your wrapper class, hidden from the client). Typeclasses are more powerful, as they can add functionality to an already existing inheritance chain (e.g. Ordering[T] in Scala's sort function).
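As a small sketch of the Ordering[T] point (Money is an illustrative type standing in for a class you don't control), a retrofitted instance makes sorted work on a type that was never designed for it:

case class Money(cents: Long)

implicit val moneyOrdering: Ordering[Money] = Ordering.by((m: Money) => m.cents)

List(Money(500), Money(120)).sorted // List(Money(120), Money(500))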
For more detail you can see https://lakshmirajagopalan.github.io/diving-into-scala-typeclasses/
In Scala, type classes:
- Enable ad-hoc polymorphism
- Are statically typed (i.e. type-safe)
- Are borrowed from Haskell
- Solve the expression problem

Behavior can be extended
- at compile-time
- after the fact
- without changing/recompiling existing code
Scala implicits:
- The last parameter list of a method can be marked implicit
- Implicit parameters are filled in by the compiler
- In effect, you require evidence from the compiler...
- ...such as the existence of a type class instance in scope
- You can also specify the parameters explicitly, if needed (see the sketch below)
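A tiny sketch of the last two bullets (max is an illustrative helper, not a library method):

def max[T](a: T, b: T)(implicit ord: Ordering[T]): T =
  if (ord.gteq(a, b)) a else b

max(3, 5)                        // 5: Ordering[Int] filled in by the compiler
max(3, 5)(Ordering[Int].reverse) // 3: the same evidence passed explicitly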
The example below extends the String class with a new method via an implicit wrapper, even though String is final :)
/**
 * Created by nihat.hosgur on 2/19/17.
 */
case class PrintTwiceString(original: String) {
  def printTwice = original + original
}

object TypeClassString extends App {
  implicit def stringToString(s: String): PrintTwiceString = PrintTwiceString(s)
  val name: String = "Nihat"
  name.printTwice
}
This is an important difference (needed for functional programming): consider Haskell's inc :: Num a => a -> a. The a that is received is the same a that is returned; this cannot be expressed with subtyping.
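A minimal Scala rendering of that idea (Num here is my own illustrative type class, not a standard library trait): the typeclass guarantees that the result type is exactly the argument's type.

trait Num[A] { def inc(a: A): A }

implicit val intNum: Num[Int] = new Num[Int] { def inc(a: Int) = a + 1 }
implicit val doubleNum: Num[Double] = new Num[Double] { def inc(a: Double) = a + 1.0 }

def inc[A](a: A)(implicit num: Num[A]): A = num.inc(a)

inc(41)  // 42: Int in, Int out
inc(1.5) // 2.5: Double in, Double out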
I like to use type classes as a lightweight, idiomatic Scala form of Dependency Injection that still works with circular dependencies yet doesn't add a lot of code complexity. I recently rewrote a Scala project from the Cake Pattern to type-class-based DI and achieved a 59% reduction in code size.