How do you enrich value classes without overhead?

Scala 2.10 introduces value classes, which you create by making your class extend AnyVal. There are many restrictions on value classes, but one of their huge advantages is that they provide extension methods without the penalty of allocating a wrapper object: unless boxing is required (e.g. to put the value class in an array), the result is simply the underlying class plus a set of methods that take it as their first parameter. Thus,
implicit class Foo(val i: Int) extends AnyVal {
  def +*(j: Int) = i + j*j
}
unwraps to something that can be no more expensive than writing i + j*j yourself (once the JVM inlines the method call).
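For instance, with that definition in scope, a quick illustrative use looks like this:
1 +* 2   // == 5; compiles to a static extension-method call, no Foo instance is allocated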
Unfortunately, one of the restrictions in SIP-15 which describes value classes is
The underlying type of C may not be a value class.
If you have a value class of your own, say one used to provide type-safe units without the overhead of boxing (except where boxing is genuinely required):
class Meter(val meters: Double) extends AnyVal {
  def centimeters = meters*100.0 // No longer type-safe
  def +(m: Meter) = new Meter(meters+m.meters) // Only works with Meter!
}
then is there a way to enrich Meter without object-creation overhead? The restriction in SIP-15 prevents the obvious
implicit class RichMeter(val m: Meter) extends AnyVal { ... }
approach.

In order to extend value classes, you need to recapture the underlying type. Since value classes are required to have their wrapped type accessible (val i not just i above), you can always do this. You can't use the handy implicit class shortcut, but you can still add the implicit conversion longhand. So, if you want to add a - method to Meter you must do something like
class RichMeter(val meters: Double) extends AnyVal {
  def -(m: Meter) = new Meter(meters - m.meters)
}
implicit def EnrichMeters(m: Meter) = new RichMeter(m.meters)
Note also that you are allowed to (freely) rewrap any parameters with the original value class, so if it has functionality that you rely on (e.g. it wraps a Long but performs complicated bit-mixing), you can just rewrap the underlying class in the value class you're trying to extend wherever you need it.
(Note also that you'll get a warning unless you import language.implicitConversions.)
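A minimal usage sketch, assuming the Meter and RichMeter definitions above are in scope:
import language.implicitConversions

val a = new Meter(3.0)
val b = new Meter(1.5)
val diff = a - b         // EnrichMeters wraps a in RichMeter; the call compiles to the value class's extension method
println(diff.meters)     // 1.5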
Addendum: in Scala 2.11+, you may make the val private; for cases where this was done, you will not be able to use this trick.

Related

Scala: Value Class vs Case Class

I'm trying to discover the differences between using a value class or a case class in a given scenario. Suppose I want to model the integers mod 5 as a unique datatype. The question is which one I should begin with...
class IntegerMod5(val value: Int) extends AnyVal
case class IntegerMod5(value: Int)
Regardless, it seems that I can create an implementation of Numeric fairly easily. With the case class approach, then, I can simply do this:
case class IntegerMod5(value: Int)(implicit ev: Numeric[IntegerMod5]) {
  import ev.mkNumericOps
}
However, it seems to be a much more difficult endeavour with value classes, mainly as the benefit is to avoid object creation. Thus, something like
implicit class IntegersMod5Ops(value: IntegerMod5)(implicit ev: Numeric[IntegerMod5]) {
  import ev.mkNumericOps
}
would appear to largely defeat the purpose. (Not sure if it even works, actually.)
The question, then, is: is it possible to use Numeric with a value class, or will I have to bite the bullet and use a case class?
You don't need implicit ev: Numeric[IntegerMod5] as an argument, just define it in the companion object:
object IntegerMod5 {
  implicit val numeric: Numeric[IntegerMod5] = ...
}
It will be automatically picked up when you use arithmetic operations on IntegerMod5s, and because it's a val, it's only initialized once (you can use object as well).
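For illustration, here is a minimal sketch of what such an instance could look like (the modular arithmetic is just an example; note that on Scala 2.13+ Numeric also requires a parseString method):
class IntegerMod5(val value: Int) extends AnyVal

object IntegerMod5 {
  private def mk(i: Int) = new IntegerMod5(((i % 5) + 5) % 5)

  implicit val numeric: Numeric[IntegerMod5] = new Numeric[IntegerMod5] {
    def plus(x: IntegerMod5, y: IntegerMod5) = mk(x.value + y.value)
    def minus(x: IntegerMod5, y: IntegerMod5) = mk(x.value - y.value)
    def times(x: IntegerMod5, y: IntegerMod5) = mk(x.value * y.value)
    def negate(x: IntegerMod5) = mk(-x.value)
    def fromInt(i: Int) = mk(i)
    def toInt(x: IntegerMod5) = x.value
    def toLong(x: IntegerMod5) = x.value.toLong
    def toFloat(x: IntegerMod5) = x.value.toFloat
    def toDouble(x: IntegerMod5) = x.value.toDouble
    def compare(x: IntegerMod5, y: IntegerMod5) = x.value compare y.value
  }
}

// Numeric.Implicits brings mkNumericOps into scope, so +, -, * work directly:
import Numeric.Implicits._
(new IntegerMod5(3) + new IntegerMod5(4)).value   // 2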

Specifying the requirements for a generic type

I want to call a constructor of a generic type T, but I also want it to have a specific constructor with only one Int argument:
class Class1[T] {
  def method1(i: Int) = {
    val instance = new T(i) // oops!
    i
  }
}
How do I specify this requirement?
UPDATE:
How acceptable (flexible, etc.) is it to use something like the following instead? It's the template method pattern.
abstract class Class1[T] {
  def creator: Int => T
  def method1(i: Int) = {
    val instance = creator(i) // seems ok
    i
  }
}
Scala doesn't allow you to specify the constructor's signature in a type constraint (as, for example, C# does).
However Scala does allow you to achieve something equivalent by using the type class pattern. This is more flexible, but requires writing a bit more boilerplate code.
First, define a trait which will be an interface for creating a T given an Int.
trait Factory[T] {
  def fromInt(i: Int): T
}
Then, define an implicit instance for any type you want. Let's say you have some class Foo with an appropriate constructor.
implicit val FooFactory = new Factory[Foo] {
  def fromInt(i: Int) = new Foo(i)
}
Now, you can specify a context bound for the type parameter T in the signature of Class1:
class Class1[T : Factory] {
  def method1(i: Int) = {
    val instance = implicitly[Factory[T]].fromInt(i)
    // ...
  }
}
The constraint T : Factory says that there must be an implicit Factory[T] in scope. When you need to use the instance, you grab it from implicit scope using the implicitly method.
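Putting it together, a short usage sketch (Foo here is just the hypothetical class with an Int constructor mentioned above):
class Foo(val i: Int)

val c = new Class1[Foo]   // compiles because FooFactory is in implicit scope
c.method1(42)             // method1 builds a Foo from its Int argument via the factory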
Alternatively, you could specify the factory as an implicit parameter to the method that requires it.
class Class1[T] {
  def method1(i: Int)(implicit factory: Factory[T]) = {
    val instance = factory.fromInt(i)
    // ...
  }
}
This is more flexible than putting the constraint in the class signature, because it means you could have other methods on Class1 that don't require a Factory[T]. In that case, the compiler will not enforce that there is a Factory[T] unless you call one of the methods that requires it.
In response to your update (with the abstract creator method), this is a perfectly reasonable way to do it, as long as you don't mind creating a subtype of Class1 for every T. Also note that T will need to be a concrete type at any point that you want to create an instance of Class1, because you will need to provide a concrete implementation for the abstract method.
Consider trying to create an instance of Class1 inside another generic method. When using the type class pattern, you can extend the necessary type constraint to the type signature of that method, in order to make this compile:
def instantiateClass1[T : Factory] = new Class1[T]
If you don't need to do this, then you might not need the full power of the type class pattern.
When you create a generic class or trait, the class does not gain special access to the methods of whatever actual class you might parameterise it with. When you say
class Class1[T]
You are saying
- This is a class which will work with some unspecified type T.
- Most of its methods will take instances of type T as a parameter or return T.
- Any variance annotations or type bounds attached to the type parameter will be applied whenever it appears as a parameter of one of Class1's methods.
- There is no such thing as type "Class1", but there may be an arbitrary number of derived classes of type "Class1[something]".
That's all. You get no special access to T from within Class1, because Scala does not know what T is. If you wanted Class1 to have access to T's fields and methods, you should have extended it or mixed it in.
If you want access to the methods of T (without using reflection), you can only do that from within one of Class1's methods which accepts a parameter of type T. And then you will get whichever version of the method belongs to the specific type of the actual object which is passed.
(You can work around this with reflection, but that is a runtime solution and absolutely not typesafe).
Look at what you are trying to do in your original code snippet...
- You have specified that Class1 can be parameterised with any arbitrary type.
- You want to invoke T with a constructor which takes a single Int parameter.
But what have you done to promise the Scala compiler that T will have such a constructor? Nothing at all. So how can the compiler trust this? Well, it can't.
Even if you added an upper type bound, requiring that T be a subclass of some class which does have such a constructor, that doesn't help; T might be a subclass which has a more complex constructor, which calls back to the simpler constructor. So at the point where Class1 is defined, the compiler can have no confidence about the safety of constructing T with that simple method. So that call cannot be type-safe.
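A tiny (hypothetical) illustration of that point:
class Base(i: Int)
class Sub(s: String) extends Base(s.length)   // a subtype of Base with no Int constructor

// Even with a bound such as [T <: Base], T could be Sub,
// so the compiler cannot accept `new T(i)`.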
Class-based OO isn't about conjuring unknown types out of the ether; it doesn't let you plunge your hand into a top-hat-shaped class loader and pull out a surprise. It allows you to handle arbitrary already-created instances of some general type without knowing their specific type. At the point where those objects are created, there's no ambiguity at all.

How to design immutable model classes when using inheritance

I'm having trouble finding an elegant way of designing some simple classes to represent HTTP messages in Scala.
Say I have something like this:
abstract class HttpMessage(headers: List[String]) {
  def addHeader(header: String) = ???
}

class HttpRequest(path: String, headers: List[String])
  extends HttpMessage(headers)

new HttpRequest("/", List("foo")).addHeader("bar")
How can I make the addHeader method return a copy of itself with the new header added? (and keep the current value of path as well)
It is annoying, but the pattern you want is not trivial to implement.
The first point to notice is that if you want to preserve the subclass type, you need to add an abstract type member. Without it, you cannot give addHeader a return type in HttpMessage that each subclass can refine:
abstract class HttpMessage(headers: List[String]) {
  type X <: HttpMessage
  def addHeader(header: String): X
}
Then you can implement the method in your concrete subclasses where you will have to specify the value of X:
class HttpRequest(path: String, headers: List[String])
  extends HttpMessage(headers) {
  type X = HttpRequest
  def addHeader(header: String): HttpRequest = new HttpRequest(path, headers :+ header)
}
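With that in place, a quick usage sketch shows the subtype is preserved:
val req = new HttpRequest("/", List("foo"))
val req2: HttpRequest = req.addHeader("bar")   // static type stays HttpRequest, not HttpMessage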
A better, more scalable solution is to use an implicit type class for this purpose.
trait HeaderAdder[T <: HttpMessage] {
  def addHeader(httpMessage: T, header: String): T
}
and now you can define your method on the HttpMessage class like the following:
abstract class HttpMessage(headers: List[String]) {
  type X <: HttpMessage
  def addHeader(header: String)(implicit headerAdder: HeaderAdder[X]): X =
    headerAdder.addHeader(this.asInstanceOf[X], header) // cast needed: `this` is only statically known to be an HttpMessage
}
This latest approach is based on the typeclass concept and scales much better than inheritance. The idea is that you are not forced to have a valid HeaderAdder[T] for every T in your hierarchy, and if you try to call the method on a class for which no implicit is available in scope, you will get a compile time error.
This is great, because it saves you from having to implement addHeader = sys.error("This is not supported") for classes in the hierarchy where the operation makes no sense, or from refactoring the hierarchy to avoid that.
The best way to manage the implicits is to put them in a trait like the following:
trait HeaderAdders {
  implicit val httpRequestHeaderAdder: HeaderAdder[HttpRequest] = new HeaderAdder[HttpRequest] { ... }
  implicit val httpWhatHeaderAdder: HeaderAdder[HttpWhat] = new HeaderAdder[HttpWhat] { ... }
}
and then you also provide an object, in case users can't mix the trait in (for example, if a framework inspects the properties of your objects through reflection, you don't want extra members added to your instances); this is the selfless trait pattern (http://www.artima.com/scalazine/articles/selfless_trait_pattern.html):
object HeaderAdders extends HeaderAdders
So for example you can write things such as
// mixing example
class MyTest extends HeaderAdders // who cares about having two extra values in the object

// import example
import HeaderAdders._
class MyDomainClass // implicits are in scope but not mixed into MyDomainClass, so reflection from Hibernate will still work correctly
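For completeness, a minimal sketch of what one of those instances could look like, assuming HttpRequest declares path and headers as vals:
implicit val httpRequestHeaderAdder: HeaderAdder[HttpRequest] =
  new HeaderAdder[HttpRequest] {
    def addHeader(httpMessage: HttpRequest, header: String): HttpRequest =
      new HttpRequest(httpMessage.path, httpMessage.headers :+ header)
  }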
By the way, this design problem is the same one the Scala collections face, the only difference being that here HttpMessage plays the role of TraversableLike. Have a look at this question: Calling map on a parallel collection via a reference to an ancestor type

Scala: How to make requirements of the type parameters of generic classes?

I'm creating some parameterized classes C[T] and I want to place some requirements on the type T for it to be usable as a parameter of my class. It would be simple if I just wanted to say that T must inherit from certain traits or classes (as we do with Ordering). But I want it to implement some functions as well.
For example, I've seen that many pre-defined types define MinValue and MaxValue, and I would like my type T to provide these too. I've received some advice to just define an implicit function, but I wouldn't want every user to be obliged to implement this function for types where it already exists. I could implement them in my own code too, but that seems like a poor quick fix.
For example, when defining heaps, I would like to allow users to construct an empty Heap. In that case I want to initialize value with the minimum value the type T can have. Obviously this code does not work.
class Heap[T](val value: T, val heaps: List[Heap[T]]) {
  def this() = this(T.MinValue, List())
}
I also would love to receive some advice about really good online Scala 2.8 references.
A bunch of things, all loosely related by virtue of sharing a few methods (though with different return types). Sure sounds like ad-hoc polymorphism to me!
roll on the type class...
trait HasMinMax[T] {
  def maxValue: T
  def minValue: T
}
implicit object IntHasMinMax extends HasMinMax[Int] {
  def maxValue = Int.MaxValue
  def minValue = Int.MinValue
}

implicit object DoubleHasMinMax extends HasMinMax[Double] {
  def maxValue = Double.MaxValue
  def minValue = Double.MinValue
}
// etc
class C[T : HasMinMax](param: T) {
  val bounds = implicitly[HasMinMax[T]]
  // now use bounds.minValue or bounds.maxValue as required
}
UPDATE
The [T : HasMinMax] notation is a context bound, and is syntactic sugar for:
class C[T](param: T)(implicit bounds: HasMinMax[T]) {
  // now use bounds.minValue or bounds.maxValue as required
}
You can either use type bounds:
trait Base
class C[T <: Base]
enabling C to be parametrized with any type T which is a subtype of Base.
Or you can use implicit parameters to express requirements:
trait Requirement[T] {
  def requiredFunctionExample(t: T): T
}

class C[T](implicit req: Requirement[T])
Thus, objects of class C can only be constructed if there exists an implementation of the Requirement trait for the type T you wish to parametrize them with. You can place implementations of Requirement for different types T in, for instance, a package object, thus bringing them into scope whenever the corresponding package is imported.
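For example, a hypothetical Requirement instance placed in a package object, so that it comes into scope with the package import:
package object mypackage {
  implicit val intRequirement: Requirement[Int] = new Requirement[Int] {
    def requiredFunctionExample(t: Int): Int = t + 1   // illustrative implementation
  }
}

// elsewhere:
// import mypackage._
// val c = new C[Int]   // compiles because a Requirement[Int] is now in scope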

Scala: reconciling type classes with dependency injection

There seems to be a lot of enthusiasm among Scala bloggers lately for the type class pattern, in which a simple class has functionality added to it by an additional class conforming to some trait or pattern. As a vastly oversimplified example, the simple class:
case class Wotsit (value: Int)
can be adapted to the Foo trait:
trait Foo[T] {
  def write(t: T): Unit
}
with the help of this type class:
implicit object WotsitIsFoo extends Foo[Wotsit] {
  def write(wotsit: Wotsit) = println(wotsit.value)
}
The type class is typically captured at compile time with implicits, allowing both the Wotsit and its type class to be passed together into a higher order function:
def writeAll[T](items: List[T])(implicit tc: Foo[T]) =
  items.foreach(w => tc.write(w))

writeAll(wotsits)
(before you correct me, I said it was an oversimplified example)
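For reference, a tiny runnable sketch tying the pieces above together (wotsits is just a made-up sample list):
val wotsits = List(Wotsit(1), Wotsit(2))
writeAll(wotsits)   // WotsitIsFoo is resolved implicitly; prints 1 then 2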
However, the use of implicits assumes that the precise type of the items is known at compile time. I find in my code this often isn't the case: I will have a list of some type of item List[T], and need to discover the correct type class to work on them.
The suggested approach of Scala would appear to be to add the typeclass argument at all points in the call hierarchy. This can get annoying as the code scales and these dependencies need to be passed down increasingly long chains, through methods to which they are increasingly irrelevant. This makes the code cluttered and harder to maintain, the opposite of what Scala is for.
Typically this is where dependency injection would step in, using a library to supply the desired object at the point it's needed. Details vary with the library chosen for DI - I've written my own in Java in the past - but typically the point of injection needs to define precisely the object desired.
Trouble is, in the case of a type class the precise value isn't known at compile time. It must be selected based on a polymorphic description. And crucially, the type information has been erased by the compiler. Manifests are Scala's solution to type erasure, but it's far from clear to me how to use them to address this issue.
What techniques and dependency injection libraries for Scala would people suggest as a way of tackling this? Am I missing a trick? The perfect DI library? Or is this really the sticking point it seems?
Clarification
I think there are really two aspects to this. In the first case, the point where the type class is needed is reached by direct function calls from the point where the exact type of its operand is known, and so sufficient type wrangling and syntactic sugar can allow the type class to be passed to the point it's needed.
In the second case, the two points are separated by a barrier - such as an API that can't be altered, or being stored in a database or object store, or serialised and sent to another computer - that means the type class can't be passed along with its operand. In this case, given an object whose type and value are known only at runtime, the type class needs somehow to be discovered.
I think functional programmers have a habit of assuming the first case - that with a sufficiently advanced language, the type of the operand will always be knowable. David and mkniessl provided good answers for this, and I certainly don't want to criticise those. But the second case definitely does exist, and that's why I brought dependency injection into the question.
A fair amount of the tediousness of passing down those implicit dependencies can be alleviated by using the new context bound syntax. Your example becomes
def writeAll[T : Foo](items: List[T]) =
  items.foreach(w => implicitly[Foo[T]].write(w))
which compiles identically but makes for nice and clear signatures and has fewer "noise" variables floating around.
Not a great answer, but the alternatives probably involve reflection, and I don't know of any library that will just make this automatically work.
(I have substituted the names in the question, they did not help me think about the problem)
I'll attack the problem in two steps. First I show how nested scopes avoid having to declare the type class parameter all the way down its usage. Then I'll show a variant, where the type class instance is "dependency injected".
Type class instance as class parameter
To avoid having to declare the type class instance as implicit parameter in all intermediate calls, you can declare the type class instance in a class defining a scope where the specific type class instance should be available. I'm using the shortcut syntax ("context bound") for the definition of the class parameter.
object TypeClassDI1 {
  // The type class
  trait ATypeClass[T] {
    def typeClassMethod(t: T): Unit
  }

  // Some data type
  case class Something(value: Int)

  // The type class instance as implicit
  implicit object SomethingInstance extends ATypeClass[Something] {
    def typeClassMethod(s: Something): Unit =
      println("SomethingInstance " + s.value)
  }

  // A method directly using the type class
  def writeAll[T: ATypeClass](items: List[T]) =
    items.foreach(w => implicitly[ATypeClass[T]].typeClassMethod(w))

  // A class defining a scope with a type class instance known to be available
  class ATypeClassUser[T: ATypeClass] {
    // bar only indirectly uses the type class via writeAll
    // and does not declare an implicit parameter for it.
    def bar(items: List[T]) {
      // (here the evidence class parameter defined
      // with the context bound is used for writeAll)
      writeAll(items)
    }
  }

  def main(args: Array[String]) {
    val aTypeClassUser = new ATypeClassUser[Something]
    aTypeClassUser.bar(List(Something(42), Something(4711)))
  }
}
Type class instance as writable field (setter injection)
A variant of the above which would be usable using setter injection. This time the type class instance is passed via a setter call to the bean using the type class.
object TypeClassDI2 {
  // The type class
  trait ATypeClass[T] {
    def typeClassMethod(t: T): Unit
  }

  // Some data type
  case class Something(value: Int)

  // The type class instance (not implicit here)
  object SomethingInstance extends ATypeClass[Something] {
    def typeClassMethod(s: Something): Unit =
      println("SomethingInstance " + s.value)
  }

  // A method directly using the type class
  def writeAll[T: ATypeClass](items: List[T]) =
    items.foreach(w => implicitly[ATypeClass[T]].typeClassMethod(w))

  // A "service bean" class defining a scope with a type class instance.
  // Setter based injection style for simplicity.
  class ATypeClassBean[T] {
    implicit var aTypeClassInstance: ATypeClass[T] = _
    // bar only indirectly uses the type class via writeAll
    // and does not declare an implicit parameter for it.
    def bar(items: List[T]) {
      // (here the implicit var is used for writeAll)
      writeAll(items)
    }
  }

  def main(args: Array[String]) {
    val aTypeClassBean = new ATypeClassBean[Something]()
    // "inject" the type class instance
    aTypeClassBean.aTypeClassInstance = SomethingInstance
    aTypeClassBean.bar(List(Something(42), Something(4711)))
  }
}
Note that the second solution has the common flaw of setter based injection that you can forget to set the dependency and get a nice NullPointerException upon use...
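One way to avoid that flaw, sketched here assuming the definitions from TypeClassDI2 are in scope, is to inject the instance through the constructor instead:
class ATypeClassBean2[T](typeClassInstance: ATypeClass[T]) {
  // making the injected instance implicit inside the class lets writeAll find it
  implicit val tc: ATypeClass[T] = typeClassInstance
  def bar(items: List[T]) = writeAll(items)
}

new ATypeClassBean2[Something](SomethingInstance).bar(List(Something(42), Something(4711)))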
The argument against type classes as dependency injection here is that with type classes the "precise type of the items is known at compile time" whereas with dependency injection, they are not. You might be interested in this Scala project rewrite effort where I moved from the cake pattern to type classes for dependency injection. Take a look at this file where the implicit declarations are made. Notice how the use of environment variables determines the precise type? That is how you can reconcile the compile time requirements of type classes with the run time needs of dependency injection.