I'm trying to add new functions to existing types (so the IDE can auto-suggest relevant functions for types I don't have control over, e.g. Future[Option[A]]). I've explored both implicit classes and implicit conversions to accomplish this and they both seem to offer the same behavior.
Is there any effective difference between using an implicit class:
case class Foo(a: Int)
implicit class EnrichedFoo(foo: Foo) {
def beep = "boop"
}
Foo(1).beep // "boop"
And using an implicit conversion:
case class Foo(a: Int)
trait Enriched {
def beep: String
}
implicit def fooToEnriched(foo: Foo) = new Enriched {
def beep = "boop"
}
Foo(1).beep // "boop"
I suppose one difference here might be that the first example creates a one-off class instead of a trait, but I could easily adapt the implicit class to extend an abstract trait, eg:
case class Foo(a: Int)
trait Enriched {
def beep: String
}
implicit class EnrichedFoo(foo: Foo) extends Enriched {
def beep = "boop"
}
Foo(1).beep // "boop"
As far as I know, they're pretty much exactly the same. The scoping rules also apply equally to both.
In my opinion, I'd use the implicit classes for your kind of situation. They were probably created exactly for something like that.
Implicit conversions, to me, are more appropriate when you already have two different kinds of classes and want to convert between the two.
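To make that distinction concrete, here is a minimal sketch of a genuine conversion between two pre-existing types (Celsius and Fahrenheit are made up for the example):
import scala.language.implicitConversions

case class Celsius(value: Double)
case class Fahrenheit(value: Double)

// a real conversion: both types exist in their own right
implicit def celsiusToFahrenheit(c: Celsius): Fahrenheit =
  Fahrenheit(c.value * 9.0 / 5.0 + 32.0)

def printTemp(f: Fahrenheit): Unit = println(f.value)

printTemp(Celsius(100)) // the conversion kicks in and prints 212.0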
You can check out the initial proposal for implicit classes right here.
There it says:
A new language construct is proposed to simplify the creation of classes which provide extension methods to another type.
You can even see how it desugars implicit classes. The following:
implicit class RichInt(n: Int) extends Ordered[Int] {
def min(m: Int): Int = if (n <= m) n else m
...
}
will desugar into:
class RichInt(n: Int) extends Ordered[Int] {
def min(m: Int): Int = if (n <= m) n else m
...
}
implicit final def RichInt(n: Int): RichInt = new RichInt(n)
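Applying the same desugaring by hand to the Foo example above would give roughly this (a sketch, not actual compiler output):
import scala.language.implicitConversions

case class Foo(a: Int)

class EnrichedFoo(foo: Foo) {
  def beep = "boop"
}
implicit def EnrichedFoo(foo: Foo): EnrichedFoo = new EnrichedFoo(foo)

Foo(1).beep // still "boop", now going through the generated conversion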
Well, to me it's a matter of preference. Implicit classes actually came into being to ease the creation of classes which provide extension methods to another type.
Implicit classes add a lot of value to value classes though.
To add to Luka Jacobowitz's answer: implicit classes are basically extensions. An implicit conversion tells the compiler that a value may be treated as something else that already has the method you want.
Sounds nearly the same. Two points of interest where implicit conversions differ:
First: you may need to enable the language feature to avoid the compiler warning that implicit conversions trigger.
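For example, a single language import in the scope that defines the conversion (or the -language:implicitConversions compiler option) silences that warning:
import scala.language.implicitConversions // not needed for implicit classes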
Second: the term "converting" the type may be confusing:
Implicit conversions are applied in two situations:
If an expression e is of type S, and S does not conform to the expression’s expected type T.
[Or:] In a selection e.m with e of type S, if the selector m does not denote a member of S.
case class Foo(a: Int)
trait Enriched {
def beep: String
}
implicit def fooToEnriched(foo: Foo) = new Enriched {
def beep = "boop"
}
Foo(1) match {
case _:Enriched => println("is an Enriched")
case _:Foo => println("no, was a Foo")
}
// no, was a Foo
but it may be treated as an Enriched...
val enriched: Enriched = Foo(2)
enriched match {
case _:Enriched => println("is an Enriched")
case _:Foo => println("no, was a Foo")
}
// is an Enriched
// plus unreachable code warning: case _:Foo => println("no, was a Foo")
I'm working my way through type classes and I came across this way of defining them in the Shapeless guide.
Here is the example:
object CsvEncoder {
// "Summoner" method
def apply[A](implicit enc: CsvEncoder[A]): CsvEncoder[A] =
enc
// "Constructor" method
def instance[A](func: A => List[String]): CsvEncoder[A] =
new CsvEncoder[A] {
def encode(value: A): List[String] =
func(value)
}
// Globally visible type class instances
}
What I do not understand is the need for the apply method. What is it doing in the context above?
Later on, the guide describes how I could create a type class instance:
implicit val booleanEncoder: CsvEncoder[Boolean] =
new CsvEncoder[Boolean] {
def encode(b: Boolean): List[String] =
if(b) List("yes") else List("no")
}
is actually shortened to:
implicit val booleanEncoder: CsvEncoder[Boolean] =
instance(b => if(b) List("yes") else List("no"))
So my question now is: how does this work? What I do not get is the need for the apply method.
EDIT: I came across a blog post that describes the steps in creating type classes as below:
Define typeclass contract trait Foo.
Define a companion object Foo with a helper method apply that acts like implicitly, and a way of defining Foo instances typically from a function.
Define FooOps class that defines unary or binary operators.
Define FooSyntax trait that implicitly provides FooOps from a Foo instance.
So what is the deal with points 2, 3 and 4?
Most of these practices come from Haskell (the intention to mimic Haskell's type classes is the reason for so much boilerplate); some of it is just for convenience. So:
2) As @Alexey Romanov mentioned, a companion object with apply is just for convenience: instead of implicitly[CsvEncoder[IceCream]] you can write just CsvEncoder[IceCream] (i.e. CsvEncoder.apply[IceCream]), which returns the required type-class instance.
3) FooOps provides convenience methods for DSLs. For instance, you could have something like:
trait Semigroup[A] {
...
def append(a: A, b: A): A
}
import implicits._ //you should import actual instances for `Semigroup[Int]` etc.
implicitly[Semigroup[Int]].append(2,2)
But sometimes it's inconvenient to call the append(2, 2) method, so it's good practice to provide a symbolic alias:
trait Ops[A] {
def typeClassInstance: Semigroup[A]
def self: A
def |+|(y: A): A = typeClassInstance.append(self, y)
}
trait ToSemigroupOps {
implicit def toSemigroupOps[A](target: A)(implicit tc: Semigroup[A]): Ops[A] = new Ops[A] {
val self = target
val typeClassInstance = tc
}
}
object SemiSyntax extends ToSemigroupOps
4) You can use it as follows:
import SemiSyntax._
import implicits._ //you should also import actual instances for `Semigroup[Int]` etc.
2 |+| 2
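The implicits object itself is not shown above; a minimal sketch of what it could contain (the intSemigroup name is made up) is:
object implicits {
  implicit val intSemigroup: Semigroup[Int] = new Semigroup[Int] {
    def append(a: Int, b: Int): Int = a + b
  }
}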
If you wonder why there is so much boilerplate, and why Scala's implicit class syntax doesn't provide this functionality out of the box: implicit classes do give you a way to create DSLs, they are just less powerful. It is (subjectively) harder to provide operation aliases, deal with more complex dispatching (when required), and so on.
However, there is a macro solution that generates boilerplate automatically for you: https://github.com/mpilquist/simulacrum.
Another important point about your CsvEncoder example is that instance is a convenience method for creating type-class instances, while apply is a shortcut for "summoning" (requiring) those instances. So the first is for the library extender (a way to implement the interface) and the second is for the user (a way to call a particular operation provided by that interface).
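Putting the two roles side by side, here is a self-contained sketch (the trait is the usual one from the guide; booleanEncoder is the instance defined earlier):
trait CsvEncoder[A] {
  def encode(value: A): List[String]
}

object CsvEncoder {
  // "summoner", for users: fetch an existing implicit instance
  def apply[A](implicit enc: CsvEncoder[A]): CsvEncoder[A] = enc
  // "constructor", for implementers: build a new instance from a function
  def instance[A](func: A => List[String]): CsvEncoder[A] =
    new CsvEncoder[A] {
      def encode(value: A): List[String] = func(value)
    }
}

implicit val booleanEncoder: CsvEncoder[Boolean] =
  CsvEncoder.instance(b => if (b) List("yes") else List("no"))

CsvEncoder[Boolean].encode(true) // List("yes"): apply summons booleanEncoder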
Another thing to note is that in shapeless the apply method is not only for cuter syntax.
Take for instance this simplified version of shapeless' Generic and some case class Foo.
trait Generic[T] {
type Repr
}
object Generic {
def apply[T](implicit gen: Generic[T]): Generic[T] { type Repr = gen.Repr } = gen
/* lots of macros to generate implicit instances omitted */
}
case class Foo(a: Int, b: String)
Now when I call Generic[Foo] I will get an instance that is typed as Generic[Foo] { type Repr = Int :: String :: HNil }. But if I call implicitly[Generic[Foo]] all the compiler knows about the result is that it's a Generic[Foo]. In other words: the concrete type of Repr is lost and I can't do anything useful with it. The reason is that implicitly is implemented as follows:
def implicitly[T](implicit e: T): T = e
That method declaration basically says: if you ask for a T I promise to give you a T, if I find one, and nothing more. So that means you'd have to ask implicitly[Generic[Foo] { type Repr = Int :: String :: HNil }] and that defeats the purpose of having automatic derivation.
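The same effect can be demonstrated without shapeless' macros by supplying one instance by hand; in this sketch Repr is a plain tuple instead of an HList, and the to method and fooGen value are made up for illustration:
trait Generic[T] {
  type Repr
  def to(t: T): Repr
}

object Generic {
  def apply[T](implicit gen: Generic[T]): Generic[T] { type Repr = gen.Repr } = gen
}

case class Foo(a: Int, b: String)

implicit val fooGen: Generic[Foo] { type Repr = (Int, String) } =
  new Generic[Foo] {
    type Repr = (Int, String)
    def to(t: Foo): Repr = (t.a, t.b)
  }

val ok: (Int, String) = Generic[Foo].to(Foo(1, "x"))  // Repr is still known here
// val ko: (Int, String) = implicitly[Generic[Foo]].to(Foo(1, "x")) // does not compile: Repr is lost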
Quoting the guide immediately after the object CsvEncoder definition:
The apply method ... allows us to summon a type class instance given a target type:
CsvEncoder[IceCream]
// res9: CsvEncoder[IceCream] = ...
I have a simple trait that requires the implementation to have a method quality(x: A) which I want to return an Ordered[B]. In other words, quality transforms A to Ordered[B], such that I can compare two Bs.
I have the following basic code:
trait Foo[A] {
def quality[B](x:A):Ordered[B]
def baz(x1:A, x2:A) = {
// some algorithm work here, and then
if (quality(x1) > quality(x2)) {
// some stuff
}
}
}
Which I want to implement as follows:
class FooImpl extends Foo[Bar] {
def quality(x:Bar):Double = {
someDifficultQualityCalculationWhichReturnsADouble
}
}
I figured this could work because Double is implicitly converted to RichDouble which implements Ordered[Double] if I am correct.
But at the > in the baz method of the trait it gives me an error on quality(x2) stating: Type mismatch, expected: Nothing, actual: Ordered[Nothing]
I do not understand this, because, coming from C#, I find it comparable to returning something like IEnumerable<A> and then using all the nice extension methods of an IEnumerable.
What am I missing here? What I want to do with the trait is define a complex algorithm inside the trait, while the key functions are defined by the class implementing the trait. One of these functions is needed to calculate a quality factor. This can be a Double, an Int or whatnot, but it can also be something more sophisticated. I could rewrite it so that it always returns a Double, and that is certainly possible, but I want the trait to be as generic as possible because I want it to describe behavior and not implementation. I thought about class A implementing Ordered[A], but that also seems weird, because it is not the 'purpose' of this class to be compared.
Using Ordering[A] you can compare As without requiring A to implement Ordered[A].
We can request that an Ordering[A] exists in baz by adding an implicit parameter:
trait Foo[A] {
def baz(x1:A, x2:A)(implicit ord: Ordering[A]) =
if (ord.gt(x1, x2)) "first is bigger"
else "first is smaller or equal"
}
Let's create a Person case class with an Ordering in its companion object.
case class Person(name: String, age: Int)
object Person {
implicit val orderByAge = Ordering.by[Person, Int](_.age)
}
We can now use Foo[Person].baz because an Ordering[Person] exists:
val (alice, bob) = (Person("Alice", 50), Person("Bob", 40))
val foo = new Foo[Person] {}
foo.baz(alice, bob)
// String = first is bigger
// using an explicit ordering
foo.baz(alice, bob)(Ordering.by[Person, String](_.name))
// String = first is smaller or equal
In the same manner as I compared Persons by age, you could create an Ordering[A] to compare your A by your quality function.
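For example, assuming quality returns a Double as in your FooImpl (and giving Bar a weight field just for the sketch), it could look like this:
trait Foo[A] {
  def quality(x: A): Double

  // an Ordering[A] derived from the quality score
  def qualityOrdering: Ordering[A] = Ordering.by[A, Double](quality)

  def baz(x1: A, x2: A): String =
    if (qualityOrdering.gt(x1, x2)) "first is bigger"
    else "first is smaller or equal"
}

case class Bar(weight: Int)

class FooImpl extends Foo[Bar] {
  def quality(x: Bar): Double = x.weight.toDouble
}

new FooImpl().baz(Bar(3), Bar(2)) // "first is bigger"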
To complement Peter's answer: in Scala we have two traits: Ordering[T] and Ordered[A]. You should use them in different situations.
Ordered[A] is for cases when a class you implement can be ordered naturally and that order is the only one.
Example:
class Fraction(val numerator: Int, val denominator: Int) extends Ordered[Fraction]
{
def compare(that: Fraction) = {
(this.numerator * that.denominator) compare (this.denominator * that.numerator)
}
}
Ordering[T] is for cases when you want to have different ways to order things. This way the strategy of defining the order can be decoupled from the class being ordered.
For an example I will borrow Peter's Person:
case class Person(name: String, age: Int)
object PersonNameOrdering extends Ordering[Person]
{
def compare(x: Person, y: Person) = x.name compare y.name
}
Note that since PersonNameOrdering doesn't have any instance fields, all it does is encapsulate the logic of defining an order between two Persons. That's why I made it an object rather than a class.
To cut down on the boilerplate you can use Ordering.by to define an Ordering:
val personAgeOrdering: Ordering[Person] = Ordering.by[Person, Int](_.age)
Now to the fun part: how to use all this stuff.
In your original code Foo[A].quality was indirectly defining a way to order your A's. To make it idiomatic Scala, let's use an Ordering[A] instead and pass it to baz as a parameter named ord:
trait Foo[A] {
def baz(x1: A, x2: A, ord: Ordering[A]) = {
import ord._
if (x1 > x2) "first is greater"
else "first is less or equal"
}
}
Several things to note here:
import ord._ allows us to use infix notation for comparisons, i.e. x1 > x2 instead of ord.gt(x1, x2)
baz is now parametrized by ordering, so you can dynamically choose how to order x1 and x2 on a case-by-case basis:
foo.baz(person1, person2, PersonNameOrdering)
foo.baz(person1, person2, personAgeOrdering)
The fact that ord is now an explicit parameter can sometimes be inconvenient: you may not want to pass it explicitly all the time, while there might be some cases when you want to do so. Implicits to the rescue!
def baz(x1: A, x2: A) = {
def inner(implicit ord: Ordering[A]) = {
import ord._
if (x1 > x2) "first is greater"
else "first is less or equal"
}
inner
}
Note the implicit keyword. It is used to tell the compiler to draw the parameter from the implicit scope in case you don't provide it explicitly:
// put an Int value to the implicit scope
implicit val myInt: Int = 5
def printAnInt(implicit x: Int) = { println(x) }
// explicitly pass the parameter
printAnInt(10) // would print "10"
// let the compiler infer the parameter from the implicit scope
printAnInt // would print "5"
You might want to learn where Scala looks for implicits.
Another thing to note is the need for a nested function. You cannot write def baz(x1: A, x2: A, implicit ord: Ordering[A]) - that would not compile, because the implicit keyword applies to a whole parameter list.
To cope with this little problem, baz was rewritten in this clunky way.
This form of rewriting turned out to be so common that nice syntactic sugar was introduced for it - the multiple parameter list:
def baz(x1: A, x2: A)(implicit ord: Ordering[A]) = {
import ord._
if (x1 > x2) "first is greater"
else "first is less or equal"
}
The need for an implicit parametrized by a type is also quite common, so the code above can be rewritten with even more sugar - the context bound:
def baz[A: Ordering](x1: A, x2: A) = {
val ord = implicitly[Ordering[A]]
import ord._
if (x1 > x2) "first is greater"
else "first is less or equal"
}
Please bear in mind that all these transformations of the baz function are nothing but applications of syntactic sugar. All the versions are exactly the same, and the compiler desugars each of them to the same bytecode.
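As one more convenience, the standard library's Ordering.Implicits import provides the infix comparison operators for any type that has an Ordering in scope, so the local import ord._ (or implicitly) can be dropped entirely; a sketch:
import scala.math.Ordering.Implicits._

def baz[A: Ordering](x1: A, x2: A): String =
  if (x1 > x2) "first is greater"
  else "first is less or equal"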
To recap:
extract the A ordering logic from the quality function into an Ordering[A];
put an instance of Ordering[A] to the implicit scope or pass the ordering explicitly depending on your needs;
pick "your flavor" of syntactic sugar for baz: no sugar/nested functions, multiple parameter list or context bound.
UPD
To answer the original question "why doesn't it compile?" let me start with a little digression on how the infix comparison operator works in Scala.
Given the following code:
val x: Int = 1
val y: Int = 2
val greater: Boolean = x > y
Here's what actually happens: Scala doesn't have infix operators per se; infix operator syntax is just sugar for a single-parameter method invocation. So internally the code above is transformed into this:
val greater: Boolean = x.>(y)
Now the tricky part: Int doesn't have an > method on its own. Pick "ordering by inheritance" on the ScalaDoc page and check that this method is listed in a group titled "Inherited by implicit conversion intWrapper from Int to RichInt".
So internally the compiler does this (well, except that for performance reasons there is no actual instantiation of an extra object on the heap):
val greater: Boolean = (new RichInt(x)).>(y)
If we proceed to the ScalaDoc of RichInt and again order the methods by inheritance, it turns out that the > method actually comes from Ordered!
Let's rewrite the whole block to make it clearer what actually happens:
val x: Int = 1
val y: Int = 2
val richX: RichInt = new RichInt(x)
val xOrdered: Ordered[Int] = richX
val greater: Boolean = xOrdered.>(y)
The rewriting should have highlighted the types of the variables involved in the comparison: Ordered[Int] on the left and Int on the right. Refer to the > documentation for confirmation.
Now let's get back to the original code and rewrite it the same way to highlight the types:
trait Foo[A] {
def quality[B](x: A): Ordered[B]
def baz(x1: A, x2: A) = {
// some algorithm work here, and then
val x1Ordered: Ordered[B] = quality(x1)
val x2Ordered: Ordered[B] = quality(x2)
if (x1Ordered > x2Ordered) {
// some stuff
}
}
}
As you can see the types do not align: they are Ordered[B] and Ordered[B], while for > comparison to work they should have been Ordered[B] and B respectively.
The question is where do you get this B to put on the right? To me it seems that B is in fact the same as A in this context. Here's what I came up with:
trait Foo[A] {
def quality(x: A): Ordered[A]
def baz(x1: A, x2: A) = {
// some algorithm work here, and then
if (quality(x1) > x2) {
"x1 is greater"
} else {
"x1 is less or equal"
}
}
}
case class Cargo(weight: Int)
class CargoFooImpl extends Foo[Cargo] {
override def quality(x: Cargo): Ordered[Cargo] = new Ordered[Cargo] {
override def compare(that: Cargo): Int = x.weight compare that.weight
}
}
The downside of this approach is that it is not obvious: the implementation of quality is too verbose and quality(x1) > x2 is not symmetrical.
The bottom line:
if you want the code to be idiomatic Scala go for Ordering[T]
if you don't want to mess with implicits and other Scala magic, implement quality as quality(x: A): Double for all As; Doubles are good and generic enough to be compared and ordered.
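A sketch of that second option, reusing the Cargo class from above:
trait Foo[A] {
  def quality(x: A): Double
  def baz(x1: A, x2: A): String =
    if (quality(x1) > quality(x2)) "x1 is greater"
    else "x1 is less or equal"
}

case class Cargo(weight: Int)

class CargoFooImpl extends Foo[Cargo] {
  override def quality(x: Cargo): Double = x.weight.toDouble
}

new CargoFooImpl().baz(Cargo(3), Cargo(5)) // "x1 is less or equal"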
Specialization promises to provide high-efficiency implementations for primitive types with minimal extra boilerplate. But specialization seems to be too eager for its own good.
If I want to specialize a class or method,
def foo[@specialized(Byte) A](a: A): String = ???
class Bar[@specialized(Int) B] {
var b: B = ???
def baz: B = ???
}
then I am required to write a single implementation that covers both the specialized and the generic cases.
What if those cases are really different from each other, such that the implementations do not overlap?
For example, if I wanted to perform math on bytes, I would need to insert a bunch of & 0xFFs into the
logic.
I could possibly write a specialized type-class to do the math right, but doesn't that just push the same
problem back one level? How do I write my specialized + method for that type class in a way that doesn't
conflict with a more general implementation?
class Adder[@specialized(Byte) A] {
def +(a1: A, a2: A): A = ???
}
Also, once I do create a type-class this way, how do I make sure the correct type class is used for my specialized methods
instead of the general version (which, if it is truly general, should probably compile and certainly would run, except that it isn't what I want)?
Is there a way to do this without macros? Is it easier with macros?
This is my best attempt so far. It works but the implementation isn't pretty (even if the results are). Improvements are welcome!
There is a macro-free way to do this, both at the class and method level, and it does involve type classes--quite a lot of
them! And the answer is not exactly the same for classes and methods. So bear with me.
Manually Specialized Classes
You manually specialize classes the same way that you manually provide any kind of different implementation for classes:
your superclass is abstract (or is a trait), and the subclasses provide the implementation details.
abstract class Bippy[@specialized(Int) B] {
def b: B
def next: Bippy[B]
}
class BippyInt(initial: Int) extends Bippy[Int] {
private var myB: Int = initial
def b: Int = myB
def next = { myB += 1; this }
}
class BippyObject(initial: Object) extends Bippy[Object] {
private var myB: Object = initial
def b: Object = myB
def next = { myB = myB.toString; this }
}
Now, if only we had a specialized method to pick out the right implementations, we'd be done:
object Bippy{
def apply[@specialized(Int) B](initial: B) = ??? // Now what?
}
So we've converted our problem of providing custom specialized classes and methods into just
needing to provide custom specialized methods.
Manually Specialized Methods
Manually specializing a method requires a way to write one implementation that can nonetheless
select which implementation you want (at compile time). Type classes are great at this. Suppose
we already had type classes that implemented all of our functionality, and that the compiler would
select the right one. Then we could just write
def foo[@specialized(Int) A: SpecFooImpl](a: A): String =
implicitly[SpecFooImpl[A]](a)
...or we could if implicitly was guaranteed to preserve specialization and if we only
ever wanted a single type parameter. In general these things are not true, so we'll write
our type class out as an implicit parameter rather than relying on the A: TC syntactic sugar.
def foo[@specialized(Int) A](a: A)(implicit impl: SpecFooImpl[A]): String =
impl(a)
(Actually, that's less boilerplate anyway.)
So we've converted our problem of providing custom specialized methods into just needing
to write specialized typeclasses and getting the compiler to fill in the correct ones.
Manually Specialized Type Classes
Type classes are just classes, and now we have to write specialized classes again, but
there's a critical difference. The user isn't the one asking for arbitrary instances.
This gives us just enough extra flexibility for it to work.
For foo, we need an Int version and a fully generic version.
trait SpecFooImpl[@specialized(Int) A] {
def apply(param: A): String
}
final class SpecFooImplAny[A] extends SpecFooImpl[A] {
def apply(param: A) = param.toString
}
final class SpecFooImplInt extends SpecFooImpl[Int] {
def apply(param: Int) = "!" * math.max(0, param)
}
Now we could create implicits to supply those type classes like so
implicit def specFooAsAny[A] = new SpecFooImplAny[A]
implicit val specFooAsInt = new SpecFooImplInt
except we have a problem: if we actually try to call foo on an Int, both implicits will apply.
So if we just had a way to prioritize which type class we chose, we'd be all set.
Selection of type classes (and implicits in general)
One of the secret ingredients the compiler uses to determine the right implicit to use
is inheritance. If implicits come from A via B extends A, but B
declares its own that also could apply, those in B win if all else is equal.
So we put the ones we want to win deeper in the inheritance hierarchy.
Also, since you're free to define implicits in traits, you can mix them in anywhere.
So the last piece of our puzzle is to pop our type class implicits into a chain
of traits that extend each other, with the more generic ones appearing earlier.
trait LowPriorityFooSpecializers {
implicit def specializeFooAsAny[A] = new SpecFooImplAny[A]
}
trait FooSpecializers extends LowPriorityFooSpecializers {
implicit val specializeFooAsInt = new SpecFooImplInt
}
Mix in the highest-priority trait to wherever the implicits are needed, and the
type classes will be picked as desired.
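A possible wiring of the pieces defined so far (the FooSyntax object name is made up here):
object FooSyntax extends FooSpecializers

import FooSyntax._

foo(42)     // resolves to SpecFooImplInt: a string of 42 '!' characters
foo("fish") // resolves to SpecFooImplAny: "fish"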
Note that the type classes will be as specialized as you make them even if the
specialized annotation is not used. So you can do without specialized at all,
as long as you know the type precisely enough, unless you want to use specialized
functions or interoperate with other specialized classes. (And you probably do.)
A complete example
Let's suppose we want to make a two-parameter specialized bippy function that
will apply the following transformations:
bippy(a, b) -> b
bippy(a, b: Int) -> b+1
bippy(a: Int, b) -> b
bippy(a: Int, b: Int) -> a+b
We should be able to achieve this with three type classes and a single specialized
method. Let's try, first the method:
def bippy[@specialized(Int) A, @specialized(Int) B](a: A, b: B)(implicit impl: SpecBippy[A, B]) =
impl(a, b)
Then the type classes:
trait SpecBippy[@specialized(Int) A, @specialized(Int) B] {
def apply(a: A, b: B): B
}
final class SpecBippyAny[A, B] extends SpecBippy[A, B] {
def apply(a: A, b: B) = b
}
final class SpecBippyAnyInt[A] extends SpecBippy[A, Int] {
def apply(a: A, b: Int) = b + 1
}
final class SpecBippyIntInt extends SpecBippy[Int, Int] {
def apply(a: Int, b: Int) = a + b
}
Then the implicits in chained traits:
trait LowerPriorityBippySpeccer {
// Trick to avoid allocation since generic case is erased anyway!
private val mySpecBippyAny = new SpecBippyAny[AnyRef, AnyRef]
implicit def specBippyAny[A, B] = mySpecBippyAny.asInstanceOf[SpecBippyAny[A, B]]
}
trait LowPriorityBippySpeccer extends LowerPriorityBippySpeccer {
private val mySpecBippyAnyInt = new SpecBippyAnyInt[AnyRef]
implicit def specBippyAnyInt[A] = mySpecBippyAnyInt.asInstanceOf[SpecBippyAnyInt[A]]
}
// Make this last one an object so we can import the contents
object Speccer extends LowPriorityBippySpeccer {
implicit val specBippyIntInt = new SpecBippyIntInt
}
and finally we'll try it out (after pasting everything in together using :paste in the REPL):
scala> import Speccer._
import Speccer._
scala> bippy(Some(true), "cod")
res0: String = cod
scala> bippy(1, "salmon")
res1: String = salmon
scala> bippy(None, 3)
res2: Int = 4
scala> bippy(4, 5)
res3: Int = 9
It works--our custom implementations are enabled. Just to check that we can use
any type, but we don't leak into the wrong implementation:
scala> bippy(4, 5: Short)
res4: Short = 5
scala> bippy(4, 5: Double)
res5: Double = 5.0
scala> bippy(3: Byte, 2)
res6: Int = 3
And finally, to verify that we have actually avoided boxing, we'll time bippy at
summing a bunch of integers:
scala> val th = new ichi.bench.Thyme
th: ichi.bench.Thyme = ichi.bench.Thyme@1130520d
scala> val adder = (i: Int, j: Int) => i + j
adder: (Int, Int) => Int = <function2>
scala> var a = Array.fill(1024)(util.Random.nextInt)
a: Array[Int] = Array(-698116967, 2090538085, -266092213, ...
scala> th.pbenchOff(){
var i, s = 0
while (i < 1024) { s = adder(a(i), s); i += 1 }
s
}{
var i, s = 0
while (i < 1024) { s = bippy(a(i), s); i += 1 }
s
}
Benchmark comparison (in 1.026 s)
Not significantly different (p ~= 0.2795)
Time ratio: 0.99424 95% CI 0.98375 - 1.00473 (n=30)
First 330.7 ns 95% CI 328.2 ns - 333.1 ns
Second 328.8 ns 95% CI 326.3 ns - 331.2 ns
So we can see that our specialized bippy-adder achieves the same kind of performance
as specialized Function2 does (about 3 adds per ns, which is about right for a modern
machine).
Summary
To write custom specialized code using the @specialized annotation,
Make the specialized class abstract and manually supply concrete implementations
Make specialized methods (including generators for a specialized class) take typeclasses that do the real work
Make the base typeclass trait @specialized and provide concrete implementations
Provide implicit vals or defs in an inheritance-hierarchy of traits so the correct one is selected
It's a lot of boilerplate, but at the end of it all you get a seamless custom-specialized experience.
This is an answer from the scala internals mailing list:
With miniboxing specialization, you can use the reflection feature:
import MbReflection._
import MbReflection.SimpleType._
import MbReflection.SimpleConv._
object Test {
def bippy[@miniboxed A, @miniboxed B](a: A, b: B): B =
(reifiedType[A], reifiedType[B]) match {
case (`int`, `int`) => (a.as[Int] + b.as[Int]).as[B]
case ( _ , `int`) => (b.as[Int] + 1).as[B]
case (`int`, _ ) => b
case ( _ , _ ) => b
}
def main(args: Array[String]): Unit = {
def x = 1.0
assert(bippy(3,4) == 7)
assert(bippy(x,4) == 5)
assert(bippy(3,x) == x)
assert(bippy(x,x) == x)
}
}
This way, you can choose the exact behavior of the bippy method based on the type arguments without defining any implicit classes.
I know it's quite old, but I bumped into it looking for something else and maybe it'll prove useful. I had a similar motivation, and answered it in how to check I'm inside a specialized function or class.
I used a reverse lookup table: SpecializedKey is a specialized class which compares equal to all other instances with the same specialization, so I can perform a check like this:
def onlyBytes[@specialized E](arg :E) :Option[E] =
if (specializationFor[E]==specializationFor[Byte]) Some(arg)
else None
Of course, there's no performance benefit when working with individual primitive values, but with collections, especially iterators, it becomes useful.
final val AllButUnit = new Specializable.Group((Byte, Short, Int, Long, Char, Float, Double, Boolean, AnyRef))
def specializationFor[@specialized(AllButUnit) E] :ResolvedSpecialization[E] =
Specializations(new SpecializedKey[E]).asInstanceOf[ResolvedSpecialization[E]]
private val Specializations = Seq(
resolve[Byte],
resolve[Short],
resolve[Int],
resolve[Long],
resolve[Char],
resolve[Float],
resolve[Double],
resolve[Boolean],
resolve[Unit],
resolve[AnyRef]
).map(
spec => spec.key -> spec :(SpecializedKey[_], ResolvedSpecialization[_])
).toMap.withDefaultValue(resolve[AnyRef])
private def resolve[@specialized(AllButUnit) E :ClassTag] :ResolvedSpecialization[E] =
new ResolvedSpecialization[E](new SpecializedKey[E], new Array[E](0))
class ResolvedSpecialization[@specialized(AllButUnit) E] private[SpecializedCompanion]
(val array :Array[E], val elementType :Class[E], val classTag :ClassTag[E], private[SpecializedCompanion] val key :SpecializedKey[E]) {
private[SpecializedCompanion] def this(key :SpecializedKey[E], array :Array[E]) =
this(array, array.getClass.getComponentType.asInstanceOf[Class[E]], ClassTag(array.getClass.getComponentType.asInstanceOf[Class[E]]), key)
override def toString = s"#specialized($elementType)"
override def equals(that :Any) = that match {
case r :ResolvedSpecialization[_] => r.elementType==elementType
case _ => false
}
override def hashCode = elementType.hashCode
}
private class SpecializedKey[@specialized(AllButUnit) E] {
override def equals(that :Any) = that.getClass==getClass
override def hashCode = getClass.hashCode
def className = getClass.getName
override def toString = className.substring(className.indexOf("$")+1)
}
I am rather new to Scala programming, so I'm still figuring out the canonical ways of doing things. Lately, I wanted to write a function foo that would only accept a set of types for its only parameter. Think of something like
def foo(x: [Int, String]) = ???
But I couldn't find anything close to the syntax above (which I find most natural to me). Using type Any would lose the compile-time checking and would make it easier for problems to escape into runtime land.
The best I was able to come up with is something like this:
sealed abstract class Base
case class TypeInt(v: Int) extends Base
case class TypeString(v: String) extends Base
implicit def toTypeInt(v: Int) = TypeInt(v)
implicit def toTypeString(v: String) = TypeString(v)
def foo(x: Base) = x match {
case TypeInt(v) => println("Int: ", v)
case TypeString(v) => println("String: ", v)
}
foo(1)
foo("hello")
(As a side note, I would like to be able to just write implicit case class ... to avoid manually creating the toType* functions, but that doesn't compile.)
Is there a simpler way to write a function that accepts a parameter of a set of types in a typesafe manner?
UPDATE: It turns out that in my specific case I could just have used method overloading. For some reason it's not possible to use method overloading in a Scala worksheet, which made me think that Scala doesn't have overloading at all. But I was wrong: in regular Scala source it is possible. There are still some shortcomings of overloading that are described in the article on the magnet pattern mentioned in the comments below (e.g. there is no way to overload on Foo[Type1] and Foo[Type2] because of type erasure in JVM generics).
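For completeness, the plain-overloading version that the update refers to could look like this (the names are illustrative):
object Printer {
  def foo(x: Int): Unit    = println("Int: " + x)
  def foo(x: String): Unit = println("String: " + x)
}

Printer.foo(1)       // Int: 1
Printer.foo("hello") // String: hello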
The magnet pattern feels like overkill here—you can use plain old type classes:
trait Fooable[A] { def apply(a: A): Unit }
implicit object intFooable extends Fooable[Int] {
def apply(a: Int) = printf("Int: %d\n", a)
}
implicit object stringFooable extends Fooable[String] {
def apply(a: String) = printf("String: %s\n", a)
}
def foo[A: Fooable](a: A) = implicitly[Fooable[A]].apply(a)
And then:
scala> foo(1)
Int: 1
scala> foo("hello")
String: hello
Suppose you're worried about collisions after erasure. Let's try some instances for a generic type:
trait Bar[A]
implicit object barIntFooable extends Fooable[Bar[Int]] {
def apply(a: Bar[Int]) = println("A bar of ints.")
}
implicit object barStringFooable extends Fooable[Bar[String]] {
def apply(a: Bar[String]) = println("A bar of strings.")
}
And again:
scala> foo(new Bar[Int] {})
A bar of ints.
scala> foo(new Bar[String] {})
A bar of strings.
Everything works as expected.
Looking at some scala-docs of my libraries, it appeared to me that there is some unwanted noise from value classes. For example:
implicit class RichInt(val i: Int) extends AnyVal {
def squared = i * i
}
This introduces an unwanted symbol i:
4.i // arghh....
That stuff appears both in the Scaladocs and in the IDE auto-completion, which is really not good.
So... any ideas of how to mitigate this problem? I mean you can use RichInt(val self: Int) but that doesn't make it any better (4.self, wth?)
EDIT:
In the following example, does the compiler erase the intermediate object, or not?
import language.implicitConversions
object Definition {
trait IntOps extends Any { def squared: Int }
implicit private class IntOpsImpl(val i: Int) extends AnyVal with IntOps {
def squared = i * i
}
implicit def IntOps(i: Int): IntOps = new IntOpsImpl(i) // optimised or not?
}
object Application {
import Definition._
// 4.i -- forbidden
4.squared
}
In Scala 2.11 you can make the val private, which fixes this issue:
implicit class RichInt(private val i: Int) extends AnyVal {
def squared = i * i
}
It does introduce noise (in 2.10, that is; in 2.11 and beyond you can just declare the val private, as shown above). You don't always want that, but that's the way it is for now.
You can't get around the problem by following the trait-plus-private-value-class pattern from your edit, because the compiler can't actually see that it's a value class at the end of it, so it goes through the generic route. Here's the bytecode:
12: invokevirtual #24;
//Method Definition$.IntOps:(I)LDefinition$IntOps;
15: invokeinterface #30, 1;
//InterfaceMethod Definition$IntOps.squared:()I
See how the first one returns a copy of the class Definition$IntOps? It's boxed.
But these two patterns work, sort of:
(1) Common name pattern.
implicit class RichInt(val repr: Int) extends AnyVal { ... }
implicit class RichInt(val underlying: Int) extends AnyVal { ... }
Use one of these. Adding i as a method is annoying. Adding underlying when there is nothing underlying is not nearly so bad--you'll only hit it if you're trying to get the underlying value anyway. And if you keep using the same name over and over:
implicit class RicherInt(val repr: Int) extends AnyVal { def sq = repr * repr }
implicit class RichestInt(val repr: Int) extends AnyVal { def cu = repr * repr * repr }
scala> 3.cu
res5: Int = 27
scala> 3.repr
<console>:10: error: type mismatch;
found : Int(3)
required: ?{def repr: ?}
Note that implicit conversions are not applicable because they are ambiguous:
both method RicherInt of type (repr: Int)RicherInt
and method RichestInt of type (repr: Int)RichestInt
the name collision sorta takes care of your problem anyway. If you really want to, you can create an empty value class that exists only to collide with repr.
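A sketch of that last trick: an otherwise empty value class whose only purpose is to make 3.repr ambiguous (the ReprCollider name is made up):
implicit class RicherInt(val repr: Int) extends AnyVal {
  def sq = repr * repr
}

// no methods: exists only so that `3.repr` never resolves
implicit class ReprCollider(val repr: Int) extends AnyVal

3.sq   // 9: still works, only RicherInt has sq
// 3.repr // error: ambiguous implicit conversions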
(2) Explicit implicit pattern
Sometimes you internally want your value to be named something shorter or more mnemonic than repr or underlying without making it available on the original type. One option is to create a forwarding implicit like so:
class IntWithPowers(val i: Int) extends AnyVal {
def sq = i*i
def cu = i*i*i
}
implicit class EnableIntPowers(val repr: Int) extends AnyVal {
def pow = new IntWithPowers(repr)
}
Now you have to call 3.pow.sq instead of 3.sq--which may be a good way to carve up your namespace!--and you don't have to worry about the namespace pollution beyond the original repr.
Perhaps the problem is the heterogeneous scenarios for which value classes were designed. From the SIP:
• Inlined implicit wrappers. Methods on those wrappers would be translated to extension methods.
• New numeric classes, such as unsigned ints. There would no longer need to be a boxing overhead for such classes. So this is similar to value classes in .NET.
• Classes representing units of measure. Again, no boxing overhead would be incurred for these classes.
I think there is a difference between the first and the last two. In the first case, the value class itself should be transparent. You wouldn't expect anywhere a type RichInt, but you only really operate on Int. In the second case, e.g. 4.meters, I understand that getting the actual "value" makes sense, hence requiring a val is ok.
This split is again reflected in the definition of a value class:
1. C must have exactly one parameter, which is marked with val and which has public accessibility.
...
7. C must be ephemeral.
The latter meaning it has no other fields etc., contradicting No. 1.
With
class C(val u: U) extends AnyVal
the only place in the SIP where u is ever used is in example implementations (e.g. def extension$plus($this: Meter, other: Meter) = new Meter($this.underlying + other.underlying)); and then in intermediate representations, only to be erased again finally:
new C(e).u ⇒ e
Making the intermediate representation accessible for synthetic methods is, IMO, something that could also be done by the compiler, but it should not be visible in user-written code. (I.e., you could use a val if you want to access the peer, but you shouldn't have to.)
A possibility is to use a name that is shadowed:
implicit class IntOps(val toInt: Int) extends AnyVal {
def squared = toInt * toInt
}
Or
implicit class IntOps(val toInt: Int) extends AnyVal { ops =>
import ops.{toInt => value}
def squared = value * value
}
This would still end up in the Scaladocs, but at least calling 4.toInt is neither confusing nor actually triggering IntOps.
I'm not sure it's "unwanted noise" as I think you will almost always need to access the underlying values when using your RichInt.
Consider this:
// writing ${r} we use a RichInt where an Int is required
scala> def squareMe(r: RichInt) = s"${r} squared is ${r.squared}"
squareMe: (r: RichInt)String
// results are not what we hoped, we wanted "2", not "RichInt#2"
scala> squareMe(2)
res1: String = RichInt@2 squared is 4
// we actually need to access the underlying i
scala> def squareMeRight(r: RichInt) = s"${r.i} squared is ${r.squared}"
squareMeRight: (r: RichInt)String
Also, if you had a method that adds two RichInts, you would again need to access the underlying value:
scala> implicit class ImplRichInt(val i: Int) extends AnyVal {
| def Add(that: ImplRichInt) = new ImplRichInt(i + that) // nope...
| }
<console>:12: error: overloaded method value + with alternatives:
(x: Int)Int <and>
(x: Char)Int <and>
(x: Short)Int <and>
(x: Byte)Int
cannot be applied to (ImplRichInt)
def Add(that: ImplRichInt) = new ImplRichInt(i + that)
^
scala> implicit class ImplRichInt(val i: Int) extends AnyVal {
| def Add(that: ImplRichInt) = new ImplRichInt(i + that.i)
| }
defined class ImplRichInt
scala> 2.Add(4)
res7: ImplRichInt = ImplRichInt@6