Is there a way to mix in the Ordered trait for a covariant generic?
I have the following code:
trait Foo[+T <: Foo[T]] extends Ordered[T] {
  def id: Int
  override def compare(that: T): Int = {
    this.id compare that.id
  }
}
I need T to be covariant and would like Ordered to work too. The version above gives a "covariant type occurs in contravariant position" error.
You can't use Ordered with a covariant type parameter, because compare takes an argument of that type and method parameters are contravariant positions. Instead, you should use an implicit Ordering defined in the companion object:
trait Foo[+T] {
  def id: Int
}

object Foo {
  implicit def fooOrdering[A <: Foo[_]]: Ordering[A] = {
    new Ordering[A] {
      override def compare(x: A, y: A): Int = x.id compare y.id
    }
  }
}
Any reasonable function that compares objects should take an Ordering instance for the objects it is comparing, and many take one implicitly. For example:
case class F(id: Int) extends Foo[Int]
case class G(id: Int) extends Foo[Int]
List(F(1), F(2), F(5), F(3), G(12)).max // = G(12)
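The same implicit drives the other Ordering-based collection operations as well; a small sketch, assuming the Foo, F and G definitions above are in scope:
val xs: List[Foo[Int]] = List(F(1), G(12), F(5))
xs.min                             // F(1), resolved via Foo.fooOrdering
xs.sorted                          // List(F(1), F(5), G(12))
xs.max(Foo.fooOrdering[Foo[Int]])  // the same instance, passed explicitly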
Ordered[A] is invariant in A. The old documentation for this trait explains why:
A trait for totally ordered data. Note that since version 2006-07-24 this trait is no longer covariant in A. It is important that the equals method for an instance of Ordered[A] be consistent with the compare method. However, due to limitations inherent in the type erasure semantics, there is no reasonable way to provide a default implementation of equality for instances of Ordered[A]. Therefore, if you need to be able to use equality on an instance of Ordered[A] you must provide it yourself either when inheriting or instantiating. It is important that the hashCode method for an instance of Ordered[A] be consistent with the compare method. However, it is not possible to provide a sensible default implementation. Therefore, if you need to be able to compute the hash of an instance of Ordered[A] you must provide it yourself either when inheriting or instantiating.
This means if you want to use Ordered[A] you'll explicitly have to provide an implementation of compare for subtypes of Foo.
A workaround can be done with an implicit Ordering[A]:
implicit def ord[A <: Foo[A]] = new math.Ordering[A] {
  override def compare(a: A, b: A) = a.id compare b.id
}
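A minimal usage sketch, assuming a plain trait Foo[+T] with an id member (i.e. without extending Ordered) and the implicit ord above in scope. Note the F-bound A <: Foo[A], so each subtype passes itself as the type argument:
case class F(id: Int) extends Foo[F]  // F-bounded: F <: Foo[F]
List(F(3), F(1), F(2)).sorted         // List(F(1), F(2), F(3)), using ord[F]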
Is there a (dis)advantage (for example performance-wise) to defining classes auxiliary to a trait in its companion object rather than defining them as inner classes? For example:
object Foo {
  final class Bar[A, B](val x: A, val y: B)
}

trait Foo[A, B] {
  def foo: Foo.Bar[A, B]
}
vs.
trait Foo[A, B] {
  final class Bar(val x: A, val y: B)
  def foo: Bar
}
I tend to use the first variant, because it does not require access to any member of Foo, but since the type parameters become redundant, I'm thinking about switching to the second variant. Class Bar is only used internally, so it could get a protected modifier.
I reckon the inner class will have an invisible Foo.this reference somewhere, but my understanding is that Foo.Bar is also not purely static as in Java and probably carries some pointer to Foo$.this, too.
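One way to check this empirically (a hypothetical sketch, with the two variants renamed Outer1 and Outer2 to avoid the clashing Foo names above) is to list the fields the compiler actually generates; an $outer field indicates a reference to the enclosing instance:
object Outer1 { final class Bar[A, B](val x: A, val y: B) }  // companion-object variant
trait Outer2[A, B] { final class Bar(val x: A, val y: B) }   // inner-class variant

object FieldCheck extends App {
  def fieldNames(c: Class[_]): String =
    c.getDeclaredFields.map(_.getName).mkString(", ")

  println(fieldNames(classOf[Outer1.Bar[Int, String]]))
  val o2 = new Outer2[Int, String] {}
  println(fieldNames((new o2.Bar(1, "a")).getClass))
}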
I have the following simplified situation:
abstract sealed trait Top
class A[T] extends Top
class B[T] extends Top
class Typeclass[T]
implicit def a[T] = new Typeclass[A[T]]
implicit def b[T] = new Typeclass[B[T]]
Now I have a Map[String, Top] and want to use an operation on all values in the map that require the presence of an instance of Typeclass to be available in the context. This will not compile as the concrete types of the values in the map are not visible from its type and I can therefore not set a context bound for them.
Is there a way to tell the compiler that in fact there will always be an instance available? In this example this is given as there are implicit functions to generate those instances for every concrete subtype of Top.
Or is the only solution to use a HList and recurse over its type requiring all the instances to be in context?
I recommend using some variation on this adaptation of Oleg's Existentials as universals in this sort of situation: pack the type class instance along with the value it's the instance for,
abstract sealed trait Top
class A[T] extends Top
class B[T] extends Top
class Typeclass[T]
implicit def a[T] = new Typeclass[A[T]]
implicit def b[T] = new Typeclass[B[T]]
trait Pack {
  type T <: Top
  val top: T
  implicit val tc: Typeclass[T]
}

object Pack {
  def apply[T0 <: Top](t0: T0)(implicit tc0: Typeclass[T0]): Pack =
    new Pack { type T = T0 ; val top = t0 ; val tc = tc0 }
}
val m = Map("a" -> Pack(new A[Int]), "b" -> Pack(new B[Double]))
def foo[T: Typeclass](t: T): Unit = ()
def bar(m: Map[String, Pack], k: String): Unit =
  m.get(k).map { pack =>
    import pack._ // imports T, top and the implicit tc
    foo(top)      // instance available for the call to foo
  }
bar(m, "a")
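Since the original goal was to apply the operation to every value in the map, the same import trick works per entry; a small sketch reusing the definitions above:
def fooAll(m: Map[String, Pack]): Unit =
  m.values.foreach { pack =>
    import pack._ // brings top and the implicit tc for pack.T into scope
    foo(top)      // a Typeclass[pack.T] instance is found for each entry
  }

fooAll(m)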
As discussed in the comments, it would be more convenient to have the typeclass defined on Top, and that can be done with pattern matching.
Supposing part of the definition of the typeclass is
def f[T](t: T): FResult[T]
and you have the corresponding implementations
def fOnA[T](t: A[T]): FResult[A[T]] = ...
def fOnB[T](t: B[T]): FResult[B[T]] = ...
Then you can define
def fOnT(t: Top): FResult[Top] = t match {
  case a: A[_] => fOnA(a)
  // provided an FResult[A[T]] is an FResult[Top],
  // or some conversion is possible
  case b: B[_] => fOnB(b)
}
It is both legal and safe to call a generic method such as fOnA[T] with an existential (a matching A[_]).
However, it might be difficult to convince the compiler that the parameter you pass to f or the result you get are ok, given the reduced information of the existential. If so, please post the signatures you need.
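For instance, a purely illustrative sketch (FResult is not specified in the question): if FResult is covariant, the match above typechecks directly, because FResult[A[T]] widens to FResult[Top]:
case class FResult[+T](value: T) // hypothetical covariant result type

def fOnA[T](t: A[T]): FResult[A[T]] = FResult(t)
def fOnB[T](t: B[T]): FResult[B[T]] = FResult(t)

def fOnT(t: Top): FResult[Top] = t match {
  case a: A[_] => fOnA(a) // FResult[A[_]] <: FResult[Top] by covariance
  case b: B[_] => fOnB(b)
}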
Specialization promises to provide high-efficiency implementations for primitive types with minimal extra boilerplate. But specialization seems to be too eager for its own good.
If I want to specialize a class or method,
def foo[@specialized(Byte) A](a: A): String = ???

class Bar[@specialized(Int) B] {
  var b: B = ???
  def baz: B = ???
}
then I am required to write a single implementation that covers both the specialized and the generic cases.
What if those cases are really different from each other, such that the implementations do not overlap?
For example, if I wanted to perform math on bytes, I would need to insert a bunch of & 0xFFs into the
logic.
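For instance, a minimal sketch (a hypothetical helper, not from the question) of what the masking looks like when treating bytes as unsigned:
def addUnsignedBytes(a: Byte, b: Byte): Byte =
  (((a & 0xFF) + (b & 0xFF)) & 0xFF).toByte

addUnsignedBytes(200.toByte, 100.toByte) // 44: (200 + 100) mod 256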
I could possibly write a specialized type-class to do the math right, but doesn't that just push the same
problem back one level? How do I write my specialized + method for that type class in a way that doesn't
conflict with a more general implementation?
class Adder[@specialized(Byte) A] {
  def +(a1: A, a2: A): A = ???
}
Also, once I do create a type-class this way, how do I make sure the correct type class is used for my specialized methods
instead of the general version (which, if it is truly general, should probably compile and certainly would run, except that it isn't what I want)?
Is there a way to do this without macros? Is it easier with macros?
This is my best attempt so far. It works but the implementation isn't pretty (even if the results are). Improvements are welcome!
There is a macro-free way to do this, both at the class and method level, and it does involve type classes--quite a lot of
them! And the answer is not exactly the same for classes and methods. So bear with me.
Manually Specialized Classes
You manually specialize classes the same way that you manually provide any kind of different implementation for classes:
your superclass is abstract (or is a trait), and the subclasses provide the implementation details.
abstract class Bippy[@specialized(Int) B] {
  def b: B
  def next: Bippy[B]
}

class BippyInt(initial: Int) extends Bippy[Int] {
  private var myB: Int = initial
  def b: Int = myB
  def next = { myB += 1; this }
}

class BippyObject(initial: Object) extends Bippy[Object] {
  private var myB: Object = initial
  def b: Object = myB
  def next = { myB = myB.toString; this }
}
Now, if only we had a specialized method to pick out the right implementations, we'd be done:
object Bippy {
  def apply[@specialized(Int) B](initial: B) = ??? // Now what?
}
So we've converted our problem of providing custom specialized classes and methods into just
needing to provide custom specialized methods.
Manually Specialized Methods
Manually specializing a method requires a way to write one implementation that can nonetheless
select which implementation you want (at compile time). Type classes are great at this. Suppose
we already had type classes that implemented all of our functionality, and that the compiler would
select the right one. Then we could just write
def foo[@specialized(Int) A: SpecFooImpl](a: A): String =
  implicitly[SpecFooImpl[A]](a)
...or we could if implicitly was guaranteed to preserve specialization and if we only
ever wanted a single type parameter. In general these things are not true, so we'll write
our type class out as an implicit parameter rather than relying on the A: TC syntactic sugar.
def foo[@specialized(Int) A](a: A)(implicit impl: SpecFooImpl[A]): String =
  impl(a)
(Actually, that's less boilerplate anyway.)
So we've converted our problem of providing custom specialized methods into just needing
to write specialized typeclasses and getting the compiler to fill in the correct ones.
Manually Specialized Type Classes
Type classes are just classes, and now we have to write specialized classes again, but
there's a critical difference. The user isn't the one asking for arbitrary instances.
This gives us just enough extra flexibility for it to work.
For foo, we need an Int version and a fully generic version.
trait SpecFooImpl[@specialized(Int) A] {
  def apply(param: A): String
}

final class SpecFooImplAny[A] extends SpecFooImpl[A] {
  def apply(param: A) = param.toString
}

final class SpecFooImplInt extends SpecFooImpl[Int] {
  def apply(param: Int) = "!" * math.max(0, param)
}
Now we could create implicits to supply those type classes like so
implicit def specFooAsAny[A] = new SpecFooImplAny[A]
implicit val specFooAsInt = new SpecFooImplInt
except we have a problem: if we actually try to call foo with an Int, both implicits will apply.
So if we just had a way to prioritize which type class we chose, we'd be all set.
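A sketch of the failure mode, assuming both specFooAsAny and specFooAsInt from above are in implicit scope:
foo(42)     // does not compile: ambiguous implicits, since both
            // SpecFooImplInt and SpecFooImplAny[Int] match SpecFooImpl[Int]
foo("fish") // fine: only SpecFooImplAny[String] applies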
Selection of type classes (and implicits in general)
One of the secret ingredients the compiler uses to determine the right implicit to use
is inheritance. If an implicit is inherited from A via B extends A, but B
declares its own implicit that could also apply, the one declared in B wins if all else is equal.
So we put the ones we want to win deeper in the inheritance hierarchy.
Also, since you're free to define implicits in traits, you can mix them in anywhere.
So the last piece of our puzzle is to pop our type class implicits into a chain
of traits that extend each other, with the more generic ones appearing earlier.
trait LowPriorityFooSpecializers {
  implicit def specializeFooAsAny[A] = new SpecFooImplAny[A]
}

trait FooSpecializers extends LowPriorityFooSpecializers {
  implicit val specializeFooAsInt = new SpecFooImplInt
}
Mix in the highest-priority trait to wherever the implicits are needed, and the
type classes will be picked as desired.
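For example (a sketch using the definitions above, with FooContext as a hypothetical name), an object extending the highest-priority trait gives call sites a single import:
object FooContext extends FooSpecializers
import FooContext._

foo(42)     // picks SpecFooImplInt, the higher-priority instance
foo("fish") // falls back to SpecFooImplAny[String]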
Note that the type classes will be as specialized as you make them even if the
specialized annotation is not used. So you can do without specialized at all,
as long as you know the type precisely enough, unless you want to use specialized
functions or interoperate with other specialized classes. (And you probably do.)
A complete example
Let's suppose we want to make a two-parameter specialized bippy function that will apply the following transformations:
bippy(a, b) -> b
bippy(a, b: Int) -> b+1
bippy(a: Int, b) -> b
bippy(a: Int, b: Int) -> a+b
We should be able to achieve this with three type classes and a single specialized
method. Let's try, first the method:
def bippy[@specialized(Int) A, @specialized(Int) B](a: A, b: B)(implicit impl: SpecBippy[A, B]) =
  impl(a, b)
Then the type classes:
trait SpecBippy[@specialized(Int) A, @specialized(Int) B] {
  def apply(a: A, b: B): B
}

final class SpecBippyAny[A, B] extends SpecBippy[A, B] {
  def apply(a: A, b: B) = b
}

final class SpecBippyAnyInt[A] extends SpecBippy[A, Int] {
  def apply(a: A, b: Int) = b + 1
}

final class SpecBippyIntInt extends SpecBippy[Int, Int] {
  def apply(a: Int, b: Int) = a + b
}
Then the implicits in chained traits:
trait LowerPriorityBippySpeccer {
  // Trick to avoid allocation, since the generic case is erased anyway!
  private val mySpecBippyAny = new SpecBippyAny[AnyRef, AnyRef]
  implicit def specBippyAny[A, B] = mySpecBippyAny.asInstanceOf[SpecBippyAny[A, B]]
}

trait LowPriorityBippySpeccer extends LowerPriorityBippySpeccer {
  private val mySpecBippyAnyInt = new SpecBippyAnyInt[AnyRef]
  implicit def specBippyAnyInt[A] = mySpecBippyAnyInt.asInstanceOf[SpecBippyAnyInt[A]]
}

// Make this last one an object so we can import the contents
object Speccer extends LowPriorityBippySpeccer {
  implicit val specBippyIntInt = new SpecBippyIntInt
}
and finally we'll try it out (after pasting everything together with :paste in the REPL):
scala> import Speccer._
import Speccer._
scala> bippy(Some(true), "cod")
res0: String = cod
scala> bippy(1, "salmon")
res1: String = salmon
scala> bippy(None, 3)
res2: Int = 4
scala> bippy(4, 5)
res3: Int = 9
It works--our custom implementations are used. Just to check that we can use any type without leaking into the wrong implementation:
scala> bippy(4, 5: Short)
res4: Short = 5
scala> bippy(4, 5: Double)
res5: Double = 5.0
scala> bippy(3: Byte, 2)
res6: Int = 3
And finally, to verify that we have actually avoided boxing, we'll time bippy at
summing a bunch of integers:
scala> val th = new ichi.bench.Thyme
th: ichi.bench.Thyme = ichi.bench.Thyme@1130520d
scala> val adder = (i: Int, j: Int) => i + j
adder: (Int, Int) => Int = <function2>
scala> var a = Array.fill(1024)(util.Random.nextInt)
a: Array[Int] = Array(-698116967, 2090538085, -266092213, ...
scala> th.pbenchOff(){
  var i, s = 0
  while (i < 1024) { s = adder(a(i), s); i += 1 }
  s
}{
  var i, s = 0
  while (i < 1024) { s = bippy(a(i), s); i += 1 }
  s
}
Benchmark comparison (in 1.026 s)
Not significantly different (p ~= 0.2795)
Time ratio: 0.99424 95% CI 0.98375 - 1.00473 (n=30)
First 330.7 ns 95% CI 328.2 ns - 333.1 ns
Second 328.8 ns 95% CI 326.3 ns - 331.2 ns
So we can see that our specialized bippy-adder achieves the same kind of performance
as specialized Function2 does (about 3 adds per ns, which is about right for a modern
machine).
Summary
To write custom specialized code using the @specialized annotation:
1. Make the specialized class abstract and manually supply concrete implementations.
2. Make specialized methods (including generators for a specialized class) take typeclasses that do the real work.
3. Make the base typeclass trait @specialized and provide concrete implementations.
4. Provide implicit vals or defs in an inheritance hierarchy of traits so the correct one is selected.
It's a lot of boilerplate, but at the end of it all you get a seamless custom-specialized experience.
This is an answer from the scala internals mailing list:
With miniboxing specialization, you can use the reflection feature:
import MbReflection._
import MbReflection.SimpleType._
import MbReflection.SimpleConv._
object Test {
  def bippy[@miniboxed A, @miniboxed B](a: A, b: B): B =
    (reifiedType[A], reifiedType[B]) match {
      case (`int`, `int`) => (a.as[Int] + b.as[Int]).as[B]
      case (  _  , `int`) => (b.as[Int] + 1).as[B]
      case (`int`,   _  ) => b
      case (  _  ,   _  ) => b
    }

  def main(args: Array[String]): Unit = {
    def x = 1.0
    assert(bippy(3, 4) == 7)
    assert(bippy(x, 4) == 5)
    assert(bippy(3, x) == x)
    assert(bippy(x, x) == x)
  }
}
This way, you can choose the exact behavior of the bippy method based on the type arguments without defining any implicit classes.
I know it's quite old, but I bumped into it looking for something else and maybe it'll prove useful. I had a similar motivation, and answered it in "how to check if I'm inside a specialized function or class".
I used a reverse lookup table: SpecializedKey is a specialized class which equals all other instances with the same specialization, so I can perform a check like this:
def onlyBytes[@specialized E](arg: E): Option[E] =
  if (specializationFor[E] == specializationFor[Byte]) Some(arg)
  else None
Of course, there's no performance benefit when working with individual primitive values, but with collections, especially iterators, it becomes useful.
final val AllButUnit = new Specializable.Group((Byte, Short, Int, Long, Char, Float, Double, Boolean, AnyRef))

def specializationFor[@specialized(AllButUnit) E]: ResolvedSpecialization[E] =
  Specializations(new SpecializedKey[E]).asInstanceOf[ResolvedSpecialization[E]]

private val Specializations = Seq(
    resolve[Byte],
    resolve[Short],
    resolve[Int],
    resolve[Long],
    resolve[Char],
    resolve[Float],
    resolve[Double],
    resolve[Boolean],
    resolve[Unit],
    resolve[AnyRef]
  ).map(
    spec => (spec.key -> spec): (SpecializedKey[_], ResolvedSpecialization[_])
  ).toMap.withDefaultValue(resolve[AnyRef])

private def resolve[@specialized(AllButUnit) E: ClassTag]: ResolvedSpecialization[E] =
  new ResolvedSpecialization[E](new SpecializedKey[E], new Array[E](0))

class ResolvedSpecialization[@specialized(AllButUnit) E] private[SpecializedCompanion] (
    val array: Array[E],
    val elementType: Class[E],
    val classTag: ClassTag[E],
    private[SpecializedCompanion] val key: SpecializedKey[E]) {

  private[SpecializedCompanion] def this(key: SpecializedKey[E], array: Array[E]) =
    this(array, array.getClass.getComponentType.asInstanceOf[Class[E]],
         ClassTag(array.getClass.getComponentType.asInstanceOf[Class[E]]), key)

  override def toString = s"@specialized($elementType)"

  override def equals(that: Any) = that match {
    case r: ResolvedSpecialization[_] => r.elementType == elementType
    case _ => false
  }

  override def hashCode = elementType.hashCode
}

private class SpecializedKey[@specialized(AllButUnit) E] {
  override def equals(that: Any) = that.getClass == getClass
  override def hashCode = getClass.hashCode
  def className = getClass.getName
  override def toString = className.substring(className.indexOf("$") + 1)
}
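A small usage sketch, assuming the definitions above live inside the SpecializedCompanion object they reference and are imported at the call site. When the static type is Byte, the lookup resolves the Byte specialization; otherwise it falls back to another slot (ultimately AnyRef):
onlyBytes(1.toByte) // Some(1): specializationFor[E] matches specializationFor[Byte]
onlyBytes(1)        // None: resolves the Int specialization
onlyBytes("fish")   // None: resolves the AnyRef specialization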
I have (for lack of a better term) a factory method that encapsulates constructing an object:
def createMyObject = new SomeClass(a, b, c, d)
Now, depending on the context, I will need to mix in one or more traits into SomeClass:
new SomeClass with Mixin1
or
new SomeClass with Mixin2 with Mixin3
Instead of creating multiple separate factory methods for each "type" of instantiation, how can I pass in the traits to be mixed in so that it can be done with a single method? Or perhaps there is a good pattern for this that is structured differently?
I'd like to maintain the encapsulation so I'd rather not have each consumer just create the class on its own.
If you need only mixins without method overriding, you can just use type classes:
trait Marker
class C[+T <: Marker] { def b = 1 }
trait Marker1 extends Marker
implicit class I1[T <: Marker1](c: C[T]) {def a = 6 + c.b}
trait Marker2 extends Marker
implicit class I2[T <: Marker2](c: C[T]) {def a = 5 + c.b}
trait Marker3 extends Marker
implicit class I3[T <: Marker3](c: C[T]) {def k = 100}
trait Marker4 extends Marker3
implicit class I4[T <: Marker4](c: C[T]) {def z = c.k + 100} //marker3's `k` is visible here
scala> def create[T <: Marker] = new C[T]
create: [T <: Marker]=> C[T]
scala> val o = create[Marker1 with Marker3]
o: C[Marker1 with Marker3] = C#51607207
scala> o.a
res56: Int = 7
scala> o.k
res57: Int = 100
scala> create[Marker4].z
res85: Int = 200
But it won't work for create[Marker1 with Marker2].a (ambiguous implicits), so there is no linearization here. Still, if you just want to mix in some methods (like JavaScript prototypes) and maybe inject something, it seems fine. You can also combine it with traditional linearized mix-ins by adding some traits to C, I1, I2, etc.
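A one-line sketch of the ambiguity mentioned above:
val c = create[Marker1 with Marker2]
// c.a  // does not compile: ambiguous implicits, both I1 and I2 could supply `a`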
You can instantiate the class differently depending on the context.
def createMyObject =
  if (context.foo)
    new SomeClass
  else
    new SomeClass with Mixin1
However, if the consumers are the ones that know the traits that are supposed to be mixed in, then why wouldn't you just instantiate things there?