Is it possible to have syntax like (parameter1, parameter2) applied myFunction? Here myFunction would be applied to the given parameters. Concrete example: val myFunction = (a: String) => a + a + "there"; "hello" applied myFunction should output "hellohellothere".
I know it's possible to do (parameter1, parameter2) match { case myFunctionWrittenOut }, so the above would become "hello" match { case a: String => a + a + "there" }, but here you have to write out the function: you can't use a reference.
I don't think it's possible with standard Scala, but you can write some helper methods that make something like this available:
implicit class Applied1[T](val t: T) extends AnyVal {
  def applied[R](f: T => R): R = f(t)
}

implicit class Applied2[T1, T2](val t: (T1, T2)) extends AnyVal {
  def applied[R](f: (T1, T2) => R): R = f(t._1, t._2)
}

implicit class Applied3[T1, T2, T3](val t: (T1, T2, T3)) extends AnyVal {
  def applied[R](f: (T1, T2, T3) => R): R = f(t._1, t._2, t._3)
}

// ... and 19 more implicit classes: Applied4 to Applied22
And then you can use it like this:
def minus(a: Int): Int = -a
def plus(a: Int, b: Int): Int = a + b
def plus(a: Int, b: Int, c: Int): Int = a + b + c
scala> 5 applied minus
res0: Int = -5
scala> (1, 2) applied plus
res1: Int = 3
scala> (1, 2, 3) applied plus
res2: Int = 6
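With Applied1 in scope, the exact example from the question works as well:
val myFunction = (a: String) => a + a + "there"
"hello" applied myFunction   // "hellohellothere"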
But this may be a bit more complex to use with generic functions, or functions with implicit arguments:
def mul[T : Numeric](a: T, b: T): T = {
  import Numeric.Implicits._
  a * b
}
scala> (1.5, 2.5) applied (mul(_, _))
res3: Double = 3.75
Implicit classes can be used to achieve something similar to what you are looking for.
An implicit class with a single constructor argument can be used as a pattern to add methods to a given type. One example is DurationInt, which "adds" methods to integers to enable converting them to durations. It is imported into scope using import scala.concurrent.duration._
A simplified version of DurationInt could be defined as follows:
import java.util.concurrent.TimeUnit
import scala.concurrent.duration.{Duration, FiniteDuration}

implicit class DurationInt(n: Int) {
  def seconds: FiniteDuration = Duration(n, TimeUnit.SECONDS)
}
This enables use of the seconds method on all integers:
2.seconds // Returns a duration object
For functions with multiple arguments you can use a tuple argument for the implicit class:
implicit class TupleConcat(tuple: (String, String)) {
  def concat: String = tuple._1 + tuple._2
}
// enables the following syntax
("aa", "bb").concat
It is common for implicit classes such as these to extend AnyVal; this allows some compiler optimizations which avoid actually having to instantiate the implicit class in many cases.
implicit final class DurationInt(val n: Int) extends AnyVal { /* implementation */ }
In Scala, the parameter list of a function is always written before the function:
val fn = (a: Int, b: Int) => a + b
//       ^ parameter list    ^ function body
The title describes a specific problem I encountered when trying to solve a more general problem: how to separate a type conversion concern from a calculation concern. If I can solve that larger problem by another means than partially applied functions, great!
I'm using a type class, NumberOps, to represent operations on numbers. This code is pared down, but still exhibits the problem and expresses my intent. The first part simply defines the type class and a couple of implementations.
trait NumberOps[T] {  // the type class (simplified for debugging)
  def neg(x: T): T    // negate x
  def abs(x: T): T    // absolute value of x
  // ... over 50 more operations
  def toFloating(x: T): AnyVal    // convert from native to Float or Double, preserving precision
  def fromFloating(f: AnyVal): T  // convert from Float or Double to native
  // ... also to/from Integral and to/from Big
}
object NumberOps {  // Implements NumberOps for each type
  import language.implicitConversions

  implicit object FloatOps extends NumberOps[Float] {
    def neg(x: Float): Float = -x
    def abs(x: Float): Float = x.abs
    def toFloating(f: Float): Float = f
    def fromFloating(x: AnyVal): Float = {
      x match {
        case f: Float  => f
        case d: Double => d.toFloat
      }
    }
  }

  implicit object DoubleOps extends NumberOps[Double] {
    def neg(x: Double): Double = -x
    def abs(x: Double): Double = x.abs
    def toFloating(d: Double): Double = d
    def fromFloating(x: AnyVal): Double = {
      x match {
        case f: Float  => f.toDouble
        case d: Double => d
      }
    }
  }

  // ... other implicits defined for all primitive types, plus BigInt, BigDec
}  // NumberOps object
All well and good. But now I want to implement NumberOps for complex numbers. A complex number will be represented as a 2-element array of any numeric type already defined (i.e. all primitive types plus BigInt and BigDecimal).
The intent with this code is to avoid combinatorial explosion of number types with numeric operations. I had hoped to achieve this by separating Concern A (type conversion) from Concern B (generic calculation).
You'll notice that "Concern A" is embodied in def eval, while "Concern B" is defined as a generic method, f, and then passed as a partially applied function (f _) to method eval. This code depends on the earlier code.
object ImaginaryOps {  // Implements NumberOps for complex numbers, as 2-element arrays of any numeric type
  import language.implicitConversions
  import reflect.ClassTag
  import NumberOps._

  implicit def ComplexOps[U: NumberOps : ClassTag]: NumberOps[Array[U]] = {  // NumberOps[T] :: NumberOps[Array[U]]
    val numOps = implicitly[NumberOps[U]]

    type OpF2[V] = (V, V) => NumberOps[V] => (V, V)  // equivalent to curried function: f[V](V,V)(NumberOps[V]):(V,V)

    // Concern A: widen x,y from native type U to type V, evaluate function f, then convert the result back to native type U
    def eval[V](x: U, y: U)(f: OpF2[V]): (U, U) = {
      (numOps.toFloating(x), numOps.toFloating(y), f) match {
        case (xf: Float, yf: Float, _: OpF2[Float] @unchecked) =>  // @unchecked permits compiler type inference on f
          val (xv, yv) = f(xf.asInstanceOf[V], yf.asInstanceOf[V])(FloatOps.asInstanceOf[NumberOps[V]])
          (numOps.fromFloating(xv.asInstanceOf[Float]), numOps.fromFloating(yv.asInstanceOf[Float]))
        case (xd: Double, yd: Double, _: OpF2[Double] @unchecked) =>  // @unchecked permits compiler type inference on f
          val (xv, yv) = f(xd.asInstanceOf[V], yd.asInstanceOf[V])(DoubleOps.asInstanceOf[NumberOps[V]])
          (numOps.fromFloating(xv.asInstanceOf[Double]), numOps.fromFloating(yv.asInstanceOf[Double]))
      }
    }  // eval

    new NumberOps[Array[U]] {  // implement NumberOps for complex numbers of any type U
      def neg(a: Array[U]): Array[U] = a match { case Array(ax, ay) =>
        def f[V](xv: V, yv: V)(no: NumberOps[V]): (V, V) = (no.neg(xv), no.neg(yv))  // Concern B: the complex calculation
        val (xu, yu) = eval(a(0), a(1))(f _)  // combine Concern A (widening conversion) with Concern B (calculation)
        a(0) = xu; a(1) = yu; a
      }
      def abs(a: Array[U]): Array[U] = a match { case Array(ax, ay) =>
        def f[V](xv: V, yv: V)(no: NumberOps[V]): (V, V) = (no.abs(xv), no.abs(yv))  // Concern B: the complex calculation
        val (xu, yu) = eval(a(0), a(1))(f _)  // combine Concern A (widening conversion) with Concern B (calculation)
        a(0) = xu; a(1) = yu; a
      }
      def toFloating(a: Array[U]): AnyVal = numOps.toFloating(a(0))
      def fromFloating(x: AnyVal): Array[U] = Array(numOps.fromFloating(x), numOps.fromFloating(x))
    }
  }  // implicit def ComplexOps
}  // ImaginaryOps object
object TestNumberOps {
  def cxStr(a: Any) = a match { case ad: Array[Double] => s"${ad(0)} + ${ad(1)}i" }

  def cxUnary[T: NumberOps](v: T)(unaryOp: T => T): T = {
    val ops = implicitly[NumberOps[T]]
    unaryOp(v)
  }

  def main(args: Array[String]) {
    println("TestNo4")
    import ImaginaryOps._
    val complexDoubleOps = implicitly[NumberOps[Array[Double]]]
    val complex1 = Array(1.0, 1.0)
    val neg1 = cxUnary(complex1)(complexDoubleOps.neg _)
    val abs1 = cxUnary(neg1)(complexDoubleOps.abs _)
    println(s"adz1 = ${cxStr(complex1)}, neg1 = ${cxStr(neg1)}, abs1 = ${cxStr(abs1)}, ")
  }
}  // TestNumberOps
Now this code compiles, but at runtime I get a class cast exception:
Exception in thread "main" java.lang.ClassCastException: java.lang.Double cannot be cast to scala.runtime.Nothing$
at ImaginaryOps$$anon$1$$anonfun$1.apply(Experiment4.scala:68)
at ImaginaryOps$.ImaginaryOps$$eval$1(Experiment4.scala:60)
at ImaginaryOps$$anon$1.neg(Experiment4.scala:68)
at TestNumberOps$$anonfun$3.apply(Experiment4.scala:97)
at TestNumberOps$$anonfun$3.apply(Experiment4.scala:97)
at TestNumberOps$.cxUnary(Experiment4.scala:89)
at TestNumberOps$.main(Experiment4.scala:97)
at TestNumberOps.main(Experiment4.scala)
I understand why this exception occurs. It's because the compiler couldn't resolve the type V of def f[V], so when it gets passed to method eval as (f _), its generic type V has been changed to scala.runtime.Nothing.
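A minimal, standalone sketch of that behaviour (hypothetical names, unrelated to the code above): when a polymorphic method is eta-expanded with nothing to pin its type parameter, the compiler falls back to Nothing.
def identityPoly[V](x: V): V = x
val g = identityPoly _   // g: Nothing => Nothing -- V was inferred as Nothing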
Having struggled without success and after searching futilely online, I'm hoping to find a useful suggestion here. Probably I'm making this harder than it is, but with Scala's strong type system there ought to be a solution. The problem is how to tell the compiler to use this type in evaluating this function.
What you want to do is to use a derived type class for your complex number.
Consider the following simplified scenario,
trait Addable[A] {
  def apply(a: A, b: A): A
}

implicit val intAddable: Addable[Int] = new Addable[Int] {
  def apply(a: Int, b: Int): Int = a + b
}

implicit val floatAddable: Addable[Float] = new Addable[Float] {
  def apply(a: Float, b: Float): Float = a + b
}

implicit final class AddOps[A](a: A) {
  def add(b: A)(implicit addable: Addable[A]): A = addable(a, b)
}
which basically allows us to call 1.add(2), letting the Scala compiler infer that there is an Addable for Ints.
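For instance, with those definitions in scope:
1.add(2)         // 3, resolved via intAddable
1.5f.add(2.5f)   // 4.0, resolved via floatAddable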
However, what about your complex type? Since we want to essentially say there exists an Addable for any complex type which is composed of 2 values of a type that itself follows the Addable law, we define it like this:
implicit def complexAddable[A](implicit addable: Addable[A]): Addable[Array[A]] = {
  new Addable[Array[A]] {
    def apply(a: Array[A], b: Array[A]): Array[A] = {
      Array(a(0).add(b(0)), a(1).add(b(1)))
    }
  }
}
which works because there is an Addable[A] in scope. Note that, of course, the implicit cannot be created if an Addable for A doesn't exist, and hence you have lovely compile-time safety.
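For example, treating Array(re, im) as a complex number, the derived instance is summoned automatically:
Array(1.0f, 2.0f).add(Array(3.0f, 4.0f))   // Array(4.0, 6.0), via complexAddable(floatAddable)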
You can find usages of this pattern in excellent functional libraries such as scalaz, cats and scodec; it is known from Haskell as the type class pattern.
Is there a way to use shorter syntax when using context-bound type parameters? At the moment I have something like this
case class Vector2D[a : Numeric](x: a, y: a) {
  val numA = implicitly[Numeric[a]]
  def length2 = numA.plus(numA.times(x, x), numA.times(y, y))
}
and it makes more complex formulae unreadable.
Try this REPL session:
scala> case class Vector2D[T : Numeric](x: T, y: T) {
     |   val numeric = implicitly[Numeric[T]]
     |   import numeric._
     |   def length2 = (x*x)+(y*y)
     | }
defined class Vector2D
scala> Vector2D(3,4).length2
res0: Int = 25
This is because Numeric contains an implicit conversion called mkNumericOps which you can import as shown above. If it didn't come out of the box, the way you could roll this yourself would be something like:
scala> implicit class NumericOps[T](val x: T) extends AnyVal { def +(y: T)(implicit n: Numeric[T]): T = n.plus(x, y)
| def *(y: T)(implicit n: Numeric[T]): T = n.times(x, y)
| }
defined class NumericOps
scala> case class Vector2D[a : Numeric](x: a, y: a) { def length2 = (x*x)+(y*y) }
defined class Vector2D
scala> Vector2D(3,4).length2
res0: Int = 25
If you make NumericOps not a value class (don't extend AnyVal) then the implicit Numeric can go on the constructor instead of each method, which might be better, or not really matter.
Anyway there's no need to write your own since Numeric already has mkNumericOps.
These "ops" classes are called the "enrich my library" pattern.
Numeric.Ops is in the standard library, and the implicit being imported to auto-create it is mkNumericOps, defined on Numeric itself.
Just
import Numeric.Implicits._
and then the operations are available for every type for which an implicit Numeric can be found.
(Importing just the NumericOps conversion of one Numeric instance, as suggested by @Havoc P, gives you finer control over which types the operations are available for, but most of the time Numeric.Implicits should be fine.)
On the more general question of whether there is a shorter syntax when using context-bound type parameters: in general, there is not. It is up to the type class to provide some sugar to make it easy to use, as Numeric does here.
For instance, it is more or less customary to have an apply method in the companion object which makes getting the instance a little easier than with implicitly
object Ordering {
  def apply[T](implicit ord: Ordering[T]): Ordering[T] = implicitly[Ordering[T]]
}
so that you can get the implicit just with e.g. Ordering[Int], rather than implicitly[Ordering[Int]].
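For instance, with that apply in place, summoning and using an instance looks like this:
Ordering[Int].compare(1, 2)          // -1
List(3, 1, 2).sorted(Ordering[Int])  // List(1, 2, 3)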
Is it possible to use the magnet pattern with varargs:
object Values {
  implicit def fromInt (x : Int ) = Values()
  implicit def fromInts(xs: Int*) = Values()
}

case class Values()

object Foo {
  def bar(values: Values) {}
}
Foo.bar(0)
Foo.bar(1,2,3) // ! "error: too many arguments for method bar: (values: Values)Unit"
?
As already mentioned by gourlaysama, turning the varargs into a single Product will do the trick, syntactically speaking:
implicit def fromInts(t: Product) = Values()
This allows the following call to compile fine:
Foo.bar(1,2,3)
This is because the compiler automatically lifts the 3 arguments into a Tuple3[Int, Int, Int]. This will work with any number of arguments up to an arity of 22. Now the problem is how to make it type safe. As it is, Product.productIterator is the only way to get back our argument list inside the method body, but it returns an Iterator[Any]. We don't have any guarantee that the method will be called only with Ints. This should come as no surprise, as we never even mentioned in the signature that we wanted only Ints.
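For example, with the unconstrained Product conversion nothing stops a caller from writing something like:
Foo.bar("a", 2, true)   // compiles: ("a", 2, true) is auto-tupled into a Product, yet these are clearly not Ints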
OK, so the key difference between an unconstrained Product and a vararg list is that in the latter case each element is of the same type. We can encode this using a type class:
abstract sealed class IsVarArgsOf[P, E]

object IsVarArgsOf {
  implicit def Tuple2[E]: IsVarArgsOf[(E, E), E] = null
  implicit def Tuple3[E]: IsVarArgsOf[(E, E, E), E] = null
  implicit def Tuple4[E]: IsVarArgsOf[(E, E, E, E), E] = null
  implicit def Tuple5[E]: IsVarArgsOf[(E, E, E, E, E), E] = null
  implicit def Tuple6[E]: IsVarArgsOf[(E, E, E, E, E, E), E] = null
  // ... and so on... yes this is verbose, but can be done once for all
}
implicit class RichProduct[P]( val product: P ) {
  def args[E]( implicit evidence: P IsVarArgsOf E ): Iterator[E] = {
    // NOTE: by construction, those casts are safe and cannot fail
    product.asInstanceOf[Product].productIterator.asInstanceOf[Iterator[E]]
  }
}
case class Values( xs: Seq[Int] )

object Values {
  implicit def fromInt( x : Int ) = Values( Seq( x ) )
  implicit def fromInts[P]( xs: P )( implicit evidence: P IsVarArgsOf Int ) = Values( xs.args.toSeq )
}

object Foo {
  def bar(values: Values) {}
}
Foo.bar(0)
Foo.bar(1,2,3)
We have changed the method signature from
implicit def fromInts(t: Product)
to:
implicit def fromInts[P]( xs: P )( implicit evidence: P IsVarArgsOf Int )
Inside the method body, we use the new method args to get our arg list back.
Note that if we attempt to call bar with a tuple that is not a tuple of Ints, we will get a compile error, which gets us our type safety back.
UPDATE: As pointed out by 0__, my solution above does not play well with numeric widening. In other words, the following does not compile, although it would work if bar simply took 3 Int parameters:
Foo.bar(1:Short,2:Short,3:Short)
Foo.bar(1:Short,2:Byte,3:Int)
To fix this, all we need to do is to modify IsVarArgsOf so that all the implicits allow the tuple elements to be convertible to a common type, rather than requiring them all to be of the same type:
abstract sealed class IsVarArgsOf[P, E]

object IsVarArgsOf {
  implicit def Tuple2[E, X1 <% E, X2 <% E]: IsVarArgsOf[(X1, X2), E] = null
  implicit def Tuple3[E, X1 <% E, X2 <% E, X3 <% E]: IsVarArgsOf[(X1, X2, X3), E] = null
  implicit def Tuple4[E, X1 <% E, X2 <% E, X3 <% E, X4 <% E]: IsVarArgsOf[(X1, X2, X3, X4), E] = null
  // ... and so on ...
}
OK, actually I lied, we're not done yet. Because we are now accepting different types of elements (so long as they are convertible to a common type), we cannot just cast them to the expected type (this would lead to a runtime cast error); instead we have to apply the implicit conversions. We can rework it like this:
abstract sealed class IsVarArgsOf[P, E] {
  def args( p: P ): Iterator[E]
}

object IsVarArgsOf {
  implicit def Tuple2[E, X1 <% E, X2 <% E] = new IsVarArgsOf[(X1, X2), E] {
    def args( p: (X1, X2) ) = Iterator[E](p._1, p._2)
  }
  implicit def Tuple3[E, X1 <% E, X2 <% E, X3 <% E] = new IsVarArgsOf[(X1, X2, X3), E] {
    def args( p: (X1, X2, X3) ) = Iterator[E](p._1, p._2, p._3)
  }
  implicit def Tuple4[E, X1 <% E, X2 <% E, X3 <% E, X4 <% E] = new IsVarArgsOf[(X1, X2, X3, X4), E] {
    def args( p: (X1, X2, X3, X4) ) = Iterator[E](p._1, p._2, p._3, p._4)
  }
  // ... and so on ...
}

implicit class RichProduct[P]( val product: P ) {
  def args[E]( implicit isVarArg: P IsVarArgsOf E ): Iterator[E] = {
    isVarArg.args( product )
  }
}
This fixes the problem with numeric widening, and we still get a compile error when mixing unrelated types:
scala> Foo.bar(1,2,"three")
<console>:22: error: too many arguments for method bar: (values: Values)Unit
Foo.bar(1,2,"three")
^
Edit:
The varargs implicit will never be picked because repeated parameters are not really first-class citizens when it comes to types... they are only there when checking for applicability of a method to arguments.
So basically, when you call Foo.bar(1,2,3) it checks if bar is defined with variable arguments, and since it isn't, it isn't applicable to the arguments. And it can't go any further:
If you had called it with a single argument, it would have looked for an implicit conversion from the argument type to the expected type, but since you called with several arguments, there is an arity problem, there is no way it can convert multiple arguments to a single one with an implicit type conversion.
But: there is a solution using auto-tupling.
Foo.bar(1,2,3)
can be understood by the compiler as
Foo.bar((1,2,3))
which means an implicit like this one would work:
implicit def fromInts[T <: Product](t: T) = Values()
// or simply
implicit def fromInts(t: Product) = Values()
The problem with this is that the only way to get the arguments is via t.productIterator, which returns a Iterator[Any] and needs to be cast.
So you would lose type safety; this would compile (and fail at runtime when using it):
Foo.bar("1", "2", "3")
We can make this fully type-safe using Scala 2.10's implicit macros. The macro would just check that the argument is indeed a TupleX[Int, Int, ...] and only make itself available as an implicit conversion if it passes that check.
To make the example more useful, I changed Values to keep the Int arguments around:
case class Values(xs: Seq[Int])

object Values {
  implicit def fromInt (x : Int ) = Values(Seq(x))
  implicit def fromInts[T <: Product](t: T): Values = macro Macro.fromInts_impl[T]
}
With the macro implementation:
import scala.language.experimental.macros
import scala.reflect.macros.Context

object Macro {
  def fromInts_impl[T <: Product : c.WeakTypeTag](c: Context)(t: c.Expr[T]) = {
    import c.universe._
    val tpe = weakTypeOf[T]
    // abort if not a tuple
    if (!tpe.typeSymbol.fullName.startsWith("scala.Tuple"))
      c.abort(c.enclosingPosition, "Not a tuple!")
    // extract type parameters
    val TypeRef(_, _, tps) = tpe
    // abort if not a tuple of Ints
    if (tps.exists(t => !(t =:= typeOf[Int])))
      c.abort(c.enclosingPosition, "Only accept tuples of Int!")
    // now, let's convert that tuple to a List[Any] and add a cast, with splice
    val param = reify(t.splice.productIterator.toList.asInstanceOf[List[Int]])
    // and return Values(param)
    c.Expr(Apply(Select(Ident(newTermName("Values")), newTermName("apply")),
      List(param.tree)))
  }
}
And finally, defining Foo like this:
object Foo {
  def bar(values: Values) { println(values) }
}
You get type-safe invocation with syntax exactly like repeated parameters:
scala> Foo.bar(1,2,3)
Values(List(1, 2, 3))
scala> Foo.bar("1","2","3")
<console>:13: error: too many arguments for method bar: (values: Values)Unit
Foo.bar("1","2","3")
^
scala> Foo.bar(1)
Values(List(1))
The spec only specifies the type of repeated parameters (varargs) from inside of a function:
The type of such a repeated parameter inside the method is then the sequence type scala.Seq[T].
It does not cover the type anywhere else.
So I assume that the compiler internally - in a certain phase - cannot match the types.
From this observation (this does not compile => "double definition"):
object Values {
  implicit def fromInt(x: Int) = Values()
  implicit def fromInts(xs: Int*) = Values()
  implicit def fromInts(xs: Seq[Int]) = Values()
}
it seems to be Seq[]. So the next try is to make it different:
object Values {
  implicit def fromInt(x: Int) = Values()
  implicit def fromInts(xs: Int*) = Values()
  implicit def fromInts(xs: Seq[Int])(implicit d: DummyImplicit) = Values()
}
this compiles, but this does not solve the real problem.
The only workaround I found is to convert the varargs into a sequence explicitly:
def varargs(xs: Int*) = xs // return type is Seq[Int]
Foo.bar(varargs(1, 2, 3))
but this of course is not what we want.
Possibly related: An implicit conversion function has only one parameter. But from a logical (or the compiler's temporary) point of view, in case of varargs, it could be multiple as well.
Here is a solution which does use overloading (which I would prefer not to)
object Values {
  implicit def fromInt (x : Int ) = Values()
  implicit def fromInts(xs: Seq[Int]) = Values()
}

case class Values()

object Foo {
  def bar(values: Values) { println("ok") }
  def bar[A](values: A*)(implicit asSeq: Seq[A] => Values) { bar(values: Values) }
}
Foo.bar(0)
Foo.bar(1,2,3)
I'd like to implement a class C to store values of various numeric types, as well as Boolean. Furthermore, I'd like to be able to operate on instances of this class, between types, converting where necessary Int --> Double and Boolean --> Int, i.e., to be able to add Boolean + Boolean, Int + Boolean, Boolean + Int, Int + Double, Double + Double, etc., returning the smallest possible type (Int or Double) whenever possible.
So far I came up with this:
abstract class SemiGroup[A] { def add(x: A, y: A): A }

class C[A](val n: A)(implicit val s: SemiGroup[A]) {
  def +[T <% A](that: C[T]) = s.add(this.n, that.n)
}

object Test extends Application {
  implicit object IntSemiGroup extends SemiGroup[Int] {
    def add(x: Int, y: Int): Int = x + y
  }
  implicit object DoubleSemiGroup extends SemiGroup[Double] {
    def add(x: Double, y: Double): Double = x + y
  }
  implicit object BooleanSemiGroup extends SemiGroup[Boolean] {
    def add(x: Boolean, y: Boolean): Boolean = true
  }

  implicit def bool2int(b: Boolean): Int = if (b) 1 else 0

  val n = new C[Int](10)
  val d = new C[Double](10.5)
  val b = new C[Boolean](true)

  println(d + n) // [1]
  println(n + n) // [2]
  println(n + b) // [3]
  // println(n + d) [4] XXX - no implicit conversion of Double to Int exists
  // println(b + n) [5] XXX - no implicit conversion of Int to Boolean exists
}
This works for some cases (1, 2, 3) but doesn't for (4, 5). The reason is that there is implicit widening of type from lower to higher, but not the other way. In a way, the method
def +[T <% A](that:C[T]) = s.add(this.n, that.n)
somehow needs to have a partner method that would look something like:
def +[T, A <% T](that:C[T]):T = that.s.add(this.n, that.n)
but that does not compile, for two reasons: firstly, the compiler cannot convert this.n to type T (even though we specify the view bound A <% T); and secondly, even if it could convert this.n, after type erasure the two + methods become ambiguous.
Sorry this is so long. Any help would be much appreciated! Otherwise it seems I have to write out all the operations between all the types explicitly. And it would get hairy if I had to add extra types (Complex is next on the menu...).
Maybe someone has another way to achieve all this altogether? Feels like there's something simple I'm overlooking.
Thanks in advance!
Okay then, Daniel!
I've restricted the solution to ignore Boolean, and only work with AnyVals that have a weak Least Upper Bound that has an instance of Numeric. These restrictions are arbitrary; you could remove them and encode your own weak conformance relationship between types -- the implementations of aToC and bToC could perform some conversion.
It's interesting to consider how implicit parameters can simulate inheritance (passing implicit parameters of type Derived => Base) or weak conformance. They are really powerful, especially when the type inferencer helps you out.
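A tiny illustrative sketch of that idea (the names here are made up): an implicit A => Base parameter behaves much like an upper bound A <: Base, except that the "conversion" can do real work.
class Base { def describe = "a base value" }
def describeAll[A](xs: List[A])(implicit view: A => Base): List[String] =
  xs.map(a => view(a).describe)   // works for any A that can be implicitly viewed as Base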
First, we need a type class to represent the Weak Least Upper Bound of all pairs of types A and B that we are interested in.
sealed trait WeakConformance[A <: AnyVal, B <: AnyVal, C] {
  implicit def aToC(a: A): C
  implicit def bToC(b: B): C
}

object WeakConformance {
  implicit def SameSame[T <: AnyVal]: WeakConformance[T, T, T] = new WeakConformance[T, T, T] {
    implicit def aToC(a: T): T = a
    implicit def bToC(b: T): T = b
  }

  implicit def IntDouble: WeakConformance[Int, Double, Double] = new WeakConformance[Int, Double, Double] {
    implicit def aToC(a: Int) = a
    implicit def bToC(b: Double) = b
  }

  implicit def DoubleInt: WeakConformance[Double, Int, Double] = new WeakConformance[Double, Int, Double] {
    implicit def aToC(a: Double) = a
    implicit def bToC(b: Int) = b
  }

  // More instances go here!
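  // A sketch, not part of the original answer: the question's Boolean case could be encoded
  // by picking Int as the weak LUB of Boolean and Int (a symmetric IntBoolean instance would
  // cover the other argument order).
  implicit def BooleanInt: WeakConformance[Boolean, Int, Int] = new WeakConformance[Boolean, Int, Int] {
    implicit def aToC(a: Boolean) = if (a) 1 else 0
    implicit def bToC(b: Int) = b
  }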
  def unify[A <: AnyVal, B <: AnyVal, C](a: A, b: B)(implicit ev: WeakConformance[A, B, C]): (C, C) = {
    import ev._
    (a: C, b: C)
  }
}
The method unify returns type C, which is figured out by the type inferencer based on availability of implicit values to provide as the implicit argument ev.
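For instance (the result types in the comments are what the inferencer picks; the values are just illustrative):
WeakConformance.unify(1, 2.5)   // (1.0, 2.5): (Double, Double), via the IntDouble instance
WeakConformance.unify(3, 4)     // (3, 4): (Int, Int), via SameSame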
We can plug this into your wrapper class C as follows, also requiring a Numeric[WeakLub] so we can add the values.
case class C[A <: AnyVal](val value: A) {
  import WeakConformance.unify

  def +[B <: AnyVal, WeakLub <: AnyVal](that: C[B])(implicit wc: WeakConformance[A, B, WeakLub], num: Numeric[WeakLub]): C[WeakLub] = {
    val w = unify(value, that.value) match { case (x, y) => num.plus(x, y) }
    new C[WeakLub](w)
  }
}
And finally, putting it all together:
object Test extends Application {
  val n = new C[Int](10)
  val d = new C[Double](10.5)

  // The type ascriptions aren't necessary, they are just here to
  // prove the static type is the Weak LUB of the two sides.
  println(d + n: C[Double]) // C(20.5)
  println(n + n: C[Int])    // C(20)
  println(n + d: C[Double]) // C(20.5)
}
Test
There's a way to do that, but I'll leave it to retronym to explain it, since he wrote this solution. :-)