Partially applied generic function "cannot be cast to Nothing" - scala

The title describes a specific problem I encountered when trying to solve a more general problem: how to separate a type conversion concern from a calculation concern. If I can solve that larger problem by another means than partially applied functions, great!
I'm using a type class, NumberOps, to represent operations on numbers. This code is pared down, but it still exhibits the problem and expresses my intent. The first part simply defines the type class and a couple of implementations.
trait NumberOps[T] { // the type class (simplified for debugging)
  def neg(x: T): T // negate x
  def abs(x: T): T // absolute value of x
  // ... over 50 more operations
  def toFloating(x: T): AnyVal // convert from native to Float or Double, preserving precision
  def fromFloating(f: AnyVal): T // convert from Float or Double to native
  // ... also to/from Integral and to/from Big
}
object NumberOps { // Implements NumberOps for each type
  import language.implicitConversions

  implicit object FloatOps extends NumberOps[Float] {
    def neg(x: Float): Float = -x
    def abs(x: Float): Float = x.abs
    def toFloating(f: Float): Float = f
    def fromFloating(x: AnyVal): Float = {
      x match {
        case f: Float  => f
        case d: Double => d.toFloat
      }
    }
  }

  implicit object DoubleOps extends NumberOps[Double] {
    def neg(x: Double): Double = -x
    def abs(x: Double): Double = x.abs
    def toFloating(d: Double): Double = d
    def fromFloating(x: AnyVal): Double = {
      x match {
        case f: Float  => f.toDouble
        case d: Double => d
      }
    }
  }

  // ... other implicits defined for all primitive types, plus BigInt, BigDec
} // NumberOps object
All well and good. But now I want to implement NumberOps for complex numbers. A complex number will be represented as a 2-element array of any numeric type already defined (i.e. all primitive types plus BigInt and BigDecimal).
The intent with this code is to avoid combinatorial explosion of number types with numeric operations. I had hoped to achieve this by separating Concern A (type conversion) from Concern B (generic calculation).
You'll notice that "Concern A" is embodied in def eval, while "Concern B" is defined as a generic method, f, and then passed as a partially applied function (f _) to method eval. This code depends on the earlier code.
object ImaginaryOps { // Implements NumberOps for complex numbers, as 2-element arrays of any numeric type
  import language.implicitConversions
  import reflect.ClassTag
  import NumberOps._

  implicit def ComplexOps[U: NumberOps : ClassTag]: NumberOps[Array[U]] = { // NumberOps[T] :: NumberOps[Array[U]]
    val numOps = implicitly[NumberOps[U]]

    type OpF2[V] = (V, V) => NumberOps[V] => (V, V) // equivalent to curried function: f[V](V,V)(NumberOps[V]):(V,V)

    // Concern A: widen x,y from native type U to type V, evaluate function f, then convert the result back to native type U
    def eval[V](x: U, y: U)(f: OpF2[V]): (U, U) = {
      (numOps.toFloating(x), numOps.toFloating(y), f) match {
        case (xf: Float, yf: Float, _: OpF2[Float] @unchecked) => // _: OpF2[...] @unchecked permits compiler type inference on f
          val (xv, yv) = f(xf.asInstanceOf[V], yf.asInstanceOf[V])(FloatOps.asInstanceOf[NumberOps[V]])
          (numOps.fromFloating(xv.asInstanceOf[Float]), numOps.fromFloating(yv.asInstanceOf[Float]))
        case (xd: Double, yd: Double, _: OpF2[Double] @unchecked) => // _: OpF2[...] @unchecked permits compiler type inference on f
          val (xv, yv) = f(xd.asInstanceOf[V], yd.asInstanceOf[V])(DoubleOps.asInstanceOf[NumberOps[V]])
          (numOps.fromFloating(xv.asInstanceOf[Double]), numOps.fromFloating(yv.asInstanceOf[Double]))
      }
    } // eval

    new NumberOps[Array[U]] { // implement NumberOps for complex numbers of any type U
      def neg(a: Array[U]): Array[U] = a match { case Array(ax, ay) =>
        def f[V](xv: V, yv: V)(no: NumberOps[V]): (V, V) = (no.neg(xv), no.neg(yv)) // Concern B: the complex calculation
        val (xu, yu) = eval(a(0), a(1))(f _) // combine Concern A (widening conversion) with Concern B (calculation)
        a(0) = xu; a(1) = yu; a
      }
      def abs(a: Array[U]): Array[U] = a match { case Array(ax, ay) =>
        def f[V](xv: V, yv: V)(no: NumberOps[V]): (V, V) = (no.abs(xv), no.abs(yv)) // Concern B: the complex calculation
        val (xu, yu) = eval(a(0), a(1))(f _) // combine Concern A (widening conversion) with Concern B (calculation)
        a(0) = xu; a(1) = yu; a
      }
      def toFloating(a: Array[U]): AnyVal = numOps.toFloating(a(0))
      def fromFloating(x: AnyVal): Array[U] = Array(numOps.fromFloating(x), numOps.fromFloating(x))
    }
  } // implicit def ComplexOps
} // ImaginaryOps object
object TestNumberOps {
  def cxStr(a: Any) = { a match { case ad: Array[Double] => s"${ad(0)} + ${ad(1)}i" } }

  def cxUnary[T: NumberOps](v: T)(unaryOp: T => T): T = {
    val ops = implicitly[NumberOps[T]]
    unaryOp(v)
  }

  def main(args: Array[String]) {
    println("TestNo4")
    import ImaginaryOps._
    val complexDoubleOps = implicitly[NumberOps[Array[Double]]]
    val complex1 = Array(1.0, 1.0)
    val neg1 = cxUnary(complex1)(complexDoubleOps.neg _)
    val abs1 = cxUnary(neg1)(complexDoubleOps.abs _)
    println(s"adz1 = ${cxStr(complex1)}, neg1 = ${cxStr(neg1)}, abs1 = ${cxStr(abs1)}, ")
  }
} // TestNumberOps
Now this code compiles, but at runtime I get a class cast exception:
Exception in thread "main" java.lang.ClassCastException: java.lang.Double cannot be cast to scala.runtime.Nothing$
at ImaginaryOps$$anon$1$$anonfun$1.apply(Experiment4.scala:68)
at ImaginaryOps$.ImaginaryOps$$eval$1(Experiment4.scala:60)
at ImaginaryOps$$anon$1.neg(Experiment4.scala:68)
at TestNumberOps$$anonfun$3.apply(Experiment4.scala:97)
at TestNumberOps$$anonfun$3.apply(Experiment4.scala:97)
at TestNumberOps$.cxUnary(Experiment4.scala:89)
at TestNumberOps$.main(Experiment4.scala:97)
at TestNumberOps.main(Experiment4.scala)
I understand why this exception occurs. It's because the compiler couldn't resolve the type V of def f[V], so when it gets passed to method eval as (f _), its generic type V has been changed to scala.runtime.Nothing.
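A minimal illustration of that inference, separate from the code above:
def g[V](x: V): V = x
val h = g _ // h: Nothing => Nothing -- V has already been fixed to Nothing at this point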
Having struggled without success and after searching futilely online, I'm hoping to find a useful suggestion here. Probably I'm making this harder than it is, but with Scala's strong type system there ought to be a solution. The problem is how to tell the compiler to use this type in evaluating this function.

What you want to do is to use a derived type class for your complex number.
Consider the following simplified scenario,
trait Addable[A] {
  def apply(a: A, b: A): A
}

implicit val intAddable: Addable[Int] = new Addable[Int] {
  def apply(a: Int, b: Int): Int = a + b
}

implicit val floatAddable: Addable[Float] = new Addable[Float] {
  def apply(a: Float, b: Float): Float = a + b
}

implicit final class AddOps[A](a: A) {
  def add(b: A)(implicit addable: Addable[A]): A = addable(a, b)
}
This basically allows us to call 1.add(2), letting the Scala compiler infer that there is an Addable for Int.
But what about your complex type? Since we essentially want to say that an Addable exists for any complex type composed of two elements whose type itself has an Addable, we define it like this:
import scala.reflect.ClassTag

// the ClassTag context bound is needed so that Array(...) can be constructed for an arbitrary A
implicit def complexAddable[A: ClassTag](implicit addable: Addable[A]): Addable[Array[A]] = {
  new Addable[Array[A]] {
    def apply(a: Array[A], b: Array[A]): Array[A] = {
      Array(a(0).add(b(0)), a(1).add(b(1)))
    }
  }
}
which works because there is an Addable[A] in scope. Note that, of course, the implicit cannot be created if an Addable for A doesn't exist, and hence you get lovely compile-time safety.
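For illustration, a quick usage sketch (assuming the definitions above are in scope):
val x = Array(1, 2)
val y = Array(3, 4)
x.add(y) // Array(4, 6): complexAddable[Int] is derived from intAddable
// Array("a", "b").add(Array("c", "d")) // does not compile: no Addable[String] in scope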
You can find usages of this pattern in excellent functional libraries such as scalaz, cats and scodec; it is known from Haskell as the type class pattern.


How can I dynamically construct a Scala class without knowing types in advance?

I want to build a simple library in which a developer can define a Scala class that represents command line arguments (to keep it simple, just a single set of required arguments -- no flags or optional arguments). I'd like the library to parse the command line arguments and return an instance of the class. The user of the library would just do something like this:
case class FooArgs(fluxType: String, capacitorCount: Int)

def main(args: Array[String]) {
  val argsObject: FooArgs = ArgParser.parse(args).as[FooArgs]
  // do real stuff
}
The parser should throw runtime errors if the provided arguments do not match the expected types (e.g. if someone passes the string "bar" at a position where an Int is expected).
How can I dynamically build a FooArgs without knowing its shape in advance? Since FooArgs can have any arity or types, I don't know how to iterate over the command line arguments, cast or convert them to the expected types, and then use the result to construct a FooArgs. Basically, I want to do something along these lines:
// ** notional code - does not compile **
def parse[T](args: Seq[String], klass: Class[T]): T = {
  val expectedTypes = klass.getDeclaredFields.map(_.getGenericType)
  val typedArgs = args.zip(expectedTypes).map({
    case (arg, String) => arg
    case (arg, Int) => arg.toInt
    case (arg, unknownType) =>
      throw new RuntimeException(s"Unsupported type $unknownType")
  })
  (klass.getConstructor(typedArgs).newInstance _).tupled(typedArgs)
}
Any suggestions on how I can achieve something like this?
When you want to abstract over case class (or Tuple) shape, the standard approach is to get the HList representation of the case class with the help of shapeless library. HList keeps track in its type signature of the types of its elements and their amount. Then you can implement the algorithm you want recursively on the HList. Shapeless also provides a number of helpful transformations of HLists in shapeless.ops.hlist.
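For instance, a quick sketch of what that representation looks like (assuming shapeless 2.x and the FooArgs case class from the question):
import shapeless._

case class FooArgs(fluxType: String, capacitorCount: Int)

val gen = Generic[FooArgs]
gen.to(FooArgs("flux", 10))    // "flux" :: 10 :: HNil, with static type String :: Int :: HNil
gen.from("flux" :: 10 :: HNil) // FooArgs("flux", 10)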
For this problem, first we need to define an auxiliary typeclass to parse an argument of some type from String:
trait Read[T] {
  def apply(str: String): T
}

object Read {
  def make[T](f: String => T): Read[T] = new Read[T] {
    def apply(str: String) = f(str)
  }
  implicit val string: Read[String] = make(identity)
  implicit val int: Read[Int] = make(_.toInt)
}
You can define more instances of this typeclass, if you need to support other argument types than String or Int.
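For example, extra instances for Double and Boolean would follow the same pattern (hypothetical additions inside object Read):
implicit val double: Read[Double] = make(_.toDouble)
implicit val boolean: Read[Boolean] = make(_.toBoolean)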
Then we can define the actual typeclass that parses a sequence of arguments into some type:
// This is needed, because there seems to be a conflict between
// HList's :: and the standard Scala's ::
import shapeless.{:: => :::, _}

trait ParseArgs[T] {
  def apply(args: List[String]): T
}

object ParseArgs {
  // Base of the recursion on HList
  implicit val hnil: ParseArgs[HNil] = new ParseArgs[HNil] {
    def apply(args: List[String]) =
      if (args.nonEmpty) sys.error("too many args")
      else HNil
  }

  // A single recursion step on HList
  implicit def hlist[T, H <: HList](
    implicit read: Read[T], parseRest: ParseArgs[H]
  ): ParseArgs[T ::: H] = new ParseArgs[T ::: H] {
    def apply(args: List[String]) = args match {
      case first :: rest => read(first) :: parseRest(rest)
      case Nil => sys.error("too few args")
    }
  }

  // The implementation for any case class, based on its HList representation
  implicit def caseClass[C, H <: HList](
    implicit gen: Generic.Aux[C, H], parse: ParseArgs[H]
  ): ParseArgs[C] = new ParseArgs[C] {
    def apply(args: List[String]) = gen.from(parse(args))
  }
}
And lastly, we can define some API that uses this typeclass. For example:
case class ArgParser(args: List[String]) {
  def to[C](implicit parseArgs: ParseArgs[C]): C = parseArgs(args)
}

object ArgParser {
  def parse(args: Array[String]): ArgParser = ArgParser(args.toList)
}
And a simple test:
scala> ArgParser.parse(Array("flux", "10")).to[FooArgs]
res0: FooArgs = FooArgs(flux,10)
There is a great guide on using shapeless for solving similar problems, which you may find helpful: The Type Astronaut’s Guide to Shapeless

StringOrInt from Idris -> Scala?

Type Driven Development with Idris presents this program:
StringOrInt : Bool -> Type
StringOrInt x = case x of
                  True => Int
                  False => String
How can such a method be written in Scala?
Alexey's answer is good, but I think we can get to a more generalizable Scala rendering of this function if we embed it in a slightly larger context.
Edwin shows a use of StringOrInt in the function valToString,
valToString : (x : Bool) -> StringOrInt x -> String
valToString x val = case x of
                      True => cast val
                      False => val
In words, valToString takes a Bool first argument which fixes the type of its second argument as either Int or String and renders the latter as a String as appropriate for its type.
We can translate this to Scala as follows,
sealed trait Bool
case object True extends Bool
case object False extends Bool

sealed trait StringOrInt[B, T] {
  def apply(t: T): StringOrIntValue[T]
}

object StringOrInt {
  implicit val trueInt: StringOrInt[True.type, Int] =
    new StringOrInt[True.type, Int] {
      def apply(t: Int) = I(t)
    }

  implicit val falseString: StringOrInt[False.type, String] =
    new StringOrInt[False.type, String] {
      def apply(t: String) = S(t)
    }
}

sealed trait StringOrIntValue[T]
case class S(s: String) extends StringOrIntValue[String]
case class I(i: Int) extends StringOrIntValue[Int]

def valToString[T](x: Bool)(v: T)(implicit si: StringOrInt[x.type, T]): String =
  si(v) match {
    case S(s) => s
    case I(i) => i.toString
  }
Here we are using a variety of Scala's limited dependently typed features to encode Idris's full-spectrum dependent types.
We use the singleton types True.type and False.type to cross from the value level to the type level.
We encode the function StringOrInt as a type class indexed by the singleton Bool types, each case of the Idris function being represented by a distinct implicit instance.
We write valToString as a Scala dependent method, allowing us to use the singleton type of the Bool argument x to select the implicit StringOrInt instance si, which in turn determines the type parameter T which fixes the type of the second argument v.
We encode the dependent pattern matching in the Idris valToString by using the selected StringOrInt instance to lift the v argument into a Scala GADT which allows the Scala pattern match to refine the type of v on the RHS of the cases.
Exercising this on the Scala REPL,
scala> valToString(True)(23)
res0: String = 23
scala> valToString(False)("foo")
res1: String = foo
Lots of hoops to jump through, and lots of accidental complexity; nevertheless, it can be done.
This would be one approach (but it's a lot more limited than in Idris):
trait Type {
  type T
}

def stringOrInt(x: Boolean) = // Scala infers Type { type T >: Int with String }
  if (x) new Type { type T = Int } else new Type { type T = String }
and then to use it
def f(t: Type): t.T = ...
If you are willing to limit yourself to literals, you could use singleton types of true and false using Shapeless. From examples:
import syntax.singleton._
val wTrue = Witness(true)
type True = wTrue.T
val wFalse = Witness(false)
type False = wFalse.T
trait Type1[A] { type T }
implicit val Type1True: Type1[True] = new Type1[True] { type T = Int }
implicit val Type1False: Type1[False] = new Type1[False] { type T = String }
See also Any reason why scala does not explicitly support dependent types?, http://www.infoq.com/presentations/scala-idris and http://wheaties.github.io/Presentations/Scala-Dep-Types/dependent-types.html.
I noticed that the getStringOrInt example has not been implemented by either of the two answers:
getStringOrInt : (x : Bool) -> StringOrInt x
getStringOrInt x = case x of
                     True => 10
                     False => "Hello"
I found Miles Sabin's explanation very useful, and this approach is based on his.
I found it more intuitive to split the GADT-like construction from the Scala apply() trickery and try to map my code to Idris/Haskell concepts. I hope others find this useful. I am using GADT explicitly in names to emphasize the GADT-ness.
The two ingredients of this code are the GADT concept and Scala's implicits.
Here is a slightly modified Miles Sabin's solution that implements both getStringOrInt and valToString.
sealed trait StringOrIntGADT[T]
case class S(s: String) extends StringOrIntGADT[String]
case class I(i: Int) extends StringOrIntGADT[Int]

/* this compiles with 2.12.6
   before I would have to use an ad-hoc polymorphic extract method */
def extractStringOrInt[T](t: StringOrIntGADT[T]): T =
  t match {
    case S(s) => s
    case I(i) => i
  }

/* apply trickery gives T -> StringOrIntGADT[T] conversion */
sealed trait EncodeInStringOrInt[T] {
  def apply(t: T): StringOrIntGADT[T]
}
object EncodeInStringOrInt {
  implicit val encodeString: EncodeInStringOrInt[String] = new EncodeInStringOrInt[String] {
    def apply(t: String) = S(t)
  }
  implicit val encodeInt: EncodeInStringOrInt[Int] = new EncodeInStringOrInt[Int] {
    def apply(t: Int) = I(t)
  }
}

/* Subtyping provides type level Boolean */
sealed trait Bool
case object True extends Bool
case object False extends Bool

/* Type level mapping between Bool and String/Int types.
   Somewhat mimicking the type family concept in type theory or Haskell */
sealed trait DecideStringOrIntGADT[B, T]
case object PickS extends DecideStringOrIntGADT[False.type, String]
case object PickI extends DecideStringOrIntGADT[True.type, Int]

object DecideStringOrIntGADT {
  implicit val trueInt: DecideStringOrIntGADT[True.type, Int] = PickI
  implicit val falseString: DecideStringOrIntGADT[False.type, String] = PickS
}
All this work allows me to implement decent versions of getStringOrInt and valToString
def pickStringOrInt[B, T](c: DecideStringOrIntGADT[B, T]): StringOrIntGADT[T] =
  c match {
    case PickS => S("Hello")
    case PickI => I(2)
  }

def getStringOrInt[T](b: Bool)(implicit ev: DecideStringOrIntGADT[b.type, T]): T =
  extractStringOrInt(pickStringOrInt(ev))

def valToString[T](b: Bool)(v: T)(implicit ev: EncodeInStringOrInt[T], de: DecideStringOrIntGADT[b.type, T]): String =
  ev(v) match {
    case S(s) => s
    case I(i) => i.toString
  }
All this (unfortunate) complexity appears to be needed; for example, this will not compile:
// def getStringOrInt2[T](b: Bool)(implicit ev: DecideStringOrIntGADT[b.type, T]): T =
//   ev match {
//     case PickS => "Hello"
//     case PickI => 2
//   }
I have a pet project in which I compared all code in the Idris book to Haskell.
https://github.com/rpeszek/IdrisTddNotes/wiki
(I am starting work on a Scala version of this comparison.)
With type-level Boolean (which is effectively what we have here) StringOrInt examples become super simple if we have type families (partial functions between types). See bottom of
https://github.com/rpeszek/IdrisTddNotes/wiki/Part1_Sec1_4_5
This makes the Haskell/Idris code so much simpler and easier to read and understand.
Notice that valToString matches on StringOrIntGADT[T]/StringOrIntValue[T] constructors and not directly on Bool. That is one example where Idris and Haskell shine.

unclear why my in-scope implicit conversions are not accepted as 'implicit evidence'

I've been experimenting with implicit conversions, and I have a decent understanding of the 'enrich-my-libray' pattern that uses these. I tried to combine my understanding of basic implicits with the use of implicit evidence... But I'm misunderstanding something crucial, as shown by the method below:
import scala.language.implicitConversions

object Moo extends App {
  case class FooInt(i: Int)

  implicit def cvtInt(i: Int): FooInt = FooInt(i)
  implicit def cvtFoo(f: FooInt): Int = f.i

  class Pair[T, S](var first: T, var second: S) {
    def swap(implicit ev: T =:= S, ev2: S =:= T) {
      val temp = first
      first = second
      second = temp
    }
    def dump() = {
      println("first is " + first)
      println("second is " + second)
    }
  }

  val x = new Pair(FooInt(200), 100)
  x.dump
  x.swap
  x.dump
}
When I run the above method I get this error:
Error:(31, 5) Cannot prove that nodescala.Moo.FooInt =:= Int.
x.swap
^
I am puzzled because I would have thought that my in-scope implicit conversions would be sufficient 'evidence' that Ints can be converted to FooInts and vice versa. Thanks in advance for setting me straight on this!
UPDATE:
After being unconfused by Peter's excellent answer, below, the light bulb went on for me about one good reason you would want to use implicit evidence in your API. I detail that in my own answer to this question (also below).
=:= checks if the two types are equal and FooInt and Int are definitely not equal, although there exist implicit conversion for values of these two types.
I would create a CanConvert type class which can convert an A into a B :
trait CanConvert[A, B] {
  def convert(a: A): B
}
We can create type class instances to transform Int into FooInt and vice versa:
implicit val Int2FooInt = new CanConvert[Int, FooInt] {
  def convert(i: Int) = FooInt(i)
}

implicit val FooInt2Int = new CanConvert[FooInt, Int] {
  def convert(f: FooInt) = f.i
}
Now we can use CanConvert in our Pair.swap function :
class Pair[A, B](var a: A, var b: B) {
  def swap(implicit a2b: CanConvert[A, B], b2a: CanConvert[B, A]) {
    val temp = a
    a = b2a.convert(b)
    b = a2b.convert(temp)
  }

  override def toString = s"($a, $b)"

  def dump(): Unit = println(this)
}
Which we can use as :
scala> val x = new Pair(FooInt(200), 100)
x: Pair[FooInt,Int] = (FooInt(200), 100)
scala> x.swap
scala> x.dump
(FooInt(100), 200)
A =:= B is not evidence that A can be converted to B. It is evidence that A can be cast to B. And you have no implicit evidence anywhere that Int can be cast to FooInt or vice versa (for good reason ;).
What you are looking for is:
def swap(implicit ev: T => S, ev2: S => T) {
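For completeness, here is a sketch of the question's Pair reworked with that signature (an illustrative version, assuming the cvtInt/cvtFoo conversions from the question are in scope):
class Pair[T, S](var first: T, var second: S) {
  // T => S and S => T are satisfied by any in-scope implicit conversions,
  // so swap becomes callable on a Pair[FooInt, Int] thanks to cvtInt/cvtFoo.
  def swap(implicit ev: T => S, ev2: S => T): Unit = {
    val temp = first
    first = ev2(second)
    second = ev(temp)
  }
  def dump() = println(s"first is $first, second is $second")
}

val x = new Pair(FooInt(200), 100)
x.swap // compiles: cvtFoo supplies FooInt => Int, cvtInt supplies Int => FooInt
x.dump // first is FooInt(100), second is 200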
After working through this exercise I think I have a better understanding of WHY you'd want to use implicit evidence in your API.
Implicit evidence can be very useful when:
- you have a type-parameterized class that provides various methods that act on the types given by the parameters, and
- one or more of those methods only make sense when additional constraints are placed on the parameterized types.
So, in the case of the simple API given in my original question:
class Pair[T, S](var first: T, var second: S) {
  def swap(implicit ev: T =:= S, ev2: S =:= T) = ???
  def dump() = ???
}
We have a type Pair, which keeps two things together, and we can always call dump() to examine the two things. We can also, under certain conditions, swap the positions of the first and second items in the pair. And those conditions are given by the implicit evidence constraints.
The Programming in Scala book gives a nice example of how this technique
is used in Scala collections, specifically on the toMap method of Traversables.
The book points out that Map's constructor
wants key-value pairs, i.e., two-tuples, as arguments. If we have a
sequence [Traversable] of pairs, wouldn’t it be nice to create a Map
out of them in one step? That’s what toMap does, but we have a
dilemma. We can’t allow the user to call toMap if the sequence is not
a sequence of pairs.
So there's an example of a type [Traversable] that has a method [toMap] that can't be used in all situations... It can only be used when the compiler can 'prove' (via implicit evidence) that the items in the Traversable are pairs.
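For reference, the signature in the standard library looks roughly like this (on TraversableOnce in Scala 2.x, where A is the collection's element type); the implicit evidence is what restricts toMap to collections of pairs:
def toMap[K, V](implicit ev: A <:< (K, V)): Map[K, V]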

Magnet pattern with repeated parameters (varargs)

Is it possible to use the magnet pattern with varargs:
object Values {
  implicit def fromInt(x: Int) = Values()
  implicit def fromInts(xs: Int*) = Values()
}
case class Values()

object Foo {
  def bar(values: Values) {}
}

Foo.bar(0)
Foo.bar(1,2,3) // ! "error: too many arguments for method bar: (values: Values)Unit"
?
As already mentioned by gourlaysama, turning the varargs into a single Product will do the trick, syntactically speaking:
implicit def fromInts(t: Product) = Values()
This allows the following call to compile fine:
Foo.bar(1,2,3)
This is because the compiler automatically lifts the 3 arguments into a Tuple3[Int, Int, Int]. This will work with any number of arguments up to an arity of 22. Now the problem is how to make it type safe. As it is, Product.productIterator is the only way to get back our argument list inside the method body, but it returns an Iterator[Any]. We don't have any guarantee that the method will be called only with Ints. This should come as no surprise, as we never even mentioned in the signature that we wanted only Ints.
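To make the problem concrete, a small illustrative snippet: with the unconstrained Product conversion, a call with non-Int arguments still compiles.
implicit def fromInts(t: Product) = Values() // accepts any tuple, not just tuples of Int

Foo.bar(1, 2, 3)       // compiles, as intended
Foo.bar("a", "b", "c") // also compiles: nothing in the signature says Int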
OK, so the key difference between an unconstrained Product and a vararg list is that in the latter case each element is of the same type. We can encode this using a type class:
abstract sealed class IsVarArgsOf[P, E]

object IsVarArgsOf {
  implicit def Tuple2[E]: IsVarArgsOf[(E, E), E] = null
  implicit def Tuple3[E]: IsVarArgsOf[(E, E, E), E] = null
  implicit def Tuple4[E]: IsVarArgsOf[(E, E, E, E), E] = null
  implicit def Tuple5[E]: IsVarArgsOf[(E, E, E, E, E), E] = null
  implicit def Tuple6[E]: IsVarArgsOf[(E, E, E, E, E, E), E] = null
  // ... and so on... yes this is verbose, but can be done once for all
}

implicit class RichProduct[P](val product: P) {
  def args[E](implicit evidence: P IsVarArgsOf E): Iterator[E] = {
    // NOTE: by construction, those casts are safe and cannot fail
    product.asInstanceOf[Product].productIterator.asInstanceOf[Iterator[E]]
  }
}
case class Values(xs: Seq[Int])

object Values {
  implicit def fromInt(x: Int) = Values(Seq(x))
  implicit def fromInts[P](xs: P)(implicit evidence: P IsVarArgsOf Int) = Values(xs.args.toSeq)
}

object Foo {
  def bar(values: Values) {}
}

Foo.bar(0)
Foo.bar(1,2,3)
We have changed the method signature from
implicit def fromInts(t: Product)
to:
implicit def fromInts[P]( xs: P )( implicit evidence: P IsVarArgsOf Int )
Inside the method body, we use the new method args to get our arg list back.
Note that if we attempt to call bar with a tuple that is not a tuple of Ints, we will get a compile error, which gets us our type safety back.
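For example:
Foo.bar(1, 2, 3)       // compiles: an IsVarArgsOf[(Int, Int, Int), Int] instance is found
Foo.bar(1, 2, "three") // does not compile: no IsVarArgsOf[(Int, Int, String), Int] instance exists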
UPDATE: As pointed out by 0__, my above solution does not play well with numeric widening. In other words, the following does not compile, although it would work if bar were simply taking 3 Int parameters:
Foo.bar(1:Short,2:Short,3:Short)
Foo.bar(1:Short,2:Byte,3:Int)
To fix this, all we need to do is modify IsVarArgsOf so that all the implicits allow the tuple elements to be convertible to a common type, rather than requiring them all to be of the same type:
abstract sealed class IsVarArgsOf[P, E]

object IsVarArgsOf {
  implicit def Tuple2[E, X1 <% E, X2 <% E]: IsVarArgsOf[(X1, X2), E] = null
  implicit def Tuple3[E, X1 <% E, X2 <% E, X3 <% E]: IsVarArgsOf[(X1, X2, X3), E] = null
  implicit def Tuple4[E, X1 <% E, X2 <% E, X3 <% E, X4 <% E]: IsVarArgsOf[(X1, X2, X3, X4), E] = null
  // ... and so on ...
}
OK, actually I lied, we're not done yet. Because we are now accepting different types of elements (so long as they are convertible to a common type), we cannot just cast them to the expected type (this would lead to a runtime cast error); instead we have to apply the implicit conversions. We can rework it like this:
abstract sealed class IsVarArgsOf[P, E] {
  def args(p: P): Iterator[E]
}

object IsVarArgsOf {
  implicit def Tuple2[E, X1 <% E, X2 <% E] = new IsVarArgsOf[(X1, X2), E] {
    def args(p: (X1, X2)) = Iterator[E](p._1, p._2)
  }
  implicit def Tuple3[E, X1 <% E, X2 <% E, X3 <% E] = new IsVarArgsOf[(X1, X2, X3), E] {
    def args(p: (X1, X2, X3)) = Iterator[E](p._1, p._2, p._3)
  }
  implicit def Tuple4[E, X1 <% E, X2 <% E, X3 <% E, X4 <% E] = new IsVarArgsOf[(X1, X2, X3, X4), E] {
    def args(p: (X1, X2, X3, X4)) = Iterator[E](p._1, p._2, p._3, p._4)
  }
  // ... and so on ...
}

implicit class RichProduct[P](val product: P) {
  def args[E](implicit isVarArg: P IsVarArgsOf E): Iterator[E] = {
    isVarArg.args(product)
  }
}
This fixes the problem with numeric widening, and we still get a compile error when mixing unrelated types:
scala> Foo.bar(1,2,"three")
<console>:22: error: too many arguments for method bar: (values: Values)Unit
Foo.bar(1,2,"three")
^
Edit:
The var-args implicit will never be picked because repeated parameters are not really first class citizens when it comes to types... they are only there when checking for applicability of a method to arguments.
So basically, when you call Foo.bar(1,2,3) it checks if bar is defined with variable arguments, and since it isn't, it isn't applicable to the arguments. And it can't go any further:
If you had called it with a single argument, it would have looked for an implicit conversion from the argument type to the expected type, but since you called with several arguments, there is an arity problem, there is no way it can convert multiple arguments to a single one with an implicit type conversion.
But: there is a solution using auto-tupling.
Foo.bar(1,2,3)
can be understood by the compiler as
Foo.bar((1,2,3))
which means an implicit like this one would work:
implicit def fromInts[T <: Product](t: T) = Values()
// or simply
implicit def fromInts(t: Product) = Values()
The problem with this is that the only way to get the arguments is via t.productIterator, which returns an Iterator[Any] and needs to be cast.
So you would lose type safety; this would compile (and fail at runtime when using it):
Foo.bar("1", "2", "3")
We can make this fully type-safe using Scala 2.10's implicit macros. The macro would just check that the argument is indeed a TupleX[Int, Int, ...] and only make itself available as an implicit conversion if it passes that check.
To make the example more useful, I changed Values to keep the Int arguments around:
case class Values(xs: Seq[Int])

object Values {
  implicit def fromInt(x: Int) = Values(Seq(x))
  implicit def fromInts[T <: Product](t: T): Values = macro Macro.fromInts_impl[T]
}
With the macro implementation:
import scala.language.experimental.macros
import scala.reflect.macros.Context

object Macro {
  def fromInts_impl[T <: Product: c.WeakTypeTag](c: Context)(t: c.Expr[T]) = {
    import c.universe._
    val tpe = weakTypeOf[T]
    // abort if not a tuple
    if (!tpe.typeSymbol.fullName.startsWith("scala.Tuple"))
      c.abort(c.enclosingPosition, "Not a tuple!")
    // extract type parameters
    val TypeRef(_, _, tps) = tpe
    // abort if not a tuple of ints
    if (tps.exists(t => !(t =:= typeOf[Int])))
      c.abort(c.enclosingPosition, "Only accept tuples of Int!")
    // now, let's convert that tuple to a List[Any] and add a cast, with splice
    val param = reify(t.splice.productIterator.toList.asInstanceOf[List[Int]])
    // and return Values(param)
    c.Expr(Apply(Select(Ident(newTermName("Values")), newTermName("apply")),
      List(param.tree)))
  }
}
And finally, defining Foo like this:
object Foo {
  def bar(values: Values) { println(values) }
}
You get type-safe invocation with syntax exactly like repeated parameters:
scala> Foo.bar(1,2,3)
Values(List(1, 2, 3))
scala> Foo.bar("1","2","3")
<console>:13: error: too many arguments for method bar: (values: Values)Unit
Foo.bar("1","2","3")
^
scala> Foo.bar(1)
Values(List(1))
The spec only specifies the type of repeated parameters (varargs) from inside of a function:
The type of such a repeated parameter inside the method is then the sequence type scala.Seq[T].
It does not cover the type anywhere else.
So I assume that the compiler internally - in a certain phase - cannot match the types.
From this observation (this does not compile => "double definition"):
object Values {
  implicit def fromInt(x: Int) = Values()
  implicit def fromInts(xs: Int*) = Values()
  implicit def fromInts(xs: Seq[Int]) = Values()
}
it seems to be Seq[]. So the next try is to make it different:
object Values {
  implicit def fromInt(x: Int) = Values()
  implicit def fromInts(xs: Int*) = Values()
  implicit def fromInts(xs: Seq[Int])(implicit d: DummyImplicit) = Values()
}
this compiles, but this does not solve the real problem.
The only workaround I found is to convert the varargs into a sequence explicitly:
def varargs(xs: Int*) = xs // return type is Seq[Int]
Foo.bar(varargs(1, 2, 3))
but this of course is not what we want.
Possibly related: An implicit conversion function has only one parameter. But from a logical (or the compiler's temporary) point of view, in case of varargs, it could be multiple as well.
As for the types, this might be of interest
Here is a solution which does use overloading (which I would prefer not to)
object Values {
  implicit def fromInt(x: Int) = Values()
  implicit def fromInts(xs: Seq[Int]) = Values()
}
case class Values()

object Foo {
  def bar(values: Values) { println("ok") }
  def bar[A](values: A*)(implicit asSeq: Seq[A] => Values) { bar(values: Values) }
}

Foo.bar(0)
Foo.bar(1,2,3)

How to set up implicit conversion to allow arithmetic between numeric types?

I'd like to implement a class C to store values of various numeric types, as well as boolean. Furthermore, I'd like to be able to operate on instances of this class, between types, converting where necessary Int --> Double and Boolean -> Int, i.e., to be able to add Boolean + Boolean, Int + Boolean, Boolean + Int, Int + Double, Double + Double etc., returning the smallest possible type (Int or Double) whenever possible.
So far I came up with this:
abstract class SemiGroup[A] { def add(x: A, y: A): A }

class C[A](val n: A)(implicit val s: SemiGroup[A]) {
  def +[T <% A](that: C[T]) = s.add(this.n, that.n)
}

object Test extends Application {
  implicit object IntSemiGroup extends SemiGroup[Int] {
    def add(x: Int, y: Int): Int = x + y
  }
  implicit object DoubleSemiGroup extends SemiGroup[Double] {
    def add(x: Double, y: Double): Double = x + y
  }
  implicit object BooleanSemiGroup extends SemiGroup[Boolean] {
    def add(x: Boolean, y: Boolean): Boolean = true
  }
  implicit def bool2int(b: Boolean): Int = if (b) 1 else 0

  val n = new C[Int](10)
  val d = new C[Double](10.5)
  val b = new C[Boolean](true)

  println(d + n) // [1]
  println(n + n) // [2]
  println(n + b) // [3]
  // println(n + d) [4] XXX - no implicit conversion of Double to Int exists
  // println(b + n) [5] XXX - no implicit conversion of Int to Boolean exists
}
This works for some cases (1, 2, 3) but doesn't for (4, 5). The reason is that there is implicit widening of type from lower to higher, but not the other way. In a way, the method
def +[T <% A](that:C[T]) = s.add(this.n, that.n)
somehow needs to have a partner method that would look something like:
def +[T, A <% T](that:C[T]):T = that.s.add(this.n, that.n)
but that does not compile for two reasons, firstly that the compiler cannot convert this.n to type T (even though we specify view bound A <% T), and, secondly, that even if it were able to convert this.n, after type erasure the two + methods become ambiguous.
Sorry this is so long. Any help would be much appreciated! Otherwise it seems I have to write out all the operations between all the types explicitly. And it would get hairy if I had to add extra types (Complex is next on the menu...).
Maybe someone has another way to achieve all this altogether? Feels like there's something simple I'm overlooking.
Thanks in advance!
Okay then, Daniel!
I've restricted the solution to ignore Boolean, and to only work with AnyVals that have a weak Least Upper Bound with an instance of Numeric. These restrictions are arbitrary; you could remove them and encode your own weak conformance relationship between types -- the implementations of aToC and bToC could perform some conversion.
It's interesting to consider how implicit parameters can simulate inheritance (passing implicit parameters of type Derived => Base) or weak conformance. They are really powerful, especially when the type inferencer helps you out.
First, we need a type class to represent the Weak Least Upper Bound of all pairs of types A and B that we are interested in.
sealed trait WeakConformance[A <: AnyVal, B <: AnyVal, C] {
  implicit def aToC(a: A): C
  implicit def bToC(b: B): C
}

object WeakConformance {
  implicit def SameSame[T <: AnyVal]: WeakConformance[T, T, T] = new WeakConformance[T, T, T] {
    implicit def aToC(a: T): T = a
    implicit def bToC(b: T): T = b
  }

  implicit def IntDouble: WeakConformance[Int, Double, Double] = new WeakConformance[Int, Double, Double] {
    implicit def aToC(a: Int) = a
    implicit def bToC(b: Double) = b
  }

  implicit def DoubleInt: WeakConformance[Double, Int, Double] = new WeakConformance[Double, Int, Double] {
    implicit def aToC(a: Double) = a
    implicit def bToC(b: Int) = b
  }

  // More instances go here!

  def unify[A <: AnyVal, B <: AnyVal, C](a: A, b: B)(implicit ev: WeakConformance[A, B, C]): (C, C) = {
    import ev._
    (a: C, b: C)
  }
}
The method unify returns type C, which is figured out by the type inferencer based on availability of implicit values to provide as the implicit argument ev.
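For instance, a quick sketch of using unify directly (assuming the definitions above are in scope):
WeakConformance.unify(1, 2.5) // (1.0, 2.5): C is inferred as Double via the IntDouble instance
WeakConformance.unify(3, 4)   // (3, 4): C is inferred as Int via the SameSame instance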
We can plug this into your wrapper class C as follows, also requiring a Numeric[WeakLub] so we can add the values.
case class C[A <: AnyVal](val value: A) {
  import WeakConformance.unify
  def +[B <: AnyVal, WeakLub <: AnyVal](that: C[B])(implicit wc: WeakConformance[A, B, WeakLub], num: Numeric[WeakLub]): C[WeakLub] = {
    val w = unify(value, that.value) match { case (x, y) => num.plus(x, y) }
    new C[WeakLub](w)
  }
}
And finally, putting it all together:
object Test extends Application {
  val n = new C[Int](10)
  val d = new C[Double](10.5)

  // The type ascriptions aren't necessary, they are just here to
  // prove the static type is the Weak LUB of the two sides.
  println(d + n: C[Double]) // C(20.5)
  println(n + n: C[Int])    // C(20)
  println(n + d: C[Double]) // C(20.5)
}
Test
There's a way to do that, but I'll leave it to retronym to explain it, since he wrote this solution. :-)