Type-Driven Development with Idris presents this program:
StringOrInt : Bool -> Type
StringOrInt x = case x of
True => Int
False => String
How can such a method be written in Scala?
Alexey's answer is good, but I think we can get to a more generalizable Scala rendering of this function if we embed it in a slightly larger context.
Edwin shows a use of StringOrInt in the function valToString,
valToString : (x : Bool) -> StringOrInt x -> String
valToString x val = case x of
True => cast val
False => val
In words, valToString takes a Bool as its first argument, which fixes the type of its second argument as either Int or String, and renders that second argument as a String as appropriate for its type.
We can translate this to Scala as follows,
sealed trait Bool
case object True extends Bool
case object False extends Bool
sealed trait StringOrInt[B, T] {
def apply(t: T): StringOrIntValue[T]
}
object StringOrInt {
implicit val trueInt: StringOrInt[True.type, Int] =
new StringOrInt[True.type, Int] {
def apply(t: Int) = I(t)
}
implicit val falseString: StringOrInt[False.type, String] =
new StringOrInt[False.type, String] {
def apply(t: String) = S(t)
}
}
sealed trait StringOrIntValue[T]
case class S(s: String) extends StringOrIntValue[String]
case class I(i: Int) extends StringOrIntValue[Int]
def valToString[T](x: Bool)(v: T)(implicit si: StringOrInt[x.type, T]): String =
si(v) match {
case S(s) => s
case I(i) => i.toString
}
Here we are using a variety of Scala's limited dependently typed features to encode Idris's full-spectrum dependent types.
We use the singleton types True.type and False.type to cross from the value level to the type level.
We encode the function StringOrInt as a type class indexed by the singleton Bool types, each case of the Idris function being represented by a distinct implicit instance.
We write valToString as a Scala dependent method, allowing us to use the singleton type of the Bool argument x to select the implicit StringOrInt instance si, which in turn determines the type parameter T which fixes the type of the second argument v.
We encode the dependent pattern matching in the Idris valToString by using the selected StringOrInt instance to lift the v argument into a Scala GADT which allows the Scala pattern match to refine the type of v on the RHS of the cases.
Exercising this on the Scala REPL,
scala> valToString(True)(23)
res0: String = 23
scala> valToString(False)("foo")
res1: String = foo
Lots of hoops to jump through, and lots of accidental complexity; nevertheless, it can be done.
This would be one approach (but it's a lot more limited than in Idris):
trait Type {
type T
}
def stringOrInt(x: Boolean) = // Scala infers Type { type T >: Int with String }
if (x) new Type { type T = Int } else new Type { type T = String }
and then to use it
def f(t: Type): t.T = ...
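For a concrete sketch of how the path-dependent result type behaves (my code, not Alexey's; identityOf and intT are made-up names), one can pin T explicitly so the compiler still knows what t.T is at the call site:
trait Type { type T }

val intT: Type { type T = Int } = new Type { type T = Int }  // T pinned to Int
def identityOf(t: Type)(x: t.T): t.T = x                     // the result type depends on the value t

val n: Int = identityOf(intT)(42)                            // compiles: intT.T is known to be Int
By contrast, the result of stringOrInt(true) only has the widened type shown in the comment above, so its T is no longer known to be Int, which is why this encoding is much more limited than the Idris original.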
If you are willing to limit yourself to literals, you could use singleton types of true and false using Shapeless. From examples:
import syntax.singleton._
val wTrue = Witness(true)
type True = wTrue.T
val wFalse = Witness(false)
type False = wFalse.T
trait Type1[A] { type T }
implicit val Type1True: Type1[True] = new Type1[True] { type T = Int }
implicit val Type1False: Type1[False] = new Type1[False] { type T = String }
See also Any reason why scala does not explicitly support dependent types?, http://www.infoq.com/presentations/scala-idris and http://wheaties.github.io/Presentations/Scala-Dep-Types/dependent-types.html.
I noticed that the getStringOrInt example has not been implemented by either of the two existing answers:
getStringOrInt : (x : Bool) -> StringOrInt x
getStringOrInt x = case x of
True => 10
False => "Hello"
I found Miles Sabin's explanation very useful, and this approach is based on his.
I found it more intuitive to split the GADT-like construction from the Scala apply() trickery and to map my code to Idris/Haskell concepts. I hope others find this useful. I use GADT explicitly in the names to emphasize the GADT-ness.
The two ingredients of this code are the GADT concept and Scala's implicits.
Here is a slightly modified version of Miles Sabin's solution that implements both getStringOrInt and valToString.
sealed trait StringOrIntGADT[T]
case class S(s: String) extends StringOrIntGADT[String]
case class I(i: Int) extends StringOrIntGADT[Int]
/* this compiles with 2.12.6;
before, I would have had to use an ad-hoc polymorphic extract method */
def extractStringOrInt[T](t: StringOrIntGADT[T]) : T =
t match {
case S(s) => s
case I(i) => i
}
/* apply trickery gives T -> StringOrIntGADT[T] conversion */
sealed trait EncodeInStringOrInt[T] {
def apply(t: T): StringOrIntGADT[T]
}
object EncodeInStringOrInt {
implicit val encodeString : EncodeInStringOrInt[String] = new EncodeInStringOrInt[String]{
def apply(t: String) = S(t)
}
implicit val encodeInt : EncodeInStringOrInt[Int] = new EncodeInStringOrInt[Int]{
def apply(t: Int) = I(t)
}
}
/* Subtyping provides type level Boolean */
sealed trait Bool
case object True extends Bool
case object False extends Bool
/* Type level mapping between Bool and String/Int types.
Somewhat mimicking type family concept in type theory or Haskell */
sealed trait DecideStringOrIntGADT[B, T]
case object PickS extends DecideStringOrIntGADT[False.type, String]
case object PickI extends DecideStringOrIntGADT[True.type, Int]
object DecideStringOrIntGADT {
implicit val trueInt: DecideStringOrIntGADT[True.type, Int] = PickI
implicit val falseString: DecideStringOrIntGADT[False.type, String] = PickS
}
All this work allows me to implement decent versions of getStringOrInt and valToString
def pickStringOrInt[B, T](c: DecideStringOrIntGADT[B, T]): StringOrIntGADT[T]=
c match {
case PickS => S("Hello")
case PickI => I(2)
}
def getStringOrInt[T](b: Bool)(implicit ev: DecideStringOrIntGADT[b.type, T]): T =
extractStringOrInt(pickStringOrInt(ev))
def valToString[T](b: Bool)(v: T)(implicit ev: EncodeInStringOrInt[T], de: DecideStringOrIntGADT[b.type, T]): String =
ev(v) match {
case S(s) => s
case I(i) => i.toString
}
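For completeness, exercising these definitions should give results along these lines (my usage sketch, not output from an actual run; the values 2 and "Hello" are the ones hard-coded in pickStringOrInt):
getStringOrInt(True)      // 2: Int
getStringOrInt(False)     // "Hello": String
valToString(True)(23)     // "23"
valToString(False)("foo") // "foo"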
All this (unfortunate) complexity appears to be needed; for example, the following will not compile:
// def getStringOrInt2[T](b: Bool)(implicit ev: DecideStringOrIntGADT[b.type, T]): T =
// ev match {
// case PickS => "Hello"
// case PickI => 2
// }
I have a pet project in which I compared all code in the Idris book to Haskell.
https://github.com/rpeszek/IdrisTddNotes/wiki
(I am starting work on a Scala version of this comparison.)
With a type-level Boolean (which is effectively what we have here), the StringOrInt examples become very simple if we have type families (partial functions between types). See the bottom of
https://github.com/rpeszek/IdrisTddNotes/wiki/Part1_Sec1_4_5
This makes the Haskell/Idris code much simpler and easier to read and understand.
Notice that the Scala valToString matches on the StringOrIntGADT[T]/StringOrIntValue[T] constructors and not directly on Bool, the way the Idris version does. That is one example where Idris and Haskell shine.
Related
I have a situation similar to this:
trait Abst{
type T
def met1(p: T) = p.toString
def met2(p: T, f: Double=>this.type){
val v = f(1.0)
v.met1(p)
}
}
class MyClass(x: Double) extends Abst{
case class Param(a:Int)
type T = Param
val s = met2(Param(1), (d: Double) => new MyClass(d))
}
It doesn't show errors until I run it, and then it says:
type mismatch; found: MyClass, required:
MyClass.this.type
I also tried a solution with a generic type, but then I get a conflict: this.T differs from v.T.
So I just need a way to get past the error message above, if possible.
Update
So, it turns out that this.type is the singleton type of that single instance. In a comment I proposed using
val s = met2(Param(1), (d: Double) => (new MyClass(d)).asInstanceOf[this.type])
So, if someone could just comment on this: I am aware of how ugly it is; I am just interested in how unsafe it is.
Also, you all proposed moving the definition of Param outside the class, which I definitely agree with. So its definition will go in the companion object MyClass.
The this.type is the singleton type that is inhabited by one single value, namely this. Therefore, accepting a function of type f: X => this.type as an argument is guaranteed to be nonsensical, because every invocation of f can just be replaced by this (plus the side-effects performed by f).
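A tiny illustration of that point (my sketch, not the poster's code):
trait Self {
  // f can only ever return `this`, so the whole call is equivalent to just `this`
  def call(f: Double => this.type): this.type = f(1.0)
}
object SelfDemo extends Self
val same: SelfDemo.type = SelfDemo.call(_ => SelfDemo)  // the lambda has no choice but to return SelfDemo itself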
Here is a way of forcing your code to compile with minimal changes:
trait Abst { self =>
type T
def met1(p: T) = p.toString
def met2(p: T, f: Double => Abst { type T = self.T }){
val v = f(1.0)
v.met1(p)
}
}
case class Param(a:Int)
class MyClass(x: Double) extends Abst {
type T = Param
val s = met2(Param(1), (d: Double) => new MyClass(d))
}
But honestly: don't do it. And don't do any of the F-bounded stuff either; it will probably end up as a total mess, especially if you are not familiar with the pattern. Instead, refactor your code so that you don't have any self-referential spirals in it.
Update
A remark on why telling the compiler that new MyClass(d) is of type this.type, for some other this: MyClass, is a really bad idea:
abstract class A {
type T
val x: T
val f: T => Unit
def blowup(a: A): Unit = a.asInstanceOf[this.type].f(x)
}
object A {
def apply[X](point: X, function: X => Unit): A = new A {
type T = X
val x = point
val f = function
}
}
val a = A("hello", (s: String) => println(s.size))
val b = A(42, (n: Int) => println(n + 58))
b.blowup(a)
This blows up with a ClassCastException, despite a and b both being of type A.
If you don't mind making the trait take T as a generic parameter, this is a fairly simple and straightforward equivalent solution:
trait Abst[T]{
def met1(p: T) = p.toString
def met2(p: T, f: Double=>Abst[T]){
val v = f(1.0)
v.met1(p)
}
}
case class Param(a:Int)
class MyClass(x: Double) extends Abst[Param]{
val s = met2(Param(1), (d: Double) => new MyClass(d))
}
I say it's equivalent because you're not losing any information by having met2 use the supertype instead of the subtype. The classic use case for referencing the subtype in a trait is e.g. having a method which you want to return MyClass instead of Abst even though it's defined in Abst, but that's not the situation you're in here. The only place your subtype reference is used is in the definition of f, and since function types are covariant in their result type you can pass any f: Double => MyClass where an f: Double => Abst[T] is expected without problems.
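To see the covariance point concretely (my lines, building directly on the snippet above):
// A Double => MyClass works anywhere a Double => Abst[Param] is expected,
// because function types are covariant in their result type.
val mkMyClass: Double => MyClass = d => new MyClass(d)
val asAbst: Double => Abst[Param] = mkMyClass  // compiles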
If you do want to reference the subtype anyway, see Markus' answer... and if you do want to avoid having T be a generic parameter, things get a lot more complicated again, because now you have potential conflicts between the T of Abst versus the T of the subtype in the definition of met2.
To overcome this error message you have to use F-bounded polymorphism.
Your code will look somewhat like this:
trait Abst[F <: Abst[F, T], T]{ self: F =>
def met1(p: T): String = p.toString
def met2(p: T, f: Double => F): String = {
val v = f(1.0)
v.met1(p)
}
}
case class Param(a:Int)
class MyClass(x: Double) extends Abst[MyClass, Param] {
val s = met2(Param(1), (d: Double) => new MyClass(d))
}
Explanation:
Using self: F => inside of a trait or class definition constrains the type of this. Your code will not compile if this does not conform to F.
We are using a cyclic type constraint of F: F <: Abst[F, T]. Though counterintuitive, the compiler does not mind it.
In the implementation, we then have MyClass extend Abst[MyClass, Param], which in turn satisfies F <: Abst[F, T].
Now you can use F as the return type of a function in Abst and have MyClass return MyClass in the implementation.
You might think this solution is ugly, and if you do, then you're right.
Instead of using F-bounded polymorphism it's always recommended to use typeclasses for ad-hoc-polymorphism.
You will find more information about it in the link I provided earlier.
Really, read it. It will change your view on generic programming forever.
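For a flavor of what that looks like, here is a rough sketch (my names, not the answer's code) in which the capability met2 needs is moved into a typeclass instead of a self-referential trait:
trait Renders[T] {
  def met1(p: T): String
}

case class Param(a: Int)

object Renders {
  implicit val paramRenders: Renders[Param] = new Renders[Param] {
    def met1(p: Param): String = p.toString
  }
}

def met2[T](p: T)(implicit r: Renders[T]): String = r.met1(p)
// met2(Param(1)) returns "Param(1)"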
I hope this helps.
I want to build a simple library in which a developer can define a Scala class that represents command line arguments (to keep it simple, just a single set of required arguments -- no flags or optional arguments). I'd like the library to parse the command line arguments and return an instance of the class. The user of the library would just do something like this:
case class FooArgs(fluxType: String, capacitorCount: Int)
def main(args: Array[String]) {
val argsObject: FooArgs = ArgParser.parse(args).as[FooArgs]
// do real stuff
}
The parser should throw runtime errors if the provided arguments do not match the expected types (e.g. if someone passes the string "bar" at a position where an Int is expected).
How can I dynamically build a FooArgs without knowing its shape in advance? Since FooArgs can have any arity or types, I don't know how to iterate over the command line arguments, cast or convert them to the expected types, and then use the result to construct a FooArgs. Basically, I want to do something along these lines:
// ** notional code - does not compile **
def parse[T](args: Seq[String], klass: Class[T]): T = {
val expectedTypes = klass.getDeclaredFields.map(_.getGenericType)
val typedArgs = args.zip(expectedTypes).map({
case (arg, String) => arg
case (arg, Int) => arg.toInt
case (arg, unknownType) =>
throw new RuntimeException(s"Unsupported type $unknownType")
})
(klass.getConstructor(typedArgs).newInstance _).tupled(typedArgs)
}
Any suggestions on how I can achieve something like this?
When you want to abstract over case class (or Tuple) shape, the standard approach is to get the HList representation of the case class with the help of shapeless library. HList keeps track in its type signature of the types of its elements and their amount. Then you can implement the algorithm you want recursively on the HList. Shapeless also provides a number of helpful transformations of HLists in shapeless.ops.hlist.
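To build some intuition first (a sketch of mine, using the FooArgs case class from the question), the Generic representation of a case class is just an HList of its field values:
import shapeless._

case class FooArgs(fluxType: String, capacitorCount: Int)

val gen = Generic[FooArgs]                                     // Generic.Aux[FooArgs, String :: Int :: HNil]
val repr: String :: Int :: HNil = gen.to(FooArgs("flux", 10))  // "flux" :: 10 :: HNil
val back: FooArgs = gen.from(repr)                             // FooArgs("flux", 10)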
For this problem, first we need to define an auxiliary typeclass to parse an argument of some type from String:
trait Read[T] {
def apply(str: String): T
}
object Read {
def make[T](f: String => T): Read[T] = new Read[T] {
def apply(str: String) = f(str)
}
implicit val string: Read[String] = make(identity)
implicit val int: Read[Int] = make(_.toInt)
}
You can define more instances of this typeclass, if you need to support other argument types than String or Int.
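For instance (my addition, not part of the original answer), a Double instance added to the Read companion would follow the same pattern:
implicit val double: Read[Double] = Read.make(_.toDouble)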
Then we can define the actual typeclass that parses a sequence of arguments into some type:
// This is needed, because there seems to be a conflict between
// HList's :: and the standard Scala's ::
import shapeless.{:: => :::, _}
trait ParseArgs[T] {
def apply(args: List[String]): T
}
object ParseArgs {
// Base of the recursion on HList
implicit val hnil: ParseArgs[HNil] = new ParseArgs[HNil] {
def apply(args: List[String]) =
if (args.nonEmpty) sys.error("too many args")
else HNil
}
// A single recursion step on HList
implicit def hlist[T, H <: HList](
implicit read: Read[T], parseRest: ParseArgs[H]
): ParseArgs[T ::: H] = new ParseArgs[T ::: H] {
def apply(args: List[String]) = args match {
case first :: rest => read(first) :: parseRest(rest)
case Nil => sys.error("too few args")
}
}
// The implementation for any case class, based on its HList representation
implicit def caseClass[C, H <: HList](
implicit gen: Generic.Aux[C, H], parse: ParseArgs[H]
): ParseArgs[C] = new ParseArgs[C] {
def apply(args: List[String]) = gen.from(parse(args))
}
}
And lastly we can define some API, that uses this typeclass. For example:
case class ArgParser(args: List[String]) {
def to[C](implicit parseArgs: ParseArgs[C]): C = parseArgs(args)
}
object ArgParser {
def parse(args: Array[String]): ArgParser = ArgParser(args.toList)
}
And a simple test:
scala> ArgParser.parse(Array("flux", "10")).to[FooArgs]
res0: FooArgs = FooArgs(flux,10)
There is a great guide on using shapeless for solving similar problems, which you may find helpful: The Type Astronaut’s Guide to Shapeless
The title describes a specific problem I encountered when trying to solve a more general problem: how to separate a type conversion concern from a calculation concern. If I can solve that larger problem by another means than partially applied functions, great!
I'm using a type class, NumberOps, to represent operations on numbers. This code is pared down, but it still exhibits the problem and expresses my intent. The first part simply defines the type class and a couple of implementations.
trait NumberOps[T] { // the type class (simplified for debugging)
def neg(x: T): T // negate x
def abs(x: T): T // absolute value of x
// ... over 50 more operations
def toFloating(x:T):AnyVal // convert from native to Float or Double, preserving precision
def fromFloating(f:AnyVal):T // convert from Float or Double to native
// ... also to/from Integral and to/from Big
}
object NumberOps { // Implements NumberOps for each type
import language.implicitConversions
implicit object FloatOps extends NumberOps[Float] {
def neg(x: Float): Float = -x
def abs(x: Float): Float = x.abs
def toFloating(f:Float):Float = f
def fromFloating(x:AnyVal):Float = {
x match {
case f:Float => f
case d:Double => d.toFloat
}
}
}
implicit object DoubleOps extends NumberOps[Double] {
def neg(x: Double): Double = -x
def abs(x: Double): Double = x.abs
def toFloating(d:Double):Double = d
def fromFloating(x:AnyVal):Double = {
x match {
case f:Float => f.toDouble
case d:Double => d
}
}
}
// ... other implicits defined for all primitive types, plus BigInt, BigDec
} // NumberOps object
All well and good. But now I want to implement NumberOps for complex numbers. A complex number will be represented as a 2-element array of any numeric type already defined (i.e. all primitive types plus BigInt and BigDecimal).
The intent with this code is to avoid combinatorial explosion of number types with numeric operations. I had hoped to achieve this by separating Concern A (type conversion) from Concern B (generic calculation).
You'll notice that "Concern A" is embodied in def eval, while "Concern B" is defined as a generic method, f, and then passed as a partially applied function (f _) to method eval. This code depends on the earlier code.
object ImaginaryOps { // Implements NumberOps for complex numbers, as 2-element arrays of any numeric type
import language.implicitConversions
import reflect.ClassTag
import NumberOps._
implicit def ComplexOps[U: NumberOps : ClassTag]: NumberOps[Array[U]] = { // NumberOps[T] :: NumberOps[Array[U]]
val numOps = implicitly[NumberOps[U]]
type OpF2[V] = (V,V) => NumberOps[V] => (V,V) // equivalent to curried function: f[V](V,V)(NumberOps[V]):(V,V)
// Concern A: widen x,y from native type U to type V, evaluate function f, then convert the result back to native type U
def eval[V](x:U, y:U)(f:OpF2[V]):(U,U) = {
(numOps.toFloating(x), numOps.toFloating(y), f) match {
case (xf:Float, yf:Float, _:OpF2[Float] #unchecked) => // _:opF #unchecked permits compiler type inference on f
val (xv,yv) = f(xf.asInstanceOf[V], yf.asInstanceOf[V])(FloatOps.asInstanceOf[NumberOps[V]])
(numOps.fromFloating(xv.asInstanceOf[Float]), numOps.fromFloating(yv.asInstanceOf[Float]))
case (xd:Double, yd:Double, _:OpF2[Double] #unchecked) => // _:opF #unchecked permits compiler type inference on f
val (xv,yv) = f(xd.asInstanceOf[V], yd.asInstanceOf[V])(DoubleOps.asInstanceOf[NumberOps[V]])
(numOps.fromFloating(xv.asInstanceOf[Double]), numOps.fromFloating(yv.asInstanceOf[Double]))
}
} // eval
new NumberOps[Array[U]]{ // implement NumberOps for complex numbers of any type U
def neg(a: Array[U]): Array[U] = a match { case (Array(ax, ay)) =>
def f[V](xv:V, yv:V)(no:NumberOps[V]):(V,V) = (no.neg(xv), no.neg(yv)) // Concern B: the complex calculation
val (xu,yu) = eval(a(0), a(1))(f _) // combine Concern A (widening conversion) with Concern B (calculation)
a(0) = xu; a(1) = yu; a
}
def abs(a: Array[U]): Array[U] = a match { case (Array(ax, ay)) =>
def f[V](xv:V, yv:V)(no:NumberOps[V]):(V,V) = (no.abs(xv), no.abs(yv)) // Concern B: the complex calculation
val (xu,yu) = eval(a(0), a(1))(f _) // combine Concern A (widening conversion) with Concern B (calculation)
a(0) = xu; a(1) = yu; a
}
def toFloating(a:Array[U]):AnyVal = numOps.toFloating( a(0) )
def fromFloating(x:AnyVal):Array[U] = Array(numOps.fromFloating(x), numOps.fromFloating(x))
}
} // implicit def ComplexOps
} // ImaginaryOps object
object TestNumberOps {
def cxStr(a:Any) = { a match { case ad: Array[Double] => s"${ad(0)} + ${ad(1)}i" } }
def cxUnary[T:NumberOps](v: T)(unaryOp:T => T): T = {
val ops = implicitly[NumberOps[T]]
unaryOp(v)
}
def main(args:Array[String]) {
println("TestNo4")
import ImaginaryOps._
val complexDoubleOps = implicitly[NumberOps[Array[Double]]]
val complex1 = Array(1.0,1.0)
val neg1 = cxUnary(complex1)(complexDoubleOps.neg _)
val abs1 = cxUnary(neg1)(complexDoubleOps.abs _)
println(s"adz1 = ${cxStr(complex1)}, neg1 = ${cxStr(neg1)}, abs1 = ${cxStr(abs1)}, ")
}
} // TestNumberOps
Now this code compiles, but at runtime I get a class cast exception:
Exception in thread "main" java.lang.ClassCastException: java.lang.Double cannot be cast to scala.runtime.Nothing$
at ImaginaryOps$$anon$1$$anonfun$1.apply(Experiment4.scala:68)
at ImaginaryOps$.ImaginaryOps$$eval$1(Experiment4.scala:60)
at ImaginaryOps$$anon$1.neg(Experiment4.scala:68)
at TestNumberOps$$anonfun$3.apply(Experiment4.scala:97)
at TestNumberOps$$anonfun$3.apply(Experiment4.scala:97)
at TestNumberOps$.cxUnary(Experiment4.scala:89)
at TestNumberOps$.main(Experiment4.scala:97)
at TestNumberOps.main(Experiment4.scala)
I understand why this exception occurs. It's because the compiler couldn't resolve the type V of def f[V], so when it gets passed to method eval as (f _), its generic type V has been changed to scala.runtime.Nothing.
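A minimal illustration of that inference behavior (my own reduction, not the poster's code): in Scala 2, eta-expanding a polymorphic method with nothing to constrain its type parameter instantiates that parameter to Nothing.
def g[V](x: V): V = x
val h = g _  // inferred as Nothing => Nothing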
Having struggled without success and after searching futilely online, I'm hoping to find a useful suggestion here. Probably I'm making this harder than it is, but with Scala's strong type system there ought to be a solution. The problem is how to tell the compiler to use this type in evaluating this function.
What you want to do is to use a derived type class for your complex number.
Consider the following simplified scenario,
trait Addable[A] {
def apply(a: A, b: A): A
}
implicit val intAddable: Addable[Int] = new Addable[Int] {
def apply(a: Int, b: Int): Int = a + b
}
implicit val floatAddable: Addable[Float] = new Addable[Float] {
def apply(a: Float, b: Float): Float = a + b
}
implicit final class AddOps[A](a: A) {
def add(b: A)(implicit addable: Addable[A]): A = addable(a, b)
}
which basically allows us to call 1.add(2), the Scala compiler inferring that there is an Addable for Int.
However, what about your complex type? Since we essentially want to say that an Addable exists for any complex type composed of two values of a type that itself has an Addable, we define it like this:
implicit def complexAddable[A](implicit addable: Addable[A]): Addable[Array[A]] = {
new Addable[Array[A]] {
def apply(a: Array[A], b: Array[A]): Array[A] = {
Array(a(0).add(b(0)), a(1).add(b(1)))
}
}
}
which works because there is an Addable[A] in scope. Note that, of course, the implicit cannot be created if an Addable for A doesn't exist, and hence you get lovely compile-time safety.
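A quick usage sketch (my own lines, assuming the instances above are in scope):
val i: Int = 1.add(2)                             // 3, via intAddable
val c: Array[Int] = Array(1, 2).add(Array(3, 4))  // Array(4, 6), via complexAddable(intAddable)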
You can find usages of this pattern in excellent functional libraries such as scalaz, cats, and scodec, and it is known from Haskell as the type class pattern.
I am writing a simple variable system in Scala. I need a way to hold these AnyVal values nicely and access them by their string names. I have this class called Context to manage them, but it is really just a wrapper around a HashMap[String, AnyVal]. I want to find a nicer way to do this.
class Context {
import scala.collection.mutable.HashMap
import scala.reflect.ClassTag
private var variables = HashMap[String, AnyVal] ()
def getVariable (name: String):AnyVal = variables(name)
def setVariable (name: String, value: AnyVal) = variables += ((name, value))
def getVariableOfType[T <: AnyVal : ClassTag] (name:String):T ={
val v = variables(name)
v match {
case x: T => x
case _ => null.asInstanceOf[T]
}
}
}
I really want an implementation that is type safe.
You are defining the upper bound of your variables as AnyVal. These are mostly primitive types such as Int and Boolean, not reference types for which null makes sense. Therefore you have to cast null to T. This doesn't give you a null reference but the default value for that value type, such as 0 for Int or false for Boolean:
null.asInstanceOf[Int] // 0
null.asInstanceOf[Boolean] // false
So this has nothing to do with type-safety but just the fact that Scala would otherwise refuse to use a null value here. Your pattern match against x: T is already type-safe.
A better approach would be to return an Option[T] or to throw a NoSuchElementException if you try to query a non-existing variable or the wrong type.
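A sketch of that Option-returning variant (my code, not the answer's; SafeContext is a made-up name, and note the caveat, picked up by the next answer, that ClassTag-based matching of boxed primitives can be unreliable):
import scala.collection.mutable.HashMap
import scala.reflect.ClassTag

class SafeContext {
  private val variables = HashMap[String, AnyVal]()
  def setVariable(name: String, value: AnyVal): Unit = variables += ((name, value))
  // None if the key is missing or the stored value is not a T
  def getVariableOfType[T <: AnyVal : ClassTag](name: String): Option[T] =
    variables.get(name).collect { case x: T => x }
}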
Be aware that using a plain hash-map without synchronisation means your Context is not thread safe.
Pattern matching with abstract types is difficult, and more so with instances of AnyVal. I found an approach that works in ClassTag based pattern matching fails for primitives.
In general for your problem, I would simply use a Map[String, AnyVal] with an implicit extension to give you getOfType. Then you can use the existing methods to put and get variables as you wish. You can use a type alias to give your type a more meaningful reference.
object Context {
import scala.reflect.ClassTag
import scala.runtime.ScalaRunTime
type Context = Map[String, AnyVal]
def apply(items: (String, AnyVal)*): Context = items.toMap
implicit class ContextMethods(c: Context) {
class MyTag[A](val t: ClassTag[A]) extends ClassTag[A] {
override def runtimeClass = t.runtimeClass
override def unapply(x: Any): Option[A] =
if (t.runtimeClass.isPrimitive && (ScalaRunTime isAnyVal x) &&
(x.getClass getField "TYPE" get null) == t.runtimeClass)
Some(x.asInstanceOf[A])
else t unapply x
}
def getOfType[T <: AnyVal](key: String)(implicit t: ClassTag[T]): Option[T] = {
implicit val u = new MyTag(t)
c.get(key).flatMap {
case x: T => Some(x: T)
case _ => None
}
}
}
}
With this stuff collected together in an object which you can import, use of it becomes straightforward:
import Context._
val context = Context(
"int_var" -> 123,
"long_var" -> 456l,
"double_var" -> 23.5d
)
context.get("int_var")//Some(123)
context.get("long_var")//Some(456)
context.getOfType[Int]("int_var")//Some(123)
context.getOfType[Double]("double_var")//Some(23.5)
context.getOfType[Int]("double_var")//None
I wrote this in scala and it won't compile:
class TestDoubleDef{
def foo(p:List[String]) = {}
def foo(p:List[Int]) = {}
}
the compiler reports:
[error] double definition:
[error] method foo:(List[String])Unit and
[error] method foo:(List[Int])Unit at line 120
[error] have same type after erasure: (List)Unit
I know the JVM has no native support for generics, so I understand this error.
I could write wrappers for List[String] and List[Int], but I'm lazy :)
I'm doubtful, but is there another way of expressing that List[String] is not the same type as List[Int]?
Thanks.
I like Michael Krämer's idea to use implicits, but I think it can be applied more directly:
case class IntList(list: List[Int])
case class StringList(list: List[String])
implicit def il(list: List[Int]) = IntList(list)
implicit def sl(list: List[String]) = StringList(list)
def foo(i: IntList) { println("Int: " + i.list)}
def foo(s: StringList) { println("String: " + s.list)}
I think this is quite readable and straightforward.
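Call sites stay unchanged, because the implicit conversions fire automatically (my usage lines):
foo(List(1, 2, 3))   // prints: Int: List(1, 2, 3)
foo(List("a", "b"))  // prints: String: List(a, b)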
[Update]
There is another easy way which seems to work:
def foo(p: List[String]) { println("Strings") }
def foo[X: ClassTag](p: List[Int]) { println("Ints") }
def foo[X: ClassTag, Y: ClassTag](p: List[Double]) { println("Doubles") }
For every version you need an additional type parameter, so this doesn't scale, but I think for three or four versions it's fine.
[Update 2]
For exactly two methods I found another nice trick:
def foo(list: => List[Int]) = { println("Int-List " + list)}
def foo(list: List[String]) = { println("String-List " + list)}
Instead of inventing dummy implicit values, you can use the DummyImplicit defined in Predef which seems to be made exactly for that:
class TestMultipleDef {
def foo(p:List[String]) = ()
def foo(p:List[Int])(implicit d: DummyImplicit) = ()
def foo(p:List[java.util.Date])(implicit d1: DummyImplicit, d2: DummyImplicit) = ()
}
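No definitions or imports are needed at the call site, because an implicit DummyImplicit is always available (my usage lines):
val t = new TestMultipleDef
t.foo(List("a", "b"))            // resolves to the String overload
t.foo(List(1, 2))                // resolves to the Int overload
t.foo(List(new java.util.Date))  // resolves to the Date overload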
To understand Michael Krämer's solution, it's necessary to recognize that the types of the implicit parameters are unimportant. What is important is that their types are distinct.
The following code works in the same way:
class TestDoubleDef {
object dummy1 { implicit val dummy: dummy1.type = this }
object dummy2 { implicit val dummy: dummy2.type = this }
def foo(p:List[String])(implicit d: dummy1.type) = {}
def foo(p:List[Int])(implicit d: dummy2.type) = {}
}
object App extends Application {
val a = new TestDoubleDef()
a.foo(1::2::Nil)
a.foo("a"::"b"::Nil)
}
At the bytecode level, both foo methods become two-argument methods since JVM bytecode knows nothing of implicit parameters or multiple parameter lists. At the callsite, the Scala compiler selects the appropriate foo method to call (and therefore the appropriate dummy object to pass in) by looking at the type of the list being passed in (which isn't erased until later).
While it's more verbose, this approach relieves the caller of the burden of supplying the implicit arguments. In fact, it even works if the dummyN objects are private to the TestDoubleDef class.
Due to the wonders of type erasure, the type parameters of your methods' List get erased during compilation, thus reducing both methods to the same signature, which is a compiler error.
As Viktor Klang already says, the generic type will be erased by the compiler. Fortunately, there's a workaround:
class TestDoubleDef{
def foo(p:List[String])(implicit ignore: String) = {}
def foo(p:List[Int])(implicit ignore: Int) = {}
}
object App extends Application {
implicit val x = 0
implicit val y = ""
val a = new TestDoubleDef()
a.foo(1::2::Nil)
a.foo("a"::"b"::Nil)
}
Thanks to Michid for the tip!
If I combine Daniel's response and Sandor Murakozi's response here I get:
#annotation.implicitNotFound(msg = "Type ${T} not supported only Int and String accepted")
sealed abstract class Acceptable[T]; object Acceptable {
implicit object IntOk extends Acceptable[Int]
implicit object StringOk extends Acceptable[String]
}
class TestDoubleDef {
def foo[A : Acceptable : Manifest](p:List[A]) = {
val m = manifest[A]
if (m equals manifest[String]) {
println("String")
} else if (m equals manifest[Int]) {
println("Int")
}
}
}
I get a typesafe(ish) variant
scala> val a = new TestDoubleDef
a: TestDoubleDef = TestDoubleDef@f3cc05f
scala> a.foo(List(1,2,3))
Int
scala> a.foo(List("test","testa"))
String
scala> a.foo(List(1L,2L,3L))
<console>:21: error: Type Long not supported only Int and String accepted
a.foo(List(1L,2L,3L))
^
scala> a.foo("test")
<console>:9: error: type mismatch;
found : java.lang.String("test")
required: List[?]
a.foo("test")
^
The logic may also be included in the type class as such (thanks to jsuereth):
#annotation.implicitNotFound(msg = "Foo does not support ${T} only Int and String accepted")
sealed trait Foo[T] { def apply(list : List[T]) : Unit }
object Foo {
implicit def stringImpl = new Foo[String] {
def apply(list : List[String]) = println("String")
}
implicit def intImpl = new Foo[Int] {
def apply(list : List[Int]) = println("Int")
}
}
def foo[A : Foo](x : List[A]) = implicitly[Foo[A]].apply(x)
Which gives:
scala> @annotation.implicitNotFound(msg = "Foo does not support ${T} only Int and String accepted")
| sealed trait Foo[T] { def apply(list : List[T]) : Unit }; object Foo {
| implicit def stringImpl = new Foo[String] {
| def apply(list : List[String]) = println("String")
| }
| implicit def intImpl = new Foo[Int] {
| def apply(list : List[Int]) = println("Int")
| }
| } ; def foo[A : Foo](x : List[A]) = implicitly[Foo[A]].apply(x)
defined trait Foo
defined module Foo
foo: [A](x: List[A])(implicit evidence$1: Foo[A])Unit
scala> foo(1)
<console>:8: error: type mismatch;
found : Int(1)
required: List[?]
foo(1)
^
scala> foo(List(1,2,3))
Int
scala> foo(List("a","b","c"))
String
scala> foo(List(1.0))
<console>:32: error: Foo does not support Double only Int and String accepted
foo(List(1.0))
^
Note that we have to write implicitly[Foo[A]].apply(x) since the compiler thinks that implicitly[Foo[A]](x) means that we call implicitly with parameters.
There is (at least) one other way, even if it is not very nice and not really type safe:
import scala.reflect.Manifest
object Reified {
def foo[T](p:List[T])(implicit m: Manifest[T]) = {
def stringList(l: List[String]) {
println("Strings")
}
def intList(l: List[Int]) {
println("Ints")
}
val StringClass = classOf[String]
val IntClass = classOf[Int]
m.erasure match {
case StringClass => stringList(p.asInstanceOf[List[String]])
case IntClass => intList(p.asInstanceOf[List[Int]])
case _ => error("???")
}
}
def main(args: Array[String]) {
foo(List("String"))
foo(List(1, 2, 3))
}
}
The implicit manifest parameter can be used to "reify" the erased type and thus hack around erasure. You can learn a bit more about it in many blog posts, e.g. this one.
What happens is that the manifest param can give you back what T was before erasure. Then a simple dispatch based on T to the various real implementations does the rest.
Probably there is a nicer way to do the pattern matching, but I haven't seen it yet. What people usually do is match on m.toString, but I think keeping classes is a bit cleaner (even if it's a bit more verbose). Unfortunately the documentation of Manifest is not too detailed; maybe it also has something that could simplify this.
A big disadvantage is that it's not really type safe: foo will be happy with any T, and if you can't handle it you will have a problem. I guess it could be worked around with some constraints on T, but that would further complicate things.
And of course this whole approach is also not too nice; I'm not sure it's worth doing, especially if you are lazy ;-)
Instead of using manifests you could also use dispatchers objects implicitly imported in a similar manner. I blogged about this before manifests came up: http://michid.wordpress.com/code/implicit-double-dispatch-revisited/
This has the advantage of type safety: the overloaded method will only be callable for types which have dispatchers imported into the current scope.
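A rough sketch of the dispatcher idea (my own code, not michid's; the names are made up): the overload is replaced by a single method that requires an implicitly supplied dispatcher for the element type, so it is only callable where a dispatcher has been imported.
trait ListDispatcher[A] {
  def dispatch(list: List[A]): Unit
}

object Dispatchers {
  implicit val stringDispatcher: ListDispatcher[String] = new ListDispatcher[String] {
    def dispatch(list: List[String]): Unit = println("Strings: " + list)
  }
  implicit val intDispatcher: ListDispatcher[Int] = new ListDispatcher[Int] {
    def dispatch(list: List[Int]): Unit = println("Ints: " + list)
  }
}

import Dispatchers._
def foo[A](p: List[A])(implicit d: ListDispatcher[A]): Unit = d.dispatch(p)
// foo(List(1, 2)) prints "Ints: List(1, 2)"; foo(List(1.0)) does not compile, since no dispatcher for Double is in scope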
Nice trick I've found from http://scala-programming-language.1934581.n4.nabble.com/disambiguation-of-double-definition-resulting-from-generic-type-erasure-td2327664.html
by Aaron Novstrup
Beating this dead horse some more...
It occurred to me that a cleaner hack is to use a unique dummy type
for each method with erased types in its signature:
object Baz {
private object dummy1 { implicit val dummy: dummy1.type = this }
private object dummy2 { implicit val dummy: dummy2.type = this }
def foo(xs: String*)(implicit e: dummy1.type) = 1
def foo(xs: Int*)(implicit e: dummy2.type) = 2
}
[...]
I tried improving on Aaron Novstrup’s and Leo’s answers to make one set of standard evidence objects importable and more terse.
final object ErasureEvidence {
class E1 private[ErasureEvidence]()
class E2 private[ErasureEvidence]()
implicit final val e1 = new E1
implicit final val e2 = new E2
}
import ErasureEvidence._
class Baz {
def foo(xs: String*)(implicit e:E1) = 1
def foo(xs: Int*)(implicit e:E2) = 2
}
But that will cause the compiler to complain that there are ambiguous choices for the implicit value when foo calls another method that requires an implicit parameter of the same type.
Thus I offer only the following, which is more terse in some cases. This improvement also works with value classes (those that extend AnyVal).
final object ErasureEvidence {
class E1[T] private[ErasureEvidence]()
class E2[T] private[ErasureEvidence]()
implicit def e1[T] = new E1[T]
implicit def e2[T] = new E2[T]
}
import ErasureEvidence._
class Baz {
def foo(xs: String*)(implicit e:E1[Baz]) = 1
def foo(xs: Int*)(implicit e:E2[Baz]) = 2
}
If the containing type name is rather long, declare an inner trait to make it more terse.
class Supercalifragilisticexpialidocious[A,B,C,D,E,F,G,H,I,J,K,L,M] {
private trait E
def foo(xs: String*)(implicit e:E1[E]) = 1
def foo(xs: Int*)(implicit e:E2[E]) = 2
}
However, value classes do not allow inner traits, classes, or objects. Note also that Aaron Novstrup’s and Leo’s answers do not work with value classes.
I didn't test this, but why wouldn't an upper bound work?
def foo[T <: String](s: List[T]) { println("Strings: " + s) }
def foo[T <: Int](i: List[T]) { println("Ints: " + i) }
Doesn't the erasure translation then change from foo(List[Any] s) twice to foo(List[String] s) and foo(List[Int] i)? See:
http://www.angelikalanger.com/GenericsFAQ/FAQSections/TechnicalDetails.html#FAQ108
I think I read that in version 2.8 the upper bounds are now encoded that way, instead of always erasing to Any.
To overload on covariant types, use an invariant bound (is there such a syntax in Scala? ... ah, I think there isn't, but take the following as a conceptual addendum to the main solution above):
def foo[T : String](s: List[T]) { println("Strings: " + s) }
def foo[T : String2](s: List[T]) { println("String2s: " + s) }
then I presume the implicit casting is eliminated in the erased version of the code.
UPDATE: The problem is that the JVM erases more type information from method signatures than is "necessary". I provided a link. It erases type variables from type constructors, even the concrete bound of those type variables.
There is a conceptual distinction, because there is no conceptual non-reified advantage to erasing the function's type bound: it is known at compile time, it does not vary with any instance of the generic, and callers must not call the function with types that do not conform to the type bound. So how can the JVM enforce the type bound if it is erased? Well, one link says the type bound is retained in metadata which compilers are supposed to access. That explains why using type bounds doesn't enable overloading.
It also means the JVM is a wide-open security hole, since type-bounded methods can be called without type bounds (yikes!), so excuse me for assuming the JVM designers wouldn't do such an insecure thing.
At the time I wrote this, I didn't understand that stackoverflow was a system of rating people by quality of answers like some competition over reputation. I thought it was a place to share information. At the time I wrote this, I was comparing reified and non-reified from a conceptual level (comparing many different languages), and so in my mind it didn't make any sense to erase the type bound.