Suppose I am writing library code that should be easy to extend and to use without verbose syntax. It seems like implicit conversions can be used to avoid verbosity, as in the Scala Collections library, but I am struggling to apply them to traversables, as follows.
I have a trait:
trait Wrapped[T] {
  def value: T
}
Then I have the class Foo, a key class in the library. Foos are constructed with a list of anything that is Wrapped.
case class Foo[T <: Wrapped[_]](lst: Traversable[T]) {
  override def toString = lst mkString " "
}
A common use case would be wrapping an Int so I provide a WrappedInt class:
case class WrappedInt(value : Int) extends Wrapped[Int]
With this code, I can make a Foo like this:
val wrappedInts = Seq(1, 2, 3, 4) map { new WrappedInt(_) }
val someFoo = Foo(wrappedInts)
I do not like the extra code to wrap here. I would like the following code to be equivalent:
val foo = Foo(Seq(1, 2, 3, 4)) //should be a Foo[WrappedInt] - gives error
I define the following implicit:
object Conversions {
  implicit def intsToWrapped(lst: Traversable[Int]) = lst map { new WrappedInt(_) }
}
However, it still doesn't work: my val foo has a compilation error, even after changing the Foo constructor parameter to an implicit lst. The Scala book says that an implicit conversion essentially allows x + y to be rewritten as convert(x) + y, where convert is an implicit conversion. It seems to me that I have exactly the same case: one conversion of one parameter should be enough. I verified that the conversion itself works by calling it explicitly:
val foo = Foo(Conversions.intsToWrapped(Seq(1, 2, 3, 4)))
Why is my implicit not being applied? And is there a different, more idiomatic way in current Scala to let Foo be constructed with less code?
EDIT: Adding import Conversions._ does not help and should, if I understand correctly, not be necessary because this example is in one file.
Specific compiler errors I get are these:
val foo = Foo(Seq(1, 2, 3, 4))
inferred type arguments [Int] do not conform to method apply's type parameter bounds [T <: Wrapped[_]]
type mismatch; found : Seq[Int] required: Traversable[T]
Specifying the type to help with type inference, like this:
val foo = Foo[WrappedInt](Seq(1, 2, 3, 4))
gives a message for each int like
type mismatch; found : Int(1) required: WrappedInt
You can require an implicit conversion in Foo's constructor (what once was a view bound). You have to specify that the collection elements are "viewable" as their wrapped versions, not the collection itself.
trait Wrapped[T] {
  def value: T
}

// you can specify bounds on W if you want to be able to do something specific on the values, e.g. W <: MyClass
case class Foo[T, W](lst: Traversable[T])(implicit asWrapped: T => Wrapped[W]) {
  override def toString = lst.map(_.value) mkString " "
}
case class WrappedInt(value : Int) extends Wrapped[Int]
// This is your implicit conversion from Int to WrappedInt
implicit def intToWrapped(x : Int): WrappedInt = WrappedInt(x)
val wrappedInts = Seq(1, 2, 3, 4) map { new WrappedInt(_) }
val someFoo = Foo(wrappedInts)
val foo = Foo(Traversable(1, 2, 3, 4))
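For comparison, in older Scala versions the same constraint could be written with the now-deprecated view bound syntax; this is only a sketch of the older spelling, the implicit parameter above is the current form:

case class Foo[T <% Wrapped[W], W](lst: Traversable[T]) {   // T <% Wrapped[W] desugars to the implicit T => Wrapped[W] above
  override def toString = lst.map(_.value) mkString " "
}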
Related
I have a situation where I need a method that can take in types:
Array[Int]
Array[Array[Int]]
Array[Array[Array[Int]]]
Array[Array[Array[Array[Int]]]]
etc...
let's call this type RAI for "recursive array of ints"
def make(rai: RAI): ArrayPrinter = { ArrayPrinter(rai) }
Where ArrayPrinter is a class that is initialized with an RAI and iterates through the entire rai (let's say it prints all the values in this Array[Array[Int]])
val arrayOfArray: Array[Array[Int]] = Array(Array(1, 2), Array(3, 4))
val printer: ArrayPrinter[Array[Array[Int]]] = make(arrayOfArray)
printer.print_! // prints "1, 2, 3, 4"
It can also return the original Array[Array[Int]] without losing any type information.
val arr: Array[Array[Int]] = printer.getNestedArray()
How do you implement this in Scala?
Let's first focus on the type. According to your definition, a type T should typecheck as an argument for ArrayPrinter if it is accepted by the following type-level function:
def accept[T]: Boolean =
  T match { // That's everyday business in Agda
    case Array[Int] => true
    case Array[X]   => accept[X]
    case _          => false
  }
In Scala, you can encode that type function using implicit resolution:
trait RAI[T]

object RAI {
  implicit val e0: RAI[Array[Int]] = null
  implicit def e1[T](implicit i: RAI[T]): RAI[Array[T]] = null
}
case class ArrayPrinter[T: RAI](getNestedArray: T) // only compiles if T is an RAI
To print things, the simplest solution is to treat the rai: T as a rai: Any (inside ArrayPrinter):
def print_! : Unit = {
  def print0(a: Any): Unit = a match {
    case i: Int        => println(i)
    case arr: Array[_] => arr.foreach(print0)
    case _             => ???
  }
  print0(getNestedArray) // start the recursion on the stored array
}
You could also be fancy and write print_! using type classes, but that would probably be less efficient and take more time to write than the above... Left as an exercise for the reader ;-)
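Putting the pieces together, usage would look roughly like this (assuming print_! is defined as a method of ArrayPrinter, as sketched above; expected output in comments):

val printer = ArrayPrinter(Array(Array(1, 2), Array(3, 4)))
printer.print_!                                       // prints 1, 2, 3, 4, one value per line
val arr: Array[Array[Int]] = printer.getNestedArray   // the original array, with its type preserved
// ArrayPrinter("hello")                              // does not compile: no RAI[String] instance exists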
The way this is typically done is by defining an abstract class that contains all the functionality that you would want related to this recursive type, but does not actually take any constructor arguments. Rather, all of its methods take (at least one of) the type as an argument. The canonical example would be Ordering. Define one or more implicit implementations of this class, and then any time you need to use it, accept it as an implicit parameter. The corresponding example would be List's sorted method.
In your case, this might look like:
abstract class ArrayPrinter[A] {
  def mkString(a: A): String
}

implicit object BaseArrayPrinter extends ArrayPrinter[Int] {
  override def mkString(x: Int) = x.toString
}

class WrappedArrayPrinter[A](wrapped: ArrayPrinter[A]) extends ArrayPrinter[Array[A]] {
  override def mkString(xs: Array[A]) = xs.map(wrapped.mkString).mkString(", ")
}
implicit def makeWrappedAP[A](implicit wrapped: ArrayPrinter[A]): ArrayPrinter[Array[A]] = new WrappedArrayPrinter(wrapped)
def printHello[A](xs: A)(implicit printer: ArrayPrinter[A]): Unit = {
  println("hello, array: " + printer.mkString(xs))
}
This tends to be a bit cleaner than having that RAIOps class (or ArrayPrinter) take in an object as part of its constructor. That usually leads to more "boxing" and "unboxing", complicated type signatures, strange pattern matching, etc.
It also has the added benefit of being easier to extend. If later someone else has a reason to want an implementation of ArrayPrinter for a Set[Int], they can define it locally to their code. I have many times defined a custom Ordering.
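For illustration, with the definitions above the right ArrayPrinter instance is assembled at compile time (expected output in comments):

printHello(3)                                  // hello, array: 3
printHello(Array(1, 2, 3))                     // hello, array: 1, 2, 3
printHello(Array(Array(1, 2), Array(3, 4)))    // hello, array: 1, 2, 3, 4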
Methods taking a single argument can be written as infix operators in Scala. I.e. adding def *(other: C) = foo(this, other) to class C allows us to write c1 * c2 instead of foo(c1, c2). But is there a way to define infix operators on existing classes that you cannot modify?
E.g. if I wanted to write c1 + c2 instead of xor(c1, c2), where c1, c2: Array[Byte], I obviously cannot modify the Array class.
I found this and tried
implicit class Bytearray(a1: Array[Byte]) extends AnyVal {
  def +(a2: Array[Byte]) = xor(a1, a2)
}
But that doesn't seem to work (c1 + c2).
Type mismatch, expected:String, actual:Array[Byte]
I thought that perhaps the issue was my using +, so I exchanged it for xor
but c1 xor c2 only led to
Cannot resolve symbol xor
Any suggestions?
UPDATE
Interesting. I had a class Foo with an object Foo defined below it, containing the implicit class. This led to the aforementioned errors.
However, deleting the object and instead putting the implicit class into a trait BytearrayHandling and then extending it (class Foo extends BytearrayHandling) seems to work. Why is that?
It should be straightforward with the normal declaration of extension methods:
implicit class ByteArrayOps(private val a1: Array[Byte]) extends AnyVal {
  def + (a2: Array[Byte]): Array[Byte] =
    (a1 zip a2).map { case (x, y) => (x ^ y).toByte }
}
"foo".getBytes + "bar".getBytes // Array(4, 14, 29)
However be aware that sometimes you will run into this:
Type mismatch, expected:String, actual: X
This is because of an implicit conversion kicking in that allows you to + anything by converting it to a String. I have given up trying to understand how to deactivate it. It will finally go in Scala 2.12 if I'm not mistaken.
As eugener pointed out, this error message may indicate that you haven't actually imported your extension method (implicit conversion). For example:
object MyStuff {
  implicit class ByteArrayOps(private val a1: Array[Byte]) extends AnyVal {
    def + (a2: Array[Byte]): Array[Byte] =
      (a1 zip a2).map { case (x, y) => (x ^ y).toByte }
  }
}
"foo".getBytes + "bar".getBytes // error
gives:
<console>:14: error: type mismatch;
found : Array[Byte]
required: String
"foo".getBytes + "bar".getBytes
^
because of this Predef conversion. After you import MyStuff.ByteArrayOps, it works.
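That is, something like this should then compile and pick up the extension method (a small sketch; how the result renders depends on how you print the Array):

import MyStuff.ByteArrayOps
("foo".getBytes + "bar".getBytes).mkString(", ")   // 4, 14, 29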
You can do something like:
class ByteArray(self: Array[Byte]) {
  def +(other: Array[Byte]) = Array[Byte](1, 2, 3) // replace with your code
}
implicit def byteArrayPlus(self: Array[Byte]) = new ByteArray(self)
Array[Byte](0, 1, 2) + Array[Byte](0, 2, 3)
the last line of which should yield Array(1, 2, 3).
I am searching for a way to restrict my polymorphic class to types that have a certain member function.
class Table[T](bla: Array[T]) {
  val symbols = bla
  symbols.foreach( x => x * probability(x))
  def probability(t: T): Double = ...
}
This code does not compile because T doesn't have a member *. How can I ensure that it does? I don't want to use inheritance.
Edit: probability is actually implemented. It returns a Double.
Any ideas?
The problem can be solved in different ways. For example, if you just want type T to have some method (and you don't care whether this method is defined on the object itself or there is an implicit conversion that converts the object to something that has this method), then you can use view bounds. Here is an example that expects type T to have the method def *(times: Int): T:
class Table[T <% {def *(times: Int): T}](bla: Array[T]) {
  bla.foreach( x => println(x * 2))
}
new Table(Array("Hello", "World"))
// Prints:
// HelloHello
// WorldWorld
String does not have a * method, but there exists an implicit conversion to StringOps that does.
Here is another example. In this case I am restricting type T to types with the method def size: Int:
class Table[T <% {def size: Int}](bla: Array[T]) {
  bla.foreach( x => println(x.size))
}
new Table(Array(List(1, 2, 3), List("World")))
// Prints:
// 3
// 1
List has method size, and it also works as expected.
But this can be more involved if you are working with numeric values like Ints, Floats, Doubles, etc. In this case I recommend using a context bound. Scala has the Numeric type class. You can use it to work with numbers without knowing their concrete type (with Numeric you can actually work with anything that can be represented as a number, so your code becomes much more general and abstract). Here is an example of it:
import math.Numeric.Implicits._

class Table[T : Numeric](bla: Array[T]) {
  bla.foreach( x => println(x * x))
}
new Table(Array(1, 2, 3))
// Prints:
// 1
// 4
// 9
new Table(Array(BigInt("13473264523654723574623"), BigInt("5786785634377457457465784685683746583454545454")))
// Prints:
// 181528856924372945350108280958825119049592129
// 33486887978237312740760811863500355048015109407078304275771413678604907671187978933752066116
Update
As you noted in the comments, Numeric still does not solve your problem, because it can only combine numbers of the same type. You can solve this by introducing a new type class. Here is an example:
import math.Numeric.Implicits._

trait Convert[From, To] {
  def convert(f: From): To
}

object Convert {
  implicit object DoubleToInt extends Convert[Double, Int] {
    def convert(d: Double): Int = d.toInt
  }
  implicit object DoubleToBigInt extends Convert[Double, BigInt] {
    def convert(d: Double): BigInt = d.toLong
  }
}

type DoubleConvert[To] = Convert[Double, To]

class Table[T : Numeric : DoubleConvert](bla: Array[T]) {
  bla.foreach( x => println(x * implicitly[DoubleConvert[T]].convert(probability(x))))
  def probability(t: T): Double = t.toDouble + 2.5
}
new Table(Array(1, 2, 3))
new Table(Array(BigInt("13473264523654723574623"), BigInt("5786785634377453434")))
With the DoubleConvert type class and the T : Numeric : DoubleConvert context bound you are not only saying that T should be some kind of number, but also that there should exist some evidence that it can be converted from Double. You receive such evidence with implicitly[DoubleConvert[T]] and then use it to convert a Double to T. I defined Convert for Double -> Int and Double -> BigInt, but you can also define your own for the types you need.
Use Scala's structural typing: http://markthomas.info/blog/?p=66
Your code will end up looking something like this:
class Table[T <: {def *(i: Int): T}](bla: Array[T]) {
  ...
}
Everyone else is answering with "Structural Types". Quite rightly so, because that's the correct answer!
Instead of repeating the obvious, I'll expand upon it. Taking a snippet from Easy Angel's reply:
class Table[T <% {def *(times: Int): T}](bla: Array[T]) {
  bla foreach {x => println(x*2)}
}
If you find you're using the same expression {def *(times: Int): T} more than once, then you can create a (parameterized) type alias for it:
type HasTimes[T] = {def *(times: Int): T}

class Table[T <% HasTimes[T]](bla: Array[T]) {
  bla foreach {x => println(x*2)}
}
If you don't want to use inheritance, the only other restriction you can apply is a context bound. So if you have a list of classes that are okay, you create an implicit object HasStar[X] for each class X and use a context bound like T:HasStar. I know this probably isn't exactly what you want, but I don't think there are any better options.
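A minimal sketch of that idea (HasStar and the instances below are hypothetical names, only to illustrate the pattern):

trait HasStar[T]

object HasStar {
  // whitelist of element types that Table accepts
  implicit object IntHasStar    extends HasStar[Int]
  implicit object DoubleHasStar extends HasStar[Double]
}

class Table[T : HasStar](bla: Array[T]) {
  // T is now restricted to types with a HasStar instance in scope
}

new Table(Array(1, 2, 3))       // compiles
// new Table(Array("a", "b"))   // does not compile: no HasStar[String]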
Given the following code:
abstract class Field {
  type T
  val data: List[T]
  def sum: T = data(0) + data(1)
}
I get an error on the last line - def sum: T = data(0) + data(1):
types2.scala:6: error: type mismatch;
found : Field.this.T
required: String
def sum: T = data(0) + data(1)
^
That is, it expects data(1) to be a String.
I don't understand why... (Scala 2.8.1)
Your explanation will be much appreciated!
Since T does not support an addition operation, the compiler assumes + to be string concatenation. The following line, which I tried out in the REPL, indicates as much:
scala> implicitly[Any => {def +(s: String): String}]
res16: (Any) => AnyRef{def +(s: String): String} = <function1>
What you can do is require that T have a Semigroup algebra defined. (A type is a semigroup if it supports an associative append operation.)
scala> import scalaz._
import scalaz._
scala> import Scalaz._
import Scalaz._
scala> abstract class Field[A : Semigroup] {
| val data: IndexedSeq[A]
| def sum: A = data(0) |+| data(1)
| }
defined class Field
scala> val f = new Field[Int] {
| val data = IndexedSeq(2, 3, 4)
| }
f: Field[Int] = $anon$1#d1fd51
scala> f.sum
res12: Int = 5
I replaced the abstract type with a type parameter simply because I do not know how to put a context bound on an abstract type. I also changed the type of data from List[A] to IndexedSeq[A] because, as the name indicates, indexed sequences are better suited for indexed access than lists (which is what you do in your sum method). And finally, |+| is the semigroup append operation. For numeric types it performs addition; for sequences, concatenation, etc.
As a complement to @missingfactor's answer: while in principle I would very much favor Semigroup, there is a trait Numeric in the standard library which would do the same. And on collections whose content is Numeric (where a "Numeric structure" exists for the elements' type), you can simply call collection.sum (should you want to sum all the elements rather than just the first two).
I prefer Semigroup for two reasons. First, Numeric is much more than what is needed here; second, it is not clear what the exact properties of a Numeric structure are. On the other hand, even someone not familiar with basic algebra will have a reasonable understanding of what Numeric means.
So if you are afraid of scalaz and/or semigroups, you can replace Semigroup with Numeric and |+| with +. You must import Numeric.Implicits._ so that + is available.
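A sketch of that Numeric-based variant, mirroring the Semigroup example above:

import scala.math.Numeric.Implicits._

abstract class Field[A : Numeric] {
  val data: IndexedSeq[A]
  def sum: A = data(0) + data(1)   // + comes from Numeric.Implicits
}

val f = new Field[Int] { val data = IndexedSeq(2, 3, 4) }
f.sum   // 5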
The compiler doesn't know how to invoke + in your type T, because it knows nothing about T. The only solution to compile this + is then a pimped string concatenation (by means of the implicit Predef.any2stringadd), which expects a string as second argument — hence the error you're getting.
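Roughly, the compiler elaborates the failing line into something like the following (the exact shape of any2stringadd differs between Scala versions), which is why it then demands a String on the right-hand side:

// what the compiler effectively tries after the implicit conversion kicks in:
def sum: T = Predef.any2stringadd(data(0)) + data(1)   // this + only accepts a String argument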
After a lot of playing with this, I came up with a very simple solution.
Here is the full program
package manytypes

abstract class Field {
  type T
  val data: List[T]
  def add(a: T, b: T): T
}

abstract class FieldInt extends Field {
  type T = Int
  def add(a: T, b: T): T = a + b
}

abstract class FieldDouble extends Field {
  type T = Double
  def add(a: T, b: T): T = a + b
}

abstract class FieldString extends Field {
  type T = String
  def add(a: T, b: T): T = a + b
}

object A extends App {
  val ints: Field = new FieldInt { val data = List(1, 2, 3) }
  val doubles: Field = new FieldDouble { val data = List(1.2, 2.3, 3.4) }
  val strings: Field = new FieldString { val data = List("hello ", "this is ", "a list ") }

  val fields: List[Field] = List(ints, doubles, strings)

  for (field <- fields) println(field.data.reduceLeft(field.add(_, _)))
}
I wrote this in scala and it won't compile:
class TestDoubleDef {
  def foo(p: List[String]) = {}
  def foo(p: List[Int]) = {}
}
the compiler reports:
[error] double definition:
[error] method foo:(List[String])Unit and
[error] method foo:(List[Int])Unit at line 120
[error] have same type after erasure: (List)Unit
I know JVM has no native support for generics so I understand this error.
I could write wrappers for List[String] and List[Int] but I'm lazy :)
I'm doubtful, but is there another way of expressing that List[String] is not the same type as List[Int]?
Thanks.
I like Michael Krämer's idea to use implicits, but I think it can be applied more directly:
case class IntList(list: List[Int])
case class StringList(list: List[String])
implicit def il(list: List[Int]) = IntList(list)
implicit def sl(list: List[String]) = StringList(list)
def foo(i: IntList) { println("Int: " + i.list)}
def foo(s: StringList) { println("String: " + s.list)}
I think this is quite readable and straightforward.
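Usage then looks like this; the implicit conversions are applied to the argument to select the matching overload (expected output in comments):

foo(List(1, 2, 3))      // Int: List(1, 2, 3)
foo(List("a", "b"))     // String: List(a, b)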
[Update]
There is another easy way which seems to work:
def foo(p: List[String]) { println("Strings") }
def foo[X: ClassTag](p: List[Int]) { println("Ints") }
def foo[X: ClassTag, Y: ClassTag](p: List[Double]) { println("Doubles") }
For every version you need an additional type parameter, so this doesn't scale, but I think for three or four versions it's fine.
[Update 2]
For exactly two methods I found another nice trick:
def foo(list: => List[Int]) = { println("Int-List " + list)}
def foo(list: List[String]) = { println("String-List " + list)}
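A quick check of the by-name trick, for what it's worth (expected output in comments):

foo(List(1, 2, 3))      // Int-List List(1, 2, 3)
foo(List("a", "b"))     // String-List List(a, b)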
Instead of inventing dummy implicit values, you can use the DummyImplicit defined in Predef which seems to be made exactly for that:
class TestMultipleDef {
  def foo(p: List[String]) = ()
  def foo(p: List[Int])(implicit d: DummyImplicit) = ()
  def foo(p: List[java.util.Date])(implicit d1: DummyImplicit, d2: DummyImplicit) = ()
}
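A DummyImplicit value is always available from Predef, so callers never have to supply anything themselves; a small usage sketch:

val t = new TestMultipleDef
t.foo(List("a", "b"))              // String overload
t.foo(List(1, 2))                  // Int overload, dummy filled in automatically
t.foo(List(new java.util.Date))    // Date overload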
To understand Michael Krämer's solution, it's necessary to recognize that the types of the implicit parameters are unimportant. What is important is that their types are distinct.
The following code works in the same way:
class TestDoubleDef {
  object dummy1 { implicit val dummy: dummy1.type = this }
  object dummy2 { implicit val dummy: dummy2.type = this }

  def foo(p: List[String])(implicit d: dummy1.type) = {}
  def foo(p: List[Int])(implicit d: dummy2.type) = {}
}

object App extends Application {
  val a = new TestDoubleDef()
  a.foo(1 :: 2 :: Nil)
  a.foo("a" :: "b" :: Nil)
}
At the bytecode level, both foo methods become two-argument methods since JVM bytecode knows nothing of implicit parameters or multiple parameter lists. At the callsite, the Scala compiler selects the appropriate foo method to call (and therefore the appropriate dummy object to pass in) by looking at the type of the list being passed in (which isn't erased until later).
While it's more verbose, this approach relieves the caller of the burden of supplying the implicit arguments. In fact, it even works if the dummyN objects are private to the TestDoubleDef class.
Due to the wonders of type erasure, the type parameters of your methods' List get erased during compilation, thus reducing both methods to the same signature, which is a compiler error.
As Viktor Klang already says, the generic type will be erased by the compiler. Fortunately, there's a workaround:
class TestDoubleDef {
  def foo(p: List[String])(implicit ignore: String) = {}
  def foo(p: List[Int])(implicit ignore: Int) = {}
}

object App extends Application {
  implicit val x = 0
  implicit val y = ""

  val a = new TestDoubleDef()
  a.foo(1 :: 2 :: Nil)
  a.foo("a" :: "b" :: Nil)
}
Thanks to Michid for the tip!
If I combine Daniel's response and Sandor Murakozi's response here I get:
@annotation.implicitNotFound(msg = "Type ${T} not supported only Int and String accepted")
sealed abstract class Acceptable[T]

object Acceptable {
  implicit object IntOk extends Acceptable[Int]
  implicit object StringOk extends Acceptable[String]
}
class TestDoubleDef {
  def foo[A : Acceptable : Manifest](p: List[A]) = {
    val m = manifest[A]
    if (m equals manifest[String]) {
      println("String")
    } else if (m equals manifest[Int]) {
      println("Int")
    }
  }
}
I get a typesafe(ish) variant
scala> val a = new TestDoubleDef
a: TestDoubleDef = TestDoubleDef#f3cc05f
scala> a.foo(List(1,2,3))
Int
scala> a.foo(List("test","testa"))
String
scala> a.foo(List(1L,2L,3L))
<console>:21: error: Type Long not supported only Int and String accepted
a.foo(List(1L,2L,3L))
^
scala> a.foo("test")
<console>:9: error: type mismatch;
found : java.lang.String("test")
required: List[?]
a.foo("test")
^
The logic may also be included in the type class as such (thanks to jsuereth):
@annotation.implicitNotFound(msg = "Foo does not support ${T} only Int and String accepted")
sealed trait Foo[T] { def apply(list: List[T]): Unit }

object Foo {
  implicit def stringImpl = new Foo[String] {
    def apply(list: List[String]) = println("String")
  }
  implicit def intImpl = new Foo[Int] {
    def apply(list: List[Int]) = println("Int")
  }
}
def foo[A : Foo](x : List[A]) = implicitly[Foo[A]].apply(x)
Which gives:
scala> @annotation.implicitNotFound(msg = "Foo does not support ${T} only Int and String accepted")
| sealed trait Foo[T] { def apply(list : List[T]) : Unit }; object Foo {
| implicit def stringImpl = new Foo[String] {
| def apply(list : List[String]) = println("String")
| }
| implicit def intImpl = new Foo[Int] {
| def apply(list : List[Int]) = println("Int")
| }
| } ; def foo[A : Foo](x : List[A]) = implicitly[Foo[A]].apply(x)
defined trait Foo
defined module Foo
foo: [A](x: List[A])(implicit evidence$1: Foo[A])Unit
scala> foo(1)
<console>:8: error: type mismatch;
found : Int(1)
required: List[?]
foo(1)
^
scala> foo(List(1,2,3))
Int
scala> foo(List("a","b","c"))
String
scala> foo(List(1.0))
<console>:32: error: Foo does not support Double only Int and String accepted
foo(List(1.0))
^
Note that we have to write implicitly[Foo[A]].apply(x) since the compiler thinks that implicitly[Foo[A]](x) means that we call implicitly with parameters.
There is (at least) one other way, even if it is not too nice and not really type-safe:
import scala.reflect.Manifest

object Reified {
  def foo[T](p: List[T])(implicit m: Manifest[T]) = {
    def stringList(l: List[String]) {
      println("Strings")
    }
    def intList(l: List[Int]) {
      println("Ints")
    }

    val StringClass = classOf[String]
    val IntClass = classOf[Int]
    m.erasure match {
      case StringClass => stringList(p.asInstanceOf[List[String]])
      case IntClass    => intList(p.asInstanceOf[List[Int]])
      case _           => error("???")
    }
  }

  def main(args: Array[String]) {
    foo(List("String"))
    foo(List(1, 2, 3))
  }
}
The implicit manifest parameter can be used to "reify" the erased type and thus hack around erasure. You can learn a bit more about it in many blog posts, e.g. this one.
What happens is that the manifest parameter can give you back what T was before erasure. Then a simple dispatch based on T to the various real implementations does the rest.
Probably there is a nicer way to do the pattern matching, but I haven't seen it yet. What people usually do is matching on m.toString, but I think keeping classes is a bit cleaner (even if it's a bit more verbose). Unfortunately the documentation of Manifest is not too detailed, maybe it also has something that could simplify it.
A big disadvantage of it is that it's not really type safe: foo will be happy with any T, if you can't handle it you will have a problem. I guess it could be worked around with some constraints on T, but it would further complicate it.
And of course this whole stuff is also not too nice, I'm not sure if it worth doing it, especially if you are lazy ;-)
Instead of using manifests you could also use dispatchers objects implicitly imported in a similar manner. I blogged about this before manifests came up: http://michid.wordpress.com/code/implicit-double-dispatch-revisited/
This has the advantage of type safety: the overloaded method will only be callable for types which have dispatchers imported into the current scope.
Nice trick I've found from http://scala-programming-language.1934581.n4.nabble.com/disambiguation-of-double-definition-resulting-from-generic-type-erasure-td2327664.html
by Aaron Novstrup
Beating this dead horse some more...
It occurred to me that a cleaner hack is to use a unique dummy type
for each method with erased types in its signature:
object Baz {
  private object dummy1 { implicit val dummy: dummy1.type = this }
  private object dummy2 { implicit val dummy: dummy2.type = this }

  def foo(xs: String*)(implicit e: dummy1.type) = 1
  def foo(xs: Int*)(implicit e: dummy2.type) = 2
}
[...]
I tried improving on Aaron Novstrup’s and Leo’s answers to make one set of standard evidence objects importable and more terse.
final object ErasureEvidence {
  class E1 private[ErasureEvidence]()
  class E2 private[ErasureEvidence]()
  implicit final val e1 = new E1
  implicit final val e2 = new E2
}
import ErasureEvidence._
class Baz {
  def foo(xs: String*)(implicit e: E1) = 1
  def foo(xs: Int*)(implicit e: E2) = 2
}
But that will cause the compiler to complain that there are ambiguous choices for the implicit value when foo calls another method which requires an implicit parameter of the same type.
Thus I offer only the following which is more terse in some cases. And this improvement works with value classes (those that extend AnyVal).
final object ErasureEvidence {
  class E1[T] private[ErasureEvidence]()
  class E2[T] private[ErasureEvidence]()
  implicit def e1[T] = new E1[T]
  implicit def e2[T] = new E2[T]
}
import ErasureEvidence._
class Baz {
  def foo(xs: String*)(implicit e: E1[Baz]) = 1
  def foo(xs: Int*)(implicit e: E2[Baz]) = 2
}
If the containing type name is rather long, declare an inner trait to make it more terse.
class Supercalifragilisticexpialidocious[A,B,C,D,E,F,G,H,I,J,K,L,M] {
  private trait E
  def foo(xs: String*)(implicit e: E1[E]) = 1
  def foo(xs: Int*)(implicit e: E2[E]) = 2
}
However, value classes do not allow inner traits, classes, nor objects. Thus also note Aaron Novstrup’s and Leo’s answers do not work with a value classes.
I didn't test this, but why wouldn't an upper bound work?
def foo[T <: String](s: List[T]) { println("Strings: " + s) }
def foo[T <: Int](i: List[T]) { println("Ints: " + i) }
Doesn't the erasure translation then change from foo(List[Any] s) twice, to foo(List[String] s) and foo(List[Int] i)? See:
http://www.angelikalanger.com/GenericsFAQ/FAQSections/TechnicalDetails.html#FAQ108
I think I read that in version 2.8, the upper bounds are now encoded that way, instead of always an Any.
To overload on covariant types, use an invariant bound (is there such a syntax in Scala?...ah I think there isn't, but take the following as conceptual addendum to the main solution above):
def foo[T : String](s: List[T]) { println("Strings: " + s) }
def foo[T : String2](s: List[T]) { println("String2s: " + s) }
then I presume the implicit casting is eliminated in the erased version of the code.
UPDATE: The problem is that JVM erases more type information on method signatures than is "necessary". I provided a link. It erases type variables from type constructors, even the concrete bound of those type variables. There is a conceptual distinction, because there is no conceptual non-reified advantage to erasing the function's type bound, as it is known at compile-time and does not vary with any instance of the generic, and it is necessary for callers to not call the function with types that do not conform to the type bound, so how can the JVM enforce the type bound if it is erased? Well one link says the type bound is retained in metadata which compilers are supposed to access. And this explains why using type bounds doesn't enable overloading. It also means that JVM is a wide open security hole since type bounded methods can be called without type bounds (yikes!), so excuse me for assuming the JVM designers wouldn't do such an insecure thing.
At the time I wrote this, I didn't understand that stackoverflow was a system of rating people by quality of answers like some competition over reputation. I thought it was a place to share information. At the time I wrote this, I was comparing reified and non-reified from a conceptual level (comparing many different languages), and so in my mind it didn't make any sense to erase the type bound.