Scope of Scala's implicit class conversion

Scala seems to apply the implicit class conversion on the largest possible expression, as in the following example:
scala> class B { def b = { println("bb"); true } }
defined class B
scala> class A { def a = { println("aa"); new B } }
defined class A
scala> (new A).a.b
aa
bb
res16: Boolean = true
scala> class XXX(b: => Boolean) { def xxx = 42 }
defined class XXX
scala> implicit def toXXX(b: => Boolean) = new XXX(b)
toXXX: (b: => Boolean)XXX
scala> (new A).a.b.xxx
res18: Int = 42
I'm very happy about this fact, but my question is: which part of the SLS specifies this behavior? Why does it not, for example, evaluate (new A).a.b to true first and then apply the conversion to that value?

The line containing the implicit conversion
(new A).a.b.xxx
gets converted by the compiler (i.e., at compile time) into
toXXX((new A).a.b).xxx
We can see this by using the -Xprint:typer option when starting Scala.
private[this] val res3: Int = $line5.$read.$iw.$iw.toXXX(new $line2.$read.$iw.$iw.A().a.b).xxx;
Since this conversion happens at compile time and not at run time, it would be impossible for Scala to evaluate (new A).a.b to true before applying the conversion. Thus, the behavior you get is exactly the same as if you had just written toXXX((new A).a.b).xxx in the first place.

As answered by Ryan Hendrickson on the mailing list:
[The definition] you're looking for is in Section 7.3, in the list of the three situations in which views are applied:
In a selection e.m with e of type T, if the selector does not denote a member of T. In this case, a view v is searched which is applicable to e and whose result contains a member named m. The search proceeds as in the case of implicit parameters, where the implicit scope is the one of T. If such a view is found, the selection e.m is converted to v(e).m.
So the compiler can only generate something that is semantically equivalent to v(e).m, and as you've demonstrated, when by-name parameters are involved
val x = e
v(x).m
is not semantically equivalent to v(e).m.
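This point can be made concrete with a small sketch reusing the question's A, B, and toXXX definitions (the val x is mine, purely illustrative). Because toXXX takes its parameter by name and xxx never forces it, the converted form runs no side effects at all, whereas hoisting the expression into a val evaluates them eagerly:
// By-name application, as the compiler generates it: prints nothing.
toXXX((new A).a.b).xxx // 42, no "aa"/"bb" output (matches res18 above)
// Hypothetical eager variant: forcing the expression first runs the effects.
val x = (new A).a.b // prints "aa" then "bb"
toXXX(x).xxx // 42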

Related

Scala overloading resolution differences between function calls and implicit search

There is a difference in the way the Scala 2.13.3 compiler determines which overloaded function to call compared to which overloaded implicit to pick.
object Thing {
trait A;
trait B extends A;
trait C extends A;
def f(a: A): String = "A"
def f(b: B): String = "B"
def f(c: C): String = "C"
implicit val a: A = new A {};
implicit val b: B = new B {};
implicit val c: C = new C {};
}
import Thing._
scala> f(new B{})
val res1: String = B
scala> implicitly[B]
val res2: Thing.B = Thing$$anon$2@2f64f99f
scala> f(new A{})
val res3: String = A
scala> implicitly[A]
^
error: ambiguous implicit values:
both value b in object Thing of type Thing.B
and value c in object Thing of type Thing.C
match expected type Thing.A
As we can see, the overload resolution worked for the function call but not for the implicit pick. Why isn't the implicit offered by val a chosen, as occurs with function calls? If the caller asks for an instance of A, why does the compiler consider instances of B and C when an instance of A is in scope? There would be no ambiguity if the resolution logic were the same as for function calls.
Edit 2:
The Edit 1 was removed because the assertion I wrote there was wrong.
In response to the comments I added another test to see what happens when the implicit val c: C is removed. In that case the compiler doesn't complain and picks implicit val b: B even though the caller asked for an instance of A.
object Thing {
trait A { def name = 'A' };
trait B extends A { override def name = 'B' };
trait C extends A { override def name = 'C' };
def f(a: A): String = "A"
def f(b: B): String = "B"
implicit val a: A = new A {};
implicit val b: B = new B {};
}
import Thing._
scala> f(new A{})
val res0: String = A
scala> implicitly[A].name
val res3: Char = B
So, the overload resolution for implicits differs from that for function calls more than I expected.
Anyway, I still can't find a reason why the designers of Scala decided to apply a different resolution logic for function and implicit overloading. (Edit: later I noticed why.)
Let's see what happens in a real-world example.
Suppose we are writing a JSON parser that converts a JSON string directly to Scala abstract data types, and we want it to support many standard collections.
The snippet in charge of parsing the iterable collections would be something like this:
trait Parser[+A] {
def parse(input: Input): ParseResult;
///// many combinators here
}
implicit def summonParser[T](implicit parserT: Parser[T]) = parserT;
/** @tparam IC iterable type constructor
  * @tparam E element's type */
implicit def iterableParser[IC[E] <: Iterable[E], E](
implicit
parserE: Parser[E],
factory: IterableFactory[IC]
): Parser[IC[E]] = '[' ~> skipSpaces ~> (parserE <~ skipSpaces).repSepGen(coma <~ skipSpaces, factory.newBuilder[E]) <~ skipSpaces <~ ']';
This requires a Parser[E] for the elements and an IterableFactory[IC] to construct the collection specified by the type parameters.
So, we have to put in implicit scope an instance of IterableFactory for every collection type we want to support.
implicit val iterableFactory: IterableFactory[Iterable] = Iterable
implicit val setFactory: IterableFactory[Set] = Set
implicit val listFactory: IterableFactory[List] = List
With the current implicit resolution logic implemented by the Scala compiler, this snippet works fine for Set and List, but not for Iterable.
scala> def parserInt: Parser[Int] = ???
def parserInt: read.Parser[Int]
scala> Parser[List[Int]]
val res0: read.Parser[List[Int]] = read.Parser$$anonfun$pursue$3@3958db82
scala> Parser[Vector[Int]]
val res1: read.Parser[Vector[Int]] = read.Parser$$anonfun$pursue$3@648f48d3
scala> Parser[Iterable[Int]]
^
error: could not find implicit value for parameter parserT: read.Parser[Iterable[Int]]
And the reason is:
scala> implicitly[IterableFactory[Iterable]]
^
error: ambiguous implicit values:
both value listFactory in object IterableParser of type scala.collection.IterableFactory[List]
and value vectorFactory in object IterableParser of type scala.collection.IterableFactory[Vector]
match expected type scala.collection.IterableFactory[Iterable]
On the contrary, if the overload resolution logic for implicits were like the one for function calls, this would work fine.
Edit 3: After many many coffees I noticed that, contrary to what I said above, there is no difference between the way the compiler decides which overloaded function to call and which overloaded implicit to pick.
In the case of a function call: from all the overloads such that the type of the argument is assignable to the type of the parameter, the compiler chooses the one whose parameter type is assignable to all the others. If no overload satisfies that, a compilation error is thrown.
In the case of an implicit pick: from all the implicits in scope such that the type of the implicit is assignable to the requested type, the compiler chooses the one whose declared type is assignable to all the others. If no implicit satisfies that, a compilation error is thrown.
My mistake was that I didn't notice the inversion of the assignability.
Anyway, the resolution logic I proposed above (give me what I asked for) is not entirely wrong: it solves the particular case I mentioned. But for most use cases the logic implemented by the Scala compiler (and, I suppose, by all the other languages that support type classes) is better.
As explained in the Edit 3 section of the question, there are similarities between the way the compiler decides which overloaded function to call and which overloaded implicit to pick. In both cases the compiler does two steps:
Filter out all the alternatives that are not assignable.
From the remaining alternatives, choose the most specific, or complain if there is more than one.
In the case of the function call, the most specific alternative is the function with the most specific parameter type; in the case of the implicit pick, it is the instance with the most specific declared type.
But if the logic in both cases were exactly the same, why did the example in the question give different results? Because there is a difference: the assignability requirements that determine which alternatives pass the first step are opposite.
In the case of the function call, after the first step the functions that remain are those whose parameter type is more generic than the argument type; in the case of the implicit pick, the instances that remain are those whose declared type is more specific than the requested type.
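A compact way to see the inversion, using the Thing object from the question (the step-by-step comments are my own annotation, not part of the original answer):
// Function call with an argument of type B:
//   step 1 keeps f(a: A) and f(b: B) -- the parameter types B is assignable to
//   step 2 picks f(b: B) -- the most specific parameter type
f(new B {}) // "B"
// Implicit summon of type A:
//   step 1 keeps a: A, b: B and c: C -- the declared types assignable to A
//   step 2 finds no single most specific candidate (b and c tie)
implicitly[A] // error: ambiguous implicit values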
The above is enough to answer the question itself, but it doesn't solve the problem that motivated it: how do we force the compiler to pick the implicit instance whose declared type is exactly the same as the summoned type? The answer is: wrap the implicit instances inside a non-variant (invariant) wrapper.
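A minimal sketch of that invariant-wrapper fix (the wrapper name Exact is mine, not from the original answer). Because Exact is invariant in its type parameter, Exact[B] is not assignable to Exact[A], so only the instance whose parameter matches exactly survives the first step:
case class Exact[A](value: A) // invariant: an Exact[B] is NOT an Exact[A]
object Thing {
  trait A; trait B extends A; trait C extends A
  implicit val a: Exact[A] = Exact(new A {})
  implicit val b: Exact[B] = Exact(new B {})
  implicit val c: Exact[C] = Exact(new C {})
}
import Thing._
implicitly[Exact[A]].value // picks `a`; no ambiguity despite b and c being in scope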

Magnet pattern and overloaded methods

There is a significant difference in how Scala resolves implicit conversions from the "Magnet Pattern" for non-overloaded and overloaded methods.
Suppose there is a trait Apply (a variation of a "Magnet Pattern") implemented as follows.
trait Apply[A] {
def apply(): A
}
object Apply {
implicit def fromLazyVal[A](v: => A): Apply[A] = new Apply[A] {
def apply(): A = v
}
}
Now we create a trait Foo that has a single apply taking an instance of Apply, so we can pass it any value of an arbitrary type A, since there is an implicit conversion A => Apply[A].
trait Foo[A] {
def apply(a: Apply[A]): A = a()
}
We can make sure it works as expected using the REPL and this workaround to desugar Scala code.
scala> val foo = new Foo[String]{}
foo: Foo[String] = $anon$1@3a248e6a
scala> showCode(reify { foo { "foo" } }.tree)
res9: String =
$line21$read.foo.apply(
$read.INSTANCE.Apply.fromLazyVal("foo")
)
This works great, but suppose we pass a complex expression (with ;) to the apply method.
scala> val foo = new Foo[Int]{}
foo: Foo[Int] = $anon$1@5645b124
scala> var i = 0
i: Int = 0
scala> showCode(reify { foo { i = i + 1; i } }.tree)
res10: String =
$line23$read.foo.apply({
$line24$read.`i_=`($line24$read.i.+(1));
$read.INSTANCE.Apply.fromLazyVal($line24$read.i)
})
As we can see, the implicit conversion has been applied only to the last part of the complex expression (i.e., i), not to the whole expression. So i = i + 1 was strictly evaluated at the moment we passed it to the apply method, which is not what we were expecting.
Good (or bad) news: we can make scalac use the whole expression in the implicit conversion, so i = i + 1 will be evaluated lazily as expected. To do so we (surprise, surprise!) add an overloaded method Foo.apply that takes any type other than Apply.
trait Foo[A] {
def apply(a: Apply[A]): A = a()
def apply(s: Symbol): Foo[A] = this
}
And then.
scala> var i = 0
i: Int = 0
scala> val foo = new Foo[Int]{}
foo: Foo[Int] = $anon$1@3ff00018
scala> showCode(reify { foo { i = i + 1; i } }.tree)
res11: String =
$line28$read.foo.apply($read.INSTANCE.Apply.fromLazyVal({
$line27$read.`i_=`($line27$read.i.+(1));
$line27$read.i
}))
As we can see, the entire expression i = i + 1; i made it under the implicit conversion, as expected.
So my question is: why is that? Why does the scope to which an implicit conversion is applied depend on whether or not there is an overloaded method in the class?
Now, that is a tricky one. And it's actually pretty awesome; I didn't know that "workaround" to the "lazy implicit does not cover full block" problem. Thanks for that!
What happens is related to expected types, and how they affect type inference, implicit conversions, and overloads.
Type inference and expected types
First, we have to know that type inference in Scala is bi-directional. Most of the inference works bottom-up (given a: Int and b: Int, infer a + b: Int), but some things are top-down. For example, inferring the parameter types of a lambda is top-down:
def foo(f: Int => Int): Int = f(42)
foo(x => x + 1)
In the second line, after resolving foo to be def foo(f: Int => Int): Int, the type inferencer can tell that x must be of type Int. It does so before typechecking the lambda itself. It propagates type information from the function application down to the lambda, which is a parameter.
Top-down inference basically relies on the notion of expected type. When typechecking a node of the AST of the program, the typechecker does not start empty-handed. It receives an expected type from "above" (in this case, the function application node). When typechecking the lambda x => x + 1 in the above example, the expected type is Int => Int, because we know what parameter type is expected by foo. This drives the type inference into inferring Int for the parameter x, which in turn allows x + 1 to be typechecked.
Expected types are propagated down certain constructs, e.g., blocks ({}) and the branches of ifs and matches. Hence, you could also call foo with
foo({
val y = 1
x => x + y
})
and the typechecker is still able to infer x: Int. That is because, when typechecking the block { ... }, the expected type Int => Int is passed down to the typechecking of the last expression, i.e., x => x + y.
Implicit conversions and expected types
Now, we have to introduce implicit conversions into the mix. When typechecking a node produces a value of type T, but the expected type for that node is U where T <: U is false, the typechecker looks for an implicit T => U (I'm probably simplifying things a bit here, but the gist is still true). This is why your first example does not work. Let us look at it closely:
trait Foo[A] {
def apply(a: Apply[A]): A = a()
}
val foo = new Foo[Int] {}
foo({
i = i + 1
i
})
When calling foo.apply, the expected type for the parameter (i.e., the block) is Apply[Int] (A has already been instantiated to Int). We can "write" this typechecker "state" like this:
{
i = i + 1
i
}: Apply[Int]
This expected type is passed down to the last expression of the block, which gives:
{
i = i + 1
(i: Apply[Int])
}
At this point, since i: Int and the expected type is Apply[Int], the typechecker finds the implicit conversion:
{
i = i + 1
fromLazyVal[Int](i)
}
which causes only i to be lazified.
Overloads and expected types
OK, time to throw overloads in there! When the typechecker sees an application of an overloaded method, it has much more trouble deciding on an expected type. We can see that with the following example:
object Foo {
def apply(f: Int => Int): Int = f(42)
def apply(f: String => String): String = f("hello")
}
Foo(x => x + 1)
gives:
error: missing parameter type
Foo(x => x + 1)
^
In this case, the failure of the typechecker to figure out an expected type causes the parameter type not to be inferred.
If we take your "solution" to your issue, we have a different consequence:
trait Foo[A] {
def apply(a: Apply[A]): A = a()
def apply(s: Symbol): Foo[A] = this
}
val foo = new Foo[Int] {}
foo({
i = i + 1
i
})
Now, when typechecking the block, the typechecker has no expected type to work with. It will therefore typecheck the last expression without an expected type, and eventually typecheck the whole block as an Int:
{
i = i + 1
i
}: Int
Only now, with an already typechecked argument, does it try to resolve the overloads. Since none of the overloads conforms directly, it tries to apply an implicit conversion from Int to either Apply[Int] or Symbol. It finds fromLazyVal[Int], which it applies to the entire argument. It does not push it inside the block anymore, giving:
fromLazyVal({
i = i + 1
i
}): Apply[Int]
In this case, the whole block is lazified.
This concludes the explanation. To summarize, the major difference is the presence vs absence of an expected type when typechecking the block. With an expected type, the implicit conversion is pushed down as much as possible, down to just i. Without the expected type, the implicit conversion is applied a posteriori on the entire argument, i.e., the whole block.
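To make the difference observable at run time, here is a hedged sketch (Foo2 and its double-forcing apply are my own illustration, assuming the Apply trait and the fromLazyVal conversion from the question are in scope). Because fromLazyVal re-evaluates its by-name argument on every call, forcing the magnet twice reveals exactly how much of the block was lazified:
trait Foo2[A] {
  def apply(a: Apply[A]): (A, A) = (a(), a()) // force the magnet twice
  def apply(s: Symbol): Foo2[A] = this        // decoy overload, suppresses the expected type
}
var i = 0
val f2 = new Foo2[Int] {}
f2 { i = i + 1; i } // (1, 2): the whole block, assignment included, is re-run on each forcing
// Without the decoy overload, the assignment would run once, eagerly,
// only `i` would be wrapped, and the result would be (1, 1).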

Is it possible to write a method in Scala returning objects with different type parameters?

Is it possible to write a method in Scala which returns an object of a type-parameterized class with a different type parameter? Something like this:
class A[T]
def f(switch: Boolean): A = if(switch) new A[Int] else new A[String]
Please note: the code above is fictional and only illustrates the kind of problem; it does not make sense semantically.
The code above will not compile because return type A is not parameterized.
You can, and you can even do it with type safety with the aid of implicit arguments that encapsulate the pairings:
class TypeMapping[+A,B] {
def newListB = List.empty[B]
}
trait Logical
object True extends Logical
object False extends Logical
implicit val mapFalseToInt = new TypeMapping[False.type,Int]
implicit val mapTrueToString = new TypeMapping[True.type,String]
def f[A <: Logical,B](switch: A)(implicit tmap: TypeMapping[A,B]) = tmap.newListB
scala> f(True)
res2: List[String] = List()
scala> f(False)
res3: List[Int] = List()
You do have to explicitly map from boolean values to the custom True and False values.
(I have chosen List as the target class just as an example; you could pick anything or even make it generic with a little more work.)
(Edit: as oxbow_lakes points out, if you need all possible return values to be represented on the same code path, then this alone won't do it, because the superclass of List[Int] and List[String] is List[Any], which isn't much help. In that case, you should use an Either. My solution is for a single function that will be used only in the True or False contexts, and can maintain the type information there.)
One way of expressing this would be by using Either;
def f(switch: Boolean) = if (switch) Left(new A[Int]) else Right(new A[String])
This of course returns an Either[A[Int], A[String]]. You certainly cannot (at the moment) declare a method which returns some parameterized type P, with some subset of type parameters (i.e. only Int or String).
The language Ceylon has union types, and I understand the intention is to add these to Scala in the near future, in which case you could define a method:
def f(switch: Boolean): A[Int|String] = ...
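That future has since arrived: Scala 3 ships union types. A minimal Scala 3 sketch (note that, with A invariant, the honest return type is A[Int] | A[String] rather than A[Int | String]):
class A[T]
def f(switch: Boolean): A[Int] | A[String] =
  if switch then new A[Int] else new A[String]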
Well, you could do something like this.
scala> class A {
| type T
| }
defined class A
scala> def f(b: Boolean): A = if(b) new A { type T = Int } else new A { type T = String }
f: (b: Boolean)A
But this is pointless. Types are compile-time information, and that information is getting lost here.
How about an absolutely minimal change to the "fictional code"? If we just add [_] after the "fictional" return type, the code will compile:
class A[T]
def f(switch: Boolean):A[_] = if(switch) new A[Int] else new A[String]
It is worth noting that A[_] is not the same as A[Any]. A[T] does not need to be defined as covariant for the code to compile.
Unfortunately, information about the type gets lost.
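To see the difference between A[_] and A[Any] concretely, a hypothetical two-liner (the names ok and ko are mine):
val ok: A[_] = new A[Int] // compiles: the element type is existentially hidden
// val ko: A[Any] = new A[Int] // does not compile: A is invariant, so A[Int] is not an A[Any]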

forall in Scala

As shown below, in Haskell it's possible to store values of heterogeneous types in a list, given certain context bounds on them:
data ShowBox = forall s. Show s => ShowBox s
heteroList :: [ShowBox]
heteroList = [ShowBox (), ShowBox 5, ShowBox True]
How can I achieve the same in Scala, preferably without subtyping?
As @Michael Kohl commented, this use of forall in Haskell is an existential type and can be exactly replicated in Scala using either the forSome construct or a wildcard. That means that @paradigmatic's answer is largely correct.
Nevertheless, there's something missing relative to the Haskell original: instances of its ShowBox type also capture the corresponding Show type class instances in a way that makes them available for use on the list elements even when the exact underlying type has been existentially quantified out. Your comment on @paradigmatic's answer suggests that you want to be able to write something equivalent to the following Haskell,
data ShowBox = forall s. Show s => ShowBox s
heteroList :: [ShowBox]
heteroList = [ShowBox (), ShowBox 5, ShowBox True]
useShowBox :: ShowBox -> String
useShowBox (ShowBox s) = show s
-- Then in ghci ...
*Main> map useShowBox heteroList
["()","5","True"]
@Kim Stebel's answer shows the canonical way of doing that in an object-oriented language by exploiting subtyping. Other things being equal, that's the right way to go in Scala. I'm sure you know that, and have good reasons for wanting to avoid subtyping and replicate Haskell's type-class-based approach in Scala. Here goes ...
Note that in the Haskell above the Show type class instances for Unit, Int and Bool are available in the implementation of the useShowBox function. If we attempt to directly translate this into Scala we'll get something like,
trait Show[T] { def show(t : T) : String }
// Show instance for Unit
implicit object ShowUnit extends Show[Unit] {
def show(u : Unit) : String = u.toString
}
// Show instance for Int
implicit object ShowInt extends Show[Int] {
def show(i : Int) : String = i.toString
}
// Show instance for Boolean
implicit object ShowBoolean extends Show[Boolean] {
def show(b : Boolean) : String = b.toString
}
case class ShowBox[T: Show](t:T)
def useShowBox[T](sb : ShowBox[T]) = sb match {
case ShowBox(t) => implicitly[Show[T]].show(t)
// error here ^^^^^^^^^^^^^^^^^^^
}
val heteroList: List[ShowBox[_]] = List(ShowBox(()), ShowBox(5), ShowBox(true))
heteroList map useShowBox
and this fails to compile in useShowBox as follows,
<console>:14: error: could not find implicit value for parameter e: Show[T]
case ShowBox(t) => implicitly[Show[T]].show(t)
^
The problem here is that, unlike in the Haskell case, the Show type class instances aren't propagated from the ShowBox argument to the body of the useShowBox function, and hence aren't available for use. If we try to fix that by adding an additional context bound on the useShowBox function,
def useShowBox[T : Show](sb : ShowBox[T]) = sb match {
case ShowBox(t) => implicitly[Show[T]].show(t) // Now compiles ...
}
this fixes the problem within useShowBox, but now we can't use it in conjunction with map on our existentially quantified List,
scala> heteroList map useShowBox
<console>:21: error: could not find implicit value for evidence parameter
of type Show[T]
heteroList map useShowBox
^
This is because when useShowBox is supplied as an argument to the map function we have to choose a Show instance based on the type information we have at that point. Clearly there isn't just one Show instance which will do the job for all of the elements of this list and so this fails to compile (if we had defined a Show instance for Any then there would be, but that's not what we're after here ... we want to select a type class instance based on the most specific type of each list element).
To get this to work in the same way that it does in Haskell, we have to explicitly propagate the Show instances within the body of useShowBox. That might go like this,
case class ShowBox[T](t:T)(implicit val showInst : Show[T])
val heteroList: List[ShowBox[_]] = List(ShowBox(()), ShowBox(5), ShowBox(true))
def useShowBox(sb : ShowBox[_]) = sb match {
case sb @ ShowBox(t) => sb.showInst.show(t)
}
then in the REPL,
scala> heteroList map useShowBox
res7: List[String] = List((), 5, true)
Note that we've desugared the context bound on ShowBox so that we have an explicit name (showInst) for the Show instance for the contained value. Then in the body of useShowBox we can explicitly apply it. Also note that the pattern match is essential to ensure that we only open the existential type once in the body of the function.
As should be obvious, this is a lot more verbose than the equivalent Haskell, and I would strongly recommend using the subtype-based solution in Scala unless you have extremely good reasons for doing otherwise.
Edit
As pointed out in the comments, the Scala definition of ShowBox above has a visible type parameter which isn't present in the Haskell original. I think it's actually quite instructive to see how we can rectify that using abstract types.
First we replace the type parameter with an abstract type member and replace the constructor parameters with abstract vals,
trait ShowBox {
type T
val t : T
val showInst : Show[T]
}
We now need to add the factory method that case classes would otherwise give us for free,
object ShowBox {
def apply[T0 : Show](t0 : T0) = new ShowBox {
type T = T0
val t = t0
val showInst = implicitly[Show[T]]
}
}
We can now use plain ShowBox wherever we previously used ShowBox[_] ... the abstract type member is playing the role of the existential quantifier for us now,
val heteroList: List[ShowBox] = List(ShowBox(()), ShowBox(5), ShowBox(true))
def useShowBox(sb : ShowBox) = {
import sb._
showInst.show(t)
}
heteroList map useShowBox
(It's worth noting that prior to the introduction of explicit forSome and wildcards in Scala, this was exactly how you would represent existential types.)
We now have the existential in exactly the same place as it is in the original Haskell. I think this is as close to a faithful rendition as you can get in Scala.
The ShowBox example you gave involves an existential type. I'm renaming the ShowBox data constructor to SB to distinguish it from the type:
data ShowBox = forall s. Show s => SB s
We say s is "existential", but the forall here is a universal quantifier that pertains to the SB data constructor. If we ask for the type of the SB constructor with explicit forall turned on, this becomes much clearer:
SB :: forall s. Show s => s -> ShowBox
That is, a ShowBox is actually constructed from three things:
A type s
A value of type s
An instance of Show s.
Because the type s becomes part of the constructed ShowBox, it is existentially quantified. If Haskell supported a syntax for existential quantification, we could write ShowBox as a type alias:
type ShowBox = exists s. Show s => s
Scala does support this kind of existential quantification and Miles's answer gives the details using a trait that consists of exactly those three things above. But since this is a question about "forall in Scala", let's do it exactly like Haskell does.
Data constructors in Scala cannot be explicitly quantified with forall. However, every method on a module can be. So you can effectively use type constructor polymorphism as universal quantification. Example:
trait Forall[F[_]] {
def apply[A]: F[A]
}
A Scala type Forall[F], given some F, is then equivalent to a Haskell type forall a. F a.
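For intuition, here is a small hypothetical inhabitant of Forall[List] (the name nilForall is mine, not from the answer). Since apply must produce an F[A] for every A while knowing nothing about A, essentially the only List it can return is Nil:
val nilForall: Forall[List] = new Forall[List] {
  def apply[A]: List[A] = Nil // parametricity: no values of A are available
}
val ints: List[Int] = nilForall.apply[Int]       // List()
val strs: List[String] = nilForall.apply[String] // List()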
We can use this technique to add constraints to the type argument.
trait SuchThat[F[_], G[_]] {
def apply[A:G]: F[A]
}
A value of type F SuchThat G is like a value of the Haskell type forall a. G a => F a. The instance of G[A] is implicitly looked up by Scala if it exists.
Now, we can use this to encode your ShowBox ...
import scalaz._; import Scalaz._ // to get the Show typeclass and instances
type ShowUnbox[A] = ({type f[S] = S => A})#f SuchThat Show
sealed trait ShowBox {
def apply[B](f: ShowUnbox[B]): B
}
object ShowBox {
def apply[S: Show](s: => S): ShowBox = new ShowBox {
def apply[B](f: ShowUnbox[B]) = f[S].apply(s)
}
def unapply(b: ShowBox): Option[String] =
b(new ShowUnbox[Option[String]] {
def apply[S:Show] = s => some(s.shows)
})
}
val heteroList: List[ShowBox] = List(ShowBox(()), ShowBox(5), ShowBox(true))
The ShowBox.apply method is the universally quantified data constructor. You can see that it takes a type S, an instance of Show[S], and a value of type S, just like the Haskell version.
Here's an example usage:
scala> heteroList map { case ShowBox(x) => x }
res6: List[String] = List((), 5, true)
A more direct encoding in Scala might be to use a case class:
sealed trait ShowBox
case class SB[S:Show](s: S) extends ShowBox {
override def toString = Show[S].shows(s)
}
Then:
scala> val heteroList = List(SB(()), SB(5), SB(true))
heteroList: List[ShowBox] = List((), 5, true)
In this case, a List[ShowBox] is basically equivalent to a List[String], but you can use this technique with traits other than Show to get something more interesting.
This is all using the Show typeclass from Scalaz.
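For instance, here is a hedged sketch of the same case-class encoding with the standard library's Numeric type class instead of Show (the names NumBox, NB, and doubled are mine):
sealed trait NumBox { def doubled: NumBox }
case class NB[N](n: N)(implicit num: Numeric[N]) extends NumBox {
  def doubled: NumBox = NB(num.plus(n, n)) // the captured instance travels with the value
  override def toString = n.toString
}
val nums: List[NumBox] = List(NB(1), NB(2.5), NB(BigInt(10)))
nums.map(_.doubled) // List(2, 5.0, 20)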
I don't think a 1-to-1 translation from Haskell to Scala is possible here. But why don't you want to use subtyping? If the types you want to use (such as Int) lack a show method, you can still add this via implicit conversions.
scala> trait Showable { def show:String }
defined trait Showable
scala> implicit def showableInt(i:Int) = new Showable{ def show = i.toString }
showableInt: (i: Int)java.lang.Object with Showable
scala> val l:List[Showable] = 1::Nil
l: List[Showable] = List($anon$1@179c0a7)
scala> l.map(_.show)
res0: List[String] = List(1)
(Edit: adding methods to show, to answer a comment.)
I think you can get the same using implicit methods with context bounds:
trait Show[T] {
def apply(t:T): String
}
implicit object ShowInt extends Show[Int] {
def apply(t:Int) = "Int("+t+")"
}
implicit object ShowBoolean extends Show[Boolean] {
def apply(t:Boolean) = "Boolean("+t+")"
}
case class ShowBox[T: Show](t:T) {
def show = implicitly[Show[T]].apply(t)
}
implicit def box[T: Show]( t: T ) =
new ShowBox(t)
val lst: List[ShowBox[_]] = List( 2, true )
println( lst ) // => List(ShowBox(2), ShowBox(true))
val lst2 = lst.map( _.show )
println( lst2 ) // => List(Int(2), Boolean(true))
Why not:
trait ShowBox {
def show: String
}
object ShowBox {
def apply[s](x: s)(implicit i: Show[s]): ShowBox = new ShowBox {
override def show: String = i.show(x)
}
}
As the authoritative answers above suggest, I'm often surprised that Scala can translate "Haskell type monsters" into very simple ones.

Why is there no Tuple1 Literal for single element tuples in Scala?

Python has (1,) for a single-element tuple. In Scala, (1,2) works for Tuple2(1,2), but we must use Tuple1(1) to get a single-element tuple. This may seem like a small issue, but designing APIs that expect a Product is a pain for users passing single elements, since they have to write Tuple1(1).
Maybe this is a small issue, but a major selling point of Scala is more typing with less typing. But in this case it seems it's more typing with more typing.
Please tell me:
1) I've missed this and it exists in another form, or
2) It will be added to a future version of the language (and they'll accept patches).
You can define an implicit conversion:
implicit def value2tuple[T](x: T): Tuple1[T] = Tuple1(x)
The implicit conversion will only apply if the argument's static type does not already conform to the method parameter's type. Assuming your method takes a Product argument
def m(v: Product) = // ...
the conversion will apply to a non-product value but will not apply to a Tuple2, for example. Warning: all case classes extend the Product trait, so the conversion will not apply to them either. Instead, the product elements will be the constructor parameters of the case class.
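A quick hypothetical illustration of that caveat (Person and m are example names, not from the answer): a case class already conforms to Product, so value2tuple is bypassed and the method sees the class's own fields:
case class Person(first: String, last: String)
def m(v: Product) = v.productArity
m(42)               // converted to Tuple1(42): arity 1
m(Person("a", "n")) // no conversion: arity 2, the two constructor parameters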
Product is the least upper bound of the TupleX classes, but you can use a type class if you want to apply the implicit Tuple1 conversion to all non-tuples:
// given a Tupleable[T], you can call apply to convert T to a Product
sealed abstract class Tupleable[T] extends (T => Product)
sealed class ValueTupler[T] extends Tupleable[T] {
def apply(x: T) = Tuple1(x)
}
sealed class TupleTupler[T <: Product] extends Tupleable[T] {
def apply(x: T) = x
}
// implicit conversions
trait LowPriorityTuple {
// this provides a Tupleable[T] for any type T, but is the
// lowest priority conversion
implicit def anyIsTupleable[T]: Tupleable[T] = new ValueTupler
}
object Tupleable extends LowPriorityTuple {
implicit def tuple2isTuple[T1, T2]: Tupleable[Tuple2[T1,T2]] = new TupleTupler
implicit def tuple3isTuple[T1, T2, T3]: Tupleable[Tuple3[T1,T2,T3]] = new TupleTupler
// ... etc ...
}
You can use this type class in your API as follows:
def m[T: Tupleable](v: T) = {
val p = implicitly[Tupleable[T]](v)
// ... do something with p
}
If you have your method return the product, you can see how the conversions are being applied:
scala> def m[T: Tupleable](v: T) = implicitly[Tupleable[T]](v)
m: [T](v: T)(implicit evidence$1: Tupleable[T])Product
scala> m("asdf") // as Tuple1
res12: Product = (asdf,)
scala> m(Person("a", "n")) // also as Tuple1, *not* as (String, String)
res13: Product = (Person(a,n),)
scala> m((1,2)) // as Tuple2
res14: Product = (1,2)
You could, of course, add an implicit conversion to your API:
implicit def value2tuple[A](x: A) = Tuple1(x)
I do find it odd that Tuple1.toString includes the trailing comma:
scala> Tuple1(1)
res0: (Int,) = (1,)
Python is not statically typed, so tuples there act more like fixed-size collections. That is not true of Scala, where each element of a tuple has a distinct type. Tuples in Scala don't have the same uses as in Python.