Scala Puzzle: Any ideas on how to make this work?

This code doesn't work!
def add(a: Int, b: Int): Int = a plus b
So, I tried defining plus like so:
def plus(a: Int, b: Int): Int = a + b
But, the compiler still complains Cannot resolve symbol plus!
Any ideas?

a plus b doesn't work because it is shorthand for a.plus(b), and there is no such method on Int in the standard library. To make it work you have to "enhance" the Int class via an implicit conversion:
implicit class MyPlus[T](a: T)(implicit ev: Numeric[T]) {
  def plus(b: T): T = ev.plus(a, b)
}
Now you can do 3 plus 5 or a plus b etc.
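For example, with MyPlus in scope the original definition compiles unchanged:
def add(a: Int, b: Int): Int = a plus b
add(3, 5) // 8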
You can also do it like this (a little more concise and readable but essentially the same thing):
import Numeric.Implicits._
implicit class MyPlus[T: Numeric](a: T) {
  def plus(b: T): T = a + b
}

Since a inside add is an Int, and scala.Int doesn't have a plus method, we need to create an implicit conversion from scala.Int to something which wraps an Int and has a plus method:
implicit class IntWithPlus(val i: Int) extends AnyVal {
  def plus(other: Int) = i + other
}
Now, everything just works:
def add(a: Int, b: Int): Int = a plus b
add(2, 3)
// => 5
An alternative would be to create a type Int that gets imported into the local namespace shadowing scala.Int and give that type a plus method.
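A rough sketch of that alternative (my own illustration, not from the answer, and the implicit class above is the better option in practice):
import scala.language.implicitConversions

object Shadow {
  // A wrapper named Int; once imported it shadows scala.Int in the local scope.
  class Int(val self: scala.Int) {
    def plus(other: Int): Int = new Int(self + other.self)
  }
  // Lets ordinary scala.Int values be adapted to the wrapper.
  implicit def fromScalaInt(i: scala.Int): Int = new Int(i)
}

import Shadow.Int          // from here on, Int means Shadow.Int
import Shadow.fromScalaInt

def add(a: Int, b: Int): Int = a plus b // a and b are Shadow.Int here
add(2, 3).self // => 5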

How to dynamically bind a method reference to a trait?

Given
def add(x: Int, y: Int): Int = x + y
val addAsMethodReference: (Int, Int) => Int = add _
trait BinaryOperator {
  def execute(x: Int, y: Int): Int
}
val addAsBinaryOperator: BinaryOperator = addAsMethodReference...?
How do I bind addAsMethodReference to BinaryOperator without implementing BinaryOperator by hand?
Java 8's SAM conversion would just work: there I could use the method reference anywhere the binary-operator interface is expected.
Ideally, I want to write something like:
var addAsBinaryOperator: BinaryOperator = addAsMethodReference.asNewInstanceOf[BinaryOperator]
The reason I want this asNewInstanceOf method is that it would work for any method signature; I don't care how many parameters are being passed. If I had to implement this by hand, I would have to carefully match each x and y, which is error-prone at a larger scale.
The specification of left.asNewInstanceOf[right] would be: if the right side has more than one abstract method, it fails at compilation; if the left side is not a function type that matches the single abstract method's signature on the right side, it also fails at compilation. The right side doesn't need to be a trait; it could be an abstract class with a single abstract method.
Well, you could make the implicit conversion
implicit def myConversion(f: (Int, Int) ⇒ Int): BinaryOperator = new BinaryOperator {
  def execute(x: Int, y: Int): Int = f(x, y)
}
and if it's in scope, you can just do
val addAsBinaryOperator: BinaryOperator = addAsMethodReference
for any binary function of integers returning an integer. Although maybe this also classifies as "implementing by hand". I can't see a way in which the compiler magically realizes that you want to interpret a function as an instance of a user-created trait with some particular structure.
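(Side note, not part of the original answers: since Scala 2.12 the compiler performs SAM conversion for function literals when the expected type is a trait with a single abstract method, so on 2.12+ something like the following sketch should compile with no hand-written adapter.)
trait BinaryOperator {
  def execute(x: Int, y: Int): Int
}

def add(x: Int, y: Int): Int = x + y

val addAsBinaryOperator: BinaryOperator = (x, y) => add(x, y) // SAM conversion, Scala 2.12+
addAsBinaryOperator.execute(2, 3) // 5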
NEW ANSWER:
This does what you want. It dynamically creates a BinaryOperator object whose execute method is bound to addAsMethodReference:
def add(x: Int, y: Int): Int = x + y
val addAsMethodReference: (Int, Int) => Int = add _
trait BinaryOperator {
  def execute(x: Int, y: Int): Int
}
val addAsBinaryOperator: BinaryOperator =
  new BinaryOperator { def execute(x: Int, y: Int): Int = addAsMethodReference(x, y) }
OLD ANSWER
Is this what you want?
implicit class EnhancedBinaryOperator(val self: BinaryOperator) extends AnyVal {
  def addAsMethodReference(a: Int, b: Int) = a + b
}
val o: BinaryOperator = ??? // anything here
o.addAsMethodReference(1,2) // addAsMethodReference is now on BinaryOperator

Scala 2.10: why is there a type mismatch?

Can't figure out what's wrong with this code:
trait NumberLike
{
  def plus[T](a: T, b: T): T
}
class IntegerNumberLike extends NumberLike
{
  def plus[Int](a: Int, b: Int): Int = 2 // type mismatch; found: scala.Int(2) required: Int
}
But if I do it this way, it works:
trait NumberLike[T]
{
  def plus(a: T, b: T): T
}
class IntegerNumberLike extends NumberLike[Int]
{
  def plus(a: Int, b: Int): Int = 2
}
So I have two questions:
Why doesn't the first code sample work?
Generally, when should I use a class type parameter and when should I use a method type parameter?
Type parameters of methods are much like other parameters: the name you pick isn't significant. plus[Int](a: Int, b: Int): Int is exactly the same as plus[T](a: T, b: T): T; the Int in brackets is just a type-parameter name that shadows scala.Int.
Now it is easy to understand why plus[T](a: T, b: T): T = 2 does not compile, isn't it? 2 is not of type T.
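If you do want to keep the type parameter on the method, a common way out (my sketch, not part of the original answer) is to demand Numeric evidence so the body can produce a T for any T:
def plus[T](a: T, b: T)(implicit num: Numeric[T]): T = num.plus(a, b)

plus(1, 2)     // 3
plus(1.5, 2.5) // 4.0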
As to your second question, it is hard to answer exactly, because it is rather broad. In a nutshell, parametrized classes and methods define a template of a class or a method respectively. Think of it as a family of classes/methods rather than a single object. For example, instead of plus[T] (a: T, b: T): T, one could have written:
def plus(a: Int, b: Int): Int
def plus(a: Long, b: Long): Long
def plus(a: Double, b: Double): Double
etc.
Or, instead of class NumberLike[T], you could have:
class IntLike
class LongLike
class DoubleLike
etc.
Looking at it this way, you can ask yourself a question, what it is you are designing: is it a family of classes or a family of methods? Answering that question will tell you whether you should parametrize a class, a method, or, perhaps, both ... Consider:
class Iterable[+A] {
  ...
  def reduceLeft[B >: A](op: (B, A) ⇒ B): B
  ...
}
The definition:
def plus[Int](a: Int, b: Int): Int
is equivalent to
def plus[T](a: T, b: T): T
For example, you can see this more clearly with the following snippet:
type Int = String
def main(args: Array[String]) {
  val x: Int = "foo"
  println(x)
}
Here you get no errors and "foo" is printed, because you have only renamed String to Int. In the same way, in your first sample you are merely naming your type parameter Int (instead of T). That's why the compiler complains that scala.Int (the type of the value 2) and Int (your type-parameter name) are not the same. I don't know whether you can implement the plus function for specific types from the parametric definition alone. Use class type parameters if the parameter applies to the whole class, and method type parameters if it applies only to that method; it's just a matter of the visibility and responsibility of the methods.

Why does the Scala compiler disallow overloaded methods with default arguments?

While there might be valid cases where such method overloadings could become ambiguous, why does the compiler disallow code which is neither ambiguous at compile time nor at run time?
Example:
// This fails:
def foo(a: String)(b: Int = 42) = a + b
def foo(a: Int) (b: Int = 42) = a + b
// This fails, too. Even if there is no position in the argument list,
// where the types are the same.
def foo(a: Int) (b: Int = 42) = a + b
def foo(a: String)(b: String = "Foo") = a + b
// This is OK:
def foo(a: String)(b: Int) = a + b
def foo(a: Int) (b: Int = 42) = a + b
// Even this is OK.
def foo(a: Int)(b: Int) = a + b
def foo(a: Int)(b: String = "Foo") = a + b
val bar = foo(42)_ // This complains obviously ...
Are there any reasons why these restrictions can't be loosened a bit?
Especially when converting heavily overloaded Java code to Scala, default arguments are very important, and it isn't nice to find out, after replacing plenty of Java methods with one Scala method, that the spec/compiler imposes arbitrary restrictions.
I'd like to cite Lukas Rytz (from here):
The reason is that we wanted a deterministic naming-scheme for the
generated methods which return default arguments. If you write
def f(a: Int = 1)
the compiler generates
def f$default$1 = 1
If you have two overloads with defaults on the same parameter
position, we would need a different naming scheme. But we want to keep
the generated byte-code stable over multiple compiler runs.
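To make the quoted scheme concrete (my own illustration): each defaulted parameter position gets its own generated getter, numbered from left to right.
def f(a: Int = 1, b: String = "x") = a + b.length
// compiles roughly to:
//   def f(a: Int, b: String): Int = a + b.length
//   def f$default$1: Int = 1
//   def f$default$2: String = "x"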
A solution for a future Scala version could be to incorporate the type names of the non-default arguments (those at the beginning of a method, which disambiguate the overloaded versions) into the naming scheme, e.g. in this case:
def foo(a: String)(b: Int = 42) = a + b
def foo(a: Int) (b: Int = 42) = a + b
it would be something like:
def foo$String$default$2 = 42
def foo$Int$default$2 = 42
Someone willing to write a SIP proposal?
It would be very hard to get a readable and precise spec for the interactions of overloading resolution with default arguments. Of course, for many individual cases, like the one presented here, it's easy to say what should happen. But that is not enough. We'd need a spec that decides all possible corner cases. Overloading resolution is already very hard to specify. Adding default arguments in the mix would make it harder still. That's why we have opted to separate the two.
I can't answer your question, but here is a workaround:
implicit def left2Either[A, B](a: A): Either[A, B] = Left(a)
implicit def right2Either[A, B](b: B): Either[A, B] = Right(b)
def foo(a: Either[Int, String], b: Int = 42) = a match {
  case Left(i) => i + b
  case Right(s) => s + b
}
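Call sites then look like this (with both conversions in scope):
foo(1)         // 43, via left2Either, b defaults to 42
foo("bar")     // "bar42", via right2Either
foo("bar", 10) // "bar10"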
If you have two very long arg lists which differ in only one arg, it might be worth the trouble...
What worked for me is to redefine (Java-style) the overloading methods.
def foo(a: Int, b: Int) = a + b
def foo(a: Int, b: String) = a + b
def foo(a: Int) = a + "42"
def foo(a: String) = a + "42"
This way the compiler knows which overload you mean from the parameters that are actually present.
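For illustration, the call sites then resolve as expected:
foo(1, 2)   // 3
foo(1, "x") // "1x"
foo(1)      // "142"
foo("a")    // "a42"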
Here is a generalization of @Landei's answer:
What you really want:
def pretty(tree: Tree, showFields: Boolean = false): String = // ...
def pretty(tree: List[Tree], showFields: Boolean = false): String = // ...
def pretty(tree: Option[Tree], showFields: Boolean = false): String = // ...
Workaround:
def pretty(input: CanPretty, showFields: Boolean = false): String = {
  input match {
    case TreeCanPretty(tree)       => prettyTree(tree, showFields)
    case ListTreeCanPretty(tree)   => prettyList(tree, showFields)
    case OptionTreeCanPretty(tree) => prettyOption(tree, showFields)
  }
}
sealed trait CanPretty
case class TreeCanPretty(tree: Tree) extends CanPretty
case class ListTreeCanPretty(tree: List[Tree]) extends CanPretty
case class OptionTreeCanPretty(tree: Option[Tree]) extends CanPretty
import scala.language.implicitConversions
implicit def treeCanPretty(tree: Tree): CanPretty = TreeCanPretty(tree)
implicit def listTreeCanPretty(tree: List[Tree]): CanPretty = ListTreeCanPretty(tree)
implicit def optionTreeCanPretty(tree: Option[Tree]): CanPretty = OptionTreeCanPretty(tree)
private def prettyTree(tree: Tree, showFields: Boolean): String = "fun ..."
private def prettyList(tree: List[Tree], showFields: Boolean): String = "fun ..."
private def prettyOption(tree: Option[Tree], showFields: Boolean): String = "fun ..."
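Call sites keep the shape of the original API (illustrative only; this assumes some value t of the otherwise undefined Tree type):
pretty(t)                          // dispatches to prettyTree
pretty(List(t), showFields = true) // dispatches to prettyList
pretty(Option(t))                  // dispatches to prettyOption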
One possible scenario is:
def foo(a: Int)(b: Int = 10)(c: String = "10") = a + b + c
def foo(a: Int)(b: String = "10")(c: Int = 10) = a + b + c
The compiler would be confused about which one to call, for example for foo(1)()(), where both remaining parameter lists fall back to their defaults. To prevent this and other possible dangers, the compiler allows at most one of the overloaded methods to have default arguments.
Just my guess :-)
My understanding is that there can be name collisions in the compiled classes with default argument values. I've seen something along these lines mentioned in several threads.
The named argument spec is here:
http://www.scala-lang.org/sites/default/files/sids/rytz/Mon,%202009-11-09,%2017:29/named-args.pdf
It states:
Overloading: If there are multiple overloaded alternatives of a method, at most one is allowed to specify default arguments.
So, for the time being at any rate, it's not going to work.
You could do something like what you might do in Java, e.g.:
def foo(a: String)(b: Int) = a + (if (b > 0) b else 42)

How to set up implicit conversion to allow arithmetic between numeric types?

I'd like to implement a class C to store values of various numeric types, as well as boolean. Furthermore, I'd like to be able to operate on instances of this class, between types, converting where necessary Int --> Double and Boolean -> Int, i.e., to be able to add Boolean + Boolean, Int + Boolean, Boolean + Int, Int + Double, Double + Double etc., returning the smallest possible type (Int or Double) whenever possible.
So far I came up with this:
abstract class SemiGroup[A] { def add(x: A, y: A): A }

class C[A](val n: A)(implicit val s: SemiGroup[A]) {
  def +[T <% A](that: C[T]) = s.add(this.n, that.n)
}

object Test extends Application {
  implicit object IntSemiGroup extends SemiGroup[Int] {
    def add(x: Int, y: Int): Int = x + y
  }
  implicit object DoubleSemiGroup extends SemiGroup[Double] {
    def add(x: Double, y: Double): Double = x + y
  }
  implicit object BooleanSemiGroup extends SemiGroup[Boolean] {
    def add(x: Boolean, y: Boolean): Boolean = true
  }
  implicit def bool2int(b: Boolean): Int = if (b) 1 else 0

  val n = new C[Int](10)
  val d = new C[Double](10.5)
  val b = new C[Boolean](true)

  println(d + n) // [1]
  println(n + n) // [2]
  println(n + b) // [3]
  // println(n + d) [4] XXX - no implicit conversion of Double to Int exists
  // println(b + n) [5] XXX - no implicit conversion of Int to Boolean exists
}
This works for some cases (1, 2, 3) but doesn't for (4, 5). The reason is that there is implicit widening of type from lower to higher, but not the other way. In a way, the method
def +[T <% A](that:C[T]) = s.add(this.n, that.n)
somehow needs to have a partner method that would look something like:
def +[T, A <% T](that:C[T]):T = that.s.add(this.n, that.n)
but that does not compile, for two reasons: first, the compiler cannot convert this.n to type T (even though we specify the view bound A <% T); and second, even if it could, after type erasure the two + methods would become ambiguous.
Sorry this is so long. Any help would be much appreciated! Otherwise it seems I have to write out all the operations between all the types explicitly. And it would get hairy if I had to add extra types (Complex is next on the menu...).
Maybe someone has another way to achieve all this altogether? Feels like there's something simple I'm overlooking.
Thanks in advance!
Okay then, Daniel!
I've restricted the solution to ignore Boolean and to only work with AnyVals that have a weak least upper bound with a Numeric instance. These restrictions are arbitrary; you could remove them and encode your own weak-conformance relationship between types: the implementations of aToC and bToC could perform some conversion.
It's interesting to consider how implicit parameters can simulate inheritance (passing implicit parameters of type Derived => Base) or weak conformance. They are really powerful, especially when the type inferencer helps you out.
First, we need a type class to represent the Weak Least Upper Bound of all pairs of types A and B that we are interested in.
sealed trait WeakConformance[A <: AnyVal, B <: AnyVal, C] {
  implicit def aToC(a: A): C
  implicit def bToC(b: B): C
}

object WeakConformance {
  implicit def SameSame[T <: AnyVal]: WeakConformance[T, T, T] = new WeakConformance[T, T, T] {
    implicit def aToC(a: T): T = a
    implicit def bToC(b: T): T = b
  }

  implicit def IntDouble: WeakConformance[Int, Double, Double] = new WeakConformance[Int, Double, Double] {
    implicit def aToC(a: Int): Double = a
    implicit def bToC(b: Double): Double = b
  }

  implicit def DoubleInt: WeakConformance[Double, Int, Double] = new WeakConformance[Double, Int, Double] {
    implicit def aToC(a: Double): Double = a
    implicit def bToC(b: Int): Double = b
  }

  // More instances go here!

  def unify[A <: AnyVal, B <: AnyVal, C](a: A, b: B)(implicit ev: WeakConformance[A, B, C]): (C, C) = {
    import ev._
    (a: C, b: C)
  }
}
The method unify returns type C, which is figured out by the type inferencer based on availability of implicit values to provide as the implicit argument ev.
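For example, given the instances above (my illustration; the result types in the comments are what the inferencer picks):
WeakConformance.unify(1, 2.5) // (1.0, 2.5): (Double, Double), via IntDouble
WeakConformance.unify(2, 3)   // (2, 3): (Int, Int), via SameSame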
We can plug this into your wrapper class C as follows, also requiring a Numeric[WeakLub] so we can add the values.
case class C[A <: AnyVal](val value: A) {
  import WeakConformance.unify

  def +[B <: AnyVal, WeakLub <: AnyVal](that: C[B])(implicit wc: WeakConformance[A, B, WeakLub], num: Numeric[WeakLub]): C[WeakLub] = {
    val w = unify(value, that.value) match { case (x, y) => num.plus(x, y) }
    new C[WeakLub](w)
  }
}
And finally, putting it all together:
object Test extends Application {
  val n = new C[Int](10)
  val d = new C[Double](10.5)

  // The type ascriptions aren't necessary, they are just here to
  // prove the static type is the Weak LUB of the two sides.
  println(d + n: C[Double]) // C(20.5)
  println(n + n: C[Int])    // C(20)
  println(n + d: C[Double]) // C(20.5)
}
Test
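The Boolean part of the question isn't covered by the instances above, but a hypothetical extension (my sketch, not part of the original answer) could encode Boolean-to-Int weak conformance by adding instances inside the WeakConformance companion:
implicit def BooleanInt: WeakConformance[Boolean, Int, Int] =
  new WeakConformance[Boolean, Int, Int] {
    implicit def aToC(a: Boolean): Int = if (a) 1 else 0
    implicit def bToC(b: Int): Int = b
  }

implicit def IntBoolean: WeakConformance[Int, Boolean, Int] =
  new WeakConformance[Int, Boolean, Int] {
    implicit def aToC(a: Int): Int = a
    implicit def bToC(b: Boolean): Int = if (b) 1 else 0
  }
With those in scope, C(true) + C(10) evaluates to C(11); Boolean + Boolean would still need extra work, since SameSame picks C = Boolean and there is no Numeric[Boolean].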
There's a way to do that, but I'll leave it to retronym to explain it, since he wrote this solution. :-)