Scala ambiguity with paren-less function calls

Excuse the long set-up. This question relates to, but is not answered by, "Scala: ambiguous reference to overloaded definition - best disambiguation?".
I'm pretty new to Scala, and one thing that's throwing me off is that Scala both:
Has first-class functions
Calls functions when using object-dot notation without any parenthetical argument lists (as if the function were a property)
These two language features are confusing me. Look at the below code:
class MyClass {
  def something(in: String): String = {
    in + "_X"
  }
  def something: String => String = {
    case _ => "Fixed"
  }
}
val my = new MyClass()
println(List("foo", "bar").map(my.something))
I would expect this to print List("foo_X", "bar_X") by calling the something prototype that matches map's required String => ? argument. Instead, the output is List("Fixed", "Fixed") - Scala 2.11 is invoking the no-argument something and passing its return value to map.
If we comment out the second no-argument prototype of something, the output changes to be the expected result, demonstrating that the other prototype is valid in context.
Adding an empty argument list to the second prototype (making it def something()) also changes the behavior.
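For reference, here is a sketch of that variant (the class name MyClass2 and the value my2 are mine, added for illustration); with the empty parameter list in place, map picks the one-argument overload and prints the expected result, as described above:
class MyClass2 {
  def something(in: String): String = {
    in + "_X"
  }
  def something(): String => String = {
    case _ => "Fixed"
  }
}
val my2 = new MyClass2()
println(List("foo", "bar").map(my2.something))  // List(foo_X, bar_X)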
Changing the my.something to my.something(_) wakes Scala up to the ambiguity it was silently ignoring before:
error: ambiguous reference to overloaded definition,
both method something in class MyClass of type => String => String
and method something in class MyClass of type (in: String)String
match argument types (String)
println(List("foo", "bar").map(my.something(_)))
Even using the supposedly-for-this-purpose magic trailing underscore doesn't work:
val myFun: (String) => String = my.something _
This results in:
error: type mismatch;
found : () => String => String
required: String => String
val myFun: (String) => String = my.something _
My questions:
If I have MyClass exactly as written (no changes to the prototypes, especially not adding an empty parameter list to one of the prototypes), how do I tell Scala, unambiguously, that I want the first one-argument version of something to pass as an argument to another call?
Since there are clearly two satisfying arguments that could be passed to map, why did the Scala compiler not report the ambiguity as an error?
Is there a way to disable Scala's behavior of (sometimes, not always) treating foo.bar as equivalent to foo.bar()?

I have filed an issue on the Scala issue tracker, and the consensus seems to be that this behaviour is a bug: the compiler should have thrown an error about the ambiguous reference to "my.something".

Related

Did Trees$Literal (reflection) move in Scala 2.13.0-M3?

I'm getting the following 2.13.0-M3 compiler error: type Trees$Literal is not a member of package scala.reflect.internal
This compiled fine under 2.11 and 2.12. In 2.13.0-M3 I get the error in the title. Did this change? An example full line of code that broke is this:
// Extract MapName annotation if present
val optionalMapName = member.annotations.find(_.tree.tpe =:= typeOf[MapName])
  .map { index =>
    index.tree.children(1).productElement(1)
      .asInstanceOf[scala.reflect.internal.Trees$Literal].value().value
  }.asInstanceOf[Option[String]]
I don't know why this worked before but no longer does in 2.13.0-M3. Trees$Literal is sort of an implementation detail: it's how names of inner classes are encoded by the compiler. An actual Scala type with which you can refer to that class is the type projection Trees#Literal.
So what will probably work is
const.asInstanceOf[scala.reflect.internal.Trees#Literal].value.value
I'm guessing value() in your code used to work because the compiler didn't recognize Trees$Literal as a Scala type and fell back to treating it as a Java one, and in Java all methods have a parameter list. A Scala def foo compiles to a JVM method with a parameter list, but the compiler can see in the Scala-specific metadata whether or not it was declared without a parameter list. A (non-local) val compiles to a field plus a method.
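For illustration (a sketch; the class name Meta is mine), all three declarations below end up as zero-argument JVM methods, and only the Scala metadata records how they were declared in source:
class Meta {
  def noParens: String = "a"      // JVM method String noParens(), recorded as parameterless in the Scala signature
  def withParens(): String = "b"  // JVM method String withParens(), declared with an empty parameter list
  val field: String = "c"         // compiles to a private field plus an accessor method String field()
}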
By the way, interesting as this may be, I don't think you actually need to cast to the internal API for this. You can simply pattern match to get the value out of the literal constant. With Literal and Constant imported from scala.reflect.runtime.universe or c.universe (c being a macro context):
val Literal(Constant(value: String)) = const
or
const match {
  case Literal(Constant(value: String)) => value
}
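For example, assuming a literal tree const built with the runtime universe (the value "hello" is just a placeholder), the match pulls the constant out directly:
import scala.reflect.runtime.universe._

val const: Tree = Literal(Constant("hello"))
val extracted = const match {
  case Literal(Constant(value: String)) => value
}
// extracted == "hello"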

Trying to skip implicit parameter list

I'd like to call a function returned by a function with an implicit parameter, simply and elegantly. This doesn't work:
def resolveA(implicit a: A): String => String = { prefix =>
s"$prefix a=$a"
}
case class A(n: Int)
implicit val a = A(1)
println(resolveA("-->")) // won't compile
I've figured out what's going on: Scala sees the ("-->") and thinks it's an attempt to explicitly fill in the implicit parameter list. I want to pass that as the prefix argument, but Scala sees it as the a argument.
I've tried some alternatives, like putting an empty parameter list () before the implicit one, but so far I've always been stopped by the fact that Scala thinks the argument to the returned function is an attempt to fill in the implicit parameter list of resolveA.
What's a nice way to do what I'm trying to do here, even if it's not as nice as the syntax I tried above?
Another option would be to use the apply method of the String => String function returned by resolveA. This way the compiler won't confuse the parameter lists, and it is a little shorter than writing implicitly[A].
scala> resolveA.apply("-->")
res3: String = --> a=A(1)
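For comparison, the implicitly[A] alternative alluded to above fills in the implicit parameter list explicitly and then applies the returned function:
println(resolveA(implicitly[A])("-->"))  // prints "--> a=A(1)"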

Strange implicit def with function parameter behaviour in Scala

I've written some simple code in Scala with an implicit conversion of a Function1 to some case class.
object MyApp extends App {
  case class FunctionContainer(val function: AnyRef)
  implicit def cast(function1: Int => String): FunctionContainer = new FunctionContainer(function1)
  def someFunction(i: Int): String = "someString"
  def abc(f: FunctionContainer): String = "abc"
  println(abc(someFunction))
}
But it doesn't work: the compiler doesn't want to pass someFunction as an argument to abc. I can guess at its reasons, but I don't know exactly why it doesn't work.
When you use a method name as you have, the compiler has to pick how to convert the method type to a value. If the expected type is a function, then it eta-expands; otherwise it supplies empty parens to invoke the method. That is described here in the spec.
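A minimal sketch of that rule (the method names twice and now are just illustrative):
def twice(s: String): String = s + s
val f: String => String = twice   // expected type is a function, so the method is eta-expanded
// val g = twice                  // error in Scala 2: no expected function type, so the argument list is considered missing

def now(): Long = System.currentTimeMillis()
val t: Long = now                 // expected type is not a function, so empty parens are supplied: now()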
But it wasn't always that way. Ten years ago, you would have got your function value just by using the method name.
The new online spec omits the "Change Log" appendix, so for the record, here is the moment when someone got frustrated with parens and introduced the current rules. (See Scala Reference 2.9, page 181.)
This has not eliminated all irksome anomalies.
Conversions
The rules for implicit conversions of methods to functions (§6.26) have been tightened. Previously, a parameterized method used as a value was always implicitly converted to a function. This could lead to unexpected results when method arguments were forgotten. Consider for instance the statement below:
show(x.toString)
where show is defined as follows:
def show(x: String) = Console.println(x)
Most likely, the programmer forgot to supply an empty argument list () to toString. The previous Scala version would treat this code as a partially applied method, and expand it to:
show(() => x.toString())
As a result, the address of a closure would be printed instead of the value of s. Scala version 2.0 will apply a conversion from partially applied method to function value only if the expected type of the expression is indeed a function type. For instance, the conversion would not be applied in the code above because the expected type of show’s parameter is String, not a function type. The new convention disallows some previously legal code. Example:
def sum(f: int => double)(a: int, b: int): double =
  if (a > b) 0 else f(a) + sum(f)(a + 1, b)
val sumInts = sum(x => x) // error: missing arguments
The partial application of sum in the last line of the code above will not be converted to a function type. Instead, the compiler will produce an error message which states that arguments for method sum are missing. The problem can be fixed by providing an expected type for the partial application, for instance by annotating the definition of sumInts with its type:
val sumInts: (int, int) => double = sum(x => x) // OK
On the other hand, Scala version 2.0 now automatically applies methods with empty parameter lists to () argument lists when necessary. For instance, the show expression above will now be expanded to
show(x.toString())
Your someFunction appears as a method here.
You could try either
object MyApp extends App {
  case class FunctionContainer(val function: AnyRef)
  implicit def cast(function1: Int => String): FunctionContainer = new FunctionContainer(function1)
  val someFunction = (i: Int) => "someString"
  def abc(f: FunctionContainer): String = "abc"
  println(abc(someFunction))
}
or
object MyApp extends App {
  case class FunctionContainer(val function: AnyRef)
  implicit def cast(function1: Int => String): FunctionContainer = new FunctionContainer(function1)
  def someFunction(i: Int): String = "someString"
  def abc(f: FunctionContainer): String = "abc"
  println(abc(someFunction(_: Int)))
}
By the way: implicitly converting such common function types to something else can quickly lead to problems. Are you absolutely sure that you need this? Wouldn't it be easier to overload abc?
You should use eta-expansion:
println(abc(someFunction _))

Right associative functions with two parameter lists

I was looking at the foldLeft and foldRight methods, and the operator version of the method was extremely peculiar, something like this: (0 /: List.range(1,10))(_ + _).
For right-associative functions with two parameter lists, one would expect the syntax to be something like ((param1)(param2) op HostClass).
But in this case it is of the form (param1 op HostClass)(param2). This causes ambiguity with another case, where a right-associative function returns another function that takes a single parameter.
Because of this ambiguity the class compiles, but it fails when the function call is made, as shown below.
class Test() {
  val func1: (String => String) = { in => in * 2 }
  def `test:`(x: String) = { println(x); func1 }
  def `test:`(x: String)(y: String) = { x + " " + y }
}
val test = new Test
(("Foo") `test:` test)("hello")
<console>:10: error: ambiguous reference to overloaded definition,
both method test: in class Test of type (x: String)(y: String)String
and method test: in class Test of type (x: String)String => String
match argument types (String)
(("Foo") `test:` test)("hello")
So my questions are:
Is this expected behaviour or is it a bug?
Why has the two-parameter-list right-associative function call been designed the way it is, instead of what I think is the more intuitive syntax ((param1)(param2) op HostClass)?
Is there a workaround to call either of the overloaded test: functions without ambiguity?
Scala's type system considers only the first parameter list of the function for type inference. Hence, to uniquely identify one of the overloaded methods in a class or object, the first parameter list has to be distinct for each overloaded definition. This can be demonstrated by the following example.
object Test {
  def test(x: String)(y: Int) = { x + " " + y.toString() }
  def test(x: String)(y: String) = { x + " " + y }
}
Test.test("Hello")(1)
<console>:9: error: ambiguous reference to overloaded definition,
both method test in object Test of type (x: String)(y: String)String
and method test in object Test of type (x: String)(y: Int)String
match argument types (String)
Test.test("Hello")(1)
Does it really fail at runtime? When I tested it, the class compiles, but the call of the method test: does not.
I think that the problem is not with the operator syntax, but with the fact that you have two overloaded functions, one with just one parameter list and the other with two.
You will get the same error with the dot-notation:
test.`test:`("Foo")("hello")
If you rename the one-param list function, the ambiguity will be gone and
(("Foo") `test:` test)("hello")
will compile.
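A sketch of that rename (the names TestFixed and test1: are mine):
class TestFixed() {
  val func1: (String => String) = { in => in * 2 }
  def `test1:`(x: String) = { println(x); func1 }
  def `test:`(x: String)(y: String) = { x + " " + y }
}
val t = new TestFixed
("Foo" `test1:` t)("hello")  // prints Foo, then applies func1 to "hello"
("Foo" `test:` t)("hello")   // "Foo hello"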

Implicit conversion paradox

If I try to define an implicit conversion for a primitive type, then it doesn't seem to work. E.g.:
implicit def globalIntToString(a: Int) : String = { a.toString() + "globalhi" }
1.toInt + "hi"
The above will still return simply "1hi" as the result.
However, if I parameterize a class or a def and then pass in the implicit for the parameterized case, it seems to work. Does anyone know what the reasons are? E.g. does this have something to do with boxing/unboxing of primitives (e.g., parameterized primitives are boxed)? Do implicits only work with reference types and not primitive types?
class typeConv[T] { implicit def tToStr(a: T): String = { a.toString() + "hi" } }
class t[K](a: K)(tc: typeConv[K]) { import tc._; println(a + "cool"); println(1.toInt + "cool") }
new t(1)(new typeConv[Int])
There are a few subtle things going on here. @yan has already explained the core issue - I'll try to add some more specific information.
As noted, 1.toInt + "hi" will never use any implicit conversion, because the Scala Int class actually has a + method that takes a String parameter. The compiler will look for an implicit view only when it can't find a matching member in the original type.
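For example (a sketch reusing the conversion from the question), the view is only consulted when no member of Int fits the expected shape:
implicit def globalIntToString(a: Int): String = { a.toString() + "globalhi" }

1 + "hi"           // "1hi": Int already defines a + that takes a String, so the view is not used
val s: String = 1  // no member of Int conforms to the expected type String, so the view applies: "1globalhi"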
A little more complicated stuff is happening inside your t class. Scalac will look for an implicit conversion from the generic type K to any type that has a + method taking a String parameter. There will be two candidates for such a conversion: your own tc.tToStr and the Scala built-in scala.Predef.any2stringadd.
Normally, any2stringadd would be used, but in your example, your own conversion is used. Why does it have precedence over any2stringadd?
During implicit search, tc.tToStr is seen as a function of type K => String, while any2stringadd is seen as a function of type Any => StringAdd. My guess is that K => String is seen by the compiler as a more specific conversion than Any => StringAdd, but someone would have to confirm it with a proper reference to the Scala Language Specification.
As you can see, defining such conversions may cause you a lot of strange behaviour. I'd definitely say that introducing an implicit conversion to a String is asking for trouble.
This happens because Scala defines a + operator on the Int type that takes a String and does not need to resolve an implicit conversion. Also, converting to String is usually a bad idea, as you'd generally have a custom, one-off type that would define the methods you're trying to add.
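A sketch of that last suggestion, using an enrichment class to add a method to Int instead of converting it to a String (the names Enrichment, IntSuffix and withSuffix are mine):
object Enrichment {
  implicit class IntSuffix(val a: Int) extends AnyVal {
    def withSuffix: String = a.toString + "globalhi"
  }
}

import Enrichment._
println(1.withSuffix + "hi")  // prints 1globalhihi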