Right-associative functions with two parameter lists - Scala

I was looking at the foldLeft and foldRight methods, and the operator form of foldLeft looked extremely peculiar, something like (0 /: List.range(1, 10))(_ + _).
For right-associative functions with two parameter lists, one would expect the syntax to be something like ((param1)(param2) op HostClass).
But in this case it has the form (param1 op HostClass)(param2). This causes an ambiguity with another case, where a right-associative function returns a function that takes a single parameter.
Because of this ambiguity the class below compiles, but the call fails as shown.
class Test() {
  val func1: String => String = { in => in * 2 }
  def `test:`(x: String) = { println(x); func1 }
  def `test:`(x: String)(y: String) = { x + " " + y }
}
val test = new Test
(("Foo") `test:` test)("hello")
<console>:10: error: ambiguous reference to overloaded definition,
both method test: in class Test of type (x: String)(y: String)String
and method test: in class Test of type (x: String)String => String
match argument types (String)
(("Foo") `test:` test)("hello")
So my questions are:
Is this expected behaviour, or is it a bug?
Why has the two-parameter-list right-associative function call been designed the way it is, instead of what seems to me the more intuitive syntax ((param1)(param2) op HostClass)?
Is there a workaround to call either of the overloaded test: methods without ambiguity?

Scala's type system considers only the first parameter list of a method when resolving an overloaded call. Hence, to uniquely identify one of the overloaded methods in a class or object, the first parameter list has to be distinct for each overloaded definition. This can be demonstrated by the following example.
object Test {
  def test(x: String)(y: Int) = { x + " " + y.toString() }
  def test(x: String)(y: String) = { x + " " + y }
}
Test.test("Hello")(1)
<console>:9: error: ambiguous reference to overloaded definition,
both method test in object Test of type (x: String)(y: String)String
and method test in object Test of type (x: String)(y: Int)String
match argument types (String)
Test.test("Hello")(1)

Does it really fail at runtime? When I tested it, the class compiles, but the call to the method test: does not.
I think the problem is not the operator syntax, but the fact that you have two overloaded functions, one with a single parameter list and the other with two.
You will get the same error with the dot-notation:
test.`test:`("Foo")("hello")
If you rename the one-param list function, the ambiguity will be gone and
(("Foo") `test:` test)("hello")
will compile.
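For example, a sketch of that renaming (the name testF: is made up here just to keep the two overloads apart):

class Test() {
  val func1: String => String = { in => in * 2 }
  def `testF:`(x: String) = { println(x); func1 }      // renamed one-parameter-list overload
  def `test:`(x: String)(y: String) = { x + " " + y }
}

val test = new Test
(("Foo") `test:` test)("hello")   // "Foo hello"
(("Foo") `testF:` test)("hello")  // prints Foo, then returns "hellohello"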

Related

Strange implicit def with function parameter behaviour in Scala

I've written some simple code in Scala with an implicit conversion from Function1 to a case class.
object MyApp extends App {
  case class FunctionContainer(val function: AnyRef)
  implicit def cast(function1: Int => String): FunctionContainer = new FunctionContainer(function1)
  def someFunction(i: Int): String = "someString"
  def abc(f: FunctionContainer): String = "abc"
  println(abc(someFunction))
}
But it doesn't work: the compiler doesn't want to pass someFunction as an argument to abc. I can guess at its reasons, but I don't know exactly why it fails.
When you use a method name as you have, the compiler has to pick how to convert the method type to a value. If the expected type is a function, then it eta-expands; otherwise it supplies empty parens to invoke the method. That is described here in the spec.
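A small sketch of that rule with a made-up nullary method (newer compilers may warn about the auto-application case, but the mechanics are as described):

def greeting() = "hi"

val s: String = greeting           // expected type is not a function type: expands to greeting()
val f: () => String = greeting _   // explicit eta-expansion yields a function value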
But it wasn't always that way. Ten years ago, you would have got your function value just by using the method name.
The new online spec omits the "Change Log" appendix, so for the record, here is the moment when someone got frustrated with parens and introduced the current rules. (See Scala Reference 2.9, page 181.)
This has not eliminated all irksome anomalies.
Conversions
The rules for implicit conversions of methods to functions (§6.26) have been tightened. Previously, a parameterized method used as a value was always implicitly converted to a function. This could lead to unexpected results when method arguments were forgotten. Consider for instance the statement below:
show(x.toString)
where show is defined as follows:
def show(x: String) = Console.println(x)
Most likely, the programmer forgot to supply an empty argument list () to toString. The previous Scala version would treat this code as a partially applied method, and expand it to:
show(() => x.toString())
As a result, the address of a closure would be printed instead of the value of s. Scala version 2.0 will apply a conversion from partially applied method to function value only if the expected type of the expression is indeed a function type. For instance, the conversion would not be applied in the code above because the expected type of show’s parameter is String, not a function type. The new convention disallows some previously legal code. Example:
def sum(f: int => double)(a: int, b: int): double =
  if (a > b) 0 else f(a) + sum(f)(a + 1, b)
val sumInts = sum(x => x) // error: missing arguments
The partial application of sum in the last line of the code above will not be converted to a function type. Instead, the compiler will produce an error message which states that arguments for method sum are missing. The problem can be fixed by providing an expected type for the partial application, for instance by annotating the definition of sumInts with its type:
val sumInts: (int, int) => double = sum(x => x) // OK
On the other hand, Scala version 2.0 now automatically applies methods with empty parameter lists to () argument lists when necessary. For instance, the show expression above will now be expanded to
show(x.toString())
Your someFunction appears as a method here.
You could try either
object MyApp extends App {
  case class FunctionContainer(val function: AnyRef)
  implicit def cast(function1: Int => String): FunctionContainer = new FunctionContainer(function1)
  val someFunction = (i: Int) => "someString"
  def abc(f: FunctionContainer): String = "abc"
  println(abc(someFunction))
}
or
object MyApp extends App {
  case class FunctionContainer(val function: AnyRef)
  implicit def cast(function1: Int => String): FunctionContainer = new FunctionContainer(function1)
  def someFunction(i: Int): String = "someString"
  def abc(f: FunctionContainer): String = "abc"
  println(abc(someFunction(_: Int)))
}
By the way: implicitly converting such a common function type to something else can quickly lead to problems. Are you absolutely sure you need this? Wouldn't it be easier to overload abc?
You can use explicit eta-expansion:
println(abc(someFunction _))
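For completeness, a type ascription also supplies the expected function type, which triggers the same eta-expansion (a sketch reusing the definitions from the question):

println(abc(someFunction: Int => String))  // ascription => eta-expansion => implicit cast applies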

Scala ambiguity with paren-less function calls

Excuse the long set-up. This question relates to, but is not answered by, Scala: ambiguous reference to overloaded definition - best disambiguation?
I'm pretty new to Scala, and one thing that's throwing me off is that Scala both:
Has first-class functions
Calls functions when using object-dot notation without any parenthetical argument lists (as if the function were a property)
These two language features are confusing me. Look at the below code:
class MyClass {
  def something(in: String): String = {
    in + "_X"
  }
  def something: String => String = {
    case _ => "Fixed"
  }
}
val my = new MyClass()
println(List("foo", "bar").map(my.something))
I would expect this to print List("foo_X", "bar_X") by calling the something prototype that matches the map's required String => ? argument. Instead, the output is List("Fixed", "Fixed") - Scala 2.11 is invoking the no-argument something() and then passing its return value to the map.
If we comment out the second no-argument prototype of something, the output changes to be the expected result, demonstrating that the other prototype is valid in context.
Adding an empty argument list to the second prototype (making it def something()) also changes the behavior.
Changing the my.something to my.something(_) wakes Scala up to the ambiguity it was silently ignoring before:
error: ambiguous reference to overloaded definition,
both method something in class MyClass of type => String => String
and method something in class MyClass of type (in: String)String
match argument types (String)
println(List("foo", "bar").map(my.something(_)))
Even using the supposedly-for-this-purpose magic trailing underscore doesn't work:
val myFun: (String) => String = my.something _
This results in:
error: type mismatch;
found : () => String => String
required: String => String
val myFun: (String) => String = my.something _
My questions:
If I have MyClass exactly as written (no changes to the prototypes, especially not adding an empty parameter list to one of the prototypes), how do I tell Scala, unambiguously, that I want the first one-argument version of something to pass as an argument to another call?
Since there are clearly two satisfying arguments that could be passed to map, why did the Scala compiler not report the ambiguity as an error?
Is there a way to disable Scala's behavior of (sometimes, not always) treating foo.bar as equivalent to foo.bar()?
I have filed a bug on the Scala issue tracker and the consensus seems to be that this behaviour is a bug. The compiler should have thrown an error about the ambiguous reference to "my.something".

type inference in argument list in combination with setter not working

Let's imagine the following items in scope:
object Thing {
  var data: Box[String] = Empty
}

def perform[T](setter: Box[T] => Unit) {
  // doesn't matter
}
The following fails to compile:
perform(Thing.data = _)
The error message is:
<console>:12: error: missing parameter type for expanded function ((x$1) => Thing.data = x$1)
perform(Thing.data = _)
^
<console>:12: warning: a type was inferred to be `Any`; this may indicate a programming error.
perform(Thing.data = _)
^
While the following compiles:
perform(Thing.data_=)
I have since worked around this issue by creating a better abstraction, but my curiosity remains.
Can anyone explain why this is?
Let's expand out what you're doing in the first example:
Thing.data = _
is shorthand for defining an anonymous function, which looks like:
def anon[T](x: Box[T]) {
  Thing.data = x
}
So when you call
perform(Thing.data = _)
it's the same as
perform(anon)
The problem is anon and perform take a type parameter T and at no point are you declaring what T is. The compiler can only infer type parameters in a function call from passed arguments, not from within the function body, so it cannot infer in anon that T should be String.
Notice that if you call
perform[String](Thing.data = _)
the compiler has no issue because it now knows what T should be; and if you try to use any type besides String, you'll get a type mismatch error, but the error occurs in the body of the anonymous function, not on the call to perform.
However, when you call
perform(Thing.data_=)
you are passing the method Thing.data_=, which is explicitly defined as Box[String] => Unit, so the compiler can infer perform's type parameter because it is coming from a function argument.
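Putting the pieces together, a self-contained sketch (a minimal Box/Empty here stand in for the Lift types used in the question):

sealed trait Box[+A]
case object Empty extends Box[Nothing]

object Thing {
  var data: Box[String] = Empty
}

def perform[T](setter: Box[T] => Unit): Unit = ()

// perform(Thing.data = _)                    // fails: T cannot be inferred from the function body
perform[String](Thing.data = _)               // OK: T supplied explicitly
perform((x: Box[String]) => Thing.data = x)   // OK: T inferred from the declared parameter type
perform(Thing.data_=)                         // OK: the setter's type Box[String] => Unit fixes T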

Why is the following Scala code valid?

My understanding is that Unit is like void, so why can I pass in multiple arguments?
Can anyone explain why the following code is valid?
def foo(x: Unit) = println("foo")
foo("ss", 1)
If you run your snippet with scala -print you'll roughly get the following output for the code:
/* Definition of foo */
private def foo(x: scala.runtime.BoxedUnit): Unit = { ... }

/* Invocation of foo */
foo({
  new Tuple2("ss", scala.Int.box(1));
  scala.runtime.BoxedUnit.UNIT
});
As you can see, the arguments to foo are rewritten into a code block that creates a tuple but then returns UNIT.
I can't see a good reason for this behaviour and I'd rather get a compiler error thrown instead.
A related question which gives a decent answer to this is here:
Scala: Why can I convert Int to Unit?
From Section 6.26.1 of the Scala Language Specification v2.9, "Value Discarding":
If e has some value type and the expected type is Unit, e is converted to the expected type by embedding it in the term { e; () }.
So, in your case, ("ss", 1) is first adapted into a tuple so that it can be treated as a single argument; then, because that argument's type is not Unit, it is wrapped in a block that computes the tuple and returns () to match the required type of the parameter.
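A short sketch making both conversions explicit (the comments describe what the compiler effectively generates for the 2.x versions discussed here):

def foo(x: Unit) = println("foo")

foo("ss", 1)            // auto-tupling + value discarding: effectively foo({ ("ss", 1); () })
foo({ ("ss", 1); () })  // the explicit equivalent of what the compiler generates
foo(())                 // what the caller most likely meant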

Spurious ambiguous reference error in Scala 2.7.7 compiler/interpreter?

Can anyone explain the compile error below? Interestingly, if I change the return type of the get() method to String, the code compiles just fine. Note that the thenReturn method has two overloads: a unary method and a varargs method that takes at least one argument. It seems to me that if the invocation is ambiguous here, then it would always be ambiguous.
More importantly, is there any way to resolve the ambiguity?
import org.scalatest.mock.MockitoSugar
import org.mockito.Mockito._

trait Thing {
  def get(): java.lang.Object
}

new MockitoSugar {
  val t = mock[Thing]
  when(t.get()).thenReturn("a")
}
error: ambiguous reference to overloaded definition,
both method thenReturn in trait OngoingStubbing of type
(java.lang.Object,java.lang.Object*)org.mockito.stubbing.OngoingStubbing[java.lang.Object]
and method thenReturn in trait OngoingStubbing of type
(java.lang.Object)org.mockito.stubbing.OngoingStubbing[java.lang.Object]
match argument types (java.lang.String)
when(t.get()).thenReturn("a")
Well, it is ambiguous. I suppose Java semantics allow for it, and it might merit a ticket asking for Java semantics to be applied in Scala.
The source of the ambiguity is this: a vararg parameter may receive any number of arguments, including 0. So, when you write thenReturn("a"), do you mean to call the thenReturn which receives a single argument, or the thenReturn that receives one object plus a vararg, passing 0 arguments to the vararg?
Now, when this kind of thing happens, Scala tries to find which method is "more specific". Anyone interested in the details should look that up in Scala's specification, but here is what happens in this particular case:
object t {
  def f(x: AnyRef) = 1               // A
  def f(x: AnyRef, xs: AnyRef*) = 2  // B
}
if you call f("foo"), both A and B
are applicable. Which one is more
specific?
it is possible to call B with parameters of type (AnyRef), so A is
as specific as B.
it is possible to call A with parameters of type (AnyRef,
Seq[AnyRef]) thanks to tuple
conversion, Tuple2[AnyRef,
Seq[AnyRef]] conforms to AnyRef. So
B is as specific as A. Since both are
as specific as the other, the
reference to f is ambiguous.
As to the "tuple conversion" thing, it is one of the most obscure syntactic sugars of Scala. If you make a call f(a, b), where a and b have types A and B, and there is no f accepting (A, B) but there is an f which accepts (Tuple2(A, B)), then the parameters (a, b) will be converted into a tuple.
For example:
scala> def f(t: Tuple2[Int, Int]) = t._1 + t._2
f: (t: (Int, Int))Int
scala> f(1,2)
res0: Int = 3
Now, there is no tuple conversion going on when thenReturn("a") is called. That is not the problem. The problem is that, given that tuple conversion is possible, neither version of thenReturn is more specific, because any parameter passed to one could be passed to the other as well.
In the specific case of Mockito, it's possible to use the alternate API methods designed for use with void methods:
doReturn("a").when(t).get()
Clunky, but it'll have to do, as Martin et al don't seem likely to compromise Scala in order to support Java's varargs.
Well, I figured out how to resolve the ambiguity (seems kind of obvious in retrospect):
when(t.get()).thenReturn("a", Array[Object](): _*)
As Andreas noted, if the ambiguous method requires a null reference rather than an empty array, you can use something like
v.overloadedMethod(arg0, null.asInstanceOf[Array[Object]]: _*)
to resolve the ambiguity.
If you look at the standard library APIs you'll see this issue handled like this:
def meth(t1: Thing): OtherThing = { ... }
def meth(t1: Thing, t2: Thing, ts: Thing*): OtherThing = { ... }
By doing this, no call (with at least one Thing parameter) is ambiguous without extra fluff like Array[Thing](): _*.
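A runnable sketch of why this shape avoids the ambiguity (Thing is a placeholder trait here, and the return types are simplified to String):

trait Thing

def meth(t1: Thing): String = "one"
def meth(t1: Thing, t2: Thing, ts: Thing*): String = "two or more"

val a, b, c = new Thing {}
meth(a)        // only the single-argument overload can take one Thing
meth(a, b)     // only the varargs overload can take two
meth(a, b, c)  // the varargs overload again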
I had a similar problem using Oval (oval.sf.net) when trying to call its validate() method.
Oval defines two validate() methods:
public List<ConstraintViolation> validate(final Object validatedObject)
public List<ConstraintViolation> validate(final Object validatedObject, final String... profiles)
Trying this from Scala:
validator.validate(value)
produces the following compiler-error:
both method validate in class Validator of type (x$1: Any,x$2: <repeated...>[java.lang.String])java.util.List[net.sf.oval.ConstraintViolation]
and method validate in class Validator of type (x$1: Any)java.util.List[net.sf.oval.ConstraintViolation]
match argument types (T)
var violations = validator.validate(entity);
Oval needs the varargs-parameter to be null, not an empty-array, so I finally got it to work with this:
validator.validate(value, null.asInstanceOf[Array[String]]: _*)