I have the following code.
object example {
  def foo(a: Any) = "Object"
  def foo(a: String, args: String*) = "String"
  def main(args: Array[String]): Unit = {
    println(foo("ABC")) // Should print "String"
  }
}
In Scala 2, this code works fine. When I call foo with a single string argument, both overloads are valid, but the String one is more specific, so it gets called. In Scala 3, the same call is ambiguous:
-- [E051] Reference Error: overloads.scala:9:12 --------------------------------------------------------------------
9 | println(foo("ABC"))
| ^^^
| Ambiguous overload. The overloaded alternatives of method foo in object example with types
| (a: String, args: String*): String
| (a: Any): String
| both match arguments (("ABC" : String))
The equivalent code (two overloads, one of them more specific and having a vararg parameter) also works in Java. What exactly changed in Scala 3 to make this particular call ambiguous? Is this behavior intended? And how can I tell Scala which overload to call in this case?
For what it's worth, I am well aware method overloading is strongly discouraged in Scala. In my actual code, I'm interfacing with a Java builder class that has an overloaded .append method, so I have no control over the function signatures.
This behaviour is mentioned in this PR, in particular in this comment. I don't know if that will be "fixed" or if it's exactly as intended.
Note that the following is not ambiguous:
def foo(s: String): String = "single"
def foo(a: String, other: String*): String = "vararg"
foo("fine!")
With the original def foo(a: Any) overload, you can explicitly select one alternative or the other like this:
foo("fine": Any) // calls foo(Any)
foo("fine", Nil: _*) // calls foo(String, String*) with an empty vararg
PS: I wouldn't say that overloading is "strongly discouraged", but we must be careful and admit that some cases are ambiguous, to the compiler and to humans. It's not obvious to me which definition foo("") should refer to, so it's not completely crazy to be more explicit in that case.
In section 6.26.2 of the Scala specification, there are four kinds of conversions for methods:

Method Conversions
The following four implicit conversions can be applied to methods which are not applied to some argument list.
Evaluation. A parameterless method m of type => T is always converted to type T by evaluating the expression to which m is bound.
Implicit Application. If the method takes only implicit parameters, implicit arguments are passed following the rules of §7.2.
Eta Expansion. Otherwise, if the method is not a constructor, and the expected type pt is a function type (Ts′) ⇒ T′, eta-expansion (§6.26.5) is performed on the expression e.
Empty Application. Otherwise, if e has method type ()T, it is implicitly applied to the empty argument list, yielding e().
Some Scala code:
def hello(): String = "abc"
def world1(s: String) = s
def world2(s: => String) = s
def world3(s: () => String) = s
world1(hello)
world2(hello)
world3(hello)
My question is, for world1, world2 and world3, which conversion rule is applied to each of them?
def world1(s: String) = s
world1(hello)
Empty application: hello is implicitly applied as hello(), and the resulting String is passed as the argument of world1.
def world2(s: => String) = s
world2(hello)
Empty application, deferred: hello is implicitly applied as hello(), but because s is a by-name parameter, the expression is not evaluated at the call site; it is re-evaluated each time s is used inside world2.
def world3(s: () => String) = s
world3(hello)
Eta-expansion: hello is expanded to a Function0[String]
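A hedged way to see the three conversions in action is to give hello a side effect and count how many times it fires (the println calls are just probes; this shows Scala 2 behavior):

def hello(): String = { println("evaluating hello()"); "abc" }

def world1(s: String): String = s + s           // prints once, before the call
def world2(s: => String): String = s + s        // prints once per use of s: twice here
def world3(s: () => String): String = s() + s() // prints once per invocation: twice here

world1(hello) // auto-application: deprecated in 2.13, an error in Scala 3
world2(hello) // also auto-applied, then wrapped in a thunk
world3(hello) // eta-expansion: nothing is evaluated until s() is called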
Eta-expansion is necessary because methods and functions are two different things, even though the Scala compiler is very good at hiding this difference.
The difference comes from the fact that methods are a JVM concept, whereas first-class functions are a Scala-specific concept.
You can try this out in a REPL. First, let's define a method hello
scala> def hello(s: String) = s"hello $s"
hello: (s: String)String
scala> hello.hashCode
<console>:9: error: missing arguments for method hello;
follow this method with `_' if you want to treat it as a partially applied function
hello.hashCode
^
Whoops, hello is not a value (in the JVM sense)! Not having a hashCode is proof of that.
Let's get a Function1 out of the hello method
scala> hello _
res10: String => String = <function1>
scala> (hello _).hashCode
res11: Int = 1710358635
A function is a value, so it has a hashCode.
I was writing a wrapper class that passes on most calls identically to the root object, and I accidentally left in the full definition (with the parameter name x, etc.); see below. To my surprise, it compiled. So what is going on here? Is this similar to assigning to root.p _? I find it strange that I can leave the name "x" in the assignment. Also, what would be the best (fastest) way to pass on wrapped calls, or does it make no difference?
trait A {
  def p(x: Int) = println("A" + x)
}
case class B(root: A) {
  def p(x: Int): Unit = root.p(x:Int) // WHAT HAPPENED HERE?
}
object Test extends App {
  val temp = new A {}
  val b = B(temp)
  b.p(123)
}
What happens here is type ascription, and in this case it does not do much.
The code works just as if you had written
def p(x: Int): Unit = root.p(x)
as you intended. When you write x: Int in a call (not in a declaration, where it has a completely different meaning), or more generally expr: Type, the expression has the same value as expr, but it tells the compiler to check that expr conforms to the given type and to treat it as having that type. The check happens at compile time, a bit like an upcast; it is not a runtime check such as asInstanceOf[...]. Here, x is indeed an Int and is already treated as an Int by the compiler, so the ascription changes nothing.
Besides documenting a non-obvious type somewhere in the code, type ascription may be used to select between overloaded methods:
def f(a: Any) = "Any"
def f(i: Int) = "Int"
f(3) // calls f(i: Int)
f(3: Any) // calls f(a: Any)
Note that in the second call, with the ascription, the compiler treats 3 as having type Any, which is less precise than Int but still correct. If the ascribed type did not hold, it would be a compile error; this is not a cast. The ascription simply makes the call resolve to the other version of f.
You can have a look at that answer for more details: https://stackoverflow.com/a/2087356/754787
Are you delegating the implementation of B.p to A.p? I don't see anything unusual except for root.p(x:Int); you can save typing by writing root.p(x).
A trait is a way of mixing in code; I think the easiest way is:
trait A {
  def p(x: Int) = println("A" + x)
}
case class B() extends A

val b = B()
b.p(123)
Given the following constructs for defining a function in Scala, can you explain what the difference is, and what the implications will be?
def foo = {}
vs.
def foo() = {}
Update
Thanks for the quick responses. These are great. The only question that remains for me is:
If I omit the parentheses, is there still a way to pass the function around? This is what I get in the REPL:
scala> def foo = {}
foo: Unit
scala> def baz() = {}
baz: ()Unit
scala> def test(arg: () => Unit) = { arg }
test: (arg: () => Unit)() => Unit
scala> test(foo)
<console>:10: error: type mismatch;
found : Unit
required: () => Unit
test(foo)
^
scala> test(baz)
res1: () => Unit = <function0>
Update 2012-09-14
Here are some similar questions I noticed:
Difference between function with parentheses and without
Scala methods with no arguments
If you include the parentheses in the definition you can optionally omit them when you call the method. If you omit them in the definition you can't use them when you call the method.
scala> def foo() {}
foo: ()Unit
scala> def bar {}
bar: Unit
scala> foo
scala> bar()
<console>:12: error: Unit does not take parameters
bar()
^
Additionally, you can do something similar with your higher-order functions:
scala> def baz(f: () => Unit) {}
baz: (f: () => Unit)Unit
scala> def bat(f: => Unit) {}
bat: (f: => Unit)Unit
scala> baz(foo)
scala> baz(bar)
<console>:13: error: type mismatch;
found : Unit
required: () => Unit
baz(bar)
^
scala> bat(foo)
scala> bat(bar) // both ok
Here baz will only accept the empty-paren method foo, not bar. What use this is, I don't know. But it does show that the types are distinct.
Let me copy the answer I posted on a duplicate question:
A Scala 2.x method of 0-arity can be defined with or without parentheses (). The parentheses are used to signal the user that the method has some kind of side effect (like printing to stdout or destroying data), as opposed to the version without, which can later be implemented as a val.
See Programming in Scala:
Such parameterless methods are quite common in Scala. By contrast, methods defined with empty parentheses, such as def height(): Int, are called empty-paren methods. The recommended convention is to use a parameterless method whenever there are no parameters and the method accesses mutable state only by reading fields of the containing object (in particular, it does not change mutable state).
This convention supports the uniform access principle [...]
To summarize, it is encouraged style in Scala to define methods that take no parameters and have no side effects as parameterless methods, i.e., leaving off the empty parentheses. On the other hand, you should never define a method that has side-effects without parentheses, because then invocations of that method would look like a field selection.
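To illustrate the uniform access principle, here is a minimal sketch of my own (not from the book): an abstract parameterless method can be implemented as a val without changing any call site:

trait Shape {
  def area: Double // parameterless: reads like a field at the call site
}
class Circle(r: Double) extends Shape {
  val area: Double = math.Pi * r * r // implemented as a val; callers are unaffected
}

println(new Circle(1.0).area) // 3.14159...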
Terminology
There is some confusing terminology around 0-arity methods, so here is a table:

                  Programming in Scala     scala/scala jargon
def foo: Int      parameterless method     nullary method
def foo(): Int    empty-paren method       nilary method
It sounds cool to say "nullary method", but people often say it wrong and readers will be confused, so I suggest sticking with parameterless vs empty-paren methods, unless you're on a pull request where people are already using the jargon.
() is no longer optional in Scala 2.13 or 3.0
In The great () insert, Martin Odersky made a change to Scala 3 requiring () to call a method defined with (). This is documented in the Scala 3 Migration Guide as:
Auto-application is the syntax of calling a nullary method without passing an empty argument list.
Note: the migration document gets the term wrong. It should read:
Auto-application is the syntax of calling an empty-paren (or "nilary") method without passing an empty argument list.
Scala 2.13 followed Scala 3.x and deprecated the auto-application of empty-paren methods in "Eta-expand 0-arity method if expected type is Function0". A notable exception to this rule is Java-defined methods: we can continue to call Java methods such as toString without ().
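A quick sketch of how this plays out across versions (the exact warning wording varies):

def greet(): String = "hi"

greet()        // always fine
greet          // fine in 2.12; a deprecation warning in 2.13; an error in Scala 3

"abc".toString // still fine everywhere: toString is Java-defined, so it is exempt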
To answer your second question, just add an _:
scala> def foo = println("foo!")
foo: Unit
scala> def test(arg: () => Unit) = { arg }
test: (arg: () => Unit)() => Unit
scala> test(foo _)
res10: () => Unit = <function0>
scala> test(foo _)()
foo!
scala>
I would recommend always starting with a definition like:
def bar {}
and only in cases when you are forced to, changing it to:
def bar() {}
Reason: let's consider these two functions from the point of view of possible usage: how they can be invoked and where they can be passed.
I would not call this a function at all:
def bar {}
It can be invoked as:
bar
but not as a function:
bar()
We can use this bar when we define a higher-order function with a call-by-name parameter:
def bat(f: => Unit) {
  f // you must not use (); f() will not compile
}
We should remember that => Unit is not even a function. You cannot treat a thunk as a Function value to be stored or passed around; you can only trigger evaluations of the actual argument expression (any number of them).
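A small sketch of that restriction: inside bat you can evaluate the thunk as often as you like, but you cannot capture it as a Function0 value.

def bat(f: => Unit): Unit = {
  f                        // evaluates the argument expression
  f                        // and evaluates it a second time
  // val g: () => Unit = f // does not compile: f denotes the Unit result of
  //                       // evaluating the thunk, not a storable function value
}

bat(println("tick")) // prints "tick" twice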
Scala: passing function as block of code between curly braces
A function defined with () has a broader scope for usage. It can be used in exactly the same contexts as bar:
def foo() = {}
// invocation:
foo
//or as a function:
foo()
It can be passed into a function with a call-by-name parameter:
bat(foo)
Additionally, if we define a higher-order function that accepts not a call-by-name parameter but a real function:
def baz(f: () => Unit) {}
We can also pass foo to baz:
baz(foo)
As we can see, standard functions like foo have a broader scope for usage. But using functions defined without (), plus defining higher-order functions that accept call-by-name parameters, lets us use clearer syntax.
If you are not trying to achieve better, more readable code, or if you need the ability to pass your piece of code both to a function with a call-by-name parameter and to a function with a real function parameter, then define your function as a standard one:
def foo() {}
If you prefer to write clearer and more readable code, AND your function has no side effects, define it as:
def bar {}
PLUS, try to define your higher-order functions to accept call-by-name parameters rather than functions.
Only when you are forced to should you use the previous option.
An implicit question to newcomers to Scala seems to be: where does the compiler look for implicits? I mean implicit because the question never seems to get fully formed, as if there weren't words for it. :-) For example, where do the values for integral below come from?
scala> import scala.math._
import scala.math._
scala> def foo[T](t: T)(implicit integral: Integral[T]) {println(integral)}
foo: [T](t: T)(implicit integral: scala.math.Integral[T])Unit
scala> foo(0)
scala.math.Numeric$IntIsIntegral$@3dbea611
scala> foo(0L)
scala.math.Numeric$LongIsIntegral$@48c610af
A follow-up question, for those who learn the answer to the first one, is: how does the compiler choose which implicit to use in certain situations of apparent ambiguity (that nevertheless compile)?
For instance, scala.Predef defines two conversions from String: one to WrappedString and another to StringOps. Both classes, however, share a lot of methods, so why doesn't Scala complain about ambiguity when, say, calling map?
Note: this question was inspired by this other question, in the hopes of stating the problem in a more general manner. The example was copied from there, because it is referred to in the answer.
Types of Implicits
Implicits in Scala refer to either a value that can be passed "automatically", so to speak, or a conversion from one type to another that is made automatically.
Implicit Conversion
Speaking very briefly about the latter type, if one calls a method m on an object o of a class C, and that class does not support method m, then Scala will look for an implicit conversion from C to something that does support m. A simple example would be the method map on String:
"abc".map(_.toInt)
String does not support the method map, but StringOps does, and there's an implicit conversion from String to StringOps available (see implicit def augmentString on Predef).
Implicit Parameters
The other kind of implicit is the implicit parameter. These are passed to method calls like any other parameter, but the compiler tries to fill them in automatically. If it can't, it will complain. One can pass these parameters explicitly, which is how one uses breakOut, for example (see question about breakOut, on a day you are feeling up for a challenge).
In this case, one has to declare the need for an implicit, such as the foo method declaration:
def foo[T](t: T)(implicit integral: Integral[T]) {println(integral)}
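For instance, the integral argument can also be supplied by hand. IntIsIntegral below is the standard instance that lives in the Numeric companion object:

import scala.math.Integral

def foo[T](t: T)(implicit integral: Integral[T]): Unit = println(integral)

foo(0)                                   // the compiler supplies Integral[Int]
foo(0)(scala.math.Numeric.IntIsIntegral) // the same instance, passed explicitly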
View Bounds
There's one situation where an implicit is both an implicit conversion and an implicit parameter. For example:
def getIndex[T, CC](seq: CC, value: T)(implicit conv: CC => Seq[T]) = seq.indexOf(value)
getIndex("abc", 'a')
The method getIndex can receive any object, as long as there is an implicit conversion available from its class to Seq[T]. Because of that, I can pass a String to getIndex, and it will work.
Behind the scenes, the compiler changes seq.indexOf(value) to conv(seq).indexOf(value).
This is so useful that there is syntactic sugar for it. Using this syntactic sugar, getIndex can be defined like this:
def getIndex[T, CC <% Seq[T]](seq: CC, value: T) = seq.indexOf(value)
This syntactic sugar is described as a view bound, akin to an upper bound (CC <: Seq[Int]) or a lower bound (T >: Null).
Context Bounds
Another common pattern in implicit parameters is the type class pattern. This pattern enables the provision of common interfaces to classes which did not declare them. It can both serve as a bridge pattern -- gaining separation of concerns -- and as an adapter pattern.
The Integral class you mentioned is a classic example of type class pattern. Another example on Scala's standard library is Ordering. There's a library that makes heavy use of this pattern, called Scalaz.
This is an example of its use:
def sum[T](list: List[T])(implicit integral: Integral[T]): T = {
  import integral._ // get the implicits in question into scope
  list.foldLeft(integral.zero)(_ + _)
}
There is also syntactic sugar for it, called a context bound, which is made less useful by the need to refer to the implicit. A straight conversion of that method looks like this:
def sum[T : Integral](list: List[T]): T = {
  val integral = implicitly[Integral[T]]
  import integral._ // get the implicits in question into scope
  list.foldLeft(integral.zero)(_ + _)
}
Context bounds are more useful when you just need to pass them to other methods that use them. For example, the method sorted on Seq needs an implicit Ordering. To create a method reverseSort, one could write:
def reverseSort[T : Ordering](seq: Seq[T]) = seq.sorted.reverse
Because Ordering[T] was implicitly passed to reverseSort, it can then pass it implicitly to sorted.
Where do Implicits come from?
When the compiler sees the need for an implicit, either because you are calling a method which does not exist on the object's class, or because you are calling a method that requires an implicit parameter, it will search for an implicit that will fit the need.
This search obeys certain rules that define which implicits are visible and which are not. The following list, showing where the compiler will search for implicits, was taken from an excellent presentation (timestamp 20:20) about implicits by Josh Suereth, which I heartily recommend to anyone wanting to improve their Scala knowledge. It has since been complemented with feedback and updates.
The implicits available under number 1 below have precedence over the ones under number 2. Other than that, if there are several eligible arguments which match the implicit parameter's type, the most specific one will be chosen using the rules of static overloading resolution (see Scala Specification §6.26.3). More detailed information can be found in a question I link to at the end of this answer.
1. First look in current scope
   - Implicits defined in current scope
   - Explicit imports
   - Wildcard imports
   - Same scope in other files
2. Now look at associated types in
   - Companion objects of a type
   - Implicit scope of an argument's type (2.9.1)
   - Implicit scope of type arguments (2.8.0)
   - Outer objects for nested types
   - Other dimensions
Let's give some examples for them:
Implicits Defined in Current Scope
implicit val n: Int = 5
def add(x: Int)(implicit y: Int) = x + y
add(5) // takes n from the current scope
Explicit Imports
import scala.collection.JavaConversions.mapAsScalaMap
def env = System.getenv() // Java map
val term = env("TERM") // implicit conversion from Java Map to Scala Map
Wildcard Imports
def sum[T : Integral](list: List[T]): T = {
  val integral = implicitly[Integral[T]]
  import integral._ // get the implicits in question into scope
  list.foldLeft(integral.zero)(_ + _)
}
Same Scope in Other Files
Edit: It seems this does not have a different precedence. If you have an example that demonstrates a precedence distinction, please leave a comment. Otherwise, don't rely on this one.
This is like the first example, but assuming the implicit definition is in a different file than its usage. See also how package objects might be used to bring in implicits.
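A hedged sketch of the package-object variant (file layout and names are illustrative):

// File 1:
package myapp
case class Sep(value: String)

// File 2 (package.scala):
package object myapp {
  implicit val defaultSep: Sep = Sep(", ")
}

// File 3, same package, no import needed:
package myapp
object Main extends App {
  def join(a: String, b: String)(implicit sep: Sep): String = a + sep.value + b
  println(join("a", "b")) // prints "a, b"; defaultSep is visible without a prefix
}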
Companion Objects of a Type
There are two companion objects of note here. First, the companion object of the "source" type is examined. For instance, inside the object Option there is an implicit conversion to Iterable, so one can call Iterable methods on an Option, or pass an Option to something expecting an Iterable. For example:
for {
  x <- List(1, 2, 3)
  y <- Some('x')
} yield (x, y)
That expression is translated by the compiler to
List(1, 2, 3).flatMap(x => Some('x').map(y => (x, y)))
However, List.flatMap expects a TraversableOnce, which Option is not. The compiler then looks inside Option's object companion and finds the conversion to Iterable, which is a TraversableOnce, making this expression correct.
Second, the companion object of the expected type:
List(1, 2, 3).sorted
The method sorted takes an implicit Ordering. In this case, it looks inside the object Ordering, companion to the class Ordering, and finds an implicit Ordering[Int] there.
Note that companion objects of super classes are also looked into. For example:
class A(val n: Int)
object A {
  implicit def str(a: A) = "A: %d" format a.n
}
class B(val x: Int, y: Int) extends A(y)
val b = new B(5, 2)
val s: String = b // s == "A: 2"
This is how Scala found the implicit Numeric[Int] and Numeric[Long] in your question, by the way, as they are found inside Numeric, not Integral.
Implicit Scope of an Argument's Type
If you have a method with an argument type A, then the implicit scope of type A will also be considered. By "implicit scope" I mean that all these rules will be applied recursively -- for example, the companion object of A will be searched for implicits, as per the rule above.
Note that this does not mean the implicit scope of A will be searched for conversions of that parameter, but of the whole expression. For example:
class A(val n: Int) {
  def +(other: A) = new A(n + other.n)
}
object A {
  implicit def fromInt(n: Int) = new A(n)
}
// This becomes possible:
1 + new A(1)
// because it is converted into this:
A.fromInt(1) + new A(1)
This is available since Scala 2.9.1.
Implicit Scope of Type Arguments
This is required to make the type class pattern really work. Consider Ordering, for instance: It comes with some implicits in its companion object, but you can't add stuff to it. So how can you make an Ordering for your own class that is automatically found?
Let's start with the implementation:
class A(val n: Int)
object A {
  implicit val ord = new Ordering[A] {
    def compare(x: A, y: A) = implicitly[Ordering[Int]].compare(x.n, y.n)
  }
}
So, consider what happens when you call
List(new A(5), new A(2)).sorted
As we saw, the method sorted expects an Ordering[A] (actually, it expects an Ordering[B], where B >: A). There isn't any such thing inside Ordering, and there is no "source" type on which to look. Obviously, it is finding it inside A, which is a type argument of Ordering.
This is also how various collection methods expecting CanBuildFrom work: the implicits are found inside companion objects to the type parameters of CanBuildFrom.
Note: Ordering is defined as trait Ordering[T], where T is a type parameter. Previously, I said that Scala looked inside type parameters, which doesn't make much sense. The implicit looked for above is Ordering[A], where A is an actual type, not type parameter: it is a type argument to Ordering. See section 7.2 of the Scala specification.
This is available since Scala 2.8.0.
Outer Objects for Nested Types
I haven't actually seen examples of this. I'd be grateful if someone could share one. The principle is simple:
class A(val n: Int) {
  class B(val m: Int) { require(m < n) }
}
object A {
  implicit def bToString(b: A#B) = "B: %d" format b.m
}
val a = new A(5)
val b = new a.B(3)
val s: String = b // s == "B: 3"
Other Dimensions
I'm pretty sure this was a joke, but this answer might not be up to date. So don't take it as the final arbiter of what is happening, and if you do notice it has gotten out of date, please inform me so that I can fix it.
EDIT
Related questions of interest:
Context and view bounds
Chaining implicits
Scala: Implicit parameter resolution precedence
I wanted to find out the precedence of implicit parameter resolution, not just where the compiler looks, so I wrote a blog post, revisiting implicits without import tax (and implicit parameter precedence again, after some feedback).
Here's the list:
1) Implicits visible to the current invocation scope via local declarations, imports, outer scope, inheritance, and package objects, accessible without a prefix.
2) Implicit scope, which contains all sorts of companion objects and package objects that bear some relation to the type of the implicit we are searching for (i.e., the package object of the type, the companion object of the type itself, of its type constructor if any, of its parameters if any, and also of its supertypes and supertraits).
If at either stage we find more than one implicit, the static overloading rules are used to resolve the ambiguity.
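A small sketch of the two stages (hypothetical names):

class Box(val n: Int)
object Box {
  implicit val fromCompanion: Box = new Box(0) // stage 2: implicit scope
}

def unbox(implicit b: Box): Int = b.n

unbox // 0: nothing eligible in stage 1, so the companion object is consulted

locally {
  implicit val local: Box = new Box(1) // stage 1: visible in the invocation scope
  unbox                                // 1: stage 1 is searched first and wins
}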
Why does Jorge Ortiz advise avoiding method overloading?
Overloading makes it a little harder to lift a method to a function:
object A {
  def foo(a: Int) = 0
  def foo(b: Boolean) = 0
  def foo(a: Int, b: Int) = 0
  val function = foo _ // fails; must use = foo(_, _) or (a: Int) => foo(a)
}
You cannot selectively import one of a set of overloaded methods.
There is a greater chance that ambiguity will arise when trying to apply implicit views to adapt the arguments to the parameter types:
scala> implicit def S2B(s: String) = !s.isEmpty
S2B: (s: String)Boolean
scala> implicit def S2I(s: String) = s.length
S2I: (s: String)Int
scala> object test { def foo(a: Int) = 0; def foo(b: Boolean) = 1; foo("") }
<console>:15: error: ambiguous reference to overloaded definition,
both method foo in object test of type (b: Boolean)Int
and method foo in object test of type (a: Int)Int
match argument types (java.lang.String)
object test { def foo(a: Int) = 0; def foo(b: Boolean) = 1; foo("") }
It can quietly render default parameters unusable:
object test {
  def foo(a: Int) = 0
  def foo(a: Int, b: Int = 0) = 1
}
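Calling the sketch above makes the problem visible: the one-argument alternative always wins, so the default for b can never fire by omitting it:

test.foo(1)    // resolves to foo(a: Int) and returns 0; there is no way to
               // write a call that means foo(1, b = 0)
test.foo(1, 2) // the only way to reach the overload carrying the default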
Individually, these reasons don't compel you to completely shun overloading. I feel like I'm missing some bigger problems.
UPDATE
The evidence is stacking up.
It complicates the spec
It can render implicits unsuitable for use in view bounds.
It limits you to introducing defaults for parameters on only one of the overloaded alternatives.
Because the arguments will be typed without an expected type, you can't pass anonymous function literals like '_.foo' as arguments to overloaded methods.
UPDATE 2
You can't (currently) use overloaded methods in package objects.
Applicability errors are harder to diagnose for callers of your API.
UPDATE 3
Static overload resolution can rob an API of all type safety:
scala> object O { def apply[T](ts: T*) = (); def apply(f: (String => Int)) = () }
defined object O
scala> O((i: String) => f(i)) // oops, I meant to call the second overload but someone changed the return type of `f` when I wasn't looking...
The reasons that Gilad and Jason (retronym) give are all very good reasons to avoid overloading if possible. Gilad's reasons focus on why overloading is problematic in general, whereas Jason's reasons focus on why it's problematic in the context of other Scala features.
To Jason's list, I would add that overloading interacts poorly with type inference. Consider:
val x = ...
foo(x)
A change in the inferred type of x could alter which foo method gets called. The value of x need not change, just the inferred type of x, which could happen for all sorts of reasons.
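A hedged sketch of that hazard (hypothetical names):

object Inference {
  def foo(i: Int): String = "Int overload"
  def foo(a: Any): String = "Any overload"

  val x = 1  // inferred as Int
  foo(x)     // "Int overload"

  val y = 1L // someone changes the initializer, and y is now inferred as Long
  foo(y)     // "Any overload": the call site is untouched, yet resolution changed
}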
For all of the reasons given (and a few more I'm sure I'm forgetting), I think method overloading should be used as sparingly as possible.
I think the advice is not meant for Scala especially, but for OO in general (as far as I know, Scala is meant to be a best-of-breed blend of OO and functional programming).
Overriding is fine, it's the heart of polymorphism and is central to OO design.
Overloading, on the other hand, is more problematic. With method overloading it is hard to discern which method will really be invoked, and it is indeed frequently a source of confusion. There is also rarely a justification for why overloading is really necessary. Most of the time the problem can be solved another way, and I agree that overloading is a smell.
Here is an article that explains nicely what I mean by "overloading is a source of confusion", which I think is the prime reason it's discouraged. It's about Java, but I think it applies to Scala as well.