Scala parenthesis semantics

When appending values to a Map, why does Scala require the additional set of parentheses to make this statement work?
Does NOT compile:
vmap += (item.getName(), item.getString()) // compiler output: "found: String"
However, this Does compile:
vmap += ((item.getName(), item.getString())) // note the second set of enclosures
TIA
EDIT: vmap is defined as
val vmap = new mutable.HashMap[String, String]()
Epilogue:
At the time of this edit there are posts detailing two possible explanations, both of which appear to contain elements of truth. Which one is actually correct? I couldn't say with any degree of certainty...I'm just a guy who's still learning the language. That being said, I have changed the answer selection based upon the feeling that the one answer is (at least to some extent) encompassed within the other - so I've opted for the larger picture, as I think it will provide a broader meaning for someone else in search of an answer. Ironically, I was trying to get a better understanding of how to flatten out some of the little nuances of the language, and what I've come to realize is that there are more of those than I had suspected. I'm not saying that's a bad thing - in fact (IMO) it's to be expected from any language this flexible and complex - but it sure does make a guy miss the black/white world of Assembly from time to time...
To draw this to an end, a couple observations:
1) The selected answer contains a link to a website full of Scala brain-benders (which I found extremely helpful in trying to understand some of the aforementioned quirks of the language). Highly recommended.
2) I did come across another interesting twist - whereas the single-parenthesis version (example above) does not work, change it to the following and it works just fine...
vmap += ("foo" -> "bar")
This probably has something to do with matching method/function signatures, but that is just a guess on my part.

The accepted answer is actually wrong.
The reason you don't get tupling for Map.+= is that the method is overloaded with a second method that takes two or more args.
The compiler will only try tupling if the number of args is wrong. But if you give it two args, and there's a method that takes two, that's what it chooses, even if it fails to type check.
It doesn't start trying all possible combinations until something works because that would be fraught with obscurity. (Cf. implicits.)
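For reference, these are (approximately) the two += overloads a mutable Map inherits from Growable; the wording is quoted from memory of the 2.10/2.11 collections, so treat it as a paraphrase:
// In scala.collection.generic.Growable[A]; for a mutable Map[K, V], A = (K, V)
def +=(elem: A): this.type                       // one key/value pair
def +=(elem1: A, elem2: A, elems: A*): this.type // two or more pairs
Giving += exactly two arguments therefore selects the second overload before tupling is ever considered, which is precisely the behaviour demonstrated below.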
scala> def f(p: Pair[String, Int]) = { val (s,i) = p; s.toInt + i }
f: (p: Pair[String,Int])Int
scala> f("2",3) // ok to tuple
res0: Int = 5
scala> trait F { def +=(p: Pair[String, Int]) = f(p) }
defined trait F
scala> val foo = new F {}
foo: F = $anon$1@6bc77f62
scala> foo += ("2",3) // ok to tuple
res1: Int = 5
scala> trait G { def +=(p: Pair[String, Int]) = f(p); def +=(p:(String,Int),q:(String,Int),r:(String,Int)*) = f(p)+f(q)+(r map f).sum }
defined trait G
scala> val goo = new G {}
goo: G = $anon$1@183aeac3
scala> goo += ("2",3) // sorry
<console>:12: error: type mismatch;
found : String("2")
required: (String, Int)
goo += ("2",3)
^
scala> goo += (("2",3),("4",5),("6",7))
res3: Int = 27
I'd be remiss not to mention your friend and mine, -Xlint, which will warn about untoward arg adaptations:
apm@mara:~/tmp$ skala -Xlint
Welcome to Scala version 2.11.0-20130811-132927-95a4d6e987 (OpenJDK 64-Bit Server VM, Java 1.7.0_25).
Type in expressions to have them evaluated.
Type :help for more information.
scala> def f(p: Pair[String, Int]) = { val (s,i) = p; s.toInt + i }
f: (p: Pair[String,Int])Int
scala> f("2",3)
<console>:9: warning: Adapting argument list by creating a 2-tuple: this may not be what you want.
signature: f(p: Pair[String,Int]): Int
given arguments: "2", 3
after adaptation: f(("2", 3): (String, Int))
f("2",3)
^
res0: Int = 5
scala> List(1).toSet()
<console>:8: warning: Adapting argument list by inserting (): this is unlikely to be what you want.
signature: GenSetLike.apply(elem: A): Boolean
given arguments: <none>
after adaptation: GenSetLike((): Unit)
List(1).toSet()
^
res3: Boolean = false
On the perils of adaptation, see the Adaptive Reasoning puzzler and this new one, which is pretty common because we learn that the presence or absence of parens is largely a matter of style - yet when parens really matter, using them wrong results in a type error.
Adapting a tuple in the presence of overloading:
scala> class Foo {
| def f[A](a: A) = 1 // A can be (Int,Int,Int)
| def f[A](a: A, a2: A) = 2
| }
defined class Foo
scala> val foo = new Foo
foo: Foo = Foo@2645d22d
scala> foo.f(0,0,0)
<console>:10: warning: Adapting argument list by creating a 3-tuple: this may not be what you want.
signature: Foo.f[A](a: A): Int
given arguments: 0, 0, 0
after adaptation: Foo.f((0, 0, 0): (Int, Int, Int))
foo.f(0,0,0)
^
res9: Int = 1

Because the Map += method takes a Tuple2[A, B] as a parameter.
You have to mark the Tuple2[A, B] with surrounding parentheses, otherwise the compiler won't infer the type as Tuple2.
A Tuple2 is a simple pair A -> B.
val x = (5, 7) // the type is inferred as Tuple2[Int, Int]
A Map iteration makes this even more obvious:
map.foreach { case (key, value) => ...};
(key, value)// is a Tuple2
The first set of enclosing parentheses is treated as redundant/skippable by the compiler.
The second set of enclosing parentheses creates the needed Tuple2.
val x = (item.getName(), item.getString()) // Tuple2[String, String]
vmap += x // THIS COMPILES
vmap += (item.getName(), item.getString()) // is identical to
vmap += item.getName(), item.getString() // now it thinks you are adding a String, the result of item.getName()
vmap += ( (item.getName(), item.getString()) ) // skips the first set, sees the Tuple2, compiles.
From the SLS:
Postfix operators have lower precedence than infix operators,
so foo bar baz = foo.bar(baz)
In this case: vmap += (item.getName(), item.getString()) = vmap.+=(item.getName())
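For the record, the questioner's -> variant works because -> (via ArrowAssoc) constructs the Tuple2 before += is ever resolved, so the single-argument overload matches with no adaptation needed. A minimal sketch:
import scala.collection.mutable

val vmap = mutable.HashMap[String, String]()
vmap += ("foo" -> "bar")   // "foo" -> "bar" is already a (String, String)
vmap += (("baz", "qux"))   // the extra parentheses build the tuple explicitly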

Related

Magnet pattern and overloaded methods

There is a significant difference in how Scala resolves implicit conversions from the "Magnet Pattern" for non-overloaded and overloaded methods.
Suppose there is a trait Apply (a variation of the "Magnet Pattern") implemented as follows.
trait Apply[A] {
def apply(): A
}
object Apply {
implicit def fromLazyVal[A](v: => A): Apply[A] = new Apply[A] {
def apply(): A = v
}
}
Now we create a trait Foo that has a single apply taking an instance of Apply, so we can pass it any value of arbitrary type A, since there is an implicit conversion A => Apply[A].
trait Foo[A] {
def apply(a: Apply[A]): A = a()
}
We can make sure it works as expected using the REPL and this workaround to de-sugar Scala code.
scala> val foo = new Foo[String]{}
foo: Foo[String] = $anon$1@3a248e6a
scala> showCode(reify { foo { "foo" } }.tree)
res9: String =
$line21$read.foo.apply(
$read.INSTANCE.Apply.fromLazyVal("foo")
)
This works great, but suppose we pass a complex expression (with ;) to the apply method.
scala> val foo = new Foo[Int]{}
foo: Foo[Int] = $anon$1@5645b124
scala> var i = 0
i: Int = 0
scala> showCode(reify { foo { i = i + 1; i } }.tree)
res10: String =
$line23$read.foo.apply({
$line24$read.`i_=`($line24$read.i.+(1));
$read.INSTANCE.Apply.fromLazyVal($line24$read.i)
})
As we can see, the implicit conversion has been applied only to the last part of the complex expression (i.e., i), not to the whole expression. So i = i + 1 was strictly evaluated at the moment we passed it to the apply method, which is not what we were expecting.
Good (or bad) news: we can make scalac use the whole expression in the implicit conversion, so i = i + 1 will be evaluated lazily as expected. To do so we (surprise, surprise!) add an overloaded method Foo.apply that takes any type, but not Apply.
trait Foo[A] {
def apply(a: Apply[A]): A = a()
def apply(s: Symbol): Foo[A] = this
}
And then.
scala> var i = 0
i: Int = 0
scala> val foo = new Foo[Int]{}
foo: Foo[Int] = $anon$1@3ff00018
scala> showCode(reify { foo { i = i + 1; i } }.tree)
res11: String =
$line28$read.foo.apply($read.INSTANCE.Apply.fromLazyVal({
$line27$read.`i_=`($line27$read.i.+(1));
$line27$read.i
}))
As we can see, the entire expression i = i + 1; i made it under the implicit conversion as expected.
So my question is: why is that? Why does the scope to which an implicit conversion is applied depend on whether or not there is an overloaded method in the class?
Now, that is a tricky one. And it's actually pretty awesome, I didn't know that "workaround" to the "lazy implicit does not cover full block" problem. Thanks for that!
What happens is related to expected types, and how they affect the way type inference, implicit conversions, and overloads work.
Type inference and expected types
First, we have to know that type inference in Scala is bi-directional. Most of the inference works bottom-up (given a: Int and b: Int, infer a + b: Int), but some things are top-down. For example, inferring the parameter types of a lambda is top-down:
def foo(f: Int => Int): Int = f(42)
foo(x => x + 1)
In the second line, after resolving foo to be def foo(f: Int => Int): Int, the type inferencer can tell that x must be of type Int. It does so before typechecking the lambda itself. It propagates type information from the function application down to the lambda, which is a parameter.
Top-down inference basically relies on the notion of expected type. When typechecking a node of the AST of the program, the typechecker does not start empty-handed. It receives an expected type from "above" (in this case, the function application node). When typechecking the lambda x => x + 1 in the above example, the expected type is Int => Int, because we know what parameter type is expected by foo. This drives the type inference into inferring Int for the parameter x, which in turn allows to typecheck x + 1.
Expected types are propagated down certain constructs, e.g., blocks ({}) and the branches of ifs and matches. Hence, you could also call foo with
foo({
val y = 1
x => x + y
})
and the typechecker is still able to infer x: Int. That is because, when typechecking the block { ... }, the expected type Int => Int is passed down to the typechecking of the last expression, i.e., x => x + y.
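The same propagation happens through the branches of an if (or a match). A small sketch reusing foo:
def foo(f: Int => Int): Int = f(42)
// The expected type Int => Int is pushed into both branches,
// so each lambda's parameter type is inferred to be Int:
def pick(flag: Boolean) = foo(if (flag) x => x + 1 else x => x - 1)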
Implicit conversions and expected types
Now, we have to introduce implicit conversions into the mix. When typechecking a node produces a value of type T, but the expected type for that node is U where T <: U is false, the typechecker looks for an implicit T => U (I'm probably simplifying things a bit here, but the gist is still true). This is why your first example does not work. Let us look at it closely:
trait Foo[A] {
def apply(a: Apply[A]): A = a()
}
val foo = new Foo[Int] {}
foo({
i = i + 1
i
})
When calling foo.apply, the expected type for the parameter (i.e., the block) is Apply[Int] (A has already been instantiated to Int). We can "write" this typechecker "state" like this:
{
i = i + 1
i
}: Apply[Int]
This expected type is passed down to the last expression of the block, which gives:
{
i = i + 1
(i: Apply[Int])
}
at this point, since i: Int and the expected type is Apply[Int], the typechecker finds the implicit conversion:
{
i = i + 1
fromLazyVal[Int](i)
}
which causes only i to be lazified.
Overloads and expected types
OK, time to throw overloads in there! When the typechecker sees an application of an overloaded method, it has much more trouble deciding on an expected type. We can see that with the following example:
object Foo {
def apply(f: Int => Int): Int = f(42)
def apply(f: String => String): String = f("hello")
}
Foo(x => x + 1)
gives:
error: missing parameter type
Foo(x => x + 1)
^
In this case, the failure of the typechecker to figure out an expected type causes the parameter type not to be inferred.
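As an aside, you can sidestep that particular failure by annotating the parameter type yourself, so the lambda gets a type bottom-up and overload resolution has something concrete to work with (a sketch):
// With an explicit parameter type, the lambda is Int => Int on its own,
// and the first overload is selected:
Foo((x: Int) => x + 1)   // 43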
If we take your "solution" to your issue, we have a different consequence:
trait Foo[A] {
def apply(a: Apply[A]): A = a()
def apply(s: Symbol): Foo[A] = this
}
val foo = new Foo[Int] {}
foo({
i = i + 1
i
})
Now when typechecking the block, the typechecker has no expected type to work with. It will therefore typecheck the last expression without an expected type, and eventually typecheck the whole block as an Int:
{
i = i + 1
i
}: Int
Only now, with an already typechecked argument, does it try to resolve the overloads. Since none of the overloads conforms directly, it tries to apply an implicit conversion from Int to either Apply[Int] or Symbol. It finds fromLazyVal[Int], which it applies to the entire argument. It does not push it inside the block anymore, giving:
fromLazyVal({
i = i + 1
i
}): Apply[Int]
In this case, the whole block is lazified.
This concludes the explanation. To summarize, the major difference is the presence vs absence of an expected type when typechecking the block. With an expected type, the implicit conversion is pushed down as much as possible, down to just i. Without the expected type, the implicit conversion is applied a posteriori on the entire argument, i.e., the whole block.
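This also suggests a more direct alternative to the overload trick: invoke the conversion explicitly, so the entire block is the by-name argument (a sketch built from the question's own definitions):
var i = 0
val foo = new Foo[Int] {}
// The whole block is now the by-name parameter of fromLazyVal,
// so nothing is evaluated until apply() forces it:
foo(Apply.fromLazyVal { i = i + 1; i })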

Why can't I flatMap a Try?

Given
val strings = Set("Hi", "there", "friend")
def numberOfCharsDiv2(s: String) = scala.util.Try {
if (s.length % 2 == 0) s.length / 2 else throw new RuntimeException("grr")
}
Why can't I flatMap away the Try resulting from the method call? i.e.
strings.flatMap(numberOfCharsDiv2)
<console>:10: error: type mismatch;
found : scala.util.Try[Int]
required: scala.collection.GenTraversableOnce[?]
strings.flatMap(numberOfCharsDiv2)
or
for {
s <- strings
n <- numberOfCharsDiv2(s)
} yield n
<console>:12: error: type mismatch;
found : scala.util.Try[Int]
required: scala.collection.GenTraversableOnce[?]
n <- numberOfCharsDiv2(s)
However if I use Option instead of Try there's no problem.
def numberOfCharsDiv2(s: String) = if (s.length % 2 == 0)
Some(s.length / 2) else None
strings.flatMap(numberOfCharsDiv2) // => Set(1, 3)
What's the rationale behind not allowing flatMap on Try?
Let's look at the signature of flatMap.
def flatMap[B](f: (A) => GenTraversableOnce[B]): Set[B]
Your numberOfCharsDiv2 is seen as String => Try[Int]. Try is not a subclass of GenTraversableOnce, and that is why you get the error. You don't strictly need a function that returns a Set just because you use flatMap on a Set; the function basically has to return some kind of collection.
So why does it work with Option? Option is also not a subclass of GenTraversableOnce, but there exists an implicit conversion inside the Option companion object that transforms it into a List.
implicit def option2Iterable[A](xo: Option[A]): Iterable[A] = xo.toList
Then one question remains. Why not have an implicit conversion for Try as well? Because you will probably not get what you want.
flatMap can be seen as a map followed by a flatten.
Imagine you have a List[Option[Int]] like List(Some(1), None, Some(2)). Then flatten will give you List(1,2) of type List[Int].
Now look at an example with Try. List(Success(1), Failure(exception), Success(2)) of type List[Try[Int]].
How will flatten work with the failure now?
Should it disappear like None? Then why not work directly with Option?
Should it be included in the result? Then it would be List(1, exception, 2). The problem here is that the type is List[Any], because you have to find a common superclass for Int and Throwable. You lose the type.
These are presumably the reasons why there isn't an implicit conversion. Of course you are free to define one yourself, if you accept the above consequences.
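For instance, a conversion that simply drops the failure, exactly like the Option conversion does (a sketch, not a recommendation - it silently discards the exception):
import scala.language.implicitConversions
import scala.util.Try
// Failures vanish from the result, just as None does for Option:
implicit def try2Iterable[A](t: Try[A]): Iterable[A] = t.toOption.toList
With that conversion in scope, strings.flatMap(numberOfCharsDiv2) compiles and behaves like the Option version.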
The problem is that in your example, you're not flatmapping over Try. The flatmap you are doing is over Set.
Flatmap over Set takes a Set[A], and a function from A to Set[B]. As Kigyo points out in his comment below, this isn't the actual type signature of flatMap on Set in Scala, but the general form of flatMap is:
M[A] => (A => M[B]) => M[B]
That is, it takes some higher-kinded type, along with a function that operates on elements of the type in that higher-kinded type, and it gives you back the same higher-kinded type with the mapped elements.
In your case, this means that for each element of your Set, flatmap expects a call to a function that takes a String, and returns a Set of some type B which could be String (or could be anything else).
Your function
numberOfCharsDiv2(s: String)
correctly takes a String, but incorrectly returns a Try, rather than another Set as flatMap requires.
Your code would work if you used map, as that allows you to take some structure - in this case Set - and run a function over each element, transforming it from an A to a B, without the function's return type having to conform to the enclosing structure (i.e., returning a Set):
strings.map(numberOfCharsDiv2)
res2: scala.collection.immutable.Set[scala.util.Try[Int]] = Set(Success(1), Failure(java.lang.RuntimeException: grr), Success(3))
It is a Monad in Scala 2.11:
scala> import scala.util._
import scala.util._
scala> val x: Try[String] = Success[String]("abc")
x: scala.util.Try[String] = Success(abc)
scala> val y: Try[String] = Failure[String](new Exception("oops"))
y: scala.util.Try[String] = Failure(java.lang.Exception: oops)
scala> val z = Try(x)
z: scala.util.Try[scala.util.Try[String]] = Success(Success(abc))
scala> val t = Try(y)
t: scala.util.Try[scala.util.Try[String]] = Success(Failure(java.lang.Exception: oops))
scala> z.flatten
res2: scala.util.Try[String] = Success(abc)
scala> t.flatten
res3: scala.util.Try[String] =
Failure(java.lang.UnsupportedOperationException: oops)
Kigyo explains well why Scala does not do this implicitly. To put it simply, Scala does not want to automatically throw away the exception that is preserved in a Try.
Scala does provide a simple way to explicitly translate a Try into an Option though. This is how you can use a Try in a flatMap:
strings.flatMap(numberOfCharsDiv2(_).toOption)
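This yields Set(1, 3) here - the same result as the Option-based version above.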

Why Scala doesn't allow List.map _ and type signatures in REPL

Background
I recently attended a beginner Scala meetup and we were talking about the difference between methods and functions (also discussed in-depth here).
For example:
scala> val one = 1
one: Int = 1
scala> val addOne = (x: Int) => x + 1
addOne: Int => Int = <function1>
This demonstrates that vals can not only have an integer type, but can have a function type. We can see the type in the scala repl:
scala> :type addOne
Int => Int
We can also define methods in objects and classes:
scala> object Foo {
| def timesTwo(op: Int) = op * 2
| }
defined module Foo
And while a method doesn't have a type (but rather has a type signature), we can lift it into a function to see what it is:
scala> :type Foo.timesTwo
<console>:9: error: missing arguments for method timesTwo in object Foo;
follow this method with `_' if you want to treat it as a partially applied function
Foo.timesTwo
^
scala> :type Foo.timesTwo _
Int => Int
So far, so good. We even talked about how functions are actually objects with an apply method and how we can de-syntactic sugarify expressions to show this:
scala> Foo.timesTwo _ apply(4)
res0: Int = 8
scala> addOne.apply(3)
res1: Int = 4
To me, this is quite helpful in learning the language because I can internalize what the syntax is actually implying.
Problematic Example
We did, however, run into a situation that we could not identify. Take, for example, a list of strings. We can map functions over the values demonstrating basic Scala collections and functional programming stuff:
scala> List(1,2,3).map(_*4)
res2: List[Int] = List(4, 8, 12)
OK, so what is the type of List(1,2,3).map _? I would expect we could do the same :type trick in the REPL:
scala> :type List(1,2,3).map _
<console>:8: error: Cannot construct a collection of type Nothing with elements of type Nothing based on a collection of type List[Int].
List(1,2,3).map _
^
From the API definition, I know the signature is:
def map[B](f: (A) ⇒ B): List[B]
But there is also a full signature:
def map[B, That](f: (A) ⇒ B)(implicit bf: CanBuildFrom[List[A], B, That]): That
Question
So there are two things I don't quite understand:
Why doesn't the normal function lift trick work with List.map? Is there a way to de-syntactic sugar the erroneous statement to demonstrate what is going on?
If the reason that the method can't be lifted is due to the full signature "implicit", what exactly is going on there?
Finally, is there a robust way to inspect both types and signatures from the REPL?
The problem you've encountered has to do with the fact that, in Scala, functions are monomorphic, while methods can be polymorphic. As a result, the type parameters B and That must be known in order to create a function value for List.map.
The compiler attempts to infer the parameters but can't come up with anything sensible. If you supply the type parameters, you'll get a valid function type:
scala> List(1,2,3).map[Char, List[Char]] _
res0: (Int => Char) => List[Char] = <function1>
scala> :type res0
(Int => Char) => List[Char]
Without an actual function argument, the inferred type of the function is Int => Nothing, but the target collection is also Nothing. There is no suitable CanBuildFrom[List[Int], Nothing, Nothing] in scope, which we can see by entering implicitly[CanBuildFrom[List[Int], Nothing, Nothing]] in the REPL (it fails with the same error). If you supply the type parameters, then you can get a function:
scala> :type List(1,2,3).map[Int, List[Int]] _
(Int => Int) => List[Int]
I don't think you can inspect method signatures in the REPL. That's what Scaladoc is for.
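That said, if you really want the full signature out of the REPL, runtime reflection can print it (a sketch, assuming the Scala 2.11 reflection API):
import scala.reflect.runtime.universe._
// Look up the `map` member of List[Int] and print its complete signature,
// including the implicit CanBuildFrom parameter list:
println(typeOf[List[Int]].member(TermName("map")).typeSignature)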

reducing an Array of Float using scala.math.max

I am confused by the following behavior - why does reducing an Array of Int work using math.max, but an Array of Float requires a wrapped function? I seem to remember that this was not an issue in 2.9, but I'm not completely certain about that.
$ scala -version
Scala code runner version 2.10.2 -- Copyright 2002-2013, LAMP/EPFL
$ scala
scala> import scala.math._
scala> Array(1, 2, 4).reduce(max)
res47: Int = 4
scala> Array(1f, 3f, 4f).reduce(max)
<console>:12: error: type mismatch;
found : (Int, Int) => Int
required: (AnyVal, AnyVal) => AnyVal
Array(1f, 3f, 4f).reduce(max)
^
scala> def fmax(a: Float, b: Float) = max(a, b)
fmax: (a: Float, b: Float)Float
scala> Array(1f, 3f, 4f).reduce(fmax)
res45: Float = 4.0
update : this does work
scala> Array(1f, 2f, 3f).reduce{(x,y) => math.max(x,y)}
res2: Float = 3.0
so then it is just reduce(math.max) which cannot be shorthanded?
The first thing to note is that math.max is overloaded, and if the compiler has no hint about the expected argument types, it just picks one of the overloads (I'm not clear yet on what rules govern which overload is picked, but it will become clear before the end of this post).
Apparently it favors the overload that takes Int parameters over the others. This can be seen in the repl:
scala> math.max _
res6: (Int, Int) => Int = <function2>
That method is most specific because the first of the following compiles (by virtue of numeric widening conversions) and the second does not:
scala> (math.max: (Float,Float)=>Float)(1,2)
res0: Float = 2.0
scala> (math.max: (Int,Int)=>Int)(1f,2f)
<console>:8: error: type mismatch;
found : Float(1.0)
required: Int
(math.max: (Int,Int)=>Int)(1f,2f)
^
The test is whether one function applies to the param types of the other, and that test includes any conversions.
Now, the question is: why can't the compiler infer the correct expected type? It certainly knows that the type of Array(1f, 3f, 4f) is Array[Float].
We can get a clue if we replace reduce with reduceLeft: then it compiles fine.
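For instance, in the REPL (with max imported as above):
scala> Array(1f, 3f, 4f).reduceLeft(max)
res0: Float = 4.0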
So surely this has to do with a difference in the signature of reduceLeft and reduce.
We can reproduce the error with the following code snippet:
case class MyCollection[A]() {
def reduce[B >: A](op: (B, B) => B): B = ???
def reduceLeft[B >: A](op: (B, A) => B): B = ???
}
MyCollection[Float]().reduce(max) // Fails to compile
MyCollection[Float]().reduceLeft(max) // Compiles fine
The signatures are subtly different.
In reduceLeft the second argument is forced to A (the collection's element type), so type inference is trivial: if A == Float (which the compiler knows), then the compiler knows that the only valid overload of max is one that takes a Float as its second argument. It finds only one (max(Float, Float)), and it happens that the other constraint (that B >: A) is trivially satisfied (as B == A == Float for this overload).
This is different for reduce: both the first and second arguments can be any (same) super-type of A (that is, of Float in our specific case). This is a much more lax constraint, and while it could be argued that in this case the compiler could see that there is only one possibility, the compiler is not smart enough here.
Whether the compiler is supposed to be able to handle this case (meaning that this is an inference bug) or not, I must say I don't know. Type inference is a tricky business in Scala, and as far as I know the spec is intentionally vague about what can and cannot be inferred.
Since there are useful applications such as:
scala> Array(1f,2f,3f).reduce[Any](_.toString+","+_.toString)
res3: Any = 1.0,2.0,3.0
trying overload resolution against every possible substitution of the type parameter is expensive and could change the result depending on the expected type you wind up with; or would it have to issue an ambiguity error?
Using -Xlog-implicits -Yinfer-debug shows the difference between reduce(math.max), where overload resolution happens first, and the version where the param type is solved for first:
scala> Array(1f,2f,3f).reduce(math.max(_,_))
[solve types] solving for A1 in ?A1
inferExprInstance {
tree scala.this.Predef.floatArrayOps(scala.Array.apply(1.0, 2.0, 3.0)).reduce[A1]
tree.tpe (op: (A1, A1) => A1)A1
tparams type A1
pt ?
targs Float
tvars =?Float
}
It looks like this is a bug in the inferencer, because with Int it infers the types correctly:
private[this] val res2: Int = scala.this.Predef.intArrayOps(scala.Array.apply(1, 2, 4)).reduce[Int]({
((x: Int, y: Int) => scala.math.`package`.max(x, y))
});
but with Floats:
private[this] val res1: AnyVal = scala.this.Predef.floatArrayOps(scala.Array.apply(1.0, 3.0, 4.0)).reduce[AnyVal]({
((x: Int, y: Int) => scala.math.`package`.max(x, y))
});
If you explicitly annotate reduce with a Float type it should work:
Array(1f, 3f, 4f).reduce[Float](max)
private[this] val res3: Float = scala.this.Predef.floatArrayOps(scala.Array.apply(1.0, 3.0, 4.0)).reduce[Float]({
((x: Float, y: Float) => scala.math.`package`.max(x, y))
});
There is always scala.math.Ordering:
Array(1f, 2f, 3f).reduceOption(Ordering.Float.max)
It doesn't seem to be a bug. Consider the following code:
class C1 {}
object C1 {
implicit def c2toc1(x: C2): C1 = new C1
}
class C2 {}
class C3 {
def f(x: C1): Int = 1
def f(x: C2): Int = 2
}
(new C3).f _ //> ... .C2 => Int = <function1>
If I remove the implicit conversion I will get an "ambiguous reference" error. And because Int has an implicit conversion to Float, Scala tries to find the most specific overload of max, which is (Int, Int) => Int. The closest common superclass of Int and Float is AnyVal; that's why you see (AnyVal, AnyVal) => AnyVal.
The reason why (x, y) => max(x, y) works is probably because eta-expansion is done before type inference, and reduce has to deal with (Int, Int) => Int, which will be converted to (AnyVal, AnyVal) => AnyVal.
UPDATE: Meanwhile (new C3).f(_) will fail with a "missing parameter type" error, which means f(_) depends on type inference and doesn't consider implicit conversions, while f _ doesn't need a parameter type and will expand to the overload with the most specific argument type if Scala can find one.
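A sketch of that contrast, using the same C3 (the commented line is the one that fails to compile):
val g = (new C3).f _     // OK: eta-expansion picks the most specific overload, C2 => Int
// val h = (new C3).f(_) // error: missing parameter type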

Scala type parameter being inferred to tuple

I suddenly came across this (unexpected to me) situation:
def method[T](x: T): T = x
scala> method(1)
res4: Int = 1
scala> method(1, 2)
res5: (Int, Int) = (1,2)
Why, in the case of two or more parameters, does the method infer and return a tuple instead of throwing an error about the parameter list?
Is this intentional? Maybe this phenomenon has a name?
Here is the excerpt from scala compiler:
/** Try packing all arguments into a Tuple and apply `fun'
* to that. This is the last thing which is tried (after
* default arguments)
*/
def tryTupleApply: Option[Tree] = ...
And here is related issue: Spec doesn't mention automatic tupling
It all means that in the above example (a type-parameterized method of one argument) Scala tries to pack the parameters into a tuple and apply the function to that tuple. From these two short pieces of information we may conclude that this behaviour is not mentioned in the language specification, that people have discussed adding compiler warnings for such cases, and that it may be called auto-tupling.
% scala2.10 -Xlint
scala> def method[T](x: T): T = x
method: [T](x: T)T
scala> method(1)
res1: Int = 1
scala> method(1, 2)
<console>:9: warning: Adapting argument list by creating a 2-tuple: this may not be what you want.
signature: method[T](x: T): T
given arguments: 1, 2
after adaptation: method((1, 2): (Int, Int))
method(1, 2)
^
res2: (Int, Int) = (1,2)