I am confused by the following behavior: why does reducing an Array of Int work with math.max, while an Array of Float requires a wrapped function? I seem to remember this was not an issue in 2.9, but I'm not completely certain about that.
$ scala -version
Scala code runner version 2.10.2 -- Copyright 2002-2013, LAMP/EPFL
$ scala
scala> import scala.math._
scala> Array(1, 2, 4).reduce(max)
res47: Int = 4
scala> Array(1f, 3f, 4f).reduce(max)
<console>:12: error: type mismatch;
found : (Int, Int) => Int
required: (AnyVal, AnyVal) => AnyVal
Array(1f, 3f, 4f).reduce(max)
^
scala> def fmax(a: Float, b: Float) = max(a, b)
fmax: (a: Float, b: Float)Float
scala> Array(1f, 3f, 4f).reduce(fmax)
res45: Float = 4.0
Update: this does work
scala> Array(1f, 2f, 3f).reduce{(x,y) => math.max(x,y)}
res2: Float = 3.0
So is it just reduce(math.max) that can't be written in shorthand?
The first thing to note is that math.max is overloaded, and if the compiler has no hint about the expected argument types, it just picks one of the overloads (I'm not clear yet on what rules govern which overload is picked, but it will become clear before the end of this post).
Apparently it favors the overload that takes Int parameters over the others. This can be seen in the repl:
scala> math.max _
res6: (Int, Int) => Int = <function2>
That method is most specific because the first of the following compiles (by virtue of numeric widening conversions) and the second does not:
scala> (math.max: (Float,Float)=>Float)(1,2)
res0: Float = 2.0
scala> (math.max: (Int,Int)=>Int)(1f,2f)
<console>:8: error: type mismatch;
found : Float(1.0)
required: Int
(math.max: (Int,Int)=>Int)(1f,2f)
^
The test is whether one function applies to the param types of the other, and that test includes any conversions.
Now, the question is: why can't the compiler infer the correct expected type? It certainly knows that the type of Array(1f, 3f, 4f) is Array[Float].
We can get a clue if we replace reduce with reduceLeft: then it compiles fine.
So surely this has to do with a difference in the signature of reduceLeft and reduce.
We can reproduce the error with the following code snippet:
case class MyCollection[A]() {
def reduce[B >: A](op: (B, B) => B): B = ???
def reduceLeft[B >: A](op: (B, A) => B): B = ???
}
MyCollection[Float]().reduce(max) // Fails to compile
MyCollection[Float]().reduceLeft(max) // Compiles fine
The signatures are subtly different.
In reduceLeft the second argument is forced to A (the collection's type), so type inference is trivial: if A==Float (which the compiler knows), then the compiler knows that the only valid overload of max is one that takes a Float as its second argument. The compiler only finds one ( max(Float,Float) ), and it happens that the other constraint (that B >: A) is trivially satisfied (as B == A == Float for this overload).
This is different for reduce: both the first and second arguments can be any (same) super-type of A (that is, of Float in our specific case). This is a much more lax constraint, and while it could be argued that in this case the compiler could see that there is only one possibility, the compiler is not smart enough here.
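As a sanity check (my addition, not part of the original answer), pinning B by hand in the sketch above removes the ambiguity:

import scala.math.max

case class MyCollection[A]() {
  def reduce[B >: A](op: (B, B) => B): B = ???
}

// With B fixed to Float, only the (Float, Float) => Float overload of max can fit,
// so the call type-checks (the body is ???, so don't actually run it).
MyCollection[Float]().reduce[Float](max)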
Whether the compiler is supposed to be able to handle this case (meaning that this is an inference bug) or not, I must say I don't know. Type inference is a tricky business in scala, and as far as I know the spec is intentionally vague about what can be inferred or not.
Since there are useful applications such as:
scala> Array(1f,2f,3f).reduce[Any](_.toString+","+_.toString)
res3: Any = 1.0,2.0,3.0
trying overload resolution against every possible substitution of the type parameter is expensive and could change the result depending on the expected type you wind up with; or would it have to issue an ambiguity error?
Using -Xlog-implicits -Yinfer-debug shows the difference between reduce(math.max), where overload resolution happens first, and the version where the param type is solved for first:
scala> Array(1f,2f,3f).reduce(math.max(_,_))
[solve types] solving for A1 in ?A1
inferExprInstance {
tree scala.this.Predef.floatArrayOps(scala.Array.apply(1.0, 2.0, 3.0)).reduce[A1]
tree.tpe (op: (A1, A1) => A1)A1
tparams type A1
pt ?
targs Float
tvars =?Float
}
It looks like this is a bug in the inferrer, because with Int it infers types correctly:
private[this] val res2: Int = scala.this.Predef.intArrayOps(scala.Array.apply(1, 2, 4)).reduce[Int]({
((x: Int, y: Int) => scala.math.`package`.max(x, y))
});
but with Floats:
private[this] val res1: AnyVal = scala.this.Predef.floatArrayOps(scala.Array.apply(1.0, 3.0, 4.0)).reduce[AnyVal]({
((x: Int, y: Int) => scala.math.`package`.max(x, y))
});
If you explicitly annotate reduce with a Float type it should work:
Array(1f, 3f, 4f).reduce[Float](max)
private[this] val res3: Float = scala.this.Predef.floatArrayOps(scala.Array.apply(1.0, 3.0, 4.0)).reduce[Float]({
((x: Float, y: Float) => scala.math.`package`.max(x, y))
});
There is always scala.math.Ordering:
Array(1f, 2f, 3f).reduceOption(Ordering.Float.max)
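Two related alternatives (my note, not part of the original answer) that sidestep the overloaded math.max entirely:

// Ordering.Float.max is a single, non-overloaded method, so eta-expansion is unambiguous.
Array(1f, 2f, 3f).reduce(Ordering.Float.max)   // 3.0

// The collection's own max uses the implicit Ordering[Float] directly.
Array(1f, 2f, 3f).max                          // 3.0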
It doesn't seem to be a bug. Consider the following code:
class C1 {}
object C1 {
implicit def c2toc1(x: C2): C1 = new C1
}
class C2 {}
class C3 {
def f(x: C1): Int = 1
def f(x: C2): Int = 2
}
(new C3).f _ //> ... .C2 => Int = <function1>
If I remove the implicit conversion I get an "ambiguous reference" error. And because Int has an implicit conversion to Float, Scala picks the most specific type for max, which is (Int, Int) => Int. The closest common superclass of Int and Float is AnyVal, which is why you see (AnyVal, AnyVal) => AnyVal.
The reason why (x, y) => max(x, y) works is probably that eta-expansion is done before type inference, so with the bare max, reduce has to deal with an (Int, Int) => Int, which is then widened to (AnyVal, AnyVal) => AnyVal.
UPDATE: Meanwhile, (new C3).f(_) fails with a "missing parameter type" error, which means f(_) depends on type inference and doesn't consider implicit conversions, while f _ doesn't need a parameter type and expands to the most specific overload if Scala can find one.
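A small sketch of that difference (mine), reusing the C1/C2/C3 definitions above; the commented-out line fails exactly as described:

val g = (new C3).f _        // eta-expansion picks the most specific overload: C2 => Int
// val h = (new C3).f(_)    // error: missing parameter type for expanded function
val h = (new C3).f(_: C2)   // annotating the placeholder selects the overload explicitly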
Related
Does anyone know how we can convert any Seq to _* automatically? It's quite cumbersome to force the type every time we have a Seq and the method takes a vararg parameter.
def mean[T: Numeric](elems: T*): Double
...
val elems = Seq(1.0, 2.0, 3.0)
mean(elems) // this doesn't compile
mean(elems: _*) // this compiles but it is cumbersome
That's the only way. It's one reason why varargs are arguably best used only at the public interface of a library, and even then only when you expect callers to pass literally specified elements rather than a collection. If a method is likely to be called with a collection argument, varargs can backfire in their goal of reducing syntactic noise, as you've noticed.
If the method isn't generic, you can add an overload:
def mean(elems: Seq[Double]): Double = ...
def mean(elems: Double*)(implicit d: DummyImplicit): Double = mean(elems)
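A quick illustration (mine) of how that overload pair resolves at the call site, assuming the two mean definitions above:

mean(Seq(1.0, 2.0, 3.0))   // picks the Seq[Double] overload, no ": _*" needed
mean(1.0, 2.0, 3.0)        // picks the varargs overload; a DummyImplicit is always available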
Alas, the same trick doesn't work for the generic version:
scala> object X { def f[T: Numeric](x: T*) = x; def f[T: Numeric](x: Seq[T])(implicit d: DummyImplicit) = x }
defined module X
scala> X.f(Seq(1, 2))
<console>:9: error: ambiguous reference to overloaded definition, both method f in object X of type [T](x: Seq[T])(implicit evidence$2: Numeric[T], implicit d: DummyImplicit)Seq[T] and method f in object X of type [T](x: T*)(implicit evidence$1: Numeric[T])Seq[T] match argument types (Seq[Int])
X.f(Seq(1, 2))
^
because the compiler thinks T could be Int or Seq[Int], and stops before checking whether implicits are available for both (at least in Scala 2.10).
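One way around it (my own workaround, not from the thread) is to drop the overloading and give the Seq-taking variant its own, hypothetical name:

object Stats {
  def mean[T](elems: T*)(implicit num: Numeric[T]): Double =
    num.toDouble(elems.sum) / elems.size

  // Hypothetical companion method: same behaviour, but no overload to disambiguate.
  def meanOf[T: Numeric](elems: Seq[T]): Double = mean(elems: _*)
}

Stats.mean(1.0, 2.0, 3.0)        // literal elements
Stats.meanOf(Seq(1.0, 2.0, 3.0)) // a collection, without ": _*" at the call site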
scala> def b(x:Int) = { x match { case 1 => 1; case 2 => 3.5; case k => throw new Exception("Nothing")}}
b: (x: Int)AnyVal
scala> def c(x: Int) = if (x == 1) 1 else if (x == 2) 3.5 else throw new Exception("Nothing")
c: (x: Int)Double
This is what I got from the REPL. Why does the Scala compiler treat function b's return type as AnyVal? I think it should be Double.
Any pointers would be helpful.
Nothing is a subtype of every type (see Scaladoc). This is necessary to allow expressions such as
val x : Int = ???
The least common supertype of Int and Double is AnyVal. Nothing, being a subtype of every type (including AnyVal), therefore does not change the inferred type.
You can declare it as def b(x:Int): Double if you need it to be treated that way.
Without the annotation, the compiler infers the widest common type of the branches (AnyVal here). Type inference isn't perfect; sometimes you have to help the magic :)
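Spelling out the suggested fix (my example): with an expected type of Double, the Int literal widens and the throw (of type Nothing) conforms:

def b(x: Int): Double = x match {
  case 1 => 1                              // Int literal widens to Double
  case 2 => 3.5
  case _ => throw new Exception("Nothing") // Nothing conforms to Double
}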
Background
I recently attended a beginner Scala meetup and we were talking about the difference between methods and functions (also discussed in-depth here).
For example:
scala> val one = 1
one: Int = 1
scala> val addOne = (x: Int) => x + 1
addOne: Int => Int = <function1>
This demonstrates that vals can not only have an integer type, but can have a function type. We can see the type in the scala repl:
scala> :type addOne
Int => Int
We can also define methods in objects and classes:
scala> object Foo {
| def timesTwo(op: Int) = op * 2
| }
defined module Foo
And while a method doesn't have a type (but rather has a type signature), we can lift it into a function to see what it is:
scala> :type Foo.timesTwo
<console>:9: error: missing arguments for method timesTwo in object Foo;
follow this method with `_' if you want to treat it as a partially applied function
Foo.timesTwo
^
scala> :type Foo.timesTwo _
Int => Int
So far, so good. We even talked about how functions are actually objects with an apply method and how we can desugar expressions to show this:
scala> Foo.timesTwo _ apply(4)
res0: Int = 8
scala> addOne.apply(3)
res1: Int = 4
To me, this is quite helpful in learning the language because I can internalize what the syntax is actually implying.
Problematic Example
We did, however, run into a situation that we could not identify. Take, for example, a list of strings. We can map functions over the values demonstrating basic Scala collections and functional programming stuff:
scala> List(1,2,3).map(_*4)
res2: List[Int] = List(4, 8, 12)
Ok, so what is the type of List(1,2,3).map? I would expect the same :type trick to work in the REPL:
scala> :type List(1,2,3).map _
<console>:8: error: Cannot construct a collection of type Nothing with elements of type Nothing based on a collection of type List[Int].
List(1,2,3).map _
^
From the API definition, I know the signature is:
def map[B](f: (A) ⇒ B): List[B]
But there is also a full signature:
def map[B, That](f: (A) ⇒ B)(implicit bf: CanBuildFrom[List[A], B, That]): That
Question
So there are two things I don't quite understand:
Why doesn't the normal function lift trick work with List.map? Is there a way to desugar the erroneous statement to demonstrate what is going on?
If the reason the method can't be lifted is the implicit in the full signature, what exactly is going on there?
Finally, is there a robust way to inspect both types and signatures from the REPL?
The problem you've encountered has to do with the fact that, in Scala, functions are monomorphic, while methods can be polymorphic. As a result, the type parameters B and That must be known in order to create a function value for List.map.
The compiler attempts to infer the parameters but can't come up with anything sensible. If you supply parameters, you'll get a valid function type:
scala> List(1,2,3).map[Char, List[Char]] _
res0: (Int => Char) => List[Char] = <function1>
scala> :type res0
(Int => Char) => List[Char]
Without an actual function argument, the inferred type of the function parameter is Int => Nothing, and the target collection type That is also Nothing. There is no suitable CanBuildFrom[List[Int], Nothing, Nothing] in scope, which we can see by entering implicitly[CanBuildFrom[List[Int], Nothing, Nothing]] in the REPL (it produces the same error). If you supply the type parameters, then you can get a function:
scala> :type List(1,2,3).map[Int, List[Int]] _
(Int => Int) => List[Int]
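And just to show that the resulting value really is an ordinary Function1 (my example), you can apply it directly:

val liftedMap = List(1, 2, 3).map[Int, List[Int]] _   // (Int => Int) => List[Int]
liftedMap(_ * 4)                                       // List(4, 8, 12)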
I don't think you can inspect method signatures in the REPL. That's what Scaladoc is for.
When adding values to a Map, why does Scala require the additional set of parentheses to make this statement work?
Does NOT compile:
vmap += (item.getName(), item.getString()) // compiler output: "found: String"
However, this Does compile:
vmap += ((item.getName(), item.getString())) // note the second set of parentheses
TIA
EDIT: vmap is defined as
val vmap = new mutable.HashMap[String, String]()
Epilogue:
At the time of this edit there are posts detailing two possible explanations, both of which appear to contain elements of truth. Which one is actually correct? I couldn't say with any degree of certainty... I'm just a guy who's still learning the language. That being said, I have changed the answer selection based upon the feeling that one answer is (at least to some extent) encompassed within the other - so I've opted for the larger picture, as I think it will provide a broader meaning for someone else in search of an answer. Ironically, I was trying to get a better understanding of how to flatten out some of the little nuances of the language, and what I've come to realize is that there are more of them than I had suspected. I'm not saying that's a bad thing - in fact (IMO) it's to be expected from any language that is this flexible and complex - but it sure does make a guy miss the black/white world of Assembly from time to time...
To draw this to an end, a couple observations:
1) The selected answer contains a link to a website full of Scala brain-benders (which I found extremely helpful in trying to understand some of the aforementioned quirks in the language). Highly recommended.
2) I did come across another interesting twist - whereas the single-parenthesis form (example above) does not work, change it to the following and it works just fine...
vmap += ("foo" -> "bar")
Which probably has something to do with matching method/function signatures, but that is just a guess on my part.
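That guess is on the right track (my note, not part of the original post): -> builds the pair before += ever sees the arguments, so the outer parentheses enclose a single Tuple2 rather than two separate arguments:

import scala.collection.mutable

val vmap = new mutable.HashMap[String, String]()
val pair: (String, String) = "foo" -> "bar"   // ArrowAssoc produces the Tuple2 up front
vmap += pair                                  // same as vmap += ("foo" -> "bar")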
The accepted answer is actually wrong.
The reason you don't get tupling for Map.+= is that the method is overloaded with a second method that takes two or more args.
The compiler will only try tupling if the number of args is wrong. But if you give it two args, and there's a method that takes two, that's what it chooses, even if it fails to type check.
It doesn't start trying all possible combinations until something works because that would be fraught with obscurity. (Cf implicits.)
scala> def f(p: Pair[String, Int]) = { val (s,i) = p; s.toInt + i }
f: (p: Pair[String,Int])Int
scala> f("2",3) // ok to tuple
res0: Int = 5
scala> trait F { def +=(p: Pair[String, Int]) = f(p) }
defined trait F
scala> val foo = new F {}
foo: F = $anon$1@6bc77f62
scala> foo += ("2",3) // ok to tuple
res1: Int = 5
scala> trait G { def +=(p: Pair[String, Int]) = f(p); def +=(p:(String,Int),q:(String,Int),r:(String,Int)*) = f(p)+f(q)+(r map f).sum }
defined trait G
scala> val goo = new G {}
goo: G = $anon$1@183aeac3
scala> goo += ("2",3) // sorry
<console>:12: error: type mismatch;
found : String("2")
required: (String, Int)
goo += ("2",3)
^
scala> goo += (("2",3),("4",5),("6",7))
res3: Int = 27
I'd be remiss not to mention your friend and mine, -Xlint, which will warn about untoward arg adaptations:
apm@mara:~/tmp$ skala -Xlint
Welcome to Scala version 2.11.0-20130811-132927-95a4d6e987 (OpenJDK 64-Bit Server VM, Java 1.7.0_25).
Type in expressions to have them evaluated.
Type :help for more information.
scala> def f(p: Pair[String, Int]) = { val (s,i) = p; s.toInt + i }
f: (p: Pair[String,Int])Int
scala> f("2",3)
<console>:9: warning: Adapting argument list by creating a 2-tuple: this may not be what you want.
signature: f(p: Pair[String,Int]): Int
given arguments: "2", 3
after adaptation: f(("2", 3): (String, Int))
f("2",3)
^
res0: Int = 5
scala> List(1).toSet()
<console>:8: warning: Adapting argument list by inserting (): this is unlikely to be what you want.
signature: GenSetLike.apply(elem: A): Boolean
given arguments: <none>
after adaptation: GenSetLike((): Unit)
List(1).toSet()
^
res3: Boolean = false
On the perils of adaptation, see the Adaptive Reasoning puzzler and this new one, which comes up a lot because we're taught that the presence or absence of parens is largely a matter of style, yet when parens really matter, using them wrong results in a type error.
Adapting a tuple in the presence of overloading:
scala> class Foo {
| def f[A](a: A) = 1 // A can be (Int,Int,Int)
| def f[A](a: A, a2: A) = 2
| }
defined class Foo
scala> val foo = new Foo
foo: Foo = Foo@2645d22d
scala> foo.f(0,0,0)
<console>:10: warning: Adapting argument list by creating a 3-tuple: this may not be what you want.
signature: Foo.f[A](a: A): Int
given arguments: 0, 0, 0
after adaptation: Foo.f((0, 0, 0): (Int, Int, Int))
foo.f(0,0,0)
^
res9: Int = 1
Because the Map += method takes a Tuple2[A, B] as its parameter.
You have to wrap the Tuple2[A, B] in its own parentheses, otherwise the compiler won't treat the pair as a Tuple2.
A Tuple2 is a simple pair A -> B.
val x = (5, 7); // the type is inferred to Tuple2[Int, Int];
A Map iteration makes this even more obvious:
map.foreach { case (key, value) => ...};
(key, value)// is a Tuple2
The first set of enclosing parentheses is treated as redundant grouping and skipped by the compiler.
The second set of enclosing parentheses creates the needed Tuple2.
val x = (item.getName(), item.getString());//Tuple2[String, String]
vmap += x; // THIS COMPILES
vmap += (item.getName(), item.getString())// is identical to
vmap += item.getName(), item.getString() // now it thinks you are adding a String, the result of item.getName()
vmap += ( (item.getName(), item.getString()) )// skips the first set, sees the Tuple, compiles.
From the SLS:
Postfix operators have lower precedence than infix operators,
so foo bar baz = foo.bar(baz)
In this case: vmap += (item.getName(), item.getString()) = vmap.+=(item.getName())
I followed the advice found here to define a function called square, and then tried to pass it to a function called twice. The functions are defined like this:
def square[T](n: T)(implicit numeric: Numeric[T]): T = numeric.times(n, n)
def twice[T](f: (T) => T, a: T): T = f(f(a))
When calling twice(square, 2), the REPL spits out an error message:
scala> twice(square, 2)
<console>:8: error: could not find implicit value for parameter numeric: Numeric[T]
twice(square, 2)
^
Anyone?
I disagree with everyone here except Andrew Phillips. Well, everyone so far. :-) The problem is here:
def twice[T](f: (T) => T, a: T): T = f(f(a))
You expect, as newcomers to Scala often do, the compiler to take both parameters of twice into account when inferring the correct types. Scala doesn't do that, though -- it only uses information from one parameter list to the next, not from one parameter to the next within the same list. That means the parameters f and a are analyzed independently, without the advantage of knowing what the other is.
That means, for instance, that this works:
twice(square[Int], 2)
Now, if you break it down into two parameter lists, then it also works:
def twice[T](a: T)(f: (T) => T): T = f(f(a))
twice(2)(square)
So, basically, everything you were trying to do was correct and should work, except for the part that you expected one parameter to help figuring out the type of the other parameter (as you wrote it).
Here's a session from the Scala REPL.
Welcome to Scala version 2.8.0.final (Java HotSpot(TM) 64-Bit Server VM, Java 1.6.0_20).
Type in expressions to have them evaluated.
Type :help for more information.
scala> def square[T : Numeric](n: T) = implicitly[Numeric[T]].times(n, n)
square: [T](n: T)(implicit evidence$1: Numeric[T])T
scala> def twice2[T](f: T => T)(a: T) = f(f(a))
twice2: [T](f: (T) => T)(a: T)T
scala> twice2(square)(3)
<console>:8: error: could not find implicit value for evidence parameter of type
Numeric[T]
twice2(square)(3)
^
scala> def twice3[T](a: T, f: T => T) = f(f(a))
twice3: [T](a: T,f: (T) => T)T
scala> twice3(3, square)
<console>:8: error: could not find implicit value for evidence parameter of type
Numeric[T]
twice3(3, square)
scala> def twice[T](a: T)(f: T => T) = f(f(a))
twice: [T](a: T)(f: (T) => T)T
scala> twice(3)(square)
res0: Int = 81
So evidently the type of "twice(3)" needs to be known before the implicit can be resolved. I guess that makes sense, but I'd still be glad if a Scala guru could comment on this one...
Another solution is to lift square into a partially applied function:
scala> twice(square(_:Int),2)
res1: Int = 16
This way the implicit is applied to square as in:
scala> twice(square(_:Int)(implicitly[Numeric[Int]]),2)
res3: Int = 16
There is yet another approach:
def twice[T:Numeric](f: (T) => T, a: T): T = f(f(a))
scala> twice[Int](square,2)
res1: Int = 16
But again, the type parameter doesn't get inferred.
Your problem is that square isn't a function (i.e. a scala.Function1[T, T], aka (T) => T). Instead it's a type-parameterized method with multiple argument lists, one of which is implicit ... there's no syntax in Scala to define an exactly equivalent function.
Interestingly, your use of the Numeric type class means that the usual encodings of higher-ranked functions in Scala don't directly apply here, but we can adapt them to this case and get something like this,
trait HigherRankedNumericFunction {
def apply[T : Numeric](t : T) : T
}
val square = new HigherRankedNumericFunction {
def apply[T : Numeric](t : T) : T = implicitly[Numeric[T]].times(t, t)
}
This gives us a higher-ranked "function" with its type parameter context-bounded to Numeric,
scala> square(2)
res0: Int = 4
scala> square(2.0)
res1: Double = 4.0
scala> square("foo")
<console>:8: error: could not find implicit value for evidence parameter of type Numeric[java.lang.String]
square("foo")
We can now define twice in terms of HigherRankedNumericFunction,
def twice[T : Numeric](f : HigherRankedNumericFunction, a : T) : T = f(f(a))
scala> twice(square, 2)
res2: Int = 16
scala> twice(square, 2.0)
res3: Double = 16.0
The obvious downside of this approach is that you lose out on the conciseness of Scala's monomorphic function literals.