Type inference still needs improvement: any better suggestion for this example? - scala

For instance, in Clojure:
user=> (map #(* % 2) (concat [1.1 3] [5 7]))
(2.2 6 10 14)
but in Scala:
scala> List(1.1,3) ::: List(5, 7) map (_ * 2)
<console>:6: error: value * is not a member of AnyVal
List(1.1,3) ::: List(5, 7) map (_ * 2)
^
Here ::: yields a list with an unhelpfully general element type, and the map then fails. Can this be written as intuitively as the Clojure version above?

Here are your individual lists:
scala> List(1.1,3)
res1: List[Double] = List(1.1, 3.0)
scala> List(5, 7)
res2: List[Int] = List(5, 7)
The computed least upper bound (LUB) of Double and Int, needed to capture the element type of the new list that contains elements of both lists passed to :::, is AnyVal. AnyVal also includes Boolean, for example, so no numeric operations are defined on it.
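You can see the inferred LUB directly by concatenating without the map (a quick sketch; the res name will differ in your session):
scala> List(1.1, 3) ::: List(5, 7)
res0: List[AnyVal] = List(1.1, 3.0, 5, 7)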

As Randall has already said, the common supertype of Double and Int is AnyVal, which is inferred in this case. The only way I could make your example work is to add a type parameter to the second list:
scala> List[Double](1.1,3) ::: List(5, 7) map (_ * 2)
:6: error: value * is not a member of AnyVal
List[Double](1.1,3) ::: List(5, 7) map (_ * 2)
scala> List(1.1,3) ::: List[Double](5, 7) map (_ * 2)
res: List[Double] = List(2.2, 6.0, 10.0, 14.0)
I guess that in the latter case the implicit conversion from Int to Double is applied. I'm not sure why it is not applied when the type parameter is added to the first list, however.
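If you would rather not annotate either list, another option (a sketch, not part of the original answer) is to widen the Ints explicitly before concatenating, so no common supertype has to be inferred:
scala> List(1.1, 3) ::: List(5, 7).map(_.toDouble) map (_ * 2)
res: List[Double] = List(2.2, 6.0, 10.0, 14.0)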

The first list is of type List[Double] because of literal type widening. Scala sees the literals and notes that, even though they are of different types, they can be unified by widening some of them. If there were no type widening, the least upper bound, AnyVal, would be adopted instead.
List(1.1 /* Double literal */, 3 /* Int literal */)
The second list is clearly List[Int], though explicitly asking for Double will result in type widening of the literals.
List(5 /* Int literal */, 7 /* Int literal */)
Now, it's important to note that type widening is something that happens at compile time. The compiled code will not contain any Int 3, only Double 3.0. Once a list has been created, however, it is not possible to do type widening, because the stored objects are, in fact, different.
So, once you concatenate the two lists, the resulting element type will be a common supertype of Double and Int, namely AnyVal. As a result of Java interoperability, however, AnyVal cannot contain any useful methods (such as numeric operators).
I do wonder what Clojure does internally. Does it convert integers into doubles when concatenating? Or does it store everything as Object (like Scala does) but has smarter math operators?
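As an aside (a sketch, not part of the original answer): if you do end up with a List[AnyVal], one verbose escape hatch is to pattern match on the element types:
val mixed = List(1.1, 3) ::: List(5, 7)   // List[AnyVal]
val doubled = mixed map {
  case d: Double => d * 2
  case i: Int    => i * 2
}                                         // List[AnyVal] = List(2.2, 6.0, 10, 14)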

I see two ways to make it work: parentheses around the second part
scala> List (1.1, 3) ::: (List (5, 7) map (_ * 2))
res6: List[AnyVal] = List(1.1, 3, 10, 14)
or explicit Doubles everywhere:
scala> List (1.1, 3.) ::: List (5., 7.) map (_ * 2)
res9: List[Double] = List(2.2, 6.0, 10.0, 14.0)
Note that the first variant only doubles the second list, so the two are of course semantically different.

Why not just
(List(1.1,3) ::: List(5, 7)).asInstanceOf[List[Double]] map (_ * 2)
?
(Note that this only appears to work: the cast is unchecked because of erasure, and the subsequent map throws a ClassCastException at runtime when it tries to unbox the Int elements as Double.)

Related

Lists in Scala - plus colon vs double colon (+: vs ::)

I am a little bit confused about the +: and :: operators that are available.
It looks like both of them give the same result.
scala> List(1,2,3)
res0: List[Int] = List(1, 2, 3)
scala> 0 +: res0
res1: List[Int] = List(0, 1, 2, 3)
scala> 0 :: res0
res2: List[Int] = List(0, 1, 2, 3)
To my novice eye, the source code for both methods looks similar (the plus-colon method has an additional generic constraint and makes use of builder factories).
Which one of these methods should be used and when?
+: works with any kind of sequence, while :: is an implementation specific to List.
If you look at the source for +: closely, you will notice that it actually calls :: when the expected return type is List. That is because :: is implemented more efficiently for the List case: it simply connects the new head to the existing list and returns the result, which is a constant-time operation, as opposed to linearly copying the entire collection in the generic case of +:.
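A small sketch of the difference in applicability, using Vector as an arbitrary non-List sequence:
val v = Vector(1, 2, 3)
val l = List(1, 2, 3)
0 +: v     // Vector(0, 1, 2, 3) -- +: is available on any Seq
0 +: l     // List(0, 1, 2, 3)
0 :: l     // List(0, 1, 2, 3)   -- :: exists only on List
// 0 :: v  -- does not compile: value :: is not a member of Vector[Int]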
+:, on the other hand, takes a CanBuildFrom, so you can do fancy (albeit not as nice-looking in this case) things like:
import scala.collection.breakOut
val foo: Array[String] = List("foo").+:("bar")(breakOut)
(It's pretty useless in this particular case, as you could start with the needed type to begin with, but the idea is that you can prepend an element to a collection and change its type in one "go", avoiding an additional copy.)
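For comparison, the same conversion without breakOut takes two steps and builds an intermediate List (a sketch):
val twoSteps: Array[String] = ("bar" +: List("foo")).toArray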

scala - How does method :: works in List?

I notice that the List class defines the method ::, which adds an element at the beginning of the list:
def ::(x: A): List[A]
Example:
1 :: List(2, 3) = List(2, 3).::(1) = List(1, 2, 3)
However, I am confused: how does the Scala compiler recognize such a conversion? As far as I can tell,
1 :: List(2,3)
should raise an error: :: is not a member of Int
Am I missing something about how operators are defined in Scala?
Methods whose names end with : are right-associative when called using infix operator notation. I.e.
a foo_: b
is the same as
b.foo_:(a)
This rule exists specifically for the case of methods like this, which are commonly (in other languages such as Haskell and ML) operators like : or ::.
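To see the rule outside the standard library, here is a minimal sketch with a made-up method name that ends in a colon:
class Wrapper(val items: List[Int]) {
  // the name ends in ':', so infix calls are right-associative
  def prepend_:(x: Int): Wrapper = new Wrapper(x :: items)
}
val w  = new Wrapper(List(2, 3))
val w2 = 1 prepend_: w   // desugars to w.prepend_:(1)
w2.items                 // List(1, 2, 3)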

Why doesn't the scala compiler recognize this as a tuple?

If I create a map:
val m = Map((4, 3))
And try to add a new key value pair:
val m_prime = m + (1, 5)
I get:
error: type mismatch;
found : Int(1)
required: (Int, ?)
val m_prime = m + (1, 5)
If I do:
val m_prime = m + ((1, 5))
Or:
val m_prime = m + (1 -> 5)
Then it works. Why doesn't the compiler accept the first example?
I am using 2.10.2
This is indeed very annoying (I run into this frequently). First of all, the + method comes from a general collection trait and takes only one argument, whose type is the collection's element type. Map's element type is the pair (A, B). However, Scala interprets the parentheses here as method-call parentheses, not as a tuple constructor. The explanation is shown in the next section.
To solve this, you can avoid the tuple syntax and use the arrow association key -> value instead, use double parentheses, or use the updated method, which is specific to Map. updated does the same as + but takes the key and value as separate arguments:
val m_prime = m updated (1, 5)
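For reference, all three workarounds side by side (a sketch):
val m            = Map(4 -> 3)
val doubleParens = m + ((1, 5))     // extra parentheses make the tuple explicit
val arrow        = m + (1 -> 5)     // -> builds the pair before + sees it
val updatedCall  = m.updated(1, 5)  // key and value as separate arguments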
Still, it is unclear why Scala fails here, as in general infix syntax should work without requiring extra parentheses. It appears that this particular case is broken because of method overloading: there is a second + method that takes a variable number of tuple arguments.
Demonstration:
trait Foo {
  def +(tup: (Int, Int)): Foo
}
def test1(f: Foo) = f + (1, 2) // yes, it works!
trait Baz extends Foo {
  def +(tups: (Int, Int)*): Foo // overloaded
}
def test2(b: Baz) = b + (1, 2) // boom. we broke it.
My interpretation is that with the vararg version added, there is now an ambiguity: is (a, b) a Tuple2, or a list of two arguments a and b? (Even if a and b are not of type Tuple2, the compiler might start looking for an implicit conversion.) The only way to resolve the ambiguity is to use one of the three approaches described above.
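Continuing the demonstration above (a sketch): making the tuple explicit resolves the overload ambiguity, because the single non-vararg method is then the more specific match.
def test3(b: Baz) = b + ((1, 2)) // compiles: the argument is unambiguously one Tuple2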

Why does leaving the dot out in foldLeft cause a compilation error?

Can anyone explain why I see this compile error for the following when I omit the dot notation for applying the foldLeft function? (Scala 2.9.2)
scala> val l = List(1, 2, 3)
l: List[Int] = List(1, 2, 3)
scala> l foldLeft(1)(_ * _)
<console>:9: error: Int(1) does not take parameters
l foldLeft(1)(_ * _)
^
but
scala> l.foldLeft(1)(_ * _)
res27: Int = 6
This doesn't hold true for other higher-order functions such as map, which doesn't seem to care whether I supply the dot or not.
I don't think it's an associativity thing, because I can't just invoke foldLeft(1).
It's because foldLeft is curried. As well as using the dot notation, you can also fix this by adding parentheses:
scala> (l foldLeft 1)(_ * _)
res3: Int = 6
Oh, and regarding your comment about not being able to invoke foldLeft(1): you can, but you need to partially apply it like this:
scala> (l foldLeft 1) _
res3: ((Int, Int) => Int) => Int = <function1>
Omitting the dot is possible because of Scala's syntactic support for infix notation, which expects three parts:
leftOperand operator rightOperand
But because foldLeft has two parameter lists, you end up with four parts at the syntactic level:
l foldLeft (1) (_ * _)
which does not fit infix notation, hence the error.
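A side-by-side sketch of why map is unaffected while foldLeft needs help:
val l = List(1, 2, 3)
val doubled = l map (_ * 2)          // one parameter list: three parts, infix is fine
val product = (l foldLeft 1)(_ * _)  // two parameter lists: parenthesising the infix call restores three parts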

In Scala 2, why does type inference fail on a Set made with .toSet?

Why is type inference failing here?
scala> val xs = List(1, 2, 3, 3)
xs: List[Int] = List(1, 2, 3, 3)
scala> xs.toSet map(_*2)
<console>:9: error: missing parameter type for expanded function ((x$1) => x$1.$times(2))
xs.toSet map(_*2)
However, if xs.toSet is assigned, it compiles.
scala> xs.toSet
res42: scala.collection.immutable.Set[Int] = Set(1, 2, 3)
scala> res42 map (_*2)
res43: scala.collection.immutable.Set[Int] = Set(2, 4, 6)
Also, going the other way, converting from Set to List and mapping over the List, compiles.
scala> Set(5, 6, 7)
res44: scala.collection.immutable.Set[Int] = Set(5, 6, 7)
scala> res44.toList map(_*2)
res45: List[Int] = List(10, 12, 14)
Q: Why doesn't toSet do what I want?
A: That would be too easy.
Q: But why doesn't this compile? List(1).toSet.map(x => ...)
A: The Scala compiler is unable to infer that x is an Int.
Q: What, is it stupid?
A: Well, List[A].toSet doesn't return an immutable.Set[A]. It returns an immutable.Set[B] for some unknown B >: A.
Q: How was I supposed to know that?
A: From the Scaladoc.
Q: But why is toSet defined that way?
A: You might be assuming immutable.Set is covariant, but it isn't: it's invariant. And toSet is declared in a covariant trait, where the covariant element type A may only appear in covariant positions; Set[A] would put A in an invariant position, so the signature has to introduce a fresh B >: A instead.
Q: What do you mean, "covariant position"?
A: Let me Wikipedia that for you: http://en.wikipedia.org/wiki/Covariance_and_contravariance_(computer_science) . See also chapter 19 of Odersky, Venners & Spoon.
Q: I understand now. But why is immutable.Set invariant?
A: Let me Stack Overflow that for you: Why is Scala's immutable Set not covariant in its type?
Q: I surrender. How do I fix my original code?
A: This works: List(1).toSet[Int].map(x => ...). So does this: List(1).toSet.map((x: Int) => ...)
(with apologies to Friedman & Felleisen. thx to paulp & ijuma for assistance)
EDIT: There is valuable additional information in Adriaan's answer and in the discussion in the comments both there and here.
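To illustrate the variance point with a made-up container (Box here stands in for the covariant trait in which toSet is declared):
trait Box[+A] {
  // def bad: Set[A]           // would not compile: covariant A occurs in invariant position
  def toSet[B >: A]: Set[B]    // compiles: the element type is reintroduced as a fresh B >: A
}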
Type inference does not work here because the signature of List#toSet is
def toSet[B >: A]: scala.collection.immutable.Set[B]
and the compiler would need to infer the types in two places in your call. An alternative to annotating the parameter in your function would be to invoke toSet with an explicit type argument:
xs.toSet[Int] map (_*2)
UPDATE:
Regarding your question why the compiler can infer it in two steps, let's just look at what happens when you type the lines one by one:
scala> val xs = List(1,2,3)
xs: List[Int] = List(1, 2, 3)
scala> val ys = xs.toSet
ys: scala.collection.immutable.Set[Int] = Set(1, 2, 3)
Here the compiler will infer the most specific type for ys, which is Set[Int] in this case. Now that this type is known, the type of the function passed to map can be inferred.
If you filled in all possible type parameters in your example the call would be written as:
xs.toSet[Int].map[Int,Set[Int]](_*2)
where the second type parameter is used to specify the type of the returned collection (for details look at how Scala collections are implemented). This means I even underestimated the number of types the compiler has to infer.
In this case it may seem easy to infer Int, but there are cases where it is not (given other features of Scala such as implicit conversions, singleton types, traits as mixins, etc.). I'm not saying it cannot be done; it's just that the Scala compiler does not do it.
I agree it would be nice to infer "the only possible" type, even when calls are chained, but there are technical limitations.
You can think of inference as a breadth-first sweep over the expression, collecting constraints (which arise from subtype bounds and required implicit arguments) on type variables, followed by solving those constraints. This approach allows, e.g., implicits to guide type inference. In your example, even though there is a single solution if you only look at the xs.toSet subexpression, later chained calls could introduce constraints that make the system unsatisfiable. The downside of leaving the type variables unsolved is that type inference for closures requires the target type to be known, and will thus fail (it needs something concrete to go on: the required type of the closure and its argument types must not both be unknown).
Now, when delaying solving the constraints makes inference fail, we could backtrack, solve all the type variables, and retry, but this is tricky to implement (and probably quite inefficient).
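The last point can be seen in isolation (a sketch): a function literal with an unannotated parameter only compiles when an expected type supplies the parameter type.
val f: Int => Int = _ * 2   // fine: the expected type Int => Int fixes the parameter type
// val g = _ * 2            // does not compile: missing parameter type for expanded function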