In Scala, the following expression raises a type error:
val pair: (A => String, A) forSome { type A } = ( { a: Int => a.toString }, 19 )
pair._1(pair._2)
As mentioned in SI-9899 and this answer, this is correct according to the spec:
I think this is working as designed as per SLS 6.1: "The following
skolemization rule is applied universally for every expression: If the
type of an expression would be an existential type T, then the type of
the expression is assumed instead to be a skolemization of T."
However, I do not fully understand this. At which point is this rule applied? Does it apply in the first line (i.e., the type of pair is a different one than given by the type annotation), or in the second line (but applying the rule to the second line as a whole would not lead to a type error)?
Let's assume SLS 6.1 applies to the first line. It is supposed to skolemize existential types. We can make the type in the first line non-existential by putting the existential inside a type parameter:
case class Wrap[T](x:T)
val wrap = Wrap(( { a: Int => a.toString }, 19 ) : (A => String, A) forSome { type A })
wrap.x._1(wrap.x._2)
It works! (No type error.) So that means, the existential type got "lost" when we defined pair? No:
val wrap2 = Wrap(pair)
wrap2.x._1(wrap2.x._2)
This type checks! If it had been the "fault" of the assignment to pair, this should not have worked. Thus, the reason why it does not work lies in the second line. If that's the case, what's the difference between the wrap and the pair examples?
To wrap up, here is one more pair of examples:
val Wrap((a2,b2)) = wrap
a2(b2)
val (a3,b3) = pair
a3(b3)
Both don't work, but by analogy to the fact that wrap.x._1(wrap.x._2) did typecheck, I would have thought that a2(b2) might typecheck, too.
I believe I figured out most of the process how the expressions above are typed.
First, what does this mean:
The following skolemization rule is applied universally for every
expression: If the type of an expression would be an existential type
T, then the type of the expression is assumed instead to be a
skolemization of T. [SLS 6.1]
It means that whenever an expression or subexpression is determined to have type T[A] forSome {type A}, then a fresh type name A1 is chosen, and the expression is given type T[A1]. This makes sense since T[A] forSome {type A} intuitively means that there is some type A such that the expression has type T[A]. (What name is chosen depends on the compiler implementation. I use A1 to distinguish it from the bound type variable A)
We look at the first line of code:
val pair: (A => String, A) forSome { type A } = ({ a: Int => a.toString }, 19)
Here the skolemization rule is actually not yet used.
({ a: Int => a.toString }, 19) has type (Int=>String, Int). This is a subtype of (A => String, A) forSome { type A } since there exists an A (namely Int) such that the rhs is of type (A=>String,A).
The value pair now has type (A => String, A) forSome { type A }.
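The subtyping step can be seen in isolation (Scala 2 syntax; forSome was dropped in Scala 3):

```scala
// The rhs has the concrete type (Int => String, Int); it conforms to the
// existential annotation because Int witnesses the bound variable A.
val concrete: (Int => String, Int) = ({ a: Int => a.toString }, 19)
val pair: (A => String, A) forSome { type A } = concrete // compiles fine
```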
The next line is
pair._1(pair._2)
Now the typer assigns types to subexpressions from the inside out. First, the first occurrence of pair is given a type. Recall that pair has type (A => String, A) forSome { type A }. Since the skolemization rule applies to every subexpression, we apply it to the first pair: we pick a fresh type name A1 and type pair as (A1 => String, A1). Then we assign a type to the second occurrence of pair. Again the skolemization rule applies; we pick another fresh type name A2, and the second occurrence of pair is typed as (A2=>String,A2).
Then pair._1 has type A1=>String and pair._2 has type A2, thus pair._1(pair._2) is not well-typed.
Note that it is not the skolemization rule's "fault" that typing fails. Without the skolemization rule, pair._1 would be typed as (A=>String) forSome {type A} and pair._2 as A forSome {type A}, which is the same as Any. And then pair._1(pair._2) would still not be well-typed. (The skolemization rule is actually helpful in making things typecheck, as we will see below.)
So, why does Scala refuse to understand that the two occurrences of pair actually are of type (A=>String,A) for the same A? I do not know a good reason in the case of a val pair, but consider a var pair of the same type: there, the compiler must not skolemize several occurrences of it with the same A1. Why? Imagine that within an expression, the content of pair changes. First it contains an (Int=>String, Int), and then towards the end of the evaluation of the expression it contains a (Boolean=>String, Boolean). This is OK if the type of pair is (A=>String,A) forSome {type A}. But if the compiler gave both occurrences of pair the same skolemized type (A1=>String,A1), the typing would be unsound. Similarly, if pair were a def, it could return different results on different invocations, and thus must not be skolemized with the same A1. For a val pair, this argument does not hold (since a val cannot change), but I assume that the type system would get too complicated if it tried to treat a val pair differently from a var pair. (Also, there are situations where a val can change content, namely from uninitialized to initialized. But I don't know whether that can lead to problems in this context.)
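A sketch of why sharing one skolem across reads of a var would be unsound (Scala 2 syntax; the reassignment below is legal precisely because the existential hides A):

```scala
object VarUnsoundness {
  var p: (A => String, A) forSome { type A } =
    ({ i: Int => i.toString }, 5)

  def demo(): Unit = {
    // Legal: (Boolean => String, Boolean) also conforms to the existential.
    p = ({ b: Boolean => b.toString }, true)
    // If both reads of p in p._1(p._2) shared a single skolem A1, the call
    // would be accepted -- yet a side effect between the two reads could
    // reassign p, so the function and the argument might come from pairs
    // over different types.
  }
}
```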
However, we can use the skolemization rule to make pair._1(pair._2) well-typed. A first try would be:
val pair2 = pair
pair2._1(pair2._2)
Why should this work? pair types as (A=>String,A) forSome {type A}. Thus its type becomes (A3=>String,A3) for some fresh A3. So the new val pair2 should be given type (A3=>String,A3) (the type of the rhs). And if pair2 has type (A3=>String,A3), then pair2._1(pair2._2) will be well-typed. (No existentials involved any more.)
Unfortunately, this will actually not work, because of another rule in the spec:
If the value definition is not recursive, the type T may be omitted,
in which case the packed type of expression e is assumed. [SLS 4.1]
The packed type is the opposite of skolemization. That means, all the fresh variables that have been introduced inside the expression due to the skolemization rule are now transformed back into existential types. That is, T[A1] becomes T[A] forSome {type A}.
Thus, in
val pair2 = pair
pair2 will actually be given type (A=>String,A) forSome {type A} even though the rhs was given type (A3=>String,A3). Then pair2._1(pair2._2) will not type, as explained above.
But we can use another trick to achieve the desired result:
pair match { case pair2 =>
pair2._1(pair2._2) }
At first glance, this is a pointless pattern match, since pair2 is just assigned pair, so why not just use pair? The reason is that the rule from SLS 4.1 only applies to vals and vars. Variable patterns (like pair2 here) are not affected. Thus pair is typed as (A4=>String,A4), and pair2 is given the same type (not the packed type). Then pair2._1 is typed as A4=>String and pair2._2 as A4, and all is well-typed.
So a code fragment of the form x match { case x2 => can be used to "upgrade" x to a new "pseudo-value" x2 that can make some expressions well-typed that would not be well-typed using x. (I don't know why the spec does not simply allow the same thing to happen when we write val x2 = x. It would certainly be nicer to read since we do not get an extra level of indentation.)
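Putting the trick together as one complete snippet (Scala 2):

```scala
val pair: (A => String, A) forSome { type A } =
  ({ a: Int => a.toString }, 19)

// pair._1(pair._2)        // rejected: the two reads get different skolems
val s: String = pair match {
  case p2 => p2._1(p2._2)  // accepted: p2 keeps a single skolemized type
}
```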
After this excursion, let us go through the typing of the remaining expressions from the question:
val wrap = Wrap(({ a: Int => a.toString }, 19) : (A => String, A) forSome { type A })
Here the expression ({ a: Int => a.toString }, 19) types as (Int=>String,Int). The type cast turns this into an expression of type
(A => String, A) forSome { type A }. Then the skolemization rule is applied, so the expression (the argument of Wrap, that is) gets type (A5=>String,A5) for a fresh A5. We apply Wrap to it, so the rhs has type Wrap[(A5=>String,A5)]. To get the type of wrap, we need to apply the rule from SLS 4.1 again: we compute the packed type of Wrap[(A5=>String,A5)], which is Wrap[(A=>String,A)] forSome {type A}. So wrap has type Wrap[(A=>String,A)] forSome {type A} (and not Wrap[(A=>String,A) forSome {type A}] as one might expect at first glance!). Note that we can confirm that wrap has this type by running the compiler with the option -Xprint:typer.
We now type
wrap.x._1(wrap.x._2)
Here the skolemization rule applies to both occurrences of wrap, and they get typed as Wrap[(A6=>String,A6)] and Wrap[(A7=>String,A7)], respectively. Then wrap.x._1 has type A6=>String, and wrap.x._2 has type A7. Thus wrap.x._1(wrap.x._2) is not well-typed.
But the compiler disagrees and accepts wrap.x._1(wrap.x._2)! I do not know why. Either there is some additional rule in the Scala type system that I don't know about, or it is simply a compiler bug. Running the compiler with -Xprint:typer does not give extra insight, either, since it does not annotate the subexpressions in wrap.x._1(wrap.x._2).
Next is:
val wrap2 = Wrap(pair)
Here pair has type (A=>String,A) forSome {type A} and skolemizes to (A8=>String,A8). Then Wrap(pair) has type Wrap[(A8=>String,A8)] and wrap2 gets the packed type Wrap[(A=>String,A)] forSome {type A}. I.e., wrap2 has the same type as wrap.
wrap2.x._1(wrap2.x._2)
As with wrap.x._1(wrap.x._2), this should not type but does.
val Wrap((a2,b2)) = wrap
Here we see a new rule: [SLS 4.1] (not the part quoted above) explains that such a pattern match val statement is expanded to:
val tmp = wrap match { case Wrap((a2,b2)) => (a2,b2) }
val a2 = tmp._1
val b2 = tmp._2
Now we can see that (a2,b2) gets type (A9=>String,A9) for a fresh A9,
and tmp gets type (A=>String,A) forSome {type A} due to the packed-type rule. Then tmp._1 gets type A10=>String by the skolemization rule, and val a2 gets type (A=>String) forSome {type A} by the packed-type rule. And tmp._2 gets type A11 by the skolemization rule, and val b2 gets type A forSome {type A} by the packed-type rule (this is the same as Any).
Thus
a2(b2)
is not well-typed, because a2 gets type A12=>String and b2 gets type A13 from the skolemization rule.
Similarly,
val (a3,b3) = pair
expands to
val tmp2 = pair match { case (a3,b3) => (a3,b3) }
val a3 = tmp2._1
val b3 = tmp2._2
Then tmp2 gets type (A=>String,A) forSome {type A} by the packed type rule, and val a3 and val b3 get type (A=>String) forSome {type A} and A forSome {type A} (a.k.a. Any), respectively.
Then
a3(b3)
is not well-typed for the same reasons as a2(b2) wasn't.
Related
How to implement the function normalize(type: Type): Type such that:
A =:= B if and only if normalize(A) == normalize(B) and normalize(A).hashCode == normalize(B).hashCode.
In other words, normalize must return equal results for all equivalent Type instances, and results that are neither equal nor equivalent for every pair of non-equivalent inputs.
There is a deprecated method called normalize in the TypeApi, but it does not do the same thing.
In my particular case I only need to normalize types that represent a class or a trait (tpe.typeSymbol.isClass == true).
Edit 1: The first comment suggests that such a function might not be possible to implement in general. But perhaps it is possible if we add another constraint:
B is obtained by navigating from A.
In the next example fooType would be A, and nextAppliedType would be B:
import scala.reflect.runtime.universe._
sealed trait Foo[V]
case class FooImpl[V](next: Foo[V]) extends Foo[V]
scala> val fooType = typeOf[Foo[Int]]
val fooType: reflect.runtime.universe.Type = Foo[Int]
scala> val nextType = fooType.typeSymbol.asClass.knownDirectSubclasses.iterator.next().asClass.primaryConstructor.typeSignature.paramLists(0)(0).typeSignature
val nextType: reflect.runtime.universe.Type = Foo[V]
scala> val nextAppliedType = appliedType(nextType.typeConstructor, fooType.typeArgs)
val nextAppliedType: reflect.runtime.universe.Type = Foo[Int]
scala> println(fooType =:= nextAppliedType)
true
scala> println(fooType == nextAppliedType)
false
Inspecting the Type instances with showRaw shows why they are not equal (at least when Foo and FooImpl are members of an object, in this example, the jsfacile.test.RecursionTest object):
scala> showRaw(fooType)
val res2: String = TypeRef(SingleType(SingleType(SingleType(ThisType(<root>), jsfacile), jsfacile.test), jsfacile.test.RecursionTest), jsfacile.test.RecursionTest.Foo, List(TypeRef(ThisType(scala), scala.Int, List())))
scala> showRaw(nextAppliedType)
val res3: String = TypeRef(ThisType(jsfacile.test.RecursionTest), jsfacile.test.RecursionTest.Foo, List(TypeRef(ThisType(scala), scala.Int, List())))
The reason I need this is difficult to explain. Let's try:
I am developing this JSON library which works fine except when there is a recursive type reference. For example:
sealed trait Foo[V]
case class FooImpl[V](next: Foo[V]) extends Foo[V]
That happens because the parsers/appenders it uses to parse and format are type classes materialized by an implicit macro, and when an implicit parameter is recursive the compiler complains with a divergence error.
I tried to solve that using by-name implicit parameters, but that not only failed to solve the recursion problem, it also made many non-recursive algebraic data types fail.
So, now I am trying to solve this problem by storing the resolved materializations in a map, which also would improve the compilation speed. And that map key is of type Type. So I need to normalize the Type instances, not only to be usable as key of a map, but also to equalize the values generated from them.
If I understood you well, any equivalence class would be fine. There is no preference.
I suspect you didn't. At least "any equivalence class would be fine" and "there is no preference" do not sound right. I'll try to elaborate.
In math there is a construction called factorization. If you have a set A and an equivalence relation ~ on this set (a relation means that for any pair of elements of A we know whether they are related, a1 ~ a2, or not; equivalence means symmetry a1 ~ a2 => a2 ~ a1, reflexivity a ~ a, and transitivity a1 ~ a2, a2 ~ a3 => a1 ~ a3), then you can consider the factor set A/~ whose elements are all equivalence classes, A/~ = { [a] | a ∈ A }, where the equivalence class
[a] = {b ∈ A | b ~ a}
of an element a is the set consisting of all elements equivalent (i.e. ~-related) to a.
The axiom of choice says that there is a map (function) from A/~ to A i.e. we can select a representative in every equivalence class and in such way form a subset of A (this is true if we accept the axiom of choice, if we don't then it's not clear whether we get a set in such way). But even if we accept the axiom of choice and therefore there is a function A/~ -> A this doesn't mean we can construct such function.
Simple example. Let's consider the set of all real numbers R and the following equivalence relation: two real numbers are equivalent r1 ~ r2 if their difference is a rational number
r2 - r1 = p/q ∈ Q
(p, q≠0 are arbitrary integers). This is an equivalence relation. So it's known that there is a function selecting a single real number from any equivalence class but how to define this function explicitly for a specific input? For example what is the output of this function for the input being the equivalence class of 0 or 1 or π or e or √2 or log 2...?
Similarly, =:= is an equivalence relation on types, so it's known that there is a function normalize (maybe there are even many such functions) selecting a representative in every equivalence class but how to prefer a specific one (how to define or construct the output explicitly for any specific input)?
Regarding your struggle against implicit divergence: it's not necessarily the case that you've selected the best possible approach. It sounds like you're doing some of the compiler's work manually. How do other JSON libraries solve the issue? For example Circe? Besides by-name implicits, there is also shapeless.Lazy / shapeless.Strict (not equivalent to by-name implicits). If you have a specific question about deriving type classes or overcoming implicit divergence, maybe you should start a different question about that?
Regarding your approach with a HashMap with Type keys: remember that we're not supposed to rely on == for Types; the correct comparison is =:=. So you should build your HashMap using =:= rather than ==. Search SO for something like: hashmap custom equals.
Actually, I guess your normalize sounds like you want some caching. You should have a type cache. Then, when asked to compute normalize(typ), check whether the cache already contains a t such that t =:= typ. If so, return t; otherwise add typ to the cache and return typ.
This satisfies your requirement: A =:= B if and only if normalize(A) == normalize(B) (normalize(A).hashCode == normalize(B).hashCode should follow from normalize(A) == normalize(B)).
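A minimal sketch of that caching approach (assuming scala-reflect on the classpath; the TypeNormalizer name and its linear cache scan are illustrative choices, not a fixed API):

```scala
import scala.collection.mutable
import scala.reflect.runtime.universe._

object TypeNormalizer {
  private val cache = mutable.ListBuffer.empty[Type]

  // Return the first cached type that is =:= to typ; otherwise cache typ.
  // Thus A =:= B implies normalize(A) and normalize(B) are the SAME cached
  // instance, which gives == (and equal hashCodes) for free.
  def normalize(typ: Type): Type =
    cache.find(_ =:= typ).getOrElse { cache += typ; typ }
}
```

A linear scan is O(n) per lookup, but since the keys can only be compared with =:= (not hashed consistently), there is no obvious faster structure without first solving the normalization problem itself.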
Regarding transformation of fooType into nextAppliedType try
def normalize(typ: Type): Type = typ match {
  case TypeRef(pre, sym, args) =>
    internal.typeRef(internal.thisType(pre.typeSymbol), sym, args)
}
Then normalize(fooType) == nextAppliedType should be true.
As per the example below, calling xs.toList.map(_.toBuffer) succeeds, but xs.toBuffer.map(_.toBuffer) fails. But when the steps in the latter are performed using an intermediate result, it succeeds. What causes this inconsistency?
scala> "ab-cd".split("-").toBuffer
res0: scala.collection.mutable.Buffer[String] = ArrayBuffer(ab, cd)
scala> res0.map(_.toBuffer)
res1: scala.collection.mutable.Buffer[scala.collection.mutable.Buffer[Char]] = ArrayBuffer(ArrayBuffer(a, b), ArrayBuffer(c, d))
scala> "ab-cd".split("-").toBuffer.map(_.toBuffer)
<console>:8: error: missing parameter type for expanded function ((x$1) => x$1.toBuffer)
"ab-cd".split("-").toBuffer.map(_.toBuffer)
^
scala> "ab-cd".split("-").toList.map(_.toBuffer)
res3: List[scala.collection.mutable.Buffer[Char]] = List(ArrayBuffer(a, b), ArrayBuffer(c, d))
Look at the definitions of toBuffer and toList:
def toBuffer[A1 >: A]: Buffer[A1]
def toList: List[A]
As you can see, toBuffer is generic, while toList is not.
The reason for this difference is - I believe - that Buffer is invariant, while List is covariant.
Let's say that we have the following classes:
class Foo
class Bar extends Foo
Because List is covariant, you can call toList on an instance of Iterable[Bar] and treat the result as a List[Foo].
If List were invariant, this would not be the case.
Buffer being invariant, if toBuffer were defined as def toBuffer: Buffer[A], you would similarly not be able to treat the result of toBuffer (on an instance of Iterable[Bar]) as an instance of Buffer[Foo] (since Buffer[Bar] is not a subtype of Buffer[Foo], unlike for lists).
But by declaring toBuffer as def toBuffer[A1 >: A] (notice the added type parameter A1), we get back the possibility of having toBuffer return an instance of Buffer[Foo]: all we need is to explicitly set A1 to Foo, or let the compiler infer it (if toBuffer is called at a site where a Buffer[Foo] is expected).
I think this explains the reason why toList and toBuffer are defined differently.
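The variance argument can be checked directly (a small sketch; Foo and Bar are the classes from above):

```scala
import scala.collection.mutable.Buffer

class Foo
class Bar extends Foo

val bars: Iterable[Bar] = List(new Bar, new Bar)

// List is covariant: a List[Bar] is already a List[Foo].
val foos: List[Foo] = bars.toList

// Buffer is invariant: a Buffer[Bar] is NOT a Buffer[Foo]. The extra
// parameter A1 >: A lets us request a Buffer[Foo] up front instead:
val fooBuf: Buffer[Foo] = bars.toBuffer[Foo]
```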
Now the problem with this is that toBuffer is generic, and this can badly affect inference.
When you do this:
"ab-cd".split("-").toBuffer
You never explicitly say that A1 is String, but because "ab-cd".split("-") has unambiguously the type Array[String], the compiler knows that A is String.
It also knows that A1 >: A (in toBuffer), and without any further constraint, it will infer A1 to be exactly A, in other words String.
So in the end the whole expression returns a Buffer[String].
But here's the thing: in Scala, type inference happens on an expression as a whole.
When you have something like a.b.c, you might expect that Scala will infer an exact type
for a, then from that infer an exact type for a.b, and finally for a.b.c. Not so.
Type inference is deferred to the whole expression a.b.c (see the Scala specification, section 6.26.4 "Local Type Inference", "case 1: selections").
So, going back to your problem, in the expression "ab-cd".split("-").toBuffer.map(_.toBuffer), the sub-expression "ab-cd".split("-").toBuffer is not typed Buffer[String], but instead
it stays typed as something like Buffer[A1] forSome { type A1 >: String }. In other words, A1 is not fixed; we just carry the constraint A1 >: String to the next step of inference.
This next step is map(_.toBuffer), where map is defined as map[C](f: (B) ⇒ C): Buffer[B]. Here B is actually the same as A1, but at this point A1
is still not fully known, we only know that A1 >: String.
Here lies our problem. The compiler needs to know the exact type of the anonymous function (_.toBuffer) (simply because instantiating a Function1[A,R] requires to know the exact types of A and R, just like for any generic type).
So you need to tell it explicitly somehow, since it was not able to infer the type exactly.
This means you need to do either:
"ab-cd".split("-").toBuffer[String].map(_.toBuffer)
Or:
"ab-cd".split("-").toBuffer.map((_:String).toBuffer)
I'm chaining transformations, and I'd like to accumulate the result of each transformation so that it can potentially be used in any subsequent step, and also so that all the results are available at the end (mostly for debugging purposes). There are several steps and from time to time I need to add a new step or change the inputs for a step.
HList seems to offer a convenient way to collect the results in a flexible but still type-safe way. But I'd rather not complicate the actual steps by making them deal with the HList and the accompanying business.
Here's a simplified version of the combinator I'd like to write, which isn't working. The idea is that given an HList containing an A, and the index of A, and a function from A -> B, mapNth will extract the A, run the function, and cons the result onto the list. The resulting extended list captures the type of the new result, so several of these mapNth-ified steps can be composed to produce a list containing the result from each step:
def mapNth[L <: HList, A, B]
(l: L, index: Nat, f: A => B)
(implicit at: shapeless.ops.hlist.At[L, index.N]):
B :: L =
f(l(index)) :: l
Incidentally, I'll also need map2Nth taking two indices and f: (A, B) => C, but I believe the issues are the same.
However, mapNth does not compile, saying l(index) has type at.Out, but f's argument should be A. That's correct, of course, so what I suppose I need is a way to provide evidence that at.Out is in fact A (or, at.Out <: A).
Is there a way to express that constraint? I believe it will have to take the form of an implicit, because of course the constraint can only be checked when mapNth is applied to a particular list and function.
You're exactly right about needing evidence that at.Out is A, and you can provide that evidence by including the value of the type member in at's type:
def mapNth[L <: HList, A, B]
(l: L, index: Nat, f: A => B)
(implicit at: shapeless.ops.hlist.At[L, index.N] { type Out = A }):
B :: L =
f(l(index)) :: l
The companion objects for type classes like At in Shapeless also define an Aux type that includes the output type as a final type parameter.
def mapNth[L <: HList, A, B]
(l: L, index: Nat, f: A => B)
(implicit at: shapeless.ops.hlist.At.Aux[L, index.N, A]):
B :: L =
f(l(index)) :: l
This is pretty much equivalent but more idiomatic (and it looks a little nicer).
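Assuming shapeless on the classpath, here is a usage sketch of the Aux version (the example HList and function are made up for illustration):

```scala
import shapeless._
import shapeless.ops.hlist.At

def mapNth[L <: HList, A, B]
  (l: L, index: Nat, f: A => B)
  (implicit at: At.Aux[L, index.N, A]):
    B :: L =
  f(l(index)) :: l

val l = "hi" :: 42 :: true :: HNil

// Extract the Int at index 1, increment it, and cons the result on front:
val r = mapNth(l, 1, (n: Int) => n + 1)
// r: Int :: String :: Int :: Boolean :: HNil = 43 :: "hi" :: 42 :: true :: HNil
```

The Int literal 1 is converted to a Nat by shapeless's implicit materializer, which is how index.N ends up known statically at the call site.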
Set is defined as Set[A]; it takes an invariant type parameter. The following works as expected:
scala> val a = Set(new Object)
a: scala.collection.immutable.Set[Object] = Set(java.lang.Object#118c38f)
scala> val b = Set("hi")
b: scala.collection.immutable.Set[String] = Set(hi)
scala> a & b
<console>:10: error: type mismatch;
found : scala.collection.immutable.Set[String]
required: scala.collection.GenSet[Object]
Note: String <: Object, but trait GenSet is invariant in type A.
You may wish to investigate a wildcard type such as `_ <: Object`. (SLS 3.2.10)
a & b
But the below works:
scala> Set(new Object) & Set("hi")
res1: scala.collection.immutable.Set[Object] = Set()
As I see it, the Scala compiler converts Set("hi") to the type Set[Object], and hence it works.
What is type inference doing here? Can someone please link to the part of the specification explaining this behavior and when it happens in general? Shouldn't it throw a compile-time error in such cases, given that the same operation yields two different results?
Not sure, but I think what you're looking for is described in the language spec under "Local Type Inference" (at this time of writing, section 6.26.4 on page 100).
Local type inference infers type arguments to be passed to expressions of polymorphic type. Say e is of type [ a1 >: L1 <: U1, ..., an >: Ln <: Un ] T and no explicit type
parameters are given.
Local type inference converts this expression to a type application e [ T1, ..., Tn ]. The choice of the type arguments T1, ..., Tn depends on the context in which the expression appears and on the expected type pt. There are three cases.
[ ... ]
If the expression e appears as a value without being applied to value arguments, the type arguments are inferred by solving a constraint system which relates the expression's type T with the expected type pt. Without loss of generality we can assume that T is a value type; if it is a method type we apply eta-expansion to convert it to a function type. Solving means finding a substitution σ of types Ti for the type parameters ai such that
None of the inferred types Ti is a singleton type
All type parameter bounds are respected, i.e. σ Li <: σ ai and σ ai <: σ Ui for i = 1, ..., n.
The expression's type conforms to the expected type, i.e. σ T <: σ pt.
It is a compile time error if no such substitution exists. If several substitutions exist, local-type inference will choose for each type variable ai a minimal or maximal type Ti of the solution space. A maximal type Ti will be chosen if the type parameter ai appears contravariantly in the type T of the expression. A minimal type Ti will be chosen in all other situations, i.e. if the variable appears covariantly, nonvariantly or not at all in the type T. We call such a substitution an optimal solution of the given constraint system for the type T.
In short: Scalac has to choose values for the generic types that you omitted, and it picks the most specific choices possible, under the constraint that the result compiles.
The expression Set("hi") can be either a scala.collection.immutable.Set[String] or a scala.collection.immutable.Set[Object], depending on what the context requires. (A String is a valid Object, of course.) When you write this:
Set(new Object) & Set("hi")
the context requires Set[Object], so that's the type that's inferred; but when you write this:
val b = Set("hi")
the context doesn't specify, so the more-specific type Set[String] is chosen, which (as you expected) then makes a & b be ill-typed.
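The effect of the expected type can be reproduced by hand (a small sketch):

```scala
val a = Set(new Object)

// Annotating b makes the compiler infer Set[Object] for Set("hi"):
val b: Set[Object] = Set("hi")
a & b // now well-typed

// Equivalently, pass the type argument explicitly:
val c = Set[Object]("hi")
a & c
```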
I was trying to reverse a List of Integers as follows:
List(1,2,3,4).foldLeft(List[Int]()){(a,b) => b::a}
My question is that is there a way to specify the seed to be some List[_] where the _ is the type automatically filled in by scala's type-inference mechanism, instead of having to specify the type as List[Int]?
Thanks
Update: After reading a bit more on Scala's type inference, I found a better answer to your question. This article which is about the limitations of the Scala type inference says:
Type information in Scala flows from function arguments to their results [...], from left to right across argument lists, and from first to last across statements. This is in contrast to a language with full type inference, where (roughly speaking) type information flows unrestricted in all directions.
So the problem is that Scala's type inference is rather limited. It first looks at the first argument list (the list in your case) and then at the second argument list (the function). But it does not go back.
This is why neither this
List(1,2,3,4).foldLeft(Nil){(a,b) => b::a}
nor this
List(1,2,3,4).foldLeft(List()){(a,b) => b::a}
will work. Why? First, the signature of foldLeft is defined as:
foldLeft[B](z: B)(f: (B, A) => B): B
So if you use Nil as the first argument z, the compiler will assign Nil.type to the type parameter B. And if you use List(), the compiler will use List[Nothing] for B.
Now, the type of the second argument f is fully defined. In your case, it's either
(Nil.type, Int) => Nil.type
or
(List[Nothing], Int) => List[Nothing]
And in both cases the lambda expression (a, b) => b :: a is not valid, since its return type is inferred to be List[Int].
Note that the bold part above says "argument lists" and not "arguments". The article later explains:
Type information does not flow from left to right within an argument list, only from left to right across argument lists.
So the situation is even worse if you have a method with a single argument list.
The only way I know how is
scala> def foldList[T](l: List[T]) = l.foldLeft(List[T]()){(a,b) => b::a}
foldList: [T](l: List[T])List[T]
scala> foldList(List(1,2,3,4))
res19: List[Int] = List(4, 3, 2, 1)
scala> foldList(List("a","b","c"))
res20: List[java.lang.String] = List(c, b, a)
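For completeness, two further ways to pin B at the call site itself, without a helper method:

```scala
// Give foldLeft its type argument explicitly, so Nil is typed List[Int]:
List(1, 2, 3, 4).foldLeft[List[Int]](Nil)((a, b) => b :: a)

// Or seed with List.empty[Int], whose type is List[Int], not List[Nothing]:
List(1, 2, 3, 4).foldLeft(List.empty[Int])((a, b) => b :: a)
// both yield List(4, 3, 2, 1)
```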