Why does Haskell's foldr NOT overflow the stack while the same Scala implementation does? - scala

I am reading FP in Scala.
Exercise 3.10 says that foldRight overflows (See images below).
As far as I know, however, foldr in Haskell does not.
http://www.haskell.org/haskellwiki/
-- if the list is empty, the result is the initial value z; else
-- apply f to the first element and the result of folding the rest
foldr f z [] = z
foldr f z (x:xs) = f x (foldr f z xs)
-- if the list is empty, the result is the initial value; else
-- we recurse immediately, making the new initial value the result
-- of combining the old initial value with the first element.
foldl f z [] = z
foldl f z (x:xs) = foldl f (f z x) xs
How is this different behaviour possible?
What is the difference between the two languages/compilers that cause this different behaviour?
Where does this difference come from? The platform? The language? The compiler?
Is it possible to write a stack-safe foldRight in Scala? If yes, how?

Haskell is lazy. The definition
foldr f z (x:xs) = f x (foldr f z xs)
tells us that the behaviour of foldr f z xs with a non-empty list xs is determined by the laziness of the combining function f.
In particular the call foldr f z (x:xs) allocates just one thunk on the heap, {foldr f z xs} (writing {...} for a thunk holding an expression ...), and calls f with two arguments - x and the thunk. What happens next, is f's responsibility.
In particular, if it's a lazy data constructor (like e.g. (:)), it will immediately be returned to the caller of the foldr call (with the constructor's two slots filled by (references to) the two values).
And if f does demand its value on the right, then with minimal compiler optimizations no thunks should be created at all (or one at the most, the current one), as the value of foldr f z xs is immediately needed and the usual stack-based evaluation can be used:
foldr f z [a,b,c,....,n] ==
a `f` (b `f` (c `f` (... (n `f` z)...)))
So foldr can indeed cause a stack overflow, when used with a strict combining function on extremely long input lists. But if the combining function doesn't demand its value on the right right away, or only demands a part of it, the evaluation will be suspended in a thunk, and the partial result created by f will be returned immediately. The same holds for the arguments on the left, but those potentially already arrive as thunks in the input list.
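For a Scala flavour of the same point: a by-name parameter plays the role of the thunk, and it is up to the combining function whether it ever gets forced. A tiny illustrative sketch (not from the original answer):

def combine(x: Boolean, rest: => Boolean): Boolean =
  x && rest   // when x is false, && short-circuits and `rest` is never evaluated

combine(false, sys.error("never forced"))   // returns false; the error is never thrown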

Haskell is lazy. So foldr allocates on the heap, not the stack. Depending on the strictness of the argument function, it may allocate a single (small) result, or a large structure.
You're still losing space, compared to a strict, tail-recursive implementation, but it doesn't look as obvious, since you've traded stack for heap.

Note that the authors here are not referring to any foldRight definition in the Scala standard library, such as the one defined on List. They are referring to the definition of foldRight they gave above in section 3.4.
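For reference, that definition is essentially the classic non-tail-recursive fold. A sketch in terms of the standard List (not the book's own List type or exact code):

def foldRight[A, B](as: List[A], z: B)(f: (A, B) => B): B = as match {
  case Nil     => z
  case x :: xs => f(x, foldRight(xs, z)(f))   // not a tail call: one stack frame per element
}

// foldRight((1 to 1000000).toList, 0)(_ + _)   // typically throws StackOverflowError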
The Scala standard library defines foldRight in terms of foldLeft by reversing the list (which can be done in constant stack space) and then calling foldLeft with the arguments of the passed function reversed. This works for lists, but won't work for a structure which cannot be safely reversed, for example:
scala> Stream.continually(false)
res0: scala.collection.immutable.Stream[Boolean] = Stream(false, ?)
scala> res0.reverse
java.lang.OutOfMemoryError: GC overhead limit exceeded
Now let's think about what the result of this operation should be:
Stream.continually(false).foldRight(true)(_ && _)
The answer should be false: it doesn't matter how many false values are in the stream, or whether it is infinite; if we are going to combine them with a conjunction, the result will be false.
Haskell of course gets this with no problem:
Prelude> foldr (&&) True (repeat False)
False
And that is because of two important things: Haskell's foldr will traverse the stream from left to right, not right to left, and Haskell is lazy by default. The first item here, that foldr actually traverses the list from left to right, might surprise or confuse some people who think of a right fold as starting from the right, but the important feature of a right fold is not which end of the structure it starts on, but the direction in which it associates. So given a list [1,2,3,4] and an operator named op, a left fold is
(((1 op 2) op 3) op 4)
and a right fold is
(1 op (2 op (3 op 4)))
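A quick way to see the two groupings in Scala (not from the text; subtraction is used because it is not associative, so the grouping shows up in the result, and the zero element is explicit here even though the schematic above omits it):

List(1, 2, 3, 4).foldLeft(0)(_ - _)    // ((((0 - 1) - 2) - 3) - 4) == -10
List(1, 2, 3, 4).foldRight(0)(_ - _)   // 1 - (2 - (3 - (4 - 0)))   == -2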
But the order of evaluation shouldn't matter. So what the authors have done here in chapter 3 is to give you a fold which traverses the list from left to right, but because Scala is strict by default, we still will not be able to traverse our stream of infinite falses. Have some patience, they will get to that in chapter 5 :) I'll give you a sneak peek; let's look at the difference between foldRight as it is defined in the standard library and as it is defined in the Foldable typeclass in scalaz:
Here's the signature from the Scala standard library:
def foldRight[B](z: B)(op: (A, B) => B): B
Here's the definition from scalaz's Foldable:
def foldRight[B](z: => B)(f: (A, => B) => B): B
The difference is that the Bs are all lazy, and now we get to fold our infinite stream again, as long as we give a function which is sufficiently lazy in its second parameter:
scala> Foldable[Stream].foldRight(Stream.continually(false),true)(_ && _)
res0: Boolean = false
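To make the difference concrete, here is a minimal sketch of a foldRight that is non-strict in its accumulator, in the spirit of the scalaz signature above (illustrative only, not scalaz's actual implementation):

def foldRightLazy[A, B](xs: Stream[A])(z: => B)(f: (A, => B) => B): B =
  if (xs.isEmpty) z
  else f(xs.head, foldRightLazy(xs.tail)(z)(f))   // the recursive call is passed by name

foldRightLazy(Stream.continually(false))(true)(_ && _)   // false, without traversing the whole stream

Note that this buys laziness (and with it early termination), not stack safety: a combining function that is strict in its second argument would still build a deep chain of nested calls.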

One easy way to demonstrate this in Haskell is to use equational reasoning to trace the lazy evaluation. Let's write the find function in terms of foldr:
-- Return the first element of the list that satisfies the predicate, or `Nothing`.
find :: (a -> Bool) -> [a] -> Maybe a
find p = foldr (step p) Nothing
  where step pred x next = if pred x then Just x else next
foldr :: (a -> b -> b) -> b -> [a] -> b
foldr f z [] = z
foldr f z (x:xs) = f x (foldr f z xs)
In an eager language, if you wrote find with foldr it would traverse the whole list and use O(n) space. With lazy evaluation, it stops at the first element that satisfies the predicate, and uses only O(1) space (modulo garbage collection):
find odd [0..]
== foldr (step odd) Nothing [0..]
== step odd 0 (foldr (step odd) Nothing [1..])
== if odd 0 then Just 0 else (foldr (step odd) Nothing [1..])
== if False then Just 0 else (foldr (step odd) Nothing [1..])
== foldr (step odd) Nothing [1..]
== step odd 1 (foldr (step odd) Nothing [2..])
== if odd 1 then Just 1 else (foldr (step odd) Nothing [2..])
== if True then Just 1 else (foldr (step odd) Nothing [2..])
== Just 1
This evaluation stops in a finite number of steps, in spite of the fact that the list [0..] is infinite, so we know that we're not traversing the whole list. In addition, there is an upper bound on the complexity of the expressions at each step, which translates into a constant upper bound on the memory required to evaluate this.
The key here is that the step function that we're folding with has this property: no matter what the values of x and next are, it will either:
Evaluate to Just x, without invoking the next thunk, or
Tail-call the next thunk (in effect, if not literally).

Related

What kind of morphism is `filter` in category theory?

In category theory, is the filter operation considered a morphism? If yes, what kind of morphism is it? Example (in Scala)
val myNums: Seq[Int] = Seq(-1, 3, -4, 2)
myNums.filter(_ > 0)
// Seq[Int] = List(3, 2) // result = subset, same type
myNums.filter(_ > -99)
// Seq[Int] = List(-1, 3, -4, 2) // result = identical to original
myNums.filter(_ > 99)
// Seq[Int] = List() // result = empty, same type
One interesting way of looking at this matter involves not picking filter as a primitive notion. There is a Haskell type class called Filterable which is aptly described as:
Like Functor, but it [includes] Maybe effects.
Formally, the class Filterable represents a functor from Kleisli Maybe to Hask.
The morphism mapping of the "functor from Kleisli Maybe to Hask" is captured by the mapMaybe method of the class, which is indeed a generalisation of the homonymous Data.Maybe function:
mapMaybe :: Filterable f => (a -> Maybe b) -> f a -> f b
The class laws are simply the appropriate functor laws (note that Just and (<=<) are, respectively, identity and composition in Kleisli Maybe):
mapMaybe Just = id
mapMaybe (g <=< f) = mapMaybe g . mapMaybe f
The class can also be expressed in terms of catMaybes...
catMaybes :: Filterable f => f (Maybe a) -> f a
... which is interdefinable with mapMaybe (cf. the analogous relationship between sequenceA and traverse)...
catMaybes = mapMaybe id
mapMaybe g = catMaybes . fmap g
... and amounts to a natural transformation between the Hask endofunctors Compose f Maybe and f.
What does all of that have to do with your question? Firstly, a functor is a morphism between categories, and a natural transformation is a morphism between functors. That being so, it is possible to talk of morphisms here in a sense that is less boring than the "morphisms in Hask" one. You won't necessarily want to do so, but in any case it is an existing vantage point.
Secondly, filter is, unsurprisingly, also a method of Filterable, its default definition being:
filter :: Filterable f => (a -> Bool) -> f a -> f a
filter p = mapMaybe $ \a -> if p a then Just a else Nothing
Or, to spell it using another cute combinator:
filter p = mapMaybe (ensure p)
That indirectly gives filter a place in this particular constellation of categorical notions.
To answer a question like this, I'd like to first understand what the essence of filtering is.
For instance, does it matter that the input is a list? Could you filter a tree? I don't see why not! You'd apply a predicate to each node of the tree and discard the ones that fail the test.
But what would be the shape of the result? Node deletion is not always defined or it's ambiguous. You could return a list. But why a list? Any data structure that supports appending would work. You also need an empty member of your data structure to start the appending process. So any unital magma would do. If you insist on associativity, you get a monoid. Looking back at the definition of filter, the result is a list, which is indeed a monoid. So we are on the right track.
So filter is just a special case of what's called Foldable: a data structure over which you can fold while accumulating the results in a monoid. In particular, you could use the predicate to either output a singleton list, if it's true; or an empty list (identity element), if it's false.
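A small sketch of that "fold into a monoid" view in Scala, where the predicate picks either a singleton list or the empty list (the monoid identity) and the fold appends the pieces (illustrative, not library code):

def filterViaFold[A](xs: List[A])(p: A => Boolean): List[A] =
  xs.foldRight(List.empty[A]) { (x, acc) =>
    (if (p(x)) List(x) else Nil) ::: acc   // singleton or identity, then append
  }

filterViaFold(List(-1, 3, -4, 2))(_ > 0)   // List(3, 2)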
If you want a categorical answer, then a fold is an example of a catamorphism, an example of a morphism in the category of algebras. The (recursive) data structure you're folding over (a list, in the case of filter) is an initial algebra for some functor (the list functor, in this case), and your predicate is used to define an algebra for this functor.
In this answer, I will assume that you are talking about filter on Set (the situation seems messier for other datatypes).
Let's first fix what we are talking about. I will talk specifically about the following function (in Scala):
def filter[A](p: A => Boolean): Set[A] => Set[A] =
s => s filter p
When we write it down this way, we see clearly that it's a polymorphic function with type parameter A that maps predicates A => Boolean to functions that map Set[A] to other Set[A]. To make it a "morphism", we would have to find some categories first, in which this thing could be a "morphism". One might hope that it's a natural transformation, and therefore a morphism in the category of endofunctors on the "default ambient category-esque structure" usually referred to as "Hask" (or "Scal"? "Scala"?). To show that it's natural, we would have to check that the following diagram commutes for every f: B => A:
                        - o f
Hom[A, Boolean] ---------------------> Hom[B, Boolean]
       |                                      |
       | filter[A]                            | filter[B]
       |                                      |
       V                 ???                  V
Hom[Set[A], Set[A]] ---------------> Hom[Set[B], Set[B]]
However, here we fail immediately, because it's not clear what to even put on the horizontal arrow at the bottom, since the assignment A -> Hom[Set[A], Set[A]] doesn't even seem functorial (for the same reasons why A -> End[A] is not functorial, see here and also here).
The only "categorical" structure that I see here for a fixed type A is the following:
Predicates on A can be considered to be a partially ordered set with implication, that is p LEQ q if p implies q (i.e. either p(x) must be false, or q(x) must be true for all x: A).
Analogously, on functions Set[A] => Set[A], we can define a partial order with f LEQ g whenever for each set s: Set[A] it holds that f(s) is subset of g(s).
Then filter[A] would be monotonic, and therefore a functor between poset-categories. But that's somewhat boring.
Of course, for each fixed A, it (or rather its eta-expansion) is also just a function from A => Boolean to Set[A] => Set[A], so it's automatically a "morphism" in the "Hask-category". But that's even more boring.
filter can be written in terms of foldRight as:
def filter[A](p: A => Boolean)(ys: List[A]): List[A] =
  ys.foldRight(Nil: List[A])((x, xs) => if (p(x)) x :: xs else xs)
foldRight on lists is a map of T-algebras (where here T is the List datatype functor), so filter is a map of T-algebras.
The two algebras in question here are the initial list algebra
[nil, cons]: 1 + A x List(A) ----> List(A)
and, let's say the "filter" algebra,
[nil, f]: 1 + A x List(A) ----> List(A)
where f(x, xs) = if p(x) x::xs else xs.
Let's call filter(p, _) the unique map from the initial algebra to the filter algebra in this case (it is called fold in the general case). The fact that it is a map of algebras means that the following equations are satisfied:
filter(p, nil) = nil
filter(p, x::xs) = f(x, filter(p, xs))

What does >>= mean in purescript?

I was reading the PureScript wiki and found the following section, which explains do in terms of >>=.
What does >>= mean?
Do notation
The do keyword introduces simple syntactic sugar for monadic
expressions.
Here is an example, using the monad for the Maybe type:
maybeSum :: Maybe Number -> Maybe Number -> Maybe Number
maybeSum a b = do
  n <- a
  m <- b
  let result = n + m
  return result
maybeSum takes two values of type Maybe Number and returns their sum if neither number is Nothing.
When using do notation, there must be a corresponding instance of the Monad type class for the return type. Statements can have the following form:
a <- x which desugars to x >>= \a -> ...
x which desugars to x >>= \_ -> ... or just x if this is the last statement.
A let binding let a = x. Note the lack of the in keyword.
The example maybeSum desugars to:
maybeSum a b =
  a >>= \n ->
  b >>= \m ->
  let result = n + m
  in return result
>>= is a function, nothing more. It resides in the Prelude module and has type (>>=) :: forall m a b. (Bind m) => m a -> (a -> m b) -> m b, being an alias for the bind function of the Bind type class. You can find the definitions of the Prelude module in this link, found in the Pursuit package index.
This is closely related to the Monad type class in Haskell, for which it is a bit easier to find resources. There's a famous question on SO about this concept, which is a good starting point if you're looking to improve your knowledge of the bind function (if you're starting on functional programming now, you can skip it for a while).
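If it helps to see the desugared example in more familiar territory, here is the same shape in Scala, where flatMap on Option plays the role of >>= (a rough analogue for illustration, not PureScript code):

def maybeSum(a: Option[Double], b: Option[Double]): Option[Double] =
  a.flatMap { n =>
    b.flatMap { m =>
      val result = n + m
      Some(result)   // corresponds to `return result`
    }
  }

maybeSum(Some(1.0), Some(2.0))   // Some(3.0)
maybeSum(Some(1.0), None)        // None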

Is it possible to lazily traverse a recursive data-structure with O(1) memory usage, tail-call optimized?

Let's say that we have a recursive data-structure, like a binary tree. There are many ways to traverse it, and they have different memory-usage profiles. For instance, if we were to simply print the value of each node, using pseudo-code like the following in-order traversal...
visitNode(node) {
  if (node == null) return;
  visitNode(node.leftChild);
  print(node.value);
  visitNode(node.rightChild);
}
...our memory usage would be constant, but due to the recursive calls, we would increase the size of the call stack. On very large trees, this could potentially overflow it.
Let's say that we decided to optimize for call-stack size; assuming that this language is capable of proper tailcalls, we could rewrite this as the following pre-order traversal...
visitNode(node, nodes = []) {
  if (node != null) {
    print(node.value);
    visitNode(nodes.head, nodes.tail + [node.left, node.right]);
  } else if (node == null && nodes.length != 0) {
    visitNode(nodes.head, nodes.tail);
  } else return;
}
While we would never blow the stack, we would now see heap usage increase linearly with respect to the size of the tree.
Let's say we were then to attempt to lazily traverse the tree - here is where my reasoning gets fuzzy. I think that even using a basic lazy evaluation strategy, we would grow memory at the same rate as the tailcall optimized version. Here is a concrete example using Scala's Stream class, which provides lazy evaluation:
sealed abstract class Node[A] {
  def toStream: Stream[Node[A]]
  def value: A
}

case class Fork[A](value: A, left: Node[A], right: Node[A]) extends Node[A] {
  def toStream: Stream[Node[A]] = this #:: left.toStream.append(right.toStream)
}

case class Leaf[A](value: A) extends Node[A] {
  def toStream: Stream[Node[A]] = this #:: Stream.empty
}
Although only the head of the stream is strictly evaluated, anytime the left.toStream.append(right.toStream) is evaluated, I think this would actually evaluate the head of both the left and right streams. Even if it doesn't (due to append cleverness), I think that recursively building this thunk (to borrow a term from Haskell) would essentially grow memory at the same rate. Rather than saying, "put this node in the list of nodes to traverse", we're basically saying, "here's another value to evaluate that will tell you what to traverse next", but the outcome is the same; linear memory growth.
The only strategy I can think of that would avoid this is having mutable state in each node declaring which paths have been traversed. This would allow us to have a referentially transparent function that says, "Given a node, I will tell you which single node you should traverse next", and we could use that to build an O(1) iterator.
Is there another way to accomplish O(1), tailcall optimized traversal of a binary tree, possibly without mutable state?
Is there another way to accomplish O(1), tailcall optimized traversal of a binary tree, possibly without mutable state?
As I stated in my comment, you can do this if the tree need not survive the traversal. Here's a Haskell example:
data T = Leaf | Node T Int T
inOrder :: T -> [Int]
inOrder Leaf = []
inOrder (Node Leaf x r) = x : inOrder r
inOrder (Node (Node l x m) y r) = inOrder $ Node l x (Node m y r)
This takes O(1) auxiliary space if we assume the garbage collector will clean up any Node that we just processed, so we effectively replace it by a right-rotated version. However, if the nodes we process cannot immediately be garbage-collected, then the final clause may build up an O(n) number of nodes before it hits a leaf.
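For comparison, here is a rough Scala transcription of the same rotation idea, made explicitly tail-recursive by accumulating the visited values (illustrative only, with the same caveat that rotated-away nodes must be collectable; the accumulated list is just the output, not auxiliary bookkeeping):

sealed trait Tree
case object Leaf extends Tree
final case class Node(left: Tree, value: Int, right: Tree) extends Tree

@annotation.tailrec
def inOrder(t: Tree, acc: List[Int] = Nil): List[Int] = t match {
  case Leaf                      => acc.reverse
  case Node(Leaf, x, r)          => inOrder(r, x :: acc)                      // visit x, continue right
  case Node(Node(l, x, m), y, r) => inOrder(Node(l, x, Node(m, y, r)), acc)   // right-rotate
}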
If you have parent pointers, then it's also doable. Parent pointers require mutable state, though, and prevent sharing of subtrees, so they're not really functional. If you represent an iterator by a pair (cur, prev) that is initially (root, nil), then you can perform iteration as outlined here. You need a language with pointer comparisons to make this work, though.
Without parent pointers and mutable state, you need to maintain some data structure that at least tracks where the root of the tree is and how to get there, since you'll need such a structure at some point during in-order or post-order traversal. Such a structure necessarily takes Ω(d) space where d is the depth of the tree.
A fancy answer.
We can use free monads to get an efficient memory-utilization bound.
{-# LANGUAGE RankNTypes
, MultiParamTypeClasses
, FlexibleInstances
, UndecidableInstances #-}
class Algebra f x where
  phi :: f x -> x
An algebra of a functor f is a function phi from f x to x for some x. For example, any monad has an algebra for any object m x:
instance (Monad m) => Algebra m (m x) where
  phi = join
A free monad for any functor f can be constructed (possibly only for certain kinds of functors, such as omega-cocomplete ones; but all Haskell types are polynomial functors, which are omega-cocomplete, so the statement certainly holds for all Haskell functors):
data Free f a = Free (forall x. Algebra f x => (a -> x) -> x)
runFree g (Free m) = m g
instance Functor (Free f) where
  fmap f m = Free $ \g -> runFree (g . f) m
wrap :: (Functor f) => f (Free f a) -> Free f a
wrap f = Free $ \g -> phi $ fmap (runFree g) f
instance (Functor f) => Algebra f (Free f a) where
  phi = wrap
instance (Functor f) => Monad (Free f) where
  return a = Free ($ a)
  m >>= f = fjoin $ fmap f m
fjoin :: (Functor f) => Free f (Free f a) -> Free f a
fjoin mma = Free $ \g -> runFree (runFree g) mma
Now we can use Free to construct free monad for functor T a:
data T a b = T a b b
instance Functor (T a) where
  fmap f (T a l r) = T a (f l) (f r)
For this functor we can define an algebra for the object [a]:
instance Algebra (T a) [a] where
  phi (T a l r) = l ++ (a:r)
A tree is a free monad over functor T a:
type Tree a = Free (T a) ()
It can be constructed using the following functions (if defined as an ADT, they'd be constructor names, so nothing extraordinary):
tree :: a -> Tree a -> Tree a -> Tree a
tree a l r = phi $ T a l r -- phi here is for Algebra f (Free f a)
-- and translates T a (Tree a) into Tree a
leaf :: Tree a
leaf = return ()
To demonstrate how this works:
bar = tree 'a' (tree 'b' leaf leaf) $ tree 'r' leaf leaf
buz = tree 'b' leaf $ tree 'u' leaf $ tree 'z' leaf leaf
foo = tree 'f' leaf $ tree 'o' (tree 'o' leaf leaf) leaf
toString = runFree (\_ -> [] :: String)
main = print $ map toString [bar, buz, foo]
As runFree traverses the tree to replace leaf () with [], the algebra for T a [a] in all contexts is the algebra that constructs a string representing in-order traversal of the tree. Because functor T a b constructs a new tree as it goes, it must have the same memory consumption characteristics as the solution quoted by larsmans - if the tree is not kept in memory, the nodes are discarded as soon as they are replaced by the string representing the whole subtree.
Given that you have references to nodes' parents, there's a nice solution posted here. Replace the while loop with a tail-recursive call (passing in last and current), and that should do it.
The built-in back-references allow you to keep track of traversal ordering. Without these, I can't think of a way to do it on a (balanced) tree with less than O(log(n)) auxiliary space.
I was not able to find an answer but I got some pointers. Go have a look at http://www.ics.uci.edu/~dan/pub.html, scroll down to
[33] D.S. Hirschberg and S.S. Seiden, A bounded-space tree traversal algorithm, Information Processing Letters 47 (1993)
Download the postscript file, you may need to convert it to PDF (my ps viewer was unable to present it correctly). It mentions on page 2 (Table 1) a number of algorithms and additional literature.

How to concisely express function iteration?

Is there a concise, idiomatic way to express function iteration? That is, given a number n and a function f :: a -> a, I'd like to express \x -> f(...(f(x))...) where f is applied n times.
Of course, I could make my own, recursive function for that, but I'd be interested if there is a way to express it shortly using existing tools or libraries.
So far, I have these ideas:
\n f x -> foldr (const f) x [1..n]
\n -> appEndo . mconcat . replicate n . Endo
but they all use intermediate lists, and aren't very concise.
The shortest one I found so far uses semigroups:
\n f -> appEndo . times1p (n - 1) . Endo,
but it works only for positive numbers (not for 0).
Primarily I'm focused on solutions in Haskell, but I'd be also interested in Scala solutions or even other functional languages.
Because Haskell is influenced by mathematics so much, the definition from the Wikipedia page you've linked to almost directly translates to the language.
Just check this out. The definition from Wikipedia is:
f^0 = id
f^(n+1) = f ∘ f^n
Now in Haskell:
iterateF 0 _ = id
iterateF n f = f . iterateF (n - 1) f
Pretty neat, huh?
So what is this? It's a typical recursion pattern. And how do Haskellers usually treat that? We treat that with folds! So after refactoring we end up with the following translation:
iterateF :: Int -> (a -> a) -> (a -> a)
iterateF n f = foldr (.) id (replicate n f)
or point-free, if you prefer:
iterateF :: Int -> (a -> a) -> (a -> a)
iterateF n = foldr (.) id . replicate n
As you can see, there is no notion of the subject function's arguments, either in the Wikipedia definition or in the solutions presented here. It is a function on another function, i.e. the subject function is treated as a value. This is a higher-level approach to the problem than an implementation involving the subject function's arguments.
Now, concerning your worries about the intermediate lists: from the source-code perspective this solution turns out to be very similar to the Scala solution posted by jmcejuela, but there's a key difference: the GHC optimizer throws away the intermediate list entirely, turning the function into a simple recursive loop over the subject function. I don't think it could be optimized any better.
To comfortably inspect the intermediate compiler results for yourself, I recommend using ghc-core.
In Scala:
Function chain Seq.fill(n)(f)
See scaladoc for Function. Lazy version: Function chain Stream.fill(n)(f)
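A quick usage example (the function and the count below are arbitrary, chosen just for illustration):

val addFive: Int => Int = Function.chain(Seq.fill(5)((_: Int) + 1))
addFive(0)                                   // 5
Function.chain(Seq.empty[Int => Int])(42)    // 42: n = 0 gives the identity function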
Although this is not as concise as jmcejuela's answer (which I prefer), there is another way in Scala to express such a function without the Function module. It also works when n = 0.
def iterate[T](f: T=>T, n: Int) = (x: T) => (1 to n).foldLeft(x)((res, n) => f(res))
To avoid creating an intermediate list, one can use explicit recursion, which in turn requires an explicit result type (since the method is recursive).
def iterate[T](f: T=>T, n: Int): T=>T = (x: T) => (if(n == 0) x else iterate(f, n-1)(f(x)))
There is an equivalent solution using pattern matching like the solution in Haskell:
def iterate[T](f: T => T, n: Int): T => T = (x: T) => n match {
  case 0 => x
  case _ => iterate(f, n - 1)(f(x))
}
Finally, I prefer the short way of writing it in Caml, where there is no need to define the types of the variables at all.
let rec iterate f n x = match n with 0 -> x | n -> iterate f (n-1) (f x);;
let f5 = iterate f 5 in ...
I like pigworker's/tauli's ideas the best, but since they only gave them as comments, I'm making a CW answer out of them.
\n f x -> iterate f x !! n
or
\n f -> (!! n) . iterate f
perhaps even:
\n -> ((!! n) .) . iterate

Difference in asymptotic time of two variants of flatten

I am going through the Scala by Example document and I am having trouble with exercise 9.4.2. Here is the text:
Exercise 9.4.2 Consider the problem of writing a function flatten, which takes a list of element lists as arguments. The result of flatten should be the concatenation of all element lists into a single list. Here is an implementation of this method in terms of :\.
def flatten[A](xs: List[List[A]]): List[A] =
(xs :\ (Nil: List[A])) {(x, xs) => x ::: xs}
Consider replacing the body of flatten by
((Nil: List[A]) /: xs) ((xs, x) => xs ::: x)
What would be the difference in asymptotic complexity between the two versions of flatten?
In fact flatten is predefined, together with a set of other useful functions, in an object called List in the standard Scala library. It can be accessed from user programs by calling List.flatten. Note that flatten is not a method of class List – it would not make sense there, since it applies only to lists of lists, not to all lists in general.
I do not see how the asymptotic time of these two function variants are different. I'm sure it's because I am missing something fundamental about the meaning of fold left and fold right.
Here is a pdf of the document I am describing:
http://www.scala-lang.org/docu/files/ScalaByExample.pdf
I am generally finding this an excellent introduction into Scala.
Look at the implementation of concatenation ::: (p. 68).
Witness that it's linear (in ::) in the size of the left argument (the list that ends up being the prefix of the result).
Assume (for the sake of the complexity analysis) that your list of lists contains n equal-sized small lists of size a fixed constant k, k<n. If you use foldLeft, you compute:
f (... (f (f a b1) b2) ...) bn
Where f is the concatenation. If you use foldRight:
f a1 (f a2 (... (f an b) ...))
With f again standing for prefix-notation concatenation. In the second case it's easy: you add k elements at the head each time, so you do k*n cons in total.
For the first case (foldLeft), in the first concatenation the list (f a b1) is of size k. On the second round you add it to b2 to form (f (f a b1) b2) of size 2k, and so on. You do k + 2k + 3k + ... = k * (1 + 2 + ... + n) = k*n(n+1)/2 cons.
(Follow-up question: is this the only parameter that should be taken into account when thinking about the efficiency of that function? Doesn't foldLeft have an advantage, not in asymptotic complexity, that foldRight doesn't?)
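To check the analysis empirically, here are the two variants written with the modern method names (foldRight/foldLeft instead of :\ and /:); the comments restate the cons counts derived above:

def flattenRight[A](xs: List[List[A]]): List[A] =
  xs.foldRight(Nil: List[A])((x, acc) => x ::: acc)   // each ::: copies one inner list: about k*n cons

def flattenLeft[A](xs: List[List[A]]): List[A] =
  xs.foldLeft(Nil: List[A])((acc, x) => acc ::: x)    // each ::: copies the growing prefix: about k*n(n+1)/2 cons

// e.g. compare both on List.fill(20000)(List(1, 2, 3)) to see the quadratic behaviour of flattenLeft.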