Why do we need flatMap (in general)?

I have been looking into FP languages (off and on) for some time and have played with Scala, Haskell, F#, and some others. I like what I see and understand some of the fundamental concepts of FP (with absolutely no background in Category Theory - so don't talk Math, please).
So, given a type M[A] we have map which takes a function A=>B and returns a M[B]. But we also have flatMap which takes a function A=>M[B] and returns a M[B]. We also have flatten which takes a M[M[A]] and returns a M[A].
In addition, many of the sources I have read describe flatMap as map followed by flatten.
So, given that flatMap seems to be equivalent to flatten compose map, what is its purpose? Please don't say it is to support 'for comprehensions' as this question really isn't Scala-specific. And I am less concerned with the syntactic sugar than I am in the concept behind it. The same question arises with Haskell's bind operator (>>=). I believe they both are related to some Category Theory concept but I don't speak that language.
I have watched Brian Beckman's great video Don't Fear the Monad more than once and I think I see that flatMap is the monadic composition operator but I have never really seen it used the way he describes this operator. Does it perform this function? If so, how do I map that concept to flatMap?
BTW, I had a long writeup on this question with lots of listings showing experiments I ran trying to get to the bottom of the meaning of flatMap and then ran into this question which answered some of my questions. Sometimes I hate Scala implicits. They can really muddy the waters. :)

flatMap, known as "bind" in some other languages, is, as you said yourself, for function composition.
Imagine for a moment that you have some functions like these:
def foo(x: Int): Option[Int] = Some(x + 2)
def bar(x: Int): Option[Int] = Some(x * 3)
The functions work great: calling foo(3) returns Some(5), calling bar(3) returns Some(9), and we're all happy.
But now you've run into the situation that requires you to do the operation more than once.
foo(3).map(x => foo(x)) // or just foo(3).map(foo) for short
Job done, right?
Except not really. The output of the expression above is Some(Some(7)), not Some(7), and if you now want to chain another map on the end you can't, because foo and bar take an Int, not an Option[Int].
Enter flatMap
foo(3).flatMap(foo)
Will return Some(7), and
foo(3).flatMap(foo).flatMap(bar)
Returns Some(21).
This is great! Using flatMap lets you chain functions of the shape A => M[B] to oblivion (in the previous example A and B are Int, and M is Option).
More technically speaking, flatMap and bind have the signature M[A] => (A => M[B]) => M[B], meaning they take a "wrapped" value, such as Some(3), Right('foo), or List(1,2,3), and shove it through a function that would normally take an unwrapped value, such as the aforementioned foo and bar. It does this by first "unwrapping" the value, and then passing it through the function.
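To make the "unwrap, apply, re-wrap" step concrete, here is a minimal sketch of how such a flatMap could be written for Option (simplified for illustration; not the actual standard-library source):
def flatMapOption[A, B](m: Option[A])(f: A => Option[B]): Option[B] =
  m match {
    case Some(a) => f(a)  // unwrap the value and apply f, which re-wraps it
    case None    => None  // nothing to unwrap, so short-circuit
  }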
This unwrapping and re-wrapping behavior means that if I were to introduce a third function that doesn't return an Option[Int] and tried to flatMap it onto the chain, it wouldn't work, because flatMap expects you to return a monad (in this case an Option).
def baz(x: Int): String = x + " is a number"
foo(3).flatMap(foo).flatMap(bar).flatMap(baz) // <<< ERROR
To get around this, if your function doesn't return a monad, you just have to use the regular map function:
foo(3).flatMap(foo).flatMap(bar).map(baz)
Which would then return Some("21 is a number").

It's the same reason you provide more than one way to do anything: it's a common enough operation that you may want to wrap it.
You could ask the opposite question: why have map and flatten when you already have flatMap and a way to store a single element inside your collection? That is,
x map f
x filter p
can be replaced by
x flatMap ( xi => x.take(0) :+ f(xi) )
x flatMap ( xi => if (p(xi)) x.take(0) :+ xi else x.take(0) )
so why bother with map and filter? (Here x.take(0) is just a trick to get an empty collection of the same type as x.)
In fact, there are various minimal sets of operations you need to reconstruct many of the others (flatMap is a good choice because of its flexibility).
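As a sketch of that idea, specialized to List and using List(_) as the single-element wrapper (the helper names here are mine):
// map and filter derived from flatMap alone
def mapViaFlatMap[A, B](xs: List[A])(f: A => B): List[B] =
  xs.flatMap(a => List(f(a)))

def filterViaFlatMap[A](xs: List[A])(p: A => Boolean): List[A] =
  xs.flatMap(a => if (p(a)) List(a) else Nil)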
Pragmatically, it's better to have the tool you need. Same reason why there are non-adjustable wrenches.

The simplest reason is to compose an output set where each entry in the input set may produce more than one output (or none at all!).
For example, consider a program which outputs addresses for people in order to generate mailers. Most people have one address. Some have two or more. Some, unfortunately, have none. flatMap is a generalized algorithm to take a list of these people and return all of the addresses, regardless of how many come from each person.
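A sketch of that mailer example (the Person type here is made up for illustration):
case class Person(name: String, addresses: List[String])

val people = List(
  Person("Ann", List("1 Main St", "2 Oak Ave")), // two addresses
  Person("Bob", List("3 Elm St")),               // one address
  Person("Cat", Nil)                             // no address at all
)

// flatMap gathers every address, however many each person contributes
val allAddresses: List[String] = people.flatMap(_.addresses)
// List("1 Main St", "2 Oak Ave", "3 Elm St")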
The zero-output case is particularly useful for option-like monads such as Maybe, which hold exactly zero or one result (zero if the computation fails, one if it succeeds). In that case you want to perform an operation on "all of the results", which may just so happen to be one or none.

The "flatMap", or "bind", method, provides an invaluable way to chain together methods that provide their output wrapped in a Monadic construct (like List, Option, or Future). For example, suppose you have two methods that produce a Future of a result (eg. they make long-running calls to databases or web service calls or the like, and should be used asynchronously):
def fn1(input1: A): Future[B] // (for some types A and B)
def fn2(input2: B): Future[C] // (for some types B and C)
How to combine these? With flatMap, we can do this as simply as:
def fn3(a: A): Future[C] = fn1(a).flatMap(b => fn2(b))
In this sense, we have "composed" a function fn3 out of fn1 and fn2 using flatMap, which has the same general structure (and so can be composed in turn with further similar functions).
The map method would give us a not-so-convenient - and not readily chainable - Future[Future[C]]. Certainly we can then use flatten to reduce this, but the flatMap method does it in one call, and can be chained as far as we wish.
This way of working is so useful, in fact, that Scala provides the for-comprehension as essentially a shortcut for it (Haskell, too, provides a shorthand way of writing a chain of bind operations - I'm not a Haskell expert, though, and don't recall the details) - hence the talk you will have come across about for-comprehensions being "de-sugared" into a chain of flatMap calls (along with possible filter calls and a final map call for the yield).
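For instance, a minimal sketch of that equivalence (the bodies of fn1 and fn2 here are hypothetical stand-ins for real asynchronous calls):
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

def fn1(a: Int): Future[String] = Future(a.toString)
def fn2(b: String): Future[Int] = Future(b.length)

// The for-comprehension...
val viaFor: Future[Int] =
  for {
    b <- fn1(42)
    c <- fn2(b)
  } yield c

// ...de-sugars to a chain of flatMap calls, with a final map for the yield:
val viaFlatMap: Future[Int] =
  fn1(42).flatMap(b => fn2(b).map(c => c))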

Well, one could argue, you don't need .flatten either. Why not just do something like
import scala.annotation.tailrec

@tailrec
def flatten[T](in: Seq[Seq[T]], out: Seq[T] = Nil): Seq[T] = in match {
  case Seq() => out
  case head +: tail => flatten(tail, out ++ head)
}
Same can be said about map:
@tailrec
def map[A, B](in: Seq[A], out: Seq[B] = Nil)(f: A => B): Seq[B] = in match {
  case Seq() => out
  case head +: tail => map(tail, out :+ f(head))(f)
}
So, why are .flatten and .map provided by the library? Same reason .flatMap is: convenience.
There is also .collect, which is really just
list.filter(f.isDefinedAt _).map(f)
.reduce is actually nothing more than list.tail.foldLeft(list.head)(f),
.headOption is
list match {
  case Nil => None
  case head :: _ => Some(head)
}
Etc ...
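For instance, the .collect equivalence above in action (toy values):
val f: PartialFunction[Int, String] = { case x if x % 2 == 0 => s"$x is even" }

List(1, 2, 3, 4).collect(f)                   // List("2 is even", "4 is even")
List(1, 2, 3, 4).filter(f.isDefinedAt).map(f) // the same result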

Related

Why does a chain of flatMaps stop on the first Left value but continue on Right values

I don't understand this line:
Right(1).flatMap(_ => Left(2)).flatMap(_ => Left(3))
Right(1) is passed to .flatMap(_ => Left(2)). It returns Left(2), which is passed to .flatMap(_ => Left(3)). And it should've returned Left(3). But it returns Left(2).
Why is that so?
Another example is Right(1).flatMap(_ => Right(2)).flatMap(_ => Right(3))
It returns Right(3) (as it should have).
From what I understand it works as follows:
Right(1) is passed to .flatMap(_ => Right(2)). It returns Right(2), which is passed to .flatMap(_ => Right(3)). At the end it returns Right(3).
The reason is that, starting from Scala 2.12, Either is right-biased. It means that operations like flatMap will stop computing when the result is a Left. Check the (slightly simplified) implementation to understand it:
def flatMap[EE >: E, B](f: A => Either[EE, B]): Either[EE, B] =
  this match {
    case Left(value)  => Left(value)
    case Right(value) => f(value)
  }
So, as you can see, in the case of a Left it constructs a new Left with the value extracted from the original, without applying f.
flatMap is right-biased. What I mean by that is that it only operates on Right values, not on Left values. This allows a sequence of flatMaps to short-circuit when it hits the first Left.
See the documentation for more examples of this:
The chain of flat-mapped computations is short-circuited on the first evaluated Left, due to the Either monad being success-biased on Right values. The reason for such bias is that programmers often interpret the left side as representing the error result of a computation, whilst the right-hand side means the successful result. So if Left means error, there is not much point in continuing to compute with an error down the chain, hence the chain is broken.
Note, the Either monad used to be biased only by convention. The conventional right-bias of Either was formalised in Scala 2.12. Some argue Either should be unbiased; for example,
If you use Either for error reporting, then it is true that you want
it to be biased to one side, but that is only one use-case of many, and
hardcoding a single, special use-case into a general interface smells of
bad design. And for that use-case, you might just as well use Try,
which is basically a biased Either.
whilst others argue in favour of one side, for example,
... with Scala 2.12 it became right-biased, which is IMHO a better
design choice, and fits perfectly with other similar sum types from
other libraries. For example, it's very easy now to go from Either to
\/ (the scalaz disjunction) now that there is no bias mismatch. They are
completely isomorphic.
Nevertheless, the bias of Either does not force "happy/unhappy" semantics; for example, the requirement in "Creating a method which returns one or two parameters" may be addressed with Either where the left side is interpreted as the happy/successful value.

flatMap(GenTraversableOnce) on Options

This doesn't work:
val res = myOption flatMap (value => Seq(value, "blo"))
But this does:
val res = myOption.toSeq flatMap (value => Seq(value, "blo"))
Don't you think flatMap on Options should take a GenTraversableOnce, just like Seq does?
Or is this code bad for readability, and should I use a match or a map/getOrElse instead?
Edit: We also get the same issue on for/yield.
Cheers
Option.flatMap returns an Option, which is "like" a sequence but cannot contain more than one element. If it were allowed to take a function that returned a Seq, and that Seq contained more than one element, what would the return value of flatMap be (remember, it needs to be an Option)?
Why does flatMap need to return an Option in the first place? Well, all flatMap implementations return the same type they started with. It makes sense: if I have an Option of something, and want to transform the contents somehow, the most common use case is that I want to end up with another Option. If flatMap returned me a Seq, what would I do? .headOption? That is not a very good idea, because it would potentially silently discard data. if (seq.size < 2) seq.headOption else throw ...? Well, this is a little bit better, but it looks ugly and isn't enforceable at compile time.
Converting an Option to a Seq when you need it on the other hand, is very easy and entirely safe: just do .toSeq.
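A short sketch contrasting the two (the values are made up):
val myOption: Option[String] = Some("bla")

// Option.flatMap must be given a function that returns an Option:
val opt: Option[String] = myOption.flatMap(value => Some(value + "!")) // Some("bla!")

// To fan out into several elements, convert to a Seq first:
val seq: Seq[String] = myOption.toSeq.flatMap(value => Seq(value, "blo"))
// Seq("bla", "blo"); starting from None it would be Seq()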
The overall semantics of flatMap is to work like the monadic bind method, which means it tends to have a signature like
[A]this:T[A].flatMap[B](f: A => T[B]): T[B]
Sometimes (e.g. in SeqLike) this signature is generalized to
[A]this:T[A].flatMap[B](f: A => F[B]): T[B]
where F[B] is something easily convertible to T[B].
So not only Option, but also concurrent.Future, util.Try, and BindOps (the extension syntax for scalaz monads) have a flatMap method that does not accept an arbitrary traversable, but only the same wrapper type.
I.e. flatMap is more a thing from the world of monads than from collections.
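A quick illustration of that uniformity (toy values):
import scala.util.{Success, Try}

// Each flatMap stays within its own wrapper type:
val t: Try[Int]    = Success(1).flatMap(x => Try(x + 1))   // Success(2): Try stays Try
val o: Option[Int] = Option(1).flatMap(x => Option(x + 1)) // Some(2): Option stays Option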

How to apply function to parameters when they meet given constraints?

Let's say we have a function
f: (A, A) => A
and we are given two optional values
val x: Option[A] = ...
val y: Option[A] = ...
Now we want to apply the function f to x,y. In case one of the parameters is None, we want to return None. In scalaz we could write
^(x, y)(f)
Now assume that we want to restrict the input parameters for function f by coming up with another function which checks if the parameters meet a constraint. If so the function applies f to its arguments and returns the result wrapped in a Some object. Otherwise it returns None:
fRestricted: (A, A) => Option[A]
The problem now is that we cannot make use of Scalaz anymore, i.e.
^(x, y)(fRestricted)
does not work anymore, since the types are wrong.
Is this actually a common problem? Is there a generic solution for the use case?
It is one of the most basic problems in functional programming (not just Scala).
The thing you were doing with Scalaz was based on the applicative-functor capabilities of Option. Unfortunately the standard Scala library does not support applicative functors; however, it does support monads, and Option is a monad. Applicative functors are strictly more general than monads, so whatever applicative functors can do, monads can do too.
Here's how you can exploit the fact that Option is a monad to solve your problem with the standard library:
for {
  a <- x
  b <- y
  c <- fRestricted(a, b)
} yield c
The syntax above is called a "for-comprehension", and in fact it is nothing more than sugar for the following:
x.flatMap( a => y.flatMap( b => fRestricted(a, b) ) )
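To see it run, here is a toy instantiation (the non-negativity constraint in this fRestricted is made up):
def fRestricted(a: Int, b: Int): Option[Int] =
  if (a >= 0 && b >= 0) Some(a + b) else None

val x: Option[Int] = Some(1)
val y: Option[Int] = Some(2)

val ok     = for { a <- x; b <- y; c <- fRestricted(a, b) } yield c          // Some(3)
val failed = for { a <- x; b <- Option(-2); c <- fRestricted(a, b) } yield c // None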

Semigroup-like thing where the append operation can fail

Suppose I have some type with an associative binary operation that feels a lot like append except that the operation may fail. For example, here's a wrapper for List[Int] that only allows us to "add" lists with the same length:
case class Foo(xs: List[Int]) {
  def append(other: Foo): Option[Foo] =
    if (xs.size != other.xs.size) None
    else Some(Foo(xs.zip(other.xs).map { case (a, b) => a + b }))
}
This is a toy example, but one of the things it has in common with my real use case is that we could in principle use the type system to make the operation total—in this case by tracking the length of the lists with something like Shapeless's Sized, so that adding lists of unequal lengths would be a compile-time error instead of a runtime failure. That's not too bad, but in my real use case managing the constraints in the type system would require a lot more work and isn't really practical.
(In my use case I have a sensible identity, unlike in this toy example, but we can ignore that for now.)
Is there some principled way to do this kind of thing? Searching for a -> a -> m a or a -> a -> Maybe a on Hoogle doesn't turn up anything interesting. I know I can write an ad-hoc append method that just returns its result wrapped in whatever type I'm using to model failure, but it'd be nice to have something more general that would give me foldMap1, etc. for free—especially since this isn't the first time I've found myself wanting this kind of thing.
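In the absence of a ready-made typeclass, one possible shape for such an abstraction is sketched below; the PartialSemigroup name and encoding are made up here, not taken from any particular library:
// A semigroup-like typeclass whose combine may fail
trait PartialSemigroup[A] {
  def combine(x: A, y: A): Option[A]
}

val fooPartialSemigroup: PartialSemigroup[Foo] = new PartialSemigroup[Foo] {
  def combine(x: Foo, y: Foo): Option[Foo] = x.append(y)
}

// A derived non-empty fold: combines left to right, and flatMap
// makes it fail fast on the first None
def combineAll1[A](head: A, tail: List[A])(P: PartialSemigroup[A]): Option[A] =
  tail.foldLeft(Option(head))((acc, a) => acc.flatMap(P.combine(_, a)))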

Elegant way to reverse a list using foldRight?

I was reading about fold techniques in Programming in Scala book and came across this snippet:
def reverseLeft[T](xs: List[T]) = (List[T]() /: xs) {
  (ys, y) => y :: ys
}
As you can see, it was done using foldLeft, or the /: operator. Curious what it would look like if I did it using :\, I came up with this:
def reverseRight[T](xs: List[T]) = (xs :\ List[T]()) {
  (y, ys) => ys ::: List(y)
}
As I understand it, ::: doesn't seem to be as fast as :: and has a linear cost depending on the size of the operand list. Admittedly, I don't have a background in CS and no prior FP experience. So my questions are:
How do you recognise/distinguish between foldLeft/foldRight in problem approaches?
Is there a better way of doing this without using :::?
Since foldRight on List in the standard library is strict and implemented using linear recursion, you should avoid using it, as a rule. An iterative implementation of foldRight would be as follows:
def foldRight[A, B](f: (A, B) => B, z: B, xs: List[A]) =
  xs.reverse.foldLeft(z)((x, y) => f(y, x))
A recursive implementation of foldLeft could be this:
def foldLeft[A, B](f: (B, A) => B, z: B, xs: List[A]) =
  xs.reverse.foldRight(z)((x, y) => f(y, x))
So you see, if both are strict, then one or the other of foldRight and foldLeft is going to be implemented (conceptually anyway) with reverse. Since the way lists are constructed with :: associates to the right, the straightforward iterative fold is going to be foldLeft, and foldRight is simply "reverse then foldLeft".
Intuitively, you might think that this would be a slow implementation of foldRight, since it folds the list twice. But:
"Twice" is a constant factor anyway, so it's asymptotically equivalent to folding once.
You have to go over the list twice anyway. Once to push computations onto the stack and again to pop them off the stack.
The implementation of foldRight above is faster than the one in the standard library.
Operations on a List are intentionally not symmetric. The List data structure is a singly-linked list where each node (both its data and its pointer) is immutable. The idea behind this data structure is that you perform modifications on the front of the list by taking references to internal nodes and adding new nodes that point to them -- different versions of the list will share the same nodes for the end of the list.
The ::: operator, which concatenates two lists (and is what appends an element at the end in the snippet above), has to create a new copy of its entire left-hand operand, because otherwise it would modify other lists that share nodes with the list you're appending to. This is why ::: takes linear time.
Scala has a data structure called a ListBuffer that you can use instead of the ::: operator to make appending to the end of a list faster. Basically, you create a new ListBuffer and it starts with an empty list. The ListBuffer maintains a list completely separate from any other list that the program knows about, so it's safe to modify it by adding on to the end. When you're finished adding on to the end, you call ListBuffer.toList, which releases the list into the world, at which point you can no longer add on to the end without copying it.
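For instance, a minimal sketch:
import scala.collection.mutable.ListBuffer

val buf = new ListBuffer[Int]
for (x <- 1 to 5) buf += x      // cheap appends to the private buffer
val xs: List[Int] = buf.toList  // List(1, 2, 3, 4, 5): the list is "released"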
foldLeft and foldRight also share a similar asymmetry. foldRight requires you to walk the entire list to get to its end, keeping track of everywhere you've visited on the way there, so that you can visit the elements in reverse order. This is usually done recursively, and it can lead to foldRight causing stack overflows on large lists. foldLeft, on the other hand, deals with nodes in the order they appear in the list, so it can forget the ones it has already visited and only needs to know about one node at a time. Though foldLeft is also usually implemented recursively, it can take advantage of an optimization called tail recursion elimination, in which the compiler transforms the recursive calls into a loop because the function doesn't do anything after returning from the recursive call. Thus, foldLeft doesn't overflow the stack even on very long lists. EDIT: foldRight in Scala 2.8 is actually implemented by reversing the list and running foldLeft on the reversed list -- so the tail recursion issue is not an issue, and you could choose either one. (You do get into the issue that you're now defining reverse in terms of reverse -- you don't need to worry if you're defining your own reverse method for the fun of it, but you wouldn't have the foldRight option at all if you were defining Scala's reverse method.)
Thus, you should prefer foldLeft and :: over foldRight and :::.
(In an algorithm that combines foldLeft with ::: or foldRight with ::, you need to decide for yourself which is more important: stack space or running time. Or you should use foldLeft with a ListBuffer.)