Kleisli vs flatMap Sequencing - scala

I've been looking at the Kleisli definition
in Cats, and in Functional and Reactive Domain Modelling.
However, I'm not yet able to grasp its usefulness. If we talk about composing monadic functions, i.e. functions that return a monad such as A => F[B], I don't see what it actually adds over simply sequencing a chain of
flatMap[A, B](ma: F[A])(f: A => F[B]): F[B]
Indeed, being able to chain the above is similar to
If you have a function f: A => F[B] and another function g: B => F[C], where F is a monad, then you can compose them to get A => F[C]
What is it that I am not seeing? What is the real added value of "Kleisli"?

Kleisli is simply a name for a function of shape A => F[B].
We could say that flatMap and Kleisli revolve around a similar idea and are tied to similar concepts, but they are not the same thing. Neither one "adds value" over the other. Here's an example of their connection:
Monad can be defined in several different, but equally powerful ways. One is using unit + flatMap, with its laws defined as:
left-identity law:
unit(x).flatMap(f) == f(x)
right-identity law:
m.flatMap(unit) == m
associativity law:
m.flatMap(f).flatMap(g) == m.flatMap(x ⇒ f(x).flatMap(g))
Another way is using unit + compose, with its laws defined as:
left-identity law:
unit.compose(f) == f
right-identity law:
f.compose(unit) == f
associativity law:
f.compose(g.compose(h)) == (f.compose(g)).compose(h)
In the above definitions, flatMap is the good old flatMap you know:
def flatMap: F[A] => (A => F[B]) => F[B]
and compose is the composition of Kleisli arrows:
def compose: (A => F[B]) => (B => F[C]) => A => F[C]
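To make the connection concrete, here is a minimal sketch specialized to Option for brevity (the local compose and flatMap definitions below are just for illustration, not Cats methods), showing that each operation can be written in terms of the other:

// Kleisli composition defined via flatMap ...
def compose[A, B, C](f: A => Option[B], g: B => Option[C]): A => Option[C] =
  a => f(a).flatMap(g)

// ... and flatMap recovered from compose, using a constant Kleisli arrow.
def flatMap[A, B](ma: Option[A])(f: A => Option[B]): Option[B] =
  compose((_: Unit) => ma, f)(())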
So basically it's all about terminology. They often pop up in a similar context, but they are not the same. They are simply names for two related, but different things.

In addition to the answer given by slouc, I think it useful to add that I have never really seen the term Kleisli used without the term composition appended to it. So, you might say that the real benefit of separating out Kleisli functions is in how they can be composed.
flatMap isn't a composition of functions. It is, instead, a sequencing of operations on data. But Kleisli composition (like composition of other functions) allows the creation of new functions from other functions following certain rules - as pointed out by slouc.
In Haskell, composition is done with the dot operator. So, if f: A => B and g: B => C, you can have:
h = g . f  -- h: A => C
But if f and g are Kleisli functions (f: A => M[B] and g: B => M[C]) this doesn't work. This is where Kleisli composition comes into play. You often see it defined as the 'fish' operator, >=>, or something similar. Using Kleisli composition you can have:
h = f >=> g  -- h: A => M[C]
BTW, depending on the language or library, the order of g and f in the fish operator may be reversed. But the concept still applies. You are building a new function from two existing functions via composition. You can, later, apply this function to data and have the same result you would have with sequential applications of flatMap.
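In Scala terms, a minimal sketch of that composition, specialized to Option for concreteness (composeK, parse, and recip are illustrative names, not library functions): the new function is built once by composition and then applied to data, with the same result as flatMap sequencing.

// Kleisli composition for Option: what the fish operator does, applying f first, then g.
def composeK[A, B, C](f: A => Option[B], g: B => Option[C]): A => Option[C] =
  a => f(a).flatMap(g)

val parse: String => Option[Int]  = s => scala.util.Try(s.toInt).toOption
val recip: Int => Option[Double]  = n => if (n == 0) None else Some(1.0 / n)

val h: String => Option[Double] = composeK(parse, recip)
h("4")   // Some(0.25), the same as parse("4").flatMap(recip)
h("0")   // None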
One other thing I should probably mention: since Kleisli functions compose, they form a proper category, so you will also see the term Kleisli category. It isn't all that important to a software developer, but I had to come to grips with it since I saw it often in documentation and blogs, so I thought I would pass it on.

Related

Understanding monads in scala

I'm trying to understand what monads are (not just in Scala, but by example using Scala). Let's consider what is (in my opinion) the simplest example of a monad:
scala.Some
As some articles state, every monad in the classic sense should obey certain laws for its flatMap and unit functions.
Here is the definition from scala.Some
@inline final def flatMap[B](f: A => Option[B]): Option[B]
So, to understand it better, I want to look at it from the category theory standpoint. We're considering a monad, and it's supposed to be a functor (but between what?).
Here we seem to have two categories, Option[A] and Option[B], and flatMap, along with the f: A => Option[B] passed into it, is supposed to define a functor between them. But in the traditional definition a monad is a functor from a category to itself.
The category is the category of Scala types, where the objects are types and the arrows are functions between values of those types. Option is an endofunctor on this category: for each object (i.e. type) A in the Scala category, the Option type constructor maps it to the type Option[A].
In addition, it maps each arrow f: A => B to an arrow fo: Option[A] => Option[B], which is what Option.map does.
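For example, a tiny sketch of that arrow mapping (liftOption is just an illustrative helper name, not a standard library method):

def liftOption[A, B](f: A => B): Option[A] => Option[B] = _.map(f)

liftOption((n: Int) => n.toString)(Some(42))   // Some("42")
liftOption((n: Int) => n.toString)(None)       // None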
A Monad is a Functor M along with two operations, unit: A => M[A] and join: M[M[A]] => M[A]. For Option, unit(x: A) = Some(x) and join can be defined as:
def join[A](o: Option[Option[A]]): Option[A] = o match {
  case None    => None
  case Some(i) => i
}
flatMap can then be defined as flatMap(f, m) = join(map(f, m)). Alternatively, the monad can be defined using unit and flatMap, with join defined as join(m) = flatMap(id, m).
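As a sketch for Option specifically (reusing the join defined above; flatMapViaJoin and joinViaFlatMap are just illustrative names), the two directions look like this:

// flatMap defined via map and join ...
def flatMapViaJoin[A, B](m: Option[A])(f: A => Option[B]): Option[B] =
  join(m.map(f))

// ... and join recovered from flatMap applied to the identity function.
def joinViaFlatMap[A](m: Option[Option[A]]): Option[A] =
  m.flatMap(identity)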

Curry in scala with parametric types

The authors of Functional Programming in Scala give this as the definition of curry in scala:
def curry[A,B,C](f: (A, B) => C): A => (B => C) =
  a => b => f(a, b)
However, if we apply it to a function taking parametric types, e.g.:
def isSorted[A](as: Array[A], ordered: (A, A) => Boolean): Boolean =
  if (as.size < 2) true
  else as.zip(as.drop(1)).map(ordered.tupled).reduce(_ && _)
Then the result wants A (in isSorted) to be nothing:
scala> curry(isSorted)
res29: Array[Nothing] => (((Nothing, Nothing) => Boolean) => Boolean) = <function1>
This is obviously not what is desired. Should curry be defined differently, or called differently, or is it not practical to implement curry in Scala?
You're running into two separate problems here. The first is that isSorted when it is passed to curry is forced to become monomorphic. The second is that Scala's type inference is failing you here.
This is one of those times where the difference between a function and a method matters in Scala. isSorted is a method, and passing it to curry eta-expands it into a function, which is a Scala value, not a method. Scala values are always monomorphic; only methods can be polymorphic. For any method of type (A, B)C (this is the syntax for a method type and is different from (A, B) => C, which is a function type and therefore a value's type), the default eta-expansion, with nothing to constrain the type parameters, is going to result in the common supertype of all functions of that arity, namely (Nothing, Nothing) => Any. This is responsible for all the Nothings you see (you don't get any Anys because isSorted's return type is the concrete Boolean).
You might imagine, though, that despite the monomorphic nature of Scala values you could ideally do something like
def first[A, B](x: A, y: B): A = x
curry(first)(5)(6) // This doesn't compile
This is Scala's local type inference biting you. It works on separate parameter lists from left to right: first is the first thing to get a type inferred and, as mentioned above, it gets inferred to be (Nothing, Nothing) => Any. This clashes with the Ints that follow.
As you've realized, one way of getting around this is supplying a type argument to the polymorphic method you pass to curry, so that it eta-expands into the correct type. This is almost certainly the way to go.
Another thing you could possibly do (although I don't think it'll serve anything except pedagogical purposes) is to curry curry itself and define it as follows:
def curryTwo[A, B, C](x: A)(y: B)(f: (A, B) => C): C = f(x, y)
On the one hand, the below works now because of the left-to-right type inference.
curryTwo(5)(6)(first) // 5
On the other hand, to use curryTwo in the scenarios where you'd want to use curry, you're going to need to provide types to Scala's type inference engine anyway.
It turns out I can call curry like this:
curry(isSorted[Int])
Which yields:
scala> curry(isSorted[Int])
res41: Array[Int] => (((Int, Int) => Boolean) => Boolean) = <function1>
See https://stackoverflow.com/a/4593509/21640

Reason for type inference limitations in Scala compiler when dealing with partially applied functions

In Scala, when using partially applied functions vs curried functions, we have to deal with a different way of handling type inference. Let me show it with an example, using a basic filtering function (examples taken from the excellent Functional Programming in Scala book):
1) Partially applied function
def dropWhile[A](l: List[A], f: A => Boolean): List[A] = l match {
  case Nil              => Nil
  case x :: xs if f(x)  => dropWhile(xs, f)
  case _                => l
}
2) Curried partially applied function
def dropWhileCurried[A](l: List[A])(f: A => Boolean): List[A] = l match {
  case Nil              => Nil
  case x :: xs if f(x)  => dropWhileCurried(xs)(f)
  case _                => l
}
Now, while the implementation is identical in both versions, the difference comes when we call these functions. While the curried version can be simply called like:
dropWhileCurried(List(1,2,3,4,5))(x => x < 3)
This same way (omitting the type for x) cannot be used with the non-curried one:
dropWhile(List(1,2,3,4,5), x => x < 3)
<console>:9: error: missing parameter type
dropWhile(List(1,2,3,4,5), x => x < 3)
So this form must be used instead:
dropWhile(List(1,2,3,4,5), (x: Int) => x < 3)
I understand this is the case, and I know there are other questions on SO regarding this fact, but what I am trying to understand is why this is the case. What is the reason for the Scala compiler to treat these two types of partially applied functions differently when it comes to type inference?
Firstly, neither of your examples is a partially applied function. A partially applied function (not to be confused with a PartialFunction) is a function to which only some of its arguments have been applied, whereas here you supply all the arguments.
But you can easily turn the 2nd example into a partially applied (and curried) function: val a = dropWhileCurried(List(new B, new B)) _. Now you have a, which has only the first argument applied, and you need to apply the 2nd to execute it: println(a(x => true)). You can do the same with the 1st example: val a = dropWhile(List(new B, new B), _: B => Boolean).
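For instance, here is a minimal sketch of those partial applications, using Int instead of the B class above:

// Curried version: only the first argument list is supplied.
val a = dropWhileCurried(List(1, 2, 3, 4, 5)) _   // (Int => Boolean) => List[Int]
a(x => x < 3)                                     // List(3, 4, 5); x is inferred to be Int

// Non-curried version: the function argument's type has to be written out.
val b = dropWhile(List(1, 2, 3, 4, 5), _: Int => Boolean)
b(x => x < 3)                                     // List(3, 4, 5)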
Now as for the inference and why it works like that: I can only guess, but it sounds quite reasonable to me. You can think of each argument in the function as equally important, but if inference worked across a single argument list and you wrote dropWhile(List(new B, new B), _ => true), you'd assume that _ is of type B; however, dropWhile(List(new B, new B), (_: A) => true) is also possible if B extends A. In that case, if you changed the order of the arguments, the inference would change or it wouldn't work at all: dropWhile(_ => true, List(new B, new B)). And it would definitely make inference quite complicated for the compiler, as it would have to scan the definition several times.
Now if you go back to partial application and think of the call dropWhileCurried(xs)(f) as always being a partial application of xs to dropWhileCurried followed by an application of f to the result of the previous operation, it starts to sound reasonable. The compiler can infer the type once you have written dropWhileCurried(xs), because this is a partial application (even though the trailing _ is missing). So, once the type is inferred, it can continue and apply (f) to it.
This is at least how I perceive the question. There might be more reasons for this, but it should help you understand some of the background if you don't receive any better answer.

Generic programming & Rotten Bananas in Scala involving functional dependencies

So just to contextualize this for the uninitiated (not necessarily excluding myself), functors are a grade A context/mapping abstraction. In Scalanese:
trait FunctorStr[F[_]] {
  def map[A, B](f: A => B): F[A] => F[B]
}
Lots of things are functors, blah blah. Now, if you are interested in generic programming and DSL crafting as a design pattern, functors come up a lot. So, in keeping with the theme of extending intuition, let's get down to it. Halfway through comonad.com's Rotten Bananas we are introduced to the class Cata,
given in Haskellese:
class Cata f t | t -> f where
  cata :: (f a -> a) -> t -> a
Now this class is near the beginning of the fun for us the readers, but for me, the Scala implementer ...
Cata is the beginning of our trouble.
This functional dependency t -> f: does it mean "f is uniquely determined by t"?
If you asked Miles Sabin in 2011,
the fundep pattern is totally realizable in Scala, and put simply it involves initiating an implicit search via an implicit parameter section and witnessing types to resolve the search. But I can't say I get it well enough to instantly translate t -> f to Scala.
I'm seeing it in scala as something like
abstract class Cata[F[_], T](implicit e: ???) {
  def cata[A]: (F[A] => A) => T => A
}

trait CataFunctor[F[_]] extends FunctorStr[({type l[x] = Cata[F, x]})#l] {
  def map[A, B](f: A => B): Cata[F, A] => Cata[F, B]
}
Quoting the article:
given cata and fmap one can go through and build up a whole host of
other recursion schemes, paramorphisms, zygomorphisms, histomorphisms,
generalized catamorphisms, ...; the menagerie is quite forbidding and
these can be used to tear apart covariant functors with reckless
abandon. With the power of a paramorphism you rederive the notion of
general recursion, and so you can basically write any recursive
function you want. (On the coalgebra side of the house there are
anamorphisms, apomorphisms, and all sorts of other beasts for
effectively generating covariant functors)
This is the power I seek.
I'm trying to work this out and would really like some help. Now, the InvariantFunctor is implemented in scalaz, so I know that this isn't a fool's errand.
Can I get a nudge in the right direction here? I'm down for as much detail as possible so like, go nuts.
Reading the linked post, it looks like you need:
trait CataDep[T, F[_]]

abstract class Cata[F[_], T](implicit e: CataDep[T, F]) {
  def cata[A]: (F[A] => A) => T => A
}
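As a hedged illustration of how the implicit witnesses the dependency, here is one possible instance (entirely made up for the example, not taken from the article): treat the natural numbers (Int) as the recursive type whose base functor is Option, so cata becomes a fold over the naturals.

// The implicit CataDep[Int, Option] plays the role of the fundep "Int -> Option".
implicit object natDep extends CataDep[Int, Option]

val natCata: Cata[Option, Int] = new Cata[Option, Int] {
  def cata[A]: (Option[A] => A) => Int => A = alg => {
    def go(n: Int): A = if (n <= 0) alg(None) else alg(Some(go(n - 1)))
    go _
  }
}

// Folding 5 with an algebra that adds 2 per "successor" computes 2 * 5.
val double: Int => Int = natCata.cata[Int] {
  case None      => 0
  case Some(acc) => acc + 2
}
double(5)   // 10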

Scala Functor and Monad differences

Can someone please explain the differences between Functor and Monad in the Scala context?
Scala itself really does not emphasize the Functor and Monad terms that much. I guess using map is the functor side, using flatMap is the Monad side.
For me, looking and playing around with scalaz has been so far the best avenue to get a sense of those functional concepts in the Scala context (versus the Haskell context). Two years ago, when I started Scala, the scalaz code was gibberish to me; then a few months ago I started looking again and realized that it's really a clean implementation of that particular style of functional programming.
For instance the Monad implementation shows that a monad is a pointed functor because it extends the Pointed trait (as well as the Applicative trait). I invite you to go look at the code. It has linking in the source itself and it's really easy to follow the links.
So functors are more general. Monads provide additional features. To get a sense of what you can do when you have a functor or when you have a monad, you can look at MA.
You'll see utility methods that need an implicit functor (in particular applicative functors), such as sequence, and sometimes methods that need a full monad, such as replicateM.
Taking scalaz as the reference point, a type F[_] (that is, a type F which is parameterized by some single type) is a functor if a function can be lifted into it. What does this mean:
class Function1W[A, B](self: A => B) {
  def lift[F[_]: Functor]: F[A] => F[B]
}
That is, if I have a function A => B and a functor F[_], then I now have a function F[A] => F[B]. This is really just the reverse way of looking at Scala's map method, which (ignoring the CanBuildFrom stuff) is basically:
F[A] => (A => B) => F[B]
If I have a List of Strings and a function from String to Int, then I can obviously produce a List of Ints. This goes for Option, Stream, etc. They are all functors.
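For instance (an obvious sketch, nothing library-specific):

val strings = List("a", "bb", "ccc")
val length: String => Int = _.length
strings.map(length)           // List(1, 2, 3)
Option("scala").map(length)   // Some(5)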
What I find interesting about this is that you might immediately jump to the (incorrect) conclusion that a Functor is a "container" of As. This is an unnecessary restriction. For example, think about a function X => A. If I have a function X => A and a function A => B then clearly, by composition, I have a function X => B. But now, look at it this way:
type F[Y] = X => Y                     // F is fixed in X
(X => A) andThen (A => B)  is  X => B
//  F[A]             A => B    F[B]
So the type X => A for some fixed X is also a functor. In scalaz, functor is designed as a trait as follows:
trait Functor[F[_]] { def fmap[A, B](fa: F[A], f: A => B): F[B] }
hence the Function1W.lift method above is implemented as
def lift[F[_]: Functor]: F[A] => F[B] =
  (fa: F[A]) => implicitly[Functor[F]].fmap(fa, self)
A couple of functor instances:
implicit val OptionFunctor = new Functor[Option] {
  def fmap[A, B](fa: Option[A], f: A => B): Option[B] = fa map f
}

implicit def Functor1Functor[X] = new Functor[({type l[a] = X => a})#l] {
  def fmap[A, B](fa: X => A, f: A => B): X => B = f compose fa
}
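A quick check of the instances above (OptionFunctor and Functor1Functor are the names used in this answer, not scalaz's actual instances):

OptionFunctor.fmap(Option("abc"), (s: String) => s.length)   // Some(3)

val len: Unit => Int = Functor1Functor[Unit].fmap((_: Unit) => "abc", (s: String) => s.length)
len(())                                                      // 3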
In scalaz, a monad is designed like this:
trait Monad[M[_]] {
  def pure[A](a: A): M[A]                       // given a value, you can lift it into the monad
  def bind[A, B](ma: M[A], f: A => M[B]): M[B]  // sequence ma with a function that itself returns M[B]
}
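As a sketch, an Option instance of that trait could look like this (illustrative only, not scalaz's actual instance):

implicit val OptionMonad: Monad[Option] = new Monad[Option] {
  def pure[A](a: A): Option[A] = Some(a)
  def bind[A, B](ma: Option[A], f: A => Option[B]): Option[B] = ma flatMap f
}

OptionMonad.bind(OptionMonad.pure(21), (n: Int) => Some(n * 2))   // Some(42)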
It is not particularly obvious how useful this might be. It turns out that the answer is "very". I found Daniel Spiewak's Monads Are Not Metaphors extremely clear in describing why, and Tony Morris's material on configuration via the reader monad is a good practical example of what might be meant by writing your program inside a monad.
A while ago I wrote about that: http://gabrielsw.blogspot.com/2011/08/functors-applicative-functors-and.html (I'm no expert though)
The first thing to understand is the type T[X]: it's a kind of "context" (it's useful to encode things in types, and with this you're "composing" them). But see the other answers :)
Ok, now you have your types inside a context, say M[A] (A "inside" M), and you have a plain function f: A => B ... you can't just go ahead and apply it, because the function expects A and you have M[A]. You need some way to "unpack" the content of M, apply the function and "pack" it again. If you have "intimate" knowledge of the internals of M you can do it; if you generalize it in a trait, you end up with
trait Functor[T[_]] {
  def fmap[A, B](f: A => B)(ta: T[A]): T[B]
}
And that's exactly what a functor is. It transforms a T[A] into a T[B] by applying the function f.
A Monad is a mythical creature with elusive understanding and multiple metaphors, but I found it pretty easy to understand once you get the applicative functor:
Functors allow us to apply functions to things in a context. But what if the functions we want to apply are already in a context? (And it's pretty easy to end up in that situation if you have functions that take more than one parameter.)
Now we need something like a Functor but that also takes functions already in the context and applies them to elements in the context. And that's what the applicative functor is. Here is the signature:
trait Applicative[T[_]] extends Functor[T] {
  def pure[A](a: A): T[A]
  def <*>[A, B](tf: T[A => B])(ta: T[A]): T[B]
}
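As a hedged sketch, an Option instance of the traits above (using the fmap/pure/<*> signatures as written here, not a real library's) and a two-argument use of it:

implicit val OptionApplicative: Applicative[Option] = new Applicative[Option] {
  def fmap[A, B](f: A => B)(ta: Option[A]): Option[B] = ta map f
  def pure[A](a: A): Option[A] = Some(a)
  def <*>[A, B](tf: Option[A => B])(ta: Option[A]): Option[B] =
    for { f <- tf; a <- ta } yield f(a)
}

// Applying a curried two-argument function to two values in the Option context:
val add: Int => Int => Int = a => b => a + b
OptionApplicative.<*>(OptionApplicative.fmap(add)(Option(1)))(Option(2))   // Some(3)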
So far so good.
Now come the monads: what if you now have a function that puts things into the context? Its signature will be g: X => M[X] ... you can't use a functor, because it expects X => Y, so we'd end up with M[M[X]]; you can't use the applicative functor, because it expects the function to already be in the context, M[X => Y].
So we use a monad, which takes a function X => M[X] and something already in the context, M[A], and applies the function to what's inside the context, packing the result into only one layer of context. The signature is:
trait Monad[M[_]] extends Applicative[M] {
  def >>=[A, B](ma: M[A])(f: A => M[B]): M[B]
}
It can be pretty abstract, but if you think about how you work with Option, it shows you how to compose functions X => Option[X].
EDIT: Forgot the important thing that ties it together: the >>= symbol is called bind and is flatMap in Scala. (Also, as a side note, there are some laws that functors, applicatives, and monads have to follow to work properly.)
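A tiny sketch of that composition with Option, just chaining flatMap (the Scala spelling of >>=); half and sqrt are made-up example functions:

// Two functions that put their results in the Option context:
def half(x: Int): Option[Int]    = if (x % 2 == 0) Some(x / 2) else None
def sqrt(x: Int): Option[Double] = if (x >= 0) Some(math.sqrt(x)) else None

// Composing them with flatMap:
def halfThenSqrt(x: Int): Option[Double] = half(x).flatMap(sqrt)

halfThenSqrt(8)   // Some(2.0)
halfThenSqrt(7)   // None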
The best article laying out those two notions in detail is "The Essence of the Iterator Pattern" from Eric Torreborre's blog.
Functor
trait Functor[F[_]] {
  def fmap[A, B](f: A => B): F[A] => F[B]
}
One way of interpreting a Functor is to describe it as a computation of values of type A.
For example:
List[A] is a computation returning several values of type A (non-deterministic computation),
Option[A] is for computations that you may or may not have,
Future[A] is a computation of a value of type A that you will get later, and so on.
Another way of picturing it is as some kind of "container" for values of type A.
It is the basic layer from which you define:
PointedFunctor (to create a value of type F[A]),
Applic (to provide a method applic, taking a computed value inside the container F, i.e. F[A => B], and applying it to a value F[A]),
Applicative Functor (the aggregation of an Applic and a PointedFunctor).
All three elements are used to define a Monad.