Why does Scalaz use complex symbols and no in-code documentation?

I'm sometimes looking at Scalaz and find it pretty hard to understand for a beginner Scala programmer.
implicit def KleisliCategory[M[_]: Monad]: Category[({type λ[α, β]=Kleisli[M, α, β]})#λ] = new Category[({type λ[α, β]=Kleisli[M, α, β]})#λ] {
  def id[A] = ☆(_ η)
  def compose[X, Y, Z](f: Kleisli[M, Y, Z], g: Kleisli[M, X, Y]) = f <=< g
}

implicit def CokleisliCategory[M[_]: Comonad]: Category[({type λ[α, β]=Cokleisli[M, α, β]})#λ] = new Category[({type λ[α, β]=Cokleisli[M, α, β]})#λ] {
  def id[A] = ★(_ copure)
  def compose[X, Y, Z](f: Cokleisli[M, Y, Z], g: Cokleisli[M, X, Y]) = f =<= g
}
Scalaz methods may seem obvious for experienced functional programmers, but for anyone else it's hard to understand.
Why is there so little documentation in the Scalaz code?
Why do they use so many operators that are unreadable for most people?
I don't even know how to type ★ or ☆ without copy/pasting. And it's just an example because there are many.
Some people say they found Scalaz unreadable at the beginning, but that two years later they find it great.
I wonder where to start with Scalaz. Validation seems the easiest part, but what comes after that?

I agree that Scalaz is mostly undocumented. The problem is that it collects a lot of advanced concepts from Haskell (and the underlying mathematics), and documenting them all in detail would amount to writing a whole book about functional programming (and mathematics). So I believe Scalaz's approach is:
If you know and need some concept from functional programming prepared for Scala, you'll most likely find it here.
If you don't know it, you'll have to learn it elsewhere.
Let's have a look at your example: if you know Kleisli categories and how every monad gives rise to one, the definition is quite self-contained. If you don't, then KleisliCategory has no use for you anyway.
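For what it's worth, the idea itself fits in a few lines. Here is a self-contained sketch (mine, not the Scalaz code) of Kleisli composition specialized to the Option monad, which is what the definition above says in general form:

// A Kleisli arrow for Option: a function A => Option[B].
final case class Kleisli[A, B](run: A => Option[B]) {
  // Composition via flatMap; this is what <=< does in the snippet above.
  def <=<[C](g: Kleisli[C, A]): Kleisli[C, B] =
    Kleisli(c => g.run(c).flatMap(run))
}

// The identity arrow lifts a value with the monad's pure (here, Some).
def id[A]: Kleisli[A, A] = Kleisli(a => Some(a))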
(In my experience, Haskell is better for learning advanced concepts from functional programming. While Scala is far better than Java, it still drags around Java's OO/imperative heritage that clutters things a bit.)
As for the Unicode symbols: looking at the sources, it seems they are used only as syntactic sugar, or at least they have non-Unicode counterparts:
def ☆[M[_], A, B](f: A => M[B]): Kleisli[M, A, B] = kleisli(f)
def η[F[_]](implicit p: Pure[F]): F[A] = pure
def cokleisli[W[_], A, B](f: W[A] => B): Cokleisli[W, A, B] = ★(f)
So you can go without them, if you want.
Still, I'm not sure whether having them in Scalaz is a good idea. It could make the code impossible to read for someone who lacks the proper fonts. I'd prefer pure ASCII symbols.

A place to start is to read Learn You a Haskell which covers many concepts.
Watch Chris Marshall's (#oxbow_lakes) scalaz talks: http://skillsmatter.com/expert/scala/chris-marshall
Get a copy of Functional Programming in Scala from Manning written by some of the authors of scalaz.
I have a couple of small examples on my blog http://www.casualmiracles.com/blog/
I would say there are even easier places to start with Scalaz than Validation, such as the various enrichments on Option, like ~foo, which returns the value contained in the option or the 'zero' for the option's type (0 for numbers, the empty string for String, etc.).
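For illustration, here is a minimal sketch of how such an enrichment could be defined (the names are mine, not the actual Scalaz definitions):

// A tiny Zero type class standing in for Scalaz's notion of a zero value.
trait Zero[A] { def zero: A }

implicit val intZero: Zero[Int] = new Zero[Int] { def zero = 0 }
implicit val stringZero: Zero[String] = new Zero[String] { def zero = "" }

// The enrichment: ~opt yields the contained value, or the type's zero.
implicit class OptionZeroOps[A](val opt: Option[A]) {
  def unary_~(implicit z: Zero[A]): A = opt.getOrElse(z.zero)
}

// ~Some(3)                == 3
// ~(None: Option[Int])    == 0
// ~(None: Option[String]) == ""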
I forgot about a very detailed series of articles called Learning Scalaz at http://eed3si9n.com/

As with any open-source project, the only really true and acceptable answer to "Why isn't there better documentation?" is "Because no one has written it yet. You are free to volunteer."
(I honestly have no idea whether this answer will result in upvotes or downvotes. Interesting experiment.)

Related

Interpreting the `Reader` trait in "Simplicitly"

In "Simplicitly: foundations and applications of implicit function types", Odersky et al. briefly introduce the Reader monad, just to replace it with a superior alternative one paragraph later. This is what the given definition looks like:
trait Reader[R, A] {
  def ask: R
  def local[A](f: R => R)(a: A): A
}
Thinking that I roughly understand the idea behind Reader, I've repeatedly glanced over the definition, without actually reading it. But now, while re-reading this paper, I'm just staring at it, struggling to understand what it's trying to tell me. The [A] after local, as well as the seemingly completely unconstrained A in Reader[R, A] look odd.
Here is what I looked at while trying to figure out what it should have meant:
The def local in object Reader in Cats seems perfectly clear: it just precomposes an f: R => R onto a Reader[R, A]; nothing surprising there (see the sketch after this list).
The Ask in Cats-MTL has a superficially similar shape, but it's an MTL-style trait describing the capability of a monad F[_]. In the paper, there is no F[_] in the first place.
Similarly for Local: in Cats-MTL, it's all about the capabilities of the F[_]; also, there is no additional [A].
There is the Haskell MonadReader typeclass, whose ask and local signatures look very similar, but again, it's describing the capabilities of a monad m.
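For comparison, here is what a minimal Reader in the Cats style looks like (a self-contained sketch, not the actual Cats code), where local is just precomposition:

// A bare-bones Reader: a computation that reads an R to produce an A.
final case class Reader[R, A](run: R => A) {
  // local adapts the environment by precomposing f before running.
  def local(f: R => R): Reader[R, A] = Reader(run compose f)
  def map[B](f: A => B): Reader[R, B] = Reader(run andThen f)
}

// ask returns the environment itself.
def ask[R]: Reader[R, R] = Reader(identity)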
It looks as if the Reader from the paper combined the definitions of MTL Ask and Local, but then removed any mention of the monad that's being described. It's like "MTL", but without the "M" in it - not sure how to interpret it.
Is it a typo?
If it is one: does my hypothesis about how this typo occurred seem plausible?
Are there other closely related definitions that would help make sense of this one?
There is a "monad instance omitted"-remark right before the code block. Is there some kind of convention about "omitting" all the M[_]s and F[_]s in "the obvious" positions?

Application of understanding monad in real world

I'm studying functional programming in Scala and I've learned the term monad. In short, a monad is:
trait M[A] {
  def flatMap[B](f: A => M[B]): M[B]
}

def unit[A](x: A): M[A]
I know a monad is just a concept based on the above two rules, and we can meet many monads in the real world, such as List, Future, and so on.
The one thing I don't understand is: why should we learn the term "monad", as opposed to just understanding the List API, the Future API, or any other API? Does understanding monads help us write better code or design better functional code structures?
Thanks
Because Monad is already an established term in category theory. There are also three very important monad laws that a Monad has to adhere to.
In theory, we could call Monads whatever we'd like, e.g. "FlatMappable" or "Bindable", but the name "Monad" is already an established term in the functional programming community and is deeply linked to the monad laws.
As to why you should learn to appreciate Monads over learning each API individually, it's all about abstraction and reuse of knowledge. Oftentimes when we look at a new concept, we compare it to concepts we already know.
If you already understand the Future Monad, understanding the Task Monad will be much easier.
It's also good to mention that for-comprehensions in Scala work exclusively on Monads. In fact, for-comprehensions are just syntactic sugar for flatMap and map (there's also filter, but that's not incredibly relevant to Monads). So recognizing that something is a Monad instance enables you to utilize this extra piece of syntactic sugar.
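To illustrate the desugaring with Option:

// These two expressions are equivalent; the compiler rewrites the
// for-comprehension into nested flatMap/map calls.
val viaFor: Option[Int] =
  for {
    x <- Some(2)
    y <- Some(3)
  } yield x * y

val desugared: Option[Int] =
  Some(2).flatMap(x => Some(3).map(y => x * y))

// Both evaluate to Some(6).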
Also once you fully grasp the abstraction you can make use of concepts like Monad transformers, where the actual type of the Monad is less important.
Lastly, here are the monad laws for completeness' sake:
Left identity: M[F].pure(x).flatMap(f) == f(x)
Right identity: m.flatMap(pure(_)) == m
Associativity: m.flatMap(f).flatMap(g) == m.flatMap(x => f(x).flatMap(g))
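To make the laws concrete, here is a quick check with Option, where pure corresponds to Some:

val f: Int => Option[Int] = x => Some(x + 1)
val g: Int => Option[Int] = x => Some(x * 2)
val m: Option[Int] = Some(3)

assert(Some(3).flatMap(f) == f(3))                                 // left identity
assert(m.flatMap(Some(_)) == m)                                    // right identity
assert(m.flatMap(f).flatMap(g) == m.flatMap(x => f(x).flatMap(g))) // associativity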
About Monad API vs concrete APIs:
An example could be the Free Monad pattern. It essentially uses (at least) two monads: the first one wraps your DSL's expressions, and the second one is the effect monad, that is, the modality that you interpret your expressions in (Option corresponds to something that could fail, Future also adds latency, etc.).
To be more concrete: consider a task where you have some latency, and you decide to use Futures. How will you unit test that? Return some futures and then use Await? Apart from adding unnecessary complexity, you can run into problems with that, and you won't actually need Futures for some tests. The answer is to parametrize the methods that are supposed to use Futures over a Monad, so you can just use the Identity monad, or Option, and forget about the aforementioned problem.
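A sketch of that idea (the names are illustrative, using a minimal hand-rolled Monad type class rather than any particular library):

// A minimal Monad type class.
trait Monad[F[_]] {
  def pure[A](a: A): F[A]
  def flatMap[A, B](fa: F[A])(f: A => F[B]): F[B]
}

// The identity "effect": no wrapping at all.
type Id[A] = A

implicit val idMonad: Monad[Id] = new Monad[Id] {
  def pure[A](a: A): Id[A] = a
  def flatMap[A, B](fa: Id[A])(f: A => Id[B]): Id[B] = f(fa)
}

// Business logic written against Monad rather than Future directly.
def fetchAndDouble[F[_]](fetch: () => F[Int])(implicit M: Monad[F]): F[Int] =
  M.flatMap(fetch())(n => M.pure(n * 2))

// In a unit test: no Await, no latency, just plain values.
val result: Int = fetchAndDouble[Id](() => 21) // == 42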

Understanding GenericTraversableTemplate and other Scala collection internals

I was exchanging emails with an acquaintance who is a big Kotlin, Clojure, and Java 8 fan, and I asked him why not Scala. He provided many reasons (Scala is too academic, has too many features; it's not the first time I hear this, and I think it is very subjective),
but his biggest pain point was, as an example, that he doesn't like a language where he can't understand the implementation of basic data structures, and he gave LinkedList as an example.
I took a look at scala.collection.LinkedList and counted the things I either understand or somewhat understand.
CanBuildFrom - after some effort, I get it: type classes, not the longest suicide note in history [1]
LinkedListLike - I can't remember where I read it, but I got convinced this is there for a good reason
But then I started to stare at these
GenericTraversableTemplate - now I'm scratching my head as well...
SeqFactory, GenericCompanion - OK, now you lost me, I start to understand his point
Can someone who understands this well please explain GenericTraversableTemplate, SeqFactory, and GenericCompanion in the context of LinkedList? What are they for, and what impact do they have on the end user? (I'm sure they are there for a good reason; what is that reason?)
Are they there for a practical reason, or is it a level of abstraction that could have been simplified?
I like Scala collections because I don't have to understand the internals to be able to use them effectively. I don't mind a complex implementation if it helps me to keep my usage simpler; e.g. I don't mind paying the price of a complex library if I get the ability to write cleaner, more elegant code using it in return. But it would sure be nice to understand it better.
[1] - Is the Scala 2.8 collections library a case of "the longest suicide note in history"?
I will try to describe the concepts from the point of view of a random pedestrian (I've never contributed a single line to the Scala collection library, so don't hit me too hard if I'm wrong).
Since LinkedList is now deprecated, and because Maps provide a better example, I will use TreeMap as example.
CanBuildFrom
The motivation is this: If we take a TreeMap[Int, Int] and map it with
case (x, y) => (2 * x, y * y * 0.3d)
we get TreeMap[Int, Double]. This type safety alone would already explain the necessity for
simple genericBuilder[X] constructs.
However, if we map it with
case (x, y) => x
we obtain an Iterable[Int] (more precisely, a List[Int]); this is no longer a Map: the type of the container has changed. This is where CBF's come into play:
CanBuildFrom[This, X, That]
can be seen as a kind of "type-level function" that tells us: if we map a collection of type
This with a function that returns values of type X, we can build a That. The most specific
CBF is provided at compile time, in the first case it will be something like
CanBuildFrom[TreeMap[_,_], (X,Y), TreeMap[X,Y]]
in the second case it will be something like
CanBuildFrom[TreeMap[_,_], X, Iterable[X]]
and so we always get the right type of the container. The pattern is pretty general.
Every time you have a generic function
foo[X1, ..., Xn](x1: X1, ..., xn: Xn): Y
where the result type Y depends on X1, ..., Xn, you can introduce an implicit parameter as
follows:
foo[X1, ..., Xn, Y](x1: X1, ..., xn: Xn)(implicit cff: CanFooFrom[X1, ..., Xn, Y]): Y
and then define the type-level function X1, ..., Xn -> Y piecewise by providing multiple
implicit CanFooFrom's.
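A toy instance of the pattern (hypothetical names, nothing from the standard library):

// The "type-level function" (A, B) -> Out, defined piecewise via implicits.
trait CanConcatFrom[A, B, Out] { def concat(a: A, b: B): Out }

object CanConcatFrom {
  implicit val strings: CanConcatFrom[String, String, String] =
    new CanConcatFrom[String, String, String] {
      def concat(a: String, b: String) = a + b
    }
  implicit def lists[T]: CanConcatFrom[List[T], List[T], List[T]] =
    new CanConcatFrom[List[T], List[T], List[T]] {
      def concat(a: List[T], b: List[T]) = a ++ b
    }
}

// The result type Out is chosen by whichever implicit instance applies:
// concat("foo", "bar") is a String, concat(List(1), List(2)) a List[Int].
def concat[A, B, Out](a: A, b: B)(implicit c: CanConcatFrom[A, B, Out]): Out =
  c.concat(a, b)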
LinkedListLike
In the class definition, we see something like this:
TreeMap[A, B] extends SortedMap[A, B] with SortedMapLike[A, B, TreeMap[A, B]]
This is Scala's way to express the so-called F-bounded polymorphism.
The motivation is as follows: suppose we have a dozen (or at least two...) implementations of the trait SortedMap[A, B]. Now we want to implement a method withoutHead; it could look somewhat like this:
def withoutHead = this.remove(this.head)
If we move the implementation into SortedMap[A, B] itself, the best we can do is this:
def withoutHead: SortedMap[A, B] = this.remove(this.head)
But is this the most specific result type we can get? No, that's too vague.
We would like to return TreeMap[A, B] if the original map is a TreeMap, and
CrazySortedLinkedHashMap (or whatever...) if the original was a CrazySortedLinkedHashMap.
This is why we move the implementation into SortedMapLike, and give the following signature to the withoutHead method:
trait SortedMapLike[A, B, Repr <: SortedMap[A, B]] {
  ...
  def withoutHead: Repr = this.remove(this.head)
}
Now, because TreeMap[A, B] extends SortedMapLike[A, B, TreeMap[A, B]], the result type of
withoutHead is TreeMap[A, B]. The same holds for CrazySortedLinkedHashMap: we get the exact type back. In Java, you would either have to return SortedMap[A, B] or override the method in each subclass (which turned out to be a maintenance nightmare for the feature-rich traits in Scala).
GenericTraversableTemplate
The type is: GenericTraversableTemplate[+A, +CC[X] <: GenTraversable[X]]
As far as I can tell, this is just a trait that provides implementations of
methods that somehow return regular collections with same container type but
possibly different content type (stuff like flatten, transpose, unzip).
Stuff like foldLeft, reduce, exists are not here because these methods care only about content type, not container type.
Stuff like flatMap is not here, because the container type can change (again, CBF's).
Why is it a separate trait? Is there a fundamental reason why it exists?
I don't think so... It would probably have been possible to group the godzillion of methods somewhat differently. But this is just what happens naturally: you start to implement a trait, and it turns out that it has very many methods. So instead you group loosely related methods and put them into ten different traits with awkward names like "GenTraversableTemplate", and then mix them all into the traits/classes where you need them...
GenericCompanion
This is just an abstract class that implements some basic functionality which is common
for companion objects of most collection classes (essentially, it just implements very
simple factory methods apply(varargs) and empty).
For example, there is a method apply that takes varargs of some type A and returns a collection of type CC[A]:
Array(1, 2, 3, 4) // calls Array.apply[A](elems: A*) on the companion object
List(1, 2, 3, 4) // same for List
The implementation is very simple, it's something like this:
def apply[A](varargs: A*): CC[A] = {
  val builder = newBuilder[A]
  for (arg <- varargs) builder += arg
  builder.result()
}
This is obviously the same for Arrays and Lists and TreeMaps and almost everything else, except for 'constrained irregular collections' like BitSet. So this is just common functionality in a common ancestor class of most companion objects. Nothing special about that.
SeqFactory
Similar to GenericCompanion, but this time more specifically for Sequences.
Adds some common factory methods like fill(), iterate(), tabulate(), etc.
Again, nothing particularly rocket-scientific here...
A few general remarks
In general, I don't think that one should attempt to understand every single trait in this library. Rather, one should try to look at the library as a whole. As a whole, it has a very interesting architecture. And in my personal opinion, it's actually a very aesthetic piece of software, but one has to stare at it for quite a while (and try to re-implement the whole architectural pattern several times) to grasp it.

On the other hand: for example, CBF's are the kind of "design pattern" that clearly should be eliminated in successors of this language. The whole story with the scope of implicit CBF's still seems like a total nightmare to me. But many things seemed completely inscrutable at first, and almost always, it ended with an epiphany (which is very specific to Scala: for the majority of other languages, such struggles usually end with the thought "the author of this is a complete idiot").

Are there any documented anti-patterns for functional programming? [closed]

Next month I'm going to work on a new R&D project that will adopt a functional programming language (I voted for Haskell, but right now F# has more consensus). Now, I've played with such languages for a while and developed a few command-line tools with them, but this is a much bigger project, and I'm trying to improve my functional programming knowledge and technique. I've also read a lot on the topic, but I can't find any books or resources that document anti-patterns in the functional programming world.
Now, learning about anti-patterns means learning about other smart people's failures: in OOP I know a few of them, and I'm experienced enough to choose wisely when something that is generally an anti-pattern perfectly fits my needs. But I can make that choice because I know the lessons learned by other smart guys.
Thus, my question is: are there any documented anti-patterns in functional programming? So far, all of my colleagues have told me that they do not know of any, but they can't state why.
If yes, please include one single link to an authoritative source (a catalogue, an essay, a book or equivalent).
If no, please support your answer by a proper theorem.
Please don't turn this question into a list: it is a boolean question that just requires a proof to evaluate the answer. For example, if you are Oleg Kiselyov, "Yes" is enough, since everybody will be able to find your essay on the topic. Still, please be generous.
Note that I am looking for formal anti-patterns, not simple bad habits or bad practices.
From the linked Wikipedia article on Anti-Patterns:
... there must be at least two key elements present to formally distinguish an actual anti-pattern from a simple bad habit, bad practice, or bad idea:
some repeated pattern of action, process or structure that initially appears to be beneficial, but ultimately produces more bad consequences than beneficial results, and
an alternative solution exists that is clearly documented, proven in actual practice and repeatable.
Moreover by "documented" I mean something from authoritative authors or well known sources.
The languages that I'm used to are:
Haskell (where I'm really starting to think that if code compiles, it works!)
Scala
F#
but I can also adapt knowledge about anti-patterns documented in other functional languages.
I searched a lot in the web, but all the resources I've found are either related to OOP or to function layout (define variable at the beginning of the function, and the like...).
The only anti-pattern I've seen is over-monadization, and since monads can be incredibly useful this falls somewhere in between a bad practice and an anti-pattern.
Suppose you have some property P that you want to be true of some of your objects. You could decorate your objects with a P monad (here in Scala; use :paste in the REPL to get the object and its companion to stick together):
class P[A](val value: A) {
  def flatMap[B](f: A => P[B]): P[B] = f(value) // AKA bind, >>=
  def map[B](f: A => B) = flatMap(f andThen P.pure) // (to keep `for` happy)
}

object P {
  def pure[A](a: A) = new P(a) // AKA unit, return
}
Okay, so far so good; we cheated a little bit by making value a val rather than making this a comonad (if that's what we wanted), but we now have a handy wrapper in which we can wrap anything. Now let's suppose we also have properties Q and R.
class Q[A](val value: A) {
  def flatMap[B](f: A => Q[B]): Q[B] = f(value)
  def map[B](f: A => B) = flatMap(f andThen Q.pure)
}

object Q {
  def pure[A](a: A) = new Q(a)
}

class R[A](val value: A) {
  def flatMap[B](f: A => R[B]): R[B] = f(value)
  def map[B](f: A => B) = flatMap(f andThen R.pure)
}

object R {
  def pure[A](a: A) = new R(a)
}
So we decorate our object:
class Foo { override def toString = "foo" }
val bippy = R.pure( Q.pure( P.pure( new Foo ) ) )
Now we are suddenly faced with a host of problems. If we have a method that requires property Q, how do we get to it?
def bar(qf: Q[Foo]) = qf.value.toString + "bar"
Well, clearly bar(bippy) isn't going to work. There are traverse or swap operations that effectively flip monads, so we could, if we'd defined swap in an appropriate way, do something like
bippy.map(_.swap).map(_.map(bar))
to get our string back (actually, an R[P[String]]). But we've now committed ourselves to doing something like this for every method that we call.
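For these trivial wrappers, such a swap is easy to write down (here as a standalone function rather than the method used above; for real monads this generally requires a Traverse/sequence operation):

// Flip a Q-of-P into a P-of-Q; trivial because each wrapper holds one value.
def swap[A](qp: Q[P[A]]): P[Q[A]] = P.pure(Q.pure(qp.value.value))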
This is usually the wrong thing to do. When possible, you should use some other abstraction mechanism that is equally safe. For instance, in Scala you could also create marker traits
trait X
trait Y
trait Z
val tweel = new Foo with X with Y with Z
def baz(yf: Foo with Y) = yf.toString + "baz"
baz(tweel)
Whew! So much easier. Now it is very important to point out that not everything is easier. For example, with this method if you start manipulating Foo you will have to keep track of all the decorators yourself instead of letting the monadic map/flatMap do it for you. But very often you don't need to do a bunch of in-kind manipulations, and then the deeply nested monads are an anti-pattern.
(Note: monadic nesting has a stack structure while traits have a set structure; there is no inherent reason why a compiler could not allow set-like monads, but it's not a natural construct for typical formulations of type theory. The anti-pattern is a simple consequence of the fact that deep stacks are tricky to work with. They can be somewhat easier if you implement all the Forth stack operations for your monads (or the standard set of Monad transformers in Haskell).)

What are practical uses of applicative style?

I am a Scala programmer, learning Haskell now. It's easy to find practical use cases and real world examples for OO concepts, such as decorators, strategy pattern etc. Books and interwebs are filled with it.
I came to the realization that this somehow is not the case for functional concepts. Case in point: applicatives.
I am struggling to find practical use cases for applicatives. Almost all of the tutorials and books I have come across so far provide the examples of [] and Maybe. I expected applicatives to be more applicable than that, seeing all the attention they get in the FP community.
I think I understand the conceptual basis for applicatives (maybe I am wrong), and I have waited long for my moment of enlightenment. But it doesn't seem to be happening. Never while programming, have I had a moment when I would shout with a joy, "Eureka! I can use applicative here!" (except again, for [] and Maybe).
Can someone please guide me how applicatives can be used in a day-to-day programming? How do I start spotting the pattern? Thanks!
Applicatives are great when you've got a plain old function of several variables, and you have the arguments but they're wrapped up in some kind of context. For instance, you have the plain old concatenate function (++) but you want to apply it to 2 strings which were acquired through I/O. Then the fact that IO is an applicative functor comes to the rescue:
Prelude Control.Applicative> (++) <$> getLine <*> getLine
hi
there
"hithere"
Even though you explicitly asked for non-Maybe examples, it seems like a great use case to me, so I'll give an example. You have a regular function of several variables, but you don't know if you have all the values you need (some of them may have failed to compute, yielding Nothing). So essentially because you have "partial values", you want to turn your function into a partial function, which is undefined if any of its inputs is undefined. Then
Prelude Control.Applicative> (+) <$> Just 3 <*> Just 5
Just 8
but
Prelude Control.Applicative> (+) <$> Just 3 <*> Nothing
Nothing
which is exactly what you want.
The basic idea is that you're "lifting" a regular function into a context where it can be applied to as many arguments as you like. The extra power of Applicative over just a basic Functor is that it can lift functions of arbitrary arity, whereas fmap can only lift a unary function.
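In Scala terms (since the question comes from a Scala programmer), the lifting step corresponds to an ap, Haskell's <*>, that can be chained for any arity; a minimal sketch with Option:

// Apply a wrapped function to a wrapped argument (Haskell's <*>).
def ap[A, B](ff: Option[A => B])(fa: Option[A]): Option[B] =
  ff.flatMap(f => fa.map(f))

val add: Int => Int => Int = x => y => x + y

ap(ap(Some(add))(Some(3)))(Some(5)) // Some(8)
ap(ap(Some(add))(Some(3)))(None)    // None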
Since many applicatives are also monads, I feel there's really two sides to this question.
Why would I want to use the applicative interface instead of the monadic one when both are available?
This is mostly a matter of style. Although monads have the syntactic sugar of do-notation, using applicative style frequently leads to more compact code.
In this example, we have a type Foo and we want to construct random values of this type. Using the monad instance for IO, we might write
data Foo = Foo Int Double

randomFoo = do
  x <- randomIO
  y <- randomIO
  return $ Foo x y
The applicative variant is quite a bit shorter.
randomFoo = Foo <$> randomIO <*> randomIO
Of course, we could use liftM2 to get similar brevity, however the applicative style is neater than having to rely on arity-specific lifting functions.
In practice, I mostly find myself using applicatives much in the same way as I use point-free style: to avoid naming intermediate values when an operation is more clearly expressed as a composition of other operations.
Why would I want to use an applicative that is not a monad?
Since applicatives are more restricted than monads, this means that you can extract more useful static information about them.
An example of this is applicative parsers. Whereas monadic parsers support sequential composition using (>>=) :: Monad m => m a -> (a -> m b) -> m b, applicative parsers only use (<*>) :: Applicative f => f (a -> b) -> f a -> f b. The types make the difference obvious: In monadic parsers the grammar can change depending on the input, whereas in an applicative parser the grammar is fixed.
By limiting the interface in this way, we can for example determine whether a parser will accept the empty string without running it. We can also determine the first and follow sets, which can be used for optimization, or, as I've been playing with recently, constructing parsers that support better error recovery.
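A toy sketch of that point in Scala (hypothetical types, not a real parser library):

// Because an applicative parser's structure is plain data fixed before
// any input is seen, a property like "accepts the empty string" can be
// computed without running the parser.
sealed trait Parser[A] { def acceptsEmpty: Boolean }

final case class Pure[A](a: A) extends Parser[A] {
  val acceptsEmpty = true
}
final case class Chr(c: Char) extends Parser[Char] {
  val acceptsEmpty = false
}
// Applicative composition, corresponding to <*>.
final case class Ap[A, B](pf: Parser[A => B], pa: Parser[A]) extends Parser[B] {
  val acceptsEmpty = pf.acceptsEmpty && pa.acceptsEmpty
}

// With a monadic flatMap node, the right-hand side would depend on a
// parsed value, and this static inspection would be impossible.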
I think of Functor, Applicative and Monad as design patterns.
Imagine you want to write a Future[T] class. That is, a class that holds values that are to be calculated.
In a Java mindset, you might create it like
trait Future[T] {
  def get: T
}
Where 'get' blocks until the value is available.
You might realize this, and rewrite it to take a callback:
trait Future[T] {
  def foreach(f: T => Unit): Unit
}
But then what happens if there are two uses for the future? It means you need to keep a list of callbacks. Also, what happens if a method receives a Future[Int] and needs to return a calculation based on the Int inside? Or what do you do if you have two futures and you need to calculate something based on the values they will provide?
But if you know of FP concepts, you know that instead of working directly on T, you can manipulate the Future instance.
trait Future[T] {
  def map[U](f: T => U): Future[U]
}
Now your application changes so that each time you need to work on the contained value, you just return a new Future.
Once you start down this path, you can't stop there. You realize that in order to manipulate two futures, you just need to model Future as an applicative; in order to create futures, you need a monad definition for Future; and so on.
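For instance, the "two futures" step amounts to an applicative-style map2 (a sketch on the minimal trait above; the real scala.concurrent.Future differs in details):

trait Future[T] {
  def map[U](f: T => U): Future[U]
  def zip[U](that: Future[U]): Future[(T, U)]

  // Combine two futures once both values are available.
  def map2[U, V](that: Future[U])(f: (T, U) => V): Future[V] =
    zip(that).map { case (t, u) => f(t, u) }
}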
UPDATE: As suggested by #Eric, I've written a blog post: http://www.tikalk.com/incubator/blog/functional-programming-scala-rest-us
I finally understood how applicatives can help in day-to-day programming with that presentation:
https://web.archive.org/web/20100818221025/http://applicative-errors-scala.googlecode.com/svn/artifacts/0.6/chunk-html/index.html
The author shows how applicatives can help with combining validations and handling failures.
The presentation is in Scala, but the author also provides the full code example for Haskell, Java and C#.
Warning: my answer is rather preachy/apologetic. So sue me.
Well, how often in your day-to-day Haskell programming do you create new data types? Sounds like you want to know when to make your own Applicative instance, and in all honesty unless you are rolling your own parser, you probably won't need to do it very much. Using applicative instances, on the other hand, you should learn to do frequently.
Applicative is not a "design pattern" like decorators or strategies. It is an abstraction, which makes it much more pervasive and generally useful, but much less tangible. The reason you have a hard time finding "practical uses" is because the example uses for it are almost too simple. You use decorators to put scrollbars on windows. You use strategies to unify the interface for both aggressive and defensive moves for your chess bot. But what are applicatives for? Well, they're a lot more generalized, so it's hard to say what they are for, and that's OK. Applicatives are handy as parsing combinators; the Yesod web framework uses Applicative to help set up and extract information from forms. If you look, you'll find a million and one uses for Applicative; it's all over the place. But since it's so abstract, you just need to get the feel for it in order to recognize the many places where it can help make your life easier.
I think Applicatives ease the general usage of monadic code. How many times have you had the situation where you wanted to apply a function, but the function was not monadic and the value you wanted to apply it to was monadic? For me: quite a lot of times!
Here is an example that I just wrote yesterday:
ghci> import Data.Time.Clock
ghci> import Data.Time.Calendar
ghci> getCurrentTime >>= return . toGregorian . utctDay
in comparison to this using Applicative:
ghci> import Control.Applicative
ghci> toGregorian . utctDay <$> getCurrentTime
This form looks "more natural" (at least to my eyes :)
Coming at Applicative from "Functor" it generalizes "fmap" to easily express acting on several arguments (liftA2) or a sequence of arguments (using <*>).
Coming at Applicative from "Monad" it does not let the computation depend on the value that is computed. Specifically you cannot pattern match and branch on a returned value, typically all you can do is pass it to another constructor or function.
Thus I see Applicative as sandwiched in between Functor and Monad. Recognizing when you are not branching on the values from a monadic computation is one way to see when to switch to Applicative.
Here is an example taken from the aeson package:
data Coord = Coord { x :: Double, y :: Double }

instance FromJSON Coord where
  parseJSON (Object v) =
    Coord <$>
      v .: "x" <*>
      v .: "y"
There are some ADTs like ZipList that can have applicative instances, but not monadic instances. This was a very helpful example for me when understanding the difference between applicatives and monads. Since so many applicatives are also monads, it's easy to not see the difference between the two without a concrete example like ZipList.
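A sketch of the zip-style applicative in Scala (illustrative, not a library type); note that a lawful pure must repeat its argument indefinitely, which is why the real ZipList is built on a lazy, potentially infinite structure:

// Pair elements positionally instead of taking a cross product.
final case class ZipStream[A](elems: LazyList[A]) {
  def map[B](f: A => B): ZipStream[B] = ZipStream(elems.map(f))
  def ap[B](ff: ZipStream[A => B]): ZipStream[B] =
    ZipStream(ff.elems.zip(elems).map { case (f, a) => f(a) })
}

object ZipStream {
  // pure must be an infinite repetition for the identity law to hold.
  def pure[A](a: A): ZipStream[A] = ZipStream(LazyList.continually(a))
}

// No flatMap is consistent with this zipping behavior, so ZipStream is
// an Applicative but not a Monad.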
I think it might be worthwhile to browse the sources of packages on Hackage, and see first-handedly how applicative functors and the like are used in existing Haskell code.
I described an example of practical use of the applicative functor in a discussion, which I quote below.
Note that the code examples are pseudo-code for my hypothetical language, which would hide the type classes in a conceptual form of subtyping, so if you see a method call for apply, just translate it into your type class model, e.g. <*> in Scalaz or Haskell.
If we mark elements of an array or hashmap with null or none to
indicate that their index or key is valid yet valueless, the Applicative
enables, without any boilerplate, skipping the valueless elements while
applying operations to the elements that have a value. And more
importantly, it can automatically handle any Wrapped semantics that
are unknown a priori, i.e. operations on T over
Hashmap[Wrapped[T]] (and over any level of composition, e.g. Hashmap[Wrapped[Wrapped2[T]]], because applicative is composable but monad is not).
I can already picture how it will make my code easier to
understand. I can focus on the semantics, not on all the
cruft to get me there, and my semantics will be open under extension of
Wrapped, whereas all your example code isn't.
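The composition claim can be made concrete with a small sketch: given ap for two applicatives, an ap for their composition can be derived mechanically (shown here for Option inside Option); no analogous general recipe exists for flatMap:

// ap for a single Option layer.
def apO[A, B](ff: Option[A => B])(fa: Option[A]): Option[B] =
  ff.flatMap(f => fa.map(f))

// ap for the composed Option[Option[_]] layer, built only from apO and
// map; the same mechanical recipe works for any two applicatives.
def apOO[A, B](ff: Option[Option[A => B]])(fa: Option[Option[A]]): Option[Option[B]] =
  apO(ff.map(inner => (ga: Option[A]) => apO(inner)(ga)))(fa)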
Significantly, I forgot to point out before that your prior examples
do not emulate the return value of the Applicative, which will be a
List, not a Nullable, Option, or Maybe. So even my attempts to
repair your examples were not emulating Applicative.apply.
Remember, the functionToApply is the input to Applicative.apply, so the container maintains control.
list1.apply( list2.apply( ... listN.apply( List.lift(functionToApply) ) ... ) )
Equivalently.
list1.apply( list2.apply( ... listN.map(functionToApply) ... ) )
And my proposed syntactical sugar which the compiler would translate
to the above.
funcToApply(list1, list2, ... list N)
It is useful to read that interactive discussion, because I can't copy it all here. I expect that URL not to break, given who the owner of that blog is. For example, I quote from further down the discussion.
the conflation of out-of-statement control flow with assignment is probably not desired by most programmers
Applicative.apply is for generalizing the partial application of functions to parameterized types (a.k.a. generics) at any level of nesting (composition) of the type parameter. This is all about making more generalized composition possible. The generality can't be accomplished by pulling it outside the completed evaluation (i.e. the return value) of the function, analogous to how an onion can't be peeled from the inside out.
Thus it isn't conflation; it is a new degree of freedom that is not currently available to you. Per our discussion upthread, this is why you must throw exceptions or store them in a global variable, because your language doesn't have this degree of freedom. And that is not the only application of these category-theory functors (expounded in my comment in the moderator queue).
I provided a link to an example abstracting validation in Scala, F#, and C#, which is currently stuck in the moderator queue. Compare the obnoxious C# version of the code. The reason is that the C# is not generalized. I intuitively expect that C#'s case-specific boilerplate will explode geometrically as the program grows.