This may seem really obvious to the FP cognoscenti here, but what is point free style in Scala good for? What would really sell me on the topic is an illustration that shows how point free style is significantly better in some dimension (e.g. performance, elegance, extensibility, maintainability) than code solving the same problem in non-point free style.
Quite simply, it's about being able to avoid specifying a name where none is needed. Consider a trivial example:
List("a","b","c") foreach println
In this case, foreach expects a String => Unit: a function that accepts a String and returns Unit (that is, there's no usable return value; it works purely through side effects).
There's no need to bind a name here to each String instance that's passed to println. Arguably, it just makes the code more verbose to do so:
List("a","b","c") foreach {println(_)}
Or even
List("a","b","c") foreach {s => println(s)}
Personally, when I see code that isn't written in point-free style, I take it as an indicator that the bound name may be used twice, or that it has some significance in documenting the code. Likewise, I see point-free style as a sign that I can reason about the code more simply.
One appeal of point-free style in general is that without a bunch of "points" (values as opposed to functions) floating around, which must be repeated in several places to thread them through the computation, there are fewer opportunities to make a mistake, e.g. when typing a variable's name.
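For instance, here is a small sketch contrasting the two styles (the helper names are mine, purely for illustration):

val trim: String => String  = _.trim
val lower: String => String = _.toLowerCase

// Point-free: the pipeline is built from the functions themselves.
val normalize: String => String = trim andThen lower

// Pointed equivalent: the value s is named and threaded through by hand.
val normalizePointed: String => String = s => lower(trim(s))

println(normalize("  Hello ")) // hello

Both definitions behave identically; the point-free one simply never mentions the string being transformed.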
However, the advantages of point-free are quickly counterbalanced in Scala by its meagre ability to infer types, a fact which is exacerbated by point-free code because "points" serve as clues to the type inferencer. In Haskell, with its almost-complete type inferencing, this is usually not an issue.
I see no other advantage than "elegance": it's a little bit shorter, and may be more readable. It allows you to reason about functions as entities, without mentally going a "level deeper" to function application, but of course you need to get used to it first.
I don't know of any example where performance improves by using it (it may even get worse in cases where you end up with a function object when a method would be sufficient).
Scala's point-free syntax is part of the magic behind Scala operators, which are really just methods. Even the most basic operators are methods:
For example:
val x = 1
val y = x + 1
...is the same as...
val x = 1
val y = x.+(1)
...but of course, the point-free style reads more naturally (the plus appears to be an operator rather than a method call).
Related
I have heard that it is good practice in Scala to eliminate for loops and do things "the Scala way". I even found a Scala style checker at http://www.scalastyle.org. Are for loops a no-no in Scala? In a course at https://www.udemy.com/course/apache-spark-with-scala-hands-on-with-big-data/learn/lecture/5363798#overview I found this example, which makes me think that for loops are okay to use, but with Scala format and syntax of course, in a single line and not like traditional Java for loops spread over multiple lines of code. See this example I found in that Udemy course:
val shipList = List("Enterprise", "Defiant", "Voyager", "Deep Space Nine")
for (ship <- shipList) {println(ship)}
That for loop prints this result, as expected:
Enterprise
Defiant
Voyager
Deep Space Nine
I was wondering if using for as in the example above is acceptable Scala style, or if it is a no-no and why. Thank you!
There is no problem with this for loop, but you can use the functions on List to do the same work in a more functional way.
e.g. instead of using
val shipList = List("Enterprise", "Defiant", "Voyager", "Deep Space Nine")
for (ship <- shipList) {println(ship)}
You can use
val shipList = List("Enterprise", "Defiant", "Voyager", "Deep Space Nine")
shipList.foreach(element => println(element))
or
shipList.foreach(println)
You can use for loops in Scala; there is no problem with that. But be aware that a for loop like this one is not an expression and does not return a value, so you would need a mutable variable to get a value out of it. Scala favors working with immutable values.
In your example you print messages to the console, i.e. you perform a "side effect", breaking referential transparency: you depend on an IO operation to extract a value, or you have to mutate a variable in the enclosing scope that may be accessed by another thread or another concurrent task, so there is no guarantee that the value you collect will be what you expect. Obviously, all these concerns relate to concurrent/parallel programming, and that is where Scala and the immutable style help.
To show the elements of a collection you can use a for loop, but if you want, say, to count the total number of chars, in Scala you do that with an expression like:
val chars = shipList.foldLeft(0)((a, b) => a + b.length)
To sum up, most of the time the Scala code you will read uses an immutable style of programming, though not always, since Scala supports other ways of coding too; but it is rare to find Scala written in a classic Java OOP style, mutating object instances and using getters and setters.
I am learning functional programming using Scala. In general I notice that for loops are not much used in functional programs; instead, map is used.
Questions
What are the advantages of using map over a for loop in terms of performance, readability, etc.?
What is the intention of introducing a map function when the same thing can be achieved with a loop?
Program 1: Using a for loop
val num = 1 to 1000
val another = 1000 to 2000
for (i <- num) {
  for (j <- another) {
    println((i, j))
  }
}
Program 2: Using map
val num = 1 to 1000
val another = 1000 to 2000
val mapper = num.map(x => another.map(y => (x,y))).flatten
mapper.map(x => println(x))
Both program 1 and program 2 do the same thing.
The answer is quite simple actually.
Whenever you use a loop over a collection it has a semantic purpose. Either you want to iterate over the items of the collection and print them, or you want to transform the type of the elements to another type (map), or you want to change the cardinality, such as computing the sum of the elements of a collection (fold).
Of course, all of that can also be done using for loops, but for the reader of the code it is more work to figure out which semantic purpose a loop has, compared to a well-known named operation such as map, foreach, fold, filter, ...
Another aspect is, that for loops lead to the dark side of using mutable state. How would you sum the elements of a collection in a for loop without mutable state? You would not. Instead you would need to write a recursive function. So, for good measure, it is best to drop the habit of thinking in for loops early and enjoy the brave new functional way of doing things.
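A minimal sketch of that contrast (the value names are mine, for illustration only):

val xs = List(1, 2, 3, 4)

// Imperative: the for loop forces a mutable accumulator.
var total = 0
for (x <- xs) total += x

// Functional: foldLeft expresses the same intent with no mutable state.
val total2 = xs.foldLeft(0)(_ + _)

assert(total == total2) // both are 10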
I'll start by quoting Programming in Scala.
"Every for expression can be expressed in terms of the three higher-order functions map, flatMap and filter. This section describes the translation scheme, which is also used by the Scala compiler."
http://www.artima.com/pins1ed/for-expressions-revisited.html#23.4
So the reason that you noticed for-loops are not used as much is because they technically aren't needed, and any for expressions you do see are just syntactic sugar which the compiler will translate into some equivalent. The rules for translating a for expression into a map/flatMap/filter expression are listed in the link above.
Generally speaking, in functional programming there is no index variable to mutate. This means one typically makes heavy use of function calls (often in the form of recursion) such as list folds in place of a while or for loop.
For a good example of using list folds in place of while/for loops, I recommend "Explain List Folds to Yourself" by Tony Morris.
https://vimeo.com/64673035
If a function is tail-recursive (and you can ask the compiler to verify this with the @tailrec annotation) then it can be optimized so as not to incur the heavy stack usage common in recursive functions. In this case the compiler can translate the tail-recursive function into the "while loop equivalent".
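As a hedged sketch of what such a function might look like (the name sum is mine):

import scala.annotation.tailrec

// The recursive call is in tail position, so the compiler can turn this
// into a loop; @tailrec asks it to verify that the rewrite is possible.
@tailrec
def sum(xs: List[Int], acc: Int = 0): Int = xs match {
  case Nil    => acc
  case h :: t => sum(t, acc + h)
}

println(sum(List(1, 2, 3))) // 6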
To answer the second part of Question 1, there are some cases where one could make an argument that a for expression is clearer (although certainly there are cases where the opposite is true too.) One such example is given in the Coursera.org course "Functional Programming with Scala" by Dr. Martin Odersky:
for {
  i <- 1 until n
  j <- 1 until i
  if isPrime(i + j)
} yield (i, j)
is arguably more clear than
(1 until n).flatMap(i =>
  (1 until i).withFilter(j => isPrime(i + j))
    .map(j => (i, j)))
For more information check out Dr. Martin Odersky's "Functional Programming with Scala" course on Coursera.org. Lecture 6.5 "Translation of For" in particular discusses this in more detail.
Also, as a quick side note, in your example you use
mapper.map(x => println(x))
It is generally more accepted to use foreach in this case because you have the intent of side-effecting. Also, there is short hand
mapper.foreach(println)
As for Question 2, it is better to use the map function in place of loops (especially when there is mutation in the loop) because map is a function and can be composed. Also, once one is used to map, it is very easy to reason about.
The two programs that you have provided are not the same, even if the output might suggest that they are. It is true that for comprehensions are de-sugared by the compiler, but the first program you have is actually equivalent to:
val num = 1 to 1000
val another = 1000 to 2000
num.foreach(i => another.foreach(j => println((i, j))))
It should be noted that the resultant type of the above (and your example program) is Unit.
In the case of your second program, the resultant type, as determined by the compiler, is Seq[Unit]: a Seq whose length is the product of the sizes of the two ranges. As a result, you should always use foreach to indicate an effect that results in a Unit result.
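To make the type difference concrete, here is a small sketch (assuming the standard Scala collections):

val num = 1 to 3
val another = 10 to 11

// foreach is an effect: nothing is built, the result is Unit.
val u: Unit = num.foreach(i => another.foreach(j => println((i, j))))

// map builds a collection of Units, one per pair, which is simply
// discarded if all you wanted was the printing.
val v: Seq[Seq[Unit]] = num.map(i => another.map(j => println((i, j))))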
Think about what is happening at the machine-language level. Loops are still fundamental. Functional programming abstracts the loop that is implemented in conventional programming.
Essentially, instead of writing a loop as you would in conventional or imperative programming, the use of chaining or pipelining in functional programming allows the compiler to optimize the code for the user, and map simply applies the function to each element as the list or collection is iterated. Functional programming is more convenient and abstracts away the mundane implementation of for loops, etc. There are limitations to this convenience, particularly if you intend to use functional programming for parallel processing.
It is arguable, depending on the software engineer or developer, whether the compiler will be more efficient and will know ahead of time the situation it is used in. IMHO, mid-level software engineers who are familiar with functional programming, well versed in conventional programming, and knowledgeable about parallel processing will implement both conventional and functional styles.
Why is it that many people say that using underscore is good practice in Scala and makes your code more readable? They say the motivation comes from formal language theory. Nevertheless many programmers, particularly from other languages, especially those that have anonymous functions, prefer not to use underscores particularly for placeholders.
So what is the point of the underscore? Why does Scala (and some other functional languages, as pointed out by om-nom-nom) have the underscore? And what is the formal underpinning, in terms of complexity and language theory, as to why it is often good style to use it?
Linguistics
The origin and motivation for most of the underscore uses in Scala is to allow one to construct expressions and declarations without needing to give every variable of the language a name (I mean "variable" as in Predicate Calculus, not as in programming). We use this all the time in Natural Language: for example, in this sentence I referred to a concept from the previous sentence using "this", and I referred to this sentence itself using "this", without any confusion over what I mean. In Natural Language these words are usually called "pronouns", "anaphors", or "cataphors", the referents "antecedents" or "postcedents", and the process of understanding/dereferencing them is called "anaphora".
Algorithmic Information Theory
If we had to name every 'thing' in Natural Language before we could refer to it, and similarly name every type of thing in order to quantify over it, as in Predicate Calculus and in most programming languages, then speaking would become extremely long-winded. It is thanks to context that we can infer what is meant by words like "this", "it", "that", etc.; we do it easily.
Therefore why restrict this simple, elegant and efficient means of communication to Natural Language? So it was added to Scala.
If we did attempt to name every single 'thing' or 'type of thing', sentences would become so long and complicated that they would be very difficult to understand, due to their verbosity and the introduction of redundant symbols. The more symbols you add to a sentence, the more difficult it becomes to understand; ergo this is why it's good practice, not only in Natural Language but in Scala too. In fact, one could formalize this assertion in terms of Kolmogorov Complexity and prove that a sequence of sentences adopting placeholders has lower complexity than one that unnecessarily names everything (unless the name is exactly the same in every instance, which usually doesn't make sense). Therefore we can conclusively say, contrary to some programmers' belief, that the placeholder syntax is simpler and easier to read.
The reason it meets some resistance is that if one is already a programmer, one must make an effort to retrain the brain not to name everything, just as (if they can remember) learning to code in the first place required quite an effort.
Examples
Now let's look at some specific uses more formally:
Placeholder Syntax
Means "it", "them", "that", "their" etc (i.e. pronouns), e.g. 1
lines.map(_.length)
can be read as "map lines to their length", similarly we can read lineOption.map(_.length) as "map the line to it's length". In terms of complexity theory, this is simpler than "for each 'line' in lines, take the length of 'line'" - which would be lines.map(line => line.length).
Can also be read as "the" (definite article) when used with type annotation, e.g.
(_: Int) + 1
"Add 1 to the integer"
Existential Types
Means "of some type" ("some" the pronoun), e.g
foo: Option[_]
means "foo is an Option of some type".
Higher Kinded type parameters
Again, basically means "of some type" ("some" the pronoun), e.g.
class A[K[_],T](a: K[T])
Can be read "class A takes some K of some type ..."
Pattern Match Wildcards
Means "anything" or "whatever" (pronouns), e.g.
case Foo(_) => "hello"
can be read as "for a Foo containing anything, return 'hello'", or "for a Foo containing whatever, return 'hello'"
Import Wildcards
Means "everything" (pronoun), e.g.
import foo._
can be read as "import everything from foo".
Default Values
Now I read this like "a" (indefinite article), e.g.
var wine: RedWine = _
"Give me a red wine", the waiter should give you the house red.
Other uses of underscore
The other uses of underscores are not really related to the point of this Q&A; nevertheless we briefly discuss them.
Ignored Values/Params/Extractions
Allow us to ignore things in an explicit 'pattern safe' way. E.g.
val (x, _) = getMyPoint
Says: we are not going to use the second coordinate, so there is no need to get freaky when you can't find a use for it in the code.
Import Hiding
Just a way to say "except" (preposition).
Function Application
E.g.
val f: String => Unit = println _
This is an interesting one, as it has an exact analogue in linguistics, namely nominalization: "the use of a verb, an adjective, or an adverb as the head of a noun phrase, with or without morphological transformation" - Wikipedia. More simply, it is the process of turning verbs or adjectives into nouns.
Use in special method names
Purely a syntax thing and doesn't really relate to linguistics.
I'm trying to learn Scala and I'm unable to grasp this concept: why does making an object immutable help prevent side effects in functions? Can anyone explain it like I'm five?
Interesting question, a bit difficult to answer.
Functional programming is very much about using mathematics to reason about programs. To do so, one needs a formalism that describes programs and how one can make proofs about properties they might have.
There are many models of computation that provide such formalisms, such as the lambda calculus and Turing machines. And there's a certain degree of equivalence between them (see this question for a discussion).
In a very real sense, programs with mutability and some other side effects have a direct mapping to functional programs. Consider this example:
a = 0
b = 1
a = a + b
Here are two ways of mapping it to a functional program. In the first, a and b are part of a "state", and each line is a function from a state to a new state:
state1 = (a = 0, b = ?)
state2 = (a = state1.a, b = 1)
state3 = (a = state2.a + state2.b, b = state2.b)
Here's another, where each variable is associated with a time:
(a, t0) = 0
(b, t1) = 1
(a, t2) = (a, t0) + (b, t1)
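In Scala, the first of the two mappings above could be sketched like this (State and the step names are hypothetical, just to make the idea concrete):

case class State(a: Int, b: Int)

// Each mutating line becomes a function from one immutable state to the next.
val step1: State => State = s => s.copy(a = 0)
val step2: State => State = s => s.copy(b = 1)
val step3: State => State = s => s.copy(a = s.a + s.b)

val program = step1 andThen step2 andThen step3
println(program(State(0, 0))) // State(1,1), i.e. a = 0 + 1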
So, given the above, why not use mutability?
Well, here's the interesting thing about math: the less powerful the formalism is, the easier it is to make proofs with it. Or, to put it the other way around, it is very hard to reason about programs that have mutability.
As a consequence, there has been very little advance in concepts for programming with mutability. The famous Design Patterns were not arrived at through study, nor do they have any mathematical backing. Instead, they are the result of years and years of trial and error, and some of them have since proved to be misguided. Who knows about the dozens of other "design patterns" seen everywhere?
Meanwhile, Haskell programmers came up with Functors, Monads, Co-monads, Zippers, Applicatives, Lenses... dozens of concepts with mathematical backing and, most importantly, actual patterns of how code is composed to make up programs. Things you can use to reason about your program, increase reusability and improve correctness. Take a look at the Typeclassopedia for examples.
It's no wonder people not familiar with functional programming get a bit scared with this stuff... by comparison, the rest of the programming world is still working with a few decades-old concepts. The very idea of new concepts is alien.
Unfortunately, all these patterns, all these concepts, only apply when the code they work with does not contain mutability (or other side effects). If it does, then their properties cease to be valid, and you can't rely on them. You are back to guessing, testing and debugging.
In short, if a function mutates an object then it has side effects. Mutation is a side effect. This is just true by definition.
In truth, in a purely functional language it should not matter if an object is technically mutable or immutable, because the language will never "try" to mutate an object anyway. A pure functional language doesn't give you any way to perform side effects.
Scala is not a pure functional language, though, and it runs in the Java environment in which side effects are very popular. In this environment, using objects that are incapable of mutation encourages you to use a pure functional style because it makes a side-effect oriented style impossible. You are using data types to enforce purity because the language does not do it for you.
Now I will say a bunch of other stuff in the hope that it helps this make sense to you.
Fundamental to the concept of a variable in functional languages is referential transparency.
Referential transparency means that there is no difference between a value and a reference to that value. In a language where this is true, it is much simpler to think about how a program works, since you never have to stop and ask: is this a value, or a reference to a value? Anyone who's ever programmed in C recognizes that a great part of the challenge of learning that paradigm is knowing which is which at all times.
In order to have referential transparency, the value that a reference refers to can never change.
(Warning, I'm about to make an analogy.)
Think of it this way: in your cell phone, you have saved some phone numbers of other people's cell phones. You assume that whenever you call that phone number, you will reach the person you intend to talk to. If someone else wants to talk to your friend, you give them the phone number and they reach that same person.
If someone changes their cell phone number, this system breaks down. Suddenly, you need to get their new phone number if you want to reach them. Maybe you call the same number six months later and reach a different person. Calling the same number and reaching a different person is what happens when functions perform side effects: you have what seems to be the same thing, but when you try to use it, it turns out it's different now. Even if you expected this, what about all the people you gave that number to? Are you going to call them all up and tell them that the old number doesn't reach the same person anymore?
You counted on the phone number corresponding to that person, but it didn't really. The phone number system lacks referential transparency: the number isn't really ALWAYS the same as the person.
Functional languages avoid this problem. You can give out your phone number and people will always be able to reach you, for the rest of your life, and will never reach anybody else at that number.
However, in the Java platform, things can change. What you thought was one thing, might turn into another thing a minute later. If this is the case, how can you stop it?
Scala uses the power of types to prevent this, by making classes that have referential transparency. So, even though the language as a whole isn't referentially transparent, your code will be referentially transparent as long as you use immutable types.
Practically speaking, the advantages of coding with immutable types are:
Your code is simpler to read when the reader doesn't have to look out for surprising side effects.
If you use multiple threads, you don't have to worry about locking because shared objects can never change. When you have side effects, you have to really think through the code and figure out all the places where two threads might try to change the same object at the same time, and protect against the problems that this might cause.
Theoretically, at least, the compiler can optimize some code better if it uses only immutable types. I don't know if Java can do this effectively, though, since it allows side effects. This is a toss-up at best, anyway, because there are some problems that can be solved much more efficiently by using side effects.
I'm running with the "explain like I'm five" request:
class Account(var myMoney: List[Int] = List(10, 10, 1, 1, 1, 5)) {
  def getBalance = println(myMoney.sum + " dollars available")
  def myMoneyWithInterest = {
    myMoney = myMoney.map(_ * 2)
    println(myMoney.sum + " dollars will accrue in 1 year")
  }
}
Assume we are at an ATM and it is using this code to give us account information.
You do the following:
scala> val myAccount = new Account()
myAccount: Account = Account@7f4a6c40
scala> myAccount.getBalance
28 dollars available
scala> myAccount.myMoneyWithInterest
56 dollars will accrue in 1 year
scala> myAccount.getBalance
56 dollars available
We mutated the account balance when we only wanted to check our current balance plus a year's worth of interest. Now we have an incorrect account balance. Bad news for the bank!
If we were using val instead of var to keep track of myMoney in the class definition, we would not have been able to mutate the dollars and raise our balance.
When defining the class (in the REPL) with val:
error: reassignment to val
myMoney = myMoney.map(_ * 2)
Scala is telling us that we wanted an immutable value but are trying to change it!
Thanks to Scala, we can switch to val, rewrite our myMoneyWithInterest method, and rest assured that our Account class will never alter the balance.
One important property of functional programming is: If I call the same function twice with the same arguments I'll get the same result. This makes reasoning about code much easier in many cases.
Now imagine a function returning the attribute content of some object. If that content can change, the function might return different results on different calls with the same argument => no more functional programming.
First a few definitions:
A side effect is a change in state -- also called a mutation.
An immutable object is an object which does not support mutation (side effects).
A function which is passed mutable objects (either as parameters or in the global environment) may or may not produce side effects. This is up to the implementation.
However, it is impossible for a function which is passed only immutable objects (either as parameters or in the global environment) to produce side effects. Therefore, exclusive use of immutable objects will preclude the possibility of side effects.
Nate's answer is great, and here are some examples.
In functional programming, there is an important property: when you call a function with the same argument, you always get the same return value.
This is always true for immutable objects, because you can't modify them after creating them:
class MyValue(val value: Int)
def plus(x: MyValue) = x.value + 10
val x = new MyValue(10)
val y = plus(x) // y is 20
val z = plus(x) // z is still 20, plus(x) will always yield 20
But if you have mutable objects, you can't guarantee that plus(x) will always return the same value for the same instance of MyValue.
class MyValue(var value: Int)
def plus(x: MyValue) = x.value + 10
val x = new MyValue(10)
val y = plus(x) // y is 20
x.value = 30
val z = plus(x) // z is 40; you can't know for sure what value plus(x) will return, because MyValue.value may be changed at any point.
Why do immutable objects enable functional programming?
They don't.
Take one definition of "function," or "prodecure," "routine" or "method," which I believe applies to many programming languages: "A section of code, typically named, accepting arguments and/or returning a value."
Take one definition of "functional programming:" "Programming using functions." The ability to program with functions is indepedent of whether state is modified.
For instance, Scheme is considered a functional programming language. It features tail calls, higher-order functions and aggregate operations using functions. It also has mutable objects. While mutability destroys some nice mathematical qualities, it does not necessarily prevent "functional programming."
I've read all the answers and they don't satisfy me, because they mostly talk about "immutability", and not about its relation to FP.
The main question is:
Why do immutable objects enable functional programming?
So I've searched a bit more and I have another answer. I believe the easy answer to this question is: "Because functional programming is basically defined on the basis of functions that are easy to reason about." Here's the definition of functional programming:
The process of building software by composing pure functions.
If a function is not pure -- meaning that, given the same input, it is not guaranteed to always produce the same output (e.g., if the function relies on a global object, or on the date and time, or on a random number to compute the output) -- then that function is unpredictable, that's it! Now exactly the same story goes for "immutability" as well: if objects are not immutable, a function with the same object as its input may have different results (a.k.a. side effects) each time it is used, and this will make it hard to reason about the program.
I first tried to put this in a comment, but it got longer than the limit. I'm by no means a pro, so please take this answer with a grain of salt.
I am a Scala programmer, learning Haskell now. It's easy to find practical use cases and real world examples for OO concepts, such as decorators, strategy pattern etc. Books and interwebs are filled with it.
I came to the realization that this somehow is not the case for functional concepts. Case in point: applicatives.
I am struggling to find practical use cases for applicatives. Almost all of the tutorials and books I have come across so far provide the examples of [] and Maybe. I expected applicatives to be more applicable than that, seeing all the attention they get in the FP community.
I think I understand the conceptual basis for applicatives (maybe I am wrong), and I have waited long for my moment of enlightenment. But it doesn't seem to be happening. Never while programming, have I had a moment when I would shout with a joy, "Eureka! I can use applicative here!" (except again, for [] and Maybe).
Can someone please guide me how applicatives can be used in a day-to-day programming? How do I start spotting the pattern? Thanks!
Applicatives are great when you've got a plain old function of several variables, and you have the arguments but they're wrapped up in some kind of context. For instance, you have the plain old concatenation function (++) but you want to apply it to two strings that were acquired through I/O. Then the fact that IO is an applicative functor comes to the rescue:
Prelude Control.Applicative> (++) <$> getLine <*> getLine
hi
there
"hithere"
Even though you explicitly asked for non-Maybe examples, it seems like a great use case to me, so I'll give an example. You have a regular function of several variables, but you don't know if you have all the values you need (some of them may have failed to compute, yielding Nothing). So essentially because you have "partial values", you want to turn your function into a partial function, which is undefined if any of its inputs is undefined. Then
Prelude Control.Applicative> (+) <$> Just 3 <*> Just 5
Just 8
but
Prelude Control.Applicative> (+) <$> Just 3 <*> Nothing
Nothing
which is exactly what you want.
The basic idea is that you're "lifting" a regular function into a context where it can be applied to as many arguments as you like. The extra power of Applicative over just a basic Functor is that it can lift functions of arbitrary arity, whereas fmap can only lift a unary function.
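A hedged Scala rendering of the same idea, using Option as the context (map2 is hand-rolled here; libraries such as Cats provide it):

// map lifts only unary functions; map2 plays the role of <*> for arity 2.
def map2[A, B, C](fa: Option[A], fb: Option[B])(f: (A, B) => C): Option[C] =
  fa.flatMap(a => fb.map(b => f(a, b)))

println(map2(Some(3), Some(5))(_ + _))           // Some(8)
println(map2(Some(3), Option.empty[Int])(_ + _)) // None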
Since many applicatives are also monads, I feel there's really two sides to this question.
Why would I want to use the applicative interface instead of the monadic one when both are available?
This is mostly a matter of style. Although monads have the syntactic sugar of do-notation, using applicative style frequently leads to more compact code.
In this example, we have a type Foo and we want to construct random values of this type. Using the monad instance for IO, we might write
import System.Random (randomIO)

data Foo = Foo Int Double

randomFoo :: IO Foo
randomFoo = do
  x <- randomIO
  y <- randomIO
  return $ Foo x y
The applicative variant is quite a bit shorter.
randomFoo = Foo <$> randomIO <*> randomIO
Of course, we could use liftM2 to get similar brevity, however the applicative style is neater than having to rely on arity-specific lifting functions.
In practice, I mostly find myself using applicatives much in the same way I use point-free style: to avoid naming intermediate values when an operation is more clearly expressed as a composition of other operations.
Why would I want to use an applicative that is not a monad?
Since applicatives are more restricted than monads, this means that you can extract more useful static information about them.
An example of this is applicative parsers. Whereas monadic parsers support sequential composition using (>>=) :: Monad m => m a -> (a -> m b) -> m b, applicative parsers only use (<*>) :: Applicative f => f (a -> b) -> f a -> f b. The types make the difference obvious: In monadic parsers the grammar can change depending on the input, whereas in an applicative parser the grammar is fixed.
By limiting the interface in this way, we can, for example, determine whether a parser will accept the empty string without running it. We can also determine the first and follow sets, which can be used for optimization or, as I've been playing with recently, for constructing parsers that support better error recovery.
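As a hypothetical Scala sketch of that static-analysis point (the encoding is mine, cut down to the bare minimum): because an applicative parser's grammar is fixed, a property like "accepts the empty string" can be computed without parsing anything.

sealed trait Parser[A] { def acceptsEmpty: Boolean }
case class Pure[A](a: A) extends Parser[A] { val acceptsEmpty = true }
case class Sym(c: Char) extends Parser[Char] { val acceptsEmpty = false }
case class Ap[A, B](pf: Parser[A => B], pa: Parser[A]) extends Parser[B] {
  // A sequence accepts "" only if both halves do -- known statically.
  val acceptsEmpty = pf.acceptsEmpty && pa.acceptsEmpty
}

// Parse 'a' then 'b', keeping the second character.
val ab: Parser[Char] =
  Ap(Ap(Pure((x: Char) => (y: Char) => y), Sym('a')), Sym('b'))
println(ab.acceptsEmpty) // false, determined without running the parser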
I think of Functor, Applicative and Monad as design patterns.
Imagine you want to write a Future[T] class. That is, a class that holds values that are to be calculated.
In a Java mindset, you might create it like this:
trait Future[T] {
  def get: T
}
Where 'get' blocks until the value is available.
You might realize this, and rewrite it to take a callback:
trait Future[T] {
  def foreach(f: T => Unit): Unit
}
But then what happens if there are two uses for the future? It means you need to keep a list of callbacks. Also, what happens if a method receives a Future[Int] and needs to return a calculation based on the Int inside? Or what do you do if you have two futures and you need to calculate something based on the values they will provide?
But if you know of FP concepts, you know that instead of working directly on T, you can manipulate the Future instance.
trait Future[T] {
  def map[U](f: T => U): Future[U]
}
Now your application changes so that each time you need to work on the contained value, you just return a new Future.
Once you start down this path, you can't stop there. You realize that in order to manipulate two futures you need to model Future as an applicative; in order to create futures, you need a monad definition for Future; and so on.
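As a hedged sketch of that "two futures" step using the standard library's Future (map2 is hand-rolled; this is roughly what an applicative combinator for Future looks like):

import scala.concurrent.{ExecutionContext, Future}
import ExecutionContext.Implicits.global

// Combine two independent futures with a plain binary function.
def map2[A, B, C](fa: Future[A], fb: Future[B])(f: (A, B) => C): Future[C] =
  fa.flatMap(a => fb.map(b => f(a, b)))

val total: Future[Int] = map2(Future(1), Future(2))(_ + _) // eventually 3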
UPDATE: As suggested by @Eric, I've written a blog post: http://www.tikalk.com/incubator/blog/functional-programming-scala-rest-us
I finally understood how applicatives can help in day-to-day programming thanks to this presentation:
https://web.archive.org/web/20100818221025/http://applicative-errors-scala.googlecode.com/svn/artifacts/0.6/chunk-html/index.html
The author shows how applicatives can help in combining validations and handling failures.
The presentation is in Scala, but the author also provides the full code example for Haskell, Java and C#.
Warning: my answer is rather preachy/apologetic. So sue me.
Well, how often in your day-to-day Haskell programming do you create new data types? Sounds like you want to know when to make your own Applicative instance, and in all honesty, unless you are rolling your own parser, you probably won't need to do it very much. Using applicative instances, on the other hand, is something you should learn to do frequently.
Applicative is not a "design pattern" like decorators or strategies. It is an abstraction, which makes it much more pervasive and generally useful, but much less tangible. The reason you have a hard time finding "practical uses" is because the example uses for it are almost too simple. You use decorators to put scrollbars on windows. You use strategies to unify the interface for both aggressive and defensive moves for your chess bot. But what are applicatives for? Well, they're a lot more generalized, so it's hard to say what they are for, and that's OK. Applicatives are handy as parsing combinators; the Yesod web framework uses Applicative to help set up and extract information from forms. If you look, you'll find a million and one uses for Applicative; it's all over the place. But since it's so abstract, you just need to get the feel for it in order to recognize the many places where it can help make your life easier.
I think Applicatives ease the general usage of monadic code. How many times have you had the situation where you wanted to apply a function, but the function was not monadic and the value you wanted to apply it to was monadic? For me: quite a lot of times!
Here is an example that I just wrote yesterday:
ghci> import Data.Time.Clock
ghci> import Data.Time.Calendar
ghci> getCurrentTime >>= return . toGregorian . utctDay
in comparison to this using Applicative:
ghci> import Control.Applicative
ghci> toGregorian . utctDay <$> getCurrentTime
This form looks "more natural" (at least to my eyes :)
Coming at Applicative from "Functor" it generalizes "fmap" to easily express acting on several arguments (liftA2) or a sequence of arguments (using <*>).
Coming at Applicative from "Monad" it does not let the computation depend on the value that is computed. Specifically you cannot pattern match and branch on a returned value, typically all you can do is pass it to another constructor or function.
Thus I see Applicative as sandwiched in between Functor and Monad. Recognizing when you are not branching on the values from a monadic computation is one way to see when to switch to Applicative.
Here is an example taken from the aeson package:
data Coord = Coord { x :: Double, y :: Double }
instance FromJSON Coord where
  parseJSON (Object v) =
    Coord <$>
      v .: "x" <*>
      v .: "y"
There are some ADTs like ZipList that can have applicative instances, but not monadic instances. This was a very helpful example for me when understanding the difference between applicatives and monads. Since so many applicatives are also monads, it's easy to not see the difference between the two without a concrete example like ZipList.
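For a feel of the difference, here is a small Scala sketch of ZipList-style application (the names are mine): functions are applied pointwise, rather than to all combinations as with the ordinary List instance.

def zipAp[A, B](fs: List[A => B], as: List[A]): List[B] =
  fs.zip(as).map { case (f, a) => f(a) }

println(zipAp(List[Int => Int](_ + 1, _ * 10), List(1, 2))) // List(2, 20)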
I think it might be worthwhile to browse the sources of packages on Hackage, and see first-handedly how applicative functors and the like are used in existing Haskell code.
I described an example of practical use of the applicative functor in a discussion, which I quote below.
Note that the code examples are pseudo-code for my hypothetical language, which would hide the type classes in a conceptual form of subtyping; so if you see a method call for apply, just translate it into your type class model, e.g. <*> in Scalaz or Haskell.
If we mark elements of an array or hashmap with null or none to indicate that their index or key is valid yet valueless, the Applicative enables, without any boilerplate, skipping the valueless elements while applying operations to the elements that have a value. And more importantly, it can automatically handle any Wrapped semantics that are unknown a priori, i.e. operations on T over Hashmap[Wrapped[T]] (and over any level of composition, e.g. Hashmap[Wrapped[Wrapped2[T]]], because applicative is composable but monad is not).
I can already picture how it will make my code easier to understand. I can focus on the semantics, not on all the cruft to get me there, and my semantics will be open under extension of Wrapped, whereas all your example code isn't.
Significantly, I forgot to point out before that your prior examples do not emulate the return value of the Applicative, which will be a List, not a Nullable, Option, or Maybe. So even my attempts to repair your examples were not emulating Applicative.apply.
Remember the functionToApply is the input to Applicative.apply, so the container maintains control.
list1.apply( list2.apply( ... listN.apply( List.lift(functionToApply) ) ... ) )
Equivalently.
list1.apply( list2.apply( ... listN.map(functionToApply) ... ) )
And my proposed syntactical sugar, which the compiler would translate to the above:
funcToApply(list1, list2, ... list N)
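As a hedged translation of the pseudocode above into plain Scala (ap is hand-rolled; in Scalaz or Haskell it is <*>):

// apply/ap for List: apply every wrapped function to every wrapped value.
def ap[A, B](ff: List[A => B])(fa: List[A]): List[B] =
  ff.flatMap(f => fa.map(f))

val f: Int => Int => Int = x => y => x + y
val list1 = List(1, 2)
val list2 = List(10, 20)

// One way to realize funcToApply(list1, list2) from the sugar above:
println(ap(ap(List(f))(list1))(list2)) // List(11, 21, 12, 22)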
It is useful to read that interactive discussion, because I can't copy it all here. I expect that URL not to break, given who the owner of that blog is. For example, I quote from further down the discussion:
the conflation of out-of-statement control flow with assignment is probably not desired by most programmers
Applicative.apply is for generalizing the partial application of functions to parameterized types (a.k.a. generics) at any level of nesting (composition) of the type parameter. This is all about making more generalized composition possible. The generality can't be accomplished by pulling it outside the completed evaluation (i.e. return value) of the function, analogous to how an onion can't be peeled from the inside out.
Thus it isn't conflation; it is a new degree of freedom that is not currently available to you. Per our discussion up-thread, this is why you must throw exceptions or store them in a global variable: your language doesn't have this degree of freedom. And that is not the only application of these category-theory functors (expounded in my comment in the moderator queue).
I provided a link to an example abstracting validation in Scala, F#, and C#, which is currently stuck in the moderator queue. Compare the obnoxious C# version of the code; the reason is that the C# is not generalized. I intuitively expect that the C#'s case-specific boilerplate will explode geometrically as the program grows.