How do error handling monads like Eithers achieve referential transparency? - scala

From reading about FP, my understanding of the benefits of removing side-effects is that if all our functions are pure/have referential transparency (something that can only be achieved without side-effects), then our functions are easier to test, debug, reuse, and are more modular.
Since exceptions are a form of side-effect, we need to avoid throwing exceptions. Obviously we still need to be able to terminate processes when something goes wrong so FP uses monads to achieve both referential transparency and the ability to handle exceptions.
What I'm confused about is how exactly monads achieve this. Suppose I have this code using scalaz
def couldThrowException(): Exception \/ Boolean = ???
val f = couldThrowException()
val g = couldThrowException()
Since couldThrowException may return an exception, there is no guarantee that f and g will be the same. f could be \/-(true) and g could be -\/(NullPointerException). Since couldThrowException can return different values given the same input, it is not a pure function. Wasn't the point of using monads to keep our functions pure?

f() and g() should evaluate to the same value, given the same input.
In pure FP a function with no arguments must necessarily evaluate to the same result every time it's called. So it's not pure FP if your couldThrowException sometimes returns \/-(true) and sometimes -\/(NullPointerException).
It makes more sense to return an Either if couldThrowException takes a parameter. If it's a pure function, it will have referential transparency, so some inputs will always result in \/-(true) and some inputs will always result in -\/(NullPointerException).
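To make that concrete, here is a minimal sketch using the standard library's Either in place of scalaz's \/ (the parseBool name is hypothetical):
// Pure: the result depends only on the input, so the same input always
// yields the same Either value - nothing is thrown, failure is just data.
def parseBool(s: String): Either[Exception, Boolean] = s match {
  case "true"  => Right(true)
  case "false" => Right(false)
  case other   => Left(new IllegalArgumentException(s"not a boolean: $other"))
}

val f = parseBool("true") // always Right(true)
val g = parseBool("true") // always Right(true), so f == g holds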
In Scala you may well be using a function that is not pure, and not referentially transparent. Perhaps it's a Java class. Perhaps it's using a part of Scala that's not pure.
But I guess you're interested in bridging between the pure FP world and impure libraries. A classic example of this is IO. println could fail for all kinds of reasons - permissions, filesystem full, etc.
The classic way to handle this in FP is to use an IO function that takes a "world" state as an input parameter, and returns both the result of the IO call, and the new "world" state. The state can be a "fake" value backed by library code in an impure language, but what it means is that each time you call the function, you're passing a different state, so it's referentially transparent.
Often, a monad is used to encapsulate the "world".
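As a rough sketch of that idea (the World type and readLn below are hypothetical, not a real library):
// Each call receives the current "world" and returns a new one, so no two
// calls ever see the same argument - referential transparency is preserved.
final case class World(version: Long)

def readLn(w: World): (String, World) =
  (scala.io.StdIn.readLine(), World(w.version + 1)) // impure inside, pure-shaped outside

val w0 = World(0)
val (line1, w1) = readLn(w0)
val (line2, w2) = readLn(w1) // different input world, so a different call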
You can find out a lot about this approach by reading about Haskell's IO monad.
Core Scala IO isn't wholly pure, so println might throw an exception and hence, as you've spotted, isn't fully referentially transparent. Scalaz provides an IO monad similar to Haskell's.
One thing to note, because it trips up a lot of beginners: there's nothing about the "world" approach that requires a monad, and IO isn't the easiest monad to look at when first learning what a monad is and why monads are useful.

Related

Scala compiler: detecting a pure/impure function

In FP languages like Scala, Haskell, etc., pure functions are used, which makes it possible for the compiler to optimize the code. For example:
val x = method1() // a pure function call
val y = method2   // another pure function call
val c = method3(x, y)
Since method1 and method2 are pure functions, their evaluations are independent of each other, so the compiler can parallelize both calls.
A language like Haskell has constructs (like the IO monad) that tell you whether a function is pure or performs some IO operation. But how does the Scala compiler detect that a function is pure?
The general approach to classifying a block of code as pure is to define which operations are pure and since purity composes, a composition of pure operations is pure.
Parallelization isn't actually one of the more important benefits of pure code: the benefit is that any evaluation strategy can be used. Evaluation can be reordered or results can be cached etc. Parallelization is another evaluation strategy but without a good sense of the actual execution cost (and note that modern CPUs and memory hierarchies can make it really difficult to get such a sense), it often slows things down relative to other strategies. For modern pure code, laziness and caching repeated values is often more generally effective, while parallelism is controlled by the developer (one benefit of pure code is that you can make arbitrary changes to how you're parallelizing without changing the semantics of the code).
In the case of Scala, the compiler makes no real effort to classify pure/impure code and generally doesn't try alternative evaluation strategies: control of that is left to the programmer (the language helps somewhat by having call-by-name and lazy).
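To illustrate those two language features (a small sketch, nothing more):
// Call-by-name: the argument is re-evaluated at every use site.
def twiceByName(a: => Int): Int = a + a // evaluates `a` twice

// lazy: evaluation is deferred until first use, then the result is cached.
lazy val expensive: Int = { println("computing"); 42 }
val sum = expensive + expensive // prints "computing" only once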
The JVM's JIT compiler can and does perform some purity analysis on bytecode when deciding what it can safely inline and reorder. This isn't Scala-specific, though final local variables (aka local vals in Scala or final variables in Java) enable some optimizations that can't otherwise be performed. Javascript runtimes (for ScalaJS) can (and really aggressively do, in practice) likewise perform that analysis, as does LLVM (for Scala Native).
In the general case, Purity Analysis is equivalent to solving the Halting Problem. In other words, it is impossible to statically decide, in the general case, whether a chunk of code is pure or not.
In a language like Haskell, there is no way of writing impure code, and therefore purity analysis is trivial. Here is a simple function that takes a Haskell program as an argument and tells you whether it is pure or not:
isPureProgram :: a -> Bool
isPureProgram _ = True
Note, I am simplifying a couple of things here:
unsafePerformIO and friends allow you to, well, perform unsafe I/O. It is generally assumed that you know what you are doing when you use these functions.
Exceptions are side-effects.
Contrary to popular belief, the IO monad does not allow you to write impure code in Haskell. What the IO monad does is let you write a pure program which returns a list of IO actions, which, when interpreted by the runtime system, result in impure computation. However, the Haskell program which generates these IO actions is still pure – it is the interpreter which is impure. But of course, the end result will be the same: an impure computation will be performed.
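A tiny Scala sketch of that separation (the Action type and interpreter are hypothetical, only loosely analogous to Haskell's IO):
// The pure part: a program is just a value that describes effects.
sealed trait Action
final case class Print(msg: String) extends Action

def program: List[Action] = // pure: always returns the same list
  List(Print("hello"), Print("world"))

// The impure part: a separate interpreter actually performs the effects.
def interpret(actions: List[Action]): Unit =
  actions.foreach { case Print(msg) => println(msg) }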
However, since Scala is an impure language at its core, the compiler cannot rely on similar restrictions as a Haskell compiler can, and thus cannot perform purity analysis in the general case.

"Lifting" exceptions to Option types

Both F# and Scala act as hybrid languages that are often used to bridge the worlds of traditional object-oriented code and functional code.
A concept that belongs more to the OO world is exceptions, whereas the functional world favors Option types in many cases.
To wrap existing library code that relies on exceptions - and make it more functional - I would thus like to "lift" exception-throwing code to instead return an option type.
In Scala, there is a nice library function to "catch all" and convert to option. It can be used like this:
import scala.util.control.Exception._
val functionalVersion = allCatch opt myFunction
see In Scala, is there a pre-existing library function for converting exceptions to Options?
Now that I'm moving to F# I have the same requirement, but I can't seem to find an existing utility function for this - and also struggle to implement one myself.
I can create such a wrapper for a unit function, aka an action
let catchAll f = try Some (f()) with | _ -> None
But the problem here is that I don't want to first wrap all the exception-throwing code into an action.
For example I would like to wrap the array-indexing operator, so that it doesn't throw.
// Wrap out-of-bounds exception with option type
let maybeGetIndex (array: int[]) (index: int) = catchAll (fun () -> array.[index])
maybeGetIndex [| 1; 2; 3 |] 10 // -> None
However, it would be much nicer if one could simply write
(catchAll a.[index])
i.e. apply catchAll to a whole expression before it is evaluated.
(Scala can achieve this through call-by-name parameters which seem to be missing from F#)
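For comparison, a minimal sketch of that call-by-name version in Scala (the catchAll below is hand-rolled, not a library function):
// `=> A` makes the argument call-by-name, so it is only evaluated
// inside the try block.
def catchAll[A](a: => A): Option[A] =
  try Some(a) catch { case scala.util.control.NonFatal(_) => None }

val arr = Array(1, 2, 3)
catchAll(arr(10)) // None, instead of an ArrayIndexOutOfBoundsException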
So this question is twofold:
Is there an existing library function to wrap exceptions into option types?
Is there a language feature that would allow me to implement it?
First of all, I think it's not true that exceptions are a "concept that belongs more to the object-oriented world". Exceptions exist in many functional languages of the ML family; OCaml, for example, relies on them quite heavily and uses them even for certain control-flow structures. In F# this is not so much the case, because .NET exceptions are somewhat slower, but I see exceptions as very much orthogonal to the object-oriented/functional issue.
For this reason, I actually find exceptions often preferable to option types in F#. The downside is that there is less type-checking (you do not know what might throw), but the upside is that the language provides nice integrated language support for exceptions. If you need to handle exceptional situations, then exceptions are a good way of doing that!
To answer your original question about syntactic tricks you can make - I would probably just use a function as your existing code does, because that's explicit and easy to understand (and presumably, you'll only need exception wrapping in some core functions that you implement).
That said, you can define a computation expression builder that wraps the code in the body and serves as your catchAll function with a somewhat neater syntax:
type CatchAllBuilder() =
    member x.Delay(f) = try Some(f()) with _ -> None
    member x.Return(v) = v

let catchAll = CatchAllBuilder()
This lets you write something like:
catchAll { return Array.empty.[0] }
As mentioned earlier, I wouldn't do this because (i) I don't think all exceptions need to be converted to options in F# and (ii) it introduces unfamiliar syntax that new team members (and future you) might be confused by, but it is probably the nicest syntax you can get.
[EDIT: Now a working version with return - this is somewhat less pretty, but perhaps still useful!]

Why does Future have side effects?

I am reading the book FPiS and on page 107 the author says:
We should note that Future doesn’t have a purely functional interface.
This is part of the reason why we don’t want users of our library to
deal with Future directly. But importantly, even though methods on
Future rely on side effects, our entire Par API remains pure. It’s
only after the user calls run and the implementation receives an
ExecutorService that we expose the Future machinery. Our users
therefore program to a pure interface whose implementation
nevertheless relies on effects at the end of the day. But since our
API remains pure, these effects aren’t side effects.
Why does Future not have a purely functional interface?
The problem is that creating a Future that induces a side-effect is in itself also a side-effect, due to Future's eager nature.
This breaks referential transparency: if you create a Future that only prints to the console, the future will run immediately and perform the side-effect without you asking it to.
An example:
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

for {
  x <- Future { println("Foo") }
  y <- Future { println("Foo") }
} yield ()
This results in "Foo" being printed twice. Now if Future was referentially transparent we should be able to get the same result in the non-inlined version below:
val printFuture = Future { println("Foo") }

for {
  x <- printFuture
  y <- printFuture
} yield ()
However, this instead prints "Foo" only once and, even more problematically, it prints it whether or not you include the for-expression.
With referentially transparent expressions we should be able to inline any expression without changing the semantics of the program. Future cannot guarantee this; therefore it breaks referential transparency and is inherently effectful.
A basic premise of FP is referential transparency. In other words, avoiding side effects.
What's a side effect? From Wikipedia:
In computer science, a function or expression is said to have a side effect if it modifies some state outside its scope or has an observable interaction with its calling functions or the outside world. (Except, by convention, returning a value: returning a value has an effect on the calling function, but this is usually not considered as a side effect.)
And what is a Scala future? From the documentation page:
A Future is a placeholder object for a value that may not yet exist.
So a future can transition from a not-yet-existing-value to an existing-value without any interaction from or with the rest of the program, and, as you quoted: "methods on Future rely on side effects."
It would appear that Scala futures do not maintain referential transparency.
As far as I know, Future runs its computation automatically when it's created. Even if it lacks side-effects in its nested computation, it still breaks the flatMap composition rule, because it changes state over time:
someFuture.flatMap(Future(_)) == someFuture // can be false
Equality implementation questions aside, we can have a race condition here: the new Future immediately runs for a tiny fraction of time, and its isCompleted can differ from someFuture's if the latter is already done.
In order to be pure with respect to the effect it represents, Future should defer its computation and run it only when explicitly asked to, as in the case of Par (or scalaz's Task).
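A minimal sketch of such a deferred type (hypothetical, loosely in the spirit of scalaz's Task):
// Constructing a Task performs nothing; the thunk runs only when `run()`
// is called explicitly, so inlining or reusing the value is safe.
final class Task[A](thunk: () => A) {
  def run(): A = thunk()
  def map[B](f: A => B): Task[B] = flatMap(a => Task(f(a)))
  def flatMap[B](f: A => Task[B]): Task[B] = new Task(() => f(thunk()).run())
}
object Task {
  def apply[A](a: => A): Task[A] = new Task(() => a)
}

val printTask = Task(println("Foo"))
val both = for { _ <- printTask; _ <- printTask } yield ()
both.run() // only now is "Foo" printed, twice - same as the inlined version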
To complement the other points and explain the relationship between referential transparency (a requirement) and side-effects (mutation that might break this requirement), here is a somewhat simplistic but pragmatic view of what's happening:
a newly created Future immediately submits a Callable task into your pool's queue. Given that the queue is a mutable collection, this is basically a side-effect.
any subscription (from onComplete to map) does the same, plus uses an additional mutable collection of subscribers per Callable.
Btw, subscriptions don't just violate the Monad laws, as noted by @P.Frolov (for flatMap) - the Functor law f.map(identity) == f is broken too, especially in light of the fact that the Future newly created by map isn't equivalent to the original: it has its own separate subscriptions and Callable.
This "fire and subscribe" allows you to do stuff like:
val f = Future { ... }
val f2 = f.map(...)
val f3 = f.map(...) // twice or more
Every line of this code produces a side-effect that might potentially break referential transparency - and, as many have mentioned, it actually does.
The reason many authors prefer the term "referential transparency" is probably that, from a low-level perspective, we always perform some side-effects; however, only a subset of those (usually the more high-level ones) actually makes your code "non-functional".
As for Futures, breaking referential transparency is especially disruptive because it also leads to non-determinism (in Future's case):
val f1 = Future {
  println("1")
}
val f2 = Future {
  println("2")
}
// "1" and "2" can appear in either order, depending on the thread pool
It gets worse when this is combined with monads, including the for-comprehension cases mentioned by @Luka Jacobowitz. In practice, monads are used not only to flatten and merge compatible containers, but also to guarantee a sequential relation between steps. This is probably because even in abstract algebra, monads generalize consequence operators, meant as a general characterization of the notion of deduction.
This simply means that it's hard to reason about non-deterministic logic - even harder than about code that is merely non-referentially-transparent:
analyzing logs produced by Futures, or even worse actors, is hell. No matter how many labels and how much thread-local propagation you have, everything breaks eventually.
non-deterministic (aka "sometimes appearing") bugs are the most annoying and stay in production for years(!) - even extensive high-load testing (including performance tests) doesn't always catch them.
So, even in the absence of other criteria, code that is easier to reason about is essentially more functional, and Futures often lead to code that isn't.
P.S. As a conclusion: if your project can tolerate scalaz/cats/monix/fs2 and so on, it's better to use Tasks/Streams/Iteratees. Those libraries introduce some risk of overdesign, of course; however, IMO it's better to spend time simplifying incomprehensible scalaz code than debugging an incomprehensible bug.

How pure and lazy can Scala be?

This is just one of those "I was wondering..." questions.
Scala has immutable data structures and (optional) lazy vals etc.
How close can a Scala program be to one that is fully pure (in a functional programming sense) and fully lazy (or as Ingo points out, can it be sufficiently non-strict)? What values are unavoidably mutable and what evaluation unavoidably greedy?
Regarding laziness - currently, passing a parameter to a method is by default strict:
def square(a: Int) = a * a
but you can use call-by-name parameters:
def square(a: =>Int) = a * a
but this is not lazy in the sense of computing the value only once, when first needed:
scala> square({println("calculating");5})
calculating
calculating
res0: Int = 25
There's been some work on adding lazy method parameters, but it hasn't been integrated yet (the declaration below should print "calculating" from above only once):
def square(lazy a: Int) = a * a
This is one piece that is missing, although you could simulate it with a local lazy val:
def square(ap: => Int) = {
  lazy val a = ap
  a * a
}
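With that workaround, the earlier call (assuming the same REPL session) triggers the side effect only once:
scala> square({println("calculating"); 5})
calculating
res1: Int = 25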
Regarding mutability - there is nothing holding you back from writing immutable data structures and avoiding mutation. You can do this in Java or C as well. In fact, some immutable data structures rely on the lazy primitive to achieve better complexity bounds, but the lazy primitive can be simulated in other languages as well - at the cost of extra syntax and boilerplate.
You can always write immutable data structures, lazy computations and fully pure programs in Scala. The problem is that the Scala programming model allows writing non-pure programs as well, so the type checker can't always infer some properties of the program (such as purity) which it could infer if the programming model were more restrictive.
For example, in a language with pure expressions the a * a in the call-by-name definition above (a: =>Int) could be optimized to evaluate a only once, regardless of the call-by-name semantics. If the language allows side-effects, then such an optimization is not always applicable.
Scala can be as pure and lazy as you like, but a) the compiler won't keep you honest with regards to purity and b) it will take a little extra work to make it lazy. There's nothing too profound about this; you can even write lazy and pure Java code if you really want to (see here if you dare; achieving laziness in Java requires eye-bleeding amounts of nested anonymous inner classes).
Purity
Whereas Haskell tracks impurities via the type system, Scala has chosen not to go that route, and it's difficult to tack that sort of thing on when you haven't made it a goal from the beginning (and also when interoperability with a thoroughly impure language like Java is a major goal of the language).
That said, some believe it's possible and worthwhile to make the effort to document effects in Scala's type system. But I think purity in Scala is best treated as a matter of self-discipline, and you must be perpetually skeptical about the supposed purity of third-party code.
Laziness
Haskell is lazy by default but can be made stricter with some annotations sprinkled in your code... Scala is the opposite: strict by default but with the lazy keyword and by-name parameters you can make it as lazy as you like.
Feel free to keep things immutable. On the other hand, there's no side effect tracking, so you can't enforce or verify it.
As for non-strictness, here's the deal... First, if you choose to go completely non-strict, you'll be forsaking all of Scala's classes. Even Scalaz is not non-strict for the most part. If you are willing to build everything yourself, you can make your methods non-strict and your values lazy.
Next, I wonder if implicit parameters can be non-strict or not, or what would be the consequences of making them non-strict. I don't see a problem, but I could be wrong.
But, most problematic of all, function parameters are strict, and so are closure parameters.
So, while it is theoretically possible to go fully non-strict, it will be incredibly inconvenient.

What are practical uses of applicative style?

I am a Scala programmer, learning Haskell now. It's easy to find practical use cases and real world examples for OO concepts, such as decorators, strategy pattern etc. Books and interwebs are filled with it.
I came to the realization that this somehow is not the case for functional concepts. Case in point: applicatives.
I am struggling to find practical use cases for applicatives. Almost all of the tutorials and books I have come across so far provide the examples of [] and Maybe. I expected applicatives to be more applicable than that, seeing all the attention they get in the FP community.
I think I understand the conceptual basis for applicatives (maybe I am wrong), and I have waited long for my moment of enlightenment. But it doesn't seem to be happening. Never while programming, have I had a moment when I would shout with a joy, "Eureka! I can use applicative here!" (except again, for [] and Maybe).
Can someone please guide me how applicatives can be used in a day-to-day programming? How do I start spotting the pattern? Thanks!
Applicatives are great when you've got a plain old function of several variables, and you have the arguments but they're wrapped up in some kind of context. For instance, you have the plain old concatenate function (++) but you want to apply it to 2 strings which were acquired through I/O. Then the fact that IO is an applicative functor comes to the rescue:
Prelude Control.Applicative> (++) <$> getLine <*> getLine
hi
there
"hithere"
Even though you explicitly asked for non-Maybe examples, it seems like a great use case to me, so I'll give an example. You have a regular function of several variables, but you don't know if you have all the values you need (some of them may have failed to compute, yielding Nothing). So essentially because you have "partial values", you want to turn your function into a partial function, which is undefined if any of its inputs is undefined. Then
Prelude Control.Applicative> (+) <$> Just 3 <*> Just 5
Just 8
but
Prelude Control.Applicative> (+) <$> Just 3 <*> Nothing
Nothing
which is exactly what you want.
The basic idea is that you're "lifting" a regular function into a context where it can be applied to as many arguments as you like. The extra power of Applicative over just a basic Functor is that it can lift functions of arbitrary arity, whereas fmap can only lift a unary function.
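For Scala readers: the same lifting is available through cats, for example via the mapN syntax on tuples (a small sketch, assuming cats is on the classpath):
import cats.implicits._

// Lift the binary function + over two Option contexts at once;
// plain map could only lift a unary function.
val sum: Option[Int]  = (Option(3), Option(5)).mapN(_ + _)          // Some(8)
val none: Option[Int] = (Option(3), Option.empty[Int]).mapN(_ + _)  // None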
Since many applicatives are also monads, I feel there are really two sides to this question.
Why would I want to use the applicative interface instead of the monadic one when both are available?
This is mostly a matter of style. Although monads have the syntactic sugar of do-notation, using applicative style frequently leads to more compact code.
In this example, we have a type Foo and we want to construct random values of this type. Using the monad instance for IO, we might write
data Foo = Foo Int Double

randomFoo = do
  x <- randomIO
  y <- randomIO
  return $ Foo x y
The applicative variant is quite a bit shorter.
randomFoo = Foo <$> randomIO <*> randomIO
Of course, we could use liftM2 to get similar brevity; however, the applicative style is neater than having to rely on arity-specific lifting functions.
In practice, I mostly find myself using applicatives much in the same way I use point-free style: to avoid naming intermediate values when an operation is more clearly expressed as a composition of other operations.
Why would I want to use an applicative that is not a monad?
Since applicatives are more restricted than monads, this means that you can extract more useful static information about them.
An example of this is applicative parsers. Whereas monadic parsers support sequential composition using (>>=) :: Monad m => m a -> (a -> m b) -> m b, applicative parsers only use (<*>) :: Applicative f => f (a -> b) -> f a -> f b. The types make the difference obvious: In monadic parsers the grammar can change depending on the input, whereas in an applicative parser the grammar is fixed.
By limiting the interface in this way, we can for example determine whether a parser will accept the empty string without running it. We can also determine the first and follow sets, which can be used for optimization, or, as I've been playing with recently, constructing parsers that support better error recovery.
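A small Scala sketch (hypothetical types, not a real parser library) of the kind of static information the fixed applicative structure exposes:
// Because the grammar is a fixed data structure, properties like
// "does this parser accept the empty string?" can be computed by
// inspecting it, without ever running the parser.
sealed trait P[A] { def acceptsEmpty: Boolean }
final case class Pure[A](a: A) extends P[A] {
  def acceptsEmpty = true
}
final case class Token(c: Char) extends P[Char] {
  def acceptsEmpty = false
}
final case class Ap[A, B](pf: P[A => B], pa: P[A]) extends P[B] {
  def acceptsEmpty = pf.acceptsEmpty && pa.acceptsEmpty
}
// A monadic flatMap would make this impossible: the structure of the rest
// of the grammar would depend on a value that only exists at parse time.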
I think of Functor, Applicative and Monad as design patterns.
Imagine you want to write a Future[T] class. That is, a class that holds values that are to be calculated.
In a Java mindset, you might create it like
trait Future[T] {
  def get: T
}
Where 'get' blocks until the value is available.
You might realize this, and rewrite it to take a callback:
trait Future[T] {
  def foreach(f: T => Unit): Unit
}
But then what happens if there are two uses for the future? It means you need to keep a list of callbacks. Also, what happens if a method receives a Future[Int] and needs to return a calculation based on the Int inside? Or what do you do if you have two futures and you need to calculate something based on the values they will provide?
But if you know of FP concepts, you know that instead of working directly on T, you can manipulate the Future instance.
trait Future[T] {
  def map[U](f: T => U): Future[U]
}
Now your application changes so that each time you need to work on the contained value, you just return a new Future.
Once you start down this path, you can't stop there. You realize that in order to manipulate two futures, you just need to model Future as an applicative; in order to create futures, you need a monad definition for Future, and so on.
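For instance, combining two futures needs exactly the applicative shape - a sketch extending the trait above with a hypothetical map2:
trait Future[T] {
  def map[U](f: T => U): Future[U]
  // Applicative-style combination: once both values are available,
  // apply f to them - no blocking, no callbacks in user code.
  def map2[U, R](other: Future[U])(f: (T, U) => R): Future[R]
}
// Usage: val sum = futureA.map2(futureB)(_ + _)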
UPDATE: As suggested by @Eric, I've written a blog post: http://www.tikalk.com/incubator/blog/functional-programming-scala-rest-us
I finally understood how applicatives can help in day-to-day programming with that presentation:
https://web.archive.org/web/20100818221025/http://applicative-errors-scala.googlecode.com/svn/artifacts/0.6/chunk-html/index.html
The author shows how applicatives can help with combining validations and handling failures.
The presentation is in Scala, but the author also provides the full code example for Haskell, Java and C#.
Warning: my answer is rather preachy/apologetic. So sue me.
Well, how often in your day-to-day Haskell programming do you create new data types? Sounds like you want to know when to make your own Applicative instance, and in all honesty unless you are rolling your own parser, you probably won't need to do it very much. Using applicative instances, on the other hand, you should learn to do frequently.
Applicative is not a "design pattern" like decorators or strategies. It is an abstraction, which makes it much more pervasive and generally useful, but much less tangible. The reason you have a hard time finding "practical uses" is because the example uses for it are almost too simple. You use decorators to put scrollbars on windows. You use strategies to unify the interface for both aggressive and defensive moves for your chess bot. But what are applicatives for? Well, they're a lot more generalized, so it's hard to say what they are for, and that's OK. Applicatives are handy as parsing combinators; the Yesod web framework uses Applicative to help set up and extract information from forms. If you look, you'll find a million and one uses for Applicative; it's all over the place. But since it's so abstract, you just need to get the feel for it in order to recognize the many places where it can help make your life easier.
I think applicatives ease the general usage of monadic code. How many times have you been in the situation where you wanted to apply a function, but the function was not monadic and the value you wanted to apply it to was monadic? For me: quite a lot of times!
Here is an example that I just wrote yesterday:
ghci> import Data.Time.Clock
ghci> import Data.Time.Calendar
ghci> getCurrentTime >>= return . toGregorian . utctDay
in comparison to this using Applicative:
ghci> import Control.Applicative
ghci> toGregorian . utctDay <$> getCurrentTime
This form looks "more natural" (at least to my eyes :)
Coming at Applicative from "Functor" it generalizes "fmap" to easily express acting on several arguments (liftA2) or a sequence of arguments (using <*>).
Coming at Applicative from "Monad" it does not let the computation depend on the value that is computed. Specifically you cannot pattern match and branch on a returned value, typically all you can do is pass it to another constructor or function.
Thus I see Applicative as sandwiched in between Functor and Monad. Recognizing when you are not branching on the values from a monadic computation is one way to see when to switch to Applicative.
Here is an example taken from the aeson package:
data Coord = Coord { x :: Double, y :: Double }

instance FromJSON Coord where
  parseJSON (Object v) =
    Coord <$>
      v .: "x" <*>
      v .: "y"
There are some ADTs, like ZipList, that can have applicative instances but not monadic instances. This was a very helpful example for me when understanding the difference between applicatives and monads. Since so many applicatives are also monads, it's easy to miss the difference between the two without a concrete example like ZipList.
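A compact Scala rendering of the ZipList idea (names are mine, not a library's):
// Pairwise (zipping) application: lawful as an Applicative when `pure`
// repeats its value forever, but there is no lawful flatMap for it,
// so ZipList is an applicative that is not a monad.
final case class ZipList[A](run: LazyList[A]) {
  def ap[B](ff: ZipList[A => B]): ZipList[B] =
    ZipList(ff.run.zip(run).map { case (f, a) => f(a) })
}
object ZipList {
  def pure[A](a: A): ZipList[A] = ZipList(LazyList.continually(a))
}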
I think it might be worthwhile to browse the sources of packages on Hackage and see first-hand how applicative functors and the like are used in existing Haskell code.
I described an example of practical use of the applicative functor in a discussion, which I quote below.
Note: the code examples are pseudo-code for my hypothetical language, which would hide the type classes in a conceptual form of subtyping, so if you see a method call for apply, just translate it into your type class model, e.g. <*> in Scalaz or Haskell.
If we mark elements of an array or hashmap with null or none to indicate their index or key is valid yet valueless, the Applicative enables, without any boilerplate, skipping the valueless elements while applying operations to the elements that have a value. And more importantly, it can automatically handle any Wrapped semantics that are unknown a priori, i.e. operations on T over Hashmap[Wrapped[T]] (and over any level of composition, e.g. Hashmap[Wrapped[Wrapped2[T]]], because applicative is composable but monad is not).
I can already picture how it will make my code easier to
understand. I can focus on the semantics, not on all the
cruft to get me there and my semantics will be open under extension of
Wrapped whereas all your example code isn’t.
Significantly, I forgot to point out before that your prior examples
do not emulate the return value of the Applicative, which will be a
List, not a Nullable, Option, or Maybe. So even my attempts to
repair your examples were not emulating Applicative.apply.
Remember the functionToApply is the input to the
Applicative.apply, so the container maintains control.
list1.apply( list2.apply( ... listN.apply( List.lift(functionToApply) ) ... ) )
Equivalently.
list1.apply( list2.apply( ... listN.map(functionToApply) ... ) )
And my proposed syntactical sugar which the compiler would translate
to the above.
funcToApply(list1, list2, ... listN)
It is useful to read that interactive discussion, because I can't copy it all here. I expect that URL not to break, given who the owner of that blog is. For example, I quote from further down the discussion.
the conflation of out-of-statement control flow with assignment is probably not desired by most programmers
Applicative.apply is for generalizing the partial application of functions to parameterized types (a.k.a. generics) at any level of nesting (composition) of the type parameter. This is all about making more generalized composition possible. The generality can't be accomplished by pulling it outside the completed evaluation (i.e. return value) of the function, analogous to how an onion can't be peeled from the inside out.
Thus it isn't conflation; it is a new degree-of-freedom that is not currently available to you. Per our discussion upthread, this is why you must throw exceptions or store them in a global variable: because your language doesn't have this degree-of-freedom. And that is not the only application of these category theory functors (expounded in my comment in the moderator queue).
I provided a link to an example abstracting validation in Scala, F#, and C#, which is currently stuck in the moderator queue. Compare the obnoxious C# version of the code. The reason is that the C# is not generalized. I intuitively expect that the C#'s case-specific boilerplate will explode geometrically as the program grows.