Does Scala have an equivalent to Golang's defer?
from:
http://golang.org/doc/effective_go.html#defer
Go's defer statement schedules a function call (the deferred function) to be run immediately before the function executing the defer returns. It's an unusual but effective way to deal with situations such as resources that must be released regardless of which path a function takes to return. The canonical examples are unlocking a mutex or closing a file.
Scala does not offer defer by design; however, you can build it yourself by wrapping your function in another function and passing in an object that keeps track of the functions to call.
Example:
class DeferTracker() {
  class LazyVal[A](val value: () => A)

  private var l = List[LazyVal[Any]]()
  def apply(f: => Any) = { l = new LazyVal(() => f) :: l }
  def makeCalls() = l.foreach { x => x.value() }
}
def Deferrable[A](context: DeferTracker => A) = {
  val dt = new DeferTracker()
  val res = context(dt)
  dt.makeCalls()
  res
}
In this example, Deferrable is the wrapping function which calls context and returns its result, handing it an object which tracks the defer calls.
You can use this construct like this:
def dtest(x: Int) = println("dtest: " + x)

def someFunction(x: Int): Int = Deferrable { defer =>
  defer(dtest(x))
  println("before return")
  defer(dtest(2 * x))
  x * 3
}
println(someFunction(3))
The output would be:
before return
dtest: 6
dtest: 3
9
I'm aware that this can be solved differently, but it is really just an example showing that Scala can express the concept of defer without too much fuss.
I can't think of a Scala specific way, but wouldn't this be equivalent (though not as pretty):
try {
  // Do stuff
} finally {
  // "defer"
}
No. Go has this construct precisely because it doesn't use exceptions for ordinary error handling and has no try...finally syntax.
Personally, I think it invites a maintenance nightmare; calls to defer can be buried anywhere in a function. Even where responsible coders put the defers right beside the thing to be cleaned up, I think it's less clear than a finally block and as for what it lets the messy coders do... at least finally blocks put all the clean-up in one place.
defer is the opposite of idiomatic Scala. Scala offers monadic ways to control program flow, and defer magic does not contribute to them at all. Monads offer a functional improvement over try...finally, which lets you:
Define your own error handling flow
Manipulate defined flows functionally
Make a function's anticipated errors part of its signature
defer has no place in this.
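To make those three points concrete, here is a minimal sketch using Either from the standard library; the error types and operations (AppError, parseId, loadUser) are made up purely for illustration:
import scala.util.Try

sealed trait AppError
case class ParseError(msg: String) extends AppError
case class DbError(msg: String) extends AppError

// The anticipated errors are part of the signature...
def parseId(raw: String): Either[AppError, Int] =
  Try(raw.toInt).toEither.left.map(e => ParseError(e.getMessage))

def loadUser(id: Int): Either[AppError, String] =
  if (id > 0) Right(s"user-$id") else Left(DbError("no such user"))

// ...and the flows compose functionally; the first Left short-circuits the rest.
def userFor(raw: String): Either[AppError, String] =
  for {
    id   <- parseId(raw)
    user <- loadUser(id)
  } yield user
try...finally still has a place for releasing resources, but the error flow itself stays visible in the types.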
Related
I'm new to functional programming and Scala, and I was checking out the Cats Effect framework, trying to understand what the IO monad does. So far, what I've understood is that code inside an IO block is just a description of what needs to be done; nothing happens until you explicitly run it using the unsafe methods provided. It is also a way to make code that performs side effects referentially transparent, by not actually running it.
I tried executing the snippet below just to try to understand what it means:
import cats.effect.IO

object Playground extends App {
  var out = 10
  var state = "paused"

  def changeState(newState: String): IO[Unit] = {
    state = newState
    IO(println("Updated state."))
  }

  def x(string: String): IO[Unit] = {
    out += 1
    IO(println(string))
  }

  val tuple1 = (x("one"), x("two"))

  for {
    _ <- x("1")
    _ <- changeState("playing")
  } yield ()

  println(out)
  println(state)
}
And the output was:
13
paused
I don't understand why the assignment state = newState does not run, but the increment-and-assign expression out += 1 does. Am I missing something obvious about how this is supposed to work? I could really use some help. I understand that I can get this to run using the unsafe methods.
In your particular example, I think what is going on is that regular imperative Scala code is unaffected by the IO monad: it runs when it normally would under the rules of Scala.
When you run:
for {
_ <- x("1")
_ <- changeState("playing")
} yield ()
this immediately calls x. That has nothing to do with the IO monad; it's just how for comprehensions are defined. The first step is to evaluate the first statement so you can call flatMap on it.
As you observe, you never "run" the monadic result, so the argument to flatMap, the monadic continuation, is never invoked, resulting in no call to changeState. This is specific to the IO monad, as, e.g., the List monad's flatMap would have immediately invoked the function (unless it were an empty list).
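To make that concrete, here is roughly what the for comprehension from the question desugars to (a sketch, assuming cats.effect.IO and the x/changeState definitions above):
val program: IO[Unit] =
  x("1").flatMap { _ =>                    // x("1") is evaluated eagerly here, so out += 1 runs now
    changeState("playing").map(_ => ())    // this closure is only stored by IO, not invoked
  }

// Only an unsafe method would actually run the stored continuation, e.g.:
// program.unsafeRunSync()   // now changeState("playing") would finally execute
// (the exact run method and required imports depend on the Cats Effect version)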
This might be a really dumb question, but I am trying to understand the logic behind using #flatMap rather than just #map in this method from Finatra's HttpClient:
def executeJson[T: Manifest](request: Request, expectedStatus: Status = Status.Ok): Future[T] = {
  execute(request) flatMap { httpResponse =>
    if (httpResponse.status != expectedStatus) {
      Future.exception(new HttpClientException(httpResponse.status, httpResponse.contentString))
    } else {
      Future(parseMessageBody[T](httpResponse, mapper.reader[T]))
        .transformException { e =>
          new HttpClientException(httpResponse.status, s"${e.getClass.getName} - ${e.getMessage}")
        }
    }
  }
}
Why create a new Future when I can just use #map and instead have something like:
execute(request) map { httpResponse =>
  if (httpResponse.status != expectedStatus) {
    throw new HttpClientException(httpResponse.status, httpResponse.contentString)
  } else {
    try {
      FinatraObjectMapper.parseResponseBody[T](httpResponse, mapper.reader[T])
    } catch {
      case e => throw new HttpClientException(httpResponse.status, s"${e.getClass.getName} - ${e.getMessage}")
    }
  }
}
Is this purely a stylistic difference, with Future.exception simply being better style here (throwing almost looks like a side effect, even though in reality it isn't, since it doesn't exit the context of the Future), or is there something more behind it, such as order of execution?
Tl;dr:
What's the difference between throwing within a Future vs returning a Future.exception?
From a theoretical point of view, if we take away the exceptions part (they cannot be reasoned about using category theory anyway), then those two operations are completely identical as long as your construct of choice (in your case Twitter Future) forms a valid monad.
I don't want to go into length over these concepts, so I'm just going to present the laws directly (using Scala Future):
import scala.concurrent.ExecutionContext.Implicits.global
// Functor identity law
Future(42).map(x => x) == Future(42)
// Monad left-identity law
val f = (x: Int) => Future(x)
Future(42).flatMap(f) == f(42)
// combining those two, since every Monad is also a Functor, we get:
Future(42).map(x => x) == Future(42).flatMap(x => Future(x))
// and if we now generalise identity into any function:
Future(42).map(x => x + 20) == Future(42).flatMap(x => Future(x + 20))
So yes, as you already hinted, those two approaches are identical.
However, there are three comments that I have on this, given that we are including exceptions into the mix:
Be careful - when it comes to throwing exceptions, Scala Future (probably Twitter too) violates the left-identity law on purpose, in order to trade it off for some extra safety.
Example:
import scala.concurrent.ExecutionContext.Implicits.global

def sneakyFuture = {
  throw new Exception("boom!")
  Future(42)
}

val f1 = Future(42).flatMap(_ => sneakyFuture)
// Future(Failure(java.lang.Exception: boom!))

val f2 = sneakyFuture
// Exception in thread "main" java.lang.Exception: boom!
As #randbw mentioned, throwing exceptions is not idiomatic to FP and it violates principles such as purity of functions and referential transparency of values.
Scala and Twitter Future make it easy for you to just throw an exception: as long as it happens in a Future context, the exception will not bubble up, but will instead cause that Future to fail. However, that doesn't mean that literally throwing exceptions around in your code should be permitted, because it ruins the structure of your programs (similarly to how GOTO statements do, or break statements in loops, etc.).
Preferred practice is to always evaluate every code path into a value instead of throwing bombs around, which is why it's better to flatMap into a (failed) Future than to map into some code that throws a bomb.
Keep in mind referential transparency.
If you use map instead of flatMap and someone takes the code from the map and extracts it out into a function, then you're safer if this function returns a Future, otherwise someone might run it outside of Future context.
Example:
import scala.concurrent.ExecutionContext.Implicits.global
Future(42).map(x => {
  // this should be done inside a Future
  x + 1
})
This is fine. But after a completely valid refactoring (which relies on referential transparency), your code becomes this:
def f(x: Int) = {
  // this should be done inside a Future
  x + 1
}

Future(42).map(x => f(x))
And you will run into problems if someone calls f directly. It's much safer to wrap the code into a Future and flatMap on it.
Of course, you could argue that even when using flatMap someone could rip the f out of .flatMap(x => Future(f(x))), but it's not that likely. On the other hand, simply extracting the response-processing logic into a separate function fits perfectly with functional programming's idea of composing small functions into bigger ones, and it's likely to happen.
From my understanding of FP, exceptions are not thrown. This would be, as you said, a side-effect. Exceptions are instead values that are handled at some point in the execution of the program.
Cats (and I'm sure other libraries, too) employs this technique (https://github.com/typelevel/cats/blob/master/core/src/main/scala/cats/ApplicativeError.scala).
Therefore, the flatMap call allows the exception to be contained within a completed (failed) Future and handled at a later point in the program's execution, where other exception-value handling may also occur.
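A small sketch of that difference with the standard library Future (Twitter's Future.exception plays the same role as Future.failed here):
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future

// Both expressions end up as failed Futures, because the combinators catch the throw...
val viaThrow: Future[Int] =
  Future(42).map(_ => throw new RuntimeException("boom"))

// ...but here the failure is returned as a value, which is the style argued for above.
val viaValue: Future[Int] =
  Future(42).flatMap(_ => Future.failed(new RuntimeException("boom")))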
I am new to Scala, coming from a Java background, and currently confused about best practice regarding Option[T].
I feel like using Option.map is just more functional and beautiful, but that is not a good argument to convince other people. Sometimes an isEmpty check feels more straightforward and thus more readable. Are there any objective advantages, or is it just personal preference?
Example:
Variation 1:
someOption.map { value =>
  // some lines of code
}.orElse(foo)
Variation 2:
if (someOption.isEmpty) {
  foo
} else {
  val value = someOption.get
  // some lines of code
}
I intentionally excluded the options to use fold or pattern matching. I am simply not pleased by the idea of treating Option as a collection right now, and using pattern matching for a simple isEmpty check is an abuse of pattern matching IMHO. But no matter why I dislike these options, I want to keep the scope of this question to be the above two variations as named in the title.
Are there any objective advantages, or is it just personal preference?
I think there's a thin line between objective advantages and personal preference. You cannot make one believe there is an absolute truth to either one.
The biggest advantage one gains from using the monadic nature of Scala constructs is composition. The ability to chain operations together without having to "worry" about the internal value is powerful, not only with Option[T], but also working with Future[T], Try[T], Either[A, B] and going back and forth between them (also see Monad Transformers).
Let's see how using the predefined methods on Option[T] can help with control flow. For example, consider a case where you have an Option[Int] whose value you want to double, but only if it's greater than 40; otherwise, return -1. In the imperative approach, we get:
val option: Option[Int] = generateOptionValue

var res: Int = if (option.isDefined) {
  val value = option.get
  if (value > 40) value * 2 else -1
} else -1
Using collection-style methods on Option, an equivalent would look like:
val result: Int = option
  .filter(_ > 40)
  .map(_ * 2)
  .getOrElse(-1)
Let's now consider a case for composition. Say we have an operation which might throw an exception. Additionally, this operation may or may not yield a value. If it returns a value, we want to query a database with that value; otherwise, return an empty string.
A look at the imperative approach with a try-catch block:
var result: String = _
try {
  val maybeResult = dangerousMethod()
  if (maybeResult.isDefined) {
    result = queryDatabase(maybeResult.get)
  } else result = ""
} catch {
  case NonFatal(e) => result = ""
}
Now let's consider using scala.util.Try along with an Option[String] and composing both together:
val result: String = Try(dangerousMethod())
  .toOption
  .flatten
  .map(queryDatabase)
  .getOrElse("")
I think this eventually boils down to which one can help you create clear control flow of your operations. Getting used to working with Option[T].map rather than Option[T].get will make your code safer.
To wrap up, I don't believe there's a single truth. I do believe that composition can lead to beautiful, readable, safe code that defers side effects, and I'm all for it. I think the best way to show other people what you feel is by giving them examples, as we just did, and letting them feel for themselves the power they can leverage with these tools.
using pattern matching for a simple isEmpty check is an abuse of pattern matching IMHO
If you just want an isEmpty check, isEmpty/isDefined is perfectly fine. But in your case you also want to get the value, and using pattern matching for that is not abuse; it's precisely the basic use case. Using get makes it very easy to make mistakes such as forgetting to check isDefined or checking the wrong condition:
if (someOption.isEmpty) {
  val value = someOption.get
  // some lines of code
} else {
  // some other lines
}
Hopefully testing would catch it, but there's no reason to settle for "hopefully".
Combinators (map and friends) are better than get for the same reason pattern matching is: they don't allow you to make this kind of mistake. Choosing between pattern matching and combinators is a different question. Generally combinators are preferred because they are more composable (as Yuval's answer explains). If you want to do something covered by a single combinator, I'd generally choose them; if you need a combination like map ... getOrElse, or a fold with multi-line branches, it depends on the specific case.
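For comparison, the same small piece of logic written both ways (the Option and the fallback here are hypothetical, just to show the shape):
val someOption: Option[Int] = Some(21)   // hypothetical input
def foo: Int = -1                        // hypothetical fallback

// Pattern matching: both cases are spelled out and the value is bound safely.
val viaMatch: Int = someOption match {
  case Some(value) => value * 2
  case None        => foo
}

// Combinators: the same logic as a pipeline.
val viaCombinators: Int = someOption.map(_ * 2).getOrElse(foo)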
The two may seem similar in the case of Option, but consider the case of Future: once you step outside the Future monad, you can no longer interact with the value the future will eventually hold.
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Promise
import scala.util.{Success, Try}

// Create a promise which we will complete after some time.
val promise = Promise[String]()

// Now let's consider the future contained in this promise.
val future = promise.future

val byGet = if (!future.value.isEmpty) {
  val valTry = future.value.get
  valTry match {
    case Success(v) => v + " :: Added"
    case _ => "DEFAULT :: Added"
  }
} else "DEFAULT :: Added"

val byMap = future.map(s => s + " :: Added")

// The promise is completed now.
promise.complete(Try("PROMISE"))

// Now let's print both values.
println(byGet)
// DEFAULT :: Added
println(byMap)
// Success(PROMISE :: Added)
Scala in Depth demonstrates the Loaner Pattern:
def readFile[T](f: File)(handler: FileInputStream => T): T = {
  val resource = new java.io.FileInputStream(f)
  try {
    handler(resource)
  } finally {
    resource.close()
  }
}
Example usage:
readFile(new java.io.File("test.txt")) { input =>
  println(input.read())
}
This code appears simple and clear. What is an "anti-pattern" of the Loaner pattern in Scala so that I know how to avoid it?
Make sure that whatever you compute is evaluated eagerly and no longer depends on the resource. Scala makes lazy computation fairly easy. For instance, if you wrap scala.io.Source.fromFile in this way, you might try
readFile("test.txt")(_.getLines)
Unfortunately, this doesn't work because getLines is lazy (returns an iterator). And Scala doesn't have any great way to indicate which methods are lazy and which are not. So you just have to know (docs will tend to tell you), and you have to actually do the work before returning:
readFile("test.txt")(_.getLines.toVector)
Overall, it's a very useful pattern. Just make sure that all accesses to the resource are completed before exiting the block (so no uncompleted futures, no lazy vals that depend on the resource, no iterators, no returning the resource itself, no streams that haven't been fully read, etc.; of course any of these things are okay if they do not depend on the open resource but only on some fully-computed quantity based upon the resource).
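To illustrate the iterator case, here is a sketch of a Source-based variant of the same loan pattern (withSource is a made-up name):
import scala.io.Source

def withSource[T](path: String)(handler: Source => T): T = {
  val source = Source.fromFile(path)
  try handler(source) finally source.close()
}

// Dangerous: the Iterator escapes the block and is consumed after close(),
// which will typically fail because the underlying stream is already closed.
// val lines = withSource("test.txt")(_.getLines())

// Safe: force the evaluation while the resource is still open.
val lines = withSource("test.txt")(_.getLines().toVector)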
With the loan pattern, it is important to know when the bit of code that actually uses your loaned resource is going to run.
If you want to return a Future from a loan-pattern function, I advise not creating the Future inside the function that is passed to the loan-pattern function.
Don't write
readFile("text.file")(in => Future { doSomething(in) })
but do:
Future { readFile("text.file")(in => doSomething(in)) }
What I usually do is define two variants of the loan-pattern function: a synchronous one and an asynchronous one.
So in your case I would have:
def asyncReadFile[T](f: File)(handler: FileInputStream => T): Future[T] = {
  Future {
    readFile(f)(handler)
  }
}
This way you avoid touching already-closed resources, and you reuse the already tested, and hopefully correct, code of the synchronous function.
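For contrast, this is the shape to avoid (a hypothetical badAsyncReadFile, reusing readFile from the question and the standard library Future):
import java.io.{File, FileInputStream}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future

// The Future is created inside the loaned block, but readFile closes the stream as soon
// as it returns, so the handler may run later against an already-closed FileInputStream.
def badAsyncReadFile[T](f: File)(handler: FileInputStream => T): Future[T] =
  readFile(f)(in => Future(handler(in)))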
Perhaps this is just my background in more imperative programming, but I like having return statements in my code.
I understand that in Scala, return is not necessary in many methods, because the value of the last expression is returned by default. I understand that this makes perfect sense for a "one-liner", e.g.
def square(x: Int) = x * x
I also understand the definitive case for using explicit returns (when you have several branches your code could take, and you want to break out of the method for different branches, e.g. if an error occurs). But what about multiline functions? Wouldn't it be more readable and make more sense if there was an explicit return, e.g.
def average(x: List[Int]): Float = {
  var sum = 0
  x.foreach(sum += _)
  return sum / x.length.toFloat
}
def average(x: List[Int]): Float =
  x.foldLeft(0)(_ + _) / x.length.toFloat
UPDATE: While I was aiming to show how iterative code can be turned into a functional expression, @soc rightly commented that an even shorter version is x.sum / x.length.toFloat.
I usually find that in Scala I have less need to "return in the middle". Furthermore, a large function is broken into smaller expressions that are easier to reason about. So instead of
var x = ...
if (some condition) x = ...
else x = ...
I would write
if (some condition) ...
else ...
Similar thing happens using match expressions. And you can always have helper nested classes.
Once you are comfortable with many forms of expressions that evaluate to a result (e.g., 'if') without a return statement, then having one in your methods looks out of place.
At one place of work we had a rule of not having 'return' in the middle of a method since it is easy for a reader of your code to miss it. If 'return' is only at the last line, what's the point of having it at all?
The return doesn't tell you anything extra, so I find it actually clutters my understanding of what goes on. Of course it returns; there are no other statements!
And it's also a good idea to get away from the habit of using return because it's really unclear where you are returning to when you're using heavily functional code:
def manyFutures = xs.map(x => Future { if (x < 5) return Nil else x :: Nil })
Where should the return leave execution? Outside of manyFutures? Just the inner block?
So although you can use explicit returns (at least if you annotate the return type explicitly), I suggest that you try to get used to the last-statement-is-the-return-value custom instead.
Your sense that the version with the return is more readable is probably a sign that you are still thinking imperatively rather than declaratively. Try to think of your function from the perspective of what it is (i.e. how it is defined), rather than what it does (i.e. how it is computed).
Let's look at your average example, and pretend that the standard library doesn't have the features that make an elegant one-liner possible. Now in English, we would say that the average of a list of numbers is the sum of the numbers in the list divided by its length.
def average(xs: List[Int]): Float = {
  var sum = 0
  xs.foreach(sum += _)
  sum / xs.length.toFloat
}
This description is fairly close to the English version, ignoring the first two lines of the function. We can factor out the sum function to get a more declarative description:
def average(xs: List[Int]): Float = {
  def sum(xs: List[Int]): Int = // ...
  sum(xs) / xs.length.toFloat
}
In the declarative definition, a return statement reads quite unnaturally because it explicitly puts the focus on doing something.
I don't miss the return statement at the end of a method because it is unnecessary. Furthermore, in Scala every method returns a value, even methods which don't return anything meaningful: instead of void there is the type Unit, which is returned:
def x { println("hello") }
can be written as:
def x { println("hello"); return () }
But println already returns Unit, so the explicit return statement adds nothing. And if you want to use return statements consistently, you would have to use them even in methods that return Unit; otherwise your code is no longer written in one uniform style.
But in Scala you can break your code into a lot of small methods:
def x(...): ReturnType = {
  def func1 = ... // use funcX
  def func2 = ... // use funcX
  def func3 = ... // use funcX
  funcX
}
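A made-up concrete version of the same idea, where each small method's last expression is its result:
def describe(xs: List[Int]): String = {
  def total = xs.sum
  def mean  = if (xs.isEmpty) 0.0 else total.toDouble / xs.length
  def range = if (xs.isEmpty) "n/a" else s"${xs.min}..${xs.max}"
  s"n=${xs.length}, sum=$total, mean=$mean, range=$range"
}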
An explicitly used return-statement does not help to understand the code better.
Also Scala has a powerful core library which allows you to solve many problems with less code:
def average(x: List[Int]): Float = x.sum.toFloat / x.length
I believe you should not use return; it somehow goes against the elegant concept that in Scala everything is an expression that evaluates to something.
Like your average method: it's just an expression that evaluates to Float, no need to return anything to make that work.