I am using the following two code snippets to execute code in multiple threads, but I am getting different behaviour from each.
Snippet 1:
val futures = Future.sequence(Seq(f1, f2, f3, f4, f5))
futures.onComplete {
  case Success(value) =>
  case Failure(value) =>
}
Snippet 2:
Await.result(Future.sequence(Seq(f1, f2, f3, f4, f5)), Duration(500, TimeUnit.SECONDS))
In the futures I am just setting some property and retrieving the result.
Note: knowing only the behavioural difference between the above two snippets is sufficient.
The onComplete callback runs on some arbitrary (unspecified) thread in the ExecutionContext, whereas Await.result runs on the current thread and blocks it until the future completes or the specified timeout is exceeded. The first is non-blocking, the second is blocking.
There's also a difference in how failures are handled in the two snippets, but this is kind of obvious from looking at the code.
Actually, future.onComplete registers a callback: as soon as the future completes, the callback is invoked with whatever the future holds, which is either a Success or a Failure.
Await, on the other hand, blocks the calling thread until the future completes or the specified timeout expires.
Hence onComplete is non-blocking and Await is blocking in nature.
If you do want to use Await, try to collect as many futures as you can and block only once; do not Await on each and every future in your code, as that will only slow it down (see the sketch below).
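Assuming the f1..f5 futures from the question, a minimal version of collecting them and blocking a single time might look like this:
import java.util.concurrent.TimeUnit
import scala.concurrent.{Await, Future}
import scala.concurrent.duration.Duration
import scala.concurrent.ExecutionContext.Implicits.global

// Combine once, block once; the individual futures still run concurrently.
val combined = Future.sequence(Seq(f1, f2, f3, f4, f5))
val results  = Await.result(combined, Duration(500, TimeUnit.SECONDS))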
Related
I have some async (ZIO) code which I need to test. If I write the test using Thread.sleep() it works fine and I always get a response:
for {
  saved <- database.save(smth)
  result <- eventually {
    Thread.sleep(20000)
    database.search(...)
  }
} yield result
But if I implement the same logic using timeout and interval from eventually, then it never works correctly (I get timeouts):
for {
  saved <- database.save(smth)
  result <- eventually(timeout(Span(20, Seconds)), interval(Span(20, Seconds))) {
    database.search(...)
  }
} yield result
I do not understand why timeout and interval work differently than Thread.sleep; they should be doing exactly the same thing. Can someone explain it to me, and tell me how I should change this code so that I don't need to use Thread.sleep()?
Assuming database.search(...) returns a ZIO value, eventually { database.search(...) } most probably succeeds immediately on the first try: it successfully creates a task describing the database query. The database is then queried without any retry logic.
Regarding how to make it work:
val search: ZIO[Any, Throwable, String] = ???

val retried: ZIO[Any with Clock, Throwable, Option[String]] =
  search
    .retry(Schedule.spaced(Duration.fromMillis(1000)))
    .timeout(Duration.fromMillis(20000))
Something like that should work. But I believe that more elegant solutions exist.
The other answer from @simpadjo addresses the "what" quite succinctly. I'll add some additional context as to why you might see this behavior.
for {
  saved <- database.save(smth)
  result <- eventually {
    Thread.sleep(20000)
    database.search(...)
  }
} yield result
There are three different technologies being mixed here which is causing some confusion.
The first is ZIO, which is an asynchronous programming library that uses its own custom runtime and execution model to perform tasks. The second is eventually, which comes from ScalaTest and is useful for checking asynchronous computations by effectively polling the state of a value. And the third is Thread.sleep, which is a Java API that literally suspends the current thread and prevents task progression until the timer expires.
eventually uses a simple retry mechanism that differs based on whether you are using a normal value or a Future from the Scala standard library. Basically it runs the code in the block, and if it throws, it sleeps the current thread and then retries it based on some interval configuration, eventually timing out. Notably, in this case the behavior is entirely synchronous, meaning that as long as the value in the {} doesn't throw an exception, it won't keep retrying (see the small illustration below).
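To illustrate: eventually only retries while the block throws, for example a failed assertion. A sketch, where runSearchAndWait() is a hypothetical helper that actually executes the query and returns an Option (ScalaTest 3.1+ import paths assumed):
import org.scalatest.concurrent.Eventually._
import org.scalatest.matchers.should.Matchers._

eventually {
  val found = runSearchAndWait()   // hypothetical: executes the query and returns an Option
  found shouldBe defined           // a failed assertion throws, which is what makes eventually retry
}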
Thread.sleep is a heavyweight operation, and in this case it effectively blocks the function passed to eventually from progressing for 20 seconds, meaning that by the time database.search is called, the save operation has likely completed.
The second variant is different: it executes the code in the eventually block immediately, and if it throws an exception it will attempt it again based on the interval/timeout logic that you provide. In this scenario the save may not have completed (or propagated, if the store is eventually consistent). Because you are returning a ZIO, which is designed not to throw, and eventually doesn't understand ZIO, it will simply return the search attempt with no retry logic.
The accepted answer:
val retried: ZIO[Any with Clock, Throwable, Option[String]] =
  search
    .retry(Schedule.spaced(Duration.fromMillis(1000)))
    .timeout(Duration.fromMillis(20000))
works because retry and timeout are built-in ZIO operators which do understand how to actually retry and time out a ZIO, meaning that if search fails, the retry will handle it until it succeeds.
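For completeness, a rough sketch of plugging the retried search back into the original test, assuming ZIO 1.x and that database.save returns a ZIO effect as in the question (search is the query value from the snippet above):
import zio.{Schedule, ZIO}
import zio.clock.Clock
import zio.duration.Duration

val test: ZIO[Any with Clock, Throwable, Option[String]] =
  for {
    _      <- database.save(smth)                                    // assumed to be a ZIO effect
    result <- search
                .retry(Schedule.spaced(Duration.fromMillis(1000)))   // poll every second
                .timeout(Duration.fromMillis(20000))                 // give up after 20 seconds
  } yield result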
For a few days I have been wrapping my head around cats-effect and IO, and I feel I have some misconceptions about this effect type, or that I have simply missed its point.
First of all, if IO can replace Scala's Future, how can we create an async IO task? Using IO.shift? Using IO.async? Is IO.delay sync or async? Can we make a generic async task with code like Async[F].delay(...)? Or does async only happen when we run the IO with unsafeRunAsync or unsafeToFuture?
What's the point of Async and Concurrent in cats-effect? Why are they separate?
Is IO a green thread? If yes, why is there a Fiber object in cats-effect? As I understand it, Fiber is the green thread, but the docs claim we can think of IOs as green threads.
I would appreciate some clarification on any of this, as I have failed to make sense of the cats-effect docs on these points and the internet was not that helpful...
if IO can replace Scala's Future, how can we create an async IO task?
First, we need to clarify what is meant by an async task. Usually async means "does not block the OS thread", but since you're mentioning Future, it's a bit blurry. Say, if I wrote:
Future { (1 to 1000000).foreach(println) }
it would not be async, as it's a blocking loop and blocking output, but it would potentially execute on a different OS thread, as managed by an implicit ExecutionContext. The equivalent cats-effect code would be:
for {
  _ <- IO.shift
  _ <- IO.delay { (1 to 1000000).foreach(println) }
} yield ()
(it's not the shortest version, though)
So,
IO.shift is used to maybe change thread / thread pool. Future does it on every operation, but it's not free performance-wise.
IO.delay { ... } (a.k.a. IO { ... }) does NOT make anything async and does NOT switch threads. It's used to create simple IO values from synchronous side-effecting APIs.
Now, let's get back to true async. The thing to understand here is this:
Every async computation can be represented as a function taking a callback.
Whether you're using an API that returns a Future, Java's CompletableFuture, or something like NIO's CompletionHandler, it can all be converted to callbacks. This is what IO.async is for: you can convert any function taking a callback into an IO. And in a case like:
for {
  _ <- IO.async { ... }
  _ <- IO(println("Done"))
} yield ()
"Done" will only be printed when (and if) the computation in ... calls back. You can think of it as blocking the green thread, but not the OS thread.
So,
IO.async is for converting any already asynchronous computation to IO.
IO.delay is for converting any completely synchronous computation to IO.
The code with truly asynchronous computations behaves like it's blocking a green thread.
The closest analogy when working with Futures is creating a scala.concurrent.Promise and returning p.future.
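As a minimal sketch of what converting a callback-taking function looks like (cats-effect 2 style; readFileAsync is a hypothetical callback-based API, not a real library function):
import cats.effect.IO

// Hypothetical callback-based API: invokes cb with either an error or the file contents.
def readFileAsync(path: String)(cb: Either[Throwable, String] => Unit): Unit = ???

// Wrap it: the resulting IO completes when (and if) the callback fires.
def readFile(path: String): IO[String] =
  IO.async { cb =>
    readFileAsync(path)(cb)
  }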
Or does async only happen when we run the IO with unsafeRunAsync or unsafeToFuture?
Sort of. With IO, nothing happens unless you call one of these (or use IOApp). But IO does not guarantee that you execute on a different OS thread, or even asynchronously, unless you ask for this explicitly with IO.shift or IO.async.
You can guarantee thread switching any time with e.g. (IO.shift *> myIO).unsafeRunAsyncAndForget(). This is possible exactly because myIO would not be executed until asked for it, whether you have it as val myIO or def myIO.
You cannot magically transform blocking operations into non-blocking ones, however. That's not possible with either Future or IO.
What's the point of Async and Concurrent in cats-effect? Why are they separate?
Async and Concurrent (and Sync) are type classes. They are designed so that programmers can avoid being locked into cats.effect.IO and can offer an API that supports whatever you choose instead, such as Monix Task or Scalaz 8 ZIO, or even a monad transformer type such as OptionT[Task, *something*]. Libraries like fs2, Monix and http4s make use of them to give you more choice of what to use them with (a small sketch follows below).
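For instance, a minimal sketch of code written against the Sync type class rather than concrete IO (putStrLn is just an illustrative name):
import cats.effect.{IO, Sync}

// Works for IO, Monix Task, or any other F[_] with a Sync instance.
def putStrLn[F[_]: Sync](line: String): F[Unit] =
  Sync[F].delay(println(line))

// Picking cats-effect IO as the concrete effect:
val hello: IO[Unit] = putStrLn[IO]("hello")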
Concurrent adds extra things on top of Async, the most important of them being .cancelable and .start. These do not have a direct analogy with Future, since Future does not support cancellation at all.
.cancelable is a version of .async that allows you to also specify some logic to cancel the operation you're wrapping. A common example is network requests: if you're no longer interested in the result, you can just abort the request without waiting for the server response and without wasting any sockets or processing time on reading it. You might never use it directly, but it has its place (a sketch follows below).
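A sketch of the idea (cats-effect 2 style; HttpCall is a hypothetical client interface with execute/abort methods, not a real library):
import cats.effect.IO

// Hypothetical client: starts a request, reports the result via a callback,
// and can abort the in-flight request.
trait HttpCall {
  def execute(onComplete: Either[Throwable, String] => Unit): Unit
  def abort(): Unit
}

def run(call: HttpCall): IO[String] =
  IO.cancelable { cb =>
    call.execute(cb)   // complete the IO through the callback
    IO(call.abort())   // the cancellation logic: abort the request
  }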
But what good are cancelable operations if you can't cancel them? The key observation here is that you cannot cancel an operation from within itself. Somebody else has to make that decision, and that happens concurrently with the operation itself (which is where the type class gets its name). That's where .start comes in. In short,
.start is an explicit fork of a green thread.
Doing someIO.start is akin to doing val t = new Thread(someRunnable); t.start(), except it's green now. And Fiber is essentially a stripped-down version of the Thread API: you can do .join, which is like Thread#join() but does not block an OS thread, and .cancel, which is a safe version of .interrupt() (see the sketch below).
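A minimal sketch of forking and joining (cats-effect 2 style; the ContextShift and the example values are illustrative):
import cats.effect.{ContextShift, IO}
import scala.concurrent.ExecutionContext

implicit val cs: ContextShift[IO] = IO.contextShift(ExecutionContext.global)

val someIO: IO[Int] = IO(40 + 2)

val program: IO[Int] =
  for {
    fiber  <- someIO.start   // fork onto a green thread; runs concurrently
    result <- fiber.join     // wait for the result without blocking an OS thread
  } yield result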
Note that there are other ways to fork green threads. For example, doing parallel operations:
val ids: List[Int] = List.range(1, 1000)
def processId(id: Int): IO[Unit] = ???
val processAll: IO[Unit] = ids.parTraverse_(processId)
will fork the processing of all IDs onto green threads and then join them all. Or using .race:
val fetchFromS3: IO[String] = ???
val fetchFromOtherNode: IO[String] = ???
val fetchWhateverIsFaster = IO.race(fetchFromS3, fetchFromOtherNode).map(_.merge)
will execute the fetches in parallel, give you the first result to complete and automatically cancel the slower fetch. So, doing .start and using Fiber is not the only way to fork more green threads, just the most explicit one. And that answers:
Is IO a green thread? If yes, why is there a Fiber object in cats-effect? As I understand it, Fiber is the green thread, but the docs claim we can think of IOs as green threads.
IO is like a green thread, meaning you can have lots of them running in parallel without the overhead of OS threads, and code in a for-comprehension behaves as if it were blocking for the result to be computed.
Fiber is a tool for controlling explicitly forked green threads (waiting for completion or cancelling them).
We have a Scala Play webapp which does a number of database operations as part of an HTTP request, each of which is a Future. Usually we bubble the Futures up to an async controller action and let Play handle waiting for them.
But I've also noticed that in a number of places we don't bubble up the Future or even wait for it to complete. I think this is bad because it means the HTTP request won't fail if the future fails, but is the future even guaranteed to be executed at all, since nothing is going to wait on its result? Will Play drop un-awaited futures after the HTTP request has been served, or leave them running in the background?
TL;DR
Play will not kill your Futures after sending the HTTP response.
Errors will not be reported if any of your Futures fail.
Long version
Your futures will not be killed when the HTTP response has been sent. You can try that out for yourself like this:
def futuresTest = Action.async { request =>
  println(s"Entered futuresTest at ${LocalDateTime.now}")

  val ignoredFuture = Future {
    var i = 0
    while (i < 10) {
      Thread.sleep(1000)
      println(LocalDateTime.now)
      i += 1
    }
  }

  println(s"Leaving futuresTest at ${LocalDateTime.now}")
  Future.successful(Ok)
}
However, you are right that the request will not fail if any of the futures fail. If this is a problem, then you can compose the futures using a for-comprehension or flatMaps. Here's an example of what you can do (I'm assuming that your Futures only perform side effects, i.e. Future[Unit]).
To let your futures execute in parallel:
val dbFut1 = dbCall1(...)
val dbFut2 = dbCall2(...)
val wsFut1 = wsCall1(...)

val fut = for {
  _ <- dbFut1
  _ <- dbFut2
  _ <- wsFut1
} yield ()

fut.map(_ => Ok)
To have them execute in sequence:
val fut = for {
  _ <- dbCall1(...)
  _ <- dbCall2(...)
  _ <- wsCall2(...)
} yield ()

fut.map(_ => Ok)
does it actually even guarantee the future will be executed at all, since nothing is going to wait on the result of it? Will Play drop un-awaited futures after the HTTP request has been served, or leave them running in the background?
This question actually runs much deeper than Play. You're generally asking "If I don't synchronously wait on a future, how can I guarantee it will actually complete without being GCed?". To answer that, we need to understand how the GC actually views threads. From the GC's point of view, a thread is what we call a "root". Such a root is a starting point from which the GC traverses the heap and sees which objects are eligible for collection. Roots also include static fields, for example, which are known to live throughout the lifetime of the application.
So, when you view it like that and think of what a Future actually does, which is to queue a function to run on a thread from the pool made available via the underlying ExecutorService (which we refer to as an ExecutionContext in Scala), you see that even though you're not waiting on the completion, the JVM runtime does guarantee that your Future will run to completion. As for the Future object wrapping the function, it holds a reference to that unfinished function body, so the Future itself isn't collected.
When you think about it from that point of view, it's totally logical, since execution of a Future happens asynchronously, and we usually continue processing it in an asynchronous manner using continuations such as map, flatMap, onComplete, etc.
What is, in your view, the best Scala solution for the case where you have a plain chain of synchronous function calls and you need to add an asynchronous action in the middle of it, without blocking?
I think going for futures entails refactoring the existing code into callback spaghetti, threading futures all the way through the formerly synchronous chain of function calls, or polling on the promise at some interval; not optimal, but maybe I am missing some trivial/suitable options here.
It may seem that refactoring to (Akka) actors entails a whole lot of boilerplate for such a simple feat of engineering.
How would you plug an asynchronous function into an existing synchronous flow, without blocking and without a lot of boilerplate?
Right now in my code I block with Await.result, which just means a thread is sleeping...
One simple dirty trick:
Let's say you have:
def f1Sync: T1
def f2Sync: T2
...
def fAsynchronous: Future[TAsync]
import scala.concurrent.{ Future, Promise }

object FutureHelper {
  // A value class is much cheaper than an implicit conversion.
  implicit class FutureConverter[T](val t: T) extends AnyVal {
    def future: Future[T] = Promise.successful(t).future
  }
}
Then you can use a for/yield:
import FutureHelper._

def completeChain: Future[Whatever] = {
  for {
    r1 <- f1Sync.future
    r2 <- f2Sync.future
    // ... all your syncs
    rn <- fAsynchronous          // this should also return a future
    rnn <- f50Sync(rn).future    // you can even pass the result of the async to the next function
  } yield rn
}
There is minimal boilerplate in converting your sync calls to immediately resolved futures. Don't be tempted to do that with Future.apply[T](t) or simply Future(t), as that would schedule the work onto the executor's threads; you also won't be able to convert without an implicit executor in scope.
With promises you pay the price of creating 3 or 4 promises, which is negligible, and you get what you want. The for/yield acts as a sequencer: it will wait for every result in turn, so you can even do something with your async result after it has been processed.
It will "wait" for every sync call to complete as well, which by design happens immediately, except for the async call, where normal async processing is automatically employed.
You could also use the async/await library, with the caveat that you wind up with one big Future out of it that you still have to deal with:
http://docs.scala-lang.org/sips/pending/async.html
But it results in code almost identical to the synchronous code; where you were previously blocking, you add an:
await { theAsyncFuture }
and then carry on with synchronous code.
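A minimal sketch of that, assuming the scala-async module is on the classpath and reusing the f1Sync / f2Sync / fAsynchronous signatures from above (completeChainAsync is just an illustrative name):
import scala.async.Async.{async, await}
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

def completeChainAsync: Future[TAsync] = async {
  val r1 = f1Sync               // plain synchronous calls, executed in order
  val r2 = f2Sync
  val rn = await(fAsynchronous) // suspends here without blocking a thread
  rn
}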
I am having an issue with the piece of code below. I want the combine method to be triggered after the groundCoffee, heatedWater and frothedMilk futures all complete; they should be triggered concurrently. All four methods grind, heatWater, frothMilk and brew are executed concurrently using a Future.
def prepareCappuccino(): Future[Cappuccino] = {
  val groundCoffee = grind("arabica beans")
  val heatedWater = heatWater(Water(20))
  val frothedMilk = frothMilk("milk")
  for {
    ground <- groundCoffee
    water <- heatedWater
    foam <- frothedMilk
    espresso <- brew(ground, water)
  } yield combine(espresso, foam)
}
When I execute the above method, the output I get is below:
start grinding...
heating the water now
milk frothing system engaged!
And the program exits after this. I got this example from a site while I was trying to learn futures. How can the program be made to wait, so that the combine method gets triggered after all the futures return?
The solution already posted, to Await the future, is appropriate when you want to deliberately block execution on that thread. Two common reasons to do this are testing, when you want to wait for the outcome before making an assertion, and when all threads would otherwise exit (as is the case with toy examples).
However in a proper long lived application Await is generally to be avoided.
Your question already contains one of the correct ways to do future composition: using a for-comprehension. Bear in mind that for-comprehensions are converted to flatMap, map and withFilter operations, so any futures you invoke inside the for-comprehension will only be created after the previous ones complete, i.e. serially.
If you want a bunch of futures to operate concurrently, then you create them before entering the for-comprehension, as you have done (the contrast is sketched below).
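For contrast, a sketch of the serial variant using the same method names from the question; because each call is made inside the for-comprehension, each Future is only created after the previous one completes:
def prepareCappuccinoSequentially(): Future[Cappuccino] =
  for {
    ground <- grind("arabica beans")     // runs first
    water <- heatWater(Water(20))        // only starts after grinding completes
    foam <- frothMilk("milk")            // only starts after the water is heated
    espresso <- brew(ground, water)
  } yield combine(espresso, foam)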
You can use the Await here:
val f = Future.sequence(futures.toList)
Await.ready(f, Duration.Inf)
I assume you have all the futures packed in a list. The Await.ready call does all the waiting work.
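In the context of the question there is just the single combined future returned by prepareCappuccino(), so a sketch of the same idea is simply to block the main thread on it (reasonable in a toy program that would otherwise exit immediately):
import scala.concurrent.Await
import scala.concurrent.duration.Duration

val cappuccino = prepareCappuccino()
Await.ready(cappuccino, Duration.Inf)   // keeps the program alive until combine has run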