I'm trying to use the andThen method to register action stages on a future, like this:
implicit val executionContext = ExecutionContext.global
val f = Future(0)
f.andThen {
  case r => println(r.get + "1")
} andThen {
  case r => println(r.get + "2")
}
Since this method executes asynchronously and does not produce a return value, any non-fatal exceptions thrown will be reported to the ExecutionContext.
What does "asynchronously" mean here? Does it mean asynchronously with respect to the execution of the future itself?
It means that the execution of andThen will happen on a given thread provided by the ExecutionContext after the originating Future[T] completes.
If we look at the implementation:
def andThen[U](pf: PartialFunction[Try[T], U])(implicit executor: ExecutionContext): Future[T] = {
  val p = Promise[T]()
  onComplete {
    case r => try pf.applyOrElse[Try[T], Any](r, Predef.conforms[Try[T]]) finally p complete r
  }
  p.future
}
This method is a small wrapper over Future.onComplete, which is a side-effecting method that returns Unit. It applies the given function and then completes the returned future with the original Future[T] result. As the documentation states, it is useful when you want to guarantee the order of several callbacks registered on the same future.
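For example, here is a minimal sketch (assuming the global execution context) showing that andThen both preserves the ordering of the side effects and passes the original result through unchanged:

import scala.concurrent.{ExecutionContext, Future}

implicit val ec: ExecutionContext = ExecutionContext.global

val f = Future(0)

// "first" is always printed before "second": each andThen registers its
// callback on the future returned by the previous andThen, and the final
// future still completes with the original value 0.
val chained: Future[Int] =
  f.andThen { case r => println(s"first: $r") }
   .andThen { case r => println(s"second: $r") }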
Related
I have two scala futures. I want to perform an action once both are completed, regardless of whether they were completed successfully. (Additionally, I want the ability to inspect those results at that time.)
In Javascript, this is Promise.allSettled.
Does Scala offer a simple way to do this?
One last wrinkle, if it matters: I want to do this in a JRuby application.
You can use the transform method to create a Future that will always succeed and return the result or the error as a Try object.
def toTry[A](future: Future[A])(implicit ec: ExecutionContext): Future[Try[A]] =
  future.transform(x => Success(x))
To combine two Futures into one, you can use zip:
def settle2[A, B](fa: Future[A], fb: Future[B])(implicit ec: ExecutionContext): Future[(Try[A], Try[B])] =
  toTry(fa).zip(toTry(fb))
If you want to combine an arbitrary number of Futures this way, you can use Future.traverse:
def allSettled[A](futures: List[Future[A]])(implicit ec: ExecutionContext): Future[List[Try[A]]] =
  Future.traverse(futures)(toTry(_))
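A small usage sketch of these helpers (the example futures and printed messages are purely illustrative; it assumes the definitions above plus an implicit execution context):

import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future
import scala.util.{Failure, Success}

val ok = Future(42)
val boom = Future[Int](throw new RuntimeException("boom"))

// Completes once both futures have settled; it never fails itself.
settle2(ok, boom).foreach {
  case (Success(a), Failure(e)) => println(s"first: $a, second failed: ${e.getMessage}")
  case other => println(other)
}

// Same idea for an arbitrary number of futures.
allSettled(List(ok, boom)).foreach(println)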
Normally in this case we would use Future.sequence to transform a collection of Futures into a single Future we can map on, but Future.sequence short-circuits on the first failed Future and doesn't wait for the rest (one failure is treated as a failure of the whole sequence), which doesn't fit your case.
In this case you need to map the failed Futures to successful ones first and then sequence them, e.g.
val settledFuture = Future.sequence(List(future1, future2, ...).map(_.recoverWith { case _ => Future.unit }))
settledFuture.map { _ => /* here everything has settled */ }
EDIT
Since the results need to be kept, instead of mapping to Future.unit, we map the actual result into another layer of Try:
val settledFuture = Future.sequence(
  List(Future(1), Future(throw new Exception))
    .map(_.map(Success(_)).recover { case e => Failure(e) })
)
settledFuture.map(println(_))
//Output: List(Success(1), Failure(java.lang.Exception))
EDIT2
It can be further simplified with transform:
Future.sequence(listOfFutures.map(_.transform(Success(_))))
Perhaps you could use a concurrent counter to keep track of the number of completed Futures and then complete the Promise once all of the Futures have completed:
def allSettled[T](futures: List[Future[T]])(implicit ec: ExecutionContext): Future[List[Future[T]]] = {
  val p = Promise[List[Future[T]]]()
  val length = futures.length
  val completedCount = new AtomicInteger(0)

  futures foreach {
    _.onComplete { _ =>
      if (completedCount.incrementAndGet == length) p.trySuccess(futures)
    }
  }

  p.future
}
val futures = List(
  Future(-11),
  Future(throw new Exception("boom")),
  Future(42)
)

allSettled(futures).andThen { case result => println(result) }
// Success(List(Future(Success(-11)), Future(Failure(java.lang.Exception: boom)), Future(Success(42))))
scastie
Suppose I've got a function like this:
import scala.concurrent._
def plus2Future(fut: Future[Int])(implicit ec: ExecutionContext): Future[Int] =
  fut.map(_ + 2)
Now I am using plus2Future to create a new Future:
import scala.concurrent.ExecutionContext.Implicits.global
val fut1 = Future { 0 }
val fut2 = plus2Future(fut1)
Does the function passed to map in plus2Future always run on the same thread as fut1? I guess it does not.
Does using map in plus2Future add the overhead of thread context switching, creating a new Runnable, etc.?
map, on the default implementation of Future (via DefaultPromise) is:
def map[S](f: T => S)(implicit executor: ExecutionContext): Future[S] = { // transform(f, identity)
  val p = Promise[S]()
  onComplete { v => p complete (v map f) }
  p.future
}
Where onComplete creates a new CallbackRunnable and eventually invokes dispatchOrAddCallback:
/** Tries to add the callback, if already completed, it dispatches the callback to be executed.
 *  Used by `onComplete()` to add callbacks to a promise and by `link()` to transfer callbacks
 *  to the root promise when linking two promises together.
 */
@tailrec
private def dispatchOrAddCallback(runnable: CallbackRunnable[T]): Unit = {
  getState match {
    case r: Try[_]            => runnable.executeWithValue(r.asInstanceOf[Try[T]])
    case _: DefaultPromise[_] => compressedRoot().dispatchOrAddCallback(runnable)
    case listeners: List[_]   => if (updateState(listeners, runnable :: listeners)) () else dispatchOrAddCallback(runnable)
  }
}
This dispatches the call to the underlying execution context, which means it is up to the implementing ExecutionContext to decide how and where the mapped function is run, so one cannot give a deterministic answer of "it runs here or there". We can also see that there is both an allocation of a Promise and of a callback object.
In general, I wouldn't worry about this, unless I've benchmarked the code and found this to be a bottleneck.
No, it cannot (deterministically) use the same thread (unless the EC only has one thread) since an unbounded amount of time can elapse between the two lines of code.
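A quick sketch that makes this visible (the thread names are illustrative and will vary between runs and execution contexts):

import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future

val fut1 = Future {
  println(s"fut1 ran on: ${Thread.currentThread().getName}")
  0
}

// The mapped function is dispatched to the ExecutionContext once fut1
// completes; it may or may not land on the same pool thread.
val fut2 = fut1.map { n =>
  println(s"map ran on: ${Thread.currentThread().getName}")
  n + 2
}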
I have two methods, let's call them load() and init(). Each one starts a computation in its own thread and returns a Future on its own execution context. The two computations are independent.
val loadContext = ExecutionContext.fromExecutor(...)

def load(): Future[Unit] = {
  Future { ... }(loadContext)
}

val initContext = ExecutionContext.fromExecutor(...)

def init(): Future[Unit] = {
  Future { ... }(initContext)
}
I want to call both of these from some third thread -- say it's from main() -- and perform some other computation when both are finished.
def onBothComplete(): Unit = ...
Now:
I don't care which completes first
I don't care what thread the other computation is performed on, except:
I don't want to block either thread waiting for the other;
I don't want to block the third (calling) thread; and
I don't want to have to start a fourth thread just to set the flag.
If I use for-comprehensions, I get something like:
val loading = load()
val initialization = init()

for {
  loaded <- loading
  initialized <- initialization
} yield { onBothComplete() }
and I get Cannot find an implicit ExecutionContext.
I take this to mean Scala wants a fourth thread to wait for the completion of both futures and set the flag, either an explicit new ExecutionContext or ExecutionContext.Implicits.global. So it would appear that for-comprehensions are out.
I thought I might be able to nest callbacks:
initialization.onComplete {
  case Success(_) =>
    loading.onComplete {
      case Success(_) => onBothComplete()
      case Failure(t) => log.error("Unable to load", t)
    }
  case Failure(t) => log.error("Unable to initialize", t)
}
Unfortunately onComplete also takes an implicit ExecutionContext, and I get the same error. (Also this is ugly, and loses the error message from loading if initialization fails.)
Is there any way to compose Scala Futures without blocking and without introducing another ExecutionContext? If not, I might have to just throw them over for Java 8 CompletableFutures or Javaslang (Vavr) Futures, both of which have the ability to run callbacks on the thread that did the original work.
Updated to clarify that blocking either thread waiting for the other is also not acceptable.
Updated again to be less specific about the post-completion computation.
Why not just reuse one of your own execution contexts? I'm not sure what your requirements for those are, but if you use a single-thread executor you could reuse it as the execution context for your comprehension, and no new threads will be created:
implicit val loadContext = ExecutionContext.fromExecutor(Executors.newSingleThreadExecutor)
If you really can't reuse them you may consider this as the implicit execution context:
implicit val currentThreadExecutionContext = ExecutionContext.fromExecutor(
  (runnable: Runnable) => {
    runnable.run()
  })
This will run the futures' callbacks on the current thread. However, the Scala docs explicitly recommend against this, as it introduces nondeterminism in which thread runs the Future (but as you stated, you don't care which thread it runs on, so this may not matter).
See Synchronous Execution Context for why this isn't advisable.
An example with that context:
val loadContext = ExecutionContext.fromExecutor(Executors.newSingleThreadExecutor)

def load(): Future[Unit] = {
  Future(println("loading thread " + Thread.currentThread().getName))(loadContext)
}

val initContext = ExecutionContext.fromExecutor(Executors.newSingleThreadExecutor)

def init(): Future[Unit] = {
  Future(println("init thread " + Thread.currentThread().getName))(initContext)
}

val doneFlag = new AtomicBoolean(false)

val loading = load()
val initialization = init()

implicit val currentThreadExecutionContext = ExecutionContext.fromExecutor(
  (runnable: Runnable) => {
    runnable.run()
  })

for {
  loaded <- loading
  initialized <- initialization
} yield {
  println("yield thread " + Thread.currentThread().getName)
  doneFlag.set(true)
}
prints:
loading thread pool-1-thread-1
init thread pool-2-thread-1
yield thread main
Though the yield line may print either pool-1-thread-1 or pool-2-thread-1 depending on the run.
In Scala, a Future represents a piece of work to be executed asynchronously (i.e. concurrently with other units of work). An ExecutionContext represents a pool of threads for executing Futures. In other words, the ExecutionContext is the team of workers that performs the actual work.
For efficiency and scalability, it's better to have big teams (e.g. a single ExecutionContext with 10 threads to execute 10 Futures) rather than small teams (e.g. 5 ExecutionContexts with 2 threads each to execute 10 Futures).
In your case, if you want to limit the number of threads to 2, you can do the following:
def load()(implicit teamOfWorkers: ExecutionContext): Future[Unit] = {
  Future { ... } /* will use the teamOfWorkers implicitly */
}

def init()(implicit teamOfWorkers: ExecutionContext): Future[Unit] = {
  Future { ... } /* will use the teamOfWorkers implicitly */
}

implicit val bigTeamOfWorkers = ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(2))

/* All async work in the following for-comprehension will use the same
   bigTeamOfWorkers implicitly, and the work will be shared by the
   2 workers (i.e. threads) in the team */
for {
  loaded <- loading
  initialized <- initialization
} yield doneFlag.set(true)
The Cannot find an implicit ExecutionContext error does not mean that Scala wants additional threads. It only means that Scala wants an ExecutionContext to do the work, and an additional ExecutionContext does not necessarily imply additional threads. For example, the following ExecutionContext, instead of creating new threads, executes work on the current thread:
val currThreadExecutor = ExecutionContext.fromExecutor(new Executor {
  override def execute(command: Runnable): Unit = command.run()
})
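A minimal sketch of wiring that executor into the for-comprehension (assuming loading and initialization are the two Future[Unit] values from the question and that this is the only implicit ExecutionContext in scope here):

implicit val sameThreadEc: ExecutionContext = currThreadExecutor

// No extra thread is created for the combination step: the yield runs on
// whichever thread happens to complete the chain (or on the registering
// thread if both futures are already done).
for {
  _ <- loading
  _ <- initialization
} yield onBothComplete()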
I was hoping code like the following would wait for both futures, but it does not.
object Fiddle {
  val f1 = Future {
    throw new Throwable("baaa") // emulating a future that bumped into an exception
  }

  val f2 = Future {
    Thread.sleep(3000L) // emulating a future that takes a bit longer to complete
    2
  }

  val lf = List(f1, f2) // in the general case, this would be a dynamically sized list

  val seq = Future.sequence(lf)

  seq.onComplete {
    _ => lf.foreach(f => println(f.isCompleted))
  }
}
val a = Fiddle // force the object (and its futures) to be initialized
I assumed seq.onComplete would wait for them all to complete before completing itself, but not so; it results in:
true
false
.sequence was a bit hard to follow in the source of scala.concurrent.Future. I wonder how I would implement a variant that waits for all the original futures of a (dynamically sized) sequence, or what the problem might be here.
Edit: A related question: https://worldbuilding.stackexchange.com/questions/12348/how-do-you-prove-youre-from-the-future :)
One common approach to waiting for all results (failed or not) is to "lift" failures into a new representation inside the future, so that all futures complete with some result (although they may complete with a result that represents failure). One natural way to get that is lifting to a Try.
Twitter's implementation of futures provides a liftToTry method that makes this trivial, but you can do something similar with the standard library's implementation:
import scala.util.{ Failure, Success, Try }
val lifted: List[Future[Try[Int]]] = List(f1, f2).map(
  _.map(Success(_)).recover { case t => Failure(t) }
)
Now Future.sequence(lifted) will be completed when every future is completed, and will represent successes and failures using Try.
And so, a generic solution for waiting on all the original futures of a sequence of futures may look as follows, assuming an execution context is implicitly available:
import scala.util.{ Failure, Success, Try }
private def lift[T](futures: Seq[Future[T]]) =
  futures.map(_.map { Success(_) }.recover { case t => Failure(t) })

def waitAll[T](futures: Seq[Future[T]]) =
  Future.sequence(lift(futures)) // having neutralized exception completions through the lifting, .sequence can now be used

waitAll(SeqOfFutures).map { results =>
  // do whatever with the completed Try results
}
A Future produced by Future.sequence completes when either:
all the futures have completed successfully, or
one of the futures has failed
The second point is what's happening in your case, and it makes sense to complete as soon as one of the wrapped Futures has failed, because the wrapping Future can only hold a single Throwable in the failure case. There's no point in waiting for the other futures, because the result will be the same failure.
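A tiny sketch of that fail-fast behaviour (assuming the global execution context):

import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future
import scala.util.Failure

val fast = Future[Int](throw new RuntimeException("boom")) // fails almost immediately
val slow = Future { Thread.sleep(3000); 2 } // still running

// The sequenced future completes as a Failure as soon as `fast` fails;
// it does not wait for `slow` to finish.
Future.sequence(List(fast, slow)).onComplete {
  case Failure(e) => println(s"sequence failed early: ${e.getMessage}")
  case other => println(other)
}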
This is an example that supports the previous answer. There is an easy way to do this using just the standard Scala APIs.
In the example, I am creating 3 futures. These will complete at 5, 7, and 9 seconds respectively. The call to Await.result will block until all futures have resolved. Once all 3 futures have completed, a will be set to List(5,7,9) and execution will continue.
Additionally, if an exception is thrown in any of the futures, Await.result will immediately unblock and throw the exception. Uncomment the Exception(...) line to see this in action.
import scala.concurrent.{Await, Future, blocking}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration.Duration

try {
  val a = Await.result(Future.sequence(Seq(
    Future({
      blocking {
        Thread.sleep(5000)
      }
      System.err.println("A")
      5
    }),
    Future({
      blocking {
        Thread.sleep(7000)
      }
      System.err.println("B")
      7
      //throw new Exception("Ha!")
    }),
    Future({
      blocking {
        Thread.sleep(9000)
      }
      System.err.println("C")
      9
    }))),
    Duration("100 sec"))

  System.err.println(a)
} catch {
  case e: Exception ⇒
    e.printStackTrace()
}
Even though this is quite an old question, this is how I got it working recently.
object Fiddle {
  val f1 = Future {
    throw new Throwable("baaa") // emulating a future that bumped into an exception
  }

  val f2 = Future {
    Thread.sleep(3000L) // emulating a future that takes a bit longer to complete
    2
  }

  val lf = List(f1, f2) // in the general case, this would be a dynamically sized list

  val seq = Future.sequence(lf)

  import scala.concurrent.duration._
  Await.result(seq, Duration.Inf)
}
This won't complete until all of the futures have completed; it waits for all of them. You can change the waiting time as per your use case; I kept it infinite, which is what my case required.
We can enrich Seq[Future[T]] with its own onComplete method through an implicit class:
def lift[T](f: Future[T])(implicit ec: ExecutionContext): Future[Try[T]] =
  f map { Success(_) } recover { case e => Failure(e) }

def lift[T](fs: Seq[Future[T]])(implicit ec: ExecutionContext): Seq[Future[Try[T]]] =
  fs map { lift(_) }

implicit class RichSeqFuture[+T](val fs: Seq[Future[T]]) extends AnyVal {
  def onComplete[U](f: Seq[Try[T]] => U)(implicit ec: ExecutionContext) = {
    Future.sequence(lift(fs)) onComplete {
      case Success(s) => f(s)
      case Failure(e) => throw e // will never happen, because of the Try lifting
    }
  }
}
Then, in your particular MWE, you can do:
val f1 = Future {
  throw new Throwable("baaa") // emulating a future that bumped into an exception
}

val f2 = Future {
  Thread.sleep(3000L) // emulating a future that takes a bit longer to complete
  2
}

val lf = List(f1, f2)

lf onComplete { _ map {
  case Success(v) => ???
  case Failure(e) => ???
}}
This solution has the advantage of allowing you to call an onComplete on a sequence of futures as you would on a single future.
Create the Future with a Try to avoid extra hoops.
implicit val ec = ExecutionContext.global

val f1 = Future {
  Try {
    throw new Throwable("kaboom")
  }
}

val f2 = Future {
  Try {
    Thread.sleep(1000L)
    2
  }
}

Await.result(
  Future.sequence(Seq(f1, f2)), Duration("2 sec")
) foreach {
  case Success(res) => println(s"Success. $res")
  case Failure(e) => println(s"Failure. ${e.getMessage}")
}
Like the author of this question I'm trying to understand the reasoning for user-visible promises in Scala 2.10's futures and promises.
In particular, going back to the example from the SIP, isn't it completely flawed?
import scala.concurrent.{ future, promise }

val p = promise[T]
val f = p.future

val producer = future {
  val r = produceSomething()
  p success r
  continueDoingSomethingUnrelated()
}

val consumer = future {
  startDoingSomething()
  f onSuccess {
    case r => doSomethingWithResult()
  }
}
I am imagining the case where the call to produceSomething results in a runtime exception. Because the promise and the producer future are completely detached, this means the system hangs and the consumer never completes with either success or failure.
So the only safe way to use promises requires something like:
val producer = future {
  try {
    val r = produceSomething()
    p success r
  } catch {
    case e: Throwable =>
      p failure e
      throw e // ouch
  }
  continueDoingSomethingUnrelated()
}
Which is obviously error-prone and verbose.
The only case I can see for a visible promise type, where future {} is insufficient, is that of the callback hook in M. A. D.'s answer. But the example from the SIP doesn't make sense to me.
This is why you rarely use success and failure unless you already know something is bulletproof. If you want bulletproof, this is what Try is for:
val producer = future {
  p complete Try( produceSomething )
  continueDoingSomethingUnrelated()
}
It doesn't seem necessary to throw the error again; you've already dealt with it by packing it into the answer to the promise, no? (Also, note that if produceSomething itself returns a future, you can use completeWith instead.)
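For example, a sketch of that completeWith variant, where produceSomethingAsync is a hypothetical producer that already returns a Future:

import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.{Future, Promise}

// Hypothetical stand-in for a produceSomething that returns a Future.
def produceSomethingAsync(): Future[Int] = Future(42)

val p = Promise[Int]()

// p succeeds or fails exactly as the producer future does; no explicit
// success/failure calls or try/catch handling is needed.
p.completeWith(produceSomethingAsync())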
Combinators
You can use Promise to build additional Future combinators that aren't already in the library.
"Select" off the first future to be satisfied. Return as a result, with the remainder of the Futures as a sequence: https://gist.github.com/viktorklang/4488970.
An after method that returns a Future that is completed after a certain period of time, to "time out" a Future: https://gist.github.com/3804710.
You need Promises to be able to create other combinators like this.
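As an illustration of the pattern (a sketch of the idea, not the code behind those gists), here is a firstCompletedOf-style combinator built on a Promise:

import scala.concurrent.{ExecutionContext, Future, Promise}

// Completes with the result of whichever future finishes first,
// whether that result is a success or a failure.
def first[T](futures: Seq[Future[T]])(implicit ec: ExecutionContext): Future[T] = {
  val p = Promise[T]()
  futures.foreach(_.onComplete(p.tryComplete)) // only the first tryComplete wins
  p.future
}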
Adapt Callbacks
Use Promise to adapt callback-based APIs to Future-based APIs. For example:
def retrieveThing(key: String): Future[Thing] = {
  val p = Promise[Thing]()
  val callback = new Callback() {
    def receive(message: ThingMessage) {
      message.getPayload match {
        case t: Thing =>
          p success t
        case err: SystemErrorPayload =>
          p failure new Exception(err.getMessage)
      }
    }
  }
  thingLoader.load(key, callback, timeout)
  p.future
}
Synchronizers
Build synchronizers using Promise. For example, return a cached value for an expensive operation, or compute it, but don't compute it twice for the same key:
private val cache = new ConcurrentHashMap[String, Promise[T]]

def getEntry(key: String): Future[T] = {
  val newPromise = Promise[T]()
  val foundPromise = cache putIfAbsent (key, newPromise)
  if (foundPromise == null) {
    newPromise completeWith getExpensive(key)
    newPromise.future
  } else {
    foundPromise.future
  }
}
Promise has a completeWith(f: Future) method that would solve this problem by automatically handling the success/failure scenarios.
promise.completeWith(future {
  produceSomething()
})