Get back to IO fiber/thread after running a future - scala

I have some cats IO operations, with a Future among them. Simplified:
IO(getValue())
  .flatMap(v => IO.fromFuture(IO(blockingProcessValue(v)))(myBlockingPoolContextShift))
  .map(moreProcessing)
So I have some value in IO, then I need to do a blocking operation using a library that returns a Future, and then I need to do some more processing on the value returned from the Future.
The Future runs on a dedicated thread pool - so far so good. The problem comes after the Future completes: moreProcessing runs on the same thread the Future was running on.
Is there a way to get back to the thread getValue() was running on?

After the discussion in the chat, the conclusion is that the only thing OP needs is to create a ContextShift in the application entry point using the appropriate (compute) EC and then pass it down to the class containing this method.
// Entry point
val computeEC = ???
val cs = IO.contextShift(computeEC)
val myClass = new MyClass(cs, ...)
// Inside the method on MyClass
IO(getValue())
  .flatMap(v => IO.fromFuture(IO(blockingProcessValue(v)))(myBlockingPoolContextShift))
  .flatTap(_ => cs.shift) // shift the rest of the chain back to the compute pool
  .map(moreProcessing)
This Scastie showed an approach using Blocker and other techniques common in the Typelevel ecosystem; they were not really suitable for the OP's use case, but I find them useful for future readers who may have a similar problem.
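For readers on cats-effect 2.x, here is a minimal sketch of that Blocker approach (getValue, blockingProcessValue and moreProcessing are the OP's functions; everything else is illustrative, not the OP's actual code):
import cats.effect.{Blocker, ContextShift, IO}

def program(blocker: Blocker)(implicit cs: ContextShift[IO]) =
  for {
    v <- IO(getValue())
    // blockOn evaluates the wrapped effect on the blocker's pool and then
    // shifts back to the compute pool described by the implicit ContextShift
    r <- blocker.blockOn(IO.fromFuture(IO(blockingProcessValue(v))))
  } yield moreProcessing(r)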

Related

Play 2.5.x (Scala) -- How does one put a value obtained via wsClient into a (lazy) val

The use case is actually fairly typical. A lot of web services use authorization tokens that you retrieve at the start of a session and you need to send those back on subsequent requests.
I know I can do it like this:
lazy val myData = {
  val request = ws.url("/some/url")
    .withAuth(user, password, WSAuthScheme.BASIC)
    .withHeaders("Accept" -> "application/json")
  Await.result(request.get().map { x => x.json }, 120.seconds)
}
That just feels wrong, as all the docs say never use Await.
Is there a Future/Promise Scala style way of handling this?
I've found .onComplete, which allows me to run code upon the completion of a Promise; however, without using a (mutable) var I see no way of getting a value from that scope into a lazy val in a different scope. Even with a var there is a possible timing issue -- hence the evils of mutable variables :)
Any other way of doing this?
Unfortunately, there is no way to make this non-blocking - lazy vals are designed to be synchronous and to block any thread accessing them until they are completed with a value (internally a lazy val is represented as a simple synchronized block).
A Future/Promise Scala way would be to use a Future[T] or a Promise[T] instead of a val x: T. That approach implies some overhead: an ExecutionContext in scope and a map on every use of the value. The more optimal resource utilization may not be worth the decreased readability in all cases, so it may be OK to leave the Await there if you use the value extensively in many parts of your application.
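If you do go the Future route, one option is to cache the Future itself in the lazy val rather than the awaited result, so the token request happens only once and every consumer composes over it. A rough sketch (ws, user and password are the OP's values; the imports assume Play's WS API, and the global ExecutionContext stands in for whatever context the application normally uses):
import play.api.libs.json.JsValue
import play.api.libs.ws.WSAuthScheme
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

lazy val myData: Future[JsValue] = {
  val request = ws.url("/some/url")
    .withAuth(user, password, WSAuthScheme.BASIC)
    .withHeaders("Accept" -> "application/json")
  request.get().map(_.json) // no Await: the cached value is the Future itself
}

// Consumers then map/flatMap instead of blocking:
// myData.map(json => ...)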

What are advantages of a Twitter Future over a Scala Future?

I know a lot of reasons for Scala's Future to be better. Are there any reasons to use Twitter's Future instead, apart from the fact that Finagle uses it?
Disclaimer: I worked at Twitter on the Future implementation. A little bit of context: we started our own implementation before Scala had a "good" implementation of Future.
Here are the main features of Twitter's Future:
Some method names are different, and Twitter's Future has some new helper methods in the companion object.
For example, Future.join(f1, f2) can work on heterogeneous Future types.
Future.join(
  Future.value(new Object), Future.value(1)
).map {
  case (o: Object, i: Int) => println(o, i)
}
o and i keep their types; they are not cast to their least common supertype, Any.
A chain of onSuccess is guaranteed to be executed in order:
e.g.:
f.onSuccess { _ =>
  println(1) // #1
} onSuccess { _ =>
  println(2) // #2
}
#1 is guaranteed to be executed before #2
The threading model is a little bit different. There's no notion of an ExecutionContext; the thread that sets the value in a Promise (the mutable implementation of a Future) is the one that executes all the computations in the future graph.
e.g.:
val f1 = new Promise[Int]
f1.map(_ * 2).map(_ + 1)
f1.setValue(2) // <- this thread also executes *2 and +1
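For contrast, a rough sketch of the scala.concurrent equivalent: there the map callbacks are scheduled on the implicit ExecutionContext, and do not necessarily run on the thread that completes the promise.
import scala.concurrent.Promise
import scala.concurrent.ExecutionContext.Implicits.global

val p1 = Promise[Int]()
val g = p1.future.map(_ * 2).map(_ + 1) // callbacks will run on the ExecutionContext
p1.success(2) // the completing thread only schedules them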
There's a notion of interruption/cancellation. With Scala's Futures, the information only flows in one direction; with Twitter's Future, you can notify a producer of some information (not necessarily a cancellation). In practice, it's used in Finagle to propagate the cancellation of an RPC. Because Finagle also propagates the cancellation across the network, and because Twitter has a huge fan-out of requests, this actually saves lots of work.
class MyMessage extends Exception

val p = new Promise[Int]
p.setInterruptHandler {
  case ex: MyMessage => println("Receive MyMessage")
}
val f = p.map(_ + 1).map(_ * 2)
f.raise(new MyMessage) // prints "Receive MyMessage"
Until recently, Twitter's Future was the only one to implement efficient tail recursion (i.e. you can have a recursive function that calls itself without blowing up your call stack). It has since been implemented in Scala 2.11+ (I believe).
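As an illustration, here is a sketch of the kind of monadic recursion this refers to, written against com.twitter.util.Future (the countdown name is made up):
import com.twitter.util.Future

def countdown(n: Int): Future[Int] =
  if (n <= 0) Future.value(0)
  else Future.value(n - 1).flatMap(countdown) // recursion goes through flatMap, not the call stack

// With an implementation that optimises this pattern (as described above),
// countdown(1000000) completes without a StackOverflowError.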
As far as I can tell, the main difference that could go in favor of Twitter's Future is that it can be cancelled, unlike Scala's Future.
Also, there used to be some support for tracing call chains (as you probably know, plain stack traces are close to useless when using Futures). In other words, you could take a Future and tell which chain of map/flatMap calls produced it. But the idea has been abandoned, if I understand correctly.

Playframework non-blocking Action

I came across a problem I have not found an answer to yet.
I'm running on Play Framework 2 with Scala.
I was required to write an Action method that performs multiple Future calls.
My questions:
1) Is the attached code non-blocking and hence looking the way it should be?
2) Is there a guarantee that both DAO results are caught at any given time?
def index = Action.async {
  val t2: Future[(List[PlayerCol], List[CreatureCol])] = for {
    p <- PlayerDAO.findAll()
    c <- CreatureDAO.findAlive()
  } yield (p, c)
  t2.map(t => Ok(views.html.index(t._1, t._2)))
}
Thanks for your feedback.
Is the attached code non-blocking and hence looking the way it should be?
That depends on a few things. First, I'm going to assume that PlayerDAO.findAll() and CreatureDAO.findAlive() return Future[List[PlayerCol]] and Future[List[CreatureCol]] respectively. What matters most is what these functions themselves are actually calling. Are they making JDBC calls, or using an asynchronous DB driver?
If the answer is JDBC (or some other synchronous db driver), then you're still blocking, and there's no way to make it fully "non-blocking". Why? Because JDBC calls block their current thread, and wrapping them in a Future won't fix that. In this situation, the most you can do is have them block a different ExecutionContext than the one Play is using to handle requests. This is generally a good idea, because if you have several db requests running concurrently, they can block Play's internal thread pool used for handling HTTP requests, and suddenly your server will have to wait to handle other requests (even if they don't require database calls).
For more on different ExecutionContexts see the thread pools documentation and this answer.
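To make that concrete, here is a rough sketch of wrapping a blocking call in a dedicated context in Play 2 (the dispatcher name "contexts.db" and the findAllBlocking helper are made-up examples; the dispatcher would have to be configured in application.conf):
import play.api.Play.current
import play.api.libs.concurrent.Akka
import scala.concurrent.{ExecutionContext, Future}

// Look up a dispatcher defined in application.conf under "contexts.db"
val dbContext: ExecutionContext = Akka.system.dispatchers.lookup("contexts.db")

def findAll(): Future[List[PlayerCol]] = Future {
  findAllBlocking() // the JDBC call still blocks, but only threads of the dedicated pool
}(dbContext)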
If your answer is an asynchronous database driver like ReactiveMongo (there's also scalike-jdbc, and maybe some others), then you're in good shape, and I probably made you read a little more than you had to. In that scenario your index controller function would be fully non-blocking.
Is there a guarantee that both DAO results are caught at any given time?
I'm not quite sure what you mean by this. In your current code, you're actually making these calls in sequence. CreatureDAO.findAlive() isn't executed until PlayerDAO.findAll() has returned. Since they are not dependent on each other, it seems like this isn't intentional. To make them run in parallel, you should instantiate the Futures before mapping them in a for-comprehension:
def index = Action.async {
  val players: Future[List[PlayerCol]] = PlayerDAO.findAll()
  val creatures: Future[List[CreatureCol]] = CreatureDAO.findAlive()
  val t2: Future[(List[PlayerCol], List[CreatureCol])] = for {
    p <- players
    c <- creatures
  } yield (p, c)
  t2.map(t => Ok(views.html.index(t._1, t._2)))
}
The only thing you can guarantee about both results being available is that the yield block isn't executed until both Futures have completed (or never, if either fails), and likewise the body of t2.map(...) isn't executed until t2 has completed.
Further reading:
Are there any benefits in using non-async actions in Play Framework 2.2?
Understanding the Difference Between Non-Blocking Web Service Calls vs Non-Blocking JDBC

Akka actor forward message with continuation

I have an actor which takes the result from another actor and applies some check on it.
class Actor1(actor2: Actor2) {
  def receive = {
    case SomeMessage =>
      val r = actor2 ? NewMessage()
      r.map(someTransform).pipeTo(sender)
  }
}
Now if I make an ask of Actor1, we have 2 futures generated, which doesn't seem overly efficient. Is there a way to provide a forward with some kind of continuation, or some other approach I could use here?
case SomeMessage => actor2.forward(NewMessage, someTransform)
Futures are executed in an ExecutionContext, which is like a thread pool. Creating a new future is not as expensive as creating a new thread, but it has its cost. The best way to work with futures is to create as many as needed and compose them in a way that things that can be computed in parallel are computed in parallel, if the necessary resources are available. This way you will make the best use of your machine.
You mentioned that the Akka documentation discourages excessive use of futures. I don't know where you read this, but what I think it means is to prefer transforming futures rather than creating your own. This is exactly what you are doing by using map. Also, it may mean that if you create a future where it is not needed you are adding unnecessary overhead.
In your case you have a call that returns a future and you need to apply someTransform and return the result. Using map is the way to go.
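For completeness, a sketch of the ask-map-pipe pattern with the plumbing the simplified snippet leaves out (SomeReply and the 5-second timeout are made-up details; SomeMessage, NewMessage and someTransform are from the question):
import akka.actor.{Actor, ActorRef}
import akka.pattern.{ask, pipe}
import akka.util.Timeout
import scala.concurrent.duration._

class Actor1(actor2: ActorRef) extends Actor {
  import context.dispatcher                 // ExecutionContext for map and pipeTo
  implicit val timeout: Timeout = 5.seconds // required by ask (?)

  def receive = {
    case SomeMessage =>
      val r = (actor2 ? NewMessage()).mapTo[SomeReply]
      r.map(someTransform).pipeTo(sender())
  }
}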

Scala - futures and concurrency

I am trying to understand Scala futures, coming from a Java background. I understand you can write:
val f = Future { ... }
then I have two questions:
How is this future scheduled? Automatically?
What scheduler will it use? In Java you would use an executor that could be a thread pool etc.
Furthermore, how can I achieve a scheduledFuture, the one that executes after a specific time delay? Thanks
The Future { ... } block is syntactic sugar for a call to Future.apply (as I'm sure you know Maciej), passing in the block of code as the first argument.
Looking at the docs for this method, you can see that it takes an implicit ExecutionContext - and it is this context which determines how it will be executed. Thus to answer your second question, the future will be executed by whichever ExecutionContext is in the implicit scope (and of course if this is ambiguous, it's a compile-time error).
In many cases this will be the one from import ExecutionContext.Implicits.global, which can be tweaked by system properties but by default is backed by a fork-join pool with one thread per processor core.
The scheduling however is a different matter. For some use-cases you could provide your own ExecutionContext which always applied the same delay before execution. But if you want the delay to be controllable from the call site, then of course you can't use Future.apply as there are no parameters to communicate how this should be scheduled. I would suggest submitting tasks directly to a scheduled executor in this case.
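A rough sketch of that last suggestion, pairing a plain ScheduledExecutorService with a Promise (the delayed name is made up):
import java.util.concurrent.{Executors, TimeUnit}
import scala.concurrent.{ExecutionContext, Future, Promise}

val scheduler = Executors.newSingleThreadScheduledExecutor()

def delayed[A](delayMillis: Long)(body: => A)(implicit ec: ExecutionContext): Future[A] = {
  val p = Promise[A]()
  scheduler.schedule(new Runnable {
    def run(): Unit = p.completeWith(Future(body)) // evaluate the body on the ExecutionContext
  }, delayMillis, TimeUnit.MILLISECONDS)
  p.future
}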
Andrzej's answer already covers most of the ground in your question. Worth mentioning is that Scala's "default" implicit execution context (import scala.concurrent.ExecutionContext.Implicits._) is literally a java.util.concurrent.Executor, and the whole ExecutionContext concept is a very thin wrapper, but is closely aligned with Java's executor framework.
For achieving something similar to scheduled futures, as Mauricio points out, you will have to use promises, and any third party scheduling mechanism.
Not having a common mechanism for this built into Scala 2.10 futures is a pity, but nothing fatal.
A promise is a handle for an asynchronous computation. You create one by calling val p = Promise[Int]() (no ExecutionContext is needed for this step). We just promised an integer.
Clients can grab a future that depends on the promise being fulfilled, simply by calling p.future, which is just a Scala future.
Fulfilling a promise is simply a matter of calling p.success(3), at which point the future will complete.
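A tiny sketch tying those pieces together:
import scala.concurrent.Promise
import scala.concurrent.ExecutionContext.Implicits.global

val p = Promise[Int]()     // the producer side
val f = p.future           // the consumer side: an ordinary Future[Int]
f.foreach(println)         // callback runs once the promise is fulfilled
p.success(3)               // fulfil the promise; f completes with 3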
Play 2.x solves scheduling by using promises and a plain old java.util.Timer.
Here is a linkrot-proof link to the source; let's take a look at it:
object Promise {

  private val timer = new java.util.Timer()

  def timeout[A](message: => A, duration: Long, unit: TimeUnit = TimeUnit.MILLISECONDS)
                (implicit ec: ExecutionContext): Future[A] = {
    val p = Promise[A]()
    timer.schedule(new java.util.TimerTask {
      def run() {
        p.completeWith(Future(message)(ec))
      }
    }, unit.toMillis(duration))
    p.future
  }
}
This can then be used like so:
val future3 = Promise.timeout(3, 10000) // will complete after 10 seconds
Notice this is much nicer than plugging a Thread.sleep(10000) into your code, which will block your thread and force a context switch.
Also worth noticing in this example is the val p = Promise... at the function's beginning, and the p.future at the end. This is a common pattern when working with promises. Take it to mean that this function makes some promise to the client, and kicks off an asynchronous computation in order to fulfill it.
Take a look here for more information about Scala promises. Notice they use a lowercase future method from the concurrent package object instead of Future.apply. The former simply delegates to the latter. Personally, I prefer the lowercase future.