When to use Scala Futures?

I'm a Spark Scala programmer. I have a Spark job made up of sub-tasks that together complete the whole job, and I wanted to use Future to run the sub-tasks in parallel. On completion of the whole job I have to return the combined response.
What I've heard about Scala Futures is that once the main thread finishes and exits, the remaining threads are killed and you get an empty response.
I would have to use Await.result to collect the results, but all the blogs say you should avoid Await.result and that it's a bad practice.
Is using Await.result the correct way of doing this in my case or not?
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

def computeParallel(): Future[String] = {
  val f1 = Future { "ss" }
  val f2 = Future { "sss" }
  val f3 = Future { "ssss" }
  for {
    r1 <- f1
    r2 <- f2
    r3 <- f3
  } yield (r1 + r2 + r3)
}
computeParallel().map(result => ???)
To my understanding, Futures are meant for web-service-style applications where there is one long-running process that never exits. But in my case, once the logic (the Scala program) finishes executing, the process exits.
Can I use Future for my problem or not?

Using futures in Spark is probably not advisable except in special cases, and simply parallelizing computation isn't one of them; giving a non-blocking wrapper to blocking I/O (e.g. making requests to an outside service) is quite possibly the only special case.
Note that Future doesn't guarantee parallelism (whether and how futures are executed in parallel depends on the ExecutionContext in which they're run), just asynchrony. Also, in the event that you're spawning computation-performing futures inside a Spark transformation (i.e. on the executor, not the driver), chances are that there won't be any performance improvement: Spark tends to do a good job of keeping the cores on the executors busy, and all that spawning those futures does is contend with Spark for cores.
Broadly, be very careful about combining parallelism abstractions like Spark RDDs/DStreams/Dataframes, actors, and futures: there are a lot of potential minefields where such combinations can violate guarantees and/or conventions in the various components.
It's also worth noting that Spark has requirements around serializability of intermediate values and that futures aren't generally serializable, so a Spark stage can't result in a future; this means that you basically have no choice but to Await on the futures spawned in a stage.
If you still want to spawn futures in a Spark stage (e.g. posting them to a web service), it's probably best to use Future.sequence to collapse the futures into one and then Await on that (note that I have not tested this idea: I'm assuming that there's an implicit CanBuildFrom[Iterator[Future[String]], String, Future[String]] available):
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._
import org.apache.spark.rdd.RDD

def postString(s: String): Future[Unit] = ???

def postStringRDD(rdd: RDD[String]): RDD[String] = {
  rdd.mapPartitions { strings =>
    // since this is only used for combining the futures before the Await,
    // it's probably OK to use the implicit global execution context here
    implicit val ectx: ExecutionContext = ExecutionContext.global
    val batch = strings.toList
    // sequence the futures into one and block until they have all completed
    // (the 10-minute timeout is arbitrary)
    Await.result(Future.sequence(batch.map(postString)), 10.minutes)
    batch.iterator // pass the original strings through
  }
}

Related

Spark httpclient job does not parallel well [duplicate]

I build an RDD from a list of URLs, and then try to fetch data with some async HTTP calls.
I need all the results before doing other calculations.
Ideally, I need to make the HTTP calls on different nodes for scaling considerations.
I did something like this:
// init spark
val sparkContext = new SparkContext(conf)
val datas = Seq[String]("url1", "url2")
// create rdd
val rdd = sparkContext.parallelize[String](datas)
// httpCall returns Future[String]
val requests = rdd.map((url: String) => httpCall(url))
// await all results (Future.sequence may be better)
val responses = requests.map(r => Await.result(r, 10.seconds))
// print responses
responses.collect().foreach((s: String) => println(s))
// stop spark
sparkContext.stop()
This works, but the Spark job never finishes!
So I wonder what the best practices are for dealing with Futures in Spark (or Future[RDD]).
I think this use case looks pretty common, but I haven't found any answer yet.
Best regards
this use case looks pretty common
Not really, because it simply doesn't work as you (probably) expect. Since each task operates on standard Scala Iterators, these operations will be squashed together, which means that in practice everything will block. Assuming you have three URLs ["x", "y", "z"], your code will be executed in the following order:
Await.result(httpCall("x", 10.seconds))
Await.result(httpCall("y", 10.seconds))
Await.result(httpCall("z", 10.seconds))
You can easily reproduce the same behavior locally. If you want to execute your code asynchronously you should handle this explicitly using mapPartitions:
rdd.mapPartitions(iter => {
  ??? // Submit requests
  ??? // Wait until all requests completed and return Iterator of results
})
but this is relatively tricky. There is no guarantee that all the data for a given partition fits into memory, so you'll probably need some batching mechanism as well.
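For illustration, here is a minimal sketch of that approach, reusing the asker's rdd and httpCall (the batch size of 10, the one-minute timeout, and the use of the global execution context are all assumptions):

import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

val responses = rdd.mapPartitions { urls =>
  implicit val ec: ExecutionContext = ExecutionContext.global
  urls.grouped(10).flatMap { batch =>
    // submit one batch of requests concurrently, then block until they all complete
    Await.result(Future.sequence(batch.map(httpCall)), 1.minute)
  }
}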
All of that being said, I couldn't reproduce the problem you've described, so it may be some configuration issue or a problem with httpCall itself.
On a side note, allowing a single timeout to kill the whole task doesn't look like a good idea.
I couldn't find an easy way to achieve this, but after several iterations of retries this is what I did, and it's working for a huge list of queries. Basically we used this to run a huge query as a batch operation over multiple sub-queries.
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global // needed by recover/onFailure/sequence
import scala.concurrent.duration._

// Break down your huge workload into smaller chunks; in this case the huge query string is
// broken down into a small set of sub-queries.
// If you need to optimize further, you can provide an explicit partition count when parallelizing.
val queries = sqlContext.sparkContext.parallelize[String](subQueryList.toSeq)

// Then map each of those to a Spark task; in this case each task is a Future that returns a string
val tasks: RDD[Future[String]] = queries.map { query =>
  val task = makeHttpCall(query) // method returns the http call response as a Future[String]
  task.recover {
    case ex => logger.error("recover: " + ex.getMessage)
  }
  task onFailure {
    case t => logger.error("execution failed: " + t.getMessage)
  }
  task
}
// Note: the http call is still not invoked; you are only including this as part of the lineage

// Then in each partition you combine all the Futures (there could be several tasks per partition),
// sequence them, and Await the result; this blocks until every future in that sequence is resolved
val contentRdd = tasks.mapPartitions[String] { f: Iterator[Future[String]] =>
  val searchFuture: Future[Iterator[String]] = Future sequence f
  Await.result(searchFuture, threadWaitTime.seconds)
}
// Note: at this point you can do any transformations on this rdd and they will be appended to the lineage.
// When you perform any action on that rdd, the mapPartitions step will be evaluated,
// the sub-queries will all be issued as parallel http requests,
// and the results collected into a single rdd.
If you don't want to perform any transformation on the content (parsing the response payload, etc.), you could use foreachPartition instead of mapPartitions to perform all those http calls immediately.
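A minimal sketch of that variant, reusing queries, makeHttpCall and threadWaitTime from above (the explicit use of the global execution context is an assumption):

import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

queries.foreachPartition { qs =>
  // foreachPartition is an action, so the requests fire as soon as this runs
  Await.result(Future.sequence(qs.map(makeHttpCall).toList), threadWaitTime.seconds)
}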
I finally made it work using scalaj-http instead of Dispatch.
The calls are synchronous, but this matches my use case.
I think the Spark job never finished when using Dispatch because the HTTP connection was not closed properly.
Best regards
This won't work.
You cannot expect the request objects to be distributed and the responses collected over a cluster by other nodes. If you do, the Spark calls on the futures will never end. Futures will never work in this case.
If your map() makes synchronous (http) requests, then collect the responses within the same action/transformation call and then subject the results (responses) to further map/reduce/other calls.
In your case, rewrite the logic to collect the responses for each call synchronously and remove the notion of futures; then everything should be fine.

Scala : Futures with map, flatMap for IO/CPU bound tasks

I am aware that in Java 8 it is not a good idea to start long-running tasks in the stream framework's filter, map, etc. methods, as there is no way to tune the underlying fork-join pool, and it can cause latency problems and starvation.
Now, my question would be, is there any problem like that with Scala? I tried to google it, but I guess I just can't put this question in a google-able sentence.
Let's say I have a list of objects and I want to save them into a database using foreach; would that cause any issues? I guess this would not be an issue in Scala, as functional transformations are fundamental building blocks of the language, but anyway...
If no I/O operations are involved, then using futures can be unnecessary overhead.
def add(x: Int, y: Int) = Future { x + y }
Executing purely CPU-bound operations in a Future constructor will make your logic slower to execute, not faster. Mapping and flatMapping over them can add fuel to this problem.
In case you want to initialize a Future with a constant/simple calculation, you can use Future.successful().
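For example, no task needs to be scheduled on the pool just to wrap an already-known value:

val sum: Future[Int] = Future.successful(1 + 2) // already completed, no ExecutionContext required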
But it makes sense to wrap all blocking I/O, including SQL queries, inside a Future with blocking.
E.g.:
Future {
  DB.withConnection { implicit connection =>
    val query = SQL("select * from bar")
    query()
  }
}
Should be done as,
import scala.concurrent.blocking

Future {
  blocking {
    DB.withConnection { implicit connection =>
      val query = SQL("select * from bar")
      query()
    }
  }
}
The blocking marker notifies the thread pool that this task is blocking, which allows the pool to temporarily spawn new workers as needed; this is done to prevent starvation in applications that block.
The thread pool (by default the scala.concurrent.ExecutionContext.global pool) knows when the code inside a blocking block has completed (since it's a fork-join thread pool).
Therefore it will remove the spare worker threads as they complete, and the pool will shrink back down to its expected size over time (the number of cores by default).
But this scenario can also backfire if there is not enough memory to expand the thread pool.
So for your scenario, you can use
images.foreach { i =>
  import scala.concurrent.blocking
  Future {
    blocking {
      DB.withConnection { implicit connection =>
        val query = SQL("insert into .........")
        query()
      }
    }
  }
}
If you're doing a lot of blocking I/O, then it's a good practice to create a separate thread-pool/execution context and execute all blocking calls in that pool.
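A minimal sketch of what that might look like (the pool size of 20 is an arbitrary assumption; size it for your workload):

import java.util.concurrent.Executors
import scala.concurrent.{ExecutionContext, Future}

// a dedicated pool reserved for blocking calls, separate from the global context
val blockingEc: ExecutionContext =
  ExecutionContext.fromExecutor(Executors.newFixedThreadPool(20))

// run the blocking I/O on that pool by passing the context explicitly
Future {
  DB.withConnection { implicit connection =>
    val query = SQL("insert into .........")
    query()
  }
}(blockingEc)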
References :
scala-best-practices
demystifying-the-blocking-construct-in-scala-futures
Hope this helps.

Are there better monadic abstraction alternative for representing long running, async task?

A Future is good at representing a single asynchronous task that will / should be completed within some fixed amount of time.
However, there exists another kind of asynchronous task, one where it's not possible, or very hard, to know exactly when it will finish. For example, the time taken for a particular string processing task might depend on various factors such as the input size.
For this kind of task, it might be better to detect failure by checking whether the task is able to make progress within a reasonable amount of time, rather than by setting a hard timeout value as with Future.
Are there any libraries providing a suitable monadic abstraction for this kind of task in Scala?
You could use a stream of values like this:
sealed trait Update[T]
case class Progress[T](value: Double) extends Update[T]
case class Finished[T](result: T) extends Update[T]
Let your task emit Progress values whenever it is convenient (e.g. every time a chunk of the computation has finished), and emit one Finished value once the complete computation is done. The consumer can check the progress values to ensure that the task is still making progress; if a consumer is not interested in progress updates, you can just filter them out. I think this is more composable than an actor-based approach.
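For illustration, here is a minimal sketch (my own, not from the original answer) of a task that emits such updates as it works through its input; the "work" is just upper-casing, and a plain iterator stands in for a proper stream:

def runWithUpdates(chunks: List[String]): Iterator[Update[String]] = {
  val total = chunks.size.toDouble
  val results = new StringBuilder
  val progress = chunks.iterator.zipWithIndex.map { case (chunk, i) =>
    results ++= chunk.toUpperCase          // the actual work for this chunk
    Progress[String]((i + 1) / total)      // report how far along we are
  }
  // Iterator#++ evaluates its argument lazily, so Finished is only built
  // after every chunk has been processed
  progress ++ Iterator(Finished(results.toString))
}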
Depending on how much performance or purity you need, you might want to look at akka streams or scalaz streams. Akka streams has a pure DSL for building flow graphs, but allows mutability in processing stages. Scalaz streams is more functional, but has lower performance last I heard.
You can break your work into chunks. Each chunk is a Future with a timeout - your notion of reasonable progress. Chain those Futures together to get a complete task.
Example 1 - both chunks can run in parallel and don't depend on each other (an embarrassingly parallel task):
val chunk1 = Future { ... } // chunk 1 starts execution here
val chunk2 = Future { ... } // chunk 2 starts execution here
val result = for {
  c1 <- chunk1
  c2 <- chunk2
} yield combine(c1, c2)
Example 2 - second chunk depends on the first:
val chunk1 = Future { ... } // chunk 1 starts execution here
val result = for {
  c1 <- chunk1
  c2 <- Future { ... } // chunk 2 starts execution here, only after chunk 1 completes, and can use c1
} yield combine(c1, c2)
There are obviously other constructs to help you when you have many Futures, like Future.sequence.
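For instance, Future.sequence collapses a collection of futures into a single future of all the results (chunk1 and chunk2 are the futures from Example 1, assumed here to produce Int, with an implicit ExecutionContext in scope):

val allChunks: Future[List[Int]] = Future.sequence(List(chunk1, chunk2))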
The article "The worst thing in our Scala code: Futures" by Ken Scambler points out the need for a separation of concerns:
scala.concurrent.Future is built to work naturally with scala.util.Try, so our code often ends up clumsily using Try everywhere to represent failure, using raw exceptions as failure values even where no exceptions get thrown.
To do anything with a scala.concurrent.Future, you need to lug around an implicit ExecutionContext. This obnoxious dependency needs to be threaded through everywhere they are used.
So if your code does not depend directly on Future, but on simple Monad properties, you can abstract it with a Monad type:
trait Monad[F[_]] {
  def flatMap[A, B](fa: F[A], f: A => F[B]): F[B]
  def pure[A](a: A): F[A]
}

// read the type parameter as “for all F, constrained by Monad”.
final def load[F[_]: Monad](pageUrl: URL): F[Page]
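To connect this back to futures, here is a minimal sketch (my own illustration, not from the article) of a Future instance for that Monad trait, assuming an implicit ExecutionContext is available where it is used:

import scala.concurrent.{ExecutionContext, Future}

implicit def futureMonad(implicit ec: ExecutionContext): Monad[Future] =
  new Monad[Future] {
    def flatMap[A, B](fa: Future[A], f: A => Future[B]): Future[B] = fa.flatMap(f)
    def pure[A](a: A): Future[A] = Future.successful(a)
  }

With this in scope, a program written against load[F[_]: Monad] can still be run with F = Future when that's what you want, or with a simpler effect type in tests.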

How to wait for result of Scala Futures in for comprehension

I am having an issue with the piece of code below. I want the combine method to be triggered only after the groundCoffee, heatedWater and frothedMilk futures have all completed; they are started concurrently. All four methods grind, heatWater, frothMilk and brew are executed concurrently using a Future.
def prepareCappuccino(): Future[Cappuccino] = {
  val groundCoffee = grind("arabica beans")
  val heatedWater = heatWater(Water(20))
  val frothedMilk = frothMilk("milk")
  for {
    ground <- groundCoffee
    water <- heatedWater
    foam <- frothedMilk
    espresso <- brew(ground, water)
  } yield combine(espresso, foam)
}
When I execute the above method the output I am getting is below
start grinding...
heating the water now
milk frothing system engaged!
And the program exits after this. I got this example from a site while I was trying to learn futures. How can the program be made to wait so that the combine method gets triggered after all the futures return?
The solution already posted, to Await on a future, is appropriate when you want to deliberately block execution on that thread. Two common reasons to do this are testing, where you want to wait for the outcome before making an assertion, and situations where otherwise all threads would exit (as is the case when running toy examples).
However, in a proper long-lived application, Await is generally to be avoided.
Your question already contains one of the correct ways to do future composition: a for-comprehension. Bear in mind that for-comprehensions are converted to flatMap, map and withFilter operations, so any futures you create inside the for-comprehension will only be created after the previous ones complete, i.e. serially.
If you want a bunch of futures to run concurrently, then create them before entering the for-comprehension, as you have done.
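A minimal sketch of the difference (slowTask is a hypothetical stand-in for grind/heatWater/etc., and an implicit ExecutionContext is assumed in scope):

// created before the for-comprehension: both futures start immediately and run concurrently
val fa = Future { slowTask("a") }
val fb = Future { slowTask("b") }
val concurrent = for { a <- fa; b <- fb } yield (a, b)

// created inside the for-comprehension: the second future is only built after the
// first one completes, so the two tasks run serially
val serial = for {
  a <- Future { slowTask("a") }
  b <- Future { slowTask("b") }
} yield (a, b)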
You can use Await here:
val f = Future.sequence(futures.toList)
Await.ready(f, Duration.Inf)
I assume you have all the futures packed in a list; Await.ready does all the waiting work.

Scala - futures and concurrency

I am trying to understand Scala futures, coming from a Java background: I understand you can write:
val f = Future { ... }
then I have two questions:
How is this future scheduled? Automatically?
What scheduler will it use? In Java you would use an executor that could be a thread pool etc.
Furthermore, how can I achieve a scheduledFuture, the one that executes after a specific time delay? Thanks
The Future { ... } block is syntactic sugar for a call to Future.apply (as I'm sure you know Maciej), passing in the block of code as the first argument.
Looking at the docs for this method, you can see that it takes an implicit ExecutionContext - and it is this context which determines how it will be executed. Thus to answer your second question, the future will be executed by whichever ExecutionContext is in the implicit scope (and of course if this is ambiguous, it's a compile-time error).
In many cases this will be the one from import ExecutionContext.Implicits.global, which can be tweaked by system properties but by default uses a fork-join pool with one thread per processor core.
The scheduling, however, is a different matter. For some use cases you could provide your own ExecutionContext which always applies the same delay before execution. But if you want the delay to be controllable from the call site, then of course you can't use Future.apply, as there are no parameters to communicate how it should be scheduled. I would suggest submitting tasks directly to a scheduled executor in this case.
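A minimal sketch of that last suggestion, combining a JDK scheduled executor with a Promise (the delayed helper and the 5-second delay are my own illustration, not a standard API):

import java.util.concurrent.{Executors, TimeUnit}
import scala.concurrent.{Future, Promise}
import scala.util.Try

val scheduler = Executors.newSingleThreadScheduledExecutor()

def delayed[T](delaySeconds: Long)(body: => T): Future[T] = {
  val p = Promise[T]()
  scheduler.schedule(new Runnable {
    def run(): Unit = p.tryComplete(Try(body)) // run the task once the delay elapses
  }, delaySeconds, TimeUnit.SECONDS)
  p.future
}

val later: Future[String] = delayed(5)("executed after a 5 second delay")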
Andrzej's answer already covers most of the ground in your question. Worth mentioning is that Scala's "default" implicit execution context (import scala.concurrent.ExecutionContext.Implicits._) is literally a java.util.concurrent.Executor, and the whole ExecutionContext concept is a very thin wrapper that is closely aligned with Java's executor framework.
For achieving something similar to scheduled futures, as Mauricio points out, you will have to use promises, and any third party scheduling mechanism.
Not having a common mechanism for this built into Scala 2.10 futures is a pity, but nothing fatal.
A promise is a handle for an asynchronous computation. You create one by calling val p = Promise[Int]() (no ExecutionContext is needed just to create it). We have just promised an integer.
Clients can grab a future that depends on the promise being fulfilled simply by calling p.future, which is just an ordinary Scala future.
Fulfilling a promise is simply a matter of calling p.success(3), at which point the future will complete.
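In code, the whole handshake looks like this:

val p = Promise[Int]()   // the producer's handle
val f = p.future         // the read-only future handed out to clients
p.success(3)             // completing the promise completes the future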
Play 2.x solves scheduling by using promises and a plain old Java 1.4 Timer.
Here is a linkrot-proof link to the source.
Let's also take a look at the source here:
object Promise {

  private val timer = new java.util.Timer()

  def timeout[A](message: => A, duration: Long, unit: TimeUnit = TimeUnit.MILLISECONDS)
                (implicit ec: ExecutionContext): Future[A] = {
    val p = Promise[A]()
    timer.schedule(new java.util.TimerTask {
      def run() {
        p.completeWith(Future(message)(ec))
      }
    }, unit.toMillis(duration))
    p.future
  }
}
This can then be used like so:
val future3 = Promise.timeout(3, 10000) // will complete after 10 seconds
Notice this is much nicer than plugging a Thread.sleep(10000) into your code, which will block your thread and force a context switch.
Also worth noticing in this example is the val p = Promise... at the function's beginning, and the p.future at the end. This is a common pattern when working with promises. Take it to mean that this function makes some promise to the client, and kicks off an asynchronous computation in order to fulfill it.
Take a look here for more information about Scala promises. Notice they use a lowercase future method from the concurrent package object instead of Future.apply. The former simply delegates to the latter. Personally, I prefer the lowercase future.