Defining the future implicit context in Play for Scala

In addition to the future's execution context provided by Scala:
import scala.concurrent.ExecutionContext.Implicits.global
Play provides another execution context:
import play.api.libs.concurrent.Execution.Implicits.defaultContext
When to use each in Play for Scala?

You can find an answer here:
Play's internal execution context
That question is not a complete duplicate, but it is very close, and the answer there covers your question as well.
In short:
You must not use import scala.concurrent.ExecutionContext.Implicits.global in Play.
Response to the comment
The quote from the answer:
Instead, you would use
play.api.libs.concurrent.Execution.Implicits.defaultContext, which
uses an ActorSystem.
scala.concurrent.ExecutionContext.Implicits.global is an
ExecutionContext defined in the Scala standard library. It is a
special ForkJoinPool that uses the blocking method to handle
potentially blocking code in order to spawn new threads in the pool.
You really shouldn't use this in a Play application, as Play will have
no control over it. It also has the potential to spawn a lot of
threads and use a ton of memory, if you're not careful.
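The `blocking` behavior mentioned in the quote can be observed directly. A minimal sketch (the sleep is just a stand-in for real blocking work):

```scala
import scala.concurrent.{blocking, Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// Wrapping code in `blocking` tells the global ForkJoinPool it may spawn an
// extra thread to maintain its parallelism level -- exactly the unbounded
// thread growth the quote warns about.
val f = Future {
  blocking {
    Thread.sleep(100) // stand-in for a blocking call (IO, JDBC, etc.)
    "done"
  }
}

val result = Await.result(f, 5.seconds)
```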

As a general rule, if you need an ExecutionContext inside a method or class, require it as an implicit parameter (Scala) or a normal parameter (Java). Convention is to put this parameter last.
This rule allows the caller/creator to control where/how/when asynchronous effects are evaluated.
The main exception to this rule is when you already have an ExecutionContext and do not wish for the caller/creator to be in control of where the effects are evaluated.
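The rule can be sketched as follows; `UserService` and `loadUser` are made-up names for illustration:

```scala
import scala.concurrent.{ExecutionContext, Future}

// Hypothetical service: the ExecutionContext is required as the last,
// implicit parameter, so the caller decides where the asynchronous work runs.
class UserService {
  def loadUser(id: Long)(implicit ec: ExecutionContext): Future[String] =
    Future(s"user-$id") // evaluated on whatever context the caller supplies
}
```

A caller then brings its own context into scope, e.g. `implicit val ec: ExecutionContext = ...`, before calling `loadUser`.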

Related

Spray API blocked by bad Execution Context in Scala?

I use the Spray API to listen for requests from a server. A computation in one specific Scala class ends up blocking Spray from responding across the whole application. This is a slight simplification of the problem, but I can provide more info if needed.
class SomeClass(implicit execc: ExecutionContext) {
  implicit val x = ...
  val foo = Await.result(...someFunc(x))
}
I added this import and it resolved my issue:
import scala.concurrent.ExecutionContext.Implicits.global
Can anyone explain how or why this worked?
===================================================
Edit:
OuterClass instantiates SomeClass, but is itself never instantiated with the ExecutionContext parameter. It appears that it may be using the global execution context by default; is that why it is blocking?
class OuterClass(executor: ExecutionContext) {
  val s = new SomeClass
}
val x = (new OuterClass).someFunction
The Spray route handler is a single actor that receives HTTP requests from the spray-can IO layer and passes them to the route handling function, which is essentially a partial function with no concurrency of its own. Thus, if your route blocks, Spray will also block, and requests will queue up in the route handler actor's mailbox.
There are three ways to properly handle blocking request processing in a route: spawn an actor per request, return a Future of the response, or take the request completion function and use it somewhere else, unblocking the route (search for a more detailed explanation if interested).
I can't be sure which execution context was used in your case, but it must have been very limited in terms of allocated threads and/or shared by your Spray route handler and long running task. This would lead to them both running on the same thread.
If you didn't have any execution context imported explicitly, it must have been found through implicit resolution from the regular scopes; you must have had one, since you take it as a constructor parameter. Check the implicits in scope to see which one it was. I'm curious myself whether it was one provided by Spray or something else.
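One way to apply the "return a Future of the response" option, sketched here without the Spray routing DSL (the pool size and `handleRequest` are illustrative assumptions):

```scala
import java.util.concurrent.Executors
import scala.concurrent.{ExecutionContext, Future}

// A dedicated, bounded pool for blocking work, separate from the route handler.
val blockingPool = Executors.newFixedThreadPool(4)
val blockingEc: ExecutionContext = ExecutionContext.fromExecutor(blockingPool)

// Instead of Await-ing inside the route, complete it with a Future that runs
// the long computation off the route handler's thread.
def handleRequest(payload: String): Future[String] =
  Future {
    Thread.sleep(100) // stand-in for the long-running computation
    payload.toUpperCase
  }(blockingEc)
```

Because the route hands back a Future instead of blocking, the route handler actor stays free to accept further requests.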

How do I create a `Scheduler` for `observeOn` method?

I'm using RxJava in my Scala project and I need to execute my Observable in a separate thread. I know in order to achieve this I need to call observeOn method on it and pass an instance of rx.lang.scala.Scheduler as an argument.
But how can I create that instance? I did not find any apparent ways of instantiating of rx.lang.scala.Scheduler trait. For example, I have this code:
Observable.from(List(1,2,3)).observeOn(scheduler)
Can someone provide an example of working scheduler variable that will do the trick?
A trait is not instantiable.
You need to use one of the subclasses of the trait listed under "Known Subclasses" in the API documentation.
All schedulers are in the package
import rx.lang.scala.schedulers._
For blocking IO operations, use the IO scheduler
Observable.from(List(1,2,3)).observeOn(IOScheduler())
For computational work, use the computation scheduler
Observable.from(List(1,2,3)).observeOn(ComputationScheduler())
To execute on the current thread
Observable.from(List(1,2,3)).observeOn(ImmediateScheduler())
To execute on a new thread
Observable.from(List(1,2,3)).observeOn(NewThreadScheduler())
To queue work on the current thread, to be executed after the current work completes
Observable.from(List(1,2,3)).observeOn(TrampolineScheduler())
If you want to use your own custom thread pool
import java.util.concurrent.Executors
import scala.concurrent.ExecutionContext

val threadPoolExecutor = Executors.newFixedThreadPool(2)
val executionContext = ExecutionContext.fromExecutor(threadPoolExecutor)
val customScheduler = ExecutionContextScheduler(executionContext)
Observable.from(List(1,2,3)).observeOn(customScheduler)

Slick threadLocalSession vs implicit session

I encountered this problem while posting this question: Slick Write a Simple Table Creation Function
I am very new to Slick and concurrency, knowing only the basics. I have worked with JDBC before, but there you have to manually open a session and then close it; nothing goes beyond that, and there is very little automation (at least I didn't have to set up any automated process).
However, I get confused with Slick session. In the tutorial, the example "Getting Started" encourages people to use threadLocalSession:
// Use the implicit threadLocalSession
import Database.threadLocalSession
http://slick.typesafe.com/doc/1.0.0/gettingstarted.html
The original recommendation is:
The only extra import we use is the threadLocalSession. This
simplifies the session handling by attaching a session to the current
thread so you do not have to pass it around on your own (or at least
assign it to an implicit variable).
Well, I researched a bit online; some people suggest not to use threadLocalSession and to use only implicit sessions, while others suggest using threadLocalSession.
One reason to support implicit session is that "makes sure at compile time that you have a session". Well, I only have two questions:
When people say "thread", are they referring to concurrency? Is Slick/JDBC data storage handled through concurrency?
Which way is better? Implicit or threadLocalSession? Or when to use which?
If it is not too much to ask: I read about the syntax {implicit session: Session => ...} somewhere in my Scala book, but I forgot where. What is this expression?
A threadLocalSession is called this way because it is stored in a "thread local variable", local to the current execution thread.
As of Slick 2, we recommend not to use threadLocalSession (which is now called dynamicSession) unless you see a particular need for it and are aware of the disadvantages. threadLocalSession is implicit too, by the way. The problem is that a threadLocalSession is only valid at runtime if a withSession call (withDynSession in Slick 2.0) happened somewhere down the call stack. If it didn't, the code still compiles but fails at runtime.
{implicit session: Session => ...} is a function from Session (the explicitly annotated type) to ..., where the session is available as an implicit value within the body. In db.withSession{ implicit session: Session => ... }, db creates a session and passes it into the closure handed to withSession. In the closure body, the session is implicit and can be used implicitly by .list calls, etc.
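The pattern can be illustrated without Slick at all. In this sketch, `Session` is a stand-in type and `withSession` a hypothetical loan-pattern helper:

```scala
// A stand-in "session" type, just to show the implicit-closure syntax.
case class Session(name: String)

// Loan pattern: open a resource, hand it to the closure, then clean up.
def withSession[T](body: Session => T): T = {
  val session = Session("demo") // open
  try body(session)
  finally { /* close the session here */ }
}

// A method that requires a Session implicitly, like Slick's .list calls do.
def describe(implicit s: Session): String = s"using ${s.name}"

// Marking the closure parameter `implicit` puts it in implicit scope,
// so `describe` picks it up without being passed explicitly.
val result = withSession { implicit session => describe }
```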

What is the most efficient way of running futures in scala?

Currently we use :
val simpleOps: ExecutionContext = Akka.system(app).dispatchers.lookup("akka.actor.simple-ops")
Then we implicitly import this when we create and compose our futures. Other than that, we currently don't use Akka.
There are easier ways to get an ExecutionContext, but I am not sure they run on a Java fork/join pool, which is a bit more performant than a regular Java ExecutorService.
Is Akka the only way to get an FJP-powered ExecutionContext?
Are there other ways to get an ExecutionContext that is as performant as Akka's FJP-based MessageDispatcher?
Scala futures already use a ForkJoinPool under the hood (specifically, a Scala-specific fork of Java's ForkJoinPool).
See https://github.com/scala/scala/blob/v2.10.1/src/library/scala/concurrent/impl/ExecutionContextImpl.scala#L1
In particular, notice that DefaultThreadFactory extends ForkJoinPool.ForkJoinWorkerThreadFactory:
class DefaultThreadFactory(daemonic: Boolean) extends ThreadFactory with ForkJoinPool.ForkJoinWorkerThreadFactory
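If you want an FJP-backed context without depending on Akka, you can also build one directly from java.util.concurrent.ForkJoinPool. A sketch (the parallelism of 4 is an arbitrary choice):

```scala
import java.util.concurrent.ForkJoinPool
import scala.concurrent.{ExecutionContext, Future}

// An ExecutionContext backed by a plain Java ForkJoinPool, no Akka involved.
val fjPool = new ForkJoinPool(4)
val fjpContext: ExecutionContext = ExecutionContext.fromExecutorService(fjPool)

// Futures created against this context run on the fork/join pool's workers.
val f = Future(1 + 1)(fjpContext)
```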

Futures for blocking calls in Scala

The Akka documentation says:
you may be tempted to just wrap the blocking call inside a Future and work with that instead, but this strategy is too simple: you are quite likely to find bottlenecks or run out of memory or threads when the application runs under increased load.
They suggest the following strategies:
Do the blocking call within a Future, ensuring an upper bound on the number of such calls at any point in time (submitting an unbounded number of tasks of this nature will exhaust your memory or thread limits).
Do the blocking call within a Future, providing a thread pool with an upper limit on the number of threads which is appropriate for the hardware on which the application runs.
Do you know about any implementation of those strategies?
Futures run within execution contexts. This is obvious from the Future API: any call that involves attaching callbacks to a future, or building a future from an arbitrary computation or from another future, requires an implicitly available ExecutionContext object. So you can control the concurrency setup for your futures by tuning the ExecutionContext in which they run.
For instance, to implement the second strategy you can do something like
import scala.concurrent.ExecutionContext
import java.util.concurrent.Executors
import scala.concurrent.future
object Main extends App {
  val ThreadCount = 10
  implicit val executionContext =
    ExecutionContext.fromExecutor(Executors.newFixedThreadPool(ThreadCount))

  val f = future {
    println(s"Hello! I'm running in an execution context with $ThreadCount threads")
  }
}
Akka itself implements all of this: you can wrap your blocking calls in actors and then use dispatchers to control the execution thread pools.
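The first strategy (an upper bound on the number of concurrent blocking calls) can be sketched with a plain Semaphore; the bound of 4 and the helper name are illustrative assumptions:

```scala
import java.util.concurrent.Semaphore
import scala.concurrent.{ExecutionContext, Future}

// At most 4 blocking calls may be in flight at once; additional callers
// wait at acquire() until a slot frees up.
val maxInFlight = new Semaphore(4)

def boundedBlockingCall[T](block: => T)(implicit ec: ExecutionContext): Future[T] = {
  maxInFlight.acquire() // the caller waits here when all slots are taken
  Future {
    try block
    finally maxInFlight.release()
  }
}
```

Note that this blocks the caller once saturated; combining it with a dedicated fixed-size pool (as in the second strategy above) keeps the rest of the application unaffected.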