How to tune a Play Framework application with proper thread pools?

I am working with Play Framework (Scala) version 2.3. From the docs:
You can’t magically turn synchronous IO into asynchronous by wrapping it in a Future. If you can’t change the application’s architecture to avoid blocking operations, at some point that operation will have to be executed, and that thread is going to block. So in addition to enclosing the operation in a Future, it’s necessary to configure it to run in a separate execution context that has been configured with enough threads to deal with the expected concurrency.
This has me a bit confused about how to tune my webapp. Specifically, my app makes a good number of blocking calls: a mix of JDBC calls and calls to 3rd-party services using blocking SDKs. What is the strategy for configuring the execution context and determining the number of threads to provide? Do I need a separate execution context? Why can't I simply configure the default pool to have a sufficient number of threads (and if I do, why would I still need to wrap the calls in a Future)?
I know this will ultimately depend on the specifics of my app, but I'm looking for some guidance on the strategy and approach. The Play docs preach the use of non-blocking operations everywhere, but in reality the typical web app hitting a SQL database makes many blocking calls, and I got the impression from the docs that this type of app will perform far from optimally with the default configuration.

[...] what is the strategy for configuring the execution context and determining the number of threads to provide
Well, that's the tricky part, which depends on your individual requirements.
First of all, you should probably choose a basic profile from the docs (purely asynchronous, highly synchronous, or many specific thread pools).
The second step is to fine-tune your setup by profiling and benchmarking your application.
Do I need a separate execution context?
Not necessarily. But it makes sense to use separate execution contexts if you want to trigger all your blocking IO calls at once rather than sequentially (so database call B does not have to wait until database call A is finished).
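For illustration, here is a minimal sketch of that idea, with the blocking JDBC calls stubbed out (dbContext, loadUser, and loadOrders are hypothetical stand-ins):
import scala.concurrent.{ExecutionContext, Future}

// Hypothetical blocking JDBC calls, stubbed with simple types
def loadUser(): String = "user-42"        // imagine a blocking SELECT
def loadOrders(): List[Int] = List(1, 2)  // imagine another blocking SELECT

// With a dedicated context, both blocking calls run concurrently:
def loadDashboard()(implicit dbContext: ExecutionContext): Future[(String, List[Int])] = {
  val userF   = Future(loadUser())   // starts immediately on dbContext
  val ordersF = Future(loadOrders()) // runs concurrently; B does not wait for A
  userF.zip(ordersF)                 // combine both results when each completes
}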
Why can't I simply configure the default pool to have a sufficient number of threads (and if I do, why would I still need to wrap the calls in a Future)?
You can, check the docs:
play {
  akka {
    loggers = ["akka.event.slf4j.Slf4jLogger"]
    loglevel = WARNING
    actor {
      default-dispatcher = {
        fork-join-executor {
          parallelism-min = 300
          parallelism-max = 300
        }
      }
    }
  }
}
With this approach, you are basically turning Play into a one-thread-per-request model. This is not the idea behind Play, but if you're doing a lot of blocking IO calls, it's the simplest approach. In this case, you don't need to wrap your database calls in a Future.
To put it in a nutshell, you basically have three ways to go:
1. Only use (IO) technologies whose API calls are non-blocking and asynchronous. This allows you to use a small thread pool / default execution context, which suits the nature of Play.
2. Turn Play into a one-thread-per-request framework by drastically increasing the size of the default execution context. No Futures needed; just call your blocking database as always.
3. Create specific execution contexts for your blocking IO calls and gain fine-grained control over what you are doing (a sketch follows below).
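For the third option, here is a minimal Play 2.3 sketch, assuming a dispatcher named my-blocking-context defined in application.conf (the name, the pool sizing, and the JDBC call are illustrative; the pattern follows the "many specific thread pools" profile from the docs):
// In application.conf (hypothetical name and sizing):
//   my-blocking-context {
//     fork-join-executor {
//       parallelism-min = 10
//       parallelism-max = 10
//     }
//   }
import play.api.Play.current
import play.api.libs.concurrent.Akka
import scala.concurrent.{ExecutionContext, Future}

// Look the dispatcher up once and reuse it for all blocking JDBC work
val blockingContext: ExecutionContext =
  Akka.system.dispatchers.lookup("my-blocking-context")

def findUser(id: Long): Future[String] = Future {
  // hypothetical blocking JDBC call; it ties up a thread from
  // my-blocking-context, leaving Play's default pool free for requests
  s"user-$id"
}(blockingContext)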

Firstly, before diving in and refactoring your app, you should determine whether this is actually a problem for you. Run some benchmarks (Gatling is superb) and do a few profiles with something like JProfiler. If you can live with the current performance, then happy days.
The ideal is to use a reactive driver which would return you a Future that then gets passed all the way back to your controller. Unfortunately, async support is still an open ticket for Slick. Interacting with REST APIs can be made reactive using the Play WS library, but if you have to go via a library that your 3rd party provides, then you're stuck.
So, assuming that none of these are feasible and that you do need to improve performance, the question is what benefit would Play's suggestion have? I think what they're getting at here is that it's useful to partition your threads into those that block and those that can make use of asynchronous techniques.
If, for instance, only some proportion of your requests are long and blocking, then with a single thread pool you risk having all threads tied up in blocking operations. Your controller would then be unable to handle any new requests, irrespective of whether a given request needs to call a blocking service. If you can allocate enough threads that this never happens, then there is no problem.
If, on the other hand, you are hitting your limit for threads, then by using two pools you can keep your fast, non-blocking requests snappy. You would have one pool servicing requests in your controller and calling into services which return Futures. Some of those Futures would actually be performing work using a separate pool of threads, but only for the blocking operations. If any portion of your app can be made reactive, your controller can take advantage of this while staying isolated from the blocking operations.


When to use scala.concurrent.blocking?

I am asking myself the question: "When should you use scala.concurrent.blocking?"
If I understood correctly, blocking {} only makes sense in conjunction with a ForkJoinPool. In addition, docs.scala-lang.org highlights that blocking shouldn't be used for long-running executions:
Last but not least, you must remember that the ForkJoinPool is not designed for long-lasting blocking operations.
I assume a long-running execution is a database call or some kind of external IO. In this case a separate thread pool should be used, e.g. a CachedThreadPool. Most IO-related frameworks, like sttp, doobie, and cats, can make use of a provided IO thread pool.
So I am asking myself: which use case still exists for the blocking statement? Is it only useful when working with locking and waiting operations, like semaphores?
Consider the problem of thread pool starvation. Say you have a fixed size thread pool of 10 available threads, something like so:
import java.util.concurrent.Executors
import scala.concurrent.ExecutionContext

implicit val myFixedThreadPool =
  ExecutionContext.fromExecutor(Executors.newFixedThreadPool(10))
If for some reason all 10 threads are tied up, and a new request comes in which requires an 11th thread to do its work, then this 11th request will hang until one of the threads becomes available.
The Future { blocking { ... } } construct can be interpreted as saying: please do not consume a thread from myFixedThreadPool for this blocking work, but instead spin up a new thread outside it. (Strictly speaking, blocking is only a hint; it takes effect on pools that implement BlockContext, such as the global ExecutionContext's ForkJoinPool, and is ignored by a plain fixed thread pool.)
One practical use case for this is when your application can conceptually be considered to have two parts: one part which, say, in 90% of cases talks to proper async APIs, and another part which in a few special cases has to talk to a very slow external API that takes many seconds to respond and which we have no control over. Using the fixed thread pool for the truly async part is relatively safe from thread pool starvation; however, using the same fixed thread pool for the second part presents the danger that suddenly 10 requests are made to the slow external API, causing the other 90% of requests to hang while waiting for those slow requests to finish. Wrapping those slow requests in blocking would help minimise the chance of the other 90% of requests hanging.
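As a concrete sketch of this, assuming the global execution context (whose ForkJoinPool honors the blocking hint) and a hypothetical slow 3rd-party call:
import scala.concurrent.{blocking, Future}
// The global context's ForkJoinPool implements BlockContext, so it can
// compensate for blocked threads by spawning temporary extra ones
import scala.concurrent.ExecutionContext.Implicits.global

// Hypothetical stand-in for the slow external API
def slowExternalApiCall(): String = { Thread.sleep(5000); "report" }

def fetchReport(): Future[String] = Future {
  blocking {
    slowExternalApiCall() // blocks, but the pool compensates with a new thread
  }
}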
Another way of achieving this kind of "swimlaning", separating truly async requests from blocking requests, is to offload the blocking requests to a separate dedicated thread pool used just for the blocking calls, something like so:
implicit val myDefaultPool =
  ExecutionContext.fromExecutor(Executors.newFixedThreadPool(10))

val myPoolForBlockingRequests =
  ExecutionContext.fromExecutor(Executors.newFixedThreadPool(20))

Future {
  callAsyncApi
} // consumes a thread from myDefaultPool

...

Future {
  callBlockingApi
}(myPoolForBlockingRequests) // consumes a thread from myPoolForBlockingRequests
I am asking myself the question: "When should you use scala.concurrent.blocking?"
Well, since that is mostly useful for Future, and Future should never be used for serious business logic, then: never.
Now, "jokes" aside, when using Futures you should always use blocking when wrapping blocking operations, AND accept a custom ExecutionContext instead of hardcoding the global one. Note that this should always be the case, even for non-blocking operations, but IME most folks using Future don't do this... but that is another discussion.
Then, callers of those blocking operations may decide if they will use their compute EC or a blocking one.
When the docs mention long-lasting, they don't mean anything specific, mostly because it is too hard to be specific about that; it is context/application specific. What you need to understand is that blocking, by default (note that the actual EC may do whatever it wants), will just create a new thread, and if you create a lot of threads and they take too long to be released, you will saturate your memory and kill the program with an OOM error.
For those situations, the recommendation is to control the back pressure of your app to avoid creating too many threads. One way to do that is to create a fixed thread pool sized for the maximum number of blocking operations you will support and just enqueue all other pending tasks; such an EC should just ignore blocking calls, as in the sketch below. You may also have an unbounded number of threads but manage the back pressure manually in other parts of your code, e.g. with an explicit Queue; this was common advice before: https://gist.github.com/djspiewak/46b543800958cf61af6efa8e072bfd5c
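A minimal sketch of that first option, with an illustrative cap of 16 threads; a plain fixed pool does not implement BlockContext, so blocking hints are ignored and excess tasks simply wait in the executor's queue, which is exactly the back pressure we want:
import java.util.concurrent.Executors
import scala.concurrent.{ExecutionContext, Future}

// At most 16 blocking operations run at once; the rest queue up
val boundedBlockingEc: ExecutionContext =
  ExecutionContext.fromExecutor(Executors.newFixedThreadPool(16))

// Hypothetical helper: run any blocking operation on the bounded pool
def runBlocking[T](op: => T): Future[T] = Future(op)(boundedBlockingEc)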
However, having blocked threads is always hurtful for the performance of your app, even if the compute EC is not blocked. The latest talks by Daniel Spiewak explain this in detail: "The case for effect systems" & "Threads at scale".
So the ecosystem is pushing the state of the art hard to avoid that at all costs, but it is not a simple task. Still, runtimes like the ones provided by cats-effect or ZIO are optimized to handle blocking tasks as best they can as of today, and will probably keep improving over the coming years.

Asynchronous blocking thread magic

I've been learning Play, and I'm getting most of the major concepts, but I'm struggling with what magic the platform is doing to enable all of these things.
In particular, let's say I have a controller that does something time-intensive. Now I understand how using Futures and asynchronous processing I can make these things appear not to block, but if it's something resource intensive, of course in the end it must block somewhere. Per the documentation:
You can’t magically turn synchronous IO into asynchronous by wrapping it in a Future. If you can’t change the application’s architecture to avoid blocking operations, at some point that operation will have to be executed, and that thread is going to block. So in addition to enclosing the operation in a Future, it’s necessary to configure it to run in a separate execution context that has been configured with enough threads to deal with the expected concurrency.
This bit I'm not understanding: if some task that I'm doing via a Future is possibly being handled in a separate thread pool, how/what magic is Scala/Play doing in the framework to coordinate these threads, such that whichever thread is listening to the HTTP socket blocks long enough to do all of the complex processing (DB loads, serialization to JSON, etc.) in separate threads, and yet somehow returns to the original blocking thread that has to send something back to the client for that request?
Disclaimer: this is a simplified answer for the general problem, I don't want to make this even more complex by going inside Play and Akka internals.
One method is to have a thread listening to the socket, but not writing to it; let's call it A. A spawns a Future that contains, in itself, all the data needed for the computation. It is important that you don't confuse the thread that does the processing with the data that is being processed, as the data (memory) is shared by all threads (and sometimes needs explicit synchronization). The Future will be processed (eventually) by a thread B.
Now, does A need to block until B is done? It could (and in many general cases that might be the right solution), but in this case we hardly want to stop listening to our socket. So no, it doesn't: A forgets everything about the message and carries on with its life.
So when B is done, the Future might be mapped or have a listener that sends the proper response. B itself can send it, given the information it has about the original message! You just need to be careful synchronizing access to the socket, to avoid colliding with a possible thread C that might be processing a previous or later message in parallel.
Things can obviously get more complex, with threads spawning even more threads, queues where some threads write data and others read it, etc. (Play, being based on Akka, certainly includes a lot of message queues). But I hope to have convinced you that while this statement is correct:
You can’t magically turn synchronous IO into asynchronous by wrapping it in a Future. If you can’t change the application’s architecture to avoid blocking operations [...]
Such a change in the application's architecture is certainly possible in many (most?) cases, and it certainly has been done inside Play.
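In Play terms, the visible surface of this machinery is Action.async: the controller returns a Future[Result] immediately, and whichever thread completes the Future ends up triggering the write back to the client. A minimal sketch (findName is a hypothetical service call):
import play.api.mvc._
import play.api.libs.concurrent.Execution.Implicits.defaultContext
import scala.concurrent.Future

object UsersController extends Controller {

  // Hypothetical service that would do its work on some other thread pool
  def findName(id: Long): Future[String] = Future.successful(s"user-$id")

  def show(id: Long) = Action.async {
    // Thread A returns this Future at once and goes back to the socket;
    // the map callback runs on whichever thread completes the Future (B)
    findName(id).map(name => Ok(name))
  }
}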

Event based design - Futures, Promises vs Akka Persistence

I have multiple use cases which require predefined events to be fired based on certain user actions.
For example, let's say when a NewUser is created in the application, it will have to call CreateUserInWorkflowSystem and FireEmailToTheUser asynchronously. There are many other business cases of this nature where events will be predefined based on a use case. I can model these events using Promises/Futures as below:
if 'NewUser' then
  call `CreateUserInWorkflowSystem` (which will be Future based API)
  call `FireEmailToTheUser` (which will be Future based API)
if 'FileImport' then
  call `API3` (which will be Future based call)
  call `API4` (which will be Future based call)
All those Future calls will have to log failures somewhere so failed calls can be retried, etc. Note that the NewUser call won't be waiting for those Futures (events, per se) to complete.
That was using plain Futures/Promises APIs. However, I am thinking Akka Persistence would be an appropriate fit here, and blocking calls can still run in Futures. With Akka Persistence, handling failure will be easy as it provides it out of the box. I understand Akka Persistence is still in an experimental stage, but that doesn't seem to be a big concern, as Typesafe generally keeps these new frameworks in an experimental state before promoting them in a future release (the same was true with macros). Given these requirements, do you think Futures/Promises or Akka Persistence is a better fit here?
This is an opinion-based question - not the best type to ask on SO. Anyway, trying to answer.
It really depends on what you are more comfortable with and what your requirements are. Do you need to scale the system beyond a single JVM later? Use Akka. Do you want to keep it simpler? Use Futures.
If you use Futures you can store all state and actions to execute in a job queue/db. It's quite reasonable.
If you use Akka Persistence then obviously it will help you with persistence. Akka will make supervision, recovery, and retries easier. If your CreateUserInWorkflowSystem action fails, the result is propagated to the supervising actor, which probably restarts the failed actor and makes it retry N times. If your supervising actor fails, then its supervisor will do the right thing, or eventually the whole app will crash, which is good. With Futures, you would have to implement this mechanism yourself and make sure that the application can crash when needed.
If you have completely independent actions then Futures and Actors sound about the same. If you have to chain actions and compose them, then using Futures will be a somewhat more natural thing to do: for-comprehensions, etc. (see the sketch below). In Akka you would have to wait for a message and, based on the type of the message, perform the next action.
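For instance, a sketch of chaining the question's (hypothetical) Future-based calls; the for-comprehension sequences them, and a failure in either step is surfaced for logging or retry:
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

// Hypothetical Future-based APIs from the question
def createUserInWorkflowSystem(name: String): Future[Unit] = Future.successful(())
def fireEmailToTheUser(name: String): Future[Unit] = Future.successful(())

def onNewUser(name: String): Future[Unit] =
  for {
    _ <- createUserInWorkflowSystem(name) // runs first
    _ <- fireEmailToTheUser(name)         // runs after the first succeeds
  } yield ()

// Failures from either step surface here, e.g. to log and enqueue a retry
onNewUser("alice").failed.foreach(err => println(s"event failed: $err"))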
Try to mock a simple implementation using both and compare what you like/dislike given your particular application requirements. Overall, both choices are good, but I'm slightly leaning towards actors in this case.

Using Scala Akka framework for blocking CLI calls

I'm relatively new to Akka & Scala, but I would like to use Akka as a generic framework to pull together information from various web tools and CLI commands.
I understand the general principle that in an actor model, it is highly desirable that actors not block. In the case of the HTTP requests, there are async HTTP clients (such as Spray), which means I can handle the requests asynchronously within the actor framework.
However, I'm unsure of the best approach when combining actors with existing blocking API calls such as the Scala ProcessBuilder/ProcessIO libraries. In terms of issuing these CLI commands, I expect a relatively small amount of concurrency, e.g. executing at most 10 concurrent CLI invocations on a 12-core machine.
Is it better to have a single actor managing these CLI commands, farming the actual work off to Futures that are created as needed? Or would it be cleaner just to maintain a set of separate actors backed by a PinnedDispatcher? Or something else?
From the Akka documentation ( http://doc.akka.io/docs/akka/snapshot/general/actor-systems.html#Blocking_Needs_Careful_Management ):
"
Blocking Needs Careful Management
In some cases it is unavoidable to do blocking operations, i.e. to put a thread to sleep for an indeterminate time, waiting for an external event to occur. Examples are legacy RDBMS drivers or messaging APIs, and the underlying reason is typically that (network) I/O occurs under the covers. When facing this, you may be tempted to just wrap the blocking call inside a Future and work with that instead, but this strategy is too simple: you are quite likely to find bottle-necks or run out of memory or threads when the application runs under increased load.
The non-exhaustive list of adequate solutions to the “blocking problem” includes the following suggestions:
Do the blocking call within an actor (or a set of actors managed by a router [Java, Scala]), making sure to configure a thread pool which is either dedicated for this purpose or sufficiently sized.
Do the blocking call within a Future, ensuring an upper bound on the number of such calls at any point in time (submitting an unbounded number of tasks of this nature will exhaust your memory or thread limits).
Do the blocking call within a Future, providing a thread pool with an upper limit on the number of threads which is appropriate for the hardware on which the application runs.
Dedicate a single thread to manage a set of blocking resources (e.g. a NIO selector driving multiple channels) and dispatch events as they occur as actor messages.
The first possibility is especially well-suited for resources which are single-threaded in nature, like database handles which traditionally can only execute one outstanding query at a time and use internal synchronization to ensure this. A common pattern is to create a router for N actors, each of which wraps a single DB connection and handles queries as sent to the router. The number N must then be tuned for maximum throughput, which will vary depending on which DBMS is deployed on what hardware."
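A sketch of that first suggestion applied to the CLI case: a router of 10 workers, each making blocking calls on a dedicated dispatcher ("cli-dispatcher" is a hypothetical entry you would add to your Akka config):
import akka.actor.{Actor, ActorSystem, Props}
import akka.routing.RoundRobinPool
import scala.sys.process._

// Each worker performs a blocking CLI call; the router caps concurrency at 10
class CliWorker extends Actor {
  def receive = {
    case cmd: String =>
      sender() ! cmd.!! // blocks this worker's thread until the process exits
  }
}

object CliApp extends App {
  val system = ActorSystem("cli")
  // Routees run on the dedicated dispatcher, so blocked threads never
  // starve the default dispatcher
  val router = system.actorOf(
    RoundRobinPool(10).props(Props[CliWorker].withDispatcher("cli-dispatcher")),
    "cliRouter")

  router ! "ls" // fire-and-forget here; use the ask pattern to collect output
}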

How to limit concurrency when using actors in Scala?

I'm coming from Java, where I'd submit Runnables to an ExecutorService backed by a thread pool. It's very clear in Java how to set limits to the size of the thread pool.
I'm interested in using Scala actors, but I'm unclear on how to limit concurrency.
Let's just say, hypothetically, that I'm creating a web service which accepts "jobs". A job is submitted with POST requests, and I want my service to enqueue the job then immediately return 202 Accepted — i.e. the jobs are handled asynchronously.
If I'm using actors to process the jobs in the queue, how can I limit the number of simultaneous jobs that are processed?
I can think of a few different ways to approach this; I'm wondering if there's a community best practice, or at least, some clearly established approaches that are somewhat standard in the Scala world.
One approach I've thought of is having a single coordinator actor which would manage the job queue and the job-processing actors; I suppose it could use a simple int field to track how many jobs are currently being processed. I'm sure there'd be some gotchas with that approach, however, such as making sure to track when an error occurs so as to decrement the number. That's why I'm wondering whether Scala already provides a simpler or more encapsulated approach to this.
BTW I tried to ask this question a while ago but I asked it badly.
Thanks!
I'd really encourage you to have a look at Akka, an alternative Actor implementation for Scala.
http://www.akkasource.org
Akka already has a JAX-RS[1] integration, and you could use that in concert with a LoadBalancer[2] to throttle how many actions can be done in parallel:
[1] http://doc.akkasource.org/rest
[2] http://github.com/jboner/akka/blob/master/akka-patterns/src/main/scala/Patterns.scala
You can override the system properties actors.maxPoolSize and actors.corePoolSize, which limit the size of the actor thread pool, and then throw as many jobs at the pool as your actors can handle. Why do you think you need to throttle your reactions?
You really have two problems here.
The first is keeping the thread pool used by actors under control. That can be done by setting the system property actors.maxPoolSize, as in the sketch below.
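A minimal sketch, assuming the old scala.actors library, where the properties should be set before the actor framework starts its scheduler (the pool sizes are illustrative):
object JobRunner {
  def main(args: Array[String]): Unit = {
    // Cap the scala.actors scheduler's thread pool before any actor starts
    System.setProperty("actors.corePoolSize", "4")
    System.setProperty("actors.maxPoolSize", "8")
    // ... create actors and submit jobs after this point
  }
}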
The second is runaway growth in the number of tasks that have been submitted to the pool. You may or may not be concerned with this one; however, it is fully possible to trigger failure conditions such as out-of-memory errors, and in some cases potentially more subtle problems, by generating too many tasks too fast.
Each worker thread maintains a deque of tasks. The deque is implemented as an array that the worker thread will dynamically enlarge up to some maximum size. In 2.7.x the deque can grow quite large, and I've seen that trigger out-of-memory errors when combined with lots of concurrent threads. The maximum deque size is smaller in 2.8. The deque can also fill up.
Addressing this problem requires you to control how many tasks you generate, which probably means some sort of coordinator, as you've outlined. I've encountered this problem when the actors that initiate a kind of data-processing pipeline are much faster than the ones later in the pipeline. To control the process, I usually have the actors later in the chain ping back the actors earlier in the chain every X messages, and have the ones earlier in the chain stop after X messages and wait for the ping back. You could also do it with a more centralized coordinator.