How do Akka actors clear resources? - scala

We're experiencing strange memory behavior on our web server built using Akka HTTP.
Our architecture goes like this:
web server routes call various actors, get results as futures, and stream them to the response
actors call non-blocking operations (using futures), combine and process the data fetched from them, and pipe the results to the sender. We're using standard Akka actors, implementing their receive method (not Akka Typed)
there is no blocking code anywhere in the app
When I run the web server locally, it takes around 350 MB at start. After the first request, memory usage jumps to around 430 MB and slowly increases with each request (monitored using Activity Monitor on a Mac). But shouldn't GC clean things up after each request? Shouldn't memory usage be 350 MB again after processing?
I also installed the YourKit Java profiler and here is a diagram of heap memory
It can be seen that once memory usage increases, it never goes back down, even though the system is stateless. Also, when I run GC manually from the profiler, it does almost nothing, just a small decrease in memory usage. I understand some services might cache things after the first request, consuming memory temporarily, but is there any policy inside Akka actors or Akka HTTP about this?
I tried to check the objects furthest from GC, but it only shows library classes and Akka built-in classes, nothing related to our code.
So, I have two questions:
How does an actor close resources and free memory after processing a message? Have you experienced anything similar?
Is there a better way of profiling Akka HTTP that will show me the stack traces of the classes furthest from the GC roots?
On a side note, is it advisable to use the scheduler inside actors (running inside an Akka HTTP server)? When I do that, memory usage seems to increase heavily and the app runs out of memory on the DEV environment.
Thanks in advance,
Amer

An actor remains active until it is explicitly stopped: there is no garbage collection.
Probably the two most common methods for managing actor lifetimes (beyond the actor itself deciding that it's time to stop) are:
Parent is responsible for stopping children. If the actors are being spawned to perform specific tasks on behalf of the parent, for instance, this is the approach called for.
Using an inactivity timeout. If the actors represent domain entities (e.g. an actor for every user account, where the actor in some sense serves as an in-memory cache), using context.setReceiveTimeout to have a ReceiveTimeout message sent to the actor after the timeout has passed is a pretty reasonable solution, especially if Akka Persistence and Akka Cluster Sharding are used so that the actor's state can be recovered later. Note that in some cases the scheduled send of that message may not be canceled in time if a message was enqueued in the mailbox but not yet processed when the timeout expired: receiving a ReceiveTimeout is not a guarantee that the timeout has actually passed since the last received message. A minimal sketch follows.
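For the second approach, here is a minimal sketch with a classic (untyped) actor; the actor name, timeout value, and messages are illustrative only:

import akka.actor.{Actor, ReceiveTimeout}
import scala.concurrent.duration._

// Hypothetical entity actor that stops itself after 2 minutes of inactivity.
class UserSessionActor extends Actor {
  context.setReceiveTimeout(2.minutes)

  def receive: Receive = {
    case ReceiveTimeout =>
      // No message received within the window: stop the actor so its state can be reclaimed.
      context.stop(self)
    case msg =>
      // Handle domain messages here; any received message resets the receive-timeout timer.
      ()
  }
}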
Update to add:
Regarding
shouldn't GC clean things after each request?
The short answer is no, GC will not clean things after each request (and if it does, that's a very good sign that you haven't provisioned enough memory).
The longer answer is that the characteristics of garbage collection on the JVM are very underspecified: the only rule a garbage collection implementation has to respect is that it never frees an object reachable from a GC root (basically any variable on a thread's stack or static to a class) by a chain of strong references. When and even whether the garbage collector reclaims the space taken up by garbage is entirely implementation dependent (I say "whether" to account for the existence of the Epsilon garbage collector, which never frees memory; this is useful for benchmarking JVMs without the complication of garbage collection, and also in environments where the application can be restarted when it runs out of memory: the JVM crash is in some sense the actual garbage collector).
You could try executing java.lang.System.gc when the server stops: this may cause a GC run (note that there is no requirement that the system actually collect any garbage in such a scenario). If a garbage collector is going to free any memory, about the only time it has to run is when there's not enough space to fulfill an object allocation request: therefore, if the application stops allocating objects, there may never be another garbage collection run.
For performance reasons, most modern garbage collectors in the JVM wait until there's no more than a certain amount of free space before they collect garbage: this is because the time taken to reclaim all space is proportional to the number of objects which aren't reclaimable and for a great many applications, the pattern is that most objects are comparatively ephemeral, so the number of objects which aren't reclaimable is reasonably constant. The consequence of that is that the garbage collector will do about the same amount of work in a "full" GC for a given application regardless of how much free space there is.

Related

Akka Actor Messaging Delay

I'm experiencing issues scaling my app with multiple requests.
Each request sends an ask to an actor, which then spawns other actors. This is fine; however, under load (5+ asks at once), the ask takes a massive amount of time to deliver the message to the target actor. The original design was to bulkhead requests evenly, but this is causing a bottleneck. Example:
In this picture, the ask is sent right after the query plan resolver. However, there is a multi-second gap before the actor receives this message. This is only experienced under load (5+ requests/sec). I first thought this was a starvation issue.
Design:
Each planner-executor is a separate instance for each request. It spawns a new 'Request Acceptor' actor each time (it logs 'requesting score' when it receives a message).
I gave the ActorSystem a custom global executor (a big one). I noticed the threads were not utilized beyond the core thread-pool size even during this massive delay.
I made sure all ExecutionContexts in the child actors used the correct ExecutionContext.
Made sure all blocking calls inside actors were wrapped in a Future.
I gave the parent actor (and all children) a custom dispatcher with core size 50 and max size 100. It did not request more (it stayed at 50) even during these delays.
Finally, I tried creating a totally new ActorSystem for each request (inside the planner-executor). This also had no noticeable effect!
I'm a bit stumped by this. From these tests it does not look like a thread-starvation issue. Back at square one, I have no idea why the message takes longer and longer to deliver the more concurrent requests I make. The Zipkin trace before reaching this point does not degrade with more requests until it reaches the ask here. Before then, the server is able to handle multiple steps, e.g. verify the request, talk to the db, and then finally go inside the planner-executor. So I doubt the application itself is running out of CPU time.
We had a very similar issue with Akka. We observed huge delays in the ask pattern delivering messages to the target actor under peak load.
Most of these issues are related to heap memory consumption, not to dispatcher usage.
We finally fixed these issues with some of the configuration tuning and changes below.
1) Make sure you stop entities/actors which are no longer required. If it's a persistent actor then you can always bring it back when you need it.
Refer : https://doc.akka.io/docs/akka/current/cluster-sharding.html#passivation
2) If you are using cluster sharding then check the akka.cluster.sharding.state-store-mode. By changing this to persistence we gained 50% more TPS.
3) Minimize your log entries (set it to info level).
4) Tune your logging to publish messages to your logging system frequently: update the batch size, batch count, and interval accordingly so that the memory is freed. In our case a lot of heap memory was used for buffering log messages before sending them in bulk; if the interval is too long you may fill up your heap, and that affects performance (more GC activity required).
5) Run blocking operations on a separate dispatcher (see the sketch after this list).
6) Use custom serializers (protobuf) and avoid JavaSerializer.
7) Add the JAVA_OPTS below when launching your jar:
export JAVA_OPTS="$JAVA_OPTS -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=2 -Djava.security.egd=file:/dev/./urandom"
The main thing is -XX:MaxRAMFraction=2, which lets the heap use up to half of the available memory. By default it's 4, which means your application will use only one quarter of the available memory, and that might not be sufficient.
Refer : https://blog.csanchez.org/2017/05/31/running-a-jvm-in-a-container-without-getting-killed/
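For point 5, a sketch of what a dedicated dispatcher for blocking work could look like; the dispatcher name and pool size are illustrative and should be tuned to your workload:

import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory
import scala.concurrent.{ExecutionContext, Future}

// Illustrative dispatcher definition merged on top of the regular configuration.
val config = ConfigFactory.parseString(
  """
    |blocking-io-dispatcher {
    |  type = Dispatcher
    |  executor = "thread-pool-executor"
    |  thread-pool-executor {
    |    fixed-pool-size = 16
    |  }
    |  throughput = 1
    |}
  """.stripMargin).withFallback(ConfigFactory.load())

val system = ActorSystem("example", config)

// Look the dispatcher up and run blocking calls on it instead of the default dispatcher.
implicit val blockingEc: ExecutionContext = system.dispatchers.lookup("blocking-io-dispatcher")
val result: Future[String] = Future {
  // blocking call goes here
  "done"
}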
Regards,
Vinoth

How heavy are Akka actors?

I am aware this is a very imprecise question and might be deemed inappropriate for Stack Overflow. Unfortunately, smaller applications (in terms of the number of actors) and 'tutorial-like' ones don't help me develop intuition about the overhead of message dispatch and about a sweet spot for granularity between a 'Scala object' and a 'CORBA object'.
While keeping the state of a conversation with a client, for example, almost certainly deserves an actor, most real use cases would involve conditional/parallel/alternative interactions modeled by many classes. This leaves the choice between treating actors as facades to quite complex services, similar to the justly retired EJB, or akin to Smalltalk objects, firing messages between each other willy-nilly whenever communication can possibly be implemented in an asynchronous manner.
Apart from the overhead of message passing itself, there will also be overhead involved with lifecycle management, and I am wary of potential problems caused by chained-restarts of whole subtrees of actors due to exceptions or other errors in their root.
For the sake of this question we may assume that vast majority of the communication happens within a single machine and network crossing is insignificant.
I am not sure what you mean by an "overhead of message passing itself".
When network/serialisation is not involved then the overhead is negligible: one side pushes a message into a queue, the other reads it from that queue.
Akka claims that it can go as fast as 50 million messages per second on a single machine. This means that you wouldn't use actors just as façades for complex subsystems. You would rather model them as much smaller "working units". They can be more complex compared to Smalltalk objects when convenient. You could have, say, a KafkaConsumerActor which internally utilises other "normal" classes such as Connection, Configuration, etc.; these don't have to be Akka actors. But it is still small enough to be a simple working unit doing one simple thing (consuming a message and sending it somewhere).
50 million a second is really a lot.
The memory footprint is also extremely small: Akka itself claims that you can have ~2.5 million actors for just 1 GB of heap. Compared to what a typical system does, that is, indeed, nothing.
As for lifecycle, creating an actor is not much heavier than creating a class instance and a mailbox, so I don't really expect it to be that significant.
That said, you typically don't have many actors in your system that handle one message and die. Normally you spawn actors which live much longer. For example, an actor that calculates your mortgage repayments based on the parameters you provide doesn't have any reason to die at all.
Also Akka makes it very simple to use actor pools (different kinds of them).
So performance here is very tweakable.
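For instance, a minimal sketch of a round-robin pool of workers; the worker class, message type, and pool size are made up for the example:

import akka.actor.{Actor, ActorSystem, Props}
import akka.routing.RoundRobinPool

// Illustrative worker: a small unit of work, cheap to create.
class MortgageCalculator extends Actor {
  def receive: Receive = {
    case principal: Double => sender() ! principal * 0.05 // placeholder calculation
  }
}

val system = ActorSystem("example")
// Eight identical workers behind a single router reference; messages sent to
// `calculators` are distributed round-robin across the pool.
val calculators = system.actorOf(RoundRobinPool(8).props(Props(new MortgageCalculator)), "calculators")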
The last point is that you should consider Akka's overhead in context. For example, if your system is doing database queries, serving/performing HTTP requests, or doing significant IO of some sort, then the overhead of those activities probably makes the overhead of Akka so insignificant that you wouldn't even bother thinking about it. A 50 ms round trip to the DB, for example, is roughly equivalent to the overhead of ~2.5 million Akka messages. Does it matter?
So can you find an edge case scenario where Akka would force you to pay performance penalties? Probably. Akka is not a golden hammer (and nothing is).
But with all the above in mind, you should consider whether Akka really is the performance bottleneck in your specific context, or whether you are wasting time on micro-optimisation.

bad use cases of scala.concurrent.blocking?

With reference to the third point in this accepted answer, are there any cases for which it would be pointless or bad to use blocking for a long-running computation, whether CPU- or IO-bound, that is being executed 'within' a Future?
It depends on the ExecutionContext your Future is being executed in.
Pointless:
If the ExecutionContext is not a BlockContext, then using blocking will be pointless. That is, it would use the DefaultBlockContext, which simply executes the code without any special handling. It probably wouldn't add much overhead, but it would be pointless nonetheless.
Bad:
Scala's ExecutionContext.Implicits.global is made to spawn new threads in its ForkJoinPool when the thread pool is about to be exhausted, that is, when it knows this is going to happen, which it learns via blocking. This can be bad if you're spawning lots of threads: if you queue up a lot of work in a short span of time, the global context will happily expand until gridlock. #dk14's answer explains this in more depth, but the gist is that it can be a performance killer, as managed blocking can quickly become unmanageable.
The main purpose of blocking is to avoid deadlocks within thread pools, so it is tangentially related to performance in the sense that reaching a deadlock would be worse than spawning a few more threads. However, it is definitely not a magical performance enhancer.
I've written more about blocking in particular in this answer.
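For illustration, a minimal sketch of managed blocking on the global pool; the sleep stands in for a real blocking call:

import scala.concurrent.{Future, blocking}
import scala.concurrent.ExecutionContext.Implicits.global

// The blocking { ... } marker tells the global ForkJoinPool that this task is about
// to block, so it may spawn a compensating thread. That avoids deadlocks, but with
// many such tasks in flight the pool can grow far beyond the number of cores.
val result: Future[String] = Future {
  blocking {
    Thread.sleep(1000) // stands in for a long blocking call
    "done"
  }
}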
From my practice, blocking + ForkJoinPool may lead to continuous and uncontrollable creation of threads if you have a lot of messages to process and each one requires long blocking (which also means it holds on to some memory for that time). ForkJoinPool creates a new thread to compensate for the "managed blocked" one, regardless of MaxThreadCount; say hello to hundreds of threads in VisualVM. And it almost kills backpressure, as there is always a place for a task in the pool's queue (if your backpressure is based on ThreadPoolExecutor's policies). Performance then gets killed by both new-thread allocation and garbage collection.
So:
it's good when the message rate is not much higher than 1/blocking_time, as it allows you to use the full power of your threads. Some smart backpressure might help to slow down incoming messages.
It's pointless if a task actually uses the CPU during blocking {} (no locks), as it will just raise the thread count above the number of real cores in the system.
And it's bad for any other case - you should use a separate fixed thread pool (and maybe polling) then; a sketch of such a pool follows below.
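A minimal sketch of such a dedicated fixed pool; the size of 10 is an arbitrary example:

import java.util.concurrent.Executors
import scala.concurrent.{ExecutionContext, Future}

// A dedicated fixed-size pool: the number of threads caps concurrency instead of
// relying on managed blocking to grow the global pool.
val blockingEc: ExecutionContext =
  ExecutionContext.fromExecutor(Executors.newFixedThreadPool(10))

val result: Future[Int] = Future {
  Thread.sleep(1000) // long blocking call; no blocking {} hint needed here
  42
}(blockingEc)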
P.S. blocking is hidden inside Await.result, so it's not always obvious. In our project someone had done such an Await inside an underlying worker actor.

Using Scala Akka framework for blocking CLI calls

I'm relatively new to Akka & Scala, but I would like to use Akka as a generic framework to pull together information from various web tools and CLI commands.
I understand the general principle that in an actor model it is highly desirable not to have the actors block. And in the case of HTTP requests, there are async HTTP clients (such as Spray) which mean that I can handle the requests asynchronously within the actor framework.
However, I'm unsure what is the best approach when combining actors with existing blocking API calls such as the scala ProcessBuilder/ProcessIO libraries. In terms of issuing these CLI commands I expect a relatively small amount of concurrency, e.g. perhaps executing a max of 10 concurrent CLI invocations on a 12 core machine.
Is it better to have a single actor managing these CLI commands, farming the actual work off to Futures that are created as needed? Or would it be cleaner just to maintain a set of separate actors backed by a PinnedDispatcher? Or something else?
From the Akka documentation ( http://doc.akka.io/docs/akka/snapshot/general/actor-systems.html#Blocking_Needs_Careful_Management ):
"
Blocking Needs Careful Management
In some cases it is unavoidable to do blocking operations, i.e. to put a thread to sleep for an indeterminate time, waiting for an external event to occur. Examples are legacy RDBMS drivers or messaging APIs, and the underlying reason is typically that (network) I/O occurs under the covers. When facing this, you may be tempted to just wrap the blocking call inside a Future and work with that instead, but this strategy is too simple: you are quite likely to find bottle-necks or run out of memory or threads when the application runs under increased load.
The non-exhaustive list of adequate solutions to the “blocking problem” includes the following suggestions:
Do the blocking call within an actor (or a set of actors managed by a router [Java, Scala]), making sure to configure a thread pool which is either dedicated for this purpose or sufficiently sized.
Do the blocking call within a Future, ensuring an upper bound on the number of such calls at any point in time (submitting an unbounded number of tasks of this nature will exhaust your memory or thread limits).
Do the blocking call within a Future, providing a thread pool with an upper limit on the number of threads which is appropriate for the hardware on which the application runs.
Dedicate a single thread to manage a set of blocking resources (e.g. a NIO selector driving multiple channels) and dispatch events as they occur as actor messages.
The first possibility is especially well-suited for resources which are single-threaded in nature, like database handles which traditionally can only execute one outstanding query at a time and use internal synchronization to ensure this. A common pattern is to create a router for N actors, each of which wraps a single DB connection and handles queries as sent to the router. The number N must then be tuned for maximum throughput, which will vary depending on which DBMS is deployed on what hardware."
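As a rough illustration of the first suggestion applied to CLI calls, here is a sketch of a small router of blocking workers; the message type, pool size, and the "cli-dispatcher" configuration entry are assumptions, not anything prescribed by the docs:

import akka.actor.{Actor, ActorSystem, Props}
import akka.routing.RoundRobinPool
import scala.sys.process._

// Illustrative message type for running an external command.
case class RunCommand(cmd: Seq[String])

// Each worker blocks while the external process runs, so the pool size
// is the effective concurrency limit for CLI invocations.
class CliWorker extends Actor {
  def receive: Receive = {
    case RunCommand(cmd) =>
      val output = Process(cmd).!! // blocking call to the external command
      sender() ! output
  }
}

val system = ActorSystem("cli")
// Ten workers behind one router, all running on a dedicated dispatcher named
// "cli-dispatcher" (assumed to be defined in application.conf).
val cliRouter = system.actorOf(
  RoundRobinPool(10).props(Props(new CliWorker).withDispatcher("cli-dispatcher")),
  "cli-router")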

How to limit concurrency when using actors in Scala?

I'm coming from Java, where I'd submit Runnables to an ExecutorService backed by a thread pool. It's very clear in Java how to set limits to the size of the thread pool.
I'm interested in using Scala actors, but I'm unclear on how to limit concurrency.
Let's just say, hypothetically, that I'm creating a web service which accepts "jobs". A job is submitted with POST requests, and I want my service to enqueue the job then immediately return 202 Accepted — i.e. the jobs are handled asynchronously.
If I'm using actors to process the jobs in the queue, how can I limit the number of simultaneous jobs that are processed?
I can think of a few different ways to approach this; I'm wondering if there's a community best practice, or at least, some clearly established approaches that are somewhat standard in the Scala world.
One approach I've thought of is having a single coordinator actor which would manage the job queue and the job-processing actors; I suppose it could use a simple int field to track how many jobs are currently being processed. I'm sure there'd be some gotchas with that approach, however, such as making sure to track when an error occurs so as to decrement the number. That's why I'm wondering if Scala already provides a simpler or more encapsulated approach to this.
BTW I tried to ask this question a while ago but I asked it badly.
Thanks!
I'd really encourage you to have a look at Akka, an alternative Actor implementation for Scala.
http://www.akkasource.org
Akka already has a JAX-RS[1] integration and you could use that in concert with a LoadBalancer[2] to throttle how many actions can be done in parallel:
[1] http://doc.akkasource.org/rest
[2] http://github.com/jboner/akka/blob/master/akka-patterns/src/main/scala/Patterns.scala
You can override the system properties actors.maxPoolSize and actors.corePoolSize which limit the size of the actor thread pool and then throw as many jobs at the pool as your actors can handle. Why do you think you need to throttle your reactions?
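A rough sketch of what overriding those properties might look like (they can also be passed as -D JVM flags); the exact values are illustrative:

// Old-style scala.actors: the scheduler's thread pool reads these system
// properties, so set them before the first actor is started.
System.setProperty("actors.corePoolSize", "4")
System.setProperty("actors.maxPoolSize", "8")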
You really have two problems here.
The first is keeping the thread pool used by actors under control. That can be done by setting the system property actors.maxPoolSize.
The second is runaway growth in the number of tasks that have been submitted to the pool. You may or may not be concerned with this one; however, it is fully possible to trigger failure conditions such as out-of-memory errors, and in some cases potentially more subtle problems, by generating too many tasks too fast.
Each worker thread maintains a deque of tasks. The deque is implemented as an array that the worker thread will dynamically enlarge up to some maximum size. In 2.7.x the deque can grow quite large, and I've seen that trigger out-of-memory errors when combined with lots of concurrent threads. The maximum deque size is smaller in 2.8. The deque can also fill up.
Addressing this problem requires you to control how many tasks you generate, which probably means some sort of coordinator as you've outlined. I've encountered this problem when the actors that initiate a kind of data-processing pipeline are much faster than the ones later in the pipeline. To control the process I usually have the actors later in the chain ping back the actors earlier in the chain every X messages, and have the ones earlier in the chain stop after X messages and wait for the ping back. You could also do it with a more centralized coordinator, as sketched below.
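A minimal sketch of such a coordinator using Akka classic actors (which the first answer recommends); the message names and the concurrency limit are illustrative, and error handling (decrementing on failures) is left out:

import akka.actor.{Actor, ActorRef}
import scala.collection.immutable.Queue

// Hypothetical messages.
case class Job(payload: String)
case object JobDone

// Forwards at most maxConcurrent jobs to the worker (or a router of workers) at a
// time and queues the rest. Workers are expected to reply with JobDone when done;
// since the coordinator is the sender of the Job, a worker can simply do
// sender() ! JobDone.
class JobCoordinator(worker: ActorRef, maxConcurrent: Int) extends Actor {
  private var inFlight = 0
  private var pending = Queue.empty[Job]

  def receive: Receive = {
    case job: Job =>
      if (inFlight < maxConcurrent) { inFlight += 1; worker ! job }
      else pending = pending.enqueue(job)
    case JobDone =>
      if (pending.nonEmpty) {
        // One job finished, another starts: inFlight stays the same.
        val (next, rest) = pending.dequeue
        pending = rest
        worker ! next
      } else {
        inFlight -= 1
      }
  }
}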