Blocking IO in Akka - scala

I'm doing some Akka lately and wonder: can I do blocking I/O in Akka without getting into big trouble? Let's say we have an actor which does some blocking I/O because it uses a legacy library or for any other reason. Couldn't I just use a special dispatcher with a reasonably sized ThreadPool for those actors and do blocking I/O without blocking all other actors, because they run on a different dispatcher?
What are the downsides of this? And what would be the optimal way to call a 3rd party HTTP-API from an actor?

Doing blocking IO is a bad idea in general, and in a reactive multithreaded environment in particular, so your first step is to try to avoid it altogether; that means looking into using AsyncHttpClient or HttpAsyncClient.
If that does not work, you can at least mitigate the risks by giving the blocking actors their own threads. This will of course be costly and you still risk filling up their mailboxes, but such is the choice of using blocking IO.
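As a rough sketch of that mitigation, assuming classic (untyped) Akka — the dispatcher name, pool size, and LegacyHttpActor below are illustrative, not anything canonical:

import akka.actor.{Actor, ActorSystem, Props}
import com.typesafe.config.ConfigFactory

// Illustrative actor wrapping a legacy, blocking HTTP call.
class LegacyHttpActor extends Actor {
  def receive: Receive = {
    case url: String =>
      // Blocking call: it parks a thread, but only one from the dedicated pool below.
      val body = scala.io.Source.fromURL(url).mkString
      sender() ! body
  }
}

object BlockingIoExample extends App {
  // A dedicated, fixed-size pool for blocking work; name and size are made up.
  val config = ConfigFactory.parseString(
    """
    blocking-io-dispatcher {
      type = Dispatcher
      executor = "thread-pool-executor"
      thread-pool-executor { fixed-pool-size = 16 }
      throughput = 1
    }
    """).withFallback(ConfigFactory.load())

  val system = ActorSystem("app", config)

  // Only these actors run on the blocking pool; everything else keeps the default dispatcher.
  val worker = system.actorOf(
    Props[LegacyHttpActor]().withDispatcher("blocking-io-dispatcher"), "legacy-http")
}

Messages sent to worker are processed on the dedicated pool, so even if all sixteen threads are stuck in a slow HTTP call, actors on the default dispatcher keep running.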
You also might want to look at the IO Actor module for a more raw interface to network IO.
Hope any of this helps,
Cheers,
√

Related

What is the standard way to use Kotlin+Sockets+Coroutines?

Is it currently possible to use coroutines in Kotlin for networking?
I could find examples with threads but not with coroutines.
You may use the Dispatchers.IO dispatcher to work with sockets from coroutines the same way you would from threads. Sockets are blocking I/O, so each socket takes a whole thread, and this dispatcher can launch a LOT of threads.
Also there are some non-blocking I/O libraries for Java, and you may find adapters from them to Kotlin Coroutines API (for example, you may use this proxy between Netty and Coroutines).

What is the essential difference between Akka and ThreadPool+BlockingQueue in ONE process?

We know Akka is one implementation of the actor pattern. Without Akka, I usually implement a simple actor pattern using a ThreadPool + BlockingQueue: messages are offered into the queue, and the workers (actors) take messages from the queue and do what they should do. Of course, this kind of implementation only works within just ONE process.
So, within one process:
What's the essential difference between these two (Akka vs. ThreadPool+BlockingQueue)?
Moreover, what's the difference between the actor pattern and the producer-consumer model?
The actor model is indeed quite similar to the producer-consumer model (P-C).
However, if you use a blocking queue with P-C your application won't be completely non-blocking and asynchronous. The promise of actor model and Akka is that all messages are sent asynchronously and don't block the sender.
Another aspect of it is that managing these queues gets quite cumbersome once you have many consumers and producers. With actors you simply send a message and don't have to think about these low-level details. Under the hood, Akka keeps a message queue (the mailbox) per actor, with a dispatcher assigning actors to the thread pool to process those messages.
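For illustration, a minimal sketch of that point using the classic Akka API (the Greet message and Greeter actor are made up): the ! send returns immediately, whereas a BlockingQueue's put/take can park both the producer and the consumer threads.

import akka.actor.{Actor, ActorSystem, Props}

// Illustrative message and actor; the point is that `!` never blocks the sender.
final case class Greet(name: String)

class Greeter extends Actor {
  def receive: Receive = {
    case Greet(name) => println(s"hello, $name")  // processed one message at a time, off the sender's thread
  }
}

object Main extends App {
  val system  = ActorSystem("demo")
  val greeter = system.actorOf(Props[Greeter](), "greeter")
  greeter ! Greet("world")  // fire-and-forget: enqueued in the actor's mailbox, sender continues immediately
}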
It's much easier to use Akka to achieve a highly performant and resilient application than to code it yourself. You get fault tolerance, resource management, location transparency, routing, distributed and async processing, and hierarchical supervision out of the box. Not to mention other frameworks and libraries leveraging these features to give you even more (Reactive Streams, Akka HTTP, etc.). There are lots of patterns already developed for you there, so why bother with your own?

Akka IO app consumes 100% cpu

I am trying to profile an Akka app that is constantly at or near 100% CPU usage. I took a CPU sample using VisualVM. The sample indicates that there are 2 threads that make up 98.9% of the CPU usage, and 79% of the CPU time was spent in a method of sun.misc.Unsafe. Other answers on SO say that this just means a thread is waiting, but in the native implementation layer (outside of the JVM).
In questions similar to mine, people have been told to look elsewhere without being given specifics. Where should I look to figure out what's causing the cpu spike?
The application is a server that primarily uses Akka IO to listen for TCP socket connections.
Without seeing any of the source code, or even knowing what IO channel you are talking about (sockets, files, etc), there is very little insight that anyone here can give you.
I do have some rather general suggestions though.
First, you should be using reactive techniques and reactive IO in your application. This issue could be occurring because you are polling the status of some resource in a tight loop, or using a blocking call when you should be using a reactive one. This tends to be an anti-pattern and a performance drain exactly because you can spend CPU cycles doing nothing but "actively waiting". I recommend double checking for:
resource polling
blocking calls
system calls
disk flushes
waiting on a Future when it would be appropriate to map it instead (see the sketch after this list)
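For that last item, a minimal sketch of the difference, with a made-up fetchUser standing in for any Future-returning call:

import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object FutureSketch extends App {
  // Made-up async call; stands in for any Future-returning API (database, HTTP, ...).
  def fetchUser(id: Long): Future[String] = Future(s"user-$id")

  // Anti-pattern: Await parks the calling thread until the Future completes.
  val name: String = Await.result(fetchUser(1L), 5.seconds)

  // Preferred: register a continuation and hand the thread straight back to the pool.
  val greeting: Future[String] = fetchUser(1L).map(n => s"hello, $n")
}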
Second, you should NOT be using mutexes or other thread synchronization in your application. If you are, then you might be suffering from a live-lock. Unlike dead-locks, live-locks manifest with symptoms like 100% CPU usage as threads constantly lock and unlock concurrency primitives in an attempt to "catch them all". Wikipedia has a nice technical description of what a live-lock looks like. With Akka in place you shouldn't have any need for mutexes or any thread synchronization primitives. If you are using them, then you probably need to re-design your application.
Third, you should be throttling IO (as well as error handling like reconnection attempts). This issue could be occurring because your system lacks effective throttling. Often with data channels we leave their bandwidth unconstrained. However this can become an issue when that channel reaches 100% saturation and begins to steal resources from other parts of the system. This can happen, for example, if you are moving large files around without a reasonable limit.
Alternatively, you also need to throttle connection retries when you encounter any errors, rather than retrying immediately. Lots of systems will attempt to reconnect to a server if they lose their connection. While normally desirable, this can lead to problematic behavior if you use a naive reconnection strategy. For example, imagine a network client that was written this way:
class MyClient extends Client {
  // ... other code ...

  // Called on every disconnect, for ANY reason.
  def onDisconnect(): Unit = {
    reconnect()  // retries immediately, with no delay or back-off
  }
}
Every time the Client disconnects, for ANY reason, it will attempt to reconnect. You can see how this would cause a tight loop between the error-handling code and the Client if the WiFi cut out or a network cable was unplugged.
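A hedged sketch of one way to break that loop, assuming a classic Akka actor and a made-up connect() call: schedule each retry with the scheduler and grow the delay exponentially instead of reconnecting inline.

import akka.actor.Actor
import scala.concurrent.duration._

// Hypothetical messages; the real protocol depends on your client library.
case object Disconnected
case object Reconnect

class ThrottledClient extends Actor {
  import context.dispatcher
  private var backoff = 1.second  // grows on every failure, capped below

  def receive: Receive = {
    case Disconnected =>
      // Schedule the retry instead of reconnecting in a tight loop.
      context.system.scheduler.scheduleOnce(backoff, self, Reconnect)
      backoff = (backoff * 2).min(1.minute)
    case Reconnect =>
      connect()  // assumed to send Disconnected back to self if it fails
  }

  private def connect(): Unit = ???  // wire up the real client here
}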
Fourth, your application should have well-defined data sources and sinks. Your issue could be caused by a "data loop", that is, some set of Akka actors that just send messages to the next actor in the chain, with the last actor sending the message back to the first actor in the chain. Make sure you have a clear and definite way for messages to enter and exit your system.
Fifth, use appropriate profiling and instrumentation for your application. Instrument your application using Kamon or Coda Hale's Metrics library.
Finding an appropriate profiler will be more difficult, since we as a community have far to go in developing mature tools for reactive applications. Personally I have found VisualVM useful, but not always overwhelmingly helpful for detecting code paths that are CPU bound. The issue is that sampling profilers can only collect data when the JVM reaches a safepoint, which has the potential to bias certain code paths. The fix is to use a profiler that supports AsyncGetCallTrace.
Best of luck! And please add more context if you can.

How is ReactiveMongo implemented so that it is considered non-blocking?

Reading the documentation about the Play Framework and ReactiveMongo leads me to believe that ReactiveMongo works in such a way that it uses few threads and never blocks.
However, it seems that the communication from the Play application to the Mongo server would have to happen on some thread somewhere. How is this implemented? Links to the source code for Play, ReactiveMongo, Akka, etc. would also be very appreciated.
The Play Framework includes some documentation about this on this page about thread pools. It starts off:
Play framework is, from the bottom up, an asynchronous web framework. Streams are handled asynchronously using iteratees. Thread pools in Play are tuned to use fewer threads than in traditional web frameworks, since IO in play-core never blocks.
It then talks a little bit about ReactiveMongo:
The most common place that a typical Play application will block is when it’s talking to a database. Unfortunately, none of the major databases provide asynchronous database drivers for the JVM, so for most databases, your only option is to use blocking IO. A notable exception to this is ReactiveMongo, a driver for MongoDB that uses Play’s Iteratee library to talk to MongoDB.
Following is a note about using Futures:
Note that you may be tempted to therefore wrap your blocking code in Futures. This does not make it non blocking, it just means the blocking will happen in a different thread. You still need to make sure that the thread pool that you are using there has enough threads to handle the blocking.
There is a similar note in the Play documentation on the page Handling Asynchronous Results:
You can’t magically turn synchronous IO into asynchronous by wrapping it in a Future. If you can’t change the application’s architecture to avoid blocking operations, at some point that operation will have to be executed, and that thread is going to block. So in addition to enclosing the operation in a Future, it’s necessary to configure it to run in a separate execution context that has been configured with enough threads to deal with the expected concurrency.
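In other words (a sketch, not actual Play or ReactiveMongo code; the "blocking-io-dispatcher" name and findUser are made up), the blocking call still blocks, it just blocks a thread from a pool sized for that purpose:

import akka.actor.ActorSystem
import scala.concurrent.{ExecutionContext, Future}

object WrappedBlockingCall {
  // Stand-in for a synchronous driver call (e.g. JDBC); it really does block.
  def legacyBlockingQuery(id: Long): String = ???

  def findUser(system: ActorSystem, id: Long): Future[String] = {
    // A dispatcher configured with enough threads for the expected concurrency.
    implicit val blockingEc: ExecutionContext =
      system.dispatchers.lookup("blocking-io-dispatcher")

    // Not magically non-blocking: the query still blocks, but only a thread from that pool.
    Future(legacyBlockingQuery(id))
  }
}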
The documentation seems to be saying that ReactiveMongo is non-blocking, so you don't have to worry about it eating up a lot of the threads in your thread pool. But ReactiveMongo has to communicate with the Mongo server somewhere.
How is this communication implemented so that Mongo doesn't use up threads from Play's default thread pool?
Once again, links to the specific files in Play, ReactiveMongo, Akka, etc, would be very appreciated.
Yes, indeed, you still need to use threads to perform any kind of work, including communication with the database. What's important is how exactly this communication happens.
ReactiveMongo "does not use threads" in the sense that it does not use blocking I/O. The usual Java I/O facilities like java.io.InputStream are blocking; this means that reading from such an InputStream or writing to an OutputStream blocks the thread until the "other side" provides the required data or is ready to accept it. For network communication this means that threads will be blocked.
However, Java provides the NIO API, which supports non-blocking and asynchronous I/O. I don't want to get into its details right now, but the basic idea, naturally, is that non-blocking I/O allows threads that need to exchange data with the outside world not to block: for example, such threads can poll the data source to check whether data is available, and if there is none, they return to the thread pool and can be used for other tasks. Down at the bottom, these facilities are provided by the underlying OS.
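As a very rough sketch of that idea (this is plain Java NIO used from Scala, not ReactiveMongo's actual code; the host, port, and buffer size are made up):

import java.net.InetSocketAddress
import java.nio.ByteBuffer
import java.nio.channels.{SelectionKey, Selector, SocketChannel}

// One thread multiplexing a single connection; a real reactor would register many channels.
object NioSketch extends App {
  val selector = Selector.open()
  val channel  = SocketChannel.open()
  channel.configureBlocking(false)                            // reads and writes now never block
  channel.connect(new InetSocketAddress("localhost", 27017))  // assumed local Mongo port
  channel.register(selector, SelectionKey.OP_CONNECT)

  while (true) {
    selector.select()                                         // sleeps until some channel is ready
    val keys = selector.selectedKeys().iterator()
    while (keys.hasNext) {
      val key = keys.next(); keys.remove()
      if (key.isConnectable && channel.finishConnect())
        key.interestOps(SelectionKey.OP_READ)                 // now ask to be told when data arrives
      else if (key.isReadable) {
        val buf = ByteBuffer.allocate(4096)
        channel.read(buf)                                     // returns immediately, possibly with 0 bytes
        // hand `buf` off to a callback or an actor here instead of blocking for more
      }
    }
  }
}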
The exact implementation details of non-blocking I/O are usually hidden inside high-level libraries like Netty, because the raw API is not at all nice to use. Netty (which is exactly the library ReactiveMongo uses), for example, provides a nice asynchronous, callback-like API which is really easy to use but is also powerful and expressive enough to allow building complex I/O-heavy applications with high throughput.
So, ReactiveMongo uses Netty to talk with Mongo database server, and because Netty is an implementation of asynchronous network I/O, ReactiveMongo really does not need to block threads for a long time.

The Scala way to use one actor per socket connection

I am wondering how it is possible to avoid one socket connection per thread in Scala. I have thought a lot about it, but I always end up with some code that listens for incoming data on each client connection.
The problem is that I want to develop an application which should simultaneously handle perhaps a couple of thousand connections. However, I of course don't want to create a thread for each connection, because of the lack of scalability and the cost of context switching.
What would be the "right" way to do this? In my world it should be possible to have one actor per connection without the need to block one thread per actor.
In the book "Programming Scala" the authors used a library called naggati which provides a framework that combines NIO and actors, http://programming-scala.labs.oreilly.com/ch09.html.
I have an application that mixes actors with non-blocking sockets (i.e. NIO). The way I have done this is to have a dedicated IO thread, which sends messages to actors (in much the same way it would delegate work to a thread pool in a Java system) using the reactor pattern.
Obviously, using the old blocking sockets you are restricted to one thread per connection. An actor could handle this, but of course it places a restriction on the number of connections which can be handled simultaneously.
In the case of a single IO thread, this is a bottleneck in theory but not much of one in practice (in our observations), as the IO thread is doing computationally non-intensive work. There are plenty of good discussions of the NIO reactor pattern to be found.
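For reference, a hedged sketch of the same shape using the Akka IO module mentioned in the first answer (classic API; the port and the echo behaviour are made up): every accepted connection gets its own lightweight handler actor, and no thread is parked per connection.

import java.net.InetSocketAddress
import akka.actor.{Actor, ActorSystem, Props}
import akka.io.{IO, Tcp}

// One lightweight actor per accepted connection; all of them share the dispatcher's
// small thread pool, so nothing blocks waiting for bytes.
class Listener extends Actor {
  import Tcp._
  import context.system

  IO(Tcp) ! Bind(self, new InetSocketAddress("0.0.0.0", 9000))  // made-up port

  def receive: Receive = {
    case _: Bound => // now listening
    case Connected(remote, _) =>
      val handler = context.actorOf(Props[ConnectionHandler]())
      sender() ! Register(handler)  // route this connection's events to its own actor
  }
}

class ConnectionHandler extends Actor {
  import Tcp._
  def receive: Receive = {
    case Received(data) => sender() ! Write(data)  // echo back; purely illustrative
    case PeerClosed     => context.stop(self)
  }
}

object Server extends App {
  val system = ActorSystem("tcp-demo")
  system.actorOf(Props[Listener](), "listener")
}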