Importance of Akka Routers - scala

I have a lingering doubt about the importance of Akka routers. I have used them in the project I am currently working on, but I am a little confused about their value. Of the two approaches below, which is more beneficial?
1. Having routers and routees.
2. Creating as many actors as needed.
I understand that a router will distribute incoming messages among its routees based on its routing strategy, and that we can attach a supervisor strategy to the router.
I have also understood that actors are lightweight and that creating many of them is not an overhead, so we could create an actor for each incoming message and kill it, if necessary, after the processing is completed.
So which of the above designs is better? In other words, in which cases does (1) have an advantage over (2), or vice versa?

Good question. I had similar doubts before I read the Akka documentation. Here are the reasons:
1. Efficiency. From the docs:
On the surface routers look like normal actors, but they are actually implemented differently. Routers are designed to be extremely efficient at receiving messages and passing them quickly on to routees.

A normal actor can be used for routing messages, but an actor's single-threaded processing can become a bottleneck. Routers can achieve much higher throughput with an optimization to the usual message-processing pipeline that allows concurrent routing. This is achieved by embedding routers' routing logic directly in their ActorRef rather than in the router actor. Messages sent to a router's ActorRef can be immediately routed to the routee, bypassing the single-threaded router actor entirely.

The cost to this is, of course, that the internals of routing code are more complicated than if routers were implemented with normal actors. Fortunately all of this complexity is invisible to consumers of the routing API. However, it is something to be aware of when implementing your own routers.
2. Default implementations of multiple routing strategies. You can always write your own, but it can get tricky: you have to take supervision, recovery, load balancing, remote deployment, etc. into account.
3. Akka router patterns will be familiar to other Akka users. If you roll out custom routing, everyone will have to spend time understanding all its corner cases and implications (+ testing? :)).
TL;DR If you don't care too much about efficiency and it's easier for you to spawn new actors, then go for it. Otherwise, use routers.
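
For illustration, here is a minimal sketch of both approaches with the classic Akka API (the Worker class and message names are made up for the example):

    import akka.actor.{Actor, ActorSystem, Props}
    import akka.routing.RoundRobinPool

    class Worker extends Actor {
      def receive = {
        case msg => println(s"${self.path.name} handled $msg")
      }
    }

    object RoutingDemo extends App {
      val system = ActorSystem("demo")

      // (1) A pool router: five long-lived routees, round-robin distribution.
      val router = system.actorOf(RoundRobinPool(5).props(Props[Worker]), "workers")
      (1 to 10).foreach(i => router ! s"job-$i")

      // (2) One actor per task: spawn one, send it work, stop it when done.
      (1 to 10).foreach { i =>
        val worker = system.actorOf(Props[Worker])
        worker ! s"job-$i"
        // the worker would stop itself (context.stop(self)) after processing
      }
    }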

Related

What's the essential difference between Akka and ThreadPool+BlockingQueue in ONE process?

We know Akka is one implementation of the actor pattern. Without Akka, I usually implement a simple actor pattern using a ThreadPool plus a BlockingQueue: messages are offered onto the queue, and the workers (actors) take messages from the queue and do what they should do. Of course, this kind of implementation can live in only ONE process.
So, restricting ourselves to one process:
What's the essential difference between these two (Akka vs. ThreadPool+BlockingQueue)?
Moreover, what's the difference between the actor pattern and the producer-consumer model?
The actor model is indeed quite similar to the producer-consumer model (P-C).
However, if you use a blocking queue with P-C, your application won't be completely non-blocking and asynchronous. The promise of the actor model and Akka is that all messages are sent asynchronously and do not block the sender.
Another aspect is that managing those queues gets quite cumbersome once you have many consumers and producers. With actors you simply send a message and don't have to think about such low-level details. Under the hood, Akka keeps a message queue (the mailbox) per actor, with a dispatcher assigning actors to a thread pool to process those messages.
It's much easier to use Akka to build a highly performant and resilient application than to code it yourself. You get fault tolerance, resource management, location transparency, routing, distributed and async processing, and hierarchical supervision out of the box. Not to mention other frameworks and libraries that leverage these features to give you even more (Reactive Streams, Akka HTTP, etc.). Lots of patterns have already been developed for you there, so why bother with your own?
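
As a rough sketch of the difference (illustrative only, names invented for the example):

    import java.util.concurrent.{Executors, LinkedBlockingQueue}

    // Hand-rolled "actor": a blocking queue drained by pool threads.
    // take() blocks the worker thread while it waits for a message.
    object QueueVersion extends App {
      val queue = new LinkedBlockingQueue[String]()
      val pool  = Executors.newFixedThreadPool(2)
      pool.execute(() => while (true) println(s"processed ${queue.take()}"))
      queue.put("hello")
    }

    import akka.actor.{Actor, ActorSystem, Props}

    // Akka version: the mailbox and dispatcher replace the queue and pool.
    class Processor extends Actor {
      def receive = { case msg => println(s"processed $msg") }
    }

    object AkkaVersion extends App {
      val system    = ActorSystem("demo")
      val processor = system.actorOf(Props[Processor], "processor")
      processor ! "hello" // asynchronous; the sender is never blocked
    }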

What happens to messages that come to a server implements stream processing after the source reached its bound?

I'm learning Akka Streams, but this is obviously relevant to any streaming framework :)
Quoting the Akka documentation:
Reactive Streams is just to define a common mechanism of how to move data across an asynchronous boundary without losses, buffering or resource exhaustion
Now, from what I understand: before streams, taking an HTTP server as an example, requests would come in, and when the receiver wasn't finished with one request, the new incoming requests would be collected in a buffer holding the waiting requests. The problem is that this buffer has an unknown size, and at some point, if the server is overloaded, we can lose requests that were waiting.
So stream processing came into play and bounded this buffer to make it controllable: we can predefine the number of messages (requests, in my example) we want to have in line and handle them one at a time.
My question: if we implement a server whose source can hold 3 messages at most, what happens when the 4th arrives?
I mean, when another server calls us and we are already handling 3 requests, what will happen to its request?
What you're describing is not actually the main problem that Reactive Streams implementations solve.
Backpressure in terms of the number of requests is solved with regular networking tools. For example, in Java you can configure the thread pool of a networking library (for example, Netty) to some parallelism level, and the library will take care of accepting as many requests as it can. Or, if you use the synchronous sockets API, it is even simpler: you can postpone calling accept() on the server socket until all of the currently connected clients are served. In either case, there is no "buffer" on either side; it's just that until the server accepts a connection, the client will be blocked (either inside a system call for blocking APIs, or in an event loop for async APIs).
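
A crude sketch of the synchronous-sockets case (port number arbitrary):

    import java.net.ServerSocket

    object AcceptWhenReady extends App {
      val server = new ServerSocket(9000)
      while (true) {
        // Until accept() is called, further clients wait in the OS-level
        // backlog; no application buffer grows without bound.
        val client = server.accept()
        client.getOutputStream.write("handled\n".getBytes)
        client.close() // serve each client fully before accepting the next
      }
    }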
What Reactive Streams implementations solve is how to handle backpressure inside a higher-level data pipeline. Reactive streams implementations (e.g. akka-streams) provide a way to construct a pipeline of data in which, when the consumer of the data is slow, the producer will slow down automatically as well, and this would work across any kind of underlying transport, be it HTTP, WebSockets, raw TCP connections or even in-process messaging.
For example, consider a simple WebSocket connection, where the client sends a continuous stream of information (e.g. data from some sensor), and the server writes this data to some database. Now suppose that the database on the server side becomes slow for some reason (networking problems, disk overload, whatever). The server now can't keep up with the data the client sends, that is, it cannot save it to the database in time before the new piece of data arrives. If you're using a reactive streams implementation throughout this pipeline, the server will signal to the client automatically that it cannot process more data, and the client will automatically tweak its rate of producing in order not to overload the server.
Naturally, this can be done without any Reactive Streams implementation, e.g. by manually controlling acknowledgements. However, like with many other libraries, Reactive Streams implementations solve this problem for you. They also provide an easy way to define such pipelines, and they usually have interfaces for various external systems like databases. In particular, such libraries may implement backpressure at the lowest level, down to the TCP connection, which may be hard to do manually.
As for Reactive Streams itself, it is just a description of an API which can be implemented by a library. It defines common terms and behavior, and it allows such libraries to be interchangeable or to interact easily: e.g. you can connect an akka-streams pipeline to a Monix pipeline using the interfaces from the specification, and the combined pipeline will work seamlessly, supporting all of the backpressure features of Reactive Streams.
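
To make the akka-streams case concrete, here is a minimal sketch (assuming Akka 2.6+, where an implicit ActorSystem provides the materializer; the sleep stands in for the slow database write from the example above):

    import akka.actor.ActorSystem
    import akka.stream.scaladsl.{Sink, Source}

    object BackpressureDemo extends App {
      implicit val system: ActorSystem = ActorSystem("demo")

      Source(1 to 1000)                         // a fast producer
        .map { n => println(s"produced $n"); n }
        .runWith(Sink.foreach { n =>
          Thread.sleep(100)                     // simulate a slow database write
          println(s"saved $n")
        })
      // Demand flows upstream: once the small internal buffers fill,
      // the source emits only as fast as the sink consumes.
    }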

Multiple actor systems for an application

This article talks about how we should not create 'too' many actor systems. But the docs say:
An ActorSystem is a heavyweight structure that will allocate 1…N Threads, so create one per logical application.
I am unable to understand what the real issue is with using multiple actor systems in an application. Also, is it possible for actors from different actor systems to message each other?
There is no issue with using multiple systems. There is a potential issue with creating too many of them. The reason is that with an ActorSystem comes some non-negligible overhead - mainly because each one would allocate its own fork-join pool.
I recommend you read this blogpost for more info.
Actors from different ActorSystems can message each other, but AFAIK this needs to happen through remoting. This counts as yet another reason why system segregation doesn't really make sense as a local pattern.

Akka.Net work queues

I have an existing distributed computing framework built on top of MassTransit and RabbitMQ. There is essentially a manager which responds with work based on requests. Each worker will take a certain number of items based on the physical machine's specs. The worker then sends completion messages when done. It works rather well and seems to be highly scalable, since the only link is the service bus.
I recently evaluated Akka.NET to see whether it would be a simpler way to implement the same pattern. After looking at it, I was somewhat confused about what exactly it is used for. It seems that if I wanted to do something similar, the manager would have to know about each worker ahead of time and directly send it work.
I believe I am missing something because that model doesn't seem to scale well.
Service buses like MassTransit are built as reliable messaging services; ensuring message delivery is the primary concern there.
Actor frameworks also use messages, but that is the only similarity. There, messaging is only a means to an end, and it is not as reliable as with service buses. Actor frameworks are oriented more toward building high-performance, easily distributed system topologies, centered around actors as the primary unit of work. Conceptually, an actor is close to the Active Record pattern (though this is a great simplification). Actors are also very lightweight; you can have millions of them living in the memory of the executing machine.
When it comes to performance, Akka.NET is able to send over 30 million messages/sec on a single VM (tested on 8 cores), a lot more than any service bus, but the characteristics also differ significantly.
On the JVM we know that Akka clusters can grow to 2400 machines. Unfortunately, we were not able to test what the limits of the .NET implementation are.
You have to decide what you really need: a messaging library, an actor framework, or a combination of both.
I agree with @Horusiath's answer. In addition, I'd say that in most cases you can replace a service bus with the messaging layer of an actor model like Akka, but they are not in the same class.
Messaging is just one thing that Akka provides, and while it's a great feature, I wouldn't say it's the main one. When evaluating it as an alternative, you must first look at the benefits of the model itself and then check whether the messaging capabilities are good enough for your use case. You can still use a dedicated external service bus to distribute messages across different clusters and keep Akka.NET exchanging messages inside clusters, for example.
But the point is that if you decide to use Akka.NET, you won't be using it only for messaging.
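
For what it's worth, the pull-based manager/worker pattern from the question maps onto actors without the manager knowing the workers ahead of time. A rough sketch (in Scala with classic Akka for consistency with the rest of this page; message types invented for the example):

    import akka.actor.{Actor, ActorRef, ActorSystem, Props}

    case object WorkRequest
    case class Work(id: Int)
    case class Done(id: Int)

    class Manager extends Actor {
      private var queue = (1 to 100).toList
      def receive = {
        case WorkRequest if queue.nonEmpty =>
          sender() ! Work(queue.head) // hand out work only when asked
          queue = queue.tail
        case Done(id) => println(s"job $id finished")
      }
    }

    class Worker(manager: ActorRef) extends Actor {
      manager ! WorkRequest // workers announce themselves by pulling
      def receive = {
        case Work(id) =>
          // ... process the item, sized to this machine's capacity ...
          manager ! Done(id)
          manager ! WorkRequest // pull the next item when free
      }
    }

    object Demo extends App {
      val system  = ActorSystem("work")
      val manager = system.actorOf(Props[Manager], "manager")
      (1 to 4).foreach(_ => system.actorOf(Props(new Worker(manager))))
    }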

Akka events between local and remote actors

Using the event bus mechanism between actors in the same ActorSystem is straightforward, but I was wondering if there is a sanctioned method for doing so between:
Actors in different ActorSystems in the same JVM
Actors in different JVMs (via remoting)
It's fine to assume that I know the paths to the actors, but if there is a commonly used mechanism for discovering those kinds of things as well, I'd love to hear about it.
I think in this case you need to look at distributed publish-subscribe in a cluster, supposing you want to subscribe actors to events without being aware of the actors' locations. This link may prove useful.
This is a note from the official Akka documentation:
The event stream is a local facility, meaning that it will not distribute events to other nodes in a clustered environment (unless you subscribe a Remote Actor to the stream explicitly). If you need to broadcast events in an Akka cluster, without knowing your recipients explicitly (i.e. obtaining their ActorRefs), you may want to look into: Distributed Publish Subscribe in Cluster.
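
A minimal sketch of the Distributed Publish Subscribe approach (this assumes the akka-cluster-tools module and a running cluster; the topic name is arbitrary):

    import akka.actor.{Actor, ActorSystem, Props}
    import akka.cluster.pubsub.DistributedPubSub
    import akka.cluster.pubsub.DistributedPubSubMediator.{Publish, Subscribe}

    class Listener extends Actor {
      // Subscribe to a named topic; delivery works across cluster nodes.
      DistributedPubSub(context.system).mediator ! Subscribe("events", self)
      def receive = {
        case msg => println(s"received: $msg")
      }
    }

    class Publisher extends Actor {
      private val mediator = DistributedPubSub(context.system).mediator
      def receive = {
        case msg => mediator ! Publish("events", msg) // fan out to all subscribers
      }
    }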