I want to convert a set of actors into an FSM using Akka FSM. Currently the system is designed so that every actor knows what to do with the results of its action and which actor is next in the processing sequence.
Now I want to have dedicated actors which only do the things they should know about (and know nothing about the overall message routing), plus a central FSM which knows how to route messages and drive the transformation flow.
The client sends a request to the FSM actor; the FSM actor, on transition to the next state, sends a message to some actor in an onTransition block. That actor replies to the sender with some message, which is processed inside the FSM state until the request is finished.
So far everything looks good, but I'm not sure what will happen if multiple clients start interacting with the FSM actor. Will the "workflow" be recorded somewhere, so that flows from different clients won't collide at some point (e.g. the FSM actor receiving a message from another client instead of the originating one)?
Is it safe to have, say, 10 FSM actors behind a round-robin router, or do I need to create a new FSM actor for every client request and then kill it once finished?
Each Akka FSM actor has only one state at a time, so you can't use multiple FSM actors behind a round-robin router in this scenario. Consider creating a new FSM actor for every client request instead. There are other options (a shared multi-user non-Akka FSM, or a pool of FSM actors which may be "busy"), but creating an FSM per request should be the better solution because of the lightweight nature of Akka actors.
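As a rough sketch of the per-request approach (all names here - ProcessingFsm, FrontActor, the string messages - are made up for illustration, not taken from your system): a front actor spawns one FSM per incoming request, and the FSM stops itself when the flow is complete.

import akka.actor.{Actor, ActorRef, FSM, Props}

// Hypothetical states and data, purely for illustration
sealed trait State
case object Idle extends State
case object Working extends State

sealed trait Data
case object NoData extends Data
final case class Ctx(client: ActorRef) extends Data

class ProcessingFsm extends FSM[State, Data] {
  startWith(Idle, NoData)

  when(Idle) {
    case Event(request: String, NoData) =>
      // remember who asked, then drive the workflow for this one request
      goto(Working) using Ctx(sender())
  }

  when(Working) {
    case Event("done", Ctx(client)) =>
      client ! "finished"
      stop() // the per-request FSM stops itself once the flow completes
  }
}

class FrontActor extends Actor {
  def receive = {
    case request: String =>
      // one FSM per request, so concurrent clients never share workflow state
      context.actorOf(Props[ProcessingFsm]) forward request
  }
}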
Do you have any experience with Akka actor memory management/leaks? I have a module that uses Akka actors to communicate with other modules, but as time went by one of the modules went down because of heap memory size.
Is it necessary to send a PoisonPill to a child actor after it has finished? For every incoming request I'd like to create a new actor. Is it also necessary to send a PoisonPill to the child's own children, if it has any?
ps: I'm using Scala Akka
Thanks
Yes, every Actor you create needs to be stopped explicitly. This is typically done by calling context.stop(self) from within the Actor (if it can determine that it is finished with its task) or having the supervisor stop it using context.stop(child).
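A minimal sketch of both patterns, with hypothetical Worker/Parent actors and a Done message (none of these names come from your code):

import akka.actor.{Actor, Props}

case class Job(payload: String)
case object Done

class Worker extends Actor {
  def receive = {
    case Job(payload) =>
      // ... do the actual work here ...
      context.parent ! Done
      context.stop(self) // the worker decides it is finished and stops itself
  }
}

class Parent extends Actor {
  def receive = {
    case payload: String =>
      // one worker per request; it will stop itself when done
      context.actorOf(Props[Worker]) ! Job(payload)
    case Done =>
      // alternatively the parent could stop the child here instead:
      // context.stop(sender())
  }
}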
To prevent running out of memory you can use a bounded message queue, i.e. a bounded mailbox, on the receiving actor: http://doc.akka.io/docs/akka/snapshot/scala/mailboxes.html.
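As a rough sketch (the mailbox name "bounded-mailbox" and the capacity/timeout values are just illustrative), a bounded mailbox is declared in configuration and then attached to the receiving actor:

// In application.conf:
//   bounded-mailbox {
//     mailbox-type = "akka.dispatch.BoundedMailbox"
//     mailbox-capacity = 1000
//     mailbox-push-timeout-time = 10s
//   }

import akka.actor.{Actor, ActorSystem, Props}

class Receiver extends Actor {
  def receive = {
    case msg => // handle msg; excess messages are limited by the bounded mailbox
  }
}

object BoundedMailboxExample extends App {
  val system = ActorSystem("example")
  // attach the bounded mailbox configured above to the receiving actor
  val receiver = system.actorOf(Props[Receiver].withMailbox("bounded-mailbox"))
}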
To pick how you want to manage child actors (restart, stop, etc.) use a supervisor strategy: http://doc.akka.io/docs/akka/snapshot/general/supervision.html. The supervisor strategy can be chosen at any level/parent.
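For illustration, a parent overriding its supervisor strategy might look roughly like this (the ChildWorker class and the choice of exceptions are arbitrary examples):

import akka.actor.{Actor, OneForOneStrategy, Props, SupervisorStrategy}
import akka.actor.SupervisorStrategy.{Restart, Stop}
import scala.concurrent.duration._

class ChildWorker extends Actor {
  def receive = {
    case msg => // work that might throw
  }
}

class Supervisor extends Actor {
  // decide per failing child: restart on recoverable errors, stop otherwise
  override val supervisorStrategy: SupervisorStrategy =
    OneForOneStrategy(maxNrOfRetries = 10, withinTimeRange = 1.minute) {
      case _: java.io.IOException => Restart
      case _: Exception           => Stop
    }

  private val child = context.actorOf(Props[ChildWorker], "child")

  def receive = {
    case msg => child forward msg
  }
}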
I am learning Scala and Akka.
In the problem I am trying to solve I want an actor to be reading a real-time data stream and perform a certain calculation that would update its state.
Every 3 seconds I am sending a request through the Scheduler for the actor to return its state.
I have pretty much everything implemented, with my actor having a broadcaster and receiver and the function to update the state. However, I am not entirely sure how to do this last part: I could keep the calculations always running in a separate thread inside the actor, but I would like to know if there is a more elegant way to do this in Scala.
I would suggest dividing the work between two actors. The parent actor would manage the child worker actor and track the state. It sends a message to the child worker actor to trigger data processing.
The child worker actor processes the data stream - don't forget to wrap the processing in a Future so that it doesn't block the actor from processing messages. It also periodically sends messages to the parent with the current state. That way the child worker itself stays stateless: it just sends notifications whenever the computed state changes.
If you want to know the current state of the work overall, you ask the parent. In principle, you could merge this into one actor which sends the status messages to itself. I wouldn't update the state directly from the calculation, to avoid concurrency issues: the data processing running inside the Future can possibly run on a different thread than the one processing messages.
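A rough sketch of that split, assuming hypothetical message names (ProcessChunk, StateUpdate, GetState) and using pipeTo so the Future's result comes back to the parent as an ordinary message instead of mutating state from another thread:

import akka.actor.{Actor, Props}
import akka.pattern.pipe
import scala.concurrent.Future

case class ProcessChunk(data: Seq[Double])
case class StateUpdate(partialResult: Double)
case object GetState

class WorkerChild extends Actor {
  import context.dispatcher // ExecutionContext for the Future and pipeTo

  def receive = {
    case ProcessChunk(data) =>
      // the heavy calculation runs off the actor's message-processing thread
      Future {
        StateUpdate(data.sum) // placeholder for the real calculation
      } pipeTo context.parent
  }
}

class StatefulParent extends Actor {
  private val worker = context.actorOf(Props[WorkerChild], "worker")
  private var state: Double = 0.0

  def receive = {
    case chunk: ProcessChunk => worker ! chunk   // trigger processing
    case StateUpdate(r)      => state += r       // only this thread mutates the state
    case GetState            => sender() ! state // the 3-second scheduled query
  }
}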
I'm a newbie to actor model. Could anyone please explain the lifecycle of an actor in actor model? I've been looking for the answer in the documentation, but I couldn't find anything satisfactory.
I'm interested in what an actor does after it completes the onReceive() method - is it still alive or is it dead? Can we control its lifetime to say "don't die, wait there for the next message"? For example, with a round-robin router, if I set it to have 5 actors - would it always distribute the work across the same 5 actors? Or are actors destroyed and created whenever a message arrives, with the maximum limit always being 5?
Thanks!
The Actor is always alive unless you explicitly "kill" it (or it crashes somehow). When it receives a message, it will "use" a thread, process the message, then go back to an "idle" state. When it receives another message, it becomes "active" again.
In the case of a round-robin router with 5 Actors, it is the same 5 Actors - the router does not create new ones each time a message is sent to the router.
The Actor model follows an "isolated mutability" (concurrency) model - it encapsulates state only to itself - other Actors are not able to touch this state directly, they can only interact with it via message passing. The Actors must be "alive" in order to keep the state.
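As a small illustration (the Counter actor is hypothetical), a routee can keep state between messages precisely because the same 5 actors stay alive behind the router:

import akka.actor.{Actor, ActorSystem, Props}
import akka.routing.RoundRobinPool

class Counter extends Actor {
  private var handled = 0 // survives between messages because the actor stays alive

  def receive = {
    case msg =>
      handled += 1
      println(s"${self.path.name} has now handled $handled messages")
  }
}

object RouterExample extends App {
  val system = ActorSystem("example")
  // the pool creates exactly 5 Counter instances up front and reuses them
  val router = system.actorOf(RoundRobinPool(5).props(Props[Counter]), "router")
  (1 to 20).foreach(i => router ! s"message-$i")
}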
My Akka FSM actor has the need to prioritize messages dependent on type. To be specific, the actor receives messages in one of these categories, in prioritized order:
Messages that trigger state transitions
Messages that query the current state
Messages that cause the actor to perform some work ("WorkMsg")
According to the Akka docs, messages can be prioritized according to the list above with a PriorityExecutorBasedEventDrivenDispatcher containing a PriorityGenerator. I've implemented the FSM actor with this dispatcher and it works well.
The problem is that this dispatcher also reorders WorkMsgs, which is not what I want.
WorkMsgs contain a timestamp and are sent to the FSM actor sorted by this timestamp. When the FSM actor processes WorkMsgs, it discards WorkMsgs that are older than the previous WorkMsg. So if these are reordered, I lose data.
Without the PriorityExecutorBasedEventDrivenDispatcher, WorkMsgs are not reordered, but then the priorities in the list above are not satisfied.
How can I maintain the priorities in the list above, while preventing reordering of messages of the same priority?
A prioritizing proxy actor can prioritize messages to be sent to your worker actor. You will have to sort and store the incoming messages as well as implement the prioritization logic. The worker will additionally have to respond to the proxy after each message to let it know that it is ready for more work.
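A sketch of such a proxy, assuming hypothetical message types (TransitionMsg, QueryMsg, WorkMsg) and a ReadyForWork acknowledgement from the worker; because the proxy only hands the worker one message at a time and keeps a FIFO queue per priority, same-priority messages keep their arrival order:

import akka.actor.{Actor, ActorRef}
import scala.collection.mutable

// hypothetical message categories, highest priority first
trait TransitionMsg
trait QueryMsg
case class WorkMsg(timestamp: Long, payload: String)
case object ReadyForWork // sent by the worker after it finishes each message

class PriorityProxy(worker: ActorRef) extends Actor {
  // one FIFO queue per priority preserves order within a priority
  private val transitions = mutable.Queue.empty[Any]
  private val queries     = mutable.Queue.empty[Any]
  private val work        = mutable.Queue.empty[Any]
  private var workerIdle  = true

  def receive = {
    case ReadyForWork =>
      workerIdle = true
      dispatchNext()
    case m: TransitionMsg => transitions.enqueue(m); dispatchNext()
    case m: QueryMsg      => queries.enqueue(m); dispatchNext()
    case m: WorkMsg       => work.enqueue(m); dispatchNext()
  }

  private def dispatchNext(): Unit =
    if (workerIdle) {
      val next =
        if (transitions.nonEmpty) Some(transitions.dequeue())
        else if (queries.nonEmpty) Some(queries.dequeue())
        else if (work.nonEmpty) Some(work.dequeue())
        else None
      next.foreach { m =>
        workerIdle = false
        worker ! m
      }
    }
}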
I'm new to the Akka framework and I'm building an HTTP server application on top of Netty + Akka.
My idea so far is to create an actor for each type of request. E.g. I would have an actor for a POST to /my-resource and another actor for a GET to /my-resource.
Where I'm confused is how I should go about actor creation? Should I:
Create a new actor for every request (by this I mean for every request should I do a TypedActor.newInstance() of the appropriate actor)? How expensive is it to create a new actor?
Create one instance of each actor on server start-up and use that actor instance for every request? I've read that an actor can only process one message at a time, so couldn't this be a bottleneck?
Do something else?
Thanks for any feedback.
Well, you create an Actor for each instance of mutable state that you want to manage.
In your case, that might be just one actor if my-resource is a single object and you want to treat each request serially - that easily ensures that you only return consistent states between modifications.
If (more likely) you manage multiple resources, one actor per resource instance is usually ideal unless you run into many thousands of resources. While you can also run per-request actors, you'll end up with a strange design if you don't think about the state those requests are accessing - e.g. if you just create one Actor per POST request, you'll find yourself worrying how to keep them from concurrently modifying the same resource, which is a clear indication that you've defined your actors wrongly.
I usually have fairly trivial request/reply actors whose main purpose it is to abstract the communication with external systems. Their communication with the "instance" actors is then normally limited to one request/response pair to perform the actual action.
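A sketch of the per-resource-instance idea, with a hypothetical Get/Update protocol; each resource actor serializes access to its own mutable state, and the thin request/reply actors (or Futures) only exchange one request/response pair with it:

import akka.actor.Actor

// hypothetical protocol for one resource instance
case object Get
case class Update(newValue: String)

class ResourceActor(initial: String) extends Actor {
  private var value: String = initial // the mutable state this actor protects

  def receive = {
    case Get =>                 // e.g. GET /my-resource
      sender() ! value
    case Update(newValue) =>    // e.g. POST /my-resource
      value = newValue
      sender() ! "updated"
  }
}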
If you are using Akka, you can create an actor per request. Akka is extremely slim on resources and you can create literally millions of actors on a pretty ordinary JVM heap. Also, they will only consume CPU/stack/threads when they actually do something.
A year ago I made a comparison between the resource consumption of the thread-based and event-based standard actors, and Akka is even better than the event-based ones.
One of the big points of Akka, in my opinion, is that it allows you to design your system as "one actor per usage", whereas earlier actor systems often forced you into "use actors only for shared services" due to resource overhead.
I would recommend that you go for option 1.
Options 1) and 2) both have their drawbacks. So let's use option 3): routing (Akka 2.0+).
A router is an element which acts as a load balancer, routing requests to other actors which will perform the required task.
Akka provides different Router implementations with different logic to route a message (for example SmallestMailboxPool or RoundRobinPool).
Every router may have several children, and its task is to supervise their mailboxes to decide where to route each received message.
// This will create 5 instances of ExampleActor,
// managed and supervised by a round-robin router
ActorRef roundRobinRouter = getContext().actorOf(
    new RoundRobinPool(5).props(Props.create(ExampleActor.class)), "router");
This procedure is well explained in this blog.
It's quite a reasonable option, but whether it's suitable depends on the specifics of your request handling.
Yes, of course it could.
For many cases the best thing to do would be to just have one actor responding to every request (or perhaps one actor per type of request), but the only thing this actor does is to forward the task to another actor (or spawn a Future) which will actually do the job.
For scaling up serial request handling, add a master actor (supervisor) which in turn delegates to worker actors (children) in a round-robin fashion.
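Putting the last two suggestions together, a minimal sketch (HttpFrontActor, RequestWorker and HandleRequest are made-up names) where a single front actor forwards each request to a round-robin pool of workers, so the worker's reply still goes to the original client:

import akka.actor.{Actor, Props}
import akka.routing.RoundRobinPool

case class HandleRequest(path: String)

class RequestWorker extends Actor {
  def receive = {
    case HandleRequest(path) =>
      // because the front actor used forward, sender() is the original client
      sender() ! s"response for $path"
  }
}

class HttpFrontActor extends Actor {
  // master delegating to a round-robin pool of worker children
  private val workers =
    context.actorOf(RoundRobinPool(5).props(Props[RequestWorker]), "workers")

  def receive = {
    case req: HandleRequest => workers forward req
  }
}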