I am trying to use a hierarchy of Akka actors to handle per user state. There is a parent actor that owns all the children, and handles the get-or-create in the correct way (see a1, a2):
class UserActorRegistry extends Actor {

  override def receive: Receive = {
    case msg @ DoPerUserWork(userId, _) =>
      val perUserActor = getOrCreateUserActor(userId)
      // perUserActor is live now, but will it receive "msg"?
      perUserActor.forward(msg)
  }

  def getOrCreateUserActor(userId: UserId): ActorRef = {
    val childName = userId.toActorName
    context.child(childName) match {
      case Some(child) => child
      case None => context.actorOf(Props(classOf[UserActor], userId), childName)
    }
  }
}
In order to reclaim memory, the UserActors expire after a period of idleness (i.e. a timer triggers the child actor to call context.stop(self)).
My problem is that I think I have a race condition between the "getOrCreateUserActor" and the child actor receiving the forwarded message -- if the child expires in that window then the forwarded message will be lost.
Is there any way I can either detect this edge case, or refactor the UserActorRegistry to preclude it?
I can see two problems with your current design that open you up to the race condition you mention:
1) Having the termination condition (the timer sending a poison pill) go directly to the child actor. With this approach, the child can be terminated on one dispatcher thread while, at the same time, a message is being set up to be forwarded to it in the UserActorRegistry actor (on a different dispatcher thread).
2) Using a PoisonPill to terminate the child. A PoisonPill is for a graceful stop, allowing other messages already in the mailbox to be processed first. In your case you are terminating due to inactivity, which suggests there are no other messages in the mailbox. A PoisonPill is the wrong tool here because another message might be sent after the PoisonPill, and that message would surely be lost once the PoisonPill is processed.
So I'm going to suggest that you delegate the termination of the inactive children to the UserActorRegistry as opposed to doing it in the children themselves. When you detect the condition of inactivity, send a message to the instance of UserActorRegistry indicating that a particular child needs to be terminated. When you receive that message, terminate that child via stop instead of sending a PoisonPill. By using the single mailbox of the UserActorRegistry which is processed in a serial manner, you can help ensure that a child is not about to be terminated in parallel while you are about to send it a message.
Now, there is a complication here that you have to deal with. Stopping an actor is asynchronous. So if you call stop on a child, it might not be completely stopped when you are processing a DoPerUserWork message, and thus you might send it a message that will be lost because it is in the process of stopping. You can solve this by keeping some internal state (a List) that represents children that are in the process of being stopped. When you stop a child, add its name to that list and then set up DeathWatch (via context watch child) on it. When you receive the Terminated event for that child, remove its name from the list of children being terminated. If you receive work for a child while its name is in that list, requeue it for re-processing, maybe up to a max number of times so as to not try and reprocess forever.
This is not a perfect solution; it's just an identification of some of the issues with your approach and a push in the right direction for solving some of them. Let me know if you want to see the code for this and I'll whip something together.
Edit
In response to your second comment: I don't think you'll be able to look at a child ActorRef and see that it's currently shutting down, hence the need for that list of children that are in the process of being shut down. You could enhance the DoPerUserWork message to contain a numberOfAttempts: Int field, increment it, and send the message back to self for reprocessing if you see that the target child is currently shutting down. You could then use numberOfAttempts to prevent re-queuing forever, stopping at some max number of attempts. If you don't feel completely comfortable relying on DeathWatch, you could add a time-to-live component to the items in the list of children shutting down, and prune items as you encounter them if they have been in the list too long.
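To make that concrete, here is a rough sketch of the registry-managed termination under the assumptions above: the child's idle timer now sends a hypothetical StopUserActor message to the registry instead of stopping itself, DoPerUserWork carries a numberOfAttempts counter, and UserId, UserActor and userId.toActorName are taken from your snippet:

import akka.actor.{Actor, ActorRef, Props, Terminated}

case class DoPerUserWork(userId: UserId, work: Any, numberOfAttempts: Int = 0)
case class StopUserActor(userId: UserId)

class UserActorRegistry extends Actor {
  private var stopping = Set.empty[String] // names of children currently being stopped
  private val maxAttempts = 5

  override def receive: Receive = {
    case msg @ DoPerUserWork(userId, _, attempts) =>
      val childName = userId.toActorName
      if (stopping.contains(childName)) {
        // Child is shutting down; re-queue the work a bounded number of times.
        if (attempts < maxAttempts) self ! msg.copy(numberOfAttempts = attempts + 1)
      } else {
        getOrCreateUserActor(userId).forward(msg)
      }

    case StopUserActor(userId) =>
      val childName = userId.toActorName
      context.child(childName).foreach { child =>
        stopping += childName
        context.watch(child)   // DeathWatch
        context.stop(child)
      }

    case Terminated(child) =>
      stopping -= child.path.name
  }

  def getOrCreateUserActor(userId: UserId): ActorRef = {
    val childName = userId.toActorName
    context.child(childName) match {
      case Some(child) => child
      case None => context.actorOf(Props(classOf[UserActor], userId), childName)
    }
  }
}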
Related
How can I tell when the children of a supervisor are getting restarted? I want to be able to send some initialization messages to the children that are restarted (after they've been recreated). Is it possible?
I could override postRestart in each child, but I would prefer to do this in the supervisor, as it controls the initialization. Is that possible?
I've tried watching the children, but Terminated doesn't seem to trigger if the child restarts.
If you define a supervision strategy in the parent, then in its decider block you can access the state of that actor (the parent) just as you can from a receive block. From there you can send a message to the child (sender will point to the failing child). Since the child is already suspended at the point where the supervision block decides its fate, this message will only be processed by the child after the restart.
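A minimal sketch of what that looks like, assuming a hypothetical Initialize message and WorkerActor child:

import akka.actor.{Actor, ActorRef, OneForOneStrategy, Props, SupervisorStrategy}
import akka.actor.SupervisorStrategy.Restart

case object Initialize

class WorkerActor extends Actor {
  def receive: Receive = {
    case Initialize => // re-initialize state; arrives right after the restart
    case work       => // do the real work, may throw
  }
}

class Parent extends Actor {
  val child: ActorRef = context.actorOf(Props[WorkerActor], "worker")

  override val supervisorStrategy: SupervisorStrategy =
    OneForOneStrategy() {
      case _: Exception =>
        // sender() is the failing child; it is suspended at this point, so the
        // Initialize message is queued and processed only after the restart.
        sender() ! Initialize
        Restart
    }

  def receive: Receive = {
    case msg => child ! msg
  }
}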
I have an actor which receives Ints and accumulates the result in a buffer. Sometimes the actor can throw an exception. What is the best way to restore the previous state of the buffer in the actor after an exception?
In order to control what happens you need to set the supervisor strategy on the parent actor.
import scala.concurrent.duration._
import akka.actor.OneForOneStrategy
import akka.actor.SupervisorStrategy._

override val supervisorStrategy =
  OneForOneStrategy(maxNrOfRetries = 10, withinTimeRange = 1.minute) {
    case e: Exception =>
      println(s"got exception: ${e.getMessage}")
      Resume // or Restart, Stop, Escalate
  }
If you would like to handle the exception and then resume the actor, your handler would return Resume.
When resuming, the actor state remains unchanged.
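For the accumulating actor from your question, that could look roughly like this (Accumulator, Add and GetBuffer are names I've made up for illustration):

import akka.actor.Actor
import scala.collection.mutable.ListBuffer

case class Add(value: Int)
case object GetBuffer

class Accumulator extends Actor {
  private val buffer = ListBuffer.empty[Int]

  def receive: Receive = {
    case Add(value) =>
      if (value < 0) throw new IllegalArgumentException("negative value") // example failure
      buffer += value
    case GetBuffer =>
      sender() ! buffer.toList
  }
}

// With Resume, a failing Add is dropped but `buffer` keeps its accumulated contents.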
In addition to manub's answer, you should also consider separating
the part that accumulates the value, and
the part that may fail
into two actors as follows:
The accumulating actor should create a child actor. This child actor does the job that can potentially fail, and then sends the result of its job as a message back to its parent. The parent actor receives the result of the job and does the accumulating, or a notification that the child actor has failed. This way, you can, for example, restart failed jobs using various supervision strategies.
For more information on such an approach, see for example this answer Handling Faults in Akka actors
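A rough sketch of that split (the message names and the failing division are made up for illustration):

import akka.actor.{Actor, ActorRef, Props}

case class Compute(value: Int)
case class JobResult(value: Int)

// Child: does the work that may fail and reports the result to its parent.
class RiskyWorker extends Actor {
  def receive: Receive = {
    case Compute(value) =>
      val result = 100 / value // throws ArithmeticException when value == 0
      context.parent ! JobResult(result)
  }
}

// Parent: creates the child, accumulates results, and supervises failures.
class AccumulatingParent extends Actor {
  private val worker: ActorRef = context.actorOf(Props[RiskyWorker], "worker")
  private var total = 0

  def receive: Receive = {
    case c: Compute   => worker ! c
    case JobResult(v) => total += v
  }
}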
In order to be able to recover the state of a stateful actor you should consider Akka Persistence. You will require a durable store (Cassandra or LevelDB for example) to persist state, and the framework will handle restoring the state for you (provided that you have a PersistentActor).
http://doc.akka.io/docs/akka/current/scala/persistence.html explains in detail how to achieve this.
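For example, a minimal persistent version of the accumulator might look like this (classic Akka Persistence; the command and event names are assumptions):

import akka.persistence.PersistentActor

case class AddValue(value: Int)
case class ValueAdded(value: Int)

class PersistentAccumulator extends PersistentActor {
  override def persistenceId: String = "accumulator-1"

  private var buffer: List[Int] = Nil

  // Replayed from the journal on (re)start to rebuild the state.
  override def receiveRecover: Receive = {
    case ValueAdded(v) => buffer = v :: buffer
  }

  override def receiveCommand: Receive = {
    case AddValue(v) =>
      persist(ValueAdded(v)) { event =>
        buffer = event.value :: buffer
      }
  }
}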
You can set the supervisor strategy to Resume. When your actor throws an exception, the supervisor will handle the exception and your actor will throw away the current message and move on to the next one. (Newer versions of Akka let you set a maximum number of retries on the same message.)
If instead, you would like your Actor to continue processing the message that caused the exception, you can use a try-catch-finally block.
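A quick sketch of that (the message and the failing division are made up):

import akka.actor.Actor

case class Process(value: Int)

class SafeAccumulator extends Actor {
  private var total = 0

  def receive: Receive = {
    case Process(value) =>
      try {
        total += 100 / value // may throw ArithmeticException
      } catch {
        case e: ArithmeticException =>
          // handle the error locally instead of letting supervision kick in
          println(s"ignoring bad value $value: ${e.getMessage}")
      }
  }
}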
I am learning Scala and Akka.
In the problem I am trying to solve I want an actor to be reading a real-time data stream and perform a certain calculation that would update its state.
Every 3 seconds I am sending a request through a Scheduler for the actor to return to its state.
While I have pretty much everything implemented, with my actor having a broadcaster and receiver and the function to update the state, I am not entirely sure how to do it. I could potentially keep the calculations running in a separate thread inside the actor, but I would like to know if there is a more elegant way to do this in Scala.
I would suggest dividing the work between two actors. The parent actor would manage the child worker actor and would track the state. It sends a message to the child worker actor to trigger data processing.
The child worker actor processes the data stream - don't forget to wrap the processing in a Future so that it doesn't block the actor from processing messages. It also periodically sends messages to the master with the current state. So the child worker is effectively stateless; it just sends notifications when its state changes.
If you want to know the current state of the work overall, you ask the master. In principle, you could merge this into one actor which sends the status message to itself. I wouldn't update the state directly, to avoid concurrency issues: the data processing work running in the Future can possibly run on a different thread than message processing.
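Here is a rough sketch of that structure (ProcessChunk, StatusUpdate and GetState are names I've made up; the calculation is just a placeholder):

import akka.actor.{Actor, ActorRef, Props}
import akka.pattern.pipe
import scala.concurrent.Future

case class ProcessChunk(data: Seq[Double])
case class StatusUpdate(partialResult: Double)
case object GetState

class StreamWorker extends Actor {
  import context.dispatcher // ExecutionContext for the Future

  def receive: Receive = {
    case ProcessChunk(data) =>
      // Run the heavy calculation off the actor's message-processing thread
      // and pipe the result back to the parent as a message.
      Future(StatusUpdate(data.sum / data.size)).pipeTo(context.parent)
  }
}

class StreamMaster extends Actor {
  private val worker: ActorRef = context.actorOf(Props[StreamWorker], "worker")
  private var lastResult: Double = 0.0

  def receive: Receive = {
    case chunk: ProcessChunk   => worker ! chunk
    case StatusUpdate(partial) => lastResult = partial // state only changes here
    case GetState              => sender() ! lastResult
  }
}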
I am confused by behavior I am seeing in Akka. Briefly, I have a set of actors performing scientific calculations (star formation simulation). They have some state. When an error occurs such that one or more enter an invalid state, I want to restart the whole set to start over. I also want to do this if a single calc (over the entire set) takes too long (there is no way to predict in advance how long it may run).
So, there is the set of Simulation actors at the bottom of the tree, then a Director above them (that creates them via a Router, and sends them messages via that Router as well). There is one more Director level above that to create Directors on different machines and collect results from them all.
I handle the timeout case by using the Akka Scheduler to create a one-time timeout event, in the local Director, when the simulation is started. When the Director gets this event, if all its Simulation actors have not finished, it does this:
children ! Broadcast(Kill)
where children is the Router that owns/created them - this sends a Kill to all the children (SimulActors).
What I thought would occur is that all the child actors would be restarted. However, their preRestart() hook method is never called. I see the Kill message received, but that's it.
I must be missing something fundamental here. I have read the Akka docs on this topic and I have to say I find them less than clear (especially the page on Supervisors). I would really appreciate either a thorough explanation of the Kill/restart process, or just some other references (Google wasn't very helpful).
Note
If the child of a router terminates, the router will not automatically
spawn a new child. In the event that all children of a router have
terminated the router will terminate itself.
Taken from the Akka docs.
I would consider using a supervision strategy - Akka has behaviour built in for applying a directive to all children at once (the all-for-one strategy), and you can define the specific response, e.g. restart.
I think a more idiomatic way to run this would be to have the actors throw an exception if they're not done after a period of time, and then have the supervisor handle that via its supervision strategy.
You could throw a not done exception from the child and then define the behaviour like so:
override val supervisorStrategy =
  AllForOneStrategy(maxNrOfRetries = 0) {
    case _: NotDoneException => Stop
    case _: Exception        => Restart
  }
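On the child side, a sketch of that could look something like this (NotDoneException and the timeout messages are assumptions; the supervision strategy above then decides the actor's fate):

import akka.actor.Actor
import scala.concurrent.duration._

case object SimulationTimeout
case object SimulationFinished
class NotDoneException(msg: String) extends RuntimeException(msg)

class SimulActor extends Actor {
  import context.dispatcher

  // Schedule a one-time timeout message to self when the actor starts.
  context.system.scheduler.scheduleOnce(30.seconds, self, SimulationTimeout)

  private var done = false

  def receive: Receive = {
    case SimulationFinished => done = true
    case SimulationTimeout =>
      if (!done) throw new NotDoneException("simulation did not finish in time")
  }
}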
It's important to understand that a restart means stopping the old actor and creating a new, separate object/Actor.
References:
http://doc.akka.io/docs/akka/snapshot/scala/fault-tolerance.html
http://doc.akka.io/docs/akka/snapshot/general/supervision.html
I'm using Akka actors in Scala for parallel processing of queue items. I have a MasterActor with 10 reusable child ProcessorActors.
Items being processed can be of different types, e.g. red, blue and green items. At any moment, only a single item of a given type can be processed. Thus, if one red item is being processed, no more reds can be processed at the same time.
All is fine, but now I am trying to implement good fault tolerance for the app, and it turns out I can't get much information about which item type failed from the Terminated message. If a ProcessorActor fails, I need to mark the appropriate type in MasterActor as available for processing. Right now I'm stuck, as I just can't tell which item type failed. I have an ActorRef in the Terminated message, but as far as I understand it's not good to send messages to it right after I get that message.
In the end I can be left with all possible types marked as "being processed" while in reality their corresponding actors are simply dead.
Please advise.
On Terminated, set a 'pause' flag so no new jobs are handed out to children, then ping all ProcessorActors to ask whether they have active jobs; if any job can be assigned, assign it and release the 'pause' flag. If another Terminated arrives while the 'pause' flag is set, do the ping once more.
It seems like what you want to do could be achieved just by redesigning your architecture to have a red actor, a blue actor and a green actor, and setting the supervision strategy to do a restart on actor failure instead of a stop. Really simple. No need to even handle the Terminated message yourself.
Your ProcessorActor can send a message (with the type of the item) to MasterActor just before it starts processing the item. In your MasterActor you can maintain a Map[ActorRef, ItemType], from which you can determine, by the ActorRef received in the Terminated message, the last item type being processed when the ProcessorActor died.
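A rough sketch of that bookkeeping (the message and type names are made up):

import akka.actor.{Actor, ActorRef, Terminated}

sealed trait ItemType
case object Red extends ItemType
case object Blue extends ItemType
case object Green extends ItemType

case class StartProcessing(itemType: ItemType)

class MasterActor extends Actor {
  private var inProgress = Map.empty[ActorRef, ItemType]

  def receive: Receive = {
    case StartProcessing(itemType) =>
      // the ProcessorActor reports what it is about to work on
      inProgress += (sender() -> itemType)
      context.watch(sender())

    case Terminated(processor) =>
      // look up what the dead actor was working on and mark that type as available
      inProgress.get(processor).foreach { itemType =>
        println(s"$itemType is available for processing again")
      }
      inProgress -= processor
  }
}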
OK, here is what I finally did and what seemed best. I set a Restart policy for ProcessorActor failures and override the preRestart method in the ProcessorActor. If the message: Option[Any] argument is non-empty and matches StartProcessingMessage(Item(itemType)), then I send a ProcessingFailedMessage(Item(itemType)) to the sender. The supervisor actor then marks the type as not being processed, and the next loop iteration will initiate processing by some ProcessorActor, maybe even the one that failed and was just restarted.
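For reference, a sketch of that preRestart override (the message classes mirror the names above; the item payload is simplified):

import akka.actor.Actor

case class Item(itemType: String)
case class StartProcessingMessage(item: Item)
case class ProcessingFailedMessage(item: Item)

class ProcessorActor extends Actor {
  override def preRestart(reason: Throwable, message: Option[Any]): Unit = {
    // If the actor failed while processing an item, tell the supervisor (the
    // parent MasterActor) so it can mark that item type as available again.
    message match {
      case Some(StartProcessingMessage(item)) => context.parent ! ProcessingFailedMessage(item)
      case _                                  =>
    }
    super.preRestart(reason, message)
  }

  def receive: Receive = {
    case StartProcessingMessage(item) =>
      // ... process the item; may throw, triggering the Restart policy
  }
}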