Let's suppose I have a master with n slaves, where the slaves are located on other machines.
My master actor looks like this:
class MasterActor extends Actor {
  val router: ActorRef = // ... initialized router to contact the slaves

  override def receive: Receive = {
    case work: DoWork => router ! work
  }
}
while my slaves look like this:
class SlaveActor extends Actor {
  override def receive: Receive = {
    case work: DoWork => // Some logic that takes a couple of seconds
  }
}
If a machine hosting some slaves crashes, or if the app is stopped while processing some work, I expect a fallback mechanism that enables the system (the master, I suppose) to become aware of that failure and then redistribute the lost work to the other slaves.
I understand the principle of supervision in Akka, which lets the master get notified when a child actor is unreachable, but how can I get back the specific piece of work that actor was handling so it can be redistributed?
Since I started using Akka only recently, my approach may not match best practices; should I solve this situation in another way?
Thanks!
This approach won't work in every scenario, but how about storing each child actor's work on the file system? You can give each child an id and save the work in progress as a key -> value pair, where the child actor's id is the key and the work in progress is the value.
What data exactly has to be saved is up to you, but this is one of the simplest approaches you can try.
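For illustration, a minimal sketch of that idea, assuming the work can be serialized to a string and using a hypothetical WorkStore helper (none of these names come from your code):
import java.nio.file.{ Files, Path, Paths, StandardOpenOption }

// Hypothetical helper: persist each child's in-progress work as a file keyed
// by the child actor's id, so a restarted master can reload and redistribute it.
object WorkStore {
  private val dir: Path = Paths.get("work-in-progress") // assumed location
  Files.createDirectories(dir)

  def save(childId: String, work: String): Unit =
    Files.write(dir.resolve(childId), work.getBytes("UTF-8"),
      StandardOpenOption.CREATE, StandardOpenOption.TRUNCATE_EXISTING)

  def load(childId: String): Option[String] = {
    val file = dir.resolve(childId)
    if (Files.exists(file)) Some(new String(Files.readAllBytes(file), "UTF-8")) else None
  }

  def clear(childId: String): Unit = Files.deleteIfExists(dir.resolve(childId))
}
The master would call WorkStore.save before handing work to a child, WorkStore.clear when the child reports success, and WorkStore.load on restart to redistribute anything left over.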
Hope this helps!
It sounds like what you want is Akka Persistence. The main model for this is event sourcing.
A rough outline would be to persist one event when receiving a DoWork and a second event when finishing the work. On recovery, the state is then the list of work not yet finished (which, if you get the protocol right, will be at most one piece of work), along these lines:
import akka.persistence.{ PersistentActor, SnapshotOffer }

case class DidWork(dw: DoWork)

class SlaveActor extends PersistentActor {
  val persistenceId: String = ???

  var workToDo: List[DoWork] = Nil

  val receiveRecover: Receive = {
    case dw: DoWork => workToDo = dw :: workToDo
    case DidWork(dw: DoWork) => workToDo = workToDo.filter(_ != dw)
    case SnapshotOffer(_, snapshot: List[DoWork]) => workToDo = snapshot
  }

  val snapshotInterval: Long = ???

  val receiveCommand: Receive = {
    case dw: DoWork =>
      persist(dw) { event =>
        workToDo = dw :: workToDo
        context.system.eventStream.publish(event)
        if (lastSequenceNr % snapshotInterval == 0 && lastSequenceNr != 0)
          saveSnapshot(workToDo)
        doWork()
      }
  }

  private[this] def doWork(): Unit = {
    workToDo.reverse.foreach { dw: DoWork =>
      // do the work
      persist(DidWork(dw)) { event =>
        workToDo = workToDo.filter(_ != dw) // mirror the recovery handler
        context.system.eventStream.publish(event)
        if (lastSequenceNr % snapshotInterval == 0 && lastSequenceNr != 0)
          saveSnapshot(workToDo)
      }
    }
  }
}
If you want to make sure that the master knows about workers going away, you can watch them (see the docs on lifecycle monitoring, aka DeathWatch: https://doc.akka.io/docs/akka/current/actors.html#lifecycle-monitoring-aka-deathwatch ). If you keep track of which workload you have assigned to which worker and is not yet completed, you can implement resending. However, you most likely need to consider the case where the master or its actor system goes away as well, and how to recover if that happens.
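A minimal sketch of that bookkeeping, assuming a fixed list of worker ActorRefs and the DoWork message from the question; the WorkDone message and the in-flight map are illustrative additions, not part of the original code:
import akka.actor.{ Actor, ActorRef, Terminated }

// Hedged sketch: the master tracks which DoWork is in flight on which worker,
// watches every worker, and re-dispatches the pending work of a worker that dies.
class WatchingMaster(workers: List[ActorRef]) extends Actor {
  workers.foreach(context.watch)

  def dispatching(available: List[ActorRef], inFlight: Map[ActorRef, DoWork]): Receive = {
    case work: DoWork =>
      available match {
        case worker :: rest =>
          worker ! work
          context.become(dispatching(rest, inFlight + (worker -> work)))
        case Nil => // no free worker; a real implementation would queue here
      }

    case WorkDone(worker) => // hypothetical completion message sent by a worker
      context.become(dispatching(worker :: available, inFlight - worker))

    case Terminated(worker) =>
      // resend whatever the dead worker was doing to the remaining workers
      inFlight.get(worker).foreach(self ! _)
      context.become(dispatching(available.filterNot(_ == worker), inFlight - worker))
  }

  override def receive: Receive = dispatching(workers, Map.empty)
}

case class WorkDone(worker: ActorRef) // assumed protocol addition
On top of that, persisting the in-flight map (e.g. with Akka Persistence as above) is what covers the case where the master itself goes away.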
The Akka distributed workers sample covers exactly this so it could be a good source of inspiration: https://developer.lightbend.com/guides/akka-distributed-workers-scala/
I have an actor that, when it receives one message, starts doing some calculations in a loop, and it does them for some time (say it repeats the same step 100 times). Now I need it to react to other messages that may come in as soon as possible. The best way would be some instruction in the loop like "if there is a message in the queue, react to it and then return here", but I haven't seen such functionality.
I thought the actor could send messages to itself instead of looping; such messages would be queued at the end of the mailbox and it would react to other messages in between, but I've heard that messaging is expensive (much more time-consuming than the calculations) and I don't know whether sending to self counts as such.
My question is: what do you think about this solution, and do you have any other ideas for handling communication in between calculations?
Time-consuming computation should not be done in the main receive method, as it reduces the responsiveness of the system. Run the computation in a Future or Task (or another asynchronous construct), and send a message to the actor when the computation completes. The actor can continue to process messages ASAP while the computation continues on a different thread.
This gets more complicated if the actor needs to modify the computation while it is running (in response to messages) but the solution depends on what the computation is and what kind of modification is needed, so it isn't really possible to give a general answer.
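A minimal sketch of this with classic actors, assuming a hypothetical Compute message and heavyCalculation function; pipeTo simply delivers the Future's result back to the actor as a message:
import akka.actor.Actor
import akka.pattern.pipe
import scala.concurrent.Future

// Assumed message protocol for illustration.
case class Compute(input: Int)
case class ComputationDone(result: Int)

class ResponsiveActor extends Actor {
  import context.dispatcher // execution context for the Future and pipeTo

  private def heavyCalculation(input: Int): Int =
    // stand-in for the long-running loop
    (1 to 100).foldLeft(input)((acc, _) => acc + 1)

  override def receive: Receive = {
    case Compute(input) =>
      // run the long computation off the actor's message-processing path
      // and pipe the result back as a message
      Future(ComputationDone(heavyCalculation(input))).pipeTo(self)

    case ComputationDone(result) =>
      // the actor stayed responsive to other messages while this was computed
      println(s"Got result $result")

    case other =>
      println(s"Still responsive, handling $other")
  }
}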
In general, in Akka you want to limit the amount of work done "per unit", where a unit in this case is:
an actor processing a message
work done in a Future/Task or a callback of the same
Overlong work units can easily limit the responsiveness of the entire system by consuming a thread. Tasks which aren't consuming CPU but are blocked waiting for I/O can be executed in a different thread pool, but for CPU-consuming work that doesn't really help.
So the broad approach, if you're doing a loop, is to suspend the loop's state into a message and send it to yourself. It introduces a small performance hit (the latency of constructing the message, sending it to yourself (a guaranteed-to-be local send), and destructuring it is likely going to be on the order of microseconds when the system is otherwise idle), but can improve overall system latency.
For example, imagine we have an actor which will calculate the nth fibonacci number. I'm implementing this using Akka Typed, but the broad principle applies in Classic:
import akka.actor.typed.{ ActorRef, Behavior }
import akka.actor.typed.scaladsl.Behaviors
import scala.collection.immutable.SortedMap

object Fibonacci {
  sealed trait Command
  case class SumOfFirstN(n: Int, replyTo: ActorRef[Option[Long]]) extends Command

  private object Internal {
    case class Iterate(i: Int, a: Long, b: Long) extends Command
    val initialIterate = Iterate(1, 0, 1)
  }

  case class State(waiting: SortedMap[Int, Set[ActorRef[Option[Long]]]]) {
    def behavior: Behavior[Command] =
      Behaviors.receive { (context, msg) =>
        msg match {
          case SumOfFirstN(n, replyTo) =>
            if (n < 1) {
              replyTo ! None
              Behaviors.same
            } else {
              if (waiting.isEmpty) {
                context.self ! Internal.initialIterate
              }
              val nextWaiting =
                waiting.updated(n, waiting.get(n).fold(Set(replyTo))(_.incl(replyTo)))
              copy(waiting = nextWaiting).behavior
            }

          case Internal.Iterate(i, a, b) =>
            // the ith fibonacci number is b, the (i-1)th is a
            if (waiting.rangeFrom(i).isEmpty) {
              // Nobody waiting for this run to complete
              if (waiting.nonEmpty) {
                context.self ! Internal.initialIterate
              }
              Behaviors.same
            } else {
              var nextWaiting = waiting
              var nextA = a
              var nextB = b

              (1 to 10).foreach { x =>
                val next = nextA + nextB
                nextWaiting.get(x + i).foreach { waiters =>
                  waiters.foreach(_ ! Some(next))
                }
                nextWaiting = nextWaiting.removed(x + i)
                nextA = nextB
                nextB = next
              }

              context.self ! Internal.Iterate(i + 10, nextA, nextB)
              copy(waiting = nextWaiting).behavior
            }
        }
      }
  }
}
Note that multiple requests (if sufficiently temporally close) for the same number will only be computed once, and temporally close requests for intermediate results will result in no extra computation.
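For completeness, a hypothetical usage sketch of the behavior above (the system name, the timeout, and asking for the 40th number are my own assumptions):
import akka.actor.typed.{ ActorRef, ActorSystem }
import akka.actor.typed.scaladsl.AskPattern._
import akka.util.Timeout
import scala.collection.immutable.SortedMap
import scala.concurrent.duration._

object FibonacciMain extends App {
  implicit val system: ActorSystem[Fibonacci.Command] =
    ActorSystem(Fibonacci.State(SortedMap.empty[Int, Set[ActorRef[Option[Long]]]]).behavior, "fib")
  implicit val timeout: Timeout = 5.seconds
  import system.executionContext

  // ask for the 40th number; the reply arrives as an Option[Long]
  system.ask[Option[Long]](replyTo => Fibonacci.SumOfFirstN(40, replyTo)).foreach { result =>
    println(s"40th fibonacci number: $result")
    system.terminate()
  }
}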
An option is to delegate the task, using e.g. a Future, and use a separate ExecutionContext with a fixed pool size (configurable in application.conf) equal to the number of CPUs (or cores) so that the computations are done efficiently using the available cores. As mentioned by @Tim, you can notify the main actor once the computation is complete.
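A minimal sketch of this option, assuming the pool is created in code; a dispatcher configured in application.conf and looked up via system.dispatchers.lookup would serve the same purpose. The SumTo/SumResult messages are illustrative:
import java.util.concurrent.Executors
import akka.actor.Actor
import akka.pattern.pipe
import scala.concurrent.{ ExecutionContext, Future }

// Assumed messages for illustration.
case class SumTo(n: Int)
case class SumResult(value: Long)

object ComputePool {
  // fixed pool sized to the number of available cores
  val ec: ExecutionContext =
    ExecutionContext.fromExecutorService(
      Executors.newFixedThreadPool(Runtime.getRuntime.availableProcessors()))
}

class DelegatingActor extends Actor {
  import context.dispatcher // used for map/pipeTo; the heavy work runs on ComputePool.ec

  override def receive: Receive = {
    case SumTo(n) =>
      // run the heavy computation on the dedicated pool, not the actor's dispatcher
      Future((1L to n).sum)(ComputePool.ec).map(SumResult.apply).pipeTo(self)
    case SumResult(value) =>
      println(s"Computation finished: $value")
  }
}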
Another option is to make another actor behind a router do the computation while restricting the number of routees to the number of CPUs.
A simplistic sample:
import akka.actor.{ Actor, ActorLogging, ActorRef, ActorSystem, Props }
import akka.routing.RoundRobinPool

object DelegatingSystem extends App {
  val sys = ActorSystem("DelegatingSystem")

  case class TimeConsuming(i: Int)
  case object Other

  class Worker extends Actor with ActorLogging {
    override def receive: Receive = {
      case message =>
        Thread.sleep(1000)
        log.info(s"$self computed long $message")
    }
  }

  class Delegator extends Actor with ActorLogging {
    // Set the number of routees to be equal to the number of CPUs
    val router: ActorRef = context.actorOf(RoundRobinPool(2).props(Props[Worker]))

    override def receive: Receive = {
      case message: TimeConsuming => router ! message
      case _ =>
        log.info("process other messages")
    }
  }

  val delegator = sys.actorOf(Props[Delegator])
  delegator ! TimeConsuming(1)
  delegator ! Other
  delegator ! TimeConsuming(2)
  delegator ! Other
  delegator ! TimeConsuming(3)
  delegator ! Other
  delegator ! TimeConsuming(4)
}
I have a parent Akka actor named buildingCoordinator which creates children named elevator_X. For now I am creating only one elevator. The buildingCoordinator sends a sequence of messages and waits for the responses in order to move an elevator. The sequence is: send ? RequestElevatorState -> receive ElevatorState -> send ? MoveRequest -> receive MoveRequestSuccess -> change the state. As you can see, I am using the ask pattern. After the movement succeeds, the buildingCoordinator changes its state using context.become.
The problem that I am running into is that the elevator is receiving MoveRequest(1,4) for the same floor twice, sometimes three times. I do remove the floor when I call context.become, but I remove it inside the last map. I think the problem is that I am using context.become inside a future and I should use it outside, but I am having trouble implementing that.
case class BuildingCoordinator(actorName: String,
                               numberOfFloors: Int,
                               numberOfElevators: Int,
                               elevatorControlSystem: ElevatorControlSystem)
  extends Actor with ActorLogging {

  import context.dispatcher
  implicit val timeout = Timeout(4 seconds)

  val elevators = createElevators(numberOfElevators)

  override def receive: Receive = operational(Map[Int, Queue[Int]](), Map[Int, Queue[Int]]())

  def operational(stopsRequests: Map[Int, Queue[Int]], pickUpRequests: Map[Int, Queue[Int]]): Receive = {
    case msg @ MoveElevator(elevatorId) =>
      println(s"[BuildingCoordinator] received $msg")
      val elevatorActor: ActorSelection = context.actorSelection(s"/user/$actorName/elevator_$elevatorId")
      val newState = (elevatorActor ? RequestElevatorState(elevatorId))
        .mapTo[ElevatorState]
        .flatMap { state =>
          val nextStop = elevatorControlSystem.findNextStop(stopsRequests.get(elevatorId).get, state.currentFloor, state.direction)
          elevatorActor ? MoveRequest(elevatorId, nextStop)
        }
        .mapTo[MoveRequestSuccess]
        .flatMap(moveRequestSuccess => elevatorActor ? MakeMove(elevatorId, moveRequestSuccess.targetFloor))
        .mapTo[MakeMoveSuccess]
        .map { makeMoveSuccess =>
          println(s"[BuildingCoordinator] Elevator ${makeMoveSuccess.elevatorId} arrived at floor [${makeMoveSuccess.floor}]")
          // removeStopRequest
          val stopsRequestsElevator = stopsRequests.get(elevatorId).getOrElse(Queue[Int]())
          val newStopsRequestsElevator = stopsRequestsElevator.filterNot(_ == makeMoveSuccess.floor)
          val newStopsRequests = stopsRequests + (elevatorId -> newStopsRequestsElevator)

          val pickUpRequestsElevator = pickUpRequests.get(elevatorId).getOrElse(Queue[Int]())
          val newPickUpRequestsElevator = {
            if (pickUpRequestsElevator.contains(makeMoveSuccess.floor)) {
              pickUpRequestsElevator.filterNot(_ == makeMoveSuccess.floor)
            } else {
              pickUpRequestsElevator
            }
          }
          val newPickUpRequests = pickUpRequests + (elevatorId -> newPickUpRequestsElevator)

          // I THINK I SHOULD NOT CALL context.become HERE
          // context.become(operational(newStopsRequests, newPickUpRequests))

          val dropOffFloor = BuildingUtil.generateRandomFloor(numberOfFloors, makeMoveSuccess.floor, makeMoveSuccess.direction)
          context.self ! DropOffRequest(makeMoveSuccess.elevatorId, dropOffFloor)
          (newStopsRequests, newPickUpRequests)
        }

      // I MUST CALL context.become HERE, BUT I DONT KNOW HOW
      // context.become(operational(newState.flatMap(state => (state._1, state._2))))
  }
Another thing that might be nasty here is this big chain of map and flatMap. This was my way to implement it, but I think there might be a better way.
You can't, and should not, call context.become or otherwise change actor state outside the Receive method and outside the thread that invokes Receive (which is the Akka dispatcher thread), like in your example. E.g.:
def receive: Receive = {
  // This is a bug, because context is not and is not supposed to be thread safe.
  case message: Message => Future(context.become(anotherReceive))
}
What you should do instead is send a message to self after the async operation has finished, and change the receive state then. If in the meantime you don't want to handle incoming messages, you can stash them. See for more details: https://doc.akka.io/docs/akka/current/typed/stash.html
High level example, technical details omitted:
import akka.actor.{ Actor, Stash }
import akka.pattern.pipe
import scala.concurrent.Future

case object StartAsyncOperation
case class OperationFinished(calculations: Map[Any, Any])

class AsyncActor extends Actor with Stash {
  import context.dispatcher

  def operation: Future[Map[Any, Any]] = ??? // some implementation of a heavy async operation

  override def receive: Receive = receiveStartAsync(Map.empty)

  def receiveStartAsync(calculations: Map[Any, Any]): Receive = {
    case StartAsyncOperation =>
      // Start the async operation and inform yourself when it is finished
      operation.map(OperationFinished.apply) pipeTo self
      context.become(receiveWaitAsyncOperation)
  }

  def receiveWaitAsyncOperation: Receive = {
    case OperationFinished(calculations) =>
      unstashAll()
      context.become(receiveStartAsync(calculations))
    case _ => stash()
  }
}
I like your response, @Ivan Kurchenko.
But, according to the Akka Stash docs:
When unstashing the buffered messages by calling unstashAll the messages will be processed sequentially in the order they were added and all are processed unless an exception is thrown. The actor is unresponsive to other new messages until unstashAll is completed. That is another reason for keeping the number of stashed messages low. Actors that hog the message processing thread for too long can result in starvation of other actors.
Meaning that, under load for example, an unstashAll operation can cause other actors to starve.
According to the same doc:
That can be mitigated by using the StashBuffer.unstash with numberOfMessages parameter and then send a message to context.self before continuing unstashing more. That means that other new messages may arrive in-between and those must be stashed to keep the original order of messages. It becomes more complicated, so better keep the number of stashed messages low.
Bottom line: you should keep the stashed message count low. This approach might not be suitable under heavy load.
I have an API that creates actor A (at runtime). Then, A creates Actor B (at runtime as well).
I have another API that creates Actor C (different from actor A, no common code between them) and C creates Actor D.
I want to gracefully shut down A and C once B and D have finished processing their messages (A and C do not necessarily run together; they are unrelated).
Sending a PoisonPill to A/C is not good enough, because the children (B/D) will still be stopped via context.stop and will not be able to finish their tasks.
I understand I need to implement a new type of message.
I don't understand how to create an infrastructure so that both A and C know how to respond to this message without duplicating the same receive method in both.
The solution I found was to create a new trait that extends Actor and overrides the unhandled method.
The code looks like this:
object SuicideActor {
  case object PleaseKillYourself
  case object IKilledMyself
}

trait SuicideActor extends Actor {
  import SuicideActor._

  override def unhandled(message: Any): Unit = message match {
    case PleaseKillYourself =>
      Logger.debug(s"Actor ${self.path} received PleaseKillYourself - stopping children and aborting...")
      val livingChildren = context.children.size
      if (livingChildren == 0) {
        endLife()
      } else {
        context.children.foreach(_ ! PleaseKillYourself)
        context become waitForChildren(livingChildren)
      }
    case _ => super.unhandled(message)
  }

  protected[crystalball] def waitForChildren(livingChildren: Int): Receive = {
    case IKilledMyself =>
      val remaining = livingChildren - 1
      if (remaining == 0) { endLife() }
      else { context become waitForChildren(remaining) }
  }

  private def endLife(): Unit = {
    context.parent ! IKilledMyself
    context stop self
  }
}
But this sounds a bit hacky... Is there a better (non-hacky) solution?
I think designing your own termination procedure is not necessary and can potentially cause headaches for future maintainers of your code, as this is non-standard Akka behaviour that needs to be understood yet again.
If A and C are unrelated and can terminate independently, the following options are possible. To avoid any confusion, I'll use just actors A and B in my explanations.
Option 1.
Actor A uses context.watch on the newly created actor B and reacts to the Terminated message in its receive method. Actor B calls context.stop(context.self) when it's done with its task; this generates a Terminated event that actor A handles, cleaning up its own state if needed and terminating too.
Check these docs for more details.
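A minimal sketch of Option 1 (the StartTask message and the actor names are illustrative assumptions):
import akka.actor.{ Actor, Props, Terminated }

case object StartTask // assumed trigger message

class ChildB extends Actor {
  override def receive: Receive = {
    case StartTask =>
      // ... do the actual work here ...
      context.stop(self) // done: stopping generates Terminated for the watcher
  }
}

class ParentA extends Actor {
  private val b = context.actorOf(Props[ChildB], "b")
  context.watch(b) // be notified when B terminates

  override def receive: Receive = {
    case StartTask =>
      b ! StartTask
    case Terminated(`b`) =>
      // B is done; clean up if needed and terminate A as well
      context.stop(self)
  }
}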
Option 2.
Actor B calls context.stop(context.parent) to terminate the parent directly when it's done with its own task. This option does not give the parent a chance to react and perform additional clean-up if needed.
Finally, sharing this logic between actors A and C can be done with a trait the way you did, but the logic is very small and a little duplicated code is not always a bad thing.
So it took me a bit, but I found my answer.
I implemented the Reaper pattern.
The SuicideActor creates a dedicated Reaper actor when it finishes its block. The Reaper watches all of the SuicideActor's children, and once they have all terminated it sends a PoisonPill to the SuicideActor and to itself.
The SuicideActor code is :
trait SuicideActor extends Actor {
  def killSwitch(block: => Unit): Unit = {
    block
    Logger.info(s"Actor ${self.path.name} is commencing suicide sequence...")
    context become PartialFunction.empty
    val children = context.children
    val reaper = context.system.actorOf(ReaperActor.props(self), s"ReaperFor${self.path.name}")
    reaper ! Reap(children.toSeq)
  }

  override def postStop(): Unit = Logger.debug(s"Actor ${self.path.name} is dead.")
}
And the Reaper is:
object ReaperActor {
  case class Reap(underWatch: Seq[ActorRef])

  def props(supervisor: ActorRef): Props = {
    Props(new ReaperActor(supervisor))
  }
}

class ReaperActor(supervisor: ActorRef) extends Actor {
  override def preStart(): Unit = Logger.info(s"Reaper for ${supervisor.path.name} started")
  override def postStop(): Unit = Logger.info(s"Reaper for ${supervisor.path.name} ended")

  override def receive: Receive = {
    case Reap(underWatch) =>
      if (underWatch.isEmpty) {
        killLeftOvers
      } else {
        underWatch.foreach(context.watch)
        context become reapRemaining(underWatch.size)
        underWatch.foreach(_ ! PoisonPill)
      }
  }

  def reapRemaining(livingActorsNumber: Int): Receive = {
    case Terminated(_) =>
      val remainingActorsNumber = livingActorsNumber - 1
      if (remainingActorsNumber == 0) {
        killLeftOvers
      } else {
        context become reapRemaining(remainingActorsNumber)
      }
  }

  private def killLeftOvers = {
    Logger.debug(s"All children of ${supervisor.path.name} are dead, killing supervisor")
    supervisor ! PoisonPill
    self ! PoisonPill
  }
}
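A hypothetical usage sketch of the killSwitch helper above; the Supervisor actor, the StartJob message and the SomeWorker child are assumptions, not taken from the original code:
import akka.actor.{ Actor, Props }

case object StartJob // assumed trigger message

class Supervisor extends Actor with SuicideActor {
  override def receive: Receive = {
    case StartJob =>
      killSwitch {
        // spawn children and hand them their last batch of work; once this
        // block returns, the reaper takes over and shuts everything down
        // after the children have drained their mailboxes
        context.actorOf(Props[SomeWorker], "worker-1") ! StartJob
        context.actorOf(Props[SomeWorker], "worker-2") ! StartJob
      }
  }
}

class SomeWorker extends Actor { // assumed child actor
  override def receive: Receive = {
    case StartJob => // ... finish the work before the PoisonPill arrives ...
  }
}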
I've built a distributed streaming machine learning model using Akka's actor model. Training the model happens asynchronously by sending a training instance (data to train on) to an actor. Training on this data takes computation time and changes the state of the actor.
Currently I'm using historical data to train the model. I want to run a bunch of differently configured models that are trained on the same data and see how different ensemble metrics vary. Essentially, here is a much less complicated simulation of what I have going on, with Thread.sleep(1) and the data array representing computation time and state:
import akka.actor._
import akka.actor.ActorDSL._
import scala.util.Random

implicit val as = ActorSystem()

case object Report

case class Model(dataSize: Int) {
  val modelActor: ActorRef = actor(new Act {
    val data = Array.fill(dataSize)(0)
    become {
      case trainingData: Int => {
        // Screw with the state of the actor and pretend that it takes time
        Thread.sleep(1)
        data(Math.abs(Random.nextInt % dataSize)) = trainingData
      }
      case Report => {
        println(s"Finished $dataSize")
        context.stop(self)
      }
    }
  })

  def train(trainingInstance: Int) = modelActor ! trainingInstance
  def report: Unit = modelActor ! Report
}

val trainingData = Array.fill(5000)(Random.nextInt)
val dataSizeParams = (1 to 500)
Next I use a for loop to vary the parameters (represented by dataSizeParams):
for {
  param <- dataSizeParams
} {
  // make model with params
  val model = Model(param)
  for {
    trainingInstance <- trainingData
  } {
    model.train(trainingInstance)
  }
  model.report
}
The for loop is definitely the WRONG WAY to do what I'm trying to do. It kicks off all the different models in parallel. It works well when dataSizeParams are in the 1 to 500 range, but if I bump that up to something high, my models EACH start to take up noticeable chunks of memory. What I came up with is the code below. Essentially I have a model master that controls the number of models running at once based on the number of Run messages it has received. Each Model now contains a reference to this master actor and sends it a message when it is done processing:
// Alternative that doesn't use a for loop and instead controls concurrency
// through what I'm calling a master actor
case object ImDone
case object Run

case class Model(dataSize: Int, master: ActorRef) {
  val modelActor: ActorRef = actor(new Act {
    val data = Array.fill(dataSize)(0)
    become {
      case trainingData: Int => {
        // Screw with the state of the actor and pretend that it takes time
        Thread.sleep(1)
        data(Math.abs(Random.nextInt % dataSize)) = trainingData
      }
      case Report => {
        println(s"Finished $dataSize")
        master ! ImDone
        context.stop(self)
      }
    }
  })

  def train(trainingInstance: Int) = modelActor ! trainingInstance
  def report: Unit = modelActor ! Report
}

val master: ActorRef = actor(new Act {
  var paramRuns = dataSizeParams.toIterator
  become {
    case Run => {
      if (paramRuns.hasNext) {
        val model = Model(paramRuns.next(), self)
        for {
          trainingInstance <- trainingData
        } {
          model.train(trainingInstance)
        }
        model.report
      } else {
        println("No more to run")
        context.stop(self)
      }
    }
    case ImDone => {
      self ! Run
    }
  }
})

master ! Run
There isn't any problem with the master code (that I can see). I have tight control over the number of Models spawned at one time, but I feel like I'm missing a much easier/cleaner/out-of-the-box way of doing this. I was also wondering whether there are any neat ways to throttle the number of Models running at once, by say looking at the system's CPU and memory usage.
You're looking for the work pulling pattern. I highly suggest this blog post by the Akka devs:
http://letitcrash.com/post/29044669086/balancing-workload-across-nodes-with-akka-2
We use a variant of this on top of Akka's clustering features to avoid rogue concurrency. By having worker actors pull work instead of having a supervisor push it, you can gracefully control the amount of work (and therefore CPU and memory usage) simply by limiting the number of worker actors.
This has a few advantages over plain routers: it's easier to track failures (as outlined in that post) and work won't languish in a mailbox (where it could potentially be lost).
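A minimal sketch of the pulling idea, stripped of the clustering and failure handling from the blog post; the WorkAvailable/GiveMeWork/Work messages are illustrative assumptions:
import akka.actor.{ Actor, Props }
import scala.collection.immutable.Queue

// Illustrative protocol: the master announces work, idle workers ask for it.
case object WorkAvailable
case object GiveMeWork
case class Work(payload: Int)

class PullingWorker extends Actor {
  override def receive: Receive = {
    case WorkAvailable =>
      context.parent ! GiveMeWork
    case Work(payload) =>
      // ... do the (potentially long) work here ...
      context.parent ! GiveMeWork // pull the next piece only when free
  }
}

class PullingMaster(numberOfWorkers: Int) extends Actor {
  private var pending = Queue.empty[Work]

  // limiting the number of workers is what bounds concurrency (and memory/CPU)
  (1 to numberOfWorkers).foreach(i => context.actorOf(Props[PullingWorker], s"worker-$i"))

  override def receive: Receive = {
    case w: Work =>
      // new work arrives from outside; tell the workers something is available
      pending = pending.enqueue(w)
      context.children.foreach(_ ! WorkAvailable)

    case GiveMeWork =>
      // a worker asks for work only when it is free, so it is never overloaded
      pending.dequeueOption.foreach { case (work, rest) =>
        pending = rest
        sender() ! work
      }
  }
}
The number of PullingWorker instances is the only knob you need to turn to throttle how many models run at once.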
Also, if you're using remoting, I recommend that you not send large amounts of data in the message. Let the worker nodes pull the data from another source themselves when triggered. We use S3.
I'm getting all in a twist trying to get this to work. I'm new to Scala and to actors, so I may inadvertently be making bad design decisions - please tell me if so.
The setup is this:
I have a controlling actor which contains a number of worker actors. Each worker represents a calculation that, for a given input, will spit out 1..n outputs. The controller has to set off each worker, collect the returned outputs, then carry on and do a bunch more stuff once this is complete. This is how I approached it, using receive in the controller actor:
class WorkerActor extends Actor {
  def act() {
    loop {
      react {
        case DoJob =>
          for (_ <- 1 to n) sender ! Result
          sender ! Done
      }
    }
  }
}
The worker actor is simple enough - it spits out results until it's done, when it sends back a Done message.
class ControllerActor(val workers: List[WorkerActor]) extends Actor {
  def act() {
    workers.foreach(w => w ! DoJob)
    receiveResults(workers.size)
    // do a bunch of other stuff
  }

  def receiveResults(count: Int) {
    if (count == 0) return
    receive {
      case Result =>
        // do something with this result (that updates own mutable state)
        receiveResults(count)
      case Done =>
        receiveResults(count - 1)
    }
  }
}
The controller actor kicks off each of the workers, then recursively calls receive until it has received a Done message for each of the workers.
This works, but I need to create lots of the controller actors, so receive is too heavyweight - I need to replace it with react.
However, when I use react, the behind-the-scenes exception kicks in once the final Done message is processed, and the controller actor's act method is short-circuited, so none of the "//do a bunch of other stuff" that comes after happens.
I can make something happen after the final Done message by using andThen { } - but I actually need to do several sets of calculations in this manner, so I would end up with a ridiculously nested structure of andThen { andThen { andThen } }s.
I also want to hide away this complexity in a method, which would then be moved into a separate trait, such that a controller actor with a number of lists of worker actors can just be something like this:
class ControllerActor extends Actor with CalculatingTrait {
  // CalculatingTrait has the performCalculations method
  val listOne: List[WorkerActor]
  val listTwo: List[WorkerActor]

  def act {
    performCalculations(listOne)
    performCalculations(listTwo)
  }
}
So is there any way to stop the short-circuiting of the act method in the performCalculations method? Is there a better design approach I could be taking?
You can avoid react/receive entirely by using Akka actors. Here's what your implementation could look like:
import akka.actor._

class WorkerActor extends Actor {
  def receive = {
    case DoJob =>
      for (_ <- 1 to n) sender ! Result
      sender ! Done
  }
}

class ControllerActor(workers: List[ActorRef]) extends Actor {
  private[this] var countdown = workers.size

  override def preStart() {
    workers.foreach(_ ! DoJob)
  }

  def receive = {
    case Result =>
      // do something with this result
    case Done =>
      countdown -= 1
      if (countdown == 0) {
        // do a bunch of other stuff

        // It looks like your controllers die when the workers
        // are done, so I'll do the same.
        self ! PoisonPill
      }
  }
}
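A hypothetical wiring sketch for the Akka version above (the worker count and names are assumptions; DoJob, Result and Done come from the question):
import akka.actor.{ ActorSystem, Props }

object CalculationMain extends App {
  val system = ActorSystem("calculations")

  // four workers, each answering DoJob with some Results followed by Done
  val workers = List.fill(4)(system.actorOf(Props[WorkerActor]))

  // the controller kicks the workers off in preStart and collects the replies
  system.actorOf(Props(classOf[ControllerActor], workers), "controller")
}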
Here's how I might approach it (in a way that seems to be more comments and boilerplate than actual content):
class WorkerController(val workerCriteria: List[WorkerCriteria]) {
  // The actors that only _I_ interact with are probably no one else's business
  // Your call, though
  val workers = generateWorkers(workerCriteria)

  // No need for an `act` method--no need for this to even be an actor

  /* Will send `DoJob` to each actor, expecting a reply from each.
   * Could also use the `!!` operator (instead of `!?`) if you wanted
   * them to return futures (so WorkerController could continue doing other
   * things while the results compute). The futures could then be evaluated
   * with `results map (_())`, which will _then_ halt execution to wait for each
   * future that isn't already computed (if any).
   */
  val results = workers map (_ !? DoJob)

  // do a bunch of other stuff with your results

  def generateWorkers(criteria: List[WorkerCriteria]) = ??? // Create some workers!
}

class Worker extends Actor {
  def act() {
    loop {
      react {
        case DoJob =>
          // Will generate a result and send it back to the caller
          reply(generateResult)
      }
    }
  }

  def generateResult = ??? // Result?
}
EDIT: I have just been reading about Akka actors and spotted that they "guarantee message order on a per sender basis". So I updated my example such that, if the controller later needed to ask the receiver for the computed value and be sure it was all complete, it could do so with only a per-sender message-order guarantee (the example is still Scala actors, not Akka).
It finally hit me, with a bit of help from @Destin's answer, that I could make it a lot simpler by separating the part of the controller responsible for kicking off the workers from the part responsible for accepting and using the results. Single responsibility principle, I suppose... Here's what I did (separating the original controlling actor into a controlling class and a 'receiver' actor):
case class DoJob(receiver: Actor)
case object Result
case object JobComplete
case object Acknowledged
case object Done

class Worker extends Actor {
  def act {
    loop {
      react {
        case DoJob(receiver) =>
          receiver ! Result
          receiver ! Result
          receiver !? JobComplete match {
            case Acknowledged =>
              sender ! Done
          }
      }
    }
  }
}

class Receiver extends Actor {
  def act {
    loop {
      react {
        case Result => println("Got result!")
        case JobComplete => sender ! Acknowledged
      }
    }
  }
}

class Controller {
  val receiver = new Receiver
  val workers = List(new Worker, new Worker, new Worker)

  receiver.start()
  workers.foreach(_.start())

  workers.map(_ !! DoJob(receiver)).map(_())

  println("All the jobs have been done")
}