Checking message queue by actor in between calculations in scala/akka - scala

I have an actor that, when he receives one message, starts to do some calculations in a loop, and he does them for some time (like 100 iterations of the same work). Now I need him to react to other messages that may come ASAP. The best way would be to add some instruction in his loop like "if there is a message in the queue, react and then return here", but I haven't seen such functionality.
I thought that the actor could send a message to himself instead of doing a loop; then such messages would be queued at the end and he would react to other ones in between, but I've heard that communication is bad (much more time consuming than calculations) and I don't know if communication with self counts as such.
My question is: what do you think about such a solution, and do you have any other ideas how to handle communication in between calculations?

Time-consuming computation should not be done inside the actor's receive method, as it reduces the responsiveness of the system. Put the computation in a Future, Task, or other asynchronous construct, and send a message to the actor when the computation completes. The actor can continue to process messages ASAP while the computation runs on a different thread.
This gets more complicated if the actor needs to modify the computation while it is running (in response to messages) but the solution depends on what the computation is and what kind of modification is needed, so it isn't really possible to give a general answer.
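A minimal sketch of that approach in classic Akka (the message types, actor name, and the stand-in calculation are illustrative assumptions, not from the question):
import akka.actor.{Actor, ActorLogging}
import akka.pattern.pipe
import scala.concurrent.Future

// Hypothetical messages for illustration
case class StartWork(input: Int)
case class WorkDone(result: Int)

class Calculator extends Actor with ActorLogging {
  import context.dispatcher // consider a dedicated dispatcher for CPU-heavy work

  def receive: Receive = {
    case StartWork(input) =>
      // Run the heavy loop off the actor's thread and pipe the result back as a message
      Future(heavyCalculation(input)).map(WorkDone.apply).pipeTo(self)
    case WorkDone(result) =>
      log.info(s"calculation finished with $result")
    case other =>
      log.info(s"still responsive, got $other")
  }

  // Stand-in for the real 100-iteration loop
  private def heavyCalculation(input: Int): Int =
    (1 to 100).foldLeft(input)((acc, i) => acc + i)
}
The important constraint is that the Future must not touch the actor's mutable state; it should only compute a value that comes back to the actor as a message.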

In general in Akka you want to limit the amount of work done "per unit", where a unit in this case is:
an actor processing a message
work done in a Future/Task or a callback of the same
Overlong work units can easily limit the responsiveness of the entire system by hogging a thread. Tasks which aren't consuming CPU but are blocked waiting on I/O can be executed in a different thread pool, but that doesn't really help for CPU-bound work.
So the broad approach, if you're doing a loop, is to suspend the loop's state into a message and send it to yourself. It introduces a small performance hit (the latency of constructing the message, sending it to yourself (a guaranteed-to-be local send), and destructuring it is likely going to be on the order of microseconds when the system is otherwise idle), but can improve overall system latency.
For example, imagine we have an actor which will calculate the nth fibonacci number. I'm implementing this using Akka Typed, but the broad principle applies in Classic:
import akka.actor.typed.{ActorRef, Behavior}
import akka.actor.typed.scaladsl.Behaviors
import scala.collection.immutable.SortedMap

object Fibonacci {
  sealed trait Command
  case class SumOfFirstN(n: Int, replyTo: ActorRef[Option[Long]]) extends Command

  private object Internal {
    case class Iterate(i: Int, a: Long, b: Long) extends Command
    val initialIterate = Iterate(1, 0, 1)
  }

  // Entry point: no requests are waiting initially
  def apply(): Behavior[Command] = State(SortedMap.empty).behavior

  case class State(waiting: SortedMap[Int, Set[ActorRef[Option[Long]]]]) {
    def behavior: Behavior[Command] =
      Behaviors.receive { (context, msg) =>
        msg match {
          case SumOfFirstN(n, replyTo) =>
            if (n < 1) {
              replyTo ! None
              Behaviors.same
            } else {
              if (waiting.isEmpty) {
                context.self ! Internal.initialIterate
              }
              val nextWaiting =
                waiting.updated(n, waiting.get(n).fold(Set(replyTo))(_.incl(replyTo)))
              copy(waiting = nextWaiting).behavior
            }

          case Internal.Iterate(i, a, b) =>
            // the ith fibonacci number is b, the (i-1)th is a
            if (waiting.rangeFrom(i).isEmpty) {
              // Nobody is waiting for this run to complete; restart if requests remain
              if (waiting.nonEmpty) {
                context.self ! Internal.initialIterate
              }
              Behaviors.same
            } else {
              var nextWaiting = waiting
              var nextA = a
              var nextB = b

              (1 to 10).foreach { x =>
                val next = nextA + nextB
                nextWaiting.get(x + i).foreach { waiters =>
                  waiters.foreach(_ ! Some(next))
                }
                nextWaiting = nextWaiting.removed(x + i)
                nextA = nextB
                nextB = next
              }

              context.self ! Internal.Iterate(i + 10, nextA, nextB)
              copy(waiting = nextWaiting).behavior
            }
        }
      }
  }
}
Note that multiple requests (if sufficiently temporally close) for the same number will only be computed once, and temporally close requests for intermediate results will result in no extra computation.
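A usage sketch, assuming the apply() entry point shown above (the system name, timeout, and n here are arbitrary):
import akka.actor.typed.ActorSystem
import akka.actor.typed.scaladsl.AskPattern._
import akka.util.Timeout
import scala.concurrent.{ExecutionContext, Future}
import scala.concurrent.duration._

object FibonacciMain extends App {
  implicit val system: ActorSystem[Fibonacci.Command] = ActorSystem(Fibonacci(), "fib")
  implicit val timeout: Timeout = 3.seconds
  implicit val ec: ExecutionContext = system.executionContext

  // ask needs the implicit Timeout and a Scheduler, which AskPattern derives from the typed system
  val result: Future[Option[Long]] =
    system.ask(replyTo => Fibonacci.SumOfFirstN(10, replyTo))

  result.foreach { r =>
    println(s"result for n = 10: $r")
    system.terminate()
  }
}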

One option is to delegate the task to, e.g., a Future, and use a separate ExecutionContext with a fixed pool size (configurable in application.conf) equal to the number of CPU cores, so that the computations are done efficiently using the available cores. As mentioned by @Tim, you could notify the main actor once the computation is complete.
Another option is to make another actor behind a router do the computation while restricting the number of routees to the number of CPUs.
A simplistic sample:
import akka.actor.{Actor, ActorLogging, ActorRef, ActorSystem, Props}
import akka.routing.RoundRobinPool

object DelegatingSystem extends App {
  val sys = ActorSystem("DelegatingSystem")

  case class TimeConsuming(i: Int)
  case object Other

  class Worker extends Actor with ActorLogging {
    override def receive: Receive = {
      case message =>
        Thread.sleep(1000) // simulate a long computation
        log.info(s"$self computed long $message")
    }
  }

  class Delegator extends Actor with ActorLogging {
    // Set the number of routees to be equal to the number of CPUs/cores
    val router: ActorRef = context.actorOf(RoundRobinPool(2).props(Props[Worker]))

    override def receive: Receive = {
      case message: TimeConsuming => router ! message
      case _                      => log.info("process other messages")
    }
  }

  val delegator = sys.actorOf(Props[Delegator])
  delegator ! TimeConsuming(1)
  delegator ! Other
  delegator ! TimeConsuming(2)
  delegator ! Other
  delegator ! TimeConsuming(3)
  delegator ! Other
  delegator ! TimeConsuming(4)
}
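Since the Worker blocks its thread with Thread.sleep, it may also be worth running the routees on their own dispatcher so the default dispatcher stays free for other actors. A sketch of the change, assuming a dispatcher named computation-dispatcher with a fixed-pool-size is defined in application.conf (the name and size are illustrative):
// In Delegator, replace the router definition with one that pins the routees
// to a dedicated, fixed-size dispatcher, e.g. configured as:
//
//   computation-dispatcher {
//     type = Dispatcher
//     executor = "thread-pool-executor"
//     thread-pool-executor { fixed-pool-size = 2 }
//   }
val router: ActorRef = context.actorOf(
  RoundRobinPool(2).props(Props[Worker].withDispatcher("computation-dispatcher"))
)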

Related

How do I call context become outside of a Future from Ask messages?

I have a parent akka actor named buildingCoordinator which creates children named elevator_X. For now I am creating only one elevator. The buildingCoordinator sends a sequence of messages and waits for responses in order to move an elevator. The sequence is this: sends ? RequestElevatorState -> receives ElevatorState -> sends ? MoveRequest -> receives MoveRequestSuccess -> changes the state. As you can see I am using the ask pattern. After the movement succeeds, the buildingCoordinator changes its state using context.become.
The problem that I am running into is that the elevator is receiving MoveRequest(1,4) for the same floor twice, sometimes three times. I do remove the floor when I call context.become, however I remove it inside the last map. I think it is because I am using context.become inside a Future and I should use it outside, but I am having trouble implementing it.
case class BuildingCoordinator(actorName: String,
                               numberOfFloors: Int,
                               numberOfElevators: Int,
                               elevatorControlSystem: ElevatorControlSystem)
  extends Actor with ActorLogging {

  import context.dispatcher
  implicit val timeout = Timeout(4 seconds)

  val elevators = createElevators(numberOfElevators)

  override def receive: Receive = operational(Map[Int, Queue[Int]](), Map[Int, Queue[Int]]())

  def operational(stopsRequests: Map[Int, Queue[Int]], pickUpRequests: Map[Int, Queue[Int]]): Receive = {
    case msg @ MoveElevator(elevatorId) =>
      println(s"[BuildingCoordinator] received $msg")
      val elevatorActor: ActorSelection = context.actorSelection(s"/user/$actorName/elevator_$elevatorId")
      val newState = (elevatorActor ? RequestElevatorState(elevatorId))
        .mapTo[ElevatorState]
        .flatMap { state =>
          val nextStop = elevatorControlSystem.findNextStop(stopsRequests.get(elevatorId).get, state.currentFloor, state.direction)
          elevatorActor ? MoveRequest(elevatorId, nextStop)
        }
        .mapTo[MoveRequestSuccess]
        .flatMap(moveRequestSuccess => elevatorActor ? MakeMove(elevatorId, moveRequestSuccess.targetFloor))
        .mapTo[MakeMoveSuccess]
        .map { makeMoveSuccess =>
          println(s"[BuildingCoordinator] Elevator ${makeMoveSuccess.elevatorId} arrived at floor [${makeMoveSuccess.floor}]")
          // removeStopRequest
          val stopsRequestsElevator = stopsRequests.get(elevatorId).getOrElse(Queue[Int]())
          val newStopsRequestsElevator = stopsRequestsElevator.filterNot(_ == makeMoveSuccess.floor)
          val newStopsRequests = stopsRequests + (elevatorId -> newStopsRequestsElevator)

          val pickUpRequestsElevator = pickUpRequests.get(elevatorId).getOrElse(Queue[Int]())
          val newPickUpRequestsElevator = {
            if (pickUpRequestsElevator.contains(makeMoveSuccess.floor)) {
              pickUpRequestsElevator.filterNot(_ == makeMoveSuccess.floor)
            } else {
              pickUpRequestsElevator
            }
          }
          val newPickUpRequests = pickUpRequests + (elevatorId -> newPickUpRequestsElevator)

          // I THINK I SHOULD NOT CALL context.become HERE
          // context.become(operational(newStopsRequests, newPickUpRequests))

          val dropOffFloor = BuildingUtil.generateRandomFloor(numberOfFloors, makeMoveSuccess.floor, makeMoveSuccess.direction)
          context.self ! DropOffRequest(makeMoveSuccess.elevatorId, dropOffFloor)
          (newStopsRequests, newPickUpRequests)
        }
      // I MUST CALL context.become HERE, BUT I DON'T KNOW HOW
      // context.become(operational(newState.flatMap(state => (state._1, state._2))))
  }
}
Another thing that might be nasty here is this big chain of map and flatMap. This was my way of implementing it, but I think a better way might exist.
You can't, and you should not, call context.become or otherwise change actor state outside the Receive method and outside the thread that invokes the Receive method (which is an Akka dispatcher thread), like in your example. E.g.:
def receive: Receive = {
  // This is a bug, because context is not and is not supposed to be thread safe.
  case message: Message => Future(context.become(anotherReceive))
}
What you should do is send a message to self after the async operation has finished, and change the receive state then. If in the meantime you don't want to handle incoming messages, you can stash them. See for more details: https://doc.akka.io/docs/akka/current/typed/stash.html
High level example, technical details omitted:
import akka.actor.{Actor, Stash}
import akka.pattern.pipe
import scala.concurrent.Future

case class OperationFinished(calculations: Map[Any, Any])

class AsyncActor extends Actor with Stash {
  import context.dispatcher

  def operation: Future[Map[Any, Any]] = ??? // some implementation of a heavy async operation

  // initial state
  def receive: Receive = receiveStartAsync(Map.empty)

  def receiveStartAsync(calculations: Map[Any, Any]): Receive = {
    case StartAsyncOperation =>
      // Start the async operation and inform yourself when it is finished
      operation.map(OperationFinished.apply) pipeTo self
      context.become(receiveWaitAsyncOperation)
  }

  def receiveWaitAsyncOperation: Receive = {
    case OperationFinished(calculations) =>
      unstashAll()
      context.become(receiveStartAsync(calculations))
    case _ => stash()
  }
}
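Applied to the elevator question above, a rough sketch (reusing the question's types; StateUpdated is an invented wrapper message, and akka.pattern.pipe is assumed to be imported) would be to pipe the computed maps back to self and call context.become only from the message handler:
// Invented wrapper for the result of the ask chain
case class StateUpdated(stopsRequests: Map[Int, Queue[Int]],
                        pickUpRequests: Map[Int, Queue[Int]])

def operational(stopsRequests: Map[Int, Queue[Int]], pickUpRequests: Map[Int, Queue[Int]]): Receive = {
  case msg @ MoveElevator(elevatorId) =>
    // ...the same ask/mapTo/flatMap chain as in the question, but without any context.become inside it...
    val newState: Future[(Map[Int, Queue[Int]], Map[Int, Queue[Int]])] = ???
    // Send the result back to yourself as an ordinary message
    newState.map { case (stops, pickUps) => StateUpdated(stops, pickUps) }.pipeTo(self)

  case StateUpdated(stops, pickUps) =>
    // We are back on the actor's own dispatcher thread here, so become is safe
    context.become(operational(stops, pickUps))
}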
I like your response, @Ivan Kurchenko.
But, according to the Akka Stash docs:
When unstashing the buffered messages by calling unstashAll the messages will be processed sequentially in the order they were added and all are processed unless an exception is thrown. The actor is unresponsive to other new messages until unstashAll is completed. That is another reason for keeping the number of stashed messages low. Actors that hog the message processing thread for too long can result in starvation of other actors.
Meaning that under load, for example, the unstashAll operation can starve all the other actors.
According to the same doc:
That can be mitigated by using the StashBuffer.unstash with numberOfMessages parameter and then send a message to context.self before continuing unstashing more. That means that other new messages may arrive in-between and those must be stashed to keep the original order of messages. It becomes more complicated, so better keep the number of stashed messages low.
Bottom line: you should keep the stashed message count low. This approach might not be suitable for high-load situations.

Akka Cluster - Recover data from a crashed actor

Let's suppose I have a master with n slaves, where the slaves are located on other computers.
The master actor looks like:
class MasterActor extends Actor {
  val router: ActorRef = ??? // ... initialized router to contact the slaves

  override def receive: Receive = {
    case work: DoWork => router ! work
  }
}
While my slaves look like
class SlaveActor extends Actor {
  override def receive: Receive = {
    case work: DoWork => // Some logic that takes a couple of seconds
  }
}
In the case where a machine hosting some slaves crashes, or the app is stopped while processing some work, I expect a fallback mechanism that enables the system (I suppose the master) to be aware of that failure and then redistribute the lost work to the other slaves.
I understand the principle of supervision in Akka, which enables the master to get notified when a child actor is unreachable, but how can I get back the specific piece of work that the actor had, in order to redistribute it?
As I started using Akka very recently, my approach is perhaps not aligned with best practices; should I solve this situation in another way?
Thanks!
Now, this approach won't work in every scenario, but how about storing each child actor's work on the file system? You can provide an id to each child and save the work as a key -> value pair, where the child actor id is the key and the value is the work in progress.
What data has to be saved is up to you to decide, but this is one of the simplest approaches you can try.
Hope this helps!
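A minimal sketch of that idea, assuming the work can be serialized to a String (the directory and helper names are made up for illustration):
import java.nio.charset.StandardCharsets
import java.nio.file.{Files, Path, Paths}

object WorkCheckpoint {
  private val dir: Path = Paths.get("work-checkpoints") // assumed location
  Files.createDirectories(dir)

  // Save the in-progress work for a given child actor id
  def save(childId: String, serializedWork: String): Unit = {
    Files.write(dir.resolve(childId), serializedWork.getBytes(StandardCharsets.UTF_8))
  }

  // Load it back after a crash, if anything was saved
  def load(childId: String): Option[String] = {
    val file = dir.resolve(childId)
    if (Files.exists(file)) Some(new String(Files.readAllBytes(file), StandardCharsets.UTF_8))
    else None
  }

  // Remove the checkpoint once the work is done
  def clear(childId: String): Unit = {
    Files.deleteIfExists(dir.resolve(childId))
  }
}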
It sounds like what you want is Akka Persistence. The main model for this is event sourcing.
A rough outline would be to persist one event when receiving a DoWork and a second event when finishing the work. On recovery, the state is then a list of unfinished work (which will, if you get the protocol right, be at most one piece of work), along these lines:
import akka.persistence.{PersistentActor, SnapshotOffer}

case class DidWork(dw: DoWork)

class SlaveActor extends PersistentActor {
  val persistenceId: String = ???

  var workToDo: List[DoWork] = Nil

  val receiveRecover: Receive = {
    case dw: DoWork => workToDo = dw :: workToDo
    case DidWork(dw: DoWork) => workToDo = workToDo.filter(_ != dw)
    case SnapshotOffer(_, snapshot: List[DoWork]) => workToDo = snapshot
  }

  val snapshotInterval: Long = ???

  val receiveCommand: Receive = {
    case dw: DoWork =>
      persist(dw) { event =>
        workToDo = dw :: workToDo
        context.system.eventStream.publish(event)
        if (lastSequenceNr % snapshotInterval == 0 && lastSequenceNr != 0)
          saveSnapshot(workToDo)
        doWork
      }
  }

  private[this] def doWork: Unit = {
    workToDo.reverse.foreach { dw: DoWork =>
      // do the work
      persist(DidWork(dw)) { event =>
        workToDo = workToDo.tail
        context.system.eventStream.publish(event)
        if (lastSequenceNr % snapshotInterval == 0 && lastSequenceNr != 0)
          saveSnapshot(workToDo)
      }
    }
  }
}
If you want to make sure that the master knows about workers going away, you can watch them (see the docs: https://doc.akka.io/docs/akka/current/actors.html#lifecycle-monitoring-aka-deathwatch). If you keep track of which not-yet-completed workload you have assigned to which worker, you can implement resending. However, you most likely also have to consider the case where the master or its actor system goes away as well, and how to recover if that happens.
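A minimal DeathWatch sketch along those lines, reusing the question's DoWork message (the bookkeeping and the least-loaded assignment are illustrative assumptions; a real master would also need a "work done" message so completed work can be removed from the assigned map):
import akka.actor.{Actor, ActorLogging, ActorRef, Terminated}

class WatchingMaster(initialWorkers: List[ActorRef]) extends Actor with ActorLogging {
  private var workers: List[ActorRef] = initialWorkers
  // Work assigned to each worker and not yet confirmed as done
  private var assigned: Map[ActorRef, List[DoWork]] = Map.empty

  override def preStart(): Unit = workers.foreach(context.watch)

  def receive: Receive = {
    case work: DoWork if workers.nonEmpty =>
      val worker = workers.minBy(w => assigned.getOrElse(w, Nil).size) // least-loaded worker
      assigned = assigned.updated(worker, work :: assigned.getOrElse(worker, Nil))
      worker ! work

    case Terminated(worker) =>
      // A watched worker is gone: drop it and redistribute its unfinished work
      val lost = assigned.getOrElse(worker, Nil)
      workers = workers.filterNot(_ == worker)
      assigned -= worker
      log.warning(s"$worker terminated, resending ${lost.size} piece(s) of work")
      lost.foreach(self ! _)
  }
}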
The Akka distributed workers sample covers exactly this so it could be a good source of inspiration: https://developer.lightbend.com/guides/akka-distributed-workers-scala/

Akka: The order of responses

My demo app is simple. Here is an actor:
class CounterActor extends Actor {
  @volatile private[this] var counter = 0

  def receive: PartialFunction[Any, Unit] = {
    case Count(id) ⇒ sender ! self ? Increment(id)
    case Increment(id) ⇒ sender ! {
      counter += 1
      println(s"req_id=$id, counter=$counter")
      counter
    }
  }
}
The main app:
sealed trait ActorMessage
case class Count(id: Int = 0) extends ActorMessage
case class Increment(id: Int) extends ActorMessage

object CountingApp extends App {
  // Get incremented counter
  val future0 = counter ? Count(1)
  val future1 = counter ? Count(2)
  val future2 = counter ? Count(3)
  val future3 = counter ? Count(4)
  val future4 = counter ? Count(5)

  // Handle response
  handleResponse(future0)
  handleResponse(future1)
  handleResponse(future2)
  handleResponse(future3)
  handleResponse(future4)

  // Bye!
  exit()
}
My handler:
def handleResponse(future: Future[Any]): Unit = {
  future.onComplete {
    case Success(f) => f.asInstanceOf[Future[Any]].onComplete {
      case x => x match {
        case Success(n) => println(s" -> $n")
        case Failure(t) => println(s" -> ${t.getMessage}")
      }
    }
    case Failure(t) => println(t.getMessage)
  }
}
If I run the app I'll see the next output:
req_id=1, counter=1
req_id=2, counter=2
req_id=3, counter=3
req_id=4, counter=4
req_id=5, counter=5
-> 4
-> 1
-> 5
-> 3
-> 2
The order of handled responses is random. Is this normal behaviour? If not, how can I make it ordered?
PS
Do I need volatile var in the actor?
PS2
Also, I'm looking for some more convenient logic for handleResponse, because matching here is very ambiguous...
normal behavior?
Yes, this is absolutely normal behavior.
Your Actor is receiving the Count increments in the order you sent them but the Futures are being completed via submission to an underlying thread pool. It is that indeterminate ordering of Future-thread binding that is resulting in the out of order println executions.
how can I make it ordered?
If you want ordered execution of Futures then that is synonymous with synchronous programming, i.e. no concurrency at all.
do I need volatile?
The state of an Actor is only accessible within the Actor itself. That is why users of the Actor never get an actual Actor object; they only get an ActorRef, e.g. val actorRef = actorSystem actorOf Props[Actor]. This is partially to ensure that users of Actors never have the ability to change an Actor's state except through messaging. From the docs:
The good news is that Akka actors conceptually each have their own
light-weight thread, which is completely shielded from the rest of the
system. This means that instead of having to synchronize access using
locks you can just write your actor code without worrying about
concurrency at all.
Therefore, you don't need volatile.
more convenient logic
For more convenient logic I would recommend Agents, which are a kind of typed Actor with a simpler message framework. From the docs:
import scala.concurrent.ExecutionContext.Implicits.global
import akka.agent.Agent
val agent = Agent(5)
val result = agent()
val result = agent.get
agent send 7
agent send (_ + 1)
Reads are synchronous but instantaneous. Writes are asynch. This means any time you do a read you don't have to worry about Futures because the internal value returns immediately. But definitely read the docs because there are more complicated tricks you can play with the queueing logic.
The real trouble in your approach is not the asynchronous nature but the overcomplicated logic.
And despite the pretty answer from Ramon, which I +1'd, yes, there is a way to ensure order in some parts of Akka. As we can read in the docs, there is a message ordering guarantee per sender–receiver pair.
It means that for each one-way channel between two actors there is a guarantee that messages will be delivered in the order they were sent.
But there is no such guarantee for the completion order of the Futures you are using to handle the answers. And sending the Future returned by ask as a message to the original sender is rather strange.
Things you can do:
redefine your Increment as
case class Increment(id: Int, requester: ActorRef) extends ActorMessage
so the handler knows the original requester
modify CounterActor's receive as
def receive: Receive = {
  case Count(id) ⇒ self ! Increment(id, sender)
  case Increment(id, snd) ⇒ snd ! {
    counter += 1
    println(s"req_id=$id, counter=$counter")
    counter
  }
}
simplify your handleResponse to
def handleResponse(future: Future[Any]): Unit = {
  future.onComplete {
    case Success(n: Int) => println(s" -> $n")
    case Failure(t) => println(t.getMessage)
  }
}
Now you can probably see that messages are received back in the same order.
I said probably because handling still occurs in Future.onComplete, so we need another actor to ensure the order.
Let's define an additional message:
case object StartCounting
And the actor itself:
class SenderActor extends Actor {
  // create the counter as a child of this actor
  val counter = context.actorOf(Props[CounterActor])

  def receive: Actor.Receive = {
    case n: Int => println(s" -> $n")
    case StartCounting =>
      counter ! Count(1)
      counter ! Count(2)
      counter ! Count(3)
      counter ! Count(4)
      counter ! Count(5)
  }
}
In your main you can now just write
val sender = system.actorOf(Props[SenderActor])
sender ! StartCounting
And throw away that handleResponse method.
Now you definitely should see your message handling in the right order.
We've implemented the whole logic without a single ask, and that's good.
So the magic rule is: leave handling responses to actors, and only get final results from them via ask.
Note there is also forward method but this creates proxy actor so message ordering will be broken again.

Controlling spawning of actors in Akka who consume noticeable amounts of memory

I've built a distributed streaming machine learning model using akka's actor model. Training the model happens asynchronously by sending a Training Instance (Data to train on) to an Actor. Training on this data takes up computation time and changes the state of the actor.
Currently I'm using historical data to train the model. I want to run a bunch of differently configured models that are trained on the same data and see how different ensemble metrics vary. Essentially here is a much less complicated simulation of what I have going on with the Thread.sleep(1) and the data Array representing computation time and state.
import akka.actor.{ActorRef, ActorSystem}
import akka.actor.ActorDSL._
import scala.util.Random

implicit val as = ActorSystem()

case object Report

case class Model(dataSize: Int) {
  val modelActor: ActorRef = actor(new Act {
    val data = Array.fill(dataSize)(0)
    become {
      case trainingData: Int => {
        // Screw with the state of the actor and pretend that it takes time
        Thread.sleep(1)
        data(Math.abs(Random.nextInt % dataSize)) = trainingData
      }
      case Report => {
        println(s"Finished $dataSize")
        context.stop(self)
      }
    }
  })

  def train(trainingInstance: Int) = modelActor ! trainingInstance
  def report: Unit = modelActor ! Report
}

val trainingData = Array.fill(5000)(Random.nextInt)
val dataSizeParams = (1 to 500)
Next I use a for loop to vary the parameters (represented by dataSizeParams):
for {
  param <- dataSizeParams
} {
  // make a model with the given params
  val model = Model(param)
  for {
    trainingInstance <- trainingData
  } {
    model.train(trainingInstance)
  }
  model.report
}
The for loop is definitely the WRONG WAY to do what I'm trying to do. It kicks off all the different models in parallel. It works well when the dataSizeParams are in the 1 to 500 range, but if I bump that up to something high my Models EACH start to take up noticeable chunks of memory. What I came up with is the code below. Essentially I have a Model master who can control the number of models running at once based on the number of Run messages he has received. Each Model now contains a reference to this master actor and sends him a message when he is done processing:
// Alternative that doesn't use a for loop and instead controls concurrency through what I'm calling a master actor
case object ImDone
case object Run

case class Model(dataSize: Int, master: ActorRef) {
  val modelActor: ActorRef = actor(new Act {
    val data = Array.fill(dataSize)(0)
    become {
      case trainingData: Int => {
        // Screw with the state of the actor and pretend that it takes time
        Thread.sleep(1)
        data(Math.abs(Random.nextInt % dataSize)) = trainingData
      }
      case Report => {
        println(s"Finished $dataSize")
        master ! ImDone
        context.stop(self)
      }
    }
  })

  def train(trainingInstance: Int) = modelActor ! trainingInstance
  def report: Unit = modelActor ! Report
}

val master: ActorRef = actor(new Act {
  var paramRuns = dataSizeParams.toIterator
  become {
    case Run => {
      if (paramRuns.hasNext) {
        val model = Model(paramRuns.next(), self)
        for {
          trainingInstance <- trainingData
        } {
          model.train(trainingInstance)
        }
        model.report
      } else {
        println("No more to run")
        context.stop(self)
      }
    }
    case ImDone => {
      self ! Run
    }
  }
})
master ! Run
There isn't any problem with the master code (that I can see). I have tight control over the number of Models spawned at one time, but I feel like I'm missing a much easier/cleaner/out-of-the-box way of doing this. Also, I was wondering if there are any neat ways to throttle the number of Models running at once, by, say, looking at the system's CPU and memory usage.
You're looking for the work pulling pattern. I highly suggest this blog post by the Akka devs:
http://letitcrash.com/post/29044669086/balancing-workload-across-nodes-with-akka-2
We use a variant of this on top of Akka's clustering features to avoid rogue concurrency. By having worker actors pull work instead of having a supervisor push work, you can gracefully control the amount of work (and therefore, CPU and memory usage) by simply limiting the number of worker actors.
This has a few advantages over pure routers: it's easier to track failures (as outlined in that post) and work won't languish in a mailbox (which can potentially be lost).
Also, if you're using remoting, I recommend that you not send large amounts of data in the message. Let the worker nodes pull the data from another source themselves when triggered. We use S3.
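A stripped-down, single-node sketch of the idea (the message names and fixed worker count are made up; the linked post covers the full pattern, including failure handling and clustering):
import akka.actor.{Actor, ActorRef, ActorSystem, Props}
import scala.collection.mutable

// Hypothetical protocol
case class Job(payload: Int)
case object WorkerReady

class PullingMaster extends Actor {
  private val pending = mutable.Queue.empty[Job]
  private val idleWorkers = mutable.Queue.empty[ActorRef]

  def receive: Receive = {
    case job: Job =>
      if (idleWorkers.nonEmpty) idleWorkers.dequeue() ! job
      else pending.enqueue(job) // hold work until a worker asks for it

    case WorkerReady =>
      if (pending.nonEmpty) sender() ! pending.dequeue()
      else idleWorkers.enqueue(sender()) // remember the worker until work arrives
  }
}

class PullingWorker(master: ActorRef) extends Actor {
  override def preStart(): Unit = master ! WorkerReady

  def receive: Receive = {
    case Job(payload) =>
      // do the CPU-heavy work here; memory and CPU use are bounded by the worker count
      println(s"worker ${self.path.name} processed $payload")
      master ! WorkerReady // pull the next job only when this one is done
  }
}

object WorkPullingDemo extends App {
  val system = ActorSystem("work-pulling")
  val master = system.actorOf(Props[PullingMaster], "master")
  // Concurrency is capped by the number of workers you start
  (1 to 2).foreach(i => system.actorOf(Props(new PullingWorker(master)), s"worker-$i"))
  (1 to 10).foreach(i => master ! Job(i))
}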

Is it possible to use 'react' to wait for a number of child actors to complete, then continue afterwards?

I'm getting all in a twist trying to get this to work. I'm new to Scala and to actors, so I may inadvertently be making bad design decisions - please tell me if so.
The setup is this:
I have a controlling actor which contains a number of worker actors. Each worker represents a calculation that for a given input will spit out 1..n outputs. The controller has to set off each worker, collect the returned outputs, then carry on and do a bunch more stuff once this is complete. This is how I approached it using receive in the controller actor:
class WorkerActor extends Actor {
  def act() {
    loop {
      react {
        case DoJob =>
          for (_ <- 1 to n) sender ! Result
          sender ! Done
      }
    }
  }
}
The worker actor is simple enough - it spits out results until it's done, when it sends back a Done message.
class ControllerActor(val workers: List[WorkerActor]) extends Actor {
  def act() {
    workers.foreach(w => w ! DoJob)
    receiveResults(workers.size)
    // do a bunch of other stuff
  }

  def receiveResults(count: Int) {
    if (count == 0) return
    receive {
      case Result =>
        // do something with this result (that updates own mutable state)
        receiveResults(count)
      case Done =>
        receiveResults(count - 1)
    }
  }
}
The controller actor kicks off each of the workers, then recursively calls receive until it has received a Done message for each of the workers.
This works, but I need to create lots of the controller actors, so receive is too heavyweight - I need to replace it with react.
However, when I use react, the behind-the-scenes exception kicks in once the final Done message is processed, and the controller actor's act method is short-circuited, so none of the "//do a bunch of other stuff" that comes after happens.
I can make something happen after the final Done message by using andThen { } - but I actually need to do several sets of calculations in this manner so would end up with a ridiculously nested structure of andThen { andThen { andThen } }s.
I also want to hide away this complexity in a method, which would then be moved into a separate trait, such that a controller actor with a number of lists of worker actors can just be something like this:
class ControllerActor extends Actor with CalculatingTrait {
  // CalculatingTrait has the performCalculations method
  val listOne: List[WorkerActor]
  val listTwo: List[WorkerActor]

  def act {
    performCalculations(listOne)
    performCalculations(listTwo)
  }
}
So is there any way to stop the short-circuiting of the act method in the performCalculations method? Is there a better design approach I could be taking?
You can avoid react/receive entirely by using Akka actors. Here's what your implementation could look like:
import akka.actor._

class WorkerActor extends Actor {
  def receive = {
    case DoJob =>
      for (_ <- 1 to n) sender ! Result
      sender ! Done
  }
}

class ControllerActor(workers: List[ActorRef]) extends Actor {
  private[this] var countdown = workers.size

  override def preStart() {
    workers.foreach(_ ! DoJob)
  }

  def receive = {
    case Result =>
      // do something with this result
    case Done =>
      countdown -= 1
      if (countdown == 0) {
        // do a bunch of other stuff
        // It looks like your controllers die when the workers
        // are done, so I'll do the same.
        self ! PoisonPill
      }
  }
}
Here's how I might approach it (in a way that seems to be more comments and boilerplate than actual content):
class WorkerController(val workerCriteria: List[WorkerCriteria]) {
  // The actors that only _I_ interact with are probably no one else's business
  // Your call, though
  val workers = generateWorkers(workerCriteria)

  // No need for an `act` method--no need for this to even be an actor
  /* Will send `DoJob` to each actor, expecting a reply from each.
   * Could also use the `!!` operator (instead of `!?`) if you wanted
   * them to return futures (so WorkerController could continue doing other
   * things while the results compute). The futures could then be evaluated
   * with `results map (_())`, which will _then_ halt execution to wait for each
   * future that isn't already computed (if any).
   */
  val results = workers map (_ !? DoJob)

  // do a bunch of other stuff with your results

  def generateWorkers(criteria: List[WorkerCriteria]) = ??? // Create some workers!
}

class Worker extends Actor {
  def act() {
    loop {
      react {
        case DoJob =>
          // Will generate a result and send it back to the caller
          reply(generateResult)
      }
    }
  }

  def generateResult = ??? // Result?
}
EDIT: Have just been reading about Akka actors and spotted that they "guarantee message order on a per sender basis". So I updated my example such that, if the controller needed to later ask the receiver for the computed value and needed to be sure it was all complete, it could do so with a message order guarantee on only a per sender basis (the example is still scala actors, not akka).
It finally hit me, with a bit of help from @Destin's answer, that I could make it a lot simpler by separating out the part of the controller responsible for kicking off the workers from the part responsible for accepting and using the results. Single responsibility principle, I suppose... Here's what I did (separating the original controlling actor into a controlling class and a 'receiver' actor):
case class DoJob(receiver: Actor)
case object Result
case object JobComplete
case object Acknowledged
case object Done

class Worker extends Actor {
  def act {
    loop {
      react {
        case DoJob(receiver) =>
          receiver ! Result
          receiver ! Result
          receiver !? JobComplete match {
            case Acknowledged =>
              sender ! Done
          }
      }
    }
  }
}

class Receiver extends Actor {
  def act {
    loop {
      react {
        case Result => println("Got result!")
        case JobComplete => sender ! Acknowledged
      }
    }
  }
}

class Controller {
  val receiver = new Receiver
  val workers = List(new Worker, new Worker, new Worker)

  receiver.start()
  workers.foreach(_.start())

  workers.map(_ !! DoJob(receiver)).map(_())

  println("All the jobs have been done")
}