How do I call context.become outside of a Future from ask messages? - scala

I have a parent Akka actor named buildingCoordinator which creates children named elevator_X. For now I am creating only one elevator. The buildingCoordinator sends a sequence of messages and waits for responses in order to move an elevator. The sequence is: send ? RequestElevatorState -> receive ElevatorState -> send ? MoveRequest -> receive MoveRequestSuccess -> change the state. As you can see I am using the ask pattern. After the movement succeeds, the buildingCoordinator changes its state using context.become.
The problem I am running into is that the elevator is receiving MoveRequest(1,4) for the same floor twice, sometimes three times. I do remove the floor when I call context.become, but I remove it inside the last map. I think it is because I am using context.become inside a Future and I should use it outside, but I am having trouble implementing that.
case class BuildingCoordinator(actorName: String,
                               numberOfFloors: Int,
                               numberOfElevators: Int,
                               elevatorControlSystem: ElevatorControlSystem)
  extends Actor with ActorLogging {

  import context.dispatcher
  implicit val timeout = Timeout(4 seconds)

  val elevators = createElevators(numberOfElevators)

  override def receive: Receive = operational(Map[Int, Queue[Int]](), Map[Int, Queue[Int]]())

  def operational(stopsRequests: Map[Int, Queue[Int]], pickUpRequests: Map[Int, Queue[Int]]): Receive = {
    case msg @ MoveElevator(elevatorId) =>
      println(s"[BuildingCoordinator] received $msg")
      val elevatorActor: ActorSelection = context.actorSelection(s"/user/$actorName/elevator_$elevatorId")
      val newState = (elevatorActor ? RequestElevatorState(elevatorId))
        .mapTo[ElevatorState]
        .flatMap { state =>
          val nextStop = elevatorControlSystem.findNextStop(stopsRequests.get(elevatorId).get, state.currentFloor, state.direction)
          elevatorActor ? MoveRequest(elevatorId, nextStop)
        }
        .mapTo[MoveRequestSuccess]
        .flatMap(moveRequestSuccess => elevatorActor ? MakeMove(elevatorId, moveRequestSuccess.targetFloor))
        .mapTo[MakeMoveSuccess]
        .map { makeMoveSuccess =>
          println(s"[BuildingCoordinator] Elevator ${makeMoveSuccess.elevatorId} arrived at floor [${makeMoveSuccess.floor}]")
          // removeStopRequest
          val stopsRequestsElevator = stopsRequests.get(elevatorId).getOrElse(Queue[Int]())
          val newStopsRequestsElevator = stopsRequestsElevator.filterNot(_ == makeMoveSuccess.floor)
          val newStopsRequests = stopsRequests + (elevatorId -> newStopsRequestsElevator)

          val pickUpRequestsElevator = pickUpRequests.get(elevatorId).getOrElse(Queue[Int]())
          val newPickUpRequestsElevator = {
            if (pickUpRequestsElevator.contains(makeMoveSuccess.floor)) {
              pickUpRequestsElevator.filterNot(_ == makeMoveSuccess.floor)
            } else {
              pickUpRequestsElevator
            }
          }
          val newPickUpRequests = pickUpRequests + (elevatorId -> newPickUpRequestsElevator)

          // I THINK I SHOULD NOT CALL context.become HERE
          // context.become(operational(newStopsRequests, newPickUpRequests))

          val dropOffFloor = BuildingUtil.generateRandomFloor(numberOfFloors, makeMoveSuccess.floor, makeMoveSuccess.direction)
          context.self ! DropOffRequest(makeMoveSuccess.elevatorId, dropOffFloor)
          (newStopsRequests, newPickUpRequests)
        }

      // I MUST CALL context.become HERE, BUT I DONT KNOW HOW
      // context.become(operational(newState.flatMap(state => (state._1, state._2))))
  }
Another thing that might be nasty here is this big chain of map and flatMap. This was my way of implementing it, but I suspect there is a better way.
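For reference, the same chain can be written as a for-comprehension; this is a readability-only sketch using the same names as the code above (it does not change the context.become problem discussed in the answers below):

// Readability-only sketch: the same ask chain as a for-comprehension.
val result = for {
  state  <- (elevatorActor ? RequestElevatorState(elevatorId)).mapTo[ElevatorState]
  nextStop = elevatorControlSystem.findNextStop(stopsRequests.get(elevatorId).get, state.currentFloor, state.direction)
  moveOk <- (elevatorActor ? MoveRequest(elevatorId, nextStop)).mapTo[MoveRequestSuccess]
  moved  <- (elevatorActor ? MakeMove(elevatorId, moveOk.targetFloor)).mapTo[MakeMoveSuccess]
} yield moved // the same bookkeeping as in the final .map above would go here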

You can't, and should not, call context.become or otherwise change actor state outside the Receive method and outside the thread that invokes Receive (which is an Akka dispatcher thread), as in your example. E.g.:
def receive: Receive = {
  // This is a bug, because context is not and is not supposed to be thread safe.
  case message: Message => Future(context.become(anotherReceive))
}
What you should do is send a message to self after the async operation has finished, and change the receive state only then. If in the meantime you don't want to handle incoming messages, you can stash them. See for more details: https://doc.akka.io/docs/akka/current/typed/stash.html
High level example, technical details omitted:
import akka.pattern.pipe

case class OperationFinished(calculations: Map[Any, Any])

class AsyncActor extends Actor with Stash {
  import context.dispatcher

  def operation: Future[Map[Any, Any]] = ... // some implementation of heavy async operation

  def receiveStartAsync(calculations: Map[Any, Any]): Receive = {
    case StartAsyncOperation =>
      // Start async operation and inform yourself that it is finished
      operation.map(OperationFinished.apply) pipeTo self
      context.become(receiveWaitAsyncOperation)
  }

  def receiveWaitAsyncOperation: Receive = {
    case OperationFinished(calculations) =>
      unstashAll()
      context.become(receiveStartAsync(calculations))
    case _ => stash()
  }
}
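Applied to the question, a minimal sketch of the same idea (MoveFinished is a hypothetical message; the ask chain itself stays exactly as in the question, only the final become moves back into receive; the implicit dispatcher and timeout from the original actor are assumed to be in scope):

import akka.pattern.pipe

// Hypothetical message carrying the recomputed maps back to the actor itself
case class MoveFinished(stopsRequests: Map[Int, Queue[Int]],
                        pickUpRequests: Map[Int, Queue[Int]])

def operational(stopsRequests: Map[Int, Queue[Int]], pickUpRequests: Map[Int, Queue[Int]]): Receive = {
  case msg @ MoveElevator(elevatorId) =>
    val newState: Future[(Map[Int, Queue[Int]], Map[Int, Queue[Int]])] =
      ??? // the ask/flatMap/map chain from the question, unchanged, without the become
    newState
      .map { case (stops, pickUps) => MoveFinished(stops, pickUps) }
      .pipeTo(self)

  case MoveFinished(newStops, newPickUps) =>
    // back on the actor's dispatcher thread, so this is safe
    context.become(operational(newStops, newPickUps))
}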

I like your response, @Ivan Kurchenko.
But, according to: Akka Stash docs
When unstashing the buffered messages by calling unstashAll the messages will be processed sequentially in the order they were added and all are processed unless an exception is thrown. The actor is unresponsive to other new messages until unstashAll is completed. That is another reason for keeping the number of stashed messages low. Actors that hog the message processing thread for too long can result in starvation of other actors.
Meaning that under load, for example, the unstashAll operation can cause all other actors to starve.
According to the same doc:
That can be mitigated by using the StashBuffer.unstash with numberOfMessages parameter and then send a message to context.self before continuing unstashing more. That means that other new messages may arrive in-between and those must be stashed to keep the original order of messages. It becomes more complicated, so better keep the number of stashed messages low.
Bottom line: you should keep the stashed message count low. It might not be suitable for high-load operation.
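If that becomes a concern, the same batching idea can be sketched in classic Akka without Stash at all, by buffering messages explicitly and replaying them in small, self-messaged batches so the dispatcher can interleave other actors between rounds. All names here are hypothetical; this illustrates the mitigation described in the quoted docs, not the StashBuffer API itself:

import akka.actor.{Actor, ActorLogging}

// Hypothetical protocol for this sketch
case class Work(payload: String)
case class OperationFinished(result: Map[String, String])
case object DrainBatch

class BatchedDrainActor extends Actor with ActorLogging {

  private val batchSize = 10

  def receive: Receive = waiting(Vector.empty)

  // While the async operation is running, buffer incoming work explicitly
  // instead of stashing it, so we control how much is replayed at once.
  def waiting(buffered: Vector[Work]): Receive = {
    case w: Work =>
      context.become(waiting(buffered :+ w))
    case OperationFinished(_) =>
      self ! DrainBatch
      context.become(draining(buffered))
  }

  // Replay the buffer in small batches; work arriving in between is appended,
  // so the original order is preserved while other actors get scheduled
  // between batches.
  def draining(buffered: Vector[Work]): Receive = {
    case DrainBatch =>
      val (batch, rest) = buffered.splitAt(batchSize)
      batch.foreach(w => log.info(s"processing ${w.payload}"))
      if (rest.isEmpty) context.become(active)
      else {
        self ! DrainBatch
        context.become(draining(rest))
      }
    case w: Work =>
      context.become(draining(buffered :+ w))
  }

  def active: Receive = {
    case w: Work => log.info(s"processing ${w.payload}")
  }
}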

Related

integrating with a non thread safe service, while maintaining back pressure, in a concurrent environment using Futures, akka streams and akka actors

I am using a third party library to provide parsing services (user agent parsing in my case) which is not thread safe and has to operate on a single thread. I would like to write a thread-safe API that can be called by multiple threads to interact with it via the Futures API, as the library might introduce some potential blocking (IO). I would also like to provide back pressure when necessary and return a failed future when the parser doesn't keep up with the producers.
It could actually be a generic requirement/question: how to interact with any client/library which is not thread safe (user agent/geo location parsers, db clients like redis, log collectors like fluentd), with back pressure, in a concurrent environment.
I came up with the following formula:
encapsulate the parser within a dedicated Actor.
create an akka stream source queue that receives a ParseRequest containing the user agent and a Promise to complete, and use the ask pattern via mapAsync to interact with the parser actor.
create another actor to encapsulate the source queue.
Is this the way to go? Is there any other way to achieve this, maybe simpler? Maybe using a graph stage? Can it be done without the ask pattern and with less code involved?
The actor mentioned in point 3 is there because I'm not sure whether the source queue is thread safe or not. I wish it were simply stated in the docs, but it isn't; there are multiple versions over the web, some stating it's not and some stating it is.
Is the source queue, once materialized, thread safe for pushing elements from different threads?
(The code may not compile and is prone to potential failures; it is only intended to illustrate this question.)
class UserAgentRepo(dbFilePath: String)(implicit actorRefFactory: ActorRefFactory) {

  import akka.pattern.ask
  import akka.util.Timeout
  import scala.concurrent.duration._

  implicit val askTimeout = Timeout(5 seconds)

  // API to parser - delegates the request to the back pressure actor
  def parse(userAgent: String): Future[Option[UserAgentData]] = {
    val p = Promise[Option[UserAgentData]]
    parserBackPressureProvider ! UserAgentParseRequest(userAgent, p)
    p.future
  }

  // Actor to provide back pressure that delegates requests to parser actor
  private class ParserBackPressureProvider extends Actor {

    private val parser = context.actorOf(Props[UserAgentParserActor])

    val queue = Source.queue[UserAgentParseRequest](100, OverflowStrategy.dropNew)
      .mapAsync(1)(request => (parser ? request.userAgent).mapTo[Option[UserAgentData]].map(_ -> request.p))
      .to(Sink.foreach({
        case (result, promise) => promise.success(result)
      }))
      .run()

    override def receive: Receive = {
      case request: UserAgentParseRequest => queue.offer(request).map {
        case QueueOfferResult.Enqueued =>
        case _ => request.p.failure(new RuntimeException("parser busy"))
      }
    }
  }

  // Actor parser
  private class UserAgentParserActor extends Actor {

    private val up = new UserAgentParser(dbFilePath, true, 50000)

    override def receive: Receive = {
      case userAgent: String =>
        sender ! Try {
          up.parseUa(userAgent)
        }.toOption.map(UserAgentData(userAgent, _))
    }
  }

  private case class UserAgentParseRequest(userAgent: String, p: Promise[Option[UserAgentData]])

  private val parserBackPressureProvider = actorRefFactory.actorOf(Props[ParserBackPressureProvider])
}
Do you have to use actors for this?
It does not seem like you need all this complexity; Scala/Java has all the tools you need out of the box:
import java.util.concurrent.{LinkedBlockingQueue, RejectedExecutionException, ThreadPoolExecutor, TimeUnit}
import scala.concurrent.{ExecutionContext, Future}

class ParserFacade(parser: UserAgentParser, val capacity: Int = 100) {

  // single worker thread with a bounded work queue: the bounded queue is the back pressure
  private implicit val ec = ExecutionContext
    .fromExecutor(
      new ThreadPoolExecutor(
        1, 1, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue[Runnable](capacity)
      )
    )

  def parse(ua: String): Future[Option[UserAgentData]] = try {
    Future(Some(UserAgentData(ua, parser.parseUa(ua))))
      .recover { case _ => None }
  } catch {
    case _: RejectedExecutionException =>
      Future.failed(new RuntimeException("parser is busy"))
  }
}
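For completeness, a hedged usage sketch (UserAgentParser, UserAgentData and dbFilePath are the question's own names; the user agent string is just a placeholder):

import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.{Failure, Success}

// Usage sketch: when the bounded queue is full, parse fails fast instead of blocking the caller.
val facade = new ParserFacade(new UserAgentParser(dbFilePath, true, 50000))

facade.parse("Mozilla/5.0 ...").onComplete {
  case Success(Some(data)) => println(s"parsed: $data")
  case Success(None)       => println("could not parse this user agent")
  case Failure(_)          => println("parser is busy, push back on the producer")
}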

Am I safely mutating in my akka actors or is this not thread safe?

I am a little confused about whether I am safely mutating my mutable maps/queue inside of my actor.
Can someone tell me if this code is thread-safe and correct?
class SomeActor extends Actor {

  val userQ = mutable.Queue.empty[User]
  val tranQ = mutable.Map.empty[Int, Transaction]

  def receive = {
    case Blank1 =>
      if (userQ.isEmpty)
        userQ ++= getNewUsers()

    case Blank2 =>
      val companyProfile = for {
        company  <- api.getCompany() // Future[Company]
        location <- api.getLoc()     // Future[Location]
      } yield CompanyProfile(company, location)

      companyProfile.map { cp =>
        tranQ += cp.id -> cp.transaction // tranQ mutated here
      }
  }
}
Since I am mutating the tranQ with futures, is this safe?
It is my understanding that each actor message is handled in a serial fashion, so although maybe frowned upon, I can use mutable state like this.
I am just confused about whether using it inside of a Future callback, like tranQ, is safe or not.
No, your code is not safe.
While an actor processes one message at a time, you lose this guarantee as soon as Futures are involved. At that point, the code inside the Future is executed on a (potentially) different thread while the next message might already be handled by the actor.
A typical pattern to work around this issue is to send a message with the result of the Future using the pipeTo pattern, like so:
import akka.pattern.pipe

def receive: Receive = {
  case MyMsg =>
    myFutureOperation()
      .map(res => MyMsg2(res))
      .pipeTo(self)
  case MyMsg2(res) =>
    // do mutation now
}
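Applied to the question's Blank2 case, a minimal sketch (ProfileReady is a hypothetical message; api, CompanyProfile and tranQ are the question's own names):

import akka.pattern.pipe

// Hypothetical wrapper message for the piped result
case class ProfileReady(cp: CompanyProfile)

def receive: Receive = {
  case Blank2 =>
    import context.dispatcher
    val companyProfile = for {
      company  <- api.getCompany() // Future[Company]
      location <- api.getLoc()     // Future[Location]
    } yield CompanyProfile(company, location)
    // send the result back to ourselves instead of mutating in the callback
    companyProfile.map(ProfileReady(_)).pipeTo(self)

  case ProfileReady(cp) =>
    tranQ += cp.id -> cp.transaction // now mutated on the actor's own thread
}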
More information about using Futures can be found in akka's documentation: http://doc.akka.io/docs/akka/2.5/scala/futures.html

Akka: The order of responses

My demo app is simple. Here is an actor:
class CounterActor extends Actor {

  @volatile private[this] var counter = 0

  def receive: PartialFunction[Any, Unit] = {
    case Count(id) ⇒ sender ! self ? Increment(id)
    case Increment(id) ⇒ sender ! {
      counter += 1
      println(s"req_id=$id, counter=$counter")
      counter
    }
  }
}
The main app:
sealed trait ActorMessage
case class Count(id: Int = 0) extends ActorMessage
case class Increment(id: Int) extends ActorMessage

object CountingApp extends App {

  // Get incremented counter
  val future0 = counter ? Count(1)
  val future1 = counter ? Count(2)
  val future2 = counter ? Count(3)
  val future3 = counter ? Count(4)
  val future4 = counter ? Count(5)

  // Handle response
  handleResponse(future0)
  handleResponse(future1)
  handleResponse(future2)
  handleResponse(future3)
  handleResponse(future4)

  // Bye!
  exit()
}
My handler:
def handleResponse(future: Future[Any]): Unit = {
  future.onComplete {
    case Success(f) => f.asInstanceOf[Future[Any]].onComplete {
      case x => x match {
        case Success(n) => println(s" -> $n")
        case Failure(t) => println(s" -> ${t.getMessage}")
      }
    }
    case Failure(t) => println(t.getMessage)
  }
}
If I run the app I'll see the next output:
req_id=1, counter=1
req_id=2, counter=2
req_id=3, counter=3
req_id=4, counter=4
req_id=5, counter=5
-> 4
-> 1
-> 5
-> 3
-> 2
The order of handled responses is random. Is this normal behaviour? If not, how can I make it ordered?
PS
Do I need volatile var in the actor?
PS2
Also, I'm looking for some more convenient logic for handleResponse, because matching here is very ambiguous...
normal behavior?
Yes, this is absolutely normal behavior.
Your Actor is receiving the Count increments in the order you sent them but the Futures are being completed via submission to an underlying thread pool. It is that indeterminate ordering of Future-thread binding that is resulting in the out of order println executions.
how can I make it ordered?
If you want ordered execution of Futures then that is synonymous with synchronous programming, i.e. no concurrency at all.
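To make that concrete, here is a hedged sketch of what "ordered" would have to mean here: each ask is only sent after the previous response has been handled (the implicit timeout and execution context from the original app are assumed to be in scope):

import scala.concurrent.Future

// Sequential chaining: ordered output, but no concurrency left.
(1 to 5).foldLeft(Future.successful(())) { (previous, id) =>
  previous.flatMap { _ =>
    (counter ? Count(id)).map(n => println(s" -> $n"))
  }
}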
do I need volatile?
The state of an Actor is only accessible within an Actor itself. That is why users of the Actor never get an actual Actor object, they only get an ActorRef, e.g. val actorRef = actorSystem actorOf Props[Actor] . This is partially to ensure that users of Actors never have the ability to change an Actor's state except through messaging. From the docs:
The good news is that Akka actors conceptually each have their own light-weight thread, which is completely shielded from the rest of the system. This means that instead of having to synchronize access using locks you can just write your actor code without worrying about concurrency at all.
Therefore, you don't need volatile.
more convenient logic
For more convenient logic I would recommend Agents, which are a kind of typed Actor with a simpler message framework. From the docs:
import scala.concurrent.ExecutionContext.Implicits.global
import akka.agent.Agent
val agent = Agent(5)
val result = agent()
val result = agent.get
agent send 7
agent send (_ + 1)
Reads are synchronous but instantaneous. Writes are asynchronous. This means any time you do a read you don't have to worry about Futures because the internal value returns immediately. But definitely read the docs because there are more complicated tricks you can play with the queueing logic.
The real trouble in your approach is not the asynchronous nature but the overcomplicated logic.
And despite the nice answer from Ramon, which I +1'd, yes, there is a way to ensure order in some parts of Akka. As we can read in the docs, there is a message-ordering guarantee per sender-receiver pair.
It means that for each one-way channel between two actors there is a guarantee that messages will be delivered in the order they were sent.
But there is no such guarantee for the completion order of the Future tasks you are using to handle the answers. And sending the Future from ask as a message to the original sender is rather strange.
Things you can do:
redefine your Increment as
case class Increment(id: Int, requester: ActorRef) extends ActorMessage
so the handler knows the original requester
modify CounterActor's receive as
def receive: Receive = {
  case Count(id) ⇒ self ! Increment(id, sender)
  case Increment(id, snd) ⇒ snd ! {
    counter += 1
    println(s"req_id=$id, counter=$counter")
    counter
  }
}
simplify your handleResponse to
def handleResponse(future: Future[Any]): Unit = {
  future.onComplete {
    case Success(n: Int) => println(s" -> $n")
    case Failure(t) => println(t.getMessage)
  }
}
Now you can probably see that messages are received back in the same order.
I said probably because handling still occurs in Future.onComplete, so we need another actor to ensure the order.
Let's define an additional message
case object StartCounting
And the actor itself:
class SenderActor extends Actor {

  val counter = context.actorOf(Props[CounterActor])

  def receive: Actor.Receive = {
    case n: Int => println(s" -> $n")
    case StartCounting =>
      counter ! Count(1)
      counter ! Count(2)
      counter ! Count(3)
      counter ! Count(4)
      counter ! Count(5)
  }
}
In your main you can now just write
val sender = system.actorOf(Props[SenderActor])
sender ! StartCounting
And throw away that handleResponse method.
Now you definitely should see your message handling in the right order.
We've implemented the whole logic without a single ask, and that's good.
So the magic rule is: leave handling responses to actors, and get only final results from them via ask.
Note there is also the forward method, but this creates a proxy actor, so message ordering would be broken again.

Can actors read messages under a certain condition?

I have this situation:
ActorA sends ActorB start/stop messages every 30-40 seconds
ActorA sends ActorB strings to print (always)
ActorB must print the strings it receives, but only if ActorA has just sent a start message
Now I wonder if I can do the following things:
Can ActorB read messages only under a certain condition (if a boolean is set to true) without losing the messages it receives while that boolean is set to false?
Can ActorB read a start/stop message from ActorA before the other string messages? I'd like to have this situation: ActorA sends a start message to ActorB, ActorB starts printing the strings it received before the start message (and those it is still receiving), and then stops as soon as it receives a stop message.
I don't know if I explained it well.
EDIT: Thank you, the answers are great, but I still have some doubts.
Does become maintain the order of the messages? I mean, if I have "Start-M1-M2-Stop-M3-M4-M5-Start-M6-M7-Stop", will the printing order be "M1-M2" and then "M3-M4-M5-M6-M7", or could M6 be read before M3, M4 and M5 (if M6 is received just after the become)?
Can I give a higher priority to start/stop messages? If ActorB receives "M1-M2-M3" and then receives a stop message while it is printing "M1", I want ActorB to save M2 and M3 again.
You can solve your problem exactly with the Stash trait and the become/unbecome functionality of Akka. The idea is the following:
When you receive a Stop message you switch to a behaviour where you stash all messages which are not Start. When you receive a Start message, you switch to a behaviour where you print all received messages, and additionally you unstash all messages which have arrived in the meantime.
import akka.actor.{Actor, ActorRef, ActorSystem, Props, Stash}
import scala.concurrent.duration._

case object Start
case object Stop
case object TriggerStateChange
case object SendMessage

class ActorB extends Actor with Stash {

  override def receive: Receive = {
    case Start =>
      context.become(printingBehavior, false)
      unstashAll()
    case x => stash()
  }

  def printingBehavior: Receive = {
    case msg: String => println(msg)
    case Stop => context.unbecome()
  }
}

class ActorA(val actorB: ActorRef) extends Actor {

  var counter = 0
  var started = false

  override def preStart: Unit = {
    import context.dispatcher
    this.context.system.scheduler.schedule(0 seconds, 5 seconds, self, TriggerStateChange)
    this.context.system.scheduler.schedule(0 seconds, 1 seconds, self, SendMessage)
  }

  override def receive: Actor.Receive = {
    case SendMessage =>
      actorB ! "Message: " + counter
      counter += 1

    case TriggerStateChange =>
      actorB ! (if (started) {
        started = false
        Stop
      } else {
        started = true
        Start
      })
  }
}

object Akka {
  def main(args: Array[String]) = {
    val system = ActorSystem.create("TestActorSystem")

    val actorB = system.actorOf(Props(classOf[ActorB]), "ActorB")
    val actorA = system.actorOf(Props(classOf[ActorA], actorB), "ActorA")

    system.awaitTermination()
  }
}
Check the Become/Unbecome functionality. It lets you change the behavior of the actor.
If I understood correctly you want your ActorB to have two different states. In the first state it should cache the messages it receives. In the second state, it should print all the cached messages and start printing all the new ones.
Something like this:
import scala.collection.immutable.Queue

case object Start
case object Stop
case class Message(msg: String)

class ActorB extends Actor {

  var cache = Queue.empty[String]

  def initial: Receive = {
    case Message(msg) => cache = cache.enqueue(msg)
    case Start =>
      for (m <- cache) self ! Message(m)
      cache = Queue.empty[String] // don't replay the same messages on the next Start
      context.become(printing)
  }

  def printing: Receive = {
    case Message(msg) => println(msg)
    case Stop => context.become(initial) // or stop the actor
  }

  def receive = initial
}
Have Actor B alternate between two states (two different behaviours). In the initial 'pending' state, it waits for a 'start' message, while stashing any other messages.
On receipt of a 'start' message, unstash all the stored messages and become 'active', waiting on a 'stop' message and writing out the other messages received (which will include the unstashed ones). On receiveing a 'stop' message, revert to the 'pending' behaviour.
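A minimal sketch of that pending/active pair, essentially the same shape as the Stash example earlier in this thread and reusing the Start/Stop messages defined there:

import akka.actor.{Actor, Stash}

class ActorB extends Actor with Stash {

  // 'pending': wait for Start, buffer everything else
  def pending: Receive = {
    case Start =>
      unstashAll() // replay the buffered strings in arrival order
      context.become(active)
    case _ => stash()
  }

  // 'active': write out strings until Stop arrives, then go back to pending
  def active: Receive = {
    case Stop        => context.become(pending)
    case msg: String => println(msg)
  }

  def receive: Receive = pending
}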
Some of my thoughts
Yes, if the boolean flag is obtained from some system resource like a db or a config file, but I don't think it should depend on any external message, given that the actor receives messages from multiple other actors. If ActorB is only used by ActorA, the two can be merged into one.
Similar to 1: how do you handle the messages from multiple actors? If there is only one ActorA, the two actors can be merged. If there are multiple, the flag can be set in a database; ActorA changes the flag in the db to "Start" or "Stop", and ActorB prints or not based on the flag.
An actor should be doing its work independently of other actors' state. The start and stop are actually state of ActorA rather than ActorB.
You already have a lot of good answers, but somehow I feel compelled to offer something more brief, as what you need is not necessarily a state machine:
class ActorB extends Actor {

  def receive: Receive = caching(Nil)

  def caching(cached: List[String]): Receive = {
    case msg: String =>
      context.become(caching(msg :: cached))
    case Start =>
      cached.reverse.foreach(println)
      context.become(caching(Nil))
  }
}

Multiple Future calls in an Actor's receive method

I'm trying to make two external calls (to a Redis database) inside an Actor's receive method. Both calls return a Future and I need the result of the first Future inside the second.
I'm wrapping both calls inside a Redis transaction to avoid anyone else from modifying the value in the database while I'm reading it.
The internal state of the actor is updated based on the value of the second Future.
Here is what my current code looks like, which I know is incorrect because I'm updating the internal state of the actor inside a Future.onComplete callback.
I cannot use the pipeTo pattern because both Futures have to be in a transaction.
If I use Await for the first Future then my receive method will block.
Any idea how to fix this?
My second question is related to how I'm using Futures. Is this usage of Futures below correct? Is there a better way of dealing with multiple Futures in general? Imagine if there were 3 or 4 Futures, each depending on the previous one.
import akka.actor.{Props, ActorLogging, Actor}
import akka.util.ByteString
import redis.RedisClient
import scala.concurrent.Future
import scala.util.{Failure, Success}

object GetSubscriptionsDemo extends App {
  val akkaSystem = akka.actor.ActorSystem("redis-demo")
  val actor = akkaSystem.actorOf(Props(new SimpleRedisActor("localhost", "dummyzset")), name = "simpleactor")
  actor ! UpdateState
}

case object UpdateState

class SimpleRedisActor(ip: String, key: String) extends Actor with ActorLogging {

  //mutable state that is updated on a periodic basis
  var mutableState: Set[String] = Set.empty

  //required by Future
  implicit val ctx = context dispatcher

  var rClient = RedisClient(ip)(context.system)

  def receive = {
    case UpdateState => {
      log.info("Start of UpdateState ...")

      val tran = rClient.transaction()

      val zf: Future[Long] = tran.zcard(key) //FIRST Future
      zf.onComplete {
        case Success(z) => {
          //SECOND Future, depends on result of FIRST Future
          val rf: Future[Seq[ByteString]] = tran.zrange(key, z - 1, z)
          rf.onComplete {
            case Success(x) => {
              //convert ByteString to UTF8 String
              val v = x.map(_.utf8String)
              log.info(s"Updating state with $v ")
              //update actor's internal state inside callback for a Future
              //IS THIS CORRECT ?
              mutableState ++ v
            }
            case Failure(e) => {
              log.warning("ZRANGE future failed ...", e)
            }
          }
        }
        case Failure(f) => log.warning("ZCARD future failed ...", f)
      }

      tran.exec()
    }
  }
}
The code compiles but when I run it, it gets stuck.
2014-08-07 INFO [redis-demo-akka.actor.default-dispatcher-3] a.e.s.Slf4jLogger - Slf4jLogger started
2014-08-07 04:38:35.106UTC INFO [redis-demo-akka.actor.default-dispatcher-3] e.c.s.e.a.g.SimpleRedisActor - Start of UpdateState ...
2014-08-07 04:38:35.134UTC INFO [redis-demo-akka.actor.default-dispatcher-8] r.a.RedisClientActor - Connect to localhost/127.0.0.1:6379
2014-08-07 04:38:35.172UTC INFO [redis-demo-akka.actor.default-dispatcher-4] r.a.RedisClientActor - Connected to localhost/127.0.0.1:6379
UPDATE 1
In order to use the pipeTo pattern I would need access to the tran and the FIRST Future (zf) in the actor I'm piping the Future to, because the SECOND Future depends on the value (z) of the FIRST.
//SECOND Future, depends on result of FIRST Future
val rf: Future[Seq[ByteString]] = tran.zrange(key, z - 1, z)
Without knowing too much about the redis client you are using, I can offer an alternate solution that should be cleaner and won't have issues with closing over mutable state. The idea is to use a master/worker kind of situation, where the master (the SimpleRedisActor) receives the request to do the work and then delegates off to a worker that performs the work and responds with the state to update. That solution would look something like this:
object SimpleRedisActor {
  case object UpdateState

  def props(ip: String, key: String) = Props(classOf[SimpleRedisActor], ip, key)
}

class SimpleRedisActor(ip: String, key: String) extends Actor with ActorLogging {
  import SimpleRedisActor._
  import SimpleRedisWorker._

  //mutable state that is updated on a periodic basis
  var mutableState: Set[String] = Set.empty

  val rClient = RedisClient(ip)(context.system)

  def receive = {
    case UpdateState =>
      log.info("Start of UpdateState ...")
      val worker = context.actorOf(SimpleRedisWorker.props)
      worker ! DoWork(rClient, key)

    case WorkResult(result) =>
      mutableState ++= result

    case FailedWorkResult(ex) =>
      log.error("Worker got failed work result", ex)
  }
}

object SimpleRedisWorker {
  case class DoWork(client: RedisClient, key: String)
  case class WorkResult(result: Seq[String])
  case class FailedWorkResult(ex: Throwable)

  def props = Props[SimpleRedisWorker]
}

class SimpleRedisWorker extends Actor {
  import SimpleRedisWorker._
  import akka.pattern.pipe
  import context._

  def receive = {
    case DoWork(client, key) =>
      val trans = client.transaction()
      trans.zcard(key) pipeTo self
      become(waitingForZCard(sender, trans, key) orElse failureHandler(sender, trans))
  }

  def waitingForZCard(orig: ActorRef, trans: RedisTransaction, key: String): Receive = {
    case l: Long =>
      trans.zrange(key, l - 1, l) pipeTo self
      become(waitingForZRange(orig, trans) orElse failureHandler(orig, trans))
  }

  def waitingForZRange(orig: ActorRef, trans: RedisTransaction): Receive = {
    case s: Seq[ByteString] =>
      orig ! WorkResult(s.map(_.utf8String))
      finishAndStop(trans)
  }

  def failureHandler(orig: ActorRef, trans: RedisTransaction): Receive = {
    case Status.Failure(ex) =>
      orig ! FailedWorkResult(ex)
      finishAndStop(trans)
  }

  def finishAndStop(trans: RedisTransaction) {
    trans.exec()
    context stop self
  }
}
The worker starts the transaction, then makes calls into redis and ultimately completes the transaction before stopping itself. When it calls redis, it gets the future and pipes it back to itself for the continuation of the processing, changing the receive method in between as a mechanism for progressing through its states. In a model like this (which I suppose is somewhat similar to the error kernel pattern), the master owns and protects the state, delegating the "risky" work off to a child who can figure out what the change to the state should be, but the changing is still owned by the master.
Now again, I have no idea about the capabilities of the redis client you are using and if it is safe enough to even do this kind of stuff, but that's not really the point. The point was to show a safer structure for doing something like this that involves futures and state that needs to be changed safely.
Using the callback to mutate internal state is not a good idea; an excerpt from the Akka docs:
When using future callbacks, such as onComplete, onSuccess, and onFailure, inside actors you need to carefully avoid closing over the containing actor’s reference, i.e. do not call methods or access mutable state on the enclosing actor from within the callback.
Why do you worry about pipeTo and transactions?
Not sure how redis transactions work, but I would guess that the transaction does not encompass the onComplete callback on the second future anyway.
I would put the state into a separate actor which you pipe the future to. This way you have a separate mailbox, and the ordering there will be the same as the ordering of the messages that came in to modify the state. Also, if any read requests come in, they will be put in the correct order too.
Edit to respond to the edited question: OK, so you don't want to pipe the first future; that makes sense and should be no problem, as the first callback is harmless. The callback of the second future is the problem, as it manipulates the state. But this future can be piped without needing access to the transaction.
So basically my suggestion is:
val firstFuture = tran.zcard(key)
firstFuture.onComplete {
  case Success(z) =>
    val secondFuture = tran.zrange(key, z - 1, z)
    secondFuture pipeTo stateActor
  case Failure(e) =>
    log.warning("ZCARD future failed ...", e)
}
With stateActor containing the mutable state.
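For illustration, a hedged sketch of what such a stateActor could look like (the name and message shapes are assumptions, not from the original code); it is the sole owner of the mutable set, so the piped result is applied on its own dispatcher thread:

import akka.actor.{Actor, ActorLogging, Status}
import akka.util.ByteString

class StateActor extends Actor with ActorLogging {

  private var state: Set[String] = Set.empty

  def receive: Receive = {
    case result: Seq[_] =>
      // the piped ZRANGE result arrives here as an ordinary message
      state ++= result.collect { case bs: ByteString => bs.utf8String }
      log.info(s"state is now $state")
    case Status.Failure(e) =>
      // pipeTo delivers a failed future as Status.Failure
      log.warning("ZRANGE future failed ...", e)
  }
}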