akka unstashAll not working - scala

I am implementing an actor with multiple states and using Stash so that I don't lose any messages. My states are initializing (get something from the DB), running (handling requests) and updating (updating my state).
My problem is that I lose messages when I try to unstashAll() inside the future's resolution.
def initializing: Receive = {
  case Initialize =>
    log.info("initializing")
    (for {
      items1 <- db.getItems("1")
      items2 <- db.getItems("2")
    } yield items1 ::: items2) map { items =>
      unstashAll()
      context.become(running(items))
    }
  case r =>
    log.debug(s"actor received message: $r while initializing and stashed it for further processing")
    stash()
}
I fixed it by changing my implementation to this:
def initializing: Receive = {
  case Initialize =>
    log.info("initializing")
    (for {
      items1 <- db.getItems("1")
      items2 <- db.getItems("2")
    } yield items1 ::: items2) pipeTo self
    context.become({
      case items: List[Item] =>
        unstashAll()
        context.become(running(items))
      case r =>
        log.debug(s"actor received message: $r while initializing and stashed it for further processing")
        stash()
    })
  case r =>
    log.debug(s"actor received message: $r while initializing and stashed it for further processing")
    stash()
}
Can anyone explain why the first version didn't work?

I think the unstashAll part works fine. The problem is that you were running it - together with your context.become - as part of a future callback.
This means that the code inside your map block escapes the predictability of your actor's sequential processing. In other words, this could happen:
you call unstashAll
your messages are put back in the actor's mailbox
the actor picks up your messages one by one. The context still hasn't changed, so they are stashed again
your context finally becomes running (but it's too late)
The solution is - as you found out - the pipeTo pattern, which essentially sends a Future's result to the actor as a message. This makes it all sequential and predictable.
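If you keep the pipeTo version, one further refinement (just a sketch - the Initialized wrapper message is a hypothetical name, and it assumes the same import context.dispatcher you already need for the for-comprehension) is to pipe a dedicated message to yourself instead of the raw list, since matching on case items: List[Item] is subject to type erasure and avoids the nested become:
import akka.pattern.pipe
import context.dispatcher

case class Initialized(items: List[Item]) // hypothetical wrapper message

def initializing: Receive = {
  case Initialize =>
    log.info("initializing")
    (for {
      items1 <- db.getItems("1")
      items2 <- db.getItems("2")
    } yield Initialized(items1 ::: items2)) pipeTo self
  case Initialized(items) =>
    unstashAll()
    context.become(running(items))
  case r =>
    log.debug(s"actor received message: $r while initializing and stashed it for further processing")
    stash()
}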

Related

How do I call context become outside of a Future from Ask messages?

I have a parent akka actor named buildingCoordinator which creates children named elevator_X. For now I am creating only one elevator. The buildingCoordinator sends a sequence of messages and waits for responses in order to move an elevator. The sequence is this: sends ? RequestElevatorState -> receives ElevatorState -> sends ? MoveRequest -> receives MoveRequestSuccess -> changes the state. As you can see I am using the ask pattern. After the movement succeeds, the buildingCoordinator changes its state using context.become.
The problem I am running into is that the elevator is receiving MoveRequest(1,4) for the same floor twice, sometimes three times. I do remove the floor when I call context.become. However, I remove it inside the last map. I think it is because I am using context.become inside a future and I should use it outside, but I am having trouble implementing it.
case class BuildingCoordinator(actorName: String,
                               numberOfFloors: Int,
                               numberOfElevators: Int,
                               elevatorControlSystem: ElevatorControlSystem)
  extends Actor with ActorLogging {

  import context.dispatcher
  implicit val timeout = Timeout(4 seconds)

  val elevators = createElevators(numberOfElevators)

  override def receive: Receive = operational(Map[Int, Queue[Int]](), Map[Int, Queue[Int]]())

  def operational(stopsRequests: Map[Int, Queue[Int]], pickUpRequests: Map[Int, Queue[Int]]): Receive = {
    case msg @ MoveElevator(elevatorId) =>
      println(s"[BuildingCoordinator] received $msg")
      val elevatorActor: ActorSelection = context.actorSelection(s"/user/$actorName/elevator_$elevatorId")
      val newState = (elevatorActor ? RequestElevatorState(elevatorId))
        .mapTo[ElevatorState]
        .flatMap { state =>
          val nextStop = elevatorControlSystem.findNextStop(stopsRequests.get(elevatorId).get, state.currentFloor, state.direction)
          elevatorActor ? MoveRequest(elevatorId, nextStop)
        }
        .mapTo[MoveRequestSuccess]
        .flatMap(moveRequestSuccess => elevatorActor ? MakeMove(elevatorId, moveRequestSuccess.targetFloor))
        .mapTo[MakeMoveSuccess]
        .map { makeMoveSuccess =>
          println(s"[BuildingCoordinator] Elevator ${makeMoveSuccess.elevatorId} arrived at floor [${makeMoveSuccess.floor}]")
          // removeStopRequest
          val stopsRequestsElevator = stopsRequests.get(elevatorId).getOrElse(Queue[Int]())
          val newStopsRequestsElevator = stopsRequestsElevator.filterNot(_ == makeMoveSuccess.floor)
          val newStopsRequests = stopsRequests + (elevatorId -> newStopsRequestsElevator)
          val pickUpRequestsElevator = pickUpRequests.get(elevatorId).getOrElse(Queue[Int]())
          val newPickUpRequestsElevator = {
            if (pickUpRequestsElevator.contains(makeMoveSuccess.floor)) {
              pickUpRequestsElevator.filterNot(_ == makeMoveSuccess.floor)
            } else {
              pickUpRequestsElevator
            }
          }
          val newPickUpRequests = pickUpRequests + (elevatorId -> newPickUpRequestsElevator)
          // I THINK I SHOULD NOT CALL context.become HERE
          // context.become(operational(newStopsRequests, newPickUpRequests))
          val dropOffFloor = BuildingUtil.generateRandomFloor(numberOfFloors, makeMoveSuccess.floor, makeMoveSuccess.direction)
          context.self ! DropOffRequest(makeMoveSuccess.elevatorId, dropOffFloor)
          (newStopsRequests, newPickUpRequests)
        }
      // I MUST CALL context.become HERE, BUT I DONT KNOW HOW
      // context.become(operational(newState.flatMap(state => (state._1, state._2))))
  }
}
Another thing that might be nasty here is this big chain of map and flatMap. This was my way of implementing it, but I think there might be a better way.
You can't, and should not, call context.become or change actor state in any other way outside the Receive method and outside the thread that invokes the Receive method (which is an Akka dispatcher thread), as in your example. E.g.:
def receive: Receive = {
  // This is a bug, because context is not and is not supposed to be thread safe.
  case message: Message => Future(context.become(anotherReceive))
}
What you should do instead is send a message to self after the async operation has finished, and change the receive state then. If in the meantime you don't want to handle incoming messages, you can stash them. See for more details: https://doc.akka.io/docs/akka/current/typed/stash.html
High level example, technical details omitted:
case class OperationFinished(calculations: Map[Any, Any])

class AsyncActor extends Actor with Stash {

  def operation: Future[Map[Any, Any]] = ... // some implementation of heavy async operation

  def receiveStartAsync(calculations: Map[Any, Any]): Receive = {
    case StartAsyncOperation =>
      // Start async operation and inform yourself that it is finished
      operation.map(OperationFinished.apply) pipeTo self
      context.become(receiveWaitAsyncOperation)
  }

  def receiveWaitAsyncOperation: Receive = {
    case OperationFinished(calculations) =>
      unstashAll()
      context.become(receiveStartAsync(calculations))
    case _ => stash()
  }
}
I like your response, @Ivan Kurchenko.
But, according to the Akka Stash docs:
When unstashing the buffered messages by calling unstashAll the messages will be processed sequentially in the order they were added and all are processed unless an exception is thrown. The actor is unresponsive to other new messages until unstashAll is completed. That is another reason for keeping the number of stashed messages low. Actors that hog the message processing thread for too long can result in starvation of other actors.
Meaning that under load, for example, the unstashAll operation can starve all other actors.
According to the same doc:
That can be mitigated by using the StashBuffer.unstash with numberOfMessages parameter and then send a message to context.self before continuing unstashing more. That means that other new messages may arrive in-between and those must be stashed to keep the original order of messages. It becomes more complicated, so better keep the number of stashed messages low.
Bottom line: you should keep the stashed message count low. This approach might not be suitable under heavy load.

Sending actorRefWithAck inside stream

I'm using the answer from this thread because I need to treat the first element specially. The problem is, I need to send this data to another Actor or persist it locally (which is not possible).
So, my stream looks like this:
val flow: Flow[Message, Message, (Future[Done], Promise[Option[Message]])] = Flow.fromSinkAndSourceMat(
  Flow[Message].mapAsync[Trade](1) {
    case TextMessage.Strict(text) =>
      Unmarshal(text).to[Trade]
    case streamed: TextMessage.Streamed =>
      streamed.textStream.runFold("")(_ ++ _).flatMap(Unmarshal(_).to[Trade])
  }.groupBy(pairs.size, _.s).prefixAndTail(1).flatMapConcat {
    case (head, tail) =>
      // sending first element here
      val result = Source(head).to(Sink.actorRefWithAck(
        ref = actor,
        onInitMessage = Init,
        ackMessage = Ack,
        onCompleteMessage = "done"
      )).run()
      // some kind of operation on the result
      Source(head).concat(tail)
  }.mergeSubstreams.toMat(sink)(Keep.right),
  Source.maybe[Message])(Keep.both)
Is this good practice? Will it have unintended consequences? Unfortunately, I cannot call persist inside the stream, so I want to send this data to an external system.
Your current approach doesn't use result in any way, so a simpler alternative would be to fire and forget the first Message to the actor:
groupBy(pairs.size, _.s).prefixAndTail(1).flatMapConcat {
  case (head, tail) =>
    // sending first element here
    actor ! head.head
    Source(head).concat(tail)
}
The actor would then not have to worry about handling Init and sending Ack messages and could be solely concerned with persisting Message instances.
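For instance, the receiving side could then be reduced to something like this minimal sketch (the element type at that point in the stream is Trade from your mapAsync stage; PersistingActor and its persistTrade helper are hypothetical names for whatever writes to your external system):
import akka.actor.Actor

// Receives the first element of each substream via plain tell;
// no Init/Ack handshake is needed.
class PersistingActor extends Actor {
  def receive: Receive = {
    case trade: Trade => persistTrade(trade)
  }

  // stand-in for the call to the external system
  private def persistTrade(trade: Trade): Unit = ???
}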

Am I safely mutating in my akka actors or is this not thread safe?

I am a little confused if I am safely mutating my mutable maps/queue inside of my actor.
Can someone tell me if this code is thread-safe and correct?
class SomeActor extends Actor {
  val userQ = mutable.Queue.empty[User]
  val tranQ = mutable.Map.empty[Int, Transaction]

  def receive = {
    case Blank1 =>
      if (userQ.isEmpty)
        userQ ++= getNewUsers()

    case Blank2 =>
      val companyProfile = for {
        company <- api.getCompany() // Future[Company]
        location <- api.getLoc()    // Future[Location]
      } yield CompanyProfile(company, location)

      companyProfile.map { cp =>
        tranQ += cp.id -> cp.transaction // tranQ mutated here
      }
  }
}
Since I am mutating the tranQ with futures, is this safe?
It is my understanding that each actor message is handled in a serial fashion, so, although it may be frowned upon, I can use mutable state like this.
I am just confused about whether using it inside a future callback, as with tranQ, is safe or not.
No, your code is not safe.
While an actor processes one message at a time, you lose this guarantee as soon as Futures are involved. At that point, the code inside the Future is executed on a (potentially) different thread, while the actor might already be handling the next message.
A typical pattern to work around this issue is to send a message with the result of the Future using the pipeTo pattern, like so:
import akka.pattern.pipe
def receive: Receive = {
  case MyMsg =>
    myFutureOperation()
      .map(res => MyMsg2(res))
      .pipeTo(self)
  case MyMsg2(res) =>
    // do mutation now
}
More information about using Futures can be found in akka's documentation: http://doc.akka.io/docs/akka/2.5/scala/futures.html
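Applied to your example, a sketch (reusing the User, Transaction, CompanyProfile and api names from your snippet) could look like this, so that tranQ is only ever touched on the actor's message-processing thread:
import akka.pattern.pipe
import scala.collection.mutable

class SomeActor extends Actor {
  import context.dispatcher

  val userQ = mutable.Queue.empty[User]
  val tranQ = mutable.Map.empty[Int, Transaction]

  def receive: Receive = {
    case Blank1 =>
      if (userQ.isEmpty)
        userQ ++= getNewUsers()

    case Blank2 =>
      val companyProfile = for {
        company <- api.getCompany()
        location <- api.getLoc()
      } yield CompanyProfile(company, location)
      companyProfile.pipeTo(self) // deliver the result to ourselves as a message

    case cp: CompanyProfile =>
      tranQ += cp.id -> cp.transaction // mutation now happens inside receive
  }
}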

What is the best way to get an actor from the context in Akka?

I tried two ways:
Use Await.result
Await.result(context.actorSelection("akka://Post/user/John").resolveOne(3 seconds), 10 seconds)
Get future value manually
context.actorSelection("akka://Post/user/John").resolveOne(3 seconds).value.get.get
But I have the feeling I'm doing something wrong.
Code of my actor:
def receive: Receive = {
  case "msg" ⇒ {
    ...
    val reader = Await.result(context.actorSelection("akka://Post/user/John").resolveOne(3 seconds), 10 seconds)
    reader ! data
  }
}
You should never block inside an Actor's receive loop. At best it wastes resources; at worst it could lead to a deadlock where the resolution cannot happen because all threads in the thread pool are already in use.
Your best options are:
Send the message directly to the actorSelection
You can just call tell directly on the actor selection. Of course, you don't know if the resolution succeeds. And if this happens a lot, it's certainly less efficient than having the actorRef hang around.
context.actorSelection("akka://Post/user/John") ! data
Resolve the selection asynchronously
Given that you have a Future on your hands, you could either send your message in the onComplete or use pipeTo to send the resolved ref back to yourself:
send on resolution
def receive = {
  case data: Data =>
    context.actorSelection("akka://Post/user/John").resolveOne(3 seconds)
      .onComplete {
        case Success(reader) => reader ! data
        case Failure(e) => // handle failure
      }
}
pipe to self
case class Deferred(data: Data, ref: ActorRef)

var johnRef: Option[ActorRef] = None

def receive = {
  case data: Data => johnRef match {
    case Some(reader) => reader ! data
    case None =>
      context.actorSelection("akka://Post/user/John")
        .resolveOne(3 seconds)
        .map(reader => Deferred(data, reader))
        .pipeTo(self)
  }
  case Deferred(data, ref) =>
    johnRef = Some(ref)
    ref ! data
}

How to handle exception with ask pattern and supervision

How should I handle an exception thrown by the DbActor here? I'm not sure how to handle it; should I pipe the Failure case?
class RestActor extends Actor with ActorLogging {
  import context.dispatcher

  val dbActor = context.actorOf(Props[DbActor])
  implicit val timeout = Timeout(10 seconds)

  override val supervisorStrategy: SupervisorStrategy = {
    OneForOneStrategy(maxNrOfRetries = 10, withinTimeRange = 10 seconds) {
      case x: Exception => ???
    }
  }

  def receive = {
    case GetRequest(reqCtx, id) => {
      // perform db ask
      ask(dbActor, ReadCommand(reqCtx, id)).mapTo[SomeObject] onComplete {
        case Success(obj) => { /* some stuff */ }
        case Failure(err) => err match {
          case x: Exception => ???
        }
      }
    }
  }
}
Would be glad to get your thoughts, thanks in advance!
There are a couple of questions I can see here based on the questions in your code sample:
What types of things can I do when I override the default supervisor behavior in the definition of how to handle exceptions?
When using ask, what types of things can I do when I get a Failure result on the Future that I am waiting on?
Let's start with the first question (usually a good idea). When you override the default supervisor strategy, you gain the ability to change how certain types of unhandled exceptions in the child actor are handled, in terms of what to do with that failed child actor. The key word in that previous sentence is unhandled. For actors that are doing request/response, you may actually want to handle (catch) specific exceptions and return certain response types instead (or fail the upstream future, more on that later) as opposed to letting them go unhandled. When an unhandled exception happens, you basically lose the ability to respond to the sender with a description of the issue, and the sender will probably get a TimeoutException instead, as their Future will never be completed.
Once you have figured out what you handle explicitly, you can consider all the remaining exceptions when defining your custom supervisor strategy. Inside this block here:
OneForOneStrategy(maxNrOfRetries = 10, withinTimeRange = 10 seconds) {
  case x: Exception => ???
}
You get a chance to map an exception type to a failure Directive, which defines how the failure will be handled from a supervision standpoint. The options are:
Stop - Completely stop the child actor and do not send any more messages to it
Resume - Resume the failed child without restarting it, thus keeping its current internal state
Restart - Similar to resume, but in this case, the old instance is thrown away and a new instance is constructed and internal state is reset (preStart)
Escalate - Escalate up the chain to the parent of the supervisor
So let's say that for a SQLException you want to resume, and for all others you want to restart; then your code would look like this:
OneForOneStrategy(maxNrOfRetries = 10, withinTimeRange = 10 seconds) {
  case x: SQLException => Resume
  case other => Restart
}
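As an aside, "handling explicitly" in the child could look something like this minimal sketch (the DbActor body and the readFromDb helper are hypothetical; the point is that replying with Status.Failure fails the asker's Future immediately with the cause, instead of letting the ask time out):
import java.sql.SQLException
import akka.actor.{Actor, Status}

class DbActor extends Actor {
  def receive = {
    case ReadCommand(reqCtx, id) =>
      try {
        sender() ! readFromDb(id) // normal reply completes the ask Future
      } catch {
        case e: SQLException =>
          sender() ! Status.Failure(e) // fails the ask Future with the cause
      }
  }

  // stand-in for the actual database access
  private def readFromDb(id: Any): SomeObject = ???
}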
Now for the second question, which pertains to what to do when the Future itself returns a Failure response. In this case, I guess it depends on what was supposed to happen as a result of that Future. If the rest actor itself was responsible for completing the http request (let's say that reqCtx has a complete(statusCode: Int, message: String) function on it), then you could do something like this:
ask(dbActor, ReadCommand(reqCtx, id)).mapTo[SomeObject] onComplete {
  case Success(obj) => reqCtx.complete(200, "All good!")
  case Failure(err: TimeoutException) => reqCtx.complete(500, "Request timed out")
  case Failure(ex) => reqCtx.complete(500, ex.getMessage)
}
Now if another actor upstream was responsible for completing the http request and you needed to respond to that actor, you could do something like this:
val origin = sender
ask(dbActor, ReadCommand(reqCtx, id)).mapTo[SomeObject] onComplete {
  case Success(obj) => origin ! someResponseObject
  case Failure(ex) => origin ! Status.Failure(ex)
}
This approach assumes that in the success block you first want to massage the result object before responding. If you don't want to do that and you want to defer the result handling to the sender then you could just do:
val origin = sender
val fut = ask(dbActor, ReadCommand(reqCtx, id))
fut pipeTo origin
For simpler systems one may want to catch and forward all of the errors. For that I made this small function to wrap the receive method, without bothering with supervision:
import akka.actor.Actor.Receive
import akka.actor.ActorContext

/**
  * Meant for wrapping the receive method with try/catch.
  * A failed try will result in a reply to sender with the exception.
  * @example
  * def receive: Receive = honestly {
  *   case msg => sender ! riskyCalculation(msg)
  * }
  * ...
  * (honestActor ? "some message") onComplete {
  *   case Success(error: Throwable) => ...process error
  *   case Success(result) => ...process result
  *   case Failure(e) => ...ask failed (e.g. timeout)
  * }
  * @param receive
  * @return Actor.Receive
  *
  * @author Bijou Trouvaille
  */
def honestly(receive: => Receive)(implicit context: ActorContext): Receive = { case msg =>
  try receive(msg) catch { case error: Throwable => context.sender ! error }
}
You can then place it in a package object and import it, à la akka.pattern.pipe and such. Obviously, this won't deal with exceptions thrown by asynchronous code.