Child actors, futures and exceptions - scala

I'm currently working on an application with a signup process. This signup process will, at some point, communicate with external systems in an asynchronous manner. To keep this question concise, I'm showing you two important actors I've written:
SignupActor.scala
class SignupActor extends PersistentFSM[SignupActor.State, Data, DomainEvt] {
private val apiActor = context.actorOf(ExternalAPIActor.props(new HttpClient))
// At a certain point, a CreateUser(data) message is sent to the apiActor
}
ExternalAPIActor.scala
class ExternalAPIActor(apiClient: HttpClient) extends Actor {
override def preRestart(reason: Throwable, message: Option[Any]) = {
message.foreach(context.system.scheduler.scheduleOnce(3 seconds, self, _))
super.preRestart(reason, message)
}
def receive: Receive = {
case CreateUser(data) =>
Await.result(
apiClient.post(data)
.map(_ => UserCreatedInAPI())
.pipeTo(context.parent),
Timeout(5 seconds).duration
)
}
}
This setup seems to work as expected. When there is an issue with the external API (such as a timeout or network problems), the Future returned by HttpClient::post fails and, thanks to Await.result, results in an exception. This in turn, thanks to the SupervisorStrategy of the SignupActor parent actor, restarts the ExternalAPIActor, whose preRestart hook re-sends the last message to itself with a small delay to avoid deadlock.
I see a couple of issues with this setup:
Within the receive method of ExternalAPIActor, blocking occurs. As far as I understand, blocking within Actors is considered an anti-pattern.
The delay used to re-send the message is static. If the API is unavailable for longer periods of time, we will keep on sending HTTP requests every 3 seconds. I'd like some kind of exponential backoff mechanism here instead.
To address the latter, I've tried the following in the SignupActor:
SignupActor.scala
val supervisor = BackoffSupervisor.props(
Backoff.onFailure(
ExternalAPIActor.props(new HttpClient),
childName = "external-api",
minBackoff = 3 seconds,
maxBackoff = 30 seconds,
randomFactor = 0.2
)
)
private val apiActor = context.actorOf(supervisor)
Unfortunately, this doesn't seem to do anything: the preRestart method of ExternalAPIActor is never called. When I replace Backoff.onFailure with Backoff.onStop, preRestart is called, but without any exponential backoff.
Given the above, my questions are as follows:
Is using Await.result the recommended (or the only?) way to make sure that exceptions thrown in a Future returned from services called within actors are caught and handled accordingly? An especially important part of my use case is that messages must not be dropped but retried when something goes wrong. Or is there some other (idiomatic) way to handle exceptions thrown in asynchronous contexts within actors?
How would one use the BackoffSupervisor as intended in this case? Again: it is very important that the message responsible for the exception is not dropped, but retried up to N times (as determined by the maxNrOfRetries argument of the SupervisorStrategy).

Is using Await.result the recommended (the only?) way to make sure
exceptions thrown in a Future returned from services called within
actors are caught and handled accordingly?
No. Generally that's not how you want to handle failures in Akka. A better alternative is to pipe the failure to your own actor, avoiding the need to use Await.result at all:
def receive: Receive = {
  case CreateUser(data) =>
    apiClient.post(data)
      .map(_ => UserCreatedInAPI())
      .pipeTo(self)
  case msg: UserCreatedInAPI =>
    context.parent ! msg
  case akka.actor.Status.Failure(e) =>
    // Invoke retry here
}
This means no restart is required to handle failures; they all become part of the normal message flow of your actor.
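For the retry itself, a minimal sketch of what the Failure branch could do (reusing HttpClient, CreateUser and UserCreatedInAPI from the question; the inFlight field and the 3-second delay are just illustrative, not a prescribed implementation):
import akka.actor.{Actor, Status}
import akka.pattern.pipe
import scala.concurrent.duration._

class ExternalAPIActor(apiClient: HttpClient) extends Actor {
  import context.dispatcher

  // Remember the message currently being processed so it can be retried on failure.
  private var inFlight: Option[CreateUser] = None

  def receive: Receive = {
    case msg @ CreateUser(data) =>
      inFlight = Some(msg)
      apiClient.post(data)
        .map(_ => UserCreatedInAPI())
        .pipeTo(self)

    case done: UserCreatedInAPI =>
      inFlight = None
      context.parent ! done

    case Status.Failure(_) =>
      // No restart needed: just re-send the original message to ourselves after a delay.
      inFlight.foreach(context.system.scheduler.scheduleOnce(3.seconds, self, _))
  }
}
Note the single inFlight slot assumes only one request is outstanding at a time; the stash-based approach the asker arrives at below removes that assumption.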
An additional way to handle this can be to create a "supervised future". Taken from this blog post:
object SupervisedPipe {

  case class SupervisedFailure(ex: Throwable)

  class SupervisedPipeableFuture[T](future: Future[T])(implicit executionContext: ExecutionContext) {
    // implicit failure recipient goes to self when used inside an actor
    def supervisedPipeTo(successRecipient: ActorRef)(implicit failureRecipient: ActorRef): Unit =
      future.andThen {
        case Success(result) => successRecipient ! result
        case Failure(ex)     => failureRecipient ! SupervisedFailure(ex)
      }
  }

  implicit def supervisedPipeTo[T](future: Future[T])(implicit executionContext: ExecutionContext): SupervisedPipeableFuture[T] =
    new SupervisedPipeableFuture[T](future)

  /* `orElse` with the actor receive logic */
  val handleSupervisedFailure: Actor.Receive = {
    // just throw the exception and make the actor logic handle it
    case SupervisedFailure(ex) => throw ex
  }

  def supervised(receive: Actor.Receive): Actor.Receive =
    handleSupervisedFailure orElse receive
}
This way, you only pipe to self when you get a Failure, and otherwise send the result straight to the actor it was meant for, avoiding the need for the extra success-forwarding case I added to the receive method above. All you need to do is replace the framework-provided pipeTo with supervisedPipeTo.
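A rough usage sketch inside the original actor (assuming the SupervisedPipe object above is in scope; the implicit self of the actor fills in the failure recipient, as the comment in the object notes):
import SupervisedPipe._
import akka.actor.Actor

class ExternalAPIActor(apiClient: HttpClient) extends Actor {
  import context.dispatcher

  def receive: Receive = supervised {
    case CreateUser(data) =>
      apiClient.post(data)
        .map(_ => UserCreatedInAPI())
        .supervisedPipeTo(context.parent)   // success goes to the parent, failures come back to self
  }
}
When a SupervisedFailure arrives, handleSupervisedFailure rethrows the cause, so the parent's SupervisorStrategy still decides what happens to this actor.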

Alright, I've done some more thinking and tinkering and I've come up with the following.
ExternalAPIActor.scala
class ExternalAPIActor(apiClient: HttpClient) extends Actor with Stash {
  import ExternalAPIActor._

  def receive: Receive = {
    case msg @ CreateUser(data) =>
      context.become(waitingForExternalServiceReceive(msg))
      apiClient.post(data)
        .map(_ => UserCreatedInAPI())
        .pipeTo(self)
  }

  def waitingForExternalServiceReceive(event: InputEvent): Receive = LoggingReceive {
    case akka.actor.Status.Failure(_) =>
      unstashAll()
      context.unbecome()
      context.system.scheduler.scheduleOnce(3 seconds, self, event)
    case msg: OutputEvent =>
      unstashAll()
      context.unbecome()
      context.parent ! msg
    case _ => stash()
  }
}

object ExternalAPIActor {
  sealed trait InputEvent
  sealed trait OutputEvent
  final case class CreateUser(data: Map[String, Any]) extends InputEvent
  final case class UserCreatedInAPI() extends OutputEvent
}
I've used this technique to prevent the original message from being lost when something goes wrong with the external service we're calling. While a request to the external service is in flight, I switch context, wait for either a response or a failure, and switch back afterwards. Thanks to the Stash trait, I can make sure other requests to external services aren't lost either.
Since I have multiple actors in my application calling external services, I abstracted waitingForExternalServiceReceive into its own trait:
WaitingForExternalService.scala
trait WaitingForExternalServiceReceive[-tInput, +tOutput] extends Stash {
  def waitingForExternalServiceReceive(event: tInput)(implicit ec: ExecutionContext): Receive = LoggingReceive {
    case akka.actor.Status.Failure(_) =>
      unstashAll()
      context.unbecome()
      context.system.scheduler.scheduleOnce(3 seconds, self, event)
    // note: matching a bare type parameter is unchecked because of erasure;
    // a ClassTag (or a concrete marker trait) is needed for this case to really discriminate
    case msg: tOutput =>
      unstashAll()
      context.unbecome()
      context.parent ! msg
    case _ => stash()
  }
}
Now, the ExternalAPIActor can extend this trait:
ExternalAPIActor.scala
class ExternalAPIActor(apiClient: HttpClient) extends Actor with WaitingForExternalServiceReceive[InputEvent, OutputEvent] {
  import ExternalAPIActor._

  def receive: Receive = {
    case msg @ CreateUser(data) =>
      context.become(waitingForExternalServiceReceive(msg))
      apiClient.post(data)
        .map(_ => UserCreatedInAPI())
        .pipeTo(self)
  }
}

object ExternalAPIActor {
  sealed trait InputEvent
  sealed trait OutputEvent
  final case class CreateUser(data: Map[String, Any]) extends InputEvent
  final case class UserCreatedInAPI() extends OutputEvent
}
Now the actor won't get restarted in case of failures or errors, and the message isn't lost. What's more, the entire flow of the actor is now non-blocking.
This setup is (most probably) far from perfect, but it seems to work exactly as I need it to.
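If the fixed 3-second re-send ever needs to become the exponential backoff I originally wanted, one possible variation (the retryDelay var is an illustrative addition; the 3-second and 30-second bounds simply mirror the minBackoff/maxBackoff values from my BackoffSupervisor attempt) would be:
import akka.actor.{Actor, Stash}
import akka.event.LoggingReceive
import akka.pattern.pipe
import scala.concurrent.duration._

class ExternalAPIActor(apiClient: HttpClient) extends Actor with Stash {
  import ExternalAPIActor._
  import context.dispatcher

  // grows on every consecutive failure, resets on success
  private var retryDelay: FiniteDuration = 3.seconds

  def receive: Receive = {
    case msg @ CreateUser(data) =>
      context.become(waitingForExternalServiceReceive(msg))
      apiClient.post(data)
        .map(_ => UserCreatedInAPI())
        .pipeTo(self)
  }

  def waitingForExternalServiceReceive(event: InputEvent): Receive = LoggingReceive {
    case akka.actor.Status.Failure(_) =>
      unstashAll()
      context.unbecome()
      context.system.scheduler.scheduleOnce(retryDelay, self, event)
      retryDelay = (retryDelay * 2).min(30.seconds)
    case msg: OutputEvent =>
      retryDelay = 3.seconds
      unstashAll()
      context.unbecome()
      context.parent ! msg
    case _ => stash()
  }
}
The same idea carries over to the generic trait version by moving retryDelay into that trait.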

Related

Akka Supervisor Strategy - Correct Use Case

I have been using Akka Supervisor Strategy to handle business logic exceptions.
Reading one of the most famous Scala blog series, The Neophyte's Guide to Scala, I found it suggesting a different approach to what I have always been doing.
Example:
Let's say I have an HttpActor that should contact an external resource and in case it's down, I will throw an Exception, for now a ResourceUnavailableException.
In case my Supervisor catches that, it will call Restart on my HttpActor, and in my HttpActor's preRestart method I will call scheduleOnce to retry the message.
The actor:
class HttpActor extends Actor with ActorLogging {
  implicit val system = context.system

  override def preRestart(reason: Throwable, message: Option[Any]): Unit = {
    log.info(s"Restarting Actor due: ${reason.getCause}")
    message foreach { msg =>
      context.system.scheduler.scheduleOnce(10.seconds, self, msg)
    }
  }

  def receive = LoggingReceive {
    case g: GetRequest =>
      doRequest(http.doGet(g), g.httpManager.url, sender())
  }
}
A Supervisor:
class HttpSupervisor extends Actor with ActorLogging with RouterHelper {
override val supervisorStrategy =
OneForOneStrategy(maxNrOfRetries = 5) {
case _: ResourceUnavailableException => Restart
case _: Exception => Escalate
}
var router = makeRouter[HttpActor](5)
def receive = LoggingReceive {
case g: GetRequest =>
router.route(g, sender())
case Terminated(a) =>
router = router.removeRoutee(a)
val r = context.actorOf(Props[HttpActor])
context watch r
router = router.addRoutee(r)
}
}
What's the point here?
In case my doRequest method throws a ResourceUnavailableException, the supervisor will catch it and restart the actor, forcing it to re-send the message after some time, according to the scheduler. The advantage I see is that I get the number of retries for free, plus a nice way to handle the exception itself.
Now looking at the blog, he shows a different approach in case you need to retry, just sending messages like this:
def receive = {
case EspressoRequest =>
val receipt = register ? Transaction(Espresso)
receipt.map((EspressoCup(Filled), _)).recover {
case _: AskTimeoutException => ComebackLater
} pipeTo(sender)
case ClosingTime => context.system.shutdown()
}
Here, in case the Future fails with an AskTimeoutException, he recovers the result to a ComebackLater object, which he handles like this:
case ComebackLater =>
log.info("grumble, grumble")
context.system.scheduler.scheduleOnce(300.millis) {
coffeeSource ! EspressoRequest
}
For me this is pretty much what you can do with the supervisor strategy, but done manually, with no built-in retry-count logic.
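(To make the comparison concrete, here is a rough sketch of tracking that retry count by hand; DoWork, WorkFailed, WorkDone and maxRetries are made-up names, not from either snippet above.)
import akka.actor.{Actor, ActorLogging}
import scala.concurrent.duration._

// Hypothetical messages, purely for illustration
case object DoWork
case object WorkFailed
final case class WorkDone(result: String)

class ManualRetryActor extends Actor with ActorLogging {
  import context.dispatcher

  private val maxRetries = 5   // what OneForOneStrategy(maxNrOfRetries = 5) tracks for you
  private var attempts = 0

  def receive: Receive = {
    case DoWork =>
      attempts += 1
      // the real work would go here; on failure something sends WorkFailed back to us
      self ! WorkFailed

    case WorkFailed if attempts < maxRetries =>
      log.info("attempt {} failed, retrying", attempts)
      context.system.scheduler.scheduleOnce(300.millis, self, DoWork)

    case WorkFailed =>
      log.warning("giving up after {} attempts", attempts)

    case WorkDone(result) =>
      attempts = 0
      log.info("done: {}", result)
  }
}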
So what is the best approach here and why? Is my concept of using akka supervisor strategy completely wrong?
You can use BackoffSupervisor:
Provided as a built-in pattern the akka.pattern.BackoffSupervisor implements the so-called exponential backoff supervision strategy, starting a child actor again when it fails, each time with a growing time delay between restarts.
val supervisor = BackoffSupervisor.props(
Backoff.onFailure(
childProps,
childName = "myEcho",
minBackoff = 3.seconds,
maxBackoff = 30.seconds,
randomFactor = 0.2 // adds 20% "noise" to vary the intervals slightly
).withAutoReset(10.seconds) // the backoff is reset if the child does not fail within this duration
.withSupervisorStrategy(
OneForOneStrategy() {
case _: MyException => SupervisorStrategy.Restart
case _ => SupervisorStrategy.Escalate
}))
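If you prefer the child itself to decide when things are healthy again, BackoffOptions also has a manual-reset mode (withManualReset), in which case the child signals success to its parent; a rough sketch (EchoActor and DoEcho are made-up names):
import akka.actor.Actor
import akka.pattern.BackoffSupervisor

case object DoEcho // hypothetical message

class EchoActor extends Actor {
  def receive: Receive = {
    case DoEcho =>
      // ... do the work that may throw MyException ...
      // on success, reset the supervisor's backoff counter
      context.parent ! BackoffSupervisor.Reset
  }
}
Messages sent to the BackoffSupervisor's own ActorRef are forwarded to the current child incarnation, so the rest of the application keeps talking to the supervisor.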

Kill actor if it times out in Spray app

In my Spray app, I delegate requests to actors. I want to be able to kill an actor that takes too long. I'm not sure whether I should be using Spray timeouts, the Akka ask pattern, or something else.
I have implemented:
def processRouteRequest(system: ActorSystem) = {
respondWithMediaType(`text/json`) {
params { p => ctx =>
val builder = newBuilderActor
builder ! Request(p) // the builder calls `ctx.complete`
builder ! PoisonPill
system.scheduler.scheduleOnce(routeRequestMaxLife, builder, Kill)
}
}
}
The idea being that the actor lives only for the duration of a single request and if it doesn't complete within routeRequestMaxLife it gets forcibly killed. This approach seems over-the-top (and spews a lot of info about undelivered messages). I'm not even certain it works correctly.
It seems like what I'm trying to achieve should be a common use-case. How should I approach it?
I would tend toward using the ask pattern and handling the requests as follows:
class RequestHandler extends Actor {
def receive = {
case "quick" =>
sender() ! "Quick Reply"
self ! PoisonPill
case "slow" =>
val replyTo = sender()
context.system.scheduler.scheduleOnce(5 seconds, self, replyTo)
case a:ActorRef =>
a ! "Slow Reply"
self ! PoisonPill
}
}
class ExampleService extends HttpService with Actor {
implicit def actorRefFactory = context
import context.dispatcher
def handleRequest(mode: String):Future[String] = {
implicit val timeout = Timeout(1 second)
val requestHandler = context.actorOf(Props[RequestHandler])
(requestHandler ? mode).mapTo[String]
}
val route: Route =
path("endpoint" / Segment) { str =>
get {
onComplete(handleRequest(str)) {
case Success(str) => complete(str)
case Failure(ex) => complete(ex)
}
}
}
def receive = runRoute(route)
}
This way the actor takes care of stopping itself, and the semantics of Ask give you the information about whether or not the request timed out.
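If the slow worker should also be cleaned up as soon as the ask times out (rather than when it finally replies to itself), one possible variation of handleRequest (not part of the original answer; it would live in the same ExampleService actor) is:
import akka.actor.{PoisonPill, Props}
import akka.pattern.{ask, AskTimeoutException}
import akka.util.Timeout
import scala.concurrent.Future
import scala.concurrent.duration._

def handleRequest(mode: String): Future[String] = {
  import context.dispatcher
  implicit val timeout = Timeout(1.second)
  val requestHandler = context.actorOf(Props[RequestHandler])
  (requestHandler ? mode).mapTo[String].recoverWith {
    case ex: AskTimeoutException =>
      requestHandler ! PoisonPill // stop the handler that took too long
      Future.failed(ex)           // keep the route failing with the timeout
  }
}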

Execution context for futures in Actors

I have an Actor, and on some message I'm running a method which returns a Future.
def receive: Receive = {
case SimpleMessge() =>
val futData:Future[Int] = ...
futData.map { data =>
...
}
}
Is it possible to pass the actual context to wait for this data? Or is Await the best I can do if I need this data while handling SimpleMessage?
If you really need to wait for the future to complete before processing the next message, you can try something like this:
object SimpleMessageHandler{
case class SimpleMessage()
case class FinishSimpleMessage(i:Int)
}
class SimpleMessageHandler extends Actor with Stash{
import SimpleMessageHandler._
import context._
import akka.pattern.pipe
def receive = waitingForMessage
def waitingForMessage: Receive = {
case SimpleMessage() =>
val futData:Future[Int] = ...
futData.map(FinishSimpleMessage(_)) pipeTo self
context.become(waitingToFinish(sender))
}
def waitingToFinish(originalSender:ActorRef):Receive = {
case SimpleMessage() => stash()
case FinishSimpleMessage(i) =>
//Do whatever you need to do to finish here
...
unstashAll()
context.become(waitingForMessage)
case Status.Failure(ex) =>
//log error here
unstashAll()
context.become(waitingForMessage)
}
}
In this approach, we process a SimpleMessage and then switch handling logic to stash all subsequent SimpleMessages received until we get a result from the future. When we get a result, failure or not, we unstash all of the other SimpleMessages we have received while waiting for the future and go on our merry way.
This actor just toggles back and forth between two states and that allows you to only fully process one SimpleMessage at a time without needing to block on the Future.
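As for the execution-context part of the question: the `import context._` in the example above is what supplies the ExecutionContext for `futData.map(...)` and `pipeTo`; importing just the dispatcher makes that explicit. A tiny sketch (the class name and message are made up):
import akka.actor.Actor
import scala.concurrent.Future

class SimpleFutureActor extends Actor {
  import context.dispatcher // the actor's dispatcher serves as the ExecutionContext

  def receive: Receive = {
    case "compute" =>
      val futData: Future[Int] = Future(21 * 2) // stand-in for the real asynchronous call
      futData.foreach(data => println(s"got $data"))
  }
}
Just don't mutate actor state from inside those callbacks; send a message back to self (or pipeTo self, as above) instead.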

Akka matching Failures, and recovery

I have an actor that has multiple spray HttpRequests in flight; the requests are paginated, and the actor makes sure it writes the results into a database in sequence (the sequence is important for resuming crawlers). I explain this because I don't want to explore other concurrency patterns at the moment. The actor needs to recover from timeouts without restarting.
In my actor I have the following:
case f: Failure => {
  system.log.error("failure")
  system.log.error(s"$f")
  system.shutdown()
}
case f: AskTimeoutException => {
  system.log.error("failure")
  system.log.error(s"$f")
  system.shutdown()
}
case msg @ _ => {
  system.log.error("Unexpected message in harvest")
  system.log.error(s"${msg}")
  system.shutdown()
}
but I can't match correctly:
[ERROR] [11/23/2013 14:58:10.694] [Crawler-akka.actor.default-dispatcher-3] [ActorSystem(Crawler)] Unexpected message in harvest
[ERROR] [11/23/2013 14:58:10.694] [Crawler-akka.actor.default-dispatcher-3] [ActorSystem(Crawler)] Failure(akka.pattern.AskTimeoutException: Timed out)
My dispatches look as follows:
abstract class CrawlerActor extends Actor {
  private implicit val timeout: Timeout = 20.seconds
  import context._

  def dispatchRequest(node: CNode) {
    val reqFut = (System.requester ? CrawlerRequest(node, Get(node.url))).map(r => CrawlerResponse(node, r.asInstanceOf[HttpResponse]))
    reqFut pipeTo self
  }
}

class CrawlerRequester extends Actor {
  import context._

  val throttler = context.actorOf(Props(classOf[TimerBasedThrottler], System.Config.request_rate), "throttler")
  throttler ! SetTarget(Some(IO(Http).actorRef))

  def receive: Receive = {
    case CrawlerRequest(type_, request) => {
      throttler forward request
    }
  }
}
Once I find the correct way of matching, is there any way I can get my hands on the CrawlerRequest that the timeout occurred with? It contains some state I need in order to recover.
This situation occurs if you use pipeTo to respond to a message that was sent with tell.
For example:
in actorA: actorB ! message
in actorB: message => doStuff pipeTo sender
in actorA: receives not 'scala.util.Failure', but 'akka.actor.Status.Failure'
The additional logic in pipeTo transforms a Try's Failure into Akka's actor failure message (akka.actor.Status.Failure). This works fine when you use the ask pattern, because the temporary ask actor handles akka.actor.Status.Failure for you, but it does not work as transparently with tell.
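A small self-contained sketch of the difference (Worker and TellCaller are made-up names, purely for illustration):
import akka.actor.{Actor, ActorLogging, ActorRef, Status}
import akka.pattern.pipe
import scala.concurrent.Future

class Worker extends Actor {
  import context.dispatcher
  def receive: Receive = {
    case "work" =>
      // pipeTo turns the failed Future into akka.actor.Status.Failure for the recipient
      Future.failed[String](new RuntimeException("boom")) pipeTo sender()
  }
}

class TellCaller(worker: ActorRef) extends Actor with ActorLogging {
  def receive: Receive = {
    case "start" => worker ! "work"
    case Status.Failure(cause) => // this is what actually arrives, not scala.util.Failure
      log.error(cause, "work failed")
  }
}
With ask, the temporary actor behind the returned Future unwraps that same Status.Failure into a failed Future, which is why the failure shows up in recover/onFailure instead of in receive.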
Hope this short answer helps :)
Good luck!
You need to type out the full path of the Failure case class (or import it, I guess).
case f: akka.actor.Status.Failure => {
  system.log.error("failure")
  system.log.error(s"${f.cause}")
  system.shutdown()
}
That just leaves getting to the request associated with the timeout. It seems a map and pipe with a custom failure handler is needed at the point of request dispatch. Looking into it now.
The following trampolines the timeout into the actor.
case class CrawlerRequestTimeout(request: CrawlerRequest)
abstract class CrawlerActor extends Actor {
private implicit val timeout: Timeout = 20.seconds
import context._
def dispatchRequest(node: CNode) {
val req = CrawlerRequest(node,Get(node.url))
val reqFut = (System.requester ? req).map(r=> CrawlerResponse(node,r.asInstanceOf[HttpResponse]))
reqFut onFailure {
case te: akka.pattern.AskTimeoutException => self ! CrawlerRequestTimeout(req)
}
reqFut pipeTo self
}
}
with a match of:
case timeout : CrawlerRequestTimeout => {
println("boom")
system.shutdown()
}
I need to find a way of suppressing the exception, though; it's still firing. Perhaps suppression isn't really a concern; verifying.
No, suppression is a concern, or the exception trickles down to the msg @ _ case; I need to put in a case class to absorb the redundant failure message.
OK, so getting rid of the pipeTo gets rid of the exception entering the client actor. It's also a lot easier to read :D
abstract class CrawlerActor extends Actor {
private implicit val timeout: Timeout = 20.seconds
import context._
def dispatchRequest(node: CNode) {
val req = CrawlerRequest(node,Get(node.url))
val reqFut = (System.requester ? req)
reqFut onFailure {
case te: akka.pattern.AskTimeoutException => self ! CrawlerRequestTimeout(req)
}
reqFut onSuccess {
case r: HttpResponse => self ! CrawlerResponse(node,r)
}
}
}
If I understand correctly, you currently don't succeed in matching the AskTimeoutException.
If so, you should match case Failure(_: AskTimeoutException) => ... instead of case f: AskTimeoutException => ....

Ask an actor and let him respond when he reaches a particular state in Akka 2

I'm quite new to Akka so my question may seem simple:
I have an actor called workerA that uses FSM and can thus be in either of two states, Finished and Computing:
sealed trait State
case object Finished extends State
case object Computing extends State
sealed trait Data
case object Uninitialized extends Data
case class Todo(target: ActorRef, queue: immutable.Seq[Any]) extends Data
When workerA receives GetResponse it should answer if and only if it is in state Finished.
What is the proper way of doing this? I know we should avoid blocking in this paradigm, but here it is only the top-level actor that is concerned.
Thanks
I'm not necessarily sure you even need FSM here. FSM is a really good tool for when you have many states and many possible (and possibly complicated) state transitions between those states. In your case, if I understand correctly, you basically have two states: gathering data and finished. It also seems that there is only a single state transition, going from gathering -> finished. If I have this all correct, then I'm going to suggest that you simply use become to solve your problem.
I have some code below to show a trivial example of what I'm describing. The basic idea is that the main actor farms some work off to some workers and then waits for the results. If anyone asks for the results while the work is being done, the actor stashes that request until the work is done. When done, the actor will reply back to anyone that has asked for the results. The code is as follows:
case object GetResults
case class Results(ints:List[Int])
case object DoWork
class MainActor extends Actor with Stash{
import context._
override def preStart = {
val a = actorOf(Props[WorkerA], "worker-a")
val b = actorOf(Props[WorkerB], "worker-b")
a ! DoWork
b ! DoWork
}
def receive = gathering(Nil, 2)
def gathering(ints:List[Int], count:Int):Receive = {
case GetResults => stash()
case Results(i) =>
val results = i ::: ints
val newCount = count - 1
if (newCount == 0){
unstashAll()
become(finished(results))
child("worker-a") foreach (stop(_))
child("worker-b") foreach (stop(_))
}
else
become(gathering(results, newCount))
}
def finished(results:List[Int]):Receive = {
case GetResults => sender ! results
}
}
class WorkerA extends Actor{
def receive = {
case DoWork =>
//Only sleeping to simulate work. Not a good idea in real code
Thread sleep 3000
val ints = for(i <- 2 until 100 by 2) yield i
sender ! Results(ints.toList)
}
}
class WorkerB extends Actor{
def receive = {
case DoWork =>
//Only sleeping to simulate work. Not a good idea in real code
Thread sleep 2000
val ints = for(i <- 1 until 100 by 2) yield i
sender ! Results(ints.toList)
}
}
Then you could test it as follows:
implicit val timeout: Timeout = Timeout(5 seconds) // the ask (?) below needs an implicit Timeout
import system.dispatcher // ExecutionContext for onComplete

val mainActor = system.actorOf(Props[MainActor])
val fut = mainActor ? GetResults
fut onComplete (println(_))
You can pattern match on FSM states:
// insert pattern matching stuff instead of ...
class MyActor extends Actor with FSM[State, Message] {
startWith(Finished, WaitMessage(null))
when(Finished) {
case Event(Todo(... =>
// work
goto(Computing) using Todo(...)
case Event(GetResponse(... =>
// reply: sender ! msg // or similar
}
/* the rest is optional. You can use onTransition below to send yourself a message to report status of the job: */
when(Busy) {
case Event(Finished(... =>
// reply to someone: sender ! msg // or similar
goto(Finished)
}
onTransition {
case Finished -> Computing =>
// I prefer to run stuff here in a future, and then send a message to myself to signal the end of the job:
self ! Finished(data)
}
An Edit to more specifically address the question:
class MyActor extends Actor with FSM[State, Message] {
startWith(Finished, WaitMessage(null))
when(Finished) {
case Event(Todo(... =>
// work
goto(Computing) using Todo(...)
case Event(GetResponse(... =>
// reply: sender ! msg // or similar
stay
}
initialize()
}
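For completeness, here is a fuller (still simplified) sketch of the FSM variant with the elided parts filled in; GetResponse is declared as a plain case object, and Compute / ComputationDone are hypothetical messages standing in for the real work:
import akka.actor.{Actor, ActorRef, FSM}
import akka.pattern.pipe
import scala.collection.immutable
import scala.concurrent.Future

case object GetResponse
final case class Compute(n: Int)              // hypothetical work trigger
final case class ComputationDone(result: Int) // hypothetical completion message

class WorkerA extends Actor with FSM[State, Data] {
  import context.dispatcher

  startWith(Finished, Uninitialized)

  when(Finished) {
    case Event(GetResponse, _) =>
      stay() replying "ready" // only answered while Finished
    case Event(Compute(n), _) =>
      Future(n * 2).map(ComputationDone(_)) pipeTo self // simulate the asynchronous work
      goto(Computing) using Todo(sender(), immutable.Seq(n))
  }

  when(Computing) {
    case Event(ComputationDone(result), Todo(target, _)) =>
      target ! result
      goto(Finished) using Uninitialized
    case Event(GetResponse, _) =>
      stay() // ignore (or stash) until we are Finished again
  }

  initialize()
}
With this shape, GetResponse gets a reply only in the Finished state, which is the behaviour asked for, without any blocking.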