Akka Supervisor Strategy - Correct Use Case - scala

I have been using the Akka supervisor strategy to handle business-logic exceptions.
Reading the Neophyte's Guide to Scala, one of the best-known Scala blog series, I found the author using supervision for a different purpose than I always have.
Example:
Let's say I have an HttpActor that contacts an external resource. If the resource is down, I throw an exception, for now a ResourceUnavailableException.
When my supervisor catches it, it calls Restart on my HttpActor, and in the HttpActor's preRestart method I call scheduleOnce to retry the message.
The actor:
import akka.actor.{Actor, ActorLogging}
import akka.event.LoggingReceive
import scala.concurrent.duration._

class HttpActor extends Actor with ActorLogging {
  import context.dispatcher // ExecutionContext for the scheduler

  override def preRestart(reason: Throwable, message: Option[Any]): Unit = {
    log.info(s"Restarting Actor due to: ${reason.getCause}")
    // re-enqueue the message that caused the failure, after a delay
    message foreach { msg =>
      context.system.scheduler.scheduleOnce(10.seconds, self, msg)
    }
  }

  def receive = LoggingReceive {
    case g: GetRequest =>
      doRequest(http.doGet(g), g.httpManager.url, sender())
  }
}
A Supervisor:
class HttpSupervisor extends Actor with ActorLogging with RouterHelper {

  override val supervisorStrategy =
    OneForOneStrategy(maxNrOfRetries = 5) {
      case _: ResourceUnavailableException => Restart
      case _: Exception                    => Escalate
    }

  var router = makeRouter[HttpActor](5)

  def receive = LoggingReceive {
    case g: GetRequest =>
      router.route(g, sender())
    case Terminated(a) =>
      router = router.removeRoutee(a)
      val r = context.actorOf(Props[HttpActor])
      context watch r
      router = router.addRoutee(r)
  }
}
What's the point here?
If my doRequest method throws a ResourceUnavailableException, the supervisor catches it and restarts the actor, forcing it to resend the message after some delay via the scheduler. The advantages I see are that I get the retry count for free and a clean way to handle the exception itself.
Now, looking at the blog, the author shows a different approach when you need retry behavior, just sending messages like this:
def receive = {
  case EspressoRequest =>
    val receipt = register ? Transaction(Espresso)
    receipt.map((EspressoCup(Filled), _)).recover {
      case _: AskTimeoutException => ComebackLater
    } pipeTo sender
  case ClosingTime => context.system.shutdown()
}
Here, in case of an AskTimeoutException from the Future, he pipes a ComebackLater object back to the sender, which he then handles like this:
case ComebackLater =>
  log.info("grumble, grumble")
  context.system.scheduler.scheduleOnce(300.millis) {
    coffeeSource ! EspressoRequest
  }
To me this is pretty much what you can do with the supervisor strategy, just done manually, with no built-in retry-count logic.
So what is the best approach here, and why? Is my concept of using the Akka supervisor strategy completely wrong?

You can use BackoffSupervisor:
Provided as a built-in pattern the akka.pattern.BackoffSupervisor implements the so-called exponential backoff supervision strategy, starting a child actor again when it fails, each time with a growing time delay between restarts.
import akka.pattern.{Backoff, BackoffSupervisor}

val supervisor = BackoffSupervisor.props(
  Backoff.onFailure(
    childProps,
    childName = "myEcho",
    minBackoff = 3.seconds,
    maxBackoff = 30.seconds,
    randomFactor = 0.2 // adds 20% "noise" to vary the intervals slightly
  ).withAutoReset(10.seconds) // resets the backoff if the child does not fail within 10 seconds
    .withSupervisorStrategy(
      OneForOneStrategy() {
        case _: MyException => SupervisorStrategy.Restart
        case _              => SupervisorStrategy.Escalate
      }))
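For context, here is a minimal, self-contained sketch of wiring this up against the Akka 2.4/2.5-era akka.pattern API. EchoActor, MyException, and the "boom" message are illustrative assumptions, not names from the question:

import akka.actor.{Actor, ActorSystem, OneForOneStrategy, Props, SupervisorStrategy}
import akka.pattern.{Backoff, BackoffSupervisor}
import scala.concurrent.duration._

class MyException(msg: String) extends Exception(msg)

// A child that can be told to fail, so the backoff restarts are observable.
class EchoActor extends Actor {
  override def preStart(): Unit = println("child (re)started")
  def receive = {
    case "boom" => throw new MyException("simulated failure")
    case msg    => println(s"echo: $msg")
  }
}

object BackoffDemo extends App {
  val system = ActorSystem("demo")

  val supervisor = system.actorOf(
    BackoffSupervisor.props(
      Backoff.onFailure(
        Props[EchoActor],
        childName = "myEcho",
        minBackoff = 3.seconds,
        maxBackoff = 30.seconds,
        randomFactor = 0.2
      ).withSupervisorStrategy(
        OneForOneStrategy() {
          case _: MyException => SupervisorStrategy.Restart
          case _              => SupervisorStrategy.Escalate
        })),
    name = "echoSupervisor")

  // Messages sent to the BackoffSupervisor are forwarded to the running child,
  // so callers talk to the supervisor, not to the child directly.
  supervisor ! "boom" // triggers a failure; the child restarts after ~3 seconds
}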

Related

Send message to actor after restart from Supervisor

I am using the BackoffSupervisor strategy to create a child actor that has to process a message. I want to implement a very simple restart strategy in which, in case of an exception:

1. the child propagates the failing message to the supervisor;
2. the supervisor restarts the child and sends the failing message again;
3. the supervisor gives up after 3 retries.

Akka Persistence is not an option.
So far what I have is this:
Supervisor definition:
val childProps = Props(new SenderActor())

val supervisor = BackoffSupervisor.props(
  Backoff.onFailure(
    childProps,
    childName = cmd.hashCode.toString,
    minBackoff = 1.seconds,
    maxBackoff = 2.seconds,
    randomFactor = 0.2
  ).withSupervisorStrategy(
    OneForOneStrategy(maxNrOfRetries = 3, loggingEnabled = true) {
      case msg: MessageException =>
        println("caught specific message!")
        SupervisorStrategy.Restart
      case _: Exception => SupervisorStrategy.Restart
      case _            => SupervisorStrategy.Escalate
    })
)

val sup = context.actorOf(supervisor)
sup ! cmd
The child actor that is supposed to send the e-mail but fails (throws some exception) and propagates the exception back to the supervisor:
class SenderActor() extends Actor {

  def fakeSendMail(): Unit = {
    Thread.sleep(1000)
    throw new Exception("surprising exception")
  }

  override def receive: Receive = {
    case cmd: NewMail =>
      println("new mail received routee")
      try {
        fakeSendMail()
      } catch {
        case t: Throwable => throw MessageException(cmd, t)
      }
  }
}
In the above code I wrap any exception into the custom class MessageException, which gets propagated to the SupervisorStrategy, but how do I propagate it further to the new child to force reprocessing? Is this the right approach?
Edit: I attempted to resend the message to the actor in the preRestart hook, but somehow the hook is not being triggered:
class SenderActor() extends Actor {

  def fakeSendMail(): Unit = {
    Thread.sleep(1000)
    // println("mail sent!")
    throw new Exception("surprising exception")
  }

  override def preStart(): Unit = {
    println("child starting")
  }

  override def preRestart(reason: Throwable, message: Option[Any]): Unit = {
    reason match {
      case m: MessageException =>
        println("aaaaa")
        message.foreach(self ! _)
      case _ => println("bbbb")
    }
  }

  override def postStop(): Unit = {
    println("child stopping")
  }

  override def receive: Receive = {
    case cmd: NewMail =>
      println("new mail received routee")
      try {
        fakeSendMail()
      } catch {
        case t: Throwable => throw MessageException(cmd, t)
      }
  }
}
This gives me something similar to the following output:
new mail received routee
caught specific message!
child stopping
[ERROR] [01/26/2018 10:15:35.690]
[example-akka.actor.default-dispatcher-2]
[akka://example/user/persistentActor-4-scala/$a/1962829645] Could not
process message sample.persistence.MessageException:
Could not process message <stacktrace>
child starting
But there are no logs from the preRestart hook.
The reason that the child's preRestart hook is not invoked is because Backoff.onFailure uses BackoffOnRestartSupervisor underneath the covers, which replaces the default restart behavior with a stop-and-delayed-start behavior that is consistent with the backoff policy. In other words, when using Backoff.onFailure, when a child is restarted, the child's preRestart method is not called because the underlying supervisor actually stops the child, then starts it again later. (Using Backoff.onStop can trigger the child's preRestart hook, but that's tangential to the present discussion.)
The BackoffSupervisor API doesn't support the automatic resending of a message when the supervisor's child restarts: you have to implement this behavior yourself. An idea for retrying messages is to let the BackoffSupervisor's supervisor handle it. For example:
val supervisor = BackoffSupervisor.props(
  Backoff.onFailure(
    ...
  ).withReplyWhileStopped(ChildIsStopped)
    .withSupervisorStrategy(
      OneForOneStrategy(maxNrOfRetries = 3, loggingEnabled = true) {
        case msg: MessageException =>
          println("caught specific message!")
          self ! Error(msg.cmd) // replace cmd with whatever the property name is
          SupervisorStrategy.Restart
        case ...
      })
)
val sup = context.actorOf(supervisor)
// inside an actor that mixes in akka.actor.Timers
def receive = {
  case cmd: NewMail =>
    sup ! cmd
  case Error(cmd) =>
    timers.startSingleTimer(cmd.id, Replay(cmd), 10.seconds)
    // We assume that NewMail has an id field. Also, adjust the time as needed.
  case Replay(cmd) =>
    sup ! cmd
  case ChildIsStopped =>
    println("child is stopped")
}
In the above code, the NewMail message embedded in the MessageException is wrapped in a custom case class (in order to easily distinguish it from a "normal"/new NewMail message) and sent to self. In this context, self is the actor that created the BackoffSupervisor. This enclosing actor then uses a single timer to replay the original message at some point. This point in time should be far enough in the future such that the BackoffSupervisor can potentially exhaust SenderActor's restart attempts, so that the child can have ample opportunity to get in a "good" state before it receives the resent message. Obviously this example involves only one message resend regardless of the number of child restarts.
Another idea is to create a BackoffSupervisor-SenderActor pair for every NewMail message, and have the SenderActor send the NewMail message to itself in the preStart hook. One concern with this approach is the cleaning up of resources; i.e., shutting down the BackoffSupervisors (which will, in turn, shut down their respective SenderActor children) when the processing is successful or when the child restarts are exhausted. A map of NewMail ids to (ActorRef, Int) tuples (in which the ActorRef is a reference to a BackoffSupervisor actor, and the Int is the number of restart attempts) would be helpful in this case:
class Overlord extends Actor {

  var state = Map[Long, (ActorRef, Int)]() // assuming the mail id is a Long

  def receive = {
    case cmd: NewMail =>
      val childProps = Props(new SenderActor(cmd, self))
      val supervisor = BackoffSupervisor.props(
        Backoff.onFailure(
          ...
        ).withSupervisorStrategy(
          OneForOneStrategy(maxNrOfRetries = 3, loggingEnabled = true) {
            case msg: MessageException =>
              println("caught specific message!")
              self ! Error(msg.cmd)
              SupervisorStrategy.Restart
            case ...
          })
      )
      val sup = context.actorOf(supervisor)
      state += (cmd.id -> (sup, 0))

    case ProcessingDone(cmdId) =>
      state.get(cmdId) match {
        case Some((backoffSup, _)) =>
          context.stop(backoffSup)
          state -= cmdId
        case None =>
          println(s"${cmdId} not found")
      }

    case Error(cmd) =>
      val cmdId = cmd.id
      state.get(cmdId) match {
        case Some((backoffSup, numRetries)) =>
          if (numRetries == 3) {
            println(s"${cmdId} has already been retried 3 times. Giving up.")
            context.stop(backoffSup)
            state -= cmdId
          } else
            state += (cmdId -> (backoffSup, numRetries + 1))
        case None =>
          println(s"${cmdId} not found")
      }

    case ...
  }
}
Note that SenderActor in the above example takes a NewMail and an ActorRef as constructor arguments. The latter argument allows the SenderActor to send a custom ProcessingDone message to the enclosing actor:
class SenderActor(cmd: NewMail, target: ActorRef) extends Actor {

  override def preStart(): Unit = {
    println(s"child starting, sending ${cmd} to self")
    self ! cmd
  }

  def fakeSendMail(): Unit = ...

  def receive = {
    case cmd: NewMail => ...
  }
}
Obviously the SenderActor is set up to fail every time with the current implementation of fakeSendMail. I'll leave the additional changes needed in SenderActor to implement the happy path, in which SenderActor sends a ProcessingDone message to target, to you.
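For completeness, one possible happy-path sketch; having fakeSendMail succeed and replying to target is my assumption about the part the answer leaves open:

import akka.actor.{Actor, ActorRef}

class SenderActor(cmd: NewMail, target: ActorRef) extends Actor {

  override def preStart(): Unit = {
    println(s"child starting, sending ${cmd} to self")
    self ! cmd
  }

  def fakeSendMail(): Unit = () // pretend the send succeeded this time

  def receive = {
    case cmd: NewMail =>
      try {
        fakeSendMail()
        target ! ProcessingDone(cmd.id) // lets the Overlord stop this supervisor pair
      } catch {
        case t: Throwable => throw MessageException(cmd, t) // failure path as before
      }
  }
}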
In the good solution that @chunjef provides, he warns about the risk of scheduling a job resend before the backoff supervisor has started the worker:
This enclosing actor then uses a single timer to replay the original message at some point. This point in time should be far enough in the future such that the BackoffSupervisor can potentially exhaust SenderActor's restart attempts, so that the child can have ample opportunity to get in a "good" state before it receives the resent message.
If this happens, jobs will end up in dead letters and no further progress will be made.
I've made a simplified fiddle with this scenario.
So the scheduling delay should be larger than maxBackoff, and this could impact job-completion time.
A possible way to avoid this scenario is to make the worker actor send a message to its parent when it is ready to work, like here.
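A sketch of that idea (the message names are illustrative, not taken from the linked fiddle); it relies on the BackoffSupervisor passing messages it receives from its child on to its own parent:

import akka.actor.{Actor, ActorRef}

case object ReadyToWork // illustrative readiness signal

class Worker extends Actor {
  override def preStart(): Unit = {
    // context.parent is the BackoffSupervisor; it relays messages coming
    // from its child up to its own parent, so this announcement reaches
    // the enclosing actor after every (re)start of the worker.
    context.parent ! ReadyToWork
  }
  def receive = {
    case _: NewMail => () // process the mail, throwing on failure as before
  }
}

class Enclosing(sup: ActorRef) extends Actor {
  var pending: Option[NewMail] = None
  def receive = {
    case cmd: NewMail => sup ! cmd
    case Error(cmd)   => pending = Some(cmd) // park the failed command
    case ReadyToWork  =>                     // worker is up; replaying is now safe
      pending.foreach(sup ! _)
      pending = None
  }
}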
The failed child actor is available as the sender in your supervisor strategy. Quoting https://doc.akka.io/docs/akka/current/fault-tolerance.html#creating-a-supervisor-strategy:
If the strategy is declared inside the supervising actor (as opposed to within a companion object) its decider has access to all internal state of the actor in a thread-safe fashion, including obtaining a reference to the currently failed child (available as the sender of the failure message).
Sending emails is a dangerous operation with some third-party software, in your case. Why not apply the Circuit Breaker pattern and skip the sender actor entirely? Also, you can still have an actor (with some backoff supervision) and a CircuitBreaker inside it, if that makes sense for you.
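A sketch of that suggestion with Akka's akka.pattern.CircuitBreaker; sendMail is a stand-in for the real third-party call, and the thresholds are illustrative:

import akka.actor.Actor
import akka.pattern.{CircuitBreaker, pipe}
import scala.concurrent.Future
import scala.concurrent.duration._

class MailerActor extends Actor {
  import context.dispatcher

  val breaker = new CircuitBreaker(
    context.system.scheduler,
    maxFailures = 3,           // open after 3 consecutive failures
    callTimeout = 5.seconds,   // calls slower than this count as failures
    resetTimeout = 30.seconds  // after this delay, let one test call through
  ).onOpen(println("circuit opened; mail service presumed down"))

  def sendMail(cmd: NewMail): Unit = ??? // the real third-party call goes here

  def receive = {
    case cmd: NewMail =>
      // While the circuit is open this fails fast with a
      // CircuitBreakerOpenException instead of hammering the mail service.
      breaker.withCircuitBreaker(Future(sendMail(cmd))).pipeTo(sender())
  }
}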

Child actors, futures and exceptions

I'm currently working on an application with a signup process. This signup process will, at some point, communicate with external systems in an asynchronous manner. To keep this question concise, I'm showing you two important actors I've written:
SignupActor.scala
class SignupActor extends PersistentFSM[SignupActor.State, Data, DomainEvt] {
  private val apiActor = context.actorOf(ExternalAPIActor.props(new HttpClient))
  // At a certain point, a CreateUser(data) message is sent to the apiActor
}
ExternalAPIActor.scala
class ExternalAPIActor(apiClient: HttpClient) extends Actor {

  override def preRestart(reason: Throwable, message: Option[Any]) = {
    message.foreach(context.system.scheduler.scheduleOnce(3 seconds, self, _))
    super.preRestart(reason, message)
  }

  def receive: Receive = {
    case CreateUser(data) =>
      Await.result(
        apiClient.post(data)
          .map(_ => UserCreatedInAPI())
          .pipeTo(context.parent),
        Timeout(5 seconds).duration
      )
  }
}
This setup seems to work as expected. When there is an issue with the external API (such as a timeout or network problems), the Future returned by HttpClient::post fails and, thanks to Await.result, results in an exception. That exception, in turn, thanks to the SupervisorStrategy of the SignupActor parent actor, restarts the ExternalAPIActor, where we can re-send the last message to ourselves with a small delay to avoid deadlock.
I see a couple of issues with this setup:

1. Within the receive method of ExternalAPIActor, blocking occurs. As far as I understand, blocking within actors is considered an anti-pattern.
2. The delay used to re-send the message is static. If the API is unavailable for a longer period of time, we will keep sending HTTP requests every 3 seconds. I'd like some kind of exponential backoff mechanism here instead.
To continue on with the latter, I've tried the following in the SignupActor:
SignupActor.scala
val supervisor = BackoffSupervisor.props(
  Backoff.onFailure(
    ExternalAPIActor.props(new HttpClient),
    childName = "external-api",
    minBackoff = 3 seconds,
    maxBackoff = 30 seconds,
    randomFactor = 0.2
  )
)

private val apiActor = context.actorOf(supervisor)
Unfortunately, this doesn't seem to do anything -- the preRestart method of ExternalAPIActor isn't called at all. When replacing Backoff.onFailure with Backoff.onStop, the preRestart method is called, but without any kind of exponential backoff.
Given the above, my questions are as follows:
Is using Await.result the recommended (the only?) way to make sure exceptions thrown in a Future returned from services called within actors are caught and handled accordingly? An especially important part of my particular use case is the fact that messages shouldn't be dropped but retried when something went wrong. Or is there some other (idiomatic) way that exceptions thrown in asynchronous contexts should be handled within Actors?
How would one use the BackoffSupervisor as intended in this case? Again: it is very important that the message responsible for the exception is not dropped, but retried up to N times (to be determined by the maxNrOfRetries argument of the SupervisorStrategy).
Is using Await.result the recommended (the only?) way to make sure exceptions thrown in a Future returned from services called within actors are caught and handled accordingly?
No. Generally that's not how you want to handle failures in Akka. A better alternative is to pipe the failure to your own actor, avoiding the need to use Await.result at all:
def receive: Receive = {
  case CreateUser(data) =>
    apiClient.post(data)
      .map(_ => UserCreatedInAPI())
      .pipeTo(self)
  // pipeTo delivers the future's value as-is on success,
  // and wraps failures in akka.actor.Status.Failure
  case res: UserCreatedInAPI => context.parent ! res
  case Status.Failure(e)     => // Invoke retry here
}
This would mean no restart is required to handle failures; they are all part of the normal flow of your actor.
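To make the "invoke retry here" branch concrete, here is a minimal sketch with a capped, scheduler-driven retry. The single in-flight command, the retry cap of 3, and the 3-second delay are all assumptions for illustration:

import akka.actor.{Actor, Status}
import akka.pattern.pipe
import scala.concurrent.duration._

class ExternalAPIActor(apiClient: HttpClient) extends Actor {
  import context.dispatcher

  private var inFlight: Option[CreateUser] = None // assumes one command at a time
  private var retries = 0

  def receive: Receive = {
    case cmd: CreateUser =>
      inFlight = Some(cmd)
      apiClient.post(cmd.data)
        .map(_ => UserCreatedInAPI())
        .pipeTo(self)
    case res: UserCreatedInAPI =>
      retries = 0
      inFlight = None
      context.parent ! res
    case Status.Failure(_) if retries < 3 =>
      retries += 1
      // re-enqueue the failed command after a delay; no restart involved
      inFlight.foreach(cmd => context.system.scheduler.scheduleOnce(3.seconds, self, cmd))
    case Status.Failure(e) =>
      inFlight = None
      throw e // out of retries; let the supervisor decide
  }
}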
An additional way to handle this can be to create a "supervised future". Taken from this blog post:
object SupervisedPipe {

  case class SupervisedFailure(ex: Throwable)

  class SupervisedPipeableFuture[T](future: Future[T])(implicit executionContext: ExecutionContext) {
    // implicit failure recipient goes to self when used inside an actor
    def supervisedPipeTo(successRecipient: ActorRef)(implicit failureRecipient: ActorRef): Unit =
      future.andThen {
        case Success(result) => successRecipient ! result
        case Failure(ex)     => failureRecipient ! SupervisedFailure(ex)
      }
  }

  implicit def supervisedPipeTo[T](future: Future[T])(implicit executionContext: ExecutionContext): SupervisedPipeableFuture[T] =
    new SupervisedPipeableFuture[T](future)

  /* `orElse` with the actor receive logic */
  val handleSupervisedFailure: Actor.Receive = {
    // just throw the exception and make the actor logic handle it
    case SupervisedFailure(ex) => throw ex
  }

  def supervised(receive: Actor.Receive): Actor.Receive =
    handleSupervisedFailure orElse receive
}
This way, you only pipe to self when you get a Failure, and otherwise send the result to the actor the message was meant for, avoiding the need for the case Success I added to the receive method above. All you need to do is replace the framework-provided pipeTo with supervisedPipeTo.
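A possible usage sketch inside the questioner's ExternalAPIActor (the implicit failureRecipient picks up the actor's own self reference):

import SupervisedPipe._

class ExternalAPIActor(apiClient: HttpClient) extends Actor {
  import context.dispatcher

  def receive: Receive = supervised {
    case CreateUser(data) =>
      apiClient.post(data)
        .map(_ => UserCreatedInAPI())
        .supervisedPipeTo(context.parent) // failures come back to self as SupervisedFailure
  }
}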
Alright, I've done some more thinking and tinkering and I've come up with the following.
ExternalAPIActor.scala
class ExternalAPIActor(apiClient: HttpClient) extends Actor with Stash {
  import ExternalAPIActor._
  import context.dispatcher

  def receive: Receive = {
    case msg @ CreateUser(data) =>
      context.become(waitingForExternalServiceReceive(msg))
      apiClient.post(data)
        .map(_ => UserCreatedInAPI())
        .pipeTo(self)
  }

  def waitingForExternalServiceReceive(event: InputEvent): Receive = LoggingReceive {
    case akka.actor.Status.Failure(_) =>
      unstashAll()
      context.unbecome()
      context.system.scheduler.scheduleOnce(3 seconds, self, event)
    case msg: OutputEvent =>
      unstashAll()
      context.unbecome()
      context.parent ! msg
    case _ => stash()
  }
}

object ExternalAPIActor {
  sealed trait InputEvent
  sealed trait OutputEvent
  final case class CreateUser(data: Map[String, Any]) extends InputEvent
  final case class UserCreatedInAPI() extends OutputEvent
}
I've used this technique to prevent the original message from being lost in case there is something wrong with the external service we're calling. During a request to an external service, I switch context, wait for either a response or a failure, and switch back afterwards. Thanks to the Stash trait, I can make sure other requests to external services aren't lost either.
Since I have multiple actors in my application calling external services, I abstracted waitingForExternalServiceReceive into its own trait:
WaitingForExternalService.scala
trait WaitingForExternalServiceReceive[-tInput, +tOutput] extends Stash {

  def waitingForExternalServiceReceive(event: tInput)(implicit ec: ExecutionContext): Receive = LoggingReceive {
    case akka.actor.Status.Failure(_) =>
      unstashAll()
      context.unbecome()
      context.system.scheduler.scheduleOnce(3 seconds, self, event)
    case msg: tOutput =>
      unstashAll()
      context.unbecome()
      context.parent ! msg
    case _ => stash()
  }
}
Now, the ExternalAPIActor can extend this trait:
ExternalAPIActor.scala
class ExternalAPIActor(apiClient: HttpClient) extends Actor with WaitingForExternalServiceReceive[InputEvent, OutputEvent] {
  import ExternalAPIActor._
  import context.dispatcher

  def receive: Receive = {
    case msg @ CreateUser(data) =>
      context.become(waitingForExternalServiceReceive(msg))
      apiClient.post(data)
        .map(_ => UserCreatedInAPI())
        .pipeTo(self)
  }
}

object ExternalAPIActor {
  sealed trait InputEvent
  sealed trait OutputEvent
  final case class CreateUser(data: Map[String, Any]) extends InputEvent
  final case class UserCreatedInAPI() extends OutputEvent
}
Now the actor won't get restarted in case of failures/errors, and the message isn't lost. What's more, the entire flow of the actor is now non-blocking.
This setup is (most probably) far from perfect, but it works exactly as I need it to.

Kill actor if it times out in Spray app

In my Spray app, I delegate requests to actors. I want to be able to kill an actor that takes too long. I'm not sure whether I should be using Spray timeouts, the Akka ask pattern, or something else.
I have implemented:
def processRouteRequest(system: ActorSystem) = {
  respondWithMediaType(`text/json`) {
    params { p => ctx =>
      val builder = newBuilderActor
      builder ! Request(p) // the builder calls `ctx.complete`
      builder ! PoisonPill
      system.scheduler.scheduleOnce(routeRequestMaxLife, builder, Kill)
    }
  }
}
The idea is that the actor lives only for the duration of a single request, and if it doesn't complete within routeRequestMaxLife, it gets forcibly killed. This approach seems over-the-top (and spews a lot of info about undelivered messages). I'm not even certain it works correctly.
It seems like what I'm trying to achieve should be a common use case. How should I approach it?
I would tend towards using the ask pattern and handling the requests as follows:
class RequestHandler extends Actor {
  def receive = {
    case "quick" =>
      sender() ! "Quick Reply"
      self ! PoisonPill
    case "slow" =>
      val replyTo = sender()
      context.system.scheduler.scheduleOnce(5 seconds, self, replyTo)
    case a: ActorRef =>
      a ! "Slow Reply"
      self ! PoisonPill
  }
}

class ExampleService extends HttpService with Actor {
  implicit def actorRefFactory = context
  import context.dispatcher

  def handleRequest(mode: String): Future[String] = {
    implicit val timeout = Timeout(1 second)
    val requestHandler = context.actorOf(Props[RequestHandler])
    (requestHandler ? mode).mapTo[String]
  }

  val route: Route =
    path("endpoint" / Segment) { str =>
      get {
        onComplete(handleRequest(str)) {
          case Success(str) => complete(str)
          case Failure(ex)  => complete(ex)
        }
      }
    }

  def receive = runRoute(route)
}
This way the actor takes care of stopping itself, and the semantics of Ask give you the information about whether or not the request timed out.
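If you also want the handler itself to stop when no reply arrives in time, rather than relying only on the failed ask, a receive-timeout variant along these lines could complement it; the 1-second value mirrors the ask timeout above, and the class name is my own:

import akka.actor.{Actor, PoisonPill, ReceiveTimeout}
import scala.concurrent.duration._

class SelfStoppingHandler extends Actor {
  // If no message arrives within 1 second, we receive a ReceiveTimeout.
  context.setReceiveTimeout(1.second)

  def receive = {
    case "quick" =>
      sender() ! "Quick Reply"
      self ! PoisonPill
    case ReceiveTimeout =>
      // The asker has already timed out by now; just clean up.
      context.stop(self)
  }
}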

Resolving Akka futures from ask in the event of a failure

I am calling an Actor using the ask pattern within a Spray application, and returning the result as the HTTP response. I map failures from the actor to a custom error code.
val authActor = context.actorOf(Props[AuthenticationActor])

callService((authActor ? TokenAuthenticationRequest(token)).mapTo[LoggedInUser]) { user =>
  complete(StatusCodes.OK, user)
}

def callService[T](f: => Future[T])(cb: T => RequestContext => Unit) = {
  onComplete(f) {
    case Success(value: T) => cb(value)
    case Failure(ex: ServiceException) => complete(ex.statusCode, ex.errorMessage)
    case e => complete(StatusCodes.InternalServerError, "Unable to complete the request. Please try again later.")
    // In reality this returns a custom error object.
  }
}
This works correctly when the authActor sends a failure, but if the authActor throws an exception, nothing happens until the ask timeout expires. For example:
override def receive: Receive = {
  case _ => throw new ServiceException(ErrorCodes.AuthenticationFailed, "No valid session was found for that token")
}
I know that the Akka docs say that
To complete the future with an exception you need to send a Failure message to the sender. This is not done automatically when an actor throws an exception while processing a message.
But given that I use ask for a lot of the interface between the Spray routing actors and the service actors, I would rather not wrap the receive part of every child actor in a try/catch. Is there a better way to achieve automatic handling of exceptions in child actors, and immediately resolve the future in the event of an exception?
Edit: this is my current solution. However, it's quite messy to do this for every child actor.
override def receive: Receive = {
  case default =>
    try {
      default match {
        case _ => throw new ServiceException("") // Actual code would go here
      }
    } catch {
      case se: ServiceException =>
        logger.error("Service error raised:", se)
        sender ! Failure(se)
      case ex: Exception =>
        sender ! Failure(ex)
        throw ex
    }
}
That way, if it's an expected error (i.e. a ServiceException), it's handled by creating a Failure. If it's unexpected, it returns a Failure immediately so the future is resolved, but then rethrows the exception so it can still be handled by the SupervisorStrategy.
If you want a way to provide automatic sending of a response back to the sender in case of an unexpected exception, then something like this could work for you:
trait FailurePropatingActor extends Actor {
  override def preRestart(reason: Throwable, message: Option[Any]): Unit = {
    super.preRestart(reason, message)
    sender() ! Status.Failure(reason)
  }
}
We override preRestart and propagate the failure back to the sender as a Status.Failure which will cause an upstream Future to be failed. Also, it's important to call super.preRestart here as that's where child stopping happens. Using this in an actor looks something like this:
case class GetElement(list: List[Int], index: Int)

class MySimpleActor extends FailurePropatingActor {
  def receive = {
    case GetElement(list, i) =>
      val result = list(i)
      sender() ! result
  }
}
If I were to call an instance of this actor like so:
import akka.pattern.ask
import concurrent.duration._

val system = ActorSystem("test")
import system.dispatcher
implicit val timeout = Timeout(2 seconds)

val ref = system.actorOf(Props[MySimpleActor])
val fut = ref ? GetElement(List(1, 2, 3), 6)

fut onComplete {
  case util.Success(result) =>
    println(s"success: $result")
  case util.Failure(ex) =>
    println(s"FAIL: ${ex.getMessage}")
    ex.printStackTrace()
}
Then it would properly hit my Failure block. Now, the code in that base trait works well when Futures are not involved in the actor that is extending that trait, like the simple actor here. But if you use Futures then you need to be careful as exceptions that happen in the Future don't cause restarts in the actor and also, in preRestart, the call to sender() will not return the correct ref because the actor has already moved into the next message. An actor like this shows that issue:
class MyBadFutureUsingActor extends FailurePropatingActor {
  import context.dispatcher

  def receive = {
    case GetElement(list, i) =>
      val orig = sender()
      val fut = Future {
        val result = list(i)
        orig ! result
      }
  }
}
If we were to use this actor in the previous test code, we would always get a timeout in the failure situation. To mitigate that, you need to pipe the results of futures back to the sender, like so:
class MyGoodFutureUsingActor extends FailurePropatingActor {
  import context.dispatcher
  import akka.pattern.pipe

  def receive = {
    case GetElement(list, i) =>
      val fut = Future {
        list(i)
      }
      fut pipeTo sender()
  }
}
In this particular case, the actor itself is not restarted because it did not encounter an uncaught exception. Now, if your actor needed to do some additional processing after the future, you can pipe back to self and explicitly fail when you get a Status.Failure:
class MyGoodFutureUsingActor extends FailurePropatingActor {
  import context.dispatcher
  import akka.pattern.pipe

  def receive = {
    case GetElement(list, i) =>
      val fut = Future {
        list(i)
      }
      // pipe to self, keeping the original sender so the reply goes back to it
      fut.to(self, sender())
    case i: Int =>
      sender() ! i * 2
    case Status.Failure(ex) =>
      throw ex
  }
}
If that behavior becomes common, you can make it available to whatever actors need it like so:
trait StatusFailureHandling { me: Actor =>
  def failureHandling: Receive = {
    case Status.Failure(ex) =>
      throw ex
  }
}

class MyGoodFutureUsingActor extends FailurePropatingActor with StatusFailureHandling {
  import context.dispatcher
  import akka.pattern.pipe

  def receive = myReceive orElse failureHandling

  def myReceive: Receive = {
    case GetElement(list, i) =>
      val fut = Future {
        list(i)
      }
      fut.to(self, sender())
    case i: Int =>
      sender() ! i * 2
  }
}

Akka matching Failures, and recovery

I have an actor that has multiple Spray HttpRequests in flight; the requests are paginated, and the actor makes sure it writes the results into a database in sequence (the sequence is important for resuming crawlers). I explain this because I don't want to explore other patterns of concurrency at the moment. The actor needs to recover from timeouts without restarting.
In my actor I have the following:
case f: Failure => {
  system.log.error("failure")
  system.log.error(s"$f")
  system.shutdown()
}
case f: AskTimeoutException => {
  system.log.error("failure")
  system.log.error(s"$f")
  system.shutdown()
}
case msg @ _ => {
  system.log.error("Unexpected message in harvest")
  system.log.error(s"${msg}")
  system.shutdown()
}
but I can't match correctly:
[ERROR] [11/23/2013 14:58:10.694] [Crawler-akka.actor.default-dispatcher-3] [ActorSystem(Crawler)] Unexpected message in harvest
[ERROR] [11/23/2013 14:58:10.694] [Crawler-akka.actor.default-dispatcher-3] [ActorSystem(Crawler)] Failure(akka.pattern.AskTimeoutException: Timed out)
My dispatches look as follows:
abstract class CrawlerActor extends Actor {
  private implicit val timeout: Timeout = 20.seconds
  import context._

  def dispatchRequest(node: CNode) {
    val reqFut = (System.requester ? CrawlerRequest(node, Get(node.url))).map(r => CrawlerResponse(node, r.asInstanceOf[HttpResponse]))
    reqFut pipeTo self
  }
}

class CrawlerRequester extends Actor {
  import context._

  val throttler = context.actorOf(Props(classOf[TimerBasedThrottler], System.Config.request_rate), "throttler")
  throttler ! SetTarget(Some(IO(Http).actorRef))

  def receive: Receive = {
    case CrawlerRequest(type_, request) =>
      throttler forward request
  }
}
Once I find the correct way of matching, is there any way I can get my hands on the CrawlerRequest that the timeout occurred with? It contains some state I need in order to figure out how to recover.
This situation occurs if you use pipeTo to respond to a message that was sent with tell.
For example:
in actorA: actorB ! message
in actorB: message => doStuff pipeTo sender
actorA then receives not scala.util.Failure, but akka.actor.Status.Failure.
The additional logic in pipeTo is to transform a Try's Failure into Akka's actor failure (akka.actor.Status.Failure). This works fine when you use the ask pattern, because the temporary ask actor handles akka.actor.Status.Failure for you, but it does not work well with tell.
Hope this short answer helps :)
Good luck!
You need to type out the full path of the Failure case class (or import it, I guess).
case f: akka.actor.Status.Failure => {
  system.log.error("failure")
  system.log.error(s"${f.cause}")
  system.shutdown()
}
That just leaves getting to the request associated with the timeout. It seems a map and pipe with a custom failure handler is needed at the point of request dispatch. Looking into it now.
The following trampolines the timeout into the actor.
case class CrawlerRequestTimeout(request: CrawlerRequest)

abstract class CrawlerActor extends Actor {
  private implicit val timeout: Timeout = 20.seconds
  import context._

  def dispatchRequest(node: CNode) {
    val req = CrawlerRequest(node, Get(node.url))
    val reqFut = (System.requester ? req).map(r => CrawlerResponse(node, r.asInstanceOf[HttpResponse]))
    reqFut onFailure {
      case te: akka.pattern.AskTimeoutException => self ! CrawlerRequestTimeout(req)
    }
    reqFut pipeTo self
  }
}
with a match of:

case timeout: CrawlerRequestTimeout => {
  println("boom")
  system.shutdown()
}
I need to find a way of suppressing the exception, though; it's still firing. Perhaps suppression isn't really a concern; verifying.
No, suppression is a concern, or the exception trickles down to the msg @ _ case; I need to put in a case class to absorb the redundant failure message.
OK, so getting rid of the pipeTo stops the exception entering the client actor. It's also a lot easier to read :D
abstract class CrawlerActor extends Actor {
  private implicit val timeout: Timeout = 20.seconds
  import context._

  def dispatchRequest(node: CNode) {
    val req = CrawlerRequest(node, Get(node.url))
    val reqFut = (System.requester ? req)
    reqFut onFailure {
      case te: akka.pattern.AskTimeoutException => self ! CrawlerRequestTimeout(req)
    }
    reqFut onSuccess {
      case r: HttpResponse => self ! CrawlerResponse(node, r)
    }
  }
}
If I understand correctly, you currently don't succeed in matching the AskTimeoutException.
If so, you should match case akka.actor.Status.Failure(_: AskTimeoutException) => ... instead of case f: AskTimeoutException => ....
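Spelled out, the receive block might look like this sketch (Harvester stands in for the asker's actor):

import akka.actor.{Actor, Status}
import akka.pattern.AskTimeoutException

class Harvester extends Actor {
  def receive = {
    case Status.Failure(_: AskTimeoutException) =>
      context.system.log.error("ask timed out")
    case Status.Failure(other) =>
      context.system.log.error(s"failure: $other")
    case msg =>
      context.system.log.error(s"Unexpected message in harvest: $msg")
  }
}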