How should I handle an exception thrown by the DbActor here? I'm not sure how to handle it; should I pipe the Failure case?
import akka.actor.{Actor, ActorLogging, OneForOneStrategy, Props, SupervisorStrategy}
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.duration._
import scala.util.{Failure, Success}

class RestActor extends Actor with ActorLogging {
  import context.dispatcher

  val dbActor = context.actorOf(Props[DbActor])
  implicit val timeout = Timeout(10 seconds)

  override val supervisorStrategy: SupervisorStrategy =
    OneForOneStrategy(maxNrOfRetries = 10, withinTimeRange = 10 seconds) {
      case x: Exception => ???
    }

  def receive = {
    case GetRequest(reqCtx, id) =>
      // perform db ask
      ask(dbActor, ReadCommand(reqCtx, id)).mapTo[SomeObject] onComplete {
        case Success(obj) => // some stuff
        case Failure(err) => err match {
          case x: Exception => ???
        }
      }
  }
}
Would be glad to get your thoughts, thanks in advance!
There are a couple of questions I can see here based on the questions in your code sample:
What types of things can I do when I override the default supervisor behavior in the definition of how to handle exceptions?
When using ask, what types of things can I do when I get a Failure result on the Future that I am waiting on?
Let's start with the first question (usually a good idea). When you override the default supervisor strategy, you gain the ability to change how certain types of unhandled exceptions in the child actor are handled, in terms of what to do with that failed child actor. The key word in that previous sentence is unhandled. For actors that are doing request/response, you may actually want to handle (catch) specific exceptions and return certain response types instead (or fail the upstream future, more on that later) as opposed to letting them go unhandled.
When an unhandled exception happens, you basically lose the ability to respond to the sender with a description of the issue, and the sender will probably get a TimeoutException instead, as their Future will never be completed. Once you've figured out what you handle explicitly, you can consider all the remaining exceptions when defining your custom supervisor strategy. Inside this block here:
OneForOneStrategy(maxNrOfRetries = 10, withinTimeRange = 10 seconds) {
  case x: Exception => ???
}
You get a chance to map an exception type to a failure Directive, which defines how the failure will be handled from a supervision standpoint. The options are:
Stop - Completely stop the child actor and do not send any more messages to it
Resume - Resume the failed child without restarting it, thus keeping its current internal state
Restart - Similar to resume, but in this case, the old instance is thrown away and a new instance is constructed and internal state is reset (preStart)
Escalate - Escalate up the chain to the parent of the supervisor
So let's say that given a SQLException you want to resume, and given all other exceptions you want to restart; then your code would look like this:
OneForOneStrategy(maxNrOfRetries = 10, withinTimeRange = 10 seconds) {
  case x: SQLException => Resume
  case other => Restart
}
Now for the second question, which pertains to what to do when the Future itself returns a Failure. In this case, I guess it depends on what was supposed to happen as a result of that Future. If the rest actor itself was responsible for completing the http request (let's say that reqCtx has a complete(statusCode: Int, message: String) function on it), then you could do something like this:
ask(dbActor, ReadCommand(reqCtx, id)).mapTo[SomeObject] onComplete {
  case Success(obj) => reqCtx.complete(200, "All good!")
  case Failure(err: TimeoutException) => reqCtx.complete(500, "Request timed out")
  case Failure(ex) => reqCtx.complete(500, ex.getMessage)
}
Now if another actor upstream was responsible for completing the http request and you needed to respond to that actor, you could do something like this:
val origin = sender
ask(dbActor, ReadCommand(reqCtx, id)).mapTo[SomeObject] onComplete {
  case Success(obj) => origin ! someResponseObject
  case Failure(ex) => origin ! Status.Failure(ex)
}
This approach assumes that in the success block you first want to massage the result object before responding. If you don't want to do that and you want to defer the result handling to the sender then you could just do:
val origin = sender
val fut = ask(dbActor, ReadCommand(reqCtx, id))
fut pipeTo origin
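Coming back to the "fail the upstream future" option mentioned earlier, here is a minimal sketch, assuming the ReadCommand and SomeObject types from your code and a made-up readFromDb helper: if the child catches an expected exception and replies with Status.Failure, the ask on the caller's side fails immediately instead of waiting for a TimeoutException.

import java.sql.SQLException
import akka.actor.{Actor, Status}

class DbActor extends Actor {
  def receive = {
    case ReadCommand(reqCtx, id) =>
      try {
        sender() ! readFromDb(id) // completes the ask successfully
      } catch {
        case e: SQLException =>
          sender() ! Status.Failure(e) // fails the ask future right away
      }
  }

  // Made-up helper standing in for the real database access; may throw SQLException.
  private def readFromDb(id: Any): SomeObject = ???
}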
For simpler systems, one may want to catch and forward all errors. For that, I made this small function to wrap the receive method, without bothering with supervision:
import akka.actor.Actor.Receive
import akka.actor.ActorContext

/**
 * Meant for wrapping the receive method with try/catch.
 * A failed try will result in a reply to the sender with the exception.
 *
 * @example
 * def receive: Receive = honestly {
 *   case msg => sender ! riskyCalculation(msg)
 * }
 * ...
 * (honestActor ? "some message") map {
 *   case e: Throwable => // ...process error
 *   case r            => // ...process result
 * }
 *
 * @param receive
 * @return Actor.Receive
 *
 * @author Bijou Trouvaille
 */
def honestly(receive: => Receive)(implicit context: ActorContext): Receive = {
  case msg =>
    try receive(msg) catch { case error: Throwable => context.sender ! error }
}
You can then place it into a package file and import it a la akka.pattern.pipe and such. Obviously, this won't deal with exceptions thrown by asynchronous code.
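For reference, a minimal sketch of what placing it in a package file could look like (the package name is made up), so that importing com.example.actorutil._ brings honestly into scope much like akka.pattern.pipe:

// Hypothetical package object hosting the helper above.
package com.example

import akka.actor.Actor.Receive
import akka.actor.ActorContext

package object actorutil {
  def honestly(receive: => Receive)(implicit context: ActorContext): Receive = {
    case msg =>
      try receive(msg) catch { case error: Throwable => context.sender ! error }
  }
}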
I am using the BackoffSupervisor strategy to create a child actor that has to process some message. I want to implement a very simple restart strategy in which, in case of an exception:
Child propagates failing message to supervisor
Supervisor restarts child and sends the failing message again.
Supervisor gives up after 3 retries
Akka persistence is not an option
So far what I have is this:
Supervisor definition:
val childProps = Props(new SenderActor())

val supervisor = BackoffSupervisor.props(
  Backoff.onFailure(
    childProps,
    childName = cmd.hashCode.toString,
    minBackoff = 1.seconds,
    maxBackoff = 2.seconds,
    randomFactor = 0.2
  ).withSupervisorStrategy(
    OneForOneStrategy(maxNrOfRetries = 3, loggingEnabled = true) {
      case msg: MessageException =>
        println("caught specific message!")
        SupervisorStrategy.Restart
      case _: Exception => SupervisorStrategy.Restart
      case _            => SupervisorStrategy.Escalate
    }
  )
)

val sup = context.actorOf(supervisor)
sup ! cmd
The child actor that is supposed to send the e-mail, but fails (throws some Exception) and propagates the exception back to the supervisor:
class SenderActor() extends Actor {

  def fakeSendMail(): Unit = {
    Thread.sleep(1000)
    throw new Exception("surprising exception")
  }

  override def receive: Receive = {
    case cmd: NewMail =>
      println("new mail received routee")
      try {
        fakeSendMail()
      } catch {
        case t => throw MessageException(cmd, t)
      }
  }
}
In the above code I wrap any exception into the custom class MessageException, which gets propagated to the SupervisorStrategy, but how do I propagate it further to the new child to force reprocessing? Is this the right approach?
Edit: I attempted to resend the message to the actor in the preRestart hook, but somehow the hook is not being triggered:
class SenderActor() extends Actor {

  def fakeSendMail(): Unit = {
    Thread.sleep(1000)
    // println("mail sent!")
    throw new Exception("surprising exception")
  }

  override def preStart(): Unit = {
    println("child starting")
  }

  override def preRestart(reason: Throwable, message: Option[Any]): Unit = {
    reason match {
      case m: MessageException =>
        println("aaaaa")
        message.foreach(self ! _)
      case _ => println("bbbb")
    }
  }

  override def postStop(): Unit = {
    println("child stopping")
  }

  override def receive: Receive = {
    case cmd: NewMail =>
      println("new mail received routee")
      try {
        fakeSendMail()
      } catch {
        case t => throw MessageException(cmd, t)
      }
  }
}
This gives me something similar to the following output:
new mail received routee
caught specific message!
child stopping
[ERROR] [01/26/2018 10:15:35.690]
[example-akka.actor.default-dispatcher-2]
[akka://example/user/persistentActor-4-scala/$a/1962829645] Could not
process message sample.persistence.MessageException:
Could not process message <stacktrace>
child starting
But there are no logs from the preRestart hook.
The reason that the child's preRestart hook is not invoked is because Backoff.onFailure uses BackoffOnRestartSupervisor underneath the covers, which replaces the default restart behavior with a stop-and-delayed-start behavior that is consistent with the backoff policy. In other words, when using Backoff.onFailure, when a child is restarted, the child's preRestart method is not called because the underlying supervisor actually stops the child, then starts it again later. (Using Backoff.onStop can trigger the child's preRestart hook, but that's tangential to the present discussion.)
The BackoffSupervisor API doesn't support the automatic resending of a message when the supervisor's child restarts: you have to implement this behavior yourself. An idea for retrying messages is to let the BackoffSupervisor's supervisor handle it. For example:
val supervisor = BackoffSupervisor.props(
  Backoff.onFailure(
    ...
  ).withReplyWhileStopped(ChildIsStopped)
   .withSupervisorStrategy(
     OneForOneStrategy(maxNrOfRetries = 3, loggingEnabled = true) {
       case msg: MessageException =>
         println("caught specific message!")
         self ! Error(msg.cmd) // replace cmd with whatever the property name is
         SupervisorStrategy.Restart
       case ...
     })
)

val sup = context.actorOf(supervisor)

def receive = {
  case cmd: NewMail =>
    sup ! cmd
  case Error(cmd) =>
    // We assume that NewMail has an id field. Also, adjust the time as needed.
    timers.startSingleTimer(cmd.id, Replay(cmd), 10.seconds)
  case Replay(cmd) =>
    sup ! cmd
  case ChildIsStopped =>
    println("child is stopped")
}
In the above code, the NewMail message embedded in the MessageException is wrapped in a custom case class (in order to easily distinguish it from a "normal"/new NewMail message) and sent to self. In this context, self is the actor that created the BackoffSupervisor. This enclosing actor then uses a single timer to replay the original message at some point. This point in time should be far enough in the future such that the BackoffSupervisor can potentially exhaust SenderActor's restart attempts, so that the child can have ample opportunity to get in a "good" state before it receives the resent message. Obviously this example involves only one message resend regardless of the number of child restarts.
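As a side note, the snippet above assumes a few helper messages that aren't defined in the question, and the enclosing actor must mix in akka.actor.Timers to use timers.startSingleTimer. Hedged definitions could look like this:

// Hypothetical message types assumed by the retry sketch above.
case class Error(cmd: NewMail)   // wraps the failed command taken from MessageException
case class Replay(cmd: NewMail)  // timer message used to resend the command
case object ChildIsStopped       // reply configured via withReplyWhileStopped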
Another idea is to create a BackoffSupervisor-SenderActor pair for every NewMail message, and have the SenderActor send the NewMail message to itself in the preStart hook. One concern with this approach is the cleaning up of resources; i.e., shutting down the BackoffSupervisors (which will, in turn, shut down their respective SenderActor children) when the processing is successful or when the child restarts are exhausted. A map of NewMail ids to (ActorRef, Int) tuples (in which the ActorRef is a reference to a BackoffSupervisor actor, and the Int is the number of restart attempts) would be helpful in this case:
class Overlord extends Actor {

  var state = Map[Long, (ActorRef, Int)]() // assuming the mail id is a Long

  def receive = {
    case cmd: NewMail =>
      val childProps = Props(new SenderActor(cmd, self))
      val supervisor = BackoffSupervisor.props(
        Backoff.onFailure(
          ...
        ).withSupervisorStrategy(
          OneForOneStrategy(maxNrOfRetries = 3, loggingEnabled = true) {
            case msg: MessageException =>
              println("caught specific message!")
              self ! Error(msg.cmd)
              SupervisorStrategy.Restart
            case ...
          })
      )
      val sup = context.actorOf(supervisor)
      state += (cmd.id -> (sup, 0))

    case ProcessingDone(cmdId) =>
      state.get(cmdId) match {
        case Some((backoffSup, _)) =>
          context.stop(backoffSup)
          state -= cmdId
        case None =>
          println(s"${cmdId} not found")
      }

    case Error(cmd) =>
      val cmdId = cmd.id
      state.get(cmdId) match {
        case Some((backoffSup, numRetries)) =>
          if (numRetries == 3) {
            println(s"${cmdId} has already been retried 3 times. Giving up.")
            context.stop(backoffSup)
            state -= cmdId
          } else
            state += (cmdId -> (backoffSup, numRetries + 1))
        case None =>
          println(s"${cmdId} not found")
      }

    case ...
  }
}
Note that SenderActor in the above example takes a NewMail and an ActorRef as constructor arguments. The latter argument allows the SenderActor to send a custom ProcessingDone message to the enclosing actor:
class SenderActor(cmd: NewMail, target: ActorRef) extends Actor {

  override def preStart(): Unit = {
    println(s"child starting, sending ${cmd} to self")
    self ! cmd
  }

  def fakeSendMail(): Unit = ...

  def receive = {
    case cmd: NewMail => ...
  }
}
Obviously the SenderActor is set up to fail every time with the current implementation of fakeSendMail. I'll leave the additional changes needed in SenderActor to implement the happy path, in which SenderActor sends a ProcessingDone message to target, to you.
In the good solution that @chunjef provides, he alerts about the risk of scheduling a job resend before the backoff supervisor has started the worker:
This enclosing actor then uses a single timer to replay the original message at some point. This point in time should be far enough in the future such that the BackoffSupervisor can potentially exhaust SenderActor's restart attempts, so that the child can have ample opportunity to get in a "good" state before it receives the resent message.
If this happens, jobs will go to dead letters and no further progress will be made.
I've made a simplified fiddle with this scenario.
So the scheduling delay should be larger than maxBackoff, and this could impact job completion time.
A possible way to avoid this scenario is to make the worker actor send a message to its parent when it is ready to work, like here; a minimal sketch follows.
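A minimal sketch of that handshake, with made-up message and constructor names (the worker is given a reference to the enclosing actor, since its direct parent is the BackoffSupervisor):

import akka.actor.{Actor, ActorRef}

case object WorkerReady // made-up "I'm up again" notification

class SenderActor(notifyOnStart: ActorRef) extends Actor {
  // Sent on every start, including the delayed starts performed by the backoff supervisor.
  override def preStart(): Unit =
    notifyOnStart ! WorkerReady

  def receive: Receive = {
    case cmd: NewMail => ??? // try to send the mail, possibly throwing MessageException
  }
}

The enclosing actor can then hold on to the failed command and only replay it once it has seen a WorkerReady after the failure, instead of guessing a safe delay.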
The failed child actor is available as the sender in your supervisor strategy. Quoting https://doc.akka.io/docs/akka/current/fault-tolerance.html#creating-a-supervisor-strategy:
If the strategy is declared inside the supervising actor (as opposed to within a companion object) its decider has access to all internal state of the actor in a thread-safe fashion, including obtaining a reference to the currently failed child (available as the sender of the failure message).
Sending emails is a dangerous operation involving some third-party software in your case. Why not apply the Circuit Breaker pattern and skip the sender actor entirely? Also, you can still have an actor (with some backoff supervisor) and a Circuit Breaker inside it (if that makes sense for you).
I have an actor that is created at application startup as a child of another actor and receives a message once per day from the parent, triggering an operation that fetches some files from an SFTP server.
Now, there might be some minor temporary connection exceptions that cause the operation to fail. In this case, a retry is needed.
But there might be a case in which an exception is thrown that is not going to be resolved by a retry (e.g. file not found, improper configuration, etc.).
So, in this case, what could be an appropriate retry mechanism and supervision strategy, considering that the actor receives messages only after a long interval (once a day)?
In this case, the message sent to the actor is not bad input - it is just a trigger. Example:
case object FileFetch
If I have a supervision strategy in the parent like this, it is going to restart the failing child on every minor/major exception without retries.
override val supervisorStrategy =
  OneForOneStrategy(maxNrOfRetries = -1, withinTimeRange = Duration.Inf) {
    case _: Exception => Restart
  }
What I want to have is something like this:
override val supervisorStrategy =
  OneForOneStrategy(maxNrOfRetries = -1, withinTimeRange = Duration.Inf) {
    case _: MinorException => // retry the same message 2 or 3 times and then Restart
    case _: Exception => Restart
  }
"Retrying" or resending a message in the event of an exception is something that you have to implement yourself. From the documentation:
If an exception is thrown while a message is being processed (i.e. taken out of its mailbox and handed over to the current behavior), then this message will be lost. It is important to understand that it is not put back on the mailbox. So if you want to retry processing of a message, you need to deal with it yourself by catching the exception and retry[ing] your flow. Make sure that you put a bound on the number of retries since you don’t want a system to livelock (so consuming a lot of cpu cycles without making progress).
If you want to resend the FileFetch message to the child in the event of a MinorException without restarting the child, then you could catch the exception in the child to avoid triggering the supervision strategy. In the try-catch block, you could send a message to the parent and have the parent track the number of retries (and perhaps include a timestamp in this message, if you want the parent to enact some kind of backoff policy, for example). In the child:
def receive = {
  case FileFetch =>
    try {
      ...
    } catch {
      case m: MinorException =>
        val now = System.nanoTime
        context.parent ! MinorIncident(self, now)
    }
  case ...
}
In the parent:
override val supervisorStrategy =
  OneForOneStrategy(maxNrOfRetries = -1, withinTimeRange = Duration.Inf) {
    case _: Exception => Restart
  }

var numFetchRetries = 0

def receive = {
  case MinorIncident(fetcherRef, time) =>
    log.error(s"${fetcherRef} threw a MinorException at ${time}")
    if (numFetchRetries < 3) { // possibly use the time in the retry logic; e.g., a backoff
      numFetchRetries = numFetchRetries + 1
      fetcherRef ! FileFetch
    } else {
      numFetchRetries = 0
      context.stop(fetcherRef)
      ... // recreate the child
    }

  case SomeMsgFromChildThatFetchSucceeded =>
    numFetchRetries = 0

  case ...
}
Alternatively, instead of catching the exception in the child, you could set the supervisor strategy to Resume the child in the event of a MinorException, while still having the parent handle the message retry logic:
override val supervisorStrategy =
  OneForOneStrategy(maxNrOfRetries = -1, withinTimeRange = Duration.Inf) {
    case m: MinorException =>
      val child = sender()
      val now = System.nanoTime
      self ! MinorIncident(child, now)
      Resume
    case _: Exception => Restart
  }
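For completeness, MinorException and MinorIncident are not defined in the question; hedged definitions consistent with the snippets above could be:

import akka.actor.ActorRef

// Hypothetical definitions matching how the snippets above use these names.
case class MinorException(message: String) extends Exception(message)
case class MinorIncident(child: ActorRef, time: Long) // which child failed and when (System.nanoTime)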
Tried googling variations on this trivial question but didn't get an answer...
Basically I have a pattern match in my receive method.
In some cases I want to break early from the receive handling
override def receive = {
  case blah =>
    ... preflight code
    if (preflight failed) {
      sender() ! errorMSG
      "break" or "return" here // get error "method receive has a return statement; needs result type"
                               // I tried adding Unit to the receive and return statements
    }
    ... more code
    ...
    if (something happened) {
      sender() ! anotherErrorMSG
      "break" or "return" here
    }
    ...
  case foo => {...}
  case bar => {...}
} // end receive
See this discussion of return's semantics and remember that receive returns a PartialFunction[Any, Unit] which is then evaluated after receive has returned. In short, there's no way to return early.
Ömer Erden's solution of throwing an exception and using actor supervision works (indeed, exception throwing with all of its overhead is basically the only way to reliably end a computation early), but if you need any state to carry over from message to message, you'll need Akka persistence.
If you don't want to nest if-elses as in chunjef's solution, you can use context.become and stash to create some spaghetti-ish code.
But the best solution may be to have the things that might fail be their own functions with Either result types. Note that the Either API in Scala 2.12 is quite a bit nicer than in previous versions.
import scala.util.{ Either, Left, Right }

type ErrorMsg = ...
type PreflightSuccess = ... // contains anything created in preflight that you need later
type MoreCodeSuccess = ...  // contains anything created in preflight or moreCode that you need later

def preflight(...): Either[ErrorMsg, PreflightSuccess] = {
  ... // preflight
  if (preflight failed)
    Left(errorMsg)
  else
    Right(...) // create a PreflightSuccess
}

def moreCode1(pfs: PreflightSuccess): Either[ErrorMsg, MoreCodeSuccess] = {
  ... // more code
  if (something happened)
    Left(anotherErrorMSG)
  else
    Right(...) // create a MoreCodeSuccess
}

def moreCode2(mcs: MoreCodeSuccess): Either[ErrorMsg, Any] = {
  ... // more code, presumably never fails
  Right(...)
}

override def receive = {
  case blah =>
    val pf = preflight(...)
    // only calls moreCode1 if preflight succeeded, and only calls moreCode2 if preflight and moreCode1 succeeded
    val result = pf.map(moreCode1).joinRight.map(moreCode2).joinRight
    result.fold(
      errorMsg => sender ! errorMsg,
      _ => ()
    )
  case foo => ...
  case bar => ...
}
Whether this is preferable to nested if-else's is a question of taste...
This may not be the exact answer to your question, but in your case adding a supervisor actor would be a better solution. The Akka supervision model encourages you to handle exceptions in a supervisor actor instead of sending error messages back to the sender.
This approach gives you a fault-tolerant model, and you can also throw an exception at any line you want (which solves your current problem); your supervisor actor will handle the throwable by restarting, resuming, or stopping the child actor, as sketched below.
Please check the link.
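For illustration, a minimal sketch of that idea with made-up names (PreflightFailed, Worker, Parent): throwing a dedicated exception ends the handling of the current message immediately, and the parent's strategy decides what happens to the child.

import akka.actor.{Actor, ActorRef, OneForOneStrategy, Props, SupervisorStrategy}
import akka.actor.SupervisorStrategy.{Restart, Resume}

// Made-up exception type used to abort handling of the current message.
case class PreflightFailed(reason: String) extends Exception(reason)

class Worker extends Actor {
  def receive: Receive = {
    case msg =>
      if (!preflightOk(msg)) throw PreflightFailed("preflight failed") // handling stops here
      // ... more code, only reached when preflight succeeded
  }

  private def preflightOk(msg: Any): Boolean = ??? // placeholder check
}

class Parent extends Actor {
  override val supervisorStrategy: SupervisorStrategy =
    OneForOneStrategy() {
      case _: PreflightFailed => Resume  // keep the worker's state, just drop the message
      case _: Exception       => Restart // anything unexpected: restart the worker
    }

  private val worker: ActorRef = context.actorOf(Props[Worker])

  def receive: Receive = {
    case msg => worker forward msg
  }
}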
I'm currently working on an application with a signup process. This signup process will, at some point, communicate with external systems in an asynchronous manner. To keep this question concise, I'm showing you two important actors I've written:
SignupActor.scala
class SignupActor extends PersistentFSM[SignupActor.State, Data, DomainEvt] {
  private val apiActor = context.actorOf(ExternalAPIActor.props(new HttpClient))

  // At a certain point, a CreateUser(data) message is sent to the apiActor
}
ExternalAPIActor.scala
class ExternalAPIActor(apiClient: HttpClient) extends Actor {

  override def preRestart(reason: Throwable, message: Option[Any]) = {
    message.foreach(context.system.scheduler.scheduleOnce(3 seconds, self, _))
    super.preRestart(reason, message)
  }

  def receive: Receive = {
    case CreateUser(data) =>
      Await.result(
        apiClient.post(data)
          .map(_ => UserCreatedInAPI())
          .pipeTo(context.parent),
        Timeout(5 seconds).duration
      )
  }
}
This setup seems to work as expected. When there is an issue with the external API (such as a timeout or network problems), the Future returned by HttpClient::post fails and results in an exception thanks to Await.result. This, in turn, thanks to the SupervisorStrategy of the SignupActor parent actor, restarts the ExternalAPIActor, where we can re-send the last message to itself with a small delay to avoid deadlock.
I see a couple of issues with this setup:
Within the receive method of ExternalAPIActor, blocking occurs. As far as I understand, blocking within Actors is considered an anti-pattern.
The delay used to re-send the message is static. If the API is unavailable for longer periods of time, we will keep on sending HTTP requests every 3 seconds. I'd like some kind of exponential backoff mechanism here instead.
To continue on with the latter, I've tried the following in the SignupActor:
SignupActor.scala
val supervisor = BackoffSupervisor.props(
  Backoff.onFailure(
    ExternalAPIActor.props(new HttpClient),
    childName = "external-api",
    minBackoff = 3 seconds,
    maxBackoff = 30 seconds,
    randomFactor = 0.2
  )
)

private val apiActor = context.actorOf(supervisor)
Unfortunately, this doesn't seem to do anything: the preRestart method of ExternalAPIActor isn't called at all. When replacing Backoff.onFailure with Backoff.onStop, the preRestart method is called, but without any kind of exponential backoff.
Given the above, my questions are as follows:
Is using Await.result the recommended (the only?) way to make sure exceptions thrown in a Future returned from services called within actors are caught and handled accordingly? An especially important part of my particular use case is the fact that messages shouldn't be dropped but retried when something went wrong. Or is there some other (idiomatic) way that exceptions thrown in asynchronous contexts should be handled within Actors?
How would one use the BackoffSupervisor as intended in this case? Again: it is very important that the message responsible for the exception is not dropped, but retried up to N times (to be determined by the maxNrOfRetries argument of the SupervisorStrategy).
Is using Await.result the recommended (the only?) way to make sure exceptions thrown in a Future returned from services called within actors are caught and handled accordingly?
No. Generally that's not how you want to handle failures in Akka. A better alternative is to pipe the failure to your own actor, avoiding the need to use Await.result at all:
import akka.actor.Status
import akka.pattern.pipe

def receive: Receive = {
  case CreateUser(data) =>
    apiClient.post(data)
      .map(_ => UserCreatedInAPI())
      .pipeTo(self)
  case res: UserCreatedInAPI =>
    // pipeTo delivers the successful value directly to the actor
    context.parent ! res
  case Status.Failure(e) =>
    // pipeTo wraps a failed Future in Status.Failure; invoke retry here
}
This would mean no restart is required to handle failure, they are all part of the normal flow of your actor.
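As one hedged way to fill in the retry part (the FailedToCreate wrapper, the retry limit, and the delay are made up, while CreateUser, UserCreatedInAPI, and HttpClient come from the question): recovering the failure into a message that still carries the original command makes it easy to re-send it later and to give up after a few attempts.

import scala.concurrent.duration._
import akka.actor.{Actor, Status}
import akka.pattern.pipe

// Made-up wrapper so the failed command travels together with its cause.
case class FailedToCreate(cmd: CreateUser, cause: Throwable)

class ExternalAPIActor(apiClient: HttpClient) extends Actor {
  import context.dispatcher

  private var retries = Map.empty[CreateUser, Int]
  private val maxRetries = 3 // made-up limit

  def receive: Receive = {
    case cmd @ CreateUser(data) =>
      apiClient.post(data)
        .map(_ => UserCreatedInAPI())
        .recover { case e => FailedToCreate(cmd, e) } // keep the command on failure
        .pipeTo(self)

    case res: UserCreatedInAPI =>
      retries -= sender() // no-op placeholder; forward the success
      context.parent ! res

    case FailedToCreate(cmd, cause) =>
      val attempt = retries.getOrElse(cmd, 0) + 1
      if (attempt <= maxRetries) {
        retries += (cmd -> attempt)
        context.system.scheduler.scheduleOnce(3.seconds, self, cmd) // try again later
      } else {
        retries -= cmd
        context.parent ! Status.Failure(cause) // give up and propagate the failure
      }
  }
}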
An additional way to handle this can be to create a "supervised future". Taken from this blog post:
object SupervisedPipe {

  case class SupervisedFailure(ex: Throwable)

  class SupervisedPipeableFuture[T](future: Future[T])(implicit executionContext: ExecutionContext) {
    // implicit failure recipient goes to self when used inside an actor
    def supervisedPipeTo(successRecipient: ActorRef)(implicit failureRecipient: ActorRef): Unit =
      future.andThen {
        case Success(result) => successRecipient ! result
        case Failure(ex)     => failureRecipient ! SupervisedFailure(ex)
      }
  }

  implicit def supervisedPipeTo[T](future: Future[T])(implicit executionContext: ExecutionContext): SupervisedPipeableFuture[T] =
    new SupervisedPipeableFuture[T](future)

  /* `orElse` with the actor receive logic */
  val handleSupervisedFailure: Receive = {
    // just throw the exception and make the actor logic handle it
    case SupervisedFailure(ex) => throw ex
  }

  def supervised(receive: Receive): Receive =
    handleSupervisedFailure orElse receive
}
This way, the failure is piped back to self only when you get a Failure; otherwise the result is sent to the actor the message was meant for, avoiding the need for the extra success case I added to the receive method above. All you need to do is replace the framework-provided pipeTo with supervisedPipeTo.
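A hedged usage sketch, reusing CreateUser, UserCreatedInAPI, and HttpClient from the question:

import akka.actor.Actor
import SupervisedPipe._

class ExternalAPIActor(apiClient: HttpClient) extends Actor {
  import context.dispatcher // ExecutionContext for map / andThen

  def receive: Receive = supervised {
    case CreateUser(data) =>
      apiClient.post(data)
        .map(_ => UserCreatedInAPI())
        .supervisedPipeTo(context.parent) // failures come back to self as SupervisedFailure and are rethrown
  }
}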
Alright, I've done some more thinking and tinkering and I've come up with the following.
ExternalAPIActor.scala
class ExternalAPIActor(apiClient: HttpClient) extends Actor with Stash {
  import ExternalAPIActor._

  def receive: Receive = {
    case msg @ CreateUser(data) =>
      context.become(waitingForExternalServiceReceive(msg))
      apiClient.post(data)
        .map(_ => UserCreatedInAPI())
        .pipeTo(self)
  }

  def waitingForExternalServiceReceive(event: InputEvent): Receive = LoggingReceive {
    case akka.actor.Status.Failure(_) =>
      unstashAll()
      context.unbecome()
      context.system.scheduler.scheduleOnce(3 seconds, self, event)

    case msg: OutputEvent =>
      unstashAll()
      context.unbecome()
      context.parent ! msg

    case _ => stash()
  }
}

object ExternalAPIActor {
  sealed trait InputEvent
  sealed trait OutputEvent

  final case class CreateUser(data: Map[String, Any]) extends InputEvent
  final case class UserCreatedInAPI() extends OutputEvent
}
I've used this technique to prevent the original message from being lost in case something is wrong with the external service we're calling. While a request to an external service is in progress, I switch context, waiting for either a response or a failure, and switch back afterwards. Thanks to the Stash trait, I can make sure other requests to external services aren't lost either.
Since I have multiple actors in my application calling external services, I abstracted waitingForExternalServiceReceive into its own trait:
WaitingForExternalService.scala
trait WaitingForExternalServiceReceive[-tInput, +tOutput] extends Stash {

  def waitingForExternalServiceReceive(event: tInput)(implicit ec: ExecutionContext): Receive = LoggingReceive {
    case akka.actor.Status.Failure(_) =>
      unstashAll()
      context.unbecome()
      context.system.scheduler.scheduleOnce(3 seconds, self, event)

    case msg: tOutput =>
      unstashAll()
      context.unbecome()
      context.parent ! msg

    case _ => stash()
  }
}
Now, the ExternalAPIActor can extend this trait:
ExternalAPIActor.scala
class ExternalAPIActor(apiClient: HttpClient) extends Actor with WaitingForExternalServiceReceive[InputEvent, OutputEvent] {
  import ExternalAPIActor._

  def receive: Receive = {
    case msg @ CreateUser(data) =>
      context.become(waitingForExternalServiceReceive(msg))
      apiClient.post(data)
        .map(_ => UserCreatedInAPI())
        .pipeTo(self)
  }
}

object ExternalAPIActor {
  sealed trait InputEvent
  sealed trait OutputEvent

  final case class CreateUser(data: Map[String, Any]) extends InputEvent
  final case class UserCreatedInAPI() extends OutputEvent
}
Now the actor won't get restarted in case of failures/errors, and the message isn't lost. What's more, the entire flow of the actor is now non-blocking.
This setup is (most probably) far from perfect, but it seems to work exactly as I need it to.
I have the following actor which sends a request to a WebService:
class VigiaActor extends akka.actor.Actor {
  val log = Logging(context.system, this)

  context.setReceiveTimeout(5 seconds)

  import VigiaActor._

  def receive = {
    case ObraExists(numero: String, unidadeGestora: String) =>
      WS.url(baseURL + s"""/obras/exists/$unidadeGestora/$numero""")
        .withHeaders("Authorization" -> newToken)
        .get
        .pipeTo(sender)

    case ReceiveTimeout =>
      val e = TimeOutException("VIGIA: Receive timed out")
      throw e
  }

  override val supervisorStrategy =
    OneForOneStrategy(maxNrOfRetries = 2, withinTimeRange = 1 minute) {
      case _: ArithmeticException      => Resume
      case _: NullPointerException     => Restart
      case _: IllegalArgumentException => Stop
      case _: TimeOutException         => Resume
      case _: Exception                => Restart
    }
}
The call to this actor is part of a validation method that should throw an exception in case of a timeout while trying to communicate with the WS:
implicit val timeout = Timeout(5 seconds)

lazy val vigiaActor: ActorRef = Akka.system.actorOf(Props[VigiaActor])

(vigiaActor ? VigiaActor.ObraExists(empenho.obra.get, empenho.unidadeGestora)).map {
  case r: WSResponse =>
    val exists = r.body.toBoolean
    if (!exists && empenho.tipoMeta.get.equals(4)) {
      erros.adicionarErro(controle.codigoArquivo, row, line, s"Nº de Obra não informado ou inválido para o Tipo de Meta 4 - Obras", TipoErroImportacaoEnum.WARNING)
    }
  case _ =>
    erros.adicionarErro(controle.codigoArquivo, row, line, s"Nº de Obra não informado ou inválido para o Tipo de Meta 4 - Obras", TipoErroImportacaoEnum.WARNING)
}
I am new to this Actor thing, and I am trying to solve some blocking situations in the code.
The problem is that I have no idea how to "catch" the TimeOutException on the actor call.
UPDATE
I switched the validation method to:
protected def validateRow(row: Int, line: String, empenho: Empenho, calendarDataEnvioArquivo: Calendar)(implicit s: Session, controle: ControleArquivo, erros: ImportacaoException): Unit = {
  implicit val timeout = Timeout(5 seconds)

  lazy val vigiaActor: ActorRef = Akka.system.actorOf(Props[VigiaActor])

  (vigiaActor ? VigiaActor.ObraExists(empenho.obra.get, empenho.unidadeGestora)).map {
    case e: TimeOutException => println("TIMOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOUT!!!!")
    case r: WSResponse => {...}
  }
}
and the actor's ReceiveTimeout part to:
case ReceiveTimeout =>
  val e = TimeOutException("VIGIA: Receive timed out")
  sender ! e
I am getting the following log message as I was before:
[INFO] [07/20/2017 10:28:05.738] [application-akka.actor.default-dispatcher-5] [akka://application/deadLetters] Message [model.exception.TimeOutException] from Actor[akka://application/user/$c#1834419855] to Actor[akka://application/deadLetters] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
context.setReceiveTimeout(5 seconds) triggers the sending of a ReceiveTimeout message to VigiaActor if that actor doesn't receive a message for five seconds. Akka internally sends the ReceiveTimeout to your actor, which is why in your updated code, trying to send the exception to sender doesn't do what you expect. In other words, sender in the case ReceiveTimeout => clause is not the original sender of the ObraExists message.
Setting the receive timeout in VigiaActor has nothing to do with a WS request timeout, because no message is sent to VigiaActor if the request times out. Even if a message was sent to the actor when a WS request isn't completed in five seconds, another ObraExists message could have been enqueued in the actor's mailbox in the meantime, thus failing to trigger a ReceiveTimeout.
In short, setting the actor's receive timeout is not the right mechanism to handle the WS request timeout. (With your current approach of piping the result of the get request to the sender, you could adjust the sender to handle a timeout. In fact, I'd forgo the VigiaActor altogether and simply make the WS call directly in the validateRow method, but getting rid of the actor is probably not the point of your question.)
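For illustration, here is a hedged sketch of that last suggestion; it reuses baseURL, newToken, and the WS client from your code, and the timeout handling is simplified to a default value:

import scala.concurrent.{ExecutionContext, Future}
import scala.concurrent.duration._

// Hypothetical helper that validateRow could call directly instead of asking VigiaActor.
def obraExists(numero: String, unidadeGestora: String)(implicit ec: ExecutionContext): Future[Boolean] =
  WS.url(baseURL + s"""/obras/exists/$unidadeGestora/$numero""")
    .withHeaders("Authorization" -> newToken)
    .withRequestTimeout(5 seconds) // same request timeout as in the snippet below
    .get
    .map(_.body.toBoolean)
    .recover {
      case _: scala.concurrent.TimeoutException => false // or record a validation warning here instead
    }

validateRow can then map over this Future directly, keeping the flow non-blocking.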
If you must handle a WS request timeout in the actor, one way to do that is something like the following:
import scala.util.{Failure, Success}

class VigiaActor extends akka.actor.Actor {
  import VigiaActor._
  import context.dispatcher

  val log = Logging(context.system, this)

  def receive = {
    case ObraExists(numero: String, unidadeGestora: String) =>
      val s = sender // capture the original sender
      WS.url(baseURL + s"""/obras/exists/$unidadeGestora/$numero""")
        .withHeaders("Authorization" -> newToken)
        .withRequestTimeout(5 seconds) // set the timeout
        .get
        .onComplete {
          case Success(resp) =>
            s ! resp
          case Failure(e: scala.concurrent.TimeoutException) =>
            s ! TimeOutException("VIGIA: Receive timed out")
          case Failure(_) =>
            // do something in the case of non-timeout failures
        }
  }
}
I think you're over-interpreting the "Let it Crash" mentality. You only throw Exceptions inside Actors in exceptional circumstances. That is, you build your Actors to cope if something crashes unexpectedly. But if it's something normal and reasonably expected, you just treat it like any other code path.
So in your case, it has nothing to do with throw or catch -- in your ReceiveTimeout clause, just send a message back to the original sender, saying that the request failed due to a timeout, and let the sender handle it however they consider appropriate. It winds up fairly similar to your success case.
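If you do keep the ReceiveTimeout approach, remember that (as explained in the other answer) sender() at that point is not the original caller, so you have to remember the requester yourself. A minimal, hedged sketch with a made-up RequestTimedOut reply message:

import scala.concurrent.duration._
import akka.actor.{Actor, ActorRef, ReceiveTimeout}

case object RequestTimedOut // made-up reply for the caller

class VigiaActor extends Actor {
  import VigiaActor._

  context.setReceiveTimeout(5 seconds)

  // Remember who asked last, because the sender of ReceiveTimeout is not the original caller.
  private var lastRequester: Option[ActorRef] = None

  def receive = {
    case ObraExists(numero, unidadeGestora) =>
      lastRequester = Some(sender())
      // ... issue the WS call and pipe the response to sender(), as before ...
    case ReceiveTimeout =>
      lastRequester.foreach(_ ! RequestTimedOut) // reply to the remembered caller
      lastRequester = None
      context.setReceiveTimeout(Duration.Undefined) // switch the timeout off until the next request
  }
}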