I have an Akka system which is basically two producer actors that send messages to one consumer actor. In a simplified form I have something like this:
class ProducerA extends Actor {
  def receive = {
    case Produce => consumer ! generateMessageA()
  }
  // ... more code ...
}

class ProducerB extends Actor {
  def receive = {
    case Produce => consumer ! generateMessageB()
  }
  // ... more code ...
}

class Consumer extends Actor {
  def receive = {
    case a: MessageA => handleMessageA(a)
    case b: MessageB => handleMessageB(b)
  }
  // ... more code ...
}
And they are all siblings in the same Akka system.
I am trying to figure out how to terminate this system gracefully. This means that on shutdown I want ProducerA and ProducerB to stop immediately, and then I want Consumer to finish processing whatever messages are left in its mailbox before shutting down itself.
It seems like what I want is for the Consumer actor to watch for the termination of both ProducerA and ProducerB. Or, more generally, it seems like what I want is to be able to send a PoisonPill message to the Consumer after both producers are stopped.
https://alvinalexander.com/scala/how-to-monitor-akka-actor-death-with-watch-method
The above tutorial has a pretty good explanation of how one actor can watch for the termination of one other actor, but I'm not sure how an actor could watch for the termination of multiple actors.
An actor can watch multiple actors simply via multiple invocations of context.watch, passing a different ActorRef to each call. For example, your Consumer actor could watch for the termination of the Producer actors in the following way:
import akka.actor._

case class WatchMe(ref: ActorRef)

class Consumer extends Actor {
  var watched = Set[ActorRef]()

  def receive = {
    case WatchMe(ref) =>
      context.watch(ref)
      watched = watched + ref
    case Terminated(ref) =>
      watched = watched - ref
      if (watched.isEmpty) self ! PoisonPill
    // case ... : handle the producers' messages here
  }
}
Both Producer actors would send their respective references to Consumer, which would then monitor the Producer actors for termination. When the Producer actors are both terminated, the Consumer sends a PoisonPill to itself. Because a PoisonPill is treated like a normal message in an actor's mailbox, the Consumer will process any messages that are already enqueued before handling the PoisonPill and shutting itself down.
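For completeness, the registration could be wired up like this (a sketch building on the WatchMe and Consumer definitions above; the Producer class and its constructor parameter are illustrative, not from the original code):

import akka.actor._

// Illustrative producer that registers itself with the consumer on start.
class Producer(consumer: ActorRef) extends Actor {
  override def preStart(): Unit = consumer ! WatchMe(self)
  def receive = {
    case msg => consumer ! msg // forward work to the consumer
  }
}

object Wiring extends App {
  val system = ActorSystem("demo")
  val consumer = system.actorOf(Props[Consumer], "consumer")
  val producerA = system.actorOf(Props(new Producer(consumer)), "producerA")
  val producerB = system.actorOf(Props(new Producer(consumer)), "producerB")

  // Stopping both producers triggers the Consumer's self-sent PoisonPill,
  // which is processed only after the messages already in its mailbox.
  producerA ! PoisonPill
  producerB ! PoisonPill
}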
A similar pattern is described in Derek Wyatt's "Shutdown Patterns in Akka 2" blog post, which is mentioned in the Akka documentation.
import akka.actor._
import akka.util.Timeout
import scala.concurrent.duration.DurationInt

class Producer extends Actor {
  def receive = {
    case _ => println("Producer received a message")
  }
}

case object KillConsumer

class Consumer extends Actor {
  def receive = {
    case KillConsumer =>
      println("Stopping Consumer After All Producers")
      context.stop(self)
    case _ => println("Parent received a message")
  }

  override def postStop(): Unit = {
    println("Post Stop Consumer")
    super.postStop()
  }
}

class ProducerWatchers(producerListRef: List[ActorRef], consumerRef: ActorRef) extends Actor {
  producerListRef.foreach(context.watch)
  context.watch(consumerRef)

  var producerActorCount = producerListRef.length

  implicit val timeout: Timeout = Timeout(5.seconds)

  override def receive: Receive = {
    case Terminated(x) if producerActorCount == 1 && producerListRef.contains(x) =>
      consumerRef ! KillConsumer
    case Terminated(x) if producerListRef.contains(x) =>
      producerActorCount -= 1
    case Terminated(`consumerRef`) =>
      println("Killing ProducerWatchers On Consumer End")
      context.stop(self)
    case _ => println("Dropping Message")
  }

  override def postStop(): Unit = {
    println("Post Stop ProducerWatchers")
    super.postStop()
  }
}

object ProducerWatchers {
  def apply(producerListRef: List[ActorRef], consumerRef: ActorRef): Props =
    Props(new ProducerWatchers(producerListRef, consumerRef))
}

object AkkaActorKill {
  def main(args: Array[String]): Unit = {
    val actorSystem = ActorSystem("AkkaActorKill")
    implicit val timeout: Timeout = Timeout(10.seconds)

    val consumerRef = actorSystem.actorOf(Props[Consumer], "Consumer")
    val producer1 = actorSystem.actorOf(Props[Producer], name = "Producer1")
    val producer2 = actorSystem.actorOf(Props[Producer], name = "Producer2")
    val producer3 = actorSystem.actorOf(Props[Producer], name = "Producer3")

    val producerWatchers = actorSystem.actorOf(
      ProducerWatchers(List(producer1, producer2, producer3), consumerRef),
      "ProducerWatchers")

    producer1 ! PoisonPill
    producer2 ! PoisonPill
    producer3 ! PoisonPill

    Thread.sleep(5000)
    actorSystem.terminate()
  }
}
The shutdown can be implemented with a ProducerWatchers actor, which tracks the producers as they are killed; once all the producers are terminated it kills the Consumer actor, and then stops itself.
So the solution I ended up going with was inspired by Derek Wyatt's terminator pattern:
import akka.pattern.gracefulStop
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

val shutdownFut = Future.sequence(
  Seq(
    gracefulStop(producerA, ProducerShutdownWaitSeconds.seconds),
    gracefulStop(producerB, ProducerShutdownWaitSeconds.seconds)
  )
).flatMap(_ => gracefulStop(consumer, ConsumerShutdownWaitSeconds.seconds))

Await.result(shutdownFut, ProducerShutdownWaitSeconds.seconds + ConsumerShutdownWaitSeconds.seconds)
This was more or less exactly what I wanted. The consumer shutdown waits for the producers to shut down, based on the fulfillment of futures. Furthermore, the whole shutdown itself yields a future that you can wait on, which keeps the thread alive long enough for everything to clean up properly.
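One caveat worth noting: gracefulStop sends its stop message as soon as it is invoked, so the future should be built at the point where shutdown is actually wanted, e.g. inside a JVM shutdown hook. A sketch, assuming the actor refs, the wait constants, and a system value from the surrounding application:

import akka.pattern.gracefulStop
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

scala.sys.addShutdownHook {
  // Built here so the producers are only stopped once the JVM begins shutdown.
  val shutdownFut = Future.sequence(Seq(
    gracefulStop(producerA, ProducerShutdownWaitSeconds.seconds),
    gracefulStop(producerB, ProducerShutdownWaitSeconds.seconds)
  )).flatMap(_ => gracefulStop(consumer, ConsumerShutdownWaitSeconds.seconds))

  Await.result(shutdownFut, (ProducerShutdownWaitSeconds + ConsumerShutdownWaitSeconds).seconds)
  Await.result(system.terminate(), 1.minute)
}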
Related
I have actors that look like the following:
As you can see in the image, the ActorStream is a child of Actor.
The question is: when I terminate the Actor, will the ActorStream also be terminated?
Here is how I create the ActorStream in an Actor:
def create(fsm: ActorRef[ServerHealth], cancel: Option[Cancellable]): Behavior[ServerHealthStreamer] =
  Behaviors.setup { context =>
    implicit val system = context.system
    implicit val materializer = ActorMaterializer()
    implicit val dispatcher = materializer.executionContext

    val kafkaServer = system
      .settings
      .config
      .getConfig("kafka")
      .getString("servers")

    val sink: Sink[ServerHealth, NotUsed] = ActorSink.actorRefWithAck[ServerHealth, ServerHealthStreamer, Ack](
      ref = context.self,
      onCompleteMessage = Complete,
      onFailureMessage = Fail.apply,
      messageAdapter = Message.apply,
      onInitMessage = Init.apply,
      ackMessage = Ack)

    val cancel = Source.tick(1.seconds, 15.seconds, NotUsed)
      .flatMapConcat(_ => Source.fromFuture(health(kafkaServer)))
      .map {
        case true  => KafkaActive
        case false => KafkaInactive
      }
      .to(sink)
      .run()

    Behaviors.receiveMessage {
      case Init(ackTo) =>
        ackTo ! Ack
        Behaviors.same
      case Message(ackTo, msg) =>
        fsm ! msg
        ackTo ! Ack
        create(fsm, Some(cancel))
      case Complete =>
        Behaviors.same
      case Fail(_) =>
        fsm ! KafkaInactive
        Behaviors.same
    }
  }
In your case, actor termination will terminate the stream, because under the hood the stage actor watches the ActorRef you passed in and completes the stage when a Terminated message arrives.
I think you can find more information here
https://blog.colinbreck.com/integrating-akka-streams-and-akka-actors-part-ii/
An extremely important aspect to understand is that the materialized stream is running as a set of actors on the threads of the execution context on which they were allocated. In other words, the stream is running independently from the actor that allocated it. This becomes very important if the stream is long-running, or even infinite, and we want the actor to manage the life-cycle of the stream, such that when the actor stops, the stream is terminated. Expanding on the example above, I will make the stream infinite and use a KillSwitch to manage the life-cycle of the stream.
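As a concrete illustration of that advice, a classic actor can tie a stream's life-cycle to its own with a KillSwitch. This is a minimal sketch (assuming classic actors and Akka Streams 2.5.x; the tick source stands in for any infinite stream):

import akka.actor.Actor
import akka.stream.{ActorMaterializer, KillSwitches, UniqueKillSwitch}
import akka.stream.scaladsl.{Keep, Sink, Source}
import scala.concurrent.duration._

class StreamOwner extends Actor {
  // The materializer is created from the actor's context, so the stream's
  // actors become children of this actor.
  implicit val materializer: ActorMaterializer = ActorMaterializer()

  // Materialize an infinite stream together with a kill switch.
  private val killSwitch: UniqueKillSwitch =
    Source.tick(1.second, 1.second, "tick")
      .viaMat(KillSwitches.single)(Keep.right)
      .toMat(Sink.foreach(msg => self ! msg))(Keep.left)
      .run()

  // Stopping the actor shuts the stream down with it.
  override def postStop(): Unit = killSwitch.shutdown()

  def receive = {
    case msg => // handle stream elements
  }
}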
I have an actor that uses a stash. Sometimes, when it crashes, it loses all stashed messages. I found that this depends on which supervision logic I use.
I wrote a simple example.
An actor with the stash:
import akka.actor._

case object WrongMessage
case object TestMessage
case object InitialMessage

class TestActor extends Actor with Stash {
  override def receive: Receive = uninitializedReceive

  def uninitializedReceive: Receive = {
    case TestMessage =>
      println("stash test message")
      stash()
    case WrongMessage =>
      println("wrong message")
      throw new Throwable("wrong message")
    case InitialMessage =>
      println("initial message")
      context.become(initializedReceive)
      unstashAll()
  }

  def initializedReceive: Receive = {
    case TestMessage =>
      println("test message")
  }
}
In the following code, TestActor never receives stashed TestMessage:
import akka.actor._
import akka.pattern.{Backoff, BackoffSupervisor}
import scala.concurrent.duration._

object Test1 extends App {
  implicit val system: ActorSystem = ActorSystem()

  val actorRef = system.actorOf(BackoffSupervisor.props(Backoff.onFailure(
    Props[TestActor], "TestActor", 1.second, 1.second, 0
  ).withSupervisorStrategy(OneForOneStrategy() {
    case _ => SupervisorStrategy.Restart
  })))

  actorRef ! TestMessage
  Thread.sleep(5000L)
  actorRef ! WrongMessage
  Thread.sleep(5000L)
  actorRef ! InitialMessage
}
But this code works well:
class SupervisionActor extends Actor {
  val testActorRef: ActorRef = context.actorOf(Props[TestActor])

  override def supervisorStrategy: SupervisorStrategy = OneForOneStrategy() {
    case _ => SupervisorStrategy.Restart
  }

  override def receive: Receive = {
    case message => testActorRef forward message
  }
}

object Test2 extends App {
  implicit val system: ActorSystem = ActorSystem()

  val actorRef = system.actorOf(Props(classOf[SupervisionActor]))

  actorRef ! TestMessage
  Thread.sleep(5000L)
  actorRef ! WrongMessage
  Thread.sleep(5000L)
  actorRef ! InitialMessage
}
I looked into the sources and found that normal actor supervision uses the LocalActorRef.restart method, which is backed by the system dispatcher logic, whereas BackoffSupervisor simply creates a new actor after the termination of the old one. Is there any way to work around this?
I'm not sure one can make a restart under BackoffSupervisor properly preserve stashed messages without some custom re-implementation effort.
As you've already pointed out, BackoffSupervisor does its own restart, which bypasses the standard actor lifecycle. In fact, this is explicitly noted in the BackoffOnRestartSupervisor source code:
Whatever the final Directive is, we will translate all Restarts to our own Restarts, which involves stopping the child.
In case you haven't read about this reported issue, it has a relevant discussion of the problem with Backoff.onFailure.
Backoff.onStop would also give the wanted BackoffSupervisor behavior, but unfortunately it serves its own use cases and won't be triggered by an exception; see the sketch below.
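For comparison, an onStop-based supervisor is declared the same way. A sketch reusing the TestActor and messages above (note it restarts the child when the child stops, not when it throws):

import akka.actor._
import akka.pattern.{Backoff, BackoffSupervisor}
import scala.concurrent.duration._

object Test3 extends App {
  implicit val system: ActorSystem = ActorSystem()

  // Same parameters as Backoff.onFailure, but the backoff restart is
  // triggered by the child terminating rather than by an exception.
  val actorRef = system.actorOf(BackoffSupervisor.props(Backoff.onStop(
    Props[TestActor], "TestActor", 1.second, 1.second, 0
  )))

  actorRef ! TestMessage
}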
I have an Akka application with a router group composed of actors executing some jobs. When I detect a shutdown of my application, I want my actors to complete their work before the application shuts down completely. The use case behind my question is redeployment: I don't want to authorize it while current jobs are still executing.
To detect the shutdown of my application, I'm using the following code:
scala.sys.addShutdownHook {
  // let actors finish their work
}
To test this, I added an infinite loop to see whether the shutdown hook blocks, but the application still ends, so this is not the behavior I expected.
In order to let my actors finish their jobs, I will implement the idea in the following article: http://letitcrash.com/post/30165507578/shutdown-patterns-in-akka-2
So now I'm searching for a way to hold the shutdown hook open, closing all resources and the application only when all jobs have been executed by my workers.
Update after @Edmondo1984's comment
My main app:
val workers = this.createWorkerActors()
val masterOfWorkers = system.actorOf(Master.props(workers), name = "master")

this.monitorActors(supervisor, workers, masterOfWorkers)
this.addShutDownHook(system, masterOfWorkers, supervisor)

def monitorActors(supervisor: ActorRef, workers: List[ActorRef], master: ActorRef): Unit = {
  val actorsToMonitor = master +: workers
  supervisor ! MonitorActors(actorsToMonitor)
}
def addShutDownHook(
    system: ActorSystem,
    masterOfWorkers: ActorRef, // actor wrapping a ClusterGroup router, broadcasting a PoisonPill to each worker
    supervisor: ActorRef
): Unit = {
  scala.sys.addShutdownHook {
    implicit val timeout = Timeout(10.hours) // How to block here until actors are terminated?
    system.log.info("Send an Init Shutdown to {}", masterOfWorkers.path.toStringWithoutAddress)
    masterOfWorkers ! InitShutDown
    system.log.info("Gracefully shutting down all actors of ActorSystem {}", system.name)
    Await.result(supervisor ? InitShutDown, Duration.Inf)
    system.log.info("Gracefully shutting down the actor system")
    Await.result(system.terminate(), 1.minute)
    system.log.info("Gracefully shutting down Akka Management ...")
    Await.result(AkkaManagement(system).stop(), 1.minute)
    System.exit(0)
  }
}
Supervisor actor
case class Supervisor() extends Actor with ActorLogging {
  var numberOfActorsToWatch = 0L

  override def receive: Receive = {
    case MonitorActors(actorsToMonitor) =>
      log.info("Monitor {} actors, received by {}", actorsToMonitor.length, sender().path)
      this.numberOfActorsToWatch = actorsToMonitor.length
      actorsToMonitor foreach (context.watch(_))
    case Terminated(terminatedActor) if this.numberOfActorsToWatch > 0 =>
      log.info("Actor {} is terminated. Remaining alive actors: {}", terminatedActor.path.toStringWithoutAddress, this.numberOfActorsToWatch)
      this.numberOfActorsToWatch -= 1
    case Terminated(lastTerminatedActor) if this.numberOfActorsToWatch == 0 =>
      log.info("Actor {} is terminated. All actors have been terminated", lastTerminatedActor.path.toStringWithoutAddress)
      // what can I do here?
      //context.stop(self)
  }
}
application.conf
akka {
  # note: coordinated-shutdown lives directly under akka, not under akka.actor
  coordinated-shutdown {
    default-phase-timeout = 20 s
    terminate-actor-system = off
    exit-jvm = off
    run-by-jvm-shutdown-hook = off
  }
}
I don't know how to block the main thread, the one that finally kills the app.
This is easily achieved by placing a supervisor actor in front of your hierarchy:
When you need shutdown, you send a message to the supervisor and cache the sender A.
The supervisor subscribes to its children's termination through DeathWatch (see https://doc.akka.io/docs/akka/2.5/actors.html).
The supervisor sets a counter variable to the number of children, then sends a message to all the children telling them to shut down as soon as possible. When the children are done, they terminate themselves; the supervisor receives a Terminated notification for each one and decreases the counter.
When the counter reaches 0, the supervisor sends a ShutdownTerminated message to A and terminates itself.
Your code will become like so:
class Supervisor extends Actor with ActorLogging {
  var shutdownInitiator: ActorRef = _
  var numberOfActorsToWatch = 0L

  override def receive: Receive = {
    case InitShutdown =>
      // context.children is an Iterable, so use size rather than length
      this.numberOfActorsToWatch = context.children.size
      context.children.foreach(context.watch(_))
      context.children.foreach { s => s ! TerminateSomehow }
      shutdownInitiator = sender()
    case Terminated(terminatedActor) =>
      // Decrement first, so the final Terminated actually reaches zero
      this.numberOfActorsToWatch -= 1
      log.info("Actor {} is terminated. Remaining alive actors: {}", terminatedActor.path.toStringWithoutAddress, this.numberOfActorsToWatch)
      if (this.numberOfActorsToWatch == 0) {
        log.info("All actors have been terminated")
        shutdownInitiator ! Done
        context.stop(self)
      }
  }
}
On your shutdown hook, you need a reference to the supervisor, and then you use the ask pattern:
Await.result(supervisor ? InitShutdown, Duration.Inf)
actorSystem.terminate()
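Put together, the hook could look like this (a sketch assuming the supervisor ref, the actor system, and the InitShutdown/Done messages from the code above):

import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.Await
import scala.concurrent.duration._

scala.sys.addShutdownHook {
  implicit val timeout: Timeout = Timeout(10.hours) // ask requires an implicit timeout
  // Block the hook thread until the supervisor replies Done, i.e. until
  // every worker has finished and terminated.
  Await.result(supervisor ? InitShutdown, Duration.Inf)
  Await.result(actorSystem.terminate(), 1.minute)
}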
I would like to know how to efficiently clean up Akka actors that are created on the fly.
To give a bit of background:
Actor Hierarchy created per event.
Supervisor -> child1 -> grandChild1
In my application the supervisor actor dynamically creates other actors (on a periodic event). I want to clean up the actors after the processing steps for that event are complete.
So, I would like to kill all the child actors once the processing is complete.
I am propagating a message (SuccessfulProcessing) after successful completion, in the reverse order of creation (Grandchild1 -> child1 -> Supervisor).
In the Supervisor, I will send a PoisonPill to the child actor.
This is the code for the Supervisor actor.
case object SuccessfulProcessing

class Supervisor extends Actor {
  def receive = {
    case onEvent: OnEvent =>
      // Create child actor and send message
    case SuccessfulProcessing =>
      // Note: a lowercase pattern like `case successfulProcessing =>` would
      // match (and bind) any message; match on a case object instead.
      sender() ! PoisonPill
  }

  override val supervisorStrategy = AllForOneStrategy() {
    case e: Exception => Stop
  }
}
Is this the correct approach to clean up the dynamically created actors? Is there any disadvantage to this approach, or is there a pattern to be followed?
As per the Akka documentation (2.4.14), the better way to handle PoisonPill/Kill messages when using a router is to broadcast them:
router ! Broadcast(PoisonPill)
Note: Do not broadcast these messages when using a BalancingPool.
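A minimal sketch of what the broadcast looks like with a pool router (the Worker class and the pool size here are illustrative):

import akka.actor._
import akka.routing.{Broadcast, RoundRobinPool}

class Worker extends Actor {
  def receive = {
    case job => println(s"${self.path.name} handling $job")
  }
}

object BroadcastDemo extends App {
  val system = ActorSystem("demo")
  val router = system.actorOf(RoundRobinPool(3).props(Props[Worker]), "router")

  router ! "job-1" // routed to a single routee
  // Broadcast delivers the PoisonPill to every routee, so each worker
  // stops after draining the messages already in its mailbox.
  router ! Broadcast(PoisonPill)
}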
The pattern I've seen is to have an actor that manages other actors. In the following example from this tutorial, actor1 manages actor2, where actor2 does all the work; actor1 then cleans up.
import akka.actor._

case class StartCounting(n: Int, actor: ActorRef)
case class CountDown(n: Int)

class CountDownActor extends Actor {
  def receive = {
    case StartCounting(n, actor) =>
      println(n)
      actor ! CountDown(n - 1)
    case CountDown(n) =>
      if (n > 0) {
        println(n)
        sender() ! CountDown(n - 1)
      } else {
        context.system.terminate() // system.shutdown() is deprecated
      }
  }
}

object Main extends App {
  val system = ActorSystem("HelloSystem")
  // default Actor constructor
  val actor1 = system.actorOf(Props[CountDownActor], name = "manager")
  val actor2 = system.actorOf(Props[CountDownActor], name = "worker")
  actor1 ! StartCounting(10, actor2)
}
You can think of this like recursion: base and inductive cases. You can apply this at depth, with all sibling actors being managed by their parent.
In my Spray app, I delegate requests to actors. I want to be able to kill an actor that takes too long. I'm not sure whether I should be using Spray timeouts, the Akka ask pattern, or something else.
I have implemented:
def processRouteRequest(system: ActorSystem) = {
  respondWithMediaType(`text/json`) {
    params { p => ctx =>
      val builder = newBuilderActor
      builder ! Request(p) // the builder calls `ctx.complete`
      builder ! PoisonPill
      system.scheduler.scheduleOnce(routeRequestMaxLife, builder, Kill)
    }
  }
}
The idea is that the actor lives only for the duration of a single request, and if it doesn't complete within routeRequestMaxLife it gets forcibly killed. This approach seems over-the-top (and spews a lot of log noise about undelivered messages). I'm not even certain it works correctly.
It seems like what I'm trying to achieve should be a common use-case. How should I approach it?
I would tend toward using the ask pattern and handling the requests as follows:
import akka.actor._
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.Future
import scala.concurrent.duration._
import scala.util.{Failure, Success}

class RequestHandler extends Actor {
  import context.dispatcher // scheduleOnce needs an ExecutionContext

  def receive = {
    case "quick" =>
      sender() ! "Quick Reply"
      self ! PoisonPill
    case "slow" =>
      val replyTo = sender()
      context.system.scheduler.scheduleOnce(5.seconds, self, replyTo)
    case a: ActorRef =>
      a ! "Slow Reply"
      self ! PoisonPill
  }
}

class ExampleService extends HttpService with Actor {
  implicit def actorRefFactory = context
  import context.dispatcher

  def handleRequest(mode: String): Future[String] = {
    implicit val timeout = Timeout(1.second)
    val requestHandler = context.actorOf(Props[RequestHandler])
    (requestHandler ? mode).mapTo[String]
  }

  val route: Route =
    path("endpoint" / Segment) { str =>
      get {
        onComplete(handleRequest(str)) {
          case Success(str) => complete(str)
          case Failure(ex)  => complete(ex)
        }
      }
    }

  def receive = runRoute(route)
}
This way the actor takes care of stopping itself, and the semantics of ask tell you whether or not the request timed out.
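If you also want the handler stopped as soon as the ask times out, rather than lingering until its slow reply, one option is to hook the future's failure. A sketch building on the handleRequest above (andThen is standard Future API, and sending a PoisonPill is safe from a callback):

import scala.util.Failure

def handleRequest(mode: String): Future[String] = {
  implicit val timeout = Timeout(1.second)
  val requestHandler = context.actorOf(Props[RequestHandler])
  (requestHandler ? mode).mapTo[String].andThen {
    // On AskTimeoutException (or any other failure), make sure the
    // handler goes away instead of waiting around for its slow reply.
    case Failure(_) => requestHandler ! PoisonPill
  }
}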