How to terminate Akka actor system from a main method? - scala

I have this application using Akka Streams and ReactiveMongo. There are no user-defined actors, and the application is launched from a main method.
The problem is that the JVM continues to run forever after the main method has completed. This is what I'm doing now:
val g = (file: String) => RunnableGraph.fromGraph(GraphDSL.create(Sink.ignore) {
  implicit builder => sink =>
    import GraphDSL.Implicits._

    // Source
    val A: Outlet[(String, String)] = builder.add(Source.fromIterator(() => parseMovies(file).iterator)).out
    // Flow
    val B: FlowShape[(String, String), Either[String, Movie]] = builder.add(findMovie)
    // Flow
    val C: FlowShape[Either[String, Movie], Option[String]] = builder.add(persistMovie)

    A ~> B ~> C ~> sink.in
    ClosedShape
})
def main(args: Array[String]): Unit = {
  require(args.size >= 1, "Path to file is required.")
  g(args(0)).run
    .onComplete(_ => Await.result(system.terminate(), 5.seconds))
}
I've read this thread and this one, but neither works. system.shutdown is deprecated, and I don't have any explicit actors to watch. I could call System.exit, but that's hardly graceful.
From the logs, it appears that Akka is trying to shut down but then I see a bunch of Mongo messages.
2017-01-13 11:35:57.320 [DEBUG] a.e.EventStream.$anonfun$applyOrElse$4 - shutting down: StandardOutLogger started
2017-01-13 11:36:05.397 [DEBUG] r.c.a.MongoDBSystem.debug - [Supervisor-1/Connection-2] ConnectAll Job running... Status: {{NodeSet None Node[localhost:27017: Primary (10/10 available connections), latency=6], auth=Set() }}
2017-01-13 11:36:05.420 [DEBUG] r.c.a.MongoDBSystem.debug - [Supervisor-1/Connection-2] RefreshAll Job running... Status: {{NodeSet None Node[localhost:27017: Primary (10/10 available connections), latency=6], auth=Set() }}
// more of MongoDBSystem.debug messages
Why won't it.just.die?

I think you want to add a shutdown hook or call actorSystem.registerOnTermination(driver.close()):
def main(args: Array[String]): Unit = {
  import akka.actor.CoordinatedShutdown
  require(args.size >= 1, "Path to file is required.")
  CoordinatedShutdown(system).addTask(CoordinatedShutdown.PhaseBeforeActorSystemTerminate, "shutdownMongoDriver") { () =>
    driver.close(5.seconds)
    Future.successful(Done)
  }
  g(args(0)).run.onComplete(_ => CoordinatedShutdown(system).run())
}
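If you prefer the registerOnTermination route mentioned above, a minimal sketch could look like this (assuming driver is the ReactiveMongo MongoDriver used by the stream and system is the implicit ActorSystem):

// Sketch only: close the Mongo driver once the actor system terminates,
// so its connection threads cannot keep the JVM alive.
system.registerOnTermination {
  driver.close(5.seconds)
}
g(args(0)).run.onComplete(_ => system.terminate())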

Related

Akka streams Source.repeat stops after 100 requests

I am working on the stream processing system below to grab frames from one source, process them, and send them to another. I'm using a combination of akka-streams and akka-http through their Scala APIs. The pipeline is very short, but I can't seem to locate where the system decides to stop after precisely 100 requests to the endpoint.
object frameProcessor extends App {
  implicit val system: ActorSystem = ActorSystem("VideoStreamProcessor")
  // note: this decider is defined but never attached to the materializer settings
  val decider: Supervision.Decider = _ => Supervision.Restart
  implicit val materializer: ActorMaterializer = ActorMaterializer()
  implicit val dispatcher: ExecutionContextExecutor = system.dispatcher

  val http = Http(system)
  val sourceConnectionFlow: Flow[HttpRequest, HttpResponse, Future[Http.OutgoingConnection]] =
    http.outgoingConnection(sourceUri)
  val byteFlow: Flow[HttpResponse, Future[ByteString], NotUsed] =
    Flow[HttpResponse].map(_.entity.dataBytes.runFold(ByteString.empty)(_ ++ _))

  Source.repeat(HttpRequest(uri = sourceUri))
    .via(sourceConnectionFlow)
    .via(byteFlow)
    .map(postFrame)
    .runWith(Sink.ignore)
    .onComplete(_ => system.terminate())

  def postFrame(imageBytes: Future[ByteString]): Unit = {
    imageBytes.onComplete {
      case Success(res) => system.log.info(s"post frame. ${res.length} bytes")
      case Failure(_)   => system.log.error("failed to post image!")
    }
  }
}
For reference, I'm using akka-streams version 2.5.19 and akka-http version 10.1.7. No error is thrown, there are no error codes on the source server where the frames come from, and the program exits with error code 0.
My application.conf is as follows:
logging = "DEBUG"
Always 100 units processed.
Thanks!
Edit
Added logging to the stream like so:
.onComplete {
  case Success(res) =>
    system.log.info(res.toString)
    system.terminate()
  case Failure(res) =>
    system.log.error(res.getMessage)
    system.terminate()
}
Received a connection reset exception but this is inconsistent. The stream completes with Done.
Edit 2
Using .mapAsync(1)(postFrame) I get the same Success(Done) after precisely 100 requests. Additionally, when I check the nginx server's access.log and error.log, there are only HTTP 200 responses.
I had to modify postFrame as follows to run mapAsync:
def postFrame(imageBytes: Future[ByteString]): Future[Unit] = {
  imageBytes.onComplete {
    case Success(res) => system.log.info(s"post frame. ${res.length} bytes")
    case Failure(_)   => system.log.error("failed to post image!")
  }
  Future.successful(()) // Future(Unit) only compiles via value discarding; this is the intended unit future
}
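As an aside, since postFrame already receives a Future, a more idiomatic variant (a sketch, not from the original post) returns the mapped future directly, so mapAsync(1) genuinely waits on each frame rather than on an unrelated completed future:

// Sketch: propagate the byte future so mapAsync(1) applies real backpressure.
def postFrame(imageBytes: Future[ByteString]): Future[Unit] =
  imageBytes
    .map(res => system.log.info(s"post frame. ${res.length} bytes"))
    .recover { case _ => system.log.error("failed to post image!") }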
I believe I have found the answer in the Akka docs, using delayed restarts with a backoff operator. Instead of sourcing directly from an unstable remote connection, I use RestartSource.withBackoff and not RestartSource.onFailureWithBackoff (withBackoff restarts the wrapped source when it completes as well as when it fails, whereas onFailureWithBackoff restarts it only on failure). The modified stream looks like:
val restartSource = RestartSource.withBackoff(
  minBackoff = 100.milliseconds,
  maxBackoff = 1.seconds,
  randomFactor = 0.2
) { () =>
  Source.single(HttpRequest(uri = sourceUri))
    .via(sourceConnectionFlow)
    .via(byteFlow)
    .mapAsync(1)(postFrame)
}

restartSource
  .runWith(Sink.ignore)
  .onComplete { x =>
    println(x)
    system.terminate()
  }
I was not able to find the source of the problem, but it seems this will work.

Play framework akka stream websocket handling message get sent to deadletters

I'm trying to wrap my head around Akka Streams and the way to handle WebSockets, but some things still aren't quite clear to me.
For starters, I'm trying to accomplish one-way communication from some client to the server, and two-way communication between the same server and some other client:
client1 -----> Server <------> client2
I was looking at the example provided here.
The resulting code looks something like this:
1) starting with the controller
class Test @Inject()(@Named("connManager") myConnectionsManager: ActorRef, cc: ControllerComponents)
                    (implicit val actorSystem: ActorSystem,
                     val mat: Materializer,
                     implicit val executionContext: ExecutionContext)
  extends AbstractController(cc) {

  private def wsFutureFlow(id: String): Future[Flow[String, String, NotUsed]] = {
    implicit val timeout: Timeout = Timeout(5.seconds)
    val future = myConnectionsManager ? CreateRemote(id)
    val futureFlow = future.mapTo[Flow[String, String, NotUsed]]
    futureFlow
  }

  private def wsFutureLocalFlow: Future[Flow[String, String, NotUsed]] = {
    implicit val timeout: Timeout = Timeout(5.seconds)
    val future = myConnectionsManager ? CreateLocal
    val futureFlow = future.mapTo[Flow[String, String, NotUsed]]
    futureFlow
  }

  def ws: WebSocket = WebSocket.acceptOrResult[String, String] { rh =>
    wsFutureFlow(rh.id.toString).map { flow =>
      Right(flow)
    }
  }

  def wsLocal: WebSocket = WebSocket.acceptOrResult[String, String] { _ =>
    wsFutureLocalFlow.map { flow =>
      Right(flow)
    }
  }
}
As for the connection manager actor, that would be the equivalent of the UserParentActor from the example:
class MyConnectionsManager @Inject()(childFactory: MyTestActor.Factory)
                                    (implicit ec: ExecutionContext, mat: Materializer)
  extends Actor with InjectedActorSupport {

  import akka.pattern.{ask, pipe}

  implicit val timeout: Timeout = Timeout(2.seconds)

  override def receive: Receive = {
    case CreateRemote(x) =>
      val child = injectedChild(childFactory(), s"remote-$x")
      context.watch(child)
      privatePipe(child)
    case CreateLocal =>
      val child = injectedChild(childFactory(), "localConnection")
      context.become(onLocalConnected(child))
      privatePipe(child)
    case Terminated(child) =>
      println(s"${child.path.name} terminated...")
  }

  def onLocalConnected(local: ActorRef): Receive = {
    case CreateRemote(x) =>
      val child = injectedChild(childFactory(), s"remote-$x")
      context.watch(child)
      privatePipe(child)
    case x: SendToLocal => local ! x
  }

  private def privatePipe(child: ActorRef) = {
    val future = (child ? Init).mapTo[Flow[String, String, _]]
    pipe(future) to sender()
    () // compiler throws exception without this: non-unit value discarded
  }
}
And the MyTestActor looks like this:
class MyTestActor #Inject()(implicit mat: Materializer, ec: ExecutionContext) extends Actor {
val source: Source[String, Sink[String, NotUsed]] = MergeHub.source[String]
.recoverWithRetries(-1, { case _: Exception => Source.empty })
private val jsonSink: Sink[String, Future[Done]] = Sink.foreach { json =>
println(s"${self.path.name} got message: $json")
context.parent ! SendToLocal(json)
}
private lazy val websocketFlow: Flow[String, String, NotUsed] = {
Flow.fromSinkAndSourceCoupled(jsonSink, source).watchTermination() { (_, termination) =>
val name = self.path.name
termination.foreach(_ => context.stop(self))
NotUsed
}
}
def receive: Receive = {
case Init =>
println(s"${self.path.name}: INIT")
sender ! websocketFlow
case SendToLocal(x) =>
println(s"Local got from remote: $x")
case msg: String => sender ! s"Actor got message: $msg"
}
}
What I don't understand, apart from how sinks and sources actually connect to the actors, is the following: when I start up my system, I send a few messages to the actors. However, after I close the connection to an actor named "remote" and continue sending messages to the one called "localConnection", the messages get sent to DeadLetters:
[info] Done compiling.
[info] 15:49:20.606 - play.api.Play - Application started (Dev)
localConnection: INIT
localConnection got message: test data
Local got from remote: test data
localConnection got message: hello world
Local got from remote: hello world
remote-133: INIT
remote-133 got message: hello world
Local got from remote: hello world
remote-133 got message: hello from remote
Local got from remote: hello from remote
[error] 15:50:24.449 - a.a.OneForOneStrategy - Monitored actor [Actor[akka://application/user/connManager/remote-133#-998945083]] terminated
akka.actor.DeathPactException: Monitored actor [Actor[akka://application/user/connManager/remote-133#-998945083]] terminated
deadLetters got message: hello local
I assume this is because of the exception thrown... Can anyone explain why the message gets sent to DeadLetters?
Apart from that, I would like to know why I keep getting a compiler error without the "()" returned at the end of privatePipe.
Also, should I be doing anything differently?
I realised that the exception was being thrown because I forgot to handle the Terminated message in the new behaviour of the MyConnectionsManager actor; an unhandled Terminated from a watched actor makes Akka throw the DeathPactException seen in the log:
def onLocalConnected(local: ActorRef): Receive = {
  case CreateRemote(x) =>
    val child = injectedChild(childFactory(), s"remote-$x")
    context.watch(child)
    privatePipe(child)
  case Terminated(child) => println(s"${child.path.name} terminated...")
  case x: SendToLocal => local ! x
}
It seems to be working now.

Handle SIGTERM in akka-http

The current (10.1.3) Akka HTTP docs:
https://doc.akka.io/docs/akka-http/current/server-side/graceful-termination.html
talk about graceful termination, using this code sample:
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.server.Route
import akka.stream.ActorMaterializer

import scala.concurrent.{Await, Future}
import scala.concurrent.duration._

implicit val system = ActorSystem()
implicit val dispatcher = system.dispatcher
implicit val materializer = ActorMaterializer()

val routes = get {
  complete("Hello world!")
}

val binding: Future[Http.ServerBinding] =
  Http().bindAndHandle(routes, "127.0.0.1", 8080)

// ...
// once ready to terminate the server, invoke terminate:
val onceAllConnectionsTerminated: Future[Http.HttpTerminated] =
  Await.result(binding, 10.seconds)
    .terminate(hardDeadline = 3.seconds)

// once all connections are terminated,
// - you can invoke coordinated shutdown to tear down the rest of the system:
onceAllConnectionsTerminated.flatMap { _ ⇒
  system.terminate()
}
I am wondering at what point this gets called; the comment states:
// once ready to terminate the server
What does this mean exactly, i.e. who/what determines the server is ready to terminate?
Do I have to put the shutdown code above in some hook function somewhere so that it is invoked on Akka HTTP receiving a SIGTERM?
I’ve tried putting this into the shutdown hook:
CoordinatedShutdown(system).addCancellableJvmShutdownHook {
  // once ready to terminate the server, invoke terminate:
  val onceAllConnectionsTerminated: Future[Http.HttpTerminated] =
    Await.result(binding, 10.seconds)
      .terminate(hardDeadline = 3.seconds)

  // once all connections are terminated,
  // - you can invoke coordinated shutdown to tear down the rest of the system:
  onceAllConnectionsTerminated.flatMap { _ ⇒
    system.terminate()
  }
}
But requests in progress are ended immediately upon sending a SIGTERM (kill <pid>), rather than being allowed to complete.
I also found a slightly different way of shutting down, from https://github.com/akka/akka-http/issues/1210#issuecomment-338825745:
CoordinatedShutdown(system).addTask(
  CoordinatedShutdown.PhaseServiceUnbind, "http_shutdown") { () =>
  bind.flatMap(_.unbind).flatMap { _ =>
    Http().shutdownAllConnectionPools
  }.map { _ =>
    Done
  }
}
Maybe I should use this to handle SIGTERM? I'm not sure.
Thanks!
Resolution taken from this answer here:
https://discuss.lightbend.com/t/graceful-termination-on-sigterm-using-akka-http/1619
CoordinatedShutdown(system).addTask(
  CoordinatedShutdown.PhaseServiceUnbind, "http_shutdown") { () =>
  bind.flatMap(_.terminate(hardDeadline = 1.minute)).map { _ =>
    Done
  }
}
For me the main part was to increase akka.coordinated-shutdown.default-phase-timeout, as processing the in-flight request took longer than the 5-second default. You can also increase the timeout for just that one phase. I had the following message in my logs:
Coordinated shutdown phase [service-unbind] timed out after 5000 milliseconds
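For reference, the relevant settings might look like this in application.conf (a sketch; the values are illustrative assumptions, not taken from the answer):

akka.coordinated-shutdown {
  # raise the global default (5 s out of the box)
  default-phase-timeout = 30 s

  # or raise only the phase that runs the terminate() task above
  phases.service-unbind.timeout = 2 m
}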

Monitoring a closed graph Akka Stream

If I have created a RunnableGraph in Akka Stream, how can I know (from the outside)
when all nodes are cancelled due to completion?
when all nodes have been stopped due to an error?
I don't think there is a way to do it for an arbitrary graph, but if you have your graph under control, you just need to attach monitoring sinks to the output of each node that can fail or complete (these are the nodes with at least one output), for example:
import akka.actor.Status

// obtain graph parts (this can be done inside the graph building as well)
val source: Source[Int, NotUsed] = ...
val flow: Flow[Int, String, NotUsed] = ...
val sink: Sink[String, NotUsed] = ...

// create monitoring actors
val aggregate = actorSystem.actorOf(Props[Aggregate])
val sourceMonitorActor = actorSystem.actorOf(Props(new Monitor("source", aggregate)))
val flowMonitorActor = actorSystem.actorOf(Props(new Monitor("flow", aggregate)))

// create the graph
val graph = GraphDSL.create() { implicit b =>
  import GraphDSL.Implicits._

  val sourceMonitor = b.add(Sink.actorRef(sourceMonitorActor, Status.Success(())))
  val flowMonitor = b.add(Sink.actorRef(flowMonitorActor, Status.Success(())))
  val bc1 = b.add(Broadcast[Int](2))
  val bc2 = b.add(Broadcast[String](2))

  // main flow
  source ~> bc1 ~> flow ~> bc2 ~> sink

  // monitoring branches
  bc1 ~> sourceMonitor
  bc2 ~> flowMonitor

  ClosedShape
}

// run the graph
RunnableGraph.fromGraph(graph).run()
class Monitor(name: String, aggregate: ActorRef) extends Actor {
  override def receive: Receive = {
    case Status.Success(_) => aggregate ! s"$name completed successfully"
    case Status.Failure(e) => aggregate ! s"$name completed with failure: ${e.getMessage}"
    case _ =>
  }
}

class Aggregate extends Actor {
  override def receive: Receive = {
    case s: String => println(s)
  }
}
It is also possible to create only one monitoring actor and use it in all monitoring sinks, but in that case you won't be able to differentiate easily between streams which have failed.
There is also a watchTermination() method on sources and flows, which allows you to materialize a future that completes or fails together with the stream at that point. I think it may be difficult to use with GraphDSL, but with the regular stream methods it could look like this:
import akka.Done
import akka.actor.Status
import akka.pattern.pipe

val monitor = actorSystem.actorOf(Props[Monitor])

source
  .watchTermination()((_, f) => f pipeTo monitor) // matF receives (Mat, Future[Done])
  .via(flow)
  .watchTermination()((_, f) => f pipeTo monitor)
  .to(sink)
  .run()
class Monitor extends Actor {
  override def receive: Receive = {
    case Done => println("stream completed")
    case Status.Failure(e) => println(s"stream failed: ${e.getMessage}")
  }
}
You can transform the future before piping its value to the actor to differentiate between streams.
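For instance, a small sketch (the tag strings and the assumed implicit ExecutionContext are mine, not from the answer) that labels each termination future before piping it, so a single monitor actor can tell the stages apart:

// Sketch: map the termination outcome to a tagged message before piping.
source.watchTermination() { (mat, done) =>
  done.map(_ => "source completed")
    .recover { case e => s"source failed: ${e.getMessage}" }
    .pipeTo(monitor)
  mat
}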

Scala/Akka/ReactiveMongo: process does not terminate after system.shutdown()

I just started learning Scala and Akka, and now I am trying to develop an application that uses the ReactiveMongo framework to connect to a MongoDB server.
The problem is that when I call system.shutdown() at the end of my App object, the process does not terminate and just hangs forever.
I am now testing the case where no connection is available, so my MongoDB server is not running. I have the following actor class for querying the database:
class MongoDb(val db: String, val nodes: Seq[String], val authentications: Seq[Authenticate] = Seq.empty,
              val nbChannelsPerNode: Int = 10) extends Actor with ActorLogging {

  def this(config: Config) = this(config.getString("db"), config.getStringList("nodes").asScala.toSeq,
    config.getList("authenticate").asScala.toSeq.map(c => {
      val l = c.unwrapped().asInstanceOf[java.util.HashMap[String, String]]
      Authenticate(l.get("db"), l.get("user"), l.get("password"))
    }),
    config.getInt("nbChannelsPerNode"))

  implicit val ec = context.system.dispatcher

  val driver = new MongoDriver(context.system)
  val connection = driver.connection(nodes, authentications, nbChannelsPerNode)

  connection.monitor.ask(WaitForPrimary)(Timeout(30.seconds)).onFailure {
    case reason =>
      log.error("Waiting for MongoDB primary connection timed out: {}", reason)
      log.error("MongoDb actor kills itself as there is no connection available")
      self ! PoisonPill
  }

  val dbConnection = connection(db)
  val tasksCollection = dbConnection("tasks")
  val taskTargetsCollection = dbConnection("taskTargets")

  import Protocol._

  override def receive: Receive = {
    case GetPendingTask =>
      sender ! NoPendingTask
  }
}
My app class looks like this:
object HelloAkkaScala extends App with LazyLogging {

  import scala.concurrent.duration._

  // Create the 'helloakka' actor system
  val system = ActorSystem("helloakka")
  implicit val ec = system.dispatcher

  //val config = ConfigFactory.load(ConfigFactory.load.getString("my.helloakka.app.environment"))
  val config = ConfigFactory.load

  logger.info("Creating MongoDb actor")
  val db = system.actorOf(Props(new MongoDb(config.getConfig("my.helloakka.db.MongoDb"))))

  system.scheduler.scheduleOnce(Duration.create(60, TimeUnit.SECONDS), new Runnable() {
    def run() = {
      logger.info("Shutting down the system")
      system.shutdown()
      logger.info("System has been shut down!")
    }
  })
}
And the log output in my terminal looks like this:
[DEBUG] [08/07/2014 00:32:06.358] [run-main-0] [EventStream(akka://helloakka)] logger log1-Logging$DefaultLogger started
[DEBUG] [08/07/2014 00:32:06.358] [run-main-0] [EventStream(akka://helloakka)] Default Loggers started
00:32:06.443 INFO [run-main-0] [HelloAkkaScala$] - Creating MongoDb actor
00:32:06.518 DEBUG [helloakka-akka.actor.default-dispatcher-3] [reactivemongo.core.actors.MonitorActor] - Actor[akka://helloakka/temp/$a] is waiting for a primary... not available, warning as soon a primary is available.
00:32:06.595 DEBUG [helloakka-akka.actor.default-dispatcher-2] [reactivemongo.core.actors.MongoDBSystem] - Channel #-774976050 unavailable (ChannelClosed(-774976050)).
00:32:06.599 DEBUG [helloakka-akka.actor.default-dispatcher-2] [reactivemongo.core.actors.MongoDBSystem] - The entire node set is still unreachable, is there a network problem?
00:32:06.599 DEBUG [helloakka-akka.actor.default-dispatcher-2] [reactivemongo.core.actors.MongoDBSystem] - -774976050 is disconnected
00:32:08.573 DEBUG [helloakka-akka.actor.default-dispatcher-3] [reactivemongo.core.actors.MongoDBSystem] - ConnectAll Job running... Status: Node[localhost: Unknown (0/10 available connections), latency=0], auth=Set()
00:32:08.574 DEBUG [helloakka-akka.actor.default-dispatcher-3] [reactivemongo.core.actors.MongoDBSystem] - Channel #-73322193 unavailable (ChannelClosed(-73322193)).
00:32:08.575 DEBUG [helloakka-akka.actor.default-dispatcher-3] [reactivemongo.core.actors.MongoDBSystem] - The entire node set is still unreachable, is there a network problem?
00:32:08.575 DEBUG [helloakka-akka.actor.default-dispatcher-3] [reactivemongo.core.actors.MongoDBSystem] - -73322193 is disconnected
... (the last 3 messages repeated many times as per documentation the MongoDriver tries to re-connect with 2 seconds interval)
[ERROR] [08/07/2014 00:32:36.474] [helloakka-akka.actor.default-dispatcher-3] [akka://helloakka/user/$a] Waiting for MongoDB primary connection timed out: akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://helloakka/user/$c#1684233695]] after [30000 ms]
[ERROR] [08/07/2014 00:32:36.475] [helloakka-akka.actor.default-dispatcher-3] [akka://helloakka/user/$a] MongoDb actor kills itself as there is no connection available
... (the same 3 messages repeated again)
00:32:46.461 INFO [helloakka-akka.actor.default-dispatcher-4] [HelloAkkaScala$] - Shutting down the system
00:32:46.461 INFO [helloakka-akka.actor.default-dispatcher-4] [HelloAkkaScala$] - Awaiting system termination...
00:32:46.465 WARN [helloakka-akka.actor.default-dispatcher-2] [reactivemongo.core.actors.MongoDBSystem] - MongoDBSystem Actor[akka://helloakka/user/$b#537715233] stopped.
00:32:46.465 DEBUG [helloakka-akka.actor.default-dispatcher-5] [reactivemongo.core.actors.MonitorActor] - Monitor Actor[akka://helloakka/user/$c#1684233695] stopped.
[DEBUG] [08/07/2014 00:32:46.468] [helloakka-akka.actor.default-dispatcher-2] [EventStream] shutting down: StandardOutLogger started
00:32:46.483 INFO [helloakka-akka.actor.default-dispatcher-4] [HelloAkkaScala$] - System has been terminated!
And after that the process hangs forever and never terminates. What am I doing wrong?
You aren't doing anything incorrect. This is a known issue.
https://github.com/ReactiveMongo/ReactiveMongo/issues/148
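Until that issue is resolved, a common workaround is to release the driver's resources explicitly when the actor stops, so its connection threads cannot outlive the actor system. A sketch only; the exact close methods and signatures vary across ReactiveMongo versions, so treat these calls as assumptions:

// Sketch: close ReactiveMongo resources in the MongoDb actor's postStop,
// so lingering network threads cannot keep the JVM alive after shutdown.
override def postStop(): Unit = {
  connection.close() // may be askClose()(timeout) depending on the version
  driver.close()
}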