I am new to supervision in Akka. I want to know which supervision strategy is appropriate when we get an AskTimeoutException: Restart or Resume?
Here is the sample code:
class ActorA extends Actor {

  override val supervisorStrategy = OneForOneStrategy(
    maxNrOfRetries = 10, withinTimeRange = 10.seconds) {
    case _: AskTimeoutException => ??? // Resume or Restart?
    case _: Exception           => Restart
  }

  val actorB = context.actorOf ... // actor creation code
  implicit val timeout = Timeout(interval, SECONDS)

  // What if actorB does not reply within the time and an AskTimeoutException
  // is thrown? What should the supervision strategy be?
  val future = ask(actorB, MessageB).mapTo[Boolean]
  var response = Await.result(future, timeout.duration)
}
Please guide me. Thanks!
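(Side note: in the snippet above, the AskTimeoutException surfaces through the ask future, via Await.result, inside ActorA itself; actorB never throws it, so the OneForOneStrategy shown would only see it if actorB failed with one internally. A minimal sketch of handling the timeout on the future instead, with a hypothetical fallback value:)
import akka.pattern.{ask, AskTimeoutException}
import context.dispatcher // ExecutionContext for recover

val safeFuture = ask(actorB, MessageB).mapTo[Boolean].recover {
  case _: AskTimeoutException => false // hypothetical fallback instead of blocking with Await
}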
I am working on the stream processing system below, grabbing frames from one source, processing them, and sending them to another. I'm using a combination of akka-streams and akka-http through their Scala APIs. The pipeline is very short, but I can't locate where the system decides to stop after precisely 100 requests to the endpoint.
import akka.NotUsed
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.{HttpRequest, HttpResponse}
import akka.stream.{ActorMaterializer, Supervision}
import akka.stream.scaladsl.{Flow, Sink, Source}
import akka.util.ByteString
import scala.concurrent.{ExecutionContextExecutor, Future}
import scala.util.{Failure, Success}

object frameProcessor extends App {
  implicit val system: ActorSystem = ActorSystem("VideoStreamProcessor")
  val decider: Supervision.Decider = _ => Supervision.Restart
  implicit val materializer: ActorMaterializer = ActorMaterializer()
  implicit val dispatcher: ExecutionContextExecutor = system.dispatcher

  val http = Http(system)

  val sourceConnectionFlow: Flow[HttpRequest, HttpResponse, Future[Http.OutgoingConnection]] =
    http.outgoingConnection(sourceUri)

  val byteFlow: Flow[HttpResponse, Future[ByteString], NotUsed] =
    Flow[HttpResponse].map(_.entity.dataBytes.runFold(ByteString.empty)(_ ++ _))

  Source.repeat(HttpRequest(uri = sourceUri))
    .via(sourceConnectionFlow)
    .via(byteFlow)
    .map(postFrame)
    .runWith(Sink.ignore)
    .onComplete(_ => system.terminate())

  def postFrame(imageBytes: Future[ByteString]): Unit = {
    imageBytes.onComplete {
      case Success(res) => system.log.info(s"post frame. ${res.length} bytes")
      case Failure(_)   => system.log.error("failed to post image!")
    }
  }
}
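A side note on the decider: as written, it is declared but never attached to the materializer, so it has no effect on the stream. Attaching it would look something like this sketch (same system and decider as above):
import akka.stream.ActorMaterializerSettings

implicit val materializer: ActorMaterializer = ActorMaterializer(
  ActorMaterializerSettings(system).withSupervisionStrategy(decider)
)
Note that a stream supervision decider only handles exceptions thrown by stages; it does not restart a stream that completes normally.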
For reference, I'm using akka-streams version 2.5.19 and akka-http version 10.1.7. No error is thrown, there are no error codes on the source server the frames come from, and the program exits with code 0.
My application.conf is as follows:
logging = "DEBUG"
Always 100 units processed.
Thanks!
Edit
Added logging to the stream like so:
.onComplete {
  case Success(res) =>
    system.log.info(res.toString)
    system.terminate()
  case Failure(res) =>
    system.log.error(res.getMessage)
    system.terminate()
}
I have received a connection reset exception, but it is inconsistent; the stream completes with Done.
Edit 2
Using .mapAsync(1)(postFrame), I get the same Success(Done) after precisely 100 requests. Additionally, when I check the nginx server's access.log and error.log, there are only 200 responses.
I had to modify postFrame as follows to make it work with mapAsync:
def postFrame(imageBytes: Future[ByteString]): Future[Unit] = {
  imageBytes.onComplete {
    case Success(res) => system.log.info(s"post frame. ${res.length} bytes")
    case Failure(_)   => system.log.error("failed to post image!")
  }
  // Note: this future completes immediately (it does not track the callback
  // above), so mapAsync(1) is not actually waiting for anything.
  // Future.successful(()) is the idiomatic already-completed Future[Unit].
  Future(Unit)
}
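For comparison, here is a sketch (same names and types as above) where the returned future is tied to the byte accumulation itself, so mapAsync(1) genuinely waits for each element:
def postFrame(imageBytes: Future[ByteString]): Future[Unit] =
  imageBytes
    .map(res => system.log.info(s"post frame. ${res.length} bytes"))
    .recover { case _ => system.log.error("failed to post image!") }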
I believe I have found the answer in the Akka docs: use delayed restarts with a backoff operator. Instead of sourcing directly from the unstable remote connection, I use RestartSource.withBackoff rather than RestartSource.onFailureWithBackoff, since withBackoff also restarts the wrapped source when it completes (not just when it fails). The modified stream looks like:
val restartSource = RestartSource.withBackoff(
  minBackoff = 100.milliseconds,
  maxBackoff = 1.second,
  randomFactor = 0.2
) { () =>
  Source.single(HttpRequest(uri = sourceUri))
    .via(sourceConnectionFlow)
    .via(byteFlow)
    .mapAsync(1)(postFrame)
}

restartSource
  .runWith(Sink.ignore)
  .onComplete { x =>
    println(x)
    system.terminate()
  }
I was not able to find the source of the problem, but it seems this will work.
I'm trying to wrap my head around Akka Streams and the way to handle web sockets, but some things aren't quite clear to me.
For starters, I'm trying to accomplish one-way communication from one client to the server, and two-way communication between the same server and some other client:
client1 -----> Server <------> client2
I was looking at the example provided here.
The resulting code looks something like this:
1) Starting with the controller:
class Test @Inject()(@Named("connManager") myConnectionsManager: ActorRef, cc: ControllerComponents)
                    (implicit val actorSystem: ActorSystem,
                     val mat: Materializer,
                     implicit val executionContext: ExecutionContext)
  extends AbstractController(cc) {

  private def wsFutureFlow(id: String): Future[Flow[String, String, NotUsed]] = {
    implicit val timeout: Timeout = Timeout(5.seconds)
    val future = myConnectionsManager ? CreateRemote(id)
    future.mapTo[Flow[String, String, NotUsed]]
  }

  private def wsFutureLocalFlow: Future[Flow[String, String, NotUsed]] = {
    implicit val timeout: Timeout = Timeout(5.seconds)
    val future = myConnectionsManager ? CreateLocal
    future.mapTo[Flow[String, String, NotUsed]]
  }

  def ws: WebSocket = WebSocket.acceptOrResult[String, String] { rh =>
    wsFutureFlow(rh.id.toString).map(flow => Right(flow))
  }

  def wsLocal: WebSocket = WebSocket.acceptOrResult[String, String] { _ =>
    wsFutureLocalFlow.map(flow => Right(flow))
  }
}
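As a side note, WebSocket.acceptOrResult expects a Future[Either[Result, Flow[...]]], so the Left side can carry an HTTP result when the flow cannot be created. A sketch of how the ws action could handle that (the timeout handling here is hypothetical, not from the original):
def ws: WebSocket = WebSocket.acceptOrResult[String, String] { rh =>
  wsFutureFlow(rh.id.toString)
    .map(flow => Right(flow))
    .recover { case _: akka.pattern.AskTimeoutException =>
      Left(InternalServerError("connection manager did not respond"))
    }
}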
As for the connection manager actor, that would be the equivalent of the UserParentActor from the example:
class MyConnectionsManager @Inject()(childFactory: MyTestActor.Factory)
                                    (implicit ec: ExecutionContext, mat: Materializer)
  extends Actor with InjectedActorSupport {

  import akka.pattern.{ask, pipe}
  implicit val timeout: Timeout = Timeout(2.seconds)

  override def receive: Receive = {
    case CreateRemote(x) =>
      val child = injectedChild(childFactory(), s"remote-$x")
      context.watch(child)
      privatePipe(child)

    case CreateLocal =>
      val child = injectedChild(childFactory(), "localConnection")
      context.become(onLocalConnected(child))
      privatePipe(child)

    case Terminated(child) =>
      println(s"${child.path.name} terminated...")
  }

  def onLocalConnected(local: ActorRef): Receive = {
    case CreateRemote(x) =>
      val child = injectedChild(childFactory(), s"remote-$x")
      context.watch(child)
      privatePipe(child)

    case x: SendToLocal => local ! x
  }

  private def privatePipe(child: ActorRef) = {
    val future = (child ? Init).mapTo[Flow[String, String, _]]
    pipe(future) to sender()
    () // compiler complains without this: non-unit value discarded
  }
}
And the MyTestActor looks like this:
class MyTestActor @Inject()(implicit mat: Materializer, ec: ExecutionContext) extends Actor {

  val source: Source[String, Sink[String, NotUsed]] = MergeHub.source[String]
    .recoverWithRetries(-1, { case _: Exception => Source.empty })

  private val jsonSink: Sink[String, Future[Done]] = Sink.foreach { json =>
    println(s"${self.path.name} got message: $json")
    context.parent ! SendToLocal(json)
  }

  private lazy val websocketFlow: Flow[String, String, NotUsed] = {
    Flow.fromSinkAndSourceCoupled(jsonSink, source).watchTermination() { (_, termination) =>
      termination.foreach(_ => context.stop(self))
      NotUsed
    }
  }

  def receive: Receive = {
    case Init =>
      println(s"${self.path.name}: INIT")
      sender ! websocketFlow

    case SendToLocal(x) =>
      println(s"Local got from remote: $x")

    case msg: String => sender ! s"Actor got message: $msg"
  }
}
What I don't understand, apart from how sinks and sources actually connect to the actors, is the following: when I start up my system, I send a few messages to the actors. However, after I close the connection to an actor named "remote" and continue sending messages to the one called "localConnection", the messages get sent to DeadLetters:
[info] Done compiling.
[info] 15:49:20.606 - play.api.Play - Application started (Dev)
localConnection: INIT
localConnection got message: test data
Local got from remote: test data
localConnection got message: hello world
Local got from remote: hello world
remote-133: INIT
remote-133 got message: hello world
Local got from remote: hello world
remote-133 got message: hello from remote
Local got from remote: hello from remote
[error] 15:50:24.449 - a.a.OneForOneStrategy - Monitored actor [Actor[akka://application/user/connManager/remote-133#-998945083]] terminated
akka.actor.DeathPactException: Monitored actor [Actor[akka://application/user/connManager/remote-133#-998945083]] terminated
deadLetters got message: hello local
I assume this is because of the exception thrown... Can anyone explain why the message gets sent to DeadLetters?
Apart from that, I would like to know why I keep getting a compiler error without the "()" returned at the end of privatePipe.
Also, should I be doing anything differently?
I realised that the exception was being thrown because I forgot to handle the Terminated message in the new behaviour of the MyConnectionsManager actor. A watched child that stops while Terminated is unhandled raises a DeathPactException, which stops the watching actor itself, so later messages to it end up in DeadLetters. The fixed behaviour:
def onLocalConnected(local: ActorRef): Receive = {
  case CreateRemote(x) =>
    val child = injectedChild(childFactory(), s"remote-$x")
    context.watch(child)
    privatePipe(child)

  case Terminated(child) => println(s"${child.path.name} terminated...")

  case x: SendToLocal => local ! x
}
It seems to be working now.
If I have created a RunnableGraph in Akka Streams, how can I know (from the outside)
when all nodes are cancelled due to completion?
when all nodes have been stopped due to an error?
I don't think there is a way to do it for an arbitrary graph, but if you have your graph under control, you just need to attach monitoring sinks to the output of each node that can fail or complete (these are the nodes with at least one output). For example:
import akka.actor.Status

// obtain graph parts (this can be done inside the graph building as well)
val source: Source[Int, NotUsed] = ...
val flow: Flow[Int, String, NotUsed] = ...
val sink: Sink[String, NotUsed] = ...

// create monitoring actors
val aggregate = actorSystem.actorOf(Props[Aggregate])
val sourceMonitorActor = actorSystem.actorOf(Props(new Monitor("source", aggregate)))
val flowMonitorActor = actorSystem.actorOf(Props(new Monitor("flow", aggregate)))

// create the graph
val graph = GraphDSL.create() { implicit b =>
  import GraphDSL.Implicits._

  val sourceMonitor = b.add(Sink.actorRef(sourceMonitorActor, Status.Success(())))
  val flowMonitor = b.add(Sink.actorRef(flowMonitorActor, Status.Success(())))
  val bc1 = b.add(Broadcast[Int](2))
  val bc2 = b.add(Broadcast[String](2))

  // main flow
  source ~> bc1 ~> flow ~> bc2 ~> sink

  // monitoring branches
  bc1 ~> sourceMonitor
  bc2 ~> flowMonitor

  ClosedShape
}

// run the graph
RunnableGraph.fromGraph(graph).run()

class Monitor(name: String, aggregate: ActorRef) extends Actor {
  override def receive: Receive = {
    case Status.Success(_) => aggregate ! s"$name completed successfully"
    case Status.Failure(e) => aggregate ! s"$name completed with failure: ${e.getMessage}"
    case _ =>
  }
}

class Aggregate extends Actor {
  override def receive: Receive = {
    case s: String => println(s)
  }
}
It is also possible to create only one monitoring actor and use it in all the monitoring sinks, but in that case you won't easily be able to tell which of the streams failed.
There is also the watchTermination() method on sources and flows, which materializes a future that terminates together with the stream at that point. I think it may be difficult to use with GraphDSL, but with regular stream methods it could look like this:
import akka.Done
import akka.actor.Status
import akka.pattern.pipe

val monitor = actorSystem.actorOf(Props[Monitor])
import actorSystem.dispatcher // ExecutionContext needed by pipeTo

source
  // the termination future is the second argument of the combiner function
  .watchTermination()((_, f) => f pipeTo monitor)
  .via(flow).watchTermination()((_, f) => f pipeTo monitor)
  .to(sink)
  .run()

class Monitor extends Actor {
  override def receive: Receive = {
    case Done => println("stream completed")
    case Status.Failure(e) => println(s"stream failed: ${e.getMessage}")
  }
}
You can transform the future before piping its value to the actor to differentiate between streams.
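For example, a sketch (same names as above; LabelledMonitor is a hypothetical variant of the Monitor) that tags each termination future with a label before piping it:
source
  .watchTermination()((_, f) => f.map(_ => "source done") pipeTo monitor)
  .via(flow).watchTermination()((_, f) => f.map(_ => "flow done") pipeTo monitor)
  .to(sink)
  .run()

// Failures still arrive untagged, as Status.Failure.
class LabelledMonitor extends Actor {
  override def receive: Receive = {
    case label: String     => println(s"$label: stream completed")
    case Status.Failure(e) => println(s"stream failed: ${e.getMessage}")
  }
}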
Here is the setup: I want to be able to stream messages (JSON converted to ByteStrings) from a publisher to a remote subscriber server over a TCP connection.
Ideally, the publisher would be an actor that receives internal messages, queues them, and then streams them to the subscriber server, if there is outstanding demand of course. I understood that what is necessary for this is to extend the ActorPublisher class in order to onNext() the messages when needed.
My problem is that so far I am only able to send (and have properly received and decoded) one-shot messages to the server, opening a new connection each time. I did not manage to get my head around the Akka docs well enough to set up the proper TCP Flow with the ActorPublisher.
Here is the code from the publisher:
def send(message: Message): Unit = {
  val system = Akka.system()
  implicit val sys = system
  import system.dispatcher
  implicit val materializer = ActorMaterializer()

  val address = Play.current.configuration.getString("eventservice.location").getOrElse("localhost")
  val port = Play.current.configuration.getInt("eventservice.port").getOrElse(9000)

  /*** Try with actorPublisher ***/
  //val result = Source.actorPublisher[Message](Props[EventActor]).via(Flow[Message].map(Json.toJson(_).toString.map(ByteString(_))))

  /*** Try with actorRef ***/
  /*val source = Source.actorRef[Message](0, OverflowStrategy.fail).map(
    m => {
      Logger.info(s"Sending message: ${m.toString}")
      ByteString(Json.toJson(m).toString)
    }
  )
  val ref = Flow[ByteString].via(Tcp().outgoingConnection(address, port)).to(Sink.ignore).runWith(source)*/

  val result = Source(Json.toJson(message).toString.map(ByteString(_)))
    .via(Tcp().outgoingConnection(address, port))
    .runFold(ByteString.empty) { (acc, in) => acc ++ in } // handle the future
}
And the code from the actor, which is quite standard in the end:
import akka.actor.Actor
import akka.stream.actor.ActorSubscriberMessage.{OnComplete, OnError}
import akka.stream.actor.{ActorPublisher, ActorPublisherMessage}
import models.events.Message
import play.api.Logger
import scala.collection.mutable

class EventActor extends Actor with ActorPublisher[Message] {
  import ActorPublisherMessage._

  var queue: mutable.Queue[Message] = mutable.Queue.empty

  def receive = {
    case m: Message =>
      Logger.info(s"EventActor - message received and queued: ${m.toString}")
      queue.enqueue(m)
      publish()
    case Request(_) => // Request is a case class carrying the demand count
      publish()
    case Cancel =>
      Logger.info("EventActor - cancel message received")
      context.stop(self)
    case OnError(err: Exception) =>
      Logger.info("EventActor - error message received")
      onError(err)
      context.stop(self)
    case OnComplete =>
      Logger.info("EventActor - onComplete message received")
      onComplete()
      context.stop(self)
  }

  def publish() = {
    while (queue.nonEmpty && isActive && totalDemand > 0) {
      Logger.info("EventActor - message published")
      onNext(queue.dequeue())
    }
  }
}
Here is the code from the subscriber, in case it is necessary:
def connect(system: ActorSystem, address: String, port: Int): Unit = {
  implicit val sys = system
  import system.dispatcher
  implicit val materializer = ActorMaterializer()

  val handler = Sink.foreach[Tcp.IncomingConnection] { conn =>
    Logger.info("Event server connected to: " + conn.remoteAddress)
    // Fold up the ByteString chunks, reconstruct the message for handling,
    // and then output it back -- that is how handleWith works, apparently.
    conn.handleWith(
      Flow[ByteString].fold(ByteString.empty)((acc, b) => acc ++ b)
        .map(b => handleIncomingMessages(system, b.utf8String))
        .map(ByteString(_))
    )
  }

  val connections = Tcp().bind(address, port)
  val binding = connections.to(handler).run()

  binding.onComplete {
    case Success(b) =>
      Logger.info("Event server started, listening on: " + b.localAddress)
    case Failure(e) =>
      Logger.info(s"Event server could not bind to $address:$port: ${e.getMessage}")
      system.terminate()
  }
}
Thanks in advance for the hints.
My first recommendation is to not write your own queue logic; Akka provides this out of the box. You also don't need to write your own Actor, Akka Streams can provide it as well.
First, we can create the Flow that will connect your publisher to your subscriber via TCP. In your publisher code, you only need to create the ActorSystem and connect to the outside server once:
// this code is at the top level of your application
implicit val actorSystem = ActorSystem()
implicit val actorMaterializer = ActorMaterializer()
import actorSystem.dispatcher

val host = Play.current.configuration.getString("eventservice.location").getOrElse("localhost")
val port = Play.current.configuration.getInt("eventservice.port").getOrElse(9000)

val publishFlow = Tcp().outgoingConnection(host, port)
publishFlow is a Flow that takes as input the ByteString data you want to send to the external subscriber, and outputs the ByteString data that comes back from the subscriber:
// data to subscriber ----> publishFlow ----> data returned from subscriber
The next step is the publisher Source. Instead of writing your own Actor, you can use Source.actorRef to "materialize" the stream into an ActorRef. Essentially, the stream will become an ActorRef for us to use later:
//these values control the buffer
val bufferSize = 1024
val overflowStrategy = akka.stream.OverflowStrategy.dropHead
val messageSource = Source.actorRef[Message](bufferSize, overflowStrategy)
We also need a Flow to convert Messages into ByteStrings:
val marshalFlow =
Flow[Message].map(message => ByteString(Json.toJson(message).toString))
Finally, we can connect all of the pieces. Since you aren't receiving any data back from the external subscriber, we'll ignore any data coming in from the connection:
val subscriberRef: ActorRef = messageSource
  .via(marshalFlow)
  .via(publishFlow)
  .to(Sink.ignore) // keep the Source's materialized ActorRef (runWith would return Sink.ignore's Future[Done])
  .run()
We can now treat this stream as if it were an Actor:
val message1: Message = ???
subscriberRef ! message1

val message2: Message = ???
subscriberRef ! message2
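One more note: the stream stays open until it is completed. With the classic Source.actorRef API, sending akka.actor.Status.Success to the materialized ActorRef completes the stream (closing the TCP connection), while Status.Failure fails it. A sketch:
import akka.actor.Status

subscriberRef ! Status.Success(()) // complete the stream, closing the connection
// subscriberRef ! Status.Failure(new RuntimeException("shutting down")) // or fail it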
I'm developing a system that pulls messages from a JMS queue (the consumer) and pushes them to a Kafka topic (the producer).
Since my consumer stays alive waiting for new messages to arrive in the JMS queue and pushes them to Kafka, how can I effectively measure how many messages per second I can pull?
Here is my code:
My Consumer:
class ActiveMqConsumerActor extends Consumer {

  var startTime: Long = _
  val log = Logging(context.system, this)
  val producerActor = context.actorOf(Props[KafkaProducerActor])

  override def autoAck = false
  override def endpointUri: String = "activemq:KafkaTest"

  override def receive: Receive = LoggingReceive {
    case msg: CamelMessage =>
      val camelMsg = msg.bodyAs[String]
      producerActor ! Message(camelMsg.getBytes)
      sender() ! Ack
    case ex: Exception => sender() ! Failure(ex)
    case _ =>
      log.error("Got a message that I don't understand")
      sender() ! Failure(new Exception("Got a message that I don't understand"))
  }
}
The main:
object ActiveMqConsumerTest extends App {
  val system = ActorSystem("KafkaSystem")
  val camel = CamelExtension(system)
  val camelContext = camel.context
  camelContext.addComponent("activemq", ActiveMQComponent.activeMQComponent("tcp://0.0.0.0:61616"))

  val consumer = system.actorOf(Props[ActiveMqConsumerActor].withRouter(FromConfig), "consumer")
  val producer = system.actorOf(Props[KafkaProducerActor].withRouter(FromConfig), "producer")
}
Thanks
You can try using something like Dropwizard Metrics (https://dropwizard.github.io/metrics/3.1.0/manual/). It lets you define precise metrics, including rates and timings, and use them inside your actor.
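For example, a minimal sketch (the Instrumentation object and metric names here are illustrative, not from the original code): a Meter counts marked events and reports their per-second rate, so marking it once per consumed CamelMessage measures how many messages are pulled per second.
import java.util.concurrent.TimeUnit
import com.codahale.metrics.{ConsoleReporter, MetricRegistry}

object Instrumentation {
  val registry = new MetricRegistry()
  // Tracks mean and 1/5/15-minute rates, in events per second.
  val consumedMeter = registry.meter("jms.messages.consumed")

  // Print the rates to stdout every 10 seconds.
  val reporter: ConsoleReporter = ConsoleReporter.forRegistry(registry)
    .convertRatesTo(TimeUnit.SECONDS)
    .convertDurationsTo(TimeUnit.MILLISECONDS)
    .build()
  reporter.start(10, TimeUnit.SECONDS)
}
Then, inside ActiveMqConsumerActor.receive:
case msg: CamelMessage =>
  Instrumentation.consumedMeter.mark() // one event per message pulled
  ...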