Akka StreamTcp timeout restart - scala

Hi, I'm creating a TCP server in Scala with the Akka library. I want the TCP socket to restart if my client doesn't send any data within a set interval. I tried idleTimeout together with a SupervisionStrategy, but I can't catch the TimeoutException. On the client I see the log "Closing connection due to IO error java.io.IOException: Connection reset by peer".
How can I resolve that and restart the stream?
object TCPServer {

  def serverLogic(connection: IncomingConnection)(implicit system: ActorSystem): Flow[ByteString, ByteString, NotUsed] = {
    val converter: Flow[ByteString, String, NotUsed] = Flow[ByteString].map { (bytes: ByteString) =>
      val message = bytes.utf8String
      Logging.getLogger(system, this.getClass).debug(s"server received message $message")
      message
    }
    val httpOut: Flow[String, String, NotUsed] = Flow[String].map { string =>
      val answer: String = s"hello"
      answer
    }
    val responder: Flow[String, ByteString, NotUsed] = Flow[String].map { string =>
      val answer: String = s"Server responded with message [$string]"
      ByteString(answer)
    }
    Flow[ByteString]
      .idleTimeout(Duration(1, "minutes"))
      .via(converter)
      .via(httpOut)
      .via(responder)
  }

  def server(address: String, port: Int)(implicit system: ActorSystem): Unit = {
    val decider: Supervision.Decider = { e =>
      LoggerFactory.getLogger(this.getClass).error("Failed ", e)
      Supervision.Restart
    }
    val log = Logging.getLogger(system, this)
    import system.dispatcher

    val materializerSettings = ActorMaterializerSettings(system).withSupervisionStrategy(decider)
    implicit val materializer = ActorMaterializer(materializerSettings)(system)

    val connectionHandler: Sink[IncomingConnection, Future[Done]] = Sink.foreach[Tcp.IncomingConnection] { (conn: IncomingConnection) =>
      log.debug(s"incoming connection from ${conn.remoteAddress}")
      conn.handleWith(serverLogic(conn))
    }

    val incomingConnections: Source[IncomingConnection, Future[ServerBinding]] = Tcp().bind(address, port)
    val binding: Future[ServerBinding] = incomingConnections.to(connectionHandler).run()

    binding.onComplete {
      case Success(b) =>
        log.info("Server started, listening on: " + b.localAddress)
      case Failure(e) =>
        log.error(s"Server could not bind to $address:$port: ${e.getMessage}")
        system.terminate()
    }
  }
}

Take a look at the docs available on the Akka Streams website: https://doc.akka.io/docs/akka/2.6.0/stream/stream-error.html. With RestartSource you can apply a backoff policy in order to restart the TCP server.
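A minimal sketch of that approach, reusing serverLogic, address and port from the question (the backoff values are placeholders, and an implicit ActorSystem/materializer is assumed to be in scope as in the original code):

import scala.concurrent.duration._
import akka.stream.scaladsl.{ RestartSource, Sink, Tcp }

// Re-create the binding source with exponential backoff whenever it fails or completes.
val restartingServer =
  RestartSource.withBackoff(minBackoff = 1.second, maxBackoff = 30.seconds, randomFactor = 0.2) { () =>
    Tcp().bind(address, port)
  }

restartingServer
  .to(Sink.foreach { conn => conn.handleWith(serverLogic(conn)) })
  .run()

Keep in mind that the idleTimeout failure only terminates the individual connection flow; the wrapper above restarts the binding source itself.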

Related

Scala Akka actors, ask pattern, dead letters encountered while sending reply

I am trying to send a request to a remote actor using the ask pattern. The local actor receives some value, performs some task on it, and updates it.
Then, when the local actor tries to send the updated value back to the remote actor, an error occurs while sending. How should I handle this error?
Error:
[INFO] [03/31/2017 17:28:18.383] [ClientSystem-akka.actor.default-dispatcher-3] [akka://ClientSystem/deadLetters]
Message [check.package$Rcvdcxt] from Actor[akka://ClientSystem/user/localA1#1050660737] to Actor[akka://ClientSystem/deadLetters] was not delivered.
[1] dead letters encountered.
Remote Actor:
class RemoteActor() extends Actor {
def receive = {
case TaskFromLocal() =>{
implicit val timeout: Timeout = 15000
val currentSender = sender
val f1 = currentSender ? RemoteActor.rtree.cxtA
f1.onComplete{
case Success(Rcvdcxt(cxtA))=>
println("Success"+cxtA)
case Success(s) =>
println("Success :"+s)
case Failure(ex) =>
println("failure:"+ex)
}
}
case _ => println("unknown msg")
}
}
object RemoteActor{
def createRndCxtC(count: Int):List[CxtC] = (for (i <- 1 to count) yield CxtC(Random.nextString(5), Random.nextInt())).toList
def createRndCxtB(count: Int): List[CxtB] = (for (i <- 1 to count) yield CxtB(createRndCxtC(count), Random.nextInt())).toList
def createRndCxtA(count: Int): List[CxtA] = (for (i <- 1 to count) yield CxtA(createRndCxtC(count), 5)).toList
var rtree = RCxt(createRndCxtA(1),createRndCxtB(2),1,"")
def main(args: Array[String]) {
val configFile = getClass.getClassLoader.getResource("remote_application.conf").getFile
val config = ConfigFactory.parseFile(new File(configFile))
val system = ActorSystem("RemoteSystem" , config)
val remoteActor = system.actorOf(Props[RemoteActor], name="remote")
println("remote is ready")
}
}
Local Actor :
class LocalActorA extends Actor{
@throws[Exception](classOf[Exception])
val remoteActor = context.actorSelection("akka.tcp://RemoteSystem#127.0.0.1:5150/user/remote")
def receive = {
case TaskLA1(taskA) => {
implicit val timeout: Timeout = 15000
val rCxt = remoteActor ? TaskFromLocal()
val currentSender = sender
rCxt.onComplete{
case Success(Rcvdcxt(cxtA))=>
println("Success"+cxtA)
println("Sender: "+ sender)
currentSender ! Rcvdcxt(cxtA)
case Success(s)=>
println("Got nothing from Remote"+s)
currentSender ! "Failuree"
case Failure(ex) =>
println("Failure in getting remote")
currentSender ! "Failure"
}
}
}
}
object LocalActorA {
def createRndCxtC(count: Int):List[CxtC] = (for (i <- 1 to count) yield CxtC(Random.nextString(5), Random.nextInt())).toList
def createRndCxtB(count: Int): List[CxtB] = (for (i <- 1 to count) yield CxtB(createRndCxtC(count), Random.nextInt())).toList
def createRndCxtA(count: Int): List[CxtA] = (for (i <- 1 to count) yield CxtA(createRndCxtC(count), 3)).toList
var tree = RCxt(createRndCxtA(2),createRndCxtB(2),1,"")
def main(args: Array[String]) {
val configFile = getClass.getClassLoader.getResource("local_application.conf").getFile
val config = ConfigFactory.parseFile(new File(configFile))
val system = ActorSystem("ClientSystem",config)
val localActorA1 = system.actorOf(Props[LocalActorA], name="localA1")
println("LocalActor A tree : "+tree)
localActorA1 ! TaskLA1(new DummySum())
}
}
Since you didn't post all the code I can't tell exactly what the error is, but my best guess is that it is related to calling sender inside the onComplete callback in the LocalActor. This is unsafe and should be avoided at all costs. Instead, capture the sender first, and do something similar in the remote actor:
class LocalActor extends Actor {
  def receive = {
    case TaskLA1(taskA) =>
      val currentSender = sender()
      val rCxt = remoteActor ? TaskFromLocal()
      rCxt.onComplete {
        case Success(Rcvdcxt(cxtA)) =>
          currentSender ! Rcvdcxt(cxtA)
        ...
      }
  }
}
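For reference, here is the same rule in a self-contained form (a hypothetical SafeReplyActor, added for illustration and not taken from the thread): sender() is captured on the calling thread, and only the captured reference is used inside the Future callback.

import akka.actor.{ Actor, Status }
import scala.concurrent.Future
import scala.util.{ Failure, Success }

class SafeReplyActor extends Actor {
  import context.dispatcher

  def receive = {
    case request: String =>
      // Read sender() here; by the time the Future completes the actor may be
      // processing another message, so sender() could point at deadLetters.
      val currentSender = sender()
      Future(request.toUpperCase).onComplete {
        case Success(result) => currentSender ! result
        case Failure(ex)     => currentSender ! Status.Failure(ex)
      }
  }
}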

akka-http: send element to akka sink from http route

How can I send elements/messages to an Akka Sink from an Akka HTTP route? My HTTP route still needs to return a normal HTTP response.
I imagine this requires a stream branch/junction. The normal HTTP routes are flows from HttpRequest -> HttpResponse. I would like to add a branch/junction so that HttpRequests can trigger events to my separate sink as well as generate the normal HttpResponse.
Below is a very simple single-route akka-http app. For simplicity, I'm using a simple println sink. My production use case will obviously involve a less trivial sink.
def main(args: Array[String]): Unit = {
implicit val actorSystem = ActorSystem("my-akka-http-test")
val executor = actorSystem.dispatcher
implicit val materializer = ActorMaterializer()(actorSystem)
// I would like to send elements to this sink in response to HTTP GET operations.
val sink: Sink[Any, Future[Done]] = Sink.foreach(println)
val route: akka.http.scaladsl.server.Route =
path("hello" / Segment) { p =>
get {
// I'd like to send a message to an Akka Sink as well as return an HTTP response.
complete {
s"<h1>Say hello to akka-http. p=$p</h1>"
}
}
}
val httpExt: akka.http.scaladsl.HttpExt = Http(actorSystem)
val bindingFuture = httpExt.bindAndHandle(RouteResult.route2HandlerFlow(route), "localhost", 8080)
println("Server online at http://localhost:8080/")
println("Press RETURN to stop...")
scala.io.StdIn.readLine()
bindingFuture
.flatMap(_.unbind())(executor) // trigger unbinding from the port
.onComplete(_ => Await.result(actorSystem.terminate(), Duration.Inf))(executor) // and shutdown when done
}
EDIT: Or, using the low-level akka-http API, how could I send specific messages to a sink from a specific route handler?
def main(args: Array[String]): Unit = {
implicit val actorSystem = ActorSystem("my-akka-http-test")
val executor = actorSystem.dispatcher
implicit val materializer = ActorMaterializer()(actorSystem)
// I would like to send elements to this sink in response to HTTP GET operations.
val sink: Sink[Any, Future[Done]] = Sink.foreach(println)
val requestHandler: HttpRequest => HttpResponse = {
case HttpRequest(GET, Uri.Path("/"), _, _, _) =>
HttpResponse(entity = HttpEntity(
ContentTypes.`text/html(UTF-8)`,
"<html><body>Hello world!</body></html>"))
case HttpRequest(GET, Uri.Path("/ping"), _, _, _) =>
HttpResponse(entity = "PONG!")
case HttpRequest(GET, Uri.Path("/crash"), _, _, _) =>
sys.error("BOOM!")
case r: HttpRequest =>
r.discardEntityBytes() // important to drain incoming HTTP Entity stream
HttpResponse(404, entity = "Unknown resource!")
}
val serverSource = Http().bind(interface = "localhost", port = 8080)
val bindingFuture: Future[Http.ServerBinding] =
serverSource.to(Sink.foreach { connection =>
println("Accepted new connection from " + connection.remoteAddress)
connection handleWithSyncHandler requestHandler
// this is equivalent to
// connection handleWith { Flow[HttpRequest] map requestHandler }
}).run()
println("Server online at http://localhost:8080/")
println("Press RETURN to stop...")
scala.io.StdIn.readLine()
bindingFuture
.flatMap(_.unbind())(executor) // trigger unbinding from the port
.onComplete(_ => Await.result(actorSystem.terminate(), Duration.Inf))(executor) // and shutdown when done
}
IF you want to send the whole HttpRequest to a sink of yours, I'd say the simplest way is to use the alsoTo combinator. The result would be something along the lines of
val mySink: Sink[HttpRequest, NotUsed] = ???
val handlerFlow = Flow[HttpRequest].alsoTo(mySink).via(RouteResult.route2HandlerFlow(route))
val bindingFuture = Http().bindAndHandle(handlerFlow, "localhost", 8080)
FYI: alsoTo in fact hides a Broadcast stage.
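To make that concrete, here is a rough sketch of what alsoTo expands to, using the mySink from above (illustrative only, not the actual library implementation):

import akka.http.scaladsl.model.HttpRequest
import akka.stream.FlowShape
import akka.stream.scaladsl.{ Broadcast, Flow, GraphDSL }

val alsoToMySink = Flow.fromGraph(GraphDSL.create(mySink) { implicit b => sink =>
  import GraphDSL.Implicits._
  // Every request is broadcast to both the side sink and the main flow output.
  val bcast = b.add(Broadcast[HttpRequest](2))
  bcast.out(1) ~> sink.in
  FlowShape(bcast.in, bcast.out(0))
})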
IF instead you need to selectively send a message to a Sink from a specific subroute, you have no other choice but to materialize a new flow for each incoming request. See example below
val sink: Sink[Any, Future[Done]] = Sink.foreach(println)
val route: akka.http.scaladsl.server.Route =
path("hello" / Segment) { p =>
get {
(extract(_.request) & extractMaterializer) { (req, mat) ⇒
Source.single(req).runWith(sink)(mat)
complete {
s"<h1>Say hello to akka-http. p=$p</h1>"
}
}
}
}
Also, keep in mind you can always ditch the high-level DSL completely and model your whole route using the lower-level streams DSL. This will result in more verbose code, but will give you full control of your stream materialization.
EDIT: example below
val sink: Sink[Any, Future[Done]] = Sink.foreach(println)
val handlerFlow =
Flow.fromGraph(GraphDSL.create() { implicit b =>
import GraphDSL.Implicits._
val partition = b.add(Partition[HttpRequest](2, {
case HttpRequest(GET, Uri.Path("/"), _, _, _) ⇒ 0
case _ ⇒ 1
}))
val merge = b.add(Merge[HttpResponse](2))
val happyPath = Flow[HttpRequest].map{ req ⇒
HttpResponse(entity = HttpEntity(
ContentTypes.`text/html(UTF-8)`,
"<html><body>Hello world!</body></html>"))
}
val unhappyPath = Flow[HttpRequest].map{
case HttpRequest(GET, Uri.Path("/ping"), _, _, _) =>
HttpResponse(entity = "PONG!")
case HttpRequest(GET, Uri.Path("/crash"), _, _, _) =>
sys.error("BOOM!")
case r: HttpRequest =>
r.discardEntityBytes() // important to drain incoming HTTP Entity stream
HttpResponse(404, entity = "Unknown resource!")
}
partition.out(0).alsoTo(sink) ~> happyPath ~> merge
partition.out(1) ~> unhappyPath ~> merge
FlowShape(partition.in, merge.out)
})
val bindingFuture = Http().bindAndHandle(handlerFlow, "localhost", 8080)
This is the solution I used that seems ideal. Akka Http seems like it's designed so that your routes are simple HttpRequest->HttpResponse flows and don't involve any extra branches.
Rather than build everything into a single Akka stream graph, I have a separate QueueSource->Sink graph, and the normal Akka Http HttpRequest->HttpResponse flow just adds elements to the source queue as needed.
object HttpWithSinkTest {
def buildQueueSourceGraph(): RunnableGraph[(SourceQueueWithComplete[String], Future[Done])] = {
val annotateMessage: Flow[String, String, NotUsed] = Flow.fromFunction[String, String](s => s"got message from queue: $s")
val sourceQueue = Source.queue[String](100, OverflowStrategy.dropNew)
val sink: Sink[String, Future[Done]] = Sink.foreach(println)
val annotatedSink = annotateMessage.toMat(sink)(Keep.right)
val queueGraph = sourceQueue.toMat(annotatedSink)(Keep.both)
queueGraph
}
def buildHttpFlow(queue: SourceQueueWithComplete[String],
actorSystem: ActorSystem, materializer: ActorMaterializer): Flow[HttpRequest, HttpResponse, NotUsed] = {
implicit val actorSystemI = actorSystem
implicit val materializerI = materializer
val route: akka.http.scaladsl.server.Route =
path("hello" / Segment) { p =>
get {
complete {
queue.offer(s"got http event p=$p")
s"<h1>Say hello to akka-http. p=$p</h1>"
}
}
}
val routeFlow = RouteResult.route2HandlerFlow(route)
routeFlow
}
def main(args: Array[String]): Unit = {
val actorSystem = ActorSystem("my-akka-http-test")
val executor = actorSystem.dispatcher
implicit val materializer = ActorMaterializer()(actorSystem)
val (queue, _) = buildQueueSourceGraph().run()(materializer)
val httpFlow = buildHttpFlow(queue, actorSystem, materializer)
val httpExt: akka.http.scaladsl.HttpExt = Http(actorSystem)
val bindingFuture = httpExt.bindAndHandle(httpFlow, "localhost", 8080)
println("Server online at http://localhost:8080/")
println("Press RETURN to stop...")
scala.io.StdIn.readLine()
println("Shutting down...")
val serverBinding = Await.result(bindingFuture, Duration.Inf)
Await.result(serverBinding.unbind(), Duration.Inf)
Await.result(actorSystem.terminate(), Duration.Inf)
println("Done. Exiting")
}
}
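One detail worth noting (an addition, not part of the original answer): queue.offer returns a Future[QueueOfferResult], so a production handler would usually check that the element was actually enqueued instead of discarding the result:

import akka.stream.QueueOfferResult

// needs an ExecutionContext in scope, e.g. import actorSystem.dispatcher
queue.offer(s"got http event p=$p").foreach {
  case QueueOfferResult.Enqueued => () // accepted by the buffer
  case other                     => println(s"event was not enqueued: $other")
}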

stopping akka stream Source created from file on reaching file's end

I've created an akka-stream source by reading contents from a file.
implicit val system = ActorSystem("reactive-process")
implicit val materializer = ActorMaterializer()
implicit val ec: ExecutionContextExecutor = system.dispatcher
case class RequestCaseClass(data: String)
def httpPoolFlow(host: String, port: String) = Http().cachedHostConnectionPool[RequestCaseClass](host, port)(httpMat)
def makeFileSource(filePath: String) = FileIO.fromFile(new File(filePath))
.via(Framing.delimiter(ByteString(System.lineSeparator), 10000, allowTruncation = false))
.map(_.utf8String)
def requestFramer = Flow.fromFunction((dataRow: String) =>
(getCmsRequest(RequestCaseClass(dataRow)), RequestCaseClass(dataRow))
).filter(_._1.isSuccess).map(z => (z._1.get, z._2))
def responseHandler = Flow.fromFunction(
(in: (Try[HttpResponse], RequestCaseClass)) => {
in._1 match {
case Success(r) => r.status.intValue() match {
case 200 =>
r.entity.dataBytes.runWith(Sink.ignore)
logger.info(s"Success response for: ${in._2.id}")
true
case _ =>
val b = Await.result(r.entity.dataBytes.runFold(ByteString.empty)(_ ++ _)(materializer)
.map(bb => new String(bb.toArray))(materializer.executionContext), 60.seconds)
logger.warn(s"Non-200 response for: ${in._2.id} r: $b")
false
}
case Failure(t) =>
logger.error(s"Failed completing request for: ${in._2.id} e: ${t.getMessage}")
false
}
}
Now, when using it in a runnable graph, I want the entire stream to stop once the file read by makeFileSource(fp) reaches its end.
makeFileSource(fp)
.via(requestFramer)
.via(httpPoolFlow)
.via(responseHandler)
.runWith(Sink.ignore)
The stream currently doesn't stop on reaching end of file.
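As an aside (added for illustration, not part of the original post): materializing the Future[Done] from Sink.ignore makes the stream's termination observable, which helps when checking whether the graph ever completes after the file source is exhausted. This reuses the graph composition, ec and system already defined above.

val done: Future[Done] =
  makeFileSource(fp)
    .via(requestFramer)
    .via(httpPoolFlow)
    .via(responseHandler)
    .runWith(Sink.ignore)

done.onComplete { result =>
  logger.info(s"stream terminated: $result")
  system.terminate()
}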

Do I need to create ActorMaterializer multiple times?

I have an app that generates reports, built with akka-http + akka-actors + akka-camel + akka-streams. When a POST request arrives, the ActiveMqProducerActor enqueues the request into the ActiveMQ broker. Then the ActiveMqConsumerActor consumes the message and starts the task using akka-streams (in this actor I need the materializer).
The main class creates the ActorSystem and the ActorMaterializer, but I don't know the correct way to "inject" the materializer into the actor:
object ReportGeneratorApplication extends App {
implicit val system: ActorSystem = ActorSystem()
implicit val executor = system.dispatcher
implicit val materializer = ActorMaterializer()
val camelExtension: Camel = CamelExtension(system);
val amqc: ActiveMQComponent = ActiveMQComponent.activeMQComponent(env.getString("jms.url"))
amqc.setUsePooledConnection(true)
amqc.setAsyncConsumer(true)
amqc.setTrustAllPackages(true)
amqc.setConcurrentConsumers(1)
camelExtension.context.addComponent("jms", amqc);
val jmsProducer: ActorRef = system.actorOf(Props[ActiveMQProducerActor])
//Is this the correct way to pass the materializer?
val jmsConsumer: ActorRef = system.actorOf(Props(new ActiveMQConsumerActor()(materializer)), name = "jmsConsumer")
val endpoint: ReportEndpoint = new ReportEndpoint(jmsProducer);
Http().bindAndHandle(endpoint.routes, "localhost", 8881)
}
The ReportEndpoint class, which holds the jmsProducer actor. Mongo is a trait with CRUD methods, and JsonSupport (== SprayJsonSupport):
class ReportEndpoint(jmsProducer: ActorRef)
(implicit val system:ActorSystem,
implicit val executor: ExecutionContext,
implicit val materializer : ActorMaterializer)
extends JsonSupport with Mongo {
val routes =
pathPrefix("reports"){
post {
path("generate"){
entity(as[DataRequest]) { request =>
val id = java.util.UUID.randomUUID.toString
// **Enqueue the request into ActiveMq**
jmsProducer ! request
val future: Future[Seq[Completed]] = insertReport(request)
complete {
future.map[ToResponseMarshallable](r => r.head match {
case r : Completed => println(r); s"Reporte Generado con id $id"
case _ => HttpResponse(StatusCodes.InternalServerError, entity = "Error al generar reporte")
})
}
}
}
} ....
The idea of ActiveMqConsumerActor is to send the messages, with streams and backpressure, one by one, because ReportBuilderActor performs many Mongo operations (and the datacenter is not very good).
//Is this the correct way to pass the materializer??
class ActiveMQConsumerActor (implicit materializer : ActorMaterializer) extends Consumer with Base {
override def endpointUri: String = env.getString("jms.queue")
val log = Logging(context.system, this)
val reportActor: ActorRef = context.actorOf(Props(new ReportBuilderActor()(materializer)), name = "reportActor")
override def receive: Receive = {
case msg: CamelMessage => msg.body match {
case data: DataRequest => {
//I need only one task running
Source.single(data).buffer(1, OverflowStrategy.backpressure).to(Sink.foreach(d => reportActor ! d)).run()
}
case _ => log.info("Invalid")
}
case _ => UnhandledMessage
}
}
Is it a good idea to have implicit values in companion objects?
Thanks!!
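For illustration only (a sketch added here, not an answer from the original thread): a common way to keep this wiring in one place is a props factory on the actor's companion object that closes over the implicit materializer, so the call site doesn't build Props by hand.

import akka.actor.{ ActorRef, Props }
import akka.stream.ActorMaterializer

object ActiveMQConsumerActor {
  // hypothetical factory; the actor class itself stays exactly as in the question
  def props(implicit materializer: ActorMaterializer): Props =
    Props(new ActiveMQConsumerActor())
}

// in ReportGeneratorApplication:
val jmsConsumer: ActorRef = system.actorOf(ActiveMQConsumerActor.props, name = "jmsConsumer")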

How do you throttle Flow in the latest Akka (2.4.6)?

How do you throttle a Flow in the latest Akka (2.4.6)? I'd like to throttle my Http client flow to limit the number of requests to 3 per second. I found the following example online, but it's for an older Akka version and the akka-streams API has changed so much that I can't figure out how to rewrite it.
def throttled[T](rate: FiniteDuration): Flow[T, T] = {
val tickSource: Source[Unit] = TickSource(rate, rate, () => ())
val zip = Zip[T, Unit]
val in = UndefinedSource[T]
val out = UndefinedSink[T]
PartialFlowGraph { implicit builder =>
import FlowGraphImplicits._
in ~> zip.left ~> Flow[(T, Unit)].map { case (t, _) => t } ~> out
tickSource ~> zip.right
}.toFlow(in, out)
}
Here is my best attempt so far
def throttleFlow[T](rate: FiniteDuration) = Flow.fromGraph(GraphDSL.create() { implicit builder =>
import GraphDSL.Implicits._
val ticker = Source.tick(rate, rate, Unit)
val zip = builder.add(Zip[T, Unit.type])
val map = Flow[(T, Unit.type)].map { case (value, _) => value }
val messageExtractor = builder.add(map)
val in = Inlet[T]("Req.in")
val out = Outlet[T]("Req.out")
out ~> zip.in0
ticker ~> zip.in1
zip.out ~> messageExtractor.in
FlowShape.of(in, messageExtractor.out)
})
It throws an exception in my main flow though :)
private val queueHttp = Source.queue[(HttpRequest, (Any, Promise[(Try[HttpResponse], Any)]))](1000, OverflowStrategy.backpressure)
.via(throttleFlow(rate))
.via(poolClientFlow)
.mapAsync(4) {
case (util.Success(resp), any) =>
val strictFut = resp.entity.toStrict(5 seconds)
strictFut.map(ent => (util.Success(resp.copy(entity = ent)), any))
case other =>
Future.successful(other)
}
.toMat(Sink.foreach({
case (triedResp, (value: Any, p: Promise[(Try[HttpResponse], Any)])) =>
p.success(triedResp -> value)
case _ =>
throw new RuntimeException()
}))(Keep.left)
.run
where poolClientFlow is Http()(system).cachedHostConnectionPool[Any](baseDomain)
Exception is:
Caused by: java.lang.IllegalArgumentException: requirement failed: The output port [Req.out] is not part of the underlying graph.
at scala.Predef$.require(Predef.scala:219)
at akka.stream.impl.StreamLayout$Module$class.wire(StreamLayout.scala:204)
Here is an attempt that uses the throttle method as mentioned by @Qingwei. The key is to not use bindAndHandle(), but to use bind() and throttle the flow of incoming connections before handling them. The code is taken from the implementation of bindAndHandle(), but leaves out some error handling for simplicity. Please don't do that in production.
implicit val system = ActorSystem("test")
implicit val mat = ActorMaterializer()
import system.dispatcher
val maxConcurrentConnections = 4
val handler: Flow[HttpRequest, HttpResponse, NotUsed] = complete(LocalDateTime.now().toString)
def handleOneConnection(incomingConnection: IncomingConnection): Future[Done] =
incomingConnection.flow
.watchTermination()(Keep.right)
.joinMat(handler)(Keep.left)
.run()
Http().bind("127.0.0.1", 8080)
.throttle(3, 1.second, 1, ThrottleMode.Shaping)
.mapAsyncUnordered(maxConcurrentConnections)(handleOneConnection)
.to(Sink.ignore)
.run()
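For the client-side throttling that the question originally asks about (3 outgoing requests per second), the same throttle operator can replace the hand-built GraphDSL stage. A sketch, assuming the queueHttp/poolClientFlow setup from the question:

import scala.concurrent.duration._
import akka.NotUsed
import akka.stream.ThrottleMode
import akka.stream.scaladsl.Flow

// Emits at most 3 elements per second and backpressures the queue otherwise,
// which is what the Zip-with-tick construction was emulating.
def throttleFlow[T]: Flow[T, T, NotUsed] =
  Flow[T].throttle(elements = 3, per = 1.second, maximumBurst = 1, mode = ThrottleMode.Shaping)

// Plugged in where the original attempt sat:
//   Source.queue[...](1000, OverflowStrategy.backpressure)
//     .via(throttleFlow)
//     .via(poolClientFlow)
//     ...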