I'm working on an application that has a dedicated actor to process incoming data over a WebSocket connection. What I am finding is that occasionally the WebSocket will close unexpectedly. I'm not sure why it is closing, but when it does close, I would like to attempt to reconnect. What is the correct way to do this in Akka HTTP? I've tried looking through the docs, but didn't have much luck. Here is what I have for setting up the WebSocket connection (taken mostly from the Akka HTTP docs):
val incoming: Sink[Message, Future[Done]] =
Sink.foreach[Message] {
case message: TextMessage.Strict =>
handleIncomingWSMessage(message.text)
case _ =>
// ignore other message types
}
val subscribeData =
"""{"action": "subscribe", "params": { "symbols": "QQQ" } }""".stripMargin
val subscribeSource = Source(List(TextMessage(subscribeData)))
//this sends one message on the outgoing websocket to subscribe, then waits
val flow: Flow[Message, Message, Promise[Option[Message]]] =
Flow.fromSinkAndSourceMat(
incoming,
subscribeSource.concatMat(Source.maybe[Message])(Keep.right)
)(Keep.right)
val (upgradeResponse, closed) =
Http(context.system).singleWebSocketRequest(
WebSocketRequest(
s"wss://ws.twelvedata.com/v1/quotes/price?apikey=$apiKey"
),
flow,
settings = getClientConnectionSettings()
)(mat)
val connected = upgradeResponse.flatMap { upgrade =>
if (upgrade.response.status == StatusCodes.SwitchingProtocols) {
Future.successful(Done)
} else {
throw new RuntimeException(s"Connection failed: ${upgrade.response.status}")
}
}
upgradeResponse.onComplete {
case Success(_) => {
logger.info("Websocket success.")
}
case Failure(t) => {
logger.error("Websocket failed.", t)
}
}
closed.future.map(o => {
logger.error("Closed.") // This is what I see when the websocket fails
})
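Not part of the original post, but for context: one way to get automatic reconnection is to wrap the client flow in a RestartSource. This sketch assumes Akka 2.6+ (where RestartSettings and RestartSource are available) and reuses incoming, subscribeSource, getClientConnectionSettings(), apiKey and mat from the code above; restartSettings and reconnectingSource are just illustrative names. Because the factory is re-invoked on every restart, the subscribe message is sent again after each reconnect.
import akka.NotUsed
import akka.stream.RestartSettings
import akka.stream.scaladsl.{RestartSource, Source}
import scala.concurrent.duration._

val restartSettings =
  RestartSettings(minBackoff = 1.second, maxBackoff = 30.seconds, randomFactor = 0.2)

val reconnectingSource: Source[Message, NotUsed] =
  RestartSource.withBackoff(restartSettings) { () =>
    subscribeSource
      .concat(Source.maybe[Message]) // keep the outgoing side open after subscribing
      .via(
        Http(context.system).webSocketClientFlow(
          WebSocketRequest(s"wss://ws.twelvedata.com/v1/quotes/price?apikey=$apiKey"),
          settings = getClientConnectionSettings()
        )
      )
  }

reconnectingSource.runWith(incoming)(mat)
The trade-off versus singleWebSocketRequest is that the upgrade response is not inspected here; a failed upgrade simply shows up as a stream failure and triggers the next backoff restart.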
I am using Akka HTTP. One of my routes interacts with an external service via the Akka HTTP client-side API, and the HttpRequest runs continuously; I am unable to make it work.
Here is my use case: I am interacting with a Janus server and doing a long-poll GET request. As soon as the server responds with a "keepalive" or an "event", I request again, and so on; the server keeps responding.
All of this is happening inside an actor, and I have an Akka HTTP route which initialises the first request.
Here is my code:
final case class CreateLongPollRequest(sessionId: BigInt)

class LongPollRequestActor(config: Config) extends Actor with ActorLogging {
  import context.dispatcher

  var senderRef: Option[ActorRef] = None

  def receive = {
    case CreateLongPollRequest(sessionId) =>
      senderRef = Some(sender())
      val uri: String = "localhost:8080/" + sessionId
      val request = HttpRequest(HttpMethods.GET, uri)
      val responseFuture = Http(context.system).singleRequest(request)
      responseFuture.onComplete {
        case Success(res) =>
          Unmarshal(res.entity.toStrict(40.seconds)).value.map { result =>
            val responseStr = result.data.utf8String
            log.info("Actor LongPollRequestActor: long poll responseStr {}", responseStr)
            senderRef match {
              case Some(ref) =>
                ref ! responseStr
              case None => log.info("Actor LongPollRequestActor: sender ref is null")
            }
          }
        case Failure(e) => log.error(e, "long poll request failed")
      }
  }
}
final case class JanusLongPollRequest(sessionId: BigInt)
class JanusManagerActor(childMaker: List[ActorRefFactory => ActorRef]) extends Actor {
var senderRef: Option[akka.actor.ActorRef] = None
val longPollRequestActor = childMaker(1)(context)
def receive: PartialFunction[Any, Unit] = {
case JanusLongPollRequest(sessionId)=>
senderRef = Some(sender)
keepAlive(sessionId,senderRef)
}
def keepAlive(sessionId:BigInt,sender__Ref: Option[ActorRef]):Unit= {
val senderRef = sender__Ref
val future = ask(longPollRequestActor, CreateLongPollRequest(sessionId)).mapTo[String] //.pipeTo(sender)
if (janus.equals("keepalive")) {
val janusRequestResponse = Future {
JanusSessionRequestResponse(janus = janus)
}
senderRef match {
case Some(sender_ref) =>
janusRequestResponse.pipeTo(sender_ref)
}
keepAlive(sessionId,senderRef)
}
else if (janus.equals("event")) {
//some fetching of values from server
val janusLongPollRequestResponse = Future {
JanusLongPollRequestResponse(janus = janus,sender=sender, transaction=transaction,pluginData=Some(pluginData))
}
senderRef match {
case Some(sender_ref) =>
janusLongPollRequestResponse.pipeTo(sender_ref)
}
keepAlive(sessionId,senderRef)
    }
  }
}

And here is the route that initialises the first request:
def createLongPollRequest: server.Route =
path("create-long-poll-request") {
post {
entity(as[JsValue]) {
json =>
val sessionID = json.asJsObject.fields("sessionID").convertTo[String]
val future = ask(janusManagerActor, JanusLongPollRequest(sessionID)).mapTo[JanusSessionRequestResponse]
onComplete(future) {
case Success(sessionDetails) =>
log.info("janus long poll request created")
val jsonResponse = JsObject("longpollDetails" -> sessionDetails.toJson)
complete(OK, routeResponseMessage.getResponse(StatusCodes.OK.intValue, ServerMessages.JANUS_SESSION_CREATED, jsonResponse))
case Failure(ex) =>
failWith(ex)
}
}
      }
    }
Now, the above route createLongPollRequest worked fine the first time and I could see the response, but for the next attempts I am getting a dead letter, as follows:
[INFO] [akkaDeadLetter][07/30/2021 12:13:53.587] [demo-Janus-ActorSystem-akka.actor.default-dispatcher-6] [akka://demo-Janus-ActorSystem/deadLetters] Message [com.ifkaar.lufz.janus.models.janus.JanusSessionRequestResponse] from Actor[akka://demo-Janus-ActorSystem/user/ActorManager/ManagerActor#-721316187] to Actor[akka://demo-Janus-ActorSystem/deadLetters] was not delivered. [4] dead letters encountered. If this is not an expected behavior then Actor[akka://demo-Janus-ActorSystem/deadLetters] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
Probably this is what causes the issue after the first iteration:
responseFuture.pipeTo(sender())
Is there a way I can get a response in my Akka HTTP route whenever my backend server responds?
The Actor should only reply once to the CreateLongPollRequest, and it should only do this when it has valid data. If the poll fails, the Actor should just issue another poll request.
It is difficult to give more help without the details of the Actor.
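For illustration only, a rough sketch of that shape, assuming Akka 2.6 (for Materializer(context)); PollResult, PollFailed and replyTo are made-up names, and the endpoint mirrors the one in the question. The idea: remember the original sender, keep polling on failure, and reply exactly once when valid data arrives.
import akka.actor.{Actor, ActorLogging, ActorRef}
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.{HttpMethods, HttpRequest}
import akka.pattern.pipe
import akka.stream.Materializer
import com.typesafe.config.Config
import scala.concurrent.duration._

// CreateLongPollRequest(sessionId: BigInt) is the message defined in the question
case class PollResult(body: String)
case class PollFailed(sessionId: BigInt)

class LongPollRequestActor(config: Config) extends Actor with ActorLogging {
  import context.dispatcher
  private implicit val mat: Materializer = Materializer(context)

  private var replyTo: Option[ActorRef] = None

  def receive: Receive = {
    case CreateLongPollRequest(sessionId) =>
      replyTo = Some(sender())
      poll(sessionId)

    case PollResult(body) =>
      replyTo.foreach(_ ! body) // reply exactly once, with valid data
      replyTo = None

    case PollFailed(sessionId) =>
      log.info("poll failed, retrying")
      poll(sessionId) // just issue another poll request
  }

  private def poll(sessionId: BigInt): Unit =
    Http(context.system)
      .singleRequest(HttpRequest(HttpMethods.GET, "http://localhost:8080/" + sessionId))
      .flatMap(_.entity.toStrict(40.seconds))
      .map(strict => PollResult(strict.data.utf8String))
      .recover { case _ => PollFailed(sessionId) }
      .pipeTo(self)
}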
How can we consume SSE in the Scala Play framework? Most of the resources that I could find were about making an SSE source. I want to reliably listen to SSE events from other services (with autoconnect). The most relevant article was https://doc.akka.io/docs/alpakka/current/sse.html. I implemented this, but it does not seem to work (code below). Also the event that I am su
@Singleton
class SseConsumer @Inject()(implicit ec: ExecutionContext) {
implicit val system = ActorSystem()
val send: HttpRequest => Future[HttpResponse] = foo
def foo(x:HttpRequest) = {
try {
println("foo")
val authHeader = Authorization(BasicHttpCredentials("user", "pass"))
val newHeaders = x.withHeaders(authHeader)
Http().singleRequest(newHeaders)
}catch {
case e:Exception => {
println("Exception", e.printStackTrace())
throw e
}
}
}
val eventSource: Source[ServerSentEvent, NotUsed] =
EventSource(
uri = Uri("https://abc/v1/events"),
send,
initialLastEventId = Some("2"),
retryDelay = 1.second
)
def orderStatusEventStable() = {
val events: Future[immutable.Seq[ServerSentEvent]] =
eventSource
.throttle(elements = 1, per = 500.milliseconds, maximumBurst = 1, ThrottleMode.Shaping)
.take(10)
.runWith(Sink.seq)
events.map(_.foreach( x => {
println("456")
println(x.data)
}))
}
Future {
blocking{
while(true){
try{
Thread.sleep(2000)
orderStatusEventStable()
} catch {
case e:Exception => {
println("Exception", e.printStackTrace())
}
}
}
}
}
}
This does not give any exceptions and println("456") is never printed.
EDIT:
Future {
blocking {
while(true){
try{
Await.result(orderStatusEventStable() recover {
case e: Exception => {
println("exception", e)
throw e
}
}, Duration.Inf)
} catch {
case e:Exception => {
println("Exception", e.printStackTrace())
}
}
}
}
}
I added an Await and it started working; I am able to read 10 messages at a time. But now I am faced with another problem.
I have a producer which can at times produce faster than I can consume, and with this code I have two issues:
1. I have to wait until 10 messages are available. How can we take a maximum of 10 and a minimum of 0 messages?
2. When the production rate > consumption rate, I am missing a few events. I am guessing this is due to throttling. How do we handle it using backpressure?
The issue in your code is that the events: Future would only complete when the stream (eventSource) completes.
I'm not familiar with SSE but the stream likely never completes in your case as it's always listening for new events.
You can learn more in the Akka Streams documentation.
Depending on what you want to do with the events, you could just map on the stream like:
eventSource
...
.map(/* do something */)
.runWith(...)
Basically, you need to work with the Akka Stream Source as data is going through it but don't wait for its completion.
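For example, a minimal sketch along those lines, reusing eventSource, the implicit ActorSystem and the implicit ExecutionContext from the question (the throttle values are just the ones already used there):
import akka.Done
import akka.stream.ThrottleMode
import akka.stream.scaladsl.Sink
import scala.concurrent.Future
import scala.concurrent.duration._
import scala.util.{Failure, Success}

val done: Future[Done] =
  eventSource
    .throttle(elements = 1, per = 500.milliseconds, maximumBurst = 1, ThrottleMode.Shaping)
    .runWith(Sink.foreach { event =>
      // handle each ServerSentEvent as it arrives; the stream itself keeps running
      println(event.data)
    })

// this only completes if the stream ever terminates, e.g. on failure
done.onComplete {
  case Success(_) => println("SSE stream completed")
  case Failure(e) => println(s"SSE stream failed: $e")
}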
EDIT: I didn't notice the take(10); my answer applies only if the take were not there. Your code should work after 10 events have been sent.
Server code :
object EchoService {
def route: Route = path("ws-echo") {
get {
handleWebSocketMessages(flow)
}
} ~ path("send-client") {
get {
sourceQueue.map(q => {
println(s"Offering message from server")
q.offer(BinaryMessage(ByteString("ta ta")))
} )
complete("Sent from server successfully")
}
}
val (source, sourceQueue) = {
val p = Promise[SourceQueue[Message]]
val s = Source.queue[Message](100, OverflowStrategy.backpressure).mapMaterializedValue(m => {
p.trySuccess(m)
m
})
(s, p.future)
}
val flow =
Flow.fromSinkAndSourceMat(Sink.ignore, source)(Keep.right)
}
Client Code :
object Client extends App {
implicit val actorSystem = ActorSystem("akka-system")
implicit val flowMaterializer = ActorMaterializer()
val config = actorSystem.settings.config
val interface = config.getString("app.interface")
val port = config.getInt("app.port")
// print each incoming strict text message
val printSink: Sink[Message, Future[Done]] =
Sink.foreach {
case message: TextMessage.Strict =>
println(message.text)
case _ => println(s"received unknown message format")
}
val (source, sourceQueue) = {
val p = Promise[SourceQueue[Message]]
val s = Source.queue[Message](100, OverflowStrategy.backpressure).mapMaterializedValue(m => {
p.trySuccess(m)
m
})
(s, p.future)
}
val flow =
Flow.fromSinkAndSourceMat(printSink, source)(Keep.right)
val (upgradeResponse, sourceClosed) =
Http().singleWebSocketRequest(WebSocketRequest("ws://localhost:8080/ws-echo"), flow)
val connected = upgradeResponse.map { upgrade =>
// just like a regular http request we can get 404 NotFound,
// with a response body, that will be available from upgrade.response
if (upgrade.response.status == StatusCodes.SwitchingProtocols || upgrade.response.status == StatusCodes.OK ) {
Done
} else {
throw new RuntimeException(s"Connection failed: ${upgrade.response.status}")
}
}
connected.onComplete(println)
}
When I hit http://localhost:8080/send-client I see messages coming to the client, but after a while, if I try to send to the client again, I don't see any messages on the client side. I also tried source.concatMat(Source.maybe)(Keep.right), but no luck.
Edit: I tested with a JS client; somehow the connection/flow gets closed on the server end. Is there any way to prevent this, and how can I listen for this event while using the Akka HTTP WebSocket client?
Hi,

The reason why it does not stay connected is that by default all HTTP connections have an idle timeout enabled, to keep the system from leaking connections if clients disappear without any signal.

One way to overcome this limitation (and actually my recommended approach) is to inject keep-alive messages on the client side (messages that the server otherwise ignores, but which inform the underlying HTTP server that the connection is still live).

You can override the idle timeout in the HTTP server configuration to a larger value, but I don't recommend that.

If you are using stream-based clients, injecting heartbeats when necessary is as simple as calling keepAlive and providing it a time interval and a factory for the message you want to inject:
http://doc.akka.io/api/akka/2.4.7/index.html#akka.stream.scaladsl.Flow#keepAliveU>:Out:FlowOps.this.Repr[U]

That combinator will make sure that no period longer than T stays silent, as it will inject elements to keep this contract if necessary (and will not inject anything when there is enough background traffic).

-Endre
Thank you Endre :), working snippet below:
// on client side
val (source, sourceQueue) = {
val p = Promise[SourceQueue[Message]]
val s = Source.queue[Message](Int.MaxValue, OverflowStrategy.backpressure).mapMaterializedValue(m => {
p.trySuccess(m)
m
}).keepAlive(FiniteDuration(1, TimeUnit.SECONDS), () => TextMessage.Strict("Heart Beat"))
(s, p.future)
}
I have the following flow:
val actorSource = Source.actorRef(10000, OverflowStrategy.dropHead)
val targetSink = Flow[ByteString]
.map(_.utf8String)
.via(new JsonStage())
.map { json =>
MqttMessages.jsonToObject(json)
}
.to(Sink.actorRef(self, "Done"))
sourceRef = Some(Flow[ByteString]
.via(conn.flow)
.to(targetSink)
.runWith(actorSource))
within an Actor (which is the Sink.actorRef one). The conn.flow is an incoming TCP Connection using Tcp().bind(address, port).
Currently the Sink.actorRef actor keeps running when the TCP connection is closed from the client side. Is there a way to register the client-side termination of the TCP connection and shut down the actor?
Edit:
I tried handling both cases as suggested:
case "Done" =>
context.stop(self)
case akka.actor.Status.Failure =>
context.stop(self)
But when I test with a socket client and cancel it, the actor is not shut down. So neither the "Done" message nor the Failure seems to be registered when the TCP connection is terminated.
Here is the whole code:
private var connection: Option[Tcp.IncomingConnection] = None
private var mqttpubsub: Option[ActorRef] = None
private var sourceRef: Option[ActorRef] = None
private val sdcTopic = "out"
private val actorSource = Source.actorRef(10000, OverflowStrategy.dropHead)
implicit private val system = context.system
implicit private val mat = ActorMaterializer.create(context.system)
override def receive: Receive = {
case conn: Tcp.IncomingConnection =>
connection = Some(conn)
mqttpubsub = Some(context.actorOf(Props(classOf[MqttPubSub], PSConfig(
brokerUrl = "tcp://127.0.0.1:1883", //all params is optional except brokerUrl
userName = null,
password = null,
//messages received when disconnected will be stash. Messages isOverdue after stashTimeToLive will be discard
stashTimeToLive = 1.minute,
stashCapacity = 100000, //stash messages will be drop first haft elems when reach this size
reconnectDelayMin = 10.millis, //for fine tuning re-connection logic
reconnectDelayMax = 30.seconds
))))
val targetSink = Flow[ByteString]
.alsoTo(Sink.foreach(println))
.map(_.utf8String)
.via(new JsonStage())
.map { json =>
MqttMessages.jsonToObject(json)
}
.to(Sink.actorRef(self, "Done"))
sourceRef = Some(Flow[ByteString]
.via(conn.flow)
.to(targetSink)
.runWith(actorSource))
case msg: MqttMessages.MqttMessage =>
processMessage(msg)
case msg: Message =>
val jsonMsg = JsonParser(msg.payload).asJsObject
val mqttMsg = MqttMessages.jsonToObject(jsonMsg)
try {
sourceRef.foreach(_ ! ByteString(msg.payload))
} catch {
case e: Throwable => e.printStackTrace()
}
case SubscribeAck(Subscribe(topic, self, qos), fail) =>
case "Done" =>
context.stop(self)
case akka.actor.Status.Failure =>
context.stop(self)
}
the Actor keeps running
Which actor do you mean, the one you've registered with Sink.actorRef? If yes, then to shut it down when the stream shuts down, you need to handle "Done" and akka.actor.Status.Failure messages in it and invoke context.stop(self) explicitly. "Done" message will be sent when the stream closes successfully, while Status.Failure will be sent if there is an error.
For more information see Sink.actorRef API docs, they explain the termination semantics.
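For illustration, a minimal sketch of that handling inside the actor that Sink.actorRef(self, "Done") points at. Note that the pattern Status.Failure(cause) matches a failure instance, whereas the bare Status.Failure pattern only matches the companion object:
import akka.actor.Status

def receive: Receive = {
  // ... the actor's other messages ...

  case "Done" =>
    // the stream completed normally (e.g. the TCP connection closed cleanly)
    context.stop(self)

  case Status.Failure(cause) =>
    // the stream failed with `cause`
    context.stop(self)
}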
I ended up creating another stage, which passes elements through but also emits an additional message to the next flow when the upstream closes:
class TcpStage extends GraphStage[FlowShape[ByteString, ByteString]] {
val in = Inlet[ByteString]("TCPStage.in")
val out = Outlet[ByteString]("TCPStage.out")
override val shape = FlowShape.of(in, out)
override def createLogic(inheritedAttributes: Attributes): GraphStageLogic = new GraphStageLogic(shape) {
setHandler(out, new OutHandler {
override def onPull(): Unit = {
if (isClosed(in)) emitDone()
else pull(in)
}
})
setHandler(in, new InHandler {
override def onPush(): Unit = {
push(out, grab(in))
}
override def onUpstreamFinish(): Unit = {
emitDone()
completeStage()
}
})
private def emitDone(): Unit = {
push(out, ByteString("{ }".getBytes("utf-8")))
}
}
}
Which I then use in my flow:
val targetSink = Flow[ByteString]
.via(new TcpStage())
.map(_.utf8String)
.via(new JsonStage())
.map { json =>
MqttMessages.jsonToObject(json)
}
.to(Sink.actorRef(self, MqttDone))
sourceRef = Some(Flow[ByteString]
.via(conn.flow)
.to(targetSink)
.runWith(actorSource))
I use Play 2.2.2 with Scala.
I have this code in my controller:
def wsTest = WebSocket.using[JsValue] {
implicit request =>
val (out, channel) = Concurrent.broadcast[JsValue]
val in = Iteratee.foreach[JsValue] {
msg => println(msg)
}
userAuthenticatorRequest.tracked match { //detecting whether the user is authenticated
case Some(u) =>
mySubscriber.start(u.id, channel)
case _ =>
channel push Json.toJson("{error: Sorry, you aren't authenticated yet}")
}
(in, out)
}
calling this code:
object MySubscriber {
def start(userId: String, channel: Concurrent.Channel[JsValue]) {
ctx.getBean(classOf[ActorSystem]).actorOf(Props(classOf[MySubscriber], Seq("comment"), channel), name = "mySubscriber") ! "start"
//a simple refresh would involve a duplication of this actor!
}
}
class MySubscriber(redisChannels: Seq[String], channel: Concurrent.Channel[JsValue]) extends RedisSubscriberActor(new InetSocketAddress("localhost", 6379), redisChannels, Nil) with ActorLogging {
def onMessage(message: Message) {
println(s"message received: $message")
channel.push(Json.parse(message.data))
}
override def onPMessage(pmessage: PMessage) {
//not used
println(s"message received: $pmessage")
}
}
The problem is that when the user refreshes the page, a new WebSocket is started, which results in a duplicate of the actor named mySubscriber.
I noticed that Play's Java API has a way to detect a closed connection, in order to shut down an actor.
Example:
// When the socket is closed.
in.onClose(new Callback0() {
public void invoke() {
// Shutdown the actor
defaultRoom.shutdown();
}
});
How to handle the same thing with the Scala WebSocket API? I want to close the actor each time the socket is closed.
As @Mik378 suggested, Iteratee.map serves the role of onClose.
val in = Iteratee.foreach[JsValue] {
msg => println(msg)
} map { _ =>
println("Connection has closed")
}
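A possible extension of that idea (not from the original answer): if MySubscriber.start were changed to return the ActorRef it creates, the same map callback could stop the per-connection actor, for example with a PoisonPill:
import akka.actor.{ActorRef, PoisonPill}

// assumes MySubscriber.start now returns the ActorRef it creates
val subscriber: ActorRef = MySubscriber.start(u.id, channel)

val in = Iteratee.foreach[JsValue] {
  msg => println(msg)
} map { _ =>
  // runs once the WebSocket input is closed
  println("Connection has closed")
  subscriber ! PoisonPill
}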