As websocket connections increase app starts to hang - scala

I was just invited to look at an issue in a third-party application where the behavior is that, as more clients connect (using WebSockets), the app hangs after a certain number of connections. I am trying to get more information and better access to the codebase, but below is what I have right now, which looks like a standard code flow. Are there any gotchas to keep in mind when Play, Akka, and WebSockets are in the mix? I will post more info as it becomes available.
The controller has:
def service = WebSocket.async[JsValue] { request =>
  Service.createConnection
}
Service.createConnection looks like:
def createConnection: Future[(Iteratee[JsValue, _], Enumerator[JsValue])] = {
  // a new actor is created for every incoming connection
  val serviceActor = Akka.system.actorOf(Props[ServiceActor])
  val socket_id = UUID.randomUUID().toString
  val (enumerator, mChannel) = Concurrent.broadcast[JsValue]
  // ask (?) needs an implicit akka.util.Timeout and an ExecutionContext in scope
  (serviceActor ? Connect(socket_id, mChannel)).map {
    ...........
  }
}
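One gotcha worth ruling out with this flow (a hedged sketch only, since the rest of the codebase isn't visible): the per-connection actor created with actorOf is never stopped here, so every closed WebSocket can leave an actor (and its broadcast channel) behind, and resource use grows with the number of connections ever opened. The usual Play 2.x pattern is to tie the actor's lifetime to the Iteratee's completion; the message-forwarding body below is an assumption about what the elided part might look like:
def createConnection: Future[(Iteratee[JsValue, _], Enumerator[JsValue])] = {
  val serviceActor = Akka.system.actorOf(Props[ServiceActor])
  val socket_id = UUID.randomUUID().toString
  val (enumerator, mChannel) = Concurrent.broadcast[JsValue]

  (serviceActor ? Connect(socket_id, mChannel)).map { _ =>
    // hypothetical inbound handler: forward client messages to the actor
    val in = Iteratee.foreach[JsValue] { msg =>
      serviceActor ! msg
    }.map { _ =>
      // runs when the client disconnects: free the actor and close the channel
      Akka.system.stop(serviceActor)
      mChannel.eofAndEnd()
    }
    (in, enumerator)
  }
}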

Related

Stop Akka stream Source when web socket connection is closed by the client

I have an akka-http WebSocket Route with code similar to:
private val wsReader: Route =
  path("v1" / "data" / "ws") {
    log.info("Opening websocket connecting ...")
    val testSource = Source
      .repeat("Hello")
      .throttle(1, 1.seconds)
      .map(x => {
        println(x)
        x
      })
      .map(TextMessage.Strict)
      .limit(1000)
    extractUpgradeToWebSocket { upgrade ⇒
      complete(upgrade.handleMessagesWithSinkSource(Sink.ignore, testSource))
    }
  }
Everything works fine (one test message is received every second on the client side). The only problem is that I don't understand how to stop/close the Source (testSource) when the client closes the WebSocket connection.
You can see that the source continues to produce elements (see the println) even after the WebSocket is down.
How can I detect a client disconnection?
handleMessagesWithSinkSource is implemented as:
/**
 * The high-level interface to create a WebSocket server based on "messages".
 *
 * Returns a response to return in a request handler that will signal the
 * low-level HTTP implementation to upgrade the connection to WebSocket and
 * use the supplied inSink to consume messages received from the client and
 * the supplied outSource to produce message to sent to the client.
 *
 * Optionally, a subprotocol out of the ones requested by the client can be chosen.
 */
def handleMessagesWithSinkSource(
    inSink:      Graph[SinkShape[Message], Any],
    outSource:   Graph[SourceShape[Message], Any],
    subprotocol: Option[String] = None): HttpResponse =
  handleMessages(Flow.fromSinkAndSource(inSink, outSource), subprotocol)
This means the sink and the source are independent, and indeed the source should keep producing elements even when the client closes the incoming side of the connection. It should stop when the client resets the connection completely, though.
To stop producing outgoing data as soon as the incoming connection is closed, you may use Flow.fromSinkAndSourceCoupled, so:
val socket = upgrade.handleMessages(
  Flow.fromSinkAndSourceCoupled(inSink, outSource),
  subprotocol = None
)
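For reference, here is a sketch of the question's route rewritten with the coupled variant (testSource as defined in the question; the name wsReaderCoupled is just for illustration):
private val wsReaderCoupled: Route =
  path("v1" / "data" / "ws") {
    extractUpgradeToWebSocket { upgrade =>
      complete(
        upgrade.handleMessages(
          Flow.fromSinkAndSourceCoupled(Sink.ignore, testSource),
          subprotocol = None
        )
      )
    }
  }
With the coupled flow, completion or cancellation on either side (the client closing the WebSocket, or testSource finishing its 1000 elements) terminates the other side as well.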
One way is to use KillSwitches to handle testSource shutdown.
private val wsReader: Route =
  path("v1" / "data" / "ws") {
    logger.info("Opening websocket connecting ...")

    val sharedKillSwitch = KillSwitches.shared("my-kill-switch")

    val testSource =
      Source
        .repeat("Hello")
        .throttle(1, 1.seconds)
        .map(x => {
          println(x)
          x
        })
        .map(TextMessage.Strict)
        .limit(1000)
        .via(sharedKillSwitch.flow)

    extractUpgradeToWebSocket { upgrade ⇒
      val inSink = Sink.onComplete(_ => sharedKillSwitch.shutdown())
      val outSource = testSource
      val socket = upgrade.handleMessagesWithSinkSource(inSink, outSource)
      complete(socket)
    }
  }

Terminate Akka-Http Web Socket connection asynchronously

WebSocket connections in Akka HTTP are treated as an Akka Streams Flow. This seems to work great for basic request-reply, but it gets more complex when messages should also be pushed out over the WebSocket. The core of my server looks kind of like:
lazy val authSuccessMessage = Source.fromFuture(someApiCall)

lazy val messageFlow = requestResponseFlow
  .merge(updateBroadcastEventSource)

lazy val handler = codec
  .atop(authGate(authSuccessMessage))
  .join(messageFlow)

handleWebSocketMessages {
  handler
}
Here, codec is a (de)serialization BidiFlow and authGate is a BidiFlow that processes an authorization message and prevents outflow of any messages until authorization succeeds. Upon success, it sends authSuccessMessage as a reply. requestResponseFlow is the standard request-reply pattern, and updateBroadcastEventSource mixes in async push messages.
I want to be able to send an error message and terminate the connection gracefully in certain situations, such as a bad authorization, someApiCall failing, or a bad request processed by requestResponseFlow. So basically, it seems like I want to be able to asynchronously complete messageFlow with one final message, even though its other constituent flows are still alive.
Figured out how to do this using a KillSwitch.
Updated version
The old version had the problem that it didn't seem to work when triggered by a BidiFlow stage higher up in the stack (such as my authGate). I'm not sure exactly why, but modeling the shutoff as a BidiFlow itself, placed further up the stack, resolved the issue.
val shutoffPromise = Promise[Option[OutgoingWebsocketEvent]]()

/**
 * Shutoff valve for the connection. It is triggered when `shutoffPromise`
 * completes, and sends a final optional termination message if that
 * promise resolves with one.
 */
val shutoffBidi = {
  val terminationMessageSource = Source
    .maybe[OutgoingWebsocketEvent]
    .mapMaterializedValue(_.completeWith(shutoffPromise.future))

  val terminationMessageBidi = BidiFlow.fromFlows(
    Flow[IncomingWebsocketEventOrAuthorize],
    Flow[OutgoingWebsocketEvent].merge(terminationMessageSource)
  )

  val terminator = BidiFlow
    .fromGraph(KillSwitches.singleBidi[IncomingWebsocketEventOrAuthorize, OutgoingWebsocketEvent])
    .mapMaterializedValue { killSwitch =>
      shutoffPromise.future.foreach { _ => println("Shutting down connection"); killSwitch.shutdown() }
    }

  terminationMessageBidi.atop(terminator)
}
Then I apply it just inside the codec:
val handler = codec
  .atop(shutoffBidi)
  .atop(authGate(authSuccessMessage))
  .join(messageFlow)
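For completeness, a hypothetical trigger (not shown in the original answer): the shutoff is driven entirely by completing shutoffPromise, optionally with a final message to emit before the kill switch fires.
// errorEvent is a hypothetical OutgoingWebsocketEvent built for the failure case
shutoffPromise.trySuccess(Some(errorEvent)) // send one last message, then shut down
// shutoffPromise.trySuccess(None)          // or terminate without a final message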
Old version
val shutoffPromise = Promise[Option[OutgoingWebsocketEvent]]()

/**
 * Shutoff valve for the flow of outgoing messages. It is triggered when
 * `shutoffPromise` completes, and sends a final optional termination
 * message if that promise resolves with one.
 */
val shutoffFlow = {
  val terminationMessageSource = Source
    .maybe[OutgoingWebsocketEvent]
    .mapMaterializedValue(_.completeWith(shutoffPromise.future))

  Flow
    .fromGraph(KillSwitches.single[OutgoingWebsocketEvent])
    .mapMaterializedValue { killSwitch =>
      shutoffPromise.future.foreach(_ => killSwitch.shutdown())
    }
    .merge(terminationMessageSource)
}
Then handler looks like:
val handler = codec
  .atop(authGate(authSuccessMessage))
  .join(messageFlow via shutoffFlow)

How to use Flink streaming to process Data stream of Complex Protocols

I'm using Flink streaming to handle data traffic logs in a 3G network (GPRS Tunnelling Protocol), and I'm having trouble synthesizing the information that belongs to a single user session.
For example: how do I map the start and end of one session? Is Flink streaming suited to handling complex protocols like that?
P.S.:
We capture data exchanged between the SGSN and GGSN in a 3G network (the GTP protocol with GTP-C/U messages). A session is started when the SGSN sends a CreateReq(TEID, Seq, IMSI, TEID_dl, TEID_data_dl) message and the GGSN responds with a CreateRsp(TEID_dl, Seq, TEID_ul, TEID_data_ul) message.
After the session is established, other GTP-C messages (e.g. UpdateReq, DeleteReq) sent from the SGSN to the GGSN use TEID_ul and the response messages use TEID_dl; GTP-U messages use TEID_data_ul (SGSN -> GGSN) and TEID_data_dl (GGSN -> SGSN). GTP-U messages contain information such as AppID (facebook, twitter, web), url, ...
Finally, I want to handle the continuous log data stream and correlate the GTP-C and GTP-U messages of the same user (IMSI) to produce a report.
I've tried this:
val sessions = createReqs.connect(createRsps).flatMap(new CoFlatMapFunction[CreateReq, CreateRsp, Session] {
  // holds CreateReqs indexed by (teid_dl, seq)
  private val createReqs = mutable.HashMap.empty[(String, String), CreateReq]
  // holds CreateRsps indexed by (teid, seq)
  private val createRsps = mutable.HashMap.empty[(String, String), CreateRsp]

  override def flatMap1(req: CreateReq, out: Collector[Session]): Unit = {
    val key = (req.teid_dl, req.header.seqNum)
    val oRsp = createRsps.get(key)
    if (!oRsp.isEmpty) {
      val rsp = oRsp.get
      println("OK")
      out.collect(new Session(rsp.header.time, req.imsi, req.teid_dl, req.teid_ddl, rsp.teid_upl, rsp.teid_dupl, req.rat, req.apn))
      createRsps.remove(key)
    } else {
      createReqs.put(key, req)
    }
  }

  override def flatMap2(rsp: CreateRsp, out: Collector[Session]): Unit = {
    val key = (rsp.header.teid, rsp.header.seqNum)
    val oReq = createReqs.get(key)
    if (!oReq.isEmpty) {
      val req = oReq.get
      out.collect(new Session(rsp.header.time, req.imsi, req.teid_dl, req.teid_ddl, rsp.teid_upl, rsp.teid_dupl, req.rat, req.apn))
      createReqs.remove(key)
    } else {
      createRsps.put(key, rsp)
    }
  }
}).print()
This code always returns an empty result, even though the input stream contains CreateReq and CreateRsp messages of the same session, and they appear very close together (within 1 second). When I debug, oReq.isEmpty == true every time.
What am I doing wrong?
To be honest it is a bit difficult to see through the telco specifics here, but if I understand correctly you have at least 3 streams, the first two being the CreateReq and the CreateRsp streams.
To detect the establishment of a session I would use the ConnectedDataStream abstraction to share state between the two aforementioned streams. Check out this example for usage or the related Flink docs.
Is this what you are trying to achieve?
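Below is a minimal, self-contained sketch of that pattern with simplified stand-in types (Req, Rsp, Session here are placeholders, not the asker's classes): key both streams on the fields used for matching so a request and its response are handled by the same parallel instance, and buffer the unmatched side in keyed state.
import org.apache.flink.api.common.state.{ValueState, ValueStateDescriptor}
import org.apache.flink.streaming.api.functions.co.RichCoFlatMapFunction
import org.apache.flink.streaming.api.scala._
import org.apache.flink.util.Collector

case class Req(key: String, imsi: String)
case class Rsp(key: String, teidUl: String)
case class Session(imsi: String, teidUl: String)

class Matcher extends RichCoFlatMapFunction[Req, Rsp, Session] {
  // keyed state: at most one pending Req/Rsp per key
  lazy val pendingReq: ValueState[Req] =
    getRuntimeContext.getState(new ValueStateDescriptor("pendingReq", classOf[Req]))
  lazy val pendingRsp: ValueState[Rsp] =
    getRuntimeContext.getState(new ValueStateDescriptor("pendingRsp", classOf[Rsp]))

  override def flatMap1(req: Req, out: Collector[Session]): Unit = {
    val rsp = pendingRsp.value()
    if (rsp != null) { out.collect(Session(req.imsi, rsp.teidUl)); pendingRsp.clear() }
    else pendingReq.update(req)
  }

  override def flatMap2(rsp: Rsp, out: Collector[Session]): Unit = {
    val req = pendingReq.value()
    if (req != null) { out.collect(Session(req.imsi, rsp.teidUl)); pendingReq.clear() }
    else pendingRsp.update(rsp)
  }
}

// usage: key both streams on the same matching key before connecting them
// reqs.keyBy(_.key).connect(rsps.keyBy(_.key)).flatMap(new Matcher).print()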

scalaz-stream how to implement `ask-then-wait-reply` tcp client

I want to implement a client app that first sends a request to a server and then waits for its reply (similar to HTTP).
My client process might be:
val topic = async.topic[ByteVector]
val client = topic.subscribe
Here is the API:
trait Client {
  val incoming = tcp.connect(...)(client)
  val reqBus = topic.publish

  def ask(req: ByteVector): Task[Throwable \/ ByteVector] = {
    (tcp.writes(req).flatMap(_ => tcp.reads(1024))).to(reqBus)
    ???
  }
}
Then, how do I implement the remaining part of ask?
Usually, the implementation is done by publishing the message via a sink and then awaiting some sort of reply on some source, like your topic.
We actually have a lot of idioms like this in our code:
def reqRply[I, O, O2](src: Process[Task, I], sink: Sink[Task, I], reply: Process[Task, O])(pf: PartialFunction[O, O2]): Process[Task, O2] = {
  merge.mergeN(Process(reply, (src to sink).drain)).collectFirst(pf)
}
Essentially, this first hooks up to the reply stream to await any resulting O confirming that our request was sent. Then we publish the message I and consult pf for any incoming O to be eventually translated to an O2, and then terminate.
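A hypothetical usage sketch of the idiom above with the topic from the question (requestBytes is a placeholder; the partial function just takes the first reply as-is):
// publish the request via the topic's sink and take the first reply seen on the
// subscription; runLast yields None if the reply stream terminates without one
def askOnce(requestBytes: ByteVector): Task[Option[ByteVector]] =
  reqRply(Process.emit(requestBytes), topic.publish, topic.subscribe) {
    case reply => reply
  }.runLast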

Scala Remote Actors stop client from terminating

I am writing a simple chat server, and I want to keep it as simple as possible. My server, listed below, only receives connections and stores them in the clients set. Incoming messages are then broadcast to all clients on that server. The server works with no problem, but on the client side the RemoteActor stops my program from terminating. Is there a way to remove the actor on my client without terminating the actor on the server?
I don't want to use a "one actor per client" model yet.
import actors.{Actor, OutputChannel}
import actors.remote.RemoteActor

object Server extends Actor {
  val clients = new collection.mutable.HashSet[OutputChannel[Any]]

  def act {
    loop {
      react {
        case 'Connect =>
          clients += sender
        case 'Disconnect =>
          clients -= sender
        case message: String =>
          for (client <- clients)
            client ! message
      }
    }
  }

  def main(args: Array[String]) {
    start
    RemoteActor.alive(9999)
    RemoteActor.register('server, this)
  }
}
My client would then look like this:
val server = RemoteActor.select(Node("localhost", 9999), 'server)
server.send('Connect, messageHandler) // answers will be redirected to the messageHandler
/* do something until quit */
server ! 'Disconnect
I would suggest placing the client-side code into an actor itself, i.e. not calling alive/register in the main thread (implied by http://www.scala-lang.org/api/current/scala/actors/remote/RemoteActor$.html).
Something like:
// body of your main:
val client = actor {
  alive(..)
  register(...)
  loop {
    receive {
      case 'QUIT => exit()
    }
  }
}
client.start

// then to quit:
client ! 'QUIT
Or similar (sorry, I am not using 2.8 so I might have messed something up; feel free to edit if you make it actually work for you!).