Simple server-push broadcast flow with Akka - scala

I am struggling to implement a - rather simple - Akka flow.
Here is what I think I need:
I have a single server and n clients and want to be able to react to external events by broadcasting messages (JSON) to the clients. The clients can register/deregister at any time.
So for example:
1 client is registered
server throws an event ("Hello World!")
server broadcasts a "Hello World!" to all clients (one client)
a new client opens a websocket connection
server throws another event ("Hello Akka!")
server broadcasts a "Hello Akka!" to all clients (two clients)
Here is what I have so far:
def route: Route = {
  val register = path("register") {
    // registration point for the clients
    handleWebSocketMessages(serverPushFlow)
  }
  register
}
// ...
def broadcast(msg: String): Unit = {
  // use the previously created flow to send messages to all clients
  // ???
}
// my broadcast sink to send messages to the clients
val broadcastSink: Sink[String, Source[String, NotUsed]] = BroadcastHub.sink[String]
// a source that emits simple strings
val simpleMsgSource = Source(Nil: List[String])
def serverPushFlow = {
  Flow[Message]
    .mapAsync(1) {
      case TextMessage.Strict(text)       => Future.successful(text)
      case streamed: TextMessage.Streamed => streamed.textStream.runFold("")(_ ++ _)
    }
    .via(Flow.fromSinkAndSource(broadcastSink, simpleMsgSource))
    .map[Message](string => TextMessage(string))
}

To use a BroadcastHub you work with two pieces: the Sink created by BroadcastHub.sink, into which you run your websocket TextMessages, and the Source it materializes when you run it, which you then connect to each client.
Here is the concept demonstrated in a simple runnable app:
import akka.NotUsed
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{BroadcastHub, Sink, Source}
import org.slf4j.LoggerFactory

import scala.concurrent.duration._

object BroadcastSink extends App {
  private val logger = LoggerFactory.getLogger("logger")

  implicit val actorSystem = ActorSystem()
  implicit val actorMaterializer = ActorMaterializer()

  val broadcastSink: Sink[String, Source[String, NotUsed]] =
    BroadcastHub.sink[String]

  val simpleMsgSource = Source.tick(500.millis, 500.millis, "Single Message")

  val sourceForClients: Source[String, NotUsed] = simpleMsgSource.runWith(broadcastSink)

  sourceForClients.to(Sink.foreach(t => logger.info(s"Client 1: $t"))).run()
  Thread.sleep(1000)
  sourceForClients.to(Sink.foreach(t => logger.info(s"Client 2: $t"))).run()
  Thread.sleep(1000)
  sourceForClients.to(Sink.foreach(t => logger.info(s"Client 3: $t"))).run()
  Thread.sleep(1000)
  sourceForClients.to(Sink.foreach(t => logger.info(s"Client 4: $t"))).run()
  Thread.sleep(1000)

  actorSystem.terminate()
}
Prints
10:52:01.774 Client 1: Single Message
10:52:02.273 Client 1: Single Message
10:52:02.273 Client 2: Single Message
10:52:02.773 Client 2: Single Message
10:52:02.773 Client 1: Single Message
10:52:03.272 Client 3: Single Message
10:52:03.272 Client 2: Single Message
10:52:03.272 Client 1: Single Message
10:52:03.772 Client 1: Single Message
10:52:03.772 Client 3: Single Message
10:52:03.773 Client 2: Single Message
10:52:04.272 Client 2: Single Message
10:52:04.272 Client 4: Single Message
10:52:04.272 Client 1: Single Message
10:52:04.273 Client 3: Single Message
10:52:04.772 Client 1: Single Message
10:52:04.772 Client 2: Single Message
10:52:04.772 Client 3: Single Message
10:52:04.772 Client 4: Single Message
10:52:05.271 Client 4: Single Message
10:52:05.271 Client 1: Single Message
10:52:05.271 Client 3: Single Message
10:52:05.272 Client 2: Single Message
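Note in the printed output that each client only starts receiving messages from the moment it attaches to the hub. Tying this back to the route in the question, a minimal sketch of the wiring (my own addition, not part of the answer above; it assumes an implicit ActorSystem and ActorMaterializer are in scope, and names such as broadcastQueue and clientSource are purely illustrative) could look like this:

import akka.NotUsed
import akka.http.scaladsl.model.ws.{Message, TextMessage}
import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.server.Route
import akka.stream.OverflowStrategy
import akka.stream.scaladsl.{BroadcastHub, Flow, Keep, Sink, Source}

// materialize once: a queue to push events into, and the hub-backed Source
// that every websocket client gets attached to
val (broadcastQueue, clientSource) =
  Source.queue[String](256, OverflowStrategy.dropHead)
    .toMat(BroadcastHub.sink[String])(Keep.both)
    .run()

def broadcast(msg: String): Unit =
  broadcastQueue.offer(msg) // returns a Future[QueueOfferResult], ignored in this sketch

// push-only flow: messages coming from the client are ignored,
// messages going to the client come from the hub
def serverPushFlow: Flow[Message, Message, NotUsed] =
  Flow.fromSinkAndSource(Sink.ignore, clientSource.map(TextMessage(_)))

def route: Route =
  path("register") {
    handleWebSocketMessages(serverPushFlow)
  }

Source.queue is only one way to feed the hub; any Source that is run into BroadcastHub.sink gives you the same kind of client-facing Source.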
If the clients are known in advance, you don't need a BroadcastHub and you can use the alsoTo method:
def webSocketHandler(clients: List[Sink[Message, NotUsed]]): Flow[Message, Message, Any] = {
  val flow = Flow[Message]
  clients.foldLeft(flow) { case (fl, client) =>
    fl.alsoTo(client)
  }
}
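For illustration, a hypothetical call site with two pre-known client sinks (the names and sinks below are made up, just to show the shape of the call) might be:

val clientA: Sink[Message, NotUsed] = Flow[Message].to(Sink.foreach(m => println(s"A: $m")))
val clientB: Sink[Message, NotUsed] = Flow[Message].to(Sink.foreach(m => println(s"B: $m")))

handleWebSocketMessages(webSocketHandler(List(clientA, clientB)))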

Related

How to make several requests to a websocket server using Akka client-side websockets

I am new to Akka websockets and am learning the Akka client-side websocket support: https://doc.akka.io/docs/akka-http/current/client-side/websocket-support.html
I am using websockets with my WebRTC Janus server. I have its URL, and I need to send many messages to it, receive a different response each time, and send further messages based on that response. Looking at the example code, I am confused: it seems I would have to repeat the code below every time I need to send a message to the server, but that does not seem right. What is the correct approach?
An example in my case:
The websocket server is running at
ws://0.0.0.0:8188
First I send a message to the server to initiate the session ID.
Request #1
{
  "janus" : "create",
  "transaction" : "<random alphanumeric string>"
}
The server responds with the session ID.
Response #1
{
  "janus": "success",
  "session_id": 2630959283560140,
  "transaction": "asqeasd4as3d4asdasddas",
  "data": {
    "id": 4574061985075210
  }
}
Then, based on id 4574061985075210, I send another message and receive further info.
Request #02 {
}
Response #02 {
}
----
How can I achieve this with Akka client-side websockets?
Here is my code:
import akka.Done
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.StatusCodes
import akka.http.scaladsl.model.ws._
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Keep, Sink, Source}

import scala.concurrent.Future

object WebSocketClientFlow {
  def main(args: Array[String]) = {
    implicit val system = ActorSystem()
    implicit val materializer = ActorMaterializer()
    import system.dispatcher

    val incoming: Sink[Message, Future[Done]] =
      Sink.foreach[Message] {
        case message: TextMessage.Strict =>
          println(message.text)
        // suppose that here, based on the server response, I need to send another
        // message to the server, and so on. Do I need to repeat this same code again?
      }

    val outgoing = Source.single(TextMessage("hello world!"))

    val webSocketFlow = Http().webSocketClientFlow(WebSocketRequest("ws://echo.websocket.org"))

    val (upgradeResponse, closed) =
      outgoing
        .viaMat(webSocketFlow)(Keep.right) // keep the materialized Future[WebSocketUpgradeResponse]
        .toMat(incoming)(Keep.both)        // also keep the Future[Done]
        .run()

    val connected = upgradeResponse.flatMap { upgrade =>
      if (upgrade.response.status == StatusCodes.SwitchingProtocols) {
        Future.successful(Done)
      } else {
        throw new RuntimeException(s"Connection failed: ${upgrade.response.status}")
      }
    }

    connected.onComplete(println)
    closed.foreach(_ => println("closed"))
  }
}
I would recommend you go to this website and check out the documentation. It has all the information you need.
https://doc.akka.io/docs/akka/current/typed/actors.html
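The linked docs show single-shot flows, though, and do not cover a back-and-forth conversation directly. One way to structure the request/response exchange from the question (a sketch of my own, not taken from the documentation; the object name, queue size, and JSON bodies are illustrative) is to materialize the outgoing side as a queue, so the code that handles a response can enqueue the next request instead of rebuilding the whole stream each time:

import akka.Done
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.ws._
import akka.stream.{ActorMaterializer, OverflowStrategy}
import akka.stream.scaladsl.{Keep, Sink, Source}

import scala.concurrent.Future

object JanusClientSketch extends App {
  implicit val system = ActorSystem()
  implicit val materializer = ActorMaterializer()
  import system.dispatcher

  // requests are pushed into this queue after the stream is materialized
  val outgoing = Source.queue[Message](16, OverflowStrategy.backpressure)

  val webSocketFlow = Http().webSocketClientFlow(WebSocketRequest("ws://0.0.0.0:8188"))

  val incoming: Sink[Message, Future[Done]] =
    Sink.foreach[Message] {
      case TextMessage.Strict(text) =>
        println(s"received: $text")
        // parse `text` here with your JSON library of choice, extract session_id / id,
        // and enqueue the follow-up request instead of repeating the whole setup, e.g.
        //   sendQueue.offer(TextMessage("""{"janus":"attach", ...}"""))
      case other =>
        println(s"ignoring non-strict message: $other") // good enough for a sketch
    }

  val ((sendQueue, upgradeResponse), closed) =
    outgoing
      .viaMat(webSocketFlow)(Keep.both)
      .toMat(incoming)(Keep.both)
      .run()

  upgradeResponse.foreach(upgrade => println(s"upgrade: ${upgrade.response.status}"))
  closed.foreach(_ => system.terminate())

  // request #1: create the session; the transaction value is just a placeholder
  sendQueue.offer(TextMessage("""{"janus":"create","transaction":"abc123"}"""))
}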

Does a composite flow make a loop?

I am trying to understand how the following code snippet works:
val flow: Flow[Message, Message, Future[Done]] =
  Flow.fromSinkAndSourceMat(printSink, helloSource)(Keep.left)
Two people gave a wonderful explanation in this thread. I understand the concept of the composite flow, but how does it work with the websocket client?
Consider the following code:
import akka.actor.ActorSystem
import akka.{ Done, NotUsed }
import akka.http.scaladsl.Http
import akka.stream.ActorMaterializer
import akka.stream.scaladsl._
import akka.http.scaladsl.model._
import akka.http.scaladsl.model.ws._

import scala.concurrent.Future

object SingleWebSocketRequest {
  def main(args: Array[String]) = {
    implicit val system = ActorSystem()
    implicit val materializer = ActorMaterializer()
    import system.dispatcher

    // print each incoming strict text message
    val printSink: Sink[Message, Future[Done]] =
      Sink.foreach {
        case message: TextMessage.Strict =>
          println(message.text)
      }

    val helloSource: Source[Message, NotUsed] =
      Source.single(TextMessage("hello world!"))

    // the Future[Done] is the materialized value of Sink.foreach
    // and it is completed when the stream completes
    val flow: Flow[Message, Message, Future[Done]] =
      Flow.fromSinkAndSourceMat(printSink, helloSource)(Keep.left)

    // upgradeResponse is a Future[WebSocketUpgradeResponse] that
    // completes or fails when the connection succeeds or fails
    // and closed is a Future[Done] representing the stream completion from above
    val (upgradeResponse, closed) =
      Http().singleWebSocketRequest(WebSocketRequest("ws://echo.websocket.org"), flow)

    val connected = upgradeResponse.map { upgrade =>
      // just like a regular http request we can access the response status via upgrade.response.status
      // status code 101 (Switching Protocols) indicates that the server supports WebSockets
      if (upgrade.response.status == StatusCodes.SwitchingProtocols) {
        Done
      } else {
        throw new RuntimeException(s"Connection failed: ${upgrade.response.status}")
      }
    }

    // in a real application you would not side effect here
    // and handle errors more carefully
    connected.onComplete(println)
    closed.foreach(_ => println("closed"))
  }
}
It is a websocket client that sends a message to the websocket server, and printSink receives the response and prints it out.
How can it be that printSink receives messages when there is no connection between the Sink and the Source?
Is it like a loop?
A stream flows from left to right, so how can the Sink consume messages from the websocket server?
Flow.fromSinkAndSourceMat puts an independent Sink and Source into the shape of a Flow. Elements going into that Sink do not end up at the Source.
From the websocket client API's perspective, it needs a Source from which requests are sent to the server and a Sink to which it sends the responses. singleWebSocketRequest could take a Source and a Sink separately, but that would be a slightly more verbose API.
Here is a shorter example that demonstrates the same as in your code snippet but is runnable, so you can play around with it:
import akka._
import akka.actor._
import akka.stream._
import akka.stream.scaladsl._

implicit val sys = ActorSystem()
implicit val mat = ActorMaterializer()

def openConnection(userFlow: Flow[String, String, NotUsed])(implicit mat: Materializer) = {
  val processor = Flow[String].map(_.toUpperCase)
  processor.join(userFlow).run()
}

val requests = Source(List("one", "two", "three"))
val responses = Sink.foreach[String](println)

val userFlow = Flow.fromSinkAndSource(responses, requests)

openConnection(userFlow)
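Running this prints
ONE
TWO
THREE
because the elements emitted by userFlow's Source ("one", "two", "three") go into the processor, are upper-cased there, and come back into userFlow's Sink, which prints them. That round trip through an external stage is exactly what happens between your flow and the websocket server: the server plays the role of processor.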

akka stream consume web socket

Getting started with akka-streams, I want to build a simple example.
In Chrome, using a web socket plugin, I can simply connect to a stream like this one https://blockchain.info/api/api_websocket via wss://ws.blockchain.info/inv, and sending the two commands
{"op":"ping"}
{"op":"unconfirmed_sub"}
will stream the results in Chrome's web socket plugin window.
I tried to implement the same functionality in akka-streams but am facing some problems:
the two commands are executed, but I do not actually get the streaming output
the same command is executed twice (the ping command)
I followed the tutorials at http://doc.akka.io/docs/akka/2.4.7/scala/http/client-side/websocket-support.html and http://doc.akka.io/docs/akka-http/10.0.0/scala/http/client-side/websocket-support.html#half-closed-client-websockets.
Here is my adaptation:
import akka.{Done, NotUsed}
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.StatusCodes
import akka.http.scaladsl.model.ws._
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Flow, Keep, Sink, Source}

import scala.concurrent.Future

object SingleWebSocketRequest extends App {
  implicit val system = ActorSystem()
  implicit val materializer = ActorMaterializer()
  import system.dispatcher

  // print each incoming strict text message
  val printSink: Sink[Message, Future[Done]] =
    Sink.foreach {
      case message: TextMessage.Strict =>
        println(message.text)
    }

  val commandMessages = Seq(TextMessage("{\"op\":\"ping\"}"), TextMessage("{\"op\":\"unconfirmed_sub\"}"))
  val helloSource: Source[Message, NotUsed] = Source(commandMessages.to[scala.collection.immutable.Seq])

  // the Future[Done] is the materialized value of Sink.foreach
  // and it is completed when the stream completes
  val flow: Flow[Message, Message, Future[Done]] =
    Flow.fromSinkAndSourceMat(printSink, helloSource)(Keep.left)

  // upgradeResponse is a Future[WebSocketUpgradeResponse] that
  // completes or fails when the connection succeeds or fails
  // and closed is a Future[Done] representing the stream completion from above
  val (upgradeResponse, closed) =
    Http().singleWebSocketRequest(WebSocketRequest("wss://ws.blockchain.info/inv"), flow)

  val connected = upgradeResponse.map { upgrade =>
    // just like a regular http request we can access the response status via upgrade.response.status
    // status code 101 (Switching Protocols) indicates that the server supports WebSockets
    if (upgrade.response.status == StatusCodes.SwitchingProtocols) {
      Done
    } else {
      throw new RuntimeException(s"Connection failed: ${upgrade.response.status}")
    }
  }

  // in a real application you would not side effect here
  // and handle errors more carefully
  connected.onComplete(println) // TODO why do I not get the same output as in chrome?
  closed.foreach(_ => println("closed"))
}
When using the flow version from http://doc.akka.io/docs/akka-http/10.0.0/scala/http/client-side/websocket-support.html#websocketclientflow, modified as outlined below, the result is again the same output twice:
{"op":"pong"}
{"op":"pong"}
See the code:
import akka.{Done, NotUsed}
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.StatusCodes
import akka.http.scaladsl.model.ws._
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Keep, Sink, Source}

import scala.concurrent.Future

object WebSocketClientFlow extends App {
  implicit val system = ActorSystem()
  implicit val materializer = ActorMaterializer()
  import system.dispatcher

  // Future[Done] is the materialized value of Sink.foreach,
  // emitted when the stream completes
  val incoming: Sink[Message, Future[Done]] =
    Sink.foreach[Message] {
      case message: TextMessage.Strict =>
        println(message.text)
    }

  // send these as messages over the WebSocket
  val commandMessages = Seq(TextMessage("{\"op\":\"ping\"}"), TextMessage("{\"op\":\"unconfirmed_sub\"}"))
  val outgoing: Source[Message, NotUsed] = Source(commandMessages.to[scala.collection.immutable.Seq])
  // val outgoing = Source.single(TextMessage("hello world!"))

  // flow to use (note: not re-usable!)
  val webSocketFlow = Http().webSocketClientFlow(WebSocketRequest("wss://ws.blockchain.info/inv"))

  // the materialized value is a tuple:
  // upgradeResponse is a Future[WebSocketUpgradeResponse] that
  // completes or fails when the connection succeeds or fails,
  // and closed is a Future[Done] with the stream completion from the incoming sink
  val (upgradeResponse, closed) =
    outgoing
      .viaMat(webSocketFlow)(Keep.right) // keep the materialized Future[WebSocketUpgradeResponse]
      .toMat(incoming)(Keep.both)        // also keep the Future[Done]
      .run()

  // just like a regular http request we can access the response status via upgrade.response.status
  // status code 101 (Switching Protocols) indicates that the server supports WebSockets
  val connected = upgradeResponse.flatMap { upgrade =>
    if (upgrade.response.status == StatusCodes.SwitchingProtocols) {
      Future.successful(Done)
    } else {
      throw new RuntimeException(s"Connection failed: ${upgrade.response.status}")
    }
  }

  // in a real application you would not side effect here
  connected.onComplete(println)
  closed.foreach { _ =>
    println("closed")
    system.terminate()
  }
}
How can I achieve the same result as in Chrome, i.e.:
display/print the subscribed stream
ideally, periodically send updates (ping statements) as outlined in https://blockchain.info/api/api via {"op":"ping"} messages
Note: I am using akka 2.4.17 and akka-http 10.0.5.
A couple of things I notice are:
1) you need to consume all types of incoming messages, not only the TextMessage.Strict kind. The blockchain stream definitely produces Streamed messages, as it contains loads of text that is delivered in chunks over the network. A more complete incoming Sink could be:
val incoming: Sink[Message, Future[Done]] =
  Flow[Message].mapAsync(4) {
    case message: TextMessage.Strict =>
      println(message.text)
      Future.successful(Done)
    case message: TextMessage.Streamed =>
      message.textStream.runForeach(println)
    case message: BinaryMessage =>
      message.dataStream.runWith(Sink.ignore)
  }.toMat(Sink.last)(Keep.right)
2) your source of 2 elements might complete too early, i.e. before the websocket responses come back. You can concatenate a Source.maybe to it:
val outgoing: Source[TextMessage, Promise[Option[TextMessage]]] =
  Source(commandMessages.to[scala.collection.immutable.Seq]).concatMat(Source.maybe[TextMessage])(Keep.right)
and then
val ((completionPromise, upgradeResponse), closed) =
  outgoing
    .viaMat(webSocketFlow)(Keep.both)
    .toMat(incoming)(Keep.both)
    .run()
By keeping the materialized promise incomplete, you keep the source open and avoid the flow shutting down.
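Two follow-up notes (my additions, not part of the original answer): when you eventually want to close the connection from the client side, complete the promise, e.g. completionPromise.success(None), which finishes the outgoing side. And for the periodic {"op":"ping"} updates asked about above, one option is to merge a Source.tick into the kept-open source; the 30-second interval and the name outgoingWithPings are purely illustrative:

import scala.concurrent.Promise
import scala.concurrent.duration._

// the initial commands, kept open by Source.maybe, plus a ping every 30 seconds
val pings = Source.tick(30.seconds, 30.seconds, TextMessage("""{"op":"ping"}"""))

val outgoingWithPings: Source[TextMessage, Promise[Option[TextMessage]]] =
  Source(commandMessages.to[scala.collection.immutable.Seq])
    .concatMat(Source.maybe[TextMessage])(Keep.right)
    .merge(pings)

This outgoingWithPings can then be used in place of outgoing when materializing the flow.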

How to represent multiple incoming TCP connections as a stream of Akka streams?

I'm prototyping a network server using Akka Streams that will listen on a port, accept incoming connections, and continuously read data off each connection. Each connected client will only send data, and will not expect to receive anything useful from the server.
Conceptually, I figured it would be fitting to model the incoming events as one single stream that only incidentally happens to be delivered via multiple TCP connections. Thus, assuming that I have a case class Msg(msg: String) that represents each data message, what I want is to represent the entirety of incoming data as a Source[Msg, _]. This makes a lot of sense for my use case, because I can very simply connect flows & sinks to this source.
Here's the code I wrote to implement my idea:
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.SourceShape
import akka.stream.scaladsl._
import akka.util.ByteString
import akka.NotUsed

import scala.concurrent.{ Await, Future }
import scala.concurrent.duration._

case class Msg(msg: String)

object tcp {
  val N = 2

  def main(argv: Array[String]) {
    implicit val system = ActorSystem()
    implicit val materializer = ActorMaterializer()

    val connections = Tcp().bind("0.0.0.0", 65432)

    val delim = Framing.delimiter(
      ByteString("\n"),
      maximumFrameLength = 256, allowTruncation = true
    )

    val parser = Flow[ByteString].via(delim).map(_.utf8String).map(Msg(_))

    val messages: Source[Msg, Future[Tcp.ServerBinding]] =
      connections.flatMapMerge(N, { connection =>
        println(s"client connected: ${connection.remoteAddress}")
        Source.fromGraph(GraphDSL.create() { implicit builder =>
          import GraphDSL.Implicits._

          val F = builder.add(connection.flow.via(parser))
          val nothing = builder.add(Source.tick(
            initialDelay = 1.second,
            interval = 1.second,
            tick = ByteString.empty
          ))

          F.in <~ nothing.out
          SourceShape(F.out)
        })
      })

    import scala.concurrent.ExecutionContext.Implicits.global
    Await.ready(for {
      _ <- messages.runWith(Sink.foreach { msg =>
        println(s"${System.currentTimeMillis} $msg")
      })
      _ <- system.terminate()
    } yield (), Duration.Inf)
  }
}
This code works as expected. Note, however, the val N = 2, which is passed into the flatMapMerge call that ultimately combines the incoming data streams into one. In practice this means that I can only read from that many streams at a time.
I don't know how many connections will be made to this server at any given time. Ideally I would want to support as many as possible, but hardcoding an upper bound doesn't seem like the right thing to do.
My question, at long last, is: How can I obtain or create a flatMapMerge stage that can read from more than a fixed number of connections at one time?
As indicated by Viktor Klang's comments, I don't think this is possible in one stream. However, I think you could create a stream that can receive messages after materialization and use it as a "sink" for the messages coming from the TCP connections.
First create the "sink" stream:
val sinkRef =
  Source
    .actorRef[Msg](Int.MaxValue, OverflowStrategy.fail)
    .to(Sink.foreach { m => println(s"${System.currentTimeMillis} $m") })
    .run()
Each connection can then send its messages to this sinkRef:
connections.runForeach { conn =>
  Source
    .empty[ByteString]
    .via(conn.flow)
    .via(parser)
    .runForeach(msg => sinkRef ! msg)
}
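Another option worth mentioning (my own addition, not part of the answer above): on Akka versions that ship MergeHub, the hub plays exactly this "arbitrarily many producers into one stream" role without an actor-based source. Reusing connections and parser from the question, a sketch could look like:

// materializing the hub yields a Sink that any number of connections can attach to,
// so no upper bound such as N has to be hard-coded
val mergeSink: Sink[Msg, NotUsed] =
  MergeHub.source[Msg](perProducerBufferSize = 16)
    .to(Sink.foreach(m => println(s"${System.currentTimeMillis} $m")))
    .run()

connections.runForeach { conn =>
  Source
    .maybe[ByteString]   // never send anything back to the client, but keep the write side open
    .via(conn.flow)
    .via(parser)
    .runWith(mergeSink)
}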

akka streams over tcp

Here is the setup: I want to be able to stream messages (JSON converted to ByteStrings) from a publisher to a remote subscriber server over a TCP connection.
Ideally, the publisher would be an actor that receives internal messages, queues them, and then streams them to the subscriber server, provided there is outstanding demand of course. I understood that what is necessary for this is to extend the ActorPublisher class in order to onNext() the messages when needed.
My problem is that so far I am only able to send (and properly receive and decode) one-shot messages to the server, opening a new connection each time. I did not manage to get my head around the Akka docs and set up the proper TCP Flow with the ActorPublisher.
Here is the code from the publisher:
def send(message: Message): Unit = {
  val system = Akka.system()
  implicit val sys = system
  import system.dispatcher
  implicit val materializer = ActorMaterializer()

  val address = Play.current.configuration.getString("eventservice.location").getOrElse("localhost")
  val port = Play.current.configuration.getInt("eventservice.port").getOrElse(9000)

  /*** Try with actorPublisher ***/
  //val result = Source.actorPublisher[Message](Props[EventActor]).via(Flow[Message].map(Json.toJson(_).toString.map(ByteString(_))))

  /*** Try with actorRef ***/
  /*val source = Source.actorRef[Message](0, OverflowStrategy.fail).map(
    m => {
      Logger.info(s"Sending message: ${m.toString}")
      ByteString(Json.toJson(m).toString)
    }
  )
  val ref = Flow[ByteString].via(Tcp().outgoingConnection(address, port)).to(Sink.ignore).runWith(source)*/

  val result = Source(Json.toJson(message).toString.map(ByteString(_))).
    via(Tcp().outgoingConnection(address, port)).
    runFold(ByteString.empty) { (acc, in) => acc ++ in } // handle the future
}
and here is the code from the actor, which is fairly standard:
import akka.actor.Actor
import akka.stream.actor.ActorSubscriberMessage.{OnComplete, OnError}
import akka.stream.actor.{ActorPublisherMessage, ActorPublisher}
import models.events.Message
import play.api.Logger

import scala.collection.mutable

class EventActor extends Actor with ActorPublisher[Message] {
  import ActorPublisherMessage._

  var queue: mutable.Queue[Message] = mutable.Queue.empty

  def receive = {
    case m: Message =>
      Logger.info(s"EventActor - message received and queued: ${m.toString}")
      queue.enqueue(m)
      publish()
    case _: Request => publish()
    case Cancel =>
      Logger.info("EventActor - cancel message received")
      context.stop(self)
    case OnError(err: Exception) =>
      Logger.info("EventActor - error message received")
      onError(err)
      context.stop(self)
    case OnComplete =>
      Logger.info("EventActor - onComplete message received")
      onComplete()
      context.stop(self)
  }

  def publish() = {
    while (queue.nonEmpty && isActive && totalDemand > 0) {
      Logger.info("EventActor - message published")
      onNext(queue.dequeue())
    }
  }
}
Here is the code from the subscriber as well, in case it helps:
def connect(system: ActorSystem, address: String, port: Int): Unit = {
  implicit val sys = system
  import system.dispatcher
  implicit val materializer = ActorMaterializer()

  val handler = Sink.foreach[Tcp.IncomingConnection] { conn =>
    Logger.info("Event server connected to: " + conn.remoteAddress)
    // Get the ByteString flow, reconstruct the msg for handling, and then output it back;
    // that is how handleWith works apparently
    conn.handleWith(
      Flow[ByteString].fold(ByteString.empty)((acc, b) => acc ++ b).
        map(b => handleIncomingMessages(system, b.utf8String)).
        map(ByteString(_))
    )
  }

  val connections = Tcp().bind(address, port)
  val binding = connections.to(handler).run()

  binding.onComplete {
    case Success(b) =>
      Logger.info("Event server started, listening on: " + b.localAddress)
    case Failure(e) =>
      Logger.info(s"Event server could not bind to $address:$port: ${e.getMessage}")
      system.terminate()
  }
}
Thanks in advance for the hints.
My first recommendation is to not write your own queue logic; Akka provides this out of the box. You also don't need to write your own Actor; Akka Streams can provide that as well.
First we can create the Flow that will connect your publisher to your subscriber via Tcp. In your publisher code you only need to create the ActorSystem once and connect to the outside server once:
//this code is at top level of your application
implicit val actorSystem = ActorSystem()
implicit val actorMaterializer = ActorMaterializer()
import actorSystem.dispatcher
val host = Play.current.configuration.getString("eventservice.location").getOrElse("localhost")
val port = Play.current.configuration.getInt("eventservice.port").getOrElse(9000)
val publishFlow = Tcp().outgoingConnection(host, port)
publishFlow is a Flow that takes as input the ByteString data you want to send to the external subscriber, and outputs the ByteString data that comes back from the subscriber:
// data to subscriber ----> publishFlow ----> data returned from subscriber
The next step is the publisher Source. Instead of writing your own Actor you can use Source.actorRef to "materialize" the Stream into an ActorRef. Essentially the Stream will become an ActorRef for us to use later:
//these values control the buffer
val bufferSize = 1024
val overflowStrategy = akka.stream.OverflowStrategy.dropHead
val messageSource = Source.actorRef[Message](bufferSize, overflowStrategy)
We also need a Flow to convert Messages into ByteString
val marshalFlow =
  Flow[Message].map(message => ByteString(Json.toJson(message).toString))
Finally we can connect all of the pieces. Since you aren't receiving any data back from the external subscriber we'll ignore any data coming in from the connection:
val subscriberRef: ActorRef = messageSource
  .via(marshalFlow)
  .via(publishFlow)
  .to(Sink.ignore)   // to() keeps the left (Source) materialized value, i.e. the ActorRef
  .run()
We can now treat this stream as if it were an Actor:
val message1 : Message = ???
subscriberRef ! message1
val message2 : Message = ???
subscriberRef ! message2
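One caveat worth adding (my note, not from the original answer): a stream built on Source.actorRef stays open until you complete it explicitly. With the classic Source.actorRef, sending akka.actor.Status.Success to the materialized reference completes the stream gracefully after the buffered messages have been emitted, and akka.actor.Status.Failure fails it:

// completes the stream gracefully; buffered messages are still sent first
subscriberRef ! akka.actor.Status.Success("done")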