Akka-Http Websockets: How to Send consumers the same stream of data - scala

I have a WebSocket that clients can connect to, and I also have a stream of data produced with akka-streams. How can I make it so that all clients get the same data? At the moment they seem to be racing for it.
Thanks

One way you could do this is to have an actor that extends ActorPublisher and have it subscribe to some message type.
class MyPublisher extends ActorPublisher[MyData] {

  override def preStart = {
    context.system.eventStream.subscribe(self, classOf[MyData])
  }

  override def receive: Receive = {
    case msg: MyData ⇒
      if (isActive && totalDemand > 0) {
        // Pushes the message onto the stream
        onNext(msg)
      }
  }
}

object MyPublisher {
  def props(implicit ctx: ExecutionContext): Props = Props(new MyPublisher())
}

case class MyData(data: String)
You can then use that actor as the source for the stream:
val dataSource = Source.actorPublisher[MyData](MyPublisher.props(someExecutionContext))
You can then create a flow from that data source and apply a transform to convert the data into WebSocket messages:
val myFlow = Flow.fromSinkAndSource(Sink.ignore, dataSource map {d => TextMessage.Strict(d.data)})
Then you can use that flow in your route handling.
path("readings") {
handleWebsocketMessages(myFlow)
}
From the processing of the original stream, you can then publish the data to the event stream, and any instance of that actor will pick it up and put it onto the stream that its websocket is being served from.
val actorSystem = ActorSystem("foo")
val otherSource = Source.fromIterator(() => List(MyData("a"), MyData("b")).iterator)
otherSource.runForeach { msg ⇒ actorSystem.eventStream.publish(msg) }
Each socket will then have its own instance of the actor to provide it with data all coming from a single source.
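As a side note: on Akka 2.5 and later, where ActorPublisher is deprecated, a BroadcastHub gives the same single-source fan-out without a custom actor. A minimal sketch, with illustrative names, reusing MyData from the answer above:

import akka.NotUsed
import akka.actor.ActorSystem
import akka.http.scaladsl.model.ws.{Message, TextMessage}
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{BroadcastHub, Flow, Keep, Sink, Source}

implicit val system = ActorSystem("hub-example")
implicit val mat = ActorMaterializer()

// Run the upstream exactly once; the hub broadcasts each element
// to every consumer attached at the time it is emitted.
val hubSource: Source[TextMessage.Strict, NotUsed] =
  Source(List(MyData("a"), MyData("b")))
    .map(d => TextMessage.Strict(d.data))
    .toMat(BroadcastHub.sink(bufferSize = 256))(Keep.right)
    .run()

// Every websocket that materializes this flow sees the same broadcast data.
val hubFlow: Flow[Message, Message, NotUsed] =
  Flow.fromSinkAndSource(Sink.ignore, hubSource)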

Related

Persistent Redis Pub/Sub Actors

I have a backend app written in Scala Play. Because I have a realtime implementation using Akka actors, with data stored in a Redis server, I want each backend instance (deployed on CentOS servers) to be a publisher and at the same time a subscriber to the Redis service. Why? Because a 3rd-party app will send requests to my backend to update the data in Redis, and I want the actors on all instances to push data to clients (frontend), regardless of which backend instance the request is routed to (a load balancer is used there).
So, when instance1 publishes on Redis, I want all subscribers (instance2, instance3, and even instance1, because as I said each instance must be pub/sub) to push data to clients.
I created an object with a publisher and a subscriber client, expecting these to behave as singletons. But, for an unknown reason, overnight my instances get unsubscribed from the Redis server without any message. I infer this because the next day my Redis service has 0 subscribers. I don't know whether my implementation is bad or whether Redis simply kills the connections after some time.
RedisPubSubServer.scala (in fact, these are just two Akka actors which take a RedisClient as a parameter):
class Subscriber(client: RedisClient) extends Actor {
  var callback: PubSubMessage => Any = { m => }
  implicit val timeout = Timeout(2 seconds)

  override def receive: Receive = {
    case Subscribe(channel) => client.subscribe(channel)(callback)
    case Register(cb) => callback = cb; self ? true
    case Unsubscribe(channel) => client.unsubscribe(channel); self ? true
  }
}

class Publisher(client: RedisClient) extends Actor {
  implicit val timeout = Timeout(2 seconds)

  override def receive: Receive = {
    case Publish(channel, msg) => client.publish(channel, msg); self ? true
  }
}
RedisPubSubClient.scala (here I create the Publisher and Subscriber as singletons):
object Pub {
  println("starting publishing service...")
  val config = ConfigFactory.load.getObject("redis").toConfig
  val client = new RedisClient(config.getString("master"), config.getInt("port"))
  val system = ActorSystem("RedisPublisher")
  val publisher = system.actorOf(Props(new Publisher(client)))

  def publish(channel: String, message: String) =
    publisher ! Publish(channel, message)
}

object Sub {
  val client = new RedisClient(config.getString("master"), config.getInt("port"))
  val system = ActorSystem("RedisSubscriber")
  val subscriber = system.actorOf(Props(new Subscriber(client)))

  println("SUB Registering...")
  subscriber ! Register(callback)

  def sub(channel: String) = subscriber ! Subscribe(channel)
  def unsub(channel: String) = subscriber ! Unsubscribe(channel)

  def callback(msg: PubSubMessage) = {
    msg match {
      case S(channel, no) => println(s"subscribed to $channel and count $no")
      case U(channel, no) => println(s"unsubscribed from $channel and count $no")
      case M(channel, msg) => msg match {
        case "exit" => client.unsubscribe()
        case jsonString => // do the job
      }
      case E(e) => println(s"ERR = ${e.getMessage}")
    }
  }
}
and the RedisService
object RedisService {
  val system = ActorSystem("RedisServiceSubscriber")
  val subscriber = system.actorOf(Props(new Subscriber(client)))

  subscriber ! Register(callback)
  subscriber ! Subscribe("channelName")
  // So here I'm expecting the subscriber to share the life-cycle of the backend instance
}
From an API endpoint, I push data by calling Pub's publish method:
def reloadData(request: AnyType) {
  Pub.publish("channelName", requestAsString)
}
Is it possible for the Publisher/Subscriber actors to be killed after a while, and could that cause errors for the Redis Pub/Sub clients?
For the Publisher, I'm considering creating the client each time the API call is made, but for the Subscriber I see no alternative to a singleton object which listens to Redis for the entire life of the backend.
Thanks
Edit: the library used:
"net.debasishg" %% "redisclient" % "3.41"
After some research, I found another Scala Redis library which seems to do exactly what I need, in an easier manner:
"com.github.etaty" %% "rediscala" % "1.9.0"

How to save streaming data using Akka Persistence

I use StreamRefs to establish streaming connections between actors in the cluster. Currently, on the writing node, I save incoming messages to a log file manually, but I wonder whether it is possible to replace that with a persistent Sink for writing and a persistent Source (reading from the Akka Persistence journal) on actor startup. I've been thinking of replacing the log-file sink with a persistent actor's persist { evt => ... }, but since that executes asynchronously I'll lose the backpressure. So: is it possible to write streaming data with backpressure into the Akka Persistence journal, and to read this data back in a streaming manner on actor recovery?
Current implementation:
object Writer {
  case class WriteSinkRequest(userId: String)
  case class WriteSinkReady(userId: String, sinkRef: SinkRef[ByteString])
  case class ReadSourceRequest(userId: String)
  case class ReadSourceReady(userId: String, sourceRef: SourceRef[ByteString])
}

class Writer extends Actor {
  // code omitted

  val logsDir = "logs"
  val path = Files.createDirectories(FileSystems.getDefault.getPath(logsDir))

  def logFile(id: String) = path.resolve(id)

  def logFileSink(logId: String): Sink[ByteString, Future[IOResult]] =
    FileIO.toPath(logFile(logId), Set(CREATE, WRITE, APPEND))

  def logFileSource(logId: String): Source[ByteString, Future[IOResult]] =
    FileIO.fromPath(logFile(logId))

  override def receive: Receive = {
    case WriteSinkRequest(userId) =>
      // obtain the sink you want to offer:
      val sink = logFileSink(userId)
      // materialize the SinkRef (the remote is like a source of data for us):
      val ref: Future[SinkRef[ByteString]] = StreamRefs.sinkRef[ByteString]().to(sink).run()
      // wrap the SinkRef in some domain message, so the sender knows what sink it is
      val reply: Future[WriteSinkReady] = ref.map(WriteSinkReady(userId, _))
      // reply to sender
      reply.pipeTo(sender())

    case ReadSourceRequest(userId) =>
      val source = logFileSource(userId)
      val ref: Future[SourceRef[ByteString]] = source.runWith(StreamRefs.sourceRef())
      val reply: Future[ReadSourceReady] = ref.map(ReadSourceReady(userId, _))
      reply pipeTo sender()
  }
}
P.S. Is it possible to create not a "save-to-journal" sink, but a flow:
incoming data to write ~> save to persistence journal ~> data that was written?
One idea for streaming data to a persistent actor in a backpressured fashion is to use Sink.actorRefWithAck: have the actor send an acknowledgement message when it has persisted a message. This would look something like the following:
// ...
case class WriteSinkReady(userId: String, sinkRef: SinkRef[MyMsg])
// ...

def receive = {
  case WriteSinkRequest(userId) =>
    val persistentActor: ActorRef = ??? // a persistent actor that handles MyMsg messages
                                        // as well as the messages used in persistentSink
    val persistentSink: Sink[MyMsg, NotUsed] = Sink.actorRefWithAck[MyMsg](
      persistentActor
      /* additional parameters (onInitMessage, ackMessage, onCompleteMessage): see the docs */
    )
    val ref: Future[SinkRef[MyMsg]] = StreamRefs.sinkRef[MyMsg]().to(persistentSink).run()
    val reply: Future[WriteSinkReady] = ref.map(WriteSinkReady(userId, _))
    reply.pipeTo(sender())

  case ReadSourceRequest(userId) =>
    // ...
}
The above example uses a custom case class named MyMsg instead of ByteString.
In the sender, assuming it's an actor:
def receive = {
  case WriteSinkReady(userId, sinkRef) =>
    source.runWith(sinkRef) // source is a Source[MyMsg, _]
  // ...
}
The materialized stream in the sender will send the messages to the persistent actor.
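To make the acknowledgement protocol concrete, here is a minimal sketch of the persistent-actor side, assuming hypothetical Init/Ack/Complete messages wired into Sink.actorRefWithAck:

case object Init
case object Ack
case object Complete

class MyMsgPersistor extends PersistentActor {
  override def persistenceId: String = "my-msg-persistor"

  override def receiveCommand: Receive = {
    case Init => sender() ! Ack // stream is starting: signal readiness
    case msg: MyMsg =>
      // persist's handler runs only after the event is written to the journal;
      // acking from inside it is what propagates backpressure to the stream.
      persist(msg) { _ => sender() ! Ack }
    case Complete => // stream finished; clean up or stop self
  }

  override def receiveRecover: Receive = {
    case msg: MyMsg => // rebuild state from journaled messages on startup
  }
}

// Wiring the messages into the sink:
val persistentSink: Sink[MyMsg, NotUsed] =
  Sink.actorRefWithAck[MyMsg](persistentActor, Init, Ack, Complete)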

Play Framework WebSocket Actor Filtering

I'm currently considering an implementation with which I can stream some events from my Play Framework web application. There is a set of IoT devices that emit events and alerts, identified by their ids. I have an HTTP endpoint with which I can get the telemetry signals for these devices, and now I want to do the same thing for the alerts and events. So I started with this simple approach, first defining my endpoint in my controller like this:
def events = WebSocket.accept[String, String] { request =>
  ActorFlow.actorRef { out =>
    EventsActor.props(out)
  }
}
My EventsActor:
class EventsActor(out: ActorRef) extends Actor {
  def receive = {
    case msg: String =>
      out ! ("I received your message: " + msg)
  }
}

object EventsActor {
  def props(out: ActorRef) = Props(new EventsActor(out))
}
Right now I'm not doing much with my EventsActor, but later on this actor is going to have Event and Alert messages pushed into it, which will then be routed to the WebSocket endpoint.
Now my requirement is that when a client makes a connection to the WebSocket endpoint, they should be able to specify the id of the IoT device they wish to connect to, and I should be able to pass this id to the EventsActor, where I can filter for events that contain it.
Any clues on how to do this?
I did a quick example of how you might go about this. It leaves much to be desired, but I hope it is of some inspiration to you!
What you effectively want is a coordinator/router which can keep track of which websocket actors are listening to which event types. You can inject that hub into all of the created actors, and dispatch events to it to register those websocket actors to sets of events.
object TelemetryHub {
  /** This is the message external sensors could use to stream data into the hub **/
  case class FreshData(eventId: UUID, data: Int)

  def props = Props(new TelemetryHub)
}

class TelemetryHub extends Actor {
  type EventID = UUID

  private val subscriptions =
    mutable.Map.empty[EventID, Set[ActorRef]].withDefault(_ => Set())

  override def receive = {
    /** We can use the sender ref to add the requesting actor to the set of subscribers **/
    case SubscribeTo(topic) => subscriptions(topic) = subscriptions(topic) + sender()

    /** Now when the hub receives data, it can send a message to all subscribers
      * of that particular topic
      */
    case FreshData(incomingDataTopicID, data) =>
      subscriptions.find { case (topic, _) => topic == incomingDataTopicID } match {
        case Some((_, subscribers)) => subscribers foreach { _ ! EventData(data) }
        case None => println("This topic was not yet subscribed to")
      }
  }
}
Now that we have the above structure, your websocket endpoint could look like the following:
object WebsocketClient {
  /**
    * Some messages with which we can subscribe to a topic.
    * These messages could be streamed through the websocket from the client.
    */
  case class SubscribeTo(eventID: UUID)

  /** This is an example of some data we want to go back to the client. Uses Int for simplicity **/
  case class EventData(data: Int)

  def props(out: ActorRef, telemetryHub: ActorRef) = Props(new WebsocketClient(out, telemetryHub))
}

/** Every client will own one of these actors. **/
class WebsocketClient(out: ActorRef, telemetryHub: ActorRef) extends Actor {
  def receive = {
    /** We can forward these subscription requests to the hub **/
    case subscriptionRequest: SubscribeTo => telemetryHub ! subscriptionRequest

    /** When we get data back, we can send it right to the client **/
    case EventData(data) => out ! data
  }
}

/** We can make a single hub which will be shared between all the connections **/
val telemetryHub = actorSys actorOf TelemetryHub.props

def events = WebSocket.accept[String, String] { _ =>
  ActorFlow.actorRef { out =>
    WebsocketClient.props(out, telemetryHub)
  }
}
Alternatively you could use the internal event bus that Akka provides to achieve the same thing with much less hassle!
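For completeness, a minimal sketch of that event-bus variant, with illustrative names and a simple Event(deviceId, payload) type: the event stream classifies by class, so each websocket actor still filters on its own device id.

import java.util.UUID
import akka.actor.{Actor, ActorRef}

case class Event(deviceId: UUID, payload: Int)

class FilteringWebsocketClient(out: ActorRef, deviceId: UUID) extends Actor {
  // receive every Event published anywhere in this actor system
  override def preStart(): Unit =
    context.system.eventStream.subscribe(self, classOf[Event])

  def receive = {
    case Event(`deviceId`, payload) => out ! payload // our device: forward to the client
    case _: Event                   => // other devices: ignore
  }
}

// publishing side, e.g. from the stream consuming device telemetry:
// context.system.eventStream.publish(Event(someDeviceId, 42))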

ActorPublisher as Comet Event Source in Play Application

I'm trying to write an actor which connects to an Amazon Kinesis stream and then relays any messages received via Comet to a web UI. I'm using Source.actorPublisher for this, together with Comet's json method in Play described here. I got the events working just fine using Source.tick(), but when I tried using an ActorPublisher, the actor never seems to be sent any Request messages as expected. How are requests for data usually sent down an Akka flow? I'm using v2.5 of the Play Framework.
My controller code:
def subDeviceSeen(id: Id): Action[AnyContent] = Action {
  val deviceSeenSource: Source[DeviceSeenMessage, ActorRef] =
    Source.actorPublisher(DeviceSeenEventActor.props)
  Ok.chunked(deviceSeenSource
    .filter(m => m.id == id)
    .map(Json.toJson(_))
    via Comet.json("parent.deviceSeen")).as(ContentTypes.JSON)
}
Am I doing anything obviously wrong in the above? Here is my Actor code:
object DeviceSeenEventActor {
  def props: Props = Props[DeviceSeenEventActor]
}

class DeviceSeenEventActor extends ActorPublisher[DeviceSeenMessage] {
  implicit val mat = ActorMaterializer()(context)
  val log = Logging(context.system, this)

  def receive: Receive = {
    case Request =>
      log.debug("Received request message")
      initKinesis()
      context.become(run)
    case Cancel => context.stop(self)
  }

  def run: Receive = {
    case vsm: DeviceSeenMessage =>
      onNext(vsm)
      log.debug("Received data message")
      onCompleteThenStop() // we are currently only interested in one message
    case _: Any => log.warning("Unknown message received by event actor")
  }

  private def initKinesis() = {
    // init kinesis: a worker is created and given a reference to this actor.
    // The reference is used to send messages to the actor.
  }
}
The 'Received request message' log line is never displayed. Am I missing some implicit? There are no warnings or anything else obvious displayed in the play console.
The issue was that I was pattern matching on case Request => ... instead of case Request(_) => ...: Request is a case class, so the bare name matches only the companion object, never an actual Request instance. Since I didn't have a default case in my receive method, the message was simply dropped by the actor.
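For reference, the corrected receive, assuming akka.stream.actor.ActorPublisherMessage's Request(n: Long) and Cancel:

def receive: Receive = {
  case Request(demand) => // Request carries the number of elements demanded
    log.debug("Received request for {} elements", demand)
    initKinesis()
    context.become(run)
  case Cancel => context.stop(self)
}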

Subscribing multiple actors to Dead Letters in Akka

I am trying to create a simple application that has two actors:
Master actor that handles some App actions
DeadLettersListener that is supposed to handle all dead or unhandled messages
Here is the code that works perfectly:
object Hw extends App {
  // creating Master actor
  val masterActorSystem = ActorSystem("Master")
  val master = masterActorSystem.actorOf(Props[Master], "Master")

  // creating Dead Letters listener actor
  val deadLettersActorSystem = ActorSystem.create("DeadLettersListener")
  val listener = deadLettersActorSystem.actorOf(Props[DeadLettersListener])

  // subscribe listener to Master's DeadLetters
  masterActorSystem.eventStream.subscribe(listener, classOf[DeadLetter])
  masterActorSystem.eventStream.subscribe(listener, classOf[UnhandledMessage])
}
According to the Akka manual, though, an ActorSystem is a heavy object and we should create only one per application. But when I replace these lines:
val deadLettersActorSystem = ActorSystem.create("DeadLettersListener")
val listener = deadLettersActorSystem.actorOf(Props[DeadLettersListener])
with this code:
val listener = masterActorSystem.actorOf(Props[DeadLettersListener], "DeadLettersListener")
The subscription does not work any more and DeadLettersListener is not getting any Dead or Unhandled messages.
Can you please explain what I am doing wrong, and advise how to subscribe to dead letters in this case?
I can't really tell what you are doing wrong; I created a small example, and it seems to work:
object Hw extends App {
  class Master extends Actor {
    override def receive: Receive = {
      case a => println(s"$a received in $self")
    }
  }

  class DeadLettersListener extends Actor {
    override def receive: Actor.Receive = {
      case a => println(s"$a received in $self")
    }
  }

  // creating Master actor
  val masterActorSystem = ActorSystem("Master")
  val master = masterActorSystem.actorOf(Props[Master], "Master")
  val listener = masterActorSystem.actorOf(Props[DeadLettersListener])

  // subscribe listener to Master's DeadLetters
  masterActorSystem.eventStream.subscribe(listener, classOf[DeadLetter])
  masterActorSystem.eventStream.subscribe(listener, classOf[UnhandledMessage])

  masterActorSystem.actorSelection("/unexistingActor") ! "yo"
}
Could you try it?
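To also exercise the UnhandledMessage path in the same test (actor and message names are illustrative): an actor whose receive does not match a message routes it through unhandled(), which publishes an UnhandledMessage to the event stream.

class Picky extends Actor {
  override def receive: Receive = {
    case "expected" => println("got the expected message")
    // anything else falls through to unhandled(), which publishes
    // an UnhandledMessage event to the system's event stream
  }
}

val picky = masterActorSystem.actorOf(Props[Picky], "Picky")
picky ! "surprise" // the listener should now receive an UnhandledMessage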