Akka Actor Searching and Streaming Events - scala

I have a scenario where I have a bunch of Akka Actors running with each Actor representing an IoT device. I have a web application based on Play inside which these Actors are running and are connected to these IoT devices.
Now I want to expose the signals from these Actors to the outside world by means of a WebSocket endpoint. Each of the Actors has some sort of mechanism with which I can ask for the latest signal status.
My idea is to do the following:
1. Add a WebSocket endpoint in my controller which expects the id of the IoT device for which it needs the signals. In this controller, I will do an actor selection to get the Actor instance that corresponds to the id of the IoT device that is passed in.
2. Use the ActorRef obtained in step 1 to instantiate the WebSocketActor.
3. In this WebSocketActor, I will instantiate a Monix Observable that will, at regular intervals, use the actorRef and ask it for the signals (a sketch of this follows below).
4. As soon as I get these signals, I will pass them on to the WebSocket endpoint.
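For illustration, a minimal sketch of step 3 (Monix 3.x; the message protocol, timeout and interval are assumptions, not from the question):

import scala.concurrent.duration._
import akka.actor.ActorRef
import akka.pattern.ask
import akka.util.Timeout
import monix.eval.Task
import monix.reactive.Observable

// Illustrative protocol; the real messages depend on the device actors.
case object GetLatestSignals
final case class Signals(payload: String)

class SignalStream(deviceActor: ActorRef) {
  implicit val timeout: Timeout = Timeout(2.seconds)

  // Poll the device actor at a fixed interval and emit whatever it returns.
  val signals: Observable[Signals] =
    Observable
      .intervalAtFixedRate(5.seconds)
      .mapEval(_ => Task.deferFuture((deviceActor ? GetLatestSignals).mapTo[Signals]))
}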
Now my question is:
What happens, say, if a client has opened a WebSocket stream and after some time the Actor representing the IoT device dies? I should probably handle this situation in my WebSocketActor, but what would that look like?
If the Actor representing the IoT device comes back alive (assuming that I have some supervision set up), can I continue serving the client that opened the socket connection before the Actor died? I mean, will the client need to somehow close and reopen the connection?
Any suggestions?

If you'd like to see an Akka actors + Monix integration example, communicating over WebSocket, look no further than the monix-sample project.
The code handles network failure. If you load that sample in the browser and disconnect the network, you'll see it recover once connectivity is back.
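For the actor-death part of the question, a minimal sketch (reusing the illustrative Signals type from the sketch above; `out` stands for the actor Play hands you for pushing frames to the client) is to watch the device actor and react to Terminated:

import akka.actor.{Actor, ActorRef, Props, Terminated}

class WebSocketActor(deviceActor: ActorRef, out: ActorRef) extends Actor {
  override def preStart(): Unit = context.watch(deviceActor)

  def receive: Receive = {
    case s: Signals =>
      out ! s.payload // forward the latest signals to the client
    case Terminated(`deviceActor`) =>
      out ! "device-offline" // tell the client before giving up
      context.stop(self)     // closes the socket; the client must reconnect
  }
}

object WebSocketActor {
  def props(deviceActor: ActorRef, out: ActorRef): Props =
    Props(new WebSocketActor(deviceActor, out))
}

Note that a supervised restart keeps the same ActorRef valid and delivers no Terminated, so an already-open socket keeps working across restarts; only a full stop (with re-creation under a new ActorRef) requires re-resolving the actor or reconnecting.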

Related

Proper way to maintain an SSE connection in Play

I would like to maintain an SSE pipeline in the front end of my Play 2.7.x application, which would listen indefinitely for some irregularly spaced events from the server (possibly triggered by other users). I send the events via a simple Akka flow, like this:
Ok.chunked(mySource via EventSource.flow).as(ContentTypes.EVENT_STREAM)
However, the connection is automatically closed by the Play/Akka server. What would be the best course of action here:
set play.server.http.idleTimeout to infinite (but the documentation does not recommend it; also it would affect other non-SSE endpoints)?
rely on the browser to automatically reestablish the connection (but as far as I know not all browsers do it)?
explicitly implement some reconnection logic in JavaScript on the client?
perhaps idleTimeout can be overridden locally for a specific action (I have not found a way though)?
Periodically send an empty Event to keep the connection alive:
import scala.concurrent.duration._
import play.api.http.ContentTypes
import play.api.libs.EventSource
import play.api.libs.EventSource.Event

val heartbeat = Event("", None, None)
val sseSource =
  mySource
    .via(EventSource.flow)
    .keepAlive(1.second, () => heartbeat)
Ok.chunked(sseSource).as(ContentTypes.EVENT_STREAM)
Akka HTTP's support for server-sent events demonstrates the same approach (Play internally uses Akka HTTP).

Moving from socko to akka-http websockets

I have an existing akka application built on socko websockets. Communication with the sockets takes place inside a single actor, and messages both entering and leaving the actor (incoming and outgoing messages, respectively) are labelled with the socket id, which is a first-class property of a socko websocket (in socko a connection request arrives labelled with the id, and all the lifecycle transitions such as handshaking, disconnection, incoming frames etc. are similarly labelled).
I'd like to reimplement this single actor using akka-http (socko is more-or-less abandonware these days, for obvious reasons), but it's not straightforward because the two libraries are conceptually very different: akka-http hides the lower-level details of the handshaking, disconnection etc., simply sending whichever actor was bound to the http server an UpgradeToWebSocket request header. The header object contains a method that takes a materialized Flow as a handler for all messages exchanged with the client.
So far, so good; I am able to receive messages on the web socket and reply to them directly. The official examples all assume some kind of stateless request-reply model, so I'm struggling to understand how to make the next step: assigning a label to the materialized flow and managing its lifecycle and connection state (I need to inform other actors in the application when a connection is dropped by a client, as well as label the messages).
The alternative (remodelling the whole application using akka-streams) is far too big a job, so any advice about how to keep track of the sockets would be much appreciated.
To interface with an existing actor-based system, you should look at Source.actorRef and Sink.actorRef. Source.actorRef creates an ActorRef that you can send messages to, and Sink.actorRef allows you to process the incoming messages using an actor and also to detect closing of the websocket.
To connect the actor created by Source.actorRef to the existing long-lived actor, use Flow#mapMaterializedValue. This would also be a good place to assign a unique id for a socket connection.
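A minimal sketch of that wiring, assuming a hypothetical connectionActor and message protocol (Connected/Incoming/Disconnected are illustrative names, not part of akka-http):

import java.util.UUID
import akka.actor.ActorRef
import akka.http.scaladsl.model.ws.Message
import akka.stream.OverflowStrategy
import akka.stream.scaladsl.{Flow, Sink, Source}

// Labelled protocol between the websocket stages and the long-lived actor.
final case class Connected(socketId: String, outActor: ActorRef)
final case class Incoming(socketId: String, msg: Message)
final case class Disconnected(socketId: String)

// Builds the handler Flow for one websocket, labelled with a fresh id.
def socketFlow(connectionActor: ActorRef): Flow[Message, Message, Any] = {
  val socketId = UUID.randomUUID().toString

  // Client -> server: label each frame; report the drop when the stream ends.
  val in: Sink[Message, Any] =
    Flow[Message]
      .map(m => Incoming(socketId, m))
      .to(Sink.actorRef(connectionActor, onCompleteMessage = Disconnected(socketId)))

  // Server -> client: materializes an ActorRef to hand to the long-lived actor.
  val out: Source[Message, ActorRef] =
    Source.actorRef[Message](bufferSize = 16, OverflowStrategy.dropHead)
      .mapMaterializedValue { outActor =>
        connectionActor ! Connected(socketId, outActor)
        outActor
      }

  Flow.fromSinkAndSource(in, out)
}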
This answer to a related question might get you started.
One thing to be aware of. The current websocket implementation does not close the server-to-client flow when the client-to-server flow is closed using a websocket close message. There is an issue open to implement this, but until it is implemented you have to do it yourself, for example with a stage in your protocol stack that completes the outgoing side when the incoming side completes.
The answer from Rüdiger Klaehn was a useful starting point, thanks!
In the end I went with ActorPublisher after reading another question here (Pushing messages via web sockets with akka http).
The key thing is that the Flow is 'materialized' somewhere under the hood of akka-http, so you need to pass UpgradeToWebSocket.handleMessagesWithSinkSource a Source/Sink pair that already knows about an existing actor. So I create an actor (which implements ActorPublisher[TextMessage.Strict]) and then wrap it in Source.fromPublisher(ActorPublisher(myActor)).
When you want to inject a message into the stream from the actor's receive method you first check if totalDemand > 0 (i.e. the stream is willing to accept input) and if so, call onNext with the contents of the message.
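A minimal sketch of such a publisher actor (Push is an illustrative message; note that ActorPublisher was later deprecated in favour of Source.queue and Source.actorRef):

import akka.actor.Props
import akka.http.scaladsl.model.ws.TextMessage
import akka.stream.actor.ActorPublisher
import akka.stream.actor.ActorPublisherMessage.{Cancel, Request}

final case class Push(text: String)

class SocketPublisher extends ActorPublisher[TextMessage.Strict] {
  def receive: Receive = {
    case Push(text) if totalDemand > 0 =>
      onNext(TextMessage.Strict(text)) // downstream has demand: emit directly
    case Push(_)    => // no demand: drop here, or buffer until the next Request
    case Request(_) => // demand arrived; a buffering variant would flush here
    case Cancel     => context.stop(self) // client closed the stream
  }
}

object SocketPublisher {
  def props: Props = Props[SocketPublisher]
}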

ZeroMQ mixed PUB/SUB DEALER/ROUTER pattern

I need to do the following:
multiple clients connecting to the SAME remote port
each of the clients opens 2 different sockets: one is a PUB/SUB, the other is a ROUTER/DEALER (the server can occasionally send back to the client heartbeats and different server-related information).
I am completely lost as to whether this can be done in ZeroMQ or not. Obviously if I can use 2 remote ports, that is not an issue, but I fail to understand if my setup can be achieved with some kind of envelope usage in ZeroMQ.
Can it be done?
Thanks,
Update:
To clarify what I wish to achieve:
Multiple clients can communicate with the server.
Clients operate on a request-response basis mostly (on one socket).
Clients create a session socket, which means that whenever this type of socket is created, a separate worker thread needs to be created, and from that time on the client communicates with this worker thread with regards to request processing; i.e. the server thread must not block the connections of other clients while dealing with the request of one client.
However, clients can receive occasional messages from the worker thread with regards to heartbeats of the worker.
Update2:
Actually I could sort it out. What I did:
identify clients, so ROUTER/DEALER is used; clients are indeed dealers, hence async processing is provided
clients send messages to the one and only local port, where the router sits
the router peeks into messages (kind of like the lazy pirate example) and checks whether a new client has come in; if yes, it offloads the client to a separate thread and connects that thread with an internal "inproc:" socket
the router polls the frontend and all connected clients' backends and sends messages back and forth (a simplified sketch of this follows below).
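A simplified JeroMQ/Scala sketch of this layout, using a fixed worker pool behind a ROUTER/DEALER proxy instead of one thread per client (ports, object names and payload handling are assumptions):

import org.zeromq.{SocketType, ZContext, ZMQ}

object SessionBroker extends App {
  val ctx = new ZContext()

  // The one and only TCP port; clients connect their DEALER sockets here.
  val frontend = ctx.createSocket(SocketType.ROUTER)
  frontend.bind("tcp://*:5570")

  // Workers connect over inproc, as in the design above.
  val backend = ctx.createSocket(SocketType.DEALER)
  backend.bind("inproc://workers")

  for (_ <- 1 to 3) new Thread(() => {
    // REP expects an empty delimiter frame, so DEALER clients must send one
    // before the payload (mimicking what REQ does automatically).
    val worker = ctx.createSocket(SocketType.REP)
    worker.connect("inproc://workers")
    while (!Thread.currentThread().isInterrupted) {
      val request = worker.recvStr()
      worker.send("processed: " + request) // heartbeats could be sent here too
    }
  }).start()

  // Shuttle frames between clients and workers in both directions.
  ZMQ.proxy(frontend, backend, null)
}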
What bugs me is that this is overkill compared with a "regular" socket solution, where I could have connected the client with the worker thread DIRECTLY (i.e. the worker thread could recv from the socket opened by the client directly), hence I could spare the routing completely.
What am I missing?
There was a discussion on the ZeroMQ mailing list recently about multiplexing multiple services on one TCP socket. The proposed solution is essentially what you implemented.
The discussion also mentions Malamute with its brokers, which essentially provides a framework on top of ZeroMQ with the functionality you need. I haven't had the time to look into it myself, but it looks promising.

Implementing a message bus using ZeroMQ

I have to develop a message bus for processes to send and receive messages from each other. Currently we are running on Linux, with a view to porting to other platforms later.
For this, I am using ZeroMQ over TCP. The pattern is PUB-SUB with a forwarder. My bus runs as a separate process and all clients connect to the SUB port to receive messages and the PUB port to send messages. Each process subscribes to messages by a unique tag. A send call from a process sends messages to all. A receive call will fetch for that process the messages marked with its tag. This is working fine.
Now I need to wrap the ZeroMQ stuff. My clients only need to supply a unique tag. I need to maintain a global list of tags vs. ZeroMQ context and socket details. When a client says initialize_comms("name"), the bus needs to check if this name is unique and create the ZeroMQ contexts and sockets. Similarly, if a client says receive("name"), the bus needs to fetch messages with that tag.
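A minimal sketch of such a wrapper (JeroMQ from Scala; the port and the duplicate-tag policy are assumptions):

import org.zeromq.{SocketType, ZContext, ZMQ}
import scala.collection.concurrent.TrieMap

object Comms {
  private val ctx = new ZContext()
  private val subs = TrieMap.empty[String, ZMQ.Socket] // global tag registry

  // Fails if the tag is already taken, as required above.
  def initializeComms(tag: String): Boolean =
    if (subs.contains(tag)) false
    else {
      val sub = ctx.createSocket(SocketType.SUB)
      sub.connect("tcp://localhost:5560") // the forwarder's subscriber-facing port
      sub.subscribe(tag.getBytes(ZMQ.CHARSET))
      subs.put(tag, sub)
      true
    }

  // Fetches the next message published under this tag (blocking).
  def receive(tag: String): Option[String] =
    subs.get(tag).map(_.recvStr())
}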
To summarize the problems I am facing:
Is there any way to achieve this using facilities provided by ZeroMQ?
Is ZeroMQ the right tool for this, or should I look for something like nanomsg?
Is PUB-SUB with forwarder the right pattern for this?
Or, am I missing something here?
Answers
Yes, ZeroMQ is capable of serving this need
Yes. ZeroMQ is the right tool (rather, a powerful toolbox of low-latency components) for this. While nanomsg has a straight primitive for a bus, the core distributed logic can be integrated in the ZeroMQ framework.
Yes & No. PUB-SUB as given above may serve for emulating the "shout-cast"-to-bus, building on the SUB-side effect of using subscription key(s). The WHOLE REST of the logic has to be re-thought and designed so that the whole scope of the fabrication meets your plans (ref. below). Also kindly bear in mind that initial versions of ZeroMQ performed the PUB/SUB "subscription filtering" of the incoming stream of messages on the receiver side, so massive designs shall check against traffic volumes / risk of flooding / process inefficiency at massive scale...
Yes. ZeroMQ is rather a well-tuned foundation of primitive elements (as far as the architecture is discussed, not the power & performance thereof) to build more clever, more robust & almost-linearly-scaleable Formal Communication Pattern(s). Do not get stuck on PUB/SUB or PAIR primitives when sketching the Architecture. Any design will remain poor if one forgets where the True Powers come from.
A good place to start the next step forward towards a scaleable & fault-resilient Bus
Thus the best next step, IMHO, is to get a bit more global view. This may sound complicated for the first few things one tries to code with ZeroMQ, but jump at least to page 265 of Code Connected, Volume 1, if you have not read it step-by-step up to there.
The fastest learning curve would be to first have a look at Fig. 60 (Republishing Updates) and Fig. 62 (HA Clone Server) for a possible high-availability approach, and then go back to the roots, elements and details.
Here is what I ended up designing, if anyone is interested. Thanks everyone for the tips and pointers.
I have a message bus implemented using ZeroMQ (and CZMQ) running as a separate process.
The pattern is PUBLISHER-SUBSCRIBER with a LISTENER. They are connected using a PROXY.
In addition, there is a ROUTER invoked using a newly forked thread.
These three endpoints run on TCP and are bound to predefined ports which the clients know of.
PUBLISHER accepts all messages from clients.
SUBSCRIBER sends messages with a unique tag to the clients who have subscribed to that tag.
LISTENER listens to all messages passing through; currently this is for logging and testing purposes.
ROUTER provides a separate comms channel to clients. Messages such as control commands are directed here so that they will not get passed downstream.
Clients connect to,
PUBLISHER to send messages.
SUBSCRIBER to receive messages. Subscription is using unique tags.
ROUTER to send commands (check tag uniqueness etc.)
I am still doing implementation so there may be unseen problems, but right now it works fine. Also, there may be a more elegant way but I didn't want to throw away the PUB-SUB thing I had built.
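For reference, the forwarder core of such a bus can be sketched as an XSUB/XPUB proxy whose capture socket plays the LISTENER role (JeroMQ from Scala; ports are illustrative and the separate ROUTER control channel is omitted):

import org.zeromq.{SocketType, ZContext, ZMQ}

object BusForwarder extends App {
  val ctx = new ZContext()

  val frontend = ctx.createSocket(SocketType.XSUB) // clients PUB here
  frontend.bind("tcp://*:5559")

  val backend = ctx.createSocket(SocketType.XPUB)  // clients SUB here
  backend.bind("tcp://*:5560")

  val listener = ctx.createSocket(SocketType.PUB)  // mirror of all traffic
  listener.bind("inproc://listener")

  // Shuttles messages between publishers and subscribers and copies
  // every frame to the capture (LISTENER) socket.
  ZMQ.proxy(frontend, backend, listener)
}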

Play framework: service to service continuous communication

I need some advice/insight on how to best implement certain functionality. The idea of my task is a live system-monitoring dashboard.
Let's say I have a following setup based on two physical servers:
Server1 is running Play application which monitors certain files, services, etc for changes. As soon as change occurs it alerts another Play application running on Server2.
Server2 is running a Play application that serves a web front end displaying live dashboard data being sent to it from Play application sitting on Server1.
I am only familiar with the Play framework serving data in response to HTTP requests, but the way I need it to run in this particular situation is a bit different.
My question is: how do I keep these two Play applications in constant communication as described above? The requirement is that the Server1 application pushes data to the Server2 application on a need basis, as opposed to the Server2 application running in an endless loop and asking the Server1 application every 5 seconds whether there is any new data.
I'm using Play Framework 2.2.1 with Scala.
Actually Akka introduced in Play 2.0 perfectly fits your requirements (as Venkat pointed).
Combining its remoting, scheduler and futures possibilities you will be able to build every monitor you need.
A scenario may be, e.g.:
S1 (let's name it a Doctor) uses Akka's scheduler to monitor resources every several seconds
if the Doctor detects changes, it sends an Akka message to S2's actor (FrontEnd); otherwise it does nothing.
The mentioned FrontEnd actor can add the event to some queue, or push it directly, e.g. to some WebSocket, which will push it to the browser. Another option is setting up a scheduler at the FrontEnd which will check if the queue contains new events.
Check the included sample applications for how you can communicate between your FrontEnd and the browser (e.g. comet-live-monitoring or eventsource-clock).
For communication between a Doctor and FrontEnd apps, akka-remote is a promising feature.
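A minimal sketch of the Doctor side (the actor path, port, message names and the change-detection stub are all assumptions):

import scala.concurrent.duration._
import akka.actor.{Actor, ActorSystem, Props}

case object CheckResources
final case class ChangeDetected(info: String)

class Doctor extends Actor {
  // akka-remote: resolve the FrontEnd actor running on Server2 by its path.
  private val frontEnd =
    context.actorSelection("akka.tcp://frontend@server2:2552/user/frontEnd")

  def receive: Receive = {
    case CheckResources =>
      // Push only when something actually changed (the "need basis").
      detectChanges().foreach(change => frontEnd ! ChangeDetected(change))
  }

  private def detectChanges(): Option[String] = None // real file/service check goes here
}

object Server1 extends App {
  val system = ActorSystem("doctor")
  val doctor = system.actorOf(Props[Doctor], "doctor")
  import system.dispatcher
  // Every 5 seconds, tell the Doctor to inspect the monitored resources.
  system.scheduler.schedule(5.seconds, 5.seconds, doctor, CheckResources)
}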
I think Server-Sent Events (SSE: http://dev.w3.org/html5/eventsource/) are what you are looking for. Since it's supposed to be one-directional push only (Server1 pushes data to Server2), SSE is probably a better choice than WebSockets, which are full-duplex bidirectional connections. Since your Server2 has a web front end, the browser can automatically reconnect to Server1 if you are using SSE. Most modern browsers support SSE (IE doesn't).
Since you are using Play Framework: You can use Play WS API for Service to Service communication and also you can take advantage of the powerful abstractions for handling data asynchronously like Enumerator and Iteratee. As Play! integrates seamlessly with Akka, you can manage/supervise the HTTP connection using Actors.
Edit:
Answering "How exactly one service can push data to another on a need basis" in steps:
Manage the HTTP connection: Server1 needs a WebService client to manage the HTTP connection with Server2. By "manage the HTTP connection" I mean: reconnect/reset/disconnect the HTTP connection. Akka Actors are a great fit for solving this problem. Basically, this actor receives messages like CONNECT, CHECK_CONN_STATUS, DISCONNECT, RESET etc. Have a scheduler for your HttpSupervisor actor check the connection status, so that you can reconnect if the connection is dead.
import scala.concurrent.duration._

val system = ActorSystem("Monitor")
import system.dispatcher
val supervisorRef = system.actorOf(Props(new HttpSupervisor(system.eventStream)), "MonitorSupervisor")
system.scheduler.schedule(60.seconds, 60.seconds, supervisorRef, CHECK_CONN_STATUS)
Listen to the changes and PUSH on need:
Create an Enumerator which produces the changes. Create an Iteratee for consuming the changes asynchronously. Again, some code that may be of help:
val monitorIteratee = play.api.libs.iteratee.Iteratee.foreach[Array[Byte]] { bytes =>
  WS.url(postActionURLOnServer2).post(new String(bytes, "UTF-8"))
}
Attach the iteratee to the enumerator.
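For completeness, attaching could look like this (the enumerator of change events is assumed, not from the answer):

// Hypothetical enumerator producing the change events on Server1.
val changeEnumerator: play.api.libs.iteratee.Enumerator[Array[Byte]] = ???
changeEnumerator |>> monitorIteratee // |>> is an alias for apply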