I have an existing akka application built on socko websockets. Communication with the sockets takes place inside a single actor, and messages both entering and leaving the actor (incoming and outgoing messages, respectively) are labelled with the socket id, which is a first-class property of a socko websocket (in socko a connection request arrives labelled with the id, and all lifecycle transitions such as handshaking, disconnection, incoming frames etc. are similarly labelled).
I'd like to reimplement this single actor using akka-http (socko is more-or-less abandonware these days, for obvious reasons), but it's not straightforward because the two libraries are conceptually very different: akka-http hides the lower-level details of handshaking, disconnection etc., simply sending whichever actor is bound to the http server an UpgradeToWebSocket request header. The header object contains a method that takes a Flow, which akka-http materializes internally, as the handler for all messages exchanged with the client.
So far, so good; I am able to receive messages on the web socket and reply to them directly. The official examples all assume some kind of stateless request-reply model, so I'm struggling to understand how to take the next step: assigning a label to the materialized flow, and managing its lifecycle and connection state (I need to inform other actors in the application when a connection is dropped by a client, as well as label the messages).
The alternative (remodelling the whole application using akka-streams) is far too big a job, so any advice about how to keep track of the sockets would be much appreciated.
To interface with an existing actor-based system, you should look at Source.actorRef and Sink.actorRef. Source.actorRef creates an ActorRef that you can send messages to, and Sink.actorRef allows you to process the incoming messages using an actor and also to detect closing of the websocket.
To connect the actor created by Source.actorRef to the existing long-lived actor, use Flow#mapMaterializedValue. This would also be a good place to assign a unique id to each socket connection.
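A minimal sketch of that wiring, assuming a 2.4-era akka-streams API. The Connected/IncomingFrame/Disconnected protocol and the manager actor are illustrative assumptions, and the combine function of Flow.fromSinkAndSourceMat plays the role of mapMaterializedValue here:

```scala
import java.util.UUID

import akka.actor.ActorRef
import akka.http.scaladsl.model.ws.{Message, TextMessage}
import akka.stream.OverflowStrategy
import akka.stream.scaladsl.{Flow, Sink, Source}

// Hypothetical protocol between the socket layer and the long-lived manager actor.
final case class Connected(id: String, outgoing: ActorRef)
final case class IncomingFrame(id: String, text: String)
final case class Disconnected(id: String)

def socketFlow(manager: ActorRef): Flow[Message, Message, Any] = {
  val id = UUID.randomUUID().toString  // the per-connection label

  // Client-to-server half: tag every frame with the socket id; the manager
  // receives Disconnected(id) when the client side of the stream completes.
  val in = Flow[Message]
    .collect { case TextMessage.Strict(text) => IncomingFrame(id, text) }
    .to(Sink.actorRef(manager, onCompleteMessage = Disconnected(id)))

  // Server-to-client half: materializes an ActorRef you can send TextMessages to.
  val out = Source.actorRef[TextMessage](16, OverflowStrategy.dropHead)

  // The combine function sees the materialized ActorRef and hands it,
  // together with the id, to the manager.
  Flow.fromSinkAndSourceMat(in, out) { (_, outgoing) =>
    manager ! Connected(id, outgoing)
  }
}
```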
This answer to a related question might get you started.
One thing to be aware of: the current websocket implementation does not close the server-to-client flow when the client-to-server flow is closed by a websocket close message. There is an issue open to implement this, but until it is implemented you have to do this yourself, for example by having something like the following in your protocol stack.
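The original snippet is not preserved here, but one hedged way to do it, reusing the Connected/Disconnected messages from the sketch above, is to have the manager explicitly complete the outbound Source.actorRef when the inbound side signals completion:

```scala
import akka.actor.{Actor, ActorRef, Status}

// Sketch of the manager side: when the client-to-server half completes,
// complete the Source.actorRef so the server-to-client half closes too.
class ConnectionManager extends Actor {
  private var connections = Map.empty[String, ActorRef]

  def receive = {
    case Connected(id, outgoing) =>
      connections += id -> outgoing
    case Disconnected(id) =>
      // Sending Status.Success to a Source.actorRef actor completes its stream.
      connections.get(id).foreach(_ ! Status.Success(()))
      connections -= id
      // ...also notify any other interested actors here
  }
}
```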
The answer from Rüdiger Klaehn was a useful starting point, thanks!
In the end I went with ActorPublisher after reading another question here (Pushing messages via web sockets with akka http).
The key thing is that the Flow is 'materialized' somewhere under the hood of akka-http, so you need to pass into UpgradeToWebSocket.handleMessagesWithSinkSource a Source/Sink pair that already know about an existing actor. So I create an actor (which implements ActorPublisher[TextMessage.Strict]) and then wrap it in Source.fromPublisher(ActorPublisher(myActor)).
When you want to inject a message into the stream from the actor's receive method you first check if totalDemand > 0 (i.e. the stream is willing to accept input) and if so, call onNext with the contents of the message.
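Putting it together, a minimal sketch (the String-based protocol and actor names are made up for illustration; ActorPublisher is the pre-Akka-2.5 API that the linked question uses):

```scala
import akka.actor.{ActorRef, ActorSystem, Props}
import akka.http.scaladsl.model.ws.TextMessage
import akka.stream.actor.ActorPublisher
import akka.stream.scaladsl.Source

class SocketPublisher extends ActorPublisher[TextMessage.Strict] {
  def receive = {
    case text: String if totalDemand > 0 =>
      onNext(TextMessage.Strict(text))  // the stream has demand: push the frame
    case _: String =>
      // no demand right now: drop the message (or buffer it yourself)
  }
}

val system = ActorSystem("ws")
val myActor: ActorRef = system.actorOf(Props[SocketPublisher])

// Wrap the actor so it can be handed to handleMessagesWithSinkSource as the Source.
val source = Source.fromPublisher(ActorPublisher[TextMessage.Strict](myActor))
// upgrade.handleMessagesWithSinkSource(incomingSink, source)
```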
Related
I have a scenario where I have a bunch of Akka Actors running with each Actor representing an IoT device. I have a web application based on Play inside which these Actors are running and are connected to these IoT devices.
Now I want to expose the signals from these Actors to the outside world by means of a WebSocket endpoint. Each of the Actors has some sort of mechanism with which I can ask for the latest signal status.
My idea is to do the following:
Add a WebSocket endpoint in my controller which expects the id of the IoT device for which it needs the signals. In this controller, I will do an actor selection to get the Actor instance that corresponds to the id of the IoT device that is passed in.
Use the ActorRef obtained in step 1 and instantiate the WebSocketActor
In this WebSocketActor, I will instantiate a Monix Observable that will, at regular intervals, use the actorRef and ask it for the signals (see the sketch after this list).
As soon as I get these signals, I will pass them on to the WebSocket endpoint
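For the polling step, a minimal sketch of what that Observable could look like (Monix 3.x API; GetSignals/Signals are a hypothetical ask-protocol for the device actor):

```scala
import scala.concurrent.duration._
import akka.actor.ActorRef
import akka.pattern.ask
import akka.util.Timeout
import monix.eval.Task
import monix.reactive.Observable

// Hypothetical protocol: the device actor replies to GetSignals with Signals.
case object GetSignals
final case class Signals(values: Map[String, Double])

def signalStream(device: ActorRef)(implicit timeout: Timeout): Observable[Signals] =
  Observable
    .intervalAtFixedRate(1.second)  // tick once per second
    .mapEval(_ => Task.deferFuture((device ? GetSignals).mapTo[Signals]))
```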
Now my question is:
What happens, say, if a client has opened a WebSocket stream and after some time the Actor representing the IoT device is dead? I probably should handle this situation in my WebSocketActor, but what would that look like?
If the Actor representing the IoT device comes back alive (assuming that I have some supervision set up), can I continue serving the client that opened the socket connection before the Actor died? I mean, will the client need to somehow close and open the connection again?
Any suggestions?
If you'd like to see an Akka actors + Monix integration example, communicating over WebSocket, look no further than the monix-sample project.
The code handles network failure: if you load that sample in the browser and disconnect the network, you'll see it recover once connectivity is back on.
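On the actor-death part of the question: one way is to watch the device actor and react to its termination inside the WebSocketActor. A minimal sketch (Signals is the type from the sketch above; DeviceUnavailable and `out` are illustrative):

```scala
import akka.actor.{Actor, ActorRef, Terminated}

final case class DeviceUnavailable(path: String)

class WebSocketActor(device: ActorRef, out: ActorRef) extends Actor {
  context.watch(device)  // we receive Terminated(device) if and only if it stops

  def receive = {
    case s: Signals =>
      out ! s  // forward the latest signals to the client
    case Terminated(`device`) =>
      // Tell the client instead of silently going quiet; the socket can stay
      // open while supervision (or a later actorSelection) brings the device back.
      out ! DeviceUnavailable(device.path.toString)
  }
}
```

Note that a supervisor restart (as opposed to a stop) keeps the same ActorRef alive and delivers no Terminated message, so in that case the open connection simply keeps working and the client does not need to reconnect.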
I'm learning akka streams, but obviously it's relevant to any streaming framework :)
Quoting the akka documentation:
"Reactive Streams is just to define a common mechanism of how to move data across an asynchronous boundary without losses, buffering or resource exhaustion"
Now, from what I understand: before streams, let's take an http server for example, a request would come in, and if the receiver wasn't finished with the current request, the new requests that kept coming would be collected in a buffer holding the waiting requests. The problem is that this buffer has an unknown size, and at some point, if the server is overloaded, we can lose requests that were waiting.
So then stream processing came into play, and it bounds this buffer so that it is controllable... we can predefine the number of messages (requests in my example) we want to have in line, and take care of each one at a time.
My question: if we implement a source in our server that can hold 3 messages at most, what happens if a 4th one comes in?
I mean, when another server calls us and we are already taking care of 3 requests... what will happen to its request?
What you're describing is not actually the main problem that Reactive Streams implementations solve.
Backpressure in terms of the number of requests is solved with regular networking tools. For example, in Java you can configure a thread pool of a networking library (for example Netty) to some parallelism level, and the library will take care of accepting as many requests as possible. Or, if you use the synchronous sockets API, it is even simpler - you can postpone calling accept() on the server socket until all of the currently connected clients are served. In either case, there is no "buffer" on either side; it's just that until the server accepts a connection, the client will be blocked (either inside a system call for blocking APIs, or in an event loop for async APIs).
What Reactive Streams implementations solve is how to handle backpressure inside a higher-level data pipeline. Reactive streams implementations (e.g. akka-streams) provide a way to construct a pipeline of data in which, when the consumer of the data is slow, the producer will slow down automatically as well, and this would work across any kind of underlying transport, be it HTTP, WebSockets, raw TCP connections or even in-process messaging.
For example, consider a simple WebSocket connection, where the client sends a continuous stream of information (e.g. data from some sensor) and the server writes this data to some database. Now suppose that the database on the server side becomes slow for some reason (networking problems, disk overload, whatever). The server now can't keep up with the data the client sends, that is, it cannot save it to the database in time before the new piece of data arrives. If you're using a reactive streams implementation throughout this pipeline, the server will automatically signal to the client that it cannot process more data, and the client will automatically adjust its production rate so as not to overload the server.
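A minimal akka-streams sketch of that behaviour, where a deliberately slow sink stands in for the database write (names are illustrative):

```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Sink, Source}

implicit val system = ActorSystem("backpressure-demo")
implicit val materializer = ActorMaterializer()

// An unbounded producer against a slow consumer: demand flows upstream, so
// the source only emits as fast as the sink processes. Nothing is buffered
// without bound and nothing is lost.
Source.fromIterator(() => Iterator.from(1))
  .runWith(Sink.foreach { n =>
    Thread.sleep(100)          // simulate a slow database write
    println(s"saved record $n")
  })
```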
Naturally, this can be done without any Reactive Streams implementation, e.g. by manually controlling acknowledgements. However, like many other libraries, Reactive Streams implementations solve this problem for you. They also provide an easy way to define such pipelines, and usually they have interfaces for various external systems like databases. In particular, such libraries may implement backpressure on the lowest level, down to the TCP connection, which may be hard to do manually.
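As for the original "what happens to the 4th message" question: within akka-streams that is an explicit choice, made with a bounded buffer and an OverflowStrategy. A sketch (same implicits as the previous example):

```scala
import akka.stream.OverflowStrategy
import akka.stream.scaladsl.{Sink, Source}

// An explicit 3-element buffer: the strategy decides the fate of a 4th
// element arriving while the buffer is full.
Source(1 to 100)
  .buffer(3, OverflowStrategy.backpressure) // slow the producer down (the Reactive Streams default)
  //.buffer(3, OverflowStrategy.dropNew)    // or: drop the newly arrived element
  //.buffer(3, OverflowStrategy.dropHead)   // or: drop the oldest buffered element
  //.buffer(3, OverflowStrategy.fail)       // or: fail the whole stream
  .runWith(Sink.foreach(println))
```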
As for Reactive Streams itself, it is just a description of an API which can be implemented by a library; it defines common terms and behavior and allows such libraries to be interchangeable or to interact easily. For example, you can connect an akka-streams pipeline to a Monix pipeline using the interfaces from the specification, and the combined pipeline will work seamlessly, supporting all of the backpressure features of Reactive Streams.
I have to develop a message bus for processes to send and receive messages from each other. Currently, we are running on Linux, with a view to porting to other platforms later.
For this, I am using ZeroMQ over TCP. The pattern is PUB-SUB with a forwarder. My bus runs as a separate process, and all clients connect to the SUB port to receive messages and the PUB port to send messages. Each process subscribes to messages by a unique tag. A send call from a process sends messages to all. A receive call will fetch for that process the messages marked with its tag. This is working fine.
Now I need to wrap the ZeroMQ stuff. My clients only need to supply a unique tag. I need to maintain a global list of tags vs. ZeroMQ context and socket details. When a client says initialize_comms("name");, the bus needs to check whether this name is unique and create the ZeroMQ contexts and sockets. Similarly, when a client says receive("name");, the bus needs to fetch messages with that tag.
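ZeroMQ itself has no notion of a global tag registry, so the wrapper has to keep one. A minimal sketch of that layer in Scala over the JeroMQ bindings (the endpoints, map layout and method names are illustrative assumptions):

```scala
import org.zeromq.{SocketType, ZContext, ZMQ}
import scala.collection.mutable

object Bus {
  private val ctx = new ZContext()
  private val clients = mutable.Map.empty[String, (ZMQ.Socket, ZMQ.Socket)]

  def initializeComms(tag: String): Boolean = clients.synchronized {
    if (clients.contains(tag)) false  // reject: the tag must be unique
    else {
      val pub = ctx.createSocket(SocketType.PUB)
      pub.connect("tcp://localhost:5559")       // the forwarder's inbound side
      val sub = ctx.createSocket(SocketType.SUB)
      sub.connect("tcp://localhost:5560")       // the forwarder's outbound side
      sub.subscribe(tag.getBytes(ZMQ.CHARSET))  // prefix-filter on this tag
      clients(tag) = (pub, sub)
      true
    }
  }

  def receive(tag: String): Option[String] =
    clients.synchronized(clients.get(tag)).map { case (_, sub) => sub.recvStr() }
}
```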
To summarize the problems I am facing:
Is there any way to achieve this using the facilities provided by ZeroMQ?
Is ZeroMQ the right tool for this, or should I look for something like nanomsg?
Is PUB-SUB with forwarder the right pattern for this?
Or, am I missing something here?
Answers
Yes, ZeroMQ is capable of serving this need
Yes. ZeroMQ is the right tool ( rather a powerful tool-box of low-latency components ) for this. While nanomsg has a straight primitive for bus, the core distributed logic can be integrated within the ZeroMQ framework
Yes & No. PUB-SUB as given above may serve for emulation of the "shout-cast"-to-bus and build on a SUB side-effect of using a subscription key(s). The WHOLE REST of the logic has to be re-thought and designed so as the whole scope of the fabrication meets your plans (ref. below). Also kindly bear in mind, that initial versions of ZeroMQ operated PUB/SUB primitive as "subscription filtering" of the incoming stream of messages being done on receiver side, so massive designs shall check against traffic-volumes / risk-of-flooding / process-inefficiency on the massive scale...
Yes. ZeroMQ is rather a well-tuned foundation of primitive elements ( as far as the architecture is discussed, not the power & performance thereof ) from which to build more clever, more robust & almost-linearly-scalable Formal Communication Pattern(s). Do not get stuck on the PUB/SUB or PAIR primitives when sketching the Architecture. Any design will remain poor if one forgets where the True Power comes from.
A good place to start the next step towards a scalable & fault-resilient Bus
Thus the best next step one may take is, IMHO, to get a bit more global view - which may sound complicated for the first few things one tries to code with ZeroMQ - by jumping at least to page 265 of Code Connected, Volume 1, if not reading step-by-step up to there.
The fastest learning curve would be to first have a look at Fig. 60 (Republishing Updates) and Fig. 62 (HA Clone Server) for a possible high-availability approach, and then go back to the roots, elements and details.
Here is what I ended up designing, if anyone is interested. Thanks everyone for the tips and pointers.
I have a message bus implemented using ZeroMQ (and CZMQ) running as a separate process.
The pattern is PUBLISHER-SUBSCRIBER with a LISTENER. They are connected using a PROXY.
In addition, there is a ROUTER invoked using a newly forked thread.
These three endpoints run on TCP and are bound to predefined ports which the clients know of.
PUBLISHER accepts all messages from clients.
SUBSCRIBER sends messages with a unique tag to the clients who have subscribed to that tag.
LISTENER listens to all messages passing through. Currently, this is for logging and testing purposes.
ROUTER provides a separate comms channel to clients. Messages such as control commands are directed here so that they will not get passed downstream.
Clients connect to,
PUBLISHER to send messages.
SUBSCRIBER to receive messages. Subscription is using unique tags.
ROUTER to send commands (check tag uniqueness etc.)
I am still doing implementation so there may be unseen problems, but right now it works fine. Also, there may be a more elegant way but I didn't want to throw away the PUB-SUB thing I had built.
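For the curious, a condensed sketch of what the bus process itself might look like with JeroMQ. The ports are illustrative, and ZMQ.proxy provides the PROXY wiring, with the capture socket acting as the LISTENER tap:

```scala
import org.zeromq.{SocketType, ZContext, ZMQ}

val ctx = new ZContext()

val frontend = ctx.createSocket(SocketType.XSUB)  // clients' PUB sockets connect here
frontend.bind("tcp://*:5559")

val backend = ctx.createSocket(SocketType.XPUB)   // clients' SUB sockets connect here
backend.bind("tcp://*:5560")

val listener = ctx.createSocket(SocketType.PUB)   // capture tap: a copy of all traffic
listener.bind("tcp://*:5561")

// The built-in proxy shuttles messages between frontend and backend and
// mirrors everything to the capture socket. This call blocks.
ZMQ.proxy(frontend, backend, listener)
```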
It's not clear how to use the JAIN SIP stack in a multi-threaded environment. I need to create multiple SIP sessions from different threads, e.g. each client should be processed in its own transaction. Below are a few options:
Use single SipProvider for receiving and sending SIP requests and do multiplexing on application side. SipProvider is not thread-safe, hence sending requests requires proper locking.
Create a new SipProvider and a new ListeningPoint for each client. This is how it works for me now. However, I don't really like it, and it's not clear whether SipStack is thread-safe or not
Create a new instance of SipStack for every client
Its been a long time since I thought about JAIN-SIP (or even SIP for that matter or even Java) but here goes:
Set the re-entrant listener flag when you create the stack. (look up the javadocs). Specify a thread pool size. When a sip request or response comes along, the stack may potentially create a new thread for you and invoke your listener.
Your critical section is the SipListener implementation. You should not block forever in it - otherwise new inbound requests and responses will not be routed to the sip listener for the transaction that is being processed at the time you blocked.
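A minimal sketch of the configuration being described, using the NIST reference implementation's documented property names (the stack name and pool size are illustrative):

```scala
import java.util.Properties
import javax.sip.SipFactory

val props = new Properties()
props.setProperty("javax.sip.STACK_NAME", "myStack")
// NIST-specific knobs: deliver events re-entrantly from a bounded thread pool.
props.setProperty("gov.nist.javax.sip.REENTRANT_LISTENER", "true")
props.setProperty("gov.nist.javax.sip.THREAD_POOL_SIZE", "8")

val sipFactory = SipFactory.getInstance()
sipFactory.setPathName("gov.nist")  // use the NIST reference implementation
val sipStack = sipFactory.createSipStack(props)
```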
Hope that answers your question. Happy hacking.
That's it.
Why don't you use SIP Servlets? It lets you focus on your application logic and handles those details for you.
See http://code.google.com/p/sipservlets/
I am looking at the websocket-chat example. It reveals a lot, but there is still something I cannot get. I understand how messages are received, processed and sent on the web page side.
However, Play captures websocket messages by means of the receive method of an Akka actor. In the websocket-chat, there are several cases in this method, but I don't get how it knows which websocket message should be mapped to which case. In fact, I don't understand the path that a websocket message follows upon entering Play's domain, how it is processed, and how different message types/kinds can be sent from the webpage.
I have not found any info or sources related to this. Could someone please explain this or point to some kind of good reference?
UPDATE:
The link to the original example.
The receive method from the sample doesn't have any link to the Play Websocket API. This receive method comes from the Akka library.
The websocket events are managed through an Iteratee, which creates and sends a Talk message to the actor system.
Simply put, it allows you to have a highly scalable, non-blocking system, by sending messages between "workers".
So I suggest that you take a look at the Actor model in the Akka library.
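A condensed sketch of that wiring, roughly following the Play 2.2-era Iteratee API (Talk and the room actor are names from the websocket-chat example; details trimmed):

```scala
import akka.actor.ActorRef
import play.api.libs.concurrent.Execution.Implicits._
import play.api.libs.iteratee.{Concurrent, Iteratee}
import play.api.libs.json.JsValue
import play.api.mvc.WebSocket
import scala.concurrent.Future

case class Talk(username: String, text: String)  // as in the sample's protocol

def chat(username: String, roomActor: ActorRef) = WebSocket.async[JsValue] { request =>
  // Inbound: every websocket frame becomes a Talk message sent to the actor,
  // which is why the actor's receive can pattern-match on message types.
  val in = Iteratee.foreach[JsValue] { event =>
    roomActor ! Talk(username, (event \ "text").as[String])
  }
  // Outbound: anything pushed into `channel` travels back over the socket.
  val (out, channel) = Concurrent.broadcast[JsValue]
  Future.successful((in, out))
}
```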