GWT Server Push using Jetty Continuations?

I'm supposed to implement a web application where the user logs in and thereby registers for some sort of events (in this case, alarms). When an alarm happens, the server needs to push it to all of the clients.
At the moment I'm using
GWT on the Client side
Jetty on the Server side
Is implementing the server push by using Jetty Continuations a good idea? My requirements are:
the number of clients will be quite small (<20) but could increase in the future
alarms must not get lost (i.e. if a client is down, it must not miss any alarms)
if a client goes down, other clients need to be informed about it (or at least the admin should receive some sort of notification, e.g. by email).

The main reason for using Comet (e.g. Jetty Continuations) is that it reduces the polling frequency. In other words, you can achieve the same thing without Comet by using frequent polling from the client side. Which alternative to choose depends on the characteristics of your application; depending on those, each alternative can be more or less efficient than the other!
In your case, since you need notifications when a client goes down, it makes sense to use frequent polling. Comet (long polling) is not well suited for this task: by its very principle, a long time can pass before a client sends a new request, and receiving a new request is the only way a server can know that a client is still alive (remember that a web server, Comet or not, can never send a request to the client).
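To illustrate the frequent-polling approach: a minimal sketch, assuming a Java servlet backend, of how the server could detect dead clients. Every poll refreshes a per-client last-seen timestamp, and a background task flags clients whose timestamp has gone stale. All class and method names here are hypothetical.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ClientLivenessTracker {
    private static final long TIMEOUT_MS = 10_000; // assumes clients poll every ~3 s

    private final Map<String, Long> lastSeen = new ConcurrentHashMap<>();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public ClientLivenessTracker() {
        // Periodically sweep for clients that have stopped polling.
        scheduler.scheduleAtFixedRate(this::sweep, TIMEOUT_MS, TIMEOUT_MS, TimeUnit.MILLISECONDS);
    }

    // Called from the polling servlet on every client request.
    public void touch(String clientId) {
        lastSeen.put(clientId, System.currentTimeMillis());
    }

    private void sweep() {
        long now = System.currentTimeMillis();
        for (Map.Entry<String, Long> entry : lastSeen.entrySet()) {
            if (now - entry.getValue() > TIMEOUT_MS) {
                lastSeen.remove(entry.getKey());
                notifyClientDown(entry.getKey()); // e.g. mail the admin, inform the other clients
            }
        }
    }

    private void notifyClientDown(String clientId) {
        // Hypothetical hook: send the mail / push a "client down" event to the others.
        System.out.println("Client down: " + clientId);
    }
}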

Your requirement that alarms must not get lost implies a more complicated solution than plain long polling or frequent polling.
Your client should send an acknowledgement message to the server: the user could close the application just after an alarm message arrives, and without an acknowledgement that alarm would be lost.
Also, your user should click an alarm message to acknowledge it to the server. You can put a time limit on the acknowledgement; if the client does not send an ack message within that limit, you can assume the alarm has been lost.
Long polling with an acknowledgement algorithm would be my choice to solve your problem.
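As a rough illustration of that acknowledgement idea (my own sketch in Java, with hypothetical names, not a reference implementation): the server remembers which alarms it has delivered, removes them on ack, and treats anything unacknowledged past the time limit as lost.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class AlarmDeliveryTracker {
    private static final long ACK_TIMEOUT_MS = 30_000;

    // clientId -> (alarmId -> delivery timestamp)
    private final Map<String, ConcurrentHashMap<Long, Long>> pending = new ConcurrentHashMap<>();
    private final ScheduledExecutorService sweeper = Executors.newSingleThreadScheduledExecutor();

    public AlarmDeliveryTracker() {
        sweeper.scheduleAtFixedRate(this::sweep, ACK_TIMEOUT_MS, ACK_TIMEOUT_MS, TimeUnit.MILLISECONDS);
    }

    // Record that an alarm was pushed to a client (e.g. in a long poll response).
    public void delivered(String clientId, long alarmId) {
        pending.computeIfAbsent(clientId, k -> new ConcurrentHashMap<>())
               .put(alarmId, System.currentTimeMillis());
    }

    // Called when the client's ack request arrives (e.g. a POST to an ack resource).
    public void acknowledge(String clientId, long alarmId) {
        Map<Long, Long> byClient = pending.get(clientId);
        if (byClient != null) byClient.remove(alarmId);
    }

    private void sweep() {
        long now = System.currentTimeMillis();
        pending.forEach((clientId, alarms) ->
            alarms.forEach((alarmId, deliveredAt) -> {
                if (now - deliveredAt > ACK_TIMEOUT_MS) {
                    alarms.remove(alarmId);
                    // Assume the alarm was lost: redeliver it or alert the admin.
                    System.out.println("Alarm " + alarmId + " to " + clientId + " was never acked");
                }
            }));
    }
}

A real implementation would presumably also persist the pending alarms, so that they survive a server restart.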

Related

send data from server to an OSX app

I am building an OSX app that needs to get data from a server. The easy way is to make a GET request at some fixed time interval and process the results. That's not what I want. I want it the other way around: the server sends data to my app when something happens on the server side. That way I don't need to make constant requests from the client side. I don't need the data to be displayed visually, just processed.
Can this be implemented in OSX with Swift?
You have two ways to achieve this:
Websocket:
WebSocket is a full-duplex communication channel over a TCP connection. It's established via an HTTP upgrade handshake.
Long Polling:
Same as you described, but without responding immediately: your client makes an HTTP request and sets a very long timeout. The server responds once something has happened.
I would recommend WebSocket, since it was built exactly for this use case. But if you have to implement something quickly, you should probably go with long polling for now, since the barrier to implementing it is much lower, and switch to WebSocket later.

Implementation of server-side responses to long polling via REST API

Say you are designing a REST API over HTTP for a server "room" where subscribing clients want to monitor public events happening to the room (e.g. a new participant joins the room, another one leaves the room, and so on...) by making long poll requests.
What is the best way to implement this from a server side point of view so that the client will not miss any events between consecutive polls? For example, should the server implement a queue of events which need to exist in the queue until all the subscribers have got them?
Are there any tutorials, examples, or some theory on the internet about designing such an API and all the things that should be taken into account from the server perspective?
Very short answer - why not just use EventStore?
Short answer - why not just use Event Store as a reference implementation, and adapt their solution to match your implementation constraints?
What is the best way to implement this from a server side point of view so that the client will not miss any events between consecutive polls? For example, should the server implement a queue of events which need to exist in the queue until all the subscribers have got them?
REST by itself offers a few guidelines. There should be no application state stored on the server; the message sent by the client should include any client side state (like current position in the event stream) that the server will need to fulfill the request. The resource identified in the request is an abstraction - so the client can send messages to, for example "the event that comes after event 7", which makes sense even if that next event doesn't exist yet. The uniform interface should be respected, to allow for scaling via caches and the like that are outside of the control of the server. The representation of the state of the resource should be hypermedia, with controls that allow the client to advance after it has consumed the currently available messages.
HTTP throws in a few more specifics. Since there is no tracking of client state on the server, reading from the queue is a safe operation. Therefore, one of the safe HTTP methods (GET, to be precise) should be used for the read. Since GET doesn't actually support content body in the request, the information that the server will need should all be packed into the header of the request.
In other words, the URI is used to specify the current position of the client in the event stream.
Atom Syndication provides a good hypermedia format for event processing - the event stream maps to a feed, events map to entries.
By themselves, those pieces give you a big head start on an event processor that conforms to the REST architectural constraints. You just need to bolt long polling onto it.
To get a rough idea of how you might implement long polling on your own, you can take a look at the ticketing demo written by Michael Barker (maintainer of the LMAX Disruptor).
The basic plot in Michael's demo is that a single writer thread is tracking (a) all of the clients currently waiting for an update and (b) the local cache of events. That thread reads a batch of events, identifies which requests need to be notified, responds to each of those requests in turn, and then advances to process the next batch of events.
I tend to think of the local cache of events as a ring buffer (like the disruptor itself, but private to the writer thread). The writer thread knows (from the information in the HTTP request) the position of each client in the event stream. Comparing that position to the current pointer in the ring buffer, each pending request can be classified as one of the following (a sketch follows the list):
Far past: The position that the client is seeking has already been evicted from the cache. Redirect the client to a "cold" persistent copy of that location in the stream, where it can follow the hypermedia controls to catch up to the present.
Recent past: The position that the client is seeking is currently available in the cache, so immediately generate a response to the client with the events that are available, and dispatch that response.
Near future: The position that the client is seeking is not available in the cache, but the writer anticipates being able to satisfy the request before the SLA expires. So we park the client until more events arrive.
Far future: The position that the client is seeking is not available in the cache, and we don't anticipate that we will be able to satisfy the request in the allotted time. So we just respond now, and let the client decide what to do.
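A rough sketch of that four-way classification, under my own assumptions about sequence numbering (the names are illustrative and not taken from Michael Barker's demo):

public class EventStreamClassifier {

    enum Classification { FAR_PAST, RECENT_PAST, NEAR_FUTURE, FAR_FUTURE }

    private final long lookahead; // how far ahead we expect to fill before the SLA expires

    private long oldestCached;    // oldest sequence number still in the ring buffer
    private long nextSequence;    // sequence number the next event will receive

    public EventStreamClassifier(long lookahead) {
        this.lookahead = lookahead;
    }

    // Called by the writer thread after it appends a batch of events.
    public void advance(long newOldestCached, long newNextSequence) {
        this.oldestCached = newOldestCached;
        this.nextSequence = newNextSequence;
    }

    // Classify the position a pending long-poll request is asking for.
    public Classification classify(long requestedPosition) {
        if (requestedPosition < oldestCached) {
            return Classification.FAR_PAST;    // redirect to the cold store
        } else if (requestedPosition < nextSequence) {
            return Classification.RECENT_PAST; // respond immediately from the cache
        } else if (requestedPosition < nextSequence + lookahead) {
            return Classification.NEAR_FUTURE; // park the request until events arrive
        } else {
            return Classification.FAR_FUTURE;  // respond now with "nothing yet"
        }
    }
}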
(If you get enough polling clients that you need to start scaling out the long polling server, you need to consider the case where those servers get out of sync, and a client gets directed from a fast server to one that has fallen behind. So you'll want to have instrumentation in place that lets you track how often this is happening, so that you can apply the appropriate remedy).
There are also some edge cases to consider -- if a very large batch comes in, then you may need to evict the events your clients are waiting on before you get a chance to send them.
Simple: have the client pass in the timestamp (or id, or index) of the last message it received.
Requesting GET /rooms/5/messages returns all the messages the server knows about, like
[
  {
    "message": "hello",
    "timestamp": "2016-07-18T18:44:34Z"
  },
  {
    "message": "world",
    "timestamp": "2016-07-18T18:47:16Z"
  }
]
The client then long polls the server with GET /rooms/5/messages?since=2016-07-18T18:47:16Z which returns either all the messages since that time (if there are any) or blocks until the room has a new message.
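A minimal server-side sketch of that blocking behaviour, assuming the Servlet 3.0 async API and a hypothetical MessageStore: the servlet answers immediately when messages newer than ?since= exist, and otherwise parks the request until one arrives.

import java.io.IOException;
import java.time.Instant;
import java.util.List;
import java.util.function.Consumer;
import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/rooms/5/messages", asyncSupported = true)
public class MessagesServlet extends HttpServlet {

    private MessageStore store; // assume this gets wired up in init()

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String since = req.getParameter("since");
        Instant cutoff = since != null ? Instant.parse(since) : Instant.EPOCH;

        List<String> messages = store.messagesSince(cutoff);
        if (!messages.isEmpty()) {
            writeJson(resp, messages);        // new messages exist: answer immediately
            return;
        }

        AsyncContext ctx = req.startAsync();  // nothing new: park the request
        ctx.setTimeout(30_000);               // a real impl would add an AsyncListener
                                              // to send an empty response on timeout
        store.onNextMessage(cutoff, fresh -> {
            try {
                writeJson((HttpServletResponse) ctx.getResponse(), fresh);
            } catch (IOException ignored) {
            } finally {
                ctx.complete();
            }
        });
    }

    private void writeJson(HttpServletResponse resp, List<String> messages) throws IOException {
        resp.setContentType("application/json");
        resp.getWriter().print(messages);     // a real impl would serialize properly
    }

    // Hypothetical store interface; back it with the room's message log.
    interface MessageStore {
        List<String> messagesSince(Instant cutoff);
        void onNextMessage(Instant cutoff, Consumer<List<String>> listener);
    }
}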
Send a reference number with all the events.
The client will call with the reference number of the latest event it received. You block the long poll request if no event is available, and respond once an event becomes available, again with a new reference number.
In case events are already available, the call returns all events generated after the event with the given reference number.
I strongly recommend using WebSockets. Check out socket.io. Long polling is a hack that isn't necessarily desirable and isn't really "supported".
Long polling is not a good idea, specifically when one wants to live-monitor changes that happen on the server side. There are mechanisms by which the server sends notifications to clients about changes. This can be achieved by using, as gcoreb already mentioned, Socket.io (Node.js stack) or SignalR (.NET stack).

WebSocket/REST: Client connections?

I understand the main principles behind both. I have however a thought which I can't answer.
Benchmarks such as this one show that WebSockets can serve more messages: http://blog.arungupta.me/rest-vs-websocket-comparison-benchmarks/
This makes sense, as the connections do not have to be closed and reopened, and the overhead of repeated HTTP headers is avoided.
My question is: what if the connections come from different clients all the time (and perhaps only some from the same client)? The benchmark suggests it's the same clients connecting, from what I understand, which would make sense for keeping a constant connection.
If a user only makes a request every minute or so, would it not be beneficial for the communication to run over REST instead of WebSockets, as the server frees up sockets and can handle a larger crowd, so to speak?
To fix the issue with REST you would scale vertically, and with WebSockets horizontally?
Does this make sense, or am I out of it?
This is my experience so far, I am happy to discuss my conclusions about using WebSockets in big applications approached with CQRS:
Real Time Apps
Are you creating a financial application, game, chat or whatever kind of application that needs low latency, frequent, bidirectional communication? Go with WebSockets:
Well supported.
Standard.
You can use either a publisher/subscriber model or a request/response model (by creating a correlationId with each request and subscribing once to it; see the sketch below).
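As an illustration of that correlationId idea, a minimal Java client sketch using the javax.websocket API; the "id|payload" framing and the class names are my own assumptions, not a standard:

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import javax.websocket.ClientEndpoint;
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;

@ClientEndpoint
public class CorrelatedWsClient {
    private final Map<String, CompletableFuture<String>> pending = new ConcurrentHashMap<>();
    private volatile Session session;

    @OnOpen
    public void onOpen(Session session) {
        this.session = session;
    }

    @OnMessage
    public void onMessage(String message) {
        // Assumed framing: the server echoes the correlationId back as "id|payload".
        int sep = message.indexOf('|');
        CompletableFuture<String> future = pending.remove(message.substring(0, sep));
        if (future != null) {
            future.complete(message.substring(sep + 1)); // a correlated response
        }
        // Messages with no pending future would be pub/sub events.
    }

    // Send a request; the returned future completes when the correlated reply arrives.
    public CompletableFuture<String> request(String payload) {
        String id = UUID.randomUUID().toString();
        CompletableFuture<String> future = new CompletableFuture<>();
        pending.put(id, future);
        session.getAsyncRemote().sendText(id + "|" + payload);
        return future;
    }
}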
Small size apps
Do you need push communication and/or pub/sub in your client and your application is not too big? Go with WebSockets. Probably there is no point in complicating things further.
Regular Apps with some degree of high load expected
If you do not need to send commands very fast, and you expect to do far more reads than writes, you should expose a REST API to perform CRUD (create, read, update, delete), especially C_UD.
Not all devices prefer WebSockets. For example, mobile devices may prefer to use REST, since maintaining a WebSocket connection may prevent the device from saving battery.
You expect an outcome, even if it is a timeout. Even though you can do request/response in WebSockets using a correlationId, the response is still not guaranteed. When you send a command to the system, you need to know whether the system has accepted it. Yes, you can implement your own logic and achieve the same effect, but what I mean is that an HTTP request has the semantics you need for sending a command.
Does your application send commands very often? You should strive for chunky communication rather than chatty, so you should probably batch those change requests.
You should then expose a WebSocket endpoint to subscribe to specific topics and to perform low-latency query-response, like filling autocomplete boxes, checking for unique items (e.g. usernames) or any other kind of search against your read model. Also use it to get notified when a change request (write) has actually been processed and completed.
What I am doing in a pet project is to place the WebSocket endpoint in the read model; on connection, the server gives a connectionID to the client via WebSocket. When the client performs an operation via REST, it includes an optional parameter that indicates "when done, notify me through this connectionID". The REST server responds, saying whether the command was placed correctly on a service bus. A queue consumer processes the command, and when done (successfully or not), if the command had a notification request, another message is placed in a "web notification queue" indicating the outcome of the command and the connectionID to be notified. The read model is subscribed to this queue, gets the messages and forwards them to the appropriate WebSocket connection.
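A toy sketch of the last hop of that flow, with entirely hypothetical names: the read model keeps a map from connectionID to open WebSocket session, and the consumer of the "web notification queue" forwards each outcome to the matching session.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.websocket.Session;

public class WebNotificationForwarder {

    // connectionID -> open WebSocket session, registered when the socket opens.
    private static final Map<String, Session> sessions = new ConcurrentHashMap<>();

    public static void register(String connectionId, Session session) {
        sessions.put(connectionId, session);
    }

    // Consumer callback for the "web notification queue".
    public static void onCommandOutcome(String connectionId, String outcomeJson) {
        Session session = sessions.get(connectionId);
        if (session != null && session.isOpen()) {
            session.getAsyncRemote().sendText(outcomeJson);
        }
        // If the session is gone, the client can still poll the async REST
        // status resource described in the next paragraph.
    }
}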
However, if your REST API is going to be consumed by non-browser clients, you may want to offer a way to check for the completion of a command using the async REST approach: https://www.adayinthelifeof.nl/2011/06/02/asynchronous-operations-in-rest/
I know it is quite appealing to have a low-latency up-channel available for sending commands, but if you do, your overall architecture gets messed up. For example, if you are using a CQRS architecture, where is your WebSocket endpoint? In the read model or in the write model?
If you place it in the read model, then you have easy access to your read DB to answer fast search queries, but you have to couple in the command-processing logic somehow, making the read model responsible for sending commands to the write model and notifying if it is unable to do so.
If you place it in the write model, then placing commands is easy, but you need access to your read model and read DB if you want to answer search queries through the WebSocket.
By considering WebSockets part of your read model and leaving command processing to the REST interface, you keep your loose coupling between your read model and your write model.

Why do Lift 3 round trips use 2 kinds of HTTP request?

I am using Lift 3 round trips and I am trying to understand what happens behind the scenes.
Why are there 2 kinds of request:
GET on comet_request
POST on ajax_request
Lift uses HTTP long polling for asynchronous responses to the browser. I won't go into great detail on why the Lift developers have chosen long polling over other implementations, like WebSockets, but there are well-thought-out reasons, and if you're interested just do a quick search through the Lift mailing list, where it's been discussed many times.
The gist of how it works is that the browser makes a request to the server, and the server holds the request open until there is information to send. When information becomes available, it gets pushed down the pipe, the browser processes it, and the browser initiates a new long poll request. Lift uses the servlet container's asynchronous support to hold the connection open with very little resource consumption, and because Javascript is asynchronous by nature, waiting on new information is not resource intensive for the browser either. Since there is a limit on the number of requests a browser can make to the same domain at once, Lift only opens one of these long poll connections at a time and multiplexes responses from what could be many different "responders" through it.
Initially Lift's asynchronous support was added so that data generated by server side events could be pushed to the client as they occurred. With the growth in popularity of client side frameworks, the ability to push asynchronous data initiated by client events became useful, hence the addition of round trips. The idea is that the client makes a request to the server, and rather than respond immediately, the server does some stuff in another thread then sends a response (potentially much) later. To users of the client side API, this is modeled as a promise, but behind the scenes what happens is that Lift receives the request and responds immediately (remember, we can't have too many requests open to the same domain) but will stream the actual data that satisfies the promise through the long polling connection when it becomes available.
So, that's what you're seeing. Your initial request is the ajax POST, which triggers the beginning of a round trip. If you were to look at the data returned by that request, you'd see that it's not the data that satisfies the promise. The actual response data is delivered via Lift's long polling mechanism, and that is what you see with the GET request.

Web notification service pattern like Facebook or Twitter

I've never done a notification service on a web client and I would just like to know what the most common pattern is.
For example, does the server have to push to the client, or does the client need to poll the server for info every minute or so?
Or is there another pattern?
There are multiple ways to implement push notifications:
HTTP Long Polling: The client initiates a request. The server checks whether it has any new notifications; regardless of whether it does, an appropriate response is sent and the connection is closed. After time X the client initiates another request. (+ very easy to implement; - notifications are not real time: they depend on X, since data retrieval is client-initiated, and as X decreases the overhead on the server increases)
HTTP Streaming: This is very similar to HTTP long polling, except that the connection is not closed; the server sends a chunked response, so as soon as the server receives a new notification that it wants to push, it can simply write to the socket (see the sketch after this list). (+ lower latency than long polling and almost real-time behaviour, with the overhead of closing and reopening connections reduced; - memory usage on the client side keeps piling up, ugly hacks etc.)
WebSocket: A TCP-based protocol that provides true two-way communication; the server can push data to the client at any time. (+ true real time; - some older browsers don't support it). Read more about it at WebSocket.org | About WebSocket.
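For illustration, a bare-bones HTTP streaming sketch in Java (my own, with a hypothetical NotificationSource, not taken from the answer): the servlet never completes the response, it just flushes a chunk whenever a new notification arrives.

import java.io.IOException;
import java.io.PrintWriter;
import java.util.Set;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/notifications/stream")
public class StreamingServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.setContentType("text/plain");
        // No Content-Length is set, so the container uses Transfer-Encoding: chunked.
        PrintWriter out = resp.getWriter();
        BlockingQueue<String> queue = NotificationSource.subscribe();
        try {
            while (!out.checkError()) {      // checkError() also flushes
                out.println(queue.take());   // one chunk per notification
                out.flush();
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            NotificationSource.unsubscribe(queue);
        }
        // Note: this ties up a request thread per client; a production version
        // would use the container's async I/O support instead.
    }

    // Hypothetical event source; a real app would bridge this to its event bus.
    static class NotificationSource {
        private static final Set<BlockingQueue<String>> subscribers = ConcurrentHashMap.newKeySet();

        static BlockingQueue<String> subscribe() {
            BlockingQueue<String> queue = new LinkedBlockingQueue<>();
            subscribers.add(queue);
            return queue;
        }

        static void unsubscribe(BlockingQueue<String> queue) {
            subscribers.remove(queue);
        }

        static void publish(String notification) {
            subscribers.forEach(q -> q.offer(notification));
        }
    }
}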
Now based on the technology stack there are various solutions available:
(A) Node.js: Socket.IO, "the cross-browser WebSocket for realtime apps" (it does the heavy lifting for you and gracefully falls back in case WebSocket is not supported)
(B) Django: As mentioned previously, you can use signals for notifications. You can also try django-websocket 0.3.0 for WebSocket support.
(C) Jetty / Netty and Grizzly (Java-based): all have WebSocket support
This depends on what web framework you use. With a modern framework like Meteor, it's very easy for the server to push notifications to clients, and many kinds of display updates can happen automatically, without having to construct a notification mechanism to take care of them.
Have a look at the two Meteor screencasts listed at http://meteor.com.