How to do HTTP Server Push -- aka do I NEED STOMP, AMQP, etc? - iphone

I am writing a collection of web services, one of which needs to implement server push.
The client will be native Objective-C. I want this to be as simple, fast, and lightweight as possible. The data transmitted will be JSON. Is it possible to do this without using a message broker?

There's an HTTP technique called COMET in which the client spins up a thread that makes a potentially very long-lived request to the HTTP server. Whenever the server wants to send something to the client, it sends a response to this request. The client processes this response and immediately makes another long-lived request to the server. In this way the server can send information while other things happen in the client's main execution thread(s). The information sent by the server can be in any format you like. (In fact, for clients in a web browser doing COMET with a Javascript library, JSON is perfect.)
@DevDevDev: It's true that COMET is most often associated with a Javascript-enabled browser, but I don't think it has to be. You might check out iStreamLight, which is an Objective-C client for the iPhone that connects to COMET servers. It's also discussed in this interview with the author.
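The request loop described above can be sketched with nothing but the standard library. The real client here would be Objective-C (e.g. iStreamLight), so this Python sketch only shows the mechanics; all names and the 0.2s delay are illustrative, not part of any real API:

```python
# Minimal long-polling (COMET-style) sketch using only the standard library.
# The server holds each request until data is "ready"; the client immediately
# re-issues the request after processing each response.
import json
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class LongPollHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(0.2)  # simulate waiting until the server has something to push
        body = json.dumps({"event": "update", "ts": time.time()}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

def poll(url, n):
    """Client loop: block on a long request, process it, re-request at once."""
    events = []
    for _ in range(n):
        with urlopen(url) as resp:
            events.append(json.load(resp))
    return events

server = HTTPServer(("127.0.0.1", 0), LongPollHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
events = poll(f"http://127.0.0.1:{server.server_address[1]}/events", 3)
server.shutdown()
```

In a real deployment the server would block until it actually has an event (or a timeout fires), rather than sleeping on a fixed delay.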

Related

Long-running operations in web-application

Web application operations are generally meant to be quick, to avoid long wait times for users. However, some operations a web application performs may be computationally intensive and take a fair bit of time. What is the best practice in REST for dealing with such operations that may take several minutes yet require an immediate response to users? Is it okay for the web application to take several minutes to return the response of the HTTP request, or is it better to return a 202 response, process in the background somewhere else, and then provide some form of notification to the user?
Is it okay for the web application to take several minutes to return the response of the HTTP request
No. Part of the problem with this approach is that if the server doesn't acknowledge the request in a timely fashion, the client won't know that it reached its intended destination.
is it better to return a 202 response, process in the background somewhere else, and then provide some form of notification to the user?
Yes. That's exactly what 202 Accepted is designed for.
The 202 response is intentionally noncommittal. Its purpose is to allow a server to accept a request for some other process (perhaps a batch-oriented process that is only run once per day) without requiring that the user agent's connection to the server persist until the process is completed. The representation sent with this response ought to describe the request's current status and point to (or embed) a status monitor that can provide the user with an estimate of when the request will be fulfilled.
It can help, I think, to remember that we're talking about your integration domain; the client isn't talking to your app. It's instead talking to your API, which pretends to be a web site that the client can integrate with. So your client sends the request to the API, and the API responds with an accepted message accompanied by a bunch of links that will help the client continue with the protocol and eventually reach its goal.
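The accept-then-monitor protocol above can be sketched with the standard library alone; the endpoint paths, JSON fields, and in-memory job table here are all made up for illustration:

```python
# Sketch of the 202 Accepted pattern: accept the work immediately, run it in
# the background, and point the client at a status monitor it can poll.
import json
import threading
import time
import uuid
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.request import Request, urlopen

jobs = {}  # job id -> status; stands in for real background state

def run_job(job_id):
    time.sleep(0.1)  # simulate a long-running task
    jobs[job_id] = "done"

class JobHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Accept the work, kick it off in the background, answer at once.
        self.rfile.read(int(self.headers.get("Content-Length", 0)))
        job_id = uuid.uuid4().hex
        jobs[job_id] = "processing"
        threading.Thread(target=run_job, args=(job_id,), daemon=True).start()
        self._reply(202, {"status": "processing", "monitor": f"/jobs/{job_id}"})

    def do_GET(self):
        # The status monitor the 202 response pointed at.
        job_id = self.path.rsplit("/", 1)[-1]
        self._reply(200, {"status": jobs.get(job_id, "unknown")})

    def _reply(self, code, payload):
        body = json.dumps(payload).encode()
        self.send_response(code)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), JobHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_address[1]}"

resp = urlopen(Request(f"{base}/operations", data=b"{}", method="POST"))
accepted = json.load(resp)
status_code = resp.status  # 202: request accepted, not yet completed
status = "processing"
for _ in range(100):       # client polls the status monitor link
    status = json.load(urlopen(base + accepted["monitor"]))["status"]
    if status == "done":
        break
    time.sleep(0.05)
server.shutdown()
```

Instead of polling, the "some form of notification" from the question could also be a push mechanism; the 202 response and the monitor link are the part the RFC describes.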

Retrieve data from webservice, with side effects - what method to use

I am in the process of writing a webservice that sends data to the client, but it has side effects.
This webservice will be called periodically, and any data that is being sent to the client will be marked as such and will not be sent again.
The client is 100% stateless, I can't expect it to send something like a timestamp of the last request. The administration of state lies with the web service.
I am a firm believer that GET requests must be safe and idempotent, so I cannot use that method. POST and PUT, on the other hand, are used to create or update resources, which is not what happens here.
What http method would you choose and why?
I finally went with POST.
Mostly the argument
If the client cannot be expected to implement basic HTTP measures such as a conditional GET with an If-Modified-Since header or something like that ... then the other end is probably not the one to be puristic about HTTP either.
is what persuaded me.
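The reason GET is wrong here can be shown concretely: the "give me new data" call mutates server state by marking items as delivered, so two identical calls return different results. A sketch of the server-side bookkeeping (class and method names are made up):

```python
# Side-effectful retrieval: draining the outbox marks items as delivered,
# which is exactly the kind of state change GET is not supposed to make.
class Outbox:
    def __init__(self):
        self._items = []  # each item: {"payload": ..., "delivered": bool}

    def add(self, payload):
        self._items.append({"payload": payload, "delivered": False})

    def drain(self):
        """Return all undelivered items and mark them delivered.
        This side effect is why the endpoint should be POST, not GET."""
        fresh = [i["payload"] for i in self._items if not i["delivered"]]
        for i in self._items:
            i["delivered"] = True
        return fresh

outbox = Outbox()
outbox.add({"id": 1})
outbox.add({"id": 2})
first = outbox.drain()   # both items, now marked delivered
second = outbox.drain()  # empty: the same call gives a different result
```

A cache or proxy is allowed to replay or prefetch a GET; with this endpoint that would silently consume data the client never saw.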

What is the difference between Async Response and Server-Sent Events in Jersey?

What is the difference between Async Response and Server-Sent Events in Jersey and when to use them?
They serve different purposes: one lets the client wait for a slow resource (long-polling), the other lets the server send a stream of data over the same TCP connection.
Here's more detail:
AsyncResponse was introduced in JAX-RS 2 to support long-polling requests.
The client opens a connection
The client sends the request payload
The server receives the payload, suspends the connection, and looks up the resource
Then:
If the timeout is reached, the server can end the connection
When the resource is ready, the server resumes the connection and sends the resource payload
The connection is closed
As this is part of the JAX-RS specification, you can use it with the default Jersey dependencies. Note that if a connection stays open too long without any data being transmitted, network equipment such as firewalls can close the TCP connection.
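The suspend/resume/timeout flow above can be sketched without Jersey; here a threading.Event stands in for the container's suspended connection, and all names are illustrative rather than the JAX-RS API (JAX-RS itself answers a timed-out suspended request with 503 Service Unavailable by default):

```python
# Long-polling on the server side: park the request, let a worker resume it
# with a payload, or give up when the timeout is reached.
import threading

class SuspendedRequest:
    """Stands in for a suspended AsyncResponse: the handler thread parks the
    request, and a worker later resumes it with a payload, or it times out."""
    def __init__(self):
        self._ready = threading.Event()
        self._payload = None

    def resume(self, payload):
        self._payload = payload
        self._ready.set()

    def await_response(self, timeout):
        if self._ready.wait(timeout):
            return 200, self._payload  # resource became ready in time
        return 503, None               # timeout reached, end the connection

# A worker resumes the first request in time; nobody resumes the second.
req1, req2 = SuspendedRequest(), SuspendedRequest()
threading.Timer(0.05, req1.resume, args=({"result": "data"},)).start()
ok = req1.await_response(timeout=1.0)
timed_out = req2.await_response(timeout=0.1)
```

In Jersey the container does the parking for you: the resource method returns void, receives an injected AsyncResponse, and a worker thread calls resume() on it later.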
Server-Sent Events is a specification that allows the server to send a stream of messages over the same TCP connection.
The client uses the JavaScript EventSource API to request a resource
The server can then, at some point in time, send a payload, a message
Then another
And so on
The connection can be closed programmatically at any time by either the client or the server.
SSE is not part of JAX-RS, so you need the Jersey SSE module on your classpath (additionally, in earlier versions of Jersey 2 you had to enable the SseFeature programmatically).
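On the wire, SSE is just a long-lived text/event-stream response: each event is a block of `field: value` lines, blocks are separated by blank lines, and lines starting with `:` are comments. A minimal parser for the `data:` fields (function name made up; this is a sketch of the format, not any library's API):

```python
def parse_sse(stream_text):
    """Parse a text/event-stream body into a list of event data strings.
    Events are separated by blank lines; multiple data: lines in one event
    are joined with newlines, following the EventSource processing model."""
    events = []
    for block in stream_text.split("\n\n"):
        data_lines = [line[5:].lstrip(" ")
                      for line in block.split("\n")
                      if line.startswith("data:")]
        if data_lines:
            events.append("\n".join(data_lines))
    return events

stream = "data: first message\n\ndata: {\"n\": 2}\n\n: a comment, ignored\n\n"
events = parse_sse(stream)
```

The payload is plain text, so sending JSON in each `data:` field works naturally, which is what Jersey's SSE support does when you register a JSON media type.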
Other things to consider:
SSE does not allow passing custom headers, so no Authorization header. It's possible to use the URL query string instead, but if you're not on HTTPS this is a security issue.
SSE does not allow sending a request body (EventSource only issues GET requests), so parameters also have to go in the URL query string
The connection can be closed by the network (failing equipment, firewalls, a phone out of coverage, etc.)
In my opinion WebSockets are more flexible than SSE, since they also allow the client to send multiple messages. But Jersey does not implement the Java EE specification that supports WebSockets (JSR 356).
You should really read the Jersey documentation on its SSE implementation; there's additional information there, such as how SSE compares to polling and WebSockets.
AsyncResponse is like AJAX polling with a long wait time. The client initiates a single AJAX request to check for updates; it does not return until data is received or a timeout occurs, which triggers another request. This creates an unnecessary checking loop on the server side, and the load is proportional to the number of clients connected: more clients, more loops initiated, more resources needed.
Server-Sent Events is somewhat similar to long-polling on the server side; both use a loop to check for updates and trigger a response. The difference is that long-polling continuously sends new requests (after each timeout or received payload), whereas SSE only needs to initiate the connection once. This makes SSE more suitable for mobile applications when you consider battery usage.
WebSocket uses a loop as well, but not only to check for updates: it also listens for new connections and upgrades them to WS/WSS after the handshake. Unlike long-polling and SSE, where the load grows with the number of clients, a WebSocket server constantly runs its loop like a daemon, and the load adds up as more clients connect to the socket.
For example, if you are designing a web service for administrative purposes, servers based on long-polling and SSE are allowed to rest after office hours when no one is around, whereas a WebSocket server will keep running, waiting for connections. And did I mention that without proper authentication, anyone can create a client and connect to your WebSocket? Most of the time, authentication and connection refusal are not done during the handshake, but only after the connection has been made.
And should I continue on how to implement WebSockets across multiple tabs?

Programmatically Call REST URL

Summary
Is there a way to programmatically call REST URLs setup in JBoss via RESTEasy so that the programmatic method call actually drills down through the REST processor to find/execute the correct endpoint?
Background
We have an application that has ~20 different REST endpoints, and we have set the application up to receive data from other federated peers. To cut down on cross-network HTTP requests, the peer site sends a batch of requests to the server, and the receiving server needs to act upon each URL it receives. Example data flow:
Server B --> [Bulk of requests sent via HTTP/Post] --> Server A breaks list down to individual URLs --> [Begin Processing]
The individual URLs are REST URLs that the receiving server is familiar with.
Possible Solutions
Have the receiving server read through the URLs it receives, and call the management beans directly
The downside here is that we have to write additional processing code to decode the URL strings that are received.
The upside to this approach is that there is no ambiguity as to what happens
Have the receiving server execute the URL on itself
The receiving server could reform the URL to be http://127.0.0.1:8080/rest/..., and make an HTTP request on itself.
The downside here is that the receiving server could have to make a lot of HTTP requests upon itself (it's already somewhat busy processing "real" requests from the outside world)
Preferred: Have the receiving server access the main RESTEasy bean somehow and feed it the request.
Sort of combo of 1 & 2, without the manual processing of 1 or the HTTP requests involved with 2.
Technology Stack
JBoss 6.0.0 AS (2010 release) / Java 6
RESTEasy

http interface for long operation

I have a running system that processes short- and long-running operations with a Request-Response interface based on Agatha-RRSL.
Now we want to change it a little in order to accept requests via a website in JSON format, so I'm trying out several REST server implementations that support JSON.
The REST server will be one module or "shelf" handled by Topshelf, another module will do the processing, and the last will run the NoSQL database.
For communication between the REST module and the processing module I'm thinking about a service bus, but we have two types of requests: short requests that complete in 1-2 seconds, and long requests that take about a minute.
Is a service bus the right choice for this? For long-running operations I'm thinking about returning a "response" containing a token that can be used to request the operation's status and results with a new request. The problem is that a big part of the requests must behave synchronously in order to complete the HTTP response.
I think I'll also have problems with response size (over the MSMQ message transport) when I have to return huge lists of objects.
Any hint?
NServiceBus is not really suitable for request-response messaging patterns. It's more suited to asynchronous publish-subscribe.
Edit: In order to implement a kind of request-response, you would need messaging in both directions, consisting of three logical steps:
So your client sends a message requesting the data.
The server would receive the message, process it, construct a return message with the data, and send it to the client.
The client can then process the data.
Because each of these steps takes place in isolation and asynchronously, there can be no meaningful SLA or timeout enforced between when a client sends a request and when it receives a response. But this works nicely for large processing jobs which may take several minutes to complete.
Additionally, a common value which can be used to tie the request to the response will need to be present in both messages. Otherwise a client that sends more than one request could receive multiple responses and not know which response was for which request.
So you can do this with NServiceBus but it takes a little more thought.
Also, NServiceBus uses MSMQ as the underlying transport, not HTTP.
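The three steps and the correlation requirement above can be sketched with in-memory queues standing in for the bus transport; every name here is illustrative, not the NServiceBus API:

```python
# Request-response over an asynchronous bus: both directions are one-way
# messages, tied together by a shared correlation id.
import queue
import uuid

bus_to_server = queue.Queue()  # stands in for the request channel
bus_to_client = queue.Queue()  # stands in for the reply channel

def client_send(payload):
    """Step 1: the client sends a request tagged with a correlation id."""
    correlation_id = uuid.uuid4().hex
    bus_to_server.put({"correlation_id": correlation_id, "payload": payload})
    return correlation_id

def server_process():
    """Step 2: the server receives, processes, and replies with the same id."""
    msg = bus_to_server.get()
    reply = {"correlation_id": msg["correlation_id"],
             "payload": msg["payload"].upper()}  # stand-in "processing"
    bus_to_client.put(reply)

def client_receive(pending):
    """Step 3: the client matches each reply to its pending request by id."""
    msg = bus_to_client.get()
    request = pending.pop(msg["correlation_id"])
    return request, msg["payload"]

pending = {}
cid = client_send("report-q3")
pending[cid] = "report-q3"
server_process()
matched_request, result = client_receive(pending)
```

Without the correlation id, a client with two requests in flight could not tell which reply answers which request, which is the pitfall the answer warns about.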