How to handle timeouts in a REST client when calling methods with side effects

Let's say we have a REST client with some UI that lists items it GETs from the server. The server also exposes some REST methods to manipulate the items (POST / PUT).
Now the user triggers one of those calls that are supposed to change the data on the server side. The UI will reflect the server-side state change if the call was successful.
But what are good strategies to handle the situation when the server is not available?
What is a reasonable timeout length (especially in a 3G / Cloud setup)?
How do you handle the timeout in the client, considering the fact that the client can't tell whether the operation succeeded or not?
Are there any common patterns to solve that, other than a complete client termination (and subsequent restart)?

This will be application specific. You need to decide what makes the most sense in your usage case.
Perhaps start with a timeout similar to the default PHP session timeout of 24 minutes. Adjust as necessary based on testing.
Do you have server and client mixed up here? If so, the server cannot tell that the client has timed out other than by reaching the end of a session. The client can always query the server for a progress update.
This one is a little too general to provide an answer for.
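Below is a minimal, hedged sketch of what that last suggestion can look like in client code: give the side-effecting call a finite timeout and, if it fires, fall back to querying the server to learn whether the operation actually went through. It uses Rust's reqwest crate (blocking API, "blocking" feature); the URLs and the existence of a lookup endpoint are assumptions, not something from the question.

```rust
// Sketch only: POST with a timeout; on timeout, ask the server what happened
// instead of blindly retrying a non-idempotent call.
use std::time::Duration;

fn create_item(client: &reqwest::blocking::Client, payload: &str) -> Result<bool, reqwest::Error> {
    let attempt = client
        .post("https://example.com/items")       // hypothetical side-effecting endpoint
        .body(payload.to_owned())
        .timeout(Duration::from_secs(10))         // keep the UI from hanging forever
        .send();

    match attempt {
        Ok(resp) => Ok(resp.status().is_success()),   // outcome is known either way
        Err(e) if e.is_timeout() => {
            // Outcome unknown: query the server for a progress/status update.
            let check = client
                .get("https://example.com/items/last-created") // hypothetical lookup
                .timeout(Duration::from_secs(10))
                .send()?;
            Ok(check.status().is_success())
        }
        Err(e) => Err(e),
    }
}
```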

Related

Reusing connections with reqwest

I need to issue a long sequence of REST calls to the same server (let's call it myapi.com). At the moment, I am using the Rust library reqwest as follows:
I create a reqwest::Client with all default settings.
For each REST call:
I use client.post("https://myapi.com/path/to/the/api") to create a reqwest::RequestBuilder.
I configure the RequestBuilder to obtain a reqwest::Request.
I send() the Request, read the reqwest::Response.
I drop everything except the Client, start again.
I read in the docs that reqwest is supposed to pool connections within the same Client. Given that I always reuse the same Client, I would expect the first API call to take somewhat longer (owing to the initial TCP and TLS handshakes), but I always observe a consistent, fairly high latency across all requests. So I am wondering whether connections are reused at all, or re-established every time. If they are not, how do I get reqwest to recycle the same connection? I feel that latency would be drastically reduced if I could save myself some round-trips.
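For what it's worth, here is a rough sketch (not a definitive answer) of how I'd check this: build one Client with explicit pool settings and time each request, so the handshake cost should show up only in the first call if pooling is working. It assumes reqwest's blocking API; note that the server can still defeat pooling by replying with Connection: close.

```rust
// Sketch: a single reused blocking Client with explicit connection-pool settings
// and a crude per-request timer to spot whether later calls skip the handshakes.
use std::time::{Duration, Instant};

fn main() -> Result<(), reqwest::Error> {
    let client = reqwest::blocking::Client::builder()
        .pool_idle_timeout(Duration::from_secs(90)) // how long idle connections are kept
        .pool_max_idle_per_host(4)                  // keep a few keep-alive sockets per host
        .tcp_keepalive(Duration::from_secs(60))     // guard against silent middlebox drops
        .build()?;

    for i in 0..5 {
        let start = Instant::now();
        let resp = client
            .post("https://myapi.com/path/to/the/api")
            .body("{}")                             // placeholder payload
            .send()?;
        // If pooling works, request 0 should be noticeably slower than requests 1..4.
        println!("request {}: {} in {:?}", i, resp.status(), start.elapsed());
    }
    Ok(())
}
```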

Websocket vs REST when sending data to server

Background
We are writing a Messenger-like app. We have set up WebSockets for Inbox and Chat.
Question
My question is simple. What are the advantages and disadvantages when sending data from Client to Server using REST instead of Websockets? (I am not interested in updates now.)
We know that REST has higher overhead in terms of message sizes and that WS is duplex (thus open all the time). What about the other things we haven't kept in mind?
Here's a summary of the tradeoffs I'm aware of.
Reasons to use webSocket:
You need/want server-push of data.
You are sending lots of small pieces of data from client to server and doing it very regularly. Using webSocket has significantly less overhead per transmission.
Reasons to use REST:
You want to use server-side frameworks or modules that are built for REST, not for webSocket (such as auth, rate limiting, security, streaming, etc...).
You aren't sending data very often from client to server and thus the server-side burden of keeping a webSocket connection open all the time may lessen your server scalability.
You want your client to run in places where a long-connected webSocket during inactive periods of time may not be practical (perhaps mobile).
You want your client to run in old browsers that don't support webSocket.
You want the browser to enforce same-origin restrictions (those are enforced for REST Ajax calls, but not for webSocket connections).
You don't want to have to write code that detects when the webSocket connection has died and then auto-reconnects, handles back-offs, handles mobile battery usage issues, etc...
You need to run in situations where there are proxies or other network infrastructure that may not support long running webSocket connections.
If you want request/response built in. REST is request/response. WebSocket is not - it's message based. Responses from a webSocket are done by sending a message back. That message back is not, by itself, a response to any specific request, it's just data being sent back. If you want request/response with webSocket, then you have to build some infrastructure yourself where you tag an id onto a request and the response for that particular request contains that specific id (see the sketch after this list). Otherwise, if there are ever multiple requests in flight at the same time, you don't know which response belongs to which request because all the data is being sent over the same connection and you would have no way of matching responses with requests.
If you want other clients to be able to carry out this operation via an Ajax call.
So, if you already have a webSocket implementation, don't have any problems with it that would be lessened by REST, and aren't interested in any of the reasons that REST might be better, then stick with your webSocket implementation.
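As a hedged illustration of the request/response point above, here is a small Rust sketch of the correlation-id pattern: each outgoing message carries an id, the matching reply echoes it, and a map of pending requests is used to pair them up. The message shapes and field names are invented for the example (it uses serde/serde_json with the derive feature); wire the frames to whatever webSocket library you actually use.

```rust
// Correlation-id matching over a message-based channel such as a webSocket.
use std::collections::HashMap;
use serde::{Deserialize, Serialize};

#[derive(Serialize)]
struct WsRequest { id: u64, action: String, payload: String }

#[derive(Deserialize)]
struct WsResponse { id: u64, payload: String }

#[derive(Default)]
struct Pending {
    next_id: u64,
    waiting: HashMap<u64, String>, // id -> label of the request we're waiting on
}

impl Pending {
    // Build the outgoing frame and remember its id.
    fn send(&mut self, action: &str, payload: &str) -> String {
        self.next_id += 1;
        self.waiting.insert(self.next_id, action.to_owned());
        let req = WsRequest { id: self.next_id, action: action.to_owned(), payload: payload.to_owned() };
        serde_json::to_string(&req).expect("serialization") // hand this to the webSocket send()
    }

    // Called for every incoming frame: pair it with a pending request, or treat it as a push.
    fn on_message(&mut self, frame: &str) {
        let resp: WsResponse = serde_json::from_str(frame).expect("well-formed frame");
        match self.waiting.remove(&resp.id) {
            Some(action) => println!("response to '{}': {}", action, resp.payload),
            None => println!("unsolicited server push: {}", resp.payload),
        }
    }
}
```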
Related references:
websocket vs rest API for real time data?
Ajax vs Socket.io
Adding comments per your request:
It sounds like you're expecting someone to tell you the "right" way to do it. There are reasons to pick one way over the other. If none of those reasons compels you one way or the other, then it's just an architectural choice: you must take in the whole context of what you are doing and decide which choice makes more sense to you. If you already have a reliably established webSocket connection and none of the advantages of REST apply to your situation, then you can optimize for "efficiency" and send your data to the server over the webSocket connection.
On the other hand, if you wanted there to be a simple API on your server that could be reached with an Ajax call from other clients, then you'd want your server to support this operation via REST so it would be simplest for those other clients to carry out this one operation. So it all depends upon which direction your requirements drive you and, if there is no particular driving reason to go one way or the other, you just make an architectural choice yourself.

WebSocket/REST: Client connections?

I understand the main principles behind both. I have however a thought which I can't answer.
Benchmarks show that WebSockets can serve more messages as this website shows: http://blog.arungupta.me/rest-vs-websocket-comparison-benchmarks/
This makes sense, as it states that the connections do not have to be closed and reopened, and the HTTP headers do not have to be resent, etc.
My question is: what if the connections come from different clients all the time (and perhaps some from the same client)? From what I understand, the benchmark has the same clients connecting, which would make sense for keeping a constant connection.
If a user only makes a request every minute or so, would it not be beneficial for the communication to run over REST instead of WebSockets, since the server frees up sockets and can handle a larger crowd, so to speak?
To fix the issue with REST you would scale vertically, while with WebSockets you would scale horizontally?
Does this make sense or am I out of it?
This is my experience so far; I am happy to discuss my conclusions about using WebSockets in big applications approached with CQRS:
Real Time Apps
Are you creating a financial application, game, chat or whatever kind of application that needs low latency, frequent, bidirectional communication? Go with WebSockets:
Well supported.
Standard.
You can use either publisher/subscriber model or request/response model (by creating a correlationId with each request and subscribing once to it).
Small size apps
Do you need push communication and/or pub/sub in your client and your application is not too big? Go with WebSockets. Probably there is no point in complicating things further.
Regular Apps with some degree of high load expected
If you do not need to send commands very fast, and you expect to do far more reads than writes, you should expose a REST API to perform CRUD (create, read, update, delete), especially C_UD.
Not all devices prefer WebSockets. For example, mobile devices may prefer to use REST, since maintaining a WebSocket connection may prevent the device from saving battery.
You expect an outcome, even if it is a timeout. Even though you can do request/response over WebSockets using a correlationId, the response is still not guaranteed. When you send a command to the system, you need to know whether the system has accepted it. Yes, you can implement your own logic and achieve the same effect, but what I mean is that an HTTP request has the semantics you need to send a command.
Does your application send commands very often? You should strive for chunky communication rather than chatty, so you should probably batch those change requests.
You should then expose a WebSocket endpoint to subscribe to specific topics and to perform low-latency query-response, like filling autocomplete boxes, checking for unique items (e.g. usernames) or any kind of search in your read model, and also to get notified when a change request (write) was actually processed and completed.
What I am doing in a pet project is to place the WebSocket endpoint in the read model; on connection the server gives a connectionID to the client via WebSocket. When the client performs an operation via REST, it includes an optional parameter that indicates "when done, notify me through this connectionID". The REST server returns a response saying whether the command was sent correctly to a service bus. A queue consumer processes the command, and when done (well or wrong), if the command had a notification request, another message is placed in a "web notification queue" indicating the outcome of the command and the connectionID to be notified. The read model is subscribed to this queue, gets the messages and forwards them to the appropriate WebSocket connection.
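To make that flow a bit more concrete, here is a hypothetical sketch of the two message shapes involved (all names invented, serialized with serde): the REST command that optionally carries the webSocket connectionID, and the outcome message placed on the "web notification queue" for the read model to forward.

```rust
// Hypothetical payloads for the notify-me-when-done flow described above.
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
struct ChangeCommand {
    kind: String,                          // e.g. "RenameItem"
    payload: serde_json::Value,            // command-specific data
    notify_connection_id: Option<String>,  // "when done, notify me through this connectionID"
}

#[derive(Serialize, Deserialize)]
struct CommandOutcome {
    connection_id: String,   // which WebSocket connection the read model should forward to
    command_kind: String,
    succeeded: bool,
    detail: Option<String>,  // error message or result summary
}
```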
However, if your REST API is going to be consumed by non-browser clients, you may want to offer a way to check on the completion of a command using the async REST approach: https://www.adayinthelifeof.nl/2011/06/02/asynchronous-operations-in-rest/
I know it is quite appealing to have a low-latency up channel available to send commands, but if you do, your overall architecture gets messed up. For example, if you are using a CQRS architecture, where is your WebSocket endpoint? In the read model or in the write model?
If you place it in the read model, then you have easy access to your read DB to answer fast search queries, but then you have to somehow couple in the logic to process commands, making the read model responsible for sending the commands to the write model and for notifying if it is unable to do so.
If you place it in the write model, then placing commands is easy, but you need access to your read model and read DB if you want to answer search queries through the WebSocket.
By considering WebSockets part of your read model and leaving command processing to the REST interface, you keep your loose coupling between your read model and your write model.

A RESTful approach to data synchronization

Assume the following scenario: a web application serves up resources through a RESTful API. A number of clients consume this API. The goal is to keep the data on the clients synchronized with the web application (in both directions).
The easiest way to do this is to ask the API whether any of the resources have changed since the client last synchronized with it. This means that the client asks the API for the appropriate resources, passing along a timestamp (so the API can tell whether the data needs to be updated). This seems to me like the approach with the least overhead in terms of needless consumption of bandwidth.
However, I have the feeling that this approach has a few downsides in terms of design and responsibilities. For example, the API shouldn't have to deal with checking whether the resources are out of date; it seems that the only responsibility of the API should be to serve up the resources when asked, without having to deal with the updating aspect. Under that second approach, the client would ask for a lot of data every time it wants to synchronize with the web application, and would check locally whether the data it got back is newer than the locally stored data. If this process takes place every few minutes, this might become a significant burden on the system.
Am I seeing this correctly or is there a middle road that I am overlooking?
This is a pretty common problem, and a RESTful approach can help you solve it. HTTP (the application protocol typically used to build RESTful services) supports a variety of techniques that can be used to keep API clients in sync with the data on the server side.
If the client receives a Last-Modified or E-Tag header in a HTTP response, it may use that information to make conditional GET calls in the future. This allows the server to quickly indicate with a 304 – Not Modified response that the client’s previously stored representation of the resource is still valid and accurate. This will allow the server (or even better, an intermediate proxy or cache server) to be as efficient as possible in how it responds to the client’s requests, potentially reducing costly round-trips to a back-end data store.
If a response contains a Last-Modified header and the client wishes to take advantage of the performance optimization available with it, they must include an If-Modified-Since directive in a subsequent GET call to the same URI, passing in the same timestamp value it received. This instructs the server to only GET the information from the authoritative back-end source if it knows it has changed since that time. Your server will have to be built to support this technique, of course.
A similar principle applies to E-Tag headers. An E-Tag is a simple hash code representing a specific state of the resource at a particular point in time. If the resource changes in any way, so does its E-Tag value. If the client sees an E-Tag in a response, it should pass it in subsequent GET requests to the same URI (via the If-None-Match request header), thereby allowing the server to quickly determine whether the client has an up-to-date representation of the resource.
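Sketched below with Rust's reqwest crate (blocking API), under the assumption that the server emits these validators: the client replays the stored E-Tag in If-None-Match and the stored Last-Modified value in If-Modified-Since, and treats a 304 as "my cached copy is still good". The URL and the cached values are placeholders.

```rust
// Conditional GET: send the validators we saved last time and handle 304.
use reqwest::StatusCode;
use reqwest::header::{ETAG, IF_MODIFIED_SINCE, IF_NONE_MATCH, LAST_MODIFIED};

fn refresh(
    client: &reqwest::blocking::Client,
    cached_etag: Option<&str>,
    cached_last_modified: Option<&str>,
) -> Result<(), reqwest::Error> {
    let mut req = client.get("https://example.com/resource/42"); // placeholder URI
    if let Some(etag) = cached_etag {
        req = req.header(IF_NONE_MATCH, etag);
    }
    if let Some(lm) = cached_last_modified {
        req = req.header(IF_MODIFIED_SINCE, lm);
    }

    let resp = req.send()?;
    if resp.status() == StatusCode::NOT_MODIFIED {
        return Ok(()); // 304: keep using the locally stored representation
    }

    // 200: remember the new validators and store the fresh body for next time.
    let new_etag = resp.headers().get(ETAG).cloned();
    let new_last_modified = resp.headers().get(LAST_MODIFIED).cloned();
    let body = resp.text()?;
    let _ = (body, new_etag, new_last_modified); // ... persist these ...
    Ok(())
}
```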
Finally, you should probably look at the long polling technique to reduce the number of repeated GET requests issued by your clients to the server. In essence, the trick is to issue very long GET requests to the server to watch for server data changes. The GET will not return a response until either the data has changed or the very long timeout fires. If the latter, the client just re-issues the same long-lived request to watch for changes again. See also topics like Comet and Web Sockets which are similar in approach.
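A minimal sketch of the client side of that long-poll loop, again with reqwest's blocking API: a deliberately long read timeout and a simple "reconnect and ask again" when it fires. The endpoint and the 55-second wait are placeholders.

```rust
// Long-poll loop: the request only returns early when the server has new data.
use std::time::Duration;

fn watch_for_changes(client: &reqwest::blocking::Client) -> Result<String, reqwest::Error> {
    loop {
        let result = client
            .get("https://example.com/changes?since=123") // hypothetical watch endpoint
            .timeout(Duration::from_secs(55))             // a bit longer than the server's hold time
            .send();

        match result {
            Ok(resp) if resp.status().is_success() => return resp.text(), // data changed
            Ok(_) => continue,                    // e.g. the server's own "nothing yet" timeout
            Err(e) if e.is_timeout() => continue, // our timeout: reconnect and keep watching
            Err(e) => return Err(e),
        }
    }
}
```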

What is a RESTful way of monitoring a REST resource for changes?

If there is a REST resource that I want to monitor for changes or modifications from other clients, what is the best (and most RESTful) way of doing so?
One idea I've had for doing so is by providing specific resources that will keep the connection open rather than returning immediately if the resource does not (yet) exist. For example, given the resource:
/game/17/playerToMove
a "GET" on this resource might tell me that it's my opponent's turn to move. Rather than continually polling this resource to find out when it's my turn to move, I might note the move number (say 5) and attempt to retrieve the next move:
/game/17/move/5
In a "normal" REST model, it seems a GET request for this URL would return a 404 (not found) error. However, if instead, the server kept the connection open until my opponent played his move, i.e.:
PUT /game/17/move/5
then the server could return the contents that my opponent PUT into that resource. This would both provide me with the data I need, as well as a sort of notification for when my opponent has moved without requiring polling.
Is this sort of scheme RESTful? Or does it violate some sort of REST principle?
Your proposed solution sounds like long polling, which could work really well.
You would request /game/17/move/5 and the server would not send any data until move 5 has been completed. If the connection drops, or you get a timeout, you simply reconnect until you get a valid response.
The benefit of this is that it's very quick: as soon as the server has new data, the client will get it. It's also resilient to dropped connections, and works if the client is disconnected for a while (you could request /game/17/move/5 an hour after it's been played and get the data instantly, then move on to move/6 and so on).
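For what it's worth, that loop might look roughly like this with reqwest (blocking API): keep re-requesting the current move until the server answers, then advance to the next move number. The host is a placeholder; only the URL layout comes from the question.

```rust
// Long-poll each move in turn; on a drop or timeout, just reconnect and wait again.
use std::time::Duration;

fn follow_game(client: &reqwest::blocking::Client, game: u32, mut next_move: u32) {
    loop {
        let url = format!("https://example.com/game/{}/move/{}", game, next_move);
        match client.get(&url).timeout(Duration::from_secs(60)).send() {
            Ok(resp) if resp.status().is_success() => {
                println!("move {} played: {:?}", next_move, resp.text());
                next_move += 1; // now watch move/6, move/7, ...
            }
            _ => continue, // dropped connection, timeout, or not-yet-available: retry
        }
    }
}
```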
The issue with long polling is that each "poll" ties up a server thread, which quickly breaks servers like Apache (as they run out of worker threads and so can't accept other requests). You need a specialised web server to serve the long-polling requests. The Python module Twisted (an "event-driven networking engine") is great for this, but it's more work than regular polling.
In answer to your comment about Jetty/Tomcat, I don't have any experience with Java, but it seems they both use a pool-of-worker-threads system similar to Apache's, so they will have the same problem. I did find this post, which seems to address exactly this problem (for Tomcat).
I'd suggest a 404 if your intended client is a web browser, as keeping the connection open can block other requests from the browser to the same domain. It's up to the client how often to poll.
2021 Edit: The answer above was in 2009, for context.
Today, I would suggest using a WebSocket interface with push notifications.
Alternatively, in the above suggestion, I might suggest holding the connection for 500-1000ms and check twice at the server before returning the 404, to reduce the overhead of creating multiple connections at the client.
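A rough sketch of that "hold briefly and re-check" idea on the server side (move_exists is a hypothetical lookup into whatever store holds the moves; a real async server would use a timer rather than blocking a thread):

```rust
// Check once, hold the connection ~500 ms, check again, and only then give up.
use std::thread::sleep;
use std::time::Duration;

fn lookup_with_grace(move_exists: impl Fn() -> bool) -> bool {
    if move_exists() {
        return true; // respond 200 with the move right away
    }
    sleep(Duration::from_millis(500)); // hold the connection a little longer
    move_exists() // second check; false means respond 404 and let the client poll again
}
```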
I found this article proposing a new HTTP header, "When-Modified-After", that essentially does the same thing: the server waits and keeps the connection open until the resource is modified.
I prefer a version-based approach rather than a timestamp-based approach, since it's less prone to race conditions and gives you a little more information about what it is you're retrieving. Any thoughts on this approach?