A RESTful approach to data synchronization

Assume the following scenario: a web application serves up resources through a RESTful API. A number of clients consume this API. The goal is to keep the data on the clients synchronized with the web application (in both directions).
The easiest way to do this is to ask the API whether any of the resources have changed since the client last synchronized. This means the client asks the API for the appropriate resources, accompanied by a timestamp (to see whether the data needs to be updated). This seems to me like the approach with the least overhead in terms of needless bandwidth consumption.
However, I have the feeling that this approach has a few downsides in terms of design and responsibilities. For example, the API shouldn't have to deal with checking whether resources are out of date; arguably its only responsibility should be to serve up resources when asked, without having to deal with the updating aspect. But under that second approach, the client would have to ask for a lot of data every time it wants to synchronize with the web application, and then check locally whether the data it got back is newer than what it has stored. If this process takes place every few minutes, it might become a significant burden on the system.
Am I seeing this correctly or is there a middle road that I am overlooking?

This is a pretty common problem, and a RESTful approach can help you solve it. HTTP (the application protocol typically used to build RESTful services) supports a variety of techniques that can be used to keep API clients in sync with the data on the server side.
If the client receives a Last-Modified or ETag header in an HTTP response, it may use that information to make conditional GET requests in the future. This allows the server to quickly indicate, with a 304 Not Modified response, that the client's previously stored representation of the resource is still valid and accurate. This lets the server (or even better, an intermediate proxy or cache server) be as efficient as possible in how it responds to the client's requests, potentially reducing costly round-trips to a back-end data store.
If a response contains a Last-Modified header and the client wishes to take advantage of the performance optimization it offers, it must include an If-Modified-Since header in a subsequent GET request to the same URI, passing in the same timestamp value it received. This instructs the server to fetch the information from the authoritative back-end source only if it has changed since that time. Your server will have to be built to support this technique, of course.
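A minimal client-side sketch of this technique, assuming Python's requests library; the URL and the cache layout are illustrative, not part of the original question:

```python
import requests

URL = "https://api.example.com/resources/42"  # hypothetical endpoint

cache = {}  # naive in-memory store: body plus the Last-Modified value

def fetch():
    headers = {}
    if "last_modified" in cache:
        headers["If-Modified-Since"] = cache["last_modified"]
    response = requests.get(URL, headers=headers)
    if response.status_code == 304:
        return cache["body"]  # stored copy is still valid
    response.raise_for_status()
    cache["body"] = response.text
    if "Last-Modified" in response.headers:
        cache["last_modified"] = response.headers["Last-Modified"]
    return cache["body"]
```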
A similar principle applies to ETag headers. An ETag is a simple hash code representing a specific state of the resource at a particular point in time. If the resource changes in any way, so does its ETag value. If the client sees an ETag in a response, it should pass it in an If-None-Match header on subsequent GET requests to the same URI, thereby allowing the server to quickly determine whether the client has an up-to-date representation of the resource.
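On the server side, the check can be as simple as hashing the current representation and comparing it against the client's If-None-Match value. A sketch using Flask (the resource lookup is hypothetical):

```python
import hashlib
from flask import Flask, request

app = Flask(__name__)

def load_resource(resource_id):
    # hypothetical back-end lookup
    return b'{"id": %d, "name": "example"}' % resource_id

@app.route("/resources/<int:resource_id>")
def get_resource(resource_id):
    body = load_resource(resource_id)
    etag = hashlib.sha1(body).hexdigest()  # hash of the current state
    if etag in request.if_none_match:      # Werkzeug handles ETag quoting
        return "", 304                     # client copy is up to date
    response = app.response_class(body, mimetype="application/json")
    response.set_etag(etag)
    return response
```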
Finally, you should probably look at the long polling technique to reduce the number of repeated GET requests issued by your clients. In essence, the trick is to issue long-lived GET requests to the server to watch for changes to server data. The GET will not return a response until either the data has changed or a (deliberately long) timeout fires. In the latter case, the client simply re-issues the same long-lived request to watch for changes again. See also topics like Comet and WebSockets, which are similar in approach.
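A sketch of the long-polling loop on the client side; the change-watch endpoint is hypothetical, and the client timeout is chosen to outlast the server's hold time:

```python
import requests

WATCH_URL = "https://api.example.com/resources/42/changes"  # hypothetical

def watch(handle_change):
    while True:
        try:
            # the server holds this request open until the data changes
            # or its own long timeout fires
            response = requests.get(WATCH_URL, timeout=65)
        except requests.exceptions.Timeout:
            continue  # nothing arrived in time; re-issue the watch
        if response.status_code == 200:
            handle_change(response.json())
        # on 204 (server timeout, no change) just loop and re-issue
```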

Related

How Caching Is Done Under The Hood in REST APIs?

One of the properties of REST APIs is cacheability. I want to understand how caching is done. Is it on the client side (e.g. in an API client like Postman or Insomnia), on the server side, or both?
Suppose a resource is accessed as
GET /services/data/{api_version}/{product_tag}/{resource}/{id} and
we get a response.
If we trigger the same endpoint call again almost instantly, we get another response.
Assuming the API cached on the first call, there are two scenarios:
The data did not change between the two calls. In that case, the cache gives the correct result.
The data did change between the calls. If the client relies on the cache, stale data is served to the user.
How does the client determine that the data changed, so it can serve the latest result? Is it something like setting a dirty bit, as we do in operating systems?
I know that cache invalidation is one of the toughest problems in computer science and depends on the scenario, but in general:
What should be cached on the client side and what on the server side? Caching done by Postman cannot be used by Insomnia.
How do you always serve the latest data while using the cache to its fullest?

Is there a standard way of making multiple API calls combined into one HTTP request?

While designing REST APIs I from time to time face the challenge of dealing with batch operations (e.g. deleting or updating many entities at once) to reduce the overhead of many TCP client connections. In a particular situation the problem is usually solved by adding a custom API method for the specific operation (e.g. POST /files/batchDelete, which accepts ids in the request body), which doesn't look pretty from the point of view of REST API design principles but does the job.
But a general solution to the problem is still desirable to me. Recently I found the Google Cloud Storage JSON API batching documentation, which looks to me like a fairly general solution; a similar format could be used for any HTTP API, not just Google Cloud Storage. So my question is: does anybody know of some kind of general standard (a standard or draft, a guideline, a community effort, or the like) for combining multiple API calls into one HTTP request?
I'm aware of the capabilities of HTTP/2, which include using a single TCP connection for multiple HTTP requests, but my question is aimed at the application level. In my opinion that still makes sense, because despite the availability of HTTP/2, handling this at the application level seems like the only way to guarantee it for any client, including HTTP/1.x clients, which currently represent the most used version of HTTP.
TL;DR
Neither REST nor HTTP is ideal for batch operations.
Caching, which is one of REST's constraints (and not optional but mandatory), usually prevents batch processing in some form.
It might be beneficial not to expose the data to be updated or removed in batch as resources of their own, but as data elements within a single resource, like a data table in an HTML page. Updating or removing all or some of the entries is then straightforward.
If the system is generally write-intensive, it is probably better to consider other solutions, such as exposing the DB directly to those clients, to spare a further level of indirection and complexity.
Utilizing caching may prevent a lot of workload on the server and even spare unnecessary connections.
To start with, neither REST nor HTTP is ideal for batch operations. As Jim Webber pointed out, the application domain of HTTP is the transfer of documents over the Web. This is what HTTP does, and this is what it is good at. Any business rules we carry out are just a side effect of this document management, and we have to come up with solutions that turn these document-management side effects into something useful.
As REST is just a generalization of the concepts used in the browsable Web, it is no surprise that the same concepts that apply to Web development also apply to REST development in some form. A question like "how should something be done in REST?" therefore usually boils down to answering how it would be done on the Web.
As mentioned before, HTTP isn't ideal in terms of batch processing. Sure, a GET request may retrieve multiple results, though in reality you obtain one response containing links to further resources. The creation of a resource has, according to the HTTP specification, to be indicated with a Location header that points to the newly created resource. POST is defined as an all-purpose method that performs tasks according to server-specific semantics, so you could basically use it to create multiple resources at once. However, the HTTP spec clearly lacks support for indicating the creation of multiple resources at once, as the Location header may only appear once per response and may only define a single URI. So how can a server indicate the creation of multiple resources to the client?
A further indication that HTTP isn't ideal for batch processing is that a URI must reference a single resource. That resource may change over time, but the URI can never point to multiple resources at once. The URI itself is, more or less, used as the key by caches, which store a cacheable response representation for that URI. As a URI may only ever reference one single resource, a cache will only ever store the representation of one resource for that URI. A cache invalidates a stored representation for a URI when an unsafe operation is performed on that URI; in the case of a DELETE operation, which is unsafe by nature, the representation for the URI the DELETE is performed on is removed. If you now "redirect" the DELETE operation to remove multiple backing resources at once, how should a cache take notice of that? It only operates on the URI invoked. Hence, even when you delete multiple resources in one go via DELETE, a cache might still serve clients with outdated information, as it simply hasn't noticed the removal yet and its freshness value would still indicate a fresh-enough state. Unless you disable caching by default, which violates one of REST's constraints, or reduce the time period a representation is considered fresh to a very low value, clients will probably be served outdated information. You could of course perform an unsafe operation on each of those URIs to "clear" the cache, but in that case you could just as well have invoked the DELETE operation on each resource you wanted to batch-delete in the first place.
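A toy illustration of the problem; the cache below is keyed by URI, as real HTTP caches effectively are:

```python
# what an intermediary cache might hold before the batch delete
cache = {
    "/files/1": "representation of file 1",
    "/files/2": "representation of file 2",
}

def on_unsafe_request(uri):
    # a cache only invalidates the URI the unsafe request targeted
    cache.pop(uri, None)

# DELETE /files/batchDelete removes files 1 and 2 on the server...
on_unsafe_request("/files/batchDelete")

# ...yet the cache still serves the now-deleted files as fresh
assert "/files/1" in cache and "/files/2" in cache
```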
It gets a bit easier, though, if the batch of data you want to remove is not explicitly captured via resources of its own but as data of a single resource. Think of a data table on a Web page with certain form elements, such as a checkbox you can click to mark an entry as a deletion candidate; after pressing the submit button, the selected elements are sent to the server, which performs the removal. Here only the state of one resource is updated, so a simple POST, PUT, or even PATCH operation can be performed on that resource's URI. This also plays well with caching, as outlined before, since only one resource is altered, and the unsafe operation on that URI automatically invalidates any stored representation of it.
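In code, the client then performs one unsafe request on the one collection resource instead of many DELETEs; the endpoint and payload shape below are hypothetical:

```python
import requests

COLLECTION_URL = "https://api.example.com/files"  # hypothetical collection

# mark entries 1, 4 and 7 for removal, like submitting checked rows of a form
payload = {"remove": [1, 4, 7]}

# a single unsafe operation on a single URI: any cached representation
# of /files is invalidated automatically
response = requests.post(COLLECTION_URL, json=payload)
response.raise_for_status()
```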
The above-mentioned usage of form elements to mark certain items for removal depends, however, on the media type used. In the case of HTML, its forms section specifies the available components and their affordances. An affordance is the knowledge of what you can and should do with certain objects: a button or link wants to be pushed, a text field may expect numeric or alphanumeric input that may further be length-limited, and so on. Other media types, such as hal-forms, halform, or ion, attempt to provide form representations and components for JSON-based notations; however, support for such media types is still quite limited.
As one of your concerns is the number of client connections to your service, I assume you have a write-intensive scenario, since in read-intensive cases caching would probably take a good chunk of load off your server. For example, the BBC once reported that they could reduce the load on their servers drastically just by introducing a one-minute caching interval for recently requested resources. This mainly affected their start page and the linked articles, as people clicked on the latest news more often than on old news. Receiving a couple of thousand, if not hundreds of thousands of, requests per minute, they could, as mentioned before, significantly reduce the number of requests actually reaching the server and thereby take a huge load off their servers.
Write-intensive use cases, however, can't benefit from caching as much as read-intensive ones, as the cache would be invalidated quite often and the actual requests forwarded to the server for processing. If the API is more or less used to perform CRUD operations, as so many "REST" APIs do in reality, it is questionable whether it wouldn't be preferable to expose the database directly to the clients. Almost all modern database vendors ship with sophisticated user-rights management and allow views to be created and exposed to certain users. The "REST API" on top of it basically just adds a further level of indirection and complexity in such a case. By exposing the DB directly, performing batch updates or deletions shouldn't be an issue at all, as support for such operations should already be built into the DB layer via the respective query languages.
Regarding the number of connections clients create: HTTP has allowed connection reuse since version 1.0 via the Connection: keep-alive header directive. In HTTP/1.1, persistent connections are the default unless a close is explicitly requested via the Connection: close header directive. HTTP/2 introduced multiplexed connections that allow many streams, and therefore requests, to share the same connection at the same time. This is more or less a fix for the connection limit suggested in RFC 2616, which plenty of Web developers worked around with CDNs and similar techniques. Currently, most implementations allow a maximum of around 100 concurrent streams, and therefore simultaneous downloads, per connection, AFAIK.
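With Python's requests, for instance, connection reuse is a one-liner: a Session pools and reuses the underlying TCP connection across calls (URLs illustrative):

```python
import requests

# a Session pools connections (HTTP keep-alive), so these requests
# can all travel over one TCP connection instead of three
with requests.Session() as session:
    for resource_id in (1, 2, 3):
        response = session.get(f"https://api.example.com/files/{resource_id}")
        print(response.status_code)
```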
Opening and closing a connection usually takes a bit of time and server resources, and the more open connections a server has to deal with, the more the system may suffer, though open connections with hardly any traffic aren't a big issue for most servers. While connection creation used to be considered the costly part, with persistent connections the cost has shifted to the number of requests issued, hence the desire for batch requests, which HTTP is not really made for. Again, as mentioned throughout this post, smart utilization of caching can keep plenty of requests from ever reaching the server, and this is probably one of the best optimization strategies for reducing the number of simultaneous requests. The best advice in such a case is probably to look at which kinds of resources are requested frequently, which requests take up a lot of processing capacity, and which ones can easily be answered by utilizing caching.
reduce overhead of many tcp client connections
If this is the crux of the issue, the easiest way to solve it is to switch to HTTP/2.
In a way, HTTP/2 does exactly what you want. You open one connection, and using that connection you can send many HTTP requests in parallel. Unlike batching in a single HTTP request, it's mostly transparent for clients, and responses and requests can be processed out of order.
Ultimately batching multiple operations in a single HTTP request is always a network hack.
HTTP/2 is widely available. If HTTP/1.1 is still the most used version (this might be true, but the gap is closing), that has more to do with servers not yet being set up for it than with clients.
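A sketch of this with the httpx library, which supports HTTP/2 when installed with its h2 extra; the URLs are illustrative:

```python
import asyncio
import httpx

async def main():
    # one TCP connection, many requests multiplexed over it concurrently
    async with httpx.AsyncClient(http2=True) as client:
        tasks = [client.get(f"https://api.example.com/files/{i}")
                 for i in range(1, 11)]
        for response in await asyncio.gather(*tasks):
            print(response.http_version, response.status_code)

asyncio.run(main())
```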

If we create multiple resources with update requests in REST, what will the impact be on the server side?

If we create multiple resources with update requests using the POST method in REST, what will the impact be on the server side as the number of created resources grows?
I know that using PUT requests we can achieve fault tolerance due to idempotence. If we use POST instead of PUT, what will happen?
If we create a number of resources using POST for updates, is there any performance issue? If we create a number of resources, what is the impact on the server?
With POST and PUT, if we call the same request n times, we are going to hit the server n times; creating a new resource versus the same resource should not impact the server. Can you please confirm whether this statement is right or wrong?
If we create multiple resources with update requests using the POST method in REST, what will the impact be on the server side as the number of created resources grows?
First of all, HTTP, the de-facto transport layer of REST, is an application protocol for transferring documents over a network, not a vehicle for your application domain on which you can run your business rules directly. Any business rules you infer from sending data over the network are just a side effect of the actual document management you perform via HTTP. While certain things might map well from document management to your business layer, certain things might not; for instance, HTTP isn't designed to support large batch processing.
Moreover, even though HTTP itself defines a set of methods you can use, with IANA administering additional ones, the actual implementation depends on the server itself. It should follow the semantics outlined in the RFC, though it might not. In such a case it may harm interoperability with other clients, which is why following the spec is recommended.
What implications or impact a request has on the server depends on a couple of factors, such as the kind of server, the data that needs to be processed, whether work can be offloaded (e.g. to a cache), and the internal infrastructure you use. If you have a server with a couple of hundred cores and terabytes of address space, a request will have less impact than on a server with a single CPU core and just a gigabyte of RAM that also has to accommodate a couple of other applications and the OS itself. In general, though, the actual impact of a request isn't tied to the operation you invoke, as at its core HTTP is just a remote document-management protocol, as explained before. Certain methods, such as PATCH, may be an exception to this rule, as PATCH clearly demands transaction support: either all or none of the operations defined in the patch document are applied.
I know that using PUT requests we can achieve fault tolerance due to idempotence. If we use POST instead of PUT, what will happen?
RFC 7231 includes a hint on the difference between POST and PUT:
The fundamental difference between the POST and PUT methods is highlighted by the different intent for the enclosed representation. The target resource in a POST request is intended to handle the enclosed representation according to the resource's own semantics, whereas the enclosed representation in a PUT request is defined as replacing the state of the target resource. Hence, the intent of PUT is idempotent and visible to intermediaries, even though the exact effect is only known by the origin server.
POST does not give a client any promises about what happens in case of a network error: you might not know whether the request reached the server and only the response got lost, or whether the request never made it to the server at all. Jim Webber gave an example of why idempotency matters, especially when you deal with money and currencies.
HTTP is rather specific about informing a client that a resource was created: the response includes a Location header containing a URI to the created resource. This works for POST as well as for PUT and PATCH. This premise can be utilized to "safely" upload data. A client sends POST requests to the server until it receives a response with a Location header pointing to the created resource, which it then uses in the next step to perform a PUT on that resource with the actual content. This pattern is called the POST-PUT creation pattern, and it is especially useful when you either have a large payload to send or have to guarantee that the state triggers a business rule only once, e.g. in the case of an online purchase.
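A sketch of the pattern, assuming a hypothetical /documents endpoint: the cheap POST is retried until a Location header proves the resource exists, then the critical payload goes out via idempotent PUT.

```python
import requests

BASE = "https://api.example.com"  # hypothetical service

def create_safely(metadata, content):
    # step 1: POST until we *know* a resource was created; a retry may
    # leave a spare empty resource behind, but triggers no business rule
    location = None
    while location is None:
        try:
            response = requests.post(f"{BASE}/documents",
                                     json=metadata, timeout=10)
            location = response.headers.get("Location")
        except requests.exceptions.RequestException:
            pass  # request or response lost; just try again
    # step 2: PUT the actual content (assuming a relative Location);
    # PUT is idempotent, so retrying after a network error cannot
    # apply the state twice
    requests.put(f"{BASE}{location}", data=content, timeout=30)
    return location
```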
Note that, with the help of conditional requests, some form of optimistic locking can be used as well, though this requires knowing the state of the current resource beforehand. A value unique to the current state is included in the request and acts as a distributed lock; if it differs from the state the server currently holds, because another client updated the resource in the meantime, the server rejects the request.
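A sketch of such a conditional update using an ETag as the lock value; the resource and field names are illustrative:

```python
import requests

URL = "https://api.example.com/accounts/42"  # hypothetical resource

# read the current state and remember the value identifying it
response = requests.get(URL)
etag = response.headers["ETag"]
account = response.json()
account["balance"] -= 100

# the server applies the update only if the state is unchanged
update = requests.put(URL, json=account, headers={"If-Match": etag})
if update.status_code == 412:  # 412 Precondition Failed
    print("someone else updated the resource; re-read and retry")
```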
If we create a number of resources using POST for updates, is there any performance issue? If we create a number of resources, what is the impact on the server?
I'm not entirely sure what you mean by creating a number of resources using POST for updates. Do you want to create or update a resource via POST? These methods differ only in the semantics they promise. How you map the document-modification event to trigger certain business rules in your backend is completely up to you. In general, though, as mentioned before, HTTP isn't ideal for batch processing.
With POST and PUT, if we call the same request n times, we are going to hit the server n times; creating a new resource versus the same resource should not impact the server. Can you please confirm whether this statement is right or wrong?
If you send n requests via POST to the server, the server will perform its POST logic n times (assuming all of the requests reached the server). Whether a new resource is created depends on the implementation: a POST request might only start a backing process, trigger some kind of calculation, or do nothing at all. If a resource was created, though, the response should contain a Location header with the URI that points to it.
As for sending n requests via PUT: if the same URI is used for all of them, the server should in general apply the payload as the new state of the targeted resource. Whether this internally results in a DB update is an implementation detail that may vary from project to project. In general, a PUT request does not result in the creation of a new resource unless the resource the target URI pointed to didn't exist before, though it may create further resources as a side effect; PUT is allowed to have side effects. Imagine designing some kind of version control system: an update performed on the HEAD trunk applies the new state to HEAD, while as a side effect a new resource is created for that commit in the commit history.
So, in summary, you can't deduce the impact a request has on a server solely from the HTTP method used, as at its heart HTTP is just an application protocol that transfers documents over a network. The actual business rules that get triggered are just a side effect of the document management. What impact a request has on the server depends on multiple factors, such as the type of server, the size of the request, and what you do with it on the server side. Each of the available methods has its own semantics, and you shouldn't compare them by the impact they might have on the server, but by the promises they give to a client. Anything related to balances or money should be done via PUT, due to the idempotent property of that method.

RESTful web requests and user activity tracking websites

Someone asked me this question a couple of days ago and I didn't have an answer:
As HTTP is a stateless protocol, when we open www.google.com, can it be called a REST call?
What I think:
When we make a search on google.com, all info is passed through cookies and URL parameters. It looks like a stateless request.
But the search results are not independent of the user's past requests; they are specific to the user's interests and behavior. Now it doesn't look like a stateless request.
I know this is an old question and I have read many SO answers, like Why HTTP is a stateless protocol?, but I am still not able to understand what happens when user activity is tracked, as on Google or Amazon (recommendations based on past purchases) or any other website with activity-based recommendations.
Is it RESTful or is it RESTless?
What if I want to create a web app in which I use REST architecture and still provide user-specific responses?
HTTP is stateless; however, the Google application layer is not. The specific cookies and their meaning are part of the application layer.
Consider the same with TCP/IP. IP is a stateless protocol, but TCP isn't. The existence of state in TCP embedded in IP packets does not mean that IP protocol itself has a state.
So does that make it a REST call? No.
Although HTTP is stateless, and I would suspect that if www.google.com were requested with cookies disabled the results would be the same for each request (making it almost stateless; Google probably still tracks IPs to limit request frequency), the application layer is not stateless. One of the principles of REST is that the system does not retain state data about the client between requests for the purpose of modifying the responses. In the case of Google, that is clearly not happening.
It seems that the meaning of "stateless" is being (hypothetically) taken beyond its practical expression.
Consider a web system with no DB at all. You call a (RESTful) API and always get exactly the same results. This is perfectly stateless... but it is also not a real system.
A real system, in practically every implementation, holds data. Moreover, that data is the "resources" the RESTful API allows us to access. Of course, the data changes, due to API calls among other things. So if you get a resource's value, change it, and then get it again, you will get a different value than on the first read; however, this clearly does not mean that the reads themselves were not stateless. They are stateless in the sense that they represent the very same action (or, more exactly, resource) on each call. A change has to be made explicitly, using another RESTful API call, for the resource's value to change and be reflected in the next read.
However, what will be the case if we have a resource that changes without a manual, standard API verb?
For example, suppose we have a resource that counts the number of times some other resource was accessed, or a resource that is populated from third-party data. Clearly, this is still a stateless protocol.
Moreover, in some sense, almost any system -- say, any system that includes an authentication mechanism -- responds differently for the same API calls, depending, for example, on the user's privileges. And yet, clearly, RESTful systems are not forbidden to authenticate their users...
In short, stateless systems are stateless for the sake of that protocol. If Google tracks the calls so that calling the same resource in the same session gives me different answers, then it breaks the stateless requirement. But as long as the returned response differs due to application-level data, and is not session-related, the requirement is not broken.
AFAIK, what Google does is not necessarily session-related. If the same user runs the same search under completely identical conditions (e.g. IP, geographical location, OS, browser, etc.), they will get the very same response. If a new, identical search produces different results due to what Google has "learnt" from the last call, it is still stateless, because that second call would have produced the very same result if it had been made in another session under identical conditions.
You should probably start from Fielding's comments on cookies in his thesis, and then review Fielding's further thoughts, published on rest-discuss.
My interpretation of Fielding's thoughts, applied to this question: no, it's not REST. The search results change depending on the state of the cookie header in the request, which is to say that the representation of the resource changes depending on the cookie, which is to say that part of the resource's identifier is captured in the cookie header.
Most of the problems with cookies are due to breaking visibility,
which impacts caching and the hypertext application engine -- Fielding, 2003
As it happens, caching doesn't seem to be a big priority for Google; the representation returned included a Cache-Control: private header, which restricts participation by intermediary components.

REST API: Metadata goes to DB, file to storage. To proxy or not to proxy through API end-point?

I'm currently planning a REST-style API. The problem I have is that the client will send one or more files, belonging to the same "document", but while the metadata is to be stored in a DB, the files are going to file storage (probably S3, in my case).
The way I see it, there are two ways of doing it:
1. Send the metadata to the API endpoint, which responds with the location for storing the files; then, in a separate request, store the files directly.
2. Send the metadata and files, in the same request, to the API, which acts as a proxy and takes care of sending the various parts to their final destinations.
The good thing about 1 is that the API server will have less to deal with, so it can be smaller, and bandwidth is paid for only once (client -> storage). On the other hand, giving a good UX is likely to be harder, and there will be more state to keep track of.
With 2 it's easy to ensure the transaction is atomic, since the API server is the sole gatekeeper. However, the server will need to be more powerful, and bandwidth may be paid for twice (client -> API -> storage).
So, what's the best way of dealing with this situation, and if going with 1, are there any problems to look out for?
Assuming you have external clients, I believe that #2 is the better bet. The way to attract and keep clients is to have the best possible UX, with a simple, easy-to-learn interface. As you said, you also get to keep atomic transactions, which will save you plenty of headaches. In my experience, server power is relatively cheap, and you can always send a 202 (Accepted) back to the client instead of a 201 (Created).
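A minimal sketch of option 2 as a single proxy endpoint; Flask, boto3, the bucket name, and the table schema are all assumptions for illustration, not part of the original answer:

```python
import sqlite3
import uuid

import boto3
from flask import Flask, request

app = Flask(__name__)
s3 = boto3.client("s3")
BUCKET = "my-document-files"  # hypothetical bucket

@app.route("/documents", methods=["POST"])
def create_document():
    doc_id = str(uuid.uuid4())
    metadata = request.form["metadata"]  # metadata part of the multipart body
    upload = request.files["file"]       # file part of the multipart body

    # store the file first; note this is not fully atomic: a DB failure
    # below leaves an orphan object in S3 that needs garbage collection
    s3.upload_fileobj(upload, BUCKET, f"documents/{doc_id}")

    with sqlite3.connect("documents.db") as db:
        db.execute("INSERT INTO documents (id, metadata) VALUES (?, ?)",
                   (doc_id, metadata))

    response = app.response_class(status=201)
    response.headers["Location"] = f"/documents/{doc_id}"
    return response
```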