REST - PUT vs PATCH to minimize coupling between client & API when adding new properties

We are building a set of new REST APIs.
Let's say we have a resource /users with the following fields:
{
  "id": 1,
  "email": "test#user.com"
}
Clients implement this API and can then update this resource by sending a new resource representation to PUT /users/1.
Now let's say we add a new property name to the model like so:
{
  "id": 1,
  "email": "test#user.com",
  "name": "test user"
}
If the models that existing clients use to call our API are not updated, then calls to PUT /users/1 will remove the new name property, since PUT is supposed to replace the resource. I know that clients could work directly with the raw JSON to ensure they always pick up any new properties added to the API, but that is a lot of extra work, and under normal circumstances clients are going to create their own model representations of the API resources on their side. This means that any time a new property is added, all clients need to update the code/models on their side to make sure they aren't accidentally removing properties. This creates unneeded coupling between systems.
As a way to solve this problem, we are considering not implementing PUT operations at all and switching updates to PATCH where properties that aren't passed in are simply not changed. That seems technically correct, but might not be in the spirit of REST. I am also slightly concerned about client support for the PATCH verb.
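To illustrate what we have in mind, a PATCH using JSON Merge Patch semantics (application/merge-patch+json, RFC 7396) would only touch the properties it carries; the values here are just placeholders:

PATCH /users/1 HTTP/1.1
Content-Type: application/merge-patch+json

{
  "email": "changed#user.com"
}

Since name is absent from the patch document, it would be left untouched on the server.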
How are others solving this problem? What is the best practice here?

You are in a situation where you need some form of API versioning. The most appropriate way is probably using a new media-type every time you make a change.
This way you can support older versions and a PUT would be perfectly legal.
If you don't want this and just stick with PATCH: PATCH is supported everywhere except in ancient browsers, so it is not something to worry about.
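As a rough sketch of the media-type versioning idea (the vendor media type names below are made up for illustration):

GET /users/1 HTTP/1.1
Accept: application/vnd.acme.user-v2+json

HTTP/1.1 200 OK
Content-Type: application/vnd.acme.user-v2+json

{
  "id": 1,
  "email": "test#user.com",
  "name": "test user"
}

A client that still only knows the v1 media type keeps requesting application/vnd.acme.user-v1+json, receives the old shape, and its PUT remains valid against the representation it actually negotiated.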

Switching from PUT to PATCH will not fix your problem, IMO. The root cause is that clients already expect the data returned for a representation to follow a certain type. According to Fielding:
A REST API should never have “typed” resources that are significant to the client.
(Source)
Instead of using typed resources, clients should use content-type negotiation to exchange data. Here, media-type formats that are generic enough to gain widespread adoption are certainly beneficial, although certain domains may require a more specific representation format.
Think of a car vendor's Web page where you can retrieve the data for your preferred car. You, as a human, can easily identify that the data depicts a typical car. However, the media type you most likely received the data in (HTML) does not state, by its syntax or by the semantics of its elements, that the data describes a car, unless semantic annotation attributes or elements are present; you can still update the data or use it elsewhere, though.
This is possible because HTML ships with a rich specification of its elements and attributes, such as Web forms, which not only describe the supported or expected input parameters but also the URI to send the data to, the representation format to use when sending (implicitly application/x-www-form-urlencoded, though it may be overridden via the enctype attribute) and the HTTP method to use, which in HTML is fixed to either GET or POST. Through this, a server is able to teach a client how a request needs to be built. As a consequence, the client does not need to know anything besides the HTTP, URI and HTML specifications.
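As a minimal sketch of that idea, using the user resource from the question (the target URI, field names and values are illustrative only):

<form action="/users/1" method="post" enctype="application/x-www-form-urlencoded">
  <input type="email" name="email" value="test#user.com">
  <input type="text" name="name" value="test user">
  <button type="submit">Save</button>
</form>

Everything the client needs - the target URI, the method, the representation format and the expected fields - is carried in the representation itself rather than being hard-coded into the client.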
As Web pages are usually filled with all kinds of unrelated stuff, such as ads, styling information or scripts, and the XML(-like) syntax is not everyone's favourite, as it may increase the size of the actual payload slightly, most so-called "REST" APIs prefer to exchange JSON-based documents. While plain JSON is not an ideal representation format, as it does not ship with link support at all, it is nonetheless very popular. Certain additions such as JSON Hyper-Schema (application/schema+json hyper-schema) or the JSON Hypertext Application Language (HAL) (application/hal+json) add support for links and link relations. These can be used to render data received from the server as-is. However, if you want a response to automatically drive your application state (i.e. to dynamically draw the GUI with the processed data), a more specific representation format is needed, one that your client can parse and act upon because it understands what the server wants it to do (= affordance). If you want to instruct a client on how to build a request, support for further media types such as HAL-FORMS or Ion is needed. Certain media types furthermore allow you to use a concept called profiles, which let you annotate a resource with a semantic type. HAL, for instance, supports this: the Content-Type header may then contain a value such as application/hal+json;profile=http://schema.org/Car that hints to the media-type processor that the payload follows the definition of the given profile, so further validity checks may be applied.
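A rough sketch of such a HAL response carrying a profile hint (the resource fields are invented; the profile URI merely names a vocabulary):

HTTP/1.1 200 OK
Content-Type: application/hal+json;profile="http://schema.org/Car"

{
  "_links": {
    "self": { "href": "/cars/42" }
  },
  "name": "ACME Roadster",
  "color": "red"
}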
As the representation format should be generic enough to gain widespread usage, and URIs themselves shouldn't hint to a client what kind of data to expect, another mechanism is needed. Link relation names are basically annotations for URIs that tell a client about the purpose of a certain link. A pageable collection might return links annotated with first, prev, next and last, whose purposes are pretty obvious. Other links might be hinted with prefetch, telling a client that the resource may be loaded right after loading of the current resource has finished, as it is very likely that the client will retrieve that resource next. Such link relation names, however, should either be standardized (defined in a proposal or RFC and registered with IANA) or follow the extension scheme proposed by Web linking (as used, for instance, by Dublin Core). A client that just looks up the URI for a given link-relation name, instead of attempting to parse parameters out of the URI itself, will still work if the server changes its URI scheme.
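For a pageable collection this could, for example, be expressed via the Link response header defined by Web linking (RFC 8288); the URIs below are placeholders:

HTTP/1.1 200 OK
Link: </orders?page=1>; rel="first",
      </orders?page=2>; rel="prev",
      </orders?page=4>; rel="next",
      </orders?page=9>; rel="last"

A client that simply follows the link annotated with next keeps working even if the server later reshuffles its URI layout.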
In regards to de/coupling in a distributed system, a certain amount of coupling has to exist, otherwise the parties won't be able to communicate at all. The point, though, is that the coupling should be based on well-defined and standardized formats that plenty of clients may support, instead of exchanging specific representation formats that only a very limited number of clients support (in the worst case only your own client). Instead of coupling directly to the API and using an undefined JSON-based syntax (maybe with external documentation of the semantics of the respective fields), the coupling should now occur on the media types the parties can use to exchange the data. Here the question should not be which media type to support but how many you want to support. The more media types your client or server supports, the more likely it is to interact with other peers in the distributed system. In the grand scheme of things, you want a server to be able to serve a plethora of clients, while a single client should (in the best case) be able to interact with every server without the need for constant adaptation.
So, if you really want to decouple clients from servers, you should take a closer look at how the Web actually works and try to mimic its interaction model on your application layer. As "Uncle Bob" Robert C. Martin mentioned,
An architecture is about intent! (Source)
and the intention behind the REST architecture is the decoupling of clients from servers/services. As such, supporting multiple media types (or defining your own one that is generic enough to reach widespread adoption), looking up URIs only via their accompanying link-relation names, relying on content-type negotiation and relying only on the provided data may help you achieve the degree of decoupling you are looking for.
All nice and well in theory, but so far every REST API I have encountered in my career had predefined contracts that changed over time.
The problem here is that almost all of those so-called "REST APIs" are RPC services at heart and should not be termed "REST" to start with - this is, though, a community issue. Usually such APIs ship with external documentation (e.g. Swagger) that just re-introduces the same problems classical RPC solutions, such as CORBA, RMI or SOAP, suffer from. The documentation may be seen as the IDL in that process, without the strict need for skeleton classes, though most "frameworks" use some kind of typed data classes that will either ignore a recently introduced field (in the best case) or blow up entirely on invocation.
One of the problems REST suffers from is that most people haven't read Fielding's thesis and therefore don't see the big picture REST tries to establish, yet claim to know what REST is; they mix things up and call their services RESTful, which leads to a situation where REST != REST. The ones pointing out what a REST architecture is and how one might achieve it are called out as dreamers and unworldly, while the ones proclaiming the wrong term (RPC over HTTP = REST) continue to do so, adding to the confusion especially of those just learning the whole matter.
I admit that developing a true REST architecture is really, really hard, as it is just too easy to introduce some form of coupling. Hence a very careful design is needed, which takes time and also costs money - money plenty of companies can't or don't want to spend, especially in a domain where new technologies evolve on a regular basis and the people responsible for developing such solutions often leave the company before the whole process has finished.
Just saying it shouldn’t be ‘typed’ is not really a viable solution
Well, how often did you need to change your browser because it couldn't interact with a Web page? I'm not talking about browser-specific CSS or JS quirks. How often did the Web need to change in the last 2-3 decades? Similar to the Web, the REST architecture is intended for long-lasting applications, for years to come, that support natural evolution by design. For simple frontend-to-backend systems it is for sure overkill. It starts to shine especially in cases where there are multiple peers, not under your control, that you interact with.

Related

REST API design for a "reset to defaults" operation

I'm surprised to find so little mention of this dilemma online, and it makes me wonder if I'm totally missing something.
Assume I have a singleton resource called Settings. It is created on init/install of my web server, but certain users can modify it via a REST API; let's say /settings is my URI. I have a GET operation to retrieve the settings (as JSON), and a PATCH operation to set one or more of its values.
Now, I would like to let the user reset this resource (or maybe individual properties of it) to default - the default being "whatever value was used on init", before any PATCH calls were done. I can't seem to find any "best practice" approach for this, but here are the ones I have come up with:
Use a DELETE operation on the resource. It is after all idempotent, and it's pretty clear (to me). But since the URI will still exist after DELETE, meaning the resource was neither removed nor moved to an inaccessible location, this contradicts the RESTful definition of DELETE.
Use a POST to a dedicated endpoint such as /settings/reset - I really dislike this one because it's the most blatantly non-RESTful, as the verb is in the URI
Use the same PATCH operation, passing some stand-in for "default" such as a null value. The issue I have with this one is the outcome of the operation is different from the input (I set a property to null, then I get it and it has a string value)
Create a separate endpoint to GET the defaults, such as /settings/defaults, and then use the response in a PATCH to set those values. This doesn't seem to contradict REST in any way, but it does require 2 API calls for seemingly one simple operation.
If one of the above is considered the best practice, or if there is one I haven't listed above, I'd love to hear about it.
Edit:
My specific project has some attributes that simplify this question, but I didn't mention them originally because my aim was for this thread to be used as a reference for anyone in the future trying to solve the same problem. I'd like to make sure this discussion is generic enough to be useful to others, but specific enough to also be useful to me. For that, I will append the following.
In my case, I am designing APIs for an existing product. It has a web interface for the average user, but also a REST(ish) API intended to meet the needs of developers who need to automate certain tasks with said product. In this oversimplified example, I might have the product deployed to a test environment on which I run various automated tests that modify /settings, and I would like to run a cleanup script that resets /settings back to normal when I'm done.
The product is not SaaS (yet), and the APIs are not public (as in, anyone on the web can access them freely) - so the audience and thus the potential types of "clients" I may encounter is rather small - developers who use my product, that is deployed in their private data center or AWS EC2 machines, and need to write a script in whatever language to automate some task rather than doing it via UI.
What that means is that some technical considerations like caching are relevant. Human user considerations, like how consistent the API design is across various resources, and how easy it is to learn, are also relevant. But "can some 3rd party crawler identify the next actions it can perform from a given state" isn't so relevant (which is why we don't implement HATEOAS, or the OPTIONS method at all)
Let's discuss your mentioned options first:
1: DELETE does not necessarily need to delete or remove the state contained in the resource targeted by the URI. It just requires that the mapping of the target URI to the resource is removed, which means that a consecutive request to the same URI should no longer return the state of the resource, if no other operation was performed on that URI in the meantime. As you want to reuse the URI pointing to the client's settings resource, this is probably not the correct approach.
2: REST doesn't care about the spelling of the URI as long as it is valid according to RFC 3986. There is no such thing as a RESTful or RESTless URI. The URI as a whole is a pointer to a resource, and a client should refrain from extracting knowledge from it by parsing and interpreting it. Client and server should instead make use of the link relation names that URIs are attached to. This way URIs can be changed at any time and clients will still be able to interact with the service. The presented URI does leave an RPC kind of smell, but an automated client is totally unaware of that.
3: PATCH is actually pretty similar to patching as done by code versioning tools. Here a client should precalculate the steps needed to transform a source document into its desired form and collect these instructions in a so-called patch document. If this patch document is applied to a document whose state matches the version the patch was calculated against, the changes will be applied correctly. In any other case the outcome is uncertain. While application/json-patch+json follows this philosophy of a patch document containing separate instructions, application/merge-patch+json takes a slightly different approach by defining default rules (nulling out a property leads to its removal, including a property leads to its addition or update, and leaving a property out leaves it untouched in the original document); see the first sketch after this list.
4: In this sense, first retrieving the latest state of a resource, locally updating it/calculating the changes, and then sending the outcome to the server is probably the best of the approaches listed. Here you should make use of conditional requests to guarantee that the changes are only applied to the version you recently downloaded, ruling out issues caused by intermediary changes to that resource; see the second sketch after this list.
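To make the difference between the two patch formats concrete, here is a sketch of both applied to a hypothetical settings resource (the property names are invented):

PATCH /settings HTTP/1.1
Content-Type: application/json-patch+json

[
  { "op": "replace", "path": "/theme", "value": "dark" },
  { "op": "remove", "path": "/banner" }
]

PATCH /settings HTTP/1.1
Content-Type: application/merge-patch+json

{
  "theme": "dark",
  "banner": null
}

In the merge-patch variant the null removes banner, while every property that is not mentioned stays as it is.
And a sketch of a conditional update as mentioned in option 4 (the ETag value and property names are made up):

GET /settings HTTP/1.1

HTTP/1.1 200 OK
ETag: "v7"
Content-Type: application/json

{ "theme": "light", "pageSize": 25 }

PATCH /settings HTTP/1.1
If-Match: "v7"
Content-Type: application/merge-patch+json

{ "pageSize": 50 }

If the resource was modified in the meantime, the server rejects the request with 412 Precondition Failed instead of applying the patch to a stale version.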
Basically, in a REST architecture the server offers a bunch of choices to a client which, based on its task, will choose one of the options and issue a request to the attached URI. Usually the client is taught everything it needs to know by the server via form representations such as HTML forms, HAL-FORMS or Ion.
In such an environment settings is, as you mentioned, a valid resource of its own, and so is a default-settings resource. So, in order to allow a client to reset its settings, it is just a matter of "copying" the content of the default-settings resource to the target settings resource. If you want to be WebDAV compliant, which is just an extension of HTTP, you could use the COPY HTTP method (also see the other registered HTTP methods at IANA). For plain HTTP clients, though, you need a different approach so that arbitrary HTTP clients are able to reset the settings to the desired defaults.
How a server wants a client to perform that request can be taught via the form support mentioned above. A very simplistic approach on the Web would be to send the client an HTML page with the default settings pre-filled into an HTML form, maybe also allowing the user to tweak the settings beforehand, and then a click on the submit button sends the request to the URI present in the action attribute of the form, which can be any URI the server wants. As HTML only supports POST and GET in forms, on the Web you are restricted to POST.
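A minimal sketch of such a pre-filled form (the field names and default values are invented):

<form action="/settings" method="post" enctype="application/x-www-form-urlencoded">
  <input type="text" name="theme" value="light">
  <input type="number" name="pageSize" value="25">
  <button type="submit">Reset to defaults</button>
</form>

Submitting it sends the default values straight to the /settings resource itself, which also matters for the caching point discussed next.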
One might think that just sending a payload containing the URI of the settings resource to reset, and optionally the URI of the default settings, to a dedicated endpoint via POST is enough, letting it then perform its magic to reset the state to the default one. However, this approach bypasses caches and might let them believe that the old state is still valid. Caching in HTTP works such that the effective URI of a resource is used as the cache key, and any unsafe operation performed on that URI leads to an eviction of the stored content, so that consecutive requests go directly to the server instead of being served by the cache. If you send the unsafe POST request to a dedicated resource (or endpoint in terms of RPC), you miss out on the capability to inform caches about the modification of the actual settings resource.
As REST is just a generalization of the interaction model used on the human Web, it is no miracle that the same concepts used on the Web also apply at the application domain level. While you can use HTML here as well, JSON-based formats such as application/hal+json or the above-mentioned HAL-FORMS or Ion formats are probably more popular. In general, the more media types your service is able to support, the more likely the server will be to serve a multitude of clients.
In contrast to the human Web, where images, buttons and the like provide an affordance of the respective control to a user, arbitrary clients, especially automated ones, usually don't cope with such affordances well. As such, other ways to hint a client about the purpose of a URI or control element need to be provided, such as link relation names. While <<, <, >, >> may be used on an HTML page to link to the first, previous, next and last elements of a collection, link relations provide first, prev, next and last as alternatives. Such link relations should of course either be registered with IANA or at least follow the Web linking extension approach. A client looking up the URI for a prev relation will know the purpose of the URI and still be able to interact with the server if the URI ever changes. This is in essence also what HATEOAS is all about: using the given controls to navigate the application through the state machine offered by the server.
Some general rules of thumb in designing applications for REST architectures are:
Design the interaction as if you'd interact with a Web page on the human Web or, more formally, as a state machine or domain application protocol (as Jim Webber termed it) that a client can run through
Let servers teach clients what requests need to look like via support for different form types
APIs shouldn't use typed resources but instead rely on content type negotiation
The more media types your API or client supports, the more likely it will be to interact with other peers
Long story short, a very basic approach is to offer the client a pre-filled form with all the data that makes up the default settings. The target URI of the form's action attribute points at the actual resource and thus also informs caches about the modification. On top of that, this approach is future-proof, as clients will automatically be served the new structure and properties a resource supports.
... so the audience and thus the potential types of "clients" I may encounter is rather small - developers who use my product, that is deployed in their private data center or AWS EC2 machines, and need to write a script in whatever language to automate some task rather than doing it via UI.
REST in the sense of Fielding's architectural style shines when there is a multitude of different clients interacting with your application and when support for future evolution needs to be inherently integrated into the design. REST just gives you the flexibility to add new features down the road, and well-behaved REST clients will just pick them up and continue working. If you are only interacting with a very limited set of clients, especially ones under your control, or if the likelihood of future changes is very small, REST might be overkill and not justify the additional overhead caused by the careful design and implementation.
... some technical considerations like caching are relevant. Human user considerations, like how consistent the API design is across various resources, and how easy it is to learn, are also relevant. But "can some 3rd party crawler identify the next actions it can perform from a given state" isn't so relevant ...
The term API design already indicates that a more RPC-like approach is desired, where certain operations are exposed that users can invoke to perform some task. That is all fine as long as you don't call it a REST API from Fielding's standpoint. The plain truth here is that there are hardly any applications/systems out there that really follow the REST architectural style, but there are tons of "bad examples" that misuse the term REST and therefore paint a wrong picture of the REST architecture, its purpose and its benefits and weaknesses. This is partly a problem caused by people not reading Fielding's thesis (carefully) and partly due to the overall preference for pragmatism and using/implementing shortcuts to get the job done ASAP.
In regards to the pragmatic take on "REST", it is hard to give an exact answer, as everyone seems to understand something different by it. Most of those APIs rely on external documentation anyway, such as Swagger, OpenAPI and what not, and there the URI seems to be the thing that gives developers a clue about the purpose. So a URI ending with .../settings/reset should be clear to most developers. Whether the URI has an RPC smell to it, or whether or not to follow the semantics of the respective HTTP methods (e.g. partial PUT or payloads within GET), is your design choice, which you should document.
It is okay to use POST
POST serves many useful purposes in HTTP, including the general purpose of “this action isn’t worth standardizing.”
POST /settings HTTP/x.y
Content-Type: text/plain
Please restore the default settings
On the web, you'd be most likely to see this as a result of submitting a form; that form might be embedded within the representation of the /settings resource, or it might live in a separate document (that would depend on considerations like caching). In that setting, the payload of the request might change:
POST /settings HTTP/x.y
Content-Type: application/x-www-form-urlencoded
action=restoreDefaults
On the other hand: if the semantics of this message were worth standardizing (ie: if many resources on the web should be expected to understand "restore defaults" the same way), then you would instead register a definition for a new method token, pushing it through the standardization process and promoting adoption.
So it would be in this definition that we would specify, for instance, that the semantics of the method are idempotent but not safe, and also define any new headers that we might need.
there is a bit in it that conflicts with this idea of using POST to reset "The only thing REST requires of methods is that they be uniformly defined for all resources". If most of my resources are typical CRUD collections, where it is universally accepted that POST will create a new resource of a given type
There's a tension here that you should pay attention to:
The reference application for the REST architectural style is the world wide web.
The only unsafe method supported by HTML forms was POST
The Web was catastrophically successful
One of the ideas that powered this is that the interface was uniform -- a browser doesn't have to know if some identifier refers to a "collection resource" or a "member resource" or a document or an image or whatever. Neither do intermediate components like caches and reverse proxies. Everybody shares the same understanding of the self descriptive messages... even the deliberately vague ones like POST.
If you want a message with more specific semantics than POST, you register a definition for it. This is, for instance, precisely what happened in the case of PATCH -- somebody made the case that defining a new method with additional constraints on the semantics of the payload would allow richer, more powerful general-purpose components.
The same thing could happen with the semantics of CREATE, if someone were clever enough to sit down and make the case (again: how can general purpose components take advantage of the additional constraints on the semantics?)
But until then, those messages should be using POST, and general-purpose components should not assume that POST has create semantics, because RFC 7231 doesn't provide those additional constraints.

REST API with generic User

I have created a few REST APIs so far, and I always preferred a solution where I created an endpoint for each resource.
For example:
GET .../employees/{id}/account
GET .../supervisors/{id}/account
and the same with the other HTTP methods like PUT, POST and DELETE. This blows up my API pretty much. My REST APIs in general prefer redundancy to reduce complexity, but in cases like this it always feels a bit cumbersome. So I came up with another approach where I work with inheritance to keep to the DRY principle.
In this case there is a base class User, and via inheritance my employee and supervisor models extend from it. Now I only need one endpoint like
GET .../accounts/{id}
and the server decides which object is returned. While this thins out my API, it also increases complexity, and in my API documentation (where I use Spring REST Docs) I have to document two different objects for the same endpoint.
Now I am not sure what the right way to do it is (or at least the better way). When I think about REST, I think in resources. So my employees are a separate resource, as are my supervisors.
Because I have always followed this approach, I think I might have mentally run into a rut and maybe lost objectivity.
It would be great if you could give me any objective advice on how this should be handled.
I built an online service that deals with this too. It's called Wirespec:
https://wirespec.dev
The backend automatically creates the URL for users and their endpoints dynamically with very little code. The code for handling the frontend is written in Kotlin, while the backend for generating APIs for users is written in Node.js. In both cases the amount of code is negligible and self-maintaining, meaning that if the user changes the name of their API, the endpoint automatically updates with the name. Here are some examples:
API: https://wirespec.dev/Wirespec/projects/apis/Stackoverflow/apis/getUserDetails
Endpoint: https://api.wirespec.dev/wirespec/stackoverflow/getuserdetails?id=100
So to answer your question, it really doesn't matter where you place the username in the URL.
Try signing in to Wirespec with your GitHub account and you'll see where your GitHub username appears in the URL.
There is, unfortunately, no right or wrong answer to this one; it solely depends on how you want to design things.
With that being said, you need to distinguish between client and server. A client shouldn't know the nifty details of your API. It is just an arbitrary consumer of your API that is fed all the information it needs in order to make informed choices. I.e. if you want the client to send some data to the server that follows a certain structure, the best advice is to use form-like representations, such as HAL-FORMS, Ion or even HTML. Forms not only teach a client about the respective properties a resource supports, but also about the HTTP method to use, the target URI to send the request to, and the representation format to send the data in, which in the case of HTML is application/x-www-form-urlencoded most of the time.
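As a rough sketch, a form-like representation in HAL-FORMS (application/prs.hal-forms+json) could look like this; the resource and property names are invented:

{
  "_links": {
    "self": { "href": "/accounts/123" }
  },
  "_templates": {
    "default": {
      "title": "Update account",
      "method": "PUT",
      "contentType": "application/json",
      "properties": [
        { "name": "email", "required": true, "prompt": "E-mail address" },
        { "name": "name", "required": false, "prompt": "Display name" }
      ]
    }
  }
}

The template tells the client which method to use, which representation to send and which properties the server expects, much like an HTML form does.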
In regards to receiving data from the server, a client shouldn't attempt to extract knowledge from URIs directly, as they may change over time and thus break clients that rely on such a methodology, but should rely on link relation names. There may be multiple link relation names attached to a single URI. A client that does not know the meaning of one of them should simply ignore it. Here, either one of the standardized link relation names should be used or the extension mechanism defined by Web linking. While an arbitrary client might not make sense of such an "arbitrary string" out of the box, the link relation name may be considered the predicate in a triple, as often used in ontologies, where the link relation name "connects" the current resource with the one the link was annotated for. From a set of URIs and link relation names you might therefore "learn" a semantic graph over all the resources and how they are connected to each other. I.e. you might annotate a URI pointing to a form resource with prefetch to hint to a client that it may load the content of the referenced URI while it is idle, as the likelihood is high that the client will want to load that resource next anyway. The same URI might also be annotated with edit-form to hint to a client that the resource will provide an edit form for sending data to the server. It might also carry a Web linking extension such as https://acme.org/ref/orderForm that allows clients that support such a custom extension to react to such a resource accordingly.
In your accounts example, it is totally fine to return different data for different resources under the same URI path. I.e. resource A, pointing to an employee account, might only contain the properties name, age, position and salary, while resource B, pointing to a supervisor, could also contain a list of subordinates or the like. To a generic HTTP client these are two totally different resources, even though they use a URI structure like /accounts/{id}. Resources in a REST architecture are untyped, meaning they don't have a type out of the box per se. Think of some arbitrary Web page you access through your browser. Your browser is not aware of whether the Web page it renders contains details about a specific car or the most recent local news. HTML is designed to express a multitude of different data in the same way. Different media types, though, may provide more concrete hints about the data exchanged. I.e. text/vcard, application/vcard+xml or application/vcard+json may all represent data describing an entity (a human person, a juristic entity, an animal, ...), while application/mathml+xml might be used to express certain mathematical formulas, and so on. The more general a media type is, the more widespread usage it may find. With more narrow media types, however, you can provide more specific support. With content-type negotiation you also have a tool at hand by which a client can express its capabilities to servers, and if the server/API is smart enough it can respond with a representation the client is able to handle.
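As a sketch, both of these could legitimately live under /accounts/{id} (the payloads are invented):

GET /accounts/7 (an employee)

{ "name": "Jane Doe", "age": 31, "position": "engineer", "salary": 50000 }

GET /accounts/8 (a supervisor)

{ "name": "John Roe", "age": 45, "position": "lead", "subordinates": ["/accounts/7"] }

A generic client just processes whatever representation it negotiated for the resource it asked for; the shared URI template does not imply a shared type.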
This in essence is all that REST is, and if followed correctly it allows the decoupling of clients from specific servers. While this might sound confusing and burdensome to implement at first, these techniques are intended for when you strive for a long-lasting environment that is still able to operate in decades to come. Evolution is inherently integrated into this philosophy and supported by the decoupled design. If you don't need all of that, REST might not be what you actually want to do. But if you still want something like REST, you should for sure design the interactions between client and server as if you were interacting with a typical Web server. After all, REST is just a generalization of the concepts that have been used on the Web quite successfully for the past two decades.

REST API Concepts

I'm trying to grasp the essence of REST APIs and I have some questions that I'd be happy if someone could clarify:
First
From Wikipedia:
it is a network of Web resources (a virtual state-machine) where the user progresses through the application by selecting resource identifiers such as http://www.example.com/articles/21 and resource operations such as GET or POST (application state transitions), resulting in the next resource's representation (the next application state) being transferred to the end user for their use.
What is the meaning of "application state"? As far as I understand, an application that exposes a REST API is stateless, so it doesn't have a "state" by definition? It just replies to client requests, which contain all the information needed by the server to respond to those requests. In other words, it doesn't hold any context. Am I correct?
Second
One of the 6 constraints is client-server architecture. Why is that a constraint? Isn't it correct that every API is in a client-server architecture? After all, API means Application Programming Interface.
Third
from here:
Using generic media types such as JSON is fundamentally not RESTful because REST requires the messages to be self-descriptive. Self-descriptiveness just means that the semantics of the data should travel with the data itself.
What is the original meaning behind the self-descriptiveness constraint, and does using a generic media-type violate this constraint?
Fourth
I've seen in many places that REST is not HTTP and doesn't have to use HTTP as its underlying protocol; it's just natural to use HTTP because of the set of methods it has (GET, POST, PUT, DELETE). Can someone explain why it is natural for REST, and give an example of another way to use REST other than HTTP?
As far as I understand, an application that exposes a REST API is stateless, so it doesn't have a "state" by definition?
No, the communication itself should be stateless. REST is an abbreviation for REpresentational State Transfer, so the term state is even included in the name itself.
It is probably easier to think here in terms of traditional Web pages. If you have a server that keeps client state in it, i.e. manages multiple client sessions, what you end up with is a scaling issue sooner or later. You can't add a further server the client can connect to to retrieve the same information, as the session is tied to the server it was communicating with before. One might try to share the server state through a remote bus (i.e. Redis queues or the like), but this leads to plenty of other challenges that are not easily solvable.
In the words of Fielding:
REST is software design on the scale of decades: every detail is intended to promote software longevity and independent evolution. Many of the constraints are directly opposed to short-term efficiency. Unfortunately, people are fairly good at short-term design, and usually awful at long-term design. Most don’t think they need to design past the current release. (Source)
In my view, statelessness is less a constraint on independent evolution than on system scalability, though. If you just take client-server decoupling into account, based on content-type negotiation, usage of HATEOAS and so on, statelessness is not really a blocker here, though it takes away a lot of background complexity if you avoid sharing client state, i.e. its current session data, across your server landscape.
One of the 6 constraints is client-server architecture. Why is that a constraint? Isn't it correct that every API is in a client-server architecture? After all, API means Application Programming Interface.
What are the counterparts to client-server architectures? Applications that don't have to deal with other applications. If an application does not have to communicate with other applications, you don't need to be that careful in your design to make it adaptable to change or to avoid coupling between its components, as it is always treated as one thing. As quoted above, REST is software design on the scale of decades. As such, the services you put online today should still work in years to come and should, in essence, have the freedom to evolve in future.
Interoperability is one of the core issues in client-server architectures. If two participants do not talk the same language or have a different understanding of the domain, they will have a hard time communicating with each other. Just put a Chinese person and a Frenchman in the same room and watch them try to solve a particular issue. Unless they understand a minimal common language, i.e. English, communication will be the main problem in solving that issue.
What is the original meaning behind the self-descriptiveness constraint, and does using a generic media-type violate this constraint?
I'll start by quoting this statement from an actually good blog post:
A self-descriptive message is one that contains all the information that the recipient needs to understand it. There should not be additional information in a separate documentation or in another message. (Source)
If you now take a closer look at the JSON spec, it just defines the basic syntax but does not define any semantics. So, in essence, you know that objects start and end with curly braces ({, }), that an object consists of a set of keys and values where the key is a string and the value may be either a string, a number, a boolean, a further object or an array, and so on. But it doesn't tell you anything about the actual structure, which elements are shipped within a document, and so on. XML, for instance, has document type definitions (DTDs) and XML schemas that define which elements and attributes appear in which order, what their admissible values are, and the like. While JSON (Hyper-)Schema attempts to fill this gap, it still doesn't fully define the semantics of the fields, i.e. in which context which elements may appear and so on. JSON itself also lacks support for URLs/URIs; JSON Hyper-Schema now at least tries to add support for that.
If you take a look at HTML, for instance, the spec has already gone through several versions, but it was designed with backward compatibility in mind, and even in version 5 you can use the tags defined in the original version and browsers will be able to handle your Web page more or less correctly. A further part of self-descriptiveness comes through HTML's form support. A server can thereby teach a client not only about the data elements of a resource, i.e. that a field called name expects text input whereas a time field presents a calendar widget for selecting a specific date and time, but it will also tell the client the URI to send the request to, which HTTP method to use, and which media type to send the request in. While this already touches on HATEOAS as well, a client understanding HTML will know what the server wants it to do and therefore does not need to consult any external documentation that describes how a request should look.
HTML is in essence a generic media type. You can use it to depict details of a specific car model, but also to show news and other data. A media type in the end is nothing more than a human-readable definition of how an application (client or server) should process data that is said to be of that format. As such, a generic media type is preferable to specific ones, as it allows and promotes the reuse of that media type across many other domains, and this increases the likelihood of its support across different vendors.
I've seen in many places that REST is not HTTP and doesn't have to use HTTP as its underlying protocol; it's just natural to use HTTP because of the set of methods it has (GET, POST, PUT, DELETE). Can someone explain why it is natural for REST, and give an example of another way to use REST other than HTTP?
As "Uncle Bob" Martin stated architecture is about intent. As already quoted above, REST is all about decoupling clients from servers to allow servers to evolve freely in future and clients to adopt to changes easily. This is, what basically allowed the Web to grow to its todays size. Fielding just took the concepts used successfully on the Web, mostly by us humans, and started questioning why applications do not use our style of interacting with the Web also. Therefore, loosely speaking, REST is just Web surfing for applications.
REST itself is just an architectural style. Like some churches use a gothic style, others a modern one and yet others a baroque one, each style has its unique properties that differentiate it from the others. In software engineering you also have a couple of different styles you can follow, such as monolithic or n-tier architectures, MVC, domain-specific languages (DSLs), SOA, peer-to-peer architectures, cloud computing (serverless, ...) and so on. Each of these has its own characteristics and unique proposition features, benefits and drawbacks. As in traditional architecture, you can mix and match different styles into one approach, though the final result may not be what you were initially aiming for - and remember, each style attempts to tackle at least one major concern.
Fielding was working on the HTTP 1.0 and 1.1 specifications (among others) and analyzed the architecture of the Web in his doctoral thesis. Therefore it is no miracle, in my view, that REST works well on top of HTTP, but as already mentioned, he might have taken a too-close look at HTTP and the Web, as statelessness is, at least in my understanding, less a concern for future evolution than for scalability. While scalability might be a future concern as well, I wouldn't call it a high-priority constraint in that regard, even though Fielding claims that all of the constraints are mandatory to deserve the REST tag.
As such, REST itself does not tie you to HTTP, as it is just an architectural style. It does not forbid deviating from its core ideas, but you might miss out on some of the properties it advocates (besides misusing the term REST, eventually). But as REST goes almost hand in hand with HTTP, it is like the perfect match, so why change it? Sure, you could come up with a new transport protocol in the future and apply the same concepts used to interact with Web pages to that protocol, and you would more or less end up with a REST architecture. Your protocol, however, should at least support URIs, media types, link relations and content-type negotiation. These are the foundation blocks, IMO, that every REST-enabled application needs to support, as they guarantee the exchange of well-defined messages and the ability to act upon them.
As HTTP is just a transport protocol to transfer a document from a source machine to a target, one might question why SMTP, FTP or similar protocols are not used for REST architectures as well. While these protocols also transfer documents from one point to another, they either lack support for media types (S/FTP/S) or do not fully support the uniform interface constraint, i.e. by not supporting HATEOAS fully and the like. Besides that, both require a particular login to create a session, which may or may not be seen as a violation of the statelessness constraint.

Terminology question: API somewhere between SOAP and REST - what is the name for them?

My understanding of SOAP vs REST:
REST = JSON, simple consistent interface, gives you CRUD access to 'entities' (abstractions of things which are not necessarily single DB rows), simpler protocol, no formally enforced 'contract' (e.g. the values an endpoint returns could change, though they shouldn't)
SOAP = XML, more complex interface, gives you access to 'services' (specific operations you can apply to entities, rather than allowing you to CRUD entities directly), formally enforced, pre-stated 'contract' (like a WSDL, where e.g. the return types are predefined and formalized)
Is that a broadly correct assessment?
What about a mixture?
If so, what do I call an API that is a mixture?
For example, suppose we have what at surface level looks like a REST API (it returns JSON, with no WSDL or formalized contract defined), but instead of giving you access to the 'entities' that the system manages (user, product, comment, etc.) it gives you specific access to services and complex operations (/sendUserAnUpdate/1111, /makeCommentTextPurple/3333, /getAllCommentsByUserThisYear/2222) without having full coverage.
The 'services' already exist internally, and the team simply publishes access to them on a request by request basis, through what would otherwise look like a REST API.
Question:
What is the 'mixture' typically referred to as (besides, maybe, a bad API)? Is there a word for it, or a concept I can refer to that'll make most developers understand what I'm referring to, without having to say the entire paragraph I did above?
Is it just "JSON SOAP API?", "A Service-based REST API?" - what would you call it?
Thanks!
If you take a look at all those so-called REST APIs, your observation might seem true, though REST actually is something completely different. It describes an architecture, or a philosophy, whose intent is to decouple clients from servers, allowing the latter to evolve in future without breaking clients. It is quite similar to typical Web page interaction in that a server will teach a client what it needs and only reacts to client-triggered requests. One has to be pretty careful and pedantic when designing REST services, as it is too easy to include a coupling that may affect clients when a change is introduced, especially with all the pragmatism around in (commercial) software engineering. Stefan Tilkov gave a great talk on REST back in 2014 that, alongside those of Jim Webber or Asbjørn Ulsberg, can be used as an introductory lecture on what REST is at its core.
The general premise in REST should always be that the server teaches clients what they need and what the server expects, and offers choices to the client via links. If the server expects to receive data from the client, it will send a form-esque representation to inform the client about the respective fields it supports, and based on the affordance of the respective elements contained in the form a client knows whether to select one or multiple options, enter some free text or enter a date value, and so on. Unfortunately, most of the media-type formats that attempt to mimic HTML's forms are still in draft versions.
If you take a look at HTML forms in particular you might sense what I'm referring to. Each of the elements that may occur inside a form is well defined to avoid ambiguity and improve interoperability. This is de facto the ultimate goal of REST: having one client that is able to interact with a sheer number of other services without having to be adapted to each single API explicitly.
The beauty of REST is that it isn't limited to a single representation form, i.e. JSON; in fact there is an almost infinite number of possible representation formats that could be exchanged in a REST environment. Plain application/json is a terrible media type for REST applications IMO, as it doesn't include any definitions in regards to links and forms and doesn't describe the semantics of the fields that may be shipped in requests and responses. The lack of semantic description usually leads to typed resources, where a recipient expects that data received from, i.e., /api/users is some specific user data, which may differ from host to host. If you skim through IANA's media type registry you will find a couple of media-type formats you could have used to transfer user-related data, and any client supporting these representation formats would be able to interact with such an endpoint without any issues. Fielding himself claimed that
A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state, or in defining extended relation names and/or hypertext-enabled mark-up for existing standard media types. Any effort spent describing what methods to use on what URIs of interest should be entirely defined within the scope of the processing rules for a media type (and, in most cases, already defined by existing media types). (Source)
Through content-type negotiation, client and server negotiate a representation format both support and understand. The question therefore shouldn't be which one to support but how many you want to support. The more media types your API or client is able to exchange payloads in, the more likely it will be to interact with other participants.
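Content-type negotiation itself is plain HTTP; a sketch (the offered media types are only examples):

GET /accounts/7 HTTP/1.1
Accept: application/hal+json;q=1.0, application/json;q=0.8, text/html;q=0.5

HTTP/1.1 200 OK
Content-Type: application/hal+json

If the server supports none of the offered formats, it can respond with 406 Not Acceptable or fall back to a default representation.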
Most of those so-called REST APIs are in reality just RPC services exposed via HTTP that may or may not respect and support certain HTTP methods. HTTP thereby is just a transport layer whose domain is the transfer of files or data over the Web. Plenty of people still believe that you shouldn't put verbs in URIs, when in reality a script or process usually doesn't (and shouldn't) care whether a URI contains a verb or not. The URI itself is just a pointer a client will follow and invoke when it is interested in receiving the payload. We humans are also not that interested in the URI itself with regard to the content it may return after invoking it. The same holds true for arbitrary clients. It is more important what you ship along with that URI. On the Web a link can be annotated with certain text and/or link relation names that set the link's content in relation to the current page. It may hint to a client that certain content may be fetched before the whole response has been parsed, as it is quite likely that the client will also want to know about it. preload, for instance, is such a link-relation name that hints the client at that. If certain domain-specific terms exist, one might use an extension scheme as defined by Web linking, or reuse common knowledge or special microformats.
The whole interaction in a REST environment is similar to playing a text-based computer game or following a certain process flow (i.e. ordering and paying for products) defined by an application domain protocol, which can be designed as a state machine. The client is thereby guided through the whole process. It basically just follows the orders the server gives it, with some choices to break out of the process (i.e. cancelling the order before paying).
SOAP, on the other hand, is, as you've stated, an XML-based RPC protocol reusing a subset of HTTP to exchange requests and responses. The likelihood that, when you change something within your WSDL, plenty of clients have to be adapted and recompiled is quite high. SOAP even defines its own security mechanism instead of reusing TLS, which therefore requires explicit support by the clients. As you have a one-to-one communication model due to the state that may be kept in process, scaling SOAP services isn't that easy. In a REST environment this is just a matter of adding a load balancer in front of the servers and then mirroring the server n times. The load balancer can send the request to any of the servers thanks to the stateless constraint.
What is the 'mixture' typically referred to as (besides, maybe, a bad API)? Is there a word for it, or a concept I can refer to that'll make most developers understand what I'm referring to, without having to say the entire paragraph I did above?
Is it just "JSON SOAP API?", "A Service-based REST API?" - what would you call it?
The general term for an API that communicates on top of HTTP would be Web API or HTTP API, IMO. This article also uses this term. It also lists XML-RPC and JSON-RPC besides SOAP. I do agree with Voice, though, that you'll receive 5 answers when asking 4 people about the right term to use. While it would be convenient to have a term available that everyone agrees upon, the reality shows that people are not that interested in a clear separation. Just look here at SO at the questions tagged with rest. There is nothing wrong with not being "RESTful", though one should avoid the term REST for truly RPC services. Though I think we are already in a situation where the term REST can't be rescued from misuse and marketing purposes.
For something that requires external documentation to use and that ships with its own custom, non-standardized representation format, or that just exposes CRUD for domain objects, I'd add -RPC to it, as this is more or less what it is at its heart. So if the API sends JSON and the representation to expect is documented via Swagger or some other external documentation, JSON-RPC would probably be the most fitting name IMO.
To sum up this post, I hope I could shed some light on what REST truly is and how your observation is skewed by all those pragmatic attempts that unfortunately are RPC through and through. If you change something within their implementation, how many clients will break? In addition, you can't reuse the client you've implemented for API A to interact with API B (of a different company or vendor) out of the box, and therefore have to either adapt your client or create a new one solely for that API. This is true RPC and therefore should be reflected in the name somehow, to hint developers about future expectations. Unfortunately, the battle for naming things properly, especially in regards to REST, seems already lost. There is a fine but tiny group who attempt to spread the true meaning, like Voice, Cassio and some others, though it is like fighting windmills. The best advice here would be to first discuss the naming conventions, and what each participant understands by each term, and then agree on a naming scheme everyone accepts, to avoid future confusion.
My understanding of SOAP vs REST
...
Is that a broadly correct assessment?
No.
REST is an "architectural style", which is to say a coordinated collection of architectural constraints. The World Wide Web is an example of an application built using the REST architectural style.
SOAP is a transport agnostic message protocol specification, based on XML Information Set
If so, what do I call an API that is a mixture?
I don't think you are going to find an authoritative terminology here. Colloquially, you are likely to hear the broad umbrella term "web api" to describe an HTTP API that isn't "RESTful".
The whole space is rather polluted by semantic diffusion.

Is the much hyped REST API just an HTTP method plus HATEOAS links?

I read that HATEOAS links are what separates a REST API from a normal HTTP API. In that case, does REST need a separate name? I wonder what all this hype about REST APIs is about. It seems to be just an HTTP method with one extra rule in the response.
Q) What other differences exist?
I read that HATEOAS links are what separates a REST API from a normal HTTP API.
That's probably a little bit of an understatement. When Leonard Richardson (2008) described the "technology stack" of the web, he listed:
URI
HTTP
HTML
A way of exploring the latter is to consider how HTML, as a media type, differs from a text document with URIs in it. To my mind, the key elements are links and forms -- standardized ways of encoding into the representation the semantics of a URI (this is a link to another page, this is an embedded image, this is an embedded script, this is a form...).
Mike Amundsen, 2010:
Hypermedia Types are MIME media types that contain native hyper-linking semantics that induce application flow. For example, HTML is a hypermedia type; XML is not.
Atom Syndication/Atom Publishing is a good demonstration for defining a REST API.
Can you throw some light on what REST actually means and how it differs from normal http?
Have you noticed that websites don't normally use plain text for the representations of the information that they share? It's something of a dead end -- raw text doesn't have any hypermedia semantics built into it, so a generic client can't do anything more interesting than search for sequences that might be URIs.
On the other hand, with HTML we have link semantics: we can include references to images, to style sheets, to scripts, as well as linking to other documents. We can describe forms, that allow the creation of parameterized HTTP requests.
Additionally, that means that if some relation shouldn't be used by the client, the server can easily change the representation to remove the link.
Furthermore, the use of the hypermedia representation allows the server to use a richer description of which request message should be sent by the client.
Consider, for example, Google. They can use the form to control whether search requests use GET or POST. They can remove the "I'm Feeling Lucky" option, or arrange that it redirects to the main experience. They can embed additional information into the fields of the form to track what is going on. They can choose which URI targets are used in the search results, directing the client to send Google another request that gets redirected to the actual target, with additional metadata embedded in the query parameters, all without requiring any special coordination with the client being used.
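To make that concrete, here is a minimal sketch in Python (the page, host, URLs and field names are all invented for illustration) of what it means for the form to teach the client how to build the request: the method, target URI and parameter names come from the server's representation, and the client only supplies the value the user typed.

# A generic client reads the form, then builds the request from what the form says.
from html.parser import HTMLParser
from urllib.parse import urlencode, urljoin

PAGE = """
<form action="/search" method="get">
  <input type="text" name="q">
  <input type="hidden" name="source" value="homepage">
</form>
"""

class FormReader(HTMLParser):
    def __init__(self):
        super().__init__()
        self.action = None
        self.method = "get"          # HTML's default when no method attribute is given
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form":
            self.action = attrs.get("action")
            self.method = attrs.get("method", "get").lower()
        elif tag == "input" and attrs.get("name"):
            self.fields[attrs["name"]] = attrs.get("value", "")

reader = FormReader()
reader.feed(PAGE)
reader.fields["q"] = "rest vs rpc"   # the only thing supplied by the user

url = urljoin("https://example.org/", reader.action)
if reader.method == "get":
    url = url + "?" + urlencode(reader.fields)
print(reader.method.upper(), url)
# GET https://example.org/search?q=rest+vs+rpc&source=homepage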
For further discussion, see Leonard Richardson's slide deck from QCon 2008, or Phil Sturgeon's REST and Hypermedia in 2019.
Doesn't the client need to read the documentation if the HATEOAS link points to a POST API? HATEOAS links will only guide you to an API but will not throw any light on how its request body needs to be filled... A GET request won't have a body, so that's not much of a problem, but what about a POST API?
Sort of - here's Fielding writing in 2008:
REST doesn’t eliminate the need for a clue. What REST does is concentrate that need for prior knowledge into readily standardizable forms.
On the web, the common use case is agents assisting human beings; the humans can resolve certain ambiguities on their own. The result is a separation of responsibilities; the humans decode the domain specific semantics of the messages, the clients determine the right way to describe an interaction as an HTTP request.
If we want to easily replace the human with a machine, then we'll need to invest extra design capital in a message schema that expresses the domain specific semantics as clearly as we express the plumbing.
To me, REST is an ideology you want to aim for if you have a system that should last for years to come and that needs the freedom to evolve without breaking parts you can't control. This is very similar to the Web, where a server can't control browsers directly, yet browsers are able to cope with any changes the server makes to the representations of its Web pages.
I read that HATEOAS links are what separates a REST API from a normal HTTP API. In that case, does REST need a separate name?
REST basically does what its name implies: it transfers the state of a resource representation. If anything, we should come up with a new name for those "REST" APIs that are truly RPC under the hood, to avoid the confusion.
If you read through the Richardson Maturity Model (RMM) you might get the impression that links, or hypermedia controls as Fowler named them, which are mandatory at Level 3, are the feature that separates REST from normal HTTP interaction. However, Level 3 alone is not enough to reach the ultimate goal of decoupling.
Most so-called "REST APIs" put a lot of design effort into pretty URIs that try to express the meaning of the target resource to the client developer. They come with fancy documentation generated by tooling such as Swagger, which the client developer has to follow strictly or they won't be able to interact with the API. Such APIs are RPC, though. You can't point a client written for API A at API B (of a different vendor) and expect it to work out of the box, as the two may use completely different endpoints and return different kinds of data for almost identically named resources. A client that attempts slightly more dynamic behavior might infer a type from the URI and expect a URI such as .../api/users to return users; if the API suddenly changes its URI structure to something like .../api/entities, what happens? Most of these clients break, a clear hint that the whole interaction model doesn't follow the one outlined by the REST architecture.
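A hedged illustration of such a "typed" client (the host, endpoint and field names are made up): everything about the interaction is baked into the client, so renaming .../api/users, moving the endpoint or dropping a field breaks it.

import json
from urllib.request import urlopen

def load_users():
    # hard-coded URI structure...
    with urlopen("https://api.example.org/api/users") as resp:
        payload = json.load(resp)
    # ...and hard-coded assumptions about the payload shape: this breaks if the
    # server renames "users", "id" or "email", or restructures the document
    return [(u["id"], u["email"]) for u in payload["users"]]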
REST puts the emphasis on link relation names, which give clients a stable way of learning a URI's intent while allowing the URI itself to change over time. A URI is attached to a link relation name and represents an affordance, i.e. something whose use is evident from it. The affordance of a button, for example, is that you can press it and something will happen as a result; the affordance of a light switch is that a light goes on or off depending on the switch's toggled state.
Link relation names express such an affordance in text form, much like a trash-bin or pencil icon next to a table row on a Web page, where you can figure out that clicking one will delete the entry while the other lets you edit it. Such link relation names should either be standardized, use widely accepted ontologies, or use custom link-relation extensions as outlined by RFC 8288 (Web Linking).
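As a contrast to the snippet above, a client built around link relation names treats the URI as opaque and only asks whether the representation offers the relation it is interested in. The HAL-like payload and the custom relation names below are purely illustrative:

representation = {
    "_links": {
        "self": {"href": "/api/users/1234"},
        "https://rel.example.org/edit": {"href": "/api/users/1234"},
        "https://rel.example.org/delete": {"href": "/api/users/1234"},
    },
    "email": "user@example.org",
}

def follow(representation, rel):
    # look the action up by its relation name; the href itself stays opaque,
    # so the server may change it at any time without breaking the client
    link = representation["_links"].get(rel)
    return link["href"] if link else None

print(follow(representation, "https://rel.example.org/delete"))  # /api/users/1234
print(follow(representation, "archive"))  # None -> the server does not offer that action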
It is important to note, however, that a URI is just a URI and should not convey semantic meaning to a client. This does not mean that a URI can't have semantic meaning to the server or API, only that a client should not attempt to deduce one from the URI itself. That is what the link relation name is for: it provides the infrequently changing part of the relation. An endpoint might be referenced by multiple different URIs, some of which might use different query parameters for filtering. According to Fielding, each of these URIs represents a different resource:
The definition of resource in REST is based on a simple premise: identifiers should change as infrequently as possible. Because the Web uses embedded identifiers rather than link servers, authors need an identifier that closely matches the semantics they intend by a hypermedia reference, allowing the reference to remain static even though the result of accessing that reference may change over time. REST accomplishes this by defining a resource to be the semantics of what the author intends to identify, rather than the value corresponding to those semantics at the time the reference is created. It is then left to the author to ensure that the identifier chosen for a reference does indeed identify the intended semantics. (Source 6.2.1)
As URIs are used for caching results, they essentially act as the keys under which response payloads are cached. It then becomes obvious that adding further query parameters to a URI used in a GET request bypasses caches, since no entry exists for that key yet, and that you get the result of a different resource, even though its representation may be identical to that of the URI without the additional parameter.
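A tiny sketch of that cache behaviour (the URIs and the in-memory "shared cache" are invented for illustration): the full request URI is the cache key, so a URI with an extra query parameter is a miss even if the origin would answer with an identical body.

cache = {}
origin_calls = []

def origin(uri):
    # stand-in for the origin server; always returns the same body here
    origin_calls.append(uri)
    return '[{"id": 1}]'

def cached_get(uri):
    if uri not in cache:          # different URI => different cache key => miss
        cache[uri] = origin(uri)
    return cache[uri]

cached_get("/users?role=admin")
cached_get("/users?role=admin&page=1")   # bypasses the entry stored above
print(origin_calls)   # ['/users?role=admin', '/users?role=admin&page=1']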
I wonder what all this hype about REST APIs is about. It seems to be just an HTTP method with one extra rule in the response.
In short, that is what those self-proclaimed or marketing-labelled pseudo "REST APIs" convey, and it seems to be what many people understand by the term.
The hype for "REST" arose from the inconvenience developers faced when interacting with other interop solutions such as CORBA, RMI or SOAP, where partly commercial third-party libraries and frameworks often had to be used. Most languages support HTTP both as client and server out of the box, removing the need for external libraries or frameworks per se. In addition, RPC-based solutions usually require stub or skeleton classes to be generated first, typically by the build pipeline. Whenever the IDL was updated, such as a WSDL linking or including XSD schemata, the whole stub generation had to be redone and the code had to be combed through to spot whether a breaking change had been introduced. Usually no obvious changelog was available, which made changing or updating such systems a pain in the ...
In those pseudo "REST" APIs, plain JSON is now pretty much the de facto standard, which avoids the stub-generation step and the hassle of analyzing your own code to see whether the forced changes had a negative impact on the system. Most of those APIs use some sort of URI-based versioning, allowing a developer to see from the URI whether something breaking was introduced, mimicking a kind of semantic versioning.
The problem with that solution, though, is that it is not the response representation format that is versioned but the whole API, which leads to common issues when a change should only affect part of the API yet forces the whole API's version to be bumped. In addition, two URIs such as .../api/v1/users/1234 and .../api/v2/users/1234 may represent the same user and thus the same conceptual thing, yet by nature they are different resources because the URIs differ.
Q) What other differences exist?
While REST is just an architectural model that can't force you to implement it strictly, you simply will not benefit from its properties if you ignore some of its constraints. As mentioned above, HATEOAS support alone is therefore not enough to really decouple clients from an API and thereby benefit from the REST architecture.
The RMM unfortunately does not talk about media types at all. A media type basically specifies how a received payload should be processed and defines the semantics and constraints of each of the elements used within that payload. For example, if you look at text/html registered in IANA's media-type registry, you can see that it points to the published specification, which always references the most recent version of HTML. HTML is designed to stay backwards compatible, so no special versioning is required.
HTML provides, IMO, two important things:
semi-structured content
form support
The former allows data to be structured, giving certain segments or elements the ability to express the different semantics defined in the media type. For example, a browser handles an image differently from a div or an article element. A crawler might favor links and content contained in an article element and ignore script and image elements completely. Even the mere presence or absence of certain elements may lead to processing differences.
Support for forms is actually very important in REST, as this is the feature that allows a server to teach a client what it needs as input. Most so-called "REST APIs" just force a developer to go through their documentation, which might be outdated, incorrect or incomplete, and to send data to a predefined endpoint according to that documentation. With outdated or incomplete documentation, how should a client ever be able to send correct data to the server? Moreover, the server may never be able to change, because the documentation has effectively become the source of truth and the API has to align with it.
Unfortunately, form support is still somewhat in its infancy. Besides HTML, which provides <form>...</form>, there are a couple of JSON-based attempts such as hal-forms, halo-json (halform), Ion or hydra. None of these has wide library or framework support yet, as some of these form representations still haven't finalized their specifications on how to support forms effectively.
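As a rough sketch of where those attempts are heading, here is a form-style JSON document loosely modelled on HAL-Forms (the exact property names may not match the spec, and the endpoint is invented), together with a generic client that builds the request from the form instead of from out-of-band documentation:

import json

form = {
    "_templates": {
        "default": {
            "method": "POST",
            "target": "/api/users",
            "properties": [
                {"name": "email", "required": True},
                {"name": "name",  "required": False},
            ],
        }
    }
}

def build_request(form, values):
    # the server's form dictates method, target and expected properties
    template = form["_templates"]["default"]
    missing = [p["name"] for p in template["properties"]
               if p.get("required") and p["name"] not in values]
    if missing:
        raise ValueError(f"server requires: {missing}")
    body = {p["name"]: values[p["name"]]
            for p in template["properties"] if p["name"] in values}
    return template["method"], template["target"], json.dumps(body)

print(build_request(form, {"email": "user@example.org"}))
# ('POST', '/api/users', '{"email": "user@example.org"}')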
Other media types, unfortunately, may use neither semi-structured content nor forms that teach a client about the server's needs, though they are still valuable to REST in general. First, through Web Linking, link support can be added to media types that do not support it natively. Second, the data itself does not need to be text-based at all for an application to make further use of it: pictures and videos, for instance, are usually encoded, byte-based formats, yet a client can still present them to users.
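To illustrate the first point, a Link response header as defined by RFC 8288 can carry relations even when the body is, say, image/png. The header value below is invented, and the parser is deliberately naive (it ignores quoted commas and any parameters other than rel):

link_header = '</photos/42/owner>; rel="author", </photos/43>; rel="next"'

def parse_link_header(value):
    # split the header into its individual links and index them by relation name
    links = {}
    for part in value.split(","):
        target, _, params = part.strip().partition(";")
        rel = params.split('rel="')[1].split('"')[0]
        links[rel] = target.strip().strip("<>")
    return links

print(parse_link_header(link_header))
# {'author': '/photos/42/owner', 'next': '/photos/43'}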
The main point about media types, though, as Fielding already pointed out in one of his cited blog posts, is that representations shouldn't be confused with types. Fielding stated that:
A REST API should never have “typed” resources that are significant to the client. Specification authors may use resource types for describing server implementation behind the interface, but those types must be irrelevant and invisible to the client. The only types that are significant to a client are the current representation’s media type and standardized relation names.
Jørn Wildt explained in an excellent blog post what a "typed" resource is and why a REST architecture shouldn't use such types. To sum the post up: a client expecting a ../api/users endpoint to return a pre-assumed data payload might break if the server adds unexpected fields, renames existing fields or leaves out expected ones. This coupling can be avoided with plain content-type negotiation, where the client tells the server which representations it supports and the server chooses the one that best fits the target resource. If the server can't serve a representation the client supports, it should respond with a failure (or a default representation) that the client may log to inform the user.
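A minimal sketch of that negotiation (the URL, media types and error handling are assumptions, not taken from the blog post): the client states what it can process via the Accept header and then dispatches on the Content-Type the server actually selected, instead of assuming a fixed "type" for the endpoint.

import requests

def fetch_user(uri):
    # tell the server which representations we can process
    resp = requests.get(uri, headers={
        "Accept": "application/hal+json, application/json;q=0.8",
    })
    if resp.status_code == 406:          # server offers nothing we understand
        raise RuntimeError("no acceptable representation offered")

    media_type = resp.headers.get("Content-Type", "").split(";")[0]
    if media_type == "application/hal+json":
        return resp.json()               # process hypermedia controls via _links etc.
    if media_type == "application/json":
        return resp.json()               # plain JSON fallback
    raise RuntimeError(f"unexpected media type: {media_type}")

# fetch_user("https://api.example.org/users/1234")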
This, in essence, is exactly what the name REST stands for: the transfer of a resource's state representation, where the representation may differ depending on the format defined by the selected media type. While HATEOAS may be the most obvious difference between REST and a non-REST HTTP solution, it is certainly not the only factor that makes up REST. I hope I could shed some light on the intended decoupling, on how a server should teach clients what it expects through forms, and on how the affordance of a URI is captured by its link relation name. All these small aspects in sum make up REST, and unfortunately you will only benefit from it if you respect all of its constraints, not just the ones that are easy to achieve or that you are in the mood to implement.