I'm trying to grasp the essence of REST APIs and I have some questions that I'd be happy if someone could clarify:
First
From Wikipedia:
it is a network of Web resources (a virtual state-machine) where the user progresses through the application by selecting resource identifiers such as http://www.example.com/articles/21 and resource operations such as GET or POST (application state transitions), resulting in the next resource's representation (the next application state) being transferred to the end user for their use.
What is the meaning of "application state"? As far as I understand, an application that exposes a REST API is stateless, so it doesn't have a "state" by definition? It just replies to client requests, which contain all the information needed by the server to respond to those requests. In other words, it doesn't hold any context. Am I correct?
Second
One of the 6 constraints is client-server architecture. Why is that a constraint? Isn't it correct that every API is in a client-server architecture? After all, API means Application Programming Interface.
Third
From here:
Using generic media types such as JSON is fundamentally not RESTful because REST requires the messages to be self-descriptive. Self-descriptiveness just means that the semantics of the data should travel with the data itself.
What is the original meaning behind the self-descriptiveness constraint, and does using a generic media-type violate this constraint?
Fourth
I've seen in many places that REST is not HTTP and doesn't have to use HTTP as its underlying protocol; it's just natural to use HTTP because of the set of methods it has (GET, POST, PUT, DELETE). Can someone explain why it is natural for REST and give an example of a way to use REST other than HTTP?
As far as I understand, an application that exposes a REST API is stateless, so it doesn't have a "state" by definition?
No, the communication itself should be stateless. REST is an abbreviation for REpresentational State Transfer, so the term state is even included in the name itself.
It is probably easier to think here in terms of traditional Web pages. If you have a server that keeps client state, i.e. manages multiple client sessions, what you end up with is a scaling issue sooner or later. You can't add a further server for clients to connect to and retrieve the same information, as each session is tied to the server the client was communicating with before. One might try to share that server state through a remote bus (e.g. Redis queues or the like), but this leads to plenty of other challenges that are not easily solvable.
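To make that concrete, here is a minimal sketch of a stateless request (URI taken from the Wikipedia example above, token value hypothetical). Everything the server needs to process the request travels with the request itself, typically in headers, so any replica behind a load balancer can answer it:

GET /articles/21 HTTP/1.1
Host: www.example.com
Authorization: Bearer <token encoding the client's identity>
Accept: text/html

Nothing about the client is kept in server memory between requests, so the session-sharing problem described above simply doesn't arise.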
In the words of Fielding:
REST is software design on the scale of decades: every detail is intended to promote software longevity and independent evolution. Many of the constraints are directly opposed to short-term efficiency. Unfortunately, people are fairly good at short-term design, and usually awful at long-term design. Most don’t think they need to design past the current release. (Source)
In my sense statelessness is less a constraint on independent evolution than it is on system scalability, though. If you just take client-server decoupling into account, based on content-type negotiation, usage of HATEOAS and so on, statelessness is not really a blocker here, though it takes away a lot of background complexity if you avoid sharing client state, i.e. its current session data, across your server landscape.
One of the 6 constraints is client-server architecture. Why is that a constraint? Isn't it correct that every API is in a client-server architecture? After all, API means Application Programming Interface.
What are the counterparts to client-server architectures? Applications that don't have to deal with other applications. If an application does not have to communicate with other applications, you don't need to be as careful in your design about adapting to changes or avoiding coupling between its components, as it is always treated as one thing. As quoted above, REST is software design on the scale of decades. As such, the same services you put online should still work in years to come and in essence should have the freedom to evolve in the future.
Interoperability is one of the core issues in client-server architectures. If two participants do not speak the same language or have a different understanding of the domain, they will have a hard time communicating with each other. Just put a Chinese speaker and a French speaker in the same room and watch them try to solve a particular issue. Unless they understand a minimal common language, e.g. English, communication will be the main problem in solving that issue.
What is the original meaning behind the self-descriptiveness constraint, and does using a generic media-type violate this constraint?
I start by quoting this statement from an actually good blog post:
A self-descriptive message is one that contains all the information that the recipient needs to understand it. There should not be additional information in a separate documentation or in another message. (Source)
If you now take a closer look at the JSON spec, it just defines the basic syntax but does not define any semantics. So, in essence, you know that objects start and end with curly braces ({, }), that an object consists of a set of keys and values where the key is a string and the value may be a string, a number, a boolean, a further object or an array, and so on. But it doesn't tell you anything about the actual structure, which elements are shipped within a document and so on. XML, for instance, has document type definitions (DTDs) and XML schemas that define which elements and attributes appear in which order, what their admissible values are, and the like. While JSON (Hyper-)Schema attempts to fill this gap, it still doesn't fully define the semantics of the fields, e.g. in which context which elements may appear and which not. JSON itself also lacks support for URLs/URIs; JSON Hyper-Schema at least tries to add support for them.
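As a small, purely hypothetical illustration, the JSON spec tells a client how to parse the following document, but nothing about what its fields mean:

{ "id": 21, "state": 2, "due": "2021-05-01" }

Whether state: 2 means "published" or "archived", or whether due is a deadline or just a label, has to come from out-of-band documentation - which is exactly the kind of knowledge a self-descriptive message is supposed to carry with itself.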
If you take a look at HTML, for example: the spec has gone through several versions by now, but it was designed with backward compatibility in mind, and even in version 5 you can use the tags defined in the original version and browsers will be able to handle your Web page more or less correctly. A further part of self-descriptiveness comes through HTML's form support. A server can thereby teach a client not only about the data elements of a resource, e.g. a name field expecting text input whereas a time field presents a calendar widget to select a specific date and time, but it will also tell the client the URI to send the request to, which HTTP operation to use and which media type to send the request in. While this also tackles HATEOAS, a client understanding HTML will know what the server wants it to do and therefore does not need to consult any external documentation that describes how a request should look.
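A minimal sketch of such a form (URI and field names are hypothetical) shows how much a server can teach a client in-band:

<form action="/articles/21/comments" method="post"
      enctype="application/x-www-form-urlencoded">
  <input type="text" name="author">
  <input type="datetime-local" name="publishAt">
  <button type="submit">Add comment</button>
</form>

The target URI, the HTTP method, the request media type and the expected fields, including their input affordances, all travel with the representation itself; no external documentation is required.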
HTML is in essence a generic media type. You can use it to depict details of a specific car model but also to show news and other data. A media type in the end is nothing more than a human-readable definition of how an application (client or server) should process data that is said to be of that format. As such, a generic media type is preferable to specific ones, as it allows and promotes the reuse of that media type for many other domains and thus increases the likelihood of its support across different vendors.
I've seen in many places that REST is not HTTP and doesn't have to use HTTP as its underlying protocol; it's just natural to use HTTP because of the set of methods it has (GET, POST, PUT, DELETE). Can someone explain why it is natural for REST and give an example of a way to use REST other than HTTP?
As "Uncle Bob" Martin stated architecture is about intent. As already quoted above, REST is all about decoupling clients from servers to allow servers to evolve freely in future and clients to adopt to changes easily. This is, what basically allowed the Web to grow to its todays size. Fielding just took the concepts used successfully on the Web, mostly by us humans, and started questioning why applications do not use our style of interacting with the Web also. Therefore, loosely speaking, REST is just Web surfing for applications.
REST itself is just an architectural style. Like some churches use a Gothic style, others a modern one and yet others a Baroque one, each style has its unique properties that differentiate it from the others. In software engineering you also have a couple of different styles you can follow, such as monolithic or n-tier architectures, MVC architecture, domain-specific languages (DSLs), SOA, peer-to-peer architectures, cloud computing (serverless, ...) and so on. Each of these has its own characteristics, unique selling points, benefits and drawbacks. As in traditional architecture, you can mix and match different styles into one approach, though the final result may not be what you were initially aiming for; remember, each style attempts to tackle at least one major concern.
Fielding was working on the HTTP 1.0 and 1.1 specifications (among others) and analyzed the architecture of the Web in his doctoral thesis. Therefore it is no surprise, in my view, that REST works well on top of HTTP, but as already mentioned, he might have taken too close a look at HTTP and the Web, as statelessness is, at least in my understanding, less a concern for future evolution than for scalability. While scalability might be a future concern as well, I wouldn't call it a high-priority constraint in that regard, even though Fielding claims that all of the constraints are mandatory to deserve the REST tag.
As such, REST itself does not tie you to HTTP, as it is just an architectural style. It does not forbid deviating from its core ideas, but you might miss out on some of the properties it advocates (besides misusing the term REST, eventually). But as REST goes almost hand in hand with HTTP, it is like the perfect match, so why change it? Sure, you could come up with a new transport protocol in the future, apply the same concepts used to interact with Web pages to that protocol, and you would more or less end up with a REST architecture. Your protocol, however, should at least support URIs, media types, link relations and content-type negotiation. These are the foundation blocks IMO that every REST-enabled application needs to support, as they guarantee the exchange of well-defined messages and the ability to act upon them.
As HTTP is just a transport protocol to transfer a document from a source machine to a target, one might question why SMTP, FTP or similar protocols are not used for REST architectures as well. While these protocols also transfer documents from one point to another, they either lack support for media types (S/FTP/S) or do not fully support the uniform interface constraint, e.g. by not supporting HATEOAS fully and the like. Besides that, both require a particular login to create a session, which may or may not be seen as a violation of the statelessness constraint.
Related
Let's imagine a web service X that has a single purpose: to help integrate two existing services (A and B) that have different domain models. Some sort of adapter pattern.
There are cases when A wants to call B, and cases when B wants to call A.
How should endpoints of X be named to make clear for which direction each endpoint is meant?
For example, let's assume that the service A manages "apples". And the service B wants to get updates on "apples".
The adapter service X would have two endpoints:
PUT /apples - when A wants to push updated "apples" to B
GET /apples - when B wants to read "apples" from A
(without awaiting a push from A)
An endpoint structure like the one above is quite misleading. The endpoints are quite different and use different domain models: the PUT endpoint expects the model of A, and the GET endpoint returns the model of B.
I would appreciate any advice on designing the API in such a case.
I don't like my own variant:
PUT /gateway-for-A/apples
GET /gateway-for-B/apples
In my view, it is fine, but can be improved:
PUT /gateway-for-A/apples
GET /gateway-for-B/apples
Because
forward slashes are conventionally used to show the hierarchy between individual resources and collections: /gateway-for-A/apples
What can be improved:
it is better to use lowercase
remove unnecessary words
So I would stick with the following URIs:
PUT /a/apples
GET /b/apples
Read more here about Restful API naming conventions
First things first: REST has no endpoints, but resources
Next, in terms of HTTP you should use the same URI for updating the state of a resource and for retrieving updates done to it. Caching, which is basically keyed on the effective URI of a resource, will automatically invalidate any stored responses for a URI if a non-safe operation is performed on it and forward the request to the actual server. If you split these concerns onto different URIs, you completely bypass that cache management performed for you under the hood.
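A rough sketch of what that buys you, using the URIs from the question:

GET /apples   -> 200 OK, the response may be stored by caches along the way
PUT /apples   -> caches invalidate their stored responses for /apples
GET /apples   -> forwarded to the origin server, fresh state is returned

With a split such as /gateway-for-A/apples and /gateway-for-B/apples, the PUT would never invalidate what was cached for the GET URI, and readers could keep receiving stale apples.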
Note further that HTTP/0.9, HTTP/1.0 and HTTP/1.1 themselves don't have a "push" option. HTTP is a request-response protocol, and as such, if a client is interested in updates done to a resource, it should poll the respective resource whenever it needs them. If you need the above-mentioned push, you basically have to switch to WebSockets or the like. While HTTP/2 introduced server push, this effectively just populates the client's local cache, saving the client from explicitly requesting the resource and letting it use the previously received and cached one instead.
An endpoint structure like the one above is quite misleading. The endpoints are quite different and use different domain models: the PUT endpoint expects the model of A, and the GET endpoint returns the model of B.
A resource shouldn't map your domain model 1:1. Usually in a REST architecture there can be way more resources than there are entities in your domain model. Just think of form-like resources that explain to a client how to request the creation or update of a resource, or the like.
On the Web, and therefore also in REST architectures, the representation format exchanged should be based on well-defined media types. These media types should define the syntax and semantics of the elements that can be found within an exchanged document of that kind. The elements in particular provide affordances that tell a client what they can be used for: a button wants to be pressed, while a slider can be dragged left or right to change some numeric value, and so on. You never have to consult any external documentation once support for that media type is added to your client and/or server. A rule of thumb in regard to REST is to design the system as if you'd interact with a traditional Web page, and then apply the same concepts you used for interacting with that Web page to the REST application domain.
Client and server should furthermore use content-type negotiation to negotiate which representation format the server should generate for responses so that clients can process them. REST is all about indirections that ultimately allow a server to change its internals without negatively affecting well-behaved clients. Staying interoperable whilst changing is an inherent design decision of REST. If that is not important to you, REST is probably overkill for your needs and you probably should use something more (Web-)RPC-based.
In regard to your actual question: IMO a message queue could be a better fit for your problem than trying to force your design into a REST architecture.
I was hoping that there is a well-known pattern for an adapter service (when two different services are being integrated without knowing each other's formats)
I'd compare that case with communication attempts among humans from different countries. E.g. imagine a Chinese person who speaks Mandarin trying to communicate with a French person. Either the Chinese speaker needs to be able to speak French, the French speaker needs to be able to speak Mandarin, they both use an intermediary language such as English, or they make use of a translator. In terms of trust and cost, the last option might be the most expensive of all of these. As learning languages is a time-consuming, ongoing process, this usually won't happen quickly unless special support is added, by hiring people with those language skills or using external translators.
The beauty of REST is that servers and clients aren't limited to one particular representation format (a.k.a. language). In contrast to traditional RPC services, which limit themselves to one syntax, in REST servers and clients can support a multitude of media types. E.g. your browser knows how to process HTML pages, how to render JPG, PNG, GIF, ... images, how to embed Microsoft Word, Excel, ... documents and so forth. This support was added over the years and allows a browser to render a plethora of documents.
So, one option is to create a translation service that is able to translate one representation into another and then acts as the middleman in the process; the other is to add support for the not-yet-understood media types to the service/client directly. The more media types your clients/servers are able to process, the more likely they will be to interoperate with other peers in the network.
The former approach clearly requires that the middleman service supports at least the two representation formats issued by A and B, but on the other hand it allows you to use services not directly under your control. If at least one of the services is under your control, though, directly adding the not-yet-supported media type could potentially be less work in the end, especially when library support for the media type is already available or can be obtained easily.
In a REST architecture, clients and servers aren't built to know the other one by heart; having to would already be a hint of strong coupling between the two. They shouldn't be aware of the other's "API" beyond the fact that they use HTTP as the transport layer and URIs as their addressing scheme. All other things are negotiated and discovered on the fly. If they don't share the same language capabilities, the server will respond with a 406 Not Acceptable response that informs the client that they don't speak the same languages and thus won't be able to communicate meaningfully.
As mentioned before, REST is all about introducing indirections that aid the decoupling intent, which allows servers to evolve freely in the future without those changes breaking clients, as these will just cope with the change. Therefore, eventual change in the future is an inherent design concept. If at least one participant in a REST architecture doesn't respect these design concepts, it is a potential candidate for reintroducing the problems traditional RPC services had in the past, like breaking clients on a required change, maintaining v2/3/4/.../n of various different APIs, and scaling issues due to the direct coupling of clients and servers.
Not sure why you need to distinguish it in the path and why the domain or subdomain is not enough for it:
A: PUT x.example.com/apples -> X: PUT b.example.com/apples
B: GET x.example.com/apples -> X: GET a.example.com/apples
As for the model, you want to do PUSH-PULL in a system which is designed for REQ-REP. Let's translate the above: pushApples(apples) and pullApples() -> apples. If this is all they do, then PUT and GET are just fine with the /apples URI if you ask me. Maybe /apples/new is somewhat more expressive if you need only the updates, but I would rather use an If-Modified-Since header instead, and maybe push with If-Unmodified-Since.
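For illustration, roughly how those conditional headers would be used (timestamps hypothetical):

B: GET /apples
   If-Modified-Since: Tue, 06 Apr 2021 10:00:00 GMT
   -> 304 Not Modified if nothing changed, otherwise 200 OK with the updated apples

A: PUT /apples
   If-Unmodified-Since: Tue, 06 Apr 2021 10:00:00 GMT
   -> 412 Precondition Failed if the resource changed in the meantime, otherwise the update is applied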
Though I think you should describe what the two services do, not what you do with the apples, which appear to be a collection of database entities rather than a web resource, which is several layers above the database. Currently your URIs describe the communication, not the services. For example, why is X necessary, why don't they call each other directly? If you can answer that question, then you will understand what X does with the apples, how to name its operations and how to design the URIs and methods which describe them.
We are building a set of new REST APIs.
Let's say we have a resource /users with the following fields:
{
  "id": 1,
  "email": "test@user.com"
}
Clients implement this API and can then update this resource by sending a new resource representation to PUT /users/1.
Now let's say we add a new property name to the model like so:
{
  "id": 1,
  "email": "test@user.com",
  "name": "test user"
}
If the models that existing clients use to call our API are not updated, then calls to PUT /users/1 will remove the new name property, since PUT is supposed to replace the resource. I know that the clients could work directly with the raw JSON to ensure they always receive any new properties that are added in the API, but that is a lot of extra work, and under normal circumstances clients are going to create their own model representations of the API resources on their side. This means that any time a new property is added, all clients need to update the code/models on their side to make sure they aren't accidentally removing properties. This creates unneeded coupling between systems.
As a way to solve this problem, we are considering not implementing PUT operations at all and switching updates to PATCH where properties that aren't passed in are simply not changed. That seems technically correct, but might not be in the spirit of REST. I am also slightly concerned about client support for the PATCH verb.
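To illustrate the difference we are weighing up, here is roughly what the two update styles look like on the wire (assuming merge-patch semantics, i.e. application/merge-patch+json, for the PATCH variant):

PUT /users/1
{ "id": 1, "email": "test@user.com" }
-> full replacement: the name property the client never knew about is lost

PATCH /users/1
Content-Type: application/merge-patch+json
{ "email": "test@user.com" }
-> only the listed property changes; name stays untouched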
How are others solving this problem? What is the best practice here?
You are in a situation where you need some form of API versioning. The most appropriate way is probably using a new media-type every time you make a change.
This way you can support older versions and a PUT would be perfectly legal.
If you don't want this and just want to stick with PATCH: PATCH is supported everywhere except ancient browsers. Not something to worry about.
Switching from PUT to PATCH will not fix your problem, IMO. The root cause is that clients already expect the data returned for a representation to follow a certain type. According to Fielding:
A REST API should never have “typed” resources that are significant to the client.
(Source)
Instead of using typed resources, clients should use content-type negotiation to exchange data. Here, media-type formats that are generic enough to gain widespread adoption are for sure beneficial; certain domains may, however, require a more specific representation format.
Think of a car-vendor Web page where you can retrieve the data of your preferred car. You, as a human, can easily identify that the data depicts a typical car. However, the media type you most likely received the data in (HTML) does not state by its syntax or the semantics of its elements that the data describes a car, unless some semantic annotation attributes or elements are present; yet you might still be able to update the data or use it elsewhere.
This is possible because HTML ships with a rich specification of its elements and attributes, such as Web forms that not only describe the supported or expected input parameters but also the URI to send the data to, the representation format to use upon sending (implicitly application/x-www-form-urlencoded, though it may be overwritten by the enctype attribute) and the HTTP method to use, which is fixed to either GET or POST in HTML. Through this, a server is able to teach a client how a request needs to be built. As a consequence, the client does not need to know anything else besides having to understand the HTTP, URI and HTML specifications.
As Web pages are usually filled with all kinds of unrelated stuff, such as ads, styling information or scripts, and the XML(-like) syntax, which is not everyone's favourite, may increase the size of the actual payload slightly, most so-called "REST" APIs want to exchange JSON-based documents instead. While plain JSON is not an ideal representation format, as it does not ship with link support at all, it is very popular. Certain additions such as JSON Hyper-Schema (application/schema+json hyper-schema) or JSON Hypertext Application Language (HAL) (application/hal+json) add support for links and link relations. These can be used to render data received from the server as-is. However, if you want a response to automatically drive your application state (i.e. to dynamically draw the GUI with the processed data), a more specific representation format is needed that can be parsed by your client, which then acts accordingly as it understands what the server wants it to do with it (= affordance). If you want to instruct a client on how to build a request, other media types such as HAL-FORMS or Ion need to be supported. Certain media types furthermore allow you to use a concept called profiles, which let you annotate a resource with a semantic type. HAL, for instance, supports something like that: the Content-Type header may then contain a value such as application/hal+json;profile=http://schema.org/Car, which hints to the media-type processor that the payload follows the definition of the given profile, so it may apply further validity checks.
As the representation format should be generic enough to gain widespread usage, and URIs themselves shouldn't hint to a client what kind of data to expect, another mechanism needs to be used. Link relation names are basically annotations for URIs that tell a client about the purpose of a certain link. A pageable collection might return links annotated with first, prev, next and last, whose purpose is pretty obvious. Other links might be annotated with prefetch, which hints to a client that the resource can be loaded right after loading of the current resource has finished, as it is very likely that the client will retrieve it next. Such link relation names, however, should either be standardized (defined in a proposal or RFC and registered with IANA) or follow the extension scheme proposed by Web Linking (e.g. as used by Dublin Core). A client that just uses the URI attached to a link relation name will still work in case the server changes its URI scheme, unlike one that attempts to parse parameters out of the URI itself.
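A rough sketch of what such a response could look like with HAL (URIs and fields are hypothetical; the profile value is the one used above):

Content-Type: application/hal+json;profile=http://schema.org/Car

{
  "_links": {
    "self": { "href": "/api/entities/4711" },
    "prev": { "href": "/api/entities?page=1" },
    "next": { "href": "/api/entities?page=3" }
  },
  "name": "Some car",
  "color": "red"
}

A client that follows the URI found under the next or prev relation name never has to interpret the URI itself, so the server stays free to restructure its URIs at any time.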
In regard to de/coupling in a distributed system: a certain amount of coupling has to exist, otherwise parties won't be able to communicate at all. The point here is that the coupling should be based on well-defined and standardized formats that plenty of clients may support, instead of exchanging specific representation formats only a very limited number of clients support (in the worst case only your own client). Instead of coupling directly to the API and using an undefined JSON-based syntax (maybe with external documentation of the semantics of the respective fields), the coupling should now occur on the media types parties can use to exchange the format. Here, the question to ask is not which media type to support but how many you want to support. The more media types your client or server supports, the more likely it is to interact with other peers in the distributed system. In the grand scheme of things, you want a server to be able to serve a plethora of clients, while a single client should (in the best case) be able to interact with every server without the need for constant adaptations.
So, if you really want to decouple clients from servers, you should take a closer look at how the Web actually works and try to mimic its interaction model onto your application layer. As "Uncle Bob" Robert C. Martin mentioned
An architecture is about intent! (Source)
and the intention behind the REST architecture is the decoupling of clients from servers/services. As such, supporting multiple media types (or defining your own one that is generic enough to reach widespread adoption), looking up URIs only via their accompanying link relation names, and relying on content-type negotiation as well as only on the provided data may help you achieve the degree of decoupling you are looking for.
All nice and well in theory, but so far every REST API I have encountered in my career had predefined contracts that changed over time.
The problem here is that almost all of those so-called "REST APIs" are RPC services at heart, which should not be termed "REST" to start with - this is, though, a community issue. Usually such APIs ship with external documentation (e.g. Swagger) that just reintroduces the same problems classical RPC solutions, such as CORBA, RMI or SOAP, suffer from. The documentation may be seen as the IDL in that process, without the strict need for skeleton classes, though most "frameworks" use some kind of typed data classes that will either ignore a recently introduced field (in the best case) or totally blow up on invocation.
One of the problems REST suffers from is that most people haven't read Fielding's thesis and therefore don't see the big picture REST tries to establish, yet claim to know what REST is, mix things up and call their services RESTful, which leads to a situation where REST != REST. The ones pointing out what a REST architecture is and how one might achieve it are called out as dreamers and unworldly, while the ones proclaiming the wrong term (RPC over HTTP = REST) continue to do so, adding to the confusion, especially for those just learning the whole matter.
I admit that developing a true REST architecture is really, really hard, as it is just too easy to introduce some form of coupling. Hence, a very careful design is needed, which takes time and also costs money - money plenty of companies can't or don't want to spend, especially in a domain where new technologies evolve on a regular basis and the ones responsible for developing such solutions often leave the company before the whole process has finished.
Just saying it shouldn’t be ‘typed’ is not really a viable solution
Well, how often did you have to change your browser because it couldn't interact with a Web page? I'm not talking about browser-specific CSS or JS quirks. How often did the Web have to change in the last 2-3 decades? Similar to the Web, the REST architecture is intended for long-lasting applications for years to come that support natural evolution by design. For simple frontend-to-backend systems it is for sure overkill. It starts to shine especially in cases where there are multiple peers, not under your control, that you can interact with.
My understanding of SOAP vs REST:
REST = JSON, a simple consistent interface, gives you CRUD access to 'entities' (abstractions of things which are not necessarily single DB rows), a simpler protocol, no formally enforced 'contract' (e.g. the values an endpoint returns could change, though they shouldn't)
SOAP = XML, a more complex interface, gives you access to 'services' (specific operations you can apply to entities, rather than allowing you to CRUD entities directly), a formally enforced, pre-stated 'contract' (like a WSDL, where e.g. the return types are predefined and formalized)
Is that a broadly correct assessment?
What about a mixture?
If so, what do I call an API that is a mixture?
For example, say we have what at surface level looks like a REST API (returns JSON, no WSDL or formalized contract defined), but instead of giving you access to the 'entities' that the system manages (user, product, comment, etc.) it gives you specific access to services and complex operations (/sendUserAnUpdate/1111, /makeCommentTextPurple/3333, /getAllCommentsByUserThisYear/2222) without having full coverage?
The 'services' already exist internally, and the team simply publishes access to them on a request by request basis, through what would otherwise look like a REST API.
Question:
What is the 'mixture' typically referred to as (besides, maybe, a bad API)? Is there a word for it, or a concept I can refer to that'll make most developers understand what I'm referring to, without having to say the entire paragraph I did above?
Is it just "JSON SOAP API?", "A Service-based REST API?" - what would you call it?
Thanks!
If you take a look at all those so-called REST APIs, your observation might seem true, though REST actually is something completely different. It describes an architecture, or a philosophy, whose intent is to decouple clients from servers, allowing the latter to evolve in the future without breaking clients. It is quite similar to typical Web page interaction in that a server will teach a client what it needs and only reacts to client-triggered requests. One has to be pretty careful and pedantic when designing REST services, as it is too easy to introduce a coupling that may affect clients when a change is introduced, especially with all the pragmatism around in (commercial) software engineering. Stefan Tilkov gave a great talk on REST back in 2014 that, alongside those by Jim Webber or Asbjørn Ulsberg, can be used as an introductory lecture on what REST is at its core.
The general premise in REST should always be that the server teaches clients what it needs and what it expects, and offers choices to the client via links. If the server expects to receive data from the client, it will send a form-esque representation to inform the client about the respective fields it supports, and based on the affordances of the respective elements contained in the form, a client knows whether to select one or multiple options, enter some free text, enter a date value and such. Unfortunately, most of the media-type formats that attempt to mimic HTML's forms are still in draft versions.
If you take a look at HTML forms in particular, you might sense what I'm referring to. Each of the elements that may occur inside a form is well defined to avoid ambiguity and improve interoperability. This is de facto the ultimate goal in REST: having one client that is able to interact with a sheer number of other services without having to be adapted to each single API explicitly.
The beauty of REST is that it isn't limited to a single representation form, e.g. JSON; in fact, there is an almost infinite number of possible representation formats that could be exchanged in a REST environment. Plain application/json is a terrible media type for REST applications IMO, as it doesn't include any definitions in regard to links and forms and doesn't describe the semantics of the fields that may be shipped in requests and responses. The lack of semantic description usually leads to typed resources, where a recipient expects that data received from e.g. /api/users is some specific user data, which may differ from host to host. If you skim through IANA's media type registry you will find a couple of media-type formats you could have used to transfer user-related data, and any client supporting these representation formats would be able to interact with this endpoint without any issues. Fielding himself claimed that
A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state, or in defining extended relation names and/or hypertext-enabled mark-up for existing standard media types. Any effort spent describing what methods to use on what URIs of interest should be entirely defined within the scope of the processing rules for a media type (and, in most cases, already defined by existing media types). (Source)
Through content-type negotiation, client and server negotiate a representation format both support and understand. The question therefore shouldn't be which one to support but how many you want to support. The more media types your API or client is able to exchange payloads in, the more likely it will be to interact with other participants.
Most of those so-called REST APIs are in reality just RPC services exposed via HTTP that may or may not respect and support certain HTTP operations. HTTP thereby is just a transport layer whose domain is the transfer of files or data over the Web. Plenty of people still believe that you shouldn't put verbs in URIs, when in reality a script or process usually doesn't (and shouldn't) care whether a URI contains a verb or not. The URI itself is just a pointer a client will follow and invoke when it is interested in receiving the payload. We humans are also not that interested in the URI itself with regard to the content it may return after invoking it. The same holds true for arbitrary clients. It is more important what you ship along with that URI. On the Web, a link can be annotated with certain text and/or link relation names that set the link's content in relation to the current page. It may hint to a client that certain content may be invoked before the whole response has been parsed, as it is quite likely that the client will also want to know about it; preload, for instance, is such a link relation name. If certain domain-specific terms exist, one might use an extension scheme as defined by Web Linking or reuse common knowledge or special microformats.
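For instance, a link annotated with a relation name rather than relying on the wording of the URI could look like this (target URIs hypothetical; preload and payment are registered relation names):

Link: </styles/main.css>; rel="preload"; as="style"

<a href="/orders/4711/xyz" rel="payment">Pay now</a>

The client acts on preload or payment, not on whatever the URI happens to contain.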
The whole interaction in a REST environment is similar to playing a text-based computer game or following a certain process flow (e.g. ordering and paying for products) defined by an application domain protocol, which can be designed as a state machine. The client is thereby guided through the whole process. It basically just follows the orders the server gives it, with some choices to break out of the process (e.g. cancelling the order before paying).
SOAP, on the other hand, is, as you've stated, an XML-based RPC protocol reusing a subset of HTTP to exchange requests and responses. The likelihood that, when you change something within your WSDL, plenty of clients have to be adapted and recompiled is quite high. SOAP even defines its own security mechanism instead of reusing TLS, which therefore requires explicit support by the clients. As you have a one-to-one communication model due to the state that may be kept in process, scaling SOAP services isn't that easy. In a REST environment this is just a matter of adding a load balancer in front of the server and then mirroring the server n times. The load balancer can send the request to any of the servers due to the stateless constraint.
What is the 'mixture' typically referred to as (besides, maybe, a bad API)? Is there a word for it, or a concept I can refer to that'll make most developers understand what I'm referring to, without having to say the entire paragraph I did above?
Is it just "JSON SOAP API?", "A Service-based REST API?" - what would you call it?
The general term for an API that communicates on top of HTTP would be Web API or HTTP API IMO. This article also uses this term. It also lists XML-RPC and JSON-RPC besides SOAP. I do agree with Voice, though, that you'll receive 5 answers when asking 4 people about the right term to use. While it would be convenient to have a term available everyone would agree upon, reality shows that people are not that interested in a clear separation. Just look here at SO at the questions tagged with rest. There is nothing wrong with not being "RESTful", though one should avoid the term REST for truly RPC services. Though I think we are already in a situation where the term REST can't be rescued from misuse and marketing purposes.
For something that requires external documentation to use and that ships with its own custom, non-standardized representation format, or that just exposes CRUD for domain objects, I'd add -RPC to it, as this is more or less what it is at its heart. So if the API sends JSON and the representation to expect is documented via Swagger or some other external documentation, JSON-RPC would probably be the most fitting name IMO.
To sum up this post, I hope I could shed some light on what REST truly is and how your observation is flawed by all those pragmatic attempts that unfortunately are RPC through and through. If you change something within their implementation, how many clients will break? In addition, you can't reuse the client that you've implemented for API A to interact with API B (of a different company or vendor) out of the box, and therefore have to either adapt your client or create a new one solely for that API. This is true RPC and therefore should be reflected in the name somehow to hint developers about future expectations. Unfortunately, the process of naming things properly, especially in regard to REST, seems already lost. There is a fine but tiny group who attempt to spread the true meaning, like Voice, Cassio and some others, though it is like fighting windmills. The best advice here would be to first discuss the naming conventions, and what each participant understands by each term, and then agree on a naming scheme everyone accepts to avoid future confusion.
My understanding of SOAP vs REST
...
Is that a broadly correct assessment?
No.
REST is an "architectural style", which is to say a coordinated collection of architectural constraints. The World Wide Web is an example of an application built using the REST architectural style.
SOAP is a transport-agnostic message protocol specification, based on the XML Information Set.
If so, what do I call an API that is a mixture?
I don't think you are going to find an authoritative terminology here. Colloquially, you are likely to hear the broad umbrella term "web api" to describe an HTTP API that isn't "RESTful".
The whole space is rather polluted by semantic diffusion.
POST, PUT, PATCH and GET are all different, with idempotency and safety being the key difference makers.
While writing RESTful APIs, I encountered guidelines on when and where to use each of the HTTP methods. Since I am using Java for the back-end implementation, I can control the behavior of the HTTP methods on the persistent data.
For example, GET v1/book/{id} can be replaced with POST v1/book (with "id" in the body); with that id I can perform a query on the db, fetching that particular book (assuming a book with that id already exists).
Similarly, I can achieve the workings of PATCH and PUT with POST itself.
Now, coming to the question: why don't we just use POST instead of GET, PUT and PATCH almost every time (ALMOST), when we can control the idempotency and safety behavior in the back-end?
Or is it just a guideline mentioned in RESTful docs somewhere or stated by Roy Fielding that we are all blindly following? Even if they are just guidelines, what is the major idea behind them?
https://restfulapi.net/rest-put-vs-post/
https://restful-api-design.readthedocs.io/en/latest/methods.html
https://www.keycdn.com/support/put-vs-post
The above resources just mention either what all the methods do or their differences. The articles describe the workings as if they were guidelines; none of the docs online speak about the reasoning behind them.
None of them says what would happen if I used POST instead of PUT, PATCH and GET - what would be the side effects? (As I can control their behavior in the back-end.)
HTTP methods are designed so that each method holds some responsibility. I would say the REST standards are conventions, not obligations. The conventions don't force us to follow the rules, but they are designed for the betterment of our code. You can tweak things and use them your own way, but that would be a bad idea. As in this case: if you perform all three actions with one method, it creates great confusion in the code (the simple definition of POST is to create an object, and that is what is understood by everyone) and also degrades our coding standards.
I strongly discourage replacing three methods with one.
If you do that, you can't say you are "writing RESTful APIs".
Whoever knows the RESTful standard will be confused about the behaviour of your APIs.
If you fit the standard, then you will have an easier life.
After all, you have no real benefit in your approach.
HTTP is a transport protocol which, as its name suggests, is responsible for transferring data such as files or db entries across the wire to or from a remote system. In version 0.9 you basically only had the GET operation at your disposal, while in HTTP 1.0 almost all of the current operations were added to the spec.
Each of these methods fulfills its own purpose. POST, for instance, processes the payload according to the server's own semantics, whatever those may be. In theory it could therefore be used for retrieving, updating or removing content. However, to a client it is basically unclear what a server actually does with the payload. There is no guarantee whether invoking a URI with that method is safe (i.e. does not alter the remote resource) or not. Think of a crawler that is just invoking any URIs it finds, and one of the links is an order link or a link where you perform a payment process. Do you really want a crawler to trigger one of your processes? The spec is rather clear that if something like that happens, the client must not be held accountable for it. So, if a crawler triggered such a process via one of your links and 10k products were ordered and created in that process, you can't claim a refund from the crawler's maintainer.
In addition to that, a response to a GET operation is cacheable by default. So if you invoke the same resource twice within a certain amount of time, chances are that the resource does not need to be fetched again a second (third, ...) time, as it can be reused from the cache. This can reduce the load on the server quite significantly if used properly.
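A small sketch of that effect, reusing the book URI from the question (header values hypothetical):

GET v1/book/12
-> 200 OK
   Cache-Control: max-age=3600
   { ...book representation... }

GET v1/book/12        (again, within the hour)
-> served from a local or intermediary cache, no round trip to the origin server

POST v1/book          (the proposed replacement, with the id in the body)
-> not cacheable by default; every lookup hits the server again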
As you've mentioned Fielding and REST: REST is an architectural style which you should use if you have plenty of different clients connecting to your services that are furthermore not under your control. Plenty of so-called REST APIs aren't adhering to REST as they follow a more simple and pragmatic RPC approach with external documentation such as Swagger and similar. REST's main focus is on the decoupling of clients from servers, which allows the latter to evolve freely without fear of breaking clients. Clients, on the other hand, become more robust to changes.
Fielding only added a few constraints a REST architecture has to adhere to. One of them is support for caching. Fielding later wrote a well-cited blog post where he explains what API designers have to consider before calling their API REST. True decoupling can only occur if all of the constraints are followed strictly. If even one client violates these premises, it won't benefit from REST at all.
The main premise in REST is (and should always be): the server teaches clients what they need, and clients only use what they are served with. On the browsable Web, the big cousin of REST, a server will teach a client, for example, what data it expects via Web forms in HTML, and links are annotated with link relation names to give the browser some hints on when to invoke that URI. On a Web page a trash-bin icon may indicate a deletion while a pencil icon may indicate an edit link. Such visual hints are also called affordances. They may not be applicable in machine-to-machine communication, though affordances there may hint at other things. Think of a stylesheet that is annotated with preload: in HTTP/2 the server could attempt to push such a resource, or in HTTP/1.1 the browser could already load that stylesheet while the page is still being parsed, to speed things up. In order to gain widespread knowledge of those meanings, such values should be standardized at IANA. Through custom extensions or certain microformats such as Dublin Core or the like, you may add new relation names that are too specific for the common case but are common to the domain itself.
The same holds true for the media types client and server negotiate about. A generally applicable media type will probably reach wider acceptance than a tailor-made one that is only usable by a single company. The main aim here is to reach a point where the same media type can be reused across various areas and APIs. REST's vision is to have a minimal number of clients that are able to interact with a plethora of servers or APIs, similar to a browser that is able to interact with almost all Web sites.
Your ultimate goal in REST is that a user follows an interaction protocol you've set up, which could be something similar to an order process or a text game or whatnot. By giving a client choices, it will progress through a certain process, which can easily be depicted as a state machine. It follows a kind of application-driven protocol by following URIs that caught its attention and by returning data whose structure was taught through a form-like representation. As, more or less, only standardized representation formats should be used, there is no need for out-of-band information on how to interact with the API.
In reality, though, plenty of enterprises don't really care about long-lasting APIs that are free to evolve over the years, but about short-term success stories. They usually also don't care that much whether they use the proper HTTP operations at all or stay within the bounds of the HTTP spec (e.g. sending payloads with HTTP GET requests). Their primary intent is to get the job done. Therefore pragmatism usually wins over design, and as such, plenty of developers follow the path of short-term success and have to adapt their work later on, which is often cumbersome as the API is now the driving factor of their business and therefore they can't change it easily without having to revamp the whole design.
... why don't we just use POST instead of GET, PUT and PATCH almost every time (ALMOST), when we can control the idempotency and safety behavior in the back-end?
A server may know that a request is idempotent, but the client does not. Properties such as safety and idempotency are promises to the client. Whether the server satisfies them or not is a different story. How should a client know whether a sent payment request reached the server and the response just got lost, or the initial request didn't make it to the server at all in case of a temporary connection issue? A PUT request does guarantee idempotency, i.e. you don't end up ordering the same things twice if you resubmit the same request after a network issue. While the same request could also be sent via POST with the server being smart enough not to process it again, the client doesn't know the server's behavior unless it is externally documented somewhere, which again somewhat violates REST principles. So, to state it differently, such properties are promises made to the client, less so to the server.
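A hypothetical retry scenario makes the difference visible:

PUT v1/book/12   { ...full representation... }
-> connection times out, the client never sees the response
PUT v1/book/12   { ...same representation... }
-> safe to retry: the resulting state is the same either way

POST v1/book     { ...new book... }
-> connection times out
POST v1/book     { ...new book... }
-> may well create a second copy; the client has no promise to rely on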
I read that HATEOAS links are the thing that separates a REST API from a normal HTTP API. In that case, does REST need a separate name? I wonder what all this hype about REST APIs is about. It seems to be just HTTP with one extra rule in the response.
Q) What other differences exist?
I read that HATEOAS links are the thing that separates a REST API from a normal HTTP API.
That's probably a little bit of an understatement. When Leonard Richardson (2008) described the "technology stack" of the web, he listed:
URI
HTTP
HTML
A way of exploring the latter is to consider how HTML, as a media type, differs from a text document with URIs in it. To my mind, the key elements are links and forms -- standardized ways of encoding into the representation the semantics of a URI (this is a link to another page, this is an embedded image, this is an embedded script, this is a form...).
Mike Amundsen, 2010:
Hypermedia Types are MIME media types that contain native hyper-linking semantics that induce application flow. For example, HTML is a hypermedia type; XML is not.
Atom Syndication/Atom Publishing is a good demonstration for defining a REST API.
Can you throw some light on what REST actually means and how it differs from normal http?
Have you noticed that websites don't normally use plain text for the representations of the information that they share? It's something of a dead end -- raw text doesn't have any hypermedia semantics built into it, so a generic client can't do anything more interesting than search for sequences that might be URIs.
On the other hand, with HTML we have link semantics: we can include references to images, to style sheets, to scripts, as well as linking to other documents. We can describe forms, that allow the creation of parameterized HTTP requests.
Additionally, that means that if some relation shouldn't be used by the client, the server can easily change the representation to remove the link.
Furthermore, the use of the hypermedia representation allows the server to use a richer description of which request message should be sent by the client.
Consider, for example, Google. They can use the form to control whether search requests use GET or POST. They can remove the "I Feel Lucky" option, or arrange that it redirects to the main experience. They can embed additional information in to the fields of the form, to track what is going on. They can choose which URI targets are used in the search results, directing the client to send to Google another request which gets redirected to the actual target, with additional meta data embedded in the query parameters, all without requiring any special coordination with the client used.
For further discussion, see Leonard Richardson's slide deck from QCon 2008, or Phil Sturgeon's REST and Hypermedia in 2019.
Doesn't the client need to read the documentation if the HATEOAS link is a POST API? HATEOAS links will only guide you to an API but will not throw any light on how its request body needs to be filled... GET won't have a request body, so not much of a problem. But a POST API?
Sort of - here's Fielding writing in 2008:
REST doesn’t eliminate the need for a clue. What REST does is concentrate that need for prior knowledge into readily standardizable forms.
On the web, the common use case is agents assisting human beings; the humans can resolve certain ambiguities on their own. The result is a separation of responsibilities; the humans decode the domain specific semantics of the messages, the clients determine the right way to describe an interaction as an HTTP request.
If we want to easily replace the human with a machine, then we'll need to invest extra design capital in a message schema that expresses the domain specific semantics as clearly as we express the plumbing.
To me, REST is an ideology you want to aim for if you have a system that should last for years to come and needs the freedom to evolve without breaking things in parts you can't control. This is very similar to the Web, where a server can't control browsers directly, though browsers are able to cope with any changes done to the Web site representations returned by the server.
I read that HATEOAS links are the thing that separates a REST API from a normal HTTP API. In that case, does REST need a separate name?
REST basically does what its name implies: it transfers the state of a resource representation. If anything, we should come up with a new name for those "REST" APIs that are truly RPC underneath, to avoid confusion.
If you read through the Richardson Maturity Model (RMM) you might get the impression that links, or hypermedia controls as Fowler named them, which are mandatory at Level 3, are the feature that separates REST from a normal HTTP interaction. However, Level 3 alone is just not enough to reach the ultimate goal of decoupling.
Most so called "REST APIs" do put a lot of design effort into pretty URIs in a way to express meaning of the target resource to client developer. They come up with fancy documentation generated by their tooling support, such as Swagger or similar stuff, which the client developer has to follow stringent or they wont be able to interact with their API. Such APIs are RPC though. You won't be able to point the same client that interacts with API A to point to API B now and still work out of the box as they might use completely different endpoints and return different types of data for almost the same named resource endpoint. A client that is attempting to use a bit more of dynamic behavior might learn the type from parsing the endpoint and expect a URI such as .../api/users to return users, when all of a sudden now the API changed its URI structure to something like .../api/entities. What would happen now? Most of these clients would break, a clear hint that the whole interaction model doesn't follow the one outline by a REST architecture.
REST puts emphasis on link relation names, which should give clients a stable way of learning the URI's intent while allowing the URI itself to change over time. A URI is attached to a link relation name and basically represents an affordance, something where it is clear what it does. E.g. the affordance of a button could be that you can press it and something will happen as a result, or the affordance of a light switch would be that a light goes on or off depending on the toggled state of the switch.
Link relation names express such an affordance in text form, much like a trash-bin or pencil symbol next to a table entry on a Web page, where you can figure out that clicking one will delete the entry while the other lets you edit it. Such link relation names should either be standardized, use widely accepted ontologies, or use custom link-relation extensions as outlined by RFC 8288 (Web Linking).
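To make that a bit more concrete, here is a minimal Python sketch (using the requests library) of a client that only hardcodes an entry point and the link-relation names it understands, and looks up the actual URIs from a HAL-style _links section at runtime. The entry point, the relation name and the payload layout are invented for illustration and are not tied to any particular API.

    import requests

    ENTRY_POINT = "https://api.example.org/"        # hypothetical entry point
    USERS_REL = "https://rel.example.org/users"     # hypothetical link-relation name

    def follow(relation):
        """Fetch the entry point and follow the link advertised under `relation`."""
        root = requests.get(ENTRY_POINT, headers={"Accept": "application/hal+json"})
        root.raise_for_status()
        links = root.json().get("_links", {})
        if relation not in links:
            raise LookupError("Server no longer advertises " + relation)
        # The server may change this href at any time, e.g. from /api/users
        # to /api/entities, without breaking this client.
        target = links[relation]["href"]
        return requests.get(target, headers={"Accept": "application/hal+json"}).json()

    users = follow(USERS_REL)

Such a client only breaks if the relation itself disappears, not if the URI behind it changes.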
It is important to note, however, that a URI is just a URI and should not convey a semantic meaning to a client. This does not mean that a URI can't have a semantic meaning to the server or API, only that a client should not attempt to deduce one from the URI itself. That is what the link-relation name is for: it provides the infrequently changing part of the relation. An endpoint might be referenced by multiple different URIs, some of which might use different query parameters for filtering. According to Fielding, each of these URIs represents a different resource:
The definition of resource in REST is based on a simple premise: identifiers should change as infrequently as possible. Because the Web uses embedded identifiers rather than link servers, authors need an identifier that closely matches the semantics they intend by a hypermedia reference, allowing the reference to remain static even though the result of accessing that reference may change over time. REST accomplishes this by defining a resource to be the semantics of what the author intends to identify, rather than the value corresponding to those semantics at the time the reference is created. It is then left to the author to ensure that the identifier chosen for a reference does indeed identify the intended semantics. (Source 6.2.1)
As URIs are used for caching results, they essentially act as the keys under which response payloads are cached. It then becomes obvious that adding an extra query parameter to a URI used in a GET request bypasses the cache, since that key is not stored in the cache yet, and you receive the result of a different resource, even though its representation might be identical to that of the URI without the additional parameter.
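As a toy illustration of the cache-key aspect, consider a simplified client-side cache keyed on the full request URI; real HTTP caches additionally honor headers such as Cache-Control and Vary, and the URIs below are made up.

    # Simplified cache: the full URI is the key under which a representation is stored.
    cache = {}

    def cached_get(uri):
        if uri in cache:
            return cache[uri]                              # served from the cache
        representation = "representation of " + uri        # stand-in for a real origin request
        cache[uri] = representation
        return representation

    cached_get("https://api.example.org/users")   # miss: fetched and stored
    cached_get("https://api.example.org/users")   # hit: served from the cache
    # A different key: the cached entry above is bypassed, even if the returned
    # representation turns out to be identical.
    cached_get("https://api.example.org/users?active=true")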
I wonder what all this hype about REST APIs is about. It seems to be just an HTTP method with one extra rule in the response.
In short, that is what those self-styled or marketing-termed pseudo "REST APIs" convey, and what many people seem to understand REST to be.
The hype for "REST" arose from the inconveniences put onto developers on interacting with other interop-solutions such as Corba, RMI or SOAP where often partly-commercial third-party libraries and frameworks had to be used in order to interact with such systems. Most languages supported HTTP both as client and server out of the box removing the requirement for external libraries or frameworks per se. In addition to that, RPC based solution usually require certain stub- or skeleton-classes to be generated first, which was usually done by the build pipeline automatically. Upon updates of the IDL, such as WSDL linking or including XSD schemata, the whole stub-generation needed to be redone and the whole code needed to looked through in order to spot whether a breaking change was added or not. Usually no obvious changelog was available which made changing or updating such stuff a pain in the ...
In those pseudo "REST" APIs plain JSON is now pretty much the de facto standard, avoiding the step of generating stub classes and the hazzle of analyzing the own code to see whether some of the forced changes had a negative impact on the system. Most of those APIs use some sort of URI based versioning allowing a developer to see based on the URI whether something breaking was introduced or not, mimicking some kind of semantic versioning.
The problem with that solution, though, is that it is not the response representation format that is versioned but the whole API, which leads to issues when a change should only affect one part of the API yet the whole API's version has to be bumped. In addition, two URIs such as .../api/v1/users/1234 and .../api/v2/users/1234 may represent the same user and thus the same resource, yet by definition they are different resources, as the URIs differ.
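A frequently suggested alternative, sketched below, is to version the representation format rather than the URI, for example through a vendor media type negotiated in the Accept header; the media-type names and the URI are invented for illustration and not part of any standard.

    import requests

    URI = "https://api.example.org/users/1234"   # stays stable across format revisions

    # The client asks for the newest representation it understands and offers an
    # older one as a fallback; the resource and its URI do not change at all.
    accept = ("application/vnd.example.user.v2+json, "
              "application/vnd.example.user.v1+json;q=0.5")

    response = requests.get(URI, headers={"Accept": accept})
    print(response.headers.get("Content-Type"))   # reveals which revision was served
    print(response.json())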
Q) What other differences exist?
While REST is just an architectural model that can't force you to implement it strictly, you simply will not benefit from its properties if you ignore some of its constraints. As mentioned above, HATEOAS support alone is therefore not enough to really decouple all clients from an API and thus to benefit from the REST architecture.
RMM unfortunately does not talk about media types at all. A media type basically specifies how a received payload should be processed and defines the semantics and constraints of each of the elements used within that payload. For example, if you look at text/html in IANA's media type registry, you can see that it points to the published specification, which always references the most recent version of HTML. HTML is designed to stay backwards compatible, so no special versioning scheme is required.
HTML provides, IMO, two important things:
semi-structured content
form support
The former allows data to be structured, giving certain segments or elements the possibility to express different semantics defined by the media type. For example, a browser will handle an image differently from a div or an article element. A crawler might favor links and content contained in an article element and ignore script and image elements completely. Based on the existence or absence of certain elements, the processing itself may differ.
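Carried over to machine clients, the same idea means that the declared media type, not the URI, decides how a payload is processed. The following dispatch sketch illustrates this; the handler functions and the set of supported media types are assumptions made for the example.

    import requests

    def handle_hal(response):
        body = response.json()
        return body.get("_links", {}), body      # hypothetical: separate links from data

    def handle_plain_json(response):
        return {}, response.json()               # no hypermedia semantics defined

    # The handler is chosen by the media type the server declared, never by the URI.
    HANDLERS = {
        "application/hal+json": handle_hal,
        "application/json": handle_plain_json,
    }

    def process(uri):
        response = requests.get(uri, headers={"Accept": ", ".join(HANDLERS)})
        media_type = response.headers.get("Content-Type", "").split(";")[0].strip()
        if media_type not in HANDLERS:
            raise ValueError("No processing rules known for " + media_type)
        return HANDLERS[media_type](response)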
Support for forms is actually a very important thing in REST, as it is the feature that allows a server to teach a client what the server needs as input. Most so-called "REST APIs" just force a developer to go through their documentation, which might be outdated, incorrect or incomplete, and send data to a predefined endpoint according to that documentation. If the documentation is outdated or incomplete, how should a client ever be able to send data to the server? Moreover, the server might never be able to change, as the documentation is now effectively the truth and the API has to keep aligning with it.
Unfortunately, form support is still a bit in its infancy. Besides HTML, which provides <form>...</form>, there are a couple of JSON-based form attempts such as hal-forms, halo-json (halform), Ion or Hydra. None of these has wide library or framework support yet, as some of these form representations still have not finalized their specification of how to support forms effectively.
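As a rough sketch of how form support answers the earlier comment about POST bodies, the client below builds its POST request from a HAL-FORMS-like _templates section instead of from out-of-band documentation. The URI, the field names and the exact template layout are illustrative assumptions rather than a normative rendering of any of the specifications listed above.

    import requests

    ACCEPT = "application/prs.hal-forms+json"

    def create_from_form(form_uri, user_input):
        """Fetch the form, then submit only the fields the server asked for."""
        doc = requests.get(form_uri, headers={"Accept": ACCEPT}).json()
        template = doc["_templates"]["default"]
        target = template.get("target", form_uri)     # where to submit the data
        method = template.get("method", "POST")

        payload = {}
        for prop in template.get("properties", []):
            name = prop["name"]
            if name in user_input:
                payload[name] = user_input[name]
            elif prop.get("required"):
                raise ValueError("Server requires a value for '" + name + "'")

        return requests.request(method, target, json=payload)

    # Hypothetical usage: the server, not some documentation, told us which fields to send.
    # create_from_form("https://api.example.org/users",
    #                  {"name": "Jane", "email": "jane@example.org"})

If the server later adds or removes a field, the client picks that up from the next form it fetches rather than from a changelog.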
Other media types, unfortunately, might not use semi-structured content or provide support for forms that teach a client about the needs of the server, though they are still valuable to REST in general. First, through Web Linking (RFC 8288) link support can be added to media types that do not naturally support links. Second, the data itself does not really need to be text-based at all in order for an application to use it. For example, pictures and videos are usually encoded, byte-based formats anyway, yet a client can still present them to users.
The main point about media types, though, as Fielding already pointed out in one of his cited blog posts, is that representations shouldn't be confused with types. Fielding stated that:
A REST API should never have “typed” resources that are significant to the client. Specification authors may use resource types for describing server implementation behind the interface, but those types must be irrelevant and invisible to the client. The only types that are significant to a client are the current representation’s media type and standardized relation names.
Jørn Wildt explained in an excellent blog post what a "typed" resource is and why a REST architecture shouldn't use such types. Basically, to sum the blog post up, a client expecting a ../api/users endpoint to return a pre-assumed data payload might break if the server adds additional, unexpected fields, renames existing fields or leaves out expected fields. This coupling can be avoided by using plain content-type negotiation, where the client informs the server which representations it supports and the server chooses the one that best fits the target resource. If the server can't provide a representation the client supports, it should respond with a failure (or a default representation), which the client might log to inform the user.
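From the client side, that negotiation might look roughly like the sketch below; the media-type names are invented, and whether a server answers with 406 or with a default representation is entirely up to the server.

    import requests

    SUPPORTED = ["application/vnd.example.user+json", "application/hal+json"]

    def fetch(uri):
        response = requests.get(uri, headers={"Accept": ", ".join(SUPPORTED)})
        if response.status_code == 406:
            # The server cannot produce any representation this client understands.
            print("No acceptable representation for " + uri)
            return None
        response.raise_for_status()
        media_type = response.headers.get("Content-Type", "").split(";")[0].strip()
        if media_type not in SUPPORTED:
            # The server fell back to a default representation; log it and treat it as opaque.
            print("Unexpected representation " + media_type)
        return response.content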
This, in essence, is exactly what the name REST stands for: the transfer of a resource's state representation, where the representation may differ depending on the format defined by the selected media type. While HATEOAS may be one of the most obvious differences between REST and a non-REST HTTP solution, it is by no means the only factor that REST is made of. I hope I could shed some light on the decoupling intention, on how a server should teach clients what it expects through forms, and on how the affordance of a URI is captured by its link-relation name. All these small aspects together make up REST, and unfortunately you will only benefit from REST if you respect all of its constraints, not just those that are easy to implement or that you happen to be in the mood to implement.