If an API only provides POST request functions, is it RESTful?

I'm not sure I correctly understand the notion of a RESTful API. If I understand correctly, such an API should provide functions you can trigger with GET, POST, PUT & DELETE requests. My question is: if an API only provides POST request functions, is it still RESTful?

You should probably watch this lecture and read this article.
REST as such has nothing to do with how many of the available HTTP methods you use. So, the quick answer is: yes, it could be considered "RESTful" (whatever that actually means).
But... it most likely isn't. And that has nothing to do with the abuse of POST calls.
The main indicator for this magical "RESTfulness" has nothing really to do with how you make the HTTP request (methods and pretty URLs are worthless as a determining factor).
What matters is the returned data and whether, by looking at this data, you can learn about other resources and actions related to the resource at any given endpoint. It's basically about discoverability.
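To make that a bit more tangible, here is a minimal sketch in Python (using the third-party requests library; the entry-point URI, the HAL-like link format and the "orders" relation name are all made up for illustration) of a client that discovers the next URI from the returned data instead of hardcoding it:

    import requests

    # Hypothetical entry point; everything below it is discovered, not hardcoded.
    entry = requests.get("https://api.example.org/",
                         headers={"Accept": "application/hal+json"}).json()

    # The representation is assumed to carry links keyed by relation name, e.g.
    #   {"_links": {"orders": {"href": "https://api.example.org/orders"}}}
    orders_href = entry["_links"]["orders"]["href"]

    # The client only relies on the relation name "orders"; the server is free
    # to move or rename the URI behind it at any time.
    orders = requests.get(orders_href, headers={"Accept": "application/hal+json"}).json()

As long as the client keys its behavior off relation names and media types rather than URI templates, the server keeps the freedom to restructure its namespace without breaking anyone.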

REST has been a misused term for some time, and the community, especially on Stack Overflow, doesn't even seem to care about its actual intention: the decoupling of clients from server APIs in a distributed system.
Client and server achieve this decoupling by following certain recommendations, like avoiding stateful connections where client state is stored at and managed by the server, using unique identifiers for resources (URIs), and further nice-to-have features like cacheability to reduce the workload both server and clients have to perform. While Fielding's dissertation lists six constraints, he later explained some further rules that applications following the REST architectural style have to follow, and the benefits the system gains by following them. Among these are:
The API should not depend on any single communication protocol and should adhere to, and not violate, the underlying protocol used. Although REST is used via HTTP most of the time, it is not restricted to this protocol.
Strong focus on resources and their presentation via media-types.
Clients should not have initial knowledge of, or assumptions about, the available resources or their returned state ("typed" resources) in an API, but should learn them on the fly via issued requests and analyzed responses. This gives the server the opportunity to move around or rename resources easily without breaking a client implementation.
So, basically, if you limit yourself only to HTTP you somehow already violate the general idea REST tries to impose.
As tereško mentioned the Richardson maturity model, I want to clarify that this model is rather nonsense in the scope of REST. Even if level 3 is reached, it does not mean that the architecture follows REST, and any application that hasn't reached level 3 isn't following this architectural style anyway. Note that an application that only partially follows REST isn't actually following it; it's either done properly or not at all.
In regards to "RESTful" (the dissertation doesn't contain this term), usually a JSON-based API exposed via HTTP is regarded as such.
To your actual question:
Based on this quote
... such an API should provide functions you can trigger with GET, POST, PUT & DELETE requests
in terms of the REST architectural style I'd say NO, as you basically use such an API for RPC calls (a relaxed, probably JSON-based SOAP, if you will), limit yourself to HTTP only and do not fully use the semantics of the underlying HTTP protocol. If you follow the JSON-based HTTP API crowd, the answer is probably "it depends on who you ask", as there is no precise definition of the term "RESTful" IMO. I'd say no here as well if you trigger functions rather than resources on the server.
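To illustrate the RPC-vs-resource distinction, here is a rough sketch (Python with the requests library; both endpoints are invented) of the same lookup done as a function call tunnelled through POST versus as a resource retrieval that uses HTTP's semantics:

    import requests

    BASE = "https://api.example.org"   # hypothetical host

    # RPC style: a function call tunnelled through POST. The method tells
    # intermediaries nothing; nothing here is safe or cacheable by default.
    rpc = requests.post(f"{BASE}/getBook", json={"id": 42})

    # Resource style: the book is addressed by a URI and fetched with GET,
    # which is safe, idempotent and cacheable by generic HTTP infrastructure.
    res = requests.get(f"{BASE}/books/42", headers={"Accept": "application/json"})

    print(rpc.status_code, res.status_code)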

Yes. REST has some guidelines you should follow. As long as you use HTTP verbs correctly and follow good practices with regard to URL naming, having only POSTs would be OK. If, on the other hand, a POST request in your application can also delete a record, then I would not call it RESTful.

Related

Is a GraphQL API RESTful by default?

My understanding of REST is simply that a resource needs some means of describing itself. My understanding is that this isn't specifically tied to any one protocol (i.e. HTTP) and that there are theoretically numerous ways of achieving this. This is based on an answer to a SO question here: SOAP vs REST (differences) (and unlike the terrible answer to this question: Are Relay and Graphql RESTful?)
Since a GraphQL API is self-describing via introspection, doesn't that mean that GraphQL is RESTful by default since a client can use introspection to figure out how to query it?
While GraphQL is often mentioned as the replacement for REST, the two actually tackle different problems.
REST, to start with, is not a protocol but just a style which, if applied correctly and fully, decouples clients from servers. A server following the REST principles will therefore provide the client with any information needed to take further steps. A client initially starts without any a-priori knowledge and learns on the fly through issuing requests and processing responses. HATEOAS describes the interaction model a REST architecture should be built upon. It states that a link should be used to request new information, which drives the client's internal flow. By utilizing representations similar to Web forms (HTML), a server can teach a client about the needed inputs. Through the affordance of the respective elements a client knows, without any need for external documentation, what to do; i.e. it might find a couple of options to choose one or multiple entries from, enter or update some free text, or push some buttons. In HTML, forms usually trigger a POST request and send the entered data as application/x-www-form-urlencoded to the server, though the form element itself may define something different.
While REST is protocol agnostic, meaning it can be built on top of many protocols, HTTP is probably the most prominent one. A common example of a RESTful client is the Web browser we are all too familiar with. It will start by invoking either a bookmarked URI or one entered in the address bar and progress from there.
HTTP doesn't specify the representation the request or response has to be sent in but leaves that to clients and servers to negotiate. This helps with decoupling, as both clients and servers can rely on the common interface (HTTP) and only bind strongly to the known media types used to exchange data. A peer not being able to process a document in a certain representation (due to the lack of support for the respective mime type) will indicate to its peer via a respective HTTP status code that it does not understand, and therefore can't serve, the requested media-type format. The media type, which is just a human-readable documentation of the syntax and the semantics of the data payload, is therefore the most important part of a REST architecture. Even Fielding claimed:
A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state, or in defining extended relation names and/or hypertext-enabled mark-up for existing standard media types. Any effort spent describing what methods to use on what URIs of interest should be entirely defined within the scope of the processing rules for a media type (and, in most cases, already defined by existing media types). [Failure here implies that out-of-band information is driving interaction instead of hypertext.]
A media type teaches a peer how to parse and interpret the received payload and to actually make sense of it, though plenty of people still confuse REST with a JSON-based HTTP API with over-engineered URIs. They put too much effort into giving the URI some kind of logical sense, when actually neither client nor server will interpret it anyway, as they will probably use the link relation name given for the URI.
GraphQL, on the other hand, is basically just a query language which gives the client the power to request the specific fields and elements it wants to retrieve from the server. It is, loosely speaking, some kind of SQL for the Web, or as Fielding termed it, just Remote Data Access (RDA). It therefore has to have some knowledge of the available data beforehand, which somehow couples clients to the server. If the server renames some of the fields, the client might not be able to retrieve that information anymore, though I'm not a GraphQL expert.
As stated above, REST is often confused with a JSON-based HTTP API that allows queries to be performed on directly mapped DB entries/entities. Keep in mind that REST doesn't prohibit this, though its focus is on the decoupling of peers, not the retrieval aspect of some Web-exposed database entries. As Jim Webber pointed out in a great talk back in 2011, in REST you don't simply expose database tables; you create a domain application protocol which clients will follow along, like in a text-based computer game or in a typical Webshop system on the internet.
Especially the linked introspection documentation of GraphQL reminds me of reflection in Java, which couples to the actual class model available. If something in the data model changes, how does the GraphQL interaction behave? Is it able to change and adapt? Is a client built for one API able to work with another API out of the box? These are basically requirements for a truly RESTful client. It has to adapt to changes in the future, as the server is free to evolve at any time. It further shouldn't assume that certain endpoints return certain types, but should use content-type negotiation to request a representation it can work with.
These should give you enough insights to determine for yourself whether GraphQL can be RESTful or not. In my opinion it isn't, but my insights into GraphQL are rather limited, TBH.
Because GraphQL publishes metadata about its types, it's entirely plausible (I think) to build a GraphQL client that could consume any GraphQL endpoint ...
SOAP did the exact same thing, though it was still an RPC protocol. A client could look up the ...?wsdl information at run-time and then dynamically generate a request according to the schema defined in the WSDL, though what usually happened was that stub classes were generated based on the WSDL data and compiled into a specific client. A client dynamically generating a request still needed a routine that defined what message type to create and what data the message required as input.
While SOAP could potentially define multiple endpoints within a WSDL, in most cases only one was defined. This endpoint usually only operates on POST requests, even though later on (SOAP 1.2) GET would also have been possible.
According to Fielding's thesis,
REST uses a resource identifier to identify the particular resource involved in an interaction between components.
what would be the resource identifier in GraphQL? GraphQL's documentation states that
... In contrast, GraphQL's conceptual model is an entity graph. As a result, entities in GraphQL are not identified by URLs. Instead, a GraphQL server operates on a single URL/endpoint, usually /graphql, and all GraphQL requests for a given service should be directed at this endpoint.
Similar to SOAP, all requests are targeted at a single endpoint. This has some impact if you consider caching, which is a further constraint REST imposes. How are responses cacheable if the URI is the key used to store the response in the cache?
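As a rough sketch of the caching point (Python/requests; the endpoint and queries are made up): two different GraphQL reads end up as POSTs to the very same URI, so a cache keyed by request URI cannot tell them apart, whereas two resource GETs are distinguishable by URI alone:

    import requests

    GRAPHQL = "https://api.example.org/graphql"   # hypothetical single endpoint

    # Two semantically different reads, same method, same URI:
    # opaque to caches that key entries by request URI.
    q1 = requests.post(GRAPHQL, json={"query": "{ book(id: 1) { title } }"})
    q2 = requests.post(GRAPHQL, json={"query": "{ author(id: 7) { name } }"})

    # Two resource reads, each with its own URI: trivially cacheable per resource.
    b = requests.get("https://api.example.org/books/1")
    a = requests.get("https://api.example.org/authors/7")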
While all of the aggregation stuff and the flexibility may be nice from a consumer perspective, they are probably not in line with the constraints of REST, though Fielding himself claimed that REST is not applicable in all situations and that designers should select a style that fits their needs, as not every style is the "silver bullet" for each problem. Even Mike Amundsen stated that GraphQL violates at least 3 constraints imposed by the REST architecture, even though GraphQL seems to have since changed the default retrieval method from POST to GET.
Usually, if you aim for long-living APIs that should be free to evolve in the future and that have to deal with lots of clients, especially ones not under your direct control, this is when REST starts to shine. Fielding admits that most developers have problems thinking long-term. For a single frontend-to-backend system or for a tailor-made client interacting with your own API, REST is probably not the architecture one should follow.
Last but not least, in a later tweet Fielding stated
There is no such thing as a REST endpoint. There are resources. A countably infinite set of resources bound only by restrictions on URL length. A client can POST to a REST service to create a resource that is a GraphQL query, and then GET that resource with all benefits of REST…
which I interpret as: don't focus too much on justifying whether GraphQL is REST or not, but think about how you can integrate its benefits into the overall design.

Terminology question: API somewhere between SOAP and REST - what is the name for them?

My understanding of SOAP vs REST:
REST = JSON, simple consistent interface, gives you CRUD access to 'entities' (Abstractions of things which are not necessarily single DB rows), simpler protocol, no formally enforced 'contract' (e.g. the values an endpoint returns could change, though it shouldn't)
SOAP = XML, more complex interface, gives you access to 'services' (specific operations you can apply to entities, rather than allowing you to CRUD entities directly), formally enforced, pre-stated 'contract' (like a WSDL, where e.g. the return types are predefined and formalized)
Is that a broadly correct assessment?
What about a mixture?
If so, what do I call an API that is a mixture?
For example, suppose we have what at surface level looks like a REST API (returns JSON, no WSDL or formalized contract defined), but instead of giving you access to the 'entities' that the system manages (user, product, comment, etc.) it gives you specific access to services and complex operations (/sendUserAnUpdate/1111, /makeCommentTextPurple/3333, /getAllCommentsByUserThisYear/2222) without having full coverage?
The 'services' already exist internally, and the team simply publishes access to them on a request by request basis, through what would otherwise look like a REST API.
Question:
What is the 'mixture' typically referred to as (besides, maybe, a bad API). Is there a word for it? or a concept I can refer to that'll make most developers understand what I'm referring to, without having to say the entire paragraph I did above?
Is it just "JSON SOAP API?", "A Service-based REST API?" - what would you call it?
Thanks!
If you take a look at all those so-called REST APIs, your observation might seem true, though REST actually is something completely different. It describes an architecture, or a philosophy, whose intent is to decouple clients from servers, allowing the latter to evolve in the future without breaking clients. It is quite similar to typical Web page interaction in that a server will teach a client what it needs and only reacts to client-triggered requests. One has to be pretty careful and pedantic when designing REST services, as it is too easy to introduce a coupling that may affect clients when a change is made, especially with all the pragmatism around in (commercial) software engineering. Stefan Tilkov gave a great talk on REST back in 2014 that, alongside ones by Jim Webber and Asbjørn Ulsberg, can be used as an introductory lecture to what REST is at its core.
The general premise in REST should always be that the server teaches clients what they need and what the server expects, and offers choices to the client via links. If the server expects to receive data from the client, it will send a form-esque representation to inform the client about the respective fields it supports, and based on the affordance of the respective elements contained in the form, a client knows whether to select one or multiple options, enter some free text, enter a date value, and so on. Unfortunately, most of the media-type formats that attempt to mimic HTML's forms are still in draft versions.
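Such a form-esque representation could, for instance, look like the following sketch (a made-up structure expressed as a Python dict, loosely modelled on HTML forms and drafts like Ion; the field names and keys are purely illustrative):

    # Sketch of a form-like representation a server might return to teach the
    # client what it can submit (hypothetical structure, loosely modelled on
    # HTML forms / the Ion draft):
    order_form = {
        "href": "https://api.example.org/orders",   # where to submit
        "method": "POST",                           # how to submit
        "type": "application/json",                 # expected payload format
        "fields": [
            {"name": "product", "type": "select",
             "options": ["espresso", "latte", "cappuccino"]},
            {"name": "quantity", "type": "number", "min": 1},
            {"name": "note", "type": "text", "required": False},
        ],
    }

The point is that the target URI, the method and the expected fields all come from the server at runtime, not from documentation the client developer read beforehand.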
If you take a look at HTML forms in particular, you might sense what I'm referring to. Each of the elements that may occur inside a form is well defined to avoid ambiguity and improve interoperability. This is de facto the ultimate goal of REST: having one client that is able to interact with a sheer number of other services without having to be adapted to each single API explicitly.
The beauty of REST is that it isn't limited to a single representation form, i.e. JSON; in fact there is an almost infinite number of possible representation formats that could be exchanged in a REST environment. Plain application/json is a terrible media type for REST applications IMO, as it doesn't include any definitions in regards to links and forms and doesn't describe the semantics of certain fields that may be shipped in requests and responses. The lack of semantic description usually leads to typed resources where a recipient expects that receiving data from e.g. /api/users returns some specific user data that may differ from host to host. If you skim through IANA's media type registry you will find a couple of media-type formats you could use to transfer user-related data, and any client supporting these representation formats would be able to interact with such an endpoint without any issues. Fielding himself claimed that
A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state, or in defining extended relation names and/or hypertext-enabled mark-up for existing standard media types. Any effort spent describing what methods to use on what URIs of interest should be entirely defined within the scope of the processing rules for a media type (and, in most cases, already defined by existing media types). (Source)
Through content-type negotiation, client and server negotiate a representation format both support and understand. The question therefore shouldn't be which one to support but how many you want to support. The more media types your API or client is able to exchange payloads in, the more likely it will be to interact with other participants.
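From the client side, that negotiation can be sketched roughly like this (Python/requests; the URI is hypothetical): the client lists the representations it can process in the Accept header and reacts to a 406 Not Acceptable if the server supports none of them:

    import requests

    resp = requests.get(
        "https://api.example.org/users/42",   # hypothetical resource
        headers={"Accept": "application/hal+json, application/xml;q=0.5"},
    )

    if resp.status_code == 406:
        # The server cannot produce any of the offered representations.
        print("No common media type:", resp.text)
    else:
        # Dispatch on what the server actually chose, not on the URI.
        print("Negotiated representation:", resp.headers.get("Content-Type", ""))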
Most of those so-called REST APIs are in reality just RPC services exposed via HTTP that may or may not respect and support certain HTTP operations. HTTP thereby is just a transport layer whose domain is the transfer of files or data over the Web. Plenty of people still believe that you shouldn't put verbs in URIs, when in reality a script or process usually doesn't (and shouldn't) care whether a URI contains a verb or not. The URI itself is just a pointer a client will follow and invoke when it is interested in receiving the payload. We humans are also not that interested in the URI itself with regard to the content it may return after being invoked. The same holds true for arbitrary clients. It is more important what you ship along with that URI. On the Web a link can be annotated with certain text and/or link relation names that set the link's content in relation to the current page. It may hint to a client that certain content may be invoked before the whole response was parsed, as it is quite likely that the client will also want to know about that; preload, for example, is such a link relation name that gives the client that hint. If certain domain-specific terms exist, one might use an extension scheme as defined by Web linking, or reuse common knowledge or special microformats.
The whole interaction in a REST environment is similar to playing a text-based computer game or following a certain process flow (e.g. ordering and paying for products) defined by an application domain protocol, which can be designed as a state machine. The client is therefore guided through the whole process. It basically just follows the orders the server gives it, with some choices to break out of the process (e.g. cancelling the order before paying).
SOAP, on the other hand, is, as you've stated, an XML-based RPC protocol reusing a subset of HTTP to exchange requests and responses. The likelihood that plenty of clients have to be adapted and recompiled when you change something within your WSDL is quite high. SOAP even defines its own security mechanism instead of reusing TLS, which therefore requires explicit support by the clients. As you have a one-to-one communication model due to the state that may be kept in process, scaling SOAP services isn't that easy. In a REST environment this is just a matter of adding a load balancer in front of the server and then mirroring the server n times. The load balancer can send the request to any of the servers due to the stateless constraint.
What is the 'mixture' typically referred to as (besides, maybe, a bad API). Is there a word for it? or a concept I can refer to that'll make most developers understand what I'm referring to, without having to say the entire paragraph I did above?
Is it just "JSON SOAP API?", "A Service-based REST API?" - what would you call it?
The general term for an API that communicates on top of HTTP would be Web API or HTTP API IMO. This article also uses this term. It also lists XML-RPC and JSON-RPC besides SOAP. I do agree with Voice, though, that you'll receive 5 answers when asking 4 people about the right term to use. While it would be convenient to have a respective term available that everyone agrees upon, reality shows that people are not that interested in a clear separation. Just look here at SO at the questions tagged with rest. There is nothing wrong with not being "RESTful", though one should avoid the term REST for truly RPC services. Though I think we are already in a situation where the term REST can't be rescued from misuse and marketing purposes.
For something that requires external documentation to use and that ships with its own custom, non-standardized representation format, or that just exposes CRUD for domain objects, I'd add -RPC to it, as this is more or less what it is at its heart. So if the API sends JSON and the representation to expect is documented via Swagger or some other external documentation, JSON-RPC would probably be the most fitting name IMO.
To sum up this post, I hope I could shed some light on what REST truly is and how your observation is flawed by all those pragmatic attempts that unfortunately are RPC through and through. If you change something within their implementation, how many clients will break? In addition, you can't reuse the client that you've implemented for API A to interact with API B (of a different company or vendor) out of the box, and therefore have to either adapt your client or create a new one solely for that API. This is true RPC and should therefore be reflected in the name somehow to hint developers about future expectations. Unfortunately, the process of naming things properly, especially in regards to REST, seems already lost. There is a fine but tiny group who attempt to spread the true meaning, like Voice, Cassio and some others, though it is like fighting windmills. The best advice here would be to first discuss the naming conventions and what each participant understands by each term, and then agree on a naming scheme to avoid future confusion.
My understanding of SOAP vs REST
...
Is that a broadly correct assessment?
No.
REST is an "architectural style", which is to say a coordinated collection of architectural constraints. The World Wide Web is an example of an application built using the REST architectural style.
SOAP is a transport-agnostic message protocol specification based on XML Information Set.
If so, what do I call an API that is a mixture?
I don't think you are going to find an authoritative terminology here. Colloquially, you are likely to hear the broad umbrella term "web api" to describe an HTTP API that isn't "RESTful".
The whole space is rather polluted by semantic diffusion.

Can GET, PUT and PATCH be replaced with POST HTTP method?

POST, PUT, PATCH and GET are all different; idempotency and safety are the key differentiators.
While writing RESTful APIs, I encountered guidelines on when and where to use each of the HTTP methods. Since I am using Java for the back-end implementation, I can control the behavior of the HTTP methods on the persistent data.
For example, GET v1/book/{id} can be replaced with POST v1/book (with "id" in the body); with that id I can perform a query on the DB, fetching that particular book (assuming a book with that id already exists).
Similarly, I can achieve the workings of PATCH and PUT with POST itself.
Now, coming to the question: why don't we just use POST instead of GET, PUT and PATCH almost every time (ALMOST), when we can control the idempotency and safety behavior in the back-end?
Or is it just a guideline mentioned in RESTful docs somewhere or stated by Roy Fielding that we are all blindly following? Even if they are just guidelines, what is the major idea behind them?
https://restfulapi.net/rest-put-vs-post/
https://restful-api-design.readthedocs.io/en/latest/methods.html
https://www.keycdn.com/support/put-vs-post
The above resources just mention either what all the methods do or their differences. The articles present the workings as if they were guidelines; none of the docs online speak about the reasoning behind them.
None of them says what the side effects would be if I used POST instead of PUT, PATCH and GET (as I can control their behavior in the back-end).
HTTP methods are designed so that each method holds some responsibility. I would say that the REST standards are conventions, not obligations. Conventions don't force us to follow the rules, but they are designed for the betterment of our code. You can tweak things and use them your own way, but that would be a bad idea. In this case, performing all three actions with one method would create great confusion in the code (the simple definition of POST is to create an object, and that is what everyone understands it to do) and would also degrade our coding standards.
I strongly discourage replacing three methods with one.
If you do that, you can't say you are "writing RESTful APIs".
Whoever knows the RESTful standard will be confused about the behaviour of your APIs.
If you follow the standard, you will have an easier life.
After all, there is no real benefit to your approach.
HTTP is a transport protocol which, as its name suggests, is responsible for transferring data such as files or DB entries across the wire to or from a remote system. In version 0.9 you basically only had the GET operation at your disposal, while in HTTP 1.0 almost all of the current operations were added to the spec.
Each of these methods fulfills its own purpose. POST, for example, processes the payload according to the server's own semantics, whatever they may be. In theory it could therefore be used for retrieving, updating or removing content. However, to a client it is basically unclear what a server actually does with the payload. There is no guarantee whether invoking a URI with that method is safe (i.e. whether the remote resource is altered) or not. Think of a crawler that is just invoking any URI it finds, and one of the links is an order link or a link where you perform a payment process. Do you really want a crawler to trigger one of your processes? The spec is rather clear that if something like that happens, the client must not be held accountable for it. So, if a crawler triggered such a process via one of your links and 10k products got ordered and created in that process, you can't claim a refund from the crawler's maintainer.
In addition to that, a response to a GET operation is cacheable by default. So if you invoke the same resource twice within a certain amount of time, chances are that the resource does not need to be fetched again a second (third, ...) time, as it can be reused from the cache. This can reduce the load on the server quite significantly if used properly.
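One way this plays out in practice is a conditional GET; the sketch below (Python/requests, hypothetical URI) revalidates a previously cached copy with If-None-Match and reuses it when the server replies 304 Not Modified:

    import requests

    URI = "https://api.example.org/books/42"    # hypothetical resource

    first = requests.get(URI)
    etag = first.headers.get("ETag")
    cached_body = first.content

    # Later: revalidate the cached copy instead of downloading it again.
    second = requests.get(URI, headers={"If-None-Match": etag} if etag else {})
    if second.status_code == 304:
        body = cached_body          # still fresh, reuse the cached representation
    else:
        body = second.content       # resource changed, use the new representation

None of this works if the same retrieval is tunnelled through POST, since generic caches treat POST as non-cacheable by default.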
As you've mentioned Fielding and REST: REST is an architectural style which you should use if you have plenty of different clients connecting to your services that are furthermore not under your control. Plenty of so-called REST APIs aren't adhering to REST, as they follow a simpler and more pragmatic RPC approach with external documentation such as Swagger and similar. REST's main focus is on the decoupling of clients from servers, which allows the latter to evolve freely without having to fear breaking clients. Clients, on the other hand, become more robust to changes.
Fielding only added a few constraints a REST architecture has to adhere to. One of them is support for caching. Fielding later wrote a well-cited blog post where he explains what API designers have to consider before calling their API REST. True decoupling can only occur if all of the constraints are followed strictly. If even one of these premises is violated, the application won't benefit from REST at all.
The main premise in REST is (and should always be): the server teaches clients what they need, and clients only use what they are served with. On the browsable Web, the big cousin of REST, a server will teach a client e.g. about the data it expects via Web forms in HTML, and links are annotated with link relation names to give the browser some hints on when to invoke a URI. On a Web page a trash-bin icon may indicate a deletion, while a pencil icon may indicate an edit link. Such visual hints are also called affordances. Visual hints may not be applicable in machine-to-machine communication, though affordances there may hint at other things a link provides. Think of a stylesheet link that is annotated with preload. In HTTP/2, for example, such a resource could be pushed by the server, and in HTTP/1.1 the browser could already load that stylesheet while the page is still being parsed, to speed things up. In order to gain widespread knowledge of those meanings, such values should be standardized at IANA. Through custom extensions or certain microformats such as Dublin Core or the like, you may add new relation names that are too specific for common cases but are common to the domain itself.
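The preload hint mentioned above travels, for example, in a Link response header; a rough sketch (Python/requests, hypothetical URI) of a client reacting to it could look like this:

    import requests

    resp = requests.get("https://www.example.org/")   # hypothetical page

    # requests exposes parsed Link headers, e.g.
    #   Link: </styles/site.css>; rel=preload; as=style
    for link in resp.links.values():
        if link.get("rel") == "preload":
            # Fetch the hinted resource early, before the main document is fully processed.
            requests.get(requests.compat.urljoin(resp.url, link["url"]))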
The same holds true for the media types client and server negotiate about. A generally applicable media type will probably reach wider acceptance than a tailor-made one that is only usable by a single company. The main aim here is to reach a point where the same media type can be reused for various areas and APIs. REST's vision is to have a minimal number of clients that are able to interact with a plethora of servers or APIs, similar to a browser that is able to interact with almost all Web sites.
Your ultimate goal in REST is that a user follows an interaction protocol you've set up, which could be something similar to an order process or a text game or whatnot. By giving a client choices, it will progress through a certain process which can easily be depicted as a state machine. It is following a kind of application-driven protocol by following URIs that caught the client's attention and by returning data that was taught through a form-like representation. As, more or less, only standardized representation formats should be used, there is no need for out-of-band information on how to interact with the API.
In reality though, plenty of enterprises don't really care about long-lasting APIs that are free to evolve over the years, but about short-term success stories. They usually also don't care that much whether they use the proper HTTP operations at all or stay within the bounds of the HTTP spec (e.g. sending payloads with HTTP GET requests). Their primary intent is to get the job done. Therefore pragmatism usually wins over design, and as such plenty of developers follow the path of short-term success and have to adapt their work later on, which is often cumbersome, as the API is now the driving factor of their business and therefore they can't change it easily without having to revamp the whole design.
... why don't we just use POST instead of GET , PUT and PATCH almost every time, ALMOST, when we can control the idempotent and safety behavior in the back-end?
A server may know that a request is idempotent, but the client does not. Properties such as safety and idempotency are promises to the client. Whether the server satisfies them or not is a different story. How should a client know whether a sent payment request reached the server and the response just got lost, or whether the initial request didn't make it to the server at all in case of a temporary connection issue? A PUT request does guarantee idempotency; i.e. you don't want to order the same things twice if you resubmit the same request in case of a network issue. While the same request could also be sent via POST with the server being smart enough not to process it again, the client doesn't know the server's behavior unless it is externally documented somewhere, which again somewhat violates REST principles. So, to state it differently, such properties are promises to the client rather than to the server.
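A small sketch of why that promise matters to the client (Python/requests; the URIs are invented): a PUT to a specific order URI can be blindly retried after a timeout, whereas retrying the equivalent POST risks creating a second order unless the server's behavior is documented out of band:

    import requests

    order = {"product": "book", "quantity": 1}

    # Idempotent: repeating the same PUT converges on the same resource state,
    # so a retry after a lost response is harmless.
    for attempt in range(2):
        try:
            requests.put("https://api.example.org/orders/42", json=order, timeout=5)
            break
        except requests.exceptions.RequestException:
            continue   # safe to retry the exact same request

    # Not idempotent by contract: the client cannot know whether the first POST
    # was processed, so a blind retry may create the order twice.
    requests.post("https://api.example.org/orders", json=order, timeout=5)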

Difference between Swagger & HATEOAS

Can anyone explain the difference between Swagger and HATEOAS? I have searched many times but nobody has explained these two aspects in proper detail.
The main difference between Swagger and HATEOAS IMO, which is not covered in the accepted answer, is that Swagger is only needed for RPC'esque APIs. Such APIs, however, have actually hardly anything to do with REST.
There is a further, widespread misconception that anything exchanged via HTTP is automatically RESTful (i.e. in accordance with the REST architectural style), which it is not. REST just defines a set of constraints that are not choices or options but are mandatory, from start to finish. There is nothing wrong with not being RESTful, but it is wrong to term such an architecture REST.
Swagger describes the operations that can be performed on an endpoint and the payload (including headers and the expected representation formats) that needs to be sent to the service, and also describes what a client might expect as a response. This allows Swagger to be used both as documentation and as a testing framework for the API. Due to the tight coupling of Swagger to the API, it behaves much like a typical RPC service description, i.e. similar to WSDL files in SOAP or stub or skeleton classes in RMI or CORBA. If either the endpoint changes or something in the payload changes, clients implementing against a Swagger documentation will probably break over time, just reintroducing the same problems typical RPC implementations have.
REST and HATEOAS, on the other side, are designed for discovery and further development. REST isn't a protocol but an architectural style to start with, one that describes the interaction flow between a client and a server in a distributed system. It basically took the concepts which made the Web so successful and translated them to the application layer. So the same concepts that apply to the browsable Web also apply to REST. Therefore it is no wonder that HATEOAS (the usage of and support for links, link relations and link names) also behaves similarly to the Web.
When designing a REST architecture it is beneficial to think of a state machine where a server provides all of the information a client needs to take further actions. Asbjørn Ulsberg gave a great talk back in 2016 where he explains affordances and how a state machine might be implemented through HATEOAS. Besides common or standardized media types and relation names, no out-of-band knowledge is necessary to interact with the service further. In the case of the toaster example Asbjørn gave in his talk, a toaster may have the states off, on, heating and idle, where turning a toaster on will lead to a state transition from off to on, followed by a transition to heating until a certain temperature is reached, at which point the state transitions to idle, and the toaster switches between idle and heating until it is turned off.
HATEOAS will provide a client with information on the current state and include links the client can invoke to transition to the next state, e.g. turning the toaster off again. It's important to stress here that the client is provided by the server with every action the client might perform next. There is no need for a client implementor to consult any proprietary API documentation in order for the client to be able to interact with a REST service. Further, URIs do not have to be meaningful or designed to convey a semantically expressive structure, as clients will determine whether invoking a URI makes sense via the link relation name. Such relation names are either specified by IANA, by a common approach such as Dublin Core or schema.org, or are absolute URIs acting as extension attributes which might point to a human-readable description, which in turn might be presented to the user via mouse-over tooltips or the like.
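A toaster representation along those lines could be sketched as follows (a Python dict standing in for a hypothetical JSON response; the relation names, keys and URIs are made up): the current state is reported, and only the transition the client may take next is advertised.

    # Hypothetical representation of the toaster resource while it is heating.
    toaster = {
        "state": "heating",
        "_links": {
            "self":     {"href": "https://api.example.org/toaster"},
            # The only transition offered right now: turn the toaster off.
            "turn-off": {"href": "https://api.example.org/toaster",
                         "method": "PUT",
                         "accepts": "application/json"},
        },
    }

    # A client decides what to do purely by relation name, not by URI structure.
    if "turn-off" in toaster["_links"]:
        print("The server currently offers the 'turn-off' transition.")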
I hope you can see for yourself that Swagger is only needed to describe RPC Web APIs rather than applications that follow the REST architectural design. Messages exchanged via REST APIs should include all the information needed by a client to make informed choices on the next state transition. As such, it is beneficial to design such message flows and interactions as a state machine.
Update:
How are Swagger and HATEOAS mutually exclusive? The former documents your endpoints (making auto-generating code possible) and the latter adds meta-information to your endpoints which tell the consumer what they can do (i.e. which other endpoints are available). These are very different things.
I never stated that they are mutually exclusive, just that they serve two different purposes, where if you follow one approach the other becomes more or less useless. Using both does not make much sense, though.
Let's move the discussion to the Web domain, as this is probably more easily understandable, and REST is de facto just a generalization of the concepts used on the Web, so taking this step is only natural and also a good recommendation when designing REST architectures in general. Think of a case where you, as a user, want to send some data to the server. You have never used the service before, so you basically don't know what a request has to look like.
With Swagger you would consult the endpoint documentation, select the option that most likely solves your task, read up on how the request needs to look, and hack a test case into your application that ends up generating an HTTP request that is sent to the respective location. Auto-generating code might spare you some hacking time, though you still need to integrate the stub classes into your application and test the whole thing at least once just to be safe. If you later need to integrate a second service of that API, or of yet another API in general, you need to start from the beginning: look up the Swagger documentation, generate or hack the interaction code and integrate it into your domain. Plenty of manual steps are involved, and in case of API changes you need to update your client, as otherwise it might stop working.
In the Web example, however, you just start your browser/Web client, invoke the respective URI that allows you to send the data to the server, and the server will most likely send you an HTML form you just need to fill out and click the send button, which automatically sends the request to the server, which will start to process it. This is HATEOAS. You used the given controls to drive your workflow. The server taught your client every little detail it needed to make a valid request. It served your client the target URI to send the request to, the HTTP method to use and, most often, also implicitly the media type the payload should be in. In addition, it gave your client a skeleton of the expected and/or supported elements of the payload. I.e. the form may require you to fill out a couple of input fields, select among a given set of choices or use some other controls, such as a date or time picker whose value is translated to a valid date or time representation for you. All you needed to do was to invoke the respective resource in your Web client. No auto-generation, no integration into your browser/application. Using other services (from the same or different providers) will, most likely, just work the same way, so there is no need to change or update your HTTP client (browser) as long as the media types the requests and responses are exchanged in are supported.
In the case where you rely on Swagger RPC'esque documentation, that documentation is the source of truth on how to interact with the service. Mixing in some HATEOAS information doesn't provide you any benefits. In the Swagger case, carrying around additional meta-information that bloats up requests and responses for no obvious reason, as all the required information is given in the reference documentation, will with some certainty lead to people questioning the sanity of the developers of that service and asking for payload reduction. Just look here at SO for a while and you will find enough questions asking how to optimize the interaction further and further and reduce message size to a minimum, as people process every little request and don't make use of response caching at all. In the HATEOAS case, pointing to an external reference is just useless, as peers in such an architecture most likely already have support for the required necessities, such as URIs, HTTP and the respective media types, implemented in them. In cases where custom media types are used, support can be added at runtime via plug-ins or add-ons dynamically (if supported).
So, Swagger and HATEOAS are not mutually exclusive, but one becomes more or less useless once you have decided on the other.
Swagger: Swagger aids in development across the entire API lifecycle, from design and documentation, to test and deployment. (Refer to swagger.io)
HATEOAS: Hypermedia as the Engine of Application State
An Ion Form is a Collection Object where the value member array contains Form Fields. Ion Forms ensure that resource transitions (links) that support data submissions can be discovered automatically (colloquially referred to as HATEOAS). (Refer to https://ionspec.org/)
One is a framework for supporting designing and testing for APIs, the other is an API design architecture.
Building a RESTful API is not a binary concept. That is why we use the Richardson maturity model in order to measure how RESTful an API is.
Based on this maturity model
At level 0 we provide mechanisms for clients of the API to call some methods on the server (simple RPC).
At level 1 we expose resources on the server, so the client of the API can have direct access to the resources that it requires (exposing resources).
At level 2 we provide a uniform way for the client of the API to interact with the API (the exposed resources), and the HTTP protocol has these methods (using HTTP verbs to interact with resources).
At level 3, the ultimate step, we make our API explorable by the client. HATEOAS provides such functionality (over HTTP), meaning that it adds relevant links and affordances (extra methods) that can be executed on the resource, so the client of the API can understand its behavior.
Based on these definitions, in a properly designed RESTful API there is no coupling between client and server, and the client can interact with the exposed endpoints and discover them.
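Roughly, the same operation could look as follows at the different levels (Python/requests; every URI and payload is invented purely for illustration):

    import requests

    BASE = "https://api.example.org"   # hypothetical host

    # Level 0: one endpoint, one verb, the "method" travels in the payload (plain RPC).
    requests.post(f"{BASE}/api", json={"action": "getBook", "id": 42})

    # Level 1: individual resources get their own URIs, but POST is still used for everything.
    requests.post(f"{BASE}/books/42", json={"action": "read"})

    # Level 2: HTTP verbs carry the semantics (GET to read, PUT to replace, ...).
    book = requests.get(f"{BASE}/books/42").json()

    # Level 3: the representation itself advertises what can be done next via links.
    next_href = book.get("_links", {}).get("recommendations", {}).get("href")
    if next_href:
        requests.get(next_href)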
On the other hand, swagger is a tool that helps you document your API along with some extra goodies (code generators).
I believe that Swagger (with the help of SwaggerHub) provides services for implementing a RESTful endpoint up to maturity level 2. But it does not go any further, and it does not provide proper support for HATEOAS.
You can define your resources and HTTP verbs in (JSON/YAML) files, and based on this definition Swagger can generate API documentation and the extra goodies (client stubs and skeletal implementations of the server API).
For all those people who have worked with Java RMI, SOAP, etc., the extra goodies part is a reminder of old technologies where there was tight coupling between client and server, because the stubs and skeletal implementations are all built from the same API definition file.

Can I have a REST element URI without a collection URI?

A basic REST question: I am designing a REST API and would like to be able to get a list of book recommendations based on a book id (i.e. the client sends book id=w to the server and the server replies with a list of recommended books, id=x,y,z).
I see two ways to do this:
/recommendation?bookId=thetitle
/recommendation/thetitle
Option 2 seems a bit cleaner to me, but I'm not sure if it would be considered good REST design, because /recommendation/thetitle looks like an element URI, not a collection URI (although in this case it would return a collection). Also, the first part of the resource (/recommendation) would not make any sense by itself.
Thankful for any advice.
URL patterns of this kind have nothing to do with REST. None of the defining properties of REST requires readable URLs.
At the same time, one of the core principles (HATEOAS), if followed properly, allows API clients (applications, not people!) to browse the API and obtain every link required to perform a desired transition of application state or resource state based on a well known message format.
If you feel your API must have readable URLs, it's a good sign that its design probably isn't RESTful at all. This implies the need for a developer to understand the URL structure and hardcode it somewhere in a client application. Something that REST is supposed to avoid by principle.
To quote Roy Fielding's blog post on the subject:
A REST API must not define fixed resource names or hierarchies (an obvious coupling of client and server). Servers must have the freedom to control their own namespace. Instead, allow servers to instruct clients on how to construct appropriate URIs, such as is done in HTML forms and URI templates, by defining those instructions within media types and link relations. [Failure here implies that clients are assuming a resource structure due to out-of band information, such as a domain-specific standard, which is the data-oriented equivalent to RPC’s functional coupling].
Obviously, nothing stops you from actually making URLs meaningful regardless of how RESTful your API actually is. Even if it's for a purpose not dictated by REST itself (viewing the logs left by a client of a properly RESTful API could be easier for a human if they're readable, off the top of my head).
Finally, if you're fine with developing a Web API that's not completely RESTful and you expect developers of clients to read this kind of docs and care about path building, you might actually benefit from comprehensible URLs. This can be very useful in APIs of the so-called levels 0-3, according to Richardson's maturity model.
What's important in terms of REST is how you're leveraging the underlying protocol (HTTP in this case) and what it allows you to do. If we consider your examples from this perspective, /recommendation/thetitle seems preferable. This is because the use of query parameters may prevent responses from being cached by browsers (important if you're writing a JS client) or proxies, making it harder to reuse existing tools and infrastructure.
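If you go with the path form, the cache-friendliness can be made explicit by the server; a small sketch (Python/requests on the client side; the header values are just examples) of what a client would see and could rely on:

    import requests

    resp = requests.get("https://api.example.org/recommendation/thetitle")

    # A cache-friendly server would mark the response accordingly, e.g.
    #   Cache-Control: public, max-age=3600
    #   ETag: "abc123"
    # so browsers and intermediaries can reuse it without asking again.
    print(resp.headers.get("Cache-Control"))
    print(resp.headers.get("ETag"))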