After having read a lot of material on REST versioning, I am thinking of versioning the calls instead of the API. For example:
http://api.mydomain.com/callfoo/v2.0/param1/param2/param3
http://api.mydomain.com/verifyfoo/v1.0/param1/param2
instead of first having
http://api.mydomain.com/v1.0/callfoo/param1/param2
http://api.mydomain.com/v1.0/verifyfoo/param1/param2
then going to
http://api.mydomain.com/v2.0/callfoo/param1/param2/param3
http://api.mydomain.com/v2.0/verifyfoo/param1/param2
The advantage I see are:
When the calls change, I do not have to rewrite my entire client - only the parts that are affected by the changed calls.
Those parts of the client that work can continue as is (we have a lot of testing hours invested to ensure both the client and the server sides are stable.)
I can use permanent or non-permanent redirects for calls that have changed.
Backward compatibility would be a breeze as I can leave older call versions as is.
Am I missing something? Please advise.
Require an HTTP header.
Version: 1
The Version header is provisionally registered in RFC 4229, and there are some legitimate reasons to avoid using an X- prefix or a usage-specific URI. A more typical header was proposed by yfeldblum at https://stackoverflow.com/a/2028664:
X-API-Version: 1
In either case, if the header is missing or doesn't match what the server can deliver, send a 412 Precondition Failed response code along with the reason for the failure. This requires clients to specify the version they support every single time but enforces consistent responses between client and server. (Optionally supporting a ?version= query parameter would give clients an extra bit of flexibility.)
This approach is simple, easy to implement and standards-compliant.
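As a minimal server-side sketch, assuming a Java servlet container (the class name and failure message are invented for illustration), the check could live in a filter:

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical filter enforcing the Version header described above.
public class ApiVersionFilter implements Filter {
    private static final String SUPPORTED_VERSION = "1"; // what this server can deliver

    public void init(FilterConfig config) {}
    public void destroy() {}

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        // Prefer the header; optionally fall back to a ?version= query parameter.
        String version = request.getHeader("Version");
        if (version == null) {
            version = request.getParameter("version");
        }

        if (!SUPPORTED_VERSION.equals(version)) {
            // Missing or unsupported version: 412 plus the reason for the failure.
            response.sendError(HttpServletResponse.SC_PRECONDITION_FAILED,
                    "This server only supports API version " + SUPPORTED_VERSION);
            return;
        }
        chain.doFilter(req, res);
    }
}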
Alternatives
I'm aware that some very smart, well-intentioned people have suggested URL versioning and content negotiation. Both have significant problems in certain cases and in the form that they're usually proposed.
URL Versioning
Endpoint/service URL versioning works if you control all servers and clients. Otherwise, you'll need to handle newer clients falling back to older servers, and you'll end up doing that with custom HTTP headers anyway: system administrators of server software deployed on heterogeneous servers outside your control can do all sorts of things to screw up the URLs you think will be easy to parse, especially if you rely on something like 302 Moved Temporarily.
Content Negotiation
Content negotiation via the Accept header works if you are deeply concerned about following the HTTP standard but also want to ignore what the HTTP/1.1 standard documents actually say. The proposed MIME Type you tend to see is something of the form application/vnd.example.v1+json. There are a few problems:
There are cases where the vendor extensions are actually appropriate, of course, but slightly different communication behaviors between client and server don't really fit the definition of a new 'media type'. Also, RFC 2616 (HTTP/1.1) reads, "Media-type values are registered with the Internet Assigned Number Authority. The media type registration process is outlined in RFC 1590. Use of non-registered media types is discouraged." I don't want to see a separate media type for every version of every software product that has a REST API.
Any subtype ranges (e.g., application/*) don't make sense. For REST APIs that return structured data to clients for processing and formatting, what good is accepting */* ?
The Accept header takes some effort to parse correctly. There's both an implied and an explicit precedence that should be followed to minimize the back-and-forth required to actually do content negotiation correctly. If you're concerned about implementing this standard correctly, this is important to get right (see the example after this list).
RFC 2616 (HTTP/1.1) describes the behavior for any client that does not include an Accept header: "If no Accept header field is present, then it is assumed that the client accepts all media types." So, for clients you don't write yourself (where you have the least control), the most correct thing to do would be to respond to requests using the newest, most prone-to-breaking-old-versions version that the server knows about. In other words, you could have not implemented versioning at all and those clients would still be breaking in exactly the same way.
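To illustrate that precedence, a client might send something like this (media types and q-values chosen purely for illustration):

Accept: application/json, application/xml;q=0.8, */*;q=0.1

A conforming server must honor the explicit q weights, plus the implied rule that a more specific range outranks a wildcard at the same weight: here it should prefer JSON, fall back to XML, and only then serve anything else.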
Edited, 2014:
I've read a lot of the other answers and everyone's thoughtful comments; I hope I can improve on this with the benefit of a couple of years of feedback:
Don't use an 'X-' prefix. I think Accept-Version is probably more meaningful in 2014, and there are some valid concerns about the semantics of re-using Version raised in the comments. There's overlap with defined headers like Content-Version and the relative opaqueness of the URI for sure, and I try to be careful about confusing the two with variations on content negotiation, which the Version header effectively is. The third 'version' of the URL https://example.com/api/212315c2-668d-11e4-80c7-20c9d048772b is wholly different than the 'second', regardless of whether it contains data or a document.
Regarding what I said above about URL versioning (endpoints like https://example.com/v1/users, for instance), the converse is probably closer to the truth: if you control all servers and clients, URL/URI versioning is probably what you want. For a large-scale service that could publish a single service URL, I would go with a different endpoint for every version, like most do. My particular take is heavily influenced by the fact that the implementation as described above is most commonly deployed on lots of different servers by lots of different organizations, and, perhaps most importantly, on servers I don't control. I always want a canonical service URL, and if a site is still running the v3 version of the API, I definitely don't want a request to https://example.com/v4/ to come back with their web server's 404 Not Found page (or even worse, a 200 OK that returns their homepage as 500k of HTML over cellular data to an iPhone app).
If you want very simple /client/ implementations (and wider adoption), it's very hard to argue that requiring a custom header in the HTTP request is as simple for client authors as GET-ting a vanilla URL. (Although authentication often requires your token or credentials to be passed in the headers, anyway. Using Version or Accept-Version as a secret handshake along with an actual secret handshake fits pretty well.)
Content negotiation using the Accept header is good for getting different MIME types for the same content (e.g., XML vs. JSON vs. Adobe PDF), but not defined for versions of those things (Dublin Core 1.1 vs. JSONP vs. PDF/A). If you want to support the Accept header because it's important to respect industry standards, then you won't want a made-up MIME Type interfering with the media type negotiation you might need to use in your requests. A bespoke API version header is guaranteed not to interfere with the heavily-used, oft-cited Accept, whereas conflating them into the same usage will just be confusing for both server and client. That said, namespacing what you expect into a named profile per 2013's RFC6906 is preferable to a separate header for lots of reasons. This is pretty clever, and I think people should seriously consider this approach.
Adding a header for every request is one particular downside to working within a stateless protocol.
Malicious proxy servers can do almost anything to destroy HTTP requests and responses. They shouldn't, and while I don't talk about the Cache-Control or Vary headers in this context, all service creators should carefully consider how their content is consumed in lots of different environments.
This is a matter of opinion; here's mine, along with the motivation behind the opinion.
Include the version in the URL.
For those who say it belongs in the HTTP header, I say: maybe. But putting it in the URL is the accepted way to do it according to the early leaders in the field (Google, Yahoo, Twitter, and more). This is what developers expect, and doing what developers expect, in other words acting in accordance with the principle of least astonishment, is probably a good idea. It absolutely does not make it "harder for clients to upgrade". If the change in URL somehow represents an obstacle to the developer of a consuming application, as suggested in a different answer here, that developer needs to be fired.
Skip the minor version
There are plenty of integers. You're not gonna run out. You don't need the decimal in there. Any change from 1.0 to 1.1 of your API shouldn't break existing clients anyway. So just use the natural numbers. If you like to use separation to imply larger changes, you can start at v100 and do v200 and so on, but even there I think YAGNI and it's overkill.
Put the version leftmost in the URI
Presumably there are going to be multiple resources in your model. They all need to be versioned in synchrony. You can't have people using v1 of resource X and v2 of resource Y; it's going to break something. If you try to support that, it will create a maintenance nightmare as you add versions, and there's no value add for the developer anyway. So: http://api.mydomain.com/v1/Resource/12345, where Resource is the type of resource and 12345 gets replaced by the resource id.
You didn't ask, but...
Omit verbs from your URL path
REST is resource oriented. You have things like "CallFoo" in your URL path, which looks suspiciously like a verb and not like a noun. This is wrong. Use the Force, Luke. Use the verbs that are part of REST: GET, PUT, POST, DELETE and so on. If you want to get the verification on a resource, then do GET http://domain/v1/Foo/12345/verification. If you want to update it, do POST /v1/Foo/12345.
Put optional params as a query param or payload
The optional params should not be in the URL path (before the first question mark) unless you are suggesting that those optional params constitute a self-standing resource. So, POST /v1/Foo/12345?action=partialUpdate&param1=123&param2=abc.
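Putting those rules together, here is a hedged JAX-RS sketch (the resource, class and parameter names are invented; compare the fuller Jersey example further down the page):

import javax.ws.rs.*;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

// Version leftmost, nouns only, ids in the path, optional params in the query string.
@Path("/v1/Foo")
public class FooResource {

    // GET /v1/Foo/12345/verification instead of a "verifyFoo" verb in the path.
    @GET
    @Path("/{id}/verification")
    @Produces(MediaType.APPLICATION_JSON)
    public Response getVerification(@PathParam("id") String id) {
        return Response.ok().build(); // look up and return the verification here
    }

    // POST /v1/Foo/12345?action=partialUpdate&param1=123&param2=abc
    @POST
    @Path("/{id}")
    public Response update(@PathParam("id") String id,
                           @QueryParam("action") String action,
                           @QueryParam("param1") String param1,
                           @QueryParam("param2") String param2) {
        return Response.noContent().build(); // apply the update here
    }
}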
Don't do either of those things, because they push the version into the URI structure, and that's going to have downsides for your client applications. It will make it harder for them to upgrade to take advantage of new features in your application.
Instead, you should version your media types, not your URIs. This will give you maximum flexibility and evolutionary ability. For more information, see this answer I gave to another question.
I like using the profile media type parameter:
application/json; profile="http://www.myapp.com/schema/entity/v1"
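For illustration, a request/response pair using this parameter might look like the following (the URI path is invented; the profile value follows the example above):

GET /entity/1 HTTP/1.1
Accept: application/json; profile="http://www.myapp.com/schema/entity/v1"

HTTP/1.1 200 OK
Content-Type: application/json; profile="http://www.myapp.com/schema/entity/v1"

The media type stays plain application/json for general-purpose tooling, while the profile tells version-aware clients which schema the payload follows.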
More Info:
https://www.rfc-editor.org/rfc/rfc6906
http://buzzword.org.uk/2009/draft-inkster-profile-parameter-00.html
It depends on what you call versions in your API. If 'versions' means different representations (XML, JSON, etc.) of the entities, then you should use the Accept header or a custom header. That is the way HTTP is designed to work with representations. It is RESTful because if I call the same resource at the same time but request different representations, the returned entities will have exactly the same information and property structure, just in a different format; this kind of versioning is cosmetic.

On the other hand, if you understand 'versions' as changes in entity structure, for example adding a field 'age' to the 'user' entity, then you should approach this from a resource perspective, which is in my opinion the RESTful approach. As described by Roy Fielding in his dissertation, "...a REST resource is a mapping from an identifier to a set of entities...". It therefore makes sense that when changing the structure of an entity you need a proper resource that points to that version. This kind of versioning is structural.
I made a similar comment in: http://codebetter.com/howarddierking/2012/11/09/versioning-restful-services/
When working with URL versioning, the version should come later, not earlier, in the URL:
GET/DELETE/PUT onlinemall.com/grocery-store/customer/v1/{id}
POST onlinemall.com/grocery-store/customer/v1
Another, cleaner way of doing it, which could however be problematic when implementing:
GET/DELETE/PUT onlinemall.com/grocery-store/customer.v1/{id}
POST onlinemall.com/grocery-store/customer.v1
Doing it this way allows the client to request specifically the resource they want, which maps to the entity they need, without having to mess with headers and custom media types, which is really problematic when implementing in a production environment.

Also, having the version late in the URL gives clients more granularity when choosing specifically the resources they want, even at method level.

But the most important thing from a developer perspective: you don't need to maintain the whole mappings (paths) for every version of all the resources and methods, which is very valuable when you have a lot of sub-resources (embedded resources).

From an implementation perspective, having it at the level of the resource is really easy to implement, for example using Jersey/JAX-RS:
#Path("/customer")
public class CustomerResource {
...
#GET
#Path("/v{version}/{id}")
public IDto getCustomer(#PathParam("version") String version, #PathParam("id") String id) {
return locateVersion(version, customerService.findCustomer(id));
}
...
#POST
#Path("/v1")
#Consumes(MediaType.APPLICATION_JSON)
public IDto insertCustomerV1(CustomerV1Dto customer) {
return customerService.createCustomer(customer);
}
#POST
#Path("/v2")
#Consumes(MediaType.APPLICATION_JSON)
public IDto insertCustomerV2(CustomerV2Dto customer) {
return customerService.createCustomer(customer);
}
...
}
IDto is just an interface for returning a polymorphic object; CustomerV1Dto and CustomerV2Dto implement that interface.
Facebook does versioning in the URL. I feel URL versioning is cleaner and easier to maintain in the real world as well.
.Net makes it super easy to do versioning this way:
[HttpPost]
[Route("{version}/someCall/{id}")]
public HttpResponseMessage someCall(string version, int id)
I'm surprised to find so little mention of this dilemma online, and it makes me wonder if I'm totally missing something.
Assume I have a singleton resource called Settings. It is created on init/install of my web server, but certain users can modify it via a REST API; let's say /settings is my URI. I have a GET operation to retrieve the settings (as JSON), and a PATCH operation to set one or more of its values.
Now, I would like to let the user reset this resource (or maybe individual properties of it) to default - the default being "whatever value was used on init", before any PATCH calls were done. I can't seem to find any "best practice" approach for this, but here are the ones I have come up with:
Use a DELETE operation on the resource. It is after all idempotent, and it's pretty clear (to me). But since the URI will still exist after DELETE, meaning the resource was neither removed nor moved to an inaccessible location, this contradicts the RESTful definition of DELETE.

Use a POST to a dedicated endpoint such as /settings/reset - I really dislike this one because it's the most blatantly non-RESTful, as the verb is in the URI.

Use the same PATCH operation, passing some stand-in for "default" such as a null value. The issue I have with this one is that the outcome of the operation is different from the input (I set a property to null, then I get it and it has a string value).

Create a separate endpoint to GET the defaults, such as /settings/defaults, and then use the response in a PATCH to set those values. This doesn't seem to contradict REST in any way, but it does require 2 API calls for seemingly one simple operation.
If one of the above is considered the best practice, or if there is one I haven't listed above, I'd love to hear about it.
Edit:
My specific project has some attributes that simplify this question, but I didn't mention them originally because my aim was for this thread to be used as a reference for anyone in the future trying to solve the same problem. I'd like to make sure this discussion is generic enough to be useful to others, but specific enough to also be useful to me. For that, I will append the following.
In my case, I am designing APIs for an existing product. It has a web interface for the average user, but also a REST(ish) API intended to meet the needs of developers who need to automate certain tasks with said product. In this oversimplified example, I might have the product deployed to a test environment on which I run various automated tests that modify /settings, and would like to run a cleanup script that resets /settings back to normal when I'm done.
The product is not SaaS (yet), and the APIs are not public (as in, anyone on the web can access them freely) - so the audience and thus the potential types of "clients" I may encounter is rather small - developers who use my product, that is deployed in their private data center or AWS EC2 machines, and need to write a script in whatever language to automate some task rather than doing it via UI.
What that means is that some technical considerations like caching are relevant. Human user considerations, like how consistent the API design is across various resources, and how easy it is to learn, are also relevant. But "can some 3rd party crawler identify the next actions it can perform from a given state" isn't so relevant (which is why we don't implement HATEOAS, or the OPTIONS method at all)
Let's discuss your mentioned options first:
1: DELETE does not necessarily need to delete or remove the state contained in the resource targeted by the URI. It just requires that the mapping of the target URI to the resource is removed, which means that a subsequent request on the same URI should no longer return the state of the resource, if no other operation was performed on that URI in the meantime. As you want to reuse the URI pointing to the client's settings resource, this is probably not the correct approach.

2: REST doesn't care about the spelling of the URI as long as it is valid according to RFC 3986. There is no such thing as a RESTful or RESTless URI. The URI as a whole is a pointer to a resource, and a client should refrain from extracting knowledge from it by parsing and interpreting it. Client and server should, though, make use of the link relation names URIs are attached to. This way URIs can change at any time and the client will still be able to interact with the service. The presented URI, however, has an RPC kind of smell to it, which an automated client is totally unaware of.
3: PATCH is actually pretty similar to patching as done by code-versioning tools. Here a client should precalculate the steps needed to transform a source document into its desired form and collect these instructions in a so-called patch document. If this patch document is applied to a document whose state matches the version the patch was calculated against, the changes will be applied correctly; in any other case the outcome is uncertain. While application/json-patch+json follows exactly this philosophy of a patch document containing separate instructions, application/merge-patch+json takes a slightly different approach by defining default rules (nulling out a property leads to its removal, including a property leads to its addition or update, and leaving out properties leaves them untouched in the original document).
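For illustration, the two patch styles would express "reset the timeout and drop the custom banner" quite differently (the property names are invented for this sketch):

A JSON Patch (application/json-patch+json) document is a list of instructions:

[
  { "op": "replace", "path": "/timeout", "value": 30 },
  { "op": "remove", "path": "/customBanner" }
]

A JSON Merge Patch (application/merge-patch+json) document is a partial document where null means removal:

{
  "timeout": 30,
  "customBanner": null
}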
4: In this sense, first retrieving the latest state from the resource, locally updating it/calculating the changes, and then sending the outcome to the server is probably the best of the approaches listed. Here you should make use of conditional requests to guarantee that the changes are only applied to the version you recently downloaded, and to prevent issues by ruling out any intermediary changes done to that resource.
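A sketch of that conditional flow (the ETag value is invented):

GET /settings HTTP/1.1

HTTP/1.1 200 OK
ETag: "abc123"
...current settings...

PUT /settings HTTP/1.1
If-Match: "abc123"
...the defaults merged into the downloaded state...

If someone else modified the settings in between, the ETag no longer matches, the server answers 412 Precondition Failed, and the client re-fetches before retrying.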
Basically, in a REST architecture the server offers a bunch of choices to a client, which, based on its task, will choose one of the options and issue a request to the attached URI. Usually the client is taught everything it needs to know by the server via form representations such as HTML forms, HAL forms or ION.

In such an environment settings is, as you mentioned, a valid resource of its own, and so is a default-settings resource. So, in order to allow a client to reset its settings, it is just a matter of "copying" the content of the default-settings resource to the target settings resource. If you want to be WebDAV compliant, which is just an extension of HTTP, you could use the COPY HTTP operation (also see the other registered HTTP operations at IANA). For plain HTTP clients, though, you might need a different approach so that arbitrary HTTP clients will be able to reset settings to the desired defaults.

How a server wants a client to perform that request can be taught via the form support mentioned above. A very simplistic approach on the Web would be to send the client an HTML page with the default settings pre-filled into an HTML form, perhaps also allowing the user to tweak the settings beforehand, and then have them click a submit button to send the request to the URI present in the action attribute of the form, which can be any URI the server wants. As HTML only supports POST and GET in forms, on the Web you are restricted to POST.
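A minimal sketch of such a pre-filled form (the field names and default values are invented):

<form action="/settings" method="post">
  <input type="hidden" name="timeout" value="30">
  <input type="hidden" name="theme" value="default">
  <input type="submit" value="Reset to defaults">
</form>

Note that the action targets the settings resource itself, which matters for the caching point made next.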
One might think that just sending a payload containing the URI of the settings resource to reset (and optionally the URI of the default settings) to a dedicated endpoint via POST is enough, and then letting it perform its magic to reset the state. However, this approach bypasses caches and might let them believe that the old state is still valid. Caching in HTTP works such that the de-facto URI of a resource is used as the key, and any unsafe operation performed on that URI leads to an eviction of the stored content, so that consecutive requests go directly to the server instead of being served by the cache. If you send the unsafe POST request to a dedicated resource (or endpoint, in RPC terms), you miss out on the capability to inform caches about the modification of the actual settings resource.

As REST is just a generalization of the interaction model used on the human Web, it is no miracle that the same concepts used on the Web also apply at the application domain level. While you can use HTML here as well, JSON-based formats such as application/hal+json or the above-mentioned HAL forms or ION formats are probably more popular. In general, the more media types your service is able to support, the more clients the server will be able to serve.

In contrast to the human Web, where images, buttons and the like provide an affordance of the respective control to a user, arbitrary clients, especially automated ones, usually don't cope with such affordances well. As such, other ways to hint a client about the purpose of a URI or control element need to be provided, such as link relation names. While <<, <, >, >> may be used on an HTML page to indicate the first, previous, next and last elements in a collection, link relations provide first, prev, next and last as alternatives. Such link relations should of course be either registered with IANA or at least follow the Web Linking extension approach. A client looking up the URI behind a prev relation will know the purpose of that URI and will still be able to interact with the server if the URI ever changes. This is in essence what HATEOAS is all about: using the given controls to navigate the application through the state machine offered by the server.
Some general rules of thumb in designing applications for REST architectures are:
Design the interaction as if you'd interact with a Web page on the human Web, or more formally as a state machine or domain application protocol (as Jim Webber termed it) that a client can run through

Let servers teach clients how requests need to look via support for different form types

APIs shouldn't use typed resources but should instead rely on content-type negotiation

The more media types your API or client supports, the more likely it will be to interact with other peers
Long story short: a very basic approach is to offer the client a pre-filled form with all the data that makes up the default settings. The target URI of the form's action attribute is the actual resource, and thus also informs caches about the modification. This approach is on top future-proof, in that clients will automatically be served the new structure and properties a resource supports.
... so the audience and thus the potential types of "clients" I may encounter is rather small - developers who use my product, that is deployed in their private data center or AWS EC2 machines, and need to write a script in whatever language to automate some task rather than doing it via UI.
REST in the sense of Fielding's architectural style shines when there are a multitude of different clients interacting with your application and when support for future evolution needs to be inherently integrated into the design. REST just gives you the flexibility to add new features down the road, and well-behaved REST clients will just pick them up and continue. If you are only interacting with a very limited set of clients, especially ones under your control, or if the likelihood of future changes is very small, REST might be overkill and not justify the additional overhead caused by the careful design and implementation.
... some technical considerations like caching are relevant. Human user considerations, like how consistent the API design is across various resources, and how easy it is to learn, are also relevant. But "can some 3rd party crawler identify the next actions it can perform from a given state" isn't so relevant ...
The term API design already indicates that a more RPC-like approach is desired, where certain operations are exposed that users can invoke to perform some task. This is all fine as long as you don't call it a REST API from Fielding's standpoint. The plain truth is that there are hardly any applications/systems out there that really follow the REST architectural style, but there are tons of "bad examples" that misuse the term REST and therefore paint a wrong picture of the REST architecture, its purpose, and its benefits and weaknesses. This is partly a problem caused by people not reading Fielding's thesis (carefully) and partly due to the overall preference for pragmatism and using/implementing shortcuts to get the job done ASAP.

In regards to the pragmatic take on "REST", it is hard to give an exact answer, as everyone seems to understand something different by it. Most of those APIs rely on external documentation anyway, such as Swagger, OpenAPI and whatnot, and there the URI seems to be the thing that gives developers a clue about the purpose. So a URI ending with .../settings/reset should be clear to most developers. Whether the URI has an RPC smell to it, and whether or not to follow the semantics of the respective HTTP operations (i.e. partial PUT or payloads within GET), is your design choice, which you should document.
It is okay to use POST
POST serves many useful purposes in HTTP, including the general purpose of “this action isn’t worth standardizing.”
POST /settings HTTP/x.y
Content-Type: text/plain
Please restore the default settings
On the web, you'd be most likely to see this as a result of submitting a form; that form might be embedded within the representation of the /settings resource, or it might live in a separate document (that would depend on considerations like caching). In that setting, the payload of the request might change:
POST /settings HTTP/x.y
Content-Type: application/x-www-form-urlencoded
action=restoreDefaults
On the other hand: if the semantics of this message were worth standardizing (ie: if many resources on the web should be expected to understand "restore defaults" the same way), then you would instead register a definition for a new method token, pushing it through the standardization process and promoting adoption.
So it would be in this definition that we would specify, for instance, that the semantics of the method are idempotent but not safe, and also define any new headers that we might need.
there is a bit in it that conflicts with this idea of using POST to reset "The only thing REST requires of methods is that they be uniformly defined for all resources". If most of my resources are typical CRUD collections, where it is universally accepted that POST will create a new resource of a given type
There's a tension here that you should pay attention to:
The reference application for the REST architectural style is the world wide web.
The only unsafe method supported by HTML forms was POST
The Web was catastrophically successful
One of the ideas that powered this is that the interface was uniform -- a browser doesn't have to know if some identifier refers to a "collection resource" or a "member resource" or a document or an image or whatever. Neither do intermediate components like caches and reverse proxies. Everybody shares the same understanding of the self descriptive messages... even the deliberately vague ones like POST.
If you want a message with more specific semantics than POST, you register a definition for it. This is, for instance, precisely what happened in the case of PATCH -- somebody made the case that defining a new method with additional constraints on the semantics of the payload would allow richer, more powerful general-purpose components.
The same thing could happen with the semantics of CREATE, if someone were clever enough to sit down and make the case (again: how can general purpose components take advantage of the additional constraints on the semantics?)
But until then, those messages should be using POST, and general-purpose components should not assume that POST has create semantics, because RFC 7231 doesn't provide those additional constraints.
We are building a set of new REST APIs.
Let's say we have a resource /users with the following fields:
{
    "id": 1,
    "email": "test@user.com"
}
Clients implement this API and can then update this resource by sending a new resource representation to PUT /users/1.
Now let's say we add a new property name to the model like so:
{
    "id": 1,
    "email": "test@user.com",
    "name": "test user"
}
If the models the existing clients use to call our API are not updated, then calls to PUT /users/1 will remove the new name property, since PUT is supposed to replace the resource. I know that the clients could work directly with the raw JSON to ensure they always preserve any new properties added in the API, but that is a lot of extra work, and under normal circumstances clients are going to create their own model representations of the API resources on their side. This means that any time any new property is added, all clients need to update the code/models on their side to make sure they aren't accidentally removing properties. This creates unneeded coupling between systems.
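To make the failure mode concrete, a sketch of the round trip (paths and values from the example above):

GET /users/1 returns { "id": 1, "email": "test@user.com", "name": "test user" }

An old client deserializes that into its two-field model and later writes it back:

PUT /users/1
{ "id": 1, "email": "test@user.com" }

The name property is now silently gone, even though the client never intended to delete it.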
As a way to solve this problem, we are considering not implementing PUT operations at all and switching updates to PATCH where properties that aren't passed in are simply not changed. That seems technically correct, but might not be in the spirit of REST. I am also slightly concerned about client support for the PATCH verb.
How are others solving this problem? What is the best practice here?
You are in a situation where you need some form of API versioning. The most appropriate way is probably using a new media-type every time you make a change.
This way you can support older versions and a PUT would be perfectly legal.
If you don't want this and just stick to PATCH: PATCH is supported everywhere except in ancient browsers. Not something to worry about.
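A hedged sketch of the media-type versioning suggested above (the vendor type names are invented):

PUT /users/1 HTTP/1.1
Content-Type: application/vnd.myapp.user.v2+json

{ "id": 1, "email": "test@user.com", "name": "test user" }

A client still on v1 keeps sending application/vnd.myapp.user.v1+json, so the server knows not to expect (or erase) the name property for that client.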
Switching from PUT to PATCH will not fix your problem, IMO. The root cause is that clients already consider the data returned for a representation to follow a certain type. According to Fielding:
A REST API should never have “typed” resources that are significant to the client.
(Source)
Instead of using typed resources, clients should use content-type negotiation to exchange data. Here, media-type formats that are generic enough to gain widespread adoption are for sure beneficial; certain domains may, however, require a more specific representation format.
Think of a car-vendor Web page where you can retrieve the data from your preferred car. You, as a human, can easily identify that the data depicts a typical car. However, the media-type you most likely received the data in (HTML) does not state by its syntax or the semantics of its elements that the data describes a car, unless some semantic annotation attributes or elements are present, though you might be able to update the data or use the data elsewhere.
This is possible because HTML ships with a rich specification of its elements and attributes, such as Web forms, which not only describe the supported or expected input parameters but also the URI to send the data to, the representation format to use upon sending (implicitly application/x-www-form-urlencoded; it may be overridden by the enctype attribute), and the HTTP method to use, which is fixed to either GET or POST in HTML. Through this, a server is able to teach a client how a request needs to be built. As a consequence, the client does not need to know anything besides the HTTP, URI and HTML specifications.
As Web pages are usually filled with all kinds of unrelated stuff, such as ads, styling information or scripts, and the XML(-like) syntax is not everyone's favourite, as it may slightly increase the size of the actual payload, most so-called "REST" APIs want to exchange JSON-based documents. While plain JSON is not an ideal representation format, as it does not ship with link support at all, it is very popular. Certain additions such as JSON Hyper-Schema (application/schema+json hyper-schema) or JSON Hypertext Application Language (HAL) (application/hal+json) add support for links and link relations. These can be used to render data received from the server as-is.

However, if you want a response to automatically drive your application state (i.e. to dynamically draw the GUI with the processed data), a more specific representation format is needed that your client can parse and act on accordingly, as it understands what the server wants it to do with it (= affordance). If you want to instruct a client on how to build a request, other media types such as HAL-FORMS or Ion need to be supported. Certain media types furthermore allow you to use a concept called profiles, which lets you annotate a resource with a semantic type. HAL JSON, for example, supports something like that: the Content-Type header may now contain a value such as application/hal+json;profile=http://schema.org/Car, which hints to the media-type processor that the payload follows the definition of the given profile and may thus apply further validity checks.
As the representation format should be generic enough to gain widespread usage, and URIs themselves shouldn't hint to a client what kind of data to expect, another mechanism needs to be used. Link relation names are basically annotations for URIs that tell a client about the purpose of a certain link. A pageable collection might return links annotated with first, prev, next and last, which are pretty obvious in what they do. Other links might be hinted with prefetch, telling a client that the resource can be loaded right after loading the current resource has finished, as it is very likely that the client will retrieve it next. Such link relations, however, should either be standardized (defined in a proposal or RFC and registered with IANA) or follow the scheme proposed by Web Linking (i.e. as used by Dublin Core). A client that just looks up the URI for an invoked link-relation name will still work in case the server changes its URI scheme, instead of attempting to parse parameters from the URI itself.
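For example, a pageable collection in HAL might carry links like these (the hrefs are invented):

{
  "_links": {
    "self": { "href": "/orders?page=2" },
    "first": { "href": "/orders?page=1" },
    "prev": { "href": "/orders?page=1" },
    "next": { "href": "/orders?page=3" },
    "last": { "href": "/orders?page=9" }
  }
}

A client that always follows the next relation keeps working even if the server later changes its URI scheme entirely.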
In regards to de/coupling in a distributed system, a certain amount of coupling has to exist, otherwise parties won't be able to communicate at all. The point here is that the coupling should be based on well-defined and standardized formats that plenty of clients may support, instead of exchanging specific representation formats only a very limited number of clients support (in the worst case only your own client). Instead of coupling directly to the API and using an undefined JSON-based syntax (maybe with external documentation of the semantics of the respective fields), the coupling should occur on the media types parties can use to exchange the data. Here, the question to ask is not which media type to support but how many you want to support. The more media types your client or server supports, the more likely it is to interact with other peers in the distributed system. In the grand scheme of things, you want a server to be able to serve a plethora of clients, while a single client should be able to interact with (in the best case) every server, without the need for constant adaptation.
So, if you really want to decouple clients from servers, you should take a closer look at how the Web actually works and try to mimic its interaction model in your application layer. As "Uncle Bob" Robert C. Martin mentioned:
An architecture is about intent! (Source)
and the intention behind the REST architecture is the decoupling of clients from servers/services. As such, supporting multiple media types (or defining your own that is generic enough to reach widespread adoption), looking up URIs just via their accompanying link-relation names, relying on content-type negotiation, and relying only on the provided data may help you achieve the degree of decoupling you are looking for.
All nice and well in theory, but so far every rest api I encountered in my career had predefined contracts that changed over time.
The problem here is that almost all of those so-called "REST APIs" are RPC services at heart, which should not be termed "REST" to start with - this is, though, a community issue. Usually such APIs ship with external documentation (i.e. Swagger) that just re-introduces the same problems classical RPC solutions, such as CORBA, RMI or SOAP, suffer from. The documentation may be seen as the IDL in that process, without the strict need for skeleton classes, though most "frameworks" use some kind of typed data classes that will either ignore a recently introduced field (in the best case) or totally blow up on invocation.
One of the problems REST suffers from is that most people haven't read Fielding's thesis and therefore don't see the big picture REST tries to establish, yet claim to know what REST is; they mix things up and call their services RESTful, which leads to a situation where REST != REST. The ones pointing out what a REST architecture is and how one might achieve it are called out as dreamers and unworldly, while the ones proclaiming the wrong term (RPC over HTTP = REST) continue to do so, adding to the confusion of especially those just learning the whole matter.
I admit that developing a true REST architecture is really, really hard, as it is just too easy to introduce some form of coupling. Hence, a very careful design needs to be done, which takes time and also costs money. It is money plenty of companies can't or don't want to spend, especially in a domain where new technologies evolve on a regular basis and the ones responsible for developing such solutions often leave the company before the whole process has finished.
Just saying it shouldn’t be ‘typed’ is not really a viable solution
Well, how often did you need to change your browser because it couldn't interact with a Web page? I'm not talking about browser-specific CSS or JS stuff. How often did the Web itself need to change in the last 2-3 decades? Similar to the Web, the REST architecture is intended for long-lasting applications for years to come that support natural evolution by design. For simple frontend-to-backend systems it is for sure overkill. It starts to shine especially in cases where there are multiple peers, not under your control, that you interact with.
I read that HATEOAS links are the ones that separate a REST API from a normal HTTP API. In that case, does REST need a separate name? I wonder what all this hype about REST API is about. It seems to be just an HTTP method with one extra rule in the response.
Q) What other differences exist?
I read that HATEOAS links are the ones that separate a REST API from a normal HTTP API.
That's probably a little bit of an understatement. When Leonard Richardson (2008) described the "technology stack" of the web, he listed:
URI
HTTP
HTML
A way of exploring the latter is to consider how HTML, as a media type, differs from a text document with URIs in it. To my mind, the key elements are links and forms -- standardized ways of encoding into the representation the semantics of a URI (this is a link to another page, this is an embedded image, this is an embedded script, this is a form...).
Mike Amundsen, 2010:
Hypermedia Types are MIME media types that contain native hyper-linking semantics that induce application flow. For example, HTML is a hypermedia type; XML is not.
Atom Syndication/Atom Publishing is a good demonstration for defining a REST API.
Can you throw some light on what REST actually means and how it differs from normal http?
Have you noticed that websites don't normally use plain text for the representations of the information that they share? It's something of a dead end -- raw text doesn't have any hypermedia semantics built into it, so a generic client can't do anything more interesting than search for sequences that might be URIs.
On the other hand, with HTML we have link semantics: we can include references to images, to style sheets, to scripts, as well as linking to other documents. We can describe forms, that allow the creation of parameterized HTTP requests.
Additionally, that means that if some relation shouldn't be used by the client, the server can easily change the representation to remove the link.
Furthermore, the use of the hypermedia representation allows the server to use a richer description of which request message should be sent by the client.
Consider, for example, Google. They can use the form to control whether search requests use GET or POST. They can remove the "I Feel Lucky" option, or arrange that it redirects to the main experience. They can embed additional information in to the fields of the form, to track what is going on. They can choose which URI targets are used in the search results, directing the client to send to Google another request which gets redirected to the actual target, with additional meta data embedded in the query parameters, all without requiring any special coordination with the client used.
For further discussion, see Leonard Richardson's slide deck from QCon 2008, or Phil Sturgeon's REST and Hypermedia in 2019.
Doesn't the client need to read the documentation if the HATEOAS link is a POST API? HATEOAS links will only guide you to an API but will not throw any light on how its request body needs to be filled... GET won't have a request body, so that's not much of a problem. But a POST API?
Sort of - here's Fielding writing in 2008:
REST doesn’t eliminate the need for a clue. What REST does is concentrate that need for prior knowledge into readily standardizable forms.
On the web, the common use case is agents assisting human beings; the humans can resolve certain ambiguities on their own. The result is a separation of responsibilities; the humans decode the domain specific semantics of the messages, the clients determine the right way to describe an interaction as an HTTP request.
If we want to easily replace the human with a machine, then we'll need to invest extra design capital in a message schema that expresses the domain specific semantics as clearly as we express the plumbing.
To me, REST is an ideology you want to aim for if you have a system that should last for years to come which has the freedom to evolve freely without breaking stuff on parts you can't control. This is very similar to the Web where a server can't control browsers directly though browsers are able to cooperate with any changes done to Web site representations returned by the server.
I read that HATEOAS links are the one that separates a REST API from a normal http API. In that case, does REST need a separate name?
REST does basically what its name implies: it transfers the state of a resource representation. If so, we should come up with a new name for those "REST" APIs that are truly RPC at heart, to avoid confusion.
If you read through the Richardson Maturity Model (RMM), you might get the impression that links, or hypermedia controls as Fowler named them, which are mandatory at Level 3, are the feature that separates REST from a normal HTTP API. However, Level 3 alone is not enough to reach the ultimate goal of decoupling.
Most so called "REST APIs" do put a lot of design effort into pretty URIs in a way to express meaning of the target resource to client developer. They come up with fancy documentation generated by their tooling support, such as Swagger or similar stuff, which the client developer has to follow stringent or they wont be able to interact with their API. Such APIs are RPC though. You won't be able to point the same client that interacts with API A to point to API B now and still work out of the box as they might use completely different endpoints and return different types of data for almost the same named resource endpoint. A client that is attempting to use a bit more of dynamic behavior might learn the type from parsing the endpoint and expect a URI such as .../api/users to return users, when all of a sudden now the API changed its URI structure to something like .../api/entities. What would happen now? Most of these clients would break, a clear hint that the whole interaction model doesn't follow the one outline by a REST architecture.
REST puts emphasis on link relation names, which give clients a stable way of learning a URI's intent while allowing the URI itself to change over time. A URI is basically attached to a link relation name and represents an affordance, something that makes clear what it does. E.g., the affordance of a button is that you can press it and something will happen as a result; the affordance of a light switch is that a light goes on or off depending on the toggled state of the switch.
Link relation names express such an affordance and are a text-based way to represent something like the trash-bin or pencil symbol next to a table entry on a Web page, where you might figure out that clicking one will delete the entry while the other lets you edit it. Such link relation names should either be standardized, use widely accepted ontologies, or use custom link-relation extensions as outlined by RFC 8288 (Web Linking).
It is important to note, however, that a URI is just a URI and should not convey semantic meaning to a client. This does not mean that a URI can't have semantic meaning to the server or API, but a client should not attempt to deduce one from the URI itself. This is what the link relation name is for: it provides the infrequently changing part of that relation. An endpoint might be referenced by multiple different URIs, some of which might use different query parameters for filtering. According to Fielding, each of these URIs represents a different resource:
The definition of resource in REST is based on a simple premise: identifiers should change as infrequently as possible. Because the Web uses embedded identifiers rather than link servers, authors need an identifier that closely matches the semantics they intend by a hypermedia reference, allowing the reference to remain static even though the result of accessing that reference may change over time. REST accomplishes this by defining a resource to be the semantics of what the author intends to identify, rather than the value corresponding to those semantics at the time the reference is created. It is then left to the author to ensure that the identifier chosen for a reference does indeed identify the intended semantics. (Source 6.2.1)
As URIs are used for caching results, they basically represent the keys used for caching the response payload. As such, it becomes obvious that adding additional query parameters to URIs used in GET requests bypasses caches, as the key is not yet stored in the cache, and you therefore get the result of a different resource, even though its response representation might be identical to that of the URI without the additional parameter.
I wonder what all this hype about REST API is about. It seems to be just an HTTP method with one extra rule in the response.
In short, this is the picture those self- or marketing-termed pseudo "REST APIs" convey, and what many people seem to understand.
The hype for "REST" arose from the inconveniences put onto developers on interacting with other interop-solutions such as Corba, RMI or SOAP where often partly-commercial third-party libraries and frameworks had to be used in order to interact with such systems. Most languages supported HTTP both as client and server out of the box removing the requirement for external libraries or frameworks per se. In addition to that, RPC based solution usually require certain stub- or skeleton-classes to be generated first, which was usually done by the build pipeline automatically. Upon updates of the IDL, such as WSDL linking or including XSD schemata, the whole stub-generation needed to be redone and the whole code needed to looked through in order to spot whether a breaking change was added or not. Usually no obvious changelog was available which made changing or updating such stuff a pain in the ...
In those pseudo "REST" APIs plain JSON is now pretty much the de facto standard, avoiding the step of generating stub classes and the hazzle of analyzing the own code to see whether some of the forced changes had a negative impact on the system. Most of those APIs use some sort of URI based versioning allowing a developer to see based on the URI whether something breaking was introduced or not, mimicking some kind of semantic versioning.
The problem with those solutions, though, is that it is not the response representation format itself that is versioned but the whole API, which leads to common issues when a change to only one part of the API needs to be introduced, as the whole API's version has to be bumped. In addition, two URIs such as .../api/v1/users/1234 and .../api/v2/users/1234 may represent the same user and thus the same resource, yet are in fact different by nature, as the URIs differ.
Q) What other differences exist?
While REST is just an architectural model that can't force you to implement it stringently, you simply will not benefit from its properties if you ignore some of its constraints. As mentioned above, HATEOAS support alone is therefore not enough to really decouple all clients from an API and thus benefit from the REST architecture.
RMM unfortunately does not talk about media types at all. A media type basically specifies how a received payload should be processed and defines the semantics and constraints of each of the elements used within that payload. E.g., if you look at text/html in IANA's media type registry, you can see that it points to the published specification, which always references the most recent version of HTML. HTML is designed to stay backwards compatible, so no special versioning is required.
HTML provides, IMO, two important things:
semi-structured content
form support
The former allows structuring data, giving certain segments or elements the possibility to express the different semantics defined in the media type. E.g., a browser will handle an image differently than a div element or an article element. A crawler might favor links and content contained in an article element and ignore script and image elements completely. Based on the existence or absence of certain elements, even the processing itself may differ.
Including support for forms is actually a very important thing in REST, as this is the feature that allows a server to teach a client what the server needs as input. Most so-called "REST APIs" just force a developer to go through their documentation, which might be outdated, incorrect or incomplete, and send data to a predefined endpoint according to that documentation. In the case of outdated or incomplete documentation, how should a client ever be able to send data to the server? Moreover, the server can hardly ever change, as the documentation is now the truth and the API has to align with it.
Unfortunately, form support is still a bit in its infancy. Besides HTML, which provides <form>...</form>, you have a couple of JSON-based form attempts such as HAL-FORMS, halo+json (halform), Ion or Hydra. None of these has wide library or framework support yet, as some of these form representations still have not finalized their specification of how to support forms more effectively.
Other media types, unfortunately, might not use semi-structured content or provide support for forms that teach a client about the needs of a server, though they are still valuable to REST in general. First, through Web Linking, link support can be added to media types that do not naturally support it. Second, the data itself does not really need to be text-based at all for an application to use it further. E.g., pictures and videos are usually encoded and byte-based anyway, yet a client can still present them to users.
The main point about media types, though, as Fielding already pointed out in one of his cited blog posts, is that representations shouldn't be confused with types. Fielding stated that:
A REST API should never have “typed” resources that are significant to the client. Specification authors may use resource types for describing server implementation behind the interface, but those types must be irrelevant and invisible to the client. The only types that are significant to a client are the current representation’s media type and standardized relation names.
Jørn Wildt explained in an excellent blog post what a "typed" resource is and why a REST architecture shouldn't use such types. Basically, to sum the blog post up: a client expecting an ../api/users endpoint to return a pre-assumed data payload might break if the server adds additional, unexpected fields, renames existing fields or leaves out expected fields. This coupling can be avoided by using simple content-type negotiation, where a client informs the server which capabilities it supports and the server chooses the representation that best fits the target resource. If the server can't serve the client a representation the client supports, it should respond with a failure (or a default representation) that the client might log to inform the user.
This, in essence, is exactly what the name REST stands for: the transfer of a resource's state representation, where the representation may differ depending on the format defined by the selected media type. While HATEOAS may be one of the most obvious differences between REST and a non-REST HTTP solution, it is for sure not the only factor that makes up REST. I hope I could shed some light on the decoupling intention, on how a server should teach clients what it expects through forms, and on how the affordance of URIs is captured by link relation names. All these tiny aspects in sum make up REST, and you will only benefit from REST, unfortunately, if you respect all of its constraints, not only those that are easy to obtain or that you are in the mood to implement.
We have a lot of different microservices, and among other things we test the REST APIs of these microservices. These automated REST API tests are not included in the microservice projects/repos. Instead we have a test-automation project containing all the different tests, like API tests and end-to-end tests, because we want to test everything as a black box.
Now the problem is that infinitely many combinations of different microservice versions are possible to test against on a test environment (example: executing the tests today against Microservice A version 1.0 and Microservice B version 2.0 is different from executing the same tests tomorrow against Microservice A version 1.1 and Microservice B version 2.1). So we will need some kind of versioning or tagging of our test-automation project or the executed tests, so that we are able to identify which combinations of microservice versions are valid and which are not valid/working, because e.g. some tests will fail.
Are there any recommendations or experience reports on implementing and integrating such a versioning/tagging mechanism?
To me, the actual problem is already rooted in your design. You state that you maintain microservices based on a REST architecture, yet in such an environment you don't need to version any endpoint to begin with. Fielding himself answered how an API in a REST environment should be versioned by simply responding: Don't.
But why is that? One of the few constraints REST imposes is HATEOAS (or "Hate-Us", as I tend to pronounce it), which stands for Hypermedia As The Engine Of Application State. This acronym basically describes the interaction model used on the Web, which makes no presumption about the content to be received and simply renders to the user whatever it receives, including any URIs returned by the server. A browser will trigger a state change upon calling an endpoint targeted by a URI the user invokes. This might be a link, an image or a form button (or whatnot). The core idea is that the API or server supplies the client with all the information it needs to take further actions, and the client just presents the results to the user.
While browsing the Web page of your preferred manufacturer or vendor, you might notice that you'll most likely receive an HTML page containing images, links and further content. The browser itself isn't aware of the product offered on that site, yet it is still able to render the result to the user because it knows how to render HTML. If you visit another page, your browser will still be able to render HTML regardless of the content that page offers. And if one of these pages changes in some way, your browser will still be able to render the result to you, unless the server responds with a media type your browser isn't aware of, which is very unlikely on the Web.
What most self-proclaimed "REST" APIs return, however, is some arbitrary content specific to a certain API, even though most of them use application/json as the representation format. Such an API is usually consumed by a tailor-made client that has knowledge of the API built in, but that is very unlikely to be able to interact with any other API out there. If something changes on the API level, the likelihood of breaking that client without a corresponding update is therefore high. This is very common in RPC-like systems such as SOAP, RMI and CORBA.
Such clients often assume that certain endpoints, such as /api/users/12345, return data about a particular user, most likely in a JSON representation. The payload is then unmarshalled into an object of the underlying programming language, probably ignoring unknown fields and nulling out specified fields that aren't present in the response. The fundamental problem here is that clients assume certain endpoints return certain types. "Smart" developers will now introduce versioning of the endpoints, so that the URI above changes to /api/v1/users/12345 for a JSON representation containing the old fields, while /api/v2/users/12345 returns the new fields. Both versions, however, still describe the same user. Having two different URIs for the same user is already bad design per se, and the versioning of a single endpoint usually does not come alone: usually the whole API is versioned, so that when you encounter a breaking change you are forced to introduce an entirely new API version, either copying the otherwise unmodified resources or reusing the same models internally, just exposed yet again under multiple URIs.
Instead of assuming that endpoints return a certain type in a predefined representation format, clients and servers should negotiate about the content. HTTP in particular supports content-type negotiation, where a client informs a server about its capabilities and the server responds in a representation format the client understands. This could be something like application/vnd.acmee-users+json or application/vcard+xml or the like. A client understanding application/vnd.acmee-users.v2+json, for example, might be served the new representation, while older clients will still inform the server that they only understand application/vnd.acmee-users+json and be served that representation instead. How the server handles the change internally is of no interest to the client; it is only interested in a representation format it can handle.
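A minimal server-side sketch of that negotiation, assuming Flask and reusing the hypothetical acmee media types from above, might look like this:

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    V1 = "application/vnd.acmee-users+json"
    V2 = "application/vnd.acmee-users.v2+json"

    @app.route("/users/<user_id>")
    def get_user(user_id):
        # Werkzeug parses the Accept header, q-values included, for us.
        best = request.accept_mimetypes.best_match([V2, V1])
        if best == V2:
            body = {"id": user_id, "nickname": "..."}   # new shape
        elif best == V1:
            body = {"username": "..."}                  # legacy shape
        else:
            return "No supported representation", 406   # Not Acceptable
        resp = jsonify(body)
        resp.headers["Content-Type"] = best
        return resp

Note that old and new clients hit the same URI; only the negotiated representation differs.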
Versioning media types is, however, also not the preferred way of expressing change for some architects out there, as you fundamentally still describe the same thing, just with slightly different syntax or slightly different semantics. HTML, for example, still ships as text/html, not as text/html5 or the like. It is explicitly designed to stay backwards compatible: HTML5 output generated by a server will still be rendered by a browser that only supports HTML 4.01 or even HTML 2. Maybe not every element will be rendered exactly as on an HTML5-compatible browser, but the client won't stop working unexpectedly.
Mark Nottingham, co-chair of the IETF HTTP Working Group, stated that the underlying principle of versioning is to not break existing clients. In his words:
This implies that API versioning absolutely cannot be tied to software versioning in any way; doing so will needlessly limit (and often break) your clients, and generally upset people. (Source)
Nottingham even states that the product token used in User-Agent or Server headers should be preferred over any URI or media-type versioning for producing responses specific to certain software versions. Given the sheer number of client software out there, I'm not the biggest fan of that approach, as it would require the server to have detailed knowledge of the capabilities of the HTTP clients and versions in use. For APIs that only have a limited number of clients, though, most of them probably under the same control as the API/server, this could be a viable approach.
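Just to illustrate the mechanics (not to endorse them), a server following the product-token approach might branch on the User-Agent header roughly like this; the client name and payload shapes are invented:

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/users/<user_id>")
    def get_user(user_id):
        ua = request.headers.get("User-Agent", "")
        # e.g. "AcmeeClient/1.2" -> product "AcmeeClient", version "1.2".
        # A real implementation would parse the version number properly
        # rather than comparing strings.
        if ua.startswith("AcmeeClient/") and ua.split("/", 1)[1] < "2":
            return jsonify(username="...")          # shape old clients expect
        return jsonify(id=user_id, nickname="...")  # current shape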
As you can see, in a REST architecture there is no real need to version the endpoints themselves, as clients will only process what they are served by the API/server. Whether a product-token approach is preferable to a media-type-based one is rather a matter of opinion. The latter, however, should be based on standardized media types registered with IANA. In the best case, the media type itself is designed in a backward-compatible way, like HTML, which avoids introducing new media types for the same things over and over again.
As Phil Sturgeon mentioned in one of his blog posts
If people are going to design their APIs as RPC with a RESTish facade, they should just commit to being an RPC API and build endpoints for specific clients like they’re literally already doing.
Just be honest about it. Ditch the false intention, RPC the lot, document as such, and maybe just use gRPC.
I've been convinced by a fellow developer (who has since left) that the proper way to evolve RESTful web services is by creating custom media types for your services.
For example application/vnd.acme.payroll.v1+json
This way, you can tell your client to specify the encoding to use without changing the URI.
Is this technique a good one? Services usually embed the version in the URL instead:
e.g. /acme/1.0/payroll/
I've had a lot of difficulty getting clients to use this scheme, especially as it seems DELETE does not involve a media type.
There are a few main signaling mechanisms you can use in a RESTful service:
The media type
The rel of a resource you are linking to.
Custom headers, like Accept-Version/Api-Version.
Each of these has distinct uses, and I will outline the ways in which we have come to understand them while designing our API.
Media Types
To signal what operations are possible on a given resource, and what the semantics of these operations are, many use custom media types. In my opinion, this is not quite correct, and a rel is more accurate.
A custom media type should tell you about the type of the data, e.g. its format or the way certain information is embodied or embedded. Having a custom media type means consumers of your API are tightly coupled to that specific representation. Whereas, using something more generic like application/json says "this is just JSON data."
Usually JSON alone is not enough for a RESTful service, since it has no built-in linking or resource-embedding functionality. That is where something like HAL (application/hal+json) comes in. It is a specialization of JSON that is still a generic format, and not application-specific. But it gives just enough to overlay the linking and embedding semantics on top of JSON that is necessary for coherently expressing a RESTful API.
Link Relation Types (rels)
This brings us to rels. To me, a custom rel is a perfect way to signal what type of resource is being dealt with or linked to. For example, a custom rel for a user resource might be http://rel.myapi.com/user, which serves two purposes:
Clients of your API must know this key ahead of time, as it is API-specific knowledge. For example, if it was available on your initial resource and you were using HAL to link to the user resource, clients might find the user link via initialResource._links["http://rel.myapi.com/user"].href.
Developers writing API clients can visit that URI in their web browser, and get an explanation of what that resource represents in your API, including what methods are applicable and what they do. This is a very convenient way to communicate that API-specific knowledge I mentioned. For examples of this, see http://rel.nkstdy.co.
If you combine rels with a standard or semi-standard media type like application/hal+json, you get resources which follow a uniform format specified by their media type, with API-specific semantics defined by their rels. This gets you almost all the way there.
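To make that concrete, here is a rough sketch (the entry-point URL and document layout are assumptions): a HAL response advertising the hypothetical http://rel.myapi.com/user rel from above, and a Python client that follows it by rel rather than by a hard-coded URI:

    import requests

    # The entry point might return a HAL document such as:
    # {
    #   "_links": {
    #     "self":                      {"href": "/"},
    #     "http://rel.myapi.com/user": {"href": "/users/12345"}
    #   }
    # }
    entry = requests.get("https://api.example.com/",
                         headers={"Accept": "application/hal+json"}).json()

    # The client knows only the rel, never the URI structure behind it,
    # so the server is free to move the resource around.
    user_href = entry["_links"]["http://rel.myapi.com/user"]["href"]
    user = requests.get("https://api.example.com" + user_href,
                        headers={"Accept": "application/hal+json"}).json()
    print(user)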
Custom Headers
The remaining question is versioning. How do you allow clients to negotiate different versions of the resource, while not invalidating old URIs?
Our solution, inspired by the Restify Node.js framework, is two custom headers: Accept-Version from the client, which must match X-Api-Version from the server (or Api-Version in the upcoming Restify 2.0 release, in line with the new RFC 6648). If they don't match, the result is a 400 Bad Request.
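In Flask terms rather than Restify (the version string and error shape here are assumptions), the handshake could be sketched like this:

    from flask import Flask, jsonify, request

    API_VERSION = "1.0.0"
    app = Flask(__name__)

    @app.before_request
    def check_version():
        wanted = request.headers.get("Accept-Version")
        if wanted is not None and wanted != API_VERSION:
            # The client asked for a version we don't serve.
            return jsonify(error="unsupported version " + wanted), 400

    @app.after_request
    def stamp_version(resp):
        resp.headers["Api-Version"] = API_VERSION
        return resp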
I admit that custom media types are a fairly popular solution here. In my opinion they don't fit very well conceptually, in light of the above considerations, but you would not be doing something weird if you chose them as your versioning mechanism. The approach does have some semantic issues when used with methods other than GET, though, as you note.
One thing to keep in mind is that in a truly RESTful system, versioning should not be such an issue. It should only matter in one very specific situation: when the representations of your resources change in backward-incompatible ways, but you still want to keep the same rels. So if the http://rel.myapi.com/friend resource suddenly loses its username field and gains an id field, that would qualify. But if it suddenly gains a nickname field, that's not backward-incompatible, so no versioning is needed. And if the concept of "friends" is completely replaced in your API with the concept of, say, "connection", this is not actually backward-incompatible, because API consumers will simply no longer find http://rel.myapi.com/friend links anywhere in the API for them to follow.
Yes, it's a good option. It clarifies the encoding you'll be using for payloads and lets both sides negotiate a different version of the encoding without changing the URI, as you correctly pointed out.
And yes, there's no need for a client to send an entity body along with a DELETE. I believe a compliant HTTP server will simply ignore one, given that no payload data needs to be transferred in that case. The client issues a DELETE for a URI, and the server returns a response code indicating whether it succeeded. Nice and simple! If the server wishes to return some data after a DELETE, it is free to do so, and it should specify the media type of the response when it does.
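For completeness, a bare-bones client-side DELETE (the URL is hypothetical):

    import requests

    resp = requests.delete("https://api.example.com/acme/payroll/42")
    # No request body, so no media type to argue about; the status code
    # alone says whether it worked (e.g. 204 No Content).
    print(resp.status_code)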