What's the correct way to view idempotency in terms of HTTP DELETE? - rest

I have spent a lot of time recently reading the HTTP 1.1 specification and relating it to REST. I have found that there are two interpretations of the HTTP DELETE method in regards to its "idempotency" and safety. Here are the two camps:
If you delete a resource with HTTP DELETE, and it succeeds (200 OK), and then you try to delete that resource N more times, you should get back a success message (200 OK) for each and every one of those delete calls. This is its "idempotency".
If you delete a resource with HTTP DELETE, and it succeeds (200 OK), and then you try to delete that resource again, you should get back an error message (410 Gone) because the resource was deleted.
The specification says DELETE is idempotent, sure, but it also says that sequences of idempotent events can still produce side effects. I really feel like the second camp is correct, and the first is misleading. What "safety" have we introduced by letting clients believe they were the cause of deleting a resource that had already been deleted?
There are a LOT of people in the first camp, including several authors on the subject, so I wanted to check whether there is some compelling reason other than emotion that leads people into the first camp.

Being idempotent does not mean that a request is not allowed to have side-effects (that's what the 'safe' property describes). It just means that issuing the same request multiple times will not result in different or additional side-effects.
In my opinion, the subsequent DELETE request should return an error - it's still idempotent because the state of the server is the same as if only one DELETE request were made. Then again, returning the 200 OK status should be fine as well - I don't think being idempotent requires returning an error code for subsequent DELETE requests; it's just that returning the error status makes more sense to me.

@MichaelBurr is correct about idempotency and side-effects.
My opinion is that there are two states involved in a given REST request: the client's state and the server's state. REST is all about transferring these states between the server and the client, such that the client's state maps to a subset of the server's state; in other words, the subset stays consistent with the server. Because of that, idempotency should mean that subsequent idempotent requests will not result in either state being different than it would be from making the request only once. With the first DELETE you would imagine that the server deletes the resource and lets the client know it can delete the resource as well (as the resource "doesn't exist anymore"). Now both states should be identical to before, minus the item that was deleted. For the client to do anything different when it tries to delete the item after it has already been deleted, the state that is transferred from the server to the client must contain different information. The server can do things slightly differently with the knowledge that the resource was already deleted, but once it responds with something different, the idempotency of the method is essentially broken.
For an idempotent function:
delete(client_state) -> client_state - {item}
delete(delete(client_state)) -> client_state - {item}
delete(client_state) = delete(delete(client_state))
The best way to guarantee this idempotency is for the server's response to be identical; then the only way for the client's state to break idempotency is through non-determinism or side effects in the client's handling of the response (which probably points to an incorrect implementation of the response handling).
If there is an agreement between the client and server that the status codes exist outside of the representation of the state being transferred (REST), then it is possible to inform the client that the item "doesn't exist anymore" (as it would in the first request) with the extra comment that it had previously been deleted. What the client does with this information is unclear, but it shouldn't affect the resulting client state. But then the status code can't be used to communicate state, or rather, if it does also communicate state in other situations (like maybe "you don't have permission to delete this item" or "item was not deleted"), there's some introduced ambiguity or confusion. So you at least need a pretty good reason for introducing more confusion into the communication if you want to say that DELETE is idempotent and still have the server's response depend on previous identical DELETE requests.
HTTP requests involve remote methods, so the function might look more like
delete(client_state) = send_delete(client_state) -> receive_delete(client_state)
-> respond_to_delete(informative_state)
-> handle_response(informative_state)
-> client_state - {item}
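To make the notation concrete, here is a minimal Python sketch of the same idea (the function and item names are invented for illustration): applying the delete twice leaves the client state exactly as it is after applying it once.

def delete(client_state: set, item: str = "item-123") -> set:
    # Remove the item from the client's view of the server state.
    return client_state - {item}

state = {"item-123", "item-456"}
once = delete(state)
twice = delete(delete(state))
assert once == twice == {"item-456"}   # idempotent: no additional effect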

Wikipedia defines Idempotence as an operation that:
can be applied multiple times without changing the result beyond the initial application.
Notice that they talk about the result of the operation. To me, this includes both the server state and the response code.
The HTTP specification is a bit more vague on the matter. It specifies that HTTP methods are idempotent:
if the intended effect of multiple identical requests is the same as for a single request.
If you interpret effect as result in the Wikipedia definition, then they mean the same. In any case, I question the practical benefit of telling clients that the resource has already been deleted.
Final point: Idempotence is defined in terms of a single client. Once you start introducing concurrent requests by other clients, all bets are off. You are supposed to use conditional request headers (such as If-Match with an ETag) to deal with such cases.
To reiterate: you should return the same return code, whether the resource just got deleted, was deleted by a previous request, or never existed at all.
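As a rough sketch of that "same code regardless" approach (Flask is used purely for illustration; the /person route and the in-memory store are made up, not part of the answer):

from flask import Flask

app = Flask(__name__)
people = {"123": {"name": "Alice"}}   # toy in-memory store

@app.route("/person/<person_id>", methods=["DELETE"])
def delete_person(person_id):
    # Same response whether the resource existed, was already deleted, or never existed.
    people.pop(person_id, None)
    return "", 204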

Related

How to handle PUT endpoint with immutable resource

A microservice I develop exposes a single endpoint, PUT /deployments/{uuid} . The endpoint is used to initiate a potentially expensive deployment operation, so we only ever want it to happen once, which is why we chose PUT + UUID over POST (for uniqueness). The deployment is immutable, so it can never be updated, so we currently raise an exception if the PUT is called more than once with the same uuid.
As a person who loves bikeshedding and therefore cares deeply about restfulness, this grinds my gears. PUT is supposed to be idempotent, so raising an exception after making the same request multiple times is an antipattern. However, we have a requirement to not allow sequential identical requests to generate new deployments, so the usual POST is out.
While the best solution is one that works, I'd like ours to be a little more elegant, if possible. I've posited a POST with the UUID in the payload, but my team seems to think that's worse than the current solution. I'm considering just returning a 200 OK from a PUT to the same UUID rather than a 201 CREATED, but I'm not sure if that has the same problem as non-idempotent-put in not semantically conveying the information I want.
Is there a "best solution" here? Or am I doomed to be "that guy" on my team if I pursue this further (joke's on you i'm already that guy).
tl;dr What is the correct RESTful API signature for a /deployments endpoint that is immutable, and required to not allow the same request to be processed twice?
Idempotent does not mean "2 identical requests should yield the same response". It means: "The server state after 2 identical requests should be the same as when only 1 is made".
A similar example: if you call DELETE on a resource and get a 204 No Content back, and call DELETE again and get a 404, this doesn't violate the idempotency requirement. After the second delete the resource is still removed, just like it was after the first.
So multiple identical idempotent requests are allowed to give different responses.
That said, I think it might be nicer for the second identical request to also get a 2xx status back. It doesn't have to be the same as the first.
The use case is a client that sent an HTTP request but got disconnected before it got a response. The client should retry, and if the server detects that the request is the same as the first, it can just give the client a success response (without doing anything again).
This is generally a good idea, because if the client got an error back for the second request, it might be harder to know whether the request failed because it had already succeeded earlier, or for some other reason.
That all being said though, there is also a way here to have your cake and eat it too.
A client could send the following header along with the PUT request:
If-None-Match: *
If the client omits the header, you can always return 428 Precondition Required.
If the resource does not yet exist, it's a success response. If the resource was created earlier, you can return 412 Precondition Failed.
Using this mechanism a client has a standard way to figure out that the request failed because a successful one was made earlier.
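A hedged server-side sketch of that conditional-PUT flow (Flask and the in-memory deployment store are assumptions made for illustration, not part of the original design):

from flask import Flask, request

app = Flask(__name__)
deployments = {}   # uuid -> deployment record (toy store)

@app.route("/deployments/<uuid>", methods=["PUT"])
def put_deployment(uuid):
    if request.headers.get("If-None-Match") != "*":
        return "If-None-Match: * header required", 428   # Precondition Required
    if uuid in deployments:
        return "", 412   # Precondition Failed: a deployment with this uuid already exists
    deployments[uuid] = request.get_json(silent=True) or {}
    return "", 201       # Created: deployment started exactly once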
Based on the docs here, PUT is the best method to use. The response should be 201 when it triggers the deployment, and either 200 or 204 when nothing is changed. It shouldn't be POST because calling a POST endpoint twice should trigger the effect both times.

Idempotency of the GET verb in a RESTful API

As it was mentioned here https://restfulapi.net/http-methods/ (and in other places as well):
GET APIs should be idempotent, which means that making multiple identical requests must produce same result everytime until another API (POST or PUT) has changed the state of resource on server.
How can this be true for an API that returns the time, for example? Or one that returns data that is affected by time?
In other words, each time I use GET http://ip:port/get-time-now/, it is going to return a different response. However, I did not send any POST or PUT between two consecutive GETs.
Does this make the previous statement wrong? Did I misunderstand something?
Idempotency is a promise to clients/intermediaries that the request can be reissued in case of network failures or the like without any further considerations; it is not so much a promise that the data will never change.
If you take a POST request, for example, in case of a network failure you do not know whether the previous request reached the server but the response got lost midway, or whether the initial request never reached the server at all. If you re-issue the request you might actually create a further resource, hence POST is not idempotent. PUT, on the other hand, has the contract that it replaces the current representation with the one contained in the request. If you send the same request twice, the content of the resource should be the same after either of the two PUT requests is processed. Note that the actual result can still differ, as the service is free to modify the received entity into a corresponding representation. Also, between sending the data via PUT and retrieving it via GET, a further client could have updated the state in between, so there is no guarantee that you will actually receive the exact representation you sent to the service.
Safety is another promise that only GET, HEAD and OPTIONS support. It promises the invoker that the request won't modify any state at all, hence clients/intermediaries are safe to issue such requests without having to fear that they will modify any state. In practice this is an important promise to crawlers, which blindly invoke URLs in order to learn their content. If such a promise is violated, i.e. data is deleted while processing a GET request, the only one to blame is the service implementer, not the invoker. If a crawler invokes such a URL and thereby removes some data, it is not the crawler's fault but the service implementer's.
As you have a dynamic value in your response, you might want to prevent caching of responses, though, as otherwise intermediaries might return an old state for your resource.
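For example, a minimal sketch of such a time endpoint that opts out of caching (Flask is used only for illustration; the route name mirrors the question):

from datetime import datetime, timezone
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/get-time-now/")
def get_time_now():
    resp = jsonify({"now": datetime.now(timezone.utc).isoformat()})
    # The request is safe and idempotent in the HTTP sense, but the body changes
    # over time, so tell intermediaries not to cache it.
    resp.headers["Cache-Control"] = "no-store"
    return resp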
The basic concepts of idempotent and safe HTTP methods:
Idempotent method: the method can be called multiple times with the same input and produces the same result.
Safe method: the method can be called multiple times with the same input and does not modify the resource on the server side.
HTTP methods are categorized into the following three groups:
GET, HEAD, OPTIONS are safe and idempotent
PUT, DELETE are not safe but idempotent
POST, PATCH are neither safe nor idempotent

How to handle network connectivity loss in the middle of REST POST request?

REST POST is used to create resources.
Let's say we have resource url
"http://example.com/cars"
We want to create a new car.
We POST to "http://example.com/cars" with JSON payload containing car properties (color, weight, model, etc).
Server receives the request, creates a new car, sends a response over the network.
At this point network fails (let's say router stops working properly and ignores every packet).
Client fails with TCP timeout (like 90 seconds).
Client has no idea whether car was created or not.
Also, the client hasn't received the car resource id, so it can't GET it to check whether it was created.
Now what?
How do you handle this?
You can't simply retry creating, because retrying will just create a duplicate (which is bad).
REST POST is used to create resources.
HTTP POST is used for lots of things. REST doesn't particularly care; it just wants resources that support a uniform interface, and hypermedia.
At this point network fails
Bummer!
Now what? How do you handle this? You can't simply retry creating, because retrying will just create a duplicate (which is bad).
This is a general messaging concern, not directly related to REST. The most common solution is to use the Idempotent Receiver pattern. In short, you need to define your messages so that the receiver has enough information to recognize the request as something that has already been done.
Ideally, this is being supported at the business level.
Idempotent collections of values are often straightforward; we just need to think in terms of sets rather than lists.
Idempotent collections of entities are trickier; if the request includes an identifier for the new entity, or if we can compute one from the data provided, then we can think of our collection as a hash.
If none of those approaches fits, then there's another possibility. Instead of performing an idempotent mutation of the collection, we make the mutation of the collection itself idempotent. Think "compare and swap" - we encode into the request information that identifies the current state of the collection; if that state is still current when the request arrives, the mutation is applied. If the condition does not hold, then the request becomes a no-op.
Translating this into HTTP, we make a small modification to the protocol for updating the collection resource. First, we GET the current representation, and in the metadata the server provides validators that can be used in subsequent requests. Having obtained the validator, the client evaluates the current representation of the resource to determine if it needs to be changed. If the client decides to make a change, it submits the change with an If-Match or an If-Unmodified-Since header containing the validator. The server, before processing the request, considers the validator and, if the precondition no longer holds, immediately abandons the request with 412 Precondition Failed.
Thus, if a conditional state-changing request is lost, the client can at its own discretion repeat the request without concern that server will misunderstand the client's intent.
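A client-side sketch of that conditional protocol, assuming the server hands out ETag validators (the URL and payload are placeholders):

import requests

BASE = "http://example.com/cars"   # placeholder URL

# 1. GET the current collection and note its validator.
current = requests.get(BASE)
etag = current.headers.get("ETag")

# 2. Attempt the mutation conditionally; the server applies it only if the
#    collection is still in the state identified by the validator.
resp = requests.post(
    BASE,
    json={"model": "roadster", "color": "red"},
    headers={"If-Match": etag} if etag else {},
)

if resp.status_code == 412:
    # Precondition Failed: the collection changed (possibly because an earlier,
    # seemingly lost attempt did get through). Re-GET and decide whether to retry.
    pass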
Retry it a limited number of times, with increasing delays between the attempts, and make sure the transaction concerned is idempotent.
because retrying will just create a duplicate (which is bad).
It is indeed, and it needs fixing, see above. It should be impossible in your system to create two entries with the same attributes. This is easily accomplished at the database level. You can attain idempotence by having the transaction return the same thing whether the entry already existed or was newly created. Or else just have it return EXISTS if the entry already exists, and adjust your client accordingly.
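A hedged sketch of that database-level idea, using SQLite and an invented table purely for illustration:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cars (client_ref TEXT PRIMARY KEY, model TEXT, color TEXT)")

def create_car(client_ref: str, model: str, color: str) -> str:
    # Create the car, or report that an identical entry already exists.
    try:
        conn.execute(
            "INSERT INTO cars (client_ref, model, color) VALUES (?, ?, ?)",
            (client_ref, model, color),
        )
        conn.commit()
        return "CREATED"
    except sqlite3.IntegrityError:
        return "EXISTS"   # retrying the same request is now harmless

print(create_car("req-42", "roadster", "red"))   # CREATED
print(create_car("req-42", "roadster", "red"))   # EXISTS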

Avoid duplicate POSTs with REST

I have been using POST in a REST API to create objects. Every once in a while, the server will create the object, but the client will be disconnected before it receives the 201 Created response. The client only sees a failed POST request, and tries again later, and the server happily creates a duplicate object...
Others must have had this problem, right? But I google around, and everyone just seems to ignore it.
I have 2 solutions:
A) Use PUT instead, and create the (GU)ID on the client.
B) Add a GUID to all objects created on the client, and have the server enforce their UNIQUE-ness.
A doesn't match existing frameworks very well, and B feels like a hack. How do other people solve this in the real world?
Edit:
With Backbone.js, you can set a GUID as the id when you create an object on the client. When it is saved, Backbone will do a PUT request. Make your REST backend handle PUT to non-existing id's, and you're set.
Another solution that's been proposed for this is POST Once Exactly (POE), in which the server generates single-use POST URIs that, when used more than once, will cause the server to return a 405 response.
The downsides are that 1) the POE draft was allowed to expire without any further progress on standardization, and thus 2) implementing it requires changes to clients to make use of the new POE headers, and extra work by servers to implement the POE semantics.
By googling you can find a few APIs that are using it though.
Another idea I had for solving this problem is that of a conditional POST, which I described and asked for feedback on here.
There seems to be no consensus on the best way to prevent duplicate resource creation in cases where the client cannot generate the unique URI itself (so PUT isn't an option) and POST is therefore needed.
I always use B -- detection of dups due to whatever problem belongs on the server side.
Detection of duplicates is a kludge, and can get very complicated. Genuine distinct but similar requests can arrive at the same time, perhaps because a network connection is restored. And repeat requests can arrive hours or days apart if a network connection drops out.
All of the discussion of identifiers in the other answers is aimed at giving an error in response to duplicate requests, but this will normally just incite a client to get or generate a new id and try again.
A simple and robust pattern to solve this problem is as follows: Server applications should store all responses to unsafe requests, then, if they see a duplicate request, they can repeat the previous response and do nothing else. Do this for all unsafe requests and you will solve a bunch of thorny problems. Repeat DELETE requests will get the original confirmation, not a 404 error. Repeat POSTS do not create duplicates. Repeated updates do not overwrite subsequent changes etc. etc.
"Duplicate" is determined by an application-level id (that serves just to identify the action, not the underlying resource). This can be either a client-generated GUID or a server-generated sequence number. In this second case, a request-response should be dedicated just to exchanging the id. I like this solution because the dedicated step makes clients think they're getting something precious that they need to look after. If they can generate their own identifiers, they're more likely to put this line inside the loop and every bloody request will have a new id.
Using this scheme, all POSTs are empty, and POST is used only for retrieving an action identifier. All PUTs and DELETEs are fully idempotent: successive requests get the same (stored and replayed) response and cause nothing further to happen. The nicest thing about this pattern is its Kung-Fu (Panda) quality. It takes a weakness: the propensity for clients to repeat a request any time they get an unexpected response, and turns it into a force :-)
I have a little google doc here if anyone cares.
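A minimal sketch of the store-and-replay pattern described above (the in-memory stores and the /orders route are assumptions for illustration):

from flask import Flask, request, jsonify

app = Flask(__name__)
replayed = {}   # action id -> (status, body) of the first response
orders = {}     # toy resource store

@app.route("/orders/<action_id>", methods=["PUT"])
def put_order(action_id):
    # If this action id has been seen before, replay the stored response verbatim.
    if action_id in replayed:
        status, body = replayed[action_id]
        return jsonify(body), status
    orders[action_id] = request.get_json(silent=True) or {}
    replayed[action_id] = (201, {"id": action_id})
    return jsonify({"id": action_id}), 201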
You could try a two step approach. You request an object to be created, which returns a token. Then in a second request, ask for a status using the token. Until the status is requested using the token, you leave it in a "staged" state.
If the client disconnects after the first request, they won't have the token and the object stays "staged" indefinitely or until you remove it with another process.
If the first request succeeds, you have a valid token and you can grab the created object as many times as you want without it recreating anything.
There's no reason why the token can't be the ID of the object in the data store. You can create the object during the first request. The second request really just updates the "staged" field.
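A rough sketch of that two-step, token-based flow (Flask, the routes, and the "staged" flag are illustrative assumptions):

import uuid
from flask import Flask, request, jsonify

app = Flask(__name__)
objects = {}   # token -> {"data": ..., "staged": bool}

@app.route("/objects", methods=["POST"])
def stage_object():
    # Step 1: create the object in a "staged" state and hand back a token.
    token = str(uuid.uuid4())
    objects[token] = {"data": request.get_json(silent=True) or {}, "staged": True}
    return jsonify({"token": token}), 201

@app.route("/objects/<token>", methods=["GET"])
def confirm_object(token):
    # Step 2: the first status check promotes the object out of "staged";
    # repeating this request never creates anything new.
    obj = objects.get(token)
    if obj is None:
        return "", 404
    obj["staged"] = False
    return jsonify(obj["data"]), 200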
Server-issued Identifiers
If you are dealing with the case where it is the server that issues the identifiers, create the object in a temporary, staged state. (This is an inherently non-idempotent operation, so it should be done with POST.) The client then has to do a further operation on it to transfer it from the staged state into the active/preserved state (which might be a PUT of a property of the resource, or a suitable POST to the resource).
Each client ought to be able to GET a list of their resources in the staged state somehow (maybe mixed with other resources) and ought to be able to DELETE resources they've created if they're still just staged. You can also periodically delete staged resources that have been inactive for some time.
You do not need to reveal one client's staged resources to any other client; they need exist globally only after the confirmatory step.
Client-issued Identifiers
The alternative is for the client to issue the identifiers. This is mainly useful where you are modeling something like a filestore, as the names of files are typically significant to user code. In this case, you can use PUT to do the creation of the resource as you can do it all idempotently.
The down-side of this is that clients are able to create IDs, and so you have no control at all over what IDs they use.
There is another variation of this problem. Having the client generate a unique id means we are asking a customer to solve this problem for us. Consider an environment where we have publicly exposed APIs and hundreds of clients integrating with them. Practically, we have no control over the client code or the correctness of their implementation of uniqueness. Hence, it would probably be better for the server itself to be able to recognize when a request is a duplicate. One simple approach would be to calculate and store a checksum of every request based on attributes from the user input, define some time threshold (x minutes), and compare every new request from the same client against the ones received in the past x minutes. If the checksum matches, it could be a duplicate request, and you can add some challenge mechanism for the client to resolve this.
If a client makes two different requests with the same parameters within x minutes, it might be worth ensuring this is intentional, even if each comes with a unique request id.
This approach may not be suitable for every use case; however, I think it will be useful for cases where the business impact of executing the second call is high and can potentially cost a customer. Consider a payment processing engine where an intermediate layer ends up retrying a failed request, or a customer double-clicks, resulting in the client layer submitting two requests.
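A rough sketch of that checksum-and-window idea (the hashing scheme and threshold are illustrative assumptions):

import hashlib
import json
import time

WINDOW_SECONDS = 5 * 60   # the "x mins" threshold
recent = {}               # (client_id, checksum) -> timestamp of last sighting

def is_probable_duplicate(client_id: str, payload: dict) -> bool:
    # Flag a request whose checksum matches one seen from the same client recently.
    checksum = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    now = time.time()
    key = (client_id, checksum)
    seen_at = recent.get(key)
    recent[key] = now
    return seen_at is not None and now - seen_at < WINDOW_SECONDS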
Design
Automatic (without the need to maintain a manual black list)
Memory optimized
Disk optimized
Algorithm [solution 1]
REST arrives with UUID
Web server checks if UUID is in Memory cache black list table (if yes, answer 409)
Server writes the request to the DB (if it was not filtered by the memory cache/ETS)
DB checks whether the UUID is repeated before writing
If yes, answer 409 to the server, and add the UUID to the Memory Cache and disk blacklist
If not repeated, write to the DB and answer 200
Algorithm [solution 2]
REST arrives with UUID
Save the UUID in the Memory Cache table (expire for 30 days)
Web server checks if UUID is in Memory Cache black list table [return HTTP 409]
Server writes the request to DB [return HTTP 200]
In solution 2, the Memory Cache blacklist is maintained ONLY in memory, so the DB will never be checked for duplicates. The definition of 'duplicate' is "any identical request that arrives within a given period of time". We also replicate the Memory Cache table on disk, so we can refill it before starting up the server.
In solution 1, there will never be a duplicate, because we always check the disk (only once) before writing, and if it's a duplicate, subsequent round trips will be handled by the Memory Cache. This solution is better for BigQuery, because requests there are not idempotent, but it's also less optimized.
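A simplified sketch of solution 2 (the in-memory blacklist, TTL, and stand-in database are assumptions made for illustration):

import time

TTL_SECONDS = 30 * 24 * 3600   # "expire for 30 days"
seen_uuids = {}                # uuid -> first-seen timestamp (in-memory blacklist)
database = []                  # stand-in for the real DB

def handle_request(request_uuid: str, payload: dict) -> int:
    now = time.time()
    seen_at = seen_uuids.get(request_uuid)
    if seen_at is not None and now - seen_at < TTL_SECONDS:
        return 409             # duplicate within the window
    seen_uuids[request_uuid] = now
    database.append(payload)   # the DB itself is never consulted for duplicates
    return 200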

Status code when deleting a resource using HTTP DELETE for the second time

Given that the DELETE verb in HTTP is idempotent, when I issue the following request, what should happen the second (or third, or fourth, etc...) time I make it?
DELETE /person/123
The first time, the resource is deleted and I return a 204 (successful, no content). Should I return a 204 on subsequent calls or a 404 (not found)?
As HTTP requests in a stateless system should be independent, the results of one request should not be dependent on a previous request. Consider what should happen if two users did a DELETE on the same resource simultaneously. It makes sense for the second request to get a 404. The same should be true if one user makes two requests.
I am guessing that having DELETE return two different responses does not feel idempotent to you. I find it useful to think of idempotent requests as leaving the system in the same state, not necessarily having the same response. So regardless of whether you DELETE an existing resource, or attempt to DELETE a resource that does not exist, the server resource state is the same.
I agree with what the currently chosen answer says: the 2nd (and 3rd, 4th, ...) DELETE should get a 404. And I noticed that answer has 143 upvotes but also an opposing comment with 54 upvotes, so the community is divided into two camps in roughly a 3:1 ratio. Here comes more information to settle this long-running debate.
First of all, let's NOT start with what "I" think, what "you" think, or what yet another book author thinks. Let's start with the HTTP specs i.e. RFC 7231.
RFC 7231, section 4.3.5 DELETE mentions only that a successful response should be 2xx, but does not call out what a subsequent DELETE should get. So let's dig deeper.
RFC 7231, section 6.5.4 404 Not Found says a 404 response is for a resource that does not exist. Since no specific HTTP method (in particular, not DELETE) is called out to be treated otherwise, we can intuitively get the impression (and rightfully so) that my request DELETE /some/resource/which/does/not/exist should result in a 404. Then DELETE /some/resource/which/happened/to/be/removed/by/someone/else/five/days/ago might as well also return a 404. Then why should DELETE /some/resource/i/deleted/five/seconds/ago be any different?
"But how about idempotency?!", I can hear you are screaming that. Hang on, we are about to get into that.
Historically, RFC 2616, published in 1999, was the most-referenced HTTP 1.1 spec. Unfortunately its description of idempotency was vague, which leaves room for all these debates. But that spec has been superseded by RFC 7231. Quoted from RFC 7231, section 4.2.2 Idempotent Methods, emphasis mine:
A request method is considered "idempotent" if the intended EFFECT ON THE SERVER of multiple identical requests with that method is the same as the effect for a single such request. Of the request methods defined by this specification, PUT, DELETE, and safe request methods are idempotent.
So, as written in the spec, idempotency is all about the effect on the server. The first DELETE returning a 204 and a subsequent DELETE returning a 404: that difference in status codes does NOT make DELETE non-idempotent. Using that argument to justify a subsequent 204 return is simply irrelevant.
OK so it is not about idempotency. But then a follow-up question may be, what if we still choose to use 204 in subsequent DELETE? Is it OK?
Good question. The motivation is understandable: to allow the client to still reach its intended outcome without worrying about error handling. I would say that returning 204 for a subsequent DELETE is a largely harmless server-side "white lie", one the client side won't immediately notice. That's why roughly 25% of people do it in the wild, and it seemingly still works. Just keep in mind that such a lie can be considered semantically weird, because GET /non-exist returns 404 while DELETE /non-exist gives 204; at that point the client would figure out that your service does not fully comply with section 6.5.4 404 Not Found.
But I want to point out that the intended way hinted at by RFC 7231, i.e. returning 404 on a subsequent DELETE, shouldn't be an issue in the first place. Three times as many developers chose to do that, and have you ever heard of a major incident or complaint caused by a client not being able to handle a 404? Presumably not, and that is because any decent client that implements HTTP DELETE (or any HTTP method, for that matter) would not blindly assume the result will always be a successful 2xx. Once the developer starts to consider error handling, 404 Not Found is one of the first errors that comes to mind. At that point, they would probably conclude that it is semantically safe for an HTTP DELETE operation to ignore a 404 error. And they did.
Problem solved.
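To make the client-side handling concrete, here is a hedged sketch that treats a 404 on DELETE as "already gone" (the URL is a placeholder):

import requests

def delete_person(person_id: str) -> None:
    resp = requests.delete(f"http://example.com/person/{person_id}")
    if resp.status_code in (200, 204, 404):
        # 404 just means the resource is already gone; the intended outcome holds.
        return
    resp.raise_for_status()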
The RESTful Web Services Cookbook is a great resource for this. By chance, its Google preview shows the page about DELETE (page 11):
The DELETE method is idempotent. This implies that the server must return response code 200 (OK) even if the server deleted the resource in a previous request. But in practice, implementing DELETE as an idempotent operation requires the server to keep track of all deleted resources. Otherwise, it can return a 404 (Not Found).
First DELETE: 200 or 204.
Subsequent DELETEs: 200 or 204.
Rationale: DELETE should be idempotent. If you return 404 on a second DELETE, your response is changing from a success code to an error code. The client program may take incorrect actions based on the assumption the DELETE failed.
Example:
Suppose your DELETE operation is part of a multi-step operation (or a "saga") executed by the client program.
The client program may be a mobile app performing a bank transaction, for example.
Let's say the client program has an automatic retry for a DELETE operation (it makes sense, because DELETE is supposed to be idempotent).
Let's say the first DELETE was executed successfully, but the 200 response got lost on its way to the client program.
The client program will retry the DELETE.
If the second attempt returns 404, the client program may cancel the overall operation because of this error code.
But because the first DELETE executed successfully on the server, the system may be left in an inconsistent state.
If the second attempt returns 200 or 204, the client program will proceed as expected.
Just to illustrate the use of this approach, the HTTP API style guide for PayPal has the following guideline:
DELETE: This method SHOULD return status code 204 as there is no need to return any content in most cases as the request is to delete a resource and it was successfully deleted.
As the DELETE method MUST be idempotent as well, it SHOULD still return 204, even if the resource was already deleted. Usually the API consumer does not care if the resource was deleted as part of this operation, or before. This is also the reason why 204 instead of 404 should be returned.