RESTful idempotence

I'm designing a RESTful web service utilizing ROA (Resource-Oriented Architecture).
I'm trying to work out an efficient way to guarantee idempotence for PUT requests that create new resources in cases where the server designates the resource key.
From my understanding, the traditional approach is to create a type of transaction resource such as /CREATE_PERSON. The client-server interaction for creating a new person resource would then be in two parts:
Step 1: Get a unique transaction id for creating the new PERSON resource:
**Client request:**
POST /CREATE_PERSON
**Server response:**
200 OK
transaction-id:"as8yfasiob"
Step 2: Create the new person resource in a request guaranteed to be unique by using the transaction id:
**Client request**
PUT /CREATE_PERSON/{transaction_id}
first_name="Big bubba"
**Server response**
201 Created
PersonKey="398u4nsdf"
// (If the request is a duplicate, the server would send this same response
// without creating a new resource. It would perhaps send an error response
// if the transaction id was used on a non-duplicate request, but I have
// control over the client, so I can guarantee that this won't happen.)
The problem that I see with this approach is that it requires sending two requests to the server in order to perform the single operation of creating a new PERSON resource. This creates a performance issue, increasing the chance that the user will be left waiting for the client to complete their request.
I've been trying to hash out ideas for eliminating the first step, such as pre-sending transaction ids with each request, but most of my ideas have other issues or involve sacrificing the statelessness of the application.
Is there a way to do this?
Edit:
The solution that we ended up going with was for the client to acquire a UUID and send it along with the request. A UUID is a very large number occupying 16 bytes (2^128 possible values). Contrary to what someone with a programming mind might intuitively think, it is accepted practice to randomly generate a UUID and assume that it is unique. This is because the number of possible values is so large that the odds of randomly generating the same number twice are low enough to be virtually impossible.
One caveat is that we are having our clients request a UUID from the server (GET uuid/). This is because we cannot guarantee the environment our client is running in. If there were a problem, such as with seeding the random number generator on the client, then there very well could be a UUID collision.
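A minimal client-side sketch of that flow, assuming a hypothetical host api.example.com, a GET uuid/ endpoint as described above, and a /persons/{uuid} resource that PUT creates or replaces:

import requests

BASE = "https://api.example.com"

def create_person(first_name):
    # Step 1: ask the server for a UUID (GET uuid/), sidestepping any doubt
    # about the quality of the client's random number generator.
    person_id = requests.get(f"{BASE}/uuid", timeout=10).json()["uuid"]
    # Step 2: create the person under that key. Repeating this PUT after a
    # timeout targets the same resource, so no duplicate can appear.
    response = requests.put(
        f"{BASE}/persons/{person_id}",
        json={"first_name": first_name},
        timeout=10,
    )
    response.raise_for_status()
    return person_id

create_person("Big bubba")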

You are using the wrong HTTP verb for your create operation. RFC 2616 specifies the semantics of the POST and PUT operations.
Paragraph 9.5:
The POST method is used to request that the origin server accept the entity enclosed in the request as a new subordinate of the resource identified by the Request-URI in the Request-Line.
Paragraph 9.6:
The PUT method requests that the enclosed entity be stored under the supplied Request-URI.
There are subtle details of that behavior; for example, PUT can be used to create a new resource at the specified URL if one does not already exist. However, POST should never put the new entity at the request URL, and PUT should always put any new entity at the request URL. This relationship to the request URL defines POST as CREATE and PUT as UPDATE.
As per that semantic, if you want to use PUT to create a new person, it should be created in /CREATE_PERSON/{transaction_id}. In other words, the transaction ID returned by your first request should be the person key used to fetch that record later. You shouldn't make PUT request to a URL that is not going to be the final location of that record.
Better yet, though, you can do this as an atomic operation by using a POST to /CREATE_PERSON. This allows you, with a single request, to create the new person record and get the new ID in the response (which should also be referenced in the HTTP Location header).
Meanwhile, the REST guidelines specify that verbs should not be part of the resource URL. Thus, the URL to create new person should be the same as the location to get the list of all persons - /PERSONS (I prefer the plural form :-)).
Thus, your REST API becomes (see the sketch after this list):
to get all persons - GET /PERSONS
to get single person - GET /PERSONS/{id}
to create new person - POST /PERSONS with the body containing the data for the new record
to update existing person or create new person with well-known id - PUT /PERSONS/{id} with the body containing the data for the updated record.
to delete existing person - DELETE /PERSONS/{id}
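A rough sketch of that resource layout, assuming Flask and an in-memory dictionary purely as stand-ins for whatever framework and storage you actually use; the field names are illustrative:

from flask import Flask, jsonify, request

app = Flask(__name__)
persons = {}            # {id: record}, a placeholder for real storage
next_id = 1

@app.route("/PERSONS", methods=["GET"])
def list_persons():
    return jsonify(persons)

@app.route("/PERSONS/<pid>", methods=["GET"])
def get_person(pid):
    return (jsonify(persons[pid]), 200) if pid in persons else ("", 404)

@app.route("/PERSONS", methods=["POST"])
def create_person():
    global next_id
    pid = str(next_id)
    next_id += 1
    persons[pid] = request.get_json()
    # Return the generated id and point at the new resource in Location.
    return jsonify({"id": pid}), 201, {"Location": f"/PERSONS/{pid}"}

@app.route("/PERSONS/<pid>", methods=["PUT"])
def put_person(pid):
    created = pid not in persons
    persons[pid] = request.get_json()      # replace or create at a known id
    return ("", 201) if created else ("", 200)

@app.route("/PERSONS/<pid>", methods=["DELETE"])
def delete_person(pid):
    persons.pop(pid, None)
    return "", 204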
Note: I personally prefer not using PUT for creating records, unless I need to create a sub record that has the same id as an already existing record from a different data set (also known as 'the poor man's foreign key' :-)).
Update: You are right that POST is not idempotent and that is as per HTTP spec. POST will always return a new resource. In your example above that new resource will be the transaction context.
However, my point is that you want the PUT to be used to create a new resource (a person record) and according to the HTTP spec, that new resource itself should be located at the URL. In particular, where your approach breaks is that the URL you use with the PUT is a representation of the transactional context that was created by the POST, not a representation of the new resource itself. In other words, the person record is a side effect of updating the transaction record, not the immediate result of it (the updated transaction record).
Of course, with this approach the PUT request will be idempotent, since once the person record is created and the transaction is 'finalized', subsequent PUT requests will do nothing. But now you have a different problem - to actually update that person record, you will need to make a PUT request to a different URL - one that represents the person record, not the transaction in which it was created. So now you have two separate URLs your API clients have to know and make requests against to manipulate the same resource.
Or you could have a complete representation of the last resource state copied into the transaction record as well and have person record updates go through the transaction URL too. But at that point, the transaction URL is for all intents and purposes the person record, which means it was created by the POST request in the first place.

I just came across this post:
Simple proof that GUID is not unique
Although the question is universally ridiculed, some of the answers go into deeper explanation of GUIDs. It seems that a GUID is a 128-bit number (2^128 possible values) and that the odds of randomly generating two identical numbers of this size are so low as to be impossible for all practical purposes.
Perhaps the client could just generate its own transaction id the size of a GUID instead of querying the server for one. If anyone can discredit this, please let me know.

I'm not sure I have a direct answer to your question, but I see a few issues that may lead to answers.
Your first operation is a GET, but it is not a safe operation as it is "creating" a new transaction Id. I would suggest POST is a more appropriate verb to use.
You mention that you are concerned about performance issues that would be perceived by the user caused by two round trips. Is this because your user is going to create 500 objects at once, or because you are on a network with massive latency problems?
If two round trips are not a reasonable expense for creating an object in response to a user request, then I would suggest HTTP is not the right protocol for your scenario. If however, your user needs to create large amounts of objects at once, then we can probably find a better way of exposing resources to enable that.

Why don't you just use a simple POST and include the payload in your first call? This way you save an extra call and don't have to spawn a transaction:
POST /persons
first_name=foo
response would be:
HTTP 201 CREATED
...
payload_containing_data_and_auto_generated_id
Server-internally, an id would be generated. For simplicity I would go for an artificial primary key (e.g. an auto-increment id from the database).
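A small sketch of that idea, assuming a relational table with an auto-increment key; sqlite and the /persons path are used only for illustration:

import sqlite3
from flask import Flask, jsonify, request

app = Flask(__name__)
db = sqlite3.connect(":memory:", check_same_thread=False)
db.execute("CREATE TABLE persons (id INTEGER PRIMARY KEY AUTOINCREMENT, first_name TEXT)")

@app.route("/persons", methods=["POST"])
def create_person():
    data = request.get_json()
    cur = db.execute("INSERT INTO persons (first_name) VALUES (?)", (data["first_name"],))
    db.commit()
    new_id = cur.lastrowid                 # the database-generated id
    # One call: the record is created and the id comes back in the payload.
    return jsonify({"id": new_id, **data}), 201, {"Location": f"/persons/{new_id}"}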


What happens if I try to send a POST request with an existing record in the database

As per my knowledge, the main difference between the PUT and POST methods in REST is that a POST request will create a new record, whereas a PUT request will update the existing record, or create a new record if one is not present.
Now my question is that :
Suppose we have a User with Id = 1 and name = "Pritam" in the database.
Now if I try to make a POST request with request body Id = 1 and name = "Pritam", what happens? (Duplicate records.) Will a new record get created, or what happens exactly?
Please help me to understand the difference between the PUT and POST methods, and when to use PUT and when to use POST in real-world scenarios.
As per my knowledge, the main difference between the PUT and POST methods in REST is that a POST request will create a new record, whereas a PUT request will update the existing record, or create a new record if one is not present.
That's not right. (It's also not your fault -- that misunderstanding is common.) The real differences in the semantics of POST and PUT are currently described by RFC 7231.
POST is the more general method, which can be used for any operation on the target resource
PUT is more specific - it indicates that the included document is intended as a replacement for the representation on the server.
Suppose we have a User with Id = 1 and name = "Pritam" in the database. Now if I try to make a POST request with request body Id = 1 and name = "Pritam", what happens? (Duplicate records.) Will a new record get created, or what happens exactly?
Those are implementation details; precisely the sort of thing that the REST API is insulating the client from needing to understand (as far as the client is concerned, the server is just a web site).
The "right" thing in your domain might be:
create a new user in your domain model, using the information in the POST message-body, possibly creating a duplicate,
report an error to the client because of the conflict, or
report success to the client, with reference to the previously created user
None of those things happens by magic, you actually have to choose what makes sense for your circumstances and implement it, then work out the right way to describe what has happened in the body of the HTTP Response, and what information to include in the metadata so that generic components can intelligently participate in the exchange.
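A sketch of one deliberate choice for this case: if a user with the posted id already exists, report the conflict; otherwise create the record. Flask, the /users path, and the in-memory store are assumptions for illustration, and the alternative behaviors are noted in comments:

from flask import Flask, jsonify, request

app = Flask(__name__)
users = {1: {"Id": 1, "name": "Pritam"}}

@app.route("/users", methods=["POST"])
def create_user():
    body = request.get_json()
    uid = body["Id"]
    if uid in users:
        # Other valid choices: return 200 with the existing record, or ignore
        # the client-supplied id and create a brand-new user anyway.
        return jsonify({"error": "user already exists"}), 409, {"Location": f"/users/{uid}"}
    users[uid] = body
    return jsonify(body), 201, {"Location": f"/users/{uid}"}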

How to handle network connectivity loss in the middle of REST POST request?

REST POST is used to create resources.
Let's say we have resource url
"http://example.com/cars"
We want to create a new car.
We POST to "http://example.com/cars" with JSON payload containing car properties (color, weight, model, etc).
Server receives the request, creates a new car, sends a response over the network.
At this point network fails (let's say router stops working properly and ignores every packet).
Client fails with TCP timeout (like 90 seconds).
Client has no idea whether car was created or not.
Also, the client hasn't received the car resource id, so it can't GET it to check whether it was created.
Now what?
How do you handle this?
You can't simply retry creating, because retrying will just create a duplicate (which is bad).
REST POST is used to create resources.
HTTP POST is used for lots of things. REST doesn't particularly care; it just wants resources that support a uniform interface, and hypermedia.
At this point network fails
Bummer!
Now what? How do you handle this? You can't simply retry creating, because retrying will just create a duplicate (which is bad).
This is a general messaging concern, not directly related to REST. The most common solution is to use the Idempotent Receiver pattern. In short, you need to define your messages so that the receiver has enough information to recognize the request as something that has already been done. Ideally, this is supported at the business level.
Idempotent collections of values are often straight forward; we just need to be thinking sets, rather than lists.
Idempotent collections of entities are trickier; if the request includes an identifier for the new entity, or if we can compute one from the data provided, then we can think of our collection as a hash.
If none of those approaches fits, then there's another possibility. Instead of performing an idempotent mutation of the collection, we make the mutation of the collection itself idempotent. Think "compare and swap": we encode into the request information that identifies the current state of the collection; if that state is still current when the request arrives, then the mutation is applied. If the condition does not hold, then the request becomes a no-op.
Translating this into HTTP, we make a small modification to the protocol for updating the collection resource. First, we GET the current representation, and in the metadata the server provides validators that can be used in subsequent requests. Having obtained the validator, the client evaluates the current representation of the resource to determine if it needs to be changed. If the client decides to make a change, it submits the change with an If-Match or an If-Unmodified-Since header that includes the validator. The server, before processing the request, checks the validator, and if it no longer matches, immediately abandons the request with 412 Precondition Failed.
Thus, if a conditional state-changing request is lost, the client can at its own discretion repeat the request without concern that server will misunderstand the client's intent.
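A client-side sketch of that conditional request, assuming the server hands out an ETag for the collection and the /cars URL from the question; both are illustrative:

import requests

url = "https://api.example.com/cars"

current = requests.get(url, timeout=10)
etag = current.headers["ETag"]                  # validator supplied by the server

new_car = {"model": "Coupe", "color": "red"}
update = requests.post(
    url,
    json=new_car,
    headers={"If-Match": etag},                 # conditional state-changing request
    timeout=10,
)

if update.status_code == 412:
    # Precondition failed: the collection changed underneath us, possibly
    # because our own earlier attempt already got through. Re-GET and decide.
    pass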
Retry it a limited number of times, with increasing delays between the attempts, and make sure the transaction concerned is idempotent.
because retrying will just create a duplicate (which is bad).
It is indeed, and it needs fixing, see above. It should be impossible in your system to create two entries with the same attributes. This is easily accomplished at the database level. You can attain idempotence by having the transaction return the same thing whether the entry already existed or was newly created. Or else just have it return EXISTS if the entry already exists, and adjust your client accordingly.
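A sketch of that database-level fix: a UNIQUE constraint on the natural key plus an "insert or fetch existing" routine, so a retried request gets the same answer instead of a duplicate row. The sqlite table and the VIN column are assumptions for illustration:

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE cars (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    vin TEXT UNIQUE,            -- natural key: at most one row per VIN
    color TEXT
)""")

def create_car(vin, color):
    try:
        cur = db.execute("INSERT INTO cars (vin, color) VALUES (?, ?)", (vin, color))
        db.commit()
        return cur.lastrowid, "CREATED"
    except sqlite3.IntegrityError:
        # Entry already exists: return the same thing as the original call,
        # or report EXISTS and let the client adjust accordingly.
        row = db.execute("SELECT id FROM cars WHERE vin = ?", (vin,)).fetchone()
        return row[0], "EXISTS"

print(create_car("1HGCM82633A004352", "red"))
print(create_car("1HGCM82633A004352", "red"))   # same id back, no duplicate row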

REST API: path for accessing derived data

It is not clear to me how the REST API should be designed if I have a microservice that is in place to provide some derived data. For instance:
If I have a customer and I want to access the customer I would define the API as:
/customer/1234
this would return everything we know about the customer
However, if I want to provide a microservice that simply tells me whether the customer was previously known to the system under another account number, what do I do? I want this logic to be in the microservice, but how do I define the API?
customer/1234/previouslyKnow
customerPreviouslyKnown/1234
Neither seems correct. The first case implies that
customer/1234
could be used to get all the customer information, but the microservice doesn't offer this.
Confused!
Adding some extra details for clarification.
I suppose my issue is that I don't really want a massive service which handles everything customer related. It would be better if there were lighter-weight services that handle customer orders, customer info, customer history, customer status (live, lost, dead...).
It strikes me that all of these would start with
/customer/XXXX
so would all the services be expected to provide a customer object back if only customer/XXXX was given, with nothing extra in the path such as /orders?
Also, some of the data mentioned isn't actually persisted anywhere; it is derived, and I want the logic for this hidden in a service and not in the calling code. So how is this requested and returned?
Doing microservices doesn't mean having a separate artifact for each method. The rules of coupling and cohesion also apply to the microservices world. So if you can query several pieces of data all related to a customer, the related resources should probably belong to the same service.
So your resource would be /customers/{id}/previous-customer-numbers whereas /customers (plural!) is the list of customers, /customers/{id} is a single customer and /customers/{id}/previous-customer-numbers the list of customer numbers the customer previously had.
Try to think in resources, not operations. So returning the list of previously used customer numbers is better than returning just a boolean value. /customer/{id}/previous-accounts would be even better, I think...
Back to topic: if the value of previous-accounts is directly derived from the same data (i.e. you don't need to query a second database, etc.), I would even recommend just adding the value to the customer representation:
{
  "id": "1234",
  "firstName": "John",
  "lastName": "Doe",
  "previouslyKnown": true,
  "previousAccounts": [
    {
      "id": "987",
      ...
    }
  ]
}
Whether the data is stored or derived shouldn't matter to the service client, so it should not be visible at the boundary.
Adding another resource or even another service is unnecessary complexity and complexity kills you in the long run.
You mention other examples:
customer orders, customer info, customer history, customer status (live, lost, dead....)
Orders is clearly different from customer data so it should reside in a separate service. An order typically also has an order id which is globally unique. So there is the resource /orders/{orderId}. Retrieving orders by customer id is also possible:
/orders;customer={customerId}
which reads give me the list of orders for which the customer is identified by the given customer id.
These parameters, which filter a list-like REST resource, are called matrix parameters. You can also use a query parameter: /orders?customer={customerId}. This is also quite common, but a matrix parameter has the advantage that it clearly belongs to a specific part of the URL. Consider the following:
/orders;customer=1234/notifications
This would return the list of notifications belonging to the orders of the customer with the id 1234.
With a query parameter it would look like this:
/orders/notifications?customer=1234
It is not clear from the URL that the orders are filtered and not the notifications.
The drawback is that framework support for matrix parameters is varying. Some support them, some don't.
I like matrix parameters best here, but a query parameter is OK, too.
Going back to your list:
customer orders, customer info, customer history, customer status (live, lost, dead....)
Customer info and customer status most likely belong to the same service (customer core data or the like) or even the same resource. Customer history can also go there. I would place it there as long as there isn't a reason to think of it separately. Maybe customer history is such a complicated domain (and it surely can be) that it's worth a separate service: /customer-history/{id} or maybe just /customer/{id}.
It's no problem that different services use the same paths for providing different information about one customer. They are different services and they have different endpoints so there is no collision whatsoever. Ideally you even have a DNS alias pointing to the corresponding service:
https://customer-core-data.service.lan/customers/1234
https://customer-history.service.lan/customers/1234
I'm not sure I really understand your question. However, let me show how you can check whether a certain resource exists on your server.
Consider that the server provides a URL that locates a certain resource (in this situation, the URL locates a customer with the identifier 1): http://example.org/api/customers/1.
When a client performs a GET request to this URL, the client can expect the following results (there may be other situations, like authentication/authorization problems, but let's keep it simple):
If a customer with the identifier 1 exists, the client is supposed to receive a response with the status code 200 and a representation of the resource (for example, a JSON or XML representing the customer) in the response payload.
If the customer with the identifier 1 does not exist, the client is supposed to receive a response with the status code 404.
To check whether a resource exists or not, the client doesn't need the resource representation (the JSON or XML that represents the customer). What's relevant here is the status code: 200 when the resource exists and 404 when it does not. Besides GET requests, the URL that locates a customer (http://example.org/api/customers/1) could also handle HEAD requests. The HEAD method is identical to the GET method, but the server won't send the resource representation in response to HEAD requests. Hence, it's useful for checking whether a resource exists or not.
See more details regarding the HEAD method:
4.3.2. HEAD
The HEAD method is identical to GET except that the server MUST NOT send a message body in the response (i.e., the response terminates at the end of the header section). The server SHOULD send the same header fields in response to a HEAD request as it would have sent if the request had been a GET, except that the payload header fields MAY be omitted. This method can be used for obtaining metadata about the selected representation without transferring the representation data and is often used for testing hypertext links for validity, accessibility, and recent modification. [...]
If the difference between resource and resource representation is not clear, please check this answer.
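A small client sketch of that existence check: a HEAD request returns only the status line and headers, so 200 versus 404 answers "does this customer exist?" without transferring the representation. The example.org URL is the one from the text:

import requests

def customer_exists(customer_id):
    url = f"http://example.org/api/customers/{customer_id}"
    response = requests.head(url, timeout=10, allow_redirects=True)
    if response.status_code == 200:
        return True
    if response.status_code == 404:
        return False
    response.raise_for_status()    # anything else (401, 500, ...) is an error
    return False                   # other non-error status: treat as "not found"

print(customer_exists(1))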
One thing I want to add to the already great answers is: URL design doesn't really matter that much if you do REST correctly.
One of the important tenets of REST is that URLs are discovered. A client that already has the customer's information and wants to find the "previously known" information should just be able to discover that URL on the main customer resource. If it links from there to the "previously known" information, it doesn't matter whether the URL is on a different domain, path, or even protocol.
So if your application naturally makes more sense with "previouslyKnown" on a separate base path, then maybe you should just go for that.

What is the proper HTTP method for modifying a subordinate of the named resource?

I am creating a web client which has the purpose of modifying a set of database tables by adding records to them and removing records from them. It must do so atomically, so both deletion and insertion must be done with a single HTTP request. Clearly, this is a write operation of some sort, but I struggle to identify which method is appropriate.
POST seemed right at first, except that RFC 2616 specifies that a POST request must describe "a new subordinate" of the named resource. That isn't quite what I'm doing here.
PUT can be used to make changes to existing things, so that seemed about right, except that RFC 2616 also specifies that "the URI in a PUT request identifies the entity enclosed with the request [...] and the server MUST NOT attempt to apply the request to some other resource," which rules that method out because my URI does not directly specify the database tables.
PATCH seemed closer - now I am not cheating by only partly overwriting a resource - but RFC 5789 makes it clear that this method, like PUT, must actually modify the resource specified by the URI, not some subordinate resource.
So what method should I be using?
Or, more broadly for the benefit of other users:
For a request to X, you use
POST to create a new subordinate of X,
PUT to create a new X,
PATCH to modify X.
But what method should you use if you want to modify a subordinate of X?
To start.. not everything has to be REST. If REST is your hammer, everything may look like a nail.
If you really want to conform to REST ideals, PATCH is kind of out of the question. You're only really supposed to transfer state.
So the common 'solution' to this problem is to work outside the resources that you already have, but invent a new resource that represents the 'transaction' you wish to perform. This transaction can contain information about the operations you're doing in sequence, potentially atomically.
This allows you to PUT (or maybe POST) the transaction, and if needed, also GET the current state of the transaction to find out if it was successful.
In most designs this is not really appropriate though, and you should just fall back on POST and define a simple rpc-style action you perform on the parent.
First, allow me to correct your understanding of these methods.
POST is all about creating a brand new resource. You send some data to the server, and expect a response back saying where this new resource is created. The expectation would be that if you POST to /things/ the new resource will be stored at /things/theNewThing/. With POST you leave it to the server to decide the name of the resource that was created. Sending multiple identical POST requests results in multiple resources, each their own 'thing' with their own URI (unless the server has some additional logic to detect the duplicates).
PUT is mostly about creating a resource. The first major difference between PUT and POST is that PUT leaves the client in control of the URI. Generally, you don't really want this, but that's getting off the point. The other thing PUT does is not "modify": if you read the specification carefully, it states that you replace whatever resource is at a URI with a brand-new version. This has the appearance of making a modification, but is actually just a brand-new resource at the same URI.
PATCH is for, as the name suggests, PATCHing a resource. You send data to the server describing how to modify a particular resource. Consider a huge resource: PATCH allows you to send just the tiny bit of data that you wish to change, whereas PUT would require you to send the entire new version.
Next, consider the resources. You have a set of tables, each with many rows; that equates to a set of collections with many resources. Now, your problem is that you want to be able to atomically add resources and remove them at the same time. You can't just POST then DELETE, as that's clearly not atomic. PATCHing the table, however, can be...
{ "add": [
{ /* a resource */ },
{ /* a resource */ } ],
"remove" : [ "id one", "id two" ] }
In that one body, we have sent the data to the server to both create two resources and delete two resources on the server. Now, there is a drawback to this, and that is that it's hard to let clients know what is going on. There's no 'proper' way of informing the client about the two new resources; 201 Created is sort of there, but it is meant to carry a header with the URI of the one new resource... and we added two. Sadly, this is a problem you are going to face no matter what: HTTP simply isn't designed to handle multiple resources at once.
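A sketch of a handler for that PATCH body, applying the adds and removes as one all-or-nothing change. The lock stands in for a real database transaction, and the /table route, resource shapes, and response format are assumptions:

import threading
import uuid
from flask import Flask, jsonify, request

app = Flask(__name__)
rows = {}                      # the "table": {id: resource}
lock = threading.Lock()

@app.route("/table", methods=["PATCH"])
def patch_table():
    patch = request.get_json()
    with lock:                 # validate first, then apply, as one unit
        missing = [rid for rid in patch.get("remove", []) if rid not in rows]
        if missing:
            return jsonify({"error": "unknown ids", "ids": missing}), 409
        created = {}
        for resource in patch.get("add", []):
            rid = str(uuid.uuid4())
            rows[rid] = resource
            created[rid] = resource
        for rid in patch.get("remove", []):
            del rows[rid]
    # No single Location header fits two new resources, so list them in the body.
    return jsonify({"created": created}), 200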
Transaction Resources
So this is a common solution people propose, and I think it stinks. The basic idea is that you first POST/PUT a blob of data to the server that encodes the transaction you wish to make. You then use another method to 'activate' this transaction.
Well, hang on... that's two requests... it sends the same data that you would via PATCH, and then you have to fudge HTTP even more in order to somehow 'activate' this transaction. And what's more, we have this 'transaction' resource now floating around! What do we even do with that?
I know this question was asked some time ago, but I thought I should provide some commentary of my own. This is not actually a real "answer" but a response to thecoshman's answer. Unfortunately, I am unable to comment on his answer, which would be the right thing to do, because I don't have enough "reputation", which is a strange (and unnecessary) concept, IMHO.
So, now on to my comment for #thecoshman:
You seem to question the concept of "transactional resources", but in your answer it looks to me as if you might have misunderstood the concept. In your answer, you describe that you first do a POST with the resource and the associated transaction and then POST another resource to "activate" this transaction. But I believe the concept of transactional resources is somewhat different.
Let me give you a simple example:
In a system you have a "customer" resource and its address, with customer as the primary (or named) resource and the address as the subordinate resource. For this example, let us assume we have a customer with a customerId of 1234. The URI to reach this customer would be /api/customer/1234. So, how would you now update just the customer's address without having to update the entire customer resource? You could define a "transaction resource" called "updateCustomerAddress". With that, you would then POST the updated customer address data (JSON or even XML) to the following URI: POST /api/customer/1234/updateCustomerAddress. The service would then create this new transactional resource to be applied to the customer with customerId=1234. Once the transaction resource has been created, the call would return 201, although the actual change may not yet have been applied to the customer resource. So a subsequent GET /api/customer/1234 may return the old address, or already the new and updated address. This supports an asynchronous model for updating subordinate resources, or even named resources, well.
And what would we do with the created transactional resource? It would be completely opaque to the client and discarded as soon as the transaction has been completed. So the call may actually not return a URI for the transactional resource, since it may have disappeared already by the time a client tried to access it.
As you can see, transactional resources should not require two HTTP calls to a service and can be done in just one.
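A rough sketch of that idea: the POST creates an opaque transaction that a background worker applies to the customer asynchronously, and the call returns 201 before the change is necessarily visible. The route names, in-memory store, and worker queue are assumptions for illustration:

import queue
import threading
import uuid
from flask import Flask, jsonify, request

app = Flask(__name__)
customers = {"1234": {"id": "1234", "address": "Old Street 1"}}
pending = queue.Queue()

def apply_transactions():
    while True:
        _transaction_id, customer_id, new_address = pending.get()
        customers[customer_id]["address"] = new_address   # applied later, not inline
        pending.task_done()

threading.Thread(target=apply_transactions, daemon=True).start()

@app.route("/api/customer/<cid>/updateCustomerAddress", methods=["POST"])
def update_customer_address(cid):
    transaction = (str(uuid.uuid4()), cid, request.get_json()["address"])
    pending.put(transaction)
    # The transaction resource stays opaque: no URI is handed back, since it
    # may already be gone by the time the client could dereference it.
    return jsonify({"status": "accepted"}), 201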
RFC 2616 is obsolete. Please read the RFC 723x series instead, in particular https://datatracker.ietf.org/doc/html/rfc7231#section-4.3.3.

Avoid duplicate POSTs with REST

I have been using POST in a REST API to create objects. Every once in a while, the server will create the object, but the client will be disconnected before it receives the 201 Created response. The client only sees a failed POST request, and tries again later, and the server happily creates a duplicate object...
Others must have had this problem, right? But I google around, and everyone just seems to ignore it.
I have 2 solutions:
A) Use PUT instead, and create the (GU)ID on the client.
B) Add a GUID to all objects created on the client, and have the server enforce their UNIQUE-ness.
A doesn't match existing frameworks very well, and B feels like a hack. How does other people solve this, in the real world?
Edit:
With Backbone.js, you can set a GUID as the id when you create an object on the client. When it is saved, Backbone will make a PUT request. Make your REST backend handle PUT to non-existing ids, and you're set.
Another solution that's been proposed for this is POST Once Exactly (POE), in which the server generates single-use POST URIs that, when used more than once, will cause the server to return a 405 response.
The downsides are that 1) the POE draft was allowed to expire without any further progress on standardization, and thus 2) implementing it requires changes to clients to make use of the new POE headers, and extra work by servers to implement the POE semantics.
By googling you can find a few APIs that are using it though.
Another idea I had for solving this problem is that of a conditional POST, which I described and asked for feedback on here.
There seems to be no consensus on the best way to prevent duplicate resource creation in cases where unique URI generation cannot be delegated to the client (so PUT can't be used) and hence POST is needed.
I always use B -- detection of dups due to whatever problem belongs on the server side.
Detection of duplicates is a kludge, and can get very complicated. Genuine distinct but similar requests can arrive at the same time, perhaps because a network connection is restored. And repeat requests can arrive hours or days apart if a network connection drops out.
All of the discussion of identifiers in the other answers has the goal of giving an error in response to duplicate requests, but this will normally just incite a client to get or generate a new id and try again.
A simple and robust pattern to solve this problem is as follows: server applications should store all responses to unsafe requests; then, if they see a duplicate request, they can repeat the previous response and do nothing else. Do this for all unsafe requests and you will solve a bunch of thorny problems. Repeated DELETE requests will get the original confirmation, not a 404 error. Repeated POSTs do not create duplicates. Repeated updates do not overwrite subsequent changes, etc.
"Duplicate" is determined by an application-level id (that serves just to identify the action, not the underlying resource). This can be either a client-generated GUID or a server-generated sequence number. In this second case, a request-response should be dedicated just to exchanging the id. I like this solution because the dedicated step makes clients think they're getting something precious that they need to look after. If they can generate their own identifiers, they're more likely to put this line inside the loop and every bloody request will have a new id.
Using this scheme, all POSTs are empty, and POST is used only for retrieving an action identifier. All PUTs and DELETEs are fully idempotent: successive requests get the same (stored and replayed) response and cause nothing further to happen. The nicest thing about this pattern is its Kung-Fu (Panda) quality. It takes a weakness: the propensity for clients to repeat a request any time they get an unexpected response, and turns it into a force :-)
I have a little Google doc here if anyone cares.
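A minimal sketch of that store-and-replay pattern: every unsafe request carries an action id, the server keeps the response it produced for that id, and it simply repeats that response when the same id shows up again. The route, in-memory stores, and response shape are assumptions:

import itertools
from flask import Flask, jsonify, request

app = Flask(__name__)
responses = {}                    # {action_id: (status, body)} - persist this in real life
persons = {}                      # {person_id: record}
next_person_id = itertools.count(1)

@app.route("/actions/<action_id>/person", methods=["PUT"])
def create_person(action_id):
    if action_id in responses:                    # duplicate: replay, change nothing
        status, body = responses[action_id]
        return jsonify(body), status
    person_id = str(next(next_person_id))
    persons[person_id] = request.get_json()
    body = {"person_id": person_id, **persons[person_id]}
    responses[action_id] = (201, body)            # remember what we answered
    return jsonify(body), 201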
You could try a two step approach. You request an object to be created, which returns a token. Then in a second request, ask for a status using the token. Until the status is requested using the token, you leave it in a "staged" state.
If the client disconnects after the first request, they won't have the token and the object stays "staged" indefinitely or until you remove it with another process.
If the first request succeeds, you have a valid token and you can grab the created object as many times as you want without it recreating anything.
There's no reason why the token can't be the ID of the object in the data store. You can create the object during the first request. The second request really just updates the "staged" field.
Server-issued Identifiers
If you are dealing with the case where it is the server that issues the identifiers, create the object in a temporary, staged state. (This is an inherently non-idempotent operation, so it should be done with POST.) The client then has to do a further operation on it to transfer it from the staged state into the active/preserved state (which might be a PUT of a property of the resource, or a suitable POST to the resource).
Each client ought to be able to GET a list of their resources in the staged state somehow (maybe mixed with other resources) and ought to be able to DELETE resources they've created if they're still just staged. You can also periodically delete staged resources that have been inactive for some time.
You do not need to reveal one client's staged resources to any other client; they need exist globally only after the confirmatory step.
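A sketch of that staged-state flow for server-issued identifiers: POST creates the object in a "staged" state, a follow-up PUT on its status confirms it, and staged leftovers can be listed and cleaned up. Route names and the in-memory store are assumptions:

import itertools
from flask import Flask, jsonify, request

app = Flask(__name__)
objects = {}                         # {id: {"data": ..., "state": ...}}
ids = itertools.count(1)

@app.route("/objects", methods=["POST"])
def create_staged():
    oid = str(next(ids))
    objects[oid] = {"data": request.get_json(), "state": "staged"}
    return jsonify({"id": oid, "state": "staged"}), 201, {"Location": f"/objects/{oid}"}

@app.route("/objects/<oid>/state", methods=["PUT"])
def confirm(oid):
    if oid not in objects:
        return "", 404
    # Idempotent: confirming twice leaves the object active either way.
    objects[oid]["state"] = "active"
    return jsonify(objects[oid]), 200

@app.route("/objects", methods=["GET"])
def list_staged():
    staged = {k: v for k, v in objects.items() if v["state"] == "staged"}
    return jsonify(staged)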
Client-issued Identifiers
The alternative is for the client to issue the identifiers. This is mainly useful where you are modeling something like a filestore, as the names of files are typically significant to user code. In this case, you can use PUT to do the creation of the resource as you can do it all idempotently.
The down-side of this is that clients are able to create IDs, and so you have no control at all over what IDs they use.
There is another variation of this problem. Having the client generate a unique id means we are asking the customer to solve this problem for us. Consider an environment with publicly exposed APIs and hundreds of clients integrating with them. Practically, we have no control over the client code or the correctness of its implementation of uniqueness. Hence, it would probably be better to have intelligence for understanding whether a request is a duplicate. One simple approach here would be to calculate and store a checksum of every request based on attributes from the user input, define some time threshold (x minutes), and compare every new request from the same client against the ones received in the past x minutes. If the checksum matches, it could be a duplicate request, so add some challenge mechanism for the client to resolve this.
If a client is making two different requests with the same parameters within x minutes, it might be worth ensuring that this is intentional, even if the request comes with a unique request id.
This approach may not be suitable for every use case; however, I think it will be useful for cases where the business impact of executing the second call is high and can potentially cost the customer. Consider a payment processing engine where an intermediate layer ends up retrying failed requests, or a customer double-clicks, resulting in the client layer submitting two requests.
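A sketch of that checksum idea: hash the business attributes of each request and flag anything from the same client whose hash was already seen within the last x minutes. The window length, attribute set, and in-memory store are assumptions for illustration:

import hashlib
import json
import time

WINDOW_SECONDS = 5 * 60
recent = {}     # {(client_id, checksum): last_seen_timestamp}

def is_probable_duplicate(client_id, payload):
    checksum = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode("utf-8")
    ).hexdigest()
    now = time.time()
    # Drop entries that have fallen out of the time window.
    for key, seen in list(recent.items()):
        if now - seen > WINDOW_SECONDS:
            del recent[key]
    key = (client_id, checksum)
    duplicate = key in recent
    recent[key] = now
    return duplicate

print(is_probable_duplicate("client-7", {"amount": 100, "to": "acct-1"}))  # False
print(is_probable_duplicate("client-7", {"amount": 100, "to": "acct-1"}))  # True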
Design
Automatic (without the need to maintain a manual black list)
Memory optimized
Disk optimized
Algorithm [solution 1]
A REST request arrives with a UUID.
The web server checks whether the UUID is in the memory cache blacklist table (if yes, answer 409).
The server writes the request to the DB (if it was not filtered by ETS).
The DB checks whether the UUID is repeated before writing.
If yes, it answers 409 to the server, and the UUID is blacklisted in the memory cache and on disk.
If not repeated, it writes to the DB and answers 200.
Algorithm [solution 2]
A REST request arrives with a UUID.
Save the UUID in the memory cache table (expires after 30 days).
The web server checks whether the UUID is in the memory cache blacklist table [return HTTP 409].
The server writes the request to the DB [return HTTP 200].
In solution 2, the memory cache blacklist is created ONLY in memory, so the DB will never be checked for duplicates. The definition of 'duplicate' is "any request that comes in within a period of time". We also replicate the memory cache table to disk, so we can fill it before starting up the server.
In solution 1, there will never be a duplicate, because we always check the disk ONLY once before writing, and if it's a duplicate, the next round trips will be handled by the memory cache. This solution is better for BigQuery, because requests there are not idempotent, but it's also less optimized.
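A rough sketch of solution 2: remember recently seen UUIDs in an expiring in-memory table and reject repeats with 409 before ever touching the database. The TTL, the /things route, and the storage are assumptions for illustration:

import time
from flask import Flask, jsonify, request

app = Flask(__name__)
TTL_SECONDS = 30 * 24 * 3600            # "expires after 30 days"
seen_uuids = {}                         # {uuid: first_seen_timestamp}
db_rows = []                            # stand-in for the real DB write

@app.route("/things", methods=["POST"])
def create_thing():
    body = request.get_json()
    request_uuid = body["uuid"]
    now = time.time()
    first_seen = seen_uuids.get(request_uuid)
    if first_seen is not None and now - first_seen < TTL_SECONDS:
        return jsonify({"error": "duplicate request"}), 409
    seen_uuids[request_uuid] = now      # blacklist it for future calls
    db_rows.append(body)                # write without re-checking the DB
    return jsonify({"status": "stored"}), 200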
HTTP response code for POST when resource already exists