CQRS and REST HATEOAS mismatch

Suppose I have a model Foo.
One business case is simply to create an instance of Foo, so there is a corresponding CreateFooCommand in my model, triggered by invoking a POST request to a given REST endpoint.
There are of course other Commands too.
But now there is a ViewModel, which is derived from my DomainModel. It's simply a SQL table with raw data - each Foo instance in the DomainModel has a corresponding derived ViewModel instance. The two have different IDs (the DomainModel has a DomainID; the ViewModel's ID is simply a long value).
Now: should I even care about HATEOAS in such a case? In a proper REST implementation, I should at least return a Location URL in the response header. But since my ViewModel is only derived from the DomainModel, should I care? I don't even have the ViewModel's ID at the time my DomainModel instance is created.

Since CQRS means that Queries are separated from Commands, you may not be able to perform a Query right away, because the Command may not yet have been applied (perhaps it never will).
In order to reconcile that with HATEOAS, instead of returning 200 OK from the POST request, the service can return 202 Accepted:
The request has been accepted for processing, but the processing has not been completed. The request might or might not eventually be acted upon, as it might be disallowed when processing actually takes place. There is no facility for re-sending a status code from an asynchronous operation such as this.
The 202 response is intentionally non-committal. Its purpose is to allow a server to accept a request for some other process (perhaps a batch-oriented process that is only run once per day) without requiring that the user agent's connection to the server persist until the process is completed. The entity returned with this response SHOULD include an indication of the request's current status and *either a pointer to a status monitor or some estimate of when the user can expect the request to be fulfilled*.
(My emphasis)
That pointer could be a link that the client can query to get the status of the Command. When/if the Command completes and the View is updated, that status resource could then contain a link to the view.
This is pretty much a workflow straight out of REST in Practice - very reminiscent of its Restbucks example.
Another option to deal with the ID issue is to generate the ID before accepting the Command - perhaps even asking the client to supply the ID. Read more about such options here.
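A minimal sketch of that 202 flow, assuming hypothetical /foos and /commands/:id endpoints (Express; every name here is illustrative, not from the question):

```typescript
import express from "express";
import { randomUUID } from "node:crypto";

const app = express();
app.use(express.json());

// Command status kept in memory for the sketch; a real system would persist it.
const commands = new Map<string, { status: string; viewUrl?: string }>();

app.post("/foos", (req, res) => {
  const commandId = randomUUID();
  commands.set(commandId, { status: "pending" });
  enqueueCreateFooCommand(commandId, req.body); // hand off to the write side
  // 202 Accepted: the command may or may not be applied later; point the
  // client at a status monitor instead of a Location for the (unknown) view.
  res.status(202).location(`/commands/${commandId}`).json({
    status: "pending",
    links: { status: `/commands/${commandId}` },
  });
});

app.get("/commands/:id", (req, res) => {
  const cmd = commands.get(req.params.id);
  if (!cmd) return res.sendStatus(404);
  // Once the command completes and the view is updated, link to the view.
  res.json(cmd.viewUrl ? { status: "done", links: { view: cmd.viewUrl } } : cmd);
});

function enqueueCreateFooCommand(id: string, payload: unknown): void {
  /* queue the command for the domain model's command handler */
}

app.listen(3000);
```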

As Greg Young explains, CQRS is nothing more than "splitting one object into two". So assume that you have one domain aggregate and it has an id. Now you are saying that your view model has another id. However, you are unable to update your view model unless you have the aggregate id in it as well. From my point of view, your REST POST request should return a result that contains the aggregate id. This is your id; the view model id is of no interest to anyone except the read model storage.
Whether it should return a command status URI, as Mark suggests, is a topic for another discussion. Many CQRS practitioners currently tend to handle commands synchronously to avoid a FE/BE mismatch in case of failure and to give the FE the ability to react to errors on the BE. There is no real win in executing commands asynchronously for a single user. Commands do mutate the state, and in 99% of cases the user needs to know whether the state was mutated properly.
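For contrast, a sketch of that synchronous style, where the POST returns the aggregate id directly and errors surface immediately (the names and the 422 status are assumptions):

```typescript
import express from "express";
import { randomUUID } from "node:crypto";

const app = express();
app.use(express.json());

app.post("/foos", (req, res) => {
  const aggregateId = randomUUID();
  try {
    // Execute the command in-process so the caller learns the outcome at once.
    handleCreateFooCommand(aggregateId, req.body);
    res.status(201).location(`/foos/${aggregateId}`).json({ id: aggregateId });
  } catch (e) {
    res.status(422).json({ error: String(e) }); // FE can react to the BE error
  }
});

function handleCreateFooCommand(id: string, payload: unknown): void {
  /* validate and apply the command against the domain aggregate */
}

app.listen(3000);
```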

Related

How do PUT, POST or PATCH requests differ ultimately?

The data, being sent over a PUT/PATCH/POST request, ultimately ends up in the database.
Now whether we are inserting a new resource or updating or modifying an existing one - it all depends upon the database operation being carried out.
Even if we send a POST and ultimately perform just an update in the database, it makes no difference at all, does it?
Hence, do they actually differ - apart from a purely conceptual point of view?
Hence, do they actually differ - apart from a purely conceptual point of view?
The semantics differ - what the messages mean, and what general purpose components are allowed to assume is going on.
The meanings are just those described by the references listed in the HTTP method registry. Today, that means that POST and PUT are described by HTTP semantics; PATCH is described by RFC 5789.
Loosely: PUT means that the request content is a proposed replacement for the current representation of some resource -- it's the method we would use to upload or replace a single web page if we were using the HTTP protocol to do that.
PATCH means that the request content is a patch document - which is to say a proposed edit to the current representation of some resource. So instead of sending the entire HTML document with PUT, you might instead just send a fix to the spelling error in the title element.
POST is... well, POST is everything else.
POST serves many useful purposes in HTTP, including the general purpose of “this action isn’t worth standardizing.” -- Fielding 2009
The POST method has the fewest constraints on its semantics (which is why we can use it for anything), but the consequence is that the HTTP application itself has to be very conservative with it.
Webber 2011 includes a good discussion of the implications of the fact that HTTP is an application protocol.
Now whether we are inserting a new resource or updating or modifying an existing one - it all depends upon the database operation being carried out.
The HTTP method tells us what the request means - it doesn't place any constraints on how your implementation works.
See Fielding, 2002:
HTTP does not attempt to require the results of a GET to be safe. What it does is require that the semantics of the operation be safe, and therefore it is a fault of the implementation, not the interface or the user of that interface, if anything happens as a result that causes loss of property (money, BTW, is considered property for the sake of this definition).
The HTTP methods are part of the "transfer of documents over a network" domain - ie they are part of the facade that allows us to pretend that the bank/book store/cat video archive you are implementing is just another "web site".
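To make the PUT/PATCH distinction above concrete, a hedged sketch using fetch (the URL and document contents are invented; the PATCH body uses JSON Patch, RFC 6902):

```typescript
// PUT: the content is a full replacement representation of /pages/about.
await fetch("https://example.com/pages/about", {
  method: "PUT",
  headers: { "Content-Type": "text/html" },
  body: "<html><head><title>About Us</title></head><body>...</body></html>",
});

// PATCH: the content is a patch document describing an edit -- here a
// JSON Patch fixing a single field instead of resending the whole document.
await fetch("https://example.com/pages/about", {
  method: "PATCH",
  headers: { "Content-Type": "application/json-patch+json" },
  body: JSON.stringify([{ op: "replace", path: "/title", value: "About Us" }]),
});
```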
It is about the intent of the sender, and from my perspective each method implies different behaviour on the server side.
In a nutshell:
POST: creates a new data entry on the server (especially with REST).
PUT: updates a full data entry on the server (REST) or creates a new one (non-REST). The difference from a POST request is that the client specifies the target location on the server.
PATCH: the client requests a partial update (the id and partial data of the entry are given). The difference from PUT is that the client does not send the full data back to the server, which can save bandwidth.
In general you can use any HTTP request to store data (GET, HEAD, DELETE...), but it is common practice to use POST, PUT, and PATCH for specific, standardized scenarios, so that every developer can understand it later.
They are slightly different, and they map to different concepts of a REST API (which is based on HTTP).
Just imagine that you have some Booking entity, and you perform the following actions on its resources (see the sketch after this list):
POST - creates a new resource. It is not idempotent: if you send the same request twice, two bookings will be stored, and a third request will create a third one. You are updating your DB with every request.
PUT - updates the full representation of a resource. It replaces the full booking object with a new one. It is idempotent: you could send the request ten times and the result will be the same (if the resource wasn't changed between your calls).
PATCH - updates some part of the resource. For example, your booking entity has a date property and you update only this property, replacing the existing date with the new date sent in the request.
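A server-side sketch of those three verbs for the Booking example (Express; the route shapes and the in-memory store are illustrative assumptions):

```typescript
import express from "express";
import { randomUUID } from "node:crypto";

const app = express();
app.use(express.json());

interface Booking { id: string; date: string; guest: string }
const bookings = new Map<string, Booking>();

// POST: not idempotent -- every call stores a new booking with a fresh id.
app.post("/bookings", (req, res) => {
  const booking: Booking = { ...req.body, id: randomUUID() };
  bookings.set(booking.id, booking);
  res.status(201).location(`/bookings/${booking.id}`).json(booking);
});

// PUT: idempotent -- replaces the full representation; repeating it changes nothing.
app.put("/bookings/:id", (req, res) => {
  const booking: Booking = { ...req.body, id: req.params.id };
  bookings.set(booking.id, booking);
  res.json(booking);
});

// PATCH: partial update -- e.g. only the date property.
app.patch("/bookings/:id", (req, res) => {
  const existing = bookings.get(req.params.id);
  if (!existing) return res.sendStatus(404);
  const updated = { ...existing, ...req.body, id: existing.id };
  bookings.set(updated.id, updated);
  res.json(updated);
});

app.listen(3000);
```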
However, for either of the above - who decides whether it is going to be a new resource creation or an update to an existing one? Isn't it the database operation, or something equivalent, that takes care of persistence?
You are mixing different things.
The persistence layer and UI layer are two different things.
The general pattern used in Java is Model-View-Controller.
A REST API has its own concept, and the DB layer has its own purpose. Keep in mind that separating the work of your application into layers is exactly high cohesion - code that is narrow-focused and does one thing and does it well.
In the main part of the answer I described some REST concepts.
The main decision about what the application should do - create a new entity or update an existing one - is made by the developer, and that kind of decision usually lives in the service layer. Many additional concerns are handled there too, like transaction support, filtering of the data from the DB, pagination, etc.
It also depends on how the DB layer is implemented: whether JPA with Hibernate is used, or a JDBC template with custom query execution, and so on.

Which HTTP Verb should I use to claim and lock an item in a job queue?

I plan on using an HTTP REST interface to connect to a Job Control service.
One key operation is to request a computational Job.
The caller does not know the ID of the Job; that is what it will be told.
The job will be marked in the database as locked by the service.
The data needed for processing of the job will be returned to the caller.
Later on, when the caller is done processing the job, it will send the results back via another REST call.
Now it knows the ID of the record to be updated.
The second REST call will update the Job record with the results, change the Job's status, and release the lock.
Only the Success/Fail status needs to be returned.
I am leaning towards using PUT for each operation because no new record is being created; it is being updated in both cases.
Is this proper? Can the first PUT return a large JSON payload with the Job data or does it just return an HTTP status? Should I use a POST instead, even though I am not creating a record, just updating it?
I would have used a GET for the first operation, but a GET is not supposed to change any objects on the service, and I am locking it, which is a change. Is locking a record acceptable in a GET request?
Which HTTP Verb should I use to claim and lock an item in a job queue?
Key idea: a REST API is a facade - your application/service pretends to be an HTTP compliant document store. All of the interesting things that happen are side effects triggered by modifying documents. See Jim Webber, 2011.
With that in mind...
POST is fine. It's okay to use POST.
PUT/PATCH are a good fit for remote authoring: the client fetches your representation of a resource, makes edits to its local copy, and sends you a copy of the representation (PUT) or a patch document describing the changes (PATCH). The server can then apply those edits to its copy, or not.
So for your specific example, I would expect the client to GET a representation of your resource, change the information in that representation from unlocked to locked, and then to PUT the changed representation back to your server. Your server would be expected to update its copy of the representation to match.
It may remind you of a declarative style - the client tells the server what the representation should look like, and it's up to the server to figure out how to do that.
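From the client's side, that flow might look like the following sketch (the job URL, status field, and lockedBy field are invented for illustration):

```typescript
// Fetch the current representation of the job.
const res = await fetch("https://example.com/jobs/42");
const job = await res.json(); // e.g. { id: 42, status: "unlocked", data: {...} }

// Edit the local copy: claim the job.
job.status = "locked";
job.lockedBy = "worker-7";

// PUT the changed representation back; the server decides whether to apply it.
await fetch("https://example.com/jobs/42", {
  method: "PUT",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(job),
});
```

In practice you would pair this PUT with a validator (If-Match, as discussed under the network-loss question below) so that two workers cannot both claim the same job.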
Included for Completeness, NOT Recommended:
The HTTP method registry also includes a method LOCK, with a corresponding UNLOCK. The semantics for these method tokens are defined by the WebDAV specification. If your meaning of LOCK matches that of WebDAV, then using that might be an answer. Note that the specification includes comments like
Any resource that supports the LOCK method MUST, at minimum, support the XML request and response formats defined herein.
Unless you are already in a space where people are expecting to be able to use general-purpose WebDAV clients to interact with your API, that's probably not a good fit.
The HTTP method registry is extendable. So you could define the semantics of your own method token, then push to have it adopted as a standard.

How to handle network connectivity loss in the middle of REST POST request?

REST POST is used to create resources.
Let's say we have resource url
"http://example.com/cars"
We want to create a new car.
We POST to "http://example.com/cars" with a JSON payload containing the car's properties (color, weight, model, etc.).
The server receives the request, creates a new car, and sends a response over the network.
At this point the network fails (let's say a router stops working properly and ignores every packet).
The client fails with a TCP timeout (say, 90 seconds).
The client has no idea whether the car was created or not.
The client also hasn't received the car resource id, so it can't GET it to check whether it was created.
Now what?
How do you handle this?
You can't simply retry creating, because retrying will just create a duplicate (which is bad).
REST POST is used to create resources.
HTTP POST is used for lots of things. REST doesn't particularly care; it just wants resources that support a uniform interface, and hypermedia.
At this point the network fails
Bummer!
Now what? How do you handle this? You can't simply retry creating, because retrying will just create a duplicate (which is bad).
This is a general messaging concern, not directly related to REST. The most common solution is to use the Idempotent Receiver pattern. In short, you need to define your messages so that the receiver has enough information to recognize the request as something that has already been done.
Ideally, this is being supported at the business level.
Idempotent collections of values are often straightforward; we just need to be thinking in terms of sets rather than lists.
Idempotent collections of entities are trickier; if the request includes an identifier for the new entity, or if we can compute one from the data provided, then we can think of our collection as a hash.
If none of those approaches fits, then there's another possibility. Instead of performing an idempotent mutation of the collection, we make the mutation of the collection itself idempotent. Think "compare and swap": we encode into the request information that identifies the current state of the collection; if that state is still current when the request arrives, the mutation is applied. If the condition does not hold, the request becomes a no-op.
Translating this into HTTP, we make a small modification to the protocol for updating the collection resource. First, we GET the current representation, and in the metadata the server provides validators that can be used in subsequent requests. Having obtained the validator, the client evaluates the current representation of the resource to determine if it needs to be changed. If the client decides to make a change, it submits the change with an If-Match or an If-Unmodified-Since header containing the validator. The server, before processing the request, considers the validator, immediately abandoning the request with 412 Precondition Failed if the validator no longer matches.
Thus, if a conditional state-changing request is lost, the client can at its own discretion repeat the request without concern that server will misunderstand the client's intent.
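A sketch of that conditional exchange with fetch, assuming the server emits an ETag validator on the collection (URL and payload invented):

```typescript
// 1. GET the collection and note the validator the server supplies.
const current = await fetch("https://example.com/cars");
const etag = current.headers.get("ETag"); // e.g. '"v17"'
const cars = await current.json();

// 2. Decide locally whether the change is still needed, then send it
//    conditionally; the server rejects it if the state has moved on.
const update = await fetch("https://example.com/cars", {
  method: "POST",
  headers: { "Content-Type": "application/json", "If-Match": etag ?? "*" },
  body: JSON.stringify({ model: "T", color: "black" }),
});

if (update.status === 412) {
  // Precondition Failed: someone else changed the collection first.
  // Re-GET, re-evaluate, and repeat the request only if it is still wanted.
}
```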
Retry it a limited number of times, with increasing delays between the attempts, and make sure the transaction concerned is idempotent.
because retrying will just create a duplicate (which is bad).
It is indeed, and it needs fixing, see above. It should be impossible in your system to create two entries with the same attributes. This is easily accomplished at the database level. You can attain idempotence by having the transaction return the same thing whether the entry already existed or was newly created. Or else just have it return EXISTS if the entry already exists, and adjust your client accordingly.
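A minimal client-side sketch of that advice (the delays and attempt count are arbitrary choices; the server must treat repeats of the same request as a no-op):

```typescript
// Retry an idempotent request a few times with growing delays between attempts.
async function postWithRetry(url: string, body: unknown, attempts = 4) {
  for (let i = 0; i < attempts; i++) {
    try {
      // fetch rejects on network failure (not on HTTP error status),
      // which is exactly the "did my request arrive?" case to retry.
      return await fetch(url, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(body),
      });
    } catch (err) {
      if (i === attempts - 1) throw err; // give up after the last attempt
      await new Promise((r) => setTimeout(r, 500 * 2 ** i)); // 0.5s, 1s, 2s...
    }
  }
  throw new Error("unreachable");
}
```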

Batch processing via REST service

Are there any best practices for performing BATCH operations via REST for POST, PUT, PATCH verbs?
The current paradigm I am following is that the JSON payload is specified in the body for all 3 operations:
a) POST to return the location of the created resource
b) PUT / PATCH return a 201 if the update is successful
For a batch operation, I intend to accept a collection of JSON objects in the payload body but am trying to figure what to return to the client.
While processing the batch, the operation may succeed for some of the items but might fail for others.
Taking this into account, my take is that the best thing to do is to return a collection of objects indicating the Success/Failure status of each item from the payload.
But this deviates from my paradigm outlined in (a) and (b) above.
Instead, does it make sense to return an identifier representing an ID of the Batch operation itself to the client?
The client would then issue a subsequent GET to get the result of the operation it requested.
Does this approach sound reasonable? If so, does it make sense to block the client on the subsequent GET if the operation hasn't completed, OR does it make sense to always return the most current state, i.e. a collection of responses for each of the items that the client requested to process?
Ideas/Thoughts/Suggestions?
Since REST is an architectural style with no necessarily clear "guidelines" and no mandate on how the actions for HTTP verbs should be implemented, clearly there is no right or wrong answer here.
I am looking for a solution that is elegant, natural and intuitive.
REST operations are supposed to be atomic as seen from the outside. That is, if one part of the request fails, then the whole state of the server should revert to the pre-request state and a 4xx or 5xx response returned (so, for example, the request could be repeated in whole without ill effects if it failed the first time). This however has nothing to do with batch operations per se—such a request could be any kind of request.
Batch operations violate a different REST constraint, that of the uniform interface (defined by HTTP's methods and their operation upon a resource at the specified URL).
If you want to do batch operations, give up on trying to call your API RESTful, because you have already lost out on the benefits that REST imparts, and are just lying to yourself.
If you want to retain those benefits, give up on batch operations.
REST is an architectural style.
I implement APIs to always return the result of what was updated. So in the case of a POST it will return the created entity, with PATCH and PUT it will return the updated entity.
Depending on the batch size I would either return an array of what was processed, or alternatively an array of identifiers of what was processed.
If the batch operation is long running, return an identifier for the batch, but make it painfully clear that the endpoint is different from the other endpoints,
ex. if you create a user by posting to
http://somesite.com/users
send batch requests to
http://somesite.com/batch/users
On a GET, return the status of the batch operation while it is still running; on completion, return an array of the records that were updated.
The most important thing is consistency: whatever you choose, always follow the same approach for batch operations throughout the system.
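A sketch of that separation (Express; the in-memory batch store and result shape are assumptions for illustration):

```typescript
import express from "express";
import { randomUUID } from "node:crypto";

const app = express();
app.use(express.json());

type ItemResult = { input: unknown; ok: boolean; id?: string; error?: string };
const batches = new Map<string, { done: boolean; results: ItemResult[] }>();

// Batch endpoint, deliberately distinct from POST /users.
app.post("/batch/users", (req, res) => {
  const batchId = randomUUID();
  batches.set(batchId, { done: false, results: [] });
  processBatch(batchId, req.body as unknown[]); // runs in the background
  res.status(202).location(`/batch/users/${batchId}`).json({ batchId });
});

// While running, report status; once complete, report per-item outcomes.
app.get("/batch/users/:id", (req, res) => {
  const batch = batches.get(req.params.id);
  if (!batch) return res.sendStatus(404);
  res.json(batch.done ? batch.results : { status: "processing" });
});

async function processBatch(id: string, items: unknown[]): Promise<void> {
  const batch = batches.get(id)!;
  for (const input of items) {
    try {
      batch.results.push({ input, ok: true, id: randomUUID() /* create user */ });
    } catch (e) {
      batch.results.push({ input, ok: false, error: String(e) });
    }
  }
  batch.done = true;
}

app.listen(3000);
```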

Avoid duplicate POSTs with REST

I have been using POST in a REST API to create objects. Every once in a while, the server will create the object, but the client will be disconnected before it receives the 201 Created response. The client only sees a failed POST request, and tries again later, and the server happily creates a duplicate object...
Others must have had this problem, right? But I google around, and everyone just seems to ignore it.
I have 2 solutions:
A) Use PUT instead, and create the (GU)ID on the client.
B) Add a GUID to all objects created on the client, and have the server enforce their UNIQUE-ness.
A doesn't match existing frameworks very well, and B feels like a hack. How do other people solve this in the real world?
Edit:
With Backbone.js, you can set a GUID as the id when you create an object on the client. When it is saved, Backbone will do a PUT request. Make your REST backend handle PUT to non-existing IDs, and you're set.
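Outside Backbone, the same idea is a short sketch with fetch: mint the id on the client and create with PUT (URL and payload invented):

```typescript
// The client mints the identifier, so a retry targets the same URL and
// PUT's idempotency makes the duplicate harmless.
const id = crypto.randomUUID();

await fetch(`https://example.com/objects/${id}`, {
  method: "PUT",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ name: "widget" }),
});
// Safe to repeat on timeout: same id, same representation, same result.
```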
Another solution that's been proposed for this is POST Once Exactly (POE), in which the server generates single-use POST URIs that, when used more than once, will cause the server to return a 405 response.
The downsides are that 1) the POE draft was allowed to expire without any further progress on standardization, and thus 2) implementing it requires changes to clients to make use of the new POE headers, and extra work by servers to implement the POE semantics.
By googling you can find a few APIs that are using it though.
Another idea I had for solving this problem is that of a conditional POST, which I described and asked for feedback on here.
There seems to be no consensus on the best way to prevent duplicate resource creation in cases where unique URI generation cannot be put on the client (which would allow PUT) and hence POST is needed.
I always use B -- detection of dups due to whatever problem belongs on the server side.
Detection of duplicates is a kludge, and can get very complicated. Genuine distinct but similar requests can arrive at the same time, perhaps because a network connection is restored. And repeat requests can arrive hours or days apart if a network connection drops out.
All of the discussion of identifiers in the other answers has the goal of giving an error in response to duplicate requests, but this will normally just incite a client to get or generate a new id and try again.
A simple and robust pattern to solve this problem is as follows: Server applications should store all responses to unsafe requests, then, if they see a duplicate request, they can repeat the previous response and do nothing else. Do this for all unsafe requests and you will solve a bunch of thorny problems. Repeat DELETE requests will get the original confirmation, not a 404 error. Repeat POSTS do not create duplicates. Repeated updates do not overwrite subsequent changes etc. etc.
"Duplicate" is determined by an application-level id (that serves just to identify the action, not the underlying resource). This can be either a client-generated GUID or a server-generated sequence number. In this second case, a request-response should be dedicated just to exchanging the id. I like this solution because the dedicated step makes clients think they're getting something precious that they need to look after. If they can generate their own identifiers, they're more likely to put this line inside the loop and every bloody request will have a new id.
Using this scheme, all POSTs are empty, and POST is used only for retrieving an action identifier. All PUTs and DELETEs are fully idempotent: successive requests get the same (stored and replayed) response and cause nothing further to happen. The nicest thing about this pattern is its Kung-Fu (Panda) quality. It takes a weakness: the propensity for clients to repeat a request any time they get an unexpected response, and turns it into a force :-)
I have a little google doc here if anyone cares.
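A server-side sketch of the store-and-replay pattern (Express; the X-Action-Id header name and in-memory reply store are invented for illustration - a real system would persist the stored responses):

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Previous responses to unsafe requests, keyed by the action id.
const replies = new Map<string, { status: number; body: unknown }>();

app.post("/transfers", (req, res) => {
  const actionId = req.get("X-Action-Id"); // client- or server-issued action id
  if (!actionId) return res.status(400).json({ error: "X-Action-Id required" });

  const prior = replies.get(actionId);
  if (prior) {
    // Duplicate: repeat the stored response and do nothing else.
    return res.status(prior.status).json(prior.body);
  }

  const body = performTransfer(req.body); // the actual side effect, once
  replies.set(actionId, { status: 201, body });
  res.status(201).json(body);
});

function performTransfer(payload: unknown) {
  return { received: payload }; // placeholder for the real work
}

app.listen(3000);
```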
You could try a two step approach. You request an object to be created, which returns a token. Then in a second request, ask for a status using the token. Until the status is requested using the token, you leave it in a "staged" state.
If the client disconnects after the first request, they won't have the token and the object stays "staged" indefinitely or until you remove it with another process.
If the first request succeeds, you have a valid token and you can grab the created object as many times as you want without it recreating anything.
There's no reason why the token can't be the ID of the object in the data store. You can create the object during the first request. The second request really just updates the "staged" field.
Server-issued Identifiers
If you are dealing with the case where it is the server that issues the identifiers, create the object in a temporary, staged state. (This is an inherently non-idempotent operation, so it should be done with POST.) The client then has to do a further operation on it to transfer it from the staged state into the active/preserved state (which might be a PUT of a property of the resource, or a suitable POST to the resource).
Each client ought to be able to GET a list of their resources in the staged state somehow (maybe mixed with other resources) and ought to be able to DELETE resources they've created if they're still just staged. You can also periodically delete staged resources that have been inactive for some time.
You do not need to reveal one client's staged resources to any other client; they need exist globally only after the confirmatory step.
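A sketch of that staged life cycle (Express; the endpoint shapes and the staged flag are assumptions):

```typescript
import express from "express";
import { randomUUID } from "node:crypto";

const app = express();
app.use(express.json());

const items = new Map<string, { staged: boolean; data: unknown; owner: string }>();

// Non-idempotent creation lands in a staged state.
app.post("/items", (req, res) => {
  const id = randomUUID();
  items.set(id, { staged: true, data: req.body, owner: "client-1" });
  res.status(201).location(`/items/${id}`).json({ id, staged: true });
});

// The confirmatory step (a PUT of a property of the resource) moves it
// from the staged state into the active/preserved state.
app.put("/items/:id/staged", (req, res) => {
  const item = items.get(req.params.id);
  if (!item) return res.sendStatus(404);
  item.staged = false;
  res.json({ id: req.params.id, staged: false });
});

// A sweeper can periodically DELETE staged items that were never confirmed.

app.listen(3000);
```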
Client-issued Identifiers
The alternative is for the client to issue the identifiers. This is mainly useful where you are modeling something like a filestore, as the names of files are typically significant to user code. In this case, you can use PUT to do the creation of the resource as you can do it all idempotently.
The down-side of this is that clients are able to create IDs, and so you have no control at all over what IDs they use.
There is another variation of this problem. Having the client generate a unique id means we are asking a customer to solve this problem for us. Consider an environment with publicly exposed APIs and hundreds of clients integrating with them. Practically, we have no control over the client code or the correctness of its implementation of uniqueness. Hence, it would probably be better for the server to be able to recognize whether a request is a duplicate. One simple approach is to calculate and store a checksum of every request based on attributes from the user input, define some time threshold (x minutes), and compare every new request from the same client against the ones received in the past x minutes. If the checksum matches, it could be a duplicate request, so add some challenge mechanism for the client to resolve it.
If a client makes two different requests with the same parameters within x minutes, it might be worth ensuring that this is intentional, even if it comes with a unique request id.
This approach may not be suitable for every use case; however, I think it will be useful for cases where the business impact of executing the second call is high and can potentially cost a customer. Consider a payment processing engine where an intermediate layer ends up retrying failed requests, or a customer double-clicks, resulting in the client layer submitting two requests.
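A sketch of the checksum window (Node's crypto; the window length and key derivation are policy choices, and a real implementation would need a canonical serialization rather than plain JSON.stringify):

```typescript
import { createHash } from "node:crypto";

// Checksum of the user-supplied attributes, used to spot likely duplicates
// from the same client inside a time window.
const WINDOW_MS = 5 * 60 * 1000;
const recent = new Map<string, number>(); // checksum -> last-seen timestamp

function isLikelyDuplicate(clientId: string, payload: unknown): boolean {
  const sum = createHash("sha256")
    .update(clientId + JSON.stringify(payload)) // assumes stable key order
    .digest("hex");
  const last = recent.get(sum);
  recent.set(sum, Date.now());
  return last !== undefined && Date.now() - last < WINDOW_MS;
}

// A handler would challenge the client (e.g. 409 plus a confirmation step)
// instead of silently executing a payment twice.
```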
Design
Automatic (without the need to maintain a manual blacklist)
Memory optimized
Disk optimized
Algorithm [solution 1]
The REST request arrives with a UUID.
The web server checks whether the UUID is in the Memory Cache blacklist table (if yes, answer 409).
The server writes the request to the DB (if it was not filtered by ETS, the in-memory cache).
The DB checks whether the UUID is repeated before writing.
If yes, it answers 409 to the server, and the UUID is blacklisted in the Memory Cache and on disk.
If not repeated, the request is written to the DB and answered with 200.
Algorithm [solution 2]
The REST request arrives with a UUID.
The web server checks whether the UUID is in the Memory Cache blacklist table (if yes, answer HTTP 409).
Save the UUID in the Memory Cache table (expires after 30 days).
The server writes the request to the DB (answer HTTP 200).
In solution 2, the blacklist used to filter duplicates exists ONLY in memory, so the DB is never checked for duplicates. The definition of 'duplicate' is "any request that arrives within a period of time". We also replicate the Memory Cache table to disk, so we can fill it before starting up the server.
In solution 1, there will never be a duplicate, because we always check the disk exactly once before writing; if the request is duplicated, subsequent roundtrips are handled by the Memory Cache. This solution is better for BigQuery, because requests there are not idempotent, but it is also less optimized.
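A sketch of solution 1's control flow, assuming the DB enforces a UNIQUE constraint on the UUID column (all names invented):

```typescript
// The memory cache shields the DB after the first duplicate is seen;
// the DB's unique constraint on the UUID is the backstop.
const seen = new Set<string>(); // in-memory blacklist of known UUIDs

async function handleRequest(uuid: string, payload: unknown): Promise<number> {
  if (seen.has(uuid)) return 409; // already blacklisted in memory

  try {
    await insertWithUniqueUuid(uuid, payload); // DB rejects a repeated UUID
    return 200;
  } catch {
    seen.add(uuid); // remember it so later retries never reach the DB
    return 409;
  }
}

async function insertWithUniqueUuid(uuid: string, payload: unknown) {
  /* INSERT ... into a table with a UNIQUE constraint on the uuid column */
}
```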
HTTP response code for POST when resource already exists