Idempotency Keys From the Client's Perspective - rest

Suppose I have an API that calls a downstream service's API, /charge (POST). Suppose that while performing the charge, a timeout occurred at the reverse proxy and I got a 5xx, but the charge actually went through.
In this case, I would respond with a 5xx to my consumer. Now, if the consumer retries with the same idempotency key, the request can succeed, because the downstream service would return a cached copy of the response. But if he uses a different idempotency key when calling my API, he would keep getting 409s, since the payment was already charged.
Here are my two questions:
How does the client know when to retry with the same idempotency key and when to initiate a new request altogether?
(Augmenting the previous question) How does the UI decide when to use a different idempotency key? Does each new request carry a new key, with only the retry logic reusing the same one?
Basically, I am trying to understand idempotency keys from the client's perspective.

A timeout should be retried automatically a few times before a failure response is returned to the user. Thus, if the error is transient, the user won't notice any issue (except possibly a negligible delay in the response).
The originating system should also maintain a log of all requests with their status. That way, if the glitch persists for longer, the system can retry failed requests periodically and give the user a detailed view of the requests they have submitted. This eliminates the need for the user to ever retry a request manually; the system does it on the user's behalf.
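A minimal client-side sketch of that rule, assuming a hypothetical /charge endpoint that honours an Idempotency-Key header: the key is generated once per logical payment and reused across automatic retries; only a genuinely new payment gets a new key.

```python
import time
import uuid

import requests

CHARGE_URL = "https://api.example.com/charge"  # hypothetical downstream endpoint


def charge(amount_cents: int, max_attempts: int = 3) -> requests.Response:
    """Submit one logical charge; automatic retries reuse the same key."""
    idempotency_key = str(uuid.uuid4())  # generated once per logical payment
    for attempt in range(1, max_attempts + 1):
        try:
            response = requests.post(
                CHARGE_URL,
                json={"amount_cents": amount_cents},
                headers={"Idempotency-Key": idempotency_key},
                timeout=5,
            )
            # A 5xx may mean the charge actually went through downstream, so
            # retry with the SAME key; the server can then replay the stored
            # response instead of charging twice.
            if response.status_code < 500:
                return response
        except requests.Timeout:
            pass  # outcome unknown: same story, retry with the same key
        if attempt < max_attempts:
            time.sleep(2 ** attempt)  # simple exponential backoff
    raise RuntimeError("charge still unresolved after retries; surface it to the caller")


# A genuinely new payment calls charge() again, which generates a new key.
```

The UI follows the same rule: every new user action gets a fresh key, and only the retry path (automatic, or an explicit "try again" on the same action) reuses the stored one.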

Related

Should users await a response after the HTTP request in a saga pattern architecture?

I am designing a microservice architecture, using a database-per-service pattern.
Following the example of an Order Service and a Shipping Service: when a user makes an HTTP REST request to the Order Service, it fires an event to notify the Shipping Service. All this happens asynchronously. So what happens to the user experience? I mean, the user needs an immediate response to the HTTP request. How can I handle this scenario?
Respond as soon as you have stored the request.
Part of the point of microservices is that you have a system composed of independently deployable elements that do not require coordination.
If you want a system that is reliable even though the services don't have 100% uptime, then you need to have some form of durable message storage so that the sender and the receiver don't need to be running at the same time.
Therefore, your basic pattern for data from the outside is that the information from the incoming HTTP request is copied, not directly into a running service, but instead into the message store, to be processed by the service at some later time.
In other words, your REST API is a facade in front of your storage, not in front of the service itself.
The actor model may be a useful analogy; information moves around by copying messages into different inboxes, where they are later consumed by the subscribing actor.
From the perspective of the client, the HTTP response is an acknowledgement that the request has been received and recognized as valid. Think "thank you for your order, we'll send you an email when your purchase is ready for pick up."
On the web, we would include in the response links to other useful resources; click here to see the status of your order, click there to see your history of recent orders, and so on.
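A minimal sketch of that facade, assuming Flask and a plain list standing in for the durable message store (in practice a queue or an outbox table); the handler only records the incoming order and immediately answers with 202 plus links, rather than calling into the Order or Shipping service directly.

```python
import uuid

from flask import Flask, jsonify, request

app = Flask(__name__)
message_store = []  # stand-in for durable storage (queue, outbox table, ...)


@app.route("/orders", methods=["POST"])
def accept_order():
    order = request.get_json()
    # Validate and persist the *request*; the services consume it from the
    # store later, on their own schedule.
    order_id = str(uuid.uuid4())
    message_store.append({"id": order_id, "type": "OrderRequested", "body": order})
    body = {
        "message": "Thank you for your order, we'll notify you when it is ready.",
        "links": {
            "status": f"/orders/{order_id}/status",
            "recent-orders": "/orders/recent",
        },
    }
    return jsonify(body), 202, {"Location": f"/orders/{order_id}/status"}
```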

POST/PUT response REST in a CQRS/ES system

I'm implementing a CQRS/ES based system with a RESTful interface which is used by a webapp.
When performing certain actions, e.g. creating a new profile, I need to be able to check certain conditions, such as the uniqueness of the profile ID, or that the person has the right to create a resource under a group. This means I have a couple of options:
Context: POST /profiles { "email": "unique@example.com" }
From my REST API, return 202 from my service with the location of the new resource, where my client can poll for it. In this case, however, how do I handle errors, since the view will not exist and may never exist?
Create a saga on the initial request, then dispatch the event. Once my service creates the view or finds the error, the result is written to the saga. When the saga has completed, return the result to the user.
Of these two options, the second seems more reasonable to me, if also more complex. Is this a viable option for building RESTful request/response models on a CQRS/ES event-sourced backend?
Yes, the second solution seems to fit the business better.
From what I understand of your case, from the DDD point of view, the creation of a user profile is a business process with more than one step (verifying the uniqueness of the profile, creating the profile, and recovering from a duplicate-profile situation). This process acts like an entity: it starts, runs, and ends with a result (success or error). Being an entity, it has an ID and can be viewed as a REST resource. A Saga will be responsible for executing it.
So, in response to the client's request, you send the URI of the process resource, where the client can poll for the status. In case of error, it reads the error message. In case of success, it gets the URI of its profile.
The first solution can still be used if the use case is simpler, i.e. if the command can be executed synchronously and the client gets the final result (error or success) as an immediate response.
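A minimal client-side sketch of that interaction, assuming hypothetical paths (POST /profiles starts the process, the Location header points at the process resource) and a status body with state, profileUri, and error fields:

```python
import time

import requests

BASE = "https://api.example.com"  # hypothetical service


def create_profile(email: str) -> str:
    """Start the profile-creation process and poll it until it finishes."""
    start = requests.post(f"{BASE}/profiles", json={"email": email}, timeout=5)
    start.raise_for_status()
    process_uri = start.headers["Location"]  # e.g. a relative /profile-creation/42

    while True:
        status = requests.get(f"{BASE}{process_uri}", timeout=5).json()
        if status["state"] == "succeeded":
            return status["profileUri"]          # URI of the newly created profile
        if status["state"] == "failed":
            raise RuntimeError(status["error"])  # e.g. "email already in use"
        time.sleep(1)                            # still running; poll again
```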
From my REST API, return 202 from my service with the location of the new resource, where my client can poll for it. In this case, however, how do I handle errors, since the view will not exist and may never exist?
The usual answer here is that, as part of the 202 Accepted response, you include monitoring information:
The representation sent with this response ought to describe the request's current status and point to (or embed) a status monitor that can provide the user with an estimate of when the request will be fulfilled.
In other words, a link to a resource that will change when the accepted request is finally run.
So in describing the protocol, in addition to the resource that you create, you'll also need to document the representation used when you defer the work for later, and the representation used by the monitor.
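As a rough illustration (all field names below are assumptions, not taken from any specification), the deferred-work representation and the monitor's representation might look something like this:

```python
# Body sent with the 202 Accepted response: current status plus a pointer to
# the status monitor the client can poll.
accepted_representation = {
    "state": "pending",
    "estimatedCompletion": "2024-01-01T12:00:05Z",
    "monitor": "/profile-creation/42",
}

# Body served by the monitor once the deferred work has actually run.
monitor_representation = {
    "state": "succeeded",        # or "failed", together with an error description
    "result": "/profiles/9f3c",  # the resource that was eventually created
}
```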
When the saga has completed, return the result to the user.
Depending on the work, that may be overkill.
Which is to say, you are raising two different questions here; one of those is whether the request should be handled synchronously (don't respond until the work is done) or asynchronously (return right away, but give the client the means to monitor progress).
The other question is how the work looks from the business layer. If you are going to need multiple transactions to make the change, and if you may need to "revert" previously committed transactions in some variants of the process, then a saga (or a process manager) makes sense.
Set validation -- the broader term for enforcing an invariant like "uniqueness" -- is awkward. Make sure you study it, and ensure that you and the business understand the impact of a failure.

Handle REST API timeout in time consuming operations

How is it possible to handle timeouts for time-consuming operations in a REST API? Let's take the following scenario as an example:
A client service sends a request to insert a resource through a REST API.
The timeout elapses. The client thinks the insertion failed.
The REST API keeps working and finishes the insertion.
The client is never notified of the insertion, so on its side the status remains "Failed".
I can think of a solution with a message broker: send the orders to a queue and wait until they are resolved.
Any other workaround?
EDIT 1:
The POST-PUT pattern, as suggested in this thread.
A message broker (adds more complexity to the system).
A callback or webhook: pass a return URL in the request that the server API can call to let the client know the work is complete.
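A rough sketch of the callback/webhook option from the list above, on the server side: it assumes the client supplies a hypothetical callback_url field pointing at an endpoint the client exposes, and the server POSTs the result there once the slow insertion finishes.

```python
import threading

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)


def insert_resource(payload: dict) -> str:
    # Placeholder for the slow insertion; returns the new resource's URI.
    return "/resources/123"


@app.route("/resources", methods=["POST"])
def create_resource():
    body = request.get_json()
    callback_url = body.pop("callback_url")  # assumed field supplied by the client

    def work():
        resource_uri = insert_resource(body)
        # Notify the client once the long-running insertion has completed.
        requests.post(callback_url, json={"status": "created", "uri": resource_uri}, timeout=5)

    threading.Thread(target=work, daemon=True).start()  # run outside the request cycle
    return jsonify({"status": "accepted"}), 202
```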
HTTP offers a set of properties for its methods, primarily safety, idempotency, and cacheability. While the first guarantees a client that no data is modified, the second promises that a request can be reissued after a connection problem when the client does not know whether the initial request succeeded and only the response was lost midway. PUT, for example, provides this idempotency guarantee.
A simple POST request to "insert" some data has none of these properties. Furthermore, a server receiving a POST request processes the payload according to its own semantics; the client does not know beforehand whether a resource will be created or the server will simply ignore the request. If the server does create a resource, it informs the client via the Location HTTP response header, which points to the location from which the client can retrieve it.
PUT is usually used only to "update" a resource, though according to the spec it can also be used to create a new resource if one does not yet exist. As with POST, on a successful resource creation the PUT response should include a Location HTTP response header to inform the client that a resource was created.
The POST-PUT creation pattern separates the creation of the URI from the actual persistence of the representation: the client first fires POST requests at the server until a response arrives containing a Location HTTP response header, and that header is then used in a PUT request to actually send the payload. As PUT is idempotent, the client can simply reissue that request until it receives a valid response.
When sending the initial POST, a client can't be sure whether the request reached the server and only the response got lost, or whether the request never made it at all. Since this request is only used to create a new URI (without any content yet), the client may simply reissue it, and in the worst case it merely creates an extra URI that points to nothing. The server may have a cleanup routine that frees unused URIs after a certain amount of time.
Once the client has received the URI, it can use PUT to reliably send the data to the server: as long as it hasn't received a valid response, it can reissue the request over and over until it does.
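A rough client-side sketch of that POST-then-PUT flow, assuming a hypothetical /orders collection and a server that hands out the new (still empty) URI via a relative Location header:

```python
import time

import requests

BASE = "https://api.example.com"  # hypothetical server


def reliably_create(payload: dict, max_attempts: int = 5) -> str:
    # Step 1: POST until we learn the URI. In the worst case a repeat only
    # creates a spare, empty URI that the server can garbage-collect later.
    for attempt in range(max_attempts):
        try:
            r = requests.post(f"{BASE}/orders", timeout=5)
            if "Location" in r.headers:
                target = r.headers["Location"]  # e.g. /orders/42
                break
        except requests.Timeout:
            pass
        time.sleep(2 ** attempt)
    else:
        raise RuntimeError("could not obtain a URI from the server")

    # Step 2: PUT the actual payload. PUT is idempotent, so reissuing it
    # until a valid response arrives is safe.
    for attempt in range(max_attempts):
        try:
            r = requests.put(f"{BASE}{target}", json=payload, timeout=5)
            if r.ok:
                return target
        except requests.Timeout:
            pass
        time.sleep(2 ** attempt)
    raise RuntimeError("payload not confirmed; keep retrying or alert the user")
```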
I therefore do not see the need for a message-oriented middleware (MOM) with brokers and queues in order to guarantee reliable messaging here.
You could also cache the data after a successful insertion, keyed by a previously exchanged request_id or something of that sort. But I believe a message broker with an asynchronous task runner is a much better way to deal with the problem, especially if your request threads are a scarce resource. What I mean is: if you are receiving a steady stream of requests, it is a good idea to answer each one as quickly as possible so that workers remain available for incoming requests.

Correct RESTful verb

I have a resource with conditional operations:
/foos/{id}/authorize
/foos/{id}/cancel
The idea is that authorize changes the status of the resource from saved (the default) to authorized (by a third-party application). The authorize call could return an error from the remote party, or succeed. Once authorized, the resource cannot be authorized again, so this is not an action that can be called over and over.
Cancel occurs when an authorized resource is revoked. Once cancelled, the resource stays cancelled forever.
What's the correct verb in a RESTful world for this kind of operation, considering that it is not safe and cannot be considered idempotent (a second call will return an error like "resource already cancelled"), and that at the same time I'm not creating a new resource, just making a status change to a known resource?
I would use
DELETE /authorization/1234
There's a whole debate around DELETE's idempotence on previously deleted resources. See https://evertpot.com/idempotence-in-http/ and https://leedavis81.github.io/is-a-http-delete-requests-idempotent/
The bottom line here is that idempotence makes sense in a mathematical world where there is always exactly one result, but in HTTP you get two different outcomes -- the server's response and the resource's new state. That makes it difficult to pin down what is idempotent and what is not.
In areas like this, where the HTTP specification is not clear, I recommend pragmatism over dogmatism.
If you really want the client to know if they deleted the resource themselves or if someone else did, then I see no problem responding 404 on a previously deleted resource.
If you don't care, or think that it will never happen (either because there's not enough concurrent access or because all clients always do a GET moments before sending the DELETE), you can happily stick to 204 in all cases.
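A minimal server-side sketch of that pragmatic choice, assuming Flask and an in-memory set standing in for stored authorizations; the TREAT_MISSING_AS_404 flag is only there to show the two stances described above:

```python
from flask import Flask

app = Flask(__name__)
authorizations = {"1234"}    # in-memory stand-in for persisted authorizations
TREAT_MISSING_AS_404 = True  # tell clients when the resource was already gone


@app.route("/authorization/<auth_id>", methods=["DELETE"])
def cancel_authorization(auth_id):
    if auth_id in authorizations:
        authorizations.remove(auth_id)  # revoke ("cancel") the authorization
        return "", 204
    # Already cancelled (or never existed): either signal that to the client,
    # or keep the response uniform.
    return ("", 404) if TREAT_MISSING_AS_404 else ("", 204)
```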

What to do if network fails before POST response can be read?

When accessing a REST service from a client with an unreliable network connection (e.g., a spotty cell network), what are some best practices for handling the case where the connection drops before the response to a POST can be read? Since POSTs are not idempotent, it's unsafe to naively retry. Are there best practices for this? Assume I'm also designing the service end, so there are no constraints on that side of the wire either.
Design a protocol that does not allow a second resource to be created while the client has not yet consumed the first one. For example, after GETting the resource, the client should POST back that it has consumed it, so the service can safely create another one when the next GET arrives. If no confirmation POST arrives, the server answers every subsequent GET by sending the same resource that was created for the first GET (this may be client-specific). This way, you can safely repeat the GET after a predefined timeout interval elapses. (If the number of repeats exceeds a given value, it means you have a permanent network or service error, about which you will have to notify the user.)
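A rough server-side sketch of that protocol, assuming Flask, with per-client state kept in a dict and a placeholder payload standing in for real resource creation:

```python
import uuid

from flask import Flask, jsonify

app = Flask(__name__)
pending = {}  # client_id -> resource handed out but not yet confirmed


@app.route("/resource/<client_id>", methods=["GET"])
def get_resource(client_id):
    # Keep returning the same resource until the client confirms consumption,
    # so a repeated GET after a network failure can never create a second one.
    if client_id not in pending:
        pending[client_id] = {"id": str(uuid.uuid4()), "payload": "..."}
    return jsonify(pending[client_id])


@app.route("/resource/<client_id>/confirm", methods=["POST"])
def confirm_resource(client_id):
    # The confirmation tells the server it may create a fresh resource on the
    # next GET from this client.
    pending.pop(client_id, None)
    return "", 204
```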