PATCH when the resource location is not known - rest

I am building a REST API and want to implement PATCH for a job manager monitoring tool.
All my clients know the job manager's job ID, which is not unique: the job manager resets its job IDs from time to time (starting from 1 again). This happens at random intervals (months or days apart) for different reasons.
I want to let the job manager send me updates about a job, but I don't want it to first do a GET to find out the job's unique ID (let's say DBid) and then do a PATCH /jobs/:DBid, for performance and slow-network reasons: having to wait for the GET could block the job manager, which is critical.
Selecting the latest job with that job manager ID will return the right job. But how do I model this in a REST API?

You need the job's internal id to be in a 'pot' on the server. I've worked on this type of thing, where a client needs to know the internal id of a resource that changes only very infrequently. The usual solution for that is a cache service, but here it sounds like you need a 'pot' on the server where the id is stored.
The endpoint handling the PATCH knows where the pot is on the server so can load the id from it. A simple flat file perhaps.
If the manager needs to change the id, whatever process handles that change, changes what's in the pot. So the PATCH endpoint is always getting the correct id from the pot.
You can add synchronisation for accessing the pot in case a PATCH comes in while the manager is updating the id.
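A minimal Python sketch of this 'pot' idea, with a flat file and a lock. All names and the file path are illustrative, not from the original question:

```python
import json
import os
import tempfile
import threading

# Hypothetical 'pot': a flat file holding the current internal DB id
# (DBid) that the PATCH endpoint should target.
POT_PATH = os.path.join(tempfile.gettempdir(), "job_pot.json")
_pot_lock = threading.Lock()

def update_pot(db_id):
    """Called by whatever process creates or rotates the job."""
    with _pot_lock:
        with open(POT_PATH, "w") as f:
            json.dump({"db_id": db_id}, f)

def read_pot():
    """Called by the PATCH handler to resolve the real id."""
    with _pot_lock:
        with open(POT_PATH) as f:
            return json.load(f)["db_id"]

def handle_patch(update):
    """Sketch of a handler for PATCH /jobs/current: the server resolves
    DBid itself, so the client never needs a preliminary GET."""
    db_id = read_pot()
    # ...apply `update` to the job row identified by db_id...
    return {"patched_db_id": db_id}
```

The lock covers the case where a PATCH arrives while the manager is rotating the id.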

Related

HTTP GET for 'background' job creation and acquiring

I'm designing an API for a job scheduler. There is one scheduler with a set of resources and DB tables for them. There are also multiple 'workers' that request 'jobs' from the scheduler. A worker can't create a job; it can only request one. The job must be calculated on the server side. A job is also a dynamic entity, calculated from multiple DB tables and the current time. There is no 'job' table.
In general this system is very similar to a task queue, but without the queue. I need a method for a worker to request the next task; that task should be calculated and assigned to that agent.
Is it OK to use GET verb to retrieve and 'lock' job for the specific worker?
In terms of resources this query does not modify anything; only internal DB state is updated. To the client it looks like fetching records one by one. It doesn't know about the internal modifications.
In pure REST style I should probably define a job table and a CRUD API for it. Then I would need some auxiliary service to POST jobs to that table. Each agent would then list jobs using GET and lock one using PATCH. That approach requires multiple potential retries due to race conditions (a job can already be locked by another agent). It also looks a little complicated if I need to assign a job to a specific agent based on server-side logic; in that case I need to implement some check logic on the client side to iterate through jobs based on the different responses.
This approach looks complicated.
Is it OK to use GET verb to retrieve and 'lock' job for the specific worker?
Maybe? But probably not.
The important thing to understand about GET is that it is safe
The purpose of distinguishing between safe and unsafe methods is to
allow automated retrieval processes (spiders) and cache performance
optimization (pre-fetching) to work without fear of causing harm. In
addition, it allows a user agent to apply appropriate constraints on
the automated use of unsafe methods when processing potentially
untrusted content.
If aggressive cache performance optimization would make a mess in your system, then GET is not the http method you want triggering that behavior.
If you were designing your client interactions around resources, then you would probably have something like a list of jobs assigned to a worker. Reading the current representation of that resource doesn't require that a server change it, so GET is completely appropriate. And of course the server could update that resource for its own reasons at any time.
Requests to modify that resource should not be safe. For instance, if the client is going to signal that some job was completed, that should be done via an unsafe method (POST/PUT/PATCH/DELETE/...)
I don't have such a resource. It's an ephemeral resource that is spread across the tables. There is no DB table for it and no ID column to update that job. Why I don't have such a table is another question; it's a current requirement and limitation.
Fair enough, though the main lesson still stands.
Another way of thinking about it is to think about failure. The network is unreliable. In a distributed environment, the client cannot distinguish a lost request from a lost response. All it knows is that it didn't receive an acknowledgement for the request.
When you use GET, you are implicitly telling the client that it is safe (there's that word again) to resend the request. Not only that, but you are also implicitly telling any intermediate components that it is safe to repeat the request.
If there are no adverse effects to handling multiple copies of the same request, the GET is fine. But if processing multiple copies of the same request is expensive, then you should probably be using POST instead.
It's not required that the GET handler be safe -- the standard only describes the semantics of the messages; it doesn't constrain the implementation at all. But any loss of property incurred is properly understood to be the responsibility of the server.

How to keep state consistent across distributed systems

When building distributed systems, it must be ensured that the client and the server eventually end up with a consistent view of the data they are operating on, i.e. they never get out of sync. Extra care is needed because the network cannot be considered reliable: in the case of a network failure, the client never knows whether the operation was successful, and may decide to retry the call.
Consider a microservice that exposes a simple CRUD API, and an unbounded set of clients, maintained in-house by the same team, by different teams, and also by different companies.
In the example, the client requests the creation of a new entity, which the microservice successfully creates and persists, but the network fails and the client connection times out. The client will most probably retry, unknowingly persisting the same entity a second time. Here is one possible solution I came up with:
Use client-generated identifier to prevent duplicate post
This could mean the primary key as it is, the client half of a client-and-server-generated composite key, or a token issued by the service. The service would either persist the entity, or reply with an OK message if an entity with that identifier is already present.
But there is more to this: what if the client gives up after a network failure (but the entity got persisted), mutates its internal view of the entity, and later decides to persist it in the service with the same id? At this point, and generally, would it be reasonable for the service to just silently:
Update the existing entity with the state that client posted
Or should the service answer with some more specific status code about what happened? The point is, the developer of the service can't really influence the clients' design decisions.
So, what are some sensible practices to keep the state consistent across distributed systems and avoid most common pitfalls in the case of network and system failure?
There are some things that you can do to minimize the impact of the client-server out-of-sync situation.
The first measure that you can take is to let the client generate the entity IDs, for example by using GUIDs. This prevents the server from generating a new entity every time the client retries a CreateEntityCommand.
In addition, you can make the command handling idempotent. This means that if the server receives a second CreateEntityCommand, it just silently ignores it (i.e. it does not throw an exception). This depends on the use case; some commands cannot be made idempotent (like updateEntity).
Another thing that you can do is to de-duplicate commands. This means that every command that you send to a server must be tagged with a unique ID. This can also be a GUID. When the server receives a command with an ID that it has already processed, it ignores it and gives a positive response (i.e. 200), perhaps including some meta-information noting that the command was already processed. The command de-duplication can be placed on top of the stack, as a separate layer, independent of the domain (i.e. in front of the Application layer).
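The de-duplication layer described above can be sketched as a wrapper around any command handler. This is an illustrative Python sketch, assuming each command carries a client-supplied GUID; the function names are invented:

```python
import functools

_processed = {}  # command_id -> stored response

def deduplicated(handler):
    """Wrap a command handler so that a repeated command id replays
    the original response instead of re-executing the handler."""
    @functools.wraps(handler)
    def wrapper(command_id, payload):
        if command_id in _processed:
            # Positive response plus meta-information about the repeat.
            response = dict(_processed[command_id])
            response["duplicate"] = True
            return response
        response = handler(command_id, payload)
        _processed[command_id] = response
        return response
    return wrapper

@deduplicated
def create_entity(command_id, payload):
    # Pretend to persist the entity; illustrative only.
    return {"status": 200, "entity": payload}
```

Because the wrapper sits in front of the handler, it is independent of the domain logic, as the answer suggests.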

Windows Workflow Correlation with Workflow Services

I have a locally hosted Windows Workflow (4.5) site running on App Fabric. The workflow is very simple at present, consisting of two workflow services: the first saves a correlation based on a GUID (DB-generated), and the second just receives the same object with the GUID in it and is set up to retrieve the GUID for correlation.
I have a problem with the second part of my workflow, apparently, not being called. The calling site returns with this error:
The operation did not complete within the allotted timeout of 00:01:00. The time allotted to this operation may have been a portion of a longer timeout.
Now, what is puzzling is that if I intentionally put in a correlation id (GUID) that is not correct, the workflow returns saying there is no matching process. When the correlation identifier is correct, it times out.
The correlation keys are the same at both points, and the workflow is in the idle state in IIS and App Fabric. App Fabric shows the above error.
Any ideas?

Avoid duplicate POSTs with REST

I have been using POST in a REST API to create objects. Every once in a while, the server will create the object, but the client will be disconnected before it receives the 201 Created response. The client only sees a failed POST request, and tries again later, and the server happily creates a duplicate object...
Others must have had this problem, right? But I google around, and everyone just seems to ignore it.
I have 2 solutions:
A) Use PUT instead, and create the (GU)ID on the client.
B) Add a GUID to all objects created on the client, and have the server enforce their UNIQUE-ness.
A doesn't match existing frameworks very well, and B feels like a hack. How do other people solve this in the real world?
Edit:
With Backbone.js, you can set a GUID as the id when you create an object on the client. When it is saved, Backbone will do a PUT request. Make your REST backend handle PUT to non-existing IDs, and you're set.
Another solution that's been proposed for this is POST Once Exactly (POE), in which the server generates single-use POST URIs that, when used more than once, will cause the server to return a 405 response.
The downsides are that 1) the POE draft was allowed to expire without any further progress on standardization, and thus 2) implementing it requires changes to clients to make use of the new POE headers, and extra work by servers to implement the POE semantics.
By googling you can find a few APIs that are using it though.
Another idea I had for solving this problem is that of a conditional POST, which I described and asked for feedback on here.
There seems to be no consensus on the best way to prevent duplicate resource creation in cases where unique URI generation cannot be left to the client (ruling out PUT) and hence POST is needed.
I always use B -- detection of dups due to whatever problem belongs on the server side.
Detection of duplicates is a kludge, and can get very complicated. Genuine distinct but similar requests can arrive at the same time, perhaps because a network connection is restored. And repeat requests can arrive hours or days apart if a network connection drops out.
All of the discussion of identifiers in the other answers has the goal of returning an error in response to duplicate requests, but this will normally just prompt a client to get or generate a new id and try again.
A simple and robust pattern to solve this problem is as follows: server applications should store all responses to unsafe requests; then, if they see a duplicate request, they can repeat the previous response and do nothing else. Do this for all unsafe requests and you will solve a bunch of thorny problems: repeat DELETE requests will get the original confirmation, not a 404 error; repeat POSTs do not create duplicates; repeated updates do not overwrite subsequent changes; etc.
"Duplicate" is determined by an application-level id (that serves just to identify the action, not the underlying resource). This can be either a client-generated GUID or a server-generated sequence number. In this second case, a request-response should be dedicated just to exchanging the id. I like this solution because the dedicated step makes clients think they're getting something precious that they need to look after. If they can generate their own identifiers, they're more likely to put this line inside the loop and every bloody request will have a new id.
Using this scheme, all POSTs are empty, and POST is used only for retrieving an action identifier. All PUTs and DELETEs are fully idempotent: successive requests get the same (stored and replayed) response and cause nothing further to happen. The nicest thing about this pattern is its Kung-Fu (Panda) quality. It takes a weakness: the propensity for clients to repeat a request any time they get an unexpected response, and turns it into a force :-)
I have a little google doc here if any-one cares.
You could try a two step approach. You request an object to be created, which returns a token. Then in a second request, ask for a status using the token. Until the status is requested using the token, you leave it in a "staged" state.
If the client disconnects after the first request, they won't have the token and the object stays "staged" indefinitely or until you remove it with another process.
If the first request succeeds, you have a valid token and you can grab the created object as many times as you want without it recreating anything.
There's no reason why the token can't be the ID of the object in the data store. You can create the object during the first request. The second request really just updates the "staged" field.
Server-issued Identifiers
If you are dealing with the case where it is the server that issues the identifiers, create the object in a temporary, staged state. (This is an inherently non-idempotent operation, so it should be done with POST.) The client then has to do a further operation on it to transfer it from the staged state into the active/preserved state (which might be a PUT of a property of the resource, or a suitable POST to the resource).
Each client ought to be able to GET a list of their resources in the staged state somehow (maybe mixed with other resources) and ought to be able to DELETE resources they've created if they're still just staged. You can also periodically delete staged resources that have been inactive for some time.
You do not need to reveal one client's staged resources to any other client; they need exist globally only after the confirmatory step.
Client-issued Identifiers
The alternative is for the client to issue the identifiers. This is mainly useful where you are modeling something like a filestore, as the names of files are typically significant to user code. In this case, you can use PUT to do the creation of the resource as you can do it all idempotently.
The down-side of this is that clients are able to create IDs, and so you have no control at all over what IDs they use.
There is another variation of this problem. Having the client generate a unique id means we are asking the customer to solve this problem for us. Consider an environment with publicly exposed APIs and hundreds of clients integrating with them. Practically, we have no control over the client code or the correctness of its implementation of uniqueness. Hence, it would probably be better to have the intelligence to recognize whether a request is a duplicate. One simple approach is to calculate and store a checksum of every request based on attributes of the user input, define a time threshold (x minutes), and compare every new request from the same client against those received in the past x minutes. If the checksum matches, it could be a duplicate request, so add some challenge mechanism for the client to resolve it.
If a client makes two different requests with the same parameters within x minutes, it might be worth ensuring that this is intentional, even if each comes with a unique request id.
This approach may not be suitable for every use case; however, I think it will be useful for cases where the business impact of executing the second call is high and can potentially cost a customer. Consider a payment processing engine where an intermediate layer ends up retrying failed requests, or a customer double-clicks, resulting in the client layer submitting two requests.
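The checksum-within-a-window check might look like this. A hedged sketch: the canonicalization and the 5-minute window are assumptions, not from the original answer:

```python
import hashlib
import time

_seen = {}  # (client_id, checksum) -> timestamp of last sighting
WINDOW_SECONDS = 300  # the 'x minutes' threshold; assumed value

def request_checksum(payload):
    # Canonicalize by sorting keys so field order doesn't matter.
    canonical = "|".join(f"{k}={payload[k]}" for k in sorted(payload))
    return hashlib.sha256(canonical.encode()).hexdigest()

def looks_like_duplicate(client_id, payload, now=None):
    """True if the same client sent an identical request in the window."""
    now = now if now is not None else time.time()
    key = (client_id, request_checksum(payload))
    previous = _seen.get(key)
    _seen[key] = now
    return previous is not None and now - previous < WINDOW_SECONDS
```

A positive result would trigger the challenge mechanism rather than an outright rejection, since identical requests can be legitimate.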
Design
Automatic (without the need to maintain a manual black list)
Memory optimized
Disk optimized
Algorithm [solution 1]
1. REST request arrives with a UUID.
2. Web server checks if the UUID is in the Memory Cache blacklist table (if yes, answer 409).
3. Server writes the request to the DB (if it was not filtered by ETS).
4. DB checks if the UUID is repeated before writing.
5. If yes, answer 409 to the server, and blacklist the UUID in the Memory Cache and on disk.
6. If not repeated, write to the DB and answer 200.
Algorithm [solution 2]
1. REST request arrives with a UUID.
2. Save the UUID in the Memory Cache table (expires after 30 days).
3. Web server checks if the UUID is in the Memory Cache blacklist table [return HTTP 409].
4. Server writes the request to the DB [return HTTP 200].
In solution 2, the blacklist lives ONLY in memory, so the DB is never checked for duplicates. The definition of 'duplicate' here is 'any request that arrives within the time window'. We also replicate the Memory Cache table on disk, so we can refill it before starting up the server.
In solution 1, there will never be a duplicate, because we always check the disk (ONLY once) before writing; if it is a duplicate, subsequent roundtrips are handled by the Memory Cache. This solution is better for BigQuery, because requests there are not idempotent, but it is also less optimized.

Obtain ServiceDeploymentId in TrackingParticipant

In WF4, I've created a descendant of TrackingParticipant. In the Track method, record.InstanceId gives me the GUID of the workflow instance.
I'm using the SqlWorkflowInstanceStore for persistence. By default records are automatically deleted from the InstancesTable when the workflow completes. I want to keep it that way to keep the transaction database small.
This creates a problem for reporting, though. My TrackingParticipant will log the instance ID to a reporting table (along with other tracking information), but I'll want to join to the ServiceDeploymentsTable. If the workflow is complete, that GUID won't be in the InstancesTable, so I won't be able to look up the ServiceDeploymentId.
How can I obtain the ServiceDeploymentId in the TrackingParticipant? Alternately, how can I obtain it in the workflow to add it to a CustomTrackingRecord?
You can't get the ServiceDeploymentId in the TrackingParticipant. Basically the ServiceDeploymentId is an internal detail of the SqlWorkflowInstanceStore.
I would either set the SqlWorkflowInstanceStore to not delete the workflow instance upon completion, and do so myself at some later point in time after saving the ServiceDeploymentId with the InstanceId.
An alternative is to use auto cleanup with the SqlWorkflowInstanceStore and retrieve the ServiceDeploymentId when the first tracking record is generated. At that point the workflow is not complete, so the original instance record is still there.