How do you ensure consistent client reads in an eventually consistent system? (CQRS)

I'm digging into CQRS and I am looking for articles on how to handle client reads in an eventually consistent system. Consider, for example, a web shop where users can add items to their cart. How can the client display the items in the cart if the actual processing of the "AddItemToCart" command is done asynchronously? I understand the principles of dispatching commands asynchronously and updating the read model asynchronously based on domain events, but I fail to see how this is handled from the client's perspective.

There are a few different ways of doing it:
Wait at the user until consistent
Just poll the server until the read model has been updated. This is similar to what Ben showed.
Ensure consistency through 2PC
You have a queue that supports DTC, and your commands are put there first. They are then executed, the events are sent, and the read model is updated, all inside a single transaction. You have not actually gained anything with this method, though, so don't do it this way.
Fool the client
Place the read models in local storage on the client and update them when the corresponding event arrives -- but you were expecting this event anyway, so you had already updated the JavaScript view of the shopping cart.
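As a rough illustration of the "fool the client" approach, here is a minimal optimistic-update sketch in C#; the CartView and CartClient types and the command dispatch are hypothetical, invented for this example:
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Hypothetical optimistic client-side read model: apply the expected change
// locally as soon as the command is dispatched, and reconcile later when the
// real event (or a failure) comes back from the server.
public class CartView
{
    private readonly List<string> items = new List<string>();
    public IReadOnlyList<string> Items => items;
    public void ApplyItemAdded(string itemId) => items.Add(itemId);
    public void RevertItemAdded(string itemId) => items.Remove(itemId);
}

public class CartClient
{
    private readonly CartView view = new CartView();

    public async Task AddItemToCartAsync(string itemId)
    {
        view.ApplyItemAdded(itemId);            // update the local view immediately
        try
        {
            await SendCommandAsync("AddItemToCart", itemId);  // dispatch the async command
        }
        catch (Exception)
        {
            view.RevertItemAdded(itemId);       // command failed: roll the view back
            throw;
        }
    }

    private Task SendCommandAsync(string command, string payload)
    {
        // Placeholder for the real dispatch (HTTP call, message bus, etc.).
        return Task.CompletedTask;
    }
}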

I'd recommend you have a look at the Microsoft Patterns & Practices team's guidance on CQRS. Although it is still a work in progress, they have offered one solution to the issue you've raised.
Their approach for commands requiring feedback is to submit the command asynchronously, redirect to another controller action, and then poll the read model until the expected change appears or a time-out occurs. This uses the Post-Redirect-Get pattern, which works better with the browser's forward and back navigation buttons and gives the infrastructure more time to process the command before the MVC controller starts polling.
Example code from the RegistrationController, using ASP.NET MVC 4 asynchronous controllers:
[HttpGet]
[OutputCache(Duration = 0, NoStore = true)]
public Task<ActionResult> SpecifyRegistrantAndPaymentDetails(Guid orderId, int orderVersion)
{
    return this.WaitUntilOrderIsPriced(orderId, orderVersion)
        .ContinueWith<ActionResult>(
            ...
        );
}
...
private Task<PricedOrder> WaitUntilOrderIsPriced(Guid orderId, int lastOrderVersion)
{
    // Poll the read model until the priced order shows up with a newer
    // version, or give up once the time-out expires.
    return TimerTaskFactory.StartNew<PricedOrder>(
        () => this.orderDao.FindPricedOrder(orderId),
        order => order != null && order.OrderVersion > lastOrderVersion,
        PricedOrderPollPeriodInMilliseconds,
        DateTime.Now.AddSeconds(PricedOrderWaitTimeoutInSeconds));
}
I'd probably use AJAX polling instead of having a blocked web request at the server.
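For comparison, a minimal sketch of an endpoint that AJAX could poll, assuming the same (hypothetical) read-model DAO as above; the browser script would simply hit this URL on an interval until it sees updated == true or gives up:
using System;
using System.Web.Mvc;

// Hypothetical read-model DAO and projection, analogous to orderDao above.
public interface IOrderDao
{
    PricedOrder FindPricedOrder(Guid orderId);
}

public class PricedOrder
{
    public int OrderVersion { get; set; }
}

public class OrderStatusController : Controller
{
    private readonly IOrderDao orderDao;

    public OrderStatusController(IOrderDao orderDao)
    {
        this.orderDao = orderDao;
    }

    // Polled by the browser via AJAX; returns immediately instead of holding
    // a server request open while waiting for the read model to catch up.
    [HttpGet]
    [OutputCache(Duration = 0, NoStore = true)]
    public JsonResult Status(Guid orderId, int lastOrderVersion)
    {
        var order = this.orderDao.FindPricedOrder(orderId);
        var updated = order != null && order.OrderVersion > lastOrderVersion;
        return Json(new { updated }, JsonRequestBehavior.AllowGet);
    }
}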

Post-Redirect-Get
You're hoping that the save command executes in time, before the Get is called. What if the command takes 10 seconds to complete in the back end, but the Get is called after 1 second?
Local Storage
By storing the result of the command on the client while the command goes off to execute, you're assuming that the command will go through without errors. What if the back end runs into an error while processing the command? Then what you have locally isn't consistent.
Polling
Polling seems to be the option that is actually in line with eventual consistency; you're not faking or assuming anything. Your polling mechanism can run asynchronously as part of your page, e.g. the shopping cart page component polls until it gets an update, without refreshing the page.
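A generic polling loop might look like this sketch (HttpClient-based; the status endpoint URL and response shape are assumptions):
using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class CartPoller
{
    // Polls a hypothetical status endpoint until the read model reports the
    // update, or until the overall time-out elapses.
    public static async Task<bool> WaitForCartUpdateAsync(
        HttpClient client, Guid cartId, TimeSpan pollPeriod, TimeSpan timeout)
    {
        var deadline = DateTime.UtcNow + timeout;
        while (DateTime.UtcNow < deadline)
        {
            var response = await client.GetAsync($"/cart/{cartId}/status");
            if (response.IsSuccessStatusCode)
            {
                var body = await response.Content.ReadAsStringAsync();
                if (body.Contains("\"updated\":true"))  // naive check, fine for a sketch
                    return true;
            }
            await Task.Delay(pollPeriod);               // back off between polls
        }
        return false;  // timed out without seeing the update
    }
}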
Callbacks
You could introduce something like webhooks to make a callback to the client, if the client is capable of receiving one. The back end provides a correlation ID once the command is accepted; when the command has finished processing, the back end notifies the front end of the command's status, along with the correlation ID, indicating whether the command went through successfully or not. There is no need for any kind of polling with this approach.
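A rough sketch of the back-end side of such a callback, assuming the client registered a callback URL when it submitted the command (all names here are hypothetical):
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public class CommandCompletedNotifier
{
    private static readonly HttpClient http = new HttpClient();

    // Called by the back end once the command has finished processing.
    // Posts the outcome, keyed by the correlation id handed out when the
    // command was accepted, to the callback URL the client registered.
    public Task NotifyAsync(Uri callbackUrl, Guid correlationId, bool succeeded, string error)
    {
        var json = $"{{\"correlationId\":\"{correlationId}\"," +
                   $"\"succeeded\":{succeeded.ToString().ToLowerInvariant()}," +
                   $"\"error\":\"{error ?? string.Empty}\"}}";
        var content = new StringContent(json, Encoding.UTF8, "application/json");
        return http.PostAsync(callbackUrl, content);
    }
}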

Related

Creating an atomic process for a netconf edit-config request

I am creating a custom system in which, when a user submits a netconf edit-config, it initiates a set of actions that atomically alter the configuration of our system and then send the user a notification of success or failure.
Think of it as a big SQL transaction that, at the end, either commits or rolls back.
So, the steps are:
User submits an edit-config
System accepts the config and works to implement it
If the config is successful, the system sends back a thumbs-up response (not sure of the formal way of doing this)
If the config is a failure, it sends back a thumbs-down response (and I will have to make sure the config is rolled back internally)
All of this is done atomically, so if a user submits two configs in a row, they won't conflict with each other.
Our working idea (probably not the best one) was to accept the edit-config and then, within sysrepo, edit some of our leafs with the success or failure flags, all within the same session as the initial change. We were hoping this would keep everything atomic; by doing edits outside of the session, multiple configuration changes could conflict with each other.
We weren't sure whether to go about this with pure netconf or to leverage sysrepo directly. We noticed all the plugins/bindings made for sysrepo and figured those could be used directly to talk to our datastore.
But that said, our working idea is most likely not a best-practice approach. What would be the best way to achieve this?
Our system is:
netopeer 1.1.27
sysrepo 1.4.58
libyang 1.0.167
libnetconf2 1.1.24
And our YANG file is:
module rxmbn {
  namespace "urn:com:zug:rxmbn";
  prefix rxmbn;

  container rxmbn-config {
    config true;

    leaf raw {
      type string;
    }
    leaf raw_hashCode {
      type int32;
    }
    leaf odl_last_processed_hashCode {
      type int32;
    }
    leaf processed {
      type boolean;
      default "false";
    }
  }
}
Currently we can:
Execute an edit-config against the netopeer server
See the new config registered in the sysrepo datastore
Capture the moment sysrepo registers the data, via sysrepo's API
But we are having problems with:
Atomically editing the datastore during the update session (due to locks, which is normal; in fact, if there is no way to edit during an update session, that is fine and not necessary, as the main goal is the next bullet)
Atomically reacting to the new edit-config and responding to the end user
We are all a bit new to netconf and yang, so I am sure there is some way to leverage the notification API or event API, either through the netopeer session or sysrepo; we just don't know enough yet.
If there are any examples or implementation advice to create an atomic transaction for this, that'd be really useful.
I know nothing of sysrepo so this is from a NETCONF perspective.
NETCONF servers process requests serially within a single session, in a request-response fashion, meaning that everything you do within a single NETCONF session should already be "atomic": you cannot send two requests and have them applied in reverse order or in parallel, no matter what you do. A well-behaved client would also wait for each response from the server before sending a new request, especially if all updates must execute successfully and in a specific order. The protocol also defines no way to cancel a request already sent to a server.
If you need to prevent other sessions from modifying a datastore while another session is performing a multi-<edit-config> operation, you use the <lock> and <unlock> NETCONF operations to lock the entire datastore, as sketched below. There is also RFC 5717 and partial lock, which locks only a specific branch of the datastore.
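As an illustrative sketch (message-ids and payload are placeholders), the lock/edit/unlock sequence looks like this on the wire:
<!-- Lock the running datastore so no other session can modify it. -->
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <lock>
    <target><running/></target>
  </lock>
</rpc>

<!-- Perform one or more edits while holding the lock. -->
<rpc message-id="102" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <edit-config>
    <target><running/></target>
    <config>
      <!-- configuration payload goes here -->
    </config>
  </edit-config>
</rpc>

<!-- Release the lock so other sessions can proceed. -->
<rpc message-id="103" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <unlock>
    <target><running/></target>
  </unlock>
</rpc>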
Using notifications to report success of an <edit-config> would be highly unusual - that's what <rpc-reply> and <rpc-error> are there for within the same session. You would use notifications to inform other sessions about what's happening. In fact, there are standard base notifications for config changes.
I suggest reading the entire RFC 6241 before proceeding further. There are things like candidate datastores, confirmed commits, etc. that you should know about.
Which component are you developing? Netconf client/manager or Netconf server?
In general, the Netconf server should implement individual Netconf RPC operations in an atomic way.
When a Netconf client wants to perform a set of operations in an atomic way, it should follow the procedure explained in Appendix E.1 of RFC 6241.

How to merge/consolidate responses from multiple RESTful microservices?

Let's say there are two (or more) RESTful microservices serving JSON. Service (A) stores user information (name, login, password, etc.) and service (B) stores messages to/from that user (e.g. sender_id, subject, body, rcpt_ids).
Service (A) on /profile/{user_id} may respond with:
{id: 1, name:'Bob'}
{id: 2, name:'Alice'}
{id: 3, name:'Sue'}
and so on
Service (B) responding at /user/{user_id}/messages returns a list of messages destined for that {user_id} like so:
{id: 1, subj:'Hey', body:'Lorem ipsum', sender_id: 2, rcpt_ids: [1,3]},
{id: 2, subj:'Test', body:'blah blah', sender_id: 3, rcpt_ids: [1]}
How does the client application consuming these services handle putting the message listing together such that names are shown instead of sender/rcpt ids?
Method 1: Pull the list of messages, then start pulling profile info for each id listed in sender_id and rcpt_ids? That may require hundreds of requests and could take a while. It's rather naive and inefficient, and may not scale with complex apps.
Method 2: Pull the list of messages, extract all the user ids, and make a bulk request for all the relevant users separately... this assumes such a service endpoint exists. There is still a delay between getting the message listing, extracting the user ids, sending the request for bulk user info, and then awaiting the bulk user info response.
Ideally I want to serve out a complete response set in one go (messages and user info). My research brings me to merging responses at the service layer... a.k.a. Method 3: the API Gateway technique.
But how does one even implement this?
I can obtain the list of messages, extract the user ids, make a call behind the scenes to obtain the user data, merge the result sets, and then serve this final result up... This works okay with two services behind the scenes... But what if the message listing depends on more services? What if I needed to query multiple services behind the scenes, further parse their responses, query more services based on secondary (tertiary?) results, and then finally merge... Where does this madness stop? How does it affect response times?
And I've now effectively created another "client" that combines all the microservice responses into one mega-response... which is no different from Method 1 above... except at the server level.
Is that how it's done in the "real world"? Any insights? Are there any open source projects built on such an API Gateway architecture that I could examine?
The solution we used for this problem was denormalization of data, with events for updating.
Basically, a microservice keeps a subset of the data it requires from other microservices beforehand, so that it doesn't have to call them at run time. This data is managed through events: when another microservice is updated, it fires an event with an id as context, which can be consumed by any microservice that has an interest in it. This way the data remains in sync (of course, it requires some form of failure handling for the events). It seems like a lot of work, but it helps us with any future decisions regarding consolidation of data from different microservices. Our microservice always has all the data available locally to process any request, without a synchronous dependency on other services.
In your case, i.e. for showing names with a message, you can keep an extra property for names in Service (B). Whenever a name is updated in Service (A), it fires an update event with the id of the updated name. Service (B) then consumes the event, fetches the relevant data from Service (A), and updates its database. This way, even if Service (A) is down, Service (B) will still function, albeit with some stale data, which becomes consistent once Service (A) comes back up; and you will always have some name to show in the UI.
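A minimal sketch of such an event handler in C#, assuming some event bus delivers the events (the event type, handler wiring, and repository interface below are hypothetical):
using System.Threading.Tasks;

// Hypothetical event published by Service (A) when a user's name changes.
public class UserNameUpdated
{
    public int UserId { get; set; }
    public string NewName { get; set; }
}

// Service (B)'s own storage of the denormalized names (hypothetical).
public interface IMessageRepository
{
    Task UpdateSenderNameAsync(int userId, string newName);
}

public class UserNameUpdatedHandler
{
    private readonly IMessageRepository messages;

    public UserNameUpdatedHandler(IMessageRepository messages)
    {
        this.messages = messages;
    }

    // Invoked by whatever event bus Service (B) subscribes with. Updates the
    // denormalized name locally, so message queries never need a synchronous
    // call to Service (A).
    public Task HandleAsync(UserNameUpdated evt)
    {
        return messages.UpdateSenderNameAsync(evt.UserId, evt.NewName);
    }
}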
https://enterprisecraftsmanship.com/2017/07/05/how-to-request-information-from-multiple-microservices/
You might want to perform response aggregation strategies on your API gateway. I've written an article on how to do this with ASP.NET Core and Ocelot, but there should be a counterpart for other API gateway technologies:
https://www.pogsdotnet.com/2018/09/api-gateway-response-aggregation-with.html
You could write another service, called an Aggregator, which internally calls both services, gets the responses, merges/filters them, and returns the desired result. This can be achieved in a non-blocking way using Mono/Flux in Spring Reactive.
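The answer above suggests Spring Reactive; as an illustration of the same fetch-and-merge idea with C# Tasks instead (all URLs and helper methods below are placeholders):
using System;
using System.Net.Http;
using System.Threading.Tasks;

public class MessageAggregator
{
    private static readonly HttpClient http = new HttpClient();

    // Composes one client-facing response from two services. All URLs and
    // helpers here are placeholders.
    public async Task<string> GetMessagesWithNamesAsync(int userId)
    {
        // 1. Get the message listing from Service (B).
        var messagesJson = await http.GetStringAsync(
            $"http://service-b/user/{userId}/messages");

        // 2. Extract the referenced user ids (parsing elided in this sketch)
        //    and bulk-fetch the matching profiles from Service (A).
        var ids = ExtractUserIds(messagesJson);
        var profilesJson = await http.GetStringAsync(
            $"http://service-a/profiles?ids={string.Join(",", ids)}");

        // 3. Merge the two payloads into the single response the client sees.
        return Merge(messagesJson, profilesJson);
    }

    private static int[] ExtractUserIds(string messagesJson)
    {
        // Placeholder: a real implementation would parse the JSON and collect
        // every sender_id and rcpt_id.
        return new int[0];
    }

    private static string Merge(string messagesJson, string profilesJson)
    {
        // Placeholder merge; a real implementation would replace ids with
        // names using a JSON library.
        return "{\"messages\":" + messagesJson + ",\"profiles\":" + profilesJson + "}";
    }
}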
An API Gateway often does API composition.
This is a typical engineering problem when you have microservices implementing the database-per-service pattern.
The API Composition and Command Query Responsibility Segregation (CQRS) patterns are useful ways to implement such queries.
Ideally I want to serve out a complete response set in one go
(messages and user info).
The problem you've described is one Facebook recognized years ago, and they decided to tackle it by creating an open source specification called GraphQL.
But how does one even implement this?
It is already implemented in various popular programming languages, so maybe you can give it a try in the programming language of your choice.

CQRS and REST HATEOAS mismatch

Suppose you have a model Foo.
One business case is to simply create an instance of Foo, so there is a corresponding CreateFooCommand in my model, triggered by invoking a POST request to a given REST endpoint.
There are of course other Commands too.
But now, there is a ViewModel, which is derived from my DomainModel. It's simply a SQL table with raw data: each Foo instance from the DomainModel has a corresponding derived ViewModel instance. The two have different IDs (the DomainModel has a DomainID; on the ViewModel it's simply a long value).
Now: should I even care about HATEOAS in such a case? In a proper REST implementation, I should at least return a location URL in the header. But since my view model is only derived from the DomainModel, should I care? I don't even have the view model's ID at the time my DomainModel is created.
Since CQRS means that Queries are separated from Commands, you may not be able to perform a Query right away, because the Command may not yet have been applied (perhaps it never will).
In order to reconcile that with HATEOAS, instead of returning 200 OK from the POST request, the service can return 202 Accepted:
The request has been accepted for processing, but the processing has not been completed. The request might or might not eventually be acted upon, as it might be disallowed when processing actually takes place. There is no facility for re-sending a status code from an asynchronous operation such as this.
The 202 response is intentionally non-committal. Its purpose is to allow a server to accept a request for some other process (perhaps a batch-oriented process that is only run once per day) without requiring that the user agent's connection to the server persist until the process is completed. The entity returned with this response SHOULD include an indication of the request's current status and either a pointer to a status monitor or some estimate of when the user can expect the request to be fulfilled.
(My emphasis)
That pointer could be a link that the client can query to get the status of the Command. When/if the Command completes and the View is updated, that status resource could then contain a link to the view.
This is pretty much a workflow straight out of REST in Practice - very reminiscent of its Restbucks example.
Another option to deal with the ID issue is to generate the ID before accepting the Command - perhaps even asking the client to supply the ID. Read more about such options here.
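Putting the two ideas together (202 Accepted plus a pre-generated ID), here is a minimal sketch in ASP.NET MVC style; the ICommandBus, CreateFooCommand, and the Status action are hypothetical stand-ins:
using System;
using System.Web.Mvc;

// Hypothetical command and dispatcher, for illustration only.
public class CreateFooCommand
{
    public Guid AggregateId { get; set; }
    public string Name { get; set; }
}

public interface ICommandBus
{
    void Send(object command);
}

public class FooController : Controller
{
    private readonly ICommandBus commandBus;

    public FooController(ICommandBus commandBus)
    {
        this.commandBus = commandBus;
    }

    [HttpPost]
    public ActionResult Create(CreateFooCommand command)
    {
        // Generate the id before the command is processed, so a status link
        // can be handed back immediately.
        command.AggregateId = Guid.NewGuid();
        this.commandBus.Send(command);

        // 202 Accepted: processing has not completed and may yet be rejected.
        // Location points at a status resource the client can query; once the
        // view is updated, that resource can link to the view itself.
        Response.AddHeader("Location", Url.Action("Status", new { id = command.AggregateId }));
        return new HttpStatusCodeResult(202);
    }
}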
As Greg Young explains, CQRS is nothing more than "splitting one object into two". So assume that you have one domain aggregate, and it has an id. Now you are talking about your view model having another id. However, you are unable to update your view model unless you have the aggregate id in your view model as well. From my point of view, your REST POST request should return a result that contains the aggregate id. That is your id; the view model id is of no interest to anyone except the read model storage.
Whether it should return a command status URI, as Mark suggests, is a topic for another discussion. Many CQRS practitioners currently tend to handle commands synchronously, to avoid FE/BE mismatch in case of failure and to give the FE the ability to react to errors on the BE. There is no real win in executing commands asynchronously for a single user. Commands do mutate the state, and in 99% of cases the user needs to know whether the state was mutated properly.

How to guard against repeated request?

We have a button in a web game for users to collect a reward. It should only be clicked once; upon receiving the request, we mark it collected in the DB.
We've already blocked the button in the client from repeated clicking, but that won't help if people resend the packet to our server multiple times in a short period.
What I want is a method to block this on the server side.
We're using Play Framework 2 (2.0.3-RC2) on the server side, and so far it's stateless. I'm tempted to use a Set as a guard, like this:
if processingSet contains userId then BadRequest
else put userId in processingSet and handle the request
afterwards, remove userId from that Set
But then I'd have to face problems like updating Scala collections thread-safely, and it would still fail to block the user once we have more than one server behind load balancing.
One possibility I'm thinking about is to have a table in the DB in place of the processingSet above, but that would incur one or more extra DB operations per request. Are there any better solutions?
Thanks!
An additional DB operation is a relatively 'cheap' solution in this case. You should use it if you're planning to save the button's state permanently.
If the button is disabled only for some period of time (for example, until the game is over), you can also consider using the cache API. However, keep in mind that the cache is not intended for data that should be stored for a long time (it should not be considered a DB alternative).
Given that you're using Mongo, and so don't have transactions spanning separate collections, I think you can implement this guard using an atomic operation, namely "update if current", which is effectively compare-and-swap.
Assuming you've got a collection like "rewards" with a "collected" attribute, you can update the collected flag to true only if it is currently false; if that operation doesn't fail, you can proceed to apply the reward, knowing that the same operation will fail for any other request.
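A minimal sketch of that guard using the MongoDB C# driver (the question uses Play/Scala, but the operation is the same; collection and field names are assumptions):
using MongoDB.Bson;
using MongoDB.Driver;

public class RewardGuard
{
    private readonly IMongoCollection<BsonDocument> rewards;

    public RewardGuard(IMongoDatabase db)
    {
        this.rewards = db.GetCollection<BsonDocument>("rewards");
    }

    // Returns true exactly once per (userId, rewardId): the atomic
    // "update if current" flips collected from false to true, and any
    // repeated or concurrent request matches no document.
    public bool TryCollect(string userId, string rewardId)
    {
        var filter = Builders<BsonDocument>.Filter.Eq("userId", userId)
                   & Builders<BsonDocument>.Filter.Eq("rewardId", rewardId)
                   & Builders<BsonDocument>.Filter.Eq("collected", false);
        var update = Builders<BsonDocument>.Update.Set("collected", true);

        var result = rewards.UpdateOne(filter, update);
        return result.ModifiedCount == 1;  // 0 means it was already collected
    }
}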

MSMQ querying for a specific message

I have a question regarding MSMQ...
I designed an async architecture like this:
Client -> WCF Service (hosted in a Windows Service) -> MSMQ
So basically the WCF service takes the requests, processes them, adds them to an INPUT queue, and returns a GUID. The same WCF service (through a listener) takes the first message from the queue (does some stuff...) and then puts it into another queue (OUTPUT).
The problem is how I can retrieve the result from the OUTPUT queue when a client requests it... because MSMQ does not allow random access to its messages, and the only solution would be to iterate through all the messages and push them back in until I find the exact one I need. I do not want to use a DB for this OUTPUT queue, because of some limitations imposed by the client...
You can look in your OUTPUT queue for your message by using:
// requires a reference to System.Messaging
var mq = new MessageQueue(outputQueueName);
mq.PeekById(yourId);    // inspects the message without removing it
Receiving by Id:
mq.ReceiveById(yourId); // removes the message from the queue
A queue is inherently a "first-in-first-out" kind of data structure, while what you want is random access. It's just not designed for what you're trying to achieve here, so there isn't any "clean" way of doing this. Even if there were a way, it would be a hack.
If you elaborate on the limitations imposed by the client perhaps there might be other alternatives. Why don't you want to use a DB? Can you use a local SQLite DB, perhaps, or even an in-memory one?
Edit: If you have a client dictating implementation details to their own detriment then there are really only three ways you can go:
Work around them. In this case, that could involve using a SQLite DB - it's just a file and the client probably wouldn't even think of it as a "database".
Probe deeper and find out just what the underlying issue is, i.e. why don't they want to use a DB? What are their real concerns and underlying assumptions?
Accept a poor solution and explain to the client that this is due to their own restriction. This is never nice and never easy, so it's really a last resort.
You could use CorrelationId and set it when you send the message. Then, to receive the same message, you can pick out the specific message with ReceiveByCorrelationId, as follows:
message = queue.ReceiveByCorrelationId(correlationId);
Moreover, CorrelationId is a string with the following format:
Guid()\\Number
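As a minimal sketch of the round trip (queue paths and types are illustrative): the service stamps each OUTPUT message with the original request message's Id as its CorrelationId, and the client later retrieves exactly that message:
using System.Messaging;  // requires a reference to System.Messaging

public static class OutputQueueExample
{
    // Service side: after processing, send the result to the OUTPUT queue,
    // correlated to the Id of the original request message.
    public static void SendResult(Message request, object resultBody)
    {
        using (var output = new MessageQueue(@".\private$\output"))
        {
            var response = new Message(resultBody)
            {
                CorrelationId = request.Id  // already in the "Guid\Number" format
            };
            output.Send(response);
        }
    }

    // Client side: fetch the one response matching the id that the WCF
    // service handed back when the request was enqueued.
    public static Message ReceiveResult(string requestMessageId)
    {
        using (var output = new MessageQueue(@".\private$\output"))
        {
            return output.ReceiveByCorrelationId(requestMessageId);
        }
    }
}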