CQRS - When a command cannot resolve to a domain

I'm trying to wrap my head around CQRS. I'm drawing from the code example provided here. Please be gentle, I'm very new to this pattern.
I'm looking at a logon scenario. I like this scenario because it's not really demonstrated in any examples I've read. In this case I do not know what the aggregate id of the user is, or even if there is one, as all I start with is a username and password.
In the fohjin example, events are always fired from the domain (if needed) and the command handler calls some method on the domain. However, if a user logon is invalid I have no domain to call anything on. Also most, if not all, of the base Command/Event classes defined in the fohjin project pass around an aggregate id.
In the case of the event LogonFailure I may want to update a LogonAudit report.
So my question is: how to handle commands that do not resolve to a particular aggregate? How would that flow?
public void Execute(UserLogonCommand command)
{
    // User looked up by username somehow - should I query the report database
    // to resolve the username to an id?
    User user = null;

    if (user == null || user.Password != command.Password)
    {
        // What to do here? I want to raise an event somehow that doesn't target a specific user.
    }
    else
    {
        user.LogonSuccessful();
    }
}

You should take into account that in most cases CQRS and DDD are suitable for just some parts of the system. It is very uncommon to model an entire system with CQRS concepts - it fits best the parts with a complex business domain, and I wouldn't call logging a user in a particularly complex business scenario. In fact, in most cases it's not business-related at all. The actual business domain starts once the user is already identified.
Another thing to remember is that, due to eventual consistency, it is extremely beneficial to check as much as we can using only the query side, without even creating any commands/events.
Assuming, however, that the information about successful/failed user log-ins is meaningful, I'd model your scenario with the following steps (a rough sketch in code follows the list):
User provides name and password
Name/password is validated against some kind of query database
When the provided credentials are valid, RegisterValidUserCommand(userId) is executed, which results in the proper event
If the provided credentials are not valid, RegisterInvalidCredentialsCommand(providedUserName) is executed, which results in the proper event
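A minimal sketch of those steps, assuming a hypothetical IUserQueries read-side lookup, a simple UserReadModel DTO, and a plain delegate standing in for the command bus (none of these names come from the fohjin sample, and the password-hash comparison is an assumption):

using System;

// Sketch only: credentials are checked against the query side, and only then is a
// command dispatched - no aggregate is loaded to perform the check itself.
public record RegisterValidUserCommand(Guid UserId);
public record RegisterInvalidCredentialsCommand(string ProvidedUserName);

public record UserReadModel(Guid Id, string PasswordHash);

public interface IUserQueries                  // read-side lookup (assumed)
{
    UserReadModel FindByName(string userName); // returns null when the name is unknown
}

public class LogonService
{
    private readonly IUserQueries _queries;
    private readonly Action<object> _dispatch; // stand-in for a command bus

    public LogonService(IUserQueries queries, Action<object> dispatch)
    {
        _queries = queries;
        _dispatch = dispatch;
    }

    public void Logon(string userName, string passwordHash)
    {
        var user = _queries.FindByName(userName);

        if (user == null || user.PasswordHash != passwordHash)
            _dispatch(new RegisterInvalidCredentialsCommand(userName)); // no aggregate id exists yet
        else
            _dispatch(new RegisterValidUserCommand(user.Id));           // the aggregate id is now known
    }
}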
The point is that checking user credentials is not necessarily part of the business domain.
That said, there is another related concept, in which not every command or event needs to be business-related; thus it is possible to handle commands/events that don't need aggregates to be loaded.
For example, you want to change data that is informational only and in no way affects the business concepts of your system, like information about a person's sex (once again, assuming it has no business meaning).
In that case, when you handle SetPersonSexCommand there's actually no need to load an aggregate, as that information doesn't even have to be located on entities; instead you create a PersonSexSetEvent, register it, and publish it so the query side can project it to the screen/report.
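As a rough illustration of that last point (the store/publisher abstractions below are assumptions, not types from the fohjin project), such a handler could skip the repository entirely:

using System;

// Sketch: the handler never rehydrates an aggregate; it only records and publishes
// an informational event so the query side can project it.
public record SetPersonSexCommand(Guid PersonId, string Sex);
public record PersonSexSetEvent(Guid PersonId, string Sex);

public interface IEventStore     { void Append(object @event); }  // assumed abstraction
public interface IEventPublisher { void Publish(object @event); } // assumed abstraction

public class SetPersonSexCommandHandler
{
    private readonly IEventStore _store;
    private readonly IEventPublisher _publisher;

    public SetPersonSexCommandHandler(IEventStore store, IEventPublisher publisher)
    {
        _store = store;
        _publisher = publisher;
    }

    public void Execute(SetPersonSexCommand command)
    {
        var @event = new PersonSexSetEvent(command.PersonId, command.Sex);
        _store.Append(@event);      // registered, but no aggregate was loaded
        _publisher.Publish(@event); // the query side projects it to the screen/report
    }
}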

Onion Architecture - What should an Interface do if it has some data to check before giving structured data (e.g. an Object) to a Usecase

I have a REST API based on Onion Architecture.
But I have some challenges applying this way of building a server, concretely with what the behaviour of an Interface should be if it has some data to check before giving structured data to a Usecase.
That is one of my problems:
I have some methods in the Interface that catch info about timers from the request. But I'm always facing the same question: should I catch everything, hand it to the Usecase and do all the checks there, or should I first check whether the timer exists in the DB (if I'm updating a timer) and only after that do what I need?
These kinds of checks - the role of whoever is making the request and what they are or aren't allowed to do, whether timers exist, whether the user exists, whether a user with the same username already exists (I want a unique-username restriction), etc. - are bothering me, because depending on where I do the check, strictly following the Onion Architecture or not, I'm executing more or less code that is sometimes unnecessary.
If I check some things in the Interface, I avoid executing code that would be unnecessary, but then I'm not following this architecture correctly, and vice versa.
Any thoughts?

Ensure consistency for foreign keys/ownership in microservices

I have two bounded contexts, which lead to two microservices:
PersonalManagement
DocumentStorage
I keep the entity model simple here.
PersonalManagement:
Entity/Table Person:
#id - int
tenantId - int
name - string
...
DocumentStorage:
Entity/Table Document:
#id - int
tenantId - int
personId - int
dateIssued - string
...
You need to know that before the application is started, a company (tenant) is chosen to define the company context.
I want to store a new document by using REST/JSON.
This is a POST to /tenants/1/persons/5/documents
with the body
{
"dateIssued" : "2018-06-11"
}
On the backend side, I validate the input body.
One validation might be "does the person specified exist and really belong to the given tenant?".
Since this info is stored in the PersonalManagement microservice, I need to provide an operation like this:
"Does (personId=5, tenantId=1) exist?"
in PersonalManagement to ensure consistency, since the caller might be evil.
Or in general:
What is the best practice to check "ownership" of entities across databases in microservices?
It might also be an option to store (tenantId, personId) additionally(!) in DocumentStorage whenever a new person is created, but I want to avoid this redundancy.
I'm not going to extend this answer into whether your bounded contexts and service endpoints are well defined, since your question seems to simplify the issue to keep a well-defined scope, but regarding your specific question:
What is the best practice to check "ownership" of entities across databases in microservices?
Microservice architectures strive for a "share nothing" principle, and that usually extends from code base to database. So you're right to assume you're checking for this constraint "cross-DB" in your scenario.
You have a few options in this particular case, each with its own set of drawbacks:
1) Your proposed "Does (personId=5, tenantId=1) exist?" call from the DocumentContext to the PersonContext is not wrong in itself, but you will create a direct dependency between these two microservices, so you must ask yourself whether it is acceptable not to accept new documents whenever the PersonalManagement microservice is offline.
In specific situations such dependencies might be acceptable, but the more of these you have, the less your microservice architecture behaves like one and the more it behaves like a "distributed monolith", which in itself is pretty much an anti-pattern.
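For illustration only, such a check could be a plain HTTP call from DocumentStorage to PersonalManagement; the endpoint path and return convention below are assumptions, not a prescribed API:

using System.Net.Http;
using System.Threading.Tasks;

// Sketch: DocumentStorage asks PersonalManagement whether the person exists for the
// given tenant before accepting the document. Note the runtime dependency this creates.
public class PersonOwnershipChecker
{
    private readonly HttpClient _personalManagement; // BaseAddress points at the PersonalManagement service

    public PersonOwnershipChecker(HttpClient personalManagement)
    {
        _personalManagement = personalManagement;
    }

    public async Task<bool> ExistsAsync(int tenantId, int personId)
    {
        // Hypothetical endpoint: 200 when the person belongs to the tenant, 404 otherwise.
        var response = await _personalManagement.GetAsync($"/tenants/{tenantId}/persons/{personId}");
        return response.IsSuccessStatusCode;
    }
}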
2) The other main option is to recognize that the DocumentContext is very much interested in some information/behavior relating to People, so it should be OK with modelling the Person entity inside its own boundaries.
That means you can have the DocumentContext subscribe to changes in the PersonContext, so it is aware of which People currently exist and what their characteristics are, and can thus keep a local copy of that information.
That way, your validation is kept entirely inside the DocumentContext, which will operate unhindered by eventual issues with the PersonContext, and you will find that your modelling of the document-related entities is much cleaner than before.
But in the end, you will also discover that a "share nothing" principle usually costs you what looks like redundancy, but is actually independence of contexts.
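A sketch of such a local projection; the event names and the in-memory store are assumptions (in practice this would be a table in DocumentStorage's own database):

using System.Collections.Generic;

// Sketch: DocumentStorage keeps its own minimal copy of people, fed by events published
// from PersonalManagement, so the ownership check stays local.
public record PersonCreated(int TenantId, int PersonId);
public record PersonRemoved(int TenantId, int PersonId);

public class LocalPersonProjection
{
    private readonly HashSet<(int TenantId, int PersonId)> _people = new();

    public void When(PersonCreated e) => _people.Add((e.TenantId, e.PersonId));
    public void When(PersonRemoved e) => _people.Remove((e.TenantId, e.PersonId));

    // Validation used when POST /tenants/{tenantId}/persons/{personId}/documents arrives.
    public bool PersonBelongsToTenant(int tenantId, int personId)
        => _people.Contains((tenantId, personId));
}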
Just for the tenancy check, this can be done using a JWT token (a token which can store tenancy information and other metadata).
Let me provide another example of the same scenario which can't be solved with a JWT.
Assume a Customer wants to create an Order and our system wants to check whether the customer exists while creating the order.
As the Order and Customer services are separate, and we want minimal dependencies between them, there are multiple solutions to the above problem:
create the Order in a "validating" state, and on the OrderCreated event check the customer's validity and update the order's state to "Valid" (a sketch of this option follows the list)
another one: check for the customer before creating the order (which is not the right way, as it creates a dependency; unless it is very critical, do not do it)
the last way is to let the order be created; whoever does the final check on the order before delivery will verify the customer and remove/reject the order if the customer turns out to be invalid
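A minimal sketch of the first option, with all names invented for illustration (the customer lookup and order repository are just delegates here):

using System;

// Sketch: the order is accepted immediately in a Validating state; a separate handler
// reacts to OrderCreated, checks the customer and flips the order's state.
public enum OrderState { Validating, Valid, Rejected }

public record OrderCreated(Guid OrderId, Guid CustomerId);

public class Order
{
    public Guid Id { get; }
    public Guid CustomerId { get; }
    public OrderState State { get; private set; } = OrderState.Validating;

    public Order(Guid id, Guid customerId) { Id = id; CustomerId = customerId; }

    public void MarkValid() => State = OrderState.Valid;
    public void Reject()    => State = OrderState.Rejected;
}

public class OrderCreatedHandler
{
    private readonly Func<Guid, bool> _customerExists; // stand-in for a customer lookup
    private readonly Func<Guid, Order> _loadOrder;     // stand-in for an order repository

    public OrderCreatedHandler(Func<Guid, bool> customerExists, Func<Guid, Order> loadOrder)
    {
        _customerExists = customerExists;
        _loadOrder = loadOrder;
    }

    public void Handle(OrderCreated e)
    {
        var order = _loadOrder(e.OrderId);
        if (_customerExists(e.CustomerId)) order.MarkValid();
        else order.Reject();
    }
}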

Creation Concurrency with CQRS and EventStore

Baseline info:
I'm using an external OAuth provider for login. If the user logs in with the external OAuth provider, they are OK to enter my system. However, this user may not yet exist in my system. It's not really a technology issue, but I'm using JOliver EventStore, for what it's worth.
Logic:
I'm not given a guid for new users; I just have an email address.
I check my read model before sending a command: if the user email exists, I issue a Login command with the ID; if not, I issue a CreateUser command with a generated ID. My issue is in the case of a new user.
A save occurs in the event store with the new ID.
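For illustration, that flow might look roughly like the sketch below; the read-model lookup and the command bus are placeholders (plain delegates), not part of JOliver EventStore:

using System;

// Sketch of the described flow: consult the read model by email, then issue either a
// Login command for an existing user or a CreateUser command with a freshly generated id.
public record LoginCommand(Guid UserId);
public record CreateUserCommand(Guid UserId, string Email);

public class LoginFlow
{
    private readonly Func<string, Guid?> _findUserIdByEmail; // read-model lookup (assumed)
    private readonly Action<object> _send;                   // command bus (assumed)

    public LoginFlow(Func<string, Guid?> findUserIdByEmail, Action<object> send)
    {
        _findUserIdByEmail = findUserIdByEmail;
        _send = send;
    }

    public void Handle(string email)
    {
        var existingId = _findUserIdByEmail(email);
        if (existingId.HasValue)
            _send(new LoginCommand(existingId.Value));
        else
            _send(new CreateUserCommand(Guid.NewGuid(), email)); // new id; the race described below starts here
    }
}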
Issue:
Assume two create commands are somehow issued before the read model is updated, due to a browser refresh or some other anomaly that occurs before consistency with the read model is achieved. That's OK; that's not my problem.
What Happens:
Because the new ID is a Guid comb, there's no chance the event store will know that these two CreateUser commands represent the same user. By the time they get to the read model, the read model will know (because they have the same email) and can merge the two records or take some other compensating action. But now my read model is out of sync with the event store, which still thinks these are two separate entities.
Perhaps it doesn't matter because:
Replaying the events will have the same effect on the read model, so that should be OK.
Because both commands are duplicate "Create" commands, they should contain identical information, so it's not like I'm losing anything in the event store.
Can anybody illuminate how they handled similar issues? If some compensating action needs to occur does the read model service issue some kind of compensation command when it realizes it's got a duplicate entry? Is there a simpler methodology I'm not considering?
You're very close to what I'd consider a proper possible solution. The scenario, if I may summarize, is somewhat like this:
Perform the OAuth-entication.
Using the read model decide between a recurring visitor and a new visitor, based on the email address.
In case of a new visitor, send a RegisterNewVisitor command message that gets handled and stored in the event store.
Assume there is some concurrency going on that, for the same email address, causes two RegisterNewVisitor messages, each containing what the system thinks is the key associated with the email address. These keys (guids) are different.
Detect this duplicate key issue in the read model and merge both read model records into one record.
Now, instead of merging the records in the read model, why not send a ResolveDuplicateVisitorEmailAddress { Key1, Key2 } towards your domain model, leaving it up to the domain model (the codified form of the business decision to be taken) to resolve this issue? You could even have a dedicated read model to deal with these kinds of issues; the other read model will just get a kind of DuplicateVisitorEmailAddressResolved event and project it into the proper records.
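A rough sketch of that alternative; the command/event shapes and the resolution rule below are my own guesses at what this could look like, not a prescribed design:

using System;

// Sketch: instead of merging records on the read side, the duplicate is reported to the
// domain, which decides how to resolve it and emits an event the read models can project.
public record ResolveDuplicateVisitorEmailAddress(Guid Key1, Guid Key2, string EmailAddress);
public record DuplicateVisitorEmailAddressResolved(Guid SurvivingKey, Guid RetiredKey, string EmailAddress);

public class ResolveDuplicateVisitorEmailAddressHandler
{
    private readonly Action<object> _publish; // stand-in for the event store / publisher

    public ResolveDuplicateVisitorEmailAddressHandler(Action<object> publish) => _publish = publish;

    public void Execute(ResolveDuplicateVisitorEmailAddress command)
    {
        // Placeholder business decision: keep Key1 and retire Key2. The real rule is a
        // business decision that belongs in the domain model.
        _publish(new DuplicateVisitorEmailAddressResolved(command.Key1, command.Key2, command.EmailAddress));
    }
}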
Word of warning: You've asked a technical question and I gave you a technical, possible solution. In general, I would not apply this technique unless I had some business indicator that this is worth investing in (what's the frequency of a user logging in concurrently for the first time - maybe solving it this way is just a way of ignoring the root cause (flakey OAuth, no register new visitor process in place, etc)). There are other technical solutions to this problem but I wanted to give you the one closest to what you already have in place. They range from registering new visitors sequentially to keeping an in-memory projection of the visitors not yet in the read model.

SO style reputation system with CQRS & Event Sourcing

I am diving into my first forays with CQRS and Event Sourcing and I have a few points I'd like some guidance on. I would like to implement an SO-style reputation system. This seems a perfect fit for this architecture.
Keeping SO as the example: say a question is upvoted; this generates an UpvoteCommand, which increases the question's total score and fires off a QuestionUpvotedEvent.
It seems like the author's User aggregate should subscribe to the QuestionUpvotedEvent, which could increase the reputation score. But how/when to do this subscription is not clear to me. In Greg Young's example the event/command handling is wired up in the global.asax, but this doesn't seem to involve any routing based on aggregate id.
It seems as though every User aggregate would subscribe to every QuestionUpvotedEvent, which doesn't seem correct; to make such a scheme work, the event handler would have to contain behavior to identify whether that user owned the question that was just upvoted. Greg Young implied this should not be in event handler code, which should merely involve state change.
What am i getting wrong here?
Any guidance much appreciated.
EDIT
I guess what we are talking about here is inter-aggregate communication between the Question & User aggregates. One solution I can see is that the QuestionUpvotedEvent is subscribed to by a ReputationEventHandler, which could then fetch the corresponding User AR and call a corresponding method on that object, e.g. YourQuestionWasUpvoted. This would in turn generate a user-specific UserQuestionUpvoted event, thereby preserving replayability in the future. Is this heading in the right direction?
EDIT 2
See also the discussion on google groups here.
My understanding is that aggregates themselves should not be subscribing to events. The domain model only raises events. It's the query side or other infrastructure components (such as an emailing component) that subscribe to events.
Domain Services are designed to work with use-cases/commands that involve more than one aggregate.
What I would do in this situation:
VoteUpQuestionCommand gets invoked.
The handler for VoteUpQuestionCommand calls:
IQuestionVotingService.VoteUpQuestion(Guid questionId, Guid userId);
This then fetches both the question and user aggregates, calling the appropriate methods on both, such as user.IncrementReputation(int amount) and question.VoteUp(). This would raise two events, UsersReputationIncreasedEvent and QuestionUpVotedEvent respectively, which would be handled by the query side.
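A condensed sketch of that handler/service pair; the repository delegates and the aggregate stubs are simplifications, and the reputation amount is made up:

using System;

// Sketch: the command handler delegates to a domain service that loads both aggregates
// and calls a method on each; each aggregate raises its own event internally.
public record VoteUpQuestionCommand(Guid QuestionId, Guid UserId);

public class Question { public void VoteUp() { /* raises QuestionUpVotedEvent */ } }
public class User { public void IncrementReputation(int amount) { /* raises UsersReputationIncreasedEvent */ } }

public interface IQuestionVotingService
{
    void VoteUpQuestion(Guid questionId, Guid userId);
}

public class VoteUpQuestionCommandHandler
{
    private readonly IQuestionVotingService _votingService;

    public VoteUpQuestionCommandHandler(IQuestionVotingService votingService)
        => _votingService = votingService;

    public void Execute(VoteUpQuestionCommand command)
        => _votingService.VoteUpQuestion(command.QuestionId, command.UserId);
}

public class QuestionVotingService : IQuestionVotingService
{
    private readonly Func<Guid, Question> _questions; // repository stand-in
    private readonly Func<Guid, User> _users;         // repository stand-in

    public QuestionVotingService(Func<Guid, Question> questions, Func<Guid, User> users)
    {
        _questions = questions;
        _users = users;
    }

    public void VoteUpQuestion(Guid questionId, Guid userId)
    {
        var question = _questions(questionId);
        var user = _users(userId);
        question.VoteUp();             // QuestionUpVotedEvent
        user.IncrementReputation(10);  // UsersReputationIncreasedEvent (amount is illustrative)
    }
}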
My rule of thumb: if you do inter-AR communication, use a saga. It keeps things within the transactional boundary and makes your links explicit => easier to handle/maintain.
The user aggregate should have a QuestionAuthored event... in that event it subscribes to the QuestionUpvotedEvent... similarly it should have a QuestionDeletedEvent and/or QuestionClosedEvent in which it does the proper handling, like unsubscribing from the QuestionUpvotedEvent, etc.
EDIT - as per comment:
I would treat the Question as an external event source and handle it via a gateway. The gateway in turn is the one responsible for handling any replay correctly so the end result stays exactly the same - except for special events like rejection events...
This is an old question and it's tagged as answered, but I think I can add something to it.
After a few months of reading, practicing, and building a small framework and application based on CQRS+ES, I think CQRS tries to decouple components' dependencies and responsibilities. Some resources state that for each command you should change at most one aggregate in the command handler (you can load more than one aggregate in a handler, but only one of them may change).
So in your case I think the best practice is #Tom's answer and you should use a saga. If your framework doesn't support sagas (like my small framework), you can create an event handler such as UpdateUserReputationByQuestionVotedEvent. In that handler, create an UpdateUserReputation(Guid userId, int amount) OR UpdateUserReputation(Guid userId, Guid questionId, int amount) OR UpdateUserReputation(Guid userId, string description, int amount) command. After the command is sent to its handler, the handler loads the user by user id and updates its state and properties. With this type of handling you can build more complex scenarios or workflows.
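A brief sketch of that event-handler-plus-command route; the event's AuthorUserId field, the bus delegate, and the amount are assumptions made for illustration:

using System;

// Sketch: a plain event handler reacts to the question-voted event and sends a command
// that targets only the user aggregate; that command's handler then loads and changes
// a single aggregate, in line with the one-aggregate-per-command guideline above.
public record QuestionUpvotedEvent(Guid QuestionId, Guid AuthorUserId);
public record UpdateUserReputation(Guid UserId, Guid QuestionId, int Amount);

public class UpdateUserReputationByQuestionVotedEvent
{
    private readonly Action<object> _sendCommand; // command bus stand-in

    public UpdateUserReputationByQuestionVotedEvent(Action<object> sendCommand)
        => _sendCommand = sendCommand;

    public void Handle(QuestionUpvotedEvent e)
        => _sendCommand(new UpdateUserReputation(e.AuthorUserId, e.QuestionId, 10)); // 10 = illustrative amount
}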

How do I prevent duplicate values in my read database with CQRS

Say that I have a User table in my read database (SQL Server). In a regular read/write database I can put a unique index on the table to make sure that two users aren't added to the table with the same email address.
So if I try to add a user with an email address that already exists in my table for a different user, SQL Server will throw an exception back.
In CQRS I can't do that, since if I decouple the write to my read database from the domain model by putting it on an asynchronous queue, I won't get the exception thrown back to me; I will return "OK" to the UI and the user will think that he was added to the database, when in fact he will never be added to the read database.
I can do a search in the read database to check whether there is already a user with that email address, and if there is one, throw an exception back to the UI. But if two users press the save button at the same time, I will do two checks against the database, see that there isn't any user with that email address, and send back that it's okay; put it on my queue and later it will fail (by hitting the unique index).
Am I supposed to load all users from my event source (it's a SQL Server) and then do the check on that collection, to see if I have a user that already has this email address? That sounds a bit crazy to me...
How have you people solved it?
The only way I can see is to not use an asynchronous queue but a synchronous one, but that will affect performance really badly, especially when you have many "read storages" to write to...
Need some help here...
Searching for CQRS Set Based Validation will give you solutions to this issue.
Greg Young posted about the business impact of embracing eventual consistency http://codebetter.com/gregyoung/2010/08/12/eventual-consistency-and-set-validation/
Jérémie Chassaing posted about discovering missing aggregate roots in the domain http://thinkbeforecoding.com/post/2009/10/28/Uniqueness-validation-in-CQRS-Architecture
Related stack overflow questions:
How to handle set based consistency validation in CQRS?
CQRS Validation & uniqueness