I am new to CQRS and confused about how a command will write an address change to a customer object.
Let's say I have divided customer information into two tables:
customer - Domain database
  Active
  Preferred
Customer_Read database
  Name
  Address
  Phone
  Email
A user modifies the customer's address. The address fields are all in the read database.
There may be three or more query-friendly tables keeping address information.
If I understand the sample CQRS implementations, the Customer domain object (the aggregate root) should publish an event about the address change, which is then handled by multiple handlers to update each of those tables.
How do I implement this when I won't be changing the state of the customer object?
Does the domain have to know that its address lives in another database?
Update:
After going through more posts on the net, I am assuming that if the command does not change the domain state, no event will be generated to save the domain itself, but events will still be applied to change the address in the query/view-model-friendly tables.
You still need to persist some domain data somewhere on the write side. The address is stored in that persistence store, and an event is published after changing it.
This way:
if there was no change, we can skip publishing the event
the domain does not need to know anything about the objects that may (or may not) be subscribed to its events.
This logic applies both to persistence in relational DBs (MS SQL with NHibernate, for example) and to an event sourcing approach.
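To make that concrete, here is a minimal sketch (TypeScript; the repository and event bus interfaces and all names are hypothetical) of a command handler that persists the address on the write side and publishes an event only when something actually changed:

interface EventBus { publish(event: object): void; }
interface CustomerRepository {
  getById(id: string): Customer;
  save(customer: Customer): void;
}

class Address {
  constructor(public street: string, public city: string) {}
  equals(other: Address): boolean {
    return this.street === other.street && this.city === other.city;
  }
}

class CustomerAddressChanged {
  constructor(public customerId: string, public address: Address) {}
}

class Customer {
  constructor(public id: string, public address: Address) {}
  changeAddress(newAddress: Address): CustomerAddressChanged | null {
    if (this.address.equals(newAddress)) return null; // no change, no event
    this.address = newAddress;
    return new CustomerAddressChanged(this.id, newAddress);
  }
}

class ChangeCustomerAddressHandler {
  constructor(private repository: CustomerRepository, private bus: EventBus) {}
  execute(command: { customerId: string; street: string; city: string }): void {
    const customer = this.repository.getById(command.customerId);
    const event = customer.changeAddress(new Address(command.street, command.city));
    if (event === null) return;     // no state change: skip saving and publishing
    this.repository.save(customer); // the write side keeps the address too
    this.bus.publish(event);        // read-side handlers update each query table
  }
}

Note that the domain only returns the event; it never knows which read-side tables, if any, subscribe to it.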
In my PostgreSQL instance I currently have one database with user and business-relevant tables. Our company is planning to add more categories in the future, so I would like to separate the categories into individual databases, as most of the data is quite static once initialized. Below I am showing an example with cars; please keep in mind that our company's services differ much more than Pkw and LorryTruck and need many more extra tables within their scope.
Current state:
PostgreSQL:
  CompanyDB
    UserTable
    PkwTable
    ServiceTable
    BookingTable
Future state:
PostgreSQL:
  UserDB
    UserTable
  PkwDB
    PkwTable
    ServiceTable
    BookingTable
  LorryTruckDB
    LorryTruckTable
    ServiceTable
    BookingTable
My concern is whether and how I could connect user-relevant data to the desired databases. For example, a user can register for Pkw services and might later be interested in LorryTruck services. The main goal is also that a user should only register once in our system.
Is this possible or could I design this better?
I would use a schema, not a different database. It's not possible (or at least not easy) to query data from a different database in PostgreSQL, while using different schemas is standard and works great.
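As a rough illustration, assuming the node-postgres client and hypothetical schema/table names, one connection to a single database can join across schemas, something separate databases would not allow without extra machinery such as foreign data wrappers:

import { Client } from "pg"; // node-postgres

// Hypothetical layout: one database, one schema per category.
async function pkwBookingsForUser(userId: number) {
  const client = new Client({ database: "companydb" });
  await client.connect();
  try {
    // A single query joins the shared user schema with a category schema.
    const result = await client.query(
      `SELECT u.name, b.booked_at
         FROM userdata.user_table u
         JOIN pkw.booking_table b ON b.user_id = u.id
        WHERE u.id = $1`,
      [userId]
    );
    return result.rows;
  } finally {
    await client.end();
  }
}

This also keeps the single-registration goal simple: there is one UserTable, and each category schema just references it.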
I have a SaaS application in the pipeline.
One of the things that has me a bit confused is the best way to manage the set of Australian suburb and state data across multiple databases (this applies to any country, as each country has a list like this).
For example, in Australia you have the Australian postcode list that links all the postcodes to the suburbs, and you can use it to build dropdowns for state, suburb, postcode, etc.
An example of the CSV of Australian postcodes can be found HERE.
So you can upload a CSV file, for example, but the problem remains:
What's the best way to hold this data? It's common to all databases where you have a person, client, employee, etc.
Do you replicate it in each database? Is there a better way than having redundant stores of data?
What's the best way to implement it?
There are several options I would look at for this problem. Some considerations:
Number of address rows expected
Whether a client database is concerned with prefill/validated international addresses
Whether the client system is web connected or can operate in isolation
Are these databases/systems hosted by you or distributed to individual clients? (SaaS implies "Web" and "Hosted by You" to points 3 & 4)
How critical address integrity is.
For smaller systems, a simple option for address systems is to de-normalize the address data (state, postcode, suburb) and consider using a central lookup database/service, either under your own control or from a third party. The denormalized address table would contain the text fields for the state, postcode, suburb, etc. rather than FK values (stateId, suburbId, etc.). This avoids needing to store lookup tables in every client DB; you keep just one lookup DB, or leave that to a third-party service.
The advantage of a third-party lookup is that keeping it up to date with new areas and changes is handled for you. Third-party services require a web connection, and you have to factor in the risk of their service being down or a web connection being unavailable. Larger systems with millions of addresses might benefit from normalizing the address table, so the "cost" of replicating suitable address lookup tables might be worthwhile. You can still use a central service to look up addresses, then resolve whether the client DB already has a StateId, SuburbId, etc. for the respective state/suburb for that postcode, inserting one if necessary. (This cuts down the number of rows each client DB needs to the address values that are actually used; a get-or-insert sketch follows below.)
In that last example you might have lookup tables for State and Suburb linked to PostCodes, linked to Country. Country would default to the target country and might be an optional selection for international addresses. The user provides a postcode to the service, which returns suburbs, and they select a suburb. The address validation service could go as far as validating the street address. When you're happy an address is "valid" and ready to be saved, you search your local State, Suburb (even Street) tables for matches for that postcode; if found, use those FKs, otherwise insert new entries and link the FK.
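A rough sketch of that get-or-insert FK resolution, again with node-postgres and hypothetical table/column names:

import { Client } from "pg";

// Resolve the local FK for a validated suburb, inserting it on first use.
async function suburbIdFor(
  client: Client,
  suburb: string,
  state: string,
  postcode: string
): Promise<number> {
  const found = await client.query(
    "SELECT id FROM suburb WHERE name = $1 AND state = $2 AND postcode = $3",
    [suburb, state, postcode]
  );
  if (found.rows.length > 0) return found.rows[0].id; // already known locally
  // First time this client DB sees the suburb: insert and use the new key.
  const inserted = await client.query(
    "INSERT INTO suburb (name, state, postcode) VALUES ($1, $2, $3) RETURNING id",
    [suburb, state, postcode]
  );
  return inserted.rows[0].id;
}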
Using a separate service, or services, would be my choice, especially if you need to support validating/storing international addresses, for instance if the client is in Australia but regularly has address information for New Zealand. Storing entire address validation tables could get rather large if clients could be resolving addresses for many countries (i.e. European countries and their neighbours). You can write a façade service to support different third-party address validation providers and/or homemade implementations behind a standard interface, along the lines of the sketch below.
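A minimal sketch of such a façade: one interface, a web-backed provider, and a local one (all names and the provider endpoint are made up):

interface AddressQuery { country: string; postcode: string; }
interface SuburbResult { suburb: string; state: string; }

interface AddressLookupProvider {
  suburbsForPostcode(query: AddressQuery): Promise<SuburbResult[]>;
}

// Third-party-backed implementation (the endpoint shape is illustrative).
class ThirdPartyLookup implements AddressLookupProvider {
  constructor(private baseUrl: string) {}
  async suburbsForPostcode(q: AddressQuery): Promise<SuburbResult[]> {
    const res = await fetch(
      `${this.baseUrl}/suburbs?country=${q.country}&postcode=${q.postcode}`
    );
    if (!res.ok) throw new Error(`lookup failed: ${res.status}`);
    return res.json();
  }
}

// Local table/CSV-backed implementation for systems without a web connection.
class LocalTableLookup implements AddressLookupProvider {
  constructor(
    private rows: Array<{ postcode: string; suburb: string; state: string }>
  ) {}
  async suburbsForPostcode(q: AddressQuery): Promise<SuburbResult[]> {
    return this.rows
      .filter(r => r.postcode === q.postcode)
      .map(r => ({ suburb: r.suburb, state: r.state }));
  }
}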
If a system has to operate in isolation from an internet connection, then you'll probably be stuck with each database having one or more local data sources to resolve address information.
Data integrity of address information is a separate concern you might want to consider. In some systems you need to validate that an address is recognized, and you don't want to allow invalid combinations or miss unexpected changes. Services that validate a particular address can provide unique IDs for an address that you can store as part of your address information. (These often tie into geocoordinate solutions where you want to quickly direct a map service to a particular location.) Alternatively, once you have successfully looked up an address and validated that it is correct, even if just the country, postcode, and suburb, you can create and store a hash of those values to check for tampering (i.e. someone or some system changed a field to make the address invalid; the combined address won't match the stored hash). Addresses can be checked before use and flagged if not valid.
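A small sketch of the hashing idea, using Node's crypto module (field names are illustrative):

import { createHash } from "crypto";

interface ValidatedAddress { country: string; postcode: string; suburb: string; }

// Hash the validated fields so later tampering can be detected.
function addressHash(a: ValidatedAddress): string {
  // Normalize first so harmless formatting differences don't fail the check.
  const canonical = [a.country, a.postcode, a.suburb]
    .map(s => s.trim().toLowerCase())
    .join("|");
  return createHash("sha256").update(canonical).digest("hex");
}

// On save: store addressHash(address) alongside the address.
// On use: recompute and compare; a mismatch flags the address for re-validation.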
I'm developing an API with Sails, and now I need to secure some fields of an entity. Those fields must be accessible only to an admin or the owning user.
I have a structure like this:
Employee (contains your employee records)
  fullName
  hourlyWage
  phoneNumber
  accountBank
Location (contains a record for each location you operate)
  streetAddress
  city
  state
  zipcode
  ...
I need to encrypt phoneNumber and accountBank so that nobody can see the values of these fields in the database; only the owner or the admin should be able to.
How can I do that?
You are looking for a way to encrypt data so that people without the required access rights cannot see it.
The solution is not Sails.js-specific, and Node actually comes with tools to encrypt data: https://nodejs.org/api/crypto.html.
The key rule here is to always keep your secret password safe.
As for integration in your Sails.js application, I would use lifecycle callbacks in models. The official documentation provides a good example here: http://sailsjs.org/documentation/concepts/models-and-orm/lifecycle-callbacks
Basically you just define a function that will be called each time the record is about to be created, fetched or updated. You can then apply your encrypt/decrypt functions there.
This will encrypt/decrypt your phone numbers and bank account numbers automatically.
Regarding access control, you can use Sails' policies along with authentication to determine whether the client has the right to access the resource. If not, you can always remove attributes from the response sent back to the client.
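A rough sketch of the whole idea in TypeScript, using Node's crypto module; the model wiring at the end is abbreviated and hypothetical, so check the lifecycle-callbacks documentation linked above for the exact signature in your Sails version:

import { createCipheriv, createDecipheriv, randomBytes } from "crypto";

// The 32-byte key must come from safe configuration (env var, secret store),
// never be hard-coded.
const KEY = Buffer.from(process.env.FIELD_ENCRYPTION_KEY ?? "", "hex");

function encryptField(plain: string): string {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", KEY, iv);
  const encrypted = Buffer.concat([cipher.update(plain, "utf8"), cipher.final()]);
  // Store iv + auth tag + ciphertext together so decryption is self-contained.
  return [iv, cipher.getAuthTag(), encrypted].map(b => b.toString("hex")).join(":");
}

function decryptField(stored: string): string {
  const [iv, tag, data] = stored.split(":").map(h => Buffer.from(h, "hex"));
  const decipher = createDecipheriv("aes-256-gcm", KEY, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(data), decipher.final()]).toString("utf8");
}

// Hypothetical wiring into the Employee model via a lifecycle callback:
const Employee = {
  attributes: {
    fullName: { type: "string" },
    phoneNumber: { type: "string" },
    accountBank: { type: "string" },
  },
  beforeCreate(values: any, proceed: (err?: Error) => void) {
    // Encrypt the sensitive fields before the record hits the database.
    values.phoneNumber = encryptField(values.phoneNumber);
    values.accountBank = encryptField(values.accountBank);
    return proceed();
  },
};

On the way out, decrypt only after a policy has confirmed the requester is the owner or an admin; otherwise strip those attributes from the response.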
I'm trying to wrap my head around CQRS. I'm drawing from the code example provided here. Please be gentle; I'm very new to this pattern.
I'm looking at a logon scenario. I like this scenario because it's not really demonstrated in any examples I've read. In this case I do not know what the aggregate id of the user is, or even if there is one, as all I start with is a username and password.
In the fohjin example, events are always fired from the domain (if needed) and the command handler calls some method on the domain. However, if a user logon is invalid, I have no domain object to call anything on. Also, most, if not all, of the base Command/Event classes defined in the fohjin project pass around an aggregate id.
In the case of the event LogonFailure I may want to update a LogonAudit report.
So my question is: how to handle commands that do not resolve to a particular aggregate? How would that flow?
public void Execute(UserLogonCommand command)
{
    // User looked up by username somehow; should I query the report
    // database to resolve the username to an id?
    User user = null;
    if (user == null || user.Password != command.Password)
    {
        // What to do here? I want to raise an event somehow
        // that doesn't target a specific user.
    }
    else
    {
        user.LogonSuccessful();
    }
}
You should take into account that in most cases CQRS and DDD are suitable for just some parts of the system. It is very uncommon to model an entire system with CQRS concepts; it fits best in the parts with a complex business domain, and I wouldn't call logging a user in a particularly complex business scenario. In fact, in most cases it's not business-related at all. The actual business domain starts when the user is already identified.
Another thing to remember is that, due to eventual consistency, it is extremely beneficial to check as much as we can using only the query side, without even creating any commands/events.
Assuming, however, that the information about successful/failed user log-ins is meaningful, I'd model your scenario with the following steps (a sketch in code follows the list):
User provides name and password
Name/password is validated against some kind of query database
When the provided credentials are valid, RegisterValidUserCommand(userId) is executed, which results in the proper event
If the provided credentials are not valid, RegisterInvalidCredentialsCommand(providedUserName) is executed, which results in the proper event
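A minimal sketch of that flow; the read-model lookup, command bus, and command names are illustrative:

interface ReadDb { findUserId(name: string, passwordHash: string): string | null; }
interface CommandBus { send(command: object): void; }

class RegisterValidUserCommand { constructor(public userId: string) {} }
class RegisterInvalidCredentialsCommand {
  constructor(public providedUserName: string) {}
}

function logon(name: string, passwordHash: string, readDb: ReadDb, bus: CommandBus) {
  // Steps 1-2: validate the credentials against the query side only.
  const userId = readDb.findUserId(name, passwordHash);
  if (userId !== null) {
    // Step 3: valid credentials, record the successful log-in.
    bus.send(new RegisterValidUserCommand(userId));
  } else {
    // Step 4: invalid credentials, record the failure with no aggregate id.
    bus.send(new RegisterInvalidCredentialsCommand(name));
  }
}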
The point is that checking user credentials is not necessarily part of the business domain.
That said, there is another related concept, in which not every command or event needs to be business-related, so it is possible to handle commands that don't need aggregates to be loaded.
For example, you may want to change data that is informational only and in no way affects the business concepts of your system, like information about a person's sex (once again, assuming that it has no business meaning).
In that case, when you handle SetPersonSexCommand, there's actually no need to load an aggregate, as that information doesn't even have to be located on entities; instead you create a PersonSexSetEvent, register it, and publish it so the query side can project it to the screen/report.
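A handler for such a command could then look roughly like this (interfaces are illustrative; note that no aggregate is loaded):

interface EventStore { register(event: object): void; }
interface EventPublisher { publish(event: object): void; }

class SetPersonSexCommand { constructor(public personId: string, public sex: string) {} }
class PersonSexSetEvent { constructor(public personId: string, public sex: string) {} }

class SetPersonSexHandler {
  constructor(private store: EventStore, private publisher: EventPublisher) {}
  execute(command: SetPersonSexCommand): void {
    // The event is created straight from the command, no aggregate involved.
    const event = new PersonSexSetEvent(command.personId, command.sex);
    this.store.register(event);    // persist it
    this.publisher.publish(event); // let the query side project it
  }
}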
Baseline info:
I'm using an external OAuth provider for login. If the user logs in with the external OAuth provider, they are OK to enter my system. However, this user may not yet exist in my system. It's not really a technology issue, but I'm using JOliver EventStore, for what it's worth.
Logic:
I'm not given a GUID for new users; I just have an email address.
I check my read model before sending a command: if the user email exists, I issue a Login command with the ID; if not, I issue a CreateUser command with a generated ID. My issue is in the case of a new user.
A save occurs in the event store with the new ID.
Issue:
Assume two create commands are somehow issued before the read model is updated, due to a browser refresh or some other anomaly that occurs before consistency with the read model is achieved. That's OK; that's not my problem.
What Happens:
Because the new ID is a comb GUID, there's no chance the event store will know that these two CreateUser commands represent the same user. By the time they get to the read model, the read model will know (because they have the same email) and can merge the two records or take some other compensating action. But now my read model is out of sync with the event store, which still thinks these are two separate entities.
Perhaps it doesn't matter because:
Replaying the events will have the same effect on the read model, so that should be OK.
Because both commands are duplicate "Create" commands, they should contain identical information, so it's not like I'm losing anything in the event store.
Can anybody illuminate how they handled similar issues? If some compensating action needs to occur, does the read model service issue some kind of compensating command when it realizes it's got a duplicate entry? Is there a simpler methodology I'm not considering?
You're very close to what I'd consider a proper possible solution. The scenario, if I may summarize, is somewhat like this:
Perform the OAuth-entication.
Using the read model decide between a recurring visitor and a new visitor, based on the email address.
In case of a new visitor, send a RegisterNewVisitor command message that gets handled and stored in the eventstore.
Assume there is some concurrency going on that, for the same email address, causes two RegisterNewVisitor messages, each containing what the system thinks is the key associated with the email address. These keys (guids) are different.
Detect this duplicate key issue in the read model and merge both read model records into one record.
Now, instead of merging the records in the read model, why not send a ResolveDuplicateVisitorEmailAddress { Key1, Key2 } command to your domain model, leaving it up to the domain model (the codified form of the business decision to be taken) to resolve this issue? You could even have a dedicated read model to deal with these kinds of issues; the other read model will just get a kind of DuplicateVisitorEmailAddressResolved event and project it into the proper records.
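A rough sketch of the detection side (all names are illustrative and the read model is an in-memory stand-in):

interface CommandBus { send(command: object): void; }

class ResolveDuplicateVisitorEmailAddress {
  constructor(public key1: string, public key2: string) {}
}

class VisitorReadModel {
  private byEmail = new Map<string, string>(); // email -> visitor key

  constructor(private bus: CommandBus) {}

  // Projection handler for the event produced by RegisterNewVisitor.
  onNewVisitorRegistered(event: { key: string; email: string }): void {
    const existingKey = this.byEmail.get(event.email);
    if (existingKey !== undefined && existingKey !== event.key) {
      // Duplicate detected: let the domain decide instead of silently merging.
      this.bus.send(new ResolveDuplicateVisitorEmailAddress(existingKey, event.key));
      return;
    }
    this.byEmail.set(event.email, event.key);
  }
}

The projection handling the resulting DuplicateVisitorEmailAddressResolved event would then merge the two records.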
Word of warning: you've asked a technical question and I gave you a technical, possible solution. In general, I would not apply this technique unless I had some business indicator that it's worth investing in (what's the frequency of a user logging in concurrently for the first time? Maybe solving it this way is just a way of ignoring the root cause: flaky OAuth, no register-new-visitor process in place, etc.). There are other technical solutions to this problem, but I wanted to give you the one closest to what you already have in place. They range from registering new visitors sequentially to keeping an in-memory projection of the visitors not yet in the read model.