CQRS + Event Sourcing: (is it correct that) Commands are generally communicated point-to-point, while Domain Events are communicated through pub/sub?

Didn't know how to shorten that title.
I'm basically trying to wrap my head around the concept of CQRS (http://en.wikipedia.org/wiki/Command-query_separation) and related concepts.
Although CQRS doesn't necessarily incorporate Messaging and Event Sourcing, they seem to be a good combination (as can be seen in the many examples / blog posts combining these concepts).
Given a use case for a state change of something (say, updating a Question on SO), would you consider the following flow to be correct (as in best practice)?
The system issues an aggregate UpdateQuestionCommand, which might be separated into a couple of smaller commands: UpdateQuestion, which is targeted at the Question Aggregate Root, and UpdateUserAction (to count points, etc.), targeted at the User Aggregate Root. These are sent asynchronously using point-to-point messaging.
The aggregate roots do their thing and, if all goes well, fire the events QuestionUpdated and UserActionUpdated respectively, which contain state that is handed off to an Event Store to be persisted, etc. (just mentioned for completeness, not really the point here).
These events are also put on a pub/sub queue for broadcasting. Any subscribers (likely including one or more Projectors that create the Read Views) are free to subscribe to these events.
The general question: is it indeed best practice that Commands are communicated point-to-point (i.e., the receiver is known), whereas Events are broadcast (i.e., the receiver(s) are unknown)?
Assuming the above, what would be the advantages/disadvantages of allowing Commands to be broadcast through pub/sub instead of point-to-point?
For example: broadcasting Commands while using Sagas (http://blog.jonathanoliver.com/2010/09/cqrs-sagas-with-event-sourcing-part-i-of-ii/) could be a problem, since the mediating role a Saga needs to play when one of the aggregate roots fails is hindered: the saga doesn't know which aggregate roots participate to begin with.
On the other hand, I see advantages (flexibility) if broadcasting Commands were allowed.
Any help in clearing my head is highly appreciated.

Yes: for a Command or Query there is one and exactly one receiver (so you can still load balance), but for Events there can be zero or more receivers (subscribers).
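To make that asymmetry concrete, here's a minimal in-memory sketch (all names are hypothetical, not any particular framework's API): the command side refuses more than one handler per message type, while the event side delivers to however many subscribers happen to exist.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Hypothetical in-memory bus illustrating the command/event asymmetry.
class SimpleBus {
    private final Map<Class<?>, Consumer<Object>> commandHandlers = new HashMap<>();
    private final Map<Class<?>, List<Consumer<Object>>> eventSubscribers = new HashMap<>();

    @SuppressWarnings("unchecked")
    <C> void registerHandler(Class<C> commandType, Consumer<C> handler) {
        // Point-to-point: a second handler for the same command type is a config error.
        if (commandHandlers.putIfAbsent(commandType, (Consumer<Object>) handler) != null) {
            throw new IllegalStateException("Command already has a handler: " + commandType);
        }
    }

    @SuppressWarnings("unchecked")
    <E> void subscribe(Class<E> eventType, Consumer<E> subscriber) {
        // Pub/sub: any number of subscribers may register for the same event type.
        eventSubscribers.computeIfAbsent(eventType, t -> new ArrayList<>())
                        .add((Consumer<Object>) subscriber);
    }

    void send(Object command) {
        // Exactly one receiver must exist for a command.
        Consumer<Object> handler = commandHandlers.get(command.getClass());
        if (handler == null) {
            throw new IllegalStateException("No handler registered for " + command.getClass());
        }
        handler.accept(command);
    }

    void publish(Object event) {
        // Zero receivers is perfectly fine for an event.
        eventSubscribers.getOrDefault(event.getClass(), List.of())
                        .forEach(subscriber -> subscriber.accept(event));
    }
}
```

Note that `send` throws when no handler exists, while `publish` silently does nothing: that is exactly the "known receiver" vs. "unknown receiver(s)" distinction in your question.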


Axon or Kafka to support CQRS/ES

Consider the simple use case in which I want to store product ratings as events in an event store.
I could use two different approaches:
Using Axon: A Rating aggregate is responsible for handling the CreateRatingCommand and sending the RatingCreatedEvent. Sending the event would cause the Rating to be stored in the event store. Other event handlers can replay the event stream when connecting to the Axon Server instance and do whatever is needed with the ratings. In this case, the event handler would be used as a stream processor.
Using Kafka: A KafkaProducer would be used to store a Rating POJO (after proper serialization) in a Kafka topic. Setting the topic's retention time to indefinite would ensure that no events ever get lost. Kafka Streams would in this case be used to do the actual rating-processing logic.
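To make the first (Axon) approach more concrete, here is a minimal sketch as I picture it, assuming Axon Framework 4 annotations; the command and event classes are hypothetical placeholders, and aggregate registration (e.g. Spring's @Aggregate) is omitted:

```java
import org.axonframework.commandhandling.CommandHandler;
import org.axonframework.eventsourcing.EventSourcingHandler;
import org.axonframework.modelling.command.AggregateIdentifier;
import static org.axonframework.modelling.command.AggregateLifecycle.apply;

// Hypothetical command and event carrying the rating data.
record CreateRatingCommand(String ratingId, String userId, String productId, int stars) { }
record RatingCreatedEvent(String ratingId, String userId, String productId, int stars) { }

public class Rating {

    @AggregateIdentifier
    private String ratingId;

    protected Rating() {
        // Required by Axon to reconstruct the aggregate from its past events.
    }

    @CommandHandler
    public Rating(CreateRatingCommand command) {
        // No real business rules in this use case: just record the fact as an event.
        apply(new RatingCreatedEvent(command.ratingId(), command.userId(),
                command.productId(), command.stars()));
    }

    @EventSourcingHandler
    public void on(RatingCreatedEvent event) {
        // Only the identifier is kept as state; the aggregate is otherwise "dumb".
        this.ratingId = event.ratingId();
    }
}
```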
Some architectural questions appear to me for both approaches:
When using Axon:
Is there any added value to use Axon (or similar solutions) if there is no real state to be maintained or altered within the aggregate? The aggregate just serves as a "dumb" placeholder for the data, but does not provide any state changing logic.
How does Axon handle multiple event handlers of the same event type? Will they all handle the same event (same aggregate id) in parallel, or is the same event only handled once by one of the handlers?
Are events stored in the Axon event store kept until the end of time?
When using Kafka:
Kafka stores events/messages with the same key in the same partition. How does one select the best value for a key in the use case of user-product ratings? UserId, ProductId, or a separate topic for both, publishing each event to both topics?
Would it be wise to use a separate topic for each user and each product, resulting in a massive number of topics on the cluster? (Approximately <5k products and >10k users.)
I don't know if SO is the preferred forum for this kind of question... I was just wondering what you (would) recommend in this particular use case as best practice. Looking forward to your feedback, and feel free to point out other points of thought I missed in the previous questions.
EDIT 12/11/2020: I just found a related discussion containing useful information related to my question.
As Jan Galinski already puts it, there is no foolproof answer to this, really. It is worth a broader discussion on, for example, AxonIQ's Discuss forum. Regardless, there are some questions in here I can definitely give an answer to, so let's get to it:
Axon Question 1 - Axon Framework is, as you've noticed, used a lot for DDD-centric applications. Nothing, however, forces you to base yourself on that notion at all. You can strip the framework of its Event Sourcing specifics, as well as its modelling specifics entirely, and purely go for the messaging idea of distinct commands, events, and queries. It was actually a conscious decision to segregate Axon Framework version 3 into these sub-parts when version 4 (the current version) was released. Next to that, I think there is great value in not just basing yourself on event messages. Using distinct commands and queries only further decouples your components, making for a far richer and easier-to-extend application landscape.
Axon Question 2 - This depends on where the @EventHandler annotated methods are located, actually. If they're in the same class, only one will be invoked. If they're positioned in distinct classes, then both will receive the same event. Furthermore, if they're segregated between distinct classes, it is important to note that Axon uses an Event Processor as the technical solution for invoking your event handlers. If distinct classes are grouped under the same Event Processor, you can impose a certain ordering on which handler is invoked first. Next to this, if the event handling should occur in parallel, you will have to configure a so-called TrackingEventProcessor (the default in Axon Framework), as it allows configuring several threads to handle events concurrently. So, to conclude this section: everything you're asking in question two is an option, nothing is a necessity. Just a matter of configuration, really. It might be worth checking up on the Axon Framework documentation page on the matter.
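As an illustration (a sketch against Axon 4's configuration API; the event record and handler contents are hypothetical), two handler classes in distinct processing groups each receive every event, and one group is configured for parallel handling:

```java
import org.axonframework.config.Configuration;
import org.axonframework.config.Configurer;
import org.axonframework.config.ProcessingGroup;
import org.axonframework.eventhandling.EventHandler;
import org.axonframework.eventhandling.TrackingEventProcessorConfiguration;

record RatingCreatedEvent(String ratingId, int stars) { }

@ProcessingGroup("rating-projection")
class RatingProjection {
    @EventHandler
    public void on(RatingCreatedEvent event) {
        // Updates the read model; this handler receives every RatingCreatedEvent...
    }
}

@ProcessingGroup("rating-statistics")
class RatingStatistics {
    @EventHandler
    public void on(RatingCreatedEvent event) {
        // ...and so does this one, independently, in its own processor.
    }
}

class ProcessorSetup {
    static Configurer configure(Configurer configurer) {
        // Give one group four threads so its events are handled concurrently.
        return configurer.eventProcessing(processing -> processing
                .registerTrackingEventProcessor("rating-projection",
                        Configuration::eventStore,
                        conf -> TrackingEventProcessorConfiguration.forParallelProcessing(4)));
    }
}
```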
Axon Question 3 - As Axon Server serves the purpose of an Event Store, there is no retention period at all. So yes, events are by default kept until the end of time. There is nothing stopping you from dropping events, though, if you feel there is no value in storing them to, for example, base all your models on (as you would do when using Event Sourcing).
It's the Kafka questions I'm personally less familiar with (figures, as a contributor to Axon Framework, I guess). I can give you my two cents on the matter here too, although I'd recommend getting a second opinion:
Kafka Question 1 - From my personal feeling of what such an application would require, I'd assume you'd want to be able to retrieve all data for a given product as efficiently as possible. I'd wager it's important that all events end up in the same partition to make this process as efficient as possible, as it wouldn't require any merging afterwards. With this in mind, I think using the ProductId makes the most sense.
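A sketch of what that looks like in producer code (the topic name and ids are hypothetical; the value here is a pre-serialized JSON string, but any serializer works):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class RatingPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            String productId = "product-42"; // the record key
            String rating = "{\"userId\":\"user-7\",\"stars\":5}";
            // Keying by productId routes all ratings for one product to the same
            // partition, so they can be read back in order without any merging.
            producer.send(new ProducerRecord<>("product-ratings", productId, rating));
        }
    }
}
```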
Kafka Question 2 - If you are anticipating only 5,000 products and 10,000 users, I'd guess it should be doable to have separate topics for these. Opinion incoming: it is here, though, where I personally feel that Kafka's intent to give you direct power to decide when to use topics overcomplicates what you're actually trying to achieve, namely business functionality. Giving you the power to segregate streams feels more like an afterthought from the perspective of application development. As soon as you require an enterprise-grade/efficient message bus, that's when this option really shines, I think, as then you can optimize for bulk.
Hoping all this helps you further, @KDW!

CQRS Event Sourcing valueobject & entities accepting commands

Experts,
I am evaluating moving a nicely designed DDD app to an event-sourced architecture as a pet project.
As can be inferred, my Aggregate operations are relatively coarse-grained. As part of my design process, I now find myself emitting a large number of events from a small subset of operations to satisfy what I know to be requirements of the read model. Is this acceptable practice?
In addition to this, I have distilled a lot of the domain complexity via the use of Value Objects and Entities. Can VOs/Entities accept commands and emit events themselves, or should I expose state and emit events from the command handler lower down the stack?
In terms of VOs, note that I use mutable operations sparingly; it is a trade-off against overcomplicating other areas of my domain.
I now find myself emitting a large number of events from a small subset of operations to satisfy what I know to be requirements of the read model. Is this acceptable practice?
In most cases, you should have one event per command. Remember, events describe the user's intent, so you have to stay close to the user's action.
Can VOs/Entities accept commands and emit events themselves?
No, only aggregates emit events. And yes, you can end up with very messy aggregates if you have a lot of events, but there are solutions for that, like delegating the work to the commands and events themselves. I blogged about this technique here: https://dev.to/maximegel/cqrs-scalable-aggregates-731
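To give an idea of the shape of that delegation (my own minimal sketch, not necessarily the exact code from the linked post): the aggregate shrinks to a dispatcher, while each command decides which event to emit and each event knows how to apply itself.

```java
// Commands decide; events apply themselves; the aggregate just dispatches.
interface RatingCommand { RatingEvent decide(RatingState state); }
interface RatingEvent   { RatingState applyTo(RatingState state); }

record RatingState(String ratingId, int stars) {
    RatingState handle(RatingCommand command) {
        // The aggregate stays tiny no matter how many commands/events exist.
        return command.decide(this).applyTo(this);
    }
}

record ChangeStars(int newStars) implements RatingCommand {
    @Override
    public RatingEvent decide(RatingState state) {
        if (newStars < 1 || newStars > 5) {
            throw new IllegalArgumentException("stars must be between 1 and 5");
        }
        return new StarsChanged(newStars);
    }
}

record StarsChanged(int stars) implements RatingEvent {
    @Override
    public RatingState applyTo(RatingState state) {
        return new RatingState(state.ratingId(), stars);
    }
}
```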
In terms of VO's note that I use mutable operations sparingly
As long as you're aware of the consequences, that's fine. Trade-offs are part of the job, but be sure your team is aware of that: since it's written everywhere that value objects are immutable, you expose yourselves to confusion and pointer issues.

How to retrieve data from another bounded context in DDD?

Initially the app runs on desktop; however, it will run on the web platform in the future.
There are some bounded contexts in the app, and some of them need to retrieve data from another. I don't know which approach to use for this case.
I thought of using the mediator pattern: a bounded context "A" requests data "X", and the mediator then calls another bounded context, like "B", and gets the correct data "X". Finally, the mediator brings data "X" to BC "A".
This scenario will change when the app runs on the web; then I've thought of having one microservice request data from another microservice, using the mediator pattern too.
Are both approaches of interest, or is there another, better solution?
Could anyone help me, please?
Thanks a lot!
If you're retrieving data from other bounded contexts through either DB or API calls, your architecture might devolve into the death star pattern, because it introduces unwanted coupling and knowledge into the client context.
A better approach might be to look at event-driven mechanisms like webhooks or message queues as a way of emitting the data you want to share to subscribing context(s). This is good because it reduces the coupling of your bounded context(s) through data replication across contexts, which results in greater bounded-context independence.
This gives you the feeling of: "Who cares if bounded context B is not available at the moment? Bounded contexts A and C have the data they need inside them, and I can resume syncing later, since my data-update events are recorded on my queue."
The answer to this question breaks down into two distinct areas:
the logical challenge of communicating between different contexts, where the same data could be used in very different ways. How does one context interpret the meaning of the data?
and the technical challenge of synchronizing data between independent systems. How do we guarantee the correctness of each system's behavior when they both have independent copies of the "same" data?
Logically, a context map is used to define the relationship between any bounded contexts that need to communicate (share data) in any way. The domain models that control the data are only applicable within a single bounded context, so some method for interpreting data from another context is needed. That's where the patterns from Evans's book come into play: customer/supplier, conformist, published language, open host, anti-corruption layer, or (the cop-out pattern) separate ways.
Using a mediator between services can be thought of as an implementation of the anti-corruption layer pattern: the services don't need to speak the same language, because there's an independent buffer between them doing the translation. In a microservice architecture, this could be some kind of integration service between two very different contexts.
From a technical perspective, direct API calls between services in different bounded contexts introduce dependencies between those services, so an event-driven approach like the one Allan mentioned is preferred, assuming your application is okay with the implications (eventual consistency of the data). Picking a messaging platform that gives you the guarantees necessary to keep the data in sync is important. Most asynchronous messaging protocols guarantee "at least once" delivery, but ordering of messages and de-duplication of repeats is up to the application.
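For instance, a minimal sketch of application-level de-duplication (in a real system the processed-id set would live in the same transactional store as the handler's side effects, not in memory):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Makes an at-least-once consumer effectively idempotent by remembering
// which message ids have already been processed.
class IdempotentHandler {
    private final Set<String> processedIds = ConcurrentHashMap.newKeySet();

    void handle(String messageId, Runnable sideEffect) {
        // add() returns false when the id was already present: skip the repeat.
        if (!processedIds.add(messageId)) {
            return;
        }
        sideEffect.run();
    }
}
```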
Sometimes it's simpler to use a synchronous API call, especially if you find yourself doing a lot of request/response type messaging (which can happen if you have services sending command-type messages to each other).
A composite UI is another pattern that allows you to do data integration in the presentation layer, by having each component pull data from the relevant service, then share/combine the data in the UI itself. This can be easier to manage than a tangled web of cross-service API calls in the backend, especially if you use something like an IT/Ops service, NGINX, or MuleSoft's Experience API approach to implement a "backend-for-frontend".
What you need is a DDD pattern for integration. BC "B" is upstream, "A" is downstream. You could go for an OHS (Open Host Service) with a PL (Published Language) upstream, and an ACL (Anti-Corruption Layer) downstream. In practice this is a REST API upstream and an adapter downstream. Every time A needs data from B, the adapter calls the REST API and adapts the returned info to A's domain model. This would be synchronous. If you want to go for an async integration, B would publish events to an MQ with the info, and A would listen for those events and get the info.
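A sketch of the downstream side (all names and the URL are hypothetical): the adapter is the only place where B's published language is translated into A's domain model.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

record CustomerDto(String id, String fullName) { }    // B's published language
record Buyer(String buyerId, String displayName) { }  // A's own domain model

class BuyerAdapter {
    private final HttpClient http = HttpClient.newHttpClient();

    Buyer fetchBuyer(String id) throws Exception {
        // Call B's open host service (its REST API).
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://context-b.example/api/customers/" + id)).build();
        String body = http.send(request, HttpResponse.BodyHandlers.ofString()).body();

        CustomerDto dto = parse(body); // JSON deserialization elided for brevity

        // The anti-corruption layer: B's terms become A's terms here, nowhere else.
        return new Buyer(dto.id(), dto.fullName());
    }

    private CustomerDto parse(String json) {
        throw new UnsupportedOperationException("use your JSON library of choice");
    }
}
```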
I want to add a comment about analytics in DDD. There are several approaches for sending data to analytics:
1) If you have a big enterprise application and you need to collect a lot of statistics from all bounded contexts, it is better to move analytics into a separate service and use a message queue to send the data there.
2) If you have a simple application, separate your analytics from your app in another context and use an event or command to talk to it.

CQRS sagas - did I understand them right?

I'm trying to understand sagas, and in the meantime I have developed a specific way of thinking about them - but I am not sure whether I got the idea right. Hence I'd like to elaborate and have others tell me whether it's right or wrong.
In my understanding, sagas are a solution to the question of how to model long-running processes. Long-running means: involving multiple commands, multiple events, and possibly multiple aggregates. The process is not modeled inside one of the participating aggregates, to avoid dependencies between them.
Basically, a saga is nothing more than a command/event handler that reacts to internal and external commands/events. It does not contain its own business logic; it's just a (finite) state machine, and therefore performs tasks such as: when event X happens, send command Y.
Sagas are persisted to the event store just like aggregates; they are correlated to a specific aggregate instance and hence are reloaded when that specific aggregate (or set of aggregates) is used.
Is this right?
There are different means of implementing sagas, ranging from stateless event handlers that publish commands all the way to carrying all the state and basically being the domain's aggregates themselves. Udi Dahan once wrote an article about sagas being the only aggregates in a (in his specific case) correctly modeled system. I'll look it up and update this answer.
There's also the concept of document-based sagas.
Your definition of sagas sounds right to me; I would define them the same way.
The only change I would make to your description is that a saga is only an event handler (not a command handler); based on the received event and its internal state, it constructs a command and sends it to the CommandBus for execution.
Normally a saga has only a single event it can be started by (StartByEvent), multiple events that transition it to the next state (TransitionByEvent), and multiple events it can be ended by (EndByEvent).
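A tiny sketch of that shape, reusing the events from the question at the top of this page (CommandBus and all types here are hypothetical placeholders):

```java
interface CommandBus { void send(Object command); }

record QuestionUpdated(String questionId, String userId) { }
record UserActionUpdated(String userId) { }
record UpdateUserAction(String userId) { }

// No business logic, only routing rules and a little correlation state.
class QuestionUpdateSaga {
    private boolean pointsAwarded = false;

    // StartByEvent: a saga instance is created when this event arrives.
    void on(QuestionUpdated event, CommandBus commandBus) {
        commandBus.send(new UpdateUserAction(event.userId()));
    }

    // TransitionByEvent / EndByEvent: the state machine advances and,
    // with nothing left to coordinate, the saga can be closed.
    void on(UserActionUpdated event, CommandBus commandBus) {
        pointsAwarded = true;
    }
}
```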
On MSDN they define sagas as process managers:
The term saga is commonly used in discussions of CQRS to refer to a piece of code that coordinates and routes messages between bounded contexts and aggregates. However, for the purposes of this guidance we prefer to use the term process manager to refer to this type of code artifact. There are two reasons for this: There is a well-known, pre-existing definition of the term saga that has a different meaning from the one generally understood in relation to CQRS. The term process manager is a better description of the role performed by this type of code artifact. Although the term saga is often used in the context of the CQRS pattern, it has a pre-existing definition. We have chosen to use the term process manager in this guidance to avoid confusion with this pre-existing definition.

The term saga, in relation to distributed systems, was originally defined in the paper "Sagas" by Hector Garcia-Molina and Kenneth Salem. This paper proposes a mechanism that it calls a saga as an alternative to using a distributed transaction for managing a long-running business process. The paper recognizes that business processes are often comprised of multiple steps, each of which involves a transaction, and that overall consistency can be achieved by grouping these individual transactions into a distributed transaction. However, in long-running business processes, using distributed transactions can impact on the performance and concurrency of the system because of the locks that must be held for the duration of the distributed transaction.
reference: http://msdn.microsoft.com/en-us/library/jj591569.aspx

Recreate a graph that changes over time

I have an entity in my domain that represents a city's electrical network. Currently my model is an entity with a List that contains breakers, transformers, and lines.
The network changes every time a breaker is opened/closed, users can change connections, etc.
In all examples of CQRS the EventStore is queried with Version and aggregateId.
Do you think I have to implement events only for the "network" aggregate or also for every "Connectable" item?
In this case, when I have to replay all events to get the "current" status (as of a given date), I can have nearly 10,000-20,000 events to process.
Should an Event modify one property, or do I need an Event that modifies a whole object (containing all properties of the object)?
There's always an exception to the rule, but I think you need to have an event for every command handled in your domain. You can get around the problem of processing so many events by making use of snapshots:
http://thinkbeforecoding.com/post/2010/02/25/Event-Sourcing-and-CQRS-Snapshots
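A minimal sketch of the loading path with snapshots (the store interfaces here are hypothetical): instead of replaying 10,000-20,000 events, you restore the latest snapshot and replay only the events recorded after its version.

```java
import java.util.List;
import java.util.function.Supplier;

interface Network { void apply(Object event); }

record Snapshot(Network state, long version) { }

interface SnapshotStore { Snapshot latest(String aggregateId); } // null if none yet
interface EventStore { List<Object> eventsSince(String aggregateId, long version); }

class NetworkRepository {
    private final SnapshotStore snapshots;
    private final EventStore events;
    private final Supplier<Network> emptyNetwork;

    NetworkRepository(SnapshotStore snapshots, EventStore events, Supplier<Network> emptyNetwork) {
        this.snapshots = snapshots;
        this.events = events;
        this.emptyNetwork = emptyNetwork;
    }

    Network load(String networkId) {
        Snapshot snapshot = snapshots.latest(networkId);
        Network network = (snapshot != null) ? snapshot.state() : emptyNetwork.get();
        long fromVersion = (snapshot != null) ? snapshot.version() + 1 : 0;
        // Replay only the tail of the stream, not the whole history.
        events.eventsSince(networkId, fromVersion).forEach(network::apply);
        return network;
    }
}
```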
I assume you mean that currently your "connectable items" are part of the "network" aggregate, and you are asking if they should be their own aggregates? That really depends on the nature of your system and problem, and is more a DDD issue than simply a CQRS one. However, if the nature of your changes is typically to operate on the items independently of one another, then they should probably be aggregate roots themselves. Regardless, in order to answer that question we would need to know much more about the system you are modeling.
As for the challenge of replaying thousands of events, you certainly do not have to replay all your events for each command. Sure, snapshotting is an option, but even better is caching the aggregate root objects in memory after they are first loaded, to ensure that you do not have to source from events with each command. (Unless the system crashes, in which case you can rely on snapshots for quicker recovery, though you may not need them with caching, since you only pay the loading penalty once.)
Now if you are distributing this system across multiple hosts or threads there are some other issues to consider but I think that discussion is best left for another question or the forums.
Finally, you asked (I think) whether an event can modify more than one property of the state of an object. Yes, if that is what makes sense based on what the event represents. The idea of an event is simply that it represents a state change in the aggregate; however, these events should also represent concepts that make sense to the business.
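For example (hypothetical names), a single event can carry several property changes, as long as it still names one business fact:

```java
import java.time.Instant;

// One business fact ("a breaker was closed") that touches several properties.
record BreakerClosed(String breakerId, Instant closedAt, String operator) { }

class Breaker {
    private boolean closed;
    private Instant lastChangedAt;
    private String lastOperator;

    void apply(BreakerClosed event) {
        // Three properties change, but it is still a single, meaningful event.
        this.closed = true;
        this.lastChangedAt = event.closedAt();
        this.lastOperator = event.operator();
    }
}
```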
I hope that helps.