To which layer in DDD do event handlers belong?

We are currently using domain-driven design with commands and events.
I cannot decide in which layer of DDD event handlers and command handlers should reside.
My feeling is that they belong in the application layer, but I don't have solid arguments for it.
My understanding of the application layer, in brief, is that it coordinates business tasks and does not hold any domain state.
Event handlers are coordinators that react to a message and call domain objects to do their tasks.
Most of the time an event handler only decides which command/event (message) will be sent next, or calls some API or other domain logic in the domain layer to do the task before further message processing.
That is why I think event and command handlers should reside in the application layer.
But now it is getting more complex: we are using Process Managers too.
They handle events just as event handlers do, but they contain temporary state.
Moreover, by the way they are constructed, they are holders of business logic too: you can read from them how events are chained and what must be achieved to fulfill a particular process.
How should I treat them? Do they belong to the application layer or the domain layer?
Summary:
Do event handlers belong to the application layer?
Do command handlers belong to the application layer?
Do Process Managers belong to the application layer?

Do event handlers belong to the application layer?
Event handlers are located in the Application Layer and sometimes in the Domain Layer.
Application layer event handlers are more infrastructural in nature, carrying out tasks like sending e-mails and publishing events to other bounded contexts.
Responsibilities:
trigger communication with external bounded contexts, call another domain to do something
manage communication with external services
may notify user about changes
forward the Event via a messaging infrastructure
Domain-layer event handlers invoke domain logic, for example by calling a domain service.
Responsibilities:
handling domain-specific logic in one bounded context
delegating to a domain service
To summarize: event handlers that purely perform domain logic are better placed in the Domain Layer; all others belong in the Application Layer.
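For illustration, a minimal sketch of the distinction (all names here are hypothetical, not taken from the question):

// Hypothetical event and collaborators, for illustration only.
record OrderPlaced(String orderId) {}

interface EmailGateway { void sendOrderConfirmation(String orderId); }  // infrastructure
interface LoyaltyPointsService { void awardPointsFor(String orderId); } // a domain service

// Application-layer handler: infrastructural in nature (notification, integration).
class OrderPlacedNotificationHandler {
    private final EmailGateway emailGateway;
    OrderPlacedNotificationHandler(EmailGateway emailGateway) { this.emailGateway = emailGateway; }

    void on(OrderPlaced event) {
        emailGateway.sendOrderConfirmation(event.orderId()); // no domain rules here
    }
}

// Domain-layer handler: purely delegates to domain logic.
class OrderPlacedLoyaltyHandler {
    private final LoyaltyPointsService loyaltyPoints;
    OrderPlacedLoyaltyHandler(LoyaltyPointsService loyaltyPoints) { this.loyaltyPoints = loyaltyPoints; }

    void on(OrderPlaced event) {
        loyaltyPoints.awardPointsFor(event.orderId()); // domain rule execution
    }
}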
Do command handlers belong to the application layer?
Yes.
Events represent the past: something that already happened and can't be undone. They are part of the business domain and its ubiquitous language. Commands, on the other hand, represent a wish, an action in the future that can be rejected. Commands are typically the result of a user action, and inside the Domain Layer we don't have user actions. Command handlers handle a command by orchestrating the workflow of the business operation, effectively replacing the Application Service in this job. So the correct place for command handlers is the Application Layer.
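As a minimal sketch (hypothetical names), an application-layer command handler orchestrates while the rules stay in the domain:

// Hypothetical command, aggregate, and repository, for illustration only.
record ShipOrder(String orderId) {}

class Order {
    void ship() { /* business rules live here, in the domain */ }
}

interface OrderRepository {
    Order byId(String orderId);
    void save(Order order);
}

// Application-layer command handler: pure orchestration, no business rules.
class ShipOrderHandler {
    private final OrderRepository orders;
    ShipOrderHandler(OrderRepository orders) { this.orders = orders; }

    void handle(ShipOrder command) {
        Order order = orders.byId(command.orderId()); // load the aggregate
        order.ship();                                 // delegate the decision to the domain
        orders.save(order);                           // persist the outcome
    }
}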
Do Process Managers belong to the application layer?
Yes.
A Process Manager coordinates a business process that spans more than a single bounded context. It encapsulates the process-specific logic and maintains a central point of control. In other words, it is an orchestrator, and it does not contain any business rules. So it belongs in the Application Layer.
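A minimal sketch of such a process manager (hypothetical names): it holds temporary state and decides which message comes next, but delegates all rules elsewhere:

// Hypothetical events, command, and dispatcher, for illustration only.
record OrderPaid(String orderId) {}
record OrderPacked(String orderId) {}
record ShipOrder(String orderId) {}

interface CommandDispatcher { void dispatch(Object command); }

// Process manager: temporary state plus routing decisions, no business rules.
class OrderFulfillmentProcess {
    private final CommandDispatcher dispatcher;
    private boolean paid;
    private boolean packed;

    OrderFulfillmentProcess(CommandDispatcher dispatcher) { this.dispatcher = dispatcher; }

    void on(OrderPaid event)   { paid = true;   proceed(event.orderId()); }
    void on(OrderPacked event) { packed = true; proceed(event.orderId()); }

    private void proceed(String orderId) {
        if (paid && packed) {                            // what must be achieved
            dispatcher.dispatch(new ShipOrder(orderId)); // which message comes next
        }
    }
}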

Related

DDD with Microservices and Multiple inputs via REST and Message Queue

I have an aggregate root with the business logic in a C# project. Also in the solution is a REST Web API project that passes commands/requests to the aggregate root to do work and handle queries. This is my microservice. Now I want some of my events/commands/requests to come off a message queue. I'm considering this:
Put a console app in the solution to listen for messages from a message queue, then reference the aggregate root project in the console app.
Is it a bad pattern to share "microservice business logic" between two services? Because now I have two "services", an API and a console app, doing the work. I would have to ensure that when the business logic changes, both services are deployed.
Personally I think it is fine to do what I suggest; a good CI/CD pipeline should mitigate that risk. But are there any other cons I might have missed?
For some background I would suggest watching "DDD & Microservices: At Last, Some Boundaries!" by Eric Evans.
A bounded context is the microservice. How you surface it is another matter. What you describe seems to be what I actually do quite frequently. I have an Identity & Access open-source project that I'm working on (so depending on when you read this it may be in a different state) that demonstrates this structure.
Internal to an organization, one may access the BC either via a service bus or via the web API. External parties would use only the web API, as messaging should not be exposed.
The web API either returns data from the query layer or sends commands via the service bus (messaging) to the BC's functional endpoint. Depending on the complexity of the system, I may introduce an orchestration concern that interacts with multiple BCs. It is probably a BC in its own right, much along the lines of a reporting BC.

Distributing events across different JVMs with Axon Server to Subscribing Event Processors (without Event Sourcing)

I'm using Axon Framework (4.1) with aggregates in one module (JVM, container) and projections/Sagas in another module. What I want to do is to have a distributed application taking advantage of CQRS but without Event Sourcing.
It is rather trivial to set up, and everything works as expected in a single application. The problem arises when several independent modules (across separate JVMs) are involved. Out of the box, the Axon starter uses tracking processors connected to AxonServerEventStore, which allows for "location transparency" when it comes to listening to events across different JVMs.
In my case, I don't want any infrastructure for persisting or tracking the events. I just want to distribute the events to any subscribing processors (SEPs) from my aggregates in a fire-and-forget style, just like AxonServerQueryBus is doing to distribute scatter-gather queries, for example.
If I just declare all processors as subscribing as follows:
@Autowired
public void configureEventSubscribers(EventProcessingConfigurer configurer) {
    configurer.usingSubscribingEventProcessors();
}
events are reaching all @EventHandler methods in the same JVM, but events are no longer reaching any handlers in other JVMs. If my understanding is correct, then, Axon Server will distribute events across JVMs for tracking processors (TEPs) only.
Obviously, what I can do is use an external message broker (RabbitMQ, Kafka) in combination with SpringAMQPMessageSource (as in the docs) to distribute events to all subscribers through something like a fanout exchange in RabbitMQ. This works, but it requires me to maintain the broker myself.
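For reference, a sketch of that broker-based setup, roughly following the Axon AMQP extension documentation (the queue and processor names are illustrative):

import com.rabbitmq.client.Channel;
import org.axonframework.config.EventProcessingConfigurer;
import org.axonframework.extensions.amqp.eventhandling.AMQPMessageConverter;
import org.axonframework.extensions.amqp.eventhandling.spring.SpringAMQPMessageSource;
import org.springframework.amqp.core.Message;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;

// Expose a RabbitMQ queue as an Axon message source.
@Bean
public SpringAMQPMessageSource eventsFromBroker(AMQPMessageConverter converter) {
    return new SpringAMQPMessageSource(converter) {
        @RabbitListener(queues = "events") // illustrative queue name
        @Override
        public void onMessage(Message message, Channel channel) {
            super.onMessage(message, channel);
        }
    };
}

// Let a subscribing processor read from the broker instead of the local bus.
@Autowired
public void configure(EventProcessingConfigurer configurer,
                      SpringAMQPMessageSource eventsFromBroker) {
    configurer.registerSubscribingEventProcessor("projections", config -> eventsFromBroker);
}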
What would be nice is to have Axon Server take care of this, just like it takes care of distributing commands and queries (this would give me one less piece of infrastructure to care about).
As a side note, I've actually managed to distribute events to projections using the QueryBus, passing events as payloads of GenericQueryMessages sent as scatter-gather queries. Needless to say, this is not a robust solution. But it goes to show that there is nothing inherently impossible about Axon Server distributing events (just another type of message, after all) to SEPs or TEPs indifferently.
Finally, the questions:
1) What is the community's recommendation for pure CQRS (without Event Sourcing) using Axon when it comes to location transparency and distributing the events?
2) Is it possible to make Axon Server distribute events to SEPs across JVMs (eliminating the need for an external message broker)?
Note on Event Sourcing
From Axon Framework's perspective, Event Sourcing is solely a concern of your Command Model. This stance is taken because Event Sourcing defines the recreation of a model through the events it has published. A Query Model, however, does not react to commands by publishing events that change its state; it simply listens to (distributed) events to update the state that others will query.
As such, the framework only thinks about Event Sourcing when it recreates your Aggregates, by providing the EventSourcingRepository.
The Event Processor's job is to be the "mechanical aspect of providing events to your Event Handlers". This relates to the Q part of CQRS: recreating the Query Model.
Thus, the Framework does not regard Event Processors to be part of the notion of Event Sourcing.
Answer to your scenario
I do want to emphasize that if you are distributing your application by running several instances of a given app, you will very likely need a way to ensure a given event is only handled once.
This is one of the concerns a Tracking Event Processor (TEP) addresses, and it does so by using a Tracking Token.
The Tracking Token essentially acts as a marker defining which events have been processed. Additionally, a given TEP thread must hold a claim on a token to be able to work, which thus ensures a given event is not handled twice.
Concluding, you will need to define infrastructure to store Tracking Tokens to be able to distribute the event load, essentially opting against the use of the SubscribingEventProcessor entirely.
However, whether the above is an issue does depend on your application landscape.
Maybe you aren't duplicating a given application at all, thus effectively not duplicating a given Tracking Event Processor.
In this case, you can fulfill your request to "not track events", whilst still using Tracking Event Processors.
All you have to do is ensure you are not storing them. The interface used to store tokens is the TokenStore, for which an in-memory version exists.
Using the InMemoryTokenStore in a default Axon setup will, however, mean you'll technically be replaying your events every time. This occurs due to the default "initial Tracking Token" process. This is, of course, also configurable, and I'd suggest the following approach:
// Creating the configuration for a TEP
TrackingEventProcessorConfiguration tepConfig =
        TrackingEventProcessorConfiguration
                .forSingleThreadedProcessing() // Note: could also be multi-threaded
                .andInitialTrackingToken(StreamableMessageSource::createHeadToken);

// Registering it as the default TEP configuration on the EventProcessingConfigurer
configurer.registerTrackingEventProcessorConfiguration(config -> tepConfig);
This should set you up to use TEPs without the need to set up infrastructure to store tokens. Note, however, that this requires you not to duplicate the given application.
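If you're on the Spring Boot starter, a sketch of pairing this with the in-memory store (assuming the starter picks up a TokenStore bean as the default):

import org.axonframework.eventhandling.tokenstore.TokenStore;
import org.axonframework.eventhandling.tokenstore.inmemory.InMemoryTokenStore;
import org.springframework.context.annotation.Bean;

// Tokens are kept in memory only: no persistence infrastructure is needed,
// and the head-token configuration above prevents a full replay on restart.
@Bean
public TokenStore tokenStore() {
    return new InMemoryTokenStore();
}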
I'd like to end with the following question you've posted:
Is it possible to make Axon Server to distribute events to SEPs across JVMs (eliminating the need for an external message broker)?
As you have correctly noted, SEPs are (currently) only usable for subscribing to events that have been published within the same JVM. Axon Server does not (yet) have a mechanism to bridge events from one JVM to another for the purpose of allowing distributed Subscribing Event Processing. I am (as part of AxonIQ) however relatively sure we will look into this in the future. If such a feature is important to the successful conclusion of your project, I suggest contacting AxonIQ directly.
If you are considering Apache Kafka for this use case, you might want to look into kalium.alkal.io.
It will make your code much simpler:
MyObject myObject = ...;
kalium.post(myObject); // sends POJOs/protobufs using the Kafka Producer API

// On the consumer side, a deserializer is used with the Kafka Consumer API.
kalium.on(MyObject.class, myObject -> {
    // do something with the object
}, "consumer_group");

Are independent (micro)services a paradox?

Ideas about microservices:
Microservices should be functionally independent
Microservices should specialize in doing some useful work in a domain they own
Microservices are intended to communicate with each other
I find these ideas to be contradictory.
Let me give a hypothetical business scenario.
I need a service that executes a workflow that has a step requiring auto-translation.
My business has an auto-translation service. There is a RESTful API where I can POST a source language, target language and text, and it returns a translation. This is a perfect example of a useful standalone service. It is reusable and completely unaware of its consumers.
Should the workflow service that my business needs leverage this service? If so, then my service has a "dependency" on another service.
If you take this reasoning to the extreme, every service in the world would have every functionality in the world.
Now, I know you're thinking we can break this dependency by moving away from request-response (REST) and toward messaging. My service publishes a translation-request message. A translation-response message is published when the translation is complete, and my service consumes this message. OK, but my service has to freeze the workflow and continue when the message arrives. It's still "waiting", even if the waiting is truly asynchronous (say the workflow state was persisted and the translation message arrives a day later). This is just a delayed request-response.
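In code, the delayed request-response I'm describing looks roughly like this (all types hypothetical):

// Hypothetical messages and collaborators, for illustration only.
record TranslationRequested(String workflowId, String text) {}
record TranslationCompleted(String workflowId, String translatedText) {}

interface WorkflowStore {
    void markWaitingForTranslation(String workflowId); // persist the frozen state
    void resumeWith(String workflowId, String translatedText);
}

interface MessageBus { void publish(Object message); }

class WorkflowService {
    private final WorkflowStore workflows;
    private final MessageBus bus;

    WorkflowService(WorkflowStore workflows, MessageBus bus) {
        this.workflows = workflows;
        this.bus = bus;
    }

    void runTranslationStep(String workflowId, String text) {
        workflows.markWaitingForTranslation(workflowId);          // freeze the workflow
        bus.publish(new TranslationRequested(workflowId, text));  // fire and "forget"
    }

    // Called when the response message arrives, possibly a day later.
    void on(TranslationCompleted event) {
        workflows.resumeWith(event.workflowId(), event.translatedText());
    }
}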
For me personally, "independent" is a quality that applies along multiple dimensions. A service may not be independent from a runtime perspective, but it can be independent from the development, deployment, operations, and scalability perspectives.
For example, the translation service may be independently developed, deployed, and operated by another team. At the same time, that team can scale the translation service independently of your business workflow service, according to the demand it gets. And you can scale the business workflow service according to your own demand (of course, downstream dependencies come into play here, but that's a whole other topic).

Adding Data from UI to different microservices

Imagine you have a user registration form, where you fill in the form:
First Name, Last Name, Age, Address, Preferred way of communication: SMS, Email (radio buttons). You have 2 microservices:
UserManagement service
Communication service
When a user is registered, we should create 2 aggregates in the 2 services: User in the UserManagement context and UserCommunicationSettings in Communication. There are three ways I can think of to achieve this:
Perform 2 different requests from the UI. What if one of them fails?
Put all that data in User and then raise an integration event with all that data, and catch it in the Communication context. But these are fat events, and integration events shouldn't contain domain data, just the IDs of aggregates.
Put the message in a queue, so both contexts would have adapters to take the needed info.
What is the best option to split this data and save the information?
Perform 2 different requests from UI. What if one of them fails?
I don't think this would do; you are going to end up in an inconsistent state.
I'm for approach #3:
The user is persisted (created) in your user store.
A UserRegistered event is sent around, containing the ID of the user.
All interested parties handle the UserRegistered event.
I'd opt for slim events, because your services may need different user data, and it's better to let them get this data on their own rather than putting all of it into the event.
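A minimal sketch of that slim-event handling on the Communication side (all types hypothetical):

// Hypothetical types, for illustration only: the event carries just the ID,
// and the Communication service pulls whatever data it actually needs.
record UserRegistered(String userId) {}

interface UserManagementClient { String fetchCommunicationPreference(String userId); }
interface CommunicationSettingsRepository { void save(String userId, String preference); }

class UserRegisteredHandler { // lives in the Communication service
    private final UserManagementClient users;
    private final CommunicationSettingsRepository settings;

    UserRegisteredHandler(UserManagementClient users, CommunicationSettingsRepository settings) {
        this.users = users;
        this.settings = settings;
    }

    void on(UserRegistered event) {
        String preference = users.fetchCommunicationPreference(event.userId()); // pull, not push
        settings.save(event.userId(), preference);
    }
}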
Since you mentioned storing communication settings, I'm assuming communication data is not part of the bounded context of the UserManagement service.
I see a couple of problems with approach #3 as proposed in the other answer, though I think approach #3 in the original question is quite similar to my answer.
What if more communication modes are added? Naturally, that should only cause the Communication service to change, not the UserManagement service (SRP). The Communication service should store all communication-settings data in its own datastore.
What if the user updates only his communication preferences? Why should the UserManagement service carry the burden of handling that? A change to communication settings should only trigger changes in the corresponding microservice, which is the Communication service in our case.
I also find it better to use natural keys to identify and correlate entities across microservices, rather than internal IDs generated by the DB. Consider that tomorrow you decide to use a completely different strategy to create user IDs in the UserManagement service, e.g. non-numeric IDs, a different ID-generation algorithm, etc. I would want to keep the other microservices unaffected by any such decision.
Proposed approach:
Include an API Gateway in the architecture. The frontend always talks to the API Gateway.
The API Gateway sends commands such as RegisterUser to a message queue, to be consumed by the interested microservices (see the sketch below).
If you wish to keep the architecture simple, you may publish a single message with all the data, to be consumed by any interested microservice. If you strictly want individual microservices to see only their relevant data, create a message queue per unique data structure expected by the consuming services.
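A sketch of the gateway side (all types hypothetical): the form submission becomes a RegisterUser command on the queue, and each interested service consumes it independently:

// Hypothetical gateway and queue abstraction, for illustration only.
record RegisterUser(String firstName, String lastName, int age,
                    String address, String preferredChannel) {}

interface CommandQueue { void publish(Object command); }

class ApiGateway {
    private final CommandQueue queue;
    ApiGateway(CommandQueue queue) { this.queue = queue; }

    // Called by the frontend; both UserManagement and Communication consume the command.
    void register(String firstName, String lastName, int age,
                  String address, String preferredChannel) {
        queue.publish(new RegisterUser(firstName, lastName, age, address, preferredChannel));
    }
}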

Should a custom HTTP header or a parameter be used to identify the context of a caller to a RESTful service?

My team has inherited a WCF service that serves as a gateway into multiple back-end systems. The first step in every call to this service is a decision point based on a context key that identifies the caller. This decision point is essentially a factory that provides a handler based on which back-end system the request should be directed to.
We're looking at simplifying this into a RESTful service and are considering the benefits and consequences of passing the context key as part of the request header rather than adding it as a parameter to every call into the service. On the one hand, when looking at the individual implementations of the service for each of the back-end systems, the context of the caller seems like an orthogonal concern. However, using a custom header leaves me with a slightly uncomfortable feeling, since an essential detail for calls to the service is masked from the visible interface. I should note that this is a purely internal solution, which mitigates some of my concern about the visibility of the interface, but even internally there's no telling whether the next engineer who attempts to connect to or modify the service will be aware of this hidden detail.